Unit 5 - Computer Graphics & Multimedia
Subject Name: Computer Graphics & Multimedia
Subject Code: IT-601
Semester: 6th
Unit V Syllabus:
Compression & Decompression, Multimedia Data & File Format standards, TIFF, MIDI, JPEG, DIB, MPEG,
RTF, Multimedia I/O technologies, Digital voice and audio, Video image and animation, Full motion video,
Storage and retrieval technologies.
TXT
.TXT is a file format for files consisting of plain text, usually containing very little formatting (e.g., no bolding or
italics). The precise definition of the .txt format is not specified, but typically matches the format accepted by
the system terminal or simple text editor. Files with the .txt extension can easily be read or opened by any
program that reads text and, for that reason, are considered universal (or platform independent).
The ASCII character set is the most common format for English-language text files, and is generally assumed to
be the default file format in many situations. For accented and other non-ASCII characters, it is necessary to
choose a character encoding. In many systems, this is chosen on the basis of the default locale setting on the
computer it is read on. Common character encodings include ISO 8859-1 for many European languages.
Because many encodings have only a limited repertoire of characters, they are often only usable to represent
text in a limited subset of human languages.
Unicode is an attempt to create a common standard for representing all known languages, and most known
character sets are subsets of the very large Unicode character set. Although there are multiple character
encodings available for Unicode, the most common is UTF-8, which has the advantage of being backwards-
compatible with ASCII: that is, every ASCII text file is also a UTF-8 text file with identical meaning.
Unicode is a computing industry standard for the consistent encoding, representation and handling of text
expressed in most of the world's writing systems. Developed in conjunction with the Universal Character Set
standard and published in book form as The Unicode Standard, the latest version of Unicode contains a
repertoire of more than 110,000 characters covering 100 scripts. The standard consists of a set of code charts
for visual reference, an encoding method and set of standard character encodings, a set of reference data
computer files, and a number of related items, such as character properties, rules for normalization,
decomposition, collation, rendering, and bidirectional display order (for the correct display of text containing
both right-to-left scripts, such as Arabic and Hebrew, and left-to-right scripts). As of September 2013, the most
recent version is Unicode 6.3. The standard is maintained by the Unicode Consortium.
Unicode can be implemented by different character encodings. The most commonly used encodings are UTF-
8, UTF-16 and the now-obsolete UCS-2. UTF-8 uses one byte for any ASCII characters, which have the same
code values in both UTF-8 and ASCII encoding, and up to four bytes for other characters. UCS-2 uses a 16-bit
code unit (two 8-bit bytes) for each character but cannot encode every character in the current Unicode
standard. UTF-16 extends UCS-2, using two 16-bit units (4 × 8 bit) to handle each of the additional characters.
UTF-8
128 characters are encoded using 1 byte (the ASCII characters). 1920 characters are encoded using 2 bytes
(Roman, Greek, Cyrillic, Coptic, Armenian, Hebrew, Arabic characters). 63488 characters are encoded using 3
bytes (Chinese and Japanese among others). The remaining 2147418112 code positions (not assigned yet) can be
encoded using 4, 5 or 6 bytes. For more information about UTF-8, run `man 7 utf-8' (a man page contained in the
man-pages package).
UCS-2
Every character is represented as two bytes. This encoding can only represent the first 65536 Unicode
characters.
UTF-16
This is an extension of UCS-2 which can represent 1112064 Unicode characters. The first 65536 Unicode
characters are represented as two bytes, the other ones as four bytes.
UCS-4
Every character is represented as four bytes.
The space requirements for encoding a text, compared to encodings currently in use (8 bits per character for
European languages, more for Chinese / Japanese / Korean), are as follows. This has an influence on disk
storage space and network download speed (when no form of compression is used).
UTF-8
No change for US ASCII, just a few percent more for ISO-8859-1, 50% more for Chinese / Japanese / Korean,
100% more for Greek and Cyrillic.
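As a rough, hedged illustration of these size differences, the following Python 3 snippet (not part of any standard, just a quick check) prints how many bytes a few sample characters occupy in UTF-8, UTF-16 and UTF-32:

# Compare the storage cost of one character in different Unicode encodings.
samples = {"ASCII 'A'": "A", "Latin 'é'": "é", "Greek 'Ω'": "Ω", "CJK '中'": "中"}
for label, ch in samples.items():
    utf8 = len(ch.encode("utf-8"))
    utf16 = len(ch.encode("utf-16-be"))   # big-endian, no byte-order mark
    utf32 = len(ch.encode("utf-32-be"))   # fixed four bytes, equivalent to UCS-4
    print(label, "-> UTF-8:", utf8, "bytes, UTF-16:", utf16, "bytes, UTF-32:", utf32, "bytes")

Running it shows one byte for ASCII, two bytes for Latin and Greek letters, and three bytes for Chinese characters in UTF-8, while UTF-32/UCS-4 always spends four bytes per character.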
RTF
An RTF file is a common text file format that supports "rich text." It includes several types of text formatting,
such as bold type, italics, different fonts and font sizes, and custom tab settings. RTF files also support objects
and images, such as .JPG and .PNG files, saved within the text file.
RTF files are usually 7-bit ASCII plain text. RTF consists of control words, control symbols, and groups. RTF files
can be easily transmitted between PC based operating systems because they are encoded as a text file with 7-
bit graphic ASCII characters. Converters that communicate with Microsoft Word for MS Windows or the
Macintosh should expect data transfer as 8-bit characters, and binary data can contain any 8-bit values.
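As a small, hedged sketch of how control words and groups look in practice, the Python snippet below writes a minimal RTF document (the file name and text are only examples):

# Braces {...} delimit groups; backslash sequences such as \rtf1, \b and \b0
# are control words (RTF version 1, bold on, bold off).
rtf_document = r"{\rtf1\ansi\deff0{\fonttbl{\f0 Arial;}}Hello, \b bold\b0  world!}"
with open("example.rtf", "w", encoding="ascii") as f:   # RTF is 7-bit ASCII text
    f.write(rtf_document)

Any RTF-aware word processor should open the resulting file and show the word "bold" in bold type.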
JPEG
JPEG is a standardized image compression mechanism. JPEG stands for Joint Photographic Experts Group, the
original name of the committee that wrote the standard. JPEG compresses either full-color or grayscale
images, and works best with photographs and artwork. For geometric line drawings, lettering, cartoons,
computer screenshots, and other images with flat color and sharp borders, the PNG and GIF image formats are
usually preferable.
JPEG uses a lossy compression method, meaning that the decompressed image isn't quite the same as the
original. (There are lossless image compression algorithms, but JPEG achieves much greater compression than
is possible with lossless methods.) This method fools the eye by using the fact that people perceive small
changes in color less accurately than small changes in brightness.
JPEG was developed for two reasons: it makes image files smaller and it stores 24-bit per pixel color data (full
color) instead of 8-bit per pixel data. Making image files smaller is important for storing and transmitting files.
Being able to compress a 2MB full-color file down to, for example, 100KB makes a big difference in disk space
and transmission time. JPEG can easily provide 20:1 compression of full-color data. (With GIF images, the size
ratio is usually more like 4:1.) The file extension for a JPEG image is .jpg or .jpeg. There is actually no difference
between JPG and JPEG, except for the number of characters used.
JPEG is the most commonly used format for photographs. It is specifically good for color photographs or for
images with many blends or gradients. However, it is not the best with sharp edges and might lead to a little
blurring. This is mainly because JPEG is a method of lossy compression for digital photography. An advantage
of using the JPEG format is that, thanks to compression, a JPEG image takes up only a few MB of data or less.
The trade-off is that each time the image is saved in JPEG format there is a slight loss of quality due to compression.
Hence, JPEG is not the greatest format in case one needs to keep making numerous edits and re-saves to the image.
The JPEG is quite popular for web hosting of images, for amateur and average photographers, digital cameras,
etc. This is mainly due to the fact that high quality images can be saved using less space. JPEG is one of the
most common image formats proposed by the Joint Photographic Experts Group to save and store digital
images. Almost all high-definition digital cameras including modern-day Smartphone cameras use JPEG file
extension to store image files.
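The degree of compression is controlled by a quality setting chosen when the file is saved. A minimal sketch using the Pillow imaging library (the file names and quality values are illustrative assumptions):

from PIL import Image   # Pillow; install with: pip install Pillow
import os

img = Image.open("photo.png").convert("RGB")   # JPEG stores 24-bit RGB, no alpha channel
img.save("photo_q85.jpg", quality=85)          # typical setting: small file, little visible loss
img.save("photo_q25.jpg", quality=25)          # aggressive setting: smaller file, visible artefacts

for name in ("photo.png", "photo_q85.jpg", "photo_q25.jpg"):
    print(name, os.path.getsize(name), "bytes")

Lower quality values discard more of the colour detail the eye is least sensitive to, which is where compression ratios such as the 20:1 figure above come from.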
DIB
DIB is a graphics file format used by Windows. DIB stands for "Device-Independent Bitmap." DIB files are
bitmapped graphics, similar to the .BMP format except that they have a different header. DIB files can be
opened and edited in most image editing programs. A DIB typically contains the following color and
dimension information:
- The color format of the device on which the rectangular image was created.
- The resolution of the device on which the rectangular image was created.
- The palette for the device on which the image was created.
- An array of bits that maps red, green, blue (RGB) triplets to pixels in the rectangular image.
- A data-compression identifier that indicates the data compression scheme (if any) used to reduce the
  size of the array of bits.
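This information sits in a fixed 40-byte BITMAPINFOHEADER at the start of the bitmap data. A hedged Python sketch that unpacks it (assuming a raw .dib file without the 14-byte BMP file header in front):

import struct

with open("picture.dib", "rb") as f:
    header = f.read(40)   # BITMAPINFOHEADER: 40 bytes, little-endian

(size, width, height, planes, bit_count, compression,
 image_size, x_ppm, y_ppm, colors_used, colors_important) = struct.unpack("<IiiHHIIiiII", header)

print("resolution  :", width, "x", height, "pixels")
print("colour depth:", bit_count, "bits per pixel")
print("compression :", compression)   # 0 = BI_RGB, i.e. no compression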
PCM
PCM stands for Pulse-Code Modulation, a digital representation of raw analog audio signals. Analog sounds
exist as waveforms, and in order to convert a waveform into digital bits, the sound must be sampled and
recorded at certain intervals (or pulses).
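A minimal sketch of that sampling step in Python (the 440 Hz tone, 44.1 kHz sample rate and 16-bit depth are example values, not requirements of PCM):

import math

SAMPLE_RATE = 44100   # samples (pulses) per second
FREQUENCY = 440.0     # the analog waveform being sampled, in Hz

# Sample the waveform at regular intervals and quantise each sample to a signed 16-bit integer.
pcm_samples = [int(32767 * math.sin(2 * math.pi * FREQUENCY * n / SAMPLE_RATE))
               for n in range(SAMPLE_RATE)]   # one second of audio

print(len(pcm_samples), "samples =", 2 * len(pcm_samples), "bytes of raw 16-bit PCM")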
WAV
WAV stands for Waveform Audio File Format (also called Audio for Windows at some point but not anymore).
It’s a standard that was developed by Microsoft and IBM back in 1991.
A lot of people assume that all WAV files are uncompressed audio files, but that’s not exactly true. WAV is
actually just a Windows container for audio formats. This means that a WAV file can contain compressed
audio, but it’s rarely used for that.
Most WAV files contain uncompressed audio in PCM format. The WAV file is just a wrapper for the PCM
encoding, making it more suitable for use on Windows systems. However, Mac systems can usually open WAV
files without any issues.
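Because the WAV file is only a wrapper around PCM, Python's built-in wave module is enough to produce a playable file. A sketch assuming mono, 16-bit, 44.1 kHz audio:

import math
import struct
import wave

SAMPLE_RATE = 44100
samples = [int(32767 * math.sin(2 * math.pi * 440 * n / SAMPLE_RATE))
           for n in range(SAMPLE_RATE)]   # one second of a 440 Hz tone, as in the PCM example

with wave.open("tone.wav", "wb") as wav_file:
    wav_file.setnchannels(1)        # mono
    wav_file.setsampwidth(2)        # 2 bytes per sample = 16-bit PCM
    wav_file.setframerate(SAMPLE_RATE)
    wav_file.writeframes(struct.pack("<%dh" % len(samples), *samples))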
AIFF
AIFF stands for Audio Interchange File Format. Similar to how Microsoft and IBM developed WAV for
Windows, AIFF is a format that was developed by Apple for Mac systems back in 1988.
Also similar to WAV files, AIFF files can contain multiple kinds of audio. For example, there is a compressed
version called AIFF-C and another version called Apple Loops which is used by GarageBand and Logic Audio —
and they all use the same AIFF extension.
Most AIFF files contain uncompressed audio in PCM format. The AIFF file is just a wrapper for the PCM
encoding, making it more suitable for use on Mac systems. However, Windows systems can usually open AIFF
files without any issues.
MP3
MP3 stands for MPEG-1 Audio Layer 3. It was released back in 1993 and quickly exploded in popularity,
eventually becoming the most popular audio format in the world for music files. There’s a reason why we have
“MP3 players” but not “OGG players”…
The main pursuit of MP3 is to cut out all of the sound data that exists beyond the hearing range of most
normal people and to reduce the quality of sounds that aren’t as easy to hear, and then to compress all other
audio data as efficiently as possible.
Nearly every digital device in the world with audio playback can read and play MP3 files, whether we’re talking
about PCs, Macs, Androids, iPhones, Smart TVs, or whatever else. When you need universal, MP3 will never let
you down.
AAC
AAC stands for Advanced Audio Coding. It was developed in 1997 as the successor to MP3, and while it did
catch on as a popular format to use, it never really overtook MP3 as the most popular for everyday music and
recording. The compression algorithm used by AAC is more advanced and technical than MP3's, so at the same
bitrate an AAC file generally sounds slightly better than the equivalent MP3.
OGG (Vorbis)
OGG doesn’t stand for anything. Actually, it’s not even a compression format. OGG is a multimedia container
that can hold all kinds of compression formats, but is most commonly used to hold Vorbis files — hence why
these audio files are called Ogg Vorbis files.
Vorbis was first released in 2000 and grew in popularity due to two reasons: first, it adheres to the principles
of open source software, and second, it performs significantly better than most other lossy compression
formats (i.e. produces a smaller file size for equivalent audio quality).
MP3 and AAC have such strong footholds that OGG has had a hard time breaking into the spotlight — not
many devices support it natively— but it’s getting better with time. For now, it’s mostly used by hardcore
proponents of open software.
WMA
WMA stands for Windows Media Audio. It was first released in 1999 and has gone through several evolutions
since then, all while keeping the same WMA name and extension. As you might expect, it’s a proprietary
format created by Microsoft.
Not unlike AAC and OGG, WMA was meant to address some of the flaws in the MP3 compression method —
and as such, WMA’s approach to compression is pretty similar to AAC and OGG. In other words, in terms of
objective quality, WMA is better than MP3.
But since WMA is proprietary, not many devices and platforms support it. It also doesn’t offer any real
benefits over AAC or OGG, so in most cases when MP3 isn’t good enough, it’s simply more practical to go with
one of those two instead.
FLAC
FLAC stands for Free Lossless Audio Codec. A bit on the nose maybe, but it has quickly become one of the
most popular lossless formats available since its introduction in 2001.
What’s nice is that FLAC can compress an original source file by up to 60% without losing a single bit of data.
What’s even nicer is that FLAC is an open source and royalty-free format rather than a proprietary one, so it
doesn’t impose any intellectual property constraints.
FLAC is supported by most major programs and devices and is the main alternative to MP3 for CD audio. With
it, you basically get the full quality of raw uncompressed audio in half the file size.
ALAC
ALAC stands for Apple Lossless Audio Codec. It was developed and launched in 2004 as a proprietary format
but eventually became open source and royalty-free in 2011. ALAC is sometimes referred to as Apple Lossless.
WMA
WMA stands for Windows Media Audio. We covered it above in the lossy compression section, but we
mention it here because there’s a lossless alternative called WMA Lossless that uses the same extension.
Confusing, I know.
Compared to FLAC and ALAC, WMA Lossless is the worst in terms of compression efficiency but only slightly.
It’s a proprietary format so it’s no good for fans of open source software, but it is supported natively on both
Windows and Mac systems.
The biggest issue with WMA Lossless is the limited hardware support. If you want lossless audio across
multiple devices, you should stick with FLAC unless all of your devices are of the Windows variety.
MIDI
MIDI, or Musical Instrument Digital Interface, is a standard protocol for the interchange of musical information
between musical instruments, synthesizers, keyboard controllers, sound cards, computers and all other
electronic instruments from all manufacturers. In other words, a MIDI file (with file extension '.mid' or '.midi')
describes music – what notes are to be played, when they are to be played, how long each note is to be
held, with what loudness and pitch, and so on – so that it can be reproduced on another instrument, much like a
human reading a music sheet.
A MIDI file is very small, often as small as 10 KB for a 1-minute playback (a .wav file of the same duration
requires 5 to 10 MB of disk space). This is because it doesn’t contain audio waves like audio file formats do,
but instructions on how to recreate the music. Another advantage of the file containing instructions is that it is
quite easy to change the performance by changing, adding or removing one or more of the instructions – like
note, pitch, tempo, and so on – thus creating a completely new performance. This is the main reason the
format is extremely popular for creating, learning, and playing music.
MIDI actually consists of three distinctly different parts – the physical connector, the message format, and the
storage format. The physical connector connects and transports data between devices; the message format
(considered to be the most important part of MIDI) controls the stored data and the connected devices; and
the storage format stores all the data and information.
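The message format itself is compact enough to write out by hand. The sketch below builds standard note-on and note-off messages as raw bytes (the channel, note number and velocity are arbitrary example values):

# A MIDI channel message is a status byte followed by data bytes.
# 0x90 | channel means "note on", 0x80 | channel means "note off".
channel = 0       # channels are numbered 0-15 on the wire
note = 60         # middle C
velocity = 100    # loudness of the key press, 0-127

note_on  = bytes([0x90 | channel, note, velocity])
note_off = bytes([0x80 | channel, note, 0])

print("note on :", note_on.hex(" "))    # prints: 90 3c 64
print("note off:", note_off.hex(" "))   # prints: 80 3c 00

A .mid file is essentially a stream of such messages with timing information attached, which is why it stays so small compared with sampled audio.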
Output Devices
Sound cards
A sound card (also known as an audio card) is an internal expansion card that provides input and output of
audio signals to and from a computer under control of computer programs. The term sound card is also
applied to external audio interfaces used for professional audio applications. Typical uses of sound cards
include providing the audio component for multimedia applications such as music composition, editing video
or audio, presentation, education and entertainment (games) and video projection.
Sound functionality can also be integrated onto the motherboard, using components similar to those found on
plug-in cards. The integrated sound system is often still referred to as a sound card. Sound processing
hardware is also present on modern video cards with HDMI to output sound along with the video using that
connector; previously they used an S/PDIF connection to the motherboard or sound card.
Image Creation and Capture | Format options | Archive Recommendations

Digital Cameras | Dependent on model of camera | 1. Raw DNG (or TIFF) file if possible. 2. Original JPEG: save an archive copy on download; for presentation images, always work on a copy of the file.

Scanners | Wide range once scanned | Save an uncompressed/lossless format (TIFF) as the archive copy, regardless of the intended delivery format.

Graphics Images | Wide choice of formats under the 'Save As…' command | Alongside software package files (e.g. Photoshop [.psd], Corel Draw [.cpt]), save draft images in uncompressed TIFF format if possible, and replace them with an archive TIFF of the end-product image.
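Following the recommendations in the table, a hedged Pillow sketch that leaves the downloaded camera JPEG untouched, saves an uncompressed TIFF archive copy, and makes a separate working copy for editing (file names are examples):

import shutil
from PIL import Image   # Pillow imaging library

# Keep the original camera JPEG exactly as downloaded; edit only a copy of it.
shutil.copyfile("IMG_0001.jpg", "IMG_0001_working_copy.jpg")

# Save an uncompressed/lossless TIFF as the archive copy.
Image.open("IMG_0001.jpg").save("IMG_0001_archive.tif")   # Pillow's default TIFF save is uncompressed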
Uses of Animation
Cartoons and games - One of the most exciting applications of multimedia is games. Nowadays, playing games
over the live internet with multiple players has become popular. In fact, the first application of multimedia
systems was in the field of entertainment, and that too in the video game industry. The integrated audio
and video effects make various types of games more entertaining.
Simulations - Computer simulation and animation are well known for their uses in visualizing and
explaining complex and dynamic events. They are also useful in the analysis and understanding of these
same types of events. That is why they are becoming increasingly used in litigation. While simulation and
animation are different, they both involve the application of 3D computer graphics and are presented in
that form with motion on a video screen. Simulation produces motion, which is consistent with the laws of
physics and relies on the inputs by the user to be consistent with the events portrayed. The motion in an
animation can be derived from a reconstruction of the event or can be taken from a simulation. Currently
available animation software is more advanced in its ability to build objects and scenes that achieve photo-
realistic results.
Solid drawing
The principle of solid drawing means taking into account forms in three-dimensional space, or giving them
volume and weight. The animator needs to be a skilled artist and has to understand the basics of three-
dimensional shapes, anatomy, weight, balance, light and shadow, etc. For the classical animator, this involved
taking art classes and doing sketches from life. One thing in particular that Johnston and Thomas warned
against was creating "twins": characters whose left and right sides mirrored each other, and looked lifeless.
Modern-day computer animators draw less because of the facilities computers give them, yet their work
benefits greatly from a basic understanding of animation principles and their additions to basic computer
animation.
Animation file formats
There are a number of different types of animation file formats. Each type stores graphics data in a different
way. Bitmap, vector, and metafile formats are by far the most commonly used formats, and we focus on
these here.
Multimedia Storage
Multimedia can be stored on media such as optical discs, hard drives, and other magnetic storage devices.
Multimedia Retrieval
Multimedia retrieval depends on the type of multimedia file, which may be continuous or discrete.
Continuous media is data where there is a timing relationship between source and destination. Video,
animation and audio are examples of continuous media. Some media is time independent or static or discrete
media: normal data, text, single images, graphics are examples.
Magnetic Media
Magnetic storage or magnetic recording is the storage of data on a magnetized medium.