Multimedia Notes
SEMESTER VI
ELECTIVE II – MULTIMEDIA
UNIT I:
Multimedia Definition - Use Of Multimedia - Delivering Multimedia - Text: About Fonts and
Faces - Using Text in Multimedia - Computers and Text - Font Editing and Design Tools - Hypermedia
and Hypertext.
UNIT II:
Images: Plan Approach - Organize Tools - Configure Computer Workspace - Making Still
Images - Color - Image File Formats. Sound: The Power of Sound - Digital Audio - Midi Audio - Midi
vs. Digital Audio - Multimedia System Sounds - Audio File Formats -Vaughan's Law of Multimedia
Minimums - Adding Sound to Multimedia Project.
UNIT III:
UNIT IV:
Making Multimedia: The Stages of a Multimedia Project - The Intangible Needs - The Hardware
Needs - The Software Needs - An Authoring System's Needs - The Multimedia Production Team.
UNIT V:
Planning and Costing: The Process of Making Multimedia - Scheduling - Estimating - RFPs and
Bid Proposals. Designing and Producing - Content and Talent: Acquiring Content - Ownership of
Content Created for Project - Acquiring Talent
UNIT I
Multimedia Definition:
Multimedia is any combination of text, art, sound, animation, and video delivered to you by
computer or other electronic or digitally manipulated means.
Multimedia is, as described above, digitally manipulated text, photographs, graphic art, sound,
animation, and video elements. When the user is allowed to control what elements are delivered
and when, it is called interactive multimedia. When a structure of linked elements is provided
through which the user can navigate, interactive multimedia becomes hypermedia.
Characteristics of multimedia:
Multimedia is any combination of text, graphic art, sound, animation, and video delivered by
computer or other electronic means.
Multimedia production requires creative, technical, organizing, and business ability.
Multimedia presentations can be nonlinear (interactive) or linear (passive).
Multimedia can contain structured linking called hypermedia.
Use of Multimedia:
Multimedia is appropriate whenever a human user is connected to electronic information of any
kind, at the “human interface.” Multimedia enhances minimalist, text-only computer interfaces
and yields measurable benefit by gaining and holding attention and interest; in short, multimedia
improves information retention. When it’s properly constructed, multimedia can also be
profoundly entertaining as well as useful.
Multimedia in Business:
Business applications for multimedia include presentations, training, marketing, advertising,
product demos, simulations, databases, catalogs, instant messaging, and networked
communications.
Voice mail and video conferencing are provided on many local and wide area networks (LANs
and WANs) using distributed networks and Internet protocols.
Multimedia is enjoying widespread use in training programs. Flight attendants learn to manage
international terrorism and security through simulation.
Drug enforcement agencies of the UN are trained using interactive videos and photographs to
recognize likely hiding places on airplanes and ships.
Medical doctors and veterinarians can practice surgery methods via simulation prior to actual
surgery. Mechanics learn to repair engines.
Salespeople learn about product lines and leave behind software to train their customers.
As companies and businesses catch on to the power of multimedia, the cost of installing
multimedia capability decreases, meaning that more applications can be developed both in-house
and by third parties, which allow businesses to run more smoothly and effectively.
Multimedia in Schools:
Multimedia will provoke radical changes in the teaching process during the coming decades,
particularly as smart students discover they can go beyond the limits of traditional teaching
methods.
There is, indeed, a move away from the transmission or passive-learner model of learning to the
experiential learning or active-learner model.
In some instances, teachers may become more like guides and mentors, or facilitators of learning,
leading students along a learning path, rather than the more traditional role of being the primary
providers of information and understanding.
The students, not teachers, become the core of the teaching and learning process.
E-learning is a sensitive and highly politicized subject among educators, so educational software
is often positioned as “enriching” the learning process, not as a potential substitute for traditional
teacher-based methods.
ITV (Interactive TV) is widely used among campuses to join students from different locations
into one class with one teacher.
Remote trucks containing computers, generators, and a satellite dish can be dispatched to areas
where people want to learn but have no computers or schools near them.
In the online version of school, students can enroll at schools all over the world and interact with
particular teachers and other students—classes can be accessed at the convenience of the
student’s lifestyle while the teacher may be relaxing on a beach and communicating via a wireless
system.
Multimedia at Home:
From gardening, cooking, home design, remodeling, and repair to genealogy software,
multimedia has entered the home.
Eventually, most multimedia projects will reach the home via television sets or monitors with
built-in interactive user inputs either on old-fashioned color TVs or on new high-definition sets.
The multimedia viewed on these sets will likely arrive on a pay-for-use basis along the data
highway.
Today, home consumers of multimedia own either a computer with an attached CD-ROM or
DVD drive or a set-top player that hooks up to the television, such as a Nintendo Wii, Xbox, or
Sony PlayStation machine.
There is increasing convergence or melding of computer based multimedia with entertainment
and games-based media traditionally described as “shoot-em-up.”
Live Internet pay-for-play gaming with multiple players has also become popular, bringing
multimedia to homes on the broadband Internet, often in combination with CD-ROMs or DVDs
inserted into the user’s machine.
Microsoft’s Internet Gaming Zone and Sony’s Station web site boast more than a million
registered users each—Microsoft claims to be the most successful, with tens of thousands of
people logged on and playing every evening.
In hotels, train stations, shopping malls, museums, libraries, and grocery stores, multimedia is
already available at stand-alone terminals or kiosks, providing information and help for
customers.
Multimedia is piped to wireless devices such as cell phones and PDAs. Such installations reduce
demand on traditional information booths and personnel, add value, and are available around the
clock, even in the middle of the night, when live help is off duty.
The way we live is changing as multimedia penetrates our day-to-day experience and our culture.
Imagine a friend’s bout of maudlin drunk dialing (DD) on a new iPhone, with the camera
accidentally enabled.
Figure shows a menu screen from a supermarket kiosk that provides services ranging from meal
planning to coupons.
Hotel kiosks list nearby restaurants, maps of the city, airline schedules, and provide guest services
such as automated checkout.
Printers are often attached so that users can walk away with a printed copy of the information.
Museum kiosks are not only used to guide patrons through the exhibits, but when installed at each
exhibit, provide great added depth, allowing visitors to browse through richly detailed
information specific to that display.
Today, multimedia is found in churches and places of worship as live video with attached song
lyrics shown on large screens using elaborate sound systems with special effects lighting and
recording facilities.
Virtual Reality:
At the convergence of technology and creative invention in multimedia is virtual reality, or VR.
Goggles, helmets, special gloves, and bizarre human interfaces attempt to place you “inside” a
lifelike experience.
Take a step forward, and the view gets closer; turn your head, and the view rotates. Reach out and
grab an object; your hand moves in front of you.
VR requires terrific computing horsepower to be realistic. In VR, your cyberspace is made up of
many thousands of geometric objects plotted in three-dimensional space: the more objects and the
more points that describe the objects, the higher the resolution and the more realistic your view.
As you move about, each motion or action requires the computer to recalculate the position,
angle, size, and shape of all the objects that make up your view, and many thousands of
computations must occur as fast as 30 times per second to seem smooth.
On the World Wide Web, standards for transmitting virtual reality worlds or scenes in VRML
(Virtual Reality Modeling Language) documents (with the filename extension .wrl) have been
developed. Intel and software makers such as Adobe have announced support for new 3-D
technologies.
Virtual reality (VR) is an extension of multimedia—and it uses the basic multimedia elements of
imagery, sound, and animation.
Because it requires instrumented feedback from a wired-up person, VR is perhaps interactive
multimedia at its fullest extension.
Delivering Multimedia:
Multimedia requires large amounts of digital memory when stored in an end user’s library, or
large amounts of bandwidth when distributed over wires, glass fiber, or airwaves on a network.
The greater the bandwidth, the bigger the pipeline, so more content can be delivered to end users
quickly.
Multimedia projects often require a large amount of digital memory; hence they are often stored
on CD-ROM or DVDs.
Multimedia also includes web pages in HTML or DHTML (XML) on the World Wide Web, and
can include rich media created by various tools using plug-ins.
Web sites with rich media require large amounts of bandwidth.
The promise of multimedia has spawned numerous mergers, expansions, and other ventures.
These include hardware, software, content, and delivery services.
The future of multimedia will include high bandwidth access to a wide array of multimedia
resources and learning materials.
TEXT
All multimedia content contains text in some form. Even a menu item is text, and it is activated
by a single action such as a mouse click, a keystroke, or a finger press on a touch screen.
The text in multimedia is used to communicate information to the user. Proper use of text and
words in multimedia presentation will help the content developer to communicate the idea and
message to the user.
Text, in the form of words and symbols, is the basic means of communicating with the human user.
Text in multimedia
Words and symbols in any form, spoken or written, are the most common system of
communication. They deliver the most widely understood meaning to the greatest number of
people.
Most academic text, such as journals and e-magazines, is available in a form readable by web
browsers.
Since multimedia is usually defined as the integration of sound, images, and video with text, we
start with text. Strictly speaking, text is created on a computer, so it doesn’t really extend a
computer system the way audio and video do. But understanding how text is stored will set the
scene for understanding how multimedia is stored.
With a font editing program like Fontographer from Fontlab, Ltd. at www.fontlab.com,
adjustments can also be made along the horizontal axis of text.
In this program the character metrics of each character and the kerning of character pairs can be
altered.
Character metrics are the general measurements applied to individual characters; kerning is the
spacing between character pairs.
When working with PostScript, TrueType, and Multiple Master fonts, but not bitmapped fonts,
the metrics of a font can be altered to create interesting effects.
For example, you can adjust the body width of each character, from regular to condensed to
expanded (the source shows an example set in the Sabon font), or you can adjust the spacing
between characters (tracking) and the kerning between pairs of characters.
Cases:
A capital letter is called uppercase, and a small letter is called lowercase. In some situations,
such as for passwords, a computer is case sensitive, meaning that the text’s upper- and lowercase
letters must match exactly to be recognized.
But nowadays, in most situations requiring keyboard input, all computers recognize both the
upper- and lowercase forms of a character to be the same. In that manner, the computer is said to
be case insensitive.
Placing an uppercase letter in the middle of a word, called an intercap, is a trend that emerged
from the computer programming community, where coders discovered they could better
recognize the words they used for variables and commands when the words were lowercase but
intercapped.
Company and product names such as WordPerfect, OmniPage, PhotoDisc, FileMaker, and
WebStar have become popular.
Serif vs. Sans Serif:
On the printed page, serif fonts are traditionally used for body text because the serifs are said to
help guide the reader’s eye along the line of text. Sans serif fonts, on the other hand, are used for
headlines and bold statements.
But the computer world of standard, 72 dpi monitor resolution is not the same as the print world,
and it can be argued that sans serif fonts are far more legible and attractive when used in the small
sizes of a text field on a screen.
The Times font at 9-point size may look too busy and actually be difficult and tiring to read. And
a large, bold serif font for a title or headline can deliver a message of elegance and character in
your graphic layout.
Computer screens provide a very small workspace for developing complex ideas. At some time or
another, you will need to deliver high-impact or concise text messages on the computer screen in
as condensed a form as possible.
Too little text on a screen requires annoying page turns and unnecessary mouse clicks and waits;
too much text can make the screen seem overcrowded and unpleasant.
On the other hand, if you are creating presentation slides for public speaking support, the text will
be keyed to a live presentation where the text accents the main message.
In this case, use bulleted points in large fonts and few words with lots of white space. Let the
audience focus on the speaker at the podium, rather than spend its time reading fine points and
sub points projected on a screen.
Picking the fonts to use in your multimedia presentation may be somewhat difficult from a design
standpoint. Here are a few design suggestions that may help:
For small type, use the most legible font available. Decorative fonts that cannot be read are
useless.
Using too many fonts on the same page is called ransom-note typography. In text blocks, adjust
the leading for the most pleasing line spacing. Lines packed too tightly are difficult to read.
Coding an initial cap for a web page is simple using the (now deprecated) HTML <font> tag’s
size attribute:
<!-- Set to your desired font and size -->
<font face="verdana" size="1">
<!-- Increase the size of the initial letter -->
<font size="+2">T</font>ry drop caps
</font>
Use of Cascading Style Sheets (CSS), preferred over the deprecated HTML <font> tag, allows
you to be quite precise about font faces, sizes, and other attributes
Vary the size of a font in proportion to the importance of the message you are delivering.
In large-size headlines, adjust the spacing between letters (kerning) so that the spacing feels
right.
Big gaps between large letters can turn your title into a toothless waif. You may need to kern by
hand, using a bitmapped version of your text.
To make your type stand out or be more legible, explore the effects of different colors and of
placing the text on various backgrounds. Try reverse type for a stark, white-on-black message.
Use anti-aliased text where you want a gentle and blended look for titles and headlines. This can
give a more professional appearance. Anti-aliasing blends the colors along the edges of the
letters (called dithering) to create a soft transition between the letter and its background.
Try drop caps (like the T in the example above) and initial caps to accent your words. Most word
processors and text editors will let you create drop caps and small caps in your text. Adobe and
others make initial caps (one such Adobe set is called Gothic).
If you are using centered type in a text block, keep the number of lines and their width to a
minimum.
For attention-grabbing results with single words or short phrases, try graphically altering and
distorting your text and delivering the result as an image. Wrap your word onto a sphere, bend it
into a wave, or splash it with rainbow colors.
Experiment with drop shadows. Place a copy of the word on top of the original, and offset the
original up and over a few pixels. Then color the original gray (or any other color). The word
may become more legible and provide much greater impact.
With web sites, shadowed text and graphics on a plain white background add depth to a page.
Surround headlines with plenty of white space.
White space is a designer’s term for roomy blank areas, while programmers call the invisible
character made by a space (ASCII 32) or a tab (ASCII 9) white space. Web designers use a
nonbreaking space entity (&nbsp;) to force spaces into lines of text in HTML documents.
Pick the fonts that seem right to you for getting your message across, then double-check your
choice against other opinions.
Use meaningful words or phrases for links and menu items.
Text links on web pages can accent your message: they normally stand out by color and
underlining. Use link colors consistently throughout a site.
Bold or emphasize text to highlight ideas or concepts, but do not make text look like a link or a
button when it is not.
On a web page, put vital text elements and menus in the top 320 pixels.
The most commonly reported fonts available on Windows computers are Tahoma, Microsoft
Sans Serif, Verdana, and Courier New. On Macs expect Helvetica, Lucida Grande, and Courier.
Animating Text:
There are plenty of ways to retain a viewer’s attention when displaying text. For example, you
can animate bulleted text and have it “fly” onto the screen. You can “grow” a headline a character
at a time.
For public speakers, simply highlighting the important text works well as a pointing device.
When there are several points to be made, you can stack keywords and flash them past the viewer
in a timed automated sequence.
You might fly in some keywords, dissolve others, rotate or spin others, and so forth, until you
have a dynamic bulleted list of words that is interesting to watch.
But be careful—don’t overdo the special effects, or they will become boring.
For simple presentations, PowerPoint has bells and whistles to reveal a line of text one word or
one letter at a time, or to animate an entire line.
With multimedia, you have the power to blend both text and icons (as well as colors, sounds,
images, and motion video) to enhance the overall impact and value of your message.
Word meanings are shared by millions of people, but the special symbols you design for a
multimedia project are not; these symbols must be learned before they can be useful message
carriers.
Some symbols are more widely used and understood than others, but readers of even these
common symbols had to grow accustomed to their meanings.
Computers and Text:
Because each PostScript character is a mathematical formula, it can be easily scaled bigger or
smaller so it looks right whether drawn at 24 points or 96 points, whether the printer is a 300 dpi
LaserWriter or a high-resolution 1200, 2400, or even 3600 dpi imagesetter suitable for the finest
print jobs.
And the PostScript characters can be drawn much faster than in the old-fashioned way. Before
PostScript, the printing software looked up the character’s shape in a bitmap table containing a
representation of the pixels of every character in every size.
PostScript quickly became the de facto industry font and printing standard for desktop publishing
and played a significant role in the early success of Apple’s Macintosh computer.
There are two kinds of PostScript fonts: Type 3 and Type 1. Type 3 font technology is older than
Type 1 and was developed for output to printers; it is rarely used by multimedia developers.
There are currently over 6,000 different Type 1 typefaces available. Type 1 fonts also contain
hints, which are special instructions for grid-fitting to help improve resolution.
Hints can apply to a font in general or to specific characters at a particular resolution
In 1989, Apple and Microsoft announced a joint effort to develop a “better and faster” quadratic
curves outline font methodology, called TrueType.
In addition to printing smooth characters on printers, TrueType would draw characters to a
low-resolution (72 dpi or 96 dpi) monitor.
Furthermore, Apple and Microsoft would no longer need to license the PostScript technology
from Adobe for their operating systems. Because TrueType was based on Apple technology, it
was licensed to Microsoft.
Unicode:
As the computer market has become more international, one of the resulting problems has been
handling the various international language alphabets.
It was at best difficult, and at times impossible, to translate the text portions of programs from
one script to another.
Since 1989, a concerted effort on the part of linguists, engineers, and information professionals
from many well-known computer companies has been focused on a 16-bit architecture for
multilingual text and character encoding called Unicode. The original standard accommodated
up to about 65,000 characters to include the characters from all known languages and alphabets
in the world.
Where several languages share a set of symbols that have a historically related derivation, the
shared symbols of each language are unified into collections of symbols (called scripts).
A single script can work for tens or even hundreds of languages (for example, the Latin script
used for English and most European languages). Sometimes, however, only one script will work
for a language.
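The idea is easy to demonstrate; here is a minimal Python sketch (the three characters are
arbitrary examples from different scripts) showing that every character carries a numeric Unicode
code point, and that characters in the original 16-bit range occupy 16 bits in UTF-16:
# Print the Unicode code point of characters from three scripts.
for ch in ["A", "é", "あ"]:
    # ord() returns the code point; hex() shows it in the familiar U+ style.
    print(ch, hex(ord(ch)), len(ch.encode("utf-16-le")) * 8, "bits in UTF-16")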
Translating or designing multimedia (or any computer-based material) into a language other than
the one in which it was originally written is called localization.
This process deals with everything from the month/day/year order for expressing dates to
providing special alphabetical characters on keyboards and printers.
Font Editing and Design Tools
Special font editing tools can be used to make your own type, so you can communicate an idea or
graphic feeling exactly.
With these tools, professional typographers create distinct text and display faces. Graphic
designers, publishers, and ad agencies can design instant variations of existing typefaces.
Fontlab
Fontlab, Ltd. offers font editors for both Macintosh and Windows platforms; you can use this
software to develop PostScript, TrueType, and OpenType fonts.
Designers can also modify existing typefaces, incorporate PostScript artwork, automatically
trace scanned images, and create designs from scratch.
Fontographer’s features include a freehand drawing tool to create professional and precise inline
and outline drawings of calligraphic and script characters, using either the mouse or alternative
input methods (such as a pressure-sensitive pen system).
Fontographer allows the creation of multiple font designs from two existing typefaces, and you
can design lighter or heavier fonts by modifying the weight of an entire typeface.
To make your text look pretty, you need a toolbox full of fonts and special graphics applications
that can stretch, shade, shadow, color, and anti-alias your words into real artwork.
Pretty text is typically found in bitmapped drawings where characters have been tweaked,
manipulated, and blended into a graphic image.
Simply choosing the font is the first step. Most designers find it easier to make pretty type
starting with ready-made fonts, but some will create their own custom fonts using font editing and
design tools.
With the proper tools and a creative mind, you can create endless variations on plain-old type,
and you not only choose but also customize the styles that will fit with your design needs.
Most image-editing and painting applications (see Figure for a PowerPoint example) let you
make text using the fonts available in your system.
You can colorize the text, stretch, squeeze, and rotate it, and you can filter it through various
plug-ins to generate wild graphic results.
Hypermedia and Hypertext
Multimedia—the combination of text, graphic, and audio elements into a single collection or
presentation—becomes interactive multimedia when you give the user some control over what
information is viewed and when it is viewed.
Interactive multimedia becomes hypermedia when its designer provides a structure of linked
elements through which a user can navigate and interact.
When a hypermedia project includes large amounts of text or symbolic content, this content can
be indexed and its elements then linked together to afford rapid electronic retrieval of the
associated information.
When words are keyed or indexed to other words, you have a hypertext system; the “text” part of
this term represents the project’s content and meaning, rather than the graphical presentation of
the text.
When text is stored in a computer instead of on printed pages, the computer’s powerful
processing capabilities can be applied to make the text more accessible and meaningful.
The text can then be called hypertext; because the words, sections, and thoughts are linked, the
user can navigate through text in a nonlinear way, quickly and intuitively.
Using hypertext systems, you can electronically search through all the text of a computer-resident
book, locate references to a certain word, and then immediately view the page where the word
was found.
Or you can create complicated Boolean searches (using terms such as AND, OR, NOT, and
BOTH) to locate the occurrences of several related words, such as “Elwood,” “Gloria,”
“mortgage,” and “happiness,” in a paragraph or on a page.
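As a rough illustration, a Boolean search can be sketched in a few lines of Python using set
operations (the sample paragraphs and the helper function are invented for the example):
# Two sample paragraphs to search.
paragraphs = [
    "Elwood signed the mortgage and Gloria smiled.",
    "Gloria wrote at length about happiness.",
]

def words(p):
    # Normalize a paragraph to a set of lowercase words.
    return set(w.strip(".,").lower() for w in p.split())

# AND: both terms must occur in the same paragraph.
print([p for p in paragraphs if {"gloria", "mortgage"} <= words(p)])
# OR: either term may occur.
print([p for p in paragraphs if {"elwood", "happiness"} & words(p)])
# NOT: "gloria" present but "mortgage" absent.
print([p for p in paragraphs if "gloria" in words(p) and "mortgage" not in words(p)])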
Whole documents can be linked to other documents. Because hypertext is the organized cross-
linking of words not only to other words but also to associated images, video clips, sounds, and
other exhibits, hypertext often becomes simply an additional feature within an overall multimedia
design.
The term “hyper” (from the Greek word “over”) has come to imply that user interaction is a
critical part of the design, whether for text browsing or for the multimedia project as a whole.
When interaction and cross-linking is then added to multimedia, and the navigation system is
nonlinear, multimedia becomes hypermedia.
Using Hypertext
Special programs for information management and hypertext have been designed to present
electronic text, images, and other elements in a database fashion.
Commercial systems have been used for large and complicated mixtures of text and images.
Such searchable database engines are widely used on the Web, where software robots visit
millions of web pages and index entire web sites.
Hypertext databases rely upon proprietary indexing systems that carefully scan the entire body of
text and create very fast cross-referencing indexes that point to the location of specific words,
documents, and images.
Indeed, a hypertext index by itself can be as large as 50 percent to 100 percent the size of the
original document.
Indexes are essential for speedy performance. Google’s search engine produces about
1,220,000,000 hits in less than a quarter of a second!
Simpler but effective hypertext indexing tools are available for both Macintosh and Windows.
Proximity:
Searching for words according to their general proximity and order. For example, you might
search for “party” and “beer” only when they occur on the same page or in the same paragraph.
Adjacency:
Searching for words occurring next to one another, usually in phrases and proper names. For
instance, find “widow” only when “black” is the preceding adjacent word.
Alternates:
Applying an OR criterion to search for two or more words, such as “bacon” or “eggs.”
Association:
Applying an AND criterion to search for two or more words, such as “skiff,” “tender,” “dinghy,”
and “rowboat.”
Negation:
Applying a NOT criterion to search exclusively for references to a word that are not associated
with the word. For example, find all occurrences of “paste” when “library” is not present in the same
sentence.
Truncation:
Searching for a word with any of its possible suffixes. For example, to find all occurrences of
“girl” and “girls,” you may need to specify something like girl#.
Multiple character suffixes can be managed with another specifier, so geo* might yield “geo,”
“geology,” and “geometry,” as well as “George.”
Intermediate words:
Searching for words that occur between what might normally be adjacent words, such as a middle
name or initial in a proper name.
Frequency:
Searching for words based on how often they appear: the more times a term is mentioned in a
document, the more relevant the document is to this term.
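A minimal Python sketch of frequency-based relevance, assuming (purely for illustration) that a
document is more relevant to a term the more often the term appears in it:
# Rank two sample documents by how often they mention "skiff".
from collections import Counter

docs = {
    "doc1": "sail the skiff moor the skiff paint the skiff",
    "doc2": "a dinghy and a skiff",
}
counts = {name: Counter(text.split())["skiff"] for name, text in docs.items()}
print(sorted(counts, key=counts.get, reverse=True))  # ['doc1', 'doc2']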
Hypermedia Structures
Two buzzwords used often in hypertext systems are link and node. Links are connections
between the conceptual elements, that is, the nodes, which may consist of text, graphics, sounds,
or related information in the knowledge base.
Links connect Caesar Augustus with Rome, for example, and grapes with wine, and love with
hate.
The art of hypermedia design lies in the visualization of these nodes and their links so that they
make sense and can form the backbone of a knowledge access system.
Along with the use of HTML for the World Wide Web, the term anchor is used for the reference
from one document to another document, image, sound, or file on the Web.
Links are the navigation pathways and menus; nodes are accessible topics, documents, messages,
and content elements.
A link anchor is where you come from; a link end is the destination node linked to the anchor.
Some hypertext systems provide unidirectional navigation and offer no return pathway; others are
bidirectional.
The simplest way to navigate hypermedia structures is via buttons that let you access linked
information (text, graphics, and sounds) that is contained at the nodes.
A typical navigation structure is built from buttons that access linked nodes, and a link can lead
to a node that itself provides further links.
The standard document format used for pages on the Web is called Hypertext Markup
Language (HTML).
In an HTML document you can specify typefaces, sizes, colors, and other properties by “marking
up” the text in the document with tags.
The remarkable growth of the Web is straining the “old” designs for displaying text on
computers.
Dynamic HTML uses Cascading Style Sheets (CSS) to define choices ranging from line height
to margin width to font face. HTML character entities are represented either by a number or by a
word, and are always prefixed by an ampersand (escape) and followed by a semicolon; for
example, both &amp; and &#38; produce an ampersand.
Hypertext Tools
Two functions are common to most hypermedia text management systems, and they are often
provided as separate applications: building (or authoring) and reading.
The builder creates the links, identifies nodes, and generates the all-important index of words.
The index methodology and the search algorithms used to find and group words according to user
search criteria are typically proprietary, and they represent an area where computers are carefully
optimized for performance—finding search words among a list of many tens of thousands of
words requires speed-demon programming.
Hypertext systems are currently used for electronic publishing and reference works, technical
documentation, educational courseware, interactive kiosks, electronic catalogs, interactive fiction,
and text and image databases.
Today these tools are used extensively with information organized in a nonlinear fashion.
UNIT I COMPLETED
UNIT I
2 Marks
1. Define multimedia.
2. What is interactive multimedia?
3. What is hypermedia?
4. List the characteristics of multimedia.
5. Write about virtual reality.
6. Expand CD-ROM and give its use.
5 Marks
1. Write about delivering multimedia.
2. Give the uses of ASCII character sets.
3. Explain Fontlab.
4. Write about the power of hypertext.
5. Explain character sets and alphabets.
10 Marks
UNIT II
Images
An image is a rectangular grid of pixels. It has a definite height and a definite width, counted in
pixels. Each pixel is square and has a fixed size on a given display, although different computer
monitors may use pixels of different sizes.
Plan Approach
Work out your graphic approach, either in your head or during creative sessions with your client
or colleagues.
To get a handle on any multimedia project, you start with pencil, eraser, and paper.
Outline your project and your graphic ideas first: make a flowchart; storyboard the project using
stick figures; use three-by-five index cards and shuffle them until you get it right.
Organize Tools
Most authoring systems provide the tools with which you can create the graphic objects of
multimedia (text, interactive buttons, vector-drawn objects, and bitmaps) directly on your screen.
If one of these tools is not included, the authoring system usually offers a mechanism for
importing the object you need from another application.
When you are working with animated objects or motion video, most authoring systems include a
feature for activating these elements, such as a programming language or special functions for
embedding them. Likely, too, your tools will offer a library of special effects—including zooms,
wipes, and dissolves.
Many multimedia designers do not limit their toolkits to the features of a single authoring
platform, but employ a variety of applications and tools to accomplish many specialized tasks.
Configure Computer Workspace
When developing multimedia, it is helpful to have more than one monitor to provide lots of
screen real estate (viewing area).
In this way, you can display the full-screen working area of your project or presentation and still
have space to put your tools and other menus.
This is particularly important in an authoring system such as Flash or Director, where the edits
and changes you make in one window are immediately visible in the presentation window.
During development there is a lot of cutting and pasting among windows and among various
applications, and with an extra monitor, you can open many windows at once and spread them
out.
Both Macintosh and Windows operating systems support this extra hardware.
Making Still Images
Still images may be small or large, or even full screen.
They may be colored, placed at random on the screen, evenly geometric, or oddly shaped.
Still images are generated by the computer in two ways: as bitmaps (or paint graphics) and as
vector-drawn (or just plain “drawn”) graphics.
Bitmaps may also be called “raster” images. Likewise, bitmap editors are sometimes called
“painting” programs. And vector editors are sometimes called “drawing” programs.
Bitmaps are used for photo-realistic images and for complex drawings requiring fine detail.
Vector-drawn objects are used for lines, boxes, circles, polygons, and other graphic shapes that
can be mathematically expressed in angles, coordinates, and distances.
A drawn object can be filled with color and patterns, and you can select it as a single object.
The appearance of both types of images depends on the display resolution and capabilities of your
computer’s graphics hardware and monitor.
Both types of images are stored in various file formats and can be translated from one application
to another or from one computer platform to another.
Typically, image files are compressed to save memory and disk space; many bitmap image file
formats already use compression within the file itself—for example, GIF, JPEG, and PNG.
Still images may be the most important element of your multimedia project or web site.
Bitmaps
A bit is the simplest element in the digital world, an electronic digit that is either on or off, black
or white, or true (1) or false (0).
This is referred to as binary, since only two states (on or off) are available. A map is a
two-dimensional matrix of these bits.
A bitmap, then, is a simple matrix of the tiny dots that form an image and are displayed on a
computer screen or printed.
A one-dimensional matrix (1-bit depth) is used to display monochrome images—a bitmap where
each bit is most commonly set to black or white.
Depending upon your software, any two colors that represent the on and off (1 or 0) states may
be used.
More information is required to describe shades of gray or the more than 16 million colors that
each picture element might have in a color image.
These picture elements (known as pels or, more commonly, pixels) can be either on or off, as in
the 1-bit bitmap, or, by using more bits to describe them, can represent varying shades of color (4
bits for 16 colors; 8 bits for 256 colors; 15 bits for 32,768 colors; 16 bits for 65,536 colors; 24 bits
for 16,777,216 colors).
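The arithmetic behind these figures can be checked with a short Python sketch (the 640 × 480
frame size is just an illustrative example):
# Number of distinct colors = 2 raised to the bit depth.
for depth in (1, 4, 8, 15, 16, 24):
    print(depth, "bits ->", 2 ** depth, "colors")

# Raw (uncompressed) bitmap size = width x height x bit depth, in bits.
width, height, depth = 640, 480, 24
print(width * height * depth // 8, "bytes")  # 921,600 bytes for 640x480 at 24 bits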
Bitmap Sources
Bitmaps can be made from the following:
Make a bitmap from scratch with a paint or drawing program.
Grab a bitmap from an active computer screen with a screen capture program, and then paste it
into a paint program or your application.
Capture a bitmap from a photo or other artwork using a scanner to digitize the image.
Once made, a bitmap can be copied, altered, e-mailed, and otherwise used in many creative ways.
Bitmap Software
The abilities and features of painting and image-editing programs range from simple to complex.
The best programs are available in versions that work the same on both Windows and Mac
platforms, and the graphics files you make can be saved in many formats, readable across
platforms.
Macintosh computers do not ship with a painting tool, and Windows provides only a rudimentary
Paint program, so you will need to acquire this very important software separately.
Many multimedia authoring tools offer built-in bitmap editing features. Director, for example,
includes a powerful image editor that provides advanced tools such as “onion-skinning” and
image filtering using common plug-ins.
Adobe’s Photoshop, however, remains the most widely used image-editing tool among designers
worldwide.
Many designers also use a vector-based drawing program such as Adobe’s Illustrator, CorelDraw,
or InDesign to create curvy and complicated looks that they then convert to a bitmap.
You can use your image editing software to create original images, such as cartoons, symbols,
buttons, bitmapped text
In addition to letting you enhance and make composite images, image-editing tools allow you to
alter and distort images.
A color photograph of a red rose can be changed into a purple rose, or blue if you prefer.
Morphing is another effect that can be used to manipulate still images or to create interesting and
often bizarre animated transformations.
Morphing (see Figure) allows you to smoothly blend two images so that one image seems to melt
into the next, often producing some amusing results
Vector Drawing
Most multimedia authoring systems provide for use of vector-drawn objects such as lines, rectangles,
ovals, polygons, complex drawings created from those objects, and text.
Computer-aided design (CAD) programs have traditionally used vector-drawn object systems for
creating the highly complex and geometric renderings needed by architects and engineers.
Graphic artists designing for print media use vector-drawn objects because the same mathematics
that put a rectangle on your screen can also place that rectangle on paper.
This requires the higher resolution of the printer, using a page description format such as Portable
Document Format (PDF).
Programs for 3-D animation also use vector-drawn graphics. For example, the various changes of
position, rotation, and shading of light required to spin an extruded corporate logo must be
calculated mathematically.
Bitmap vs. Vector:
Bitmaps are the image type most appropriate for photo-realistic images and complex drawings
requiring fine detail; vector images are most appropriate for lines, boxes, circles, polygons, and
other graphic shapes that can be mathematically expressed in angles, coordinates, and distances.
Limitations of bitmapped images include large file sizes and the inability to scale or resize the
image easily while maintaining quality; vector-drawn objects, by contrast, can be filled with
colors and patterns and selected as single objects.
A bitmap is a simple information matrix describing the individual dots of an image, called pixels;
vector-drawn objects use a fraction of the memory space required to describe and store the same
object in bitmap form.
You can manipulate and adjust many properties of a bitmap, and cut and paste among many
bitmaps, using specialized image-editing or “darkroom” programs; converting bitmaps to
vector-drawn objects is difficult, although autotracing programs can compute the bounds of the
shapes and colors in a bitmap image.
3-D Drawing and Rendering
A great deal of information is needed to display a 3-D scene. Scenes consist of objects that in
turn contain many small elements such as blocks, cylinders, spheres, or cones (described using
mathematical constructs or formulas).
The more elements contained in an object, the more complicated its structure will be and, usually,
the finer its resolution and smoothness.
Objects and elements in 3-D space carry with them properties such as shape, color, texture,
shading, and location.
A scene contains many different objects. Imagine a scene with a table, chairs, and a background.
Zoom into one of the objects—the chair, for example, in Figure.
It has 11 objects made up of various blocks and rectangles. Objects are created by modeling them
using a 3-D application.
To model an object that you want to place into your scene, you must start with a shape.
You can create a shape from scratch, or you can import a previously made shape from a library of
geometric shapes called primitives, typically blocks, cylinders, spheres, and cones.
In most 3-D applications, you can create any 2-D shape with a drawing tool or place the outline
of a letter, then extrude or lathe it into the third dimension along the z axis
When you extrude a plane surface, its shape extends some distance, either perpendicular to the
shape’s outline or along a defined path.
When you lathe a shape, a profile of the shape is rotated around a defined axis (you can set the
direction) to create the 3-D object.
Other methods for creating 3-D objects differ among the various software packages.
Once you have created a 3-D object, you can apply textures and colors to it to make it seem more
realistic
Color
Color is a vital component of multimedia.
A color may be expressed in known physical values (humans, for example, perceive colors with
wavelengths ranging from about 400 to 700 nanometers on the electromagnetic spectrum), and
several methods and models describe color space using mathematics and values.
Natural Light and Color
Light comes from an atom when an electron passes from a higher to a lower energy level; thus
each atom produces uniquely specific colors. This explanation of light is known as quantum
theory.
Infrared light, just below the visible spectrum, is radiated heat. Ultraviolet light, on the other
hand, is beyond the higher end of the visible spectrum and can be damaging to humans.
The color white is a noisy mixture of all the color frequencies in the visible spectrum.
Sunlight and fluorescent tubes produce white light (though, technically, even they vary in color
temperature—sunlight is affected by the angle at which the light is coming through the
atmosphere, and fluorescent tubes provide spikes in the blue-green parts of the color spectrum);
tungsten lamp filaments produce light with a yellowish cast; sodium vapor lamps, typically used
for low-cost outdoor street lighting, produce an orange light characteristic of the sodium atom.
These are the most common sources of light in the everyday (or every night) world.
The light these sources produce typically reaches your eye as a reflection of that light into the
lens of your eye.
The eye can differentiate among about 80,000 colors, or hues, consisting of combinations of red,
green, and blue.
Computerized Color
Although the eye perceives colors based upon red, green, and blue, there are actually two basic
methods of making color: additive and subtractive.
Additive Color
In the additive color method, a color is created by combining colored light sources in three
primary colors: red, green, and blue (RGB).
This is the process used for cathode ray tube (CRT), liquid crystal display (LCD), and plasma displays.
Subtractive Color
In the subtractive color method, color is created by combining colored media such as paints or ink
that absorb (or subtract) some parts of the color spectrum of light and reflect the others back to
the eye.
Subtractive color is the process used to create color in printing.
The printed page is made up of tiny halftone dots of three primary colors: cyan, magenta, and
yellow (designated as CMY). Four-color printing includes black (which is technically not a color
but, rather, the absence of color).
Since the letter B is already used for blue, black is designated with a K (so four-color printing is
designated as CMYK).
The color remaining in the reflected part of the light that reaches your eye from the printed page
is the color you perceive.
When one of the three primary additive colors is subtracted from the full RGB mix, the
complementary subtractive primary color is perceived, as the short sketch below illustrates. A
zero indicates a lack of that primary color, while 255 is the maximum amount of that color.
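This minimal Python sketch works out that relationship (the tuples simply hold red, green, and
blue channel values):
# Subtracting one additive primary from white leaves a subtractive primary.
white = (255, 255, 255)
for name, rgb in [("red", (255, 0, 0)), ("green", (0, 255, 0)), ("blue", (0, 0, 255))]:
    remainder = tuple(w - c for w, c in zip(white, rgb))
    print("white minus", name, "=", remainder)
# white minus red   = (0, 255, 255): cyan
# white minus green = (255, 0, 255): magenta
# white minus blue  = (255, 255, 0): yellow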
Models or methodologies used to specify colors in computer terms are RGB, HSB, HSL, CMYK,
CIE, and others.
Using the 24-bit RGB (red, green, blue) model, you specify a color by setting each amount of
red, green, and blue to a value in a range of 256 choices, from 0 to 255. Eight bits of memory are
required to define those 256 possible choices, and that has to be done for each of the three
primary colors; a total of 24 bits of memory (8 + 8 + 8 = 24) are therefore needed to describe the
exact color, which is one of “millions” (256 × 256 × 256 = 16,777,216).
When web browsers were first developed, the software engineers chose to represent the color
amounts for each color channel in a hexadecimal pair.
Rather than using one number between 0 and 255, two hexadecimal numbers, written in a scale of
16 numbers and letters in the range “0123456789ABCDEF” represent the required 8 bits (16 × 16
= 256) needed to specify the intensity of red, green, and blue.
Thus, in HTML, you can specify pure green as #00FF00, where there is no red (first pair is #00),
there is maximum green (second pair is #FF), and there is no blue (last pair is #00).
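The conversion from decimal channel values to an HTML hex triplet can be sketched in one
short Python function (the function name is illustrative):
def rgb_to_hex(r, g, b):
    # Each 0-255 channel value becomes one two-digit hexadecimal pair.
    return "#{:02X}{:02X}{:02X}".format(r, g, b)

print(rgb_to_hex(0, 255, 0))  # "#00FF00", pure green as in the text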
In the HSB (hue, saturation, brightness) and HSL (hue, saturation, lightness) models, you specify
hue or color as an angle from 0 to 360 degrees on a color wheel, and saturation, brightness, and
lightness as percentages.
Saturation is the intensity of a color. At 100 percent saturation a color is pure; at 0 percent
saturation, the color is white, black, or gray.
Lightness or brightness is the percentage of black or white that is mixed with a color. A lightness
of 100 percent will yield a white color; 0 percent is black; the pure color has 50 percent lightness.
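Python's standard colorsys module can illustrate this model; note that colorsys orders the values
as HLS (hue, lightness, saturation) and scales all three from 0.0 to 1.0 rather than using degrees
and percentages:
import colorsys

# Hue 120 degrees, 50 percent lightness, 100 percent saturation.
r, g, b = colorsys.hls_to_rgb(120 / 360, 0.5, 1.0)
print(round(r * 255), round(g * 255), round(b * 255))  # 0 255 0: pure green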
The CMYK color model is less applicable to multimedia production. It is used primarily in the
printing trade where cyan, magenta, yellow, and black are used to print process color separations.
Color Palettes
Palettes are mathematical tables that define the color of a pixel displayed on the screen. The most
common palettes are 1, 4, 8, 16, and 24 bits deep (2, 16, 256, 65,536, and 16,777,216 colors,
respectively).
When color monitors became available for computers, managing the computations for displaying
colors severely taxed the hardware and memory available at the time. 256-color, 8-bit images
using a color lookup table or palette were the best a computer could do.
256 default system colors were statistically selected by Apple and Microsoft engineers (working
independently) to be the colors and shades that are most “popular” in photographic images; their
two system palettes are, of course, different.
Dithering
If you start out with a 24-bit scanned image that contains millions of colors and need to reduce it
to an 8-bit, 256-color image, you get the best replication of the original image by dithering the
colors in the image.
Dithering is a process whereby the color value of each pixel is changed to the closest matching
color value in the target palette, using a mathematical algorithm.
Often the adjacent pixels are also examined, and patterns of different colors are created in the
more limited palette to best represent the original colors.
Since there are now only 256 colors available to represent the thousands or even millions of
colors in the original image, pixels using the 256 remaining colors are intermixed and the eye
perceives a color not in the palette, created by blending the colors mixed together.
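The core of the process, finding the closest palette color for each pixel, can be sketched in a few
lines of Python (the five-color palette is illustrative only; production dithering also diffuses each
pixel's rounding error to its neighbors):
# A tiny target palette of (red, green, blue) tuples.
palette = [(0, 0, 0), (255, 255, 255), (255, 0, 0), (0, 255, 0), (0, 0, 255)]

def closest(pixel):
    # Pick the palette entry with the smallest squared distance to the pixel.
    return min(palette, key=lambda c: sum((p - q) ** 2 for p, q in zip(pixel, c)))

print(closest((200, 30, 40)))  # (255, 0, 0): the nearest entry is the palette red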
Dithering software is usually built into image-editing programs and is also available in many
multimedia authoring systems as part of the application’s palette management suite of tools
Image File Formats
Most applications on any operating system can manage JPEG, GIF, PNG, and TIFF image
formats.
An older format used on the Macintosh, PICT, is a complicated but versatile format developed
by Apple where both bitmaps and vector-drawn objects can live side by side.
The device-independent bitmap (DIB), also known as a BMP, is a common Windows palette–
based image file format similar to PNG.
PCX files were originally developed for use in Z-Soft MS-DOS paint packages; these files can be
opened and saved by almost all MS-DOS paint software and desktop publishing software.
TIFF, or Tagged Image File Format, was designed to be a universal bitmapped image format
and is also used extensively in desktop publishing packages.
Often, applications use a proprietary file format to store their images. Adobe creates a PSD file
for Photoshop and an AI file for Illustrator; Corel creates a CDR file.
DXF was developed by Autodesk as an ASCII-based drawing interchange file for AutoCAD, but
the format is used today by many computer-aided design applications.
IGS (or IGES, for Initial Graphics Exchange Standard) was developed by an industry
committee as a broader standard for transferring CAD drawings.
These formats are also used in 3-D rendering and animation programs. JPEG, PNG, and GIF
images are the most common bitmap formats used on the Web and may be considered cross-
platform, as all browsers will display them.
Adobe’s popular PDF (Portable Document Format) file manages both bitmaps and drawn art (as
well as text and other multimedia content), and is commonly used to deliver a “finished product”
that contains multiple assets.
Sound
Sound is produced by the vibration of matter. During the vibration, pressure variations are
created in the air surrounding it. The pattern of the oscillation is called a waveform
Infrasound: from 0 to 20 Hz
Human hearing: from 20 Hz to 20 kHz
Ultrasound: from 20 kHz to 1 GHz
Hypersound: from 1 GHz to 10 THz
Multimedia systems typically make use of sound only within the frequency range of human
hearing.
Human hearing is less able to identify the location from which lower frequencies are generated.
In surround sound systems, subwoofers can be placed wherever their energy is most efficiently
radiated (often in a corner), but midrange speakers should be carefully placed
Digital Audio:
Digital audio is created when you represent the characteristics of a sound wave using numbers—
a process referred to as digitizing.
You can digitize sound from a microphone, a synthesizer, existing recordings, live radio and
television broadcasts, and CDs and DVDs.
Digitized sound is sampled sound. Every nth fraction of a second, a sample of sound is taken and
stored as digital information in bits and bytes.
The quality of this digital recording depends upon how often the samples are taken (sampling
rate or frequency, measured in kilohertz, or thousands of samples per second) and how many
numbers are used to represent the value of each sample (bit depth, sample size, resolution, or
dynamic range).
The more often you take a sample and the more data you store about that sample, the finer the
resolution and quality of the captured sound when it is played back.
Since the quality of your audio is based on the quality of your recording and not the device on
which your end user will play the audio, digital audio is said to be device independent.
The three sampling rates most often used in multimedia are 44.1 kHz (CD-quality), 22.05 kHz,
and 11.025 kHz. Sample sizes are either 8 bits or 16 bits.
The larger the sample size, the more accurately the data will describe the recorded sound.
The value of each sample is rounded off to the nearest integer (quantization), and if the
amplitude is greater than the intervals available, clipping of the top and bottom of the wave
occurs.
Quantization can produce an unwanted background hissing noise, and clipping may severely
distort the sound.
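Sampling rate and sample size also determine file size directly; here is a minimal sketch of the
arithmetic in Python (one minute of CD-quality stereo is used as the example):
# Uncompressed size = rate x (bits per sample / 8) x channels x seconds.
rate, bits, channels, seconds = 44100, 16, 2, 60
print(rate * (bits // 8) * channels * seconds, "bytes")  # 10,584,000 bytes, about 10 MB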
Making digital audio files is fairly straightforward on most computers. Plug a microphone into
the microphone jack of your computer.
If you want to digitize archived analog source materials—music or sound effects that you have
saved on videotape, for example—simply plug the “Line-Out” or “Headphone” jack of the
device into the “Line-In” jack on your computer.
Then use audio digitizing software such as Audacity, to do the work.
There are two crucial aspects of preparing digital audio files:
Balancing the need for sound quality against file size. Higher quality usually means larger files,
requiring longer download times on the Internet and more storage space on a CD or DVD.
Setting proper recording levels to get a good, clean recording.
The basic sound editing operations that most multimedia producers need are described in the
paragraphs that follow.
Trimming:
Removing “dead air” or blank space from the front of a recording and any unnecessary extra
time off the end is your first sound editing task.
Trimming even a few seconds here and there might make a big difference in your file size.
Trimming is typically accomplished by dragging the mouse cursor over a graphic representation
of your recording and choosing a menu command such as Cut, Clear, Erase, or Silence.
Splicing and Assembly:
Using the same tools mentioned for trimming, you will probably want to remove the extraneous
noises that inevitably creep into a recording.
Even the most controlled studio voice-overs require touch-up.
Also, you may need to assemble longer recordings by cutting and pasting together many shorter
ones.
In the old days, this was done by splicing and assembling actual pieces of magnetic tape.
Volume Adjustments:
If you are trying to assemble ten different recordings into a single sound track, there is little
chance that all the segments will have the same volume.
To provide a consistent volume level, select all the data in the file, and raise or lower the overall
volume by a certain amount.
It is best to use a sound editor to normalize the assembled audio file to a particular level, say 80
percent to 90 percent of maximum (without clipping), or about –16 dB.
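A minimal sketch of peak normalization, assuming samples are floats in the range -1.0 to 1.0
(the sample values are stand-ins; real editors operate on the samples inside the audio file):
samples = [0.02, -0.31, 0.18, -0.09]   # stand-in sample data

target = 10 ** (-16 / 20)              # -16 dB below full scale is about 0.158
gain = target / max(abs(s) for s in samples)
normalized = [s * gain for s in samples]
print(max(abs(s) for s in normalized)) # peak now sits at the -16 dB target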
Format Conversion:
In some cases, your digital audio editing software might read a format different from that read by
your presentation or authoring program.
Most sound editing software will save files in your choice of many formats, most of which can be
read and imported by multimedia authoring systems.
Data may be lost when converting formats.
Resampling or Downsampling:
If you have recorded and edited your sounds at a high sampling rate and sample size (for
example, 44.1 kHz at 16 bits) but are using lower rates and resolutions in your project, you must
resample or downsample the file.
Your software will examine the existing digital recording and work through it to reduce the
number of samples. This process may save considerable disk space.
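At its simplest, downsampling keeps every nth sample, as in this Python sketch (a proper
resampler would also low-pass filter first to avoid aliasing):
samples_44k = list(range(8))       # stand-in for samples recorded at 44.1 kHz
samples_22k = samples_44k[::2]     # keep every 2nd sample: half the rate, half the data
print(samples_22k)                 # [0, 2, 4, 6]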
Equalization:
Some programs offer digital equalization (EQ) capabilities that allow you to modify a
recording’s frequency content so that it sounds brighter (more high frequencies) or darker (low,
ominous rumbles).
Time Stretching:
Advanced programs let you alter the length (in time) of a sound file without changing its pitch.
This feature can be very useful, but watch out: most time-stretching algorithms will severely
degrade the audio quality of the file if the length is altered more than a few percent in either
direction.
Reversing Sounds:
Another simple manipulation is to reverse all or a portion of a digital audio recording.
Sounds, particularly spoken dialog, can produce a surreal, otherworldly effect when played
backward.
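Reversal is just the samples written out in the opposite order. A sketch using Python's standard wave module and NumPy (the filenames are placeholders, and a stereo file would first be reshaped into frame pairs):

import wave
import numpy as np

with wave.open('dialog.wav', 'rb') as f:        # assumes 16-bit mono
    params = f.getparams()
    samples = np.frombuffer(f.readframes(f.getnframes()), dtype=np.int16)

with wave.open('dialog_reversed.wav', 'wb') as f:
    f.setparams(params)
    f.writeframes(samples[::-1].tobytes())      # same data, reversed order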
Multiple Tracks:
Being able to edit and combine multiple tracks (for sound effects, voice-overs, music, etc.) and
then merge the tracks and export them in a “final mix” to a single audio file is important.
MIDI Audio
MIDI (Musical Instrument Digital Interface) is a communications standard developed in the
early 1980s for electronic musical instruments and computers.
It allows music and sound synthesizers from different manufacturers to communicate with each
other by sending messages along cables connected to the devices.
MIDI provides a protocol for passing detailed descriptions of a musical score, such as the notes,
the sequences of notes, and the instrument that will play these notes.
But MIDI data is not digitized sound; it is a shorthand representation of music stored in numeric
form.
Digital audio is a recording, MIDI is a score—the first depends on the capabilities of your sound
system, the other on the quality of your musical instruments and the capabilities of your sound
system.
A MIDI file is a list of time-stamped commands that are recordings of musical actions (the
pressing down of a piano key or a sustain pedal, for example, or the movement of a control wheel
or slider).
When sent to a MIDI playback device, this results in sound.
A concise MIDI message can cause a complex sound or sequence of sounds to play on an
instrument or synthesizer; so MIDI files tend to be significantly smaller (per second of sound
delivered to the user) than equivalent digitized waveform files.
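To see how compact such a command list is, consider this sketch using the third-party Python library mido (an assumption; any MIDI toolkit works the same way): three short messages describe a full middle-C note, where a digital recording of the same note might run to tens of kilobytes.

from mido import Message, MidiFile, MidiTrack

mid = MidiFile()
track = MidiTrack()
mid.tracks.append(track)
track.append(Message('program_change', program=0, time=0))         # instrument: piano
track.append(Message('note_on', note=60, velocity=64, time=0))     # press middle C
track.append(Message('note_off', note=60, velocity=64, time=480))  # release one beat later
mid.save('middle_c.mid')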
Composing your own original score can be one of the most creative and rewarding aspects of
building a multimedia project, and MIDI is the quickest, easiest, and most flexible tool for this
task.
To make MIDI scores, however, you will need notation software, sequencer software and a
sound synthesizer (typically built into the software of multimedia players in most computers and
many handheld devices). A MIDI keyboard is also useful for simplifying the creation of musical
scores.
Rather than recording the sound of a note, MIDI software creates data about each note as it is
played on a MIDI keyboard (or another MIDI device)—which note it is, how much pressure was
used on the keyboard to play the note, how long it was sustained, and how long it takes for the
note to decay or fade away, for example.
This information, when played back through a MIDI device, allows the note to be reproduced
exactly.
Because the quality of the playback depends upon the end user’s MIDI device rather than the
recording, MIDI is device dependent.
The sequencer software quantizes your score to adjust for timing inconsistencies (a great feature
for those who can’t keep the beat), and it may also print a neatly penned copy of your score to
paper.
An advantage of structured data such as MIDI is the ease with which you can edit the data.
Let’s say you have a piece of music being played on a piano, but your client decides he wants the
sound of a saxophone instead. If you had the music in digitized audio, you would have to re-record and redigitize the music.
When it is in MIDI data, however, there is a value that designates the instrument to be used for
playing back the music.
To change instruments, you just change that value. Instruments that you can synthesize are identified by a General MIDI numbering system that ranges from 0 to 127.
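Continuing the hypothetical mido sketch from earlier, swapping the piano for a saxophone is a one-value edit; program 66 is a tenor saxophone in the 0-based General MIDI numbering, and the filenames are placeholders.

from mido import MidiFile

mid = MidiFile('score_piano.mid')
for track in mid.tracks:
    for msg in track:
        if msg.type == 'program_change':
            msg.program = 66   # General MIDI 66: Tenor Sax (was 0: Acoustic Grand Piano)
mid.save('score_sax.mid')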
Since MIDI is device dependent and the quality of consumer MIDI playback hardware varies
greatly, MIDI’s true place in multimedia work may be as a production tool rather than a delivery
medium.
MIDI is by far the best way to create original music, so use MIDI to get the flexibility and
creative control you want.
Then, once your music is completed and fits your project, lock it down for delivery by turning it
into digital audio data.
In addition to describing the instrument and the note, MIDI data can also describe the envelope of
the sound: the attack (how quickly a sound’s volume increases), the sustain (how long the sound
continues), and the decay (how quickly the sound fades away).
MIDI has several advantages over digital audio and two huge disadvantages.
Advantages:
MIDI files are much more compact than digital audio files, and the size of a MIDI file
is completely independent of playback quality.
In general, MIDI files will be 200 to 1,000 times smaller than CD-quality digital audio
files. Because MIDI files are small, they don’t take up as much memory, disk space,
or bandwidth.
Because they are small, MIDI files embedded in web pages load and play more
quickly than their digital equivalents.
In some cases, if the MIDI sound source you are using is of high quality, MIDI files
may sound better than digital audio files.
You can change the length of a MIDI file (by varying its tempo) without changing the
pitch of the music or degrading the audio quality.
MIDI data is completely editable—right down to the level of an individual note.
You can manipulate the smallest detail of a MIDI composition (often with sub-millisecond accuracy) in ways that are impossible with digital audio.
Because they represent the pitch and length of notes, MIDI files can generally be
converted to musical notation, and vice versa.
This is useful when you need a printed score; in reverse, you can scan a printed score
and convert it to MIDI for tweaking and editing.
Disadvantages:
Because MIDI data does not represent sound but musical instruments, you can be
certain that playback will be accurate only if the MIDI playback device is identical to
the device used for production.
Also, MIDI cannot easily be used to play back spoken dialog, although expensive and
technically tricky digital samplers are available.
Audio File Formats:
AAC is the default format for the iPod, iPhone, PlayStation, Wii, DSi, and many mobile phones including Motorola, Nokia, Philips, Samsung, Siemens, and Sony Ericsson.
The SWF format is a container for vector-based graphics and animations, text, video, and sound
delivered over the Internet.
Typically created using Adobe’s Flash, SWF files require a plug-in or player to be installed in the user’s browser.
Adobe claims that the Flash Player is installed in more than 98 percent of Web users’ browsers and in more than 800 million handsets and mobile devices.
Flash video files (FLV) contain both a video stream and an audio stream, and the FLV format has been adopted by YouTube, Google, Yahoo, Reuters.com, BBC.com, CNN.com, and other news providers for Internet delivery of content.
A codec (compressor-decompressor) is software that compresses a stream of audio or video data
for storage or transmission, then decompresses it for playback.
There are many codecs that do this with special attention to the quality of music or voice after
decompression.
Some are “lossy” and trade quality for significantly reduced file size and transmission speed;
some are “lossless,” so original data is never altered.
Space Considerations:
The substantial amount of digital information required for high quality sound takes up a lot of
storage space, especially when the quantity is doubled for two-channel stereo.
It takes about 1.94MB to store 11 seconds of uncompressed stereo sound. Many multimedia
developers use 8-bit sample sizes at 22.05 kHz sampling rates because they consider the sound to
be good enough (about the quality of AM radio), and they save immense amounts of digital real
estate.
The following formula will help you estimate your storage needs.
If you are using two channels for stereo, double the result.
(Sampling rate * bits per sample) / 8 = bytes per second
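A quick sanity check of the formula in code, reproducing the 1.94MB figure quoted above for 11 seconds of CD-quality stereo (44.1 kHz, 16 bits, two channels); the function name is ours:

def bytes_per_second(sampling_rate, bits_per_sample, channels=1):
    return sampling_rate * bits_per_sample / 8 * channels

print(bytes_per_second(44100, 16, channels=2) * 11)  # 1940400.0 bytes, about 1.94MB
print(bytes_per_second(22050, 8))                    # 22050.0 bytes/s for 8-bit mono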
Many people feel that MP3 files encoded at 128 Kbps provide decent audio quality for music,
especially when played through small speakers.
For better quality, sample your music at 192 Kbps. Because the human voice does not use a wide
range of frequencies, you can sample speech or voice at 96 Kbps or even 64 Kbps.
Digital audio tape (DAT) systems provide a tape-based 44.1 kHz, 16-bit record and playback
capability.
You may, however, find that DAT is high-fidelity overkill for your needs, because the recordings
are too accurate, precisely recording glitches, background noises, microphone pops, and coughs
from the next room.
A good editor can help reduce the impact of these noises, but at the cost of time and money.
Mobile phones can often record audio (and video), and applications and hardware attachments are
available to manage external microphones and file transfer.
USB and flash memory recorders range in quality, some suitable for voice only, some generating
compressed MP3 files, and some recording in CD-quality stereo.
Recordings can be directly downloaded as digital files using a USB cable or flash memory card
reader.
Audio CDs:
The method for digitally encoding the high-quality stereo of the consumer CD music market is an
international standard, called ISO 10149, also known as the Red Book Audio standard.
Unlike DVDs, audio CDs do not contain information about artists, titles, or track lists of songs.
But player software such as Apple iTunes and AOL Winamp will automatically link to a
database on the Internet when you insert a music CD.
The precise length of your CD’s Table of Contents (TOC) is then matched against the known
TOC length for more than five million CDs containing more than 60 million songs.
When it finds a match, the database service sends back what it knows about the CD you inserted.
Adobe’s Flash allows you to integrate the sound tracks that you have made using a sound editor
into a Web-based multimedia presentation, including both event sounds like button clicks and
streaming sounds like background music.
Because it can read and save MP3 files, Flash offers web designers serious and powerful options
for solving the quality conundrum of high-quality (big) files and slow downloads versus low-
quality (small) files and speedy delivery—with nice results.
Because it must break a sound into “frames” so it plays in sync with the timeline, Flash resamples the audio track if you ask it to “stream” in a movie clip; for the best quality, import an uncompressed audio clip into the Flash library and let Flash do the compression.
Unit –II
2Marks
1. Define image?
2. What is morphing?
3. What is vector drawing?
4. List out basic methods of making color?
5. Define dithering?
6. Expand TIFF.
7. What is trimming?
8. What is Vaughan’s law of multimedia minimums?
5 Marks
1. Write short notes on Bitmaps.
2. Explain the various ways of making still images.
3. Differentiate between MIDI and digital audio.
4. Distinguish bitmap and vector drawing.
5. What is Dithering? Explain.
10 Marks
1. Write short notes on MIDI Audio.
2. Describe on various image file formats used in Multimedia.
3. How do you add sound to your multimedia project? Explain.
4. What is Vaughan’s law of multimedia minimums?
UNIT III
Animation
Animation is a type of optical illusion. It involves the appearance of motion caused by displaying
still images one after another. Often, animation is used for entertainment purposes.
In addition to its use for entertainment, animation is considered a form of art. It is often displayed
and celebrated in film festivals throughout the world. Also used for educational purposes,
animation has a place in learning and instructional applications as well.
Principles of Animation
The 12 basic principles of animation are a set of principles introduced by the Disney animators Ollie Johnston and Frank Thomas. The main purpose of the principles was to produce an illusion of characters adhering to the basic laws of physics, but they also dealt with more abstract issues, such as emotional timing and character appeal. The principles still have great relevance for today’s more prevalent computer animation.
1. Squash and stretch
A classic example is an animated sequence of a racehorse galloping, whose body demonstrates squash and stretch in its natural musculature.
Compare a ball bouncing with a rigid, non-dynamic movement to one that is "squashed" at impact and "stretched" during the fall and rebound.
2. Anticipation
Anticipation is used to prepare the audience for an action, and to make the action appear
more realistic. A dancer jumping off the floor has to bend the knees first; a golfer making
a swing has to swing the club back first.
Anticipation: a baseball player making a pitch prepares for the action by moving his arm back.
3. Staging
This principle is akin to staging as it is known in theatre and film. Its purpose is to direct the audience's attention and make it clear what is of greatest importance in a scene: what is happening and what is about to happen.
4. Straight ahead action and pose to pose
These are two different approaches to the drawing process. "Straight ahead action" scenes are animated frame by frame from beginning to end, while "pose to pose" involves starting with a few key frames and then filling in the intervals later.
5. Follow through and overlapping action
These closely related techniques help render movement more realistic and give the
impression that characters follow the laws of physics. “Follow through” means that
separate parts of a body will continue moving after the character has stopped
6. Slow in and slow out
The movement of the human body, and of most other objects, needs time to accelerate and slow down.
7. Arc
Most human and animal actions occur along an arched trajectory, and animation should
reproduce these movements for greater realism. This can apply to a limb moving by
rotating a joint, or a thrown object moving along a parabolic trajectory.
8. Secondary action
Adding secondary actions to the main action gives a scene more life, and can help to
support the main action.
Secondary action: as the horse runs, its mane and tail follow the movement of the body.
9. Timing
Timing refers to two different concepts: physical timing and theatrical timing. Getting the timing right is essential both to the physical realism and to the storytelling of the animation.
10. Exaggeration
Exaggeration is an effect especially useful for animation, as animated motions that strive for a
perfect imitation of reality can look static and dull in cartoons. The level of exaggeration
depends on whether one seeks realism or a particular style, like a caricature or the style of an
artist.
11. Solid drawing
The principle of solid or good drawing really means that the same principles apply to an animator as to an academic artist.
12. Appeal
Appeal in a cartoon character corresponds to what would be called charisma in an actor. A character who is appealing is not necessarily sympathetic; villains or monsters can also be appealing. The important thing is that the viewer feels the character is real and interesting.
Animation by Computer
Computer animation or CGI animation is the art of creating moving images with the use of computers.
It is a subfield of computer graphics and animation. Increasingly it is created by means of
3D computer graphics, though 2D computer graphics are still widely used for stylistic,
low bandwidth and faster real time rendering needs.
Sometimes the target of the animation is the computer itself, but sometimes the target is
another medium, such as film.
To create the illusion of movement, an image is displayed on the computer screen and repeatedly replaced by a new image that is similar to the previous image, but advanced slightly in the time domain (usually at a rate of 24 or 30 frames per second).
This technique is identical to how the illusion of movement is achieved with television
and motion pictures.
Computer animation is essentially a digital successor to the art of stop motion animation
of 3D models and frame by frame animation of 2D illustrations.
The limbs, eyes, mouth, clothes, etc., of the figure are then moved by the animator on key frames.
The differences in appearance between key frames are automatically calculated by the computer in a process known as tweening or morphing; finally, the animation is rendered.
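In its simplest form, tweening is linear interpolation between the values a property holds at two key frames, as in this illustrative sketch (the function name is ours, not any tool's API):

def tween(start, end, frames):
    # Generate the in-between values for one animated property,
    # e.g. the x position of a limb between two key frames.
    return [start + (end - start) * i / (frames - 1) for i in range(frames)]

print(tween(0.0, 100.0, 5))   # [0.0, 25.0, 50.0, 75.0, 100.0]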
Architectural Animation
Architects use services from animation companies to create 3-dimensional models for
both the customers and builders. It can be more accurate than traditional drawings.
Architectural animation can also be used to see the possible relationship the building will
have in relation to the environment and its surrounding buildings.
Animation File Formats
Some file formats are designed specifically to contain animation, and they can be ported among applications and platforms with the proper translators:
Director *.dir, *.dcr
Animator Pro *.fli, *.flc
3D Studio Max *.max
Compuserve *.gif
Flash *.fla, *.swf
Following is the list of few software used for computerized animation:
o 3D Studio Max
o Flash
o Animator Pro
Animation Techniques
When you create an animation, organize its execution into a series of logical steps.
1. First, gather up in your mind all the activities you wish to provide in the animation. If it is
complicated, you may wish to create a written script with a list of activities and required objects
and then create a storyboard to visualize the animation.
2. Choose the animation tool best suited for the job. Then build and tweak your sequences.
3. Allow plenty of time for this phase when you are experimenting and testing. Finally, post-
process your animation, doing any special renderings and adding sound effects.
1. Cel Animation
The term cel derives from the clear celluloid sheets that were used for drawing each frame, which have been replaced today by acetate or plastic.
Cels of famous animated cartoons have become sought-after collector’s items, suitable for framing.
2. Computer animation
Computer animation programs typically employ the same logic and procedural concepts as cel animation, using layer, key frame, and tweening techniques, and even borrowing from the vocabulary of classic animators.
On the computers, paint is most often filled or drawn with tools using features such as gradients
and antialiasing.
The word inks, in computer animation terminology, usually means special methods for computing RGB pixel values, providing edge detection, and layering so that images can blend or otherwise mix their colors to produce special transparencies, inversions, and effects.
3. Kinematics:
It is the study of the movement and motion of structures that have joints, such as a walking man.
Inverse kinematics, available in high-end 3-D programs such as LightWave and Maya, is the process by which you link objects such as hands to arms and define their relationships and limits (for example, elbows cannot bend backward).
Once those relationships and parameters have been set, you can then drag these parts around and let the computer calculate the result.
Morphing
Morphing is a popular effect in which one image transforms into another.
Morphing applications and other modeling tools that offer this effect can transition not only between still images but between moving images as well.
Today, the most widely used tool for creating multimedia animations for Macintosh and
Windows environments and for the Web is Adobe’s Flash. Flash directly supports several 2½-D features,
including z-axis positioning, automatic sizing and perspective adjustment, and kinematics. External
libraries can extend Flash’s capabilities: open-source Papervision3D (http://blog.papervision3d.org)
provides extensive support for true 3-D modeling and animation
A Rolling Ball
First, create a new, blank image file that is 100 × 100 pixels, and fill it with a sphere.
Create a new layer in Photoshop, and place some white text on this layer at the center of the image.
Make the text spherical using Photoshop’s “Spherize” distortion filter, and save the result.
To animate the sphere by rolling it across the screen, you first need to make a number of rotated images
of the sphere. Rotate the image in 45-degree increments to create a total of eight images, rotating a full
circle of 360 degrees. When each is displayed sequentially at the same location, the sphere spins:
For a realistic rolling effect, the circumference (calculated at pi times 100, or about 314 pixels) is divided
by 8 (yielding about 40 pixels). As each image is successively displayed, the ball is moved 40 pixels
along a line.
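The same eight-frame rotation can be sketched with the Pillow imaging library (an assumption; 'sphere.png' is a placeholder for the 100 × 100-pixel image built above):

import math
from PIL import Image

ball = Image.open('sphere.png')                    # the 100 x 100 spherized image
frames = [ball.rotate(-45 * i) for i in range(8)]  # eight views, 45 degrees apart
step = round(math.pi * 100 / 8)                    # circumference / 8, about 39-40 pixels
for i, frame in enumerate(frames):
    frame.save('ball_%d.png' % i)                  # display each one 'step' pixels further along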
A Bouncing Ball
With the simplest tools, you can make a bouncing ball to animate your web site using GIF89a, an image format that allows multiple images to be put into a single file and then displayed as an animation in a web browser or presentation program that recognizes the format.
The individual frames that make up the animated GIF can be created in any paint or image-processing program, but it takes a specialized program to put the frames together into a GIF89a animation.
Simply figure that your ball will uniformly accelerate and decelerate up and down the pixels of your screen by the squares: 1, 4, 9, 16, 25, 36, 49, 64, 81, and 100 are the squares of 1, 2, 3, 4, 5, 6, 7, 8, 9, and 10. You can use the same images for downward motion as you use for upward—as in frames 11 through 18—by reversing them.
Open a graphics program and paint a ball about 15 pixels in diameter. If you wish to be fancy,
make the ball with a 3-D graphics tool that will shade it as a sphere. Then duplicate the ball, placing each
copy of it in a vertical line at the ten locations 1, 4, 9, 16, 25, 36, 49, 64, 81, and 100.
The goal is to create a separate image file for each location of the ball, like the pages of a
flip-book. With Photoshop, you can create a single file with ten layers to contain each ball at its
proper location, and you can add an eleventh background layer, too. Then save each layer
showing against the background as a separate file.
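A sketch of the whole flip-book using the Pillow library (an assumption, as above): the ball's vertical position steps through the squares so that it accelerates downward and decelerates upward, and the frames are written out as a single GIF89a animation.

from PIL import Image, ImageDraw

squares = [n * n for n in range(1, 11)]        # 1, 4, 9, ... 100
positions = squares + squares[::-1][1:]        # down, then back up, reusing the images
frames = []
for y in positions:
    frame = Image.new('L', (40, 130), color=255)                # white background
    ImageDraw.Draw(frame).ellipse((12, y, 27, y + 15), fill=0)  # 15-pixel ball
    frames.append(frame)
frames[0].save('bounce.gif', save_all=True, append_images=frames[1:],
               duration=50, loop=0)            # 50 ms per frame, loop forever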
Creating an Animated Scene
A creative committee organized a brief storyboard of a gorilla chasing a man.
From a stock library containing many images licensed for unlimited use, a photograph was
chosen of Manhattan’s Central Park where a bridge crossed a small river and high-rise apartments
lined the horizon.
The chase scene would occur across the bridge. To produce frames of the running man, a real
actor was videotaped running in place against an Ultimatte chroma-keyed blue background in a
studio; a few frames of this were grabbed, and the blue background was made transparent in each
image.
The gorilla was difficult to find, so a toy model dinosaur about 25 centimeters tall was used;
again, a few frames were captured and the background made transparent to form a composite.
That was all that was required for image resources.
As illustrated in Figure a, the background was carefully cut in half along the edge of the bridge, so
that the bridge railing could be placed in front of the runners. The running man was organized in a series
of six frames that could be repeated many times across the screen to provide
the pumping motions of running. The same was done for the dinosaur, to give him a lumbering, bulky
look as he chased the little man across the bridge (see Figure b). The result, in Figure c, was simple and
quickly achieved.
Figure a
Figure b
Figure c
The upper portion of the photo was placed behind the runners (b) and the lower portion in front of
them, to make them appear to run behind the bridge railing (c).
Video
Video is the most challenging multimedia content to deliver via the web.
One second of uncompressed NTSC video, the analog television standard used in North America, requires approximately 27 megabytes of disk storage space.
The amount of scaling and compression required to turn that quantity of data into something deliverable over the Web is substantial, so you should tailor your video content for the web.
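One common way to arrive at that figure (our arithmetic, assuming a 640 × 480 frame at 24-bit color and 30 frames per second):

frame_bytes = 640 * 480 * 3       # 24-bit color: 3 bytes per pixel
per_second = frame_bytes * 30     # NTSC plays about 30 frames per second
print(per_second / 1_000_000)     # about 27.6 MB for one second of video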
Using Video
Carefully planned, well-executed video clips can make a dramatic difference in a multimedia
project.
Of all the multimedia elements, video places the highest performance demand on your computer or device—and its memory and storage.
Consider that a high-quality color still image on a computer screen could require as much as a
megabyte or more of storage memory.
Compression (and decompression), using special software called a codec, allows a massive
amount of imagery to be squeezed into a comparatively small data file, which can still deliver a
good viewing experience on the intended viewing platform during playback.
If you control the delivery platform for your multimedia project, you can specify special
hardware and software enhancements that will allow you to work with high-definition, full-
motion video, and sophisticated audio for high-quality surround sound.
You can design a project to meet a specific compression standard, such as MPEG2 for Digital Versatile Disc (DVD) playback or MPEG4 for home video.
You can install a superfast RAID (Redundant Array of Independent Disks) system that will
support high-speed data transfer rates.
When light reflected from an object passes through a video camera lens, that light is converted
into an electronic signal by a special sensor called a charge-coupled device (CCD).
Top-quality broadcast cameras and even camcorders may have as many as three CCDs (one for
each color of red, green, and blue) to enhance the resolution of the camera and the quality of the
image.
It’s important to understand the difference between analog and digital video.
Analog video has a resolution measured in the number of horizontal scan lines (due to the nature
of early cathode-tube cameras), but each of those lines represents continuous measurements of the
color and brightness along the horizontal axis, in a linear signal that is analogous to an audio
signal.
Digital video signals consist of a discrete color and brightness (RGB) value for each pixel.
Digitizing analog video involves reading the analog signal and breaking it into separate data
packets. This process is similar to digitizing audio, except that with video the vertical resolution
is limited to the number of horizontal scan lines.
Analog Video
In an analog system, the output of the CCD is processed by the camera into three channels of
color information and synchronization pulses (sync) and the signals are recorded onto magnetic
tape.
There are several video standards for managing analog CCD output, each dealing with the
amount of separation between the components—the more separation of the color information, the
higher the quality of the image (and the more expensive the equipment).
If each channel of color information is transmitted as a separate signal on its own conductor, the signal output is called component video.
Lower in quality is the signal that makes up Separate Video (S-Video), using two channels that
carry luminance and chrominance information. The least separation (and thus the lowest quality
for a video signal) is composite, when all the signals are mixed together and carried on a single
cable as a composite of the three color channels and the sync signal.
Audio is recorded on a separate straight-line track at the top of the videotape, although with some
recording systems (notably for ¾-inch tape and for ½-inch tape with high fidelity audio), sound is
recorded helically between the video tracks.
At the bottom of the tape is a control track containing the pulses used to regulate speed.
Tracking is the fine adjustment of the tape during playback so that the tracks are properly aligned
as the tape moves across the playback head. These are the signals your grandmother’s VCR reads
when you rent Singing in the Rain (on video cassette) for the weekend.
Diagram of tape path across the video head for analog recording
Many consumer set-top devices like video cassette recorders (VCRs) and satellite receivers add the video and sound signals to a subcarrier and modulate them into a radio frequency (RF) in the FM broadcast band.
Digital Video
In digital systems, the output of the CCD is digitized by the camera into a sequence of single
frames, and the video and audio data are compressed before being written to a tape (see Figure) or
digitally stored to disc or flash memory in one of several proprietary and competing formats.
Digital video data formats, especially the codec used for compressing and decompressing video (and audio) data, are important.
Diagram of tape path across the video head for digital recording
HDTV
High-Definition Television (HDTV) moved broadcast television from an analog to a digital standard, drawing on both the Digital Television Standard and the Digital Audio Compression Standard.
It also provided TV stations with sufficient bandwidth to present four or five standard-definition television signals or one HDTV signal.
HDTV provides high resolution in a 16:9 aspect ratio.
DISPLAYS:
Colored phosphors on a cathode ray tube (CRT) screen glow red, green, or blue when they are
energized by an electron beam.
Because the intensity of the beam varies as it moves across the screen, some colors glow brighter than others.
Finely tuned magnets around the picture tube aim the electrons precisely onto the phosphor
screen, while the intensity of the beam is varied according to the video signal.
This is why you needed to keep speakers (which have strong magnets in them) away from a CRT
screen.
If a computer displays a still image or words onto a CRT for a long time without changing, the
phosphors will permanently change, and the image or words can become visible, even when the
CRT is powered down.
Screen savers were invented to prevent this from happening. Flat screen displays are all-digital,
using either liquid crystal display (LCD) or plasma technologies, and have supplanted CRTs for
computer use.
MPEG-1 delivered about 1.2 Mbps (megabits per second) of video and 250 Kbps (kilobits per second) of two-channel stereo audio using CD-ROM technology.
MPEG-2 required a higher data rate (3 to 15 Mbps) but also delivered higher image resolution, improved picture quality, interlaced video formats, multiresolution scalability, and multichannel audio features. MPEG-2 became the video compression standard required for digital television (DTV) and for making DVDs.
As a container, MPEG-4 provides a content-based method for assimilating multimedia
elements. It offers indexing, hyper linking, querying, browsing, uploading, downloading, and
deleting functions, as well as “hybrid natural and synthetic data coding,” which will enable
harmonious integration of natural and synthetic audiovisual objects.
Chroma Keys
Chroma keys allow you to choose a color or range of colors that become transparent, allowing
the video image to be seen “through” the computer image. This is the technology used by a newscast’s
weather person, who is shot against a blue background that is made invisible when merged with the
electronically generated image of the weather map. The weatherman controls the computer part of the
display with a small handheld controller. A useful tool easily implemented in most digital video editing
applications is blue screen, green screen, Ultimatte, or chroma key editing.
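A toy version of chroma keying, assuming the Pillow and NumPy libraries and a placeholder filename: pixels that are strongly blue get a fully transparent alpha value, letting whatever sits behind the frame show through.

import numpy as np
from PIL import Image

frame = np.array(Image.open('weather_take.png').convert('RGB'), dtype=int)
r, g, b = frame[..., 0], frame[..., 1], frame[..., 2]
blue = (b > 120) & (b > r + 40) & (b > g + 40)   # crude "is this pixel the screen?" test
alpha = np.where(blue, 0, 255).astype('uint8')   # transparent where the screen was
rgba = np.dstack([frame.astype('uint8'), alpha])
Image.fromarray(rgba, 'RGBA').save('keyed.png')

Production keyers are far more sophisticated, which is why the text below warns about even lighting, color spill, and fine detail such as hair or smoke.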
Blue Screen:
Blue screen is a popular technique for making multimedia titles because expensive sets are not
required. Incredible backgrounds can be generated using 3-D modeling and graphic software, and one or
more actors, vehicles, or other objects can be neatly layered onto that background. Video editing
applications provide the tools for this.
When you are shooting blue screen, be sure that the lighting of the screen is absolutely even;
fluctuations in intensity will make this “key” appear choppy or broken. Shooting in daylight, and letting
the sun illuminate the screen, will mitigate this problem. Also be careful about “color spill.” If your
actors stand too close to the screen, the colored light reflecting off the screen will spill onto them, and
parts of their body will key out. While adjustments in most applications can compensate for this, the
adjustments are limited. Beware of fine detail, such as hair or smoke, that wisps over the screen; this
does not key well. Figure shows frames taken from a video of an actor shot against blue screen on a
commercial stage. The blue background was removed from each frame, and the actor himself was turned
into a photo-realistic animation that walked, jumped, pointed, and ran from a dinosaur.
This walking, jumping, and pointing actor was videotaped against a blue screen.
Composition
The general rules for shooting quality video for broadcast use also apply to multimedia. When
shooting video for playback in a small window, it is best to avoid wide panoramic shots, as the sweeping majesty will be lost. Use close-ups and medium shots, head-and-shoulders or even tighter.
Depending upon the compression algorithm used, consider also the amount of motion in the shot: the more a scene changes from frame to frame, the more “delta” information needs to be transferred from
the computer’s memory to the screen. Keep the camera still instead of panning and zooming; let the
subject add the motion to your shot, walking, turning, talking.
Beware of excessive backlighting; shooting with a window or a bright sky in the background is a common error in amateur video production. Many cameras can be set to automatically compensate for backlighting.
Titles and Text
Titles and text are often used to introduce a video and its content. They may also finish off a
project and provide credits accompanied by a sound track. Titles can be plain and simple, or they can be
storyboarded and highly designed.
For plain and simple, you can use templates in an image editor and then sequence those images
into your video using your video editing software. Or you can create your own imagery or animations
and sequence them. More elaborate titles, typical for feature films and commercial videos, can become
multimedia projects in themselves.
2 marks:
1. Define animation.
2. What is called Kinematics?
3. Expand the term HDTV.
4. Give the meaning of the term codec.
5. What are chroma keys?
5 Marks:
1. Differentiate Analog and Digital Video.
2. Write Short Notes On Morphing.
3. What is the use of cel animation?
4. How do you shoot and edit a video?
5. Brief out the term Analog Video.
10 Marks:
1. Elaborate on the various animations used by the computer.
2. How do you create a working animation? Provide examples.
3. Describe the Digital Video Containers.
4. Give a brief note on animation displays.
UNIT IV
Making Multimedia
3. Testing:
Test your programs to make sure that they meet the objectives of your project, work properly on
the intended delivery platforms, and meet the needs of your client or end user.
4. Delivering:
Package and deliver the project to the end user. Be prepared to follow up over time with tweaks,
repairs, and upgrades.
Creativity:
Before beginning a multimedia project, you must first develop a sense of its scope and content.
The most precious asset you can bring to the multimedia workshop is your creativity.
The evolution of multimedia is evident when you look at some of the first multimedia projects
done on computers and compare them to today’s titles.
Taking inspiration from earlier experiments, developers modify and add their own creative
touches for designing their own unique multimedia projects.
Organization:
It’s essential that you develop an organized outline and a plan that rationally details the skills,
time, budget, tools, and resources you will need for a project.
These should be in place before you start to render graphics, sounds, and other components, and a
protocol should be established for naming the files so you can organize them for quick retrieval
when you need them.
These files—called assets—should continue to be monitored throughout the project’s execution.
Communication:
Many multimedia applications are developed in workgroups comprising instructional designers,
writers, graphic artists, programmers, and musicians located in the same office space or building.
The workgroup members’ computers are typically connected on a local area network (LAN).
The client’s computers, however, may be thousands of miles distant, requiring other methods for
good communication.
Communication among workgroup members and with the client is essential to the efficient and
accurate completion of your project.
If your client and you are both connected to the Internet, a combination of Skype video and voice
telephone, e-mail, and the File Transfer Protocol (FTP) may be the most cost-effective and
efficient solution for both creative development and project management.
Hardware Needs
The two most significant platforms are the Apple Macintosh operating system (OS) and the Microsoft Windows OS, found running on most Intel-based PCs (including Intel-based Macintoshes).
These computers, with their graphical user interfaces and huge installed base of many millions of users throughout the world, are the most commonly used platforms for the development and delivery of today’s multimedia.
Certainly, detailed and animated multimedia is also created on specialized workstations from Silicon Graphics, Sun Microsystems, and even on mainframes, but the Macintosh and the Windows PC offer a compelling combination of affordability, software availability, and worldwide obtainability.
Regardless of the delivery vehicle for your multimedia—whether it’s destined to play on a computer, on a Wii, Xbox, or PlayStation game box, or as bits moving down the data highway—most multimedia will probably be made on a Macintosh or on a PC.
The basic principles for creating and editing multimedia elements are the same for all platforms.
A graphic image is still a graphic image, and a digitized sound is still a digitized sound, regardless of the methods or tools used to make and display it or to play it back.
Indeed, many software tools readily convert picture, sound, and other multimedia files (and even whole functioning projects) from Macintosh to Windows format, and vice versa, using known file formats or even binary-compatible files that require no conversion at all.
While there is a lot of talk about platform-independent delivery of multimedia on the Internet, with every new version of a browser there are still annoying failures on both platforms.
These failures in cross-platform compatibility can consume great amounts of time as you prepare for delivery by testing and developing workarounds and tweaks so your project performs properly in various target environments.
Selection of the proper platform for developing your multimedia project may be based on your personal preference of computer, your budget constraints, project delivery requirements, and the type of material and content in the project.
Many developers believe that multimedia project development is smoother and easier on the Macintosh than in Windows, even though projects destined to run in Windows must then be ported and tested across platforms.
A Windows computer is not a computer per se, but rather a collection of parts that are tied
together by the requirements of the Windows operating system.
Power supplies, processors, hard disks, CD-ROM and DVD players and burners, video and audio
components, monitors, keyboards, mice, WiFi, and Bluetooth transceivers—it doesn’t matter
where they come from or who makes them.
These components are assembled and branded by Dell, HP, Sony, and others into computers that
run Windows.
In the early days, Microsoft organized the major PC hardware manufacturers into the Multimedia
PC Marketing Council, in order to develop a set of specifications that would allow Windows to
deliver a dependable multimedia experience.
Unlike Microsoft, primarily a software company, Apple is a hardware manufacturing company
that developed its own proprietary software to run the hardware.
In 2006, Apple adopted Intel’s processor architecture, an engineering decision that allows Macintoshes to run any x86 operating system natively, including Windows.
All recent models of Macintosh come with the latest Mac operating system, and using Boot Camp
or Parallels software, Macs can also run the Windows operating system.
It is also frustrating to wait the extra seconds required of each editing step when working with
multimedia material on a slow processor.
In spite of all the marketing hype about processor speed, this speed is ineffective if not
accompanied by sufficient RAM.
A fast processor without enough RAM may waste processor cycles while it swaps needed
portions of program code into and out of memory.
In some cases, increasing available RAM may show more performance improvement on your
system than upgrading the processor chip.
Read-Only Memory (ROM)
Unlike RAM, read-only memory (ROM) is not volatile.
When you turn off the power to a ROM chip, it will not forget, or lose its memory.
ROM is typically used in computers to hold the small BIOS program that initially boots up the
computer, and it is used in printers to hold built-in fonts. Programmable ROMs (called
EPROMs) allow changes to be made that are not forgotten when power is turned off.
Hard Disks
Adequate storage space for your production environment can be provided by large-capacity hard
disks, server-mounted on a network.
As multimedia has reached consumer desktops, makers of hard disks have built smaller-profile,
larger-capacity, faster, and less-expensive hard disks.
As network and Internet servers drive the demand for centralized data storage requiring terabytes (one trillion bytes), hard disk capacities continue to grow.
A CD-RW (read and write) recorder can rewrite 700MB of data to a CD-RW disc about 1,000
times.
Digital Versatile Discs (DVD)
In December 1995, nine major electronics companies (Toshiba, Matsushita, Sony, Philips, Time
Warner, Pioneer, JVC, Hitachi, and Mitsubishi Electric) agreed to promote a new optical disc
technology for distribution of multimedia and feature-length movies called Digital Versatile
Disc(DVD).
With a DVD capable not only of gigabyte storage capacity but also full-motion video (MPEG2)
and high-quality audio in surround sound, this is an excellent medium for delivery of multimedia
projects.
Commercial multimedia projects will become more expensive to produce, however, as
consumers’ performance expectations rise.
There are three types of DVD, including DVD-Read Write, DVD-Video, and DVD-ROM. These
types reflect marketing channels, not the technology.
Input Devices
A great variety of input devices—from the familiar keyboard and handy mouse to touchscreens
and voice recognition setups—can be used for the development and delivery of a multimedia
project.
If you are designing your project for a public kiosk, use a touchscreen. If your project is for a
lecturing professor who likes to wander about the classroom, use a remote handheld mouse.
If you create a great deal of original computer-rendered art, consider a pressure-sensitive stylus
and a drawing tablet.
Scanners enable you to use optical character recognition (OCR) software, such as OmniPage
from ScanSoft.
With OCR software and a scanner, you can convert paper documents into a word processing
document on your computer without retyping or rekeying.
Barcode readers are probably the most familiar optical character recognition devices in use today—mostly at markets, shops, and other point-of-purchase locations.
Using photo cells and laser beams, barcode readers recognize the numeric characters of the
Universal Product Code (UPC) that are printed in a pattern of parallel black bars on
merchandise labels.
With OCR, or barcoding, retailers can efficiently process goods in and out of their stores and
maintain better inventory control.
An OCR terminal can be of use to a multimedia developer because it recognizes not only printed characters but also handwriting.
Output Devices
Presentation of the audio and visual components of your multimedia project requires hardware
that may or may not be included with the computer itself, such as speakers, amplifiers, projectors,
and motion video devices.
The better the equipment, of course, the better the presentation.
There is no greater test of the benefits of good output hardware than to feed the audio output of
your computer into an external amplifier system: suddenly the bass sounds become deeper and
richer, and even music sampled at low quality may sound acceptable.
Speakers with built-in amplifiers or attached to an external amplifier are important when your
project will be presented to a large audience or in a noisy setting.
The monitor you need for development of multimedia projects depends on the type of multimedia
application you are creating, as well as what computer you’re using.
A wide variety of monitors is available for both Macintoshes and PCs. High-end, large-screen
graphics monitors and LCD panels are available for both, and they are expensive. Serious
multimedia developers will often attach more than one monitor to their computers because they
can work with several open windows at a time.
For example, you can dedicate one monitor to viewing the work you are creating or designing,
and you can perform various editing tasks in windows on other monitors that do not block the
view of your work.
Cathode-ray tube (CRT) projectors, liquid crystal display (LCD) panels, Digital Light
Processing (DLP) projectors, and liquid crystal on silicon (LCOS) projectors, as well as (for
larger projects) Grating-Light-Valve (GLV) technologies, are available.
Hard-copy printed output has also entered the multimedia scene. From storyboards to
presentations to production of collateral marketing material, printouts are an important part of the
multimedia development environment.
Software Needs
The basic tool set for building multimedia projects contains one or more authoring systems and
various editing applications for text, images, sounds, and motion video.
A few additional applications are also useful for capturing images from the screen, translating file
formats, and moving files among computers.
The tools used for creating and editing multimedia elements on both Windows and Macintosh
platforms do image processing and editing, drawing and illustration, 3-D and CAD, OCR and text
editing, sound recording and editing, video and moviemaking, and various utilitarian
housekeeping tasks.
OCR Software
With OCR software, a flatbed scanner, and your computer, you can save many hours of rekeying
printed words, and get the job done faster and more accurately than a roomful of typists.
OCR software turns bitmapped characters into electronically recognizable ASCII text. A scanner
is typically used to create the bitmap.
Then the software breaks the bitmap into chunks according to whether it contains text or graphics,
by examining the texture and density of areas of the bitmap and by detecting edges.
The text areas of the image are then converted to ASCII characters using probability and expert system algorithms. Most OCR applications claim about 99 percent accuracy.
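With a scanned image in hand, the whole pipeline can be as short as this sketch, which assumes the Tesseract OCR engine and its pytesseract Python wrapper are installed (the filename is a placeholder):

from PIL import Image
import pytesseract

# Convert the scanned bitmap into editable text
text = pytesseract.image_to_string(Image.open('scanned_page.png'))
print(text)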
Painting and drawing tools, as well as 3-D modelers, are perhaps the most important items in your
toolkit because, of all the multimedia elements, the graphical impact of your project will likely
have the greatest influence on the end user.
Painting software, such as Photoshop, Fireworks, and Painter, is dedicated to producing crafted
bitmap images. Drawing software, such as CorelDraw, FreeHand, Illustrator, Designer, and
Canvas, is dedicated to producing vector-based line art easily printed to paper at high resolution.
Some software applications combine drawing and painting capabilities, but many authoring
systems can import only bitmapped images.
Look for these features in a drawing or painting package:
An intuitive graphical user interface with pull-down menus, status bars, palette
control, and dialog boxes for quick, logical selection
Scalable dimensions, so that you can resize, stretch, and distort both large and small
bitmaps
Paint tools to create geometric shapes, from squares to circles and from curves to
complex polygons
The ability to pour a color, pattern, or gradient into any area
The ability to paint with patterns and clip art
Customizable pen and brush shapes and sizes
An eyedropper tool that samples colors
An auto trace tool that turns bitmap shapes into vector-based outlines
Support for scalable text fonts and drop shadows
Multiple undo capabilities, to let you try again
A history function for redoing effects, drawings, and text
A property inspector
A screen capture facility
Painting features such as smoothing coarse-edged objects into the background with anti-aliasing; airbrushing in variable sizes, shapes, densities, and patterns; washing colors in gradients; blending; and masking.
Every program that layers objects on the screen must know each object’s “z” axis. Web browsers,
for example, place objects on the screen using the CSS “z-index” attribute.
Some software programs (such as Flash CS4 and ToonBoom Studio) can simulate depth by
automatically scaling images based on a z-axis value to create a cartoonish or simulated 3-D
effect.
Powerful modeling packages such as VectorWorks, AutoDesk’s Maya, Strata 3D, and Avid’s
SoftImage are also bundled with assortments of prerendered 3-D clip art objects such as people,
furniture, buildings, cars, airplanes, trees, and plants
A good 3-D modeling tool should include the following features:
Multiple windows that allow you to view your model in each dimension, from the
camera’s perspective, and in a rendered preview
The ability to drag and drop primitive shapes into a scene
The ability to create and sculpt organic objects from scratch
Lathe and extrude features
Color and texture mapping
The ability to add realistic effects such as transparency, shadowing, and fog
The ability to add spot, local, and global lights, to place them anywhere, and
manipulate them for special lighting effects
Unlimited cameras with focal length control
The ability to draw spline-based paths for animation
Image-Editing Tools
Image-editing applications are specialized and powerful tools for creating, enhancing, and
retouching existing bitmapped images.
These applications also provide many of the features and tools of painting and drawing programs
and can be used to create images from scratch as well as images digitized from scanners, video
frame-grabbers, digital cameras, clip art files, or original artwork files created with a painting or
drawing package.
Here are some features typical of image-editing applications and of interest to multimedia developers:
Multiple windows that provide views of more than one image at a time
Conversion of major image-data types and industry-standard file formats
Direct inputs of images from scanner and video sources
Employment of a virtual memory scheme that uses hard disk space as
RAM for images that require large amounts of memory
Capable selection tools, such as rectangles, lassos, and magic wands, for selecting
portions of a bitmap
Image and balance controls for brightness, contrast, and color balance
Good masking features
Multiple undo and restore features
Anti-aliasing capability, and sharpening and smoothing controls
Color-mapping controls for precise adjustment of color balance
Tools for retouching, blurring, sharpening, lightening, darkening, smudging, and
tinting
Geometric transformations such as flip, skew, rotate, and distort, and perspective
changes
Sound-Editing Tools
Sound-editing tools for both digitized and MIDI sound let you see music as well as hear it.
By drawing a representation of a sound in fine increments, whether a score or a waveform, you can cut, copy, paste, and otherwise edit segments of it with great precision.
Consider the following tips for making your production work go smoothly:
Use templates that people have already created to set up your production.
These can include appropriate styles for all sorts of data, font sets, color arrangements,
and particular page setups that will save you time.
Use wizards when they are available—they may save you much time and pre-setup
work.
Use named styles, because if you take the time to create your own, it will really slow
you down. Unless your client specifically requests a particular style, you will save a
great deal of time using something already created, usable, and legal.
Create tables, which you can build with a few keystrokes in many programs; they make the production look credible.
Help readers find information with tables of contents, running headers and footers, and
indexes.
Improve document appearance with bulleted and numbered lists and symbols.
Allow for a quick-change replacement using the global change feature.
Reduce grammatical errors by using the grammar and spell checker provided with the
software. Do not rely on that feature, though, to set all things right—you still need to
proofread everything.
Include identifying information in the filename so you can find the file later.
As the message travels, the authoring system looks for handlers in the script of each object; if it finds a matching handler, it executes the task specified by that handler.
Typical messages that might pass along the object hierarchy of the LiveCode and ToolBook authoring systems include mouse and keyboard events such as mouseDown, mouseUp, and keyDown.
For example, to go to the next card or page when a button is clicked, place a message handler into
the script of that button. An example in RunRev’s LiveCode language would be:
on mouseUp -- runs when the user clicks and releases the mouse over the button
  go next card -- navigate to the next card in the stack
end mouseUp
You can print out your navigation map or flowchart, an annotated project index with or without
associated icons, design and presentation windows, and a cross-reference table of variables.
Flash :
Flash is a time-based development environment.
Flash, however, is also particularly focused on delivery of rich multimedia content to the Web.
With the Flash Player plug-in installed in more than 95 percent of the world’s browsers, Flash
delivers far more than simple static HTML pages.
Director:
Adobe’s Director is a powerful and complex multimedia authoring tool with a broad set of
features to create multimedia presentations, animations, and interactive multimedia applications.
It requires a significant learning curve, but once mastered, it is among the most powerful of
multimedia development tools.
In Director, you assemble and sequence the elements of your project, called a “movie,” using a
Cast and a Score.
The Cast is a multimedia database containing still images, sound files, text, palettes, QuickDraw
shapes, programming scripts, QuickTime movies, Flash movies, and even other Director files.
You tie these Cast members together using the Score facility, which is a sequencer for
displaying, animating, and playing Cast members, and it is made up of frames that contain Cast
members, tempo, a palette, timing, and sound information.
Each frame is played back on a stage at a rate specified in the tempo channel. Director utilizes
Lingo, a full-featured object-oriented scripting language, to enable interactivity and programmed
control.
A typical team for developing multimedia for DVD or the Web consists of people who bring
various abilities to the table.
Often, individual members of multimedia production teams wear several hats: graphic designers
may also do interface design, scanning, and image processing.
A project manager or producer may also be the video producer or scriptwriter
A multimedia production team may require as many as 18 discrete roles, including:
Executive Producer
Producer/Project Manager
Project Manager
A project manager’s role is at the center of the action. He or she is responsible for the overall
development and implementation of a project as well as for day-to-day operations.
Budgets, schedules, creative sessions, time sheets, illness, invoices, and team dynamics—the project manager is the glue that holds it all together.
A good project manager must completely understand the strengths and limitations of hardware
and software so that he or she can make good decisions about what to do and what not to do.
Multimedia Designer
The look and feel of a multimedia project should be pleasing and aesthetic, as well as inviting and
engaging.
Screens should present an appealing mix of color, shape, and type. The project should maintain
visual consistency, using only those elements that support the overall message of the program.
Navigation clues should be clear and consistent, icons should be meaningful, and screen elements
should be simple and straightforward.
Graphic designers, illustrators, animators, and image processing specialists deal with the visuals.
Instructional designers are specialists in education or training and make sure that the subject
matter is clear and properly presented for the intended audience.
Interface designers devise the navigation pathways and content maps.
Information designers structure content, determine user pathways and feedback, and select
presentation media based on an awareness of the strengths of the many separate media that make
up multimedia.
Interface Designer
Like a good film editor, an interface designer’s best work is never seen by the viewer—it’s
“transparent.”
In its simplest form, an interface provides control to the people who use it.
It also provides access to the “media” part of multimedia, meaning the text, graphics, animation,
audio, and video— without calling attention to itself.
The elegant simplicity of a multimedia title screen, the ease with which a user can move about
within a project, effective use of windows, backgrounds, icons, and control panels—these are the
result of an interface designer’s work.
Writer
Multimedia writers do everything writers of linear media do, and more. They create character,
action, and point of view—a traditional scriptwriter’s tools of the trade—and they also create
interactivity.
They write proposals, they script voice-overs and actors’ narrations, they write text screens to
deliver messages, and they develop characters designed for an interactive environment.
Writers of text screens are sometimes referred to as content writers.
They glean information from content experts, synthesize it, and then communicate it in a clear
and concise manner.
Scriptwriters write dialog, narration, and voice-overs. Both content writers and scriptwriters often get involved in overall design.
Video Specialist
Video images delivered in a multimedia production have improved from postage-stamp-sized
windows playing at low frame rates to full-screen (or nearly full-screen) windows playing at 30
frames per second.
As shooting, editing, and preparing video has migrated to an all-digital format and become
increasingly affordable to multimedia developers, video elements have become more and more
part of the multimedia mix.
For high-quality productions, it may still be necessary for a video specialist to be responsible for
an entire team of videographers, sound technicians, lighting designers, set designers, script
supervisors, gaffers, grips, production assistants, and actors.
Audio Specialist
The quality of audio elements can make or break a multimedia project.
Audio specialists are the wizards who make a multimedia program come alive, by designing and
producing music, voice-over narrations, and sound effects.
They perform a variety of functions on the multimedia team and may enlist help from one or
many others, including composers, audio engineers, or recording technicians.
Audio specialists may be responsible for locating and selecting suitable music and talent,
scheduling recording sessions, and digitizing and editing recorded material into computer files.
Multimedia Programmer
A multimedia programmer or software engineer integrates all the multimedia elements of a
project into a seamless whole using an authoring system or programming language.
Multimedia programming functions range from coding simple displays of multimedia elements to
controlling peripheral devices and managing complex timing, transitions, and record keeping.
Creative multimedia programmers can coax extra (and sometimes unexpected) performance from
multimedia-authoring and programming systems.
Without programming talent, there can be no multimedia. Code, whether written in JavaScript,
OpenScript, Lingo, RevTalk, PHP, Java, or C++, is the sheet music played by a well-orchestrated
multimedia project.
Producer of Multimedia for the Web
Web site producer is a new occupation, but putting together a coordinated set of pages for the
World Wide Web requires the same creative process, skill sets, and (often) teamwork as any kind
of multimedia does.
With a little effort, many of us could put up a simple web page with a few links, but this differs
greatly from designing, implementing, and maintaining a complex site with many areas of content
and many distinct messages.
A web site should never be finished, but should remain dynamic, fluid, and alive. Unlike a DVD
multimedia product replicated many times in permanent plastic, the work product at a web site is
available for tweaking at any time.
10 Marks:
1. Describe the stages of a multimedia project.
UNIT 5
PLANNING AND COSTING
The process of making multimedia:
Plan for the entire process: beginning with your first ideas and ending with completion
and delivery of a finished product. Think in the overview.
The stepwise process of making multimedia is illustrated in the figure; use this chart to help you get your arms around a new web site or DVD production.
Idea Analysis
Ultimately, you will generate a plan of action that will become your road map for production.
Who needs this project? Is it worthwhile? Do you have the materials at hand to build it? Do you
have the skills to build it?
Your idea will be in balance if you have considered and weighed the proper elements:
What is the essence of what you want to do? What is your purpose and message?
Is there a client, and what does the client want?
How can you organize your project?
What multimedia elements (text, sounds, and visuals) will best deliver your message?
Will interactivity be required?
Is your idea derived from an existing theme that can be enhanced with multimedia, or will you
create something totally new?
What hardware is available for development of your project? Is it enough?
How much storage space do you have? How much do you need?
What multimedia software is available to you?
What are your capabilities and skills with both the software and the hardware?
Can you do it alone? Who can help you?
How much time do you have?
How much money do you have?
How will you distribute the final project?
Will you need to update and/or support the final product?
Pretesting
If you decide that your idea has merit, take it to the next step. Define your project goals in
greater detail and spell out what it will take in terms of skills, content, and money to meet these
goals.
If you envision a commercial product, sketch out how you will sell it. Work up a prototype of
the project on paper, with an explanation of how it will work. All of these steps help you
organize your idea and test it against the real world.
Task Planning
There may be many tasks in your multimedia project. Plan ahead by drawing up a checklist of the action items you will need to complete as you think through your project.
Prototype Development
Once you have decided that a project is worth doing, you should develop a working
prototype. This is the point at which you begin serious work at the computer, building screen
mock-ups and a human interface of menus and button clicks.
Your messages and story lines will take shape as you explore ways of presenting them.
For the prototype, sometimes called a proof-of-concept or feasibility study, you might select
only a small portion of a large project and get that part working as it would in the final product.
Indeed, after trying many different approaches in the course of prototyping, you may end
up with more than one viable candidate for the final product.
Alpha Development
As you go forward, you should continually define the tasks ahead, because just as if you were
navigating a supertanker, you should be aware of the reefs and passages that will appear along your
course and prepare for them. With an alpha stage prototype in hand and a commitment to proceed,
the investment of effort will increase and, at the same time, become more focused. More people may
become involved as you begin to flesh out the project as a whole.
Beta Development
By the time your idea reaches the beta stage of development, you will have committed serious
time, energy, and money, and it is likely too late to bail out. You have gone past the point of no return
and should see it through.
Delivery
By the time you reach the delivery stage, you are going gold—producing the final product.
Your worries slide toward the marketplace.
Scheduling
To create a production schedule, you must estimate the total time required for each task and then allocate this time among the number of persons who will be working asynchronously on the project (see, for example, the figure in the text).
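As a simple illustration with made-up numbers: if a task is estimated at 120 work hours and three people can work on it in parallel, it occupies about 40 hours of each person's calendar, or roughly one working week of elapsed time; a task that cannot be divided must be scheduled end to end instead.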
Scheduling can be difficult for multimedia projects because so much of the making of
multimedia is artistic trial and error.
A recorded sound will need to be edited and perhaps altered many times. Animations need to be
run again and again and adjusted so that they are smooth and properly placed.
A QuickTime or MPEG movie may require many hours of editing and tweaking before it
works in sync with other screen activities.
Scheduling multimedia projects is also difficult because the technology of computer hardware
and software is in constant flux, and upgrades while your project is under way may drive you to
new installations and concomitant learning curves.
The general rule of thumb when working with computers and new technology under a deadline
is that everything will take longer to do than you think it will.
Estimating
In production and manufacturing industries, it is a relatively simple matter to estimate costs and effort. To make chocolate chip cookies, for example, you need ingredients, such as flour and sugar, and equipment, such as mixers, ovens, and packaging machines.
Once the process is running smoothly, you can turn out hundreds of cookies, each tasting the
same and each made of the same stuff. You then control your costs by fine-tuning known
expenses, like negotiating deals on flour and sugar in quantity, installing more efficient ovens,
and hiring personnel at a more competitive wage. In contrast, making multimedia is not a repetitive manufacturing process.
Billing Rates
Your billing rate should be set according to your cost of doing business plus a reasonable profit
margin. Typical billing rates for multimedia production companies and web designers range from
$60 to $150 an hour, depending upon the work being done and the person doing it.
If consultants or specialists are employed on a project, the billing rate can go much higher.
Everyone who contributes to a project should have two rates associated with their work: the
employee’s cost to the employer (including salary and benefits), and the employee’s rate billed to the
customer.
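For example (the figures here are purely illustrative): a graphic artist whose salary and benefits cost the employer $35 an hour might be billed to the customer at $90 an hour; the difference must cover overhead such as equipment, rent, insurance, and administration, as well as the profit margin.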
Multimedia production companies and web site builders with high billing rates claim that their skill sets and experience allow them to accomplish more work in a given amount of time, and to do it expertly, thus saving money and time while enhancing the finished quality and reliability of a project. This is particularly the case with larger-scale, complex projects.
Smaller and leaner companies that offer lower billing rates may claim to be more streamlined, hungry, and willing to perform extra services. Lower rates do not necessarily mean lower-quality work; they may simply mean that the company carries less overhead or is satisfied with a reduced profit margin.
RFPs and Bid Proposals
Table of Contents
Busy executives want to anticipate a document and grasp its content in short order. A
table of contents or index is a straightforward way to present the elements of your proposal in
condensed overview.
In some situations, you may also wish to include an executive summary—a prelude
containing no more than a few paragraphs of pithy description and budget totals.
The summary should be on the cover page or immediately following. In an electronic
submission, you can hotlink to the Table of Contents and to important sections.
Creative Strategy
A creative strategy section—a description of the look and feel of the project itself—can be important to your proposal, especially if the executives reviewing it were not present for creative sessions or did not participate in preliminary discussions.
If you have designed a prototype, describe it here, or create a separate heading and include graphics
and diagrams.
Project Implementation
A proposal must describe the way a project will be organized and scheduled. Your estimate of
costs and expenses will be based upon this description.
The Project Implementation section of your proposal may contain a detailed calendar, PERT
and Gantt project planning charts, and lists of specific tasks with associated completion dates,
deliverables, and work hours. This information may be general or detailed, depending upon the
demands of the client.
The project implementation section is not just about how much work there is, but how the work
will be managed and performed. You may not need to specify time estimates in work hours, but
rather in the amount of calendar time required to complete each phase.
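A minimal, purely hypothetical excerpt of such a task list might read: interface prototype, due June 15, 80 work hours; graphics production, due July 30, 200 work hours; beta testing, due September 1, 60 work hours, with a milestone deliverable attached to each completion date.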
Budget
The budget relates directly to the scope of work you have laid out in the project implementation
section. Distill your itemized costs from the project implementation description and consolidate the
minute tasks of each project phase into categories of activity meaningful to the client.
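For example (a hypothetical consolidation): dozens of line items for scanning, retouching, and icon design might be rolled up into a single "graphics production" category, presented alongside categories such as project management, content acquisition, programming, and testing, so the client sees a handful of meaningful totals rather than hundreds of minute tasks.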
Designing
The design part of your project is where your knowledge and skill with computers; your talent in
graphic arts, video, and music; and your ability to conceptualize logical pathways through
information are all focused to create the real thing. Design is thinking, choosing, making, and doing.
It is shaping, smoothing, reworking, polishing, testing, and editing. When you design your project,
your ideas and concepts are moved one step closer to reality.
Competence in the design phase is what separates amateurs from professionals in the making of
multimedia.
Depending on the scope of your project and the size and style of your team, you can take two approaches to creating an original interactive multimedia design. You can spend great effort on the storyboards, or graphic outlines, describing the project in exact detail—using words and sketches for each and every screen image, sound, and navigational choice, right down to specific colors and shades, text content, attributes and fonts, button shapes, styles, responses, and voice inflections. Or you can begin with rough storyboards and let the design evolve and be refined as the project is built.
GUIs
The Macintosh and Windows graphical user interfaces (GUI, pronounced
“gooey”) are successful partly because their basic point-and-click style is simple,
consistent, and quickly mastered. Both of these GUIs offer built-in help systems.
Producing
Production is the phase when your multimedia project is actually rendered. During
this phase you will contend with important and continuous organizing tasks.
There will be times in a complex project when graphics files seem to disappear from
the server, when you forget to send or cannot produce milestone progress reports, when
your voice talent gets lost on the way to the recording studio, or when your hard disk
crashes.
Starting Up
As you start up the production phase, there are a number of organizing tasks to think about; some examples follow.
Tracking
Organize a method for tracking the receipt of material that you will incorporate
into your multimedia project. Even in small projects, you will be dealing with many
digital bits and pieces.
Develop a file-naming convention specific to your project’s structure. Store the files
in directories or folders with logical names. Version control of your files (tracking
editing changes) is critically important, too, especially in large projects.
If more than one person is working on a group of files, be sure that you always know
what version is the latest and who has the current version. If storage space allows,
archive all file iterations, in case you change your mind about something and need to
go back to a prior rendering.
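A minimal sketch of such a naming convention (the names here are invented): a file called parkguide_trail03_deer_v02.mov identifies the project (parkguide), the section (trail03), the content (deer), and the version (v02) at a glance, and versions of the same asset sort together in a folder listing.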
Copyrights
Commonly used authoring platforms may allow access to the software programming
code or script that drives a particular project. The source code of HTML pages on the
Web may also be easily viewed.
In such an open-code environment, are you prepared to let others see your
programming work? Is your code neat and commented? Perhaps your mother
cautioned you to wear clean underclothing in case you were suddenly on a table
among strangers in a hospital emergency room—well, apply this rule to your code.
Hazards and Annoyances
Small annoyances, too, can become serious distractions that are counterproductive.
The production stage is a time of great creativity, dynamic intercourse among all
contributors, and, above all, hard work. Be prepared to deal with some common
irritants, for example:
Creative coworkers who don’t take (or give) criticism well
Clients who cannot or are not authorized to make decisions
More than two all-nighters in a row
Too many custom-coded routines
Instant coffee and microwaved corn dogs
Too many meetings; off-site meetings
Missed deadlines
Software and hardware upgrades that interrupt your normal operations
Acquiring Content
Content acquisition can be one of the most expensive and time-consuming tasks in
organizing a multimedia project. You must plan ahead, allocating sufficient time (and
money) for this task.
■ If your project describes the use of a new piece of robotics machinery, for example,
will you need to send a photographer to the factory for the pictures? Or can you
digitize existing photographs?
■ Suppose you are working with 100 graphs and charts about the future of petroleum exploration. Will you begin by collecting the raw data from reports and memos, or start with an existing spreadsheet or database? Perhaps you have charts that have already been generated from the data and stored as TIFF or JPEG files?
■ You are developing an interactive guide to the trails in a national park, complete with
video clips of the wildlife that hikers might encounter on the trails. Will you need to
shoot original video footage, or are there existing tapes for you to edit?
Acquiring Talent
After you have tested everybody you know and you still have vacant seats in your
project, you may need to turn to professional talent. Getting the perfect actor, model, or
narrator’s voice is critical.
Professional voice-over talents and actors in the United States usually belong
to a union or guild, either AFTRA (American Federation of Television and
Radio Artists) or SAG (Screen Actors Guild). They are usually represented by a
talent agent or agency that you can find in the yellow pages.
Begin by calling a talent agency and explain what you need. The agency will probably
suggest several clients who might fit your needs and send you to their web site for
video or audio samples of the actors’ work. After reviewing the samples, you can
arrange auditions of the best candidates, at your office or at a studio.
Working with Union Contracts
The two unions, AFTRA and SAG, have similar contracts and terms for minimum pay
and benefits. AFTRA has approved an Interactive Media Agreement to cover on- and
off-camera performers on all interactive media platforms.
The agreement also spells out AFTRA's definitions of terms related to interactive media.
10 Marks Questions:
1. Describe the process of making multimedia.
2. Explain scheduling in detail.
3. Explain estimating in detail.
4. Describe RFPs and bid proposals.
5. Explain designing in detail.
6. Describe producing.
7. Explain acquiring content in detail.
8. Explain acquiring talent in detail.