
DMSCSE15

ANNAMALAI UNIVERSITY
DIRECTORATE OF DISTANCE EDUCATION

M.Sc COMPUTER SCIENCE


FIRST SEMESTER

MULTIMEDIA AND ITS APPLICATIONS



Copyright Reserved
(For Private Circulation Only)
Multimedia and Its Applications
Table of Contents

Unit-I

1.0 Introduction
1.1 Objective
1.2 Content
1.2.1 Usage of Multimedia
1.2.2 Introduction to Making Multimedia
1.2.3 Multimedia Skills and Training
1.2.4 Multimedia for the Web
1.2.5 The Sum of the Parts
1.3 Revision points
1.4 Intext Question
1.5 Summary
1.6 Terminal exercises
1.7 Supplementary Materials
1.8 Assignments
1.9 Suggested Reading
1.10 Learning Activities
1.11 Key words

Unit-II

2.0 Introduction
2.1 Objective
2.2 Content
2.2.1 Macintosh and Windows Production Platforms
2.2.2 Hardware Peripherals
2.2.3 Connections
2.2.4 Memory and Storage Devices
2.2.5 CD Technologies
2.2.6 DVD
2.2.5 Input Devices
2.2.6 Touch Screens
2.2.7 Magnetic Card Encoders and Readers
2.2.8 Flat-Bed Scanners
2.2.9 Voice Recognition Systems
2.2.10 Digital Camera
2.2.11 Output Hardware
2.2.12 Projectors and Printers
2.2.13 Communication Devices
2.2.14 Modems
2.2.15 Cable Modems
2.3 Revision points
2.4 Intext Question
2.5 Summary
2.6 Terminal exercises
2.7 Supplementary Materials
2.8 Assignments
2.9 Suggested Reading
2.10 Learning Activities
2.11 Key words

Unit-III

3.0 Introduction
3.1 Objective
3.2 Content
3.2.1 Text Editing
3.2.2 Word Processing Tools
3.2.3 Painting and Drawing Tools
3.2.4 3D Modeling and Animation Tools
3.2.5 Animation, Video and Digital Movie Tools
3.2.6 Making Instant Multimedia
3.2.7 Spreadsheets
3.2.8 Presentation Tools
3.2.9 Multimedia Authoring Tools
3.2.10 Types of Authoring Tools
3.2.11 Time-Based Authoring Tools
3.2.12 Object-Oriented Authoring Tools
3.3 Revision points
3.4 Intext Question
3.5 Summary
3.6 Terminal exercises
3.7 Supplementary Materials
3.8 Assignments
3.9 Suggested Reading
3.10 Learning Activities
3.11 Key words
Unit-IV
4.0 Introduction
4.1 Objective
4.2 Content
4.2.1 Text
4.2.2 Sound
4.2.3 Images
4.2.4 Color
4.2.5 Animation
4.2.6 Video
4.3 Revision points
4.4 Intext Question
4.5 Summary
4.6 Terminal exercises
4.7 Supplementary Materials
4.8 Assignments
4.9 Suggested Reading
4.10 Learning Activities
4.11 Key words

Unit-V

5.0 Introduction
5.1 Objective
5.2 Content
5.2.1 Multimedia and the Internet
5.2.2 How the Internet Works
5.2.3 Internetworking
5.2.4 Connections
5.2.5 Internet Services
5.2.6 The World Wide Web
5.2.7 Web Servers
5.2.8 Web Browsers
5.2.9 Web Page Makers and Site Builders
5.2.10 Plug-Ins and Delivery Vehicles
5.3 Revision points
5.4 Intext Question
5.5 Summary
5.6 Terminal exercises
5.7 Supplementary Materials
5.8 Assignments
5.9 Suggested Reading
5.10 Learning Activities
5.11 Key words

UNIT-I

1.0) Introduction

Multimedia is a combination of text, graphic art, sound, animation and video elements. When
you allow an end user (the viewer of a multimedia project) to control what and when the
elements are delivered, it is interactive multimedia. When you provide a structure of linked
elements through which the user can navigate, interactive multimedia becomes hypermedia.
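The distinction becomes concrete if the linked structure is modeled directly. The sketch below (in Python; the page names, media files and links are invented purely for illustration) treats a hypermedia title as elements joined by navigation links that the user, not the author, chooses to follow:

# Hypermedia modeled as elements (nodes) joined by navigation links (edges).
pages = {
    "home": {"media": ["intro.txt", "theme.mp3"], "links": ["tour", "quiz"]},
    "tour": {"media": ["museum.mpg"], "links": ["home", "quiz"]},
    "quiz": {"media": ["questions.txt"], "links": ["home"]},
}

def navigate(start, choices):
    """Follow the user's link choices and return the pages visited."""
    visited, current = [start], start
    for choice in choices:
        assert choice in pages[current]["links"], f"no link {current} -> {choice}"
        visited.append(choice)
        current = choice
    return visited

print(navigate("home", ["tour", "quiz", "home"]))  # one user-driven path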
The IBM Dictionary of Computing describes multimedia as "comprehensive material,
presented in a combination of text, graphics, video, animation and sound. Any system
that is capable of presenting multimedia in its entirety is called a multimedia system". A
multimedia application accepts input from the user by means of a keyboard, voice or
pointing device. Multimedia applications involve using multimedia technology for
business, education and entertainment.
Multimedia can be divided into three broad categories that are based on the applications
they cover. These are
 Fun Material
 Powerful Material
 Creative Material.
The first category covers games, animation sequences, realistic sounds and anything else
you can think of. The Powerful Material category comprises software packages that could
not be run on the PC earlier. These include encyclopedias on CD-ROMs, works of
literature, magazines with graphics and sound, and reference works. The Creative
Material category covers software that enables users to create their own multimedia
programs, presentations and tools.

1.1) Objective
 To study the various uses of multimedia in different applications.
 To understand the roles and responsibilities of Multimedia project team members.

1.2) Content
1.2.1 Usage of Multimedia
Applications of Multimedia:
Many multimedia applications are driving the development of new technology, and many more applications are becoming viable because of the technological advances. The technology push and the application pull form a self-supporting cycle that is hastening the pace of development. Many applications that did not use multimedia content in their earlier versions presently include multimedia, because multimedia makes a product more attractive and marketable. In this section, various multimedia applications are described, with a view to understanding the demands they put on networking systems.
Education:
Educational programs are designed as educational games that appeal to children,
beginning from the elementary classes. These programs present Letter Recognition,
Elementary Mathematics, Spelling, Science, History, and Geography. Even students of
higher classes can use the interactive programs on Physics, Chemistry, etc. These
programs can be procured by schools.

Entertainment
Multimedia has become the basic mode of development of entertainment programs. TV programs, video games, etc., mostly depend upon the multiple design facilities of multimedia.

Computer Based Training (CBT)

If you want to get yourself trained by your computer on the following subjects, just go and buy a CD, and you can become an expert in any of these subjects:
 Foreign Languages.
 Cooking and Nutrition.
 Anatomy, Hobbies and Music.

Most of the above programs are based on multimedia. You will notice that the careful
use of animation, graphics, sound and video has made each one very interesting.

Games
Most of you are familiar with computer/video games. Why don't you try to become a multimedia designer and create your own computer games?


Sports
Multimedia technology is being exploited extensively in the field of sports telecast and training. Animation, digitized video, graphics, etc., have enhanced the power of sports presentation.
For cricket matches shown on TV, the replays, graphic overlays, batting averages, slow motion, etc. are developed with multimedia software.

Cyberart
Cyberart has become a
powerful medium and a specialized field to
express ideas. The creativity of an individual may
be expressed on a three dimensional canvas. In the
hands of an imaginative artist, this virtual canvas
can have the addition of video and sound effects to
an extent that would be the envy of an artist with
an easel and paintbrush.

Advertising
Advertisements for TV and the film industry are
developed with multimedia. All kinds of special
effects like animation, moving through a building,
sound and video manipulation, special objects, etc.
are created through multimedia.

Newspapers and Magazines


Newspapers and magazines are using multimedia tools to prepare CD-ROMs for wide circulation. The advantage of a CD-ROM over the print media lies in the fact that, by means of a multimedia application, the electronic publication can display music, sound and graphics along with the text. For example, if your college holds a dance competition, it could be presented on a CD, which may also be given to other colleges.

Training
Multimedia has become the most effective means of
imparting training. A large number of institutions
have embraced this method to impart knowledge in
preference to the older means. It must be remembered that the common features are text,
graphics, animation, sound and video integration.

Interactive Multimedia
Interaction between the user and the computer has evolved over a period of time. From the keyboard and mouse to the joystick and trackball was the initial phase. Gradually it changed to the touch-sensitive screen, and presently computers have begun to recognize voices to carry out orders. Artificial Intelligence is no longer a distant dream.
Multimedia and Internet
Multimedia technology is being used extensively
on the Internet. Every page of the Internet has
text, graphics, animation, etc.
Health Care – Telemedicine

Multimedia Information Networking can be used for enhancing service quality and reducing costs in the health care area. Delivery of health care services via a network is also called telemedicine. Computer technology has been applied to health care functions for more than a decade now. But most such systems have remained disjointed islands of technology. The aim of telemedicine is to use Multimedia Information Networking technology to create seamless information transmission systems for use in the health care industry.
Example:

Administration
Registration: While registering incoming patients, photos can be added to improve authentication.
Authorization: The hospital can check all relevant data to authorize the care; e.g., checking Medicare and insurance data by saving forms signed by the patient or representative.
Claims Processing: The organization processing the claims can access multimedia information, such as the patient's photo, signature, X-rays, etc., before processing the claims.

Diagnostics
Tests: Digital storage of test results, including X-rays, CAT scans, etc. Consulting doctor(s) can view the test results on high-resolution display screens.
Consultation: The local doctor can communicate with remotely located specialists over a collaborative videoconference and discuss the patient's condition and test results.

Patient Care
Monitoring: Monitoring of patients in the hospital or in their homes can be done via a multimedia network. Expert systems connected to the monitoring systems can be used to warn the caregiver of any abnormal conditions, and the patients could be observed and talked to over a videoconference link.

Emergency Care
Record Access: Patient records can be accessed over a multimedia network in an emergency situation, even on the roadside, by using wireless communication.
Career Opportunities:
Now you must be wondering about the career opportunities in fields related to
multimedia technology. The following lists give a few indications of the applicability of
this specialized field:
Entertainment:
 Special effects in TV serials and films.
 Interactive computer games.
 Internet games.
 Animation and virtual reality simulation.
Advertising:
 Marketing through Web pages.
 TV commercials.
 Multimedia Advertising.
Education:
 Education related software.
 Textbooks based on multimedia.
 Classroom instructional materials.
 Internet distance learning programs.
 Research facilities through libraries.
Science and Research
 All fields of scientific research.
 Astronomy.
 Space Technology and Aviation.
 Medical.
Interactive Publishing:
 Multimedia books.
 Internet Web page design.
 CD-ROM-based electronic magazines.


Police Department:
 Image composing of suspects. Trials can be reenacted.
Investment:
 Investment analysis.
 Statistical modeling.
 Market analysis.
There are opportunities for a multimedia specialist also in the fields of:
 Architectural designing.
 Interior designing.
 Landscape designing.
Needs and benefits:
Multimedia can be used to perform a wide range of sophisticated functions. With
multimedia we can:
 Browse through an encyclopedia and see animations on subjects ranging from the
nervous system to electrons in a fission reaction.
 Build business presentations using text, graphics, sound, video and animation.
 Create interactive computer presentations.
 Explore the anatomy of the human body for the anatomy paper.
 Create 3-D effects in various ways.
 Explore the map of any country that you may like to visit.
 Add sound to files or tasks.
 Create animated birthday/greeting cards for friends who have computers.
 Watch a man walk on the surface of the moon.
 Use multimedia for selling a product.
 Learn a language.
 Capture an image from video and use it as a bitmap on the Windows desktop.
The functions and possibilities are endless. Till now, people used to dream about an
electronic paperless office. With multimedia, this dream can be realized. It is indeed a
futuristic concept that is taking shape now!
1.2.2 Introduction to Making Multimedia
Multimedia is a very powerful tool for influencing people. With this objective in mind,
carefully observe the following principles:
 Multimedia means getting your message across in the shortest possible time with
maximum effect. To do this, match your presentation to the target audience. In other
words, if your audience is illiterate, use more of graphics and sounds in place of
written text. Evidently, illiterate persons can appreciate a picture but will be unable to
read the most beautifully drafted text.


 Learn to convey your messages in the least possible words. Do also remember that,
whereas excess of detail is boring, brevity by itself is no virtue. In other words, learn
to make a balanced presentation. It should neither be so short as to lose meaning nor
so detailed that people start yawning or go to sleep.
 Be careful in deciding the medium that you will use more than the other media. In
this regard, pay attention to the age, sex, education, cultural background, economic
position and other details of the audience. Taking care of these characteristics of the
audience will enable you to choose the media best suited to influence the receivers of
your message.
 Multimedia presentations can, at times, be quite expensive. Hence, be innovative and
learn to make use of locally available skills, idioms, forms of expression, etc. so that
people can easily understand your message. To explain, in Tamil Nadu, people enjoy
the Kargham dancers. In Haryana, people like Saangs (a folk music cum drama
presentation).
 Clearly, combining locally popular and accepted forms of expressions and your
knowledge of multimedia is likely to make your presentations more effective and
successful.
 Ensure that the message that you wish to convey and the words or graphics used by
you do not hurt the popular sentiments. In other words, do not use words or
expressions that may be considered offensive, dirty or socially unacceptable in any
way. The same criterion applies to the use of graphics, wherein you must learn to
respect the local idiom. Forgetting this principle may not only make your presentation
unacceptable but also create unpleasant consequences for you and all others
associated with you in making the presentation.
 Avoid repetition. The availability of a variety of media gives you enough
opportunities to experiment and innovate. For example, it is now being increasingly
realized that street plays can be used as a very effective means of communication for
spreading awareness. Now experiment, how you can combine a street play with a
slide show, a film show, or attractively prepared posters, hand bills, etc., with a few
of your teammates dispersed in the crowd to elicit the people's response to your
presentation.

Multimedia Production

The production of interactive multimedia applications is a complex process, involving
multiple steps. This process can be divided into the following phases:
 Conceptualization
 Development
 Pre production
 Production
 Post production
 Documentation


Conceptualization:
The process of making multimedia starts with an "idea", better described as "the vision", which is the conceptual starting point. The starting point is, ironically, the visualization of the ending point: the multimedia experience that the targeted end user will have. Conceptualization involves identifying a relevant theme for the multimedia title. We prefer choosing themes that are socially important and exciting to work on. Other criteria, like the availability of content, how amenable the content is to multimedia treatment, and issues like copyright, are also to be considered.
Development:
Defining project goals and objectives:
After a theme has been finalized for a multimedia project, a matrix of specific goals, objectives and activities must be laid down.
 Goals: In a multimedia project, goals are general statements of anticipated project outcomes, usually more global in scope.
 Objectives: Specific statements of anticipated project outcomes.
 Activities: These are actions, things done in order to implement an objective.
Specific people are responsible for their execution, a cost is related to their
implementation and there is a time frame binding their development.
Defining the Target Audience:
A very important element that needs to be defined at this stage is the potential target audience of the proposed title, since this will determine how the content needs to be presented.
Preproduction:
It is the process of intelligently mapping out a cohesive strategy for the entire multimedia
project, including contents, technical execution and marketing. Based on the goals and
objectives, the three pillars of multimedia viz. hardware, software and user participation
are defined. At this stage the multimedia producer begins to assemble the resources and
talent required for creating the multimedia application. The Production Manager
undertakes the following activities.
 Development of the budget control system
 Hiring of all specialists involved in the multimedia applications process
 Contracting video and audio production crews and recording studios
 Equipment rental, leasing and purchasing
 Software acquisition and installation
 Planning the research work of the content specialists
 Development of the multimedia application outline, logic flow, scripts and video
and audio files production scripts and schedules
 Coordination of legal aspects of production.


Production:
Once all the pre-production activities have been completed, the multimedia application
enters the production phase. Activities in this phase include:
 Content Research
 Interface Design
 Graphics Development
 Selection of musical background and sound recording
 Development of computer animation
 Production of digital video
 Authoring
Post production:
In this phase, the multimedia application enters the alpha/beta testing process. Once the
application is tested and revised, it enters the packaging stage. It could be burned onto a
CD-ROM or published on the Internet as a website.
Developing documentation:
User documentation is a very important feature of high-end multimedia titles. This
includes instructions for installing, system requirements, acknowledgements,
copyrights, technical support and other information important to the user.

1.2.3 Multimedia Skills and Training


Most multimedia projects are the result of teamwork; many graphic artists, sound
producers and programmers are involved. Often, each person is assigned more than
one responsibility. Let us now explore some important members of a team:
Project Manager
Responsibility: Responsible for the entire project cycle, including monitoring the progress. Design and management are the two major responsibilities. Scheduling time and resources and running meetings are also handled.
Skills required: Good technical skills, solid organizational skills, and communication skills to interact with the team members.

Multimedia Designer (the various designers are mentioned below)
Responsibility: These designers decide the ultimate user experience.
Skills required: Experience in designing good information systems, creativity, knowledge of program architecture and good organizational skills are essential.

Graphic Designers, Animators, and Image Processing Specialists
Responsibility: These designers deal with visual elements.

Instructional Designers
Responsibility: Responsible for the consistency and relevance of the subject presented.

Interface Designers
Responsibility: Responsible for the structure and navigation of the content. They decide the ultimate user navigation through the multimedia project.

Information Designers
Responsibility: Responsible for the content of the multimedia project and also the feedback.

Multimedia Writers
Responsibility: Responsibilities include the creation of characters, defining their interaction and their roles. They actually develop the content.
Skills required: Skills include, but are not limited to, experience in copywriting, the ability to follow schedules, and working well as a team.

Video Specialist
Responsibility: Handles equipment to record and edit video elements. Works together with sound engineers, set designers, light engineers and actors.
Skills required: Good creativity and editing skills are a must.

Audio Specialist
Responsibility: Designs and produces audio, music, and sound effects.
Skills required: Solid technical knowledge in handling various sound formats. Working well as a team is essential.

Multimedia Programmer
Responsibility: Brings various elements of multimedia together using authoring tools or programming.
Skills required: Lingo, Java and C/C++ skills are essential, along with knowledge of HTML and Flash.

Multimedia Producer for the Web
Responsibility: Brings together various elements for the World Wide Web. Should take care of the web site, since changes are dynamic.
Skills required: Knowledge of HTML, CGI scripting, and good communication with team members are essential.

1.2.4 Multimedia for the Web


Interactive Web pages replete with graphics, sound, animations, and full-motion video
have made multimedia popular on the Internet. For example, visitors to the msn.com
interactive Web site can access news stories from the news channel, photos, on-air transcripts,
video clips, and audio clips. The video and audio clips are made available using
streaming technology, which allows audio and video data to be processed as a steady and
continuous stream as they are downloaded from the Web. If Internet transmission
capacity and streaming technology continue to improve, Web sites could provide
broadcast functions that compete with television, along with new two-way interactivity.
Multimedia Web sites are also being used to sell digital products, such as digitized music
clips. A compression standard known as MP3 (MPEG Audio Layer 3, where MPEG stands for
Motion Picture Experts Group) can compress audio files down to one-tenth
or one-twelfth of their original size with virtually no loss in quality. Visitors to Web sites
such as MP3.com can download MP3 music clips over the Internet and play them on their
own computers.
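A quick back-of-the-envelope calculation shows where the one-tenth to one-twelfth figure comes from. The sketch below (in Python) uses typical assumptions that are not stated in this text: CD-quality audio at 44.1 kHz, 16 bits, stereo, and an MP3 bitrate of 128 kbps.

# Rough comparison of uncompressed CD-quality audio versus MP3 (illustrative).
def pcm_bytes(seconds, sample_rate=44_100, bits=16, channels=2):
    # PCM size: samples/second x bytes/sample x channels x duration
    return seconds * sample_rate * (bits // 8) * channels

def mp3_bytes(seconds, bitrate_kbps=128):
    # An MP3 stream at a fixed bitrate; divide by 8 to turn bits into bytes
    return seconds * bitrate_kbps * 1000 // 8

clip = 240  # a four-minute music clip
ratio = pcm_bytes(clip) / mp3_bytes(clip)
print(f"PCM: {pcm_bytes(clip)/1e6:.1f} MB, MP3: {mp3_bytes(clip)/1e6:.1f} MB, about {ratio:.0f}:1")

For a four-minute clip this gives roughly 42 MB of PCM against under 4 MB of MP3, a ratio of about 11:1, consistent with the claim above.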

1.2.5 The Sum of the Parts


Creating Content
Content is the “stuff” around which an application is developed. It is the text, narration,
graphics, colors, backgrounds, videos and animation. In other words, content is all the
elements that compose a multimedia application.
Content has a value and a cost. Cost refers to the monetary price incurred to acquire or
develop content, while value refers to its merit, usefulness, importance, or significance. A
balance has to be struck between the value and cost of the content against the production
budget and the desired outcomes.
Content acquisition is one of the most time-consuming and budget intensive activities
during the development of a multimedia application. The multimedia producer has to
determine if it is feasible to incorporate the suggested content based on its cost and value;
determine the alternatives; evaluate the legal implications of using proposed content; and,
determine the best strategy to develop or modify the desired content.
Content Identification, Selection, Development and Acquisition
Content either has to be sourced or if it is not available then it has
to be created. This implies that the source must be identified, selected and the content
acquired, or it must be developed. Mostly, budgetary constraints define whether content is
developed, purchased or borrowed. Copyright issues are the next most important
constraints that influence content generation.


The main responsibility of content development lies with the content specialist,
scriptwriter or computer graphics artist. The content specialist undertakes the following
tasks:
 Content research
 Identifying document sources
 Identification of the building blocks like colours and graphics representative of
the theme, time or period to be presented in the application
 Identifying individuals to be interviewed
 Locations to be videotaped
The responsibilities of the scriptwriters are the following:
 Content evaluation
 Adaptation of the content to the goals and objectives of the application
 Development of the application script and storyboard based on the content
The computer graphics artist is responsible for the development of the following:
 Developing line art necessary for the application
 Scanning and editing of photos, backgrounds, and other graphic elements
 Chart development
 Map preparation
 Text manipulation
 3-D graphics and walkthroughs
 Computer animation
If content is not readily available, then it needs to be developed. The creation of a story
or graphics, or the composition of music, are examples of content development. Sometimes
content needs to be adapted to meet the needs of the application. This includes editing
and manipulation of existing graphics, photos, video, sound or text.
Checklist for Multimedia Production
There may be many tasks in your multimedia project. Here is a brief check-list of action
items for which you should plan ahead as you think through your project:
 Design Instructional Framework
 Hold Creative Idea Session(s)
 Determine Delivery Platform
 Examine Available Content
 Draw Navigation Map
 Create Storyboards
 Design Interface
 Design Information Containers
 Research/Gather Content
 Assemble Team
 Build Prototype


 Conduct User Test


 Revise Design
 Create Graphics
 Create Animations
 Produce Audio
 Digitize Audio and Video
 Take Still Photographs
 Program and Author
 Test Functionality
 Fix Bugs
 Conduct Beta Test
 Create Golden Master
 Replicate
 Prepare Package
 Deliver or Install at Web Site
 Award Bonuses
 Throw Party


1.3) Revision points

Multimedia is now available on standard computer platforms. It is the best way to gain the attention of users and is widely used in many fields, as follows:
Business: In any business enterprise, multimedia exists in the form of advertisements, presentations, video conferencing, voice mail, etc.
Schools: Multimedia tools for learning are widely used these days. People of all ages learn easily and quickly when they are presented information with the visual treat.
Homes: Home PCs equipped with CD-ROMs, and game machines hooked up with TV screens, have brought home entertainment to new levels. These multimedia titles viewed at home would probably be available on the multimedia highway soon.
Public places: Interactive maps at public places like libraries, museums and airports, and the stand-alone terminals at supermarkets, would do much good to the users to gain information quickly and easily.

1.4) Intext Question


1. What is multimedia? Explain at least five applications of multimedia in distance
education.
2. What is copyright? List and explain two legal issues related to copyright in
multimedia application development.
3. Explain any two multimedia features which can be used in business.
4. Explain why the psychology of the learner should be taken into consideration at the
time of design of a multimedia-based learning application.
5. Explain briefly seven applications of multimedia in Business.
6. Explain two basic criteria on which the publishing industry can be classified. Also,
explain three advantages of using multimedia in the publishing industry.
7. Explain, with example, the use of storyboard for graphical representation of
multimedia project.

1.5) Summary

Hardware, software, creativity, talent and technical skills are required for making
good multimedia. Following the time schedule and budget is essential, as time and
money are major requirements.
Most multimedia projects are the result of teamwork; many graphic artists, sound
producers and programmers are involved.

1.6) Terminal exercises


1 What is the first phase in a project?
A) Analysis
B) Development
C) Design

D) Evaluation

2 What is another name for implementation?


A) Alpha test
B) Integration
C) Roll-out
D) Organization
3 What are deliverables?
A) Milestones
B) Work products that the team submits
C) An organization
D) A department within an organization
4 The project manager is involved in which phases of a project?
A) The first one
B) All of them
C) The last one
D) The first and the third
5 Which person in the group is responsible for how users respond to a multimedia
application?
A) User interface designer
B) Producer
C) Programmer
D) Systems architect
Fill up the Blanks
I. Team Efforts
Creating large multimedia applications requires ___________________________
.

II. Project Phases
Many large-scale projects follow the ________________________________
.
III. The Project Team
The makeup of a project team depends on the __________________ and
________________ its __________________ and __________________.
On small teams, members often fill more than one role.
IV. Project Team Roles
Project teams may include:
A. Client Representative
B. Project Manager
C. Producer


D. User Interface Designer


E. ______________________
F. ______________________
G. ______________________
H. ______________________
I. Audio and Video Specialist
J. Quality Assurance Analyst
K.Webmaster
V. _____________________ The organization or person who pays for the application and
publishes it
VI. _______________________________ Responsible for ensuring that the team meets its
deadlines and stays within its budget
VII. ______________________ Responsible for planning and coordinating the development of
all the different media in an application
VIII. _____________________ Responsible for planning all the aspects of an application
that affect a user's ability to understand and navigate it
IX. ____________________________ Uses programming languages to develop the
underlying code that enables an application to run on computers
X. _____________________ Responsible for the overall technical design of an application,
including:
A.The hardware and operating system it will run on
B. How its components will work together
C. The programming languages and other tools that will be used to develop it
XI. __________________ Creates original graphics for the application, working closely with
the UID, photographers, and any others who contribute to the application’s visual content
XII. __________________
A.Writers specialize in content or documentation.
B. Editors review the work of all writers.
XIII. __________________ May record sounds, integrate existing files into the application,
edit recordings for clarity, and convert files to different formats
XIV. ______________ A specialist whose job involves testing the application and reporting
problems to be corrected
XV. ___________________ Responsible for keeping a Web site up-to-date and running
smoothly, and for helping visitors who have questions about the site

1.7) Supplementary Materials

1. http://en.wikipedia.org/wiki/Multimedia
2. http://multimedia.expert-answers.net/multimedia-glossary/en/
3. http://nrg.cs.usm.my/~tcwan/Notes/MM-BldgBlocks-I.doc
4. www.edb.utexas.edu/multimedia/PDFfolder/WEBRES~1.PDF


1.8) Assignments
1. Explain the need for planning a multimedia application. Explain the need for a logic
flowchart for the development of an interactive multimedia application, with an
example.

1.9) Suggested Reading


1. Kevin Jeffay & HongJiang Zhang, 'Readings in Multimedia Computing and
Networking', Academic Press, 2002.
2. Nigel Chapman & Jenny Chapman, 'Digital Multimedia', Wiley, 2000.
3. Prabhat K. Andleigh & Kiran Thakrar, 'Multimedia Systems Design', PHI, 2003.
4. Tay Vaughan, 'Multimedia: Making It Work', Tata McGraw-Hill, 2002.

1.10) Learning Activities


1. Multimedia databases need to store and retrieve images, sounds and videos. Keyword
searching can be used to retrieve information for these media types, but current
retrieval research is more concerned with content-based search methods.

2. Discuss how content-based retrieval can be used for the three media types: image,
sound and video.
3. What problems need to be overcome in providing an effective and efficient
service to the user?

1.11) Key words

Conceptualization: The process of making multimedia starts with an “Idea”


Development: Formulating specific goals, objectives and activities matrix
Pre production: Assembling the resources and talent required for creating the
multimedia application.
Production: Content Research, Interface Design, Graphic Development etc.
Post production: Testing and Delivery
Documentation: Instructions for installing, system requirements, developing
acknowledgements, etc.


UNIT-II

2.0) Introduction
The processing, input, output, and storage technologies can be used to create interactive
multimedia applications that integrate sound, full-motion video, or animation with
graphics and text. PCs today come with built-in multimedia capabilities, including a high-
resolution color monitor; a CD-ROM drive or DVD drive to store video, audio, and
graphic data; and stereo speakers for amplifying audio output.
2.1) Objective
 To study the various hardware devices of multimedia systems.
 To understand the working principles of some I/O devices and storage devices.

2.2) Content
2.2.1 Macintosh and Windows Production Platforms
Macintosh versus PC:
The debate on selecting the platform for a multimedia project, between Macintosh and PC, has been going on for a long time. Most developers have their minds set on believing that the Macintosh provides an easier and smoother platform for multimedia development. It is true that, with the advent of hardware and authoring software tools for Windows, multimedia development on both the Macintosh and Windows platforms is equally good. Ultimately, personal preference, budget constraints and requirements decide the platform for development.

The Macintosh platform:


All Macintosh computers provide the capability to record and play sound. There are hardware and software for editing video and creating DVDs. The significant difference from the Windows PC is that the Macintosh requires a mouse for every action. Today all Macintosh models are equipped with sufficient resources for multimedia development.

The Windows platform:


Windows computers can be assembled by combining a group of parts that are bound together by the requirements of the Windows operating system. Most Windows computers today come equipped with audio software, a CD-ROM drive, large amounts of RAM, good processor speed and a high-resolution monitor. These features all make the multimedia experience in Windows smooth and good.


2.2.2 Hardware Peripherals


Multimedia Workstation
A multimedia workstation is vastly different, in terms of almost all components, from a
PC that is used for normal productivity work. In fact, it is necessary to have a
workstation, and not just any PC. The various components of Multimedia Workstation
are:
Processor
Encoding, editing and designing involve a lot of data processing
handled by a multitude of algorithms. So it is necessary to have a
fast processor. It’d be better still to go in for a dual-processor-
based workstation.
RAM
Again, all the data that is written and read off the various disks
needs to be crunched, formats changed, and filters applied. All this requires a lot of RAM.
Hence it is better to have DDR or RDRAM, which are faster than regular SDRAM.
Hard disk
Considering the large number of read/write operations, a faster-RPM hard disk, such as a
SCSI hard disk, will surely help. It is also preferable to have two hard disks instead of one.
This way we can have the OS on one and use the other for the temporary swap space that
applications use. There is a need for large amounts of storage space. FireWire disk arrays
are an option for those into professional video work.
Graphics card
Ironically, a graphics card that gives the highest frame rates in Quake III may not be the
best for graphic editing or high-end designing needs. It is necessary to have specialized
cards that give clarity and detail.
Capture card
To capture video from external devices like camcorders, VCRs or Beta stations, we need
to have a video-capture card. There is a whole range of capture cards, from low-cost
amateur cards to high-end ones for professional-quality work. Choose one that gives
good frame rates even at higher resolutions. Many display cards come with integrated
capture capabilities. It is preferable to have separate, dedicated cards for display and
capture. Make sure that S-Video and Composite video input/output are on the card.

FireWire
For live video, it is necessary to have a FireWire port to bring video into the system. Macs
come with built-in FireWire; so do most high-end workstation PCs.
TV tuner card
As the name suggests, a tuner card is used to display TV on PC. Frankly, these are more
for entertainment than for your work. Video captured from a TV card will be of inferior
quality, and would tend to be grainy.
Sound card
Multimedia is both video and audio. So make sure that a good sound card is installed for
doing a lot of music-based work. Preferably a 5.1-output sound card with Dolby and DTS
decoder on the hardware. Check that it has all the outputs that are needed, like optical and
digital. The same applies to the inputs available (auxiliary, line or microphone).
Speakers
Apart from the number of channels, other important parameters for a speaker system are
their frequency response and power output. The better the frequency range they cover,
the more the clarity of sound across the spectrum.
CD/DVD/CDR drives
Depending upon the application need, these should be incorporated. A CD writer will
obviously be a requirement. Combo drives are also available, which can read and write
CDs and read DVDs.
Data backup
If the volume of data is large, then there is a need for backup. Apart from a CD/DVD drive
that can archive one project at a time, it is necessary to have a complete backup. Hence a
good tape backup system of adequate capacity should be incorporated. If the system is
one of many workstations at the same place, then the tape drive can be shared.

2.2.3 Connections
Many multimedia applications are developed in workgroups comprising instructional
designers, writers, graphic artists, programmers, and musicians located in the same office
space or building. The workgroup members’ computers typically are connected on a local
area network (LAN). The client’s computers, however, may be thousands of miles
distant, requiring other methods for good communication.
Communication among workgroup members and with the client is essential to the
effective and accurate completion of the project. Our Postal Service mail delivery is too
slow to keep pace with most projects; courier services are better. And when you need it
immediately, an Internet connection is required. If your client and you are both connected
to the Internet, a combination of communication by e-mail and by FTP (File Transfer
Protocol) may be the most cost-effective and efficient solution for both creative
development and project management.
In the workplace, use quality equipment and software for your communications setup. The
cost, in both time and money, of stable and fast networking will be returned to you.
Networking Macintosh and Windows computers
It is desirable to network Macintosh and Windows computers so that they can share
resources (like printers). Networks are classified based on the geographical distance
between the networked devices.
Local Area Networks (LANs) are those in which the distance between the workstations is
small; examples are computers connected within a building. Resources like printers,
file servers and scanners are shared directly between these network devices. The most
common protocols used for LAN connections are Ethernet and Token Ring. "CAT-5" or
"data-grade level 5" twisted-pair telephone wire sets up these connections.


Wide Area Networks (WANs) are network systems separated by great distances;
examples are connections between large corporate enterprises and institutions
spanning a large geographic area. These are more expensive to install and maintain
compared to LANs. WANs can operate using dedicated phone lines, wireless networks,
and dial-up connections through an Internet Service Provider (ISP). Dial-up
services use a telephone line to connect to the ISP's server, and so we are charged for
the telephone line for the duration we are connected.
While working across Mac and Windows platforms for multimedia development, we
need to establish an Ethernet connection to enable the PCs and Macs to be able to talk to
each other and share resources.
Macs have in-built Ethernet cards, and we can fit inexpensive Ethernet cards in PCs.
Windows PCs use Microsoft Client TCP/IP as the client/server software; we can add
software to Macs to connect them to the network of PCs. Another option is to add software
to the Windows PC to connect it to a network of Macs, which use AppleTalk as their
client/server software. Both these methods use Ethernet to connect.
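The practical payoff of a shared protocol stack is that, once both machines sit on the same Ethernet and speak TCP/IP, a simple client/server exchange works identically in either direction. The sketch below uses Python's standard socket module; the host name "studio-mac.local" and port 5000 are hypothetical, and this only illustrates the idea, it is not the actual client/server software mentioned above.

import socket

PORT = 5000  # hypothetical port both machines agree on

def serve_once():
    # Run on one machine (Mac or PC): accept a single connection and greet it.
    with socket.create_server(("", PORT)) as srv:
        conn, _addr = srv.accept()
        with conn:
            conn.sendall(b"hello across platforms\n")

def fetch(host):
    # Run on the other machine: TCP/IP hides the platform difference entirely.
    with socket.create_connection((host, PORT), timeout=5) as s:
        return s.recv(1024).decode()

# e.g. print(fetch("studio-mac.local")) from the Windows PC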
Connection: The equipment needed for a multimedia project depends on the design and
contents of the project. A fast computer with ample RAM and disk storage would be the
basic need. To avoid using tools to develop multimedia content, we can compile pre-existing
sound, music, art, clip animation, etc., and reuse them in our project. Most multimedia
developers use special equipment for digitizing sound, or stills from videotapes.
Communication channels
By the communication channels of a network, we mean the connecting cables: the cables
that connect two or more workstations are the communication channels.
In LANs many different types of media are in use. Copper conductors in the form of
twisted pair or coaxial are by far the most common. More recently, very serious
consideration has been given to the use of optical fiber technology in LANs. Other media
e.g., microwave transmission, infrared, telephone line etc. are also used. The basic types
of cables are:
 Twisted pair cable
 Coaxial cable
 Optical fibers
Twisted pair cable:

The most common form of wiring in data
communication application is the twisted pair cable.
As a voice grade medium (VGM), it is the basis for
most internal office telephone wiring. It consists of
two identical wires wrapped together in a double
helix.
Problems can occur due to differences in the electrical
characteristics between the pair (e.g., length,
resistance, and capacitance). For this reason, LAN
applications will tend to use a higher-quality cable
known as data grade medium (DGM).


The main advantages of twisted pair cable are its simplicity and ease of installation. It is
physically flexible, has a low weight and can be easily connected.
The data transmission characteristics are not so good. Because of high attenuation, it is
incapable of carrying a signal over long distance without the use of repeaters. Its low
bandwidth capabilities make it unsuitable for broadband applications.
Coaxial cable:
This type of cable consists of a solid wire core
surrounded by one or more foil or wire shields, each
separated by some kind of plastic insulator. The inner
core carries the signal, and the shield provides the
ground. While it is less popular than twisted pair, it is
widely used for television signals. In the form of
CATV cable, it provides a cheap means of
transporting multi-channel television signals around
metropolitan areas. Large corporations also use it in
building security systems.
The data transmission characteristics of coaxial cable
are considerably better than those of twisted pair. This opens the possibility of using it as
the basis for a shared cable network, with part of the bandwidth being used for data
traffic.
Optical Fibers:
Optical fibers consist of thin strands of glass or glass like
material, which are so constructed that they carry light from a
source at one end of the fiber to a detector at the other end. The
light sources used are either light emitting diodes or laser diodes.
The data to be transmitted is modulated onto the light beam using
frequency modulation techniques. The signals can then be picked
up at the receiving end and demodulated. The bandwidth of the
medium is potentially very high. For LEDs, this ranges between
20 and 150 Mbps, and higher rates are possible using LDs.
The major problems with optical fibers are associated with installation. They are quite
fragile and may need special care to make them sufficiently robust for an office
environment. Connecting either two fibers together or a light source to a fiber is a
difficult process.
One of the major advantages of optical fibers over other media is their complete
immunity to noise, because the information is traveling on a modulated light beam.
A side effect of this noise immunity is that optical fibers are virtually impossible to tap.
In order to intercept the signal, the fiber must be cut and a detector inserted.
Despite its shortcomings, optical fiber is an important technology and will indeed be a
very attractive transmission medium.


2.2.4 Memory and Storage Devices


History of Storage Devices

Year  Invention
1898  Herman Hollerith invents the punch card for computing the census.
1949  Magnetic core memory is used in a computer.
1956  IBM becomes the first company to ship a computer hard drive. It stored 5 MB, with a 24-inch platter size.
1963  Philips introduces the compact audio cassette.
1971  IBM introduces the first flexible "floppy" disk drive.
1973  IBM announces the first modern-day fixed Winchester hard drive, the 3340.
1980  Seagate builds the industry's first 5.25-inch hard drive.
1984  Philips and Sony launch the CD-ROM.
1986  SCSI is invented.
1988  Sony, Philips and Taiyo Yuden co-invent the CD-R.
1991  IBM launches magnetoresistive (MR) heads.
1992  Seagate introduces the first 7,200 rpm disc drive. Holographic storage is demonstrated.
1993  The DVD is invented by Toshiba.
1995  Iomega launches the Zip drive.
1996  Seagate introduces the first 10,000 rpm drive.
1997  Seagate introduces the world's first Fibre Channel interface disc drive. IBM introduces the first hard drive with giant magnetoresistive (GMR) heads.
1998  IBM launches the Microdrive.
2000  Blue-laser storage is demonstrated. Seagate introduces the world's first 15,000 rpm disc drive. Seagate reaches a landmark capacity with the 180 GB Barracuda hard drive.
2001  Seagate demonstrates a world-record areal density of 100 gigabits per square inch. First use of a hard disk drive in a game console, the Xbox.
2002  IBM debuts Millipede for the next generation of data storage.


Storage Technology:
Rapid advances in computing, communication, and compression technologies, coupled
with the dramatic growth of the Internet, have led to the emergence of a wide variety of
multimedia applications – such as distance learning, interactive multiplayer games, online
virtual worlds, and scientific visualization of multi-resolution imagery. These
applications differ from conventional application in at least two ways. First, they involve
storage, transmission, and processing of heterogeneous data types – such as text, image,
audio, and video – that differ significantly in their characteristics (e.g., size, data rate,
real-time requirements, etc.). Second, unlike conventional best-effort applications, these
applications impose diverse performance requirements – for instance, with respect to
timeliness on the networks and operating systems. Because of these differences,
techniques employed by conventional file systems for managing textual files do not
suffice for managing multimedia objects.
Selecting Multimedia Storage Device
You need large capacities, fast access and high data-
transfer rates for storing video. Let us review storage
devices typically used in such environments and the
reasons for choosing them.
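To see why the demands are so high, it helps to estimate the data rate of uncompressed video. The sketch below assumes 720x576 frames, 24-bit colour and 25 frames per second; these are common illustrative figures, not values taken from this text.

# Uncompressed video data rate and the storage it implies (illustrative).
def video_rate_mb_per_s(width=720, height=576, bytes_per_pixel=3, fps=25):
    # bytes per frame times frames per second
    return width * height * bytes_per_pixel * fps / 1e6

def storage_gb(minutes, rate_mb_s=None):
    rate_mb_s = rate_mb_s if rate_mb_s is not None else video_rate_mb_per_s()
    return rate_mb_s * minutes * 60 / 1000

print(f"data rate: {video_rate_mb_per_s():.1f} MB/s")   # about 31 MB/s
print(f"one hour of footage: {storage_gb(60):.0f} GB")  # about 112 GB

At roughly 31 MB/s and over 100 GB per hour before compression, it is clear why capacity and sustained transfer rate dominate the choice of storage device.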

Tapes
Tapes have always been the choice for capturing and
storing video. They are compatible with digicams and camcorders as well. But they
cannot be used for storing while processing (editing), as they are sequential access
devices. Digital videotapes or DV tapes are becoming popular, as they’re ideal for high
quality digital video recordings.

SCSI/IDE Drives
Hard disks are used in editing systems. Traditionally, these machines used SCSI drives
though IDE or Ultra ATA drives are being used these days. On professional systems,
there are AV (Audio-Visual) drives that avoid thermal recalibration between read/writes
and are suitable for desktop multimedia. (Thermal recalibration is a process by which
older hard drives operate smoothly despite heating).

Firewire Hard Disks:
Firewire hard disks find application in the post-production
market. They work on Firewire technology that provides high-
speed serial input/output connection when connecting digital
devices like camcorders to desktop or portable computers.
Most of the DV camcorders available today have Firewire
ports. These disks are hot plug and daisy chain capable, which
means you, can add many as external drives without shutting
down or restarting.


Optical Storage Systems


The storage media of most optical storage systems in production today are in the form of
a rotating disk. The figure below illustrates a typical optical disk system. In general, the disks are
preformatted using grooves and lands (tracks) to enable positioning an optical pickup and
recording head to access information on the disk. A focused laser beam emanating from
the optical head records information on the media as a change in the material
characteristics. To record a bit, the laser generates a small spot on the media that
modulates the phase, intensity, polarization, or reflectivity of a readout optical beam; that
beam is subsequently "read" by a detector in the optical head. Drive motors and servo
systems rotate and position the disk media and the pickup head, thus controlling the
position of the head with respect to data tracks on the disk. Additional peripheral
electronics are used for control and for data acquisition, encoding, and decoding. As for
all data storage systems, optical disk systems are characterized by their storage capacity,
data transfer rate, access time, and cost.

Fig. A: Key components of an optical disk system.

Storage capacity:
The storage capacity of an optical storage system is a direct function of spot size
(minimum dimensions of a stored bit) and the geometrical dimensions of the media. A
good metric to measure the efficiency in using the storage area is the areal density
(MB/sq. in.). Areal density is governed by the resolution of the media and by the
numerical aperture of the optics and the wavelength of the laser in the optical head used
for recording and readout. Areal density can be limited by how well the head can be
positioned over the tracks; this is measured by the track density (tracks/in.). In addition,
areal density can be limited by how closely the optical transitions can be spaced; this is
measured by the linear density (bits/in.).
Data transfer rate
The data transfer rate of an optical storage system is a critical parameter in applications
where long data streams must be stored or retrieved, such as for image storage or backup.
Data transfer rate is a combination of the linear density and the rotational speed of the
drive. It is mostly governed by the optical power available, the speed of the pickup head
servo controllers, and the tolerance of the media to high centrifugal forces.
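Both quantities reduce to simple products. The sketch below makes this concrete; the track density, linear density and head velocity are hypothetical round numbers, chosen only so that the results land near the 5.25-inch figures quoted later in this section.

# Areal density = track density x linear density; transfer rate is the
# linear density times the linear velocity of the media under the head.
def areal_density_mb_sq_in(tracks_per_inch, bits_per_inch):
    return tracks_per_inch * bits_per_inch / 8e6   # convert bits to MB

def transfer_rate_mb_s(bits_per_inch, velocity_in_per_s):
    return bits_per_inch * velocity_in_per_s / 8e6

tpi, bpi = 20_000, 150_000  # hypothetical track and linear densities
print(f"areal density: {areal_density_mb_sq_in(tpi, bpi):.0f} MB/sq. in.")
print(f"transfer rate: {transfer_rate_mb_s(bpi, 200):.2f} MB/s at 200 in/s")

With these numbers the density comes out near 375 MB/sq. in. and the rate near 3.75 MB/s, in the same range as the 5.25-inch drive figures given below.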
Access time
The access time of an optical storage system is a critical parameter in computing
applications such as transaction processing; it represents how fast a data location can be
accessed on the disk. It is mostly governed by the latency of the head movements and is
proportional to the weight of the pickup head and the rotation speed of the disk.
Cost:
The cost of an optical storage system is a parameter that can be subdivided into the drive
cost and the media cost. Cost strongly depends on the number of units produced, the
automation techniques used during assembly, and component yields. Optical storage
R&D typically concentrates on the following efforts: reducing spot size using lower-
wavelength light sources; reducing the weight of optical pickup heads using holographic
components; increasing rotation speeds using larger optical power lasers; improving the
efficiency of error correction codes; and increasing the speed of the servo systems.
Equally active R&D efforts, especially in Japan, are focused on developing new
manufacturing techniques to minimize component and assembly costs.
Optical Disk Formats:
Depending on the access times required by given applications, optical disk products come
in two different formats: the compact disk (CD) format used for entertainment systems
(audio, photo, or digital video disk applications), and the standard or banded format used
for information processing or computing applications.
CD format:
In the optical disk CD format, information is recorded in a spiral while the disk turns at a
constant linear velocity. The standard disk diameter used is 12 cm, which offers a typical
capacity of 650 MB with a seek time (access time) on the order of 300 ms and a data rate
of about 150 kB/s. A minidisk format is currently being adopted in some Sony products that
use 6 cm disks providing 140 MB capacity. Various types of products belong to the CD
family, including CD recordable (CD-R) products, which are the write-once, read-many
(WORM) version of standard CDs; the CD-E erasable products, which are to appear
shortly in the market; the Photo-CD systems, which were first marketed by Kodak for
storing images; and video CDs, which may become available over the next two years.
Several standards for videodisk systems are presently being put forward, including the
double-sided video disk (DVD) standard proposed by Toshiba and the double-layer
format proposed by Sony. Major improvements in CD technology are expected to take
place within the next few years.


Standard format:
The access time achieved by the CD format is too slow for use in computing applications.
To shorten access times, a standard format is commonly used in magnetic as well as
optical disk systems, where the disk turns at a constant angular velocity and data is
recorded on concentric tracks. Whether the inner or outer tracks are read, the disk's speed
of rotation remains constant, allowing for faster access times; however, this format
wastes valuable disk space on the outer tracks, because it requires a constant number of
bits per track, limited by the number of bits that can be supported by the innermost track.
To eliminate this waste, a "banded" format is now used where tracks of similar length are
grouped in bands, allowing the outer bands to support a much larger number of bits than
the inner bands. This, however, requires different channel codes for the different bands in
order to achieve similar bit error rates over the bands.
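The capacity penalty of the plain constant-angular-velocity layout, and the gain recovered by banding, can be illustrated with a short Python sketch. The disk geometry, densities and band count below are assumptions chosen purely for illustration.

import math

# Illustrative geometry - assumptions, not a real product specification.
inner_r, outer_r = 1.0, 2.25      # usable radius range in inches
track_density = 15_000            # tracks per inch
linear_density = 30_000           # bits per inch of track length

n_tracks = int((outer_r - inner_r) * track_density)
radii = [inner_r + i / track_density for i in range(n_tracks)]

# Standard format: every track holds only what the innermost track can.
bits_per_track = 2 * math.pi * inner_r * linear_density
standard_bits = n_tracks * bits_per_track

# Banded format: each band is limited only by its own innermost track.
n_bands = 16
band_size = n_tracks // n_bands
banded_bits = 0
for b in range(n_bands):
    band_inner = radii[b * band_size]
    banded_bits += band_size * 2 * math.pi * band_inner * linear_density

print(f"Standard-format capacity: {standard_bits / 8e9:.2f} GB")
print(f"Banded-format capacity:   {banded_bits / 8e9:.2f} GB")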
In the standard format, 12 in., 5.25 in., and 3.5 in. disk diameters are commercially
available, and 14 in. and 2.5 in. disk diameters are being investigated. The 12 in. products
(mostly WORM) provide high-capacity solutions on the order of 7 GB on a single platter
for storage of large databases, achieving areal densities exceeding 500 MB/sq. in. The
5.25 in. disks are most commonly used today and provide data capacities of 2 GB per
disk, seek times on the order of 35 to 40 ms, and data rates on the order of 2 to 5 MB/s.
They achieve an areal density of 380 MB/sq. in., and are cost-competitive. The 3.5 in.
disks presently provide one-eighth of the capacity of 5.25 in. disks, reaching only 128
MB.
Optical Storage in Hierarchical Memory Systems
During the past fifty years, many memory technologies have been developed. Despite
intense competition, several widely different approaches are currently in use: magnetic
and optical tape; hard disks, floppy disks, and disk stacks (Bell 1983); and both electronic
static random-access memory (SRAM) (Maes et al. 1989) and dynamic random-access
memory (DRAM) (Singer 1993). There are also several newer technologies now
available, such as the solid-state disk (Sugiura, Morita, and Nagasawa 1991), the Flash
Erasable Electrically Programmable Read-Only Memory (EEPROM) (Kuki 1992), and
the Redundant Array of Inexpensive Disks (RAID) (Velvet 1993) systems.
This proliferation of technologies exists because each technology has different strengths
and weaknesses in terms of its capacity, access time, data transfer rate, storage
persistence time, and cost per megabyte. No single technology can achieve maximum
performance in all these characteristics at once; modern computing systems use a

hierarchy of memories rather than a single type. The memory hierarchy approach utilizes
the strong points of each technology to create an effective memory system that
maximizes overall computer performance given a particular cost.
Hierarchy levels
In standard sequential computer architecture there are three major levels of the storage
hierarchy: primary, secondary, and tertiary.
Primary memories (cache and main): Primary memories are currently implemented in
silicon and can be classified as cache memory (as local storage within the processing
chip) and main memory (as RAM and DRAM chips located on the same board). The
access times of primary memories are comparable to the microprocessor clock cycle, but


their data capacity is limited (10 to 100 MB for main), although it has been doubling
every year.
Secondary memories: Secondary memories, such as magnetic or optical disk drives,
have significantly increased capacity (into gigabytes), with significantly lower cost per
megabyte, but the access times are on the order of 10 to 40 ms.
Tertiary (archival) memories: Tertiary memories store huge amounts of data (into
terabytes, or 10^12 bytes), but the time to access the data is on the order of minutes to
hours. Presently, archival data storage systems require large installations based on disk
farms and tapes often operated off line. Archival storage does not necessarily require
many write operations, and write-once, read-many (WORM) systems are acceptable.
Despite having the lowest cost per megabyte, archival storage is typically the most
expensive single element of modern supercomputer installations.
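The performance argument for a hierarchy can be sketched in a few lines of Python: each level is consulted in turn, and a miss falls through to the next, slower level. The hit ratios and access times below are illustrative assumptions, not measurements.

# name, access time (seconds), hit ratio - all values assumed for illustration
levels = [
    ("cache",    10e-9,  0.95),
    ("main RAM", 100e-9, 0.99),
    ("disk",     20e-3,  0.999),
    ("archive",  60.0,   1.0),    # tape/jukebox: minutes in real installations
]

effective = 0.0
p_reach = 1.0   # probability that a request falls through to this level
for name, t_access, hit_ratio in levels:
    effective += p_reach * t_access
    p_reach *= (1.0 - hit_ratio)

print(f"Effective access time: {effective * 1e9:.0f} ns")

Because the fast levels satisfy the vast majority of requests, the effective access time stays close to that of the primary memory even though the slow levels dominate capacity.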
Storage capacity versus access time
Magnetic systems: Areal density of magnetic systems is governed by the minimum
switchable area of a magnetic domain. The size of these domains is governed by the
dimensions of the magnetic heads and their distance to the active media. These domains
can be made quite small, since the magnetic heads can be miniaturized and are "flown"
right against the media (approximately 50 nm above). The access time of magnetic disk
devices is in general shorter than optical disk systems by about one order of magnitude,
because of the low inertia of these miniature heads and the faster rotation speed of the
media. This same advantage, however, is also associated with two of the main
disadvantages of magnetic storage: head crashes and non-removability. It should be
pointed out, however, that some magnetic disk products provide removability at the
expense of longer access times.
Optical systems: Until recently, interest in optical storage systems was restricted to use
for very large storage systems and backup systems, because of their robustness and
removability. Optical storage for very large storage devices employing interchangeable
and recordable media in automatic "jukeboxes" is a market traditionally outside the range
of magnetic disk drives but directly in competition with magnetic tapes. The advantage of
optical systems for this market is that they have much shorter access times than tapes.
Storage capacity versus cost

The market direction for optical disk systems can be anticipated by examining cost per
megabit as a function of system capacity, as shown in Figure B. The generally decreasing
trend seen in this graph indicates that as capacity increases, cost per megabit decreases.
The solid lines show total system cost for the three storage system types. These lines
indicate that the total cost of secondary and tertiary memory systems far exceeds the cost
of primary memory.


Fig. B. Comparison of computer memory systems in terms of cost and capacity.

The strong linear relationship shows that as capacity increases cost per megabit
decreases, but not in the same proportion. The result is that high-capacity systems have a
much higher total system cost (Call/Recall Inc.).
Benefits and applications of optical storage

Optical media is a newer technology than tape. Following are some of its advantages:

 Durability. With proper care, optical media can last a long time, depending on what
kind of optical media you choose.
 Great for archiving. Several forms of optical media are write-once read-many,
which means that when data is written to them, they cannot be reused. This is
excellent for archiving because data is preserved permanently with no possibility of
being overwritten.
 Transportability. Optical media are widely used on other platforms, including the
PC. For example, data written on a DVD-RAM can be read on a PC or any other

system with an optical device and the same file system.
 Random access. Optical media provide the capability to pinpoint a particular piece of
data stored on it, independent of the other data on the volume or the order in which
that data was stored on the volume.

While optical has many advantages, there are also some disadvantages to consider, as
follows:

 Reusability. The write-once read-many (WORM) characteristic of some optical media
makes it excellent for archiving, but it also prevents you from being able to use that
media again.


 Writing time. The server uses software compression to write compressed data to
your optical media. This process takes considerable processing unit resources and
may increase the time needed to write and restore that data.

Another option that you can use for optical storage is virtual optical storage. When you
use virtual optical storage, you create and use optical images that are stored on your disk
units.

2.2.5. CD-Technologies


CD-R
Write Once/Read Many storage (WORM) has been around since the late 1980s, and is a
type of optical drive that can be written to and read from. When data is written to a
WORM drive, physical marks are made on the media surface by a low-powered laser and
since these marks are permanent, they cannot be erased, hence write once. The
characteristics of a recordable CD were specified in the Orange Book II standard in 1990,
and Philips was first to market with a CD-R product in mid-1993. It uses the same
technology as WORM, changing the reflectivity of the organic dye layer, which replaces
the sheet of reflective aluminium in a normal CD disc. In its early days, cyanine dye and
its metal-stabilised derivatives were the de facto standard for CD-R media. Indeed, the
Orange Book, Part II, referred to the recording characteristics of cyanine-based dyes in
establishing CD-Recordable standards. Phthalocyanine dye is a newer dye that appears to
be less sensitive to degradation from ordinary light such as ultraviolet (UV), fluorescence
and sunshine. Azo dye has been used in other optical recording media and is now also
being used in CD-R. These dyes are photosensitive organic compounds, similar to those
used in making photographs. The media manufacturers use these different dyes in
combination with dye thickness, reflectivity thickness and material and groove structure
to fine tune their recording characteristics for a wide range of recording speeds, recording
power and media longevity. To recreate some of the properties of the aluminium used in
standard CDs and to protect the dye, a microscopic reflective layer - either a proprietary
silvery alloy or 24-carat gold - is coated over the dye. The use of noble metal reflectors
eliminates the risk of corrosion and oxidation.
The CD-R media manufacturers have performed extensive media longevity studies using
industry defined tests and mathematical modelling techniques, with results claiming
longevity from 70 years to over 200 years. Typically, however, they will claim an
estimated shelf life of between 5 and 10 years.
CD-RW
Just as CD-R appeared to be on the verge of becoming a consumer product, the launch of
the rewritable CD, or CD-RW, in mid-1997 posed a serious threat to its future
and provided further competition to the various superfloppy alternatives.
The result of collaboration between Hewlett-Packard, Mitsubishi Chemical Corporation,
Philips, Ricoh and Sony, CD-RW allows a user to record over old redundant data or to
delete individual files. Known as Orange Book III, CD-RW's specifications ensure
compatibility within the family of CD products, as well as forward compatibility with
DVD-ROM.
The technology behind CD-RW is optical phase-change, which in its own right is nothing
radical. However, the technology used in CD-Rewritable does not incorporate any
magnetic field like the phase-change technology used with MO technology. The media
themselves are generally distinguishable from CD-R discs by their metallic grey colour
and have the same basic structure as a CD-R disc but with significant detail differences.
A CD-RW disc's phase-change medium consists of a polycarbonate substrate, moulded
with a spiral groove for servo guidance, absolute time information and other data, on to
which a stack (usually five layers) is deposited. The recording layer is sandwiched
between dielectric layers that draw excess heat from the phase-change layer during the
writing process. In place of the CD-R disc's dye-based recording layer, CD-RW
commonly uses a crystalline compound made up of a mix of silver, indium, antimony and
tellurium. This rather exotic mix has a very special property: when it's heated to one


temperature and cooled it becomes crystalline, but if it's heated to a higher temperature,
when it cools down again it becomes amorphous. The crystalline areas allow the
metalised layer to reflect the laser better while the non-crystalline portion absorbs the
laser beam, so it is not reflected.
In order to achieve these effects in the recording layer, the CD-Rewritable recorder uses
three different laser powers:
 the highest laser power, which is called "Write Power", creates a non-crystalline
(absorptive) state on the recording layer
 the middle power, also known as "Erase Power", melts the recording layer and
converts it to a reflective crystalline state
 the lowest power, which is "Read Power", does not alter the state of the recording
layer, so it can be used for reading the data.
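The behaviour of the three powers can be summarized in a tiny Python sketch; the state names and function are purely illustrative.

# Minimal model of CD-RW phase-change recording: three laser powers map to
# actions on one cell of the recording layer ("crystalline" reflects the
# readout beam, "amorphous" absorbs it).
def apply_laser(cell_state: str, power: str) -> str:
    if power == "write":    # highest power: melt and quench -> amorphous
        return "amorphous"
    if power == "erase":    # middle power: anneal -> crystalline
        return "crystalline"
    if power == "read":     # lowest power: state is left unchanged
        return cell_state
    raise ValueError(f"unknown laser power: {power}")

cell = "crystalline"                 # a blank disc is crystalline (reflective)
cell = apply_laser(cell, "write")    # record a mark
print(cell)                          # -> amorphous
cell = apply_laser(cell, "erase")    # erase back to the reflective state
print(cell)                          # -> crystalline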
2.2.6. DVD
The compact disc (CD) is surely a major technological innovation of our era. Beginning
as a pure, high-quality sound reproduction system, it rapidly evolved into an entire family
of systems, with applications covering the entire landscape of data storage and
distribution. When Sony introduced the CD-ROM drive in 1987, it was an ideal platform
for all software makers who could now deliver applications on one mass-produced disc
rather than a dozen or more floppy diskettes. It also opened new possibilities for storing the
most vital contents (Sound and Video) of the multimedia age.
Multimedia computer applications strive for increased realism, with more
full-screen, high-quality video, 3D animations, and multimedia hi-fi audio. The resulting
demand for storage capacity exceeded the capacity of existing CD-ROMs and began to
be measured in gigabytes. The result was the emergence of second-generation optical
data storage devices in 1995. A new high-capacity, universally applicable optical disc
initially called the digital videodisc, and eventually the digital versatile disc or DVD, was
born.
DVDs have the potential to store more than 17 gigabytes of data, which is more than 25
times the capacity of CD-ROMs. This huge capacity can be used to store up to nine hours
of studio quality video and multichannel surround-sound audio, interactive multimedia
computer programs, 30 hours of CD-quality audio, and just about everything that can be
represented as digital data.

Some Basics:
A DVD looks like an ordinary CD: It is a silvery platter, 4.75 inches in diameter (the
same as a CD-ROM) and about 0.05 inches thick, with a hole in the center. Unlike
conventional CDs, the DVD is comprised of two platters cemented together, each with a


thickness of 0.6mm. Each of these platters can be a complete disc, recordable on both
sides. The resultant sandwich thus has two layers per side, or four separate recording
surfaces. A DVD is therefore four discs in one, with separate schemes for storing 4.7 to
17 GB of data on a single disc. By taking advantage of the media layering and using both
sides of the disc platter, four capacity levels (see Figure) are supported.
(Figure compares the pit size of an audio CD to that of a DVD.)
Data is recorded on a DVD in a spiral trail of tiny pits and the discs are read using a laser
beam, just like on a CD. But the similarity ends here; the tracks on a DVD are placed
closer together, thereby allowing more tracks per disc. The DVD track pitch (the distance
between two adjacent tracks) is reduced to 0.74 microns, less than half of the CD's 1.6
microns. The pits, in which data is stored, are also much smaller, allowing more pits per
track. The minimum pit length of a single layer DVD is 0.4 microns, compared to 0.834
microns for a CD. With the number of pits equating to capacity levels, a DVD's reduced
track pitch and pit size create four times as many pits as a CD's.
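A quick calculation with the figures just quoted bears this out; the short Python snippet below multiplies the track-pitch gain by the pit-length gain.

# Density gain from DVD's tighter geometry, using the figures quoted above.
cd_track_pitch, dvd_track_pitch = 1.6, 0.74   # microns
cd_min_pit, dvd_min_pit = 0.834, 0.4          # microns

gain = (cd_track_pitch / dvd_track_pitch) * (cd_min_pit / dvd_min_pit)
print(f"Approximate density gain: {gain:.1f}x")   # -> about 4.5x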
To read these tightly packed discs, a laser with a shorter wavelength is required, as are
more accurate aiming and focusing mechanisms. CD drives use a laser beam with a
wavelength of 780nm (which falls in the infrared region), whereas DVD drives use a
635nm or 650nm red light laser to read data. The reduction in the wavelength of the laser
beam is what has made DVD technology possible.

(Figure shows a two layer DVD).


The focusing mechanism of a DVD drive allows information to be scanned from more
than one layer of a DVD, simply by changing the focus of the read laser. Instead of using
one opaque reflective layer, it’s possible to use a translucent layer with an opaque
reflective layer at the back, which can carry more data. The second layer cannot be quite
as dense as the first, so this doesn't quite double the capacity, but it does enable a disc to
deliver 8.5GB of data without having to be removed from the drive and turned over.
An interesting feature of DVDs is that the second data layer of a disc can be read from
the inside out, as well as from the outside in. In standard density CDs, the information is
always stored near the hub of the disc first. The same is true for single- and dual-layer
DVDs, but the second layer of each disc can contain data recorded 'backwards', in a
reverse spiral track.
With this feature it takes only an instant to refocus a lens from one reflective layer to
another. A single-layer CD, on the other hand, stores all data in a single spiral track, and


it takes longer to relocate the read head mechanism to another location or file on the same
surface.

Having understood the basic structure of a DVD, let us take a look at the fundamental
functioning of DVD-ROM drives.
These drives can read data from a DVD or CD but can't make any changes to it. As far
as physical appearance goes, there is little to distinguish a DVD-ROM drive from an
ordinary CD-ROM drive; the only giveaway is the DVD logo on the front. Inside the
drive, too, there are more similarities than differences. The interface is ATAPI (also
called IDE in common parlance) or SCSI, and the transport is much like any other
CD-ROM drive's. But whereas in a CD-ROM the data is recorded near the top surface of
the disc, the data layer in a DVD is right in the middle, so that the disc can be double
sided. The laser is also different, having a pair of lenses on a swivel: one to focus the
beam on to the DVD data layers and the other for reading ordinary CDs.
DVD-ROM drives spin the disc a lot slower than their CD-ROM counterparts. However,
since the data is packed much closer together, their throughput is substantially better than
that of a CD-ROM drive at an equivalent spin speed. While a 1X CD-ROM drive has a
maximum data rate of only 150KBps, a 1X DVD-ROM drive can transfer data at 1,250
KBps, which is a mite higher than the speed of an 8X CD-ROM drive.
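A small Python sketch converts between the two speed ratings using the base rates quoted above; the helper function name is illustrative.

# Base transfer rates quoted above: 1X CD-ROM = 150 KBps, 1X DVD-ROM = 1,250 KBps.
CD_1X_KBPS = 150
DVD_1X_KBPS = 1250

def dvd_speed_as_cd_equivalent(dvd_x: float) -> float:
    """Return the CD-ROM 'X' rating with the same throughput as a dvd_x drive."""
    return dvd_x * DVD_1X_KBPS / CD_1X_KBPS

print(f"1X DVD ~ {dvd_speed_as_cd_equivalent(1):.1f}X CD")   # -> about 8.3X
print(f"6X DVD ~ {dvd_speed_as_cd_equivalent(6):.1f}X CD")   # -> about 50X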
DVD-ROM drives became available in early 1997, and these early 1X devices were also
capable of reading CD-ROMs at 12X - sufficient for full screen video playback. As with
CD-ROMs, higher speed DVD drives appeared as the technology matured. By the
beginning of 1998, multi-speed DVD-ROM drives had already hit the market. These
drives were capable of reading DVD media at double their initial speed, producing a
sustained transfer rate of 2,700KBps, and spinning CDs at 24X. By the end of the year,
DVD read performance had been increased to 5X. Almost a year later, performance had
improved to 6X (8,100KBps) reading of DVD media and 32X reading of CD-ROMs.


Universal Disc Format:


A major breakthrough of DVD technology is that it has brought all conceivable
applications of the CD for data, video, audio, or a mix of all three within a single physical
file structure called the Universal Disc Format (UDF). Promoted by the Optical Storage
Technology Association (OSTA), the UDF file structure ensures that any file format can
be accessed by any DVD drive or consumer video player. It also allows interfacing with
standard operating systems since it supports CD standard ISO 9660 compatibility. UDF
overcomes the incompatibility problems from which CD technology suffered, where the
standard had to be constantly rewritten each time a new application like multimedia,
interactivity, or video emerged.
Since UDF wasn't supported by Windows until Microsoft shipped Windows 98, DVD
vendors were forced to use an interim format called UDF Bridge, a hybrid of UDF and
ISO 9660. Windows 95 OSR2 supports UDF Bridge, but earlier versions do not. To be
compatible with Windows 95 versions previous to OSR2, DVD vendors therefore had to
provide UDF Bridge support along with their hardware.
DVD Formats:
Physical Format
By physical format, we mean the physical characteristics of DVD discs. There are four
types of DVD disc formats:
DVD-5: a single-sided, single-layered DVD with 4.7 GB storage capacity and a
video playing time of 133 minutes.
DVD-9: a single-sided, dual-layered DVD with 8.5 GB storage capacity and a
video playing time of 240 minutes.
DVD-10: a double-sided, single-layered DVD with 9.4 GB storage capacity and a
video playing time of 266 minutes.
DVD-18: a double-sided, dual-layered DVD with 17 GB storage capacity and a
video playing time of 480 minutes.
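For reference, the same four formats can be captured as a small lookup table in Python; the dictionary layout is an illustrative convenience, with the values taken from the list above.

# DVD physical formats as listed above.
DVD_FORMATS = {
    "DVD-5":  {"sides": 1, "layers": 1, "capacity_gb": 4.7,  "video_min": 133},
    "DVD-9":  {"sides": 1, "layers": 2, "capacity_gb": 8.5,  "video_min": 240},
    "DVD-10": {"sides": 2, "layers": 1, "capacity_gb": 9.4,  "video_min": 266},
    "DVD-18": {"sides": 2, "layers": 2, "capacity_gb": 17.0, "video_min": 480},
}

for name, spec in DVD_FORMATS.items():
    print(f"{name}: {spec['capacity_gb']} GB, about {spec['video_min']} min of video")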
Application formats:
While the physical format describes the media on which the data is stored, application
formats describe what kind of data - software, video, or music - is stored, and how. These
include DVD-Video, DVD-Audio and, of course, plain data.
DVD-Audio: DVD-Audio provides higher quality audio storage than what is available
on CDs. It provides Dolby Digital AC-3 5.1-channel surround sound.
DVD-Video: DVD has the capability to produce near-studio-quality video using high-
quality MPEG-2 video compression. High quality video combined with surround-sound
audio can give a true home theatre experience. In addition, the DVD format defines a
way to have multiple languages in movies along with subtitles. DVD-Video players
available commercially may or may not play DVD-Audio.


DVD-ROM: DVD-ROM discs are similar to CD-ROM discs but store data at a much
higher density, giving greater capacity than CD-ROM. Like CD-ROM, they store
computer files and can only be read, not written to. DVD-ROMs can be read in DVD
video players and computer DVD drives.
There are five recordable versions of DVD:
 DVD-R for General
 DVD-R for Authoring
 DVD-RAM
 DVD-RW, and
 DVD+RW.
The DVD-R (DVD Recordable) followed hard on the heels of the DVD-ROM,
appearing in the autumn of 1997 with an initial capacity of 3.95GB per side - a DVD-R
does not quite have the capacity of a DVD-ROM. It uses organic dye polymer technology
similar to the one used in CD-Rs and is compatible with almost all DVD drives.
An early release of the DVD-R was vital to the development of DVD-ROM titles, since
software developers needed a simple and relatively cheap way of producing test discs
before going into mass production. Its capacity was soon extended to 4.7GB, which was
crucial for desktop DVD video production. However, the DVD-R couldn't gain much
ground because the more versatile rewritable DVD-RAM soon followed it.
DVD-RAM:
DVD-RAM technology combines the many features of DVD with enhanced
rewriteability. Owing to its large storage capacity, DVD-RAM is perfectly suited to a
range of applications such as data backup, document archiving, and self-made multimedia
works and presentations. DVD-RAM allows users to record and re-record from 2.6GB to
5.2GB of data on one disc. DVD-RAM drives use phase-change technology with some
magneto-optical features thrown in, rather than the pure optical technology of CD/DVD
discs. A 'land groove' format allows signals to be recorded both on the grooves formed
on the disc and on the lands between the grooves. The grooves and pre-embossed lands
are molded into the disc during manufacturing. A laser beam heats the inner surface of
the disc and charges it magnetically, allowing the data to be written and rewritten
virtually millions of times.
DVD-RW
DVD-RW discs use phase-change technology for reading, writing and erasing
information. A 650nm wavelength laser beam heats a phase-change alloy to change it
between either crystalline (reflective) or amorphous (dark, non-reflective) conditions,
depending on the temperature level and subsequent rate of cooling. The resulting
difference between the recorded dark spots and the erased, reflective areas between the
spots is how a player or drive can discern and reproduce stored information.
DVD-RW media uses the same physical addressing scheme as DVD-R media. During
recording, the drive's laser follows a microscopic groove to ensure consistent spacing of
data in a spiral track. The walls of the microscopic groove are modulated in a consistent
sinusoidal pattern so that a drive can read it and compare it to an oscillator for precise
rotation of the disc. This modulated pattern is called a "wobble groove", because the
walls of the groove appear to wobble from side to side. This signal is used only during
recording, and has no effect on the playback process. Among the DVD family of formats,
only recordable media use wobble grooves.
+RW:
This rewritable DVD format was born out of the competition between CD originators
Sony and Philips, and the principal DVD protagonists - Hitachi, Matsushita Electric and
Toshiba. Unsatisfied with DVD-RAM specifications, Philips and Sony began work on
drives which were originally called DVD+RW and later rechristened as +RW under
pressure from the DVD Forum. This is a rewritable format, based on DVD and CD-RW
technology.
+RW drives read all previous CD formats, and store 3GB of data on proprietary discs.
Manufacturers claim that the drives will have a sustained data transfer rate of 1.7 MBps,
as opposed to the 1.35 MBps of DVD-RAM, and will offer a better access time than
DVD-RAM drives. +RW backers believe that their specs are better suited for some
applications. For instance, +RW drives can easily be modified to create discs readable in
any DVD-ROM drive.
DVD+R
In October 2003, Philips Electronics and Mitsubishi Kagaku Media (better known by its
Verbatim brand name) demonstrated their new dual-layer DVD recordable technology at
the Ceatec Japan 2003 exhibition. The new technology virtually doubles data storage
capacity on DVD+R recordable discs from 4.7GB to 8.5GB, while remaining compatible
with existing DVD Video players and DVD-ROM drives.
The dual-layer DVD+R system uses two thin embedded organic dye films for data
storage separated by a spacer layer. Heating with a focused laser beam irreversibly
modifies the physical and chemical structure of each layer such that the modified areas
have different optical properties to those of their unmodified surroundings. This causes a
variation in reflectivity as the disc rotates to provide a read-out signal as with
commercially pressed read-only discs.
The following table summarizes the read/write compatibility of the various formats.
Some of the compatibility questions with regard to DVD+RW will remain uncertain until
product actually reaches the marketplace. A "Yes" means that it is usual for the relevant
drive unit type to handle the associated disc format; it does not mean that all such units
do. A "No" means that the relevant drive unit type either doesn't or rarely handles the
associated disc format:
                         Type of DVD Unit
Disc        DVD Player   DVD-R(G)    DVD-R(A)    DVD-RAM     DVD-RW      DVD+RW
Format      R     W      R     W     R     W     R     W     R     W     R     W
DVD-ROM     Yes   No     Yes   No    Yes   No    Yes   No    Yes   No    Yes   No
DVD-R(G)    Yes   No     Yes   Yes   Yes   No    Yes   No    Yes   Yes   Yes   No
DVD-R(A)    Yes   No     Yes   No    Yes   Yes   Yes   No    Yes   No    Yes   No
DVD-RAM     No    No     No    No    No    No    Yes   Yes   No    No    No    No
DVD-RW      Yes   No     Yes   Yes   Yes   No    Yes   No    Yes   Yes   Yes   No
DVD+RW      Yes   No     Yes   Yes   Yes   No    No    No    Yes   No    Yes   Yes
CD-R        No    No     No    No    No    No    Yes   No    Yes   Yes   Yes   Yes
CD-RW       No    No     No    No    No    No    Yes   No    Yes   Yes   Yes   Yes
DVD Regions and its Intricacies:
Soon after the DVD format was standardized worldwide, the movie industry divided the
world into six regions. These are:
Region No.   Region
1            USA and Canada
2            Europe, Near East, South Africa and Japan
3            South East Asia
4            Australia, Middle and South America
5            Africa, Asia and Eastern Europe
6            The People's Republic of China
7            Reserved
8            Special international venues (airplanes, cruise ships, etc.)
This was done mainly to stop the movement of movies across boundaries. Earlier, PC
DVD-ROM manufacturers used to make region-free DVD drives that could play DVDs
from any region. Such drives were called RPC-1 drives. But after January 1, 2000, this
changed; the new RPC-2 drives were region locked. You can only change the region five
times, after which your drive is locked to the last selected region.
For the region protection to work, the disc itself must be coded to a specific region
(which most discs are), and then either the DVD drive or the playback software must
match the disc's code to their own code for playback to work. If the drive itself is locked,
the software or hardware decoder will rely on the drive to confirm the region match. If
the drive is region free, then the decoders try to enforce the region protection.
If your drive is set to a specific region, you will be unable to play a disc from a different
region. This cannot be bypassed without replacing the checking mechanism within the
drive itself, which can only be done with a firmware update. However, there is software
that helps the drive bypass this protection; one such program is DVD Region Free.
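A simplified Python model of the RPC-2 behaviour described above might look like the sketch below; the class and method names are illustrative, and real drives enforce this in firmware, not in application code.

class Rpc2Drive:
    MAX_CHANGES = 5   # region can be changed only five times

    def __init__(self, region: int):
        self.region = region
        self.changes_left = self.MAX_CHANGES

    def set_region(self, region: int) -> bool:
        if region == self.region:
            return True
        if self.changes_left == 0:
            return False          # locked to the last selected region
        self.region = region
        self.changes_left -= 1
        return True

    def can_play(self, disc_region: int) -> bool:
        return disc_region == self.region

drive = Rpc2Drive(region=5)
print(drive.can_play(1))          # False: region-1 disc, region-5 drive
print(drive.set_region(1))        # True: one of the five changes is used up
print(drive.can_play(1))          # True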
Future Storage
The rate of advancement in storage technology has been truly amazing. From the first
instance of digital storage with IBM's 5MB RAMAC hard disk drive, invented in 1956,
all the way to the mammoth 180 GB hard disk drives available today, there has been a
36,000-fold increase (yes, thirty-six thousand!) in storage capacities over the last 50-odd
years.
However, we are fast approaching the physical limit for storing information on media
such as the magnetic platters of hard disks or the chemical layers in optical devices such
as CDs and DVDs. With the promise of tomorrow's operating systems incorporating
stunning graphical interfaces that offer truly immersive virtual reality, and next-generation
games that will blur the line between fiction and reality, storage demands will quickly
outgrow what can be stored on hard disks today.

The Operational Concept behind Millipede
The Millipede chip is created using an array of very tiny cantilevers to create almost
atomic-sized indentations in a plastic substrate that is used as the recording material. This
array works in a massively parallel fashion where a bank of cantilevers accesses or
creates information.
Before reading or writing information, the polymer-based medium, which is just about 50
nanometers thick, is positioned beneath the cantilever array. This medium is mounted on
a magnetically driven scanner that can move in three dimensions. During read-write
operations, the scanner moves the medium along the x-y axes while the cantilevers
actuate and create indentations on the recording surface. Using this process, and with a
single cantilever design, researchers have managed to achieve a storage density of an astounding 60 to 80 GB per


square centimeter. Also, this substrate can be 'erased' and data can be rewritten onto it
repeatedly. This is achieved by momentarily heating the polymer to a temperature of
150°C so that the surface is effectively smoothened and ready for rewrite. However,
individual bits of information cannot be erased; only larger sections of the polymer
surface can be cleared. An electron microscope image of one of the Millipede cantilevers
shows that the tip of the cantilever head is about 50 Angstroms wide - that's just a few
atoms clustered together.
One of the most obvious advantages of this technology is the fact that very large
storage densities can be achieved in very small areas. Lower power consumption makes it
ideal for mobile applications such as handheld computers and cellular phones; your next-
generation cellular phone would be able to hold a gigabyte of multimedia content and
contact information! The main hurdle that lies in the path of commercialization of this
technology is the fabrication of the controllers that go into these chips.
Blue laser, Blu-ray or BD:
BD is a joint effort by nine consumer electronics companies, namely Hitachi, LG,
Matsushita (Panasonic), Pioneer, Philips, Samsung, Sharp, Sony, and Thomson. This
technology makes it possible to record up to 2-3 hours of HDTV on a 27 GB disk.
Such high capacities have been made possible by using a blue laser (hence the name
Blu-ray) instead of the regular red laser used in CDs/DVDs. The blue laser has a shorter
wavelength of 405 nanometers, compared to the 650 nanometers of the red laser. This
makes it easier to focus the laser beam with more precision, thus making it possible to
hold more data on the disk.
The disk features a data transfer rate of 36 MBps and will be compatible with the
prevailing optical disk technologies, so a BD drive will be able to play back CDs and
DVDs. An interesting feature of this technology is that it can simultaneously record from
TV and play pre-recorded video from the same disk.
Another exciting form of Blu-ray disk is being developed along the lines of the Sony
Minidisk, which is fairly popular today. Philips has already showcased a 30mm re-
writable disk, code-named SFFO (Small Form Factor Optical Storage), based on Blu-ray
technology, that can hold up to one GB of data. The company plans to use these disks
in mobile phones and PDAs instead of existing memory cards. The showcased drive
measures just 5.6 x 3.4 x 0.75 cm in size.
Talking of miniature storage, Iomega has also announced a 1.5 GB capacity magnetic
digital capture technology (DCT) disk. It is a small form factor disk that weighs just 9
grams and is the size of a small coin. It comes in its own stainless steel casing to protect
the data from being damaged.
If it is simply a question of using laser light with a smaller wavelength, you might ask
why this wasn't done before. The reason is that the materials used to generate blue lasers
have a relatively short life span compared to those used for red lasers. While blue lasers
are still in the research phase, there are three methods that are used to generate them:
Zinc Selenide (ZnSe): The initial method for implementing blue lasers involved the use
of Zinc Selenide to fabricate the diodes that generate blue lasers. However, this material
has a relatively short life span, and its power requirements make it economically
unsuitable for commercial implementation. Also, these lasers have wavelengths ranging
from 460 to 520nm, putting them at the end of the blue and closer to the green light band
of the spectrum.
Gallium Nitride (GaN): This material has proved to be very successful in the creation of
blue lasers and has generated wavelengths as low as 370 nm with relatively high
reliability. Most of the work in blue lasers today is based on this material.
Second Harmonic Generation lasers: These lasers are relatively new on the blue laser
scene but have exhibited very high levels of reliability. Through this intelligent method,
the frequency of a given laser is doubled (that is, the wavelength is halved) and laser light
within the blue spectrum is generated. This is done through an apparatus called a
Distributed Bragg Reflector (DBR) where, for example, the frequency of an infrared laser
with a wavelength of 850 nm is doubled, resulting in a blue laser with a wavelength of
425 nm.
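The arithmetic behind this method is simple enough to show directly; a one-line Python helper (illustrative) reproduces the 850 nm to 425 nm example.

# Second harmonic generation: doubling the frequency halves the wavelength.
def second_harmonic(wavelength_nm: float) -> float:
    return wavelength_nm / 2

print(second_harmonic(850))   # -> 425.0 nm: infrared in, blue out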
It will be some time before blue laser technology becomes commercially viable. The
major hindrance to this technology is the cost of implementation: a Blu-ray device
available today would cost about Rs 2 lakh! The second hurdle is that of reliability: red
lasers that are used in all CD-ROM and DVD-ROM drives today have a life cycle of
about 10,000 hours. Now compare that to the meager hundreds of hours that Gallium
Nitride-based blue lasers last using today's technology! However, it is just a matter of
time before these issues are addressed. Scientists predict that in just a couple of years
nearly all new optical storage devices will be based on blue lasers.
Fluorescent Multi-layer Disc:
A DVD can record data on a maximum of two layers, since an increase in the number of
layers in the disk increases the interference and the data cannot be read properly. A
company called Constellation 3D developed a technology by which data could be
recorded on up to 20 layers in a disk. Instead of using a reflective surface, the FMD
technology used a disk that appears transparent to the human eye, built from layers of
fluorescent dyes. This technology was also red-laser-based and thus compatible with
legacy CD/DVD media.

2.2.5 Input Devices

Types of Input Devices

 Traditional keyboards: standard computer keyboards; specialty keyboards and
terminals (dumb terminals, intelligent terminals such as ATMs and POS terminals,
Internet terminals).
 Pointing devices: mice, trackballs, pointing sticks, touch pads; touch screens;
pen-based computer systems, light pens, digitizers (digitizing tablets).
 Source data-entry devices: scanning devices (imaging systems, bar-code readers,
mark- and character-recognition devices - MICR, OMR, OCR - and fax machines);
audio-input devices; video-input devices; digital cameras; voice-recognition systems;
sensors; radio-frequency identification; human-biology input devices.
Peripheral requirements for MM development - Input Devices:
Key devices for multimedia input.
 Keyboard and OCR for text.
 Digital cameras, scanners for graphics.
 Midi keyboards and microphones for sound.
 Video cameras, CD-ROMs, and frame grabbers for video.
 Mice, trackballs, joysticks, virtual reality gloves and wands for spatial data.
Keyboard:
These are used for textual input. Pressing a key on a keyboard closes a circuit
corresponding to the key, sending a unique code to the CPU of the computer.
MIDI keyboards are used along with microphones to input original sounds. Microphones
have a diaphragm that vibrates in response to sound waves. The vibrations modulate a
continuous electric current analogous to the sound waves. The modulated current can be
digitized and stored in a standardized format for audio data, such as .WAV files. The
microphone plugs into a sound input board.
Mice, trackballs, joysticks, drawing tablets:
These devices are used to enter positional data in 2D or 3D (latitude, longitude, and
altitude) from a standard reference point. The common methodology is to define a point
on the computer screen and interact with respect to the screen co-ordinates.


The mouse is a pointing device with a roller on its base, about the size of a normal cake
of bath soap. When a mouse rolls on a flat surface, the cursor on the screen also moves in
the direction of the mouse's movement. Movement of the mouse across a flat surface
causes the roller to move, and potentiometers coupled to the roller sense the relative
movements. This motion is then converted to digital values that determine the magnitude
and direction of the mouse's movement. Movement of the mouse is tracked by software,
which can also set the tracking speed.
The trackball and drawing tablets work the same way as the mouse.

Joystick
The joystick is a device that lets the user move an object on the screen. Children can play
with computers in a simple way by using a joystick (or a tracker ball). While playing
certain games, the user needs to move certain object(s) quickly across the screen. Though
this can be done by pressing key(s) on the keyboard, it is not convenient for small
children. A joystick makes it much easier for them and provides better control.
A joystick is a stick set in two crossed grooves and can be moved left or right, forward or
backward. The movements of the stick are sensed by a potentiometer. As the stick is
moved around, the movements are translated into binary instructions with the help of
electrical contacts in its base.
A joystick is generally used to control the velocity of the screen cursor movement rather
than its absolute position.


The tracker ball also does the same thing but is round in shape. Both the joystick and the
tracker ball allow you to move objects around the screen easily.
Multimedia software should be able to determine the positional information as well as the
signal context such as the mouse press.


Video Camera:
A standard camera contains photosensitive cells, scanning one frame after another. The
output of the cells gets recorded as an analog stream of colors, or is sent to digitizing
circuitry to generate a stream of digital codes.
Video input cards are required to use a video camera to input a video stream into the
computer. The card digitizes the analog signal from the camera. The output can be sent
to a file for storage, the CPU for processing, or a monitor for display (or all of them).
Frame grabber:
A frame grabber allows the capture of a single frame of data from a video stream. It does
not have as good a resolution as a still camera. Typical frame grabbers process 30 frames
per second for real-time performance.
2.2.6 Touch Screens
Touch Screens:
A touch screen is an intuitive computer input device that works by simply touching the
display screen, either by a finger, or with a stylus, rather than typing on a keyboard or
pointing with a mouse. Computers with touch screens have a smaller footprint, and can
be mounted in smaller spaces; they have fewer movable parts, and can be sealed. Touch
screens may be built in, or added on. Add-on touch screens are external frames with a
clear see-through touch screen, which mount onto the monitor bezel and have a controller
built into their frame. Built-in touch screens are internal, heavy-duty touch screens
mounted directly onto the CRT tube.
The touch screen interface - whereby users navigate a computer system by touching icons
or links on the screen itself - is the most simple, intuitive, and easiest to learn of all PC
input devices, and is fast becoming the interface of choice for a wide variety of
applications, such as:
 Public Information Systems: many people who have little or no computing
experience use information kiosks, tourism displays, and other electronic displays.
The user-friendly touch screen interface can be less intimidating and easier to use
than other input devices, especially for novice users, making information accessible
to the widest possible audience.
 Restaurant/POS Systems: Time is money, especially in a fast-paced restaurant or
retail environment. Because touch screen systems are easy to use, overall training
time for new employees can be reduced. And work can get done faster, because
employees can simply touch the screen to perform tasks, rather than entering
complex keystrokes or commands.
 Customer Self-Service: In today's fast-paced world, waiting in line is one of the few
things that has yet to speed up. Self-service touch screen terminals can be used to
improve customer service at busy stores, fast service restaurants, transportation hubs,
and more. Customers can quickly place their own orders or check themselves in or
out, saving them time and decreasing wait times for other customers.
 Control / Automation Systems: The touch screen interface is useful in systems
ranging from industrial process control to home automation. By integrating the input
device with the display, valuable workspace can be saved. And with a graphical
interface, operators can monitor and control complex operations in real-time by
simply touching the screen.
 Computer Based Training: Because the touch screen interface is more user-friendly
than other input devices, overall training time for computer novices, and therefore
training expense, can be reduced. It can also help to make learning more fun and
interactive, which can lead to a more beneficial training experience for both students
and educators.
Future touch applications:
Constant innovation is improving performance across all sensor technologies. Restricted
viewing angles add a measure of privacy and security to transactions on a touch screen
system. Pen capability is an additional draw that allows for a denser touch point, and for
annotations, drawings and checklists.
The immediate future has more in store: touch systems will address medical, geophysical,
design, engineering and other 3D applications. Such applications indicate a design focus
on the potential of touch technology. The end product allows the user to perform more
tasks with his hands. Imagine sketching with charcoal in Adobe Photoshop, your hands
applying various pressures on a screen. Touch technology is advancing towards this, and
much more, at a rapid pace.
Light pen:

A light pen is also a pointing device. The light pen consists of a photocell mounted in a
pen-shaped tube. When the pen is brought in front of a picture element on the screen, the
light coming from the screen causes the photocell to respond by generating a pulse. This
electric response is transmitted to a processor that identifies the pixel (graphic point) the
light pen is pointing to. Thus, to identify a specific location, the light pen is very useful.
But the light pen provides no information when held over a blank part of the screen,
because it is a passive device with a sensor only.


The light pen is also used to draw images on the screen. With the movement of the light
pen over the screen, the lines are drawn.
2.2.7 Magnetic Card Encoders And Readers
With the increasing deployment of magnetic strip cards, the demand for less expensive
and more robust card encoding and issuing equipment has also grown.

A visual inspection of a credit card may leave the impression that the credit card has but
a single magnetic strip. In actuality, the International Organization for Standardization
(ISO) dictates the locations of three strips, a standard observed by nearly every type of
card. Each of these strips, or tracks, is recorded at a different bit density using the
character-encoding standards shown in the following table.

Common Card Formats

Track   Encoding   Density   Format   Characters   Use
1       IATA       210 BPI   Alpha    79           Name
2       ABA        75 BPI    BCD      40           Account
3       THRIFT     210 BPI   BCD      107          Uncommon

Airline customers are often greeted by name after the ticket agent swipes their credit
card. That’s because the International Air Transport Association (IATA) standard for
placing the customer’s name and account information is assigned to track one of a credit
card. A quick swipe of the card and the customer’s name becomes instantly available,
with no database query required.

Track two is written in the lingua franca of the credit card processing world as set forth
by the American Bankers Association (ABA). Nearly all credit cards and credit card
equipment around the world use track two, though there is currently a movement to
relocate their data to track one because it holds more information.

Track three was originally intended to support offline automated teller machine (ATM)
transactions. Once deployed, ATMs were quickly networked. The need to support offline
transactions quickly diminished, and with it the use of track three.
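To illustrate how reader software might split track-two data into fields, here is a hedged Python sketch. The ';PAN=...' layout follows the common ABA track-two convention (start sentinel, account number, separator, expiry and further data, end sentinel); the sample string is invented for illustration and is not real card data.

def parse_track2(raw: str) -> dict:
    # Track 2 conventionally starts with ';' and ends with '?'.
    if not (raw.startswith(";") and raw.endswith("?")):
        raise ValueError("missing track-2 sentinels")
    body = raw[1:-1]
    pan, _, rest = body.partition("=")
    return {
        "account": pan,             # primary account number
        "expiry_yymm": rest[:4],    # expiry date, YYMM
        "discretionary": rest[4:],  # service code and issuer data
    }

sample = ";4111111111111111=26051010000000000?"   # illustrative data only
print(parse_track2(sample))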
Applications of Magnetic card
Besides the common credit card, magnetic strip card use has rapidly spread to student
IDs, grocery store discount cards, copy machine user ID cards, vending machine debit
cards, library cards, etc.
Bar Code Reader
In a bar code, data is coded in the form of light and dark bars. These bar codes are
commonly used to identify merchandise in retail stores. A coding scheme known as the
Universal Product Code (UPC) is used, and the manufacturer records these codes on the
product. In a retail shop, the most popular way to read data from these codes is through a
hand-held scanner, called a bar code reader, which is flashed over the bar code. The data
that is transmitted to the computer picks up the price from the computer, updates
inventory and sales records, and enables the customer's bill to be printed out.
2.2.8 Flat Bed Scanners
A scanner is just another input device, much like a keyboard or mouse, except that it
takes its input in graphical form. These images could be photographs for retouching,
correction or use in DTP. They could be hand-drawn logos required for document
letterheads. They could even be pages of text which suitable software could read and save
as an editable text file.


The list of scanner applications is almost endless, and has resulted in products evolving to
meet specialist requirements:
 High-end drum scanners, capable of scanning both reflective art and transparencies,
from 35mm slides to 16in x 20in material, at high (10,000dpi+) resolutions
 Compact document scanners, designed exclusively for OCR and document
management
 Dedicated photo scanners, which work by moving a photo over a stationary light
source
 Slide/transparency scanners, which work by passing light through an image rather
than reflecting light off it
 Handheld scanners, for the budget end of the market or for those with little desk
space.

However, flatbed scanners are the most versatile and popular format. These are capable
of capturing color pictures, documents, pages from books and magazines, and, with the
right attachments, can even scan transparent photographic film.
Operation:
On the simplest level, a scanner is a device which converts light (which we see when we
look at something) into 0s and 1s (a computer-readable format). In other words, scanners
convert analogue data into digital data.
All scanners work on the same principle of reflectance or transmission. The image is
placed before the carriage, consisting of a light source and sensor; in the case of a digital
camera, the light source could be the sun or artificial lights. When desktop scanners were


first introduced, many manufacturers used fluorescent bulbs as light sources. While good
enough for many purposes, fluorescent bulbs have two distinct weaknesses: they rarely
emit consistent white light for long, and while they're on they emit heat which can distort
the other optical components. For these reasons, most manufacturers have moved to
"cold-cathode" bulbs. These differ from standard fluorescent bulbs in that they have no
filament. They therefore operate at much lower temperatures and, as a consequence, are
more reliable. Standard fluorescent bulbs are now found primarily on low-cost units and
older models.
By late 2000, Xenon bulbs had emerged as an alternative light source. Xenon produces a
very stable, full-spectrum light source that's both long lasting and quick to initiate.
However, xenon light sources do consume power at a higher rate than cold cathode tubes.
To direct light from the bulb to the sensors that read light values, CCD scanners use
prisms, lenses, and other optical components. Like eyeglasses and magnifying glasses,
these items can vary quite a bit in quality. A high-quality scanner will use high-quality
glass optics that are color-corrected and coated for minimum diffusion. Lower-end
models will typically skimp in this area, using plastic components to reduce costs.
The amount of light reflected by or transmitted through the image and picked up by the
sensor, is then converted to a voltage proportional to the light intensity - the brighter the
part of the image, the more light is reflected or transmitted, resulting in a higher voltage.


This analogue-to-digital conversion (ADC) is a sensitive process, and one that is
susceptible to electrical interference and noise in the system. In order to protect against
image degradation, the best scanners on the market today use an electrically isolated
analogue-to-digital converter that processes data away from the main circuitry of the
scanner. However, this introduces additional costs to the manufacturing process, so many
low-end models include integrated analogue-to-digital converters that are built into the
scanner's primary circuit board.
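The conversion step itself is easy to sketch in Python: a sensor voltage proportional to light intensity is clamped and quantized to an 8-bit sample. The 0 V (black) to 1 V (full reflection) range is an assumption made only for illustration.

def adc_8bit(voltage: float, v_max: float = 1.0) -> int:
    voltage = max(0.0, min(voltage, v_max))   # clamp to the valid range
    return round(voltage / v_max * 255)       # quantize to 0..255

for v in (0.0, 0.25, 0.5, 1.0):
    print(f"{v:.2f} V -> {adc_8bit(v)}")
# 0.00 V -> 0 (black), 0.50 V -> 128 (mid-grey), 1.00 V -> 255 (white)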
The sensor component itself is implemented using one of three different types of
technology:
 PMT (photomultiplier tube), a technology inherited from the drum scanners of
yesteryear
 CCD (charge-coupled device), the type of sensor used in desktop scanners
 CIS (contact image sensor), a newer technology which integrates scanning functions
into fewer components, allowing scanners to be more compact in size.
Scan modes
PCs represent pictures in a variety of ways - the most common methods being line art,
halftone, grayscale, and color:
Line art is the smallest of all the image formats. Since only black and white information
is stored, the computer represents black with a 1 and white with a 0. It only takes 1 bit of
data to store each dot of a black and white scanned image. Line art is most useful when
scanning text or line drawings; pictures do not scan well in line art mode.
While computers can store and show grayscale images, most printers are unable to print
different shades of gray, so they use a trick called halftoning. Halftones use patterns of
dots to fool the eye into believing it is seeing grayscale information.
Grayscale images are the simplest of images for the computer to store. Humans can
perceive about 255 different shades of gray - represented in a PC by a single byte of data
with a value from 0 to 255. A grayscale image can be thought of as equivalent to a black
and white photograph.
True color images are the largest and most complex images to store, with PCs using
8 bits (1 byte) to represent each of the color components (red, green, and blue) and
therefore 24 bits in total to represent the entire color spectrum.
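These bit depths translate directly into the size of an uncompressed scan, as the following Python sketch shows; the A4 page dimensions and the 300 dpi resolution are assumptions chosen for illustration.

# Raw (uncompressed) scan sizes at the bit depths described above.
MODES = {"line art": 1, "grayscale": 8, "true color": 24}   # bits per pixel

def scan_size_mb(width_in: float, height_in: float, dpi: int, bpp: int) -> float:
    pixels = (width_in * dpi) * (height_in * dpi)
    return pixels * bpp / 8 / 1_000_000

for mode, bpp in MODES.items():
    mb = scan_size_mb(8.27, 11.69, 300, bpp)   # A4 page at 300 dpi (assumed)
    print(f"{mode:>10}: {mb:6.1f} MB")
# line art ~1.1 MB, grayscale ~8.7 MB, true color ~26.1 MB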
File formats
The format in which a scanned image is saved can have a significant effect on file size -
and file size is an important consideration when scanning, since the high resolutions

supported by many modern scanners can result in the creation of image files as large as
30MB for an A4 page.
Windows bitmap (BMP) files are the largest, since they store the image in full color
without compression or in 256 colors with simple run-length encoding (RLE)
compression. Images to be used as Windows wallpaper have to be saved in BMP format,
but for most other cases it can be avoided.
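The run-length encoding mentioned above is simple enough to sketch in a few lines of Python; runs of identical values become (count, value) pairs, with the count capped at 255 so it fits in one byte:

def rle_encode(pixels):
    runs = []
    for px in pixels:
        if runs and runs[-1][1] == px and runs[-1][0] < 255:
            runs[-1][0] += 1              # extend the current run
        else:
            runs.append([1, px])          # start a new run
    return runs

row = [255] * 90 + [0] * 10               # a mostly-white scan line
print(rle_encode(row))                    # [[90, 255], [10, 0]]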
Tagged image file format (TIFF) files are the most flexible, since they can store images
in RGB mode for screen display, or CMYK for printing. TIFF also supports LZW
compression, which can reduce the file size significantly without any loss of quality. This is based on two techniques introduced by Jacob Ziv and Abraham Lempel in 1977 and 1978, and subsequently refined by Unisys researcher Terry Welch. LZ77 creates pointers back to repeating data, and LZ78 creates a dictionary of repeating phrases with pointers to those phrases.
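A minimal Python sketch of the LZW dictionary idea (illustrative only - real TIFF codecs add details such as variable code widths and clear codes):

def lzw_compress(data):
    dictionary = {bytes([i]): i for i in range(256)}   # single-byte phrases
    phrase, codes = b'', []
    for byte in data:
        candidate = phrase + bytes([byte])
        if candidate in dictionary:
            phrase = candidate                  # keep growing the phrase
        else:
            codes.append(dictionary[phrase])    # emit code for known phrase
            dictionary[candidate] = len(dictionary)   # learn the new phrase
            phrase = bytes([byte])
    if phrase:
        codes.append(dictionary[phrase])
    return codes

print(lzw_compress(b'ABABABABABAB'))   # 12 bytes shrink to 6 codes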
CompuServe's graphics interchange format (GIF) stores images using indexed color.
A total of 256 colors are available in each image, although what these colors are can
change from image to image. A table of RGB values for each index color is stored at the
start of the image file. GIFs tend to be smaller than most other file formats because of this
decreased color depth, making them a good choice for use in WWW-published material.
The PC Paintbrush (PCX) format has fallen into disuse, but offers a compressed format
at 24-bit color depth. The JPEG file format uses lossy compression and can achieve small
file sizes at 24-bit color depth. The level of compression can be selected - and hence the
amount of data loss - but even at the maximum quality setting JPEG loses some detail
and is therefore only really suitable for viewing images on-line. The number of levels of
compression available depends on the image editing software being used.
Unless there is a need to preserve color information from the original document, images
stored for subsequent OCR processing are best scanned in grayscale. This uses a third of
the space of an RGB color scan. An alternative is to scan in line-art mode - black and
white with no grayscales - but this often loses detail, reducing the accuracy of the
subsequent OCR process.
The table below illustrates the relative file sizes that can be achieved by the different file
formats in storing a "native" 1MB image, and also indicates the color depth supported:
File format                            Image size   No. of colors
BMP - RGB                              1MB          16.7 million
BMP - RLE                              83KB         256
GIF                                    31KB         256
JPEG - min. compression                185KB        16.7 million
JPEG - min. progressive compression    150KB        16.7 million
JPEG - max. compression                20KB         16.7 million
JPEG - max. progressive compression    16KB         16.7 million
PCX                                    189KB        16.7 million
TIFF                                   1MB          16.7 million
TIFF - LZW compression                 83KB         16.7 million

Optical Character Recognition (OCR)


OCR is used to scan and recognize text on paper, or to convert scanned pages saved as images to text. It analyses the raster image and creates an index of areas that resemble possible text fields.

Page 50
Multimedia and Its Applications

The software then attempts to recognize text characters by comparing the shape of the
scanned objects to a database of words categorized by different fonts or typefaces.
Thereafter, it groups individual characters and compares them with the words in the
dictionary that is set to use a particular language.
This step is extremely crucial for accuracy in recognition. The more comprehensive the dictionary, the more accurate the finished product. The OCR software marks certain words that it 'considers' inaccurate for you to correct manually. Finally, the OCR software uses the index it created to align the text fields as accurately as possible.
OCR accuracy also depends on the quality of the scanner, the paper used to print the text, and so on. The latest breed of OCR packages uses optimization algorithms, neural networks and even AI concepts to get this done. Using pattern-recognition techniques, the software tries to guess each character as a whole and looks at all possibilities before arriving at a hypothesis. Some OCR packages have inbuilt tools that enable them to 'learn' from the changes you make to the output.
Field-specific recognition, wherein the scanned data is automatically stored in the appropriate field in, say, a database, is also likely to become an integral part of such software in the near future.
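The template-matching idea at the heart of simple OCR can be sketched in Python; the 5x5 glyphs, templates, and threshold below are all invented for illustration, and real engines are far more elaborate:

TEMPLATES = {
    'I': ['.###.', '..#..', '..#..', '..#..', '.###.'],
    'L': ['#....', '#....', '#....', '#....', '#####'],
}

def score(glyph, template):                 # fraction of matching pixels
    return sum(g == t for gr, tr in zip(glyph, template)
                      for g, t in zip(gr, tr)) / 25.0

def recognize(glyph, threshold=0.8):
    best = max(TEMPLATES, key=lambda ch: score(glyph, TEMPLATES[ch]))
    # a low score is flagged, like the words marked for manual correction
    return best if score(glyph, TEMPLATES[best]) >= threshold else '?'

noisy_L = ['#....', '#....', '#....', '#...#', '#####']   # one wrong pixel
print(recognize(noisy_L))                   # 'L'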
2.2.9 Infrared Remotes
Infrared remotes work just like TV remotes - they require a direct line of sight between the remote and the unit. Radio frequency (RF) remotes that do not require line of sight are becoming more common and can be useful if you have employees who like to pace around the room while giving a presentation.

The Process
Pushing a button on a remote control sets in motion a series of events that causes the controlled device to carry out a command. The process works something like this:


1. You push the "volume up" button on your remote control, causing it to touch the
contact beneath it and complete the "volume up" circuit on the circuit board. The
integrated circuit detects this.
2. The integrated circuit sends the binary "volume up" command to the LED at the
front of the remote.
3. The LED sends out a series of light pulses that corresponds to the binary "volume
up" command.
One example of remote-control codes is the Sony Control-S protocol, which is used for
Sony TVs and includes the following 7-bit binary commands:

Button         Code
1              000 0000
2              000 0001
3              000 0010
4              000 0011
Channel up     001 0000
Channel down   001 0001
Power on       001 0101
Power off      010 1111
Volume up      001 0010
Volume down    001 0011

The remote signal includes more than the command for "volume up," though. It carries
several chunks of information to the receiving device, including:

 a "start" command
 the command code for "volume up"
 the device address (so the TV knows the data is intended for it)
 a "stop" command (triggered when you release the "volume up" button)

Infrared (IR) remote controls are most commonly used for devices such as TVs, VCRs,
DVDs, home theater systems, etc. Repeaters can be used to extend the range.
2.2.9 Voice Recognition Systems
These use speech or voice as input. This is a form of pattern recognition where the
spoken sound patterns are matched against previously recorded patterns.
One problem is that voice quality differs from person to person in pitch, timbre, volume, rate of speech, and accent. The user can train the computer by speaking certain words repeatedly.
The major problems facing these systems are:


 Limited vocabulary
 People use different words to convey the same meaning.
 Some sentences make sense but cannot be properly parsed.
 Accentuating a word may be important.
 Tone of speaker’s voice can alter the meaning of words.
 Cultural or language issues.
 Homonyms (see vs. sea, know vs. no).
2.2.10 Digital Camera
Digital cameras are used to capture digital images. Real images are those images that are present in nature. Digital images are the representation of real images in terms of pixels. A still image is a snapshot of a motion image, which is a sequence of images giving the impression of continuous motion.
In principle, a digital camera is similar to a traditional film-based camera. There's a viewfinder to aim it, a lens to focus the image onto a light-sensitive device, some means by which several images can be stored and removed for later use, and the whole lot is fitted into a box. In a conventional camera, light-sensitive film captures images and is used to store them after chemical development. Digital photography uses a combination of advanced image sensor technology and memory storage, which allows images to be captured in a digital format that is available instantly - with no need for a "development" process.
Although the principle may be the same as a film camera, the inner workings of a digital camera are quite different, the imaging being performed either by charge-coupled device (CCD) or CMOS (complementary metal-oxide semiconductor) sensors. Each sensor element converts light into a voltage proportional to the brightness, which is passed into an analogue-to-digital converter (ADC) that translates the fluctuations of the sensor into discrete binary code. The digital output of the ADC is sent to a digital signal processor (DSP), which adjusts contrast and detail, and compresses the image before sending it to the storage medium. The brighter the light, the higher the voltage and the brighter the resulting computer pixel. The more elements, the higher the resolution, and the greater the detail that can be captured.
This entire process is very environment-friendly. The CCD or CMOS sensor is fixed in place and can go on taking photos for the lifetime of the camera. There's no need to wind film between two spools either, which helps minimize the number of moving parts.


2.2.11 Output Hardware


Monitors:
These are the most important output devices, providing all the visual output to the user. Monitors should be designed for the highest quality image, with the least distortion.
Monitors contain a large vacuum tube with an electron gun at one end aimed at a large surface (the viewing screen) at the other end. The viewing screen is coated with chemicals (phosphors) that glow in different colors; three different phosphors are used in color screens. The source of the electron beam is the electrically negative pole, or cathode (hence the name Cathode Ray Tube, or CRT). Two different sets of primary colors are in common use - additive RGB (used by monitors) and subtractive CMY (used in printing) - either set capable of reproducing the full color spectrum.
The monitor screen is divided into individual picture elements called pixels. Each pixel is made of its own phosphor elements to give it color.
The physical size of the monitor is an important factor in the quality of a multimedia presentation. It is typically between 11 and 20 inches on the diagonal. Another important factor is the number of pixels per inch: too few pixels make the image look grainy.
Liquid crystal display (LCD): Liquid crystal display
(LCD) technology works by blocking light rather than
creating it, while light-emitting diode (LED) and gas
plasma work by lighting up display screen positions based
on the voltages at different grid intersections. LCDs require
far less energy than LED and gas plasma technologies and
are currently the primary technology for notebook and
other mobile computers. As flat-panel displays continue to
grow in screen size and improve in resolution and
affordability, they will gradually replace CRT-based
displays.
Speakers and amplifiers:
Generally, the speakers we use during the development of a project will not suffice while
presenting the project. Speakers with built in amplifiers or attached to an external
amplifier are used when making presentations.
All Macintosh computers are equipped with an in-built speaker and dedicated sound chip.


2.2.12 Projectors Printers


Projector
LCD projectors are divided into two categories: standard LCD and polysilicon LCD projectors.
Standard LCD projectors cost less and have an LCD panel that controls the three primary colors. These projectors are being slowly phased out and replaced with the newer polysilicon LCD and DLP projectors.
Polysilicon LCD projectors offer better color saturation, since they control colors through three panels, and are of a higher quality than standard LCD.
DLP, or Digital Light Processing, projectors are the newest breed of projectors available in the market. These projectors are considered to be of better quality than standard LCD projectors and are used extensively in the high-end home theatre market, as in projection TVs.
Ink Jet Printers:
Inkjet printing, like laser printing, is a non-impact method.
Ink is emitted from nozzles as they pass over a variety of
possible media, and the operation of an inkjet printer is
easy to visualize: liquid ink in various colors being squirted
at the paper to build up an image. A print head scans the
page in horizontal strips, using a motor assembly to move it
from left to right and back, as another motor assembly rolls
the paper in vertical steps. A strip of the image is printed,
and then the paper moves on, ready for the next strip. To
speed things up, the print head doesn't print just a single row of pixels in each pass, but a
vertical row of pixels at a time.
On ordinary inkjets, the print head takes about half a second to print a strip across a page.
Since A4 paper is about 8.5in wide and inkjets operate at a minimum of 300dpi, this
means there are at least 2,475 dots across the page. The print head has, therefore, about
1/5000th of a second to respond as to whether or not a dot needs printing. In the future,
fabrication advances will allow bigger print-heads with more nozzles firing at faster
frequencies, delivering native resolutions of up to 1200dpi and print speeds approaching
those of current color laser printers (3 to 4ppm in color, 12 to 14ppm in monochrome).
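That timing claim is easy to check with a little arithmetic, sketched here in Python:

dots = 300 * 8.25                  # 300 dpi across ~8.25 printable inches
time_per_dot = 0.5 / dots          # half a second to print one strip
print(int(dots), 'dots per strip')                              # 2475
print(round(time_per_dot * 1_000_000), 'microseconds per dot')  # ~202
print('about 1 /', round(1 / time_per_dot), 'of a second')      # 1/4950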
LASER PRINTER
Hewlett-Packard introduced the laser printer in 1984, based
on technology developed by Canon. It worked in a similar
way to a photocopier, the difference being the light source.
With a photocopier a page is scanned with a bright light,
while with a laser printer the light source is, not surprisingly,
a laser. After that the process is much the same, with the light
creating an electrostatic image of the page onto a charged
photoreceptor, which in turn attracts toner in the shape of an electrostatic charge.


Laser printers have a number of advantages over the rival inkjet technology. They
produce much better quality black text documents than inkjets, and they tend to be
designed more for the long haul - that is, they turn out more pages per month at a lower
cost per page than inkjets. So, if it's an office workhorse that's required, the laser printer
may be the best option. Another factor of importance to both the home and business user
is the handling of envelopes, card and other non-regular media, where lasers once again
have the edge over inkjets.
PLOTTER
Apart from printed outputs, quite a few applications require good-quality drawings and graphs. For this purpose plotters are used. There are two types of plotters - drum plotters and flat bed plotters. These plotters use either pens or inkjet technology to produce the drawing.
Drum plotter: On a drum plotter, the paper on which the drawing or the graph is to be drawn is mounted on a drum that rotates. The drum can move both in clockwise as well as anticlockwise directions. Its movements are controlled by the plotting instructions sent by the computer.
Flat bed plotter: The flat bed plotter has a stationary horizontal plotting surface on which the paper is fixed. The pen is mounted on a carriage which can move in either the X or Y direction. The pen can also move up or down. A graph-plotting program is used to move the pen to trace the desired drawing path.


2.2.13 Communication Devices


Principles of all communication
There are six principles to consider, each of which needs to be defined, either explicitly or implicitly, for a particular communication task. The principles are as follows:
a. Standards: The use of particular methods for any aspect of the communication
process (e.g. the use of a particular language or specified coding technique).
b. Protocols: Agreement on the framework for communication, which may also
include the specification of standards for this aspect.
c. Error control, redundancy and accuracy: Controlling errors requires some extra information in the communication that enables either the detection or correction of a problem. This redundant data allows recovery of the intended information from the communication, providing accurate transmission of the original information (a minimal parity-bit sketch follows this list).
d. Channel: Communication takes place between end users via a channel. There can be more than one channel in use for one communication session, or more than one mode of communication taking place on a single channel, as in the case of multimedia communication.
e. Context: The context within which a communication takes place helps to define
some aspects of other parts of the communication. For example, in a speech
communication between two strangers in a Tamil Nadu village it would be
assumed that Tamil would be the appropriate language to use.
f. Coding (Encoding and decoding): Accurate and unambiguous coding is
essential for accurate transmission of the information. This may require a number
of separate levels of coding to achieve a reliable scheme that can use the available
channels.
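A minimal Python sketch of principle (c), using a single even-parity bit as the redundant data; one flipped bit is detected, though not corrected:

def add_parity(bits):
    return bits + [sum(bits) % 2]      # make the total number of 1s even

def check(word):
    return sum(word) % 2 == 0          # True means no single-bit error seen

word = add_parity([1, 0, 1, 1, 0, 0, 1])
print(word, check(word))               # ... True
word[2] ^= 1                           # simulate noise flipping one bit
print(word, check(word))               # ... False - the error is detected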
Multimedia Communication
The process of communication has given rise to many and varied forms of transmitting information. Principally, speech and writing have been the preferred means of communication for many centuries. There are, however, many forms of communication that use images or moving pictures (video) to transfer a message. How these are used has always been a point of discussion, but there are now systems that can allow many different types of communication to be employed. The main information types are shown in the following figure, in order of size for a typical piece of information.
[Figure: the main information types - text, sound, high-quality sound, image, video - arranged by increasing complexity and typical object size]
As the complexity of each communication type increases, so does the size of typical objects of the type being considered. Text is the simplest medium to store and transmit and can contain a large amount of information in a relatively small space. The other media increase both in the complexity of the machinery needed to exploit them and in the techniques used to store and transmit the information. Video is the most complex and, potentially, the most massive of all the media.
Many multimedia applications are developed in workgroups comprising instructional
designers, writers, graphic artists, programmers, and musicians located in the same office
space or building. The workgroup members’ computers typically are connected on a local
area network (LAN). The client’s computers, however, may be thousands of miles
distant, requiring other methods for good communication.
Communication among workgroup members and with the client is essential to the
effective and accurate completion of the project. Our Postal Service mail delivery is too
slow to keep pace with most projects; courier services are better. And when you need it
immediately, an Internet connection is required. If your client and you are both connected
to the Internet, a combination of communication by e-mail and by FTP (File Transfer
Protocol) may be the most cost-effective and efficient solution for both creative
development and project management.
In the workplace, use quality equipment and software for your communications setup. The cost - in both time and money - of stable and fast networking will be returned to you.

2.2.14 Modems
A modem is a computer peripheral that allows you to connect and communicate with other computers via telephone lines.
Modems allow us to combine the power of the computer with the global reach of the telephone system. Because ordinary telephone lines cannot carry digital information, a modem changes the digital data from your computer into analog data, a format that can be carried by telephone lines. In a similar manner, the modem receiving the call changes the analog signal back into digital data that the computer can digest. This shift of digital data into analog data and back again allows two computers to speak with one another; called modulation/demodulation, this transformation of signals is how the modem received its name.
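The modulation half of that process can be sketched loosely in Python, using two audio tones to stand for 0 and 1 (frequency-shift keying); the frequencies, baud rate, and sample rate here are illustrative rather than those of any real modem standard:

import math

def modulate(bits, f0=1070.0, f1=1270.0, baud=300, rate=8000):
    per_bit = rate // baud                 # samples sent for each bit
    samples = []
    for i, bit in enumerate(bits):
        freq = f1 if bit else f0           # pick the tone for this bit
        samples += [math.sin(2 * math.pi * freq * (i * per_bit + n) / rate)
                    for n in range(per_bit)]
    return samples

wave = modulate([1, 0, 1, 1])
print(len(wave), 'audio samples for 4 bits')   # 104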
2.2.15 Cable Modems
You will agree that accessing the Internet through
normal modems, be it from office or home, is not the
easiest thing in the world to do. One of the
alternatives that are becoming popular among home
users is a cable modem.


Cable Internet means accessing the Internet through the same cable that brings TV
channels like Star, Zee, and MTV into your homes. The two main devices which make
this possible are a Cable Modem Termination System (CMTS), which has to be installed
at your cablewallah or broadband service provider’s end, and a cable modem, which has
to be installed in your home. Simply put, a cable modem is a device that lets you access
the Internet through your Cable TV (CATV) network.

2.3) Revision points


The capabilities of multimedia systems depend not only on the speed and capacity of
the CPU but also on the speed, capacity, and design of storage, input and output
technology. Storage, input, and output devices are called peripheral devices because
they are outside the main computer system unit.
 Memory and Storage devices: Relatively long-term, nonvolatile storage of data
outside the CPU and primary storage.
 CD Technologies: Optical disk systems that allow individuals and organisations to record their own CD-ROMs.
 DVD: High-capacity optical storage medium that can store full length videos and
large amounts of data.
 Mouse: Handheld input device whose movement on the desktop controls the
position of the cursor on the computer display screen
 Touch Screens: Input device technology that permits the entering or selecting of
command and data by touching the surface of a sensitized video display monitor
with a finger or a pointer.
 Magnetic Card Encoders and Readers: Input technology that translates data encoded on a card's magnetic stripe into digital codes for processing.
 Voice output device: A converter of digital output data into spoken words.
 Projectors and printers: A computer output device that provides paper hard-
copy output in the form of text or graphics.

2.4) Intext Question


1. Explain different input/output devices used in Multimedia Applications. Also
explain different components of multimedia.
2. List and explain two input and two output devices for multimedia systems.
3. What is a multimedia component? Explain any four components (except video) of
multimedia with an example of each.
4. What is digital video? Explain the use of digital video in developing multimedia
applications.
5. What is a sound card? Explain the process of sound card installation.


2.5) Summary
The principal input devices are keyboards, computer mice, touch screens, magnetic ink character recognition, optical character recognition, pen-based instruments, digital scanners, sensors, and voice input. The principal output devices are video display terminals, printers, plotters, voice output devices and microfilm. The principal forms of secondary storage are magnetic disk, optical disk, and magnetic tape. Magnetic disk permits direct access to specific records. Optical disks can store vast amounts of data compactly. CD-ROM disk systems can only be read from, but rewritable optical disk systems are becoming available.
2.6) Terminal exercises
Compare and Contrast the following in the context of Multimedia:
 CD-ROM and DVD
 Magnetic storage and Optical Storage
 Light pen and Touch screen
 Printer and Plotter
 Joystick and mouse
 Flat bed scanner and Hand held scanner
 Bar code reader and MICR

2.7) Supplementary Materials


1. http://en.wikipedia.org/wiki/Multimedia
2. http://multimedia.expert-answers.net/multimedia-glossary/en/
3. http://nrg.cs.usm.my/~tcwan/Notes/MM-BldgBlocks-I.doc
4. www.edb.utexas.edu/multimedia/PDFfolder/WEBRES~1.PDF

2.8) Assignments

A company has a number of sites, each of which has a PC equipped LAN, with the LANs
connected over a WAN. The company wishes to use the network for videoconferencing
between the PCs on the LANs. The company has asked you, as an independent technical
consultant, to advise them on the issues below. Prepare a report covering the following:
a. Definitions and descriptions of the network technology.
b. Compression issues in video conferencing.
c. Co-operative working
d. Advantages and disadvantages of a communication system of this type within this
environment.


2.9) Suggested Reading


Jenette Steemers, Richard Wise, "Multimedia: A Critical Introduction"
Boyle, T., "Design for Multimedia Learning", Prentice Hall

2.10) Learning Activities


Modern multimedia computers and games consoles often use digital signal processors (DSPs) for sound processing. Digital communications systems also employ DSPs to encode and decode signals. Integrated media DSPs are becoming more common, centralising many functions.

a) Some functions take too much processor time to be viable without using some
form of hardware acceleration. Give one example of input and one example of
output that a DSP might make possible.
b) What benefits are there for someone with a general-purpose multimedia DSP as a
core part of a computer system, other than combining many functions? What
disadvantages might there be?
c) The Bluetooth communications system is designed as a system to help replace the
cables that join devices together. However, it offers much more functionality than
a simple cable.
 Briefly describe the Bluetooth communications system.
 Give one persuasive reason for adopting Bluetooth technology.
Give one major disadvantage of Bluetooth technology.

2.11) Key words


CD Jargon
Hybrid CD: A CD that's readable by both Macs and PCs. It is created by making a copy of each file in both the ISO (for PC) and HFS (for Mac) formats.
HFS CD: Only Apple computers can read HFS CDs. To create an HFS CD, you need to use a hard disk with the HFS file system connected to a PC, and CD-writing software that supports the file system, such as Nero 6.
Mixed Mode CD: A CD that contains a data track and one or more audio tracks. The data is located in track number one, and audio in the following tracks.

UDF: A file system that is optimized to handle large data sizes, and to minimize the
changes necessary when a file is added or deleted. Windows 98 and higher versions can
write to and read from the UDF file system, without any special driver support. This is
the best format for DVD-RW drives where the data size goes into GBs.
Simulation: This is the process of testing the recording process without actually doing
so. The writing is done by sending the data to the recorder, whilst keeping the laser off.
This way, the blank CD remains intact. Simulation is used to check if the recording will
be successful.
CD Extra: A CD format that combines a music CD with a regular data CD-ROM. These
discs have audio tracks in the first part, and computer data in the second. You can play
the discs on music players, and also use them as regular data CDs.


Overburning: A technique by which you burn more data onto a CD than its specified capacity. For overburning, the CD recorder should support this feature, and the writing should be done in the 'Disc-at-once' mode. How much data can be overburned depends on the medium and the recorder.
DDCD: Expanding to Double Density CD, it allows data capacity to be doubled to 1.3GB. DDCD is made possible by a few simple modifications to the regular CD format, such as miniaturization of the track pitch and minimum pit length. DDCD is mainly targeted at the mass data backup segment. The media is not backward compatible.
DAO: In the Disc At Once method, the laser is never turned off, and hence no gap or
delay is necessary. This mode is much faster than Track At Once, but the recorder should
support it.
SAO: Session At Once is the mode used in multi-session CD writing. In this mode, a session is written without turning off the laser, just as in DAO, but the session is closed only after the data and the ToC are written into the Program Memory Area.



UNIT-III
3.0) Introduction
Multimedia tools and products depend on the ability of the computer to capture, process
and present text, pictures, audio and video. Multimedia applications offer significant
challenges to computer architecture in terms of its ability to:
 Input various data formats, including converting analog data to digital format.
Input may be as simple as typing in characters from a keyboard, scanning images
or capturing analog audio and video.
 Manipulate and edit multimedia data.
 Output multimedia data, including converting the digital version into an analog format suitable for the end user. This includes placing graphics on the screen, sending sound to speakers, or generating television signals for display on large-screen projection systems.

3.1) Objective
 To study the various types of Multimedia Authoring Tools and their applications.

3.2) Content
3.2.1 Text Editing
Using Text in Multimedia:
We generally use text for titles, headlines, menus, navigation, and content.
a. Designing with text:
From a designer's perspective, the choice of font size and the number of headlines placed on a particular screen must be related both to the complexity of the message and to its venue. If messages are part of an interactive project or web site, then we can pack a great deal of text information onto the screen before it becomes busy. We must strike a balance between too little text and too much text. If we are providing public-speaking support, the text will be keyed to a live presentation where the text accents the main message. Here we use large fonts and few words, with lots of space.
b. Choosing Text Fonts:
Selection of font is an important and difficult task. Listed below are a few design suggestions:
 Choose the most legible font instead of a decorative font for small text.
 Try to use the minimum number of faces. We can vary the weight and size of the
typeface whenever needed.
 Make sure the line spacing is pleasing to the eye.


 Keep the proportions between letters proper. Try experimenting with colors and effects.
c. Menus for Navigation:
An interactive multimedia project or website typically consists of a body of information
or content through which a user navigates by pressing a key, clicking a mouse, or pressing a touch screen. The simplest menu consists of text lists of topics. Text gives users perceptual cues about their location within the body of the content.
d. Buttons for interaction:
In multimedia buttons are objects that make things happen when they are clicked. They
are used to manifest properties such as highlighting or other visual or sound effects to
indicate that we have hit the target.
The automatic button-making tools supplied with multimedia and HTML page authoring systems are useful, but in creating our own text buttons they offer little opportunity to fine-tune the look of the text. Character and word wrap, highlighting, and inverting are automatically applied to the buttons as needed by the authoring system.
Before using a font, we must make sure it is recognized by the computer’s Operating
system. If we want to use other fonts than those installed with the Operating System, then
we need to install them first.
In most authoring platforms, it is easy to make your own buttons from bitmaps of drawn
objects. In a message passing authoring system where we can script activity when the
mouse button is up or down over an object, we can quickly replace one bitmap with
another highlighted or colored version of bitmap.
e. Fields for reading:
Fields are useful when the very purpose of your multimedia project or website is to display very large blocks of text.
 Try to print only a few paragraphs of text per page.
 Use a font which is easy to read rather than a decorative, illegible font.
 Try to display whole paragraphs on the screen.
 Avoid breaks where users must go back and forth between pages to read the entire content.
f. Portrait versus Landscape:
Traditional hard copy and printed documents in the taller than wide orientation are not
readable on a typical monitor with a wider than tall aspect ratio. The taller than wide
orientation used for printed documents is called portrait while the wider than tall
orientation normal to monitors is called landscape.
g. HTML Documents:
The standard document format used for pages on the web is called HyperText Markup Language (HTML). In an HTML document we can specify typefaces, sizes, colors, and other properties by "marking up" the text in the document with tags.
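For illustration, a small fragment of such a document; the <font> tag shown is the legacy HTML way of setting these properties (style sheets are preferred today):

<html>
  <body>
    <h1>Multimedia and Its Applications</h1>
    <p><font face="Arial" size="4" color="blue">
      The typeface, size, and color of this text are set by markup tags.
    </font></p>
  </body>
</html>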


h. Symbols and Icons:


Symbols are concentrated text in the form of stand-alone graphic constructs. Symbols
convey meaningful messages. Symbols such as familiar trashcan and hour-glass are more
properly called “icons”; they are symbolic representations of objects and processes
common to the graphical user interfaces of many computer Operating Systems.
When HyperCard was first introduced in 1987, there was a flurry of creative attempts by graphics artists to create interesting navigational symbols to alleviate the need for text. The screens were pure graphic art. But many users were frustrated because they could not get to the data right away, as they had to learn the symbols and their meanings first. So from a product design point of view it became safer to blend icons and text for a better understanding for the users.
i. Animating Text:
Animation is one of the main ways by which we can capture the viewer’s attention. For
example, we can animate bulleted text and have it “fly” onto the screen. For speakers,
highlighting important text works well as a pointing device. When there are several
points to be made, we stack key words and flash them past the viewer in a timed, automated sequence.
A powerful but inexpensive application which lets us create 3D text using both TrueType and Type 1 Adobe fonts is XAOS Tools' TypeCaster. We can also use Illustrator or FreeHand EPS (Encapsulated PostScript) outline files to create still images in 3D and then animate the results to create QuickTime movies.
j. Computers and Text:
Text looks like the easiest medium to create and the least expensive to transmit, but
there’s more to text creation than meets the eye! First, effective use of text requires good
writing, striving for conciseness and accuracy.
 Advertising wordsmiths sell product lines with a logo or tag lines of just a few words.
 Similarly, multimedia developers present text in media-rich contexts, weaving words with sounds, images, and animations.
 Design labels for multimedia title screens, menus and buttons using words with precise and powerful meanings.
 Which feedback is more powerful: "That answer was correct." or "Terrific!"?
 When is “Terrific” more appropriate or effective?
 Why is “quit” more powerful than “close”? Why does one use “out” instead?
 Why is the title of a piece especially important?
 It should clearly communicate the content.
 It should get the user interested in exploring the content
Some guidelines for writing effective script:
 Write for your audience, bearing in mind your audience’s background and interests
 Can you assume that your audience knows what the travelling salesman problem is?


 Yes, if your audience is CS faculty; no, if it’s CS undergraduates.


 When should you use a casual, idiomatic style or a formal, business-like style?
Again, it depends on your audience.
 Recommended reading for writers: The Elements of Style, by William Strunk, full of pithy advice and rules of thumb:
 Say it in the active voice: "John Holland invented genetic algorithms in the 1970s," as against "Genetic algorithms were invented by John Holland in the 1970s."
 Avoid wordiness: "computer algorithms" vs. just "algorithms".
 Avoid highfalutin phrases: "appropriately incorporated" vs. "using".
 Write and rewrite, bearing in mind that users won't read much on a screen.

3.2.2 Word Processing Tools


Word Processors:
Features of Word Processors:
 Fast: Typing text in a word processor becomes speedy, as there is no mechanical
carriage movement associated.
 Editing Features: Any type of correction (insertion, deletions, modifications etc.)
can be made easily as and when required.
 Permanent Storage: With word processors, documents can be saved as long as
desired. The saved document can be retrieved whenever desired.
 Formatting Features: The typed text can be made to appear in any form or style
(bold, italic, underline, different fonts etc). All this is possible due to formatting
features of word processors.
 Graphics: Most modern word processors provide the facility of incorporating
drawings in the documents, which enhances their usefulness.
 OLE (Object Linking and Embedding): Most modern word processors provide
facilities to link or embed objects in a document. OLE is a program-integration
technology that you can use to share information between programs through objects.
Objects are saved entities of different types like charts, equations, video clips, audio
clips, pictures etc.
 Spell check: Word processors are not only capable of checking spelling mistakes but can also suggest possible alternatives for incorrectly spelt words. Some word processors can check for grammatical mistakes and suggest alternatives or improvements.
 Mail Merge: The mail merge facility enables you to print a large number of letters/documents with more or less similar text. For example, the same invitation letter is sent to all invitees; only the names and addresses change.


Mail merge feature actually merges main document with a data source. The main
document stores the original text with data area at appropriate places. These data areas
are successively filled by the information in the data source and the merged document is
printed.
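A minimal Python sketch of this merge (the letter text and records are invented for illustration):

from string import Template

# The main document, with data areas marked by $placeholders.
main_document = Template(
    'Dear $name,\nYou are invited to the function at $city on $date.\n')

# The data source: one record per recipient.
data_source = [
    {'name': 'Anitha', 'city': 'Chennai',     'date': '5 March'},
    {'name': 'Ravi',   'city': 'Chidambaram', 'date': '7 March'},
]

for record in data_source:                    # one merged letter per record
    print(main_document.substitute(record))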

3.2.3 Painting And Drawing Tools


Graphics is one multimedia element that plays an important role in capturing user attention. Painting and drawing tools used for such creations are thus among the most widely used tools.
Microsoft Paint Brush:

The toolbox provides, among others, the following tools:
 Free-form select: selects a freeform part of the picture to move, copy or edit
 Select: selects a rectangular part of the picture to move, copy or edit
 Eraser: erases a picture or a part of it
 Fill with color: fills an area with the currently selected color
 Pick color: picks up a color from the picture or drawing
 Magnifier: magnifies a part of the picture
 Pencil: draws freeform lines
 Brush: draws using a brush with a selected shape and size
 Airbrush: draws with a spray pattern
 Text: inserts text in the drawing
 Line: draws a straight line
 Curve: draws a curved line with the selected line width
 Rectangle: draws a rectangle or square
 Polygon: draws a polygon with the selected fill style
 Ellipse: draws a circle or oval
 Rounded rectangle: draws a rounded rectangle


Let us look at some popular commercial painting and drawing tools:


Type               Tools                              Output
Painting software  Photoshop, Fireworks, Painter      Bitmap images; render fine details and effects
Drawing software   CorelDRAW, FreeHand, Illustrator,  High-resolution vector-based line art,
                   Designer, Canvas                   printed to paper
Some tools combine drawing and painting capabilities.

3.2.4 3d Modeling And Animation Tools


3D Modeling Tools:
Graphics design has now largely been taken over by 3D modeling software due to its ease of use. Consequently, the level of projects and the expectations from them have also increased. 3D is preferred due to its realistic approach (you can change the lighting, etc., to your choice).
Type                     Tool                                   Product                          Example
Modeling package         Autodesk's Discreet, StarVision's 3-D  Pre-rendered 3-D clipart         People, furniture, buildings, art, etc.
Moving view or journey   VectorWorks                            QuickTime or AVI animation file  Simple architecture, floor plans
Factors to be considered for time of completion:
1. Complexity of the drawing.
2. Number of objects used.
The main features of 3-D modeling are given below:
1. Ability to view the model in each dimension by opening multiple windows.
2. Ability to drag and drop primitive shapes into a scene and to create new objects from scratch with drawing tools.
3. Ability to include color and texture mapping, extrude features, etc.
4. Ability to add real-time effects like transparency, fog and shadowing.
5. Cameras with focal length control and manipulation of lighting effects.


Digitizing real 3D Models:


Good examples of digitized 3D models can be seen in the excellent BBC TV series "Walking with Dinosaurs". Here the series' effects artists modeled precise scale replicas of the various dinosaurs in clay. Using a variety of scanning devices (including laser scanning and digitizing arm systems) it is possible to produce extremely lifelike virtual models. However, the digital artist's tools are currently so well developed that most 3D modelling is now executed within the program, and the use of digitizing scanners appears to be waning.
Type of 3D Modelling software:
Most 3D modelling software falls into one of two types, solid or surface modelling.
a. Solid modelling:
As its name implies, solid modelling relies on working with geometrical shapes which have all the inherent characteristics of solid entities. If you cut through such an entity, it is filled with virtual material. This is particularly useful when executing subtractive Boolean functions (cutting one shape away from another).
b. Surface modelling:
Unlike a solid model, a surface model has only an outer skin. If you cut through it there is no interior material; it is hollow. This is particularly useful when editing and stitching complex surface elements together.
c. Hybrid Modelling:
A number of contemporary modelling programs are described as hybrid modelers, in that they combine both solid and surface modelling capabilities in one package. Furthermore, these applications allow conversion between solids and surfaces as required.
d. Polygonal Vs NURBS:
In addition to solid versus surface modelling, there is another way to differentiate modern 3D modelling applications. Traditional modelers tend to be polygonal programs, whereas the current trend is towards NURBS-based solutions.
e. Polygonal:
Models generated by polygonal modelers derive their geometry from triangulated surface facets, much like a geodesic dome. Each triangular plane is a function of the position of three points in space and the location of the connecting "edges". The degree of tessellation (faceting) dictates how smooth the final model and/or image will be.
f. NURBS:
NURBS is an acronym for Non-Uniform Rational B-Splines. NURBS can be 2D splines, surfaces or solids. Their real strength is in their flexibility, adjustability and lack of tessellation. Many different NURBS modelling software applications are based on a small number of core modelling engines. One common engine is the ACIS system, which incorporates NURBS technology extensively. One of the real benefits of NURBS surfaces and solids is that they are resolution independent. The tessellation of the surfaces can be decided at any time prior to export from the modelling application into the rendering application, therefore greatly speeding up the modelling process.


Regardless of whether the 3D form is derived from a polygonal or from a NURBS object, the rendered image is usually capable of being smoothed by the rendering software.
XYZ: Virtually all 3D modelling software relies on a virtual world built on the concept of space defined as positions within an axial coordinate system called XYZ. In most systems the horizontal coordinates of space are defined along the X and Y axes, with the Z-axis providing the vertical dimension. A small number of 3D programs, however, transpose the Y and Z axes, adding to confusion when exporting models from one program to another. In a typical system (let us assume one where Z implies verticality) there is a point at which the X, Y and Z axes intersect. This point is usually referred to as the origin point and is described as X0, Y0, Z0. Using a map metaphor, the X-axis runs from West to East and the Y-axis runs from South to North. The X and Y axes therefore form a plan like a flat map. Points in space which have a positive X component are to the right of the Y-axis and those with a negative component are to the left.
Similarly, points in space which have a positive Y component are to the North of the X-axis and those with a negative component are to the South. It follows, therefore, that points in space which have a positive Z component are above the XY plane and those with a negative component are below it.
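A minimal Python sketch of these conventions, including the Y/Z swap sometimes needed when moving a model between programs:

origin = (0.0, 0.0, 0.0)          # X0, Y0, Z0
point = (2.0, -3.0, 1.5)          # east of origin, south of it, above XY plane

def swap_y_z(p):
    """Convert between a Z-up and a Y-up coordinate convention."""
    x, y, z = p
    return (x, z, y)

print(swap_y_z(point))            # (2.0, 1.5, -3.0)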

3.2.5 Animation, Video And Digital Movie Tools


Animation:
Animation may be defined as the creation of moving pictures one frame at a time; the
word is also used to mean the sequences produced in this way. Throughout the twentieth
century, animation has been used for entertainment, advertising, instruction, art and
propaganda on film, and latterly on video; it is now also widely employed on the World
Wide Web and in multimedia presentations.
To see how animation works consider making a sequence of drawings or paintings on
paper, in which those elements or characters intended to change or move during the
sequence are altered or repositioned in each drawing. The changes between one drawing
and the next may be very subtle, or much more noticeable. Once the drawings are
complete, the sequence of drawings is photographed in the correct order, using a
specially adapted movie camera that can advance the film a single frame at a time, when
the film is played back, this sequence of still images is perceived in just the same way as
the sequence of frames exposed when the action has been filmed in real time: persistence
of vision causes the succession of still images to be perceived as a continuous moving
image. If you wish to convey the illusion of fast movement or change, the differences
between successive images in the sequence must be much greater than if the change is to
be gradual, or the movement slow.
Etymologically, ‘animate’ means ‘to bring to life’, which captures the essence of the
process: when played back at normal film or video speeds, the still characters, objects,

Page 70
Multimedia and Its Applications

abstract shapes, or whatever, that have been photographed in sequence, appear to come to
life.

As film is projected at 24 frames per second, drawn animation, as we have just described
it, technically requires 24 drawings for each second of film, that is, 1440 drawings for
every minute – and even more for animation made on video. In practice, animation that
does not require seamlessly smooth movement can be shot ‘on 2s’, which means that two
frames for each drawing, or whatever, are captured rather than just one. This gives an
effective frame rate of 12 frames per second for film, or 15 for NTSC video.
If animation is made solely from drawings or paintings on paper, every aspect of the
image has to be repeated for every single frame that is shot. In an effort to reduce the
enormous amount of labor this process involves, as well as in a continuing search for new
expressive possibilities, many other techniques of animation have been devised. The most
well known and widely used – at least until very recently – has been cell animation. In this method of working, those elements in a scene that might move – Tom and Jerry, for example – are drawn on sheets of transparent material known as 'cell', and laid over a background – Jerry's living room, perhaps – drawn separately. In producing a sequence, only the moving elements on the cell need to be redrawn for each frame; the fixed part of the scene need only be made once.
with changes being made to different ones between different frames to achieve a greater
complexity in the scene. To take the approach further, the background can be drawn on a

ANNAMALAI
ANNAMALAI UNIVERSITY
UNIVERSITY
long sheet, extending well beyond the bounds of a single frame, and moved between
shots behind the cells, to produces an effect of travelling through a scene. The concept
and techniques of traditional cell animation have proved particularly suitable for transfer
to the digital realm.
Largely because of the huge influence of the Walt Disney studios, where cell animation was refined to a high degree, with the use of multi-plane set-ups that added a sense of three-dimensionality to the work, cell has dominated the popular perception of animation. It was used in nearly all the major cartoon series, from Popeye to the Simpsons and beyond, as well as in many full-length feature films, from Mickey Mouse to Aladdin.


Captured Animation and Image Sequences:


The digital technology has brought new ways of creating animation, but computers can
also be used effectively in conjunction with the older methods discussed above, to
produce animation in a digital form, suitable for incorporation in multimedia productions.
Currently, preparing animation in this way -- using digital technology together with a
video camera and traditional animation methods--offers much richer expressive
possibilities to the animator working in digital media than the purely computer generated
methods we will describe later in this chapter.
Instead of recording your animation on film or video tape, a video camera (either a digital camera or an analogue camera connected through a video capture card) is connected directly to a computer, to capture each frame of animation to disk - whether it is drawn on paper or cell, constructed on a 3-D set, or made using any other technique that does not depend on actually marking the film. Instead of storing the entire data stream arriving from the camera, as you would if you were capturing live video, you only store the digital version of a single frame each time you have set up a shot correctly. Most digital video editing applications provide a facility for frame grabbing of this sort.
Premiere, for example, offers a Stop Frame command on its Capture menu. This causes a
recording window to be displayed showing the current view through the camera. You
can use this to check the shot, then press a key to capture one frame, either to a still
image file, or to be appended to an AVI or QuickTime movie sequence. You then change
your drawing, alter the position of your models, or whatever, and take another shot.
Frames that are unsatisfactory can be deleted; an option allows you to see a ghost image
of the previously captured frame, to help with alignment and making the appropriate
changes. When you have captured a set of frames that forms a sequence, you can save it as a QuickTime movie or a set of sequentially numbered image files. The latter option is useful if you want to manipulate individual images in Photoshop, for example.
For certain types of traditional animation, it is not even necessary to use a camera. If you
have made a series of drawings or paintings on paper, you can use a scanner to produce a
set of image files from them. You can also manipulate cutouts on the bed of a scanner,
almost as easily as under a camera. A film scanner will even allow you to digitize
animation made directly onto film stock. You might be able to use a digital still camera
instead of a video camera, provided it allows you to download images directly to disk. In
all of these cases you are able to work at higher resolution, and with a larger color gamut,
than is possible with a video camera.
For drawn or painted animation you can dispense with the camera and the digitization process entirely by using a graphics program to make your artwork, and save your work as a movie or as a sequence of image files.
Digital Cell and Sprite Animation:
Layers allow you to create separate parts of a still image -- for example, a person and the
background of a scene they are walking through -- so that each can be altered or moved
independently. Combining a background layer, which remains static, with one or more
animation layers, in which any changes that take place between frames are made, can
make the frames of an animated sequence. Thus, to create animation, you begin by
creating the background layer in the image for the first frame. Next, on separate layers,
you create the elements that will move; you may want to use additional static layers in
between these moving layers if you need to create an illusion of depth. After saving the
first frame, you begin the next by pasting the background layer from the first; then, you
add the other layers, incorporating the changes that are needed for your animation. In
this way, you do not need to recreate the static elements of each frame, not even using a
script.
Where the motion in animation is simple, it may only be necessary to reposition or
transform the images on some of the layers. To take a simple example, suppose we wish
to animate the movement of a planet across a background of stars. The first frame could
consist of a background layer containing the star field, and a foreground layer with an
image of our planet. To create the next frame, we would copy the two layers, and then,
using the move tool, displace the planet's image a small amount. By continuing in this
way, we could produce a sequence in which the planet moved across the background. (If
we did not want the planet to move in a straight line, it would be necessary to rotate the
image as well as displace it, to keep it tangential to the motion path). Simple motion of
this sort is ripe for automation, and we will see in a later section how After Effects can be
used to animate Photoshop layers semi-automatically.
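The planet example translates almost directly into code. A minimal sketch, assuming the Pillow imaging library is available (sizes, positions, and the output file name are all illustrative):

from PIL import Image, ImageDraw

stars = Image.new('RGB', (320, 240), 'black')            # static background
ImageDraw.Draw(stars).point([(40, 30), (200, 80), (280, 200)], fill='white')

planet = Image.new('RGBA', (40, 40), (0, 0, 0, 0))       # transparent layer
ImageDraw.Draw(planet).ellipse([0, 0, 39, 39], fill='orange')

frames = []
for i in range(24):
    frame = stars.copy()                                 # reuse background
    frame.paste(planet, (10 + i * 10, 100), planet)      # displace the planet
    frames.append(frame)

frames[0].save('planet.gif', save_all=True,              # an animated GIF
               append_images=frames[1:], duration=80, loop=0)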
Using layers as the digital equivalent of cell saves the animator time but, as we have described it, it does not affect the way in which the completed animation is stored: each frame is saved as an image file, and the sequence will later be transformed into a QuickTime movie, an animated GIF, or any other conventional representation. Yet there
is clearly a great deal of redundancy in a sequence whose frames are all built out of the
same set of elements. Possibly, when the sequence comes to be compressed, the
redundant information will be squeezed out, but compressing after the event is unlikely to
be as successful as storing the sequence in a form that exploits its redundancy in the first
place. In general terms, this would mean storing a single copy of all the static layers and
all the objects (that is, the non-transparent parts) on the other layers, together with a
description of how the moving elements are transformed between frames.
This form of animation, based on moving objects, is called sprite animation, with the
objects being referred to as sprites. Slightly more sophisticated motion can be achieved

by associating a set of images, sometimes called faces, with each sprite. This would be
suitable to create a 'walk cycle' for a humanoid character. By advancing the position of
the sprite and cycling through the faces, the character can be made to walk.
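A minimal Python sketch of a sprite with faces and spatial properties; the face names and step sizes are invented for illustration:

class Sprite:
    def __init__(self, faces, x=0, y=0):
        self.faces = faces            # one image per pose of the walk cycle
        self.x, self.y = x, y
        self.current = 0              # which face is on display

    def update(self, dx=4, dy=0):
        self.x += dx                  # advance the position...
        self.y += dy
        self.current = (self.current + 1) % len(self.faces)   # ...and cycle

walker = Sprite(['pose-a', 'pose-b', 'pose-c'])
for frame in range(6):
    print(frame, walker.x, walker.faces[walker.current])
    walker.update()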
QuickTime supports sprite tracks, which store animation in the form of a 'key frame
sample' followed by some 'override samples'. The key frame sample contains the images
for all the faces of all the sprites used in the animation, and values for the
spatial properties (position, orientation, visibility, and so on) of each sprite, as well as an
indication of which face is to be displayed. The override samples contain no image data,
only new values for the properties of any sprites that have changed in any way. They can
therefore be very small. QuickTime sprite tracks can be combined with ordinary video
and sound tracks in a movie.
We have described sprite animation as a way of storing an animated sequence, but it is
often used in a different way. Instead of storing the changes to the properties of the
sprites, the changed values can be generated dynamically by a program. Simple motion
sequences that can be described algorithmically can be held in an even more compact
form, therefore, but more interestingly, the computation of sprite properties can be made
to depend upon external events such as mouse movements and other user input. In other
words, the user can control the movement and appearance of animated objects. This way
of using sprites has been extensively used in two-dimensional computer games, but it can
also be used to provide a dynamic form of interaction in other contexts, for example
simulations.
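A sketch of that dynamic use of sprites might look like the following game-loop
fragment; the event format is hypothetical.

def update_sprite(sprite, event):
    # Compute new sprite properties from an external event instead of stored samples.
    if event["type"] == "mouse-move":
        sprite["pos"] = event["pos"]               # the sprite follows the pointer
    elif event["type"] == "click":
        sprite["visible"] = not sprite["visible"]  # clicking toggles visibility
    return sprite

cursor_sprite = {"pos": (0, 0), "visible": True}
cursor_sprite = update_sprite(cursor_sprite, {"type": "mouse-move", "pos": (120, 80)})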
Key Frame Animation:
During the 1930s and 1940s, the large American cartoon producers led by Walt Disney
developed a mass production approach to animation. Central to this development was
division of labor. Just as Henry Ford's assembly line approach to manufacturing motor
cars relied on breaking down complex tasks into small repetitive sub-tasks that could be
carried out by relatively unskilled workers, so Disney's approach to manufacturing
dwarfs relied on breaking down the production of a sequence of drawings into sub-tasks,
some of which, at least, could be performed by relatively unskilled staff. Disney was less
successful at de-skilling animation than Ford was at de-skilling manufacture – character
design, concept art, storyboards, tests, and some of the animation, always had to be done
by experienced and talented artists. But when it came to the production of the final cels
for a film, the role of trained animators was largely confined to the creation of key
frames.
We have met this expression already, in the context of video compression and also in
connection with QuickTime sprite tracks. There, key frames were those, which were
stored in their entirety, while the frames in between them, were stored as differences
only. In animation, the meaning has a slightly different twist: key frames are typically
drawn by a 'chief animator' to provide the pose and detailed characteristics of characters
at important points in the animation. Usually, key frames occur at the extremes of a
movement – the beginning and end of a walk, the top and bottom of a fall, and so on –
which determine more or less entirely what happens in between, but they may be used
for any point which marks a significant change. 'In-betweeners' can then draw the
intermediate frames almost mechanically. Each chief animator could have several in-
betweeners working with him to multiply his productivity. (In addition, the tedious task
of transferring drawings to cel and coloring them in was also delegated to subordinates.)
In-betweening (which is what in-betweeners do) resembles what mathematicians call
interpolation: the calculation of values of a function lying in between known points.
Interpolation is something that computer programs are very good at, provided the values
to be computed and the relationships between them can be expressed numerically.
Generally, a hand-drawn animation is too complex to be reduced to numbers in a way
that is amenable to computer


processing. But this does not prevent people trying – because of the potential labor
savings.
All digital images are represented numerically, in a sense, but the numerical
representation of vector images is much simpler than that of bitmapped images, making
them more amenable to numerical interpolation. To be more precise, the transformations
that can be applied to vector shapes - translation, rotation, scaling, reflection and shearing
– are arithmetical operations that can be interpolated. Thus, movement that consists of a
combination of these operations can be generated by a process of numerical in-
betweening starting from a pair of key frames.
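A minimal sketch of such numerical in-betweening is shown below; the two key frames
and the number of intermediate frames are arbitrary example values.

def lerp(a, b, t):
    # Linear interpolation between values a and b, for 0 <= t <= 1.
    return a + (b - a) * t

key1 = {"x": 0.0, "y": 0.0, "angle": 0.0}       # first key frame
key2 = {"x": 100.0, "y": 50.0, "angle": 90.0}   # second key frame

in_betweens = []
for i in range(1, 5):                            # four generated in-between frames
    t = i / 5
    in_betweens.append({k: lerp(key1[k], key2[k], t) for k in key1})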

3.2.6 Making Instant Multimedia


Authoring tools are used to make instant multimedia. 'Authoring tools' usually refers to
computer software that helps multimedia developers create products. These tools are
different from computer programming languages in that they are supposed to reduce the
amount of programming expertise required in order to be productive. Some authoring
tools use visual symbols and icons in flowcharts to make programming easier. Others use
a slide show environment.

Authoring tools help in the preparation of texts. Generally, they are facilities provided in
association with word processing, desktop publishing, and document management
systems to aid the author of documents. They typically include an on-line dictionary and
thesaurus, spell-checking, grammar-checking, and style-checking, and facilities for
structuring, integrating and linking documents. Authoring tools can also be provided
which enable users to author better quality documents in languages, other than their own,
which they understand to a degree but in which they could not normally compose a
document.

3.2.7 Spreadsheets
Spreadsheets have become the backbone of many users' information management
systems. A spreadsheet organizes its data in columns and rows. Calculations are made
based on user-defined formulas for, say, analyzing the survival rates of seedlings, or the
production of glass bottles in Karnataka, or a household's consumption of energy in ergs
per capita. Spreadsheets can answer what-if-questions, build complex graphs and charts,
and calculate a bottom line. From Kashmir to Kanyakumari, spreadsheets have become a
ubiquitous computer tool.
Most spreadsheet applications provide excellent chart-making routines; some allow you
to build a series of several charts into an animation or movie, so you can dramatically
show change over time or under varying conditions. Full-color curves that demonstrate
changing annual sales, robbery and assault statistics, or birth rates may have a far greater
effect on an audience than will a column of numbers.


The latest spreadsheets let you attach special notes and drawings, including full
multimedia display of sounds, pictures, animations, and video clips.
Lotus 1-2-3:
Lotus 1-2-3 lets you rearrange graph elements by clicking and dragging and using a menu
to access data objects from the outside world. You can place bitmapped pictures and
other objects such as QuickTime movies anywhere in your spreadsheet. There is a
complete color drawing package for placing lines, circles, arrows, and special text on top
of the spreadsheet to help illustrate its content.
Excel:
You can embed objects from many applications into Excel, too. The figure shows an
Excel document embedded with a Windows WAV sound, an image from Photoshop, and a
video movie. The Insert menu shown in the figure can be used to add a picture into the
spreadsheet directly from a digital camera.
A spreadsheet is a software tool that lets one enter, calculate, manipulate, and
analyze sets of numbers. The various components of spreadsheets are discussed below:

Worksheet:
It is a grid of cells made up of horizontal rows and vertical columns. Number of rows
and columns vary from package to package.
Lotus 1-2-3 worksheet contains 8192 rows and 256 columns. MS-Excel worksheet
contains 65,536 rows and 256 columns.
Each intersection of a row and column is called a cell wherein data can be stored.


Row number:
Data in a worksheet are divided in rows and columns. Each row is given a number that
identifies it. Row numbers start from 1 and go as 2, 3, 4, …
Column Letter:
Each column is given a letter that identifies it. Column letters start from A and go as B,
C, … Z, AA, AB, AC, … AZ, BA, … BZ, and so on up to IV. That is, columns are
lettered A–Z, AA–AZ, BA–BZ, …, IA–IV.
Cells:
Cell is a unit of worksheet where numbers, descriptive text, formulas etc. can be placed.
Cell is formed by intersection of a row and a column and this intersection gives a cell a
unique address i.e., the combination of the column letter and the row number.
For instance, if a column F intersects row 3, then the cell formed out of it gets an address
F3. Similarly, C5 identifies the cell in column C, row 5.
[Figure: a worksheet window, with the menu bar, toolbar, cell address bar, current cell (A6), and status bar labeled.]


Cell Pointer:
It is a highlighted cell boundary that specifies which cell is active at that moment.
Current Cell:
It is the cell that is active – the cell at which the cell pointer points, and where the next
entry will take place. An entry always takes place at the current cell.
Range of cells:
A range of cells is a group of cells that forms a rectangular area in shape. A range may
contain just a single cell, or a group of cells, but must form a rectangle in order to be


valid. A range is specified by giving the addresses of the first and last cells in the range.
For instance, a range starting from F7 till G14 would be written as F7..G14 in Lotus 1-2-
3 (..is the range indicator in Lotus 1-2-3) and F7:G14 in MS-Excel (: is the range
indicator in MS-Excel).
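The arithmetic behind cell and range addresses is simple enough to sketch; the helper
functions below are illustrative only, treating column letters as base-26 numerals
(A=1 … Z=26, AA=27, …).

def parse_cell(ref):
    # Split a reference such as 'F3' into (column number, row number): F3 -> (6, 3).
    letters = "".join(ch for ch in ref if ch.isalpha()).upper()
    digits = "".join(ch for ch in ref if ch.isdigit())
    col = 0
    for ch in letters:
        col = col * 26 + (ord(ch) - ord("A") + 1)
    return col, int(digits)

def parse_range(ref, sep=":"):
    # Split an MS-Excel style range 'F7:G14' into its first and last cells.
    first, last = ref.split(sep)
    return parse_cell(first), parse_cell(last)

assert parse_cell("F3") == (6, 3)
assert parse_range("F7:G14") == ((6, 7), (7, 14))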
Status Bar and Control Panel:
Apart from other things a worksheet also has a status bar and a control panel. The status
bar is an area where the status, the particular program condition, is displayed.
For instance, a status indicator CALC means that the worksheet area needs to be
recalculated. Date and Time are also displayed in status bar. Also, if an error takes place,
the error messages are displayed at the status bar. A control panel is the area wherein
information regarding the current cell, mode indicators and the commands are displayed.
In Windows based spreadsheets the commands are replaced by pull-down menus and
toolbars. A toolbar is a bar having icons for various commands.
Workbook:
A spreadsheet allows you to combine more than one worksheet in a file. A file having
multiple worksheets is known as a workbook.
DATABASES
Database Management System (DBMS):
Database management programs are a set of programs that create and use a database
consisting of one or more files. DBMS allow you to create and access multiple,
interrelated files.
Characteristics of Database:
1. Data can be stored and accessed in more than one file.
2. Data does not have to be duplicated. For example, names and addresses stored in an
accounts receivable file do not have to be entered again in the charges file. This lack
of duplication saves storage space and reduces the possibility of inconsistencies.
3. Files must share a common field containing unique attributes.
4. Different users can have different views of database.
5. Records can be found as they can be with a record management program but the data
can be pulled from one or more files.
6. Queries can be posed that require that data be drawn from one or more files (a
sketch follows this list).
7. Reports can be produced using data from one or more files.
8. The data is entered and used without the user knowing how it is physically stored
on the disk or how queries locate it in the files in which it is stored.
9. Data can be shared by more than one application.
10. The data can be shared by more than one user.
11. Data is independent of the application program that manipulates it.


12. The relationship between files is established using a query language.


13. Disk storage is an important consideration. If you think you will need a large
database, the program should support either a hard disk or the storage of the database
on more than one disk.
14. A few programs have a graphics capability built in so that data does not have to be
transferred to a separate graphics program to display and print graphs.
15. It is frequently helpful to be able to exchange files with word processing or spreadsheet
programs.
16. All packages include multiple levels of subtotaling for the purpose of report
generation.
17. Audit trails: no security system is perfect. When data is especially sensitive, an
audit trail – a record of the insertions, deletions, and changes performed on a file – is useful.
18. Query facilities are provided. These include commands, natural languages, and
menu- or graphics-based queries.
19. User community: given the complex nature of database management and DBMSs, it
also helps to know that there is a large body of other users, consultants, developers,
and user groups to whom you can turn if you need help. Programs that are widely
used generally have these networks already established.
20. Since database management programs are generally used for important files, it is
important that the files be backed up so that these can be reconstructed in case of a
problem. Since this process can be time-consuming, a good program will make the
process as easy and foolproof as possible.
21. Some programs compress the data to save disk space. For example, the city and state
fields are not needed in every record if a separate table is maintained that cross-indexes
zip codes with city and state; the program can find the zip code in a record and then
look in that table to find the city and state.
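The sketch promised above illustrates characteristics 3, 6, and 21 together: two tables
share a common zip field, so city and state are stored only once, and a single query
draws data from both. It uses Python's built-in sqlite3 module; the table and column
names are invented for the example.

import sqlite3

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE zipcodes (zip TEXT PRIMARY KEY, city TEXT, state TEXT)")
db.execute("CREATE TABLE customers (name TEXT, zip TEXT)")
db.execute("INSERT INTO zipcodes VALUES ('560001', 'Bangalore', 'Karnataka')")
db.execute("INSERT INTO customers VALUES ('Ravi', '560001')")

# One query draws data from both tables through the shared zip field.
row = db.execute(
    "SELECT c.name, z.city, z.state FROM customers c JOIN zipcodes z ON c.zip = z.zip"
).fetchone()
print(row)   # ('Ravi', 'Bangalore', 'Karnataka')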

3.2.8 Presentations Tools


Slide Show:
All the presentation software has an easy-to-use slide show option, with the facility to add
notes or to use a pen. Apart from a slide show, you can even have output in the form of
handouts if you desire. Impress even allows you to edit while the presentation is on.
PowerPoint has a live broadcast feature, though we didn't actually check out how this
feature works.
Freelance Graphics treats each line, image, or clipart as a separate object. Therefore, you
can modify the properties for each object, and this greatly helps in sequencing during a
presentation. For instance, just right-click on an object to specify whether it appears on a
click or after a certain time duration. Add other aspects such as the transition effect you
want when the image appears, or even add the action that has to be performed when the
object is clicked on (for instance, you can specify that a movie be played, or the next
slide be presented, etc.). Not just that, you can even have content advice related to the
content template that you have created.
MS PowerPoint leaves the rest far behind when it comes to the task of creating slide
shows and adding custom animation. Just right-click on any object (text, images, etc) and
you get all the options you could ever want. You can specify animation on the page, and
all with Auto Preview (if you have the option checked). The numbers of animation
effects, templates, slide designs and layouts have all increased considerably. Also, with
the same right-click option you can specify a certain action for the object, if clicked or on
a mouse over. The action you define can, for instance, be playing a sound or running a
program.
In terms of sheer features, there's nothing that can beat PowerPoint. Freelance Graphics
and 1-2-3 offer more simplicity and optimal utilization of workspace, so if you like your
presentations simple, then definitely check these out.
Presentation feature     Freelance Graphics                   PowerPoint 2002
Wizard and templates     Provision of content templates       Quite extensive templates
Transition and effects   Customizable sequencing of objects   Task Pane provides a number of
                                                              transition and animation effects
Slide show               Rehearse timings, add pointers       Narration, add buttons,
                                                              live broadcast
The multimedia authoring tool is the glue that holds the data together in order to inform,
educate, or entertain. Authoring tools offer two basic features:
 First, the ability to create and edit a product.
 Second, a presentation scheme for delivering the product.
The selection of an authoring tool, based on how it works and what it does, can be one of
the most critical elements of multimedia product development.
Microsoft PowerPoint:
Start → Programs → MS Office → MS PowerPoint
[Figure: the PowerPoint window, with the ruler and slide area labeled.]

Using PowerPoint we can create


 On-screen presentations
 Paper printouts
 Over-head transparencies
 35mm slides
 Notes, handouts, and outlines
When a new slide is created, the layout for the
new slide can be selected from 24 AutoLayouts.
Each offers a different layout, depending on what
you want to do.
Example :
Layout that has placeholders for a title, text, and a chart. We can move, resize, or
reformat the placeholders.

[Figure: the PowerPoint Slide Show and Slide Sorter View controls.]

3.2.9 Multimedia Authoring Tools


Authoring Tools
A number of features are common to nearly all multimedia authoring tools; they serve to
encapsulate the content, present data, obtain user input, and control the execution of the
product. These features include:
 Page: The page is an instant in time for a multimedia presentation. The page acts as a
container for other features or objects such as text or graphic content or controls such
as buttons. It is usually capable of various transitional effects such as fade in/out on
the screen.
 Control: Controls enable the user to interact with the product. Controls may be used
to manage or direct sequences of events, to collect data, or to manage data objects.
There are three general categories of controls:
 Navigation: Buttons, hotspots, and hypertext. Many tools have a fixed set of objects,
such as standard gray buttons, that can be modified. Hotspots are used over graphic
objects to allow the user to interact with data. Hypertext is usually indicated by
highlighting – changing the color of the text or underlining it. When the user selects
the highlighted text, an action is initiated, such as presenting a definition or related
information.
 Input: Text, checkboxes, radio buttons, combo/list boxes. These are used for
collecting information from the user or controlling a sequence of the information
presentation. Data collected from these may be directed to a file or other device for
storage or transmission.
 Media Controls: Apply to managing the presentation of fonts, graphics, audio, and
video. Specific features for each of these include:
 Fonts: Select type, size, bold italic
 Graphics: Zoom, Scroll, mark-up/edit
 Audio/video: Play, stop, pause, rewind and volume, Zoom.
Data:
Depending on the authoring tool, data may be stored internally as part of the
application program or as external files that are accessed during program execution.
Internal storage of data usually means faster data presentation and easier transport of the
application and the data. On the other hand, external data storage usually means smaller
executable programs and faster program startup. Another advantage of external data
storage is that data can be modified to change the application without having to use the
authoring tool.
Data types include:
 Text
 Graphic
 Audio
 Video
 Live audio/video
 Database
 Execution: Execution is the process that controls the presentation and sequencing of
the application. It can take on one or more of the following mechanisms,
 Linear sequenced: the user pages through the presentation in a fixed order.
 Program controlled: code or scripting within the application controls program
execution; execution, in some cases, may be fixed or altered by the user.
 Temporal controlled: presentation is controlled by a timer that initiates events in a
predetermined order. The presentation may pause and wait for the user to provide
input prior to continuing.
 Interactively Controlled: Presentation waits for the user to select a function such as
pressing a button before continuing.


3.2.10 Types Of Authoring Tools


Categories of Authoring Tools:
Authoring tools consists of two basic features:
 First, an authoring facility for creating and editing.
 Second a presentation vehicle for delivery.
The authoring facility usually is based on a metaphor for organizing information and
enabling interactivity. Authoring tools can be divided into five basic categories.
 Simple authoring tools such as Word Processors and Presentation software.
 Programming tools, especially those with graphic user interface design and
programming features.
 Simple interactive authoring tools that enable the developer to create applications
with data and controls for basic interactivity.
 Complex interactive authoring tools that enable the developer to create applications
with data and controls for enhanced interactivity.
 Complex interactive authoring tools that allow the developers to use programming
constructs to provide a finer degree of application control.
Page - Based Authoring:
In page based authoring tools, scripting usually handles specialized function such as
graphic transitions, jumping to pages according to programmed conditions (using an IF-
THEN or SELECT statement), timing and other complex presentation metaphors.
Page base authoring systems with programming capability are a natural transition from
those tools without the programming features. In fact, if you have conventional
programming experience in C, Pascal or Basic, these tools are very easy to learn and
apply.
Image Editing Tools:
Image-editing applications are specialized and powerful tools for enhancing and
retouching existing bitmapped images. These applications also provide many of the
features and tools of painting and drawing programs, and can be used to create images from
scratch as well as images digitized from scanners, video frame-grabbers, digital cameras,
clip art files, or original artwork files created with a painting or drawing package.

Here are some features typical of image-editing applications and of interest to multimedia
developers:
 Multiple windows provide views of more than one image at a time.


 Conversion between major image data types and industry-standard file formats.


 Direct input of images from scanners and video sources.
 Employment of a virtual memory, scheme that uses hard disk space as RAM for
images that require large amounts of memory.
 Capable selection tools, such as rectangles, lassos, and magic wands, to select
portions of a bitmap.
 Image and balance controls for brightness, contrasts and color balance.
 Good masking features.
 Multiples undo and restore features.
 Anti-aliasing capability, and sharpening and smoothing controls.
 Color-mapping controls for precise adjustment of color balance.
 Tools for retouching, blurring, sharpening, lightening, darkening, smudging and
tinting.
 Geometric transformation such as flip, skew, rotate and distort and perspective
changes.
 Ability to resample and resize an image.
 24-bit color, 8- or 4-bit indexed color, 8-bit gray-scale, black & white, and
customizable color palettes.
 Ability to create images from scratch, using line, rectangle, square, circle, ellipse,
polygon, airbrush, paintbrush, pencil and eraser tools, with customizable brush shapes
and user-definable bucket and gradient fills.
 Multiple typefaces, styles and sizes and type manipulation and masking routines.
 Filters for special effects, such as crystallize, dry brush, emboss, facet, graphic pen,
mosaic, smooth, watercolor, wave and wind.
 Support for third-party special effect plug-ins.


 Ability to design in layers that can be combined, hidden, and reordered.


These tools are important for improving and retouching existing bitmaps. These tools also
serve the purpose of creating images from scratch. These can also be used for creating
digital images from scanners, cameras, etc.

3.2.11 Time Based Authoring Tools


Time-based authoring systems provide the developer with the ability to develop an
application that runs like a movie. Frames are created with objects that are sequentially
played or halted to allow user input to change the flow of the program. Temporal based
systems include the following features.
 A page or display frame on which objects such as buttons, text or audio/video
data are placed. Objects may have code attached for special processes such as
jumping to a different page or loading a data file.
 An array for keeping track of pages/frames including their sequence, timing and
content.
 A controller for playing, stopping, stepping or rewinding a presentation.
The primary advantage of temporal-based tools is the ability to create complex
animations and transitions. A sequence of pages or frames played in rapid succession, for
example, one per 1/30 of a second, will create a movie like effect.
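A bare-bones sketch of that temporal mechanism follows; the draw_frame function is a
stand-in for real rendering code, and the frame counts are arbitrary.

import time

def draw_frame(frame):
    print(f"showing frame {frame}")    # placeholder for real drawing code

def play(frames, fps=30, stop_after=None):
    # Step through the frame sequence, holding each frame for 1/fps of a second.
    for frame in frames:
        draw_frame(frame)
        if stop_after is not None and frame >= stop_after:
            break                      # the controller halts playback
        time.sleep(1 / fps)

play(range(90), fps=30, stop_after=59)  # play about two seconds' worth of frames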
3.2.12 Object Oriented Authoring Tools
In these authoring systems, multimedia elements and events become objects that live in a
hierarchical order of parents and child relationships: Messages passed among these
objects order them to do things according to the properties or modifiers assigned to them.
Object-Oriented tools are particularly useful for games that contain many components
with many “personalities” and for simulating real-life situations, events, and their
constituent objects. Some examples of object-oriented systems include:
 mTropolis (Macintosh/Windows)
 QuarkImmedia (Macintosh/Windows)
Java is also an object-oriented programming environment. Increasingly, Lingo, the
scripting language used in Director, is taking an object-oriented approach.
Icon Based Authoring Tool
Icon-based authoring enables complex interaction and layering of multimedia products.
This is accomplished by letting the author select objects, timing and sequencing
“devices” in the form of icons and placing them on a layout screen where they are
interconnected. As icons are selected and linked, screens can be defined with text,
graphic, buttons and other objects. As with the other advanced authoring tools, code or
script can be attached to objects such as buttons to facilitate user interaction or program
execution. One of the major advantages of this approach is the ability to graphically
design complex interactions for repetitive applications. Templates can be constructed
that are "filled in" by the developer and reused within the application or across similar
applications. Templates often contain data and controls that are common to a number of
pages within the application.
For example, a template could contain a standard page title, background color, and
positioning of recurring buttons, that can be reused over and over during development.

3.3) Revision points


Authoring can be done by two ways:
a. Content assembly: The content is put together and linked to another content.
b. Functional Programming: Software is created to provide specific behavior.
There are many considerations in selecting the appropriate authoring environment
such as:
 Level of interactivity required. Is the application a simple page turner or
does it have complex interactive behavior?
 Platform requirements including type and features of hardware and
operating system
 Interaction with other software and systems such as database and
networks.

3.4 ) Intext Question


1. What is the need of interface design ? Explain five fundamental rules for
interface design in multimedia applications.
2. What is hypertext? Explain any two application areas of hypertext.
3. What is authoring software? Explain three features of authoring software.
4. What are annotations? Explain the role of annotations in the applications of
hypertext with an example.
5. What is hypertext? Explain the use of hypertext in any three applications.
6. Explain the three categories of presentation tools available for multimedia
development.
7. What is QuickTime? Explain the working of QuickTime. Also write two
advantages of using QuickTime.
8. Explain the use of button element of hypertext with an example.
9. What is Macromedia Director? Explain two features of Macromedia Director.
10. What is animation? Explain, with an example, how animations are used in
multimedia project.
11. What is digital audio? Also explain two advantages of digital audio over
conventional audio.
12. What are image editing tools? State the applications.
13. Describe time based authoring tools.
14. What comprises multimedia? Explain.


15. Explain OCR, painting and drawing tools.


16. Describe the procedure for linking multimedia objects.
17. Write briefly on hypermedia and hypertext.

3.5) Summary
Authoring is the process of assembling the content into the multimedia software
development environment following the map provided by the storyboards. It is the
focal point of the software design and storyboarding / content efforts.

3.6) Terminal exercises

Fill ups (with answers)

1. A multimedia presentation is the combination of light, motion and sound.


2. One of the most spectacular multimedia shows is the natural phenomenon of a
thunderstorm.

3. You should understand two key principles when you use multimedia. Our eyes
are attracted to light. Our eyes are attracted to motion.

4. When you present with multimedia you are more than just a performer. You are
a director and producer.

5. When each slide first appears on the screen you should know by design the first
place their eyes go and after a few seconds - the second place their eyes go.

6. The most common error in creating computer slides – is too much on the slide.
The most important element of your presentation is you. When presenting with a
computer projector be sure that the audience can see you and hear you because it
is you that they buy – not your slides.

7. Murphy is the one who said that whatever can go wrong will go wrong. Murphy
loves technology. Some even believe that Murphy invented computers.

8. You can be prepared for Murphy to visit during your PowerPoint Presentation by
having a saver line. This allows you to save your presentation by relaxing the
audience and assuring them that you are in control.

9. A great PowerPoint show will not save a bad presenter. But a superior presenter
can save the audience from a bad PowerPoint show.

3.7) Supplementary Materials


http://www.mcli.dist.maricopa.edu/authoring/lang.html
http://www.cgsb.nlm.nih.gov/authorb


3.8) Assignments
1. Explain the statement "Multimedia productions are tailored to specifically meet
the users needs" with an example.
2. Compared with printed text, hypermedia dramatically changes the way
information is characterised, accessed and utilised. This highly rich media
experience presents problems and benefits for its users and developers.
Demonstrate your depth of knowledge on the subject by identifying and
discussing issues you find relevant to the above statement.

3.9) Suggested Reading


1. Kevin Jeffay & HongJiang Zhang, 'Readings in Multimedia Computing and
Networking', Academic Press, 2002.
2. Nigel Chapman & Jenny Chapman, 'Digital Multimedia', Wiley, 2000.
3. Prabhat K. Andleigh & Kiran Thakrar, 'Multimedia Systems Design', PHI, 2003.
4. Tay Vaughan, 'Multimedia: Making It Work', Tata McGraw-Hill, 2002.

3.10) Learning Activities


1. Design a multimedia package that helps in teaching "Multimedia in Education”.

You must follow the standard methodology for development of multimedia. You must fill
up various templates required for the development of Multimedia.

Present a prototype of your design using MS-Office tools. Your prototype should include
at least 10 slides having suitable graphics, simple animation and few audio clips. (Audio
clips need not be recorded professionally but can be recorded using built in microphone
of simple multimedia computer).

2. Interactive Multimedia is able to enhance the teaching, learning and assessment


processes in the field of education.

1. Using your knowledge of Multimedia and CBL provide justification for this
statement and suggest a scenario where this concept could be demonstrable.

2. Using this scenario, propose learning tasks that would benefit from a CBT (or
your chosen variant) approach. Explain the added value of using Multimedia for
the tasks.

3. Do you envisage any disadvantages with a computer based learning approach;


explain your thoughts on this.


3.11) Key words


Authoring tool: A computer program designed to be simple to use when building an
application. Supposedly no programming knowledge is needed, but usually common
sense and an understanding of basic logic are necessary.
Modeling: In 3D graphics, building a scene by defining objects in the scene and
arranging them and their environment.
Texture Mapping: The process of applying a 2D image to a 3D object.
WYSIWYG: Describes an application that shows you the end result of your work
exactly as it will be seen by the end-users: What You See Is What You Get.
Zoom Tool: A tool for magnifying the image you are currently working on.



UNIT-IV

4.0) Introduction
The ability to access information stored as different media depends on the availability of
standard data formats that are understood by most applications in use. Proprietary formats
are typically more compact compared with open standard formats. Although there are
many proprietary formats for each media type, they are often not suitable for use in
defining multimedia building blocks, since the ability to access the information contained
in those data files depends very much on the availability of filters for the respective
applications.
Basic Multimedia Building Blocks are
 Text
 Sound
 Images
 Color
 Animation
 Video

4.1) Objective
To improve the full digital content chain, covering creation, acquisition, management and
production, through effective multimedia technologies enabling multi-channel, cross-
platform access to media, entertainment and leisure content in the form of film, music,
games, news and alike.

4.2) Content
4.2.1 Text
Using text for communication is a very recent human development that is popular now
but actually began about 6000 years ago. Nowadays, text and the ability to read are the
doorway to power and knowledge.
Every single word can be interpreted in different ways, so it is important to cultivate
accuracy and conciseness in the specific words that we choose.
Multimedia authors weave words, symbols, sound, and images, and then blend text to
create integrated tools and interfaces for acquiring, displaying, and disseminating
messages and data using computers.


The most common tool for manipulating text is a word processor, and most, like Microsoft
Word, have built-in wizards to create interesting-looking text, as shown in the following figure.
However the problem with this technique is transferring the text into a multimedia or
Web page because of the proprietary nature of the image format.

Using Text In Multimedia


Using Text
The term typeface is the most general term used to describe text. It refers to a family of
graphic characters. There is some basic consistency of look that makes the individual
characters, regardless of size and style variations, part of the same family. "Helvetica"
and "Times" are examples of typefaces. Below is an example of a typeface called "Serb."
All of these characters belong to the "Serb" family or typeface.

The term type size (sometimes called font size) is the next level of specification. The
type size is the distance from the top of the capital letters to the bottom of the
"descenders" in letters such as "g" and "y." Type sizes are generally expressed in
"points." On paper, one "point" is .0138 inches or about 1/72 of an inch. However, due to
different size monitors, in the electronic world this term is generally meaningful only as a
method of comparison. Below is an example of how type size is determined.

The term font style refers to the particular style of textual characters. Styles are usually
standard, bold, and italic. Below is an example of font style.

The term font is the most specific unit of text. Fonts are a particular typeface, size and
style. Helvetica 12 - bold would be an example of a Font. The typeface (Helvetica), size
(12 points), and style (bold) are all included. Below is an example of two different fonts.


Another way to categorize text has to do with its historical origin. A typeface can be
either Serif or Sans Serif. Serif characters have a little "flag" or decoration at the end of
the letter stroke. It could be said that Serif characters are embellished. Below is an
example of the letter "T" as a Serif character. Notice the "flag" or decoration.

Sans Serif (sans is French for "without") characters don't have these decorations. The
example below shows several characters as Serif characters and as Sans Serif characters.
Notice the more basic look to the Sans Serif characters.

Certain rules about the use of text for print don't work well in the electronic world. For
example, in print it has mostly been assumed that headlines are best in Sans Serif and that
body text is best when done in a Serif typeface. On a computer screen or television set it's
often best to go with something simple and bold. Therefore, it is generally recommended
that multimedia/web developers reverse the print rule. In other words, because body text
is often small, it is best to use a Sans Serif typeface in a bold style. Only when the
characters will be large (such as headings) is it a good idea to use Serif characters.
Text on the Web
Now that we understand some of the terms and rules for text, let's look at how text is
used on the World Wide Web. The WWW is often called a user-definable interface. In
other words, much of what a web browser shows on the computer screen can be changed
to fit the particular tastes of the user. Software such as Netscape allows the user to
determine what font is used for general text. So, the basic textual information you write
as part of a web presentation is something you, as a web creator, don't determine.
However, oftentimes text on a web page is part of an inline image. In other words, you
might utilize text as part of an inline graphic image which is displayed. In this case you,
the web creator, do determine what font the user sees. You control everything from font
to color to drop shadow.


Below is an example of text that is used as part of a graphic image. In this case it's the
graphic that's used at the top of most of these pages.
Notice that the image contains a particular font, different colors and includes a drop
shadow. This is different from the text you're reading now. This text is something the
user can control. The image above is something you control.
Computer And Text
About fonts and faces
A typeface is a family of graphic characters that usually includes many type sizes and
styles. A font is a collection of characters of a single size and style belonging to a
particular typeface family. The usual font styles are boldface and italic. Type sizes are
expressed in points (1 point = 0.0138 inch). A few examples of typefaces are:
 Bookman Old Style
 Times New Roman
 Courier
Times 12 italic is a font. The term 'font' is commonly used where 'typeface' would be
more correct.
Font size alone cannot be used to describe the exact height or width of a character.
Tools usually add space automatically to provide appropriate line spacing, or "leading".
Leading can be adjusted in most programs on both Mac and Windows; we can usually
find this either as a fine-tuning adjustment or in the paragraph menu. For best results, we
need to experiment and find out. With a font editing program like Fontographer (from
Macromedia), adjustments can also be made along the horizontal axis of text: the
character metrics of each character and the kerning of character pairs can be altered.
Character metrics are the general measurements applied to individual characters. Kerning
is the spacing between character pairs.
Parts of a typeface design.

[Figure: parts of a typeface design, labeling the arm, ascender, ear, bracketed serif, stem, counter, loop, tail, terminal, and serifs.]

SERIF versus SANS SERIF

Typefaces can be described in many ways. One simple way of categorizing them is
"serif" versus "sans serif". The type either has a serif or it does not! ("Sans" is "without"
in French). The serif is the little decoration at the end of a letter stroke.
Examples:
Font category      12 point size                        24 point size
Serif fonts        Times, Courier                       Times, Courier
Sans-serif fonts   Helvetica, MS Reference Sans Serif   Helvetica, MS Reference Sans Serif
Script fonts       Lucida, Monotype Corsiva             Lucida, Monotype Corsiva
Display fonts      Matisse ITC, Westminister            Matisse ITC, Westminister
Symbol fonts       (symbol characters)                  (symbol characters)

Font Editing And Design Tools
Font Technology
When a computer displays a character on a monitor or prints it on a laser, inkjet, or dot-
matrix printer, the character is nothing more than a collection of dots in an invisible grid.
Bit-mapped fonts store characters in this way, with each pixel represented as a black or
white bit in a matrix. A bit-mapped font usually looks fine on screen in the intended point size
but doesn’t look smooth when printed on a high-resolution printer or enlarged on screen.
Most computer systems now use scalable outline fonts to represent type in memory until
it is displayed or printed. A scalable font represents each character as an outline that can


be scaled – increased or decreased in size without distortion. Curves and lines are smooth
and don't have stair-stepped, jagged edges when they're resized. The outline is stored
inside the computer or printer as series of mathematical statements about the position of
points and the shape of the lines connecting those points.
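To give a flavor of those "mathematical statements", here is a sketch that evaluates one
cubic Bézier segment, the PostScript-style curve mentioned below; the four control
points are arbitrary examples.

def cubic_bezier(p0, p1, p2, p3, t):
    # Point on the curve at parameter t (0 <= t <= 1), by the standard Bezier formula.
    u = 1 - t
    x = u**3 * p0[0] + 3 * u**2 * t * p1[0] + 3 * u * t**2 * p2[0] + t**3 * p3[0]
    y = u**3 * p0[1] + 3 * u**2 * t * p1[1] + 3 * u * t**2 * p2[1] + t**3 * p3[1]
    return x, y

# Because the outline is mathematical, scaling is just multiplying the control
# points; no jagged edges appear at any size.
segment = [(0, 0), (30, 100), (70, 100), (100, 0)]
points = [cubic_bezier(*segment, t=i / 10) for i in range(11)]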
Downloadable fonts (soft fonts) are stored in the computer system (not the printer) and
downloaded to the printer only when needed. These fonts usually have matching screen
fonts and are easily moved to different computer systems. Most importantly, you can use
the same downloadable font on many printer models.
The problem with most of the programs listed below is that they don't deal natively with
TrueType. Instead, as they load the font, they convert the outlines into PostScript-style
cubic Bézier curves, and discard all the hints. For high quality fonts at low resolution, this
is a tragic loss.
TrueType hinting takes the form of little programs attached to each glyph, and it is
admittedly hard, in fact virtually impossible, to work out automatically which program
instructions can remain, and which must change when a glyph is modified. However, it's
a shame that you can't leave alone any glyphs you don't modify: these programs affect
every glyph.
Failing the introduction of affordable native TrueType editing tools, if you're making
TrueType fonts from scratch, or converting PostScript Type 1 fonts to TrueType, then the
following programs are certainly worth a look. But be warned of the serious problem
that arises when editing existing TrueTypes: total loss of all hints (followed by semi-
automatic, almost always inferior, hint regeneration); and of the not so serious problem:
conversion of quadratic curves into cubics and back again, with probable loss of precision.
MacroMedia Fontographer
Fontographer is a very old program that Macromedia took over from the original
developers, Altsys, and didn't do very much with. It was last updated in 1996 and is now
effectively dead in the water. It's still available for Mac OS9 and Windows but there is
no OSX version.


(In Fontographer, the 'pixels' are drawn with the rectangle tool like this; none of the
other tools are required.)
On the plus side, Fontographer is relatively easy to use, especially for beginners. It's
designed for creating printer fonts primarily, but will handle pixel fonts with a little extra
effort.
FontLab 3.0
FontLab is a much more sophisticated font editor, more up-to-date and capable than
Fontographer. FontLab is available for Mac OS9, OSX and Windows but with all the
bells and whistles it provides, can take a while to master. It also has a number of bolt-on
programs to add to its functionality, one of which is a pixel font editor called BitFonter.
You can download demo versions of these programs and their manuals so that you can
see what you are letting yourself in for.

[Figure: BitFonter gives FontLab pixel-editing facilities.]

 Letraset FontStudio
Written by Ernie Brock, Harold Grey and others at Ares (long before it was taken over
by Adobe), it is no longer marketed. FontStudio remains the choice of many professional
type designers.


 URW Ikarus
Ikarus was originally developed in 1973-74 at the Rudolf Weber company in Hamburg
(now URW) by Peter Karow. URW (Unternehmensberatung Rubow Weber, from the
founders' names), in Hamburg, Germany, produces font software (notably, the IKARUS
font design system), fonts, and logos. This was the first time type had been digitized as
outlines.
The native curves of Ikarus are Hermite splines, having the important property that all
the control points lie on the outline itself. (It must be disappointing to URW that
designers have generally accepted off-curve Bézier handles.) Ikarus was written in
FORTRAN and has been used by foundries on VAX and Sun workstations to store
thousands of type designs.

 DTP Type Designer


by Manfred Albracht in Aachen, Germany, lets you "quickly and easily design
professional-quality Type 1 and TrueType fonts". A complete Type 1 editor, it is
commonly regarded as performing great Type 1 to TrueType conversions (equal
best with the converter inside Windows NT 3.5), as it examines each Type 1 hint
and recodes it as TrueType instructions. (Most programs have separate, automatic
TrueType auto-hinters, the results of which are often unpredictable.) Runs on
Windows 3.1.
 CorelDraw


From version 3.0 onwards the Corel behemoth is occasionally used to create and edit
TrueType and Type 1 fonts. But most people find it lacks too many features and
move to a dedicated font editor if they are serious about type design.
Drawing Font
There are several font formats in use these days but the two main ones are Adobe Type 1
PostScript font and TrueType - invented by Apple.
Type 1 fonts are generally used for print in conjunction with DTP programs like Quark
XPress and Adobe InDesign. They come in two parts, a vector outline (printer) font and
bitmap (screen) font. The bitmap fonts are only used on older systems to give a rough
approximation of what the printed result will look like. On modern systems, the screen
display is generated on-the-fly from the vector font.
More common these days, TrueType fonts are outline (vector) fonts and don't require a
separate bitmap screen font, the screen display is rendered from the vector outline.
In principle, TrueType fonts are capable of higher quality than the older PostScript fonts
because they have more 'sample' points. In reality, the quality of the 'cut' is generally
better for PostScript fonts because of the skill of the designers. For pixel fonts, TrueType
is the obvious choice.
Whichever program you choose to use, making a pixel font is only a matter of
transferring your penciled design to corresponding square shapes in the font editor. If a
number of squares run together, then you can draw a rectangle but before you start, you
should make a grid of guidelines.
If you count the number of pixels from the top of the highest character to the bottom of
the lowest, that is the 'pixel height' of your font. You can add a few extra pixels of line
spacing above and below if you like.
Draw horizontal guidelines for each row of pixels and similarly, enough vertical
guidelines to allow for the widest character – often the per-mille (‰) symbol. If you
then switch on 'snap to guides', your drawing with the rectangle tool will lock on to
perfect pixels.
You need to add a certain amount of space at one or both sides of each character. This is
called the 'sidebearing' and depends on the size and style of the font – but it must always
be an exact number of pixels so there are no possibilities for subtle kerning. It's best to
start with one pixel space on the left and one on the right of each character. It's not until
you do a test setting that you can decide whether to increase or reduce the space on one
side or both. It's something that you learn from experience.
When you have drawn all the characters, then you can generate the font.
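The grid arithmetic just described is easy to sketch; the glyph bitmaps below are
invented examples, with '#' marking a filled pixel.

GLYPHS = {
    "T": ["#####",
          "  #  ",
          "  #  ",
          "  #  "],
    "o": ["###",
          "# #",
          "###"],
}

def pixel_height(glyphs):
    # From the top of the highest character to the bottom of the lowest.
    return max(len(rows) for rows in glyphs.values())

def advance_width(glyph, sidebearing=1):
    # Pixel width plus one pixel of sidebearing on each side.
    return len(glyph[0]) + 2 * sidebearing

assert pixel_height(GLYPHS) == 4
assert advance_width(GLYPHS["T"]) == 7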
Generating Font
When it comes to saving the actual font, you have a number of options to consider.
Firstly, are you using it on a Mac or a PC? Fonts are different between Mac OS and
Windows – both in the file format and in the order of the characters. Characters with
ASCII values between 32 and 127 are common to both platforms but the characters from


128 up are in different 'encodings' or character order, and different again according to
your language.
The most common encoding for Macintosh computers is 'MacRoman' and the Windows
'standard' encoding provides slots for characters from 128 to 255 with some slots
'reserved' for control characters.
More recent 'Unicode' fonts have slots for thousands of characters so that you can include
characters or 'glyphs' for multiple languages if you like.
When you have chosen the appropriate encoding, you can generate the font file.
Testing and fine tuning
Install your font in the usual way and try some test settings. Copy and paste a chunk of
text from anywhere into your paint program. Select it and apply the font you have
designed, whatever you called it when you saved it. Make sure that the document
resolution is 72 pixels per inch and set the font size to the same size as the number of
vertical pixels that you used for your grid. Make sure anti-aliasing is turned off and that
no stretching or kerning is in operation.
All being well, you should have a crisp, sharp font with no blurring anywhere. You now
have to look at the character shapes and the spacing between them and make a note of
anything that needs to be fixed.
When you have done that, uninstall the font before going any further and put it away
somewhere out of the way – or better still, trash it completely. Go back to the font editor
and make the adjustments you noted down, regenerate the font, reinstall it and try again.
You will probably have to go round this sequence of events a number of times until all
the quirks are ironed out.
Hypermedia and hypertext
Text becomes hypertext with the addition of links, which connects separate locations
within a collection of hypertext documents. Links are active: using some simple gesture,
usually a mouse click and a user can follow a link to read the hypertext it points to. To
make this happen, a piece of software called a browser is required. Usually, when you
follow a link, the browser remembers where you came from, so that you can backtrack if
you need to. The World Wide Web is an example of a (distributed) hypertext system and
a Web browser is a particular sort of a browser.


Browsers tend to encourage people to read hypertext in a non-linear fashion: instead of
starting at the beginning and reading steadily, you might break off to follow a link, which
in turn might lead you to another link that you can follow, and so on. At some point, you
might go back to resume reading where you left off originally, or you may find it more
fruitful to go on pursuing links.
Hypertext raises some new issues of storage and display.
 How are links to be embedded in a document?
 How are their destinations to be identified?
 How are they to be distinguished from the surrounding text?
To appreciate the answers to those questions, we need a clear understanding of what a
link is, and what sort of thing links connects.
The World Wide Web exemplifies the simplest case. If we confine ourselves for the
moment to pages consisting purely of text, these are self-contained passages, which may
be of any length, but usually fit on a few screens. Their only connection with other pages
is through links (even though a link may be labeled ‘next’ or ‘previous’, such sequential
connections between pages must be explicit). Within a page though, the normal
sequential structure of text is exhibited: you can sensibly read a page from beginning to
end (although you don’t have to), and elements such as headings and paragraphs are used
to structure the content of the page. The HTML source of a Web page is often held in a
file, but some pages are generated dynamically, so you cannot always identify pages with
files, or indeed, with any persistent stored representations.
Hypertext systems are generally constructed out of self-contained elements, analogous to
Web pages, which hold textual content. In general, these elements are called nodes.
Some systems impose restrictions on their size and format – many early hypertext systems
were built on the analogy of 3 by 5 index cards, for example – whereas others allow
arbitrarily large or complex nodes.
In general, hypertext links are connections between nodes, but since a node has content
and structure, links need not simply associate two entire nodes – usually the source of a
link is embedded somewhere within the node’s content. To return to the World Wide
Web, when a page is displayed, the presence of a link is indicated by highlighted text
somewhere on the page. Furthermore, a link may point either to another page, or to a
different point on the same page, or to a specific point on another page. Hence, Web
links should be considered as relating specific locations within pages, and, generally links
connect parts of nodes.
In HTML, each link connects a single point in one page with a point (often implicitly the
start) in another, and can be followed from its source in the first page to its destination in
the other. We call links of this type simple unidirectional links. XML and other more
elaborate hypertext systems provide a more general notion of linking, allowing the ends
of a link to be regions within a page (regional links), links that can be followed in either
direction (bi-directional links), and links that have more than just two ends (multi-
directional links).


Adobe’s Portable Document Format (PDF) supports hypertext linkage. PDF links are
uni-directional, but not quite simple, since a restricted form of regional link is provided;
each end of a link is a rectangular area on a single page. Since Acrobat Distiller and
PDFWriter make it possible to convert just about any text document to PDF, and links
can then be added using Acrobat Exchange, this means that hypertext links can be added
to any document, however it was originally prepared.
Acrobat Reader may display links in a variety of styles. The default representation
consists of a rectangle outlining the region that is the source of the link. When a user
clicks within this rectangle it is highlighted – by default, colors within the region are
inverted – and then the viewer displays the page containing the destination region. The
actual destination, always a rectangular region, is highlighted. When the link is created,
the magnification to be used when the destination is displayed can be specified, which
makes it possible to zoom in to the destination region as the link is followed.
The links that can be embedded in a Web page composed in HTML are simple and uni-
directional. What distinguishes them from links an earlier hypertext system is the use of
Uniform Resource Locators (URLs) to identify destination. The URL syntax provides a
general mechanism for specifying the information required for accessing a resource over a
network. For Web pages, three pieces of information are required: the protocol to use
when transferring the data, which is always HTTP, a domain name identifying a network
host running a server using that protocol, and a path describing the whereabouts on the
host of the page or a script that can be run to generate it dynamically. The basic syntax
will be familiar: every Web page URL begins with the prefix http://, identifying the
HTTP protocol. Next is the domain name, a sequence of sub-names separated by dots,
for example, www.chennaionline.com.
After the domain name in a URL comes the path, giving the location of the page on the
host identified by the preceding domain name. A path looks very much like a UNIX
pathname: it consists of a /, followed by an arbitrary number of segments separated by
/characters. These segments identify components within some hierarchical naming
scheme. In practice, they will usually be the names of directories in a hierarchical
directory tree, but this does not mean that the path part of a URL is the same as the
pathname of a file on the host – not even after the minor cosmetic transformations
necessary for operating systems that use a character other than / to separate pathname
components. For security and other reasons, URL paths are usually resolved relative to
some directory other than the root of the entire directory tree.
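The division of a URL into protocol, domain name and path can be illustrated with Python's standard urllib.parse module (the path in this example is hypothetical, invented only for illustration):

    from urllib.parse import urlparse

    parts = urlparse("http://www.chennaionline.com/news/today.html")
    print(parts.scheme)   # 'http' -> the transfer protocol
    print(parts.netloc)   # 'www.chennaionline.com' -> the domain name
    print(parts.path)     # '/news/today.html' -> the path on that host

As the text notes, the path printed here need not correspond to an actual file path on the server.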
Hypermedia
A Brief History of Hypermedia
Definition of the Hypertext:
"Information is linked and cross-referenced in many different ways and is widely
available to end users" (Hooper, 1990).
"Hypertext means a database in which information (text) has been organised nonlinearly.
The database consists of nodes and links between nodes" (Multisilta, 1995).
A Hypermedia Timeline


Ted Nelson: Xanadu has been Ted Nelson's dream since the early '60s: all the world's literature in one publicly accessible global online system (analogy: you can today get a telephone link from anywhere to anywhere, so why not from any text to any other?). Every reference to a text would lead to royalties being paid automatically to the author. It includes the use of full versioning (claimed to be horrifyingly complex), "hot links" (called transclusions) and zippered texts (e.g. parallel texts such as translations or annotations). A few of the ideas in Xanadu are now implemented in the WWW.
Doug Engelbart: Can be called the father of hypertext. He invented the mouse and is also the creator of one of the first hypermedia systems, NLS/Augment.
Definitions of Concepts
A link is defined by source and destination nodes, and by an anchor in the source node.
The destination of a link can be a file (so-called string-to-lexia link) or a string in a file
(string-to-string link).
With a string-to-lexia link it is not possible to reference a certain part of a file. This
kind of link can make hypermedia easily navigable, especially if the destination nodes are
"short" documents. String-to-string links would permit the destination to be a string in a
file, but this kind of link requires more planning in the design process.
String-to-lexia links also support implicit linking. Implicit links are generated by the hypermedia software at runtime, for example referential links from a concept to the definition of the concept. To some extent, hypermedia software should generate links from the inflected forms of concepts (e.g. a link from both "matrix" and "matrices"). In
contrast to implicit links, the hypermedia author generates explicit links.
The nodes and links form a network structure in the database. Hypermedia is a database,
which contains pictures, digitised videos, sound and animations in addition to text.
Document Markup Languages
Why do we need document markup? Bryan defines markup as: "Markup is the term used
to describe codes added to electronically prepared text to define the structure of the text
or the format in which it is to appear." There can be two types of markups: specific
markup and generalized markup. Specific markup describes the format of the document
whereas generalized markup describes the structure of the document (headings, citations
etc.). For example, Rich Text Format (RTF) is a specific markup language and TeX,

LaTeX, SGML and HTML are general markup languages.
Standard Generalized Markup Language (SGML)
SGML is an international standard (ISO 8879) for document markup. An SGML
document contains a document type definition (DTD) and a set of elements that are
defined in DTD. Each element has a name and it can be used as a tag in SGML
document.
HyperText Markup Language (HTML)
HTML is an SGML-based markup language for WWW documents. HTML is actually a DTD, a set of definitions of how to interpret HTML tags.
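As a small illustration of how a program might recognize the link tags in an HTML document, the sketch below uses Python's built-in html.parser module to collect the href destinations of a page's anchor tags (the LinkCollector class and the sample page are invented for this example):

    from html.parser import HTMLParser

    class LinkCollector(HTMLParser):
        def __init__(self):
            super().__init__()
            self.links = []

        def handle_starttag(self, tag, attrs):
            if tag == "a":              # <a href="..."> marks the source of a link
                self.links.extend(value for name, value in attrs if name == "href")

    page = '<p>See the <a href="http://www.chennaionline.com/">site</a>.</p>'
    collector = LinkCollector()
    collector.feed(page)
    print(collector.links)              # ['http://www.chennaionline.com/']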


HyTime
HyTime is an international standard for hypermedia documents. It is based on SGML, but it can reference data in almost any format. Only the hypertext link information is required to be in SGML format.
TeX and LaTeX
TeX and LaTeX are also general markup languages in the sense that we only describe
document structures with LaTeX macros. The definition of the macros can be later
changed and the document could be formatted differently.
There is a HyperTeX that adds limited hypertext capability by implementing special keywords so that it supports, for example, URLs. A DVI viewer is then used to display files containing URLs, and the viewer can call a WWW browser to follow the URL.
Rich Text Format (RTF)
The difference between SGML and RTF is that SGML describes the structure of a
document, whereas RTF describes mainly the physical characteristics of the text (text
face, size, etc). However, RTF also includes certain tags that describe document
structure. The author can define a set of styles for the document (heading 1, heading 3,
abstract, etc) that are written into the beginning of the RTF file and have a special tag in
the RTF markup. An RTF file contains all text formatting, pictures and formulas and it is
a standard defined by Microsoft.
OpenMath
The OpenMath consortium is an international group of researchers designing a protocol for exchanging mathematical information between applications. For example, a general-purpose computer algebra system could call a special purpose application to execute an algorithm implemented only in this application. OpenMath tries to preserve semantic information in addition to the structural information of the formula. For example, TeX describes only the visual appearance of a formula, not the semantic structure of the formula. A similar visual representation of mathematical formulas has been planned for SGML. MathLink is a communications protocol for exchanging Mathematica expressions and data between Mathematica and external applications. The difference between MathLink and OpenMath is that MathLink does not define the semantic information of a formula.

OpenMath will include SGML compatibility, so that OpenMath objects can also be
included in SGML documents.
Hypermedia Models
Hypertext Abstract Machine
Hypermedia is divided into:
1) User interface,
2) hypermedia application (client),


3) HAM, the hypermedia "engine" (server) that retrieves link and node information from the database and passes it to the hypermedia application,
4) database.
Dexter Hypertext Reference Model
The purpose of the Dexter model is to provide standard hypertext terminology coupled with a formal model of the important abstractions commonly found in a wide range of hypertext systems. The Dexter model is actually a formal specification of a generic hypermedia system written in Z.
Run-time layer: presentation of the hypertext, user interaction, dynamics
|
(Presentation specifications)
|
Storage layer: database containing network of nodes and links
|
(Anchoring)
|
Within-component layer: the contents/structure of nodes.
Components are the Dexter model's term for what are called nodes, frames, cards and links in other systems.
Hypermedia Systems
Intermedia
A well known hypermedia system is Intermedia, developed at Brown University's Institute for Research in Information and Scholarship (IRIS) between 1985 and 1990. Intermedia is a multiuser hypermedia framework where hypermedia functionality is handled at system level. Intermedia presents the user with a graphical file system browser and a set of applications that can handle text, graphics, timelines, animations and videodisc data.
There is also a browser for link information, a set of linguistic tools and the ability to
create and traverse links. Link information is isolated from the documents and is saved
into a separate database. The start and end positions of a link are called anchors.
World-Wide-Web
The World Wide Web (WWW) is a global hypermedia system on the Internet. It can be described as a wide-area hypermedia information retrieval initiative aiming to give universal access to a large universe of documents. It was originally developed at CERN for transferring research and ideas effectively throughout the organization. Through the WWW it is possible
to deliver hypertext, graphics, animation and sound between different computer
environments. To use WWW the user needs a browser, for example NCSA Mosaic and a
set of viewers that are used to display complex graphics, animation and sound. NCSA
Mosaic is currently available on X-Windows, Windows and Macintosh.


NCSA Mosaic and Netscape


The browser itself can read hypertext documents that are marked with HyperText Markup
Language (HTML). HTML is based on Standard Generalized Markup Language
(SGML), and contains all formatting and link information as ASCII text. HTML
documents can reside on different computers on Internet, and a document is referenced
by a URL (Uniform Resource Locator). A URL is of the form http://computer.org.country/doc.html, where computer.org.country is the name of the computer and doc.html is the search path to the document. In order to create a node for the WWW, an HTTP (HyperText Transfer Protocol) server application is needed. A link in a WWW document is always expressed as a URL. Links can be references to files on ftp servers, Gophers, HTTP servers or Usenet newsgroups.
Netscape is a popular WWW browser developed by Netscape Communications Corp. Netscape 1.1 supports some HTML 3.0 features (tables) and has an interesting API.
Documents that are formatted using RTF can be transferred to HTML by using the converter RTFtoHTML. It generates an HTML document from the original RTF document, plus a set of picture files if the RTF document contained pictures. In the HTML document, links are created to the graphics files. The graphics can be viewed in most environments if the pictures are of type GIF.
Arena
Arena is an experimental WWW browser developed at CERN. It supports HTML 3.0 and
thus is able to display mathematical formulas and tables.
HyperCard, Toolbook and MetaCard
HyperCard is hypermedia authoring software for Macintosh computers. It is based on a card metaphor. A HyperCard application is called a stack or a collection of stacks. Each stack consists of cards and only one card is visible in a stack at a time. A card is displayed in a fixed size window. Hypertext links can be programmed by creating buttons and writing a HyperTalk script for the button.
MetaCard is a similar application to HyperCard, but it runs in Unix environments.
MetaCard offers the ability to create and modify applications using interactive tools and a
simple scripting language.

Interestingly, HyperCard stacks can be imported into MetaCard. However, there are some incompatibilities between HyperTalk and MetaTalk, so advanced stacks don't run without modifications.
LinksWare
LinksWare is a commercial hypermedia authoring software for Macintosh that can create hypertext links between text files created with different word processors. LinksWare uses a set of translators to convert files to its own format (the Claris XTND system). This can make the opening of a file very slow. LinksWare can open files that contain mathematical text, but files may be formatted differently than in the original document; in particular, formulae do not appear to have proper line heights. In addition, it cannot create links to other


applications. However, it can create links to AppleScript command files that can open an
application and execute commands for that application.
Hyper-G
Hyper-G is the name of a hypermedia project currently under development at the IICM.
Like other hypermedia undertakings, Hyper-G will offer facilities to access a diversity of
databases with very heterogeneous information (from textual data, to vector graphics and
digitized pictures, courseware and software, digitized speech and sound, synthesized
music and speech, and digitized movie-clips). Like other hypermedia-systems it will
allow browsing, searching, hyperlinking, and annotation. Like no other big hypermedia
system known today, it will also support automatic indexing and link-generation, a
variety of automatic consistency-checks, a built-in messaging and computer conferencing
system, a special editor allowing the incorporation of animation sequences,
question/answer dialogues, and a number of unorthodox man-machine interfaces. Further,
and maybe most important of all, it is built on the basis of already existing large
databases: hundreds of CAI lessons, a large general-purpose encyclopaedia in
hypermedia form, a number of smaller special-purpose lexica, a data-base of thousands of
pictures, some pieces of digitized sound and movie-clips, and links to other databases in
other networks. A number of smaller spin-off applications are surfacing which are mainly
pursued by IMMIS and have led to research in the area of computerisation of various
aspects of museums.
Designing a Hypermedia
Important questions in designing the hypermedia are:

 Converting linear text to hypertext
 Text format conversions
 Dividing the text into nodes
 Link structures, automatic generation of links
 Are nodes in a database or are they separate files on the file system


 Client-server or standalone
Text indexing is a well-known problem area, and results from it can be used to study automatic generation of links. In principle, a document can be analysed semantically (with the help of AI), statistically or lexically (by computing the occurrences of words). The problem in semantic analysis is that natural language is not easy for the computer to understand. In lexical analysis the problems are, for example, the conflation of words and the recognition of phrases (e.g. conflating the Finnish forms matriisi, matriisin, matriisilla, but not jälki, jälkeen).
Solutions:
 Conflation algorithm
 Stemming algorithm
 Stopword-list
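A toy sketch of this lexical approach, combining the solutions listed above, might look as follows in Python. The suffix rules and names are invented for illustration; a real system would use a proper conflation algorithm such as Porter's stemmer:

    STOPWORDS = {"the", "a", "an", "of", "and"}

    def stem(word):
        # naive suffix stripping: "matrices" and "matrix" conflate to "matrix"
        for suffix, replacement in (("ices", "ix"), ("es", ""), ("s", "")):
            if word.endswith(suffix):
                return word[:-len(suffix)] + replacement
        return word

    def link_candidates(text, concepts):
        # words whose stem matches the stem of a defined concept
        targets = {stem(c): c for c in concepts}
        hits = []
        for raw in text.lower().split():
            word = raw.strip(".,")
            if word not in STOPWORDS and stem(word) in targets:
                hits.append((word, targets[stem(word)]))
        return hits

    print(link_candidates("The matrices of a matrix equation", ["matrix"]))
    # [('matrices', 'matrix'), ('matrix', 'matrix')]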
Hypermedia Applications
Hypermedia is applied in many areas, especially in education and technical
documentation.
Future Directions of Hypermedia
There is a trend for hypertext features to appear in ordinary applications like word processors, spreadsheets etc. This is called hypertext functionality within an application. Good examples of this are Microsoft Internet Assistant, MathBrowser and MatSyma. Eventually, this will lead to system software containing support for hypertext features: nodes, links and browsing.

4.2.2 Sound
Physically, sound is vibration of some medium. The word is also used to describe the
sensation of this vibration when received by the ear.
Sound is created when some object vibrates. Consider a guitar string that has been plucked. The string is stretched in one direction and then the elasticity of the string forces it back to its original straight position. The momentum of the string carries it past the original position in the opposite direction. This back and forth motion continues until the energy has dissipated. As the string moves, it pushes air molecules in front of it and compresses them together, creating a high-pressure area. Also, air molecules behind the string are drawn into the space vacated by the string, creating a low-pressure area.


Air itself is elastic. The high-pressure area pushes the molecules next to it and this sends
a wave of compression outward from the string. As the string reverses direction, a low-
pressure area is sent out following the high-pressure area. This flow of high and low-
pressure areas continues to move away from the vibrating string at a high velocity,
spreading out in all directions. When these sound waves reach an object, that object is
also forced to vibrate in a pattern closely resembling the vibration of the string that
originally created the sound. Thus, the sound is transmitted from the source to the listener's ear.
How do we represent it?
Sound can be represented as a graph of the air pressure created by the vibrating object
over time. By convention, high pressure is represented by positive numbers (above the
centerline) and low pressure by negative numbers. The centerline itself represents normal air pressure with no sound.
An object vibrating more rapidly produces waves that are shorter and closer together. Slower vibration results in longer waves spaced farther apart. This change in
vibration speed is perceived as pitch; faster vibrations are higher pitches and slower
vibrations are lower pitches.
An object that vibrates more forcefully will produce more pressure and will result in
waves that are "taller" on the graph. This is perceived as loudness.
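Both observations can be made concrete with a few lines of Python: the frequency of a pure sine tone models the vibration speed (pitch), and its amplitude models the pressure (loudness). This is only an idealized sketch of a single pure tone:

    import math

    def pressure(freq_hz, amplitude, t_seconds):
        # air pressure, relative to the centerline, of a pure tone at time t
        return amplitude * math.sin(2 * math.pi * freq_hz * t_seconds)

    print(pressure(440, 1.0, 0.0005))   # A4: 440 vibrations per second
    print(pressure(880, 0.5, 0.0005))   # twice the pitch, half the loudness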
Converting sound energy
Energy can be converted from one form to another. Electricity can be converted to light,
chemical energy can be converted to heat, and so forth. Sound waves are energy and they
can be converted to different forms as well.
Consider a thin membrane attached to a coil of wire suspended in a magnetic field. When
sound waves make contact with this membrane, it will vibrate. This vibration moves the
coil of wire back and forth through the magnetic field and this produces a movement of
electrons in the wire. This movement is electricity and the pressure (voltage) of the
electricity will be proportional to the pressure of the sound wave. Such a device is called
a microphone and it is commonly used to pick up sound waves and convert them to
electrical energy.


A similar device can be used to convert this electrical energy back into sound by having
the electricity flow through another coil and making this coil move in another magnetic
field. The coil is attached to a membrane that will vibrate against the air and set up sound
waves similar to the original sound. This device is called a loudspeaker.
Typically, the electrical energy put out by a microphone is insufficient to move a
loudspeaker enough to be heard, so an additional device is used to amplify the level of
the signal. These three devices (microphone, amplifier, and loudspeaker) can be used to
make a quiet sound loud enough to be heard over a large room, or to carry sound to
distant locations.
Recording
It is often desired to preserve sound and recreate it later. Processes for recording sound
waves for later playback were developed to accomplish this.
Analog methods
Early methods for preserving sound were analog. This means that some pattern was
created by the sound that contained a form similar to the sound wave. The electrical
waveform from the microphone is used to vibrate a cutting device or create a magnetic
pattern. The goal was to create a recording of the original sound in some medium that follows a pattern analogous to the original sound wave.
Analog media
The earliest device used for recording sound was the phonograph. This device created a
groove in the medium that had a shape modulated by the sound wave. Phonograph
records are played back by having a needle follow the groove. The needle will vibrate in
the same pattern that was used to cut the groove, and this vibration could be amplified
and output through loudspeakers. Another common analog recording device is the tape
recorder. A thin strip of plastic (tape) coated with a magnetic material is passed by an
electromagnet that is modulated by the sound wave. This creates magnetic patterns on the
tape that may be reproduced by reversing the process; the tape is drawn past a coil and
the changing magnetic patterns induce an electric current, which is then amplified. These
recording techniques have several problems. At each step, sound to microphone,
microphone to electricity, electricity to magnetism or groove, and then back to sound
afterwards, errors can accumulate. The microphone diaphragm may not vibrate in exactly
the same pattern as the sound wave. There may be outside interference in the cables. But
the majority of the problems are in the recording medium itself. If the groove of the
record is cut too slowly, then there is not enough room to accurately represent the detail
of the higher frequencies. If the groove is cut too fast, then noise from the record rubbing
against the needle becomes apparent. There may be spots in the plastic records that are
malformed. Dust can accumulate and cause a hissing noise. Similar problems also exist
for magnetic tape. Even in the best possible circumstances, the quality of the sound
degrades with each step since the physical media used to preserve it contains flaws and
imperfections. If the recording is copied to new media (such as for editing or
reproduction/marketing) then these flaws accumulate.


Digital methods
Since most of the problems with recording sound accurately are due to the medium used
for analog recording, methods were sought to prevent these problems. The single largest
problem with analog recording is that the information being recorded must be represented
as an analog to the original sound wave. What is needed is a different way to represent
the sound; a way that doesn't suffer from the flaws of the recording media.
With the advent of the computer age, it became quite easy to represent waveform information as a series of numbers rather than as an analogous pattern. The voltage level of
the waveform could be measured, and "samples" taken every so often. These
measurements were numerical (digital) and these numbers could be converted to pulses
that could be more reliably recorded than analog waveforms. To play back the digitally recorded sound, the numbers are read back from the recording medium and the voltage of
an electrical signal is varied in precisely the same way as the original signal.
The numbers representing the strength of the waveform are set up on a scale from -32767
to 32767. This gives a fine enough gradation that listeners can't tell the difference
between digitally recorded sound and analog recordings. This range of numbers can be
represented in binary (base 2) with 16 bits (a bit is 0 or 1, off or on). Since a bit is either
on or off, it is much more reliable to read it from a tape than an analog signal. Even a
large amount of noise or imperfections on the medium won't interfere with distinguishing
between a 1 and a 0. This avoids the single biggest source of poor quality that had been
present with analog recording.
Since sound waves vibrate rapidly, the waveform must also be sampled very rapidly. The more often the waveform is sampled, the closer the reproduction will be to the original.
Of course, as the waveform is sampled more often, more data must be stored. A sample
rate must be chosen that is fast enough to accurately represent the sound without resulting
in more data than necessary. Experimentation determined that sampling just over twice
the rate of the highest frequency to be reproduced is sufficient. Humans can hear a
maximum frequency of 20,000 cycles per second (20,000 Hertz). A standard sample rate
of 44,100 Hertz was chosen.
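These figures make for a simple worked example. The sketch below (the helper names are our own) quantizes an analog voltage onto the 16-bit scale and computes the raw data rate of one mono channel at the standard rate:

    SAMPLE_RATE = 44100      # just over twice the 20,000 Hz limit of human hearing
    BITS_PER_SAMPLE = 16     # sample values from -32767 to 32767

    def quantize(voltage, full_scale=1.0):
        # map an analog voltage in [-full_scale, full_scale] to a 16-bit value
        v = max(-full_scale, min(full_scale, voltage))
        return round(v / full_scale * 32767)

    print(quantize(0.5))                          # 16384: half of full scale
    print(SAMPLE_RATE * BITS_PER_SAMPLE // 8)     # 88200 bytes per second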


Digital Audio
Preparing Digital Audio Files:
Preparing digital audio files is fairly straightforward. If you have analog source material -
music or sound effects that you have recorded on analog media such as cassette tapes -
the first step is to digitize the analog material by recording it onto computer-readable
digital media. In most cases, this just means playing sound from one device (such as a tape recorder) right into your computer, using appropriate audio digitizing software.
You want to focus on two crucial aspects of preparing digital audio files:
 Balancing the need for sound quality with your available RAM and hard disk
resources.
 Setting proper recording levels to get a good, clean recording.
Setting Proper Recording Levels:
A distorted recording sounds terrible. If the signal you feed into your computer is too
“hot” to handle, the result will be an unpleasant crackling or background ripping noise.
Conversely, recordings that are made at too low a level are often unusable because the
amount of sound recorded does not sufficiently exceed the residual noise levels of the
recording process itself. The trick is to set the right levels when you record.
Any good piece of digital audio recording and editing software will display digital meters
to let you know how loud your sound is. Watch the meters closely during recording, and
you’ll never have a problem. Unlike analog meters that usually have a 0 setting somewhere in the middle and extend up into ranges like +5, +8, or even higher, digital meters peak out at 0. To avoid distortion, do not cross over this limit. If this happens, lower your volume and try again. Try to keep peak levels between –3 and –10. Any time you go
over the peak, whether you can hear it or not, you introduce distortion into the recording.
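What such a digital meter reports can be sketched in a few lines of Python: the peak level in decibels relative to full scale, where 0 dB is the clipping point. This is an illustration only, not any particular editor's algorithm:

    import math

    def peak_db(samples, full_scale=32767):
        peak = max(abs(s) for s in samples)
        return 20 * math.log10(peak / full_scale) if peak else float("-inf")

    print(round(peak_db([12000, -16000, 9000]), 1))   # about -6.2 dB: safe
    print(round(peak_db([32767]), 1))                 # 0.0 dB: on the limit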
Editing Digital Recordings:
Once a recording has been made, it will almost certainly need to be edited. Apple’s QuickTime Pro, shown in the figure, provides a basic look at a sound file’s structure and allows for primitive playback and editing. A more serious sound editor is Sonic Foundry’s Sound Forge for Windows, shown in the figure with its special effects menu. With this tool you can create professional sound tracks and digital mixes.
Trimming:
Removing “dead air” or blank space from the front of a recording and any unnecessary
extra time off the end is your first sound-editing task. Trimming even a few seconds here
and there might make a big difference in your file size. Trimming is typically accomplished by dragging the mouse cursor over a graphic representation of your recording and choosing a menu command such as Cut, Clear, Erase, or Silence.
Spacing and Assembly:
Using the same tools mentioned for trimming, you will probably want to remove the extraneous noises that inevitably creep into a recording. Even the most controlled studio voice-overs require touch-up. Also, you may need to assemble longer recordings by cutting and pasting together many shorter ones. In the old days, this was done by splicing and assembling actual pieces of magnetic tape.
Volume Adjustments:
If you are trying to assemble ten different recordings into a single sound track, there is little chance that all the segments will have the same volume. To provide a consistent
volume level, select all the data in the file, and raise or lower the overall volume by a
certain amount. Don’t increase the volume too much, or you may distort the file. Best is
to use a sound editor to normalize the assembled audio file to a particular level, say 80
percent to 90 percent of maximum, or about – 16dB (Decibel). Without normalizing to
this rule-of-thumb level, your final sound track might play too softly or too loudly. Even
pros can leave out this important step. Sometimes an audio CD just doesn’t seem to have
the same loudness as the last one you played, or it is too loud and you can hear clipping.
Figure shows the normalizing process at work in Sound Forge.
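Normalizing amounts to a single gain computation: find the peak, then scale every sample so the peak lands at the chosen fraction of full scale. A minimal sketch, assuming 16-bit samples:

    def normalize(samples, target=0.9, full_scale=32767):
        peak = max(abs(s) for s in samples)
        if peak == 0:
            return samples
        gain = target * full_scale / peak
        return [round(s * gain) for s in samples]

    quiet = [1000, -2500, 1800]
    print(normalize(quiet))   # [11796, -29490, 21233]: peak now at ~90% of full scale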
Format Conversion:
In some cases, your digital audio editing software might read a format different from that
read by your presentation or authoring program. Most Macintosh sound editing software will save files in SND and AIF formats, and most authoring systems will read these
formats. In Windows, most editing software writes WAV files.


Resampling or Downsampling:
If you have recorded and edited your sound at a 16-bit resolution and a high sampling rate but are using lower rates and resolutions in your project, you must resample or downsample the file. The process will save considerable disk space.
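In its crudest form, downsampling simply keeps every Nth sample, as the sketch below shows. Real resamplers filter the signal first to avoid aliasing; this is only an illustration of the size saving:

    def downsample(samples, factor=2):
        # keep every 'factor'-th sample: half the data when factor is 2
        return samples[::factor]

    original = list(range(10))      # pretend these are 44.1 kHz samples
    print(downsample(original))     # [0, 2, 4, 6, 8] -> a 22.05 kHz version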


Fade-ins and Fade-outs:


Most programs offer enveloping capability, useful for long sections that you wish to fade
in or fade out gradually. This enveloping is important to smooth out the very beginning
and the very end of a sound file.
Equalization:
Some programs offer digital equalization (EQ) capabilities that allow you to modify a
recording’s frequency content to sound brighter or darker.
Time Stretching:
More advanced programs let you alter the length of a sound file without changing its pitch. This feature can be very useful, but watch out: most time-stretching algorithms will severely degrade the audio quality of the file if the length is altered by more than a few percent in either direction.
Digital signal Processing (DSP):
Some programs allow you to process the signal with reverberation, multitap delay,
chorus, flange, and other special effects.
Reversing Sounds:
Another simple manipulation is to reverse all or a portion of a digital audio recording.
Sound, particularly spoken dialog, can produce a surreal, otherworldly effect when played backward.
Audio File Formats
Each type of computer platform and operating system uses a unique file storage format to
record and play back digital audio. With the emergence of the World Wide Web, and the
need to communicate among different operating systems, several new audio formats have
emerged that are playable on many computer platforms and auxiliary programs give the
user of one platform the ability to play back sounds created on another operating system.
There are far too many audio file formats to list here - only some of the most common, which you may use in creating audio for your Web pages, are listed below:
 WAV - The Microsoft Windows WAV format is used in the Windows 3.1, 95, 98 and NT operating systems for recording and playback of recorded sounds. The advantage of using a “WAV” file is the ease of making the file on your part - a recorder/player for this type of file is already located on the user's machine (if it is a Windows computer). The disadvantages are that many people on the Internet will be unable to play the file if they are using another operating system, and that WAV files are typically larger, taking longer for the end user to download.
 AU - “AU” is a standard for UNIX computers that gained widespread use for sounds on the Internet a few years ago when the World Wide Web first emerged. Most web browsers include the capability to play “AU” files directly, so it makes the format a good choice for Internet work that will be received by a larger Net audience. This type of file, like the Microsoft WAV format, can be larger than other types of audio files, so it is best used for short sound clips for effective download times. When using AU files on a Web page, you will need to use an outside program (such as the shareware package Cool Edit or the commercially available program Sound Forge) to load the WAV you have recorded and convert it to AU format.
 Real Audio (RA) - Real Audio and Real Video are formats that were developed by an independent company, Progressive Networks. They were among the first audio and video formats specifically designed for Internet use and have gained widespread use. The files are played using a free “plug-in” that is available from their Web site, and you can obtain a free encoder to convert files from WAV to Real Audio format. Real Audio and Real Video files use compression schemes to make the files small for Internet use - some loss of quality is apparent, but you can control what type of compression you wish to use for the file, depending on the target audience and recorded contents.
 MP3 (MPEG Audio) – MPEG audio is a standard for high-quality audio and video files that has gained widespread use. MPEG (Motion Picture Experts Group) is a family of compression formats and audio/video storage formats developed by a cooperative effort under the joint direction of the International Standards Organization (ISO) and the International Electro-Technical Commission (IEC). MPEG compression offers high quality with a smaller file size. CD-quality MP3 sound files can still be large, however, and to use them the end user will have to have an audio player on their computer capable of handling the format. MP3 has found a receptive audience among musicians on the Net, who offer samples of their works at various Internet sites, and the music industry has been lobbying for standards in the format that will prevent illegal copying of copyrighted works.

4.2.3 Images
Still Images
Major components of most multimedia productions are images as opposed to movie or sound files. There are basically two types used: Bit-mapped and Vector graphics:


Bitmapped Images or pixel based graphics.


According to Webster’s New World Computer Dictionary, a bit map is “the representation of a video image stored in a computer’s memory as a set of bits. Each picture element (pixel), corresponding to a tiny dot on screen, is controlled by an on or off code stored as a bit (1 for on, 0 for off) for black and white displays. Color and shades of grey require more information. The bit map has a grid of rows and columns of the 1s and 0s that a computer translates into pixels to display on-screen.”
When we say video image, this means an image capable of being displayed on a video or computer monitor, not necessarily an image derived from a traditional “video” source such as a video camera, VCR or the like. Bit-mapped images can be derived from a number of
sources:
Scanning:
This implies the use of a peripheral device such as a flatbed scanner, a drum scanner or a slide/negative scanner used to “digitize” graphic art or photographs into a bit-mapped image file.
Digital photography:
Using a digital still camera to capture a bit-mapped image
file directly without using conventional photographic film
as an intermediary step.
Paint or graphics programs:
Using computer-based graphics, illustration or “paint”
programs it is possible to generate graphical images,
which are in a bit-mapped form. These can be exported in
a variety of image formats for use in multimedia
authoring programs. In effect the user “draws” and
“paints” on-screen using either a mouse or a digitizing
tablet to input the data. Expect in the hands of an
extremely skilled artist images generated in this manner
tend to be flat, two-dimensional an distinctively graphical
in nature. It is therefore more common to use these
applications to modify images obtained from other
sources such as scanning or CGI (Computer Generated Images).

Computer Generated Images:
These are images generated by 3D modeling and rendering programs. The artist builds a model in a virtual world within the computer. Sophisticated rendering algorithms within the software then produce high quality rendered images, which can approach photorealism. Because of the unlimited level of control over image quality, lighting and creative input, many images traditionally obtained by photography are now created by computer artists using computer graphics software and techniques. Again these images can be further modified or “retouched” in graphics editing programs.
Vector based graphics
Vector graphics are also referred to as object oriented graphics and consist of mathematically exactly defined curves (basic geometrical forms: primitives) and lines that are described as “vectors”.
For example, in order to define a line, only three pieces of information are necessary:
 The coordinates of the starting point (Origin),
 The coordinates of the end point (Vector top) and
 The line width (Attribute).

Similarly, the center coordinates (origin), the radius (vector top) and the line width (attribute) suffice for a circle. Such graphics are scalable: simply by modifying the measurements, the object's size is arbitrarily variable. Also vector graphics are resolution independent; they are displayed or printed according to the resolution settings of the respective printer or computer screen.
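This is the reason vector graphics scale without loss: resizing only multiplies the stored numbers. A minimal sketch of the line primitive described above (the class and method names are our own):

    from dataclasses import dataclass

    @dataclass
    class Line:
        x1: float; y1: float    # origin
        x2: float; y2: float    # vector top (end point)
        width: float            # attribute

        def scaled(self, factor):
            return Line(self.x1 * factor, self.y1 * factor,
                        self.x2 * factor, self.y2 * factor,
                        self.width * factor)

    print(Line(0, 0, 10, 5, 1.0).scaled(3))
    # Line(x1=0, y1=0, x2=30, y2=15, width=3.0) -- still perfectly smooth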

Advantages of vector graphics:
 Vector graphics require less memory space than Bitmap images,
 Vector graphics are optically exact; the resolution is irrelevant,
 Vector graphics are scalable without loss.
 Vector graphics are ideal for storage of images containing line based information
or elements, or images which can easily be converted into line based information
(e.g. text)


Disadvantages of vector graphics:


 Not optimal for complex images with colors which change every pixel (e.g.
Photographs),
 Appearance is dependent on the respective application program
 Identical data cannot always be identically interpreted.
The output quality is naturally only optimum with vector output devices (Plotter).
Vector Image formats
The following are examples of commonly used vector image file formats:
 EPS-Encapsulated PostScript
 EPS (AI)-Encapsulated PostScript for Adobe Illustrator.
 Scalable EPS
 HP-GL/2-A plotter/printer page description language developed by Hewlett
Packard, which can be used as a transfer file format.
 DXF-Drawing eXchange Format. Developed by AutoDesk for their AutoCAD program, DXF has become a de facto industry standard for data transfer between CAD programs. It can contain both 2D vector information as well as 3D surface information and comes in a variety of revisions (DXF v10, v11, v12, v13, v14 & v2000). It is not normally used in the earliest stage of 3D computer graphics generation.
 DWG-Drawing Format. Developed by Autodesk for its AutoCAD program. Although less widely used than DXF, DWG has become a common standard for data transfer between some CAD programs. It can contain both 2D vector information as well as 3D surface information and comes in a variety of versions. Like DXF, DWG is not normally used as a data format within multimedia applications.
 WMF- Windows Meta File is closely associated with MS Windows. It is employed, for example, for the exchange of graphics using a clipboard.
 CGM – Computer Graphics Metafile originates from the standardization of the graphics core system GKS. It is especially designed for the exchange of pictures between different systems and contains all the commands and parameters necessary for their reproduction.
 PICT – PICT files are Apple Macintosh specific and are based on the Macintosh internal graphic protocol QuickDraw.
Bitmap images are made of individual dots called pixels (picture elements) that are arranged and colored differently to form a pattern. The individual squares that make up the total image can be seen when zoomed in. However, from a greater distance the color and shape of a bitmap image appear continuous. Since each pixel is colored individually, you can easily work with photographs with many colors and can create photo-realistic effects such as shadowing and enhancing color by manipulating select areas, one pixel at a time.
Advantages of bitmap images:
 Manipulation of pixels is very simple, singularly or in groups (for example partial
color changes etc.).
 When the output device works directly with pixels, the bitmap image can be optimally constructed (printer).
 A particularly effective medium for the realistic representation of objects (in contrast to vector graphics).
Bitmap programs are ideal for retouching photographs, editing images and video files, and creating original artwork. A variety of changes to photographs can be made, such as adjusting the lighting; removing scratches, people, and things; swapping details between images; adding text and objects; adjusting color; and applying combinations of special effects.
Disadvantages of bitmap images:
 Each pixel must be represented individually.
 Smooth curves are represented through approximation of pixels on the raster grid, producing “indents” and “steps”: aliasing.
 Large files, especially when many colors are used.
Bitmaps are not easily reduced or enlarged: The individual pixels are simply duplicated
by enlargement, changing the picture or the proportions; while reducing the image merely
deletes individual pixels. This can best be seen by reducing and re-enlarging an image to
its original size, then comparing the result with the original image.
Vector-based versus Bitmap Images
As stated before, vector-based images are resolution independent. You can easily resize vector images to a thumbnail sketch or a billboard-sized graphic. They keep their smoothness when resized and do not lose detail and proportion. Smooth curves are easy
to define in vector-based programs and they retain their smoothness and continuity even
ANNAMALAI
ANNAMALAI UNIVERSITY
UNIVERSITY
when enlarged. You can also change vector-based images into bitmap formats when
needed. On the other hand, bitmap images provide photo-realistic images that require
complex color variations. They are not easily scalable though. The disadvantage of
bitmap images comes when you want to resize the picture.
Increasing the size of bitmap has the effect of increasing individual pixel, making lines
and shape appear rough and chunky. Reducing the size of a bitmap also distorts the
original image because pixels are removed to reduce the overall image size. Moreover, since a bitmap image is created as a set of arranged pixels, its parts cannot be manipulated individually.


Bit-mapped Image formats


The following are examples of commonly used bit-mapped image file formats:
 TIFF-Tagged Image File Format. A bit-mapped format used widely on multiple platforms due to its cross-platform compatibility. It can be used in both compressed and non-compressed forms.
 PICT-Standard Macintosh Bit-mapped format able to be opened by some
windows software.
 JPEG-Acronym for Joint Photographic Experts Group. Widely used for web
distribution due to its extremely efficient compression characteristics.
 TARGA (TGA)-Multi platform bit-mapped format.
 GIF-Graphics Interchange Format. Developed by CompuServe and widely used for bit-mapped images on the Internet due to its small file size characteristics. It incorporates a patented lossless image compression algorithm called LZW.
 BMP-A common Windows bit-mapped format
 Image- A proprietary bit-mapped format used by Electric Image Animation
System for both stills and movie files.
 PhotoShop- A proprietary bit-mapped format native to Adobe PhotoShop. It is a cross-platform, multi-layered 32-bit format now used by a growing number of third party applications, particularly due to its ability to incorporate a virtually unlimited number of layers and channels within a single file.
 MAC- Macintosh Paint is the standard graphic format on Apple Macintosh computers.
 RAS- RAS is the graphic format of UNIX Workstations, especially Sun
Microsystems

4.2.4 Color
The color of an object is the result of certain wavelengths of electromagnetic radiation being absorbed by that object when light falls upon it. The eye receives the remaining wavelengths of electromagnetic radiation reflected off the object. When the light enters the eye it falls onto the retina, where special cells called cones and rods are stimulated and transmit corresponding signals to the brain. The brain interprets the signals coming from the retina, so our sense of vision is therefore subjective. There are three types of
cone, each stimulated by different wavelengths of light corresponding approximately to
red, green and blue. We perceive different colors by the addition of different strengths of
signals of red, green and blue coming from the cone cells.
Screen color is created in a similar way to cone cells of the eye by adding varying
intensities of the color component to each pixel and is referred to as additive color. Black
is created by the absence of any color component and white is created by the maximum
intensity of the color components. Orange, for instance, is created by the addition of red
and green with no blue. Printed color on the other hand is created by taking color away
and is referred to as subtractive color. White is created by the absence of color (just the
white paper) and black is created by the addition of the maximum values of the color
components. Subtracting different amounts of the three-color components creates all the
other color.
There are several models for representing the color in an image. The most common of
these is the Red, Green and Blue or RGB model, where the color of each pixel is made up
of values for each of the three colors. Other commonly used color models are:
HSB- Hue (color), Saturation (intensity of color) and Brightness (amount of
black and white mixed with color).
HSL – Same as above only it refers to Brightness as Lightness
L*a*b – Luminance is the brightness of the color (from white to black); ‘a’
defines a color range between green and red and ‘b’ defines a color between
blue and yellow.
CMYK – An example of a subtractive color model used in printing, where Cyan, Magenta, Yellow and Black are used to produce the color separations used in the printing process.

Color Depth
Most modern computers equipped with color monitors are capable of creating and displaying millions of colors. The images are generated using the RGB color system. This system uses three color channels (Red, Green and Blue), each displaying 256 intensity levels, which requires 8 bits of color information per channel. Three channels therefore require 24 bits of color information. This is referred to as 24-bit color depth; it allows for up to 16,777,216 possible colors for any given pixel (determined by multiplying 256*256*256 intensity levels).
24-bit color depth is sometime referred to as “millions of colors”. When Red, Green and
Blue light is mixed together it is therefore possible to create all the colors of the visible
spectrum. On an RGB monitor when all three channels are at maximum intensity the
resulting color is white. When all their channels are at the minimum intensity the
resulting color is black.
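The arithmetic behind these figures is simple enough to show directly. The sketch below packs three 8-bit intensity levels into a single 24-bit pixel value (pack_rgb is our own helper, not a standard function):

    def pack_rgb(r, g, b):
        # each channel holds an intensity level from 0 to 255
        return (r << 16) | (g << 8) | b

    print(256 ** 3)                      # 16777216: "millions of colors"
    print(hex(pack_rgb(255, 255, 255)))  # 0xffffff: all channels at maximum -> white
    print(hex(pack_rgb(0, 0, 0)))        # 0x0: all channels at minimum -> black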


Less sophisticated systems utilize video display systems with a lower color depth:

System's Color Depth    Number of colors           Limitations
16-bit color depth      65,536 colors              grainy, dithered color
8-bit color depth       256 colors or grayscale    minimal color
1-bit                   black and white            no grayscale

Resolution
One of the more confusing concepts in still imaging is the relationship between image size and resolution. There are a number of different terms to be considered:
Screen Resolution- Most computer screens (either CRT or LCD/TFT) operate at a “native” screen resolution, which is optimized for the dot pitch, or pixel pitch, of the screen aperture grid for CRTs or the number of pixel elements in LCD/TFT screens. The refresh rate and color depth of the graphics card also play a major role in determining screen resolution. For example, many CRT screens operate at 72dpi (dots per inch). This means that there are 72 screen pixels per screen inch - in effect a predetermined number of screen pixel elements in any given “square inch” of screen material.
Graphics Display Resolution- Working at different graphics resolutions (e.g. VGA, SVGA, XGA or XVGA) determines how many pixels are contained in the graphics array of the monitor. But this does not affect the monitor's actual screen resolution, only the resolution of the display field array projected on to the screen. Multiscan monitors may project different display resolutions, but depending on the actual screen resolution these may be sharp or slightly fuzzy. Images projected at higher density appear sharper but smaller in overall size.
a. Scan resolution- When an image is scanned it can be of any resolution which the scanner can handle, e.g. 300dpi, 600dpi or even 2400dpi and beyond. Regardless of scanning resolution, these images are still displayed on screen at the resolution determined by the application, graphics card and monitor. For example, an image scanned at 600dpi which is displayed at “full size” on a monitor screen will look no sharper than one scanned at 300dpi and displayed at the same size on the same screen. Only when you “zoom in” on both images will you notice the higher image quality of the 600dpi image.
b. Print Resolution- If you were to print out both images at the same size on a high-resolution printer, the difference would be readily noticeable. For multimedia productions, producers tend to work at 72dpi because this most closely approximates normal display or projection resolution. In the end it is the total number of pixels which is important. Higher “scan resolutions” should only be used at the point of scanning and then quickly converted to 72dpi for any multimedia work.
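The size effect described here is simple arithmetic, as a short calculation of the same 600-pixel-wide scan at screen and print resolutions shows:

    pixels = 600
    print(pixels / 72)    # about 8.3 inches wide on a 72dpi screen
    print(pixels / 300)   # 2.0 inches wide when printed at 300dpi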
c. Anti-aliasing- Aliasing is an artifact inherent in digital images. Whenever there is
a transition of contrast between two colors or shades of gray there is a potential
for “stair shaped” jagged edges along any diagonal transition. These do not occur
on truly vertical or horizontal transitions. The jagged edge is generated by the
square nature of the pixels which make up the image. Anti-aliasing refers to software algorithms which automatically remove or reduce the visual impact of aliasing. This is usually achieved by sampling adjacent pixels and filling in the pixels adjacent to the transition, subtly blurring the image.
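A toy sketch of the idea: each output pixel averages a 2x2 block of a higher-resolution image, softening the stair-step edge. Real anti-aliasing filters are more sophisticated; this illustration only shows the averaging principle:

    def antialias(image):
        # image: 2D list of gray levels with even width and height
        h, w = len(image), len(image[0])
        return [[(image[y][x] + image[y][x + 1] +
                  image[y + 1][x] + image[y + 1][x + 1]) // 4
                 for x in range(0, w, 2)]
                for y in range(0, h, 2)]

    hard_edge = [[0, 0, 255, 255],
                 [0, 0, 255, 255],
                 [0, 255, 255, 255],
                 [0, 255, 255, 255]]
    print(antialias(hard_edge))   # [[0, 255], [127, 255]]: the step is softened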
d. Applications- For bit-mapped images Adobe Photoshop is far and away the world's most widely used image editing program. Available on both Mac and Windows, it is the industry standard. Other programs like Live Image, Corel, Canvas, Painter etc. attempt to compete by standardizing on all of the third party Photoshop plug-ins. These programs can be used both to scan (with the appropriate plug-in scanner driver) and to edit bit-mapped images. This means being able to digitally retouch, color correct and composite.
e. Modes- Because of the on-screen nature of multimedia, all work is normally processed in RGB mode. CMYK is not normally used except for pre-press work (printing industry).
f. Conversion- There are a number of utility programs (some shareware, some freeware) which allow image files of differing formats to be converted from one to the other. In addition, many image editing programs (such as PhotoShop) will allow the user to import or open formats other than native PhotoShop files and resave them in a variety of other formats. As a general rule it is best to maintain original documents as “working files” in PhotoShop format and then save copies of the completed work as a more compressed version. This may mean smaller size, lower resolution, and higher compression settings. If the work ever needs to be revised you still have the original PhotoShop file to fall back on.
g. Dithering - blending colors to modify colors or produce new ones.



h. Alpha Channels- As previously described, full color photo-realistic images used in multimedia productions contain 24-bits of color information per pixel (color
depth). This is enough to generate over sixteen million individual colors. If we are
working in RGB mode there are three channels for the color information (Red
Green and Blue channels) each of 8- bits color depth.
However many graphics and multimedia applications allow us the opportunity to
work in 32-bit color depth. There is still the 24-bits for the three-color channels plus
an extra 8- bit grayscale channel commonly referred to as an Alpha Channel. This
extra channel is normally invisible to the eye, but can contain extra information for use by the application. It can be used to describe transparency, masking, bump-mapping or compositing information. Alpha channels are widely used in digital video production where, for example, a foreground logo can be generated by a 3D animation program over an Alpha channel background. When imported into a video editing or compositing application, the Alpha channel can be selected to disappear to reveal an alternative background. This effect is sometimes called keying or matting.

Not all image formats support Alpha channels, so if you need this capability make sure you choose a format that will suit your needs. You should note that a 32-bit image is not visually superior to a 24-bit image. The additional information is technical data, not increased visual or color quality.
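The keying effect reduces to a weighted average per color channel: the 8-bit alpha value decides how much foreground shows through. A minimal sketch, with invented pixel values:

    def composite(fg, bg, alpha):
        # alpha: 0 (transparent: background shows) .. 255 (opaque foreground)
        return tuple((f * alpha + b * (255 - alpha)) // 255
                     for f, b in zip(fg, bg))

    logo_pixel = (255, 0, 0)     # red foreground logo
    scene_pixel = (0, 0, 255)    # blue alternative background
    print(composite(logo_pixel, scene_pixel, 255))   # (255, 0, 0): logo fully opaque
    print(composite(logo_pixel, scene_pixel, 64))    # (64, 0, 191): mostly background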
i. Hardware considerations
Most multimedia professionals insist on working in full color (24-bit color depth) and at least SVGA resolution (800x600 pixels). This requires a suitable monitor and an appropriate video graphics card (or Graphics Accelerator Board) with at least 2MB of video RAM (VRAM). Higher resolutions will require more VRAM if sufficient color depth is to be maintained. It is not uncommon to employ up to 32 MB of VRAM for a high-end
multimedia system. Most personal computers rely on VRAM on board a video graphics
card (or Graphics Accelerator Board) to minimize the memory bottleneck between the PC and the graphics memory controller. VRAM is a special type of DRAM that has a serial interface, which allows data to be accessed simultaneously by the PC interface and the graphics side. These video cards are usually PCI (Peripheral Component Interconnect)
boards.


The most common computer monitor video standards are as follows:

Video standard   Horizontal   Vertical   VRAM*
VGA              640          480        1MB
SVGA             800          600        2MB
XGA              1024         768        3MB
SXGA             1280         1024       4MB
SXGA+            1400         1050       5MB
SXGA-Wide        1600         1024       5MB
UXGA             1600         1200       6MB
HDTV             1920         1080       7MB
UXGA-Wide        1920         1200       7MB
QXGA             2048         1536       10MB

*Amount of VRAM required to display in 24-bit color

(Note: Traditionally VGA, SVGA and XGA were not associated with 24-bit color displays, though some systems can be so configured.)
Still images comprise a major component of contemporary multimedia production. This
is likely to continue for the foreseeable future. However, since the demands and limitations
placed on multimedia systems by motion image file formats far outstrip those imposed by
still image formats, systems must naturally be designed to handle the more rigorous
requirements of moving media. Once these needs are met the lesser demands for still
images will be satisfied by default.
Color is a central part of our lives. People look at and react to different colors, tints, and shades thousands of times every day. People rely on colors to convey meanings for many things. Color has both emotional and psychological impacts. Colors can capture our attention and cause us to react based on our own experiences and beliefs. Web designers must be very familiar with the effects of colors.
Basic Rules of Color Theory
1. The primary colors are red, yellow, and blue. All other hues are derived from these colors.
2. The secondary colors are orange, violet, and green.
3. The intermediate or tertiary colors are between the primary and secondary colors: red-orange, yellow-orange, yellow-green, blue-green, blue-violet, red-violet.
4. The warm colors range from red-violet to yellow. Orange is considered the
extreme of warm. Warm colors are vibrant and active.
5. The cool colors range from violet to green-yellow. Blue is considered the extreme of
cool. Cool colors are relaxed and subdued. Creative color selection starts with a few
basic color schemes.
6. Analogous colors are any three consecutive color segments on the color wheel. For
example, blue, blue-violet, and violet are analogous colors. Analogous colors
produce a palette that blends well and conveys a feeling of harmony.
7. Complementary colors use two hues that are directly opposite. This color selection is
very powerful and provides high contrast, but it can sometimes be quite jarring and
hard to view over long periods of time.
8. Split complementary color consists of one hue and the two segments adjacent to its
complement. This color scheme is vivid and not too overpowering. For example, the
green, red-violet and red-orange segments are split complementary colors.
9. Monochromatic colors use all the hues of one color segment. A monochromatic color
scheme conveys harmony through gradual tone changes in the single-hue segment.
10. Triadic colors use three colors that are an equal distance from each other. These can
include the primary, secondary and intermediate colors. This color scheme gives a
sense of balance between the colors. For example, the blue-violet, red-orange, and
yellow-green segments make triadic colors.
Functions of colors
1. Effect of Color on Mood
Color can control or affect the look and feel of a multimedia product. Adding a few
colors can make boring software exciting, good software ugly, or can evoke emotional
responses. Therefore, designers should use colors to enhance their product by creating
good visual and emotional effects. Colors should help the reader or user enjoy the
multimedia experience. Here are some examples of how color influences mood:
Pink     Soothes, acquiesces; promotes affectability and affection.
Yellow   Expands, cheers; increases energy.
White    Purifies, energizes, unifies in combination, and enlivens all other colors.
Black    Disciplines, authorizes, and strengthens; encourages independence.
Orange   Cheers, commands; stimulates appetites, conversation, and charity.
Red      Empowers, stimulates, dramatizes, competes; symbolizes passion.
Green    Balances, normalizes, refreshes; encourages emotional growth.
Purple   Comforts, spiritualizes; creates mystery and draws out intuition.
Blue     Relaxes, refreshes, cools; produces tranquil feelings and peaceful moods.

2. Color Symbolism by Culture


Color has a powerful effect on how we associate things. Colors can give a reader
predefined feelings and prejudices about a web page even before he or she sees the content.
In India, certain colors are associated with certain things and images: red is related to
stop signs and fire engines, green is associated with go and nature, and blue is associated
with the sea and the sky. However, the World Wide Web is reaching out to more and
more people across the world, and the web designer's job with respect to color becomes
more and more difficult. Colors take on totally different meanings in different cultures.
For example, black is normally associated with sorrow. In the United States, white
symbolizes purity and beginnings, but in India, white symbolizes death.
3. Readability
The color combination used is very crucial when dealing with backgrounds and
foregrounds of a screen. An important design issue is to create the background and
foreground with enough contrast to make the content legible. Too little contrast, such as
using analogous or monochromatic colors, makes it hard for readers to read the text,
whereas too severe a contrast can cause a physiological headache to result from your
presentation.
Bright colors are good attention grabbers, because they are loud and obvious. But if every
color on the screen is bright, the screen just becomes an eyesore. Creative color schemes
are exciting, but make sure that they add to the readability of your content. Colors that are
close together on the color wheel are good to use for subtle changes; however, using
them as a foreground/background color scheme is not wise. Complementary colors have a
vibrant feel when together, but for a foreground/background pairing they might be too
powerful. A high contrast might be exciting and attention grabbing, but its overuse can
reduce the readability of a text severely. Achieving balance in color is vital in order to
properly convey the contrast of your screen without injuring the audience's eyes.

Page 126
Multimedia and Its applications

4. Legibility
The colors used for text and links are an important consideration in ensuring that your
page is legible; choose a text color that is not difficult to read, and don't make your
audience go treasure hunting for your text.
Links should be distinguishable from the body text, even after they have been clicked. Try not
to select colors that match the body text around the link. Also make sure that after the link
has been visited, the link color does not turn into the surrounding text color or blend into
the background.
5. Consistent color schemes
Consistent color schemes give your page a sense of familiarity and professionalism that
audiences can recognize right away. For example, in order to get people to associate
certain colors with their company, businesses must be consistent in the way they splash their
colors all over their promotional materials, correspondence, commercials, packaging,
signs, and so on. As designers, we have to take some of that mentality and add it to our
presentation design.
6. Accessibility
Increasing accessibility for a colorblind audience is an important part of developing professional
web pages. There are several considerations that help increase accessibility
for colorblind readers.
It is strongly recommended that you use a strong, bright contrast between
foreground and background colors, not only for your screen text but also in your images.
Even a totally colorblind audience can differentiate similar colors when bright contrasts with
dark. It is good to use blue, yellow, white and black if you really must use colors to
distinguish items: these combinations are less likely to be confused than others are.
Color Tables
Color tables are used for storing color values. Frame buffer values are used as indices
into the color table.
A color table allows us to map between a color index in the frame buffer and a color
specification.
Suppose our frame buffer had 8 bits per pixel. This would allow 2^8, i.e. 256, color indices.
If each color table entry is 24 bits wide, any of 2^24 = 16,777,216 colors can be assigned
to an entry.
By changing the values in the color table, we can change which 256 colors are available
at any one time.
Changing a value in the color table can alter the appearance of large portions of the
display. Because it is the hardware that makes these changes, they usually occur very
fast.
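The following sketch (plain Python, with illustrative names) shows the indexed-color idea: the frame buffer holds small indices, the color table holds the actual RGB values, and editing one table entry recolors every pixel that uses that index.

    # 256-entry color table; the frame buffer stores 8-bit indices.
    color_table = [(0, 0, 0)] * 256
    color_table[1] = (0, 0, 255)            # index 1 -> blue

    frame_buffer = [1, 1, 0, 1]             # four pixels, mostly index 1
    pixels = [color_table[i] for i in frame_buffer]

    color_table[1] = (255, 0, 0)            # one change: every "index 1"
                                            # pixel now displays as red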


Color Code   Stored Color Values in Frame Buffer   Displayed Color
             Red    Green    Blue
0            0      0        0                     Black
1            0      0        1                     Blue
2            0      1        0                     Green
3            0      1        1                     Cyan
4            1      0        0                     Red
5            1      0        1                     Magenta
6            1      1        0                     Yellow
7            1      1        1                     White

Color tables are used to give false color or pseudo color to gray scale images.
They have also been used for gamma correction, lighting and shading and color model
transformations.
A user can set color table entries in a PHIGS application program with the function
Set Color Representation (Ws, Ci, Colorptr)
where
Ws        -> workstation output device
Ci        -> color index
Colorptr  -> pointer to the trio of RGB color values (r, g, b), each specified in the
             range from 0 to 1


Fig: Chromaticity diagram


RGB Color Model


The basis for displaying color output on a video monitor using the three primary colors
Red, Green and Blue is referred to as the RGB Color Model.
We can represent this model with the unit cube defined on the R, G and B axes, as shown in
the figure.
i. The origin represents black
ii. The vertex with co-ordinates (1,1,1) is white.
iii. Vertices of the cube on the axes represent the primary colors
iv. Remaining vertices represent the complementary color for each of the primary
colors.

The RGB color scheme is an additive model: intensities of the primary colors are added
to produce other colors.
A color C is expressed in terms of its RGB components as
C = R·R + G·G + B·B
where the coefficients R, G and B are the red, green and blue intensities.
The magenta vertex is obtained by adding red and blue to produce the triple (1,0,1), and
white at (1,1,1) is the sum of the red, green and blue vertices.
Shades of gray are represented along the main diagonal of the cube from the origin
(black) to the white vertex. Each point along this diagonal has an equal contribution from
each primary color, so a gray shade halfway between black and white is (0.5, 0.5, 0.5).
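A small sketch (values on the unit cube, helper name illustrative) demonstrates the additive mixing just described:

    # Additive mixing of RGB triples on the unit cube.
    def mix(c1, c2):
        return tuple(min(a + b, 1.0) for a, b in zip(c1, c2))

    red, green, blue = (1, 0, 0), (0, 1, 0), (0, 0, 1)
    print(mix(red, blue))               # (1, 0, 1), the magenta vertex
    print(mix(mix(red, green), blue))   # (1, 1, 1), the white vertex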


Fig.: RGB Color Model


HSV Color Model:


o A color model is a method for explaining the properties or behavior of color within
some particular context.
o No single color model can explain all aspects of color.
o The HSV color model is one model that helps to describe the perceived
characteristics of color.
The color parameters in this model are hue (H), saturation (S) and value (V).

The HSV hexcone


o The three dimensional representation of the HSV model is derived from the RGB
cube.
o If we imagine viewing the cube along the diagonal from the white vertex to the
origin (black), we see an outline of the cube that has the hexagonal shape as
shown in the figure.
o The boundary of the hexagon represents the various hues, and it is used as the top
of the HSV hexcone.
o In the hex cone, saturation is measured along a horizontal axis, and value is along
a vertical axis through the center of the hex cone.
o Hue is represented as an angle about the vertical axis, ranging from 0º at red
through 360º.
o Saturation is represented as the ratio of the purity of a selected hue to its
maximum purity at S = 1.
o Value V varies from 0 at the apex of the hexcone to 1 at the top. The apex
represents black.


o At the top of the hex cone, colors have their maximum intensity.
o When V = 1 and S = 1, we have the pure hues.
o Starting with a selection of a pure hue, which specifies the hue angle H and sets
V = S = 1, we describe the color we want in terms of adding either white or black to
the pure hue.
o Adding black decreases the setting of V while S is held constant.
o Adding white decreases the setting of S while V is held constant.
o Thus, various shades are represented with values S = 1 and 0 < V < 1
(by adding black to a pure hue).
o Adding white to a pure hue produces different tints across the top plane of the
hexcone, where parameter values are V = 1 and 0 < S < 1.
o Various tones are specified by adding both black and white, producing color points
within the triangular cross-sectional area of the hexcone. Cross sections of the
HSV hexcone represent the color concepts of shades, tints and tones.
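Python's standard colorsys module implements this RGB/HSV relationship, so the hexcone behavior can be checked directly (note that colorsys expresses hue as a fraction of a full turn rather than in degrees):

    import colorsys

    h, s, v = colorsys.rgb_to_hsv(1.0, 0.0, 0.0)   # pure red
    print(h * 360, s, v)                           # 0.0 degrees, S = 1, V = 1

    # Adding black: V drops while S stays at 1, giving a shade of red
    print(colorsys.hsv_to_rgb(0.0, 1.0, 0.5))      # (0.5, 0.0, 0.0)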
CMY Color Model
A color model defined with the primary colors cyan, magenta and yellow (CMY) is useful
for describing color output to hard-copy devices. Unlike monitors, which produce a color
pattern by combining light from the screen phosphors, hard-copy devices such as plotters
produce a color picture by coating paper with color pigments.
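For an ideal pigment on white paper, each CMY component is simply the complement of the corresponding RGB primary, a standard relationship sketched below:

    # CMY as the complement of RGB (values in 0.0-1.0).
    def rgb_to_cmy(r, g, b):
        return (1.0 - r, 1.0 - g, 1.0 - b)

    print(rgb_to_cmy(1.0, 0.0, 0.0))   # red -> (0, 1, 1): magenta + yellow pigments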

H L S Color Model
HLS stands for Hue (H), Lightness (L) and Saturation (S).
It is a model based on intuitive color parameters, used by Tektronix.
It has a double cone representation as shown in the figure. H specifies an angle about the
vertical axis that locates a chosen hue.


In this model H = 0º corresponds to blue, and the remaining colors are specified as in the
HSV model: magenta is at 60º, red is at 120º and cyan is at 180º.
Complementary colors are 180º apart on the double cone.
The vertical axis is called lightness, L.
At L = 0 we have black; at L = 1 we have white.
The gray scale lies along the L axis, and the pure hues lie on the L = 0.5 plane. The
saturation parameter S specifies the relative purity of the color.

This parameter varies from 0 to 1; as S decreases, the hues are said to be less pure.
At S = 0, we have the gray scale.
A hue is selected with the hue angle H, and the desired shade, tint or tone is obtained by
adjusting L and S.
Colors are made lighter by increasing L and made darker by decreasing L.
When S is decreased, the colors move toward gray.


H S I Model:
HSI is a color model used for describing the color components for output devices.
It has three components, namely Hue (H), Saturation (S) and Intensity (I).
H specifies a dominant pure color perceived by an observer (e.g. red, yellow, blue), and
S measures the degree to which that color has been diluted by white light. Because color
and intensity are independent, we can manipulate one without affecting the other, which
makes HSI more suitable than the RGB model for many image-processing tasks.
The HSI color space is described by a cylindrical co-ordinate system and is commonly
represented as a double cone. A color is a single point inside or on the surface of the
double cone.
The height of the cone corresponds to intensity. If we imagine that the point lies in a
horizontal plane, we can define a vector in this plane from the axis of the cones to the
point. Saturation is then the length of this vector, and hue is its orientation, expressed as
an angle in degrees.
4.2.5 Animation
Animations are created from a sequence of still images. The images are displayed rapidly
in succession so that the eye is fooled into perceiving continuous motion. This is because
of a phenomenon called persistence of vision: the tendency of the eye and brain to
continue to perceive an image even after it has disappeared.
For example, a sequence of images of a bee showing various positions of its wings, when
displayed rapidly one after another, gives the illusion of the bee flapping its wings.
Animation generally deals with hand-drawn images, in contrast to motion video, which
deals with actual photographic footage of real-world objects taken through a camera, although
both use the concept of displaying a sequence of images one after another to depict
motion.
Animation on the Web
The World Wide Web, developed in the early 1990s, was initially created to serve
hypertext documents; animated graphics files were added to it later. The biggest
obstacles to the use of animation on the web are bandwidth limitations, differences in
platforms and browser support. Typically, web animations are computer files that must be
completely downloaded to the client machine before playback, which can take a long
time. A way around this problem is streaming, which is the capability of
specially formatted animation files to begin playback before the entire file has been
completely downloaded. As the animation plays, the rest of the file is downloaded in the
background. Another problem with web animations is that once the animation has been
delivered to the user, the user must have the proper helper application or plug-in to
display it. Several formats exist today, such as GIF animation, based on
extensions to the GIF specification; QuickTime animation, based on the QuickTime movie
format; Java animation, based on the Java programming language; and Shockwave animation,
based on the Macromedia Director file format.
To the average home user with a 28.8 kbps modem, download speed is around 2.5
KB/second, while corporate and university LANs support around 10 to 50 KB/sec.
Types of Animations
Cel Animation


Cel animation is a term from traditional animation. Cel comes from the word celluloid,
the material that made up early motion picture film, and refers to the transparent piece of
film that is used in hand-drawn animation. Animation cels are generally layered, one on
top of the other, to produce a single animation frame. Layering enables the animator to
isolate and redraw only the parts of the image that change between successive frames. A
frame consists of the background cel and the overlying cels and is like a snapshot of the
action at one instant of time. By drawing each frame on transparent layers, the animator
can lay successive frames one on top of the other and see at a glance how the animation
progresses through time.
Flip-Book Animation
Flip-book animation, or frame-based animation, is the simplest kind of animation to
visualize. Here a series of graphic images is displayed in rapid succession, each image
slightly different from the one before. The graphic images are displayed so fast that the
viewer is fooled into perceiving a moving image. In film this display rate is 24 images
or frames per second. For playback on a computer, each image has to be displayed on the
screen in quick succession. The biggest problem with this form of animation, on
bandwidth-sensitive mediums like the web, is updating each frame fast enough that the
viewer perceives smooth, continuous motion.
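The playback loop itself is simple; a minimal sketch, assuming some display call draw() provided by the playback environment (hypothetical here), is:

    import time

    def play(frames, fps=24):
        delay = 1.0 / fps          # e.g. 1/24 s per frame, as in film
        for frame in frames:
            draw(frame)            # hypothetical display call
            time.sleep(delay)      # hold the frame before the next one

The difficulty on the web is not the loop but keeping up with it: every frame must arrive and be drawn within that delay.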

Sprite Animation (path based animation)


A sprite is any part of the animation that moves as an independent object, like a flying
bird, a rotating planet, a bouncing ball or a spinning logo. A single image or series of
images can be attached to a sprite, which can animate either in one place or move along a
path. Sprite-based animation differs from flip-book animation in that for each
successive frame only the part of the screen that contains the sprite is updated. File sizes
and bandwidth requirements for sprite-based animations are typically less than for
flip-book animation.
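A sketch of the sprite update step (with assumed helper functions, not from any particular library) shows why so little of the screen needs to change:

    def update_sprite(sprite, new_x, new_y):
        # Erase only the rectangle the sprite previously covered
        restore_background(sprite.x, sprite.y, sprite.w, sprite.h)
        sprite.x, sprite.y = new_x, new_y
        draw_sprite(sprite)        # repaint only the sprite's new rectangle

Everything outside those two small rectangles is left untouched between frames.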
Special Effects:
Color cycling allows you to change the color of an object by cycling through a range of colors
in the color wheel. The software provides smooth color transitions from one color to
another.
Morphing
Morphing is probably most noticeably used to produce incredible special effects in the
entertainment industry. It is often used in movies such as Terminator and The Abyss, in
commercials, and in music videos such as Michael Jackson's Black or White. Morphing
is also used in the gaming industry to add engaging animation to video games and
computer games. However, morphing techniques are not limited to entertainment
purposes. Morphing is a powerful tool that can enhance many multimedia projects such
as presentations, education, electronic book illustrations, and computer-based training.
The word morph derives from the word metamorphosis, meaning to change shape,
appearance or form. According to Vaughan, morphing is defined as 'an animation
technique that allows you to dynamically blend two still images, creating a sequence of
in-between pictures that, when played in QuickTime, metamorphoses the first image into
the second'.
Morphing Techniques:
Image morphing techniques can be classified into two categories, mesh-based and
feature-based methods, according to the way they specify features.


In mesh-based methods, a non-uniform mesh specifies the features on an image.
Feature-based methods specify the features with a set of points or line segments. One way of
achieving the morphing effect is to transform one image into another by creating a
cross-dissolve between them. The color of each pixel is interpolated over time from the first
image value to the corresponding second image value. However, this is not very effective
in portraying an actual metamorphosis, and the metamorphosis between faces does not look
good if the two faces do not have about the same shape. This method also tends to wash
away the features on the images.
A second way to achieve morphing is feature interpolation, which is performed by
combining warps with color interpolation. An animator specifies the features of the two
images and their correspondence with a set of points or line segments. Then, warps are
computed to distort the images so that the features have intermediate positions and
shapes. Color interpolation between the distorted images finally gives an in-between
image.
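The cross-dissolve itself is just per-pixel linear interpolation over time, as the following sketch illustrates (pixel values as floats, t running from 0.0 at the first image to 1.0 at the second):

    # Cross-dissolve between two rows of pixel values.
    def cross_dissolve(a, b, t):
        return [(1.0 - t) * pa + t * pb for pa, pb in zip(a, b)]

    print(cross_dissolve([0.0, 0.2], [1.0, 0.8], 0.5))   # [0.5, 0.5]

Feature-based morphing wraps this same interpolation around a warp, so that corresponding features line up before their colors are blended.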
In morphing, the most difficult task is the warping of one image into another image. It is
the stretching and pulling of the images that makes the morphing effect so realistic. The
actual morphing of the image can be accomplished either by using morph points or
morph lines. Morph points are markers that you set up on the start image and the end
image. The morphing program then uses these markers to calculate how the initial image
should bend/warp to match the shape of the final image. The second method uses lines
(edges) instead of individual points. Both methods produce very realistic morphing
effects. One of the most time-consuming tasks in morphing is selecting the points or lines
in the initial and final image so that the metamorphosis is smooth and natural.
There are several useful tips to remember when morphing objects. The first is to choose
carefully those pictures to morph (Morphing Software). For example, if you wish to
morph two animals, each picture should be a close-up of the head to obtain
successful results. A second tip is to carefully select the background (Morphing
Software). If a single-color background is used, the morphing effect focuses on the
object. Ideally, it is best to use the same background for each picture.
Rotoscoping is the process of drawing on top of existing video, film or animation frames.
Particle system animation is used to simulate natural phenomena like rain, smoke, fire
etc. Here the characteristics of a swarm of particles are defined and animated as a group.
Inverse kinematics is a special way of linking separate pieces of a 3D model so that
they follow certain predefined behaviors or rules. Usually used in character animation, the
motions are constrained based on the behavior of real-world objects.


Animation Techniques:
Onion Skinning:

Onion skinning is a drawing technique borrowed from traditional cel animation.
Animators lay transparent cels one on top of the other, which enables them
to see previous and following frames while they are drawing the current frame. Onion
skinning is an easy way to view a complete sequence of frames at a glance and to see how each
frame flows into the frames before and after.
Cut-Outs:
When the motion of a character is limited, say the wave of a hand, it is easier to just
redraw the hand and arm rather than redraw the entire character for each frame. The
character can be drawn once and used as a background. The separate cut-outs are
composited on the background figure to simulate motion.
Velocity Curves:

The gradual slowing down and speeding up as objects approach and leave key frames is
called ease-in and ease-out. In traditional animation, slow movement is caused by small
changes between frames, while fast movement is caused by large changes. Computer
animation programs enable you to control the deceleration and acceleration of objects by
using velocity curves that define the velocity of an object over time.
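One common velocity curve is the smoothstep ease-in/ease-out, sketched below: position changes slowly near the key frames and quickly in between.

    # Smoothstep easing: t is normalized time in [0, 1].
    def ease_in_out(t):
        return t * t * (3.0 - 2.0 * t)

    for t in (0.0, 0.25, 0.5, 0.75, 1.0):
        print(round(ease_in_out(t), 3))    # 0.0, 0.156, 0.5, 0.844, 1.0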

Squash and stretch: Squash and stretch means stretching your animated object in the
direction of motion. Then, when the object stops, changes direction or hits an immovable
object, show the object compressing or squashing. Squash and stretch is a simple way to
give a feeling of weight to an object in motion. It is also a good way to show anticipation
and recoil.
Motion Cycling: Many actions are repetitive and can be decomposed into a single
cycling or looping action over a few frames. The classic example of animation cycling is
a walking two-legged figure. The walking figure can then be given translatory motion to
complete the animation of a walking figure.
Secondary and Overlapping Actions: One way to create interesting animation is to add
secondary actions to the main action. Secondary actions can be simple: a flickering
flame or flowing water can be added with a two- or three-frame animation loop.
Overlapping actions add a dimension of time to secondary actions. Loose or flowing
parts like cloth or hair can be arranged to come to a stop slightly after the main character
comes to a stop.
Hierarchical Motion: Hierarchical motion is created by attaching or linking an object or
animation loop to another object or animation loop, so that the first loop moves with the
second. The flying bee animation is an example of hierarchical motion: first a loop of a
bee flapping its wings is created, and then it is attached to a second object, in this case a
motion path, so that the flapping bee flies across the screen. Another example of
hierarchical motion is the solar system: moons revolve around the planets and the planets
revolve around the sun.
Anticipation and Exaggeration: Anticipation helps to set up the viewer's mind by
showing some small movement prior to the primary motion. For example, a character
may retrace a few steps before making a long jump, or an object may stretch and bend
before breaking up.


Exaggeration is a way to add impact to your animation. Exaggerating the important
elements makes them stand out and brings them closer to the user. For example, the body
of a racing car may be lengthened to depict high speed.
3D and Animation
3D refers to three-dimensional imaging. By implication, in the field of multimedia
technology, this means images created by 3D modeling software rather than simply
"2D" images taken of real three-dimensional objects. Animation here therefore implies
computer-generated animation as opposed to traditional hand-drawn "cel" animation,
although as technology expands even the realm of traditional motion picture animation
is feeling the effect of new technology, so perhaps it too should be included in any study
of the genre.
In order to understand 3D computer-generated animation we must first understand 3D
computer-generated stills. After all, animation is, for all intents and purposes, simply a
sequence of stills replayed in a predefined order at a predetermined frame rate.
Background
One of the most revolutionary developments of the multimedia era has been the rise of
computer-generated images (CGI). The first examples began appearing in the late
nineteen-sixties, when dedicated researchers began to experiment with the simple
computers available at that time. Except for military purposes, the high cost of the computing
power necessary to process the enormous numbers of calculations required for realistic
CGI meant commercial use of the medium remained a pipe dream until the mid-eighties.
During that decade, and for the first time, high-end computers became capable of
reproducing virtual reality at near photo-realistic quality. By the end of the decade even
some consumer-level personal computers were capable of producing stunning 3D images
with the first generation of 3D software.
One of the earliest personal computers capable of producing realistic computer-generated
images was the Commodore Amiga. Although an incredible advance for its day, using a
Motorola 68000 processor, high quality video output and a full color graphics display, the
Amiga was doomed to run third in a two-horse race, with the IBM/Microsoft PC and the
Apple Mac securing the lion's share of the infant market place. Much of the software
originally developed to run on the Amiga survived and was later restructured to
run on new generation PCs, Macs and even high-end Unix workstations. Hollywood
CGI blockbusters such as Toy Story and Antz were rendered using software originally
written on the now defunct Amiga platform.
Model/Database
In order to create a 3D computer generated image you must first build a 3D model. This
usually involves meticulously manipulating virtual geometry via the graphical user
interface of some 3D modeling software. However for more complex forms it is
sometimes preferable to digitize a physical 3D model into the system. The end result is
sometimes referred to as the 3D database as it is in effect a collection of code that
describes the positioning of all the surface geometry of the virtual objects in a single
database.


3-D Modeling Software


Working with a pencil, an artist can draw a three-dimensional scene on a two-
dimensional page. Similarly, an artist can use a drawing or painting program to create a
scene that appears to have depth on a two-dimensional computer screen. But in either
case the drawing lacks true depth; it's just a flat representation of a scene. With 3-D
modeling software, graphic designers can create three-dimensional objects with tools
similar to those found in conventional drawing software. You can't touch a 3-D
computer model; it's no more real than a square, a circle or a letter created with a
drawing program. But a 3-D computer model can be rotated, stretched, and combined
with other model objects to create complex 3-D scenes.
Illustrators who use 3-D software appreciate its flexibility. A designer can create a 3-D
model of an object, rotate it, view it from a variety of angles, and take two-dimensional
"snapshots" of the best views for inclusion in final printouts. Similarly, it's possible to
"walk through" a 3-D environment that exists only in the computer's memory, printing
snapshots that show the simulated space from many points of view. For many
applications the goal is not a printout but an animated presentation on a computer screen
or videotape. Animation software, presentation graphics software, and multimedia
authoring software can display sequences of screens showing 3-D objects being rotated,
explored, and transformed. Many modern television and movie special effects involve
combinations of live action and simulated 3-D animation. Techniques pioneered in films
like Jurassic Park make it almost impossible for audiences to tell clay and plastic models
from computer models.

4.2.6 Video
Video can be the most impressive feature of a multimedia application and it is likely to be
the key medium in the next generation of applications. A moving image can convey
information much more powerfully than, say, a still image with sound, and certain types

of information can only really be communicated in video form. This is particularly true if
we want to convey information about dynamic events in the real world, for example a
volcano erupting. While we can convey important information about how a volcano is
formed using animation and audio narration, or what one looks like using an image, or a
text describing what happens during an eruption, only a video can really convey what an
erupting volcano is actually like.
However, video is also the most demanding medium to include, due to the demands it places
on the delivery platform in terms of storage, processing and data transfer rates. As a result,
we must carefully consider the way we use video in our multimedia application.
Often we will need to make a compromise between what we would ideally like and what
is actually practical. We may, for example, replace a video sequence by a sequence of
still images with narration.
Video production is a highly skilled and technically demanding process that includes
scripting, direction, lighting, sound recording and so on. It must also be considered in the
multimedia design process, for instance by planning it using a storyboard. Video
production raises a number of project management questions:
 Will it be necessary to employ consultant video professionals?
 Will lower in-house production values be sufficient?
 Can existing video be reused?
Video can be classified according to the way it is used within a multimedia application:
content video, interface video and incidental video.
Content video communicates the content of the multimedia application to the user. It
can be used in a variety of situations:
 Narration – video is used to present the subject matter. The talent (actor) used
for the narration is not important, aside from the fact that they must be able to act,
or may need to have a certain accent, be a specific age, or have a specific look
if the application requires it.
 Testimonials – a video of a specific person or an historical event. The difference
between a narration and a testimonial is that in a narration the person delivers the
content, in a testimonial they are the content.
 Visualization – video can be used to show the visual layout and organization of
real world objects, such as building interiors.
 Processes- video is particularly well suited to showing processes, which take a
number of steps or events, which occur over time.
 Reinforcement- video can be used to complement or add emphasis to the content
provided in another medium.
Interface video is part of the interface, such as instructions, rather than being part of the
content:


 Interface instructions – can be used to explain how to use an application's
interface, providing clear instructions about the features.
 Instructions – can also be used to explain how to use an application as part of a
larger activity, for instance as part of a training course.
Incidental video does not directly inform the user about the content or the interface, but
can be used to make the presentation more aesthetically pleasing:
 Scene setting – video can be used to establish the scene for the application. This
is likely to be appropriate in games, but also in training and educational
applications.
 Transitions – can be included to provide smooth movement between different
sections in an application. This can be particularly effective in games where
video transitions are used to join together the different game scenarios.
 Title and credits – are easily created and can add a professional look to an
application.
 Welcome messages – can be included to add a personal touch to an application.
Broadcast Video Standards
Animation and digital video movies are basically sequences of bitmapped scenes
(frames) rapidly played back. They can also be made by changing the position of objects
rapidly to give the appearance of motion. Most authoring tools follow either the former
(frame-oriented) or the latter (object-oriented) approach, but rarely both.
QuickTime and Microsoft Video for Windows (also called AVI) are the main tools used
for creating, editing and presenting motion video segments for various projects.
For making movies from video, special hardware is required to convert the analog video
signal to digital data. We can use Premiere, VideoShop and MediaStudio Pro to edit and
assemble video clips captured from various sources like tape, camera etc. The completed
clip can then be played back whenever needed.
Popular Video Formats:
NTSC: National Television Standards Committee, Frame aspect ratio: 4:3, Pixel aspect
ratio: 1, Frame Size: 640 x 480, Frame Rate: 29.97 fps.
PAL: Phase Alternating Line, Frame aspect ratio: 4:3, Pixel aspect ratio: 1, Frame Size:
768 x 576, Frame rate: 25 fps.
D-1 NTSC: Frame aspect ratio: 4:3, Pixel aspect ratio: 0.9, Frame Size: 720 x 486,
Frame Rate: 29.97 fps.
D-1 PAL: Frame aspect ratio: 4:3, Pixel aspect ratio: 1.067, Frame Size: 720 x 576,
Frame rate: 25 fps.
HDTV (1): High Definition Television, Frame aspect ratio: 16:9, Pixel aspect ratio: 1,
Frame Size: 1280 x 720, Frame rate: 30 fps.


HDTV (2): Frame aspect ratio: 16:9, Pixel aspect ratio: 1, Frame Size: 1920 x 1035,
Frame rate: 30 fps.
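These frame sizes and rates translate directly into the data rates that make uncompressed video so demanding. At 24-bit color, one frame is width x height x 3 bytes, so (as a rough sketch):

    # Uncompressed 24-bit data rate in megabytes per second.
    def mb_per_second(width, height, fps):
        return width * height * 3 * fps / (1024 * 1024)

    print(round(mb_per_second(640, 480, 29.97), 1))   # NTSC: ~26.3 MB/s
    print(round(mb_per_second(768, 576, 25), 1))      # PAL:  ~31.6 MB/s

Figures like these are why video capture and playback almost always rely on compression.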
Integrating Computers And Television
Television is perhaps the most important form of communication ever invented. It is
certainly the most popular and influential in our society. It is an effortless window on the
world, requiring of the viewer only the right time and the right channel, or for the
undiscriminating viewer, any time and any channel (except channel one).
Computer presentation of information could certainly benefit from the color, motion, and
sound that television offers. Television viewers could similarly benefit from the control
and personalization that is promised by computer technology.
Combining the two seems irresistible. They already seem to have much in common, such
as CRT screens and programs and power cords. But they are different in significant ways,
and those differences are barriers to reasonable integration.
The problems on the computer side will get fixed in the course of technical evolution,
which should continue into the next century. We've been fortunate so far that not one of
the early computer systems has been so popular that it couldn't be obsoleted (although we
are dangerously close to having that happen with UNIX, and there is now some doubt as
to whether even IBM can displace the PC). The worst features of computers, that they are
underpowered and designed to be used by nerds, will improve over the long haul.


Television, unfortunately, has been spectacularly successful, and so is still crippled by the
limitations of the electronics industry of 40 years ago. There are many new television
systems on the horizon, a few of which promise to solve the integration problem, but for
the time being we are stuck with what we've got.


These limitations are not noticed by audiences, and could be completely ignored if they
were merely the esoterica of television engineers. Unfortunately, the television medium is
far more specialized than you might suppose. Interface designers who ignore its
limitations do so at their own peril.
Venue
Computer displays are generally designed for close viewing, usually in an office
environment--most often as a solitary activity. The display is sharp and precise. Displays
strongly emphasize text, sometimes exclusively so. Graphics and color are sometimes
available. Displays are generally static. Only recently have computers been given
interesting sound capabilities. There is still little understanding of how to use sound
effectively beyond BEEPs, which usually indicate when the machine wants a human to
perform an immediate action.
Television, on the other hand, was designed for distant viewing, usually in a living room
environment, often as a group activity. The screen is alive with people, places, and
products. The screen can present text, but viewers are not expected to receive much
information by reading. The sound track is an essential part of the viewing experience.
Indeed, most of the information is carried audibly. (You can prove this yourself. Try this
demonstration: Watch a program with the sound turned all the way down. Then watch
another program with the sound on, but with the picture brightness turned all the way
down. Then stop and think.)
Television was designed for distant viewing because the electronics of the 1940s couldn't
handle the additional information required to provide sufficient detail for close viewing.
Television has lower resolution than most computer displays, so you have to get some
distance from it for the picture to look good.
The correct viewing distance for a television viewer is as much as ten times what it is for
a computer user. Where is the best place to sit in order to enjoy fully integrated
interactive television, the arm chair or the desk chair? Many of the current generation of
multimedia products, such as Compact Disc-Interactive, suffer from this ambiguity. The
color images are best viewed from a distance, but the cursor-oriented interface wants to
be close.
Overscan
Every pixel on a computer display is precious. Because the visible window is a rectangle,
and the corners of CRTs are curved, the visible rectangle is inset, with sufficient black
border to assure that even the corner pixels will be visible. Television, unfortunately,
does not use such a border.
The first picture tubes used in television were more oval than rectangular. It was decided
that the picture should fill every bit of the face of the screen, even if that meant that
viewers would be unable to see the portions of the images that were near the edges,
particularly in the corners.
This was well suited to the distant viewing assumption, but the uncertainty of what is
visible on a viewer's screen (it can vary from set to set) causes problems even for the
producers of television programs. They had to accept conventions of Safe Action Area
and Safe Title Area, which are smaller rounded rectangles within the television frame.
Most actions that happen within the Safe Action Area will be visible on most sets. All
text should be confined to the Safe Title Area, which is visible on virtually all sets.
30 fps
Many computer systems have displays that run 30 or 60 frames per second, because it is
commonly believed that television runs at a rate of 30 frames per second. This is
incorrect for two reasons:
 Television doesn't really have frames, it has fields. A field is a half of a picture,
every other line of a picture (sort of like looking through blinds). There is no
guarantee that two fields make a coherent picture, or even which fields (this one
and that one, or that one and the next one) make up a frame. This is the field
dominance problem, and it makes television hostile to treating individual frames
as discrete units of information.
 If television did have a frame rate, it would be 29.97 frames per second. The
original black and white system was 30, but it was changed when color was
introduced. This can make synchronization difficult. Movies transferred to
television play a little longer, and the pitch in the sound track is lowered slightly.
It also causes problems with timecode.
Timecode is a scheme for identifying every frame with a unique number, in the
form hour:minute:second:frame, similar in function to the sector and track
numbers on computer disk drives. For television, there are assumed to be 30
frames per second, but because the true rate is 29.97, over the course of a half
hour you would go over by a couple of seconds. There is a special form of
timecode called Drop Frame Timecode, which skips roughly every thousandth frame
number, so that the final time comes out right. However, it can be madness
dealing with a noncontiguous number system in a linear medium, particularly if
frame accuracy is required.
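The drop-frame rule can be stated precisely: frame numbers 00 and 01 are skipped at the start of every minute except each tenth minute (the frames themselves are not discarded, only renumbered). A sketch of the arithmetic:

    # Frame numbers skipped by drop-frame timecode up to a given time.
    def dropped_numbers(hours, minutes):
        total_minutes = hours * 60 + minutes
        return 2 * (total_minutes - total_minutes // 10)

    print(dropped_numbers(0, 30))   # 54 numbers skipped in half an hour

Over that half hour, nominal 30 fps counting yields 54,000 frames while 29.97 fps yields 53,946, and the 54 skipped numbers make up exactly the difference.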
Interlace
Computers want to be able to deal with images as units. Television doesn't, because it
interlaces. Interlace is a scheme for doubling the apparent frame rate at the price of a loss
of vertical resolution and a lot of other problems. Pictures are transmitted as alternating
fields of even lines and fields of odd lines.
Images coming from a television camera produce 59.94 fields per second. Each field is
taken from a different instant in time. If there is any motion in the scene, it is not possible
to do a freeze frame, because the image will be made up of two fields, forcing the image
to flutter forward and backward in time. A still can be made by taking a single field and
doubling it to make a frame, with a loss of image quality.
Twitter is a disturbing flicker caused by the content of one line being significantly
different from its interfield neighbors. In extreme cases, it can cause the fields to separate
visibly. Twitter can be a big problem for computer-generated graphics, because twittery
patterns are extremely common, particularly in text, boxes, and line drawings. The
horizontal stripes in the Macintosh title bar cause terrible twitter. Twitter can be removed
by filtering, but with a loss of detail and clarity.
Field dominance, as mentioned above, is the convention of deciding what a frame is: an
odd field followed by an even, or an even followed by an odd. There are two possible
ways to do it; neither is better than the other, and neither is generally agreed upon. Some
equipment is even, some is odd, and some is random. This can be critical when dealing
with frames as discrete objects, as in collections of stills. If the field dominance is wrong,
instead of getting the two fields of a single image, you will get a field each of two
different images, which looks sort of like a superimposition, except that it flickers like
crazy.
Color
RCA Laboratories came up with an ingenious method for inserting color into a television
channel that could still be viewed by unmodified black and white sets. But it didn't come
for free. The placing of all of the luminance and color information into a single composite
signal causes some special problems.
The color space of television is not the same as that in a computer RGB system. A
computer can display colors that television can't, and trying to encode those colors into a
composite television signal can cause aliasing. (Aliasing means "something you don't
want.")
Television cannot change colors as quickly as a computer display can. This can also
cause aliasing and detail loss in computer-generated pictures on television. There are
other problems, such as chroma crawl and cross-color, which are beyond the scope of this
article. But they're there.
Videotape
In the Golden Age, there was no good way to save programs, so all programs were
produced live. Videotape was developed years later.
Our problems with videotape are due to two sources: First, the design of television gave
no thought to videotape or videodisc, which results in the generation loss problem.
Second, the control aspects of interactive television require greater precision than
broadcasters require, which creates the frame accuracy problem.
Generation loss is the degradation in the quality of a program every time it is copied.
Because videotape is not spliced, the only way to assemble material is by copying it, and
with each copy it gets worse. This problem is being corrected by the application of digital
technology, and can be considered solved, at least at some locations. It remains to make
digital video recording cheap and widely available.
The frame accuracy problem is another story. A computer storage device that, when
requested to deliver a particular sector, instead delivered a different sector would be
considered defective. In the world of videotape editing, no one can notice that an edit is
off by 1/29.97 seconds, so precise, accurate-to-the-frame behavior is not always
demanded of professional video gear. This can make the production of computer
interactive video material extremely difficult, because if your interest is in a particular


frame, the off-by-one frame is totally wrong.
Other Television
This chapter has mostly concentrated on the NTSC system used in the United States.
Other countries use the PAL and SECAM systems, which have their own worlds of
problems. These are compounded for the designer who wants to make programs that
work in all nations.
A number of new television systems are being proposed to replace or enhance the
existing systems. To the extent that these have progressive scan (non-interlaced), component
color (not composite), a frame rate that can be expressed as a whole number (60 fps, not
59.94 fps), and digital encoding (not analog), computers and television can be
integrated successfully, and the limitations listed above will be techno-historical trivia.
The convergence of television and computer media is extremely desirable. Computer
technology would benefit from animated displays and high-bandwidth digital video
storage. Camcorders would be wonderful computer input devices. Television technology
would benefit from being less mysterious and more straightforward; eliminating the
video priesthood in much the same way that good interface design will eliminate the
computer priesthood.
Although desirable, this convergence is not inevitable. Some of the worst use of
computers is in television engineering. Some of the worst television is "desktop video."
The full power of a new medium based on the well considered unification of computer
and television technology is distant and elusive. The design challenge is not
technologically difficult. It requires only a commitment to excellence and a willingness to
accept change.
This New Television could make the tools of production available to every individual.
The New Media Literacy could grant people significant power over the technology of
the Information Age. The New Television could perhaps be the most important form of
communication ever invented.

Shooting And Editing Video


There are two different ways of generating moving pictures in a digital form for inclusion
in a multimedia production. We can use a video camera to capture a sequence of frames
recording actual motion as it occurs in the real world, or we can create each frame
individually, either within the computer or by capturing single images one at a time. We
use the word video for the first option, and animation for the second. Since both video
and animation are forms of moving pictures they share some features, but there are also
technical differences between them. For live-action video it is necessary both to record
images fast enough to achieve a convincing representation of real-time motion, and to
interface to equipment that conforms to broadcast requirements and standards; when we
come to play back the video on a computer, these requirements and standards are largely
irrelevant. To make matters worse, there are several different standards in use in different
parts of the world.


Linear Vs Non-Linear:
Video can be categorized as being one of two fundamental configurations, either linear or
non-linear.
Linear video refers to traditional videotape recording systems. Reel-to-reel VTRs and
videocassettes are both defined as linear in that the image and audio signals are recorded
onto continuously moving videotape. If you need to edit such video content in a
traditional linear system you need to replay the original videotape whilst re-recording it
onto a special editing video recorder.
Dedicated recording features on edit VTRs called flying-erase heads allow for frame-
accurate editing. If special effects such as dissolves or wipes are required, at least three
machines are needed (two replay and one record), as well as specialized sync pulse
generators and video mixing consoles. In such a configuration the video editor needs to
record alternate video tracks onto two "source" tapes, with each segment accurately
aligned to overlap the opposite track so that the video signal can be "mixed" between the
A & B replay machines. This method is complex and time consuming and is often
referred to as "A & B roll" or "checkerboard" editing. The concept is a legacy from
manually edited motion picture film. Computer controllers are often used to speed up the
process, but the essential criterion is that it remains tape-to-tape and therefore a linear
process.
Needless to say, advances in video technology now allow the transfer of the video signal to a
random access medium such as a computer hard disk, DVD-RAM or magneto-optical
storage system for subsequent replay or editing. Such a system may incorporate a
videotape recorder to either replay or record the final product, but at least part of the
process must utilize a random access system in order for it to be described as non-linear.
Many modern configurations are tapeless systems: such a system records directly to a
random access disc. Television news was the first medium to utilize tapeless video.
Multimedia Video:
Defining multimedia video is a moveable feast. Whatever definitions we use and
whatever examples we give can only represent a snapshot in time; be assured it will be
obsolete within a very short timeframe. For now we can best describe it as any digital
video able to be captured to and replayed from a computer system, or that which is in a
digital form able to be interactively controlled, such as in a video game console or DVD
player.
player.
The most common examples are small video clips or animations which can be replayed
from the desktop of a PC. These tend to be smaller than full-screen movies, with or
without synchronized sound, and can be replayed by themselves or embedded
inside a larger multimedia production such as an interactive presentation or game. These
clips can be of virtually any frame size or frame rate as long as they are not to be used as
a medium for non-linear video editing. However, if non-linear off-line video editing is to
be achieved, strict adherence to the relevant video standards must be observed.


Video Capture:
Video capture is the process of converting linear video (such as a signal from analog or
digital videotape) into non-linear video, such as that stored on a computer hard disk in a
digital format. The video signal is captured and stored in either compressed or
non-compressed form and can be replayed, edited or re-processed in a wide variety of
ways without the inherent quality loss of analog linear editing systems. Video capture is
sometimes called digitizing or sampling.
FireWire:
The most common form of non-broadcast quality video capture is via a FireWire
configuration. This involves the use of a suitably configured personal computer which
has a FireWire connection (IEEE 1394 standard port), a FireWire cable and a digital
record/replay device such as a digital camcorder. The FireWire-enabled software allows
the replayed signal from the camcorder to be loaded and stored onto the computer's
hard disk or disk array. Both the original signal and the stored data are heavily
compressed using hardware compression built into the camcorder, thereby reducing the
data throughput to a manageable level.
Video Capture Card:
Prior to the introduction of the FireWire system, most video capture was facilitated by
plug-in cards such as NuBus or PCI cards, which are inserted into the main computer bus
from the rear slots of the computer. These cards capture analog or digital video and store
it as either compressed or non-compressed digital video on either the hard disk or an
external RAID array. Video capture cards are still used, particularly for professional
non-compressed non-linear editing systems, because of their faster processing and
real-time effects capabilities.
Video Formats:
There are a number of industry-standard video formats in use around the world, as well as
a range of non-standard formats that can be used in multimedia video clips. The
following are examples of the various formats available:
Analog Broadcast System:
 PAL (Phase Alternating Line) Analog color video standard used in Australia,
most of Europe and parts of Asia and Africa. It has a frame size of 768 x 576
pixels (when converted to digital) and a frame rate of 25 frames per second.
 NTSC (National Television Systems Committee) Analog color video standard
used in America, Japan and some parts of Asia. It has a frame size of 640 x 480
pixels (when converted to digital) and a frame rate of 29.97 frames per second.


 SECAM (Séquentiel Couleur à Mémoire) Analog color video standard used in
France and countries of the former Soviet Union. It has a frame size of 768 x 576
pixels (when converted to digital) and a frame rate of 25 frames per second.
Digital Broadcast Systems:
 ARIB Japanese digital video/audio standard provides for both standard definition
(SD) and High-Definition (HD) recording and transmission.
 ATSC American digital video/audio standard provides for both standard definition
(SD) and High-Definition (HD) recording and transmission.
 DVB-T European digital video/audio standard provides for both standard
definition (SD) and High-Definition (HD) recording and transmission.
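
To get a sense of the storage the analog formats above imply, the uncompressed data
rate can be worked out from the frame size, color depth and frame rate. The following
is a minimal sketch (mine, not from the text), assuming 24-bit color:

def uncompressed_rate(width, height, fps, bytes_per_pixel=3):
    # Bytes per second of uncompressed video for the given geometry.
    return width * height * bytes_per_pixel * fps

for name, (w, h, fps) in {"PAL": (768, 576, 25), "NTSC": (640, 480, 29.97)}.items():
    mb = uncompressed_rate(w, h, fps) / 1_000_000
    print(f"{name}: about {mb:.0f} MB per second uncompressed")

At roughly 33 MB/s for PAL and 28 MB/s for NTSC, it is easy to see why the hardware
compression described earlier is needed to bring the data throughput down to a
manageable level.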
Video Editing:
Broadly speaking, there are two types of video editing. One involves editing directly
from one tape to another and is called linear editing. The other requires that the sequences
to be edited are transferred to hard disk, edited, and then transferred back to tape. This
method is referred to as non-linear editing (NLE).
When video first made its mark on broadcast and home entertainment, the most
convenient way to edit footage was to copy clips in sequence from one tape to another. In
this linear editing process the PC was used simply for controlling the source and record
VCRs or camcorders. In broadcast editing suites, elaborate hardware systems were soon
devised which allowed video and audio to be edited independently and for visual effects
to be added during the process. The hardware was prohibitively expensive and the
process of linear editing gave little latitude for error. If mistakes were made at the
beginning of a finished program, the whole thing would have to be reassembled.
For non-linear editing the video capture card transfers digitized video to the PC's hard
disk and the editing function is then performed entirely within the PC, in much the same
way as text is assembled in a word processor. Media can be duplicated and reused as
necessary, scenes can be rearranged, restructured, added or removed at any time during
the editing process and all the effects and audio manipulation that required expensive
add-on boxes in the linear environment are handled by the editing software itself. NLE
requires only one video deck to act as player and recorder and, in general, this can be the
camcorder itself.
The trend towards NLE began in the early 1990s - encouraged by ever bigger, faster and
cheaper hard disks and ever more sophisticated video editing software - and was given a
massive boost in 1995 with the emergence of Sony's DV format.
While MPEG-2 video has already found wide use in distribution, problems arise in
production, especially when video needs to be edited. If it becomes necessary to cut into
a data stream, B and P frames are separated from the frames to which they refer and they
lose their coherence. As a result, MPEG-2 video (from, say, a news feed) is
decompressed before processing. Even when producing an MPEG-2 video stream at a
different data rate, going from production to distribution, material needs to be fully


decompressed. Here again, concatenation rears its ugly head, so most broadcasters and
DVD producers leave encoding to the last possible moment.
Several manufacturers have developed workarounds to deliver editable MPEG-2 systems.
Sony, for instance, has introduced a format for professional digital camcorders and VCRs
called SX, which uses very short GOPs (four or fewer frames) of only I and P frames.
runs at 18 Mbit/s, equivalent to 10:1 compression, but with an image quality comparable
to M-JPEG at 5:1. More recently, Pinnacle has enabled the editing of short-GOP, IP-
frame MPEG-2 within Adobe Premiere in conjunction with its DC1000 MPEG-2 video
capture board. Pinnacle claims its card needs half the bandwidth of equivalent M-JPEG
video, allowing two video streams to be played simultaneously on a low-cost platform
with less storage.

Faced with the problem of editing MPEG-2, many broadcast manufacturers sitting on the
ProMPEG committee agreed on a professional version that could be more easily handled,
known as MPEG-2 4:2:2 Profile@Main Level. It's I frame only and allows for high data
rates of up to 50 Mbit/s which have been endorsed by the European Broadcasting Union
and its US counterpart, the Society of Motion Picture Television Engineers (SMPTE), for
a broad range of production applications. Although there's no bandwidth advantage over
M-JPEG, and conversion to and from other MPEG-2 streams requires recompression, this
I-frame-only version of MPEG-2 is an agreed standard, allowing material to be shared
between systems. By contrast, NLE systems that use M-JPEG tend to use slightly
different file formats, making their data incompatible.
In the mid-1990s the DV format was initially pitched at the consumer marketplace.
However, the small size of DV-based camcorders coupled with their high-quality
performance soon led to the format being adopted by enthusiasts and professionals alike.
The result was that by the early 2000s - when even entry-level PCs were more than
capable of handling DV editing - the target market for NLE hardware and software was a
diverse one, encompassing broadcasters, freelance professionals, marketers and home
enthusiasts.
Despite all their advantages, DV files are still fairly large, and therefore need a fast
interface to facilitate the transfer from the video camera to a PC. Fortunately, the answer
to this problem has existed for a number of years. Apple Computer originally developed
the FireWire interface technology, which has since been ratified as the international
standard IEEE 1394. Since FireWire remains an Apple trademark, most other companies
use the IEEE 1394 label on their products; Sony refers to it as "i.LINK". When it was first


developed, digital video was in its infancy and there simply wasn't any need for such a
fast interface technology. So, for several years, it was a solution to a problem that
didn't exist. Originally representing the high end of the digital video market, IEEE 1394
editing systems have gradually followed digital camcorders into the consumer arena.
Since FireWire carries DV in its compressed digital state, copies made in this manner
ought, in theory, to be exact clones of the original. In most cases this is true. However,
whilst the copying process has effective error masking, it doesn't employ any error
correction techniques. Consequently, it's not unusual for video and audio dropouts to be
present after half a dozen or so generations. It is therefore preferred practice to avoid
making copies from copies wherever possible.
By the end of 1998 IEEE 1394-based editing systems remained expensive and aimed
more at the professional end of the market. However, with the increasing emphasis on
handling audio, video and general data types, the PC industry worked closely with
consumer giants, such as Sony, to incorporate IEEE 1394 into PC systems in order to
bring the communication, control and interchange of digital, audio and video data into the
mainstream. Whilst not yet ubiquitous, the interface had become far more common by the
early 2000s, not least through the efforts of audio specialist Creative, which effectively
provided a "free" FireWire adapter on its Audigy range of sound cards, introduced in late
2001.
Digital Video
Understanding what digital video is first requires an understanding of its ancestor -
broadcast television or analogue video. The invention of radio demonstrated that sound
waves can be converted into electromagnetic waves and transmitted over great distances
to radio receivers. Likewise, a television camera converts the color and brightness
information of individual optical images into electrical signals to be transmitted through
the air or recorded onto videotape. Similar to a movie, television signals are converted
into frames of information and projected at a rate fast enough to fool the human eye into
perceiving continuous motion. When viewed by an oscilloscope, the unprojected
analogue signal looks like a brain wave scan - a continuous landscape of jagged hills and
valleys, analogous to the ever-changing brightness and color information.
There are three forms of TV signal encoding:
 most of Europe uses the PAL system
 France, Russia and some Eastern European countries use SECAM, which differs
from the PAL system only in detail, although sufficiently to make it
incompatible
 The USA and Japan use a system called NTSC.
With PAL (Phase-Alternation-Line) each complete frame is drawn line-by-line, from top
to bottom. Europe uses an AC electric current that alternates 50 times per second (50Hz),
and the PAL system ties in with this to perform 50 passes (fields) each second. It takes
two passes to draw a complete frame, so the picture rate is 25 fps. The odd lines are
drawn on the first pass, the even lines on the second. This is known as interlaced, as
opposed to an image on a computer monitor, which is drawn in one pass, known as non-
interlaced. Interlaced signals, particularly at 50Hz, are prone to unsteadiness and flicker,
and are not good for displaying text or thin horizontal lines.
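
The weaving of two fields into one frame can be pictured with a short sketch (an
illustration of the idea only, not broadcast code):

def weave(odd_field, even_field):
    # Interleave the odd-numbered lines (pass one) with the
    # even-numbered lines (pass two) to build one complete frame.
    frame = []
    for odd_line, even_line in zip(odd_field, even_field):
        frame.append(odd_line)
        frame.append(even_line)
    return frame

odd = ["line 1", "line 3", "line 5"]    # first pass (field one)
even = ["line 2", "line 4", "line 6"]   # second pass (field two)
print(weave(odd, even))                 # lines 1..6 in display order

Two such fields arrive every 1/25th of a second in PAL, which is why the system is
described as 50 fields but only 25 frames per second.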
PCs, by contrast, deal with information in digits - ones and zeros, to be precise. To store
visual information digitally, the hills and valleys of the analogue video signal have to be
translated into the digital equivalent - ones and zeros - by a sophisticated computer-on-a-
chip, called an analogue-to-digital converter (ADC). The conversion process is known as
sampling or video capture. Since computers have the capability to deal with digital
graphics information, no other special processing of this data is needed to display digital
video on a computer monitor. However, to view digital video on a traditional television
set, the process has to be reversed. A digital-to-analogue converter (DAC) is required to
decode the binary information back into the analogue signal.
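
The sampling step can be sketched in a few lines. This is a toy model (assumptions
mine: an 11 kHz, 8-bit capture of a pure tone), not a description of real ADC hardware:

import math

SAMPLE_RATE = 11_000   # samples per second (11 kHz, voice quality)
LEVELS = 256           # 8-bit quantization

def sample(signal, seconds):
    # Measure the analogue value at regular instants and round each
    # measurement to one of 256 levels -- one byte per sample.
    out = []
    for i in range(int(SAMPLE_RATE * seconds)):
        t = i / SAMPLE_RATE
        v = signal(t)                              # analogue value in [-1, 1]
        out.append(round((v + 1) / 2 * (LEVELS - 1)))
    return out

data = sample(lambda t: math.sin(2 * math.pi * 440 * t), 1.0)
print(len(data), "bytes for one second of sound")  # 11000

This matches the figure used later in Unit V: voice-quality capture at 11 kHz and 8 bits
produces about 11,000 bytes per second.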

4.3) Revision points


 Text: Text is the most widely used and flexible means of presenting information on
screen and conveying ideas. The designer should not necessarily try to replace textual
elements with pictures or sound, but should consider how to present text in an
acceptable way and supplementing it with other media. For a public system, where
the eyesight of its users will vary considerably, a clear reasonably large font should
be used. Users will also be put off by the display of large amounts of text and will
find it hard to scan. To present tourist information about a hotel, for example,
information should be presented concisely under clear separate headings such as
location, services available, prices, contact details etc.
 Images: Images refer to still images that are input from files you already have
available. Scanned photographs, digital camera output, photo clip art and paint
package output are all common sources of images. In StillMotion Creator, for
example, the maximum image size is 4,000 x 4,000 pixels, up to 64 different
images may be used in a single animation, and the supported file formats are
TGA, TIF, JPG and BMP.
 Video: The transmission of moving pictures or animation to a monitor or television.
In a broader sense, any text or images transmitted from a computer and displayed on a
display monitor or television. Video images may be broadcast live, filmed, or video-
recorded and stored on tape or disk.
4.4) Intext Question
1. What are the various building blocks of multimedia?
2. How will you make multimedia software using text? Explain
3. Explain the character set used in multimedia text preparation
4. What are the various fonts and faces for text supported in Multimedia and explain
them.


5. Describe text animation


6. List any five points mainly considered for choosing text fonts.
7. Describe font editing and design in multimedia.
8. What do you mean by hypertext? Describe its applications.
9. What are the elements of hypertext? Describe each of the elements with the help of an
example
10. What are the various hypermedia document models?
11. How are still images made and used?
12. How will you manipulate images and graphics in multimedia? Explain in detail.
13. List and explain image file formats?
14. Explain bitmaps of images.
15. Compare vector-drawn objects and Bitmaps.
16. Write notes on colors in images
17. Explain the different broadcast video standards.
18. Explain the different audio file formats.
19. Describe audio interfaces
20. Explain the principles of Animation and how it is made to work.
21. Discuss the different animation file formats.
22. Explain the following
a. DVI technologies
23. Define ‘Digital video’ and explain the method of making it.
24. Write briefly on shooting and editing video.
25. Explain the method used to integrate computers and television.
26. How does Video Work? Explain any two video standards.
27. How does video work over images?
28. Explain the different video capturing and editing techniques.
29. Describe video recording formats and their choice for Multimedia projects

4.5) Summary
Multimedia building blocks are used essentially to define applications and technologies
that manipulate text, data, images, voice and full-motion video objects.


Multimedia is a system that supports the interactive use of text, audio, still images, video,
and graphics. Each of these elements must be converted in some way from analog form to
digital form before it can be used in a computer application.

4.6) Terminal Exercises


1. Compare and contrast the following in the context of Multimedia:
a. Analog Audio and Digital Audio
b. Analog Video and Digital Video
c. Analog Image and Digital image

4.7) Supplementary Materials


1. http://en.wikipedia.org/wiki/Multimedia
2. http://multimedia.expert-answers.net/multimedia-glossary/en/
3. http://nrg.cs.usm.my/~tcwan/Notes/MM-BldgBlocks-I.doc
4. www.edb.utexas.edu/multimedia/PDFfolder/WEBRES~1.PDF

4.8) Assignments
1. Discuss the steps in adding sound to multimedia project.
2. Discuss the method of preparing HTML documents.

4.9) Suggested Reading

1. Tay Vaughan, “Multimedia –Making it work”, TataMcGraw Hill, Fourth Edition.

4.10) Learning Activities
1. Brief out the stages in capturing and editing images.
2. Brief out the steps in preparing digital audio files.


4.11) Key Words

DirectSound3D (DS3D) was introduced in DirectX 3.0 and allowed developers to place
a sound anywhere in 3D space.
Dolby Digital (AC-3): The AC-3 (Audio Code number 3) surround sound standard was
created by Dolby Laboratories.
Dolby Digital EX and DTS ES: These are newer surround audio standards that add an
additional surround sound channel to the 5.1 mix – the rear centre channel.
Dolby Pro Logic: This is an older standard that packs audio information for a centre
and a surround channel into your normal stereo channel.
DTS: An acronym for Digital Theatre Systems, this is a standard that was formulated by
Steven Spielberg and is a competitor to Dolby Digital.
Duplex: A full-duplex soundcard can make and receive sounds at the same time.
MIDI Channels: These channels offer a musician greater control over the instruments
connected to the soundcard.
Polyphony: The maximum number of voices a synthesizer can play at any one time.
Signal – to – Noise Ratio (SNR): SNR is the ratio of the largest sound signal that can be
handled by a card with minimum distortion, to the noise that is present at that time.
Sony/Philips Digital Interface (S/PDIF): A standard for transmitting data in a
lossless digital format to preserve sound quality.
Stereo Crosstalk: Crosstalk is the mixing of the left and the right channel sound
information.
THX: An acronym for Tomlinson Holman's eXperiment.
Total Harmonic Distortion (THD): Non-linear distortion is a processing error that
creates output signals at frequencies that are not necessarily present in the input.
Wavetable: A Wavetable stores the digitized samples of actual recorded instruments,
which are then combined during music creation and playback.


UNIT-V

5.0) Introduction
Internet multimedia is a rapidly evolving field. A great deal of work is going on to
improve the quality of multimedia on the network. The challenge is massive, all the
more so because of the equipment we already have in our hands.

Media on the Internet

Different types of media used on the Internet are:


 Text
 Picture
 Animation
 Audio
 Video
Today the Web has gone beyond text and pictures: many web pages now include sound
and video. With the increased popularity of broadband connections, many sites feature
music, movie, and television clips you can view or download. However, even with a
broadband connection, audio or video files that are more than a few seconds long can be
large and take a long time to download to your computer.

Because audio and video files can be large, and to avoid a long wait before you can start
playing them, streaming was invented. Streaming enables your computer to play the
beginning of an audio or video file while the rest of the file is still downloading. If the
file arrives more slowly than your computer plays it, the playback has gaps while your
computer waits for more data to play. (Players usually display a "Buffering..." message
when this happens.) Several streaming formats are widely used on the Web, and you can
install plug-ins and ActiveX controls to enable your browser to play them.
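
The arithmetic behind that "Buffering..." message is simple. The sketch below (a toy
model with made-up numbers, not any player's actual algorithm) computes how long a
player must pre-buffer so that playback can finish without gaps:

def startup_delay(media_kbps, link_kbps, duration_s):
    # If the link outruns the media bit rate, playback can start at once;
    # otherwise pre-buffer the shortfall before starting.
    if link_kbps >= media_kbps:
        return 0.0
    deficit_kb = (media_kbps - link_kbps) * duration_s
    return deficit_kb / link_kbps

# A 3-minute clip encoded at 128 kbps over a 56 kbps link:
print(round(startup_delay(128, 56, 180)), "seconds of buffering needed")  # ~231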

5.1) Objective
To understand the role of multimedia on the Internet and its applications

5.2) Content
5.2.1. Multimedia and the Internet

Once upon a time, so long ago, the words Internet and multimedia were rarely mentioned
in the same sentence. Although you could download GIF images or sound files from an
FTP site and then view or listen to them on your PC, the Internet experience itself was far


from a multimedia extravaganza. Indeed, until the advent of Mosaic and the phenomenal
popularity of the World Wide Web, accessing the Internet was like reading the front page
of the Wall Street Journal: lots of good information, but gray, without pictures, and dull
on the eyes.

In 1993, a computer program called Mosaic changed all that. Mosaic is a browser -- a
program that allows users to use the Internet's World Wide Web. For the first time, true
multimedia -- the mixing of various media such as text, images, sounds, and movies --
came to the Internet. Today, not only can you download those sorts of files, but you can
also experience them while you are online. And, if you have anything to say, you can
even present your information, complete with mixed media, on your own Web page.

The World Wide Web continues to grow in popularity, but most of us have limited
bandwidth resources. We use poky 9600 bps and 14.4 Kbps modems to send and receive
data, but in the world of full multimedia we're going to need much faster access. After all,
14.4 Kbps means 14,400 bits of information every second, and even with good data
compression technology, we're lucky to hit 38,800 bits per second regularly. At these
speeds, video or audio files that are more than a few minutes long can take an hour or
more to transfer to a PC, so if you're waiting to see Gone with the Wind or hear Wagner's
entire Ring cycle, forget it. Even users who are lucky enough to access the Internet with a
28.8 Kbps modem get tired of waiting for things to download.

As a result of this bottleneck, most people get only text and graphics files from the Web.
Text and still image files are generally small, so you don't need to wait too long to view
them, but anyone who has waited for a graphically heavy Web site, such as Time
Warner's Pathfinder, soon realizes how frustrating even this experience can be. Although
audio and animation are both possible on the Web, you need a much faster connection (or
the patience of a saint) to send and receive the huge audio and video files that would
enable you to take full advantage of Internet multimedia.


Figure 1-1: Time Warner's Pathfinder Web site -- a great resource, but the images can
make it slow going

On the Internet, and typically in real life, new technologies are first available to a core
group of inventors and experimenters. If the new technology is good enough, or
interesting enough, or worthwhile enough, word gets out. Other folks begin to hear about
the wonders of the new technology, and they want to try it. They find out what they need,
and then they spend whatever time and money is necessary. Slowly, the technology gains
wider and broader acceptance, with more and more people taking part, until at last it
becomes so common that it's practically a household word. Consider, for example,
electronic mail or the World Wide Web.

The key advantage of the Internet is that it provides a widely used and uniform
communication medium to link users together to access or deliver multimedia
information. However, when using the Internet as a vehicle for multimedia delivery, one
must be aware of the following considerations.
Bandwidth:
Determines how much information can be transmitted efficiently. The bandwidth
depends primarily on the type of data being transmitted. Text has the lowest bandwidth
requirements at one byte or character with graphics, audio, and video requiring
significant increases in bandwidth to move information.


Application:
The type of software used for delivery of the information. The Internet has spawned the
development of a number of facilities, such as mail services, file transfer, and the Web,
to store and deliver information. Traditional multimedia products can benefit from the
real-time nature of data delivery across the Internet.

Bandwidth considerations:
Bandwidth is based on how much information can be transmitted in a given period of
time. Bandwidth is a function of the communication devices and the transmission
medium. Data within the computer moves at rates of 10 to 50 megabits per second,
often using parallel connections within the computer. Unfortunately, outside of the
computer, transmission speeds are much slower, because they rely on much slower,
serial-based local area networks and commercial telephone lines linking vast numbers of
users from home-based computers to Internet service providers. Typical Internet
connectivity speeds therefore range from:
 Low-end (modem): 14,400 and 28,800 bits per second (1,800 to 3,600
bytes per second under ideal conditions). Modem speeds over analog
commercial phone systems are limited to speeds well below 56,000
bits per second.
 Mid-range (ISDN, or Integrated Services Digital Network): 56,000 bits per
second (digital transmission). May include a second channel, for 112,000
bits per second.
 High speed (Ethernet network): 10,000,000 bits per second (1,250,000
bytes per second under ideal conditions), most commonly found in office-
based network systems. The speed of the Internet connection is also
dependent on the number of users and how these networks are configured.
Before we proceed any further, let us review the different multimedia data types and
their average file sizes. Typical file sizes for several kinds of multimedia data are shown
in the table below.
It should be noted that the transmission speeds listed for modem or Ethernet networks
are ideal and do not account for actual conditions such as telephone transmission
limitations, the number of users sharing a network, and physical hardware limitations.
Actual network performance would probably be 5 to 20 times less than the values shown
in the chart. For example, downloading a full-screen bitmap picture would take up to
five minutes via a modem operating at 28,800 bits per second. The third table below
gives more realistic expectations of data transmission for network systems.
Data Type | Size | Typical Sample
Text | 1 character (ASCII) = 1 byte | Page of text (100 characters/line, 30 lines) = 3,000 bytes
Pictures | 1 pixel, 256 colors = 1 byte; 1 pixel, 16,000,000 colors = 3 bytes | Full screen (640 x 480 pixels), 256-color bitmap = 330,000 bytes; compressed graphic format = 75,000 to 200,000 bytes (depends on type, detail, colors)
Audio | Voice quality, 1 second, 8-bit capture rate, 11 kHz = 11,000 bytes; CD-audio (music) quality, 1 second, 16-bit capture, 44 kHz, stereo = 176,000 bytes | Voice, one minute = 660,000 bytes; music, one minute = 10.5 megabytes
Video | 1 second, 24-bit color, 30 frames per second, 320 x 240 pixels = 6.9 megabytes; compressed video, 1 second = 200,000 bytes | One minute, uncompressed = 414 megabytes; one minute, compressed = 10-15 megabytes (depends on compression scheme)

Table: Multimedia Data Types and File Sizes

Table: Time to Transmit Multimedia Data (assumes ideal conditions)

Data Type | Typical Sample (from the table above) | At 28,800 bits/second | At 10 megabits/second (1.25 megabytes/second)
Text | Page of text (3,000 bytes) | 1 second | 0.002 seconds
Pictures | Full screen, 256-color bitmap (330,000 bytes) | 91 seconds | 0.26 seconds
Pictures | Compressed graphic file (75,000 to 200,000 bytes) | 20 to 55 seconds | 0.06 to 0.16 seconds
Audio | Voice, one minute (660,000 bytes) | 183 seconds | 0.5 seconds
Audio | Music, one minute (10.5 megabytes) | about 3,000 seconds (50 minutes) | 8.4 seconds
Video | One minute, uncompressed (414 megabytes) | 115,000 seconds (about 32 hours) | 330 seconds (5.5 minutes)
Video | One minute, compressed (10-15 megabytes) | 2,777 seconds (46 minutes) | 8 seconds
Given these transmission speeds for networks and the obviously slower rates for
low-end, modem-based transmission, it is easy to see how multimedia data
transmission is useful for text and graphics, but poses limitations for real-time
transmission of audio and video.
Table: Network Time to Transmit (adjusted for a realistic performance factor of 10X)

Data Type | Typical Sample (from the previous chart) | At 10 megabits/second derated 10X (125,000 bytes/second)
Text | Page of text (3,000 bytes) | 0.024 seconds
Pictures | Full screen, 256-color bitmap (330,000 bytes) | 2.6 seconds
Pictures | Compressed graphic file (75,000 to 200,000 bytes) | 0.6 to 1.6 seconds
Audio | Voice, one minute (660,000 bytes) | about 5 seconds
Audio | Music, one minute (10.5 megabytes) | 84 seconds
Video | One minute, uncompressed (414 megabytes) | 3,300 seconds (55 minutes)
Video | One minute, compressed (10-15 megabytes) | 80 seconds
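
The arithmetic behind all three tables is the same: size times eight bits per byte,
divided by the effective link rate. A minimal sketch (the derating factor models the
text's 10X realistic adjustment):

def transfer_seconds(size_bytes, link_bits_per_s, derate=1.0):
    # Derate divides the nominal link rate to model real-world conditions.
    return size_bytes * 8 / (link_bits_per_s / derate)

samples = {
    "page of text (3,000 bytes)": 3_000,
    "full-screen bitmap (330,000 bytes)": 330_000,
    "one minute of music (10.5 MB)": 10_500_000,
}
for name, size in samples.items():
    modem = transfer_seconds(size, 28_800)
    lan = transfer_seconds(size, 10_000_000, derate=10)
    print(f"{name}: modem {modem:.1f} s, realistic LAN {lan:.1f} s")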
Application Considerations:
Stand-alone multimedia products depend on local content that is only as current as
when it was placed on the delivery medium. Internet applications have the advantage of
having content available for delivery as soon as the file is posted to a server. This
advantage is significant for developers who depend on real-time delivery of information,
for example an integrated multimedia current-events magazine that would otherwise be
delivered on a CD-ROM.
There are two basic models for authoring and delivering multimedia using
Internet technology:
 Application-based programming of multimedia authoring tools with built-in
Internet access (file transfer, Web, mail, etc.); for example, a traditional
CD-ROM based application that is able to present very current content by
downloading it from an Internet-based source.
 Browser-based software for presenting Web files, including associated
functions such as hyperlinking and in-line data presentation (for graphics,
audio and video files). This is the classic model of delivering Web-based
documents using a World Wide Web browser.
A variation of these two is a combined approach in which an application calls an Internet
browser to present Internet-based content. For example, a CD-ROM application might
start a web browser application and have it access and load a document.
Application-based Internet Access:
The application-based Internet access approach takes the standard features of a
multimedia authoring tool and augments them with Internet-specific features to:


 Access “real time” multimedia data (text, pictures, audio, and video).
This content complements or replaces existing application content that is
originally delivered on CD-ROM.
 Dynamically reorganize content; that is, download new instructions
to change the layout, presentation and even the look and feel of a
multimedia product.
Internet-specific capabilities built on these functions include:
 Access to downloading new files for a multimedia presentation.
Multimedia content can be transparently replaced in multimedia products
without the user ever knowing.
 Mail access to distribute new information. End users of multimedia
products can be notified of updates to their applications or of new
applications that might be of interest.
 Web page presentation to take advantage of Hypertext Markup Language
(HTML) document encoding.
Browser-based multimedia delivery
Browser-based/Web-based multimedia delivery technology offers a number of
advantages over local data delivery via CD-ROM or other high-density storage media:

 Access to server-based resources such as applications and databases.
Information on servers can be continuously updated and distributed to
large numbers of end users.
 Add-on software modules that enhance the behavior or performance of the
browser. Browser software applications can be given new functionality,
such as the ability to present multimedia animation files, with add-on
software modules.
 Programming and scripting languages that add functionality to the
browser. Web documents with embedded scripts enable programmed
functions to be added to browser applications.


5.2.2. How the Internet Works


The simplest definition of the Internet is that it is the largest computer network in the
world. But technically speaking, the Internet is actually a network of many smaller
networks that exist all over the world.
It is this internetworking, i.e., the linking of many networks, including private
networks, that gave the Internet its name.
One of the greatest things about the Internet is that nobody really owns it. It is a global
collection of networks, both big and small. These networks connect together in many
different ways to form the single entity that we know as the Internet. In fact, the very
name comes from this idea of interconnected networks.
Every computer that is connected to the Internet is part of a network, even the one in your
home. For example, you may use a modem and dial a local number to connect to an
Internet Service Provider (ISP). At work, you may be part of a local area network
(LAN), but you most likely still connect to the Internet using an ISP that your company
has contracted with. When you connect to your ISP, you become part of their network.

The ISP may then connect to a larger network and become part of their network. The
Internet is simply a network of networks.
Most large communications companies have their own dedicated backbones connecting
various regions. In each region, the company has a Point of Presence (POP). The POP is
a place for local users to access the company's network, often through a local phone
number or dedicated line. The amazing thing here is that there is no overall controlling
network. Instead, there are several high-level networks connecting to each other through
Network Access Points or NAPs.
Internet Functioning
The reason that the Internet works at all is that every computer connected follows a
common protocol.


The communication protocol used by Internet is TCP/IP. The Transmission Control


Protocol is responsible for dividing the file/message into packets on the source computer.
It is also responsible for reassembling the received packets at the destination or recipient
computer.
The IP (Internet Protocol) part is responsible for handling the address of destination
computer so that each packet is routed (sent) to its proper destination.
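
This division of labour can be seen from a few lines of Python (the host and request are
illustrative): the application writes to and reads from one reliable byte stream, while
TCP and IP handle the packetizing, routing and reassembly underneath.

import socket

with socket.create_connection(("example.com", 80)) as s:
    s.sendall(b"HEAD / HTTP/1.0\r\nHost: example.com\r\n\r\n")
    reply = s.recv(4096)  # TCP has already reassembled the packets in order
print(reply.decode(errors="replace").splitlines()[0])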
How to be part of Internet?
To be part of Internet, all you need to do is - get connected to a server on Internet. A
server is a computer on a network that serves the requests made by various other
computers.
You can get connected to Internet through one of the following methods:
 Leased Lines- Leased lines are direct cables laid to connect your computer to an
Internet Service Provider's (ISP's) Server.
 Dial-up connection - A dial-up connection is a temporary connection set up between
your computer and the ISP server. A dial-up connection is established using a modem,
which uses the telephone line to dial the number of the ISP server.

5.2.3. Internetworking

Internetworking involves connecting two or more distinct computer networks or


network segments together to form an internetwork (often shortened to internet), using
devices which operate at layer 3 of the OSI Basic Reference Model (such as routers or
layer 3 switches) to connect them together to allow traffic to flow back and forth between
them . The layer 3 routing devices guide traffic on the correct path (among several
different ones usually available) across the complete internetwork to their destination.

Note: Routers were originally called gateways, but that term was discarded in this
context, due to confusion with functionally different devices using the same name.

It is interesting to note that some people inaccurately refer to the connecting together of
networks with bridges as internetworking, but the resulting system mimics a single
subnetwork, and no internetworking protocol (such as IP) is required to traverse it.
However, a single computer network may be converted into an internetwork by dividing
the network into segments and then adding routers or other layer 3 devices between the
segments.
The original term for an internetwork was catenet. Internetworking started as a way to
connect disparate types of networking technology, but it became widespread through the
developing need to connect two or more local area networks via some sort of wide area
network. The definition now includes the connection of other types of computer networks
such as personal area networks.

The most notable example of internetworking in practice is the Internet, a network of


networks running different low-level protocols, unified by an internetworking protocol,
the Internet Protocol (IP).


IP only provides an unreliable packet service across an internet. To transfer data reliably,
applications must utilize a Transport layer protocol, such as TCP, which provides a
reliable stream. (These terms do not mean that IP is actually unreliable, but rather that
it sends packets without first establishing a connection with the destination
host. The opposite applies for "reliable".) Since TCP is the most widely used
transport protocol, people commonly refer to TCP and IP together, as "TCP/IP". Some
applications occasionally use a simpler transport protocol (called UDP) for tasks which
do not require absolutely reliable delivery of data, such as video streaming.
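
By contrast, a UDP sender (a sketch only; the address is a documentation example)
simply fires datagrams at a destination with no connection and no delivery guarantee,
which is why it suits streaming:

import socket

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.sendto(b"video frame 1", ("203.0.113.5", 5004))  # fire and forget
sock.close()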

5.2.4. Connections
Types of Internet Connections

There are several types of Internet connections available for home and small office
connectivity. This following section will address fundamentals of installation and
security for the primary connections available today.

 Dial-up connections (including bonded analog)


 DSL
 Cable internet
 ISDN
 Wireless internet

Dial Up Connections

Dial-up Internet connections are the most common and most readily available for home
and small business users. Dial up connections are easy to set up and use, and generally
inexpensive.

Single Line Dial-up


The following is required for a single line Windows dial-up connection:
 PC with modem installed, either internal or external.
 A dial-up Internet access account with an Internet Service Provider, such as
AT&T, Verizon, AOL, etc.

 A Windows dial-up networking session installed and configured for the ISP. This
would include the access phone number for the ISP, your account name and
password.
 An analog phone line connected to the modem.

The configuration for a dial-up networking session in Windows 2000 is found under
Start/Control Panel/Network and Dial-up Connections.


Features of dial-up connections


 Dial-up connections are on an as-needed basis, they are usually not permanently
connected.
 Speed of single line dial-up connections is limited to 53kbps. This speed may be
lower depending on phone line quality, modem and ISP equipment. Typical
connection speeds in metropolitan areas for 56K modems are usually between
38kbps and 46kbps.
 Available anywhere there is phone service.
 One connection may be shared between several Windows computers via Internet
connection sharing on a small network, although performance will probably be
extremely slow if more than one person is using the connection at a time.
Security Considerations for dial-up connections
 Installed virus protection software is essential for all Internet connections,
including dial-up.
 Firewalls are usually unnecessary for stand-alone dial-up connections, since there
is no permanent connection to the Internet from the user’s PC. For installation
with permanent IP addresses assigned by the ISP to the dial-up customer,
however, a firewall is necessary.
 Software based firewalls (such as Zone Alarm) are generally adequate for dial-up
connections.


Bonded Analog dial-up

Bonded Analog Dial-up is relatively new to the market, and not supported by all ISPs.
Bonded dial-up requires two phone lines, and an ISP account supporting MultilinkPPP
(multilink point to point protocol).

Bonded analog dials two phone lines simultaneously, and links them into one larger pipe
for Internet connectivity. Typical connection speeds for this service range from 76kbps to
98kbps.

Equipment considerations for Bonded Analog


 Bonded analog requires a special modem which supports MultilinkPPP and two
simultaneous connections.
 ISP must support MultilinkPPP.
 Bonded analog is not a widely available service, but may be a good solution for
rural customers who do not have access to cable, DSL or ISDN.
DSL Connections

One of the most desirable new options for agency connectivity is DSL (Digital
Subscriber Line) service, which operates on the same copper wire transmission lines as
Plain Old Telephone Service (POTS). DSL provides practical connection speeds of up to
1.5 Mbps in areas where the service is available.

Advantages of DSL
 Providers offer several options for connection speed, up to 1.5mbps
 "Always on" connection --no dialing in
 Can support a large number of users in the office from one connection.
Limitations of DSL
 Not available in all areas.
 In areas where available, the transmission format of DSL limits connections to within
approximately 18,000 feet of a Local Exchange Carrier’s Central Office (CO).
How DSL works

DSL is transmitted over Plain Old Telephone (POTS) lines. Part of the bandwidth of the
normal line, outside the range of normal voice communication, is used to transmit a
digital signal, which is decoded by a DSL modem on the receiving end.

Because analog transmission only uses a small portion of the available amount of
information that could be transmitted over copper wires, the maximum amount of data
that you can receive using ordinary modems is about 56 kbps.

Normal dial-up transmissions are analog. This means the ability of your computer to send
and receive information is limited because the Telephone Company converts information
from the Internet that arrives as digital data, puts it into analog form for your telephone


line, and requires your modem to change it back to digital. In other words, the limited
bandwidth of the analog transmission between your home or business and the Phone
Company is a bandwidth bottleneck.

DSL does not require data to be changed into analog form and back. Digital data is
transmitted directly to your computer as digital data, allowing the Phone Company to use
more bandwidth for transmitting it to you.

If you choose, the signal can be separated so some of the bandwidth is used to
simultaneously transmit an analog signal so you may use your telephone and computer on
the same line at the same time.

Splitter-based vs. Splitterless DSL


Most DSL technologies require that a signal splitter be installed on the customer
premises, requiring the expense of a phone company visit and installation. It is possible,
however, to manage the splitting remotely from the central office. This is known as
Splitterless DSL, "DSL Lite," G.Lite, or Universal ADSL.
Factors Affecting the Experienced Data Rate
DSL modems follow the data rate multiples established by North American and European
standards. In general, the maximum range for DSL without repeaters is 5.5 km (18,000
feet or 3.4 miles). As distance decreases toward the telephone company office, the data
rate increases.
Another factor is the gauge of the copper wire. Heavier 24-gauge wire carries the same
data rate farther than 26 gauge wire. If you live beyond the 5.5-kilometer range, you may
still be able to have DSL if your phone company has extended the local loop with optical
fiber cable.

The Digital Subscriber Line Access Multiplexer (DSLAM)

To interconnect multiple DSL users to a high-speed backbone network, the telephone


company uses a Digital Subscriber Line Access Multiplexer (DSLAM). Typically, the
DSLAM connects to an asynchronous transfer mode (ATM) network that can aggregate
data transmission at gigabit data rates. At the other end of each transmission, a DSLAM
demultiplexes the signals and forwards them to appropriate individual DSL connections.

Types of DSL
ADSL

ADSL (Asymmetric Digital Subscriber Line) is the form of DSL most familiar to home
and small business users. ADSL is called "asymmetric" because most of its two-way or
duplex bandwidth is devoted to the downstream direction, sending data to the user. Only
a small portion of bandwidth is available for upstream or user-interaction messages.


Most Internet sites, especially with graphics- or multi-media intensive data, need lots of
downstream bandwidth, but user requests and responses are small and require little
upstream bandwidth. Using ADSL, up to 6.1 megabits per second of data can be sent
downstream and up to 640 Kbps upstream.

The high downstream bandwidth means your telephone line will be able to bring motion
video, audio, and 3-D images to your computer or hooked-in TV set. In addition, a small
portion of the downstream bandwidth can be devoted to voice rather than data, and you
can use your phone without requiring a separate line.

In many cases, your existing telephone lines will work with ADSL. In some areas, they
may need upgrading.

CDSL
CDSL (Consumer DSL) is a trademarked version of DSL that is somewhat slower than
ADSL (1 Mbps downstream, less upstream) but has the advantage that a "splitter" does
not need to be installed at the user's end. Rockwell, which owns the technology and
makes a chipset for it, believes that phone companies should be able to deliver it in the
$40-45 a month price range. CDSL uses its own carrier technology rather than DMT or
CAP ADSL technology.
G.Lite or DSL Lite

G.Lite (also known as DSL Lite, Splitterless ADSL, and Universal ADSL) is essentially a
slower ADSL that doesn't require splitting of the line at the user end but manages to split
it for the user remotely at the telephone company. This saves the cost of what the phone
companies call "the truck roll". G.Lite (officially, ITU-T standard G.992.2) provides a
data rate from 1.544 Mbps to 6 Mbps downstream and from 128 Kbps to 384 Kbps
upstream. G.Lite is expected to become the most widely installed form of DSL.

HDSL

The earliest variation of DSL to be widely deployed has been HDSL (High bit-rate DSL),
used for wideband digital transmission within a corporate site and between the Telephone
Company and a customer. The main characteristic of HDSL is that it is symmetrical: an
equal amount of bandwidth is available in both directions. For this reason, the maximum
data rate is lower than for ADSL. HDSL can carry as much on a single wire of twisted-
pair as can be carried on a T1 line in North America or an E1 line in Europe (2,320
Kbps).

IDSL

IDSL (ISDN DSL) is somewhat of a misnomer since it's really closer to ISDN data rates
and service at 128 Kbps than to the much higher rates of ADSL.


RADSL

RADSL (Rate-Adaptive DSL) is an ADSL technology from Westell in which software is


able to determine the rate at which signals can be transmitted on a given customer phone
line and adjust the delivery rate accordingly. Westell's FlexCap2 system uses RADSL to
deliver from 640 Kbps to 2.2 Mbps downstream and from 272 Kbps to 1.088 Mbps
upstream over an existing line.

VDSL

VDSL (Very high data rate DSL) is a developing technology that promises much higher
data rates over relatively short distances (between 51 and 55 Mbps over lines up to 1,000
feet or 300 meters in length). It's envisioned that VDSL may emerge somewhat after
ADSL is widely deployed and co-exist with it. The transmission technology (CAP, DMT,
or other) and its effectiveness in some environments are not yet determined. A number of
standards organizations are working on it.

DSL Connection Sharing

If you plan to share a DSL connection between several PCs in an office, the best solution
is to purchase a DSL ready router. This device will allow one Internet IP address to be
shared across your local area network, so you only need to pay for a single connection.

Security Considerations

Since a DSL connection is 'always on', meaning you do not have to dial a number to use
it, it is also a good idea to consider some firewall features when selecting a DSL router.

A firewall is a hardware and/or software application that protects a network from


intruders. Firewalls can be as simple as a software application installed on a single PC to
multiple UNIX machines with powerful RISC processors and an array of proxy servers.

There are many excellent DSL routers on the market from vendors such as Nortel,
Cayman, Cisco and Flowpoint. All of these manufacturers make DSL routers which
operate in a similar fashion. If the DSL provider in your area does not provide a router as

part of the installation agreement, you will have to select and purchase a router yourself.

The Routers and Modems tutorial has more in-depth information regarding routers and
modems.

If you are selecting a DSL router for use on a peer to peer or client-server based
Windows network, it is important to make sure it includes these features:

Required Features for a DSL router


 NAT (Network Address Translation -- Allows one outside IP address to serve
many machines on a local area network for Internet access


 DHCP (Dynamic Host Control Protocol) -- Allows the router to assign IP


addresses to devices on the local area network (LAN)
 Firewall features -- Provides denial of service protection, Java blocking, dynamic
filtering and other security features
 10mbps Ethernet Interface -- Allows connection to the LAN

Even if you are running 100mbps Ethernet on the local area network, a 10mbps
connection to the router is all that is necessary for DSL, since the maximum bandwidth of
the DSL service will be under 1.5mbps in any case.

Firewall features on the router are imperative, since DSL is a constant connection. This
protects your network from attack by outside intruders. NAT also provides some
protection for your network, since it "hides" the internal network addresses from the
Internet. Please see the Firewalls and Virus protection tutorial for more information.

If you are purchasing a router separately from your DSL provider, you also need to make
sure the model you choose is compatible with any other equipment the provider may be
installing. The DSL provider and the router manufacturer should be able to verify this.

Setting up a peer to peer network to use DSL

In order to share a DSL Internet connection in an office, follow these steps:

1. Construct a peer to peer network in the office. See the Small Network
Fundamental module for more information.
2. Set up each network client to use DHCP to obtain an IP address on the network.
Go to Start/Settings/Control Panel/Network and open the TCP/IP properties.
Under the DHCP tab, enable DHCP for obtaining client address. Do this on each
PC in the office.
3. Follow the instructions provided with your DSL router for enabling DHCP.
Assign an IP address range to the DHCP pool for your network clients
4. Make sure NAT (network address translation) is set up on the router. See the
Routers and Modems tutorial.
5. Assign a static IP address for your DSL router to be used as the Default Gateway.
This Default Gateway is the device your network clients will use to access the
Internet. In this case, it is your DSL router. Make sure the static IP address is in
the SAME SUBNET as the range you used for the DHCP client pool.
 For example, if the range for the DHCP pool was 10.0.0.2 to 10.0.0.10, then use
10.0.0.1 for the default gateway (your DSL router). It is common practice in TCP/IP
usage to use the first address on a subnet for the router. The sketch below checks this
arithmetic.
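
A quick way to verify the subnet arithmetic from the example above is with Python's
standard ipaddress module (a sketch using the same 10.0.0.x addresses; the /24 mask
is assumed for illustration):

from ipaddress import ip_address, ip_network

subnet = ip_network("10.0.0.0/24")   # /24 mask assumed for illustration
gateway = ip_address("10.0.0.1")
pool = [ip_address(f"10.0.0.{i}") for i in range(2, 11)]  # 10.0.0.2 - 10.0.0.10

assert gateway in subnet and all(addr in subnet for addr in pool)
print("Gateway and DHCP pool share subnet", subnet)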

You should be in business for shared Internet access! Go to a client computer, restart it,
open up Internet Explorer and make sure it works. If it does not, check the following:


1. Check the steps above. Did you use IP addresses from the same subnet? (use the
10.0.0.0 subnet -- this is easiest, and correct usage of the protocol).
2. Check the settings in Internet Explorer to make sure it is not forcing a dial-up
connection. Go to Tools/Internet Options/Connections and make sure "Never Dial
a Connection" is selected.
Cable Internet

Cable modems have some positive points, and several negative ones. At the present time,
cable companies offer Internet service to many areas where DSL is not yet available.
However, over the next few years, DSL will probably become more pervasive than cable
Internet service.

Important considerations for cable modems

These items, while not necessarily reasons "not to use cable Internet access", are
important considerations when discussing this option with a customer or weighing a cable
modem solution against DSL.

Cable modems operate on shared connections

This means that when the cable between a local cable company (usually fiber optic) and
your neighborhood or business area is split into many connections to various homes and
businesses, you are on the same network as everyone connected to the main feed. This is
not necessarily bad, but it does require some considerations regarding security and
bandwidth.

For security reasons, the cable modem system is designed so any modem on the network
can communicate only with the CMTS (Cable Modem Termination Service) at the cable
company’s office, and not with other cable modems. The newest standard of the
DOCSIS protocol used for cable Internet communication is encrypted, so this will
eliminate some of the security concerns. New equipment is required for the cable
companies to take advantage of this protocol, and it will be some time before this
equipment is installed on a widespread basis.

Unable to guarantee a particular amount of bandwidth availability


Because cable works with a shared connection, the cable company is usually unable to
guarantee a particular amount of bandwidth will be available to a certain customer at a
given time. If many customers are using the Internet on your segment at the same time,
there will be less bandwidth available, and vice versa. This differs from DSL in that with
the latter, there is always the same amount of bandwidth available on your point to point
dedicated connection. In spite of this architecture, however, there is rarely a bottleneck in
the cable service at the "neighborhood end" of the segment. Bottlenecks in most cable
Internet service systems tend to be in the Cable Company's pipe to the Internet itself, not
in the connection to you, the end user.


Business Class Service not always offered

Because of the shared connections, and the fluctuating nature of bandwidth availability,
most cable companies are unable to offer "business class service" which would guarantee
a certain Quality of Service (bandwidth and availability) with a Service Level
Agreement. A few companies "guarantee" available bandwidth by placing equipment on
their systems which monitors usage and availability, and by restricting the number of
users it installs on a segment. In theory, this should work OK, but it is generally more
expensive than regular residential-class cable service, since the number of customers on a
segment needs to be restricted.

Special security considerations because of shared network connections

This is really the least problematic issue regarding cable modem connections, because if
you have taken the proper steps to secure your Internet connection with a firewall, it
doesn’t really matter if you are on a shared segment or not. One exception is with email.
On the shared network portion of the cable segment it is theoretically possible for other
users to intercept unencrypted traffic such as e-mail messages.

ISDN:

Integrated Services Digital Network, called ISDN for short, is an all-digital


telecommunications technology that can simultaneously transmit voice conversations and
data calls over the same pair of copper telephone wires. What's important about ISDN
and what makes it different from the analog phone lines that you're probably using today
is the speed with which it transfers data and the flexibility it providers its users.


5.2.5. Internet Services


The types of services available on the Internet are as diverse as the people interested in
them. The Internet fulfils the ever-increasing demand for information to a very large
extent. Retrieval of information thus forms a critical part of using the Internet.
Some of the common ways by which information can be retrieved over the Internet are as
follows.
Search engines: A search engine is a program that searches through a database of web
pages for particular information
Home Page: It is the top-level web page of a web site. This is the page that gets displayed
when the web site of a product or service is opened.
E-learning: It provides the convenience of learning at the pace, place and time of the
learner's choosing.
Access to publishing: Through the Internet one has easy access to full-text articles,
reports, illustrated articles, abstracts, computer programs, etc.
File Transfer Protocol: Using FTP you are able to transfer files from a computer on the
Internet to your own computer (see the sketch after this list).
Other Online Services
E-mail: E-mail is used to send written messages between individuals or groups of
individuals, often geographically separated by large distances.
Finding People on the Net: There are search engines that offer a way to search for basic
information about people and places.
Chat: Chat is one of the fastest ways of communicating with others over the Internet. It
refers to people holding live (or real-time) keyboard conversations, and not voice
conversations.
Video Conferencing: Users can have video conferencing in which one can see people
live on the computer screen while talking to them. A videoconference can be held
between many people located at different places. In a videoconference, people can work
together on a file or even watch a video clipping.
Telnet: Telnet is a way of connecting one computer to another on the Internet. A user on
one computer can use Telnet to connect to and log on to another computer on the
network. Once logged in, the user can get e-mail or do other work on the remote
computer.
Newsgroup: A newsgroup or forum is like a community bulletin board. You can post a
message, reply to a message or just read messages.

5.2.6. The World Wide Web


The World Wide Web was first developed at the European Laboratory for Particle
Physics (CERN) in 1989. Today it is the fastest growing component of the Internet. When you
surf the net, you are actually using the World Wide Web. As its name implies the World
Wide Web is a globally connected network.
The Web has many things to offer, but the most fascinating are the Web pages that
incorporate text, graphics, sound, animation and various other multimedia features. The


pages are connected to one another using hypertext. This is a method of presenting
information in which certain text is highlighted. The highlighted text is a link to other
pages that have more information on that particular topic. Thus the user can move from
one page to other linked pages via the hypertext link.
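As a small illustration (the page name audio.html here is hypothetical), such a link is
written in HTML with the anchor tag; the text between <a> and </a> becomes the
highlighted text the user clicks:

    <p>Read more about <a href="audio.html">digital audio</a>;
    the highlighted words are a hypertext link to another page.</p>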
The World Wide Web is non-linear; that is, it has no top and no bottom. This implies
that one does not have to follow a fixed path to access information. Thus, a user can:
 Move from one link to another,
 Go directly to a link if its address is known, or
 Jump to a specific part of a document.
To navigate the World Wide Web, users need browser software such as Internet
Explorer or Netscape Navigator.
Web Applications

Information Retrieval: Exploring the Web and retrieving information from the Net.
Electronic Mail: The most widely used tool to send and receive messages electronically
on a network.
Search Engine: A program that searches through a database of web pages for particular
information.
Chat: Online textual talk is called chatting.
Video Conferencing: A two-way videophone conversation among multiple participants
is called video conferencing.
FTP: File Transfer Protocol, which defines a method for transferring files from one
computer to another over a network.
Telnet: An Internet utility that lets you log onto remote computer systems.
Newsgroup: A newsgroup or forum is an online community bulletin board, where users
can post messages, respond to posted messages, or just read them. Groups of related
messages are known as threads.
Elements of Web

Clients and Servers: A Web server is a computer connected to the Internet that runs a
program that takes responsibility for storing, retrieving, and distributing some of the
Web's files. A Web client or Web browser is a computer that requests files from the Web.
Web's Languages and Protocols: Computers that are connected to the Internet must have
a well-defined set of languages and protocols that are independent of the hardware or
operating systems on which they run.
URLs and Transfer Protocols: Each file on the Internet has an address, called a Uniform
Resource Locator (URL). The first part of a URL specifies the transfer protocol, the
method that a computer uses to access the file (e.g. HTTP, FTP).
HTML: The Hypertext Markup Language (HTML) is the universal language of the Web.
It is the language you use to lay out pages that are capable of displaying all the diverse
kinds of information that the Web contains.
Java and JavaScript: Java is a language for sending small applications (called applets)
over the Web, so that your computer can execute them. JavaScript is a language for
extending HTML to embed small programs called scripts in Web pages.
VBScript and ActiveX Controls: VBScript and ActiveX Controls are Microsoft systems
that work with Internet Explorer. VBScript, a language that resembles Microsoft's Visual
Basic, can be used to add scripts to pages that are displayed by Internet Explorer.
ActiveX controls, like Java applets, are used to embed executable programs into a Web
page. When Internet Explorer encounters a Web page that uses ActiveX controls, it
checks whether that particular control is already installed on your computer, and if it
isn't, IE installs it.
XML and Other Advanced Web Languages: The Extensible Markup Language (XML) is
a very powerful language that may replace HTML as the language of the Web. Currently,
XML is little more than a specification at the W3C, but it is expected to be implemented
in fifth-generation browsers. Besides XML, Cascading Style Sheets (CSS), the Extensible
Style Language (XSL) and Dynamic HTML are also used.
Image Formats: Pictures, drawings, charts and diagrams are available on the Web in a
variety of formats. The most popular formats for displaying graphical information are
JPEG and GIF.
Audio and Video Formats: Some files on the Web represent audio or video, and they can
be played by browser plug-ins.
VRML: The Virtual Reality Modeling Language is the Web's way of describing
three-dimensional scenes and objects. Given a VRML file, a browser can display the
scene or object as it would appear from any particular viewing location. You can rotate
an object or move through a scene, using the controls that a browser provides.
Web Pages and Web Sites: A Web page is an HTML document that is stored on a Web
server and that has a URL so that it can be accessed via the Web. A Web site is a
collection of Web pages belonging to a particular person or organization.

5.2.7. Web Servers


A Web server is a computer on the Internet that stores Web pages. Any user of the
Internet, located anywhere in the world can view a Web page on the Web server. These
servers differ from desktop computers in many ways. They can handle multiple
telecommunication connections at any given point of time. Usually, they also have
gigabytes of hard disk storage, considerable random access memory (RAM) and a very
high-speed processor. In certain cases Web servers might actually be several computers
linked together, with each handling incoming Web page requests.
A Web server runs special Web server software that reads requests sent from Web
browsers, and retrieves and sends the appropriate information to the computer from
where the request has come (called a client computer). Web servers normally have
dedicated links to the Internet backbone.
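As a simplified sketch of this exchange (the host and file names are made up), the
browser sends a short text request using the HTTP transfer protocol, and the server
answers with a status line, headers and the requested file:

    GET /index.html HTTP/1.1
    Host: www.example.com

    HTTP/1.1 200 OK
    Content-Type: text/html

    <html> ...the requested page... </html>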

5.2.8. Web Browsers


Web browser is a software package used to access locations on the World Wide Web,
part of the global computer network called the Internet. Most browsers also contain
electronic mail (e-mail) software, including a simple word processor and a system for
storing mail.
To browse the Web, a computer user must first establish a telephone connection with a
computer operated by a business called an Internet service provider (ISP). The user
makes this connection by means of a device known as a modem and special software.
Once the connection is made, a display created by the ISP appears on the screen. In some
cases, the user can employ the browser directly from this display. In other cases, the user
must access a separate browser display. The key part of a browser display, or the
browser portion of an ISP display, is a box called an address field. To access a location
on the Web, known as a Web site, the user types the site's address in the address field.


Web sites generally consist of several displays called pages. Accessing a site brings the
first page to the screen via the telephone connection. This page contains text and, in
many cases, illustrations. Certain illustrations and passages of text are "hot"--they
contain electronic links to other pages or even other Web sites. The user can access
another page or site by selecting the "hot" area--with a mouse, for example. The browser
manages all the switching between links.
Many browsers store a user's most frequently accessed sites in files called caches in the
user's computer. When a user visits a site, the browser checks to see whether that site has
changed since the user's last visit. If the site has not changed, the browser loads it from
the cache, creating a copy of it in the computer's memory. Loading from the cache is
much faster than loading via the telephone connection. Browsers constantly update the
caches, removing sites that have not been visited recently to make room for other sites.

Elements of a Browser
Platform Support: Platform support is often confused with operating system support.
Examples of platforms are Windows, Unix, Mac, and OS/2; there are different variations
on each platform, and these variations are operating systems.
Interface: All browsers have a similar type of interface, with a menu and button bar
above the browser window.
Bookmarks: Bookmarks, while not the most important feature in a browser, are certainly
a very convenient item. Given the amount of information being placed on the web daily,
imagine having to write down every URL you wanted to remember, or trying to find the
same URL again! Browsers also let the user categorize and sort bookmarks into
sections, so their URLs are much easier to find in the bookmark list.
Mail and News: Most browsers support sending and receiving HTML mail, so e-mail can
be sent with images, sound, and even Java effects.
HTML Support: Style sheets give users the same flexibility of design and layout that
desktop publishing programs do, by enabling them to attach styles (such as fonts, colors,
and spacing) to HTML pages. By applying separate style tags to HTML, web page
designers ensure that all browsers (that support CSS) can view the basic text and structure
of the Web page while more sophisticated designs can be presented.

Basic Elements of Internet Explorer:

The Internet Explorer Interface
The basic look and feel of Internet Explorer (IE) is very similar to Netscape, so users
making the transition from one browser to the other should experience very little
difficulty. The main elements of the IE interface are described below.


Status Indicator
The Status Indicator lets you know when the Web page you want to view has fully
loaded into the browser window. If the Status Indicator is animated (for example, a
spinning globe), the page has not fully loaded. It's best to wait for the indicator to stop
before you begin to interact with the Web page (clicking on links, scrolling, etc.) to
ensure the browser won't freeze.

Standard Toolbar
The Standard toolbar provides a series of buttons representing the most commonly used
features of IE 5.5.
Address Toolbar


The Address toolbar provides a textbox that allows you to enter the URL, or Web
address, of the site you would like to visit. The Address toolbar also displays the URL of
the page you are currently viewing.
Links Toolbar
The Links toolbar is a customizable bar that contains buttons that allow you to quickly
view sites you visit frequently.
Scroll Bar
The Scroll Bar appears when the Web page contains more information than can be seen
at a glance. The Scroll Bar also gives you a hint as to how big the page you are viewing
is: if the box in the Scroll Bar is more than half the size of the entire bar, you are
currently viewing more than half of the information contained on the Web page. If you
are planning to print the current page you are viewing, it's always a good idea to use the
Scroll Bar (if one is visible) to determine just how big the page is.
Status Bar
The Status Bar tells you what the progress of the browser is as it is loading pages. Also,
if you hover your mouse over a link on a page, the Status Bar shows you the URL that
the link points to.

Netscape Features
Application Window
The main elements of the Netscape Navigator window are described below.


Page Window
This area contains information, animation, graphics, and links to other sites.


Links may be indicated only by color. The rule is that if the mouse pointer changes shape
over an item (typically to a pointing hand), then you can be sure that whatever it's
pointing to is a link.


Preferences

Inside your preferences, you can change your starting page, home page, button bar
appearance, and browsing history duration.

To edit your preferences, choose Preferences from the Edit menu.

 To change your starting page and home page, click Navigator from the menu on
the left. On the right, in the section called Navigator starts with, select Blank
page, Home page, or Last page visited.
 To change your home page, enter a URL in the field in the Home page section.


 To change the duration of your browsing history, enter a number of days in the
field in the History section.
 To change the appearance of your button bar, click Appearances from the menu
on the left. On the right, in the section called Show toolbars as select Pictures
and Text, Pictures Only, or Text Only.

Using Netscape
Opening a Location
If you know a specific URL you would like to visit, click the Location field, type the
address, and press Enter.
Searching
To search using Netscape’s search site,

1. Click Search on the toolbar.


2. Enter a keyword, topic or phrase in the Search the Web: field.
3. Press Enter.

To search using other search engines, type the URL of the site in the location bar and
perform a search the same way as stated above.
Bookmarks
A bookmark allows you to revisit a page later. The URL is added to a list of bookmarked
URLs found under the Bookmarks menu next to the Location field.

 To bookmark a page, click Bookmarks and drag to Add Bookmark.


 To visit a page you’ve bookmarked, click Bookmarks and drag to a bookmark.
 To delete or organize your bookmarks, click Bookmarks and drag to Edit
Bookmarks.
 From the window that appears:

o Delete a bookmark by selecting it and pressing Delete.
o Create a new folder by choosing New Folder from the File menu.
o Place a bookmark into a folder by clicking and dragging it to a folder.

History
History keeps a record of the pages that you've visited over the amount of time specified
in your preferences.
To view your history, choose History from the Communicator menu. A window will
appear. From here you can scroll through and double-click a page you would like to
revisit.
Saving
You can save the source of a Web page by selecting Save As from the File menu.
To save images,

1. Position the mouse pointer over the image and right click. (On a Macintosh, click
and hold down the mouse button for a second or two.)
2. Choose Save this Image as... from the menu that appears.


3. Enter a file name (if you wish to change it) and select a destination.
4. Click {OK}.

Remember that copyright laws apply to Web pages and images just as they do to paper
publications.
Printing
To print a web page,

1. Select Print from the File menu.


2. Click {OK}.

5.2.9. Web page makers and Site Builders


A Web site is a collection of many Web pages collected together as a single package.
The first or index page in this collection of pages that makes up the Web site is
known as the Home Page. This page is like the cover of a magazine or the contents page
of a book. Usually it acts as an introduction to the site, explaining its purpose and
describing the information found on other pages of the site. In other words, the home page
acts as the table of contents for the rest of the site.
Generally Web sites use one of three kinds of organizational structures to organize
their pages - tree structure, linear structure and random structure.
 The tree structure, or pyramid structure, is the easiest to read, as the format
makes it easy for users to navigate through the site and find the information they are
looking for.
 In a linear structure, one page leads to the next, which then leads to the next and so
on in a straight line.
 Finally, in a random structure, pages are connected to one another in a random way,
following no order.
Once the pages of a Web site are created, you use FTP software to publish them on a web
server. The space on the server is normally taken on rent from a local ISP. However, you
can even set up your own Web server if the size of your Web site is big.
Web Page Maker is an easy-to-use web page editor that allows you to create and upload
web pages in minutes without knowing HTML. Simply drag and drop objects onto the
page and position them freely in the layout. It comes with several pre-designed templates
that help you to get started. It also includes ready-to-use navigation bars that can be
inserted into the page. Additional features include a built-in color picker, JavaScript
library, image library and built-in FTP client.
Every Web developer has to know the building blocks of the Web:

 HTML 4.01
 The use of CSS (style sheets)
 XHTML
 XML and XSLT
 Client side scripting
 Server side scripting


 Managing data with SQL


 The future of the Web

HTML 4.01

HTML is the language of the Web, and every Web developer should have a basic
understanding of it.

HTML 4.01 is an important Web standard and very different from HTML 3.2.

When tags like <font> and color attributes were added to HTML 3.2, it started a
developer's nightmare. Developing web sites where font information must be added
to every single Web page is a long and expensive process.

With HTML 4.01 all formatting can be moved out of the HTML document and into a
separate style sheet.

HTML 4.01 is also important because XHTML 1.0 (the latest HTML standard) is HTML
4.01 "reformulated" as an XML application. Using HTML 4.01 in your pages makes the
future upgrade from HTML to XHTML a very simple process.

Make sure you use the latest HTML 4.01 standard.
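A minimal sketch of an HTML 4.01 Strict page (the title and file names are illustrative):
no formatting appears in the document itself; it only points to an external style sheet.

    <!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.01//EN"
        "http://www.w3.org/TR/html4/strict.dtd">
    <html>
    <head>
      <title>Sample Page</title>
      <!-- all fonts and colors live in the external style sheet -->
      <link rel="stylesheet" type="text/css" href="style.css">
    </head>
    <body>
      <h1>Multimedia on the Web</h1>
      <p>No font tags are needed anywhere in this document.</p>
    </body>
    </html>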

Cascading Style Sheets (CSS)

Styles define how HTML elements are displayed, just like the font tag in HTML 3.2.
Styles are normally saved in files external to HTML documents. External style sheets
enable you to change the appearance and layout of all the pages in your Web, just by
editing a single CSS document. If you have ever tried changing something like the font or
color of all the headings in all your Web pages, you will understand how CSS can save
you a lot of work.
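For instance, a single external file such as the hypothetical style.css below controls every
page that links to it; changing one line here restyles every heading on the site:

    /* style.css - shared by every page that links to it */
    body { font-family: Arial, sans-serif; }
    h1, h2 { color: navy; }   /* edit once, all headings change */
    p { line-height: 1.4; }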

XHTML - The Future of HTML

XHTML stands for Extensible HyperText Markup Language.

XHTML 1.0 is now the latest HTML standard from W3C. It became an official
Recommendation January 26, 2000. A W3C Recommendation means that the
specification is stable and that the specification is now a Web standard.

XHTML is a reformulation of HTML 4.01 in XML and can be put to immediate use with
existing browsers by following a few simple guidelines.
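The guidelines are largely mechanical: write tags and attributes in lowercase, quote all
attribute values, and close every element. A minimal sketch of a valid XHTML 1.0 page:

    <?xml version="1.0" encoding="UTF-8"?>
    <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Strict//EN"
        "http://www.w3.org/TR/xhtml1/DTD/xhtml1-strict.dtd">
    <html xmlns="http://www.w3.org/1999/xhtml">
      <head><title>XHTML Sample</title></head>
      <body>
        <!-- empty elements such as the line break are closed -->
        <p>First line<br />second line</p>
      </body>
    </html>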


XML - A Tool for Describing Data

The Extensible Markup Language (XML) is NOT a replacement for HTML. In future
Web development, XML will be used to describe and carry the data, while HTML will be
used to display the data.

Our best description of XML is as a cross-platform, software- and hardware-independent
tool for storing and transmitting information.

We believe that XML is as important to the Web as HTML was to the foundation of the
Web and that XML will be the most common tool for all data manipulation and data
transmission.
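As a hedged sketch (the element names below are invented for illustration), an XML
document simply describes what the data is; how it is displayed is left to HTML or a
style sheet:

    <?xml version="1.0"?>
    <course>
      <code>DMSCSE15</code>
      <title>Multimedia and Its Applications</title>
      <semester>First</semester>
    </course>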

XSLT - A Tool for Transforming Data

XSLT (Extensible Stylesheet Language Transformations) is a language for transforming
XML.

Future Web sites will have to deliver data in different formats, to different browsers, and
to other Web servers. To transform XML data into different formats, XSLT is the new
W3C standard.

XSLT can transform an XML file into a format that is recognizable to a browser. One
such format is HTML. Another format is WML - the mark-up language used in many
handheld devices.

XSLT can also add elements, remove, rearrange and sort elements, test and make
decisions about which elements to display, and a lot more.
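A minimal sketch, assuming the hypothetical <course> document shown above: this style
sheet pulls the course title out of the XML and wraps it in HTML that any browser can
display.

    <?xml version="1.0"?>
    <xsl:stylesheet version="1.0"
        xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
      <xsl:template match="/">
        <html>
          <body>
            <!-- copy the course title into an HTML heading -->
            <h1><xsl:value-of select="course/title"/></h1>
          </body>
        </html>
      </xsl:template>
    </xsl:stylesheet>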

Client-Side Scripting

Client-side scripting is about "programming" the behavior of an Internet browser. To be
able to deliver more dynamic web site content, you should teach yourself JavaScript (a
short sketch follows the list):

 JavaScript gives HTML designers a programming tool - HTML authors are
normally not programmers, but JavaScript is a scripting language with a very
simple syntax! Almost anyone can put small "snippets" of code into their HTML
pages.
 JavaScript can put dynamic text into an HTML page - A JavaScript statement
like this: document.write("<h1>" + name + "</h1>") can write a variable text into
an HTML page.
 JavaScript can react to events - A JavaScript can be set to execute when
something happens, like when a page has finished loading or when a user clicks
on an HTML element.
 JavaScript can read and write HTML elements - A JavaScript can read and
change the content of an HTML element.

 JavaScript can be used to validate data - A JavaScript can be used to validate
form data before it is submitted to a server; this will save the server from extra
processing.
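A short sketch of this last point (the form and field names are hypothetical): the script
checks a field inside the browser, and the form is submitted only if the check succeeds.

    <script type="text/javascript">
    // Called when the form is submitted; returning false cancels the submission
    function validateForm(form) {
      if (form.email.value == "") {
        alert("Please enter your e-mail address.");
        return false;   // stop here: nothing is sent to the server
      }
      return true;      // the field looks fine, let the form go through
    }
    </script>

    <form action="register.html" onsubmit="return validateForm(this);">
      E-mail: <input type="text" name="email">
      <input type="submit" value="Register">
    </form>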

Server-Side Scripting

Server-side scripting is about "programming" an Internet server. To be able to deliver
more dynamic web site content, you should teach yourself server-side scripting (a short
sketch follows the list below). With server-side scripting, you can:

 Dynamically edit, change, or add any content of a Web page


 Respond to user queries or data submitted from HTML forms
 Access any data or databases and return the results to a browser
 Access any files or XML data and return the results to a browser
 Transform XML data to HTML data and return the results to a browser
 Customize a Web page to make it more useful for individual users
 Provide security and access control to different Web pages
 Tailor your output to different types of browsers
 Minimize the network traffic
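As one hedged illustration only (this text does not prescribe a particular server
technology): in classic Microsoft ASP, for example, a page can be scripted in JScript,
Microsoft's JavaScript dialect, and the server runs the script before sending plain HTML
to the browser.

    <%@ Language="JScript" %>
    <html>
    <body>
    <%
      // Runs on the server; the browser receives only the generated HTML.
      // The "user" query-string parameter is hypothetical, e.g. page.asp?user=Anu
      var name = Request.QueryString("user");
      Response.Write("<p>Welcome, " + name + "</p>");
    %>
    </body>
    </html>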

Managing Data with SQL

The Structured Query Language (SQL) is the common standard for accessing databases
such as SQL Server, Oracle, Sybase, and Access.

Knowledge of SQL is invaluable for anyone wanting to store or retrieve data from a
database.

Any webmaster should know that SQL is the true engine for interacting with databases on
the Web.
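As a hedged sketch only (the data source, table and column names are all invented), a
server-side JScript page typically hands an SQL statement to the database through ADO
and writes the results back to the browser:

    <%@ Language="JScript" %>
    <%
      // Open a connection to a database registered on the server as "CourseDB"
      var conn = Server.CreateObject("ADODB.Connection");
      conn.Open("DSN=CourseDB");

      // SQL does the real work: select matching rows from a table
      var rs = conn.Execute("SELECT title FROM courses WHERE semester = 1");
      while (!rs.EOF) {
        Response.Write("<p>" + rs.Fields("title").Value + "</p>");
        rs.MoveNext();
      }
      rs.Close();
      conn.Close();
    %>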

5.2.10. Plug-Ins and Delivery Vehicles

A plugin (or plug-in) is a computer program that interacts with a main (or host)
application (a web browser or an email program, for example) to provide a certain,

usually very specific, function on-demand.

Typical examples are plugins that

 read or edit specific types of files (for instance, decode multimedia files)
 encrypt or decrypt email (for instance, PGP)
 filter images in graphic programs in ways that the host application could not
normally do
 play and watch Flash presentations in a web browser


The host application provides services which the plugins can use, including a way for
plugins to register themselves with the host application and a protocol by which data is
exchanged with plugins. Plugins are dependent on these services provided by the main
application and do not usually work by themselves. Conversely, the main application is
independent of the plugins, making it possible for plugins to be added and updated
dynamically without changes to the main application.

Plugins are slightly different from extensions, which modify or add to existing
functionality. The main difference is that plugins generally rely on the main application's
user interface and have a well-defined boundary to their possible set of actions.
Extensions generally have fewer restrictions on their actions, and may provide their own
user interfaces. They sometimes are used to decrease the size of the main application and
offer optional functions. Mozilla Firefox uses a well-developed extension system to
reduce the feature creep that plagued the Mozilla Application Suite.

Perhaps the first software applications to include a plugin function were HyperCard and
QuarkXPress on the Macintosh, both released in 1987. In 1988, Silicon Beach Software
included plugin functionality in Digital Darkroom and SuperPaint, and the term plug-in
was coined by Ed Bomke. Currently, plugins are typically implemented as shared
libraries that must be installed in a place prescribed by the main application. HyperCard
supported a similar facility, but it was more common for the plugin code to be included in
the HyperCard documents (called stacks) themselves. This way, the HyperCard stack
became a self-contained application in its own right, which could be distributed as a
single entity that could be run by the user without the need for additional installation
steps.

Open application programming interfaces (APIs) provide a standard interface, allowing
third parties to create plugins that interact with the main application. A stable API allows
third-party plugins to function as the original version changes and to extend the lifecycle
of obsolete applications. The Adobe Photoshop and After Effects plugin APIs have
become a standard and been adopted to some extent by competing applications. Other
examples of such APIs include Audio Units and VST.

Delivery Vehicles
Delivery vehicles include face-to-face, online (synchronous, asynchronous) audio
conference, Web seminars, CD-ROM, audiotapes/videotapes, and printed
publications/self-study workbooks.
The Internet and intranets, which use the TCP protocol suite, are the most important
delivery vehicles for multimedia objects. TCP provides communication sessions between
applications on hosts, sending streams of bytes for which delivery is always guaranteed
by means of acknowledgments and retransmission. User Datagram Protocol (UDP) is a
"best-effort" delivery protocol (some messages may be lost) that sends individual
messages between hosts. Internet technology is used on single LANs and on connected
LANs within an organization, which are sometimes called intranets, and on "backbones"
that link different organizations into one single global network. Internet technology
allows LANs and backbones of totally different technologies to be joined together into a
single, seamless network.

Part of this is achieved through communications processors called routers. Routers can be
accessed from two or more networks, passing data back and forth as needed. The routers
communicate information on the current network topology among themselves in order to
build routing tables within each router. These tables are consulted each time a message
arrives, in order to send it to the next appropriate router, eventually resulting in delivery.

5.3) Revision points

 Multimedia and the Internet: many web pages now include sound and video.
With the increased popularity of broadband connections, many sites feature
music, movie, and television clips you can view or download. However, even
with the broadband connection, audio or video files that are more than a few
seconds long can be large and take a long time to download to your computer.
 Internet: inter-networking, i.e. the linking of many networks including private
networks, is what was named the Internet.
 Internetworking involves connecting two or more distinct computer networks or
network segments together to form an internetwork.
 Internet Connections: primary connections available today are Dial-up
connections, DSL, Cable internet, ISDN, Wireless internet
 Internet Services: Search engines, Home Page, E-learning, Access to publishing,
File Transfer Protocol, E-mail, Finding People on the Net, Chat, Video
Conferencing, Telnet, Newsgroup.
 World Wide Web: As its name implies the World Wide Web is a globally
connected network.
 Web Servers: A Web server is a computer on the Internet that stores Web pages.
 Web Browsers: Web browser is a software package used to access locations on
the World Wide Web, part of the global computer network called the Internet.
 Plug-Ins: A plugin (or plug-in) is a computer program that interacts with a main
(or host) application (a web browser or an email program, for example) to provide
a certain, usually very specific, function on-demand.

 Delivery Vehicles: delivery vehicles include face-to-face, online (synchronous,
asynchronous) audio conference, Web seminars, CD-ROM,
audiotapes/videotapes, and printed publications/self-study workbooks.

5.4) Intext Question


1. Explain the components of Intranet.
2. Explain in detail the elements of browser.
3. Briefly explain about managing websites
4. Name the different types of internet connections
5. What makes Netscape Navigator different from other browsers?


6. Explain in detail ISDN.


7. Explain briefly about the browsers
8. Describe in detail about Internet concepts.
9. Explain the salient features of Microsoft Internet Explorer
10. What are web-servers? What are they used for?
11. Describe the salient features of World Wide Web and web applications.
12. Describe the various features of Netscape Navigator and Communicator.

5.5) Summary

Today individuals, companies and institutions use the Internet in many ways as
mentioned below:
Business uses the Internet to provide access to complex databases, such as financial
databases.
Companies carry out electronic commerce (commerce on Internet) including advertising,
selling, buying, distributing products and providing after sales services.
Businesses and institutions use the Internet for voice and video conferencing and other
forms of communication that enable people to telecommute, or work from a distance.
The use of electronic mail (e-mail) over the Internet has greatly speeded up
communication between companies, among co-workers and between other individuals.
Media and entertainment companies use the Internet to broadcast audio and video,
including live radio and television programs. They also offer online chat groups, in
which people carry on discussions using written text, and online news and weather
programs.
Scientists and scholars use the Internet to communicate with colleagues, to perform
research, to distribute lecture notes and course materials to students, and to publish
papers and articles.
Individuals use the Internet for communication, entertainment, finding information, and
to buy and sell goods and services.

5.6) Terminal exercises


1. What is a web? How is it different from www?
2. Explain the various elements of the web.
3. What are online services?
4. Name the popular online services available.
5. Write short notes on high-speed connection.

5.7) Supplementary Materials


1. http://en.wikipedia.org/wiki/Multimedia


2. http://multimedia.expert-answers.net/multimedia-glossary/en/
3. http://nrg.cs.usm.my/~tcwan/Notes/MM-BldgBlocks-I.doc
4. www.edb.utexas.edu/multimedia/PDFfolder/WEBRES~1.PDF

5.8) Assignments
1. Explain the possible ways to connect to internet using the wizard.
2. What media are used in the Internet? How does the medium affect the performance
of the Internet?

5.9) Suggested Reading


1. Tay Vaughan, "Multimedia: Making It Work", Tata McGraw-Hill, Fourth Edition.

5.10) Learning Activities


1. What are the issues in high-speed connections? What are the solutions? Discuss.
Compare Netscape Navigator with MS Internet Explorer.
2. What is ADSL? How is it used to connect internet? What are the advantages and
disadvantages of using ADSL for connecting to internet?

5.11) Key words


ADSL: Asymmetric Digital Subscriber Line
CDSL: Consumer DSL
DSL: Digital Subscriber Line
DSLAM: The Digital Subscriber Line Access Multiplexer
HDSL: High bit-rate DSL
IDSL: ISDN DSL
ISDN: Integrated Services Digital Network
RADSL: Rate-Adaptive DSL
VDSL: Very high data rate DSL
