155e1150 Multimedia and Its Applications
ANNAMALAI UNIVERSITY
DIRECTORATE OF DISTANCE EDUCATION
Copyright Reserved
(For Private Circulation Only)
Multimedia and Its Applications
Table of Content
Unit-I Page no
1.0 Introduction 1
1.1 Objective 1
1.2 Content 1
1.2.1 Usage of Multimedia 1
1.2.2 Introduction To Making Multimedia 6
1.2.3 Multimedia Skills And Training 9
1.2.4. Multimedia For The Web 11
1.2.5. The Sum Of The Parts 11
1.3 Revision points 14
1.4 Intext Question 14
1.5 Summary 14
1.6 Terminal exercises 14
1.7 Supplementary Materials 16
1.8 Assignments 17
1.9 Suggested Reading 17
1.10 Learning Activities 17
1.11 Key words 17
Unit-II
2.0 Introduction 18
2.1 Objective 18
2.2 Content 18
2.2.1 Macintosh And Windows Production Platforms Pc Platform 18
2.2.2 Hardware Peripherals 19
2.2.3 Connections 20
2.2.4 Memory And Storage Devices 23
2.2.5. CD-Technologies 30
2.2.6. DVD 32
2.2.5 Input Devices 41
2.2.6 Touch Screens 44
2.2.7 Magnetic Card Encoders And Readers 46
2.2.8 Flat Bed Scanners 47
2.2.9 Voice Recognition Systems 52
2.2.10 Digital Camera 53
2.2.11 Output Hardware 54
2.2.12 Projectors, Printers 55
2.2.13 Communication Devices 57
2.2.14 Modems 58
2.2.15 Cable Modems 58
2.3 Revision points 59
2.4 Intext Question 59
2.5 Summary 60
2.6 Terminal exercises 60
2.7 Supplementary Materials 60
2.8 Assignments 60
2.9 Suggested Reading 61
2.10 Learning Activities 61
2.11 Key words 61
Unit-III
3.0 Introduction 63
3.1 Objective 63
3.2 Content 63
3.2.1 Text Editing 63
3.2.2 Word Processing Tools 66
3.2.3 Painting And Drawing Tools 67
3.2.4 3d Modeling And Animation Tools 68
3.2.5 Animation, Video And Digital Movie Tools 70
3.2.6 Making Instant Multimedia 75
3.2.7 Spreadsheets 75
3.2.8 Presentations Tools 79
3.2.9 Multimedia Authoring Tools 81
3.2.10 Types Of Authoring Tools 83
3.2.11 Time Based Authoring Tools 85
3.2.12 Object Oriented Authoring Tools 85
3.3 Revision points 86
3.4 Intext Question 86
3.5 Summary 87
3.6 Terminal exercises 87
3.7 Supplementary Materials 87
3.8 Assignments 88
3.9 Suggested Reading 88
3.10 Learning Activities 88
3.11 Key words 89
Unit-IV
4.0 Introduction 90
4.1 Objective 90
4.2 Content 90
4.2.1 Text 90
4.2.2 Sound 107
4.2.3 Images 114
4.2.4 Color 119
4.2.5 Animation 133
4.2.6 Video 141
4.3 Revision points 154
4.4 Intext Question 154
4.5 Summary 155
4.6 Terminal Exercises 156
4.7 Supplementary Materials 156
4.8 Assignments 156
4.9 Suggested Reading 156
4.10 Learning Activities 156
4.11 Key Words 157
Unit-V
UNIT-I
1.0) Introduction
Multimedia is a combination of text, graphic art, sound, animation, and video elements. When you allow an end user (the viewer of a multimedia project) to control what and when the elements are delivered, it is interactive multimedia. When you provide a structure of linked elements through which the user can navigate, interactive multimedia becomes hypermedia.
The IBM Dictionary of Computing describes multimedia as "comprehensive material, presented in a combination of text, graphics, video, animation and sound. Any system that is capable of presenting multimedia in its entirety is called a multimedia system". A multimedia application accepts input from the user by means of a keyboard, voice or pointing device. Multimedia applications involve using multimedia technology for business, education and entertainment.
Multimedia can be divided into three broad categories that are based on the applications
they cover. These are
Fun Material
Powerful Material
Creative Material.
The first category covers games, animation sequences, realistic sounds and anything else you can think of. The Powerful Material category comprises software packages that could not be run on the PC earlier. These include encyclopedias on CD-ROMs, works of literature, magazines with graphics and sound, and reference works. The Creative Material category covers software that enables users to create their own multimedia programs, presentations and tools.
1.1) Objective
To study the various uses of multimedia in different applications.
To understand the roles and responsibilities of Multimedia project team members.
1.2) Content
1.2.1 Usage of Multimedia
Applications of Multimedia:
Many multimedia applications are driving the
development of new technology, and many more
applications are becoming viable because of the
technological advances. The technology push and the
application pull form a self-supporting cycle that is
hastening the pace of development. Many applications
that did not use multimedia content in their earlier versions presently include multimedia,
because multimedia makes a product more attractive and marketable. In this section,
various multimedia applications are described, with a view to understanding the demands
they put on networking systems.
Education:
Educational programs are designed as educational games that appeal to children, beginning from the elementary classes. These programs present letter recognition, elementary mathematics, spelling, science, history, and geography. Even students of the higher classes can use the interactive programs on physics, chemistry, etc. These programs can be procured by schools.
Entertainment
Multimedia has become the basic mode of development of entertainment programs. TV programs, video games, etc. mostly depend upon the design facilities of multimedia.
Most of the above programs are based on multimedia. You will notice that the careful use of animation, graphics, sound and video has made each one very interesting.
Games
Most of you are familiar with computer/video games. Why don't you try to become a multimedia designer and create your own computer games?
Sports
Multimedia technology is being exploited extensively in the field of sports telecasts and training. Animation, digitized video, graphics, etc. have enhanced the power of sports presentations.
For cricket matches shown on TV, the replays, graphic overlays, batting averages, slow motion, etc. are produced with multimedia software.
Cyberart
Cyberart has become a
powerful medium and a specialized field to
express ideas. The creativity of an individual may
be expressed on a three dimensional canvas. In the
hands of an imaginative artist, this virtual canvas
can have the addition of video and sound effects to
an extent that would be the envy of an artist with
an easel and paintbrush.
Advertising
Advertisements for TV and the film industry are
developed with multimedia. All kinds of special
effects like animation, moving through a building,
sound and video manipulation, special objects, etc.
are created through multimedia.
Training
Multimedia has become the most effective means of
imparting training. A large number of institutions
have embraced this method to impart knowledge in
preference to the older means. It must be remembered
that the common features are text, graphics, animation, sound and video integration.
Interactive Multimedia
Interaction between the user and the computer has evolved over a period of time. From the keyboard and mouse to the joystick and trackball was the initial phase. Gradually it changed to the touch-sensitive screen, and presently computers have begun to recognize voice commands. Artificial Intelligence is no longer a distant dream.
Multimedia and Internet
Multimedia technology is used extensively on the Internet. Almost every web page has text, graphics, animation, etc.
Health Care – Telemedicine
Multimedia Information Networking can be used for enhancing service quality and reducing costs in the health care area. Delivery of health care services via a network is also called telemedicine. Computer technology has been applied to health care functions for more than a decade now, but most such systems have remained disjointed islands of technology. The aim of telemedicine is to use Multimedia Information Networking technology to create seamless information transmission systems for use in the health care industry.
Example
Administration
Registration: While registering incoming patients, photos can be added to improve authentication.
Authorization: The hospital can check all relevant data to authorize the care; e.g., checking Medicare and insurance data by saving forms signed by the patient or representative.
Claims Processing: The organization processing the claims can access multimedia information, such as patient photo, signature, X-rays, etc., before processing the claims.
Diagnostics
Tests: Digital storage of test results including X-rays, CAT scans, etc. Consulting doctor(s) can view the test results on high-resolution display screens.
Consultation: The local doctor can communicate with remotely located specialists over a collaborative videoconference and discuss the patient's condition and test results.
Patient Care
Monitoring: Monitoring of patients in the hospital or their homes can be done via a multimedia network. Expert systems connected to the monitoring systems can be used to warn the caregiver of any abnormal conditions, and the patients could be observed and talked to over a videoconference link.
Emergency Care
Record Access: Patient records can be accessed over a multimedia network in an emergency situation, even on the roadside, by using wireless communication.
Career Opportunities:
Now you must be wondering about the career opportunities in fields related to multimedia technology. The following lists give a few indications of the applicability of this specialized field:
Entertainment:
Special effects in TV serials and films.
Interactive computer games.
Internet games.
Animation and virtual reality simulation.
Advertising:
Marketing through Web pages.
TV commercials.
Multimedia Advertising.
Education:
Education related software.
Textbooks based on multimedia.
Classroom instructional materials.
Internet distance learning programs.
Research facilities through libraries.
Science and Research
All fields of scientific research.
Astronomy.
Space Technology and Aviation.
Medical.
Interactive Publishing:
Multimedia books.
Internet Web page design.
CD-ROM-based electronic magazines.
Police Department:
Image composing of suspects. Trials can be reenacted.
Investment:
Investment analysis.
Statistical modeling.
Market analysis.
There are opportunities for a multimedia specialist also in the fields of:
Architectural designing.
Interior designing.
Landscape designing.
Needs and benefits:
Multimedia can be used to perform a wide range of sophisticated functions. With
multimedia we can:
Browse through an encyclopedia and see animations on subjects ranging from the nervous system to electrons in a fission reaction.
Build business presentations using text, graphics, sound, video and animation.
Create interactive computer presentations.
Explore the anatomy of the human body for the anatomy paper.
Create 3-D effects in various ways.
Explore the map of any country that you may like to visit.
Add sound to files or tasks.
Create animated birthday/greeting cards for friends who have computers.
Watch a man walk on the surface of the moon.
Use multimedia for selling a product.
Learn a language.
Capture an image from video and use it as a bitmap on the Windows desktop.
The functions and possibilities are endless. Till now, people used to dream about an
electronic paperless office. With multimedia, this dream can be realized. It is indeed a
futuristic concept that is taking shape now!
1.2.2 Introduction To Making Multimedia
Multimedia is a very powerful tool for influencing people. With this objective in mind, carefully observe the following principles:
Multimedia means getting your message across in the shortest possible time with maximum effect. To do this, match your presentation to the target audience. In other words, if your audience is illiterate, use more graphics and sound in place of written text. Evidently, illiterate persons can appreciate a picture but will be unable to read the most beautifully drafted text.
Learn to convey your messages in the fewest possible words. Do also remember that, whereas excess detail is boring, brevity by itself is no virtue. In other words, learn to make a balanced presentation. It should neither be so short as to lose meaning nor so detailed that people start yawning or go to sleep.
Be careful in deciding which medium you will use more than the others. In this regard, pay attention to the age, sex, education, cultural background, economic position and other details of the audience. Taking care of these characteristics of the audience will enable you to choose the media best suited to influence the receivers of your message.
Multimedia presentations can, at times, be quite expensive. Hence, be innovative and learn to make use of locally available skills, idioms, forms of expression, etc. so that people can easily understand your message. For example, in Tamil Nadu, people enjoy the Kargham dancers. In Haryana, people like Saangs (a folk music cum drama presentation).
Clearly, combining locally popular and accepted forms of expressions and your
knowledge of multimedia is likely to make your presentations more effective and
successful.
Ensure that the message that you wish to convey and the words or graphics used by
you do not hurt the popular sentiments. In other words, do not use words or
expressions that may be considered offensive, dirty or socially unacceptable in any
way. The same criterion applies to the use of graphics, wherein you must learn to
respect the local idiom. Forgetting this principle may not only make your presentation unacceptable but also create unpleasant consequences for you and all others associated with you in making the presentation.
Avoid repetition. The availability of a variety of media gives you enough
opportunities to experiment and innovate. For example, it is now being increasingly
realized that street plays can be used as a very effective means of communication for
spreading awareness. Now experiment, how you can combine a street play with a
slide show, a film show, or attractively prepared posters, hand bills, etc., with a few
of your teammates dispersed in the crowd to elicit the people's response to your
presentation.
Multimedia Production
The production of interactive multimedia applications is a complex process involving multiple steps. This process can be divided into the following phases:
Conceptualization
Development
Pre production
Production
Post production
Documentation
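These phases are strictly ordered: each builds on the output of the previous one. As a small illustrative sketch (hypothetical code, not part of the production process itself), the sequence can be modeled as a checklist that always yields the next phase to start:

```python
# The six phases of multimedia production, in order.
PHASES = [
    "Conceptualization",
    "Development",
    "Pre production",
    "Production",
    "Post production",
    "Documentation",
]

def next_phase(completed):
    """Return the first phase not yet completed, or None when all are done."""
    for phase in PHASES:
        if phase not in completed:
            return phase
    return None

print(next_phase([]))                                    # Conceptualization
print(next_phase(["Conceptualization", "Development"]))  # Pre production
```

The point of the ordering is that, for example, production work cannot sensibly begin until preproduction planning is complete.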
Conceptualization:
The process of making multimedia starts with an "idea", better described as "the vision", which is the conceptual starting point. The starting point is, ironically, the visualization of the ending point: the multimedia experience that the targeted end-user will have. Conceptualisation involves identifying a relevant theme for the multimedia title. We prefer choosing themes that are socially important and exciting to work on. Other criteria, like the availability of content, how amenable the content is to multimedia treatment, and issues like copyright, are also to be considered.
Development:
Defining project goals and objectives:
After a theme has been finalized for a multimedia project, a matrix of specific goals, objectives and activities must be laid down.
Goals: In a multimedia project, goals are general statements of anticipated project outcomes, usually more global in scope.
Objectives: Specific statements of anticipated project outcomes.
Activities: Actions performed in order to implement an objective. Specific people are responsible for their execution, a cost is related to their implementation, and there is a time frame binding their development.
Defining the Target Audience:
A very important element that needs to be defined at this stage is the potential target audience of the proposed title, since this will determine how the content needs to be presented.
Preproduction:
It is the process of intelligently mapping out a cohesive strategy for the entire multimedia
project, including contents, technical execution and marketing. Based on the goals and
objectives, the three pillars of multimedia viz. hardware, software and user participation
are defined. At this stage the multimedia producer begins to assemble the resources and
talent required for creating the multimedia application. The Production Manager
undertakes the following activities.
Development of the budget control system
Hiring of all specialists involved in the multimedia applications process
Contracting video and audio production crews and recording studios
Equipment rental, leasing and purchasing
Software acquisition and installation
Planning the research work of the content specialists
Development of the multimedia application outline, logic flow, scripts and video
and audio files production scripts and schedules
Coordination of legal aspects of production.
Production:
Once all the pre production activities have been completed, the multimedia application enters the production phase. Activities in this phase include:
Content Research
Interface Design
Graphics Development
Selection of musical background and sound recording
Development of computer animation
Production of digital video
Authoring
Post production:
In this phase, the multimedia application enters the alpha/beta testing process. Once the application is tested and revised, it enters the packaging stage. It could be burned onto a CD-ROM or published on the Internet as a website.
Developing documentation:
User documentation is a very important feature of high-end multimedia titles. This includes installation instructions, system requirements, acknowledgements, copyrights, technical support and other information important to the user.
The main responsibility for content development lies with the Content Specialist, scriptwriter or Computer Graphics Artist. The Content Specialist undertakes the following tasks:
Content research
Identifying document sources
Identification of the building blocks like colours and graphics representative of
the theme, time or period to be presented in the application
Identifying individuals to be interviewed
Locations to be videotaped
The responsibilities of the scriptwriters are the following:
Content evaluation
Adaptation of the content to the goals and objectives of the application
Development of the application script and storyboard based on the content
The Computer Graphics Artist is responsible for the development of the following:
Developing line art necessary for the application
Scanning and editing of photos, backgrounds, and other graphic elements
Chart development
Map preparation
Text manipulation
3-D graphics and walkthroughs
Computer animation
If content is not readily available, it needs to be developed. The creation of a story, graphics, or the composition of music are examples of content development. Sometimes content needs to be adapted to meet the needs of the application. This includes editing and manipulation of existing graphics, photos, video, sound or text.
Checklist for Multimedia Production
There may be many tasks in your multimedia project. Here is a brief check-list of action
items for which you should plan ahead as you think through your project:
Design Instructional Framework
Hold Creative Idea Session(s)
Determine Delivery Platform
Examine Available Content
Draw Navigation Map
Create Storyboards
Design Interface
Design Information Containers
Research/Gather Content
Assemble Team
Build Prototype
Multimedia is now available on standard computer platforms. It is the best way to gain the attention of users and is widely used in many fields, as follows:
Business: In any business enterprise, multimedia exists in the form of advertisements, presentations, video conferencing, voice mail, etc.
Schools: Multimedia tools for learning are widely used these days. People of all ages learn easily and quickly when information is presented as a visual treat.
Home: PCs equipped with CD-ROMs and game machines hooked up to TV screens have brought home entertainment to new levels. These multimedia titles would probably be available on the multimedia highway soon.
Public places: Interactive maps at public places like libraries, museums and airports, and stand-alone terminals at supermarkets, would do much good to the users, helping them gain information quickly and easily.
1.5) Summary
Hardware, software, creativity, talent and technical skills are required for making good multimedia. Following the time schedule and budget is essential, as time and money are major requirements.
Most multimedia projects are the result of teamwork: many graphic artists, sound producers and programmers are involved.
D) Evaluation
II. Project Phases
Many large-scale projects follow the ________________________________.
III. The Project Team
The makeup of a project team depends on the __________________ and
________________ its __________________ and __________________.
On small teams, members often fill more than one role.
IV. Project Team Roles
Project teams may include:
A. Client Representative
B. Project Manager
C. Producer
1.7) Supplementary Materials
1. http://en.wikipedia.org/wiki/Multimedia
2. http://multimedia.expert-answers.net/multimedia-glossary/en/
3. http://nrg.cs.usm.my/~tcwan/Notes/MM-BldgBlocks-I.doc
4. www.edb.utexas.edu/multimedia/PDFfolder/WEBRES~1.PDF
1.8) Assignments
1. Explain the need for planning a multimedia application. Explain the need for a logic flow chart for the development of an interactive multimedia application, with an example.
2. Discuss how content-based retrieval can be used for the three media types: image,
sound and video.
3. What problems need to be overcome in providing an effective and efficient
service to the user?
UNIT-II
2.0) Introduction
Processing, input, output, and storage technologies can be used to create interactive multimedia applications that integrate sound, full-motion video, or animation with graphics and text. PCs today come with built-in multimedia capabilities, including a high-resolution color monitor; a CD-ROM or DVD drive to store video, audio, and graphic data; and stereo speakers for amplifying audio output.
2.1) Objective
To study the various Hardware devices of multimedia systems
To understand the working principle of some I/O devices and storage devices.
2.2) Content
2.2.1 Macintosh And Windows Production Platforms Pc Platform
Macintosh versus PC:
The debate on selecting the platform for a multimedia project, between Macintosh and PC, has been going on for a long time. Most developers have their minds set on believing that Macintosh provides an easier and smoother platform for multimedia development. It is true that, with the advent of hardware and authoring software tools for Windows, multimedia development on both Macintosh and Windows platforms is equally good. Ultimately, personal preference, budget constraints and requirements decide the platform for development.
Manufacturers using the Windows Operating System can assemble Windows computers. Most Windows computers today come equipped with audio software, a CD-ROM drive, large amounts of RAM, good processor speed and a high-resolution monitor. These features all make the multimedia experience in Windows smooth and good.
FireWire
For live video, it is necessary to have a FireWire port to bring video into the system. Macs come with built-in FireWire; so do most high-end workstation PCs.
TV tuner card
As the name suggests, a tuner card is used to display TV on a PC. Frankly, these are more for entertainment than for work. Video captured from a TV card will be of inferior quality and tends to be grainy.
Sound card
Multimedia is both video and audio, so make sure that a good sound card is installed if you plan to do a lot of music-based work; preferably a 5.1-output sound card with Dolby and DTS decoders in hardware. Check that it has all the outputs that are needed, like optical and digital. The same applies to the inputs available (auxiliary, line or microphone).
Speakers
Apart from the number of channels, other important parameters for a speaker system are its frequency response and power output. The better the frequency range covered, the greater the clarity of sound across the spectrum.
CD/DVD/CDR drives
Depending upon the application's needs, these should be incorporated. A CD writer will obviously be a requirement. Combo drives are also available, which can read and write CDs and read DVDs.
Data backup
If the volume of data is large, there is a need for backup. Apart from a CD/DVD drive that can archive one project at a time, it is necessary to have a complete backup. Hence a good tape backup system of adequate capacity should be incorporated. If the system is one of many workstations at the same place, then the tape drive can be shared.
2.2.3 Connections
Many multimedia applications are developed in workgroups comprising instructional
designers, writers, graphic artists, programmers, and musicians located in the same office
space or building. The workgroup members’ computers typically are connected on a local
area network (LAN). The client’s computers, however, may be thousands of miles
distant, requiring other methods for good communication.
Communication among workgroup members and with the client is essential to the
effective and accurate completion of the project. Our Postal Service mail delivery is too
slow to keep pace with most projects; courier services are better. And when you need it
immediately, an Internet connection is required. If your client and you are both connected
to the Internet, a combination of communication by e-mail and by FTP (File Transfer
Protocol) may be the most cost-effective and efficient solution for both creative
development and project management.
In the workplace, use quality equipment and software for your communications setup. The cost, in both time and money, of stable and fast networking will be returned to you.
Networking Macintosh and Windows computers
It is desirable to network Macintosh and Windows computers so that they can share resources (like printers). Networks are classified based on the geographical distance between the networked devices.
Local Area Networks (LANs) are those in which the distance between the workstations is small; for example, computers connected within a building. Resources like printers, file servers, scanners, etc. are shared directly between these network devices. The most common protocols used for LAN connections are Ethernet and Token Ring. "CAT-5" ("data-grade level 5") twisted-pair telephone wire is used to set up these connections.
Wide Area Networks (WANs) are network systems separated by great distances. Examples of WANs are connections between large corporate enterprises and institutions spanning a large geographic area. These are more expensive to install and maintain compared to LANs. WANs can operate using dedicated phone lines, wireless networks, and dial-up connections through an Internet Service Provider (ISP). Dial-up services use a telephone line to connect to the ISP's server, so we are charged for the telephone line for the duration we are connected.
While working across Mac and Windows platforms for multimedia development, we
need to establish an Ethernet connection to enable the PCs and Macs to be able to talk to
each other and share resources.
Macs have built-in Ethernet cards, and we can fit inexpensive Ethernet cards in PCs. Windows PCs use the Microsoft TCP/IP client as the client/server software, and we can add software to Macs to connect them to the network of PCs. Another option is to add software to a Windows PC to connect it to a network of Macs, which use AppleTalk as their client/server software. Both these methods use Ethernet to connect.
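Because both platforms speak TCP/IP, a program on one machine can exchange data with a program on another regardless of the operating system. The minimal sketch below (an illustration, not from the original text; the port choice and message are arbitrary) shows a TCP exchange in Python, here over the loopback interface:

```python
import socket
import threading

# A tiny TCP exchange over loopback. Over Ethernet, the same code works
# between a Mac and a Windows PC: TCP/IP is platform-neutral.
srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
srv.bind(("127.0.0.1", 0))   # port 0: let the OS pick a free port
srv.listen(1)
port = srv.getsockname()[1]

def serve_once():
    # Accept one connection and echo the greeting back with an "ack:" prefix.
    conn, _ = srv.accept()
    with conn:
        data = conn.recv(1024)
        conn.sendall(b"ack: " + data)

t = threading.Thread(target=serve_once)
t.start()

with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as cli:
    cli.connect(("127.0.0.1", port))
    cli.sendall(b"hello from the other platform")
    reply = cli.recv(1024)

t.join()
srv.close()
print(reply.decode())  # ack: hello from the other platform
```

Replacing the loopback address with the other machine's IP address is all that cross-platform communication requires at this level.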
Connection: The equipment needed for a multimedia project depends on the design and contents of the project. A fast computer with ample RAM and disk storage is the basic need. To avoid using tools to develop multimedia content, we can compile pre-existing sound, music, art, clip animation, etc. and reuse them in our project. Most multimedia development requires special equipment for digitizing sound or capturing stills from videotapes.
Communication channels
The communication channels of a network are its connecting cables: the cables that connect two or more workstations are the communication channels.
In LANs many different types of media are in use. Copper conductors in the form of
twisted pair or coaxial are by far the most common. More recently, very serious
consideration has been given to the use of optical fiber technology in LANs. Other media
e.g., microwave transmission, infrared, telephone line etc. are also used. The basic types
of cables are:
Twisted pair cable
Coaxial cable
Optical fibers
Twisted pair cable:
The most common form of wiring in data
communication application is the twisted pair cable.
As a voice grade medium (VGM), it is the basis for
most internal office telephone wiring. It consists of
two identical wires wrapped together in a double
helix.
Problems can occur due to differences in the electrical
characteristics between the pair (e.g., length,
resistance, and capacitance). For this reason, LAN
applications will tend to use a higher-quality cable
known as data grade medium (DGM).
The main advantages of twisted pair cable are its simplicity and ease of installation. It is
physically flexible, has a low weight and can be easily connected.
The data transmission characteristics, however, are not so good. Because of high attenuation, twisted pair is incapable of carrying a signal over long distances without the use of repeaters. Its low bandwidth makes it unsuitable for broadband applications.
Coaxial cable:
This type of cable consists of a solid wire core
surrounded by one or more foil or wire shields, each
separated by some kind of plastic insulator. The inner
core carries the signal, and the shield provides the
ground. While it is less popular than twisted pair, it is
widely used for television signals. In the form of CATV cable, it provides a cheap means of transporting multi-channel television signals around metropolitan areas. It is also used by large corporations in building security systems.
The data transmission characteristics of coaxial cable
are considerably better than those of twisted pair. This opens the possibility of using it as
the basis for a shared cable network, with part of the bandwidth being used for data
traffic.
Optical Fibers:
Optical fibers consist of thin strands of glass or glass like
material, which are so constructed that they carry light from a
source at one end of the fiber to a detector at the other end. The
light sources used are either light emitting diodes or laser diodes.
The data to be transmitted is modulated onto the light beam using
frequency modulation techniques. The signals can then be picked
up at the receiving end and demodulated. The bandwidth of the
medium is potentially very high. For LEDs, this ranges between
20 and 150 Mbps, and higher rates are possible using LDs (laser diodes).
The major problems with optical fibers are associated with installation. They are quite
fragile and may need special care to make them sufficiently robust for an office
environment. Connecting either two fibers together or a light source to a fiber is a
difficult process.
One of the major advantages of optical fibers over other media is their complete
immunity to noise, because the information is traveling on a modulated light beam.
A side effect of this noise immunity is that optical fibers are virtually impossible to tap.
In order to intercept the signal, the fiber must be cut and a detector inserted.
Despite its shortcomings, optical fiber is an important technology and will become a very
attractive transmission medium indeed.
Storage Technology:
Rapid advances in computing, communication, and compression technologies, coupled
with the dramatic growth of the Internet, have led to the emergence of a wide variety of
multimedia applications – such as distance learning, interactive multiplayer games, online
virtual worlds, and scientific visualization of multi-resolution imagery. These
applications differ from conventional applications in at least two ways. First, they involve
storage, transmission, and processing of heterogeneous data types – such as text, image,
audio, and video – that differ significantly in their characteristics (e.g., size, data rate,
real-time requirements, etc.). Second, unlike conventional best-effort applications, these
applications impose diverse performance requirements – for instance, with respect to
timeliness on the networks and operating systems. Because of these differences,
techniques employed by conventional file systems for managing textual files do not
suffice for managing multimedia objects.
Selecting Multimedia Storage Device
You need large capacities, fast access and high data-transfer
rates for storing video. Let us review storage
devices typically used in such environments and the
reasons for choosing them.
Tapes
Tapes have always been the choice for capturing and
storing video. They are compatible with digicams and camcorders as well. But, they
cannot be used for storing while processing (editing), as they are sequential access
devices. Digital videotapes or DV tapes are becoming popular, as they’re ideal for high
quality digital video recordings.
SCSI/IDE Drives
Hard disks are used in editing systems. Traditionally, these machines used SCSI drives
though IDE or Ultra ATA drives are being used these days. On professional systems,
there are AV (Audio-Visual) drives that avoid thermal recalibration between reads/writes
and are suitable for desktop multimedia. (Thermal recalibration is a process by which
older hard drives operate smoothly despite heating.)
Firewire Hard Disks:
Firewire hard disks find application in the post-production
market. They work on Firewire technology that provides high-
speed serial input/output connection when connecting digital
devices like camcorders to desktop or portable computers.
Most of the DV camcorders available today have Firewire
ports. These disks are hot-plug and daisy-chain capable, which
means you can add many of them as external drives without shutting
down or restarting.
systems rotate and position the disk media and the pickup head, thus controlling the
position of the head with respect to data tracks on the disk. Additional peripheral
electronics are used for control and for data acquisition, encoding, and decoding. As for
all data storage systems, optical disk systems are characterized by their storage capacity,
data transfer rate, access time, and cost.
Storage capacity:
The storage capacity of an optical storage system is a direct function of spot size
(minimum dimensions of a stored bit) and the geometrical dimensions of the media. A
good metric to measure the efficiency in using the storage area is the areal density
(MB/sq. in.). Areal density is governed by the resolution of the media and by the
numerical aperture of the optics and the wavelength of the laser in the optical head used
for recording and readout. Areal density can be limited by how well the head can be
positioned over the tracks; this is measured by the track density (tracks/in.). In addition,
areal density can be limited by how closely the optical transitions can be spaced; this is
measured by the linear density (bits/in.).
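As a back-of-the-envelope sketch of the metric just described, areal density is simply the product of track density and linear density, converted from bits to megabytes. The figures below are assumptions for illustration, not values from any particular product.

```python
# Areal density (MB/sq. in.) = track density (tracks/in.) x linear density (bits/in.),
# converted from bits to megabytes. Input values are illustrative assumptions.
def areal_density_mb_per_sq_in(tracks_per_in: float, bits_per_in: float) -> float:
    bits_per_sq_in = tracks_per_in * bits_per_in
    return bits_per_sq_in / (8 * 1_000_000)  # 8 bits per byte, 10^6 bytes per MB

# Assumed example: 20,000 tracks/in. and 160,000 bits/in.
print(areal_density_mb_per_sq_in(20_000, 160_000))  # 400.0 MB/sq. in.
```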
Data transfer rate
The data transfer rate of an optical storage system is a critical parameter in applications
where long data streams must be stored or retrieved, such as for image storage or backup.
Data transfer rate is a combination of the linear density and the rotational speed of the
drive. It is mostly governed by the optical power available, the speed of the pickup head
servo controllers, and the tolerance of the media to high centrifugal forces.
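The dependence just described, transfer rate as the product of linear density and the linear speed at the read radius, can be sketched numerically. Every figure below is an illustrative assumption, not a value from any specific drive.

```python
import math

# Assumed figures for illustration:
bits_per_in = 40_000   # linear density along the track
radius_in = 1.0        # read radius (in.)
revs_per_s = 10        # rotational speed

# Linear velocity at this radius, then bits/s converted to bytes/s.
linear_velocity = 2 * math.pi * radius_in * revs_per_s       # in./s
rate_bytes_per_s = bits_per_in * linear_velocity / 8

print(round(rate_bytes_per_s))  # ~314159 bytes/s at these assumed figures
```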
Access time
The access time of an optical storage system is a critical parameter in computing
applications such as transaction processing; it represents how fast a data location can be
accessed on the disk. It is mostly governed by the latency of the head movements and is
proportional to the weight of the pickup head and the rotation speed of the disk.
Cost:
The cost of an optical storage system is a parameter that can be subdivided into the drive
cost and the media cost. Cost strongly depends on the number of units produced, the
automation techniques used during assembly, and component yields. Optical storage
R&D typically concentrates on the following efforts: reducing spot size using lower-
wavelength light sources; reducing the weight of optical pickup heads using holographic
components; increasing rotation speeds using larger optical power lasers; improving the
efficiency of error correction codes; and increasing the speed of the servo systems.
Equally active R&D efforts, especially in Japan, are focused on developing new
manufacturing techniques to minimize component and assembly costs.
Optical Disk Formats:
Depending on the access times required by given applications, optical disk products come
in two different formats: the compact disk (CD) format used for entertainment systems
(audio, photo, or digital video disk applications), and the standard or banded format used
for information processing or computing applications.
CD format:
In the optical disk CD format, information is recorded in a spiral while the disk turns at a
constant linear velocity. The standard disk diameter used is 12 cm, which offers a typical
capacity of 650 MB with a seek time (access time) on the order of 300 ms and a data rate of
about 150 kB/s. A MiniDisc format is currently being adopted in some Sony products that
use 6 cm disks providing 140 MB capacity. Various types of products belong to the CD
family, including CD recordable (CD-R) products, which are the write-once, read-many
(WORM) version of standard CDs; the CD-E erasable products, which are to appear
shortly in the market; the Photo-CD systems, which were first marketed by Kodak for
storing images; and video CDs, which may become available over the next two years.
Several standards for videodisk systems are presently being put forward, including the
double-sided video disk (DVD) standard proposed by Toshiba and the double-layer
format proposed by Sony. Major improvements in CD technology are expected to take
place within the next few years.
Standard format:
The access time achieved by the CD format is too slow for use in computing applications.
To shorten access times, a standard format is commonly used in magnetic as well as
optical disk systems, where the disk turns at a constant angular velocity and data is
recorded on concentric tracks. Whether the inner or outer tracks are read, the disk's speed
of rotation remains constant, allowing for faster access times; however, this format
wastes valuable disk space on the outer tracks, because it requires a constant number of
bits per track, limited by the number of bits that can be supported by the innermost track.
To eliminate this waste, a "banded" format is now used where tracks of similar length are
grouped in bands, allowing the outer bands to support a much larger number of bits than
the inner bands. This, however, requires different channel codes for the different bands in
order to achieve similar bit error rates over the bands.
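The space saving of the banded layout over a pure constant-angular-velocity layout can be illustrated with a small calculation; the track radii and the bits-per-inch limit below are assumed values, not figures from any real disk.

```python
# Sketch of the capacity gain of a banded (zoned) layout over pure constant
# angular velocity (CAV). Track radii and the linear-density limit are assumed.
import math

bpi = 100_000                              # assumed max bits per inch of track
radii = [r / 10 for r in range(10, 21)]    # 11 tracks, radii 1.0 to 2.0 in.

# CAV: every track is limited to what the innermost (shortest) track can hold.
cav_total = 2 * math.pi * min(radii) * bpi * len(radii)

# Banded/zoned: each track holds what its own circumference allows.
banded_total = sum(2 * math.pi * r * bpi for r in radii)

print(round(banded_total / cav_total, 2))  # 1.5: outer tracks waste no space
```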
In the standard format, 12 in., 5.25 in., and 3.5 in. disk diameters are commercially
available, and 14 in. and 2.5 in. disk diameters are being investigated. The 12 in. products
(mostly WORM) provide high-capacity solutions on the order of 7 GB on a single platter
for storage of large databases, achieving areal densities exceeding 500 MB/sq. in. The
5.25 in. disks are most commonly used today and provide data capacities of 2 GB per
disk, seek times on the order of 35 to 40 ms, and data rates on the order of 2 to 5 MB/s.
They achieve an areal density of 380 MB/sq. in., and are cost-competitive. The 3.5 in.
disks presently provide one-eighth of the capacity of 5.25 in. disks, reaching only 128
MB.
Optical Storage in Hierarchical Memory Systems
During the past fifty years, many memory technologies have been developed. Despite
intense competition, several widely different approaches are currently in use: magnetic
and optical tape; hard disks, floppy disks, and disk stacks (Bell 1983); and both electronic
static random-access memory (SRAM) (Maes et al. 1989) and dynamic random-access
memory (DRAM) (Singer 1993). There are also several newer technologies now
available, such as the solid-state disk (Sugiura, Morita, and Nagasawa 1991), the Flash
Erasable Electrically Programmable Read-Only Memory (EEPROM) (Kuki 1992), and
the Redundant Array of Inexpensive Disks (RAID) (Velvet 1993) systems.
This proliferation of technologies exists because each technology has different strengths
and weaknesses in terms of its capacity, access time, data transfer rate, storage
persistence time, and cost per megabyte. No single technology can achieve maximum
performance in all these characteristics at once; modern computing systems use a
hierarchy of memories rather than a single type. The memory hierarchy approach utilizes
the strong points of each technology to create an effective memory system that
maximizes overall computer performance given a particular cost.
Hierarchy levels
In standard sequential computer architecture there are three major levels of the storage
hierarchy: primary, secondary, and tertiary.
Primary memories (cache and main): Primary memories are currently implemented in
silicon and can be classified as cache memory (as local storage within the processing
chip) and main memory (as RAM and DRAM chips located on the same board). The
access times of primary memories are comparable to the microprocessor clock cycle, but
their data capacity is limited (10 to 100 MB for main), although it has been doubling
every year.
Secondary memories: Secondary memories, such as magnetic or optical disk drives,
have significantly increased capacity (into gigabytes), with significantly lower cost per
megabyte, but the access times are on the order of 10 to 40 ms.
Tertiary (archival) memories: Tertiary memories store huge amounts of data (into
terabytes, or 10^12 bytes), but the time to access the data is on the order of minutes to
hours. Presently, archival data storage systems require large installations based on disk
farms and tapes often operated off line. Archival storage does not necessarily require
many write operations, and write-once, read-many (WORM) systems are acceptable.
Despite having the lowest cost per megabyte, archival storage is typically the most
expensive single element of modern supercomputer installations.
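The three levels above can be summarized as a small lookup structure. The capacities and access times are the order-of-magnitude figures quoted in the text, with rough assumed fill-ins where the text gives none.

```python
# The three hierarchy levels as a lookup structure. Capacities and access
# times are order-of-magnitude figures from the text plus assumed fill-ins.
hierarchy = [
    {"level": "primary",   "media": "SRAM/DRAM",
     "capacity_mb": 100,           "access": "~ processor clock cycle"},
    {"level": "secondary", "media": "magnetic/optical disk",
     "capacity_mb": 10_000,        "access": "10-40 ms"},
    {"level": "tertiary",  "media": "tape/disk farm",
     "capacity_mb": 1_000_000_000, "access": "minutes to hours"},
]

def fastest_level_for(size_mb: float) -> str:
    """Return the fastest level whose (assumed) capacity covers the request."""
    for tier in hierarchy:
        if size_mb <= tier["capacity_mb"]:
            return tier["level"]
    return "tertiary"

print(fastest_level_for(50))     # primary
print(fastest_level_for(5_000))  # secondary
```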
Storage capacity versus access time
Magnetic systems: Areal density of magnetic systems is governed by the minimum
switchable area of a magnetic domain. The size of these domains is governed by the
dimensions of the magnetic heads and their distance to the active media. These domains
can be made quite small, since the magnetic heads can be miniaturized and are "flown"
right against the media (approximately 50 nm above). The access time of magnetic disk
devices is in general shorter than optical disk systems by about one order of magnitude,
because of the low inertia of these miniature heads and the faster rotation speed of the
media. This same advantage, however, is also associated with two of the main
disadvantages of magnetic storage: head crashes and non-removability. It should be
pointed out, however, that some magnetic disk products provide removability at the
expense of longer access times.
Optical systems: Until recently, interest in optical storage systems was restricted to use
in very large storage systems and backup systems, because of their robustness and
removability. Optical storage for very large storage devices employing interchangeable
and recordable media in automatic "jukeboxes" is a market traditionally outside the range
of magnetic disk drives but directly in competition with magnetic tapes. The advantage of
optical systems for this market is that they have much shorter access times than tapes.
Storage capacity versus cost
The market direction for optical disk systems can be anticipated by examining cost per
megabit as a function of system capacity, as shown in Figure B. The generally decreasing
trend seen in this graph indicates that as capacity increases, cost per megabit decreases.
The solid lines show total system cost for the three storage system types. These lines
indicate that the total cost of secondary and tertiary memory systems far exceeds the cost
of primary memory.
The strong linear relationship shows that as capacity increases cost per megabit
decreases, but not in the same proportion. The result is that high-capacity systems have a
much higher total system cost (Call/Recall Inc.).
Benefits and applications of optical storage
Optical media is a newer technology than tape. Following are some of its advantages:
Durability. With proper care, optical media can last a long time, depending on what
kind of optical media you choose.
Great for archiving. Several forms of optical media are write-once read-many
(WORM), which means that once data is written to them, it cannot be changed. This is
excellent for archiving because data is preserved permanently, with no possibility of
being overwritten.
Transportability. Optical media are widely used on other platforms, including the
PC. For example, data written on a DVD-RAM can be read on a PC or any other
system with an optical device and the same file system.
Random access. Optical media provide the capability to pinpoint a particular piece of
data stored on it, independent of the other data on the volume or the order in which
that data was stored on the volume.
While optical has many advantages, there are also some disadvantages to consider, as
follows:
Writing time. The server uses software compression to write compressed data to
your optical media. This process takes considerable processing unit resources and
may increase the time needed to write and restore that data.
Another option that you can use for optical storage is virtual optical storage. When you
use virtual optical storage, you create and use optical images that are stored on your disk
units.
2.2.5. CD-Technologies
CD-R
Write Once/Read Many storage (WORM) has been around since the late 1980s, and is a
type of optical drive that can be written to and read from. When data is written to a
WORM drive, physical marks are made on the media surface by a low-powered laser;
since these marks are permanent, they cannot be erased, hence "write once". The
temperature and cooled it becomes crystalline, but if it's heated to a higher temperature,
when it cools down again it becomes amorphous. The crystalline areas allow the
metalised layer to reflect the laser better while the non-crystalline portion absorbs the
laser beam, so it is not reflected.
In order to achieve these effects in the recording layer, the CD-Rewritable recorder uses
three different laser powers:
the highest laser power, which is called "Write Power", creates a non-crystalline
(absorptive) state on the recording layer
the middle power, also known as "Erase Power", melts the recording layer and
converts it to a reflective crystalline state
the lowest power, which is "Read Power", does not alter the state of the recording
layer, so it can be used for reading the data.
2.2.6. DVD
The compact disc (CD) is surely a major technological innovation of our era. Beginning
as a pure, high-quality sound reproduction system, it rapidly evolved into an entire family
of systems, with applications covering the entire landscape of data storage and
distribution. When Sony introduced the CD-ROM drive in 1987, it was an ideal platform
for all software makers who could now deliver applications on one mass-produced disc
rather than a dozen or more floppy diskettes. It also opened new possibilities for storing the
most vital contents (Sound and Video) of the multimedia age.
Multimedia computer applications strive for increased realism, with more
full-screen, high-quality video, 3D animations, and hi-fi audio. The resulting
demand for storage capacity exceeded the capacity of existing CD-ROMs and began to
be measured in gigabytes. The result was the emergence of second-generation optical
data storage devices in 1995. A new high-capacity, universally applicable optical disc
initially called the digital videodisc, and eventually the digital versatile disc or DVD, was
born.
DVDs have the potential to store more than 17 gigabytes of data, which is more than 25
times the capacity of CD-ROMs. This huge capacity can be used to store up to nine hours
of studio quality video and multichannel surround-sound audio, interactive multimedia
computer programs, 30 hours of CD-quality audio, and just about everything that can be
represented as digital data.
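A quick arithmetic check of the capacity claim above:

```python
# Check: 17 GB versus the 650 MB capacity of a CD-ROM.
dvd_gb = 17
cd_mb = 650
ratio = dvd_gb * 1000 / cd_mb
print(round(ratio, 1))  # 26.2, i.e. "more than 25 times" the capacity
```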
Some Basics:
A DVD looks like an ordinary CD: It is a silvery platter, 4.75 inches in diameter (the
same as a CD-ROM) and about 0.05 inches thick, with a hole in the center. Unlike
conventional CDs, the DVD comprises two platters cemented together, each with a
thickness of 0.6mm. Each of these platters can be a complete disc, recordable on both
sides. The resultant sandwich thus has two layers per side, or four separate recording
surfaces. A DVD is therefore four discs in one, with separate schemes for storing 4.7 to
17 GB of data on a single disc. By taking advantage of the media layering and using both
sides of the disc platter, four capacity levels (see Figure) are supported.
(Compares the pit size of an audio CD to that of a DVD)
Data is recorded on a DVD in a spiral trail of tiny pits and the discs are read using a laser
beam, just like on a CD. But the similarity ends here; the tracks on a DVD are placed
closer together, thereby allowing more tracks per disc. The DVD track pitch (the distance
between two adjacent tracks) is reduced to 0.74 microns, less than half of the CD’s 1.6
micron. The pits, in which data is stored, are also much smaller, allowing more pits per
track. The minimum pit length of a single layer DVD is 0.4 microns, compared to 0.834
microns for a CD. With the number of pits equating to capacity levels, a DVD’s reduced
track pitch and pit size create four times as many pits as a CD’s.
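The "four times as many pits" figure follows directly from the track pitch and pit length numbers quoted above:

```python
# Track pitch shrinks 1.6 -> 0.74 microns; minimum pit length 0.834 -> 0.4 microns.
cd_pitch, dvd_pitch = 1.6, 0.74
cd_pit, dvd_pit = 0.834, 0.4

gain = (cd_pitch / dvd_pitch) * (cd_pit / dvd_pit)
print(round(gain, 1))  # ~4.5: roughly four times the pit count per unit area
```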
To read these tightly packed discs, a laser with a shorter wavelength is required, as are
more accurate aiming and focusing mechanisms. CD drives use a laser beam with a
wavelength of 780nm (which falls in the infrared region), whereas DVD drives use a
635nm or 650nm red light laser to read data. The reduction in the wavelength of the laser
beam is what has made DVD technology possible.
it takes longer to relocate the read head mechanism to another location or file on the same
surface.
Having understood the basic structure of a DVD, let us take a look at the fundamental
functioning of DVD-ROM drives.
These drives can read data from a DVD or CD but
can’t make any changes to it. As far as physical
appearance goes, there is little to distinguish a DVD-ROM
drive from an ordinary CD-ROM drive. The
only giveaway is the DVD logo on the front. Inside
the drive, too, there are more similarities than
differences. The interface is ATAPI (also called IDE
in common parlance) or SCSI, and transport is much
like any other CD-ROM drive. But whereas in a
CD-ROM the data is recorded near the top surface of
the disc, the data layer in a DVD is right in the
middle, so that the disc can be double sided. The laser
is also different, having a pair of lenses on a swivel:
One to focus the beam on to the DVD data layers and
the other for reading ordinary CDs.
DVD-ROM drives spin the disc a lot slower than their CD-ROM counterparts. However,
since the data is packed much closer together, their throughput is substantially better than
that of a CD-ROM drive at an equivalent spin speed. While a 1X CD-ROM drive has a
maximum data rate of only 150 KBps, a 1X DVD-ROM drive can transfer data at 1,250
KBps, which is a mite higher than the speed of an 8X CD-ROM drive.
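The speed comparison can be checked directly from the 1X figures quoted above:

```python
# 1X figures from the text: CD-ROM 150 KBps, DVD-ROM 1,250 KBps.
cd_1x_kbps = 150
dvd_1x_kbps = 1250

print(round(dvd_1x_kbps / cd_1x_kbps, 2))  # 8.33
print(dvd_1x_kbps > 8 * cd_1x_kbps)        # True: faster than an 8X CD drive
```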
DVD-ROM drives became available in early 1997, and these early 1X devices were also
capable of reading CD-ROMs at 12X, sufficient for full-screen video playback. As with
CD-ROMs, higher-speed DVD drives appeared as the technology matured. By the beginning
of 1998, multi-speed DVD-ROM drives had already hit the market. These drives were
capable of reading DVD media at double their initial speed, producing a sustained
transfer rate of 2,700 KBps, and spinning CDs at 24X. By the end of the year DVD read
performance had been increased to 5X. Almost a year later, performance had improved to
6X (8,100 KBps) reading of DVD media and 32X reading of CD-ROMs.
spots and erased, reflective areas between the spots is how a player or drive can discern
and reproduce stored information.
DVD-RW media uses the same physical addressing scheme as DVD-R media. During
recording, the drive's laser follows a microscopic groove to ensure consistent spacing of
data in a spiral track. The walls of the microscopic groove are modulated in a consistent
sinusoidal pattern so that a drive can read it and compare it to an oscillator for precise
rotation of the disc. This modulated pattern is called a "wobble groove", because the walls
of the groove appear to wobble from side to side. This signal is used only during
recording, and has no effect on the playback process. Among the DVD family of formats
only recordable media use wobble grooves.
+RW:
This rewritable DVD format was born out of the competition between CD originators
Sony and Philips, and the principal DVD protagonists, Hitachi, Matsushita Electric and
Toshiba. Unsatisfied with DVD-RAM specifications, Philips and Sony began work on
drives which were originally called DVD+RW and later rechristened +RW under
pressure from the DVD Forum. This is a rewritable format, based on DVD and CD-RW
technology.
+RW drives read all previous CD formats, and store 3 GB of data on proprietary discs.
Manufacturers claim that the drives will have a sustained data transfer rate of 1.7 MBps
as opposed to the 1.35 MBps of DVD-RAM, and will offer a better access time than
DVD-RAM drives. +RW backers believe that their specs are better suited for some
applications. For instance, +RW drives can easily be modified to create discs readable in
any DVD-ROM drive.
DVD+R
In October 2003, Philips Electronics and Mitsubishi Kagaku Media (better known by its
Verbatim brand name) demonstrated their new dual-layer DVD recordable technology at
the Ceatec Japan 2003 exhibition. The new technology virtually doubles data storage
capacity on DVD+R recordable discs from 4.7GB to 8.5GB, while remaining compatible
with existing DVD Video players and DVD-ROM drives.
The dual-layer DVD+R system uses two thin embedded organic dye films for data
storage separated by a spacer layer. Heating with a focused laser beam irreversibly
modifies the physical and chemical structure of each layer such that the modified areas
have different optical properties from those of their unmodified surroundings. This causes a
variation in reflectivity as the disc rotates to provide a read-out signal as with
commercially pressed read-only discs.
The following table summarizes the read/write compatibility of the various formats.
Some of the compatibility questions with regard to DVD+RW will remain uncertain until
products actually reach the marketplace. A "Yes" means that it is usual for the relevant
drive unit type to handle the associated disc format; it does not mean that all such units
do. A "No" means that the relevant drive unit type either doesn't or rarely handles the
associated disc format:
Type of DVD Unit

Disc       DVD Player   DVD-R(G)    DVD-R(A)    DVD-RAM     DVD-RW      DVD+RW
Format     R     W      R     W     R     W     R     W     R     W     R     W
DVD-ROM    Yes   No     Yes   No    Yes   No    Yes   No    Yes   No    Yes   No
DVD-R(G)   Yes   No     Yes   Yes   Yes   No    Yes   No    Yes   Yes   Yes   No
DVD-R(A)   Yes   No     Yes   No    Yes   Yes   Yes   No    Yes   No    Yes   No
DVD-RAM    No    No     No    No    No    No    Yes   Yes   No    No    No    No
DVD-RW     Yes   No     Yes   Yes   Yes   No    Yes   No    Yes   Yes   Yes   No
DVD+RW     Yes   No     Yes   Yes   Yes   No    No    No    Yes   No    Yes   Yes
CD-R       No    No     No    No    No    No    Yes   No    Yes   Yes   Yes   Yes
CD-RW      No    No     No    No    No    No    Yes   No    Yes   Yes   Yes   Yes
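A few representative entries from the compatibility table above can be encoded as a lookup structure (True mirrors a "Yes", i.e. usually handled, not guaranteed for every unit):

```python
# Representative subset of the read/write compatibility table.
compat = {
    # disc format -> {drive unit: (reads, writes)}
    "DVD-ROM": {"DVD Player": (True, False), "DVD-RAM": (True, False)},
    "DVD-RAM": {"DVD Player": (False, False), "DVD-RAM": (True, True)},
    "CD-RW":   {"DVD Player": (False, False), "DVD-RAM": (True, False)},
}

def can_read(disc: str, unit: str) -> bool:
    reads, _writes = compat.get(disc, {}).get(unit, (False, False))
    return reads

print(can_read("DVD-RAM", "DVD Player"))  # False
print(can_read("DVD-RAM", "DVD-RAM"))     # True
```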
DVD Regions and their Intricacies:
Soon after the DVD format was standardized worldwide, the movie industry divided the
world into six regions. These are:

Region No.   Region
1            USA and Canada
2            Europe, Near East, South Africa and Japan
3            South East Asia
The Operational Concept behind Millipede
The Millipede chip uses an array of very tiny cantilevers to create almost
atomic-sized indentations in a plastic substrate that is used as the recording material. This
array works in a massively parallel fashion where a bank of cantilevers accesses or
creates information.
Before information is read or written, the polymer-based medium, which is just about 50
nanometers thick, is positioned beneath the cantilever array. This medium is mounted on
a magnetically driven scanner that can move in three dimensions. During read-write, the
scanner moves the medium along the x-y axes while the cantilevers actuate and create
indentations on the recording surface. Using this process, and with a single cantilever design,
researchers have managed to achieve a storage density of an astounding 60 to 80 GB per
square centimeter. Also, this substrate can be ‘erased’ and data can be rewritten onto it
repeatedly. This is achieved by momentarily heating the polymer to a temperature of
150°C so that the surface is effectively smoothed and ready for rewriting. However,
individual bits of information cannot be erased; only larger sections of the polymer
surface can be cleared. The image above also shows an actual electron microscope image
of one of the Millipede cantilevers. The tip of the cantilever head is about 50 Angstroms
wide—that’s just a few atoms clustered together.
One of the most obvious advantages of this technology is that very large storage
densities can be achieved in very small areas. Lower power consumption makes it
ideal for mobile applications such as handheld computers and cellular phones: your
next-generation cellular phone would be able to hold a gigabyte of multimedia content and
contact information! The main hurdle that lies in the path of commercialization of this
technology is the fabrication of the controllers that go into these chips.
Blue laser, Blu-ray or BD:
BD is a joint effort by nine consumer electronics companies, namely Hitachi, LG,
Matsushita (Panasonic), Pioneer, Philips, Samsung, Sharp, Sony, and Thomson. This
technology will make it possible to record up to 2-3 hours of HDTV on a 27 GB disk.
Such high capacities have been made possible by using a blue laser (hence the name
Blu-ray) instead of the regular red laser used in CDs and DVDs. The blue laser has a
shorter wavelength of 405 nanometers as compared to the 650 nanometers of the red laser.
This makes it easier to focus the laser beam with more precision, thus making it possible to
hold more data on the disk.
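The benefit of the shorter wavelength can be sketched with the standard diffraction-limit estimate (focused spot diameter roughly proportional to wavelength divided by numerical aperture). The NA values of 0.6 for DVD and 0.85 for Blu-ray are commonly cited figures, assumed here for illustration.

```python
# Rough diffraction-limit sketch: spot diameter ~ wavelength / NA.
def spot_diameter_nm(wavelength_nm: float, na: float) -> float:
    return wavelength_nm / (2 * na)  # simplified estimate, constant factors dropped

dvd_spot = spot_diameter_nm(650, 0.60)
bd_spot = spot_diameter_nm(405, 0.85)
print(round(dvd_spot / bd_spot, 2))  # ~2.27: a much smaller, denser focused spot
```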
The disk features a data transfer rate of 36 MBps and will be compatible with the
prevailing optical disk technologies. So a BD drive will be able to play back CDs and
DVDs. An interesting feature of this technology is that it can simultaneously record from
TV and play pre-recorded video from the same disk.
Another exciting form of Blu-ray disks is being developed along the lines of the Sony
Minidisk, which is fairly popular today. Philips has already showcased a 30 mm
re-writable disk, code-named SFFO (Small Form Factor Optical Storage), based on the
Blu-ray technology, that can hold up to one GB of data. The company plans to use these disks
in mobile phones and PDAs instead of existing memory cards. The showcased drive
measures just 5.6 x 3.4 x 0.75 cm.
Talking of miniature storage, Iomega has also announced a 1.5 GB capacity magnetic
digital capture technology (DCT) disk. It is a small form factor disk that weighs just 9
grams and is the size of a small coin. It comes in its own stainless steel casing to protect
the data from being damaged.
If it is simply a question of using laser light with a smaller wavelength, you might ask
why this wasn’t done before. The reason is that the materials used to generate
blue lasers have a relatively shorter life span compared to those used for red lasers. While
blue lasers are still in the research phase, there are three methods used to
generate them:
Zinc Selenide (ZnSe): The initial method for implementing blue lasers involved the use
of Zinc Selenide to fabricate the diodes that generate blue lasers. However, this material
has a relatively short life span and its power requirements make it economically
unsuitable for commercial implementation. Also, these lasers have wavelengths ranging
from 460 to 520 nm, putting them at the edge of the blue band and closer to the green
region of the spectrum.
Gallium Nitride (GaN): This material has proved to be very successful in the creation of
blue lasers and has generated wavelengths as low as 370 nm with relatively high
reliability. Most of the work in blue lasers today is based on this material.
Second Harmonic Generation lasers: These lasers are relatively new on the blue-laser
scene but have exhibited very high levels of reliability. In this method, the frequency
of a given laser is doubled (that is, the wavelength is halved), generating laser light
within the blue spectrum. This is done with an apparatus called a Distributed Bragg
Reflector (DBR) where, for example, the frequency of an infrared laser with a
wavelength of 850 nm is doubled, resulting in a blue laser with a wavelength of
425 nm.
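The arithmetic behind frequency doubling is easy to verify. A minimal sketch (using the standard value for the speed of light; the function name is ours, not from any laser library):

```python
# Second-harmonic generation doubles a laser's frequency, and since
# c = frequency x wavelength, doubling the frequency halves the wavelength.

C = 299_792_458  # speed of light, m/s

def second_harmonic_wavelength_nm(wavelength_nm: float) -> float:
    """Return the wavelength after the laser's frequency is doubled."""
    frequency = C / (wavelength_nm * 1e-9)   # original frequency in Hz
    doubled = 2 * frequency                  # SHG doubles the frequency
    return (C / doubled) * 1e9               # convert back to nanometres

print(second_harmonic_wavelength_nm(850))   # 850 nm infrared -> 425 nm blue
```

As expected, the 850 nm infrared example from the text comes out at exactly half, 425 nm.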
It will be some time before blue laser technology becomes commercially viable. The
major hindrance to this technology is the cost of implementation: a Blu-ray device
available today would cost about 2 lakh! The second hurdle is reliability: the red
lasers used in all CD-ROM and DVD-ROM drives today have a life cycle of about
10,000 hours. Compare that to the meager hundreds of hours that Gallium Nitride based
blue lasers last with today's technology! However, it is just a matter of time before
these issues are addressed. Scientists predict that in just a couple of years nearly
all new optical storage devices will be based on blue lasers.
Fluorescent Multi-layer Disc:
A DVD can record data on a maximum of two layers, since an increase in the number of
layers in the disk increases interference and the data cannot be read properly. A
company called Constellation 3D developed a technology by which data could be recorded
on up to 20 layers in a disk. Instead of using a reflective surface, FMD technology
used a disk that appears transparent to the human eye, built from layers of
fluorescent dyes. The technology was also red-laser based and thus compatible with
legacy CD/DVD media.
Traditional input devices fall into three broad groups:

Traditional keyboards: computer keyboards; specialty keyboards and terminals, such as
dumb terminals, intelligent terminals (ATMs, POS terminals), and Internet terminals.

Pointing devices: mice, trackballs, pointing sticks, touch pads, touch screens,
pen-based computer systems, and light pens.

Source data-entry devices: scanning devices (imaging systems, bar-code readers, mark-
and character-recognition devices such as MICR, OMR, and OCR, fax machines),
audio-input devices, video-input devices, digital cameras, voice-recognition systems,
and sensors.
Peripheral requirements for MM development - Input Devices:
Key devices for multimedia input:
Keyboard and OCR for text.
Digital cameras and scanners for graphics.
MIDI keyboards and microphones for sound.
Video cameras, CD-ROMs, and frame grabbers for video.
Mice, trackballs, joysticks, virtual reality gloves and wands for spatial data.
Keyboard:
Keyboards are used for textual input. Pressing a key on a keyboard closes a circuit
corresponding to the key, sending a unique code to the CPU of the computer.
MIDI keyboards and microphones are used to input original sounds. A microphone has a
diaphragm that vibrates in response to sound waves. The vibrations modulate a
continuous electric current analogous to the sound waves. The modulated current can be
digitized and stored in a standardized format for audio data, such as a .WAV file.
The microphone plugs into a sound input board.
Mice, trackballs, joysticks, drawing tablets:
These devices are used to enter positional data as 2D or 3D coordinates (latitude,
longitude, and altitude) from a standard reference point. The common methodology is to
define a point on the computer screen and react with respect to the screen
co-ordinates.
Page 42
Multimedia and Its Applications
The mouse is a pointing device with a roller on its base, about the size of a normal
cake of bath soap. When a mouse rolls on a flat surface, the cursor on the screen
moves in the direction of the mouse's movement. Movement of the mouse across a flat
surface causes the roller to move, and potentiometers coupled to the roller sense the
relative movements. This motion is then converted to digital values that determine the
magnitude and direction of the mouse's movement. The movement is tracked by software,
which can also set the tracking speed.
The trackball and drawing tablet work in much the same way as the mouse.
Joystick
The joystick is a device that lets the user move an object on the screen. Children can
use computers in a simple way with a joystick (or a tracker ball). While playing
certain games, the user needs to move certain objects quickly across the screen.
Though pressing keys on the keyboard can do this, it is not convenient for small
children; a joystick makes it much easier and provides better control.
A joystick is a stick set in two crossed grooves that can be moved left or right,
forward or backward. The movements of the stick are sensed by a potentiometer. As the
stick is moved around, the movements are translated into binary instructions with the
help of electrical contacts in its base.
A joystick is generally used to control the velocity of the screen cursor movement rather
than its absolute position.
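The difference between the two devices can be sketched in a few lines: a mouse's relative movements are accumulated into an absolute cursor position, while a joystick's deflection acts as a velocity, moving the cursor for as long as the stick is held. The function names and speed values below are illustrative, not from any particular system.

```python
# Mouse: relative (dx, dy) movements accumulate into an absolute position.
def apply_mouse(position, deltas, tracking_speed=1.0):
    x, y = position
    for dx, dy in deltas:
        x += dx * tracking_speed
        y += dy * tracking_speed
    return x, y

# Joystick: a held deflection moves the cursor a little on every tick,
# so it controls velocity rather than absolute position.
def apply_joystick(position, deflection, ticks, speed=5.0):
    x, y = position
    dx, dy = deflection
    for _ in range(ticks):
        x += dx * speed
        y += dy * speed
    return x, y

# Three small mouse movements land the cursor at a fixed point...
print(apply_mouse((0, 0), [(2, 1), (3, 0), (-1, 4)]))   # (4.0, 5.0)
# ...while holding a joystick right for 10 ticks keeps the cursor drifting.
print(apply_joystick((0, 0), (1, 0), ticks=10))         # (50.0, 0.0)
```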
The tracker ball does the same thing but is round in shape. Both the joystick and the
tracker ball allow you to move objects around the screen easily.
Multimedia software should be able to determine the positional information as well as
the signal context, such as a mouse press.
Video Camera:
A standard video camera contains photosensitive cells, scanning one frame after
another. The output of the cells is recorded as an analog stream of colors, or sent to
digitizing circuitry to generate a stream of digital codes.
A video input card is required to bring a camera's video stream into the computer. It
digitizes the analog signal from the camera. The output can be sent to a file for
storage, to the CPU for processing, or to the monitor for display (or all of them).
Frame grabber:
Allows the capture of a single frame of data from a video stream. It does not have as
good a resolution as a still camera. Typical frame grabbers process 30 frames per
second for real-time performance.
2.2.6 Touch Screens
A touch screen is an intuitive computer input device that works by simply touching the
display screen, either by a finger, or with a stylus, rather than typing on a keyboard or
pointing with a mouse. Computers with touch screens have a smaller footprint, and can
be mounted in smaller spaces; they have fewer movable parts, and can be sealed. Touch
screens may be built in, or added on. Add-on touch screens are external frames with a
clear see-through touch screen, which mount onto the monitor bezel and have a controller
built into their frame. Built-in touch screens are internal, heavy-duty touch screens
mounted directly onto the CRT tube.
The touch screen interface - whereby users navigate a computer system by touching
icons or links on the screen itself - is the simplest, most intuitive, and easiest to
learn of all PC input devices, and is fast becoming the interface of choice for a wide
variety of applications, such as:
Public Information Systems: Information kiosks, tourism displays, and other electronic
displays are used by many people who have little or no computing experience. The
user-friendly touch screen interface can be less intimidating and easier to use
than other input devices, especially for novice users, making information accessible
to the widest possible audience.
Restaurant/POS Systems: Time is money, especially in a fast-paced restaurant or retail
environment. Because touch screen systems are easy to use, overall training time for
new employees can be reduced. Work also gets done faster, because employees can simply
touch the screen to perform tasks, rather than entering complex keystrokes or
commands.
Customer Self-Service: In today's fast-paced world, waiting in line is one of the
things that has yet to speed up. Self-service
touch screen terminals can be used to improve customer service at busy stores, fast
service restaurants, transportation hubs, and more. Customers can quickly place their
own orders or check themselves in or out, saving them time, and decreasing wait
times for other customers.
Control / Automation Systems: The touch screen interface is useful in systems
ranging from industrial process control to home automation. By integrating the input
device with the display, valuable workspace can be saved. And with a graphical
interface, operators can monitor and control complex operations in real-time by
simply touching the screen.
Computer Based Training: Because the touch screen interface is more user-friendly than
other input devices, overall training time for computer novices, and therefore
training expense, can be reduced. It can also help to make learning more fun and
interactive, which can lead to a more beneficial training experience for both students
and educators.
Future touch applications:
Constant innovation is improving the performance of all sensor technologies. A
restricted viewing angle adds a measure of privacy and security to transactions on a
touch screen system. Pen capability is an additional draw that allows for a denser
touch point and for annotations, drawings and checklists.
The immediate future has more in store: touch systems will address medical,
geophysical, design, engineering and other 3D applications. Such applications indicate
a design focus on the potential of touch technology. The end product allows the user
to perform more tasks with his hands. Imagine sketching with charcoal in Adobe
Photoshop, your hands applying various pressures on the screen. Touch technology is
advancing towards this, and much more, at a rapid pace.
Light pen:
A light pen is also a pointing device. It consists of a photocell mounted in a
pen-shaped tube. When the pen is brought in front of a picture element of the screen,
the light coming from the screen causes the photocell to respond by generating a
pulse. This electric response is transmitted to a processor that identifies the pixel
(graphic point) the light pen is pointing to. The light pen is thus very useful for
identifying a specific location, but it provides no information when held over a blank
part of the screen, because it is a passive device with a sensor only.
The light pen can also be used to draw images on the screen: as the pen moves over the
screen, lines are drawn.
2.2.7 Magnetic Card Encoders And Readers
With the increasing deployment of magnetic strip cards, the demand for less expensive
and more robust card encoding and issuing equipment has also grown.
A visual inspection of a credit card may leave the impression that it has but a single
magnetic strip. In actuality, the International Organization for Standardization (ISO)
dictates the locations of three strips, a standard observed by nearly every type of
card. Each of these strips, or tracks, is recorded at a different bit density using
standardized character encodings.
Airline customers are often greeted by name after the ticket agent swipes their credit
card. That’s because the International Air Transport Association (IATA) standard for
placing the customer’s name and account information is assigned to track one of a credit
card. A quick swipe of the card and the customer’s name becomes instantly available,
with no database query required.
Track two is written in the lingua franca of the credit card processing world as set forth
by the American Banking Association (ABA). Nearly all credit cards and credit card
equipment around the world use track two, though there is currently a movement to
relocate their data to track one because it holds more information.
Track three was originally intended to support offline automated teller machine (ATM)
transactions. Once deployed, ATMs were quickly networked. The need to support offline
transactions quickly diminished, and with it the use of track three.
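The name-greeting example above relies on the fixed field layout of track one. The sketch below assumes the widely used track-one convention (a '%B' start, '^' field separators, and 'SURNAME/FIRST' names); the exact layouts are defined by the ISO/IATA standards, and the sample card data here is entirely fictitious.

```python
# A sketch of pulling the cardholder's name out of an IATA-style
# track-one string read by a magnetic card reader.

def parse_track1(raw: str) -> dict:
    """Split an IATA track-one record into its main fields."""
    body = raw.strip("%?")                 # drop the start/end sentinels
    format_code, rest = body[0], body[1:]  # e.g. format code 'B' for bank cards
    pan, name, extra = rest.split("^", 2)  # '^' separates the fields
    surname, _, first = name.partition("/")
    return {
        "format": format_code,
        "account": pan,
        "name": f"{first.strip()} {surname.strip()}",
    }

sample = "%B4000001234567899^DOE/JANE^25121010000000000?"  # fictitious card
print(parse_track1(sample)["name"])   # JANE DOE
```

This is why no database query is needed: one swipe and the name is already in hand.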
Applications of Magnetic Cards
Beyond the common credit card, magnetic strip card use has rapidly spread to student
IDs, grocery store discount cards, copy machine user ID cards, vending machine debit
cards, library cards, etc.
Bar Code Reader
In a bar code, data is coded in the form of light and dark bars.
Bar codes are commonly used to identify merchandise in retail
stores, using a coding scheme known as the Universal Product Code
(UPC). The manufacturer records these codes on the
product. In a retail shop, the most popular way to read these codes is with a
hand-held scanner, called a bar code reader, which is flashed over the bar code. The
scanned code is transmitted to the computer, which picks up the price, updates
inventory and sales records, and prints the customer's bill.
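A detail the text does not spell out is that the last digit of a UPC-A code is a check digit, which lets the reader catch most scanning errors. The standard calculation is sketched below: triple the digits in odd positions, add the even-position digits, and take whatever is needed to reach a multiple of ten.

```python
# Check-digit calculation for UPC-A, the 12-digit code under retail bar codes.

def upc_check_digit(first11: str) -> int:
    """Check digit for the first 11 digits of a UPC-A code."""
    odd = sum(int(d) for d in first11[0::2])   # positions 1, 3, 5, ...
    even = sum(int(d) for d in first11[1::2])  # positions 2, 4, 6, ...
    return (10 - (odd * 3 + even) % 10) % 10

print(upc_check_digit("03600029145"))  # 2, so the full code is 036000291452
```

If a scan garbles a digit, the recomputed check digit will usually not match, and the reader asks for a rescan instead of ringing up the wrong item.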
2.2.8 Flat Bed Scanners
A scanner is just another input device, much like a keyboard or mouse, except that it
takes its input in graphical form. These images could be photographs for retouching,
correction or use in DTP. They could be hand-drawn logos required for document
letterheads. They could even be pages of text which suitable software could read and
save as editable text.
Of the various scanner formats, flatbed scanners are the most versatile and popular.
These are capable of capturing color pictures, documents, pages from books and
magazines, and, with the right attachments, even transparent photographic film.
Operation:
On the simplest level, a scanner is a device which converts light (which we see when
we look at something) into 0s and 1s (a computer-readable format). In other words,
scanners convert analogue data into digital data.
All scanners work on the same principle of reflectance or transmission. The image is
placed before the carriage, consisting of a light source and sensor; in the case of a digital
camera, the light source could be the sun or artificial lights. When desktop scanners were
first introduced, many manufacturers used fluorescent bulbs as light sources. While good
enough for many purposes, fluorescent bulbs have two distinct weaknesses: they rarely
emit consistent white light for long, and while they're on they emit heat which can distort
the other optical components. For these reasons, most manufacturers have moved to
"cold-cathode" bulbs. These differ from standard fluorescent bulbs in that they have no
filament. They therefore operate at much lower temperatures and, as a consequence, are
more reliable. Standard fluorescent bulbs are now found primarily on low-cost units and
older models.
By late 2000, Xenon bulbs had emerged as an alternative light source. Xenon produces a
very stable, full-spectrum light source that's both long lasting and quick to initiate.
However, xenon light sources do consume power at a higher rate than cold cathode tubes.
To direct light from the bulb to the sensors that read light values, CCD scanners use
prisms, lenses, and other optical components. Like eyeglasses and magnifying glasses,
these items can vary quite a bit in quality. A high-quality scanner will use high-quality
glass optics that are color-corrected and coated for minimum diffusion. Lower-end
models will typically skimp in this area, using plastic components to reduce costs.
The amount of light reflected by or transmitted through the image and picked up by the
sensor is then converted to a voltage proportional to the light intensity - the
brighter the part of the image, the more light is reflected or transmitted, resulting
in a higher voltage.
Higher-quality scanners typically use a separate analogue-to-digital converter that
processes data away from the main circuitry of the scanner. However, this adds to the
cost of manufacturing, so many low-end models instead use integrated
analogue-to-digital converters built into the scanner's primary circuit board.
The sensor component itself is implemented using one of three different types of
technology:
PMT (photomultiplier tube): a technology inherited from the drum scanners of
yesteryear.
CCD (charge-coupled device): the type of sensor used in desktop scanners.
CIS (contact image sensor): a newer technology which integrates scanning functions
into fewer components, allowing scanners to be more compact.
Scan modes
PCs represent pictures in a variety of ways - the most common being line art,
halftone, grayscale, and color:
Line art is the smallest of all the image formats. Since only black and white
information is stored, the computer represents black with a 1 and white with a 0; it
takes only 1 bit of data to store each dot of a black and white scanned image. Line
art is most useful when scanning text or line drawings. Pictures do not scan well in
line art mode.
While computers can store and show grayscale images, most printers are unable to print
different shades of gray, so they use a trick called halftoning. Halftones use
patterns of dots to fool the eye into believing it is seeing grayscale information.
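One common way to build such a dot pattern is ordered dithering, where each gray value is compared against a small repeating matrix of thresholds. The sketch below is a minimal version using a 2x2 Bayer-style matrix; real printer drivers use larger matrices or error diffusion, but the idea is the same.

```python
# Halftoning sketch: ordered dithering turns grayscale values (0-255)
# into black/white dots whose density mimics the gray level.

BAYER_2X2 = [[51, 153],   # thresholds spread across the 0-255 range
             [204, 102]]

def halftone(gray_rows):
    """Map each gray pixel to 1 (print a dot) if darker than its threshold."""
    out = []
    for y, row in enumerate(gray_rows):
        out.append([1 if value < BAYER_2X2[y % 2][x % 2] else 0
                    for x, value in enumerate(row)])
    return out

# A flat mid-gray patch comes out as a pattern with dots on half the cells:
print(halftone([[128, 128], [128, 128]]))  # [[0, 1], [1, 0]]
```

Viewed from a distance, that 50% dot coverage reads as medium gray, which is exactly the trick the text describes.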
Grayscale images are the simplest for the computer to store. Humans can perceive about
255 different shades of gray, represented in a PC by a single byte of data with a
value from 0 to 255. A grayscale image can be thought of as equivalent to a black and
white photograph.
True color images are the largest and most complex to store: PCs use 8 bits (1 byte)
to represent each of the color components (red, green, and blue), and therefore 24
bits in total to represent the entire color spectrum.
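The bits-per-pixel figures above translate directly into file sizes for an uncompressed scan, as this small sketch shows (the image dimensions are just an example):

```python
# Uncompressed scan size: 1 bit per pixel for line art, 8 bits (1 byte)
# for grayscale, 24 bits (3 bytes) for true color.

BITS_PER_PIXEL = {"line_art": 1, "grayscale": 8, "color": 24}

def scan_size_bytes(width_px: int, height_px: int, mode: str) -> int:
    """Raw (uncompressed) size of a scanned image in bytes."""
    return width_px * height_px * BITS_PER_PIXEL[mode] // 8

# A 1000 x 1000 pixel scan in each mode:
for mode in BITS_PER_PIXEL:
    print(mode, scan_size_bytes(1000, 1000, mode), "bytes")
```

The same page scanned in color takes 24 times the space of line art, which is why mode selection matters so much before a scan.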
File formats
The format in which a scanned image is saved can have a significant effect on file size -
and file size is an important consideration when scanning, since the high resolutions
supported by many modern scanners can result in the creation of image files as large as
30MB for an A4 page.
Windows bitmap (BMP) files are the largest, since they store the image in full color
without compression or in 256 colors with simple run-length encoding (RLE)
compression. Images to be used as Windows wallpaper have to be saved in BMP format,
but for most other cases it can be avoided.
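Run-length encoding, the simple compression used for 256-color BMPs, just replaces each run of identical values with a (count, value) pair. The sketch below shows the pairing idea, not BMP's exact byte layout:

```python
# Run-length encoding sketch: runs of identical pixel values become
# (count, value) pairs, and decoding expands them back.

def rle_encode(data):
    runs = []
    for value in data:
        if runs and runs[-1][1] == value:
            runs[-1][0] += 1          # extend the current run
        else:
            runs.append([1, value])   # start a new run
    return [(count, value) for count, value in runs]

def rle_decode(runs):
    return [value for count, value in runs for _ in range(count)]

row = [7, 7, 7, 7, 0, 0, 7]
encoded = rle_encode(row)
print(encoded)                      # [(4, 7), (2, 0), (1, 7)]
print(rle_decode(encoded) == row)   # True
```

RLE works well on images with large flat areas (the long runs collapse) and poorly on photographs, where neighbouring pixels rarely repeat.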
Tagged image file format (TIFF) files are the most flexible, since they can store images
in RGB mode for screen display, or CMYK for printing. TIFF also supports LZW
compression, which can reduce the file size significantly without any loss of quality. This
is based on two techniques introduced by Jacob Ziv and Abraham Lempel in 1977 and
Page 49
Multimedia and Its Applications
subsequently refined by Unisys researcher Terry Welch. LZ77 creates pointers back to
repeating data, and LZ78 creates a dictionary of repeating phrases with pointers to those
phrases.
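The dictionary idea behind LZW can be sketched compactly. The compressor below builds its phrase table on the fly in the LZ78 style; real TIFF-LZW adds code-width management and table resets that are omitted here.

```python
# A minimal LZW-style compressor: repeated phrases are replaced by codes
# into a dictionary that grows as the input is read.

def lzw_compress(text: str):
    table = {chr(i): i for i in range(256)}  # seed with all single bytes
    phrase, codes = "", []
    for ch in text:
        if phrase + ch in table:
            phrase += ch                     # keep growing the match
        else:
            codes.append(table[phrase])      # emit the longest known phrase
            table[phrase + ch] = len(table)  # remember the new phrase
            phrase = ch
    if phrase:
        codes.append(table[phrase])
    return codes

codes = lzw_compress("ABABABAB")
print(codes)  # [65, 66, 256, 258, 66]: 8 input symbols shrink to 5 codes
```

Because the decompressor can rebuild the same table from the code stream, no dictionary needs to be stored with the file, which is why LZW loses no quality.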
CompuServe's graphics interchange format (GIF) stores images using indexed color.
A total of 256 colors are available in each image, although what these colors are can
change from image to image. A table of RGB values for each index color is stored at the
start of the image file. GIFs tend to be smaller than most other file formats because of this
decreased color depth, making them a good choice for use in WWW-published material.
The PC Paintbrush (PCX) format has fallen into disuse, but offers a compressed format
at 24-bit color depth. The JPEG file format uses lossy compression and can achieve small
file sizes at 24-bit color depth. The level of compression can be selected - and hence the
amount of data loss - but even at the maximum quality setting JPEG loses some detail
and is therefore only really suitable for viewing images on-line. The number of levels of
compression available depends on the image editing software being used.
Unless there is a need to preserve color information from the original document, images
stored for subsequent OCR processing are best scanned in grayscale. This uses a third of
the space of an RGB color scan. An alternative is to scan in line-art mode - black and
white with no grayscales - but this often loses detail, reducing the accuracy of the
subsequent OCR process.
The table below illustrates the relative file sizes that can be achieved by the
different file formats in storing a "native" 1MB image, and also indicates the color
depth supported:

File format                           Image size   No. of colors
BMP - RGB                             1MB          16.7 million
BMP - RLE                             83KB         256
GIF                                   31KB         256
JPEG - min. compression               185KB        16.7 million
JPEG - min. progressive compression   150KB        16.7 million
JPEG - max. compression               20KB         16.7 million
JPEG - max. progressive compression   16KB         16.7 million
PCX                                   189KB        16.7 million
TIFF                                  1MB          16.7 million
TIFF - LZW compression                83KB         16.7 million
Optical character recognition (OCR) software attempts to recognize text characters by
comparing the shape of the scanned objects to a database of words categorized by
different fonts or typefaces. Thereafter, it groups individual characters and compares
them with the words in the dictionary set for a particular language.
This step is crucial for accuracy in recognition: the more comprehensive the
dictionary, the more accurate the finished product. The OCR software marks certain
words that it 'considers' inaccurate for you to correct manually. Finally, the OCR
software uses the index it created to align the text fields as accurately as possible.
OCR accuracy depends on scanner quality, the paper used to print the text, and so on.
The latest breed of OCR packages uses optimization algorithms, neural networks and
even AI concepts. Using pattern-recognition techniques, the software tries to guess
the character as a whole and looks at all possibilities before arriving at a
hypothesis. Some OCR packages have inbuilt tools that enable them to 'learn' from the
changes you make to the output.
Field-specific recognition, wherein the scanned data is automatically stored in the
appropriate field in, say, a database, will also be an integral part of the software in the
near future.
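The shape-comparison step at the heart of OCR can be illustrated with a toy template matcher: each scanned glyph (a small bitmap) is compared pixel-by-pixel against stored templates, and the closest template wins. The 3x3 "templates" below are stand-ins for the real font databases OCR packages use.

```python
# Toy OCR pattern matching: pick the stored template that differs from
# the scanned glyph in the fewest pixels.

TEMPLATES = {   # tiny 3x3 bitmaps standing in for real font templates
    "I": (0, 1, 0,  0, 1, 0,  0, 1, 0),
    "L": (1, 0, 0,  1, 0, 0,  1, 1, 1),
    "T": (1, 1, 1,  0, 1, 0,  0, 1, 0),
}

def recognize(glyph):
    """Return the template letter with the fewest differing pixels."""
    def distance(template):
        return sum(a != b for a, b in zip(glyph, template))
    return min(TEMPLATES, key=lambda letter: distance(TEMPLATES[letter]))

noisy_T = (1, 1, 1,  0, 1, 0,  0, 0, 0)   # a 'T' with one pixel dropped
print(recognize(noisy_T))                  # T
```

Even with a pixel missing, the glyph is still closer to 'T' than to anything else, which is the sense in which OCR "guesses the character as a whole" rather than demanding an exact match.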
Infrared Remotes
Infrared remotes work just like TV remotes - they require a direct line of sight
between the remote and the unit. Radio frequency (RF) remotes that do not require line
of sight are becoming more common, and can be useful if you have employees who like to
pace around the room while giving a presentation.
The Process
Pushing a button on a remote control sets in
motion a series of events that causes the
controlled device to carry out a command. The
process works something like this:
1. You push the "volume up" button on your remote control, causing it to touch the
contact beneath it and complete the "volume up" circuit on the circuit board. The
integrated circuit detects this.
2. The integrated circuit sends the binary "volume up" command to the LED at the
front of the remote.
3. The LED sends out a series of light pulses that corresponds to the binary "volume
up" command.
One example of remote-control codes is the Sony Control-S protocol, which is used for
Sony TVs and includes the following 7-bit binary commands:
Button         Code
1              000 0000
2              000 0001
3              000 0010
4              000 0011
Channel up     001 0000
Channel down   001 0001
Power on       001 0101
Power off      010 1111
Volume up      001 0010
Volume down    001 0011
The remote signal includes more than the command for "volume up," though. It carries
several chunks of information to the receiving device, including:
a "start" command
the command code for "volume up"
the device address (so the TV knows the data is intended for it)
a "stop" command (triggered when you release the "volume up" button)
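The chunks listed above can be pictured as a simple frame built around the codes from the Control-S table. The framing below is simplified for illustration, and the device address is an invented value, not a real Sony address.

```python
# Sketch of assembling a remote-control message: start marker, 7-bit
# command code, device address, stop marker.

COMMANDS = {            # 7-bit codes from the Control-S table above
    "volume_up":   "0010010",
    "volume_down": "0010011",
    "power_on":    "0010101",
}
TV_ADDRESS = "00001"    # illustrative device address, not a real value

def build_frame(button: str) -> str:
    """Frame a button press the way the remote's IC would."""
    return "START " + COMMANDS[button] + " " + TV_ADDRESS + " STOP"

print(build_frame("volume_up"))  # START 0010010 00001 STOP
```

The address field is what lets a VCR sitting next to the TV ignore a "volume up" meant for the television.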
Infrared (IR) remote controls are most commonly used for devices such as TVs, VCRs,
DVDs, home theater systems, etc. Repeaters can be used to extend the range.
2.2.9 Voice Recognition Systems
These use speech or voice as input. This is a form of pattern recognition in which
spoken sound patterns are matched against previously recorded patterns.
A basic difficulty is that voice quality differs from person to person in pitch,
timbre, volume, rate of speech and accent. The user can train the computer by speaking
certain words repeatedly.
The major problems facing these systems are:
Limited vocabulary
People use different words to convey the same meaning.
Some sentences make sense but cannot be properly parsed.
Accentuating a word may be important.
Tone of speaker’s voice can alter the meaning of words.
Cultural or language issues.
Homophones ('see' vs. 'sea', 'know' vs. 'no').
2.2.10 Digital Camera
Digital cameras are used to capture digital images. Real images are those present in
nature; digital images represent real images in terms of pixels. A still image is a
snapshot of a motion image, which is a sequence of images giving the impression of
continuous motion.
In principle, a digital camera is similar to a traditional film-based camera. There's
a viewfinder to aim it, a lens to focus the image onto a light-sensitive device, some
means by which several images can be stored and removed for later use, and the whole
lot is fitted into a box. In a conventional camera, light-sensitive film captures
images and is used to store them after chemical development. Digital photography uses
a combination of advanced image sensor technology and memory storage, which allows
images to be captured in a digital format that is available instantly, with no need
for a "development" process.
Although the principle may be the same as a film camera, the inner workings of a
digital camera are quite different, the imaging being performed either by a charge
coupled device (CCD) or CMOS (complementary metal-oxide semiconductor) sensor. Each
sensor element converts light into a voltage proportional to the brightness, which is
passed into an analogue-to-digital converter (ADC) that translates the fluctuations of
the CCD into discrete binary code. The digital output of the ADC is sent to a digital
signal processor (DSP), which adjusts contrast and detail, and compresses the image
before sending it to the storage medium. The brighter the light, the higher the
voltage and the brighter the resulting computer pixel. The more elements, the higher
the resolution, and the greater the detail that can be captured.
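The ADC step in that pipeline can be sketched as simple quantization: each sensor voltage is mapped onto one of 256 discrete codes. The full-scale voltage below is an assumed figure for illustration, not a specification of any real sensor.

```python
# Quantization sketch for a camera's ADC: a sensor voltage proportional
# to brightness becomes an 8-bit pixel value.

FULL_SCALE_V = 3.3   # assumed full-scale ADC voltage, for illustration

def quantize(voltage: float, bits: int = 8) -> int:
    """Map a sensor voltage to a discrete binary code."""
    levels = 2 ** bits
    code = int(voltage / FULL_SCALE_V * (levels - 1))
    return max(0, min(levels - 1, code))   # clamp to the valid range

# Brighter light -> higher voltage -> brighter pixel value:
for v in (0.0, 1.65, 3.3):
    print(v, "V ->", quantize(v))
# 0.0 V -> 0, 1.65 V -> 127, 3.3 V -> 255
```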
This entire process is also environment-friendly: the CCD or CMOS sensor is fixed in
place and can go on taking photos for the lifetime of the camera. There's no need to
wind film between two spools either, which helps minimize the number of moving parts.
Laser printers have a number of advantages over the rival inkjet technology. They
produce much better quality black text documents than inkjets, and they tend to be
designed more for the long haul - that is, they turn out more pages per month at a lower
cost per page than inkjets. So, if it's an office workhorse that's required, the laser printer
may be the best option. Another factor of importance to both the home and business user
is the handling of envelopes, card and other non-regular media, where lasers once again
have the edge over inkjets.
PLOTTER
Apart from printed output, quite a few applications require good quality drawings and
graphs. For this purpose plotters are used. There are two types of plotters - drum
plotters and flatbed plotters. These plotters use either pens or inkjet technology to
produce the drawing.
[Figure: media types plotted by complexity (vertical axis) against size of object
(horizontal axis): text, image, high-quality sound, and video, in increasing order.]
The demands made by the richer media increase in terms of the complexity of the
machinery needed to exploit them and the techniques used to store and transmit the
information. Video is the most complex and, potentially, the most massive of all the
media.
2.2.13 Communication Devices
Many multimedia applications are developed in workgroups comprising instructional
designers, writers, graphic artists, programmers, and musicians located in the same office
space or building. The workgroup members’ computers typically are connected on a local
area network (LAN). The client’s computers, however, may be thousands of miles
distant, requiring other methods for good communication.
Communication among workgroup members and with the client is essential to the
effective and accurate completion of the project. Our Postal Service mail delivery is too
slow to keep pace with most projects; courier services are better. And when you need it
immediately, an Internet connection is required. If your client and you are both connected
to the Internet, a combination of communication by e-mail and by FTP (File Transfer
Protocol) may be the most cost-effective and efficient solution for both creative
development and project management.
In the workplace, use quality equipment and software for your communications setup.
The cost, in both time and money, of stable and fast networking will be returned to
you.
2.2.14 Modems
A modem is a computer peripheral that allows you to connect and communicate with other
computers via telephone lines.
Modems allow us to combine the power of the computer with the global reach of the
telephone system. Because ordinary telephone lines cannot carry digital information, a
modem changes the digital data from your computer into analog data, a format that can
be carried by telephone lines. In a similar manner, the modem receiving the call
changes the analog signal back into digital data that the computer can digest. This
shift of digital data into analog data and back again, which allows two computers to
speak with one another, is called modulation/demodulation, and is how the modem
received its name.
2.2.15 Cable Modems
You will agree that accessing the Internet through normal modems, be it from office or
home, is not the easiest thing in the world to do. One of the alternatives that is
becoming popular among home users is the cable modem.
Cable Internet means accessing the Internet through the same cable that brings TV
channels like Star, Zee, and MTV into your homes. The two main devices which make
this possible are a Cable Modem Termination System (CMTS), which has to be installed
at your cablewallah or broadband service provider’s end, and a cable modem, which has
to be installed in your home. Simply put, a cable modem is a device that lets you access
the Internet through your Cable TV (CATV) network.
2.5) Summary
The principal input devices are keyboards, computer mice, touch screens, magnetic ink
and optical character recognition, pen-based instruments, digital scanners, sensors,
and voice input. The principal output devices are video display terminals, printers,
plotters, voice output devices and microfilm. The principal forms of secondary storage
are magnetic disk, optical disk, and magnetic tape. Magnetic disk permits direct
access to specific records. Optical disks can store vast amounts of data compactly.
CD-ROM disk systems can only be read from, but rewritable optical disk systems are
becoming available.
2.6) Terminal exercises
Compare and Contrast the following in the context of Multimedia:
CD-ROM and DVD
Magnetic storage and Optical Storage
Light pen and Touch screen
Printer and Plotter
Joystick and mouse
Flat bed scanner and Hand held scanner
Bar code reader and MICR
2.8) Assignments
A company has a number of sites, each of which has a PC equipped LAN, with the LANs
connected over a WAN. The company wishes to use the network for videoconferencing
between the PCs on the LANs. The company has asked you, as an independent technical
consultant, to advise them on the issues below. Prepare a report covering the following:
a. Definitions and descriptions of the network technology.
b. Compression issues in video conferencing.
c. Co-operative working
d. Advantages and disadvantages of a communication system of this type within this
environment.
a) Some functions take too much processor time to be viable without using some
form of hardware acceleration. Give one example of input and one example of
output that a DSP might make possible.
b) What benefits are there for someone with a general-purpose multimedia DSP as a
core part of a computer system, other than combining many functions? What
disadvantages might there be?
c) The Bluetooth communications system is designed as a system to help replace the
cables that join devices together. However, it offers much more functionality than
a simple cable.
Briefly describe the Bluetooth communications system.
Give one persuasive reason for adopting Bluetooth technology.
Give one major disadvantage of Bluetooth technology.
UDF: A file system that is optimized to handle large data sizes, and to minimize the
changes necessary when a file is added or deleted. Windows 98 and higher versions can
write to and read from the UDF file system, without any special driver support. This is
the best format for DVD-RW drives where the data size goes into GBs.
Simulation: This is the process of testing the recording process without actually doing
so. The writing is done by sending the data to the recorder, whilst keeping the laser off.
This way, the blank CD remains intact. Simulation is used to check if the recording will
be successful.
CD Extra: A CD format that combines a music CD with a regular data CD-ROM. These
discs have audio tracks in the first part, and computer data in the second. You can play
the discs on music players, and also use them as regular data CDs.
Overburning: A technique by which you burn more data onto a CD than its specified
capacity. For overburning, the CD recorder should support this feature, and the writing
should be done in the 'Disc-at-once' mode. How much data can be overburned depends
on the medium and the recorder.
DDCD: Short for Double Density CD, it allows data capacity to be doubled to 1.3GB.
DDCD is made possible by a few simple modifications to the regular CD format, such as
miniaturization of the track pitch and minimum pit length. DDCD is mainly targeted at
the mass data backup segment. The media is not backward compatible.
DAO: In the Disc At Once method, the laser is never turned off, and hence no gap or
delay is necessary. This mode is much faster than Track At Once, but the recorder should
support it.
SAO: Session at Once is the mode used in multi-session CD-writing. In this mode, a
session is written without turning off the laser just as in DAO, but the session is closed
only after the data and the ToC are written into the Program Memory Area.
UNIT-III
3.0) Introduction
Multimedia tools and products depend on the ability of the computer to capture, process
and present text, pictures, audio and video. Multimedia applications offer significant
challenges to computer architecture in terms of its ability to:
Input various data formats, including converting analog data to digital format.
Input may be as simple as typing in characters from a keyboard, scanning images
or capturing analog audio and video.
Manipulate and edit multimedia data.
Output multimedia data, including converting the digital version into an analog
format suitable for the end user.
This includes placing graphics on the screen, sending sound to a speaker, or
generating television signals for display on large-screen projection systems.
3.1) Objective
To study the various types of Multimedia Authoring Tools and their applications.
3.2) Content
3.2.1 Text Editing
Using Text in Multimedia:
We generally use text for titles, headlines, menus, navigation, and content.
a. Designing with text:
From a designer's perspective, the choice of font size and the number of headlines we
place on a particular screen must be related both to the complexity of the message and to
its venue. If messages are part of an interactive project or web site, then we can pack a
great deal of text information onto the screen before it becomes busy. We must strike a
balance between too little text and too much text. If we are providing public-speaking
support, the text will be keyed to a live presentation where the text accents the main
message. Here we use large fonts and few words, with lots of space.
b. Choosing Text Fonts:
Selection of a font is an important and difficult task. Listed below are a few design
suggestions:
Choose the most legible font instead of a decorative font for small text.
Try to use the minimum number of faces. We can vary the weight and size of the
typeface whenever needed.
Make sure the line spacing is pleasing to the eye.
Keep the proportion between the letters proper. Try experimenting with colors
and effects.
c. Menus for Navigation:
An interactive multimedia project or website typically consists of a body of information
or content through which a user navigates by pressing a key, clicking a mouse, or
pressing a touch screen. The simplest menu consists of text lists of topics. Text gives
users perceptual cues about their location within the body of the content.
d. Buttons for interaction:
In multimedia, buttons are objects that make things happen when they are clicked. They
manifest properties such as highlighting or other visual or sound effects to indicate that
we have hit the target.
The automatic button-making tools supplied with multimedia and HTML page authoring
systems are useful, but when we create our own text buttons, they offer little opportunity
to fine-tune the look of the text. Character and word wrap, highlighting, and inverting
are automatically applied to the buttons as needed by the authoring system.
Before using a font, we must make sure it is recognized by the computer’s Operating
system. If we want to use other fonts than those installed with the Operating System, then
we need to install them first.
In most authoring platforms, it is easy to make your own buttons from bitmaps of drawn
objects. In a message passing authoring system where we can script activity when the
mouse button is up or down over an object, we can quickly replace one bitmap with
another highlighted or colored version of the bitmap.
e. Fields for reading:
Fields are useful when the very purpose of your multimedia project or website is to
display very large blocks of text.
Try to print only a few paragraphs of text per page.
Use a font which is easy to read rather than a decorative, illegible font.
Try to display whole paragraphs on the screen.
Avoid breaks where users must go back and forth between pages to read the entire
content.
f. Portrait versus Landscape:
Traditional hard-copy printed documents use the taller-than-wide orientation, which is
not readable on a typical monitor with its wider-than-tall aspect ratio. The taller-than-
wide orientation used for printed documents is called portrait, while the wider-than-tall
orientation normal to monitors is called landscape.
g. HTML Documents:
The standard document format used for pages on the web is called HyperText Markup
Language (HTML). In an HTML document we can specify typefaces, size, colors, and
other properties by "marking up" the text in the document with tags.
The mail merge feature merges a main document with a data source. The main
document stores the original text with data areas at appropriate places. These data areas
are successively filled with the information in the data source, and the merged document
is printed.
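The merge step described above can be sketched with a string template standing in for the main document; the field names and records here are invented for the example.

```python
from string import Template

# The main document: original text with data areas ($name, $city) in place
main_document = Template("Dear $name,\nYour parcel will be sent to $city.\n")

# The data source: one record per merged copy
data_source = [
    {"name": "Asha", "city": "Chennai"},
    {"name": "Ravi", "city": "Madurai"},
]

# Fill the data areas from each record in turn to produce merged documents
merged = [main_document.substitute(record) for record in data_source]
```

Each entry in `merged` is one filled-in copy of the main document, ready to print.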
[Toolbox icons: to draw a straight line with the selected line width; to draw a curved
line; to draw a rectangle or square; to draw a polygon with the selected fill style.]
Regardless of whether the 3D form is derived from polygonal or from a NURBS object,
the rendered image is usually capable of being smoothed by rendering software.
XYZ: Virtually all 3D modelling software relies on a virtual world built on the concept of
space defined as positions within an axial coordinate system called XYZ. In most systems
the horizontal coordinates of space are defined along the X- and Y-axes, with the Z-axis
providing the vertical dimension. A small number of 3D programs, however, transpose
the Y- and Z-axes, adding to the confusion when exporting models from one program to
another. In a typical system (let us assume one where Z implies verticality) there is a
point at which the X-, Y- and Z-axes intersect. This point is usually referred to as the
origin point and is described as X0, Y0, Z0. Using a map metaphor, the X-axis runs from
west to east and the Y-axis runs from south to north. The X- and Y-axes therefore form a
plan, like a flat map. Points in space which have a positive X component are to the right
of the Y-axis, and those with a negative component are to the left.
Similarly, points in space which have a positive Y component are to the north of the X-
axis, and those with a negative component are to the south. It follows that points which
have a positive Z component are above the XY plane, and those with
a negative component are below it.
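Moving a model between a Z-up package and a Y-up package amounts to swapping coordinates. The sketch below shows one common convention (swap Y and Z, negating one axis so the system stays right-handed); any particular export tool may use a different sign convention, so treat this as an assumption for illustration.

```python
def z_up_to_y_up(point):
    """Convert (x, y, z) from a Z-is-vertical system to a Y-is-vertical one.

    The old Z (vertical) becomes Y; the old Y (north) becomes -Z so the
    coordinate system stays right-handed.
    """
    x, y, z = point
    return (x, z, -y)

def y_up_to_z_up(point):
    """Inverse conversion, back to the Z-is-vertical system."""
    x, y, z = point
    return (x, -z, y)
```

Converting a point one way and then back returns the original coordinates, which is a quick sanity check when wiring up an exporter.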
abstract shapes, or whatever, that have been photographed in sequence, appear to come to
life.
As film is projected at 24 frames per second, drawn animation, as we have just described
it, technically requires 24 drawings for each second of film, that is, 1440 drawings for
every minute – and even more for animation made on video. In practice, animation that
does not require seamlessly smooth movement can be shot ‘on 2s’, which means that two
frames for each drawing, or whatever, are captured rather than just one. This gives an
effective frame rate of 12 frames per second for film, or 15 for NTSC video.
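The drawing-count arithmetic above is easy to capture in a small helper (the function name is ours):

```python
def drawings_needed(seconds, fps=24, shot_on=1):
    """Distinct drawings needed for a stretch of animation.

    shot_on=1: every frame gets its own drawing ('on 1s').
    shot_on=2: each drawing is held for two frames ('on 2s').
    """
    total_frames = seconds * fps
    return -(-total_frames // shot_on)   # ceiling division
```

One minute of film on 1s needs `drawings_needed(60) == 1440` drawings; shot on 2s it needs 720, an effective 12 drawings per second; NTSC video at 30 fps shot on 2s works out to 15 drawings per second.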
If animation is made solely from drawings or paintings on paper, every aspect of the
image has to be repeated for every single frame that is shot. In an effort to reduce the
enormous amount of labor this process involves, as well as in a continuing search for new
expressive possibilities, many other techniques of animation have been devised. The most
well known and widely used – at least until very recently – has been cell animation. In
this method of working, those elements in a scene that might move – Tom and Jerry, for
example – are drawn on sheets of transparent material known as 'cells', and laid over a
background – Jerry's living room, perhaps – drawn separately. In producing a
sequence, only the moving elements on the cells need to be redrawn for each frame; the
fixed part of the scene need only be made once. Many cells might be overlaid together,
with changes being made to different ones between different frames to achieve a greater
complexity in the scene. To take the approach further, the background can be drawn on a
long sheet, extending well beyond the bounds of a single frame, and moved between
shots behind the cells, to produce an effect of travelling through a scene. The concepts
and techniques of traditional cell animation have proved particularly suitable for transfer
to the digital realm.
Largely because of the huge influence of the Walt Disney studios, where cell
animation was refined to a high degree, with the use of multi-plane set-ups that added a
sense of three-dimensionality to the work, cell animation has dominated the popular
perception of animation. It was used in nearly all the major cartoon series, from Popeye
to The Simpsons and beyond, as well as in many full-length feature films, from Mickey
Mouse to Aladdin.
make the frames of an animated sequence. Thus, to create animation, you begin by
creating the background layer in the image for the first frame. Next, on separate layers,
you create the elements that will move; you may want to use additional static layers in
between these moving layers if you need to create an illusion of depth. After saving the
first frame, you begin the next by pasting the background layer from the first; then, you
add the other layers, incorporating the changes that are needed for your animation. In
this way, you do not need to recreate the static elements of each frame, not even using a
script.
Where the motion in animation is simple, it may only be necessary to reposition or
transform the images on some of the layers. To take a simple example, suppose we wish
to animate the movement of a planet across a background of stars. The first frame could
consist of a background layer containing the star field, and a foreground layer with an
image of our planet. To create the next frame, we would copy the two layers, and then,
using the move tool, displace the planet's image a small amount. By continuing in this
way, we could produce a sequence in which the planet moved across the background. (If
we did not want the planet to move in a straight line, it would be necessary to rotate the
image as well as displace it, to keep it tangential to the motion path). Simple motion of
this sort is ripe for automation, and we will see in a later section how After Effects can be
used to animate Photoshop layers semi-automatically.
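The repeated copy-and-displace step for the planet layer can be expressed as a short loop; the function name and tuple layout below are invented for the sketch.

```python
def layer_positions(start, step, frames):
    """Per-frame positions of a moving layer.

    start: (x, y) position of the layer in the first frame.
    step:  displacement applied between consecutive frames
           (the 'move tool' offset).
    """
    x, y = start
    positions = []
    for _ in range(frames):
        positions.append((x, y))
        x += step[0]
        y += step[1]
    return positions
```

Each returned position is where the planet layer would be pasted for that frame; the star-field background layer is simply copied unchanged.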
Using layers as the digital equivalent of cells saves the animator time but, as we have
described it, it does not affect the way in which the completed animation is stored: each
frame is saved as an image file, and the sequence will later be transformed into a
QuickTime movie, an animated GIF, or any other conventional representation. Yet there
is clearly a great deal of redundancy in a sequence whose frames are all built out of the
same set of elements. Possibly, when the sequence comes to be compressed, the
redundant information will be squeezed out, but compressing after the event is unlikely to
be as successful as storing the sequence in a form that exploits its redundancy in the first
place. In general terms, this would mean storing a single copy of all the static layers and
all the objects (that is, the non-transparent parts) on the other layers, together with a
description of how the moving elements are transformed between frames.
This form of animation, based on moving objects, is called sprite animation, with the
objects being referred to as sprites. Slightly more sophisticated motion can be achieved
by associating a set of images, sometimes called faces, with each sprite. This would be
suitable to create a 'walk cycle' for a humanoid character. By advancing the position of
the sprite and cycling through the faces, the character can be made to walk.
QuickTime supports sprite tracks, which store animation in the form of a 'key frame
sample' followed by some 'override samples'. The key frame sample contains the images
for all the faces of all the sprites used in the animation, and values for the
spatial properties (position, orientation, visibility, and so on) of each sprite, as well as an
indication of which face is to be displayed. The override samples contain no image data,
only new values for the properties of any sprites that have changed in any way. They can
therefore be very small. QuickTime sprite tracks can be combined with ordinary video
and sound tracks in a movie.
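The key-frame-plus-overrides idea can be modelled in a few lines. This is not QuickTime's actual file format, just a much-simplified sketch of the principle, with invented names and properties.

```python
def expand_sprite_track(key_frame_sample, override_samples):
    """Rebuild the full per-frame sprite state from a key frame sample
    followed by override samples carrying only changed properties."""
    state = dict(key_frame_sample)       # frame 0: the complete property set
    frames = [dict(state)]
    for override in override_samples:    # later frames: small deltas only
        state.update(override)
        frames.append(dict(state))
    return frames

# One sprite: full state once, then two tiny override samples
key = {"x": 0, "y": 0, "face": 0, "visible": True}
frames = expand_sprite_track(key, [{"x": 10}, {"x": 20, "face": 1}])
```

Because the overrides store only what changed, the track stays small even for long sequences, which is exactly the redundancy argument made above.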
We have described sprite animation as a way of storing an animated sequence, but it is
often used in a different way. Instead of storing the changes to the properties of the
sprites, the changed values can be generated dynamically by a program. Simple motion
sequences that can be described algorithmically can be held in an even more compact
form, therefore, but more interestingly, the computation of sprite properties can be made
to depend upon external events such as mouse movements and other user input. In other
words, the user can control the movement and appearance of animated objects. This way
of using sprites has been extensively used in two-dimensional computer games, but it can
also be used to provide a dynamic form of interaction in other contexts, for example
simulations.
Key Frame Animation:
During the 1930s and 1940s, the large American cartoon producers, led by Walt Disney,
developed a mass-production approach to animation. Central to this development was the
division of labor. Just as Henry Ford's assembly-line approach to manufacturing motor
cars relied on breaking down complex tasks into small repetitive sub-tasks that could be
carried out by relatively unskilled workers, so Disney's approach to manufacturing
dwarfs relied on breaking down the production of a sequence of drawings into sub-tasks,
some of which, at least, could be performed by relatively unskilled staff. Disney was less
successful at de-skilling animation than Ford was at de-skilling manufacture – character
design, concept art, storyboards, tests, and some of the animation always had to be done
by experienced and talented artists. But when it came to the production of the final cells
for a film, the role of trained animators was largely confined to the creation of key
frames.
We have met this expression already, in the context of video compression and also in
connection with QuickTime sprite tracks. There, key frames were those which were
stored in their entirety, while the frames in between them were stored as differences
only. In animation, the meaning has a slightly different twist: key frames are typically
drawn by a 'chief animator' to provide the pose and detailed characteristics of characters
at important points in the animation. Usually, key frames occur at the extremes of a
movement – the beginning and end of a walk, the top and bottom of a fall, and so on –
which determine more or less entirely what happens in between, but they may be used
for any point which marks a significant change. 'In-betweeners' can then draw the
intermediate frames almost mechanically. Each chief animator could have several in-
betweeners working with him to multiply his productivity. (In addition, the tedious task
of transferring drawings to cells and coloring them in was also delegated to subordinates.)
In-betweening (which is what in-betweeners do) resembles what mathematicians call
interpolation: the calculation of values of a function lying in between known points.
Interpolation is something that computer programs are very good at, provided the values
to be computed and the relationship between them can be expressed numerically. Hand-
drawn animation is too complex to be reduced to numbers in a way that is amenable to
computer processing. But this does not prevent people trying, because of the potential
labor savings.
All digital images are represented numerically, in a sense, but the numerical
representation of vector images is much simpler than that of bitmapped images, making
them more amenable to numerical interpolation. To be more precise, the transformations
that can be applied to vector shapes - translation, rotation, scaling, reflection and shearing
– are arithmetical operations that can be interpolated. Thus, movement that consists of a
combination of these operations can be generated by a process of numerical in-
betweening starting from a pair of key frames.
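Numerical in-betweening of this kind is just interpolation over the transform parameters. A minimal linear version is sketched below; the function name and parameter dictionaries are invented for the example (real tools also offer non-linear easing curves).

```python
def in_between(key_a, key_b, steps):
    """Linearly interpolate numeric transform parameters (e.g. x, y,
    rotation) between two key frames, returning only the frames that
    lie strictly between them."""
    frames = []
    for i in range(1, steps + 1):
        t = i / (steps + 1)              # fraction of the way from a to b
        frames.append({name: key_a[name] + t * (key_b[name] - key_a[name])
                       for name in key_a})
    return frames
```

With three in-between steps, the middle frame lands exactly halfway between the two keys, which is what a human in-betweener would draw for uniform motion.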
Authoring tools help in the preparation of texts. Generally, they are facilities provided in
association with word processing, desktop publishing, and document management
systems to aid the author of documents. They typically include an on-line dictionary and
thesaurus, spell-checking, grammar-checking, and style-checking, and facilities for
structuring, integrating and linking documents. Authoring tools can also be provided
which enable users to author better-quality documents in languages other than their own,
which they understand to a degree but in which they could not normally compose a
document.
3.2.7 Spreadsheets
Spreadsheets have become the backbone of many users' information management
systems. A spreadsheet organizes its data in columns and rows. Calculations are made
based on user-defined formulas for, say, analyzing the survival rates of seedlings, or the
production of glass bottles in Karnataka, or a household's consumption of energy in ergs
per capita. Spreadsheets can answer what-if questions, build complex graphs and charts,
and calculate a bottom line. From Kashmir to Kanyakumari, spreadsheets have become a
ubiquitous computer tool.
Most spreadsheet applications provide excellent chart-making routines; some allow you
to build a series of several charts into an animation or movie, so you can dramatically
show change over time or under varying conditions. Full-color curves that demonstrate
changing annual sales, robbery and assault statistics, or birth rates may have a far greater
effect on an audience than will a column of numbers.
The latest spreadsheets let you attach special notes and drawings, including full
multimedia display of sounds, pictures, animations, and video clips.
Lotus 1-2-3:
Lotus 1-2-3 lets you rearrange graph elements by clicking and dragging and using a menu
to access data objects from the outside world. You can place bitmapped pictures and
other objects such as QuickTime movies anywhere in your spreadsheet. There is a
complete color drawing package for placing lines, circles, arrows, and special text on top
of the spreadsheet to help illustrate its content.
Excel:
You can embed objects from many applications into Excel, too. The figure shows an
Excel document with an embedded Windows WAV sound, an image from Photoshop and
a video movie. The Insert menu shown in the figure can be used to add a picture to the
spreadsheet directly from a digital camera.
Spreadsheet Components:
A spreadsheet is a software tool that lets one enter, calculate, manipulate, and
analyze sets of numbers. The various components of a spreadsheet are discussed below:
Worksheet:
It is a grid of cells made up of horizontal rows and vertical columns. The number of rows
and columns varies from package to package.
A Lotus 1-2-3 worksheet contains 8,192 rows and 256 columns. An MS-Excel worksheet
contains 65,536 rows and 256 columns.
Each intersection of a row and column is called a cell wherein data can be stored.
Row number:
Data in a worksheet are divided into rows and columns. Each row is given a number that
identifies it. Row numbers start from 1 and go as 2, 3, 4 …
Column Letter:
Each column is given a letter that identifies it. Column letters start from A and go as B, C
… Z, AA, AB, AC … AZ, BA … BZ, and so on up to IV. That is, columns are lettered
A-Z, AA-AZ, BA-BZ, …, IA-IV.
Cells:
A cell is a unit of the worksheet where numbers, descriptive text, formulas, etc. can be
placed. A cell is formed by the intersection of a row and a column, and this intersection
gives the cell a unique address, i.e., the combination of the column letter and the row
number.
For instance, if a column F intersects row 3, then the cell formed out of it gets an address
F3. Similarly, C5 identifies the cell in column C, row 5.
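The lettering scheme is effectively a base-26 encoding, so a cell address can be computed from a 1-based column index and a row number; the helper names below are ours.

```python
def column_letter(index):
    """1-based column index to its letter(s): 1 -> 'A', 27 -> 'AA',
    256 -> 'IV' (the last column of a 256-column worksheet)."""
    letters = ""
    while index > 0:
        index, rem = divmod(index - 1, 26)   # shift to 0-based base-26 digit
        letters = chr(ord("A") + rem) + letters
    return letters

def cell_address(column, row):
    """Combine the column letter and row number into a cell address."""
    return f"{column_letter(column)}{row}"
```

For instance, column 6 and row 3 give "F3", matching the example in the text.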
[Screenshot: a worksheet window, with the cell, menu bar, toolbar, and address bar
labelled.]
A range is specified by giving the addresses of the first cell and the last cell of the range.
For instance, a range starting from F7 and extending to G14 would be written as F7..G14
in Lotus 1-2-3 (.. is the range indicator in Lotus 1-2-3) and F7:G14 in MS-Excel (: is the
range indicator in MS-Excel).
Status Bar and Control Panel:
Apart from other things, a worksheet also has a status bar and a control panel. The status
bar is an area where the status (the particular program condition) is displayed.
For instance, a status indicator CALC means that the worksheet area needs to be
recalculated. Date and Time are also displayed in status bar. Also, if an error takes place,
the error messages are displayed at the status bar. A control panel is the area wherein
information regarding the current cell, mode indicators and the commands are displayed.
In Windows based spreadsheets the commands are replaced by pull-down menus and
toolbars. A toolbar is a bar having icons for various commands.
Workbook:
A spreadsheet allows you to combine more than one worksheet in a file. A file having
multiple worksheets is known as a workbook.
DATABASES
Database Management System (DBMS):
A database management system is a set of programs that create and use a database
consisting of one or more files. A DBMS allows you to create and access multiple,
interrelated files.
Characteristics of Database:
1. Data can be stored and accessed in more than one file.
2. Data does not have to be duplicated. For example, names and addresses stored in one
file need not be entered again in the charges file. This lack of duplication saves storage
space and reduces the possibility of inconsistencies.
3. Files must share a common field containing unique attributes.
4. Different users can have different views of database.
5. Records can be found as they can be with a record management program but the data
can be pulled from one or more files.
6. Queries can be posed that require that data be drawn from one or more files.
7. Reports can be produced using data from one or more files.
8. Data is entered and used without the user knowing how it is physically stored on the
disk or how queries locate it in the files in which it is stored.
9. Data can be shared by more than one application.
10. The data can be shared by more than one user.
11. Data is independent of the application program that manipulates it.
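Several of these characteristics (a shared common field, no duplicated address data, queries drawing from more than one file) can be seen in a tiny SQLite example; the table and field names here are invented for illustration.

```python
import sqlite3

# Two "files" (tables) share the common field customer_id, so name and
# city data live once in customers and are never repeated in charges.
db = sqlite3.connect(":memory:")
db.executescript("""
    CREATE TABLE customers (customer_id INTEGER PRIMARY KEY,
                            name TEXT, city TEXT);
    CREATE TABLE charges   (charge_id INTEGER PRIMARY KEY,
                            customer_id INTEGER, amount REAL);
    INSERT INTO customers VALUES (1, 'Asha', 'Chennai'), (2, 'Ravi', 'Madurai');
    INSERT INTO charges   VALUES (10, 1, 250.0), (11, 1, 75.0), (12, 2, 40.0);
""")

# A query that pulls data from both files at once via the common field
rows = db.execute("""
    SELECT c.name, SUM(ch.amount)
    FROM customers c JOIN charges ch ON c.customer_id = ch.customer_id
    GROUP BY c.name ORDER BY c.name
""").fetchall()
```

The user never sees how the rows are laid out on disk or how the query locates them; that is exactly the data independence listed above.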
a movie be played, or the next slide be presented, etc). Not just that, you can even have
content advice related to the content template that you have created.
MS PowerPoint leaves the rest far behind when it comes to the task of creating slide
shows and adding custom animation. Just right-click on any object (text, images, etc) and
you get all the options you could ever want. You can specify animation on the page, and
all with Auto Preview (if you have the option checked). The number of animation
effects, templates, slide designs and layouts has increased considerably. Also, with
the same right-click option you can specify a certain action for the object, if clicked or on
a mouse over. The action you define can, for instance, be playing a sound or running a
program.
In terms of sheer features, there is nothing that can beat PowerPoint. Freelance
Graphics and Lotus 1-2-3 offer more simplicity and optimal utilization of workspace, so
if you like your presentations simple, then definitely check these out.
Presentation             Freelance Graphics               PowerPoint 2002
Wizard and templates     Provision of content templates   Quite extensive templates
Transition and effects   Customizable sequencing          Task Pane provides a number of
                         of objects                       transition and animation effects
Slide show               Rehearse Timings, add pointers   Narration, Add Buttons, Live Broadcast
Multimedia authoring tools are the glue that holds the data together in order to inform,
educate or entertain. Authoring tools offer two basic features:
First, the ability to create and edit a product.
Second, the presentation scheme for delivering the product.
The selection of an authoring tool, based on how it works and what it does, can be one of
the most critical elements of multimedia product development.
Microsoft PowerPoint:
Start → Programs → MS Office → MS PowerPoint
[Screenshot: the PowerPoint window, with the ruler and slide area labelled.]
[Screenshots: the Slide Show and Slide Sorter views.]
controlling the execution of the product. These features include:
Page: The page is an instant in time for a multimedia presentation. The page acts as a
container for other features or objects such as text or graphic content or controls such
as buttons. It is usually capable of various transitional effects such as fade in/out on
the screen.
Control: Controls enable the user to interact with the product. Controls may be used
to manage or direct sequences of events, to collect data, or to manage data objects.
There are three general categories of controls:
Navigation: Buttons, hotspots, and hypertext. Many tools have a fixed set of objects,
such as standard gray buttons, that can be modified. Hot spots are used over graphic
objects to allow the user to interact with data. Hypertext is usually indicated by
highlighting, through changing the color of the text or underlining it. When the user
selects the highlighted text, an action is initiated, such as presenting a definition or
related information.
Input: Text, checkboxes, radio buttons, combo/list boxes. These are used for
collecting information from the user or controlling a sequence of the information
presentation. Data collected from these may be directed to a file or other device for
storage or transmission.
Media Controls: Apply to managing the presentation of fonts, graphics, audio, and
video. Specific features for each of these include:
Fonts: select typeface, size, bold, italic
Graphics: zoom, scroll, mark up/edit
Audio/video: play, stop, pause, rewind, volume, zoom
Data:
Depending on the authoring tool, data may be stored internally as part of the
application program or as external files that are accessed during program execution.
Internal storage of data usually means faster data presentation and easier transport of the
application and the data. On the other hand, external data storage usually means smaller
executable programs and faster program startup. Another advantage of external data
storage is that the data can be modified to change the application without having to use
the authoring tool.
Data types include:
Text
Graphic
Audio
Video
Live audio/video
Database
Execution: Execution is the process that controls the presentation and sequencing of
the application. It can take on one or more of the following mechanisms:
Linear sequenced: the user pages through the presentation in a fixed order.
Program controlled: code or scripting within the application controls program
execution. Execution, in some cases, may be fixed or altered by the user.
Temporal controlled: a timer that initiates events in a predetermined order controls
the presentation. The presentation may pause and wait for the user to provide input
prior to continuing.
Interactively controlled: the presentation waits for the user to select a function, such
as pressing a button, before continuing.
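Three of these execution mechanisms can be sketched as a toy dispatcher. The names are hypothetical, and temporal control, which would add a timer, is omitted to keep the sketch short.

```python
def run_presentation(pages, mode, script=None, get_input=None):
    """Return the order in which pages are shown under each mechanism."""
    if mode == "linear":            # linear sequenced: fixed page order
        return list(pages)
    if mode == "program":           # program controlled: a script decides
        return [pages[i] for i in script]
    if mode == "interactive":       # interactive: wait for user choices
        shown = []
        while True:
            choice = get_input()    # e.g. which button was pressed
            if choice is None:      # user ended the show
                return shown
            shown.append(pages[choice])
```

In interactive mode `get_input` stands in for a button press; supplying a canned sequence of choices lets the mechanism be tested without a user.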
Here are some features typical of image-editing applications and of interest to multimedia
developers:
Multiple windows provide views of more than one image at a time.
Templates that are "filled in" by the developer are reused within the application or across
similar applications. Templates often contain data and controls that are common to a
number of pages within the application.
For example, a template could contain a standard page title, background color, and
positioning of recurring buttons that can be reused over and over during development.
3.5) Summary
Authoring is the process of assembling the content into the multimedia software
development environment following the map provided by the storyboards. It is the
focal point of the software design and storyboarding / content efforts.
3. You should understand two key principles when you use multimedia: our eyes
are attracted to light, and our eyes are attracted to motion.
4. When you present with multimedia you are more than just a performer. You are
a director and producer.
5. When each slide first appears on the screen you should know, by design, the first
place their eyes go and, after a few seconds, the second place their eyes go.
6. The most common error in creating computer slides is putting too much on the slide.
The most important element of your presentation is you. When presenting with a
computer projector be sure that the audience can see you and hear you because it
is you that they buy – not your slides.
7. Murphy is the one who said that whatever can go wrong will go wrong. Murphy
loves technology. Some even believe that Murphy invented computers.
8. You can be prepared for Murphy to visit during your PowerPoint presentation by
having a "saver line". This allows you to save your presentation by relaxing the
audience and assuring them that you are in control.
9. A great PowerPoint show will not save a bad presenter. But a superior presenter
can save the audience from a bad PowerPoint show.
3.8) Assignments
1. Explain the statement "Multimedia productions are tailored to specifically meet
the user's needs" with an example.
2. Compared with printed text, hypermedia dramatically changes the way
information is characterised, accessed and utilised. This highly rich media
experience presents problems and benefits for its users and developers.
Demonstrate your depth of knowledge on the subject by identifying and
discussing issues you find relevant to the above statement.
You must follow the standard methodology for development of multimedia. You must fill
up various templates required for the development of Multimedia.
Present a prototype of your design using MS-Office tools. Your prototype should include
at least 10 slides with suitable graphics, simple animation and a few audio clips. (Audio
clips need not be recorded professionally; they can be recorded using the built-in
microphone of a simple multimedia computer.)
1. Using your knowledge of Multimedia and CBL provide justification for this
statement and suggest a scenario where this concept could be demonstrable.
2. Using this scenario, propose learning tasks that would benefit from a CBT (or
your chosen variant) approach. Explain the added value of using Multimedia for
the tasks.
UNIT-IV
4.0) Introduction
The ability to access information stored as different media depends on the availability of
standard data formats that are understood by most applications in use. Proprietary
formats are typically more compact than open standard formats. Although there are
many proprietary formats for each media type, they are often not suitable for use in
defining multimedia building blocks, since the ability to access the information contained
in those data files depends very much on the availability of filters for the respective
applications.
Basic Multimedia Building Blocks are
Text
Sound
Images
Color
Animation
Video
4.1) Objective
To improve the full digital content chain, covering creation, acquisition, management and
production, through effective multimedia technologies enabling multi-channel, cross-
platform access to media, entertainment and leisure content in the form of film, music,
games, news and the like.
4.2) Content
4.2.1 Text
Using text for communication is a relatively recent human development, one that is
popular now but actually began about 6,000 years ago. Nowadays, text and the ability to
read are a doorway to power and knowledge.
Every single word can be interpreted in different ways, so it is important to cultivate
accuracy and conciseness in the specific words that we choose.
Multimedia authors weave words, symbols, sounds and images, and then blend text into
the mix, to create integrated tools and interfaces for acquiring, displaying and
disseminating messages and data using computers.
The most common tool for manipulating text is a word processor, and most, like
Microsoft Word, have built-in wizards to create interesting-looking text, as shown in the
following figure. However, the problem with this technique is transferring the text into a
multimedia or Web page because of the proprietary nature of the image format.
The term type size (sometimes called font size) is the next level of specification. The
type size is the distance from the top of the capital letters to the bottom of the
"descenders" in letters such as "g" and "y." Type sizes are generally expressed in
"points." On paper, one "point" is 0.0138 inch, or about 1/72 of an inch. However, due to
different size monitors, in the electronic world this term is generally meaningful only as a
method of comparison. Below is an example of how type size is determined.
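The point arithmetic above can be checked directly. The pixel conversion assumes a nominal display resolution in dots per inch, which is exactly why point size is only a method of comparison on screen.

```python
# One point is about 1/72 inch (the text rounds this to 0.0138 inch).
POINT_IN_INCHES = 1 / 72

def points_to_inches(points):
    """Physical height of a type size on paper."""
    return points * POINT_IN_INCHES

def points_to_pixels(points, dpi):
    """Pixel height of a type size at an assumed monitor resolution.
    The same 12-point text occupies different pixel heights at 72 dpi
    and 96 dpi, so point size is only comparative on screen."""
    return points * dpi / 72
```

For example, 72-point type is one inch tall on paper, while 12-point type is 12 pixels tall on a 72 dpi screen but 16 pixels tall at 96 dpi.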
The term font style refers to the particular style of textual characters. Styles are usually
standard, bold and italic. Below is an example of font style.
The term font is the most specific unit of text. Fonts are a particular typeface, size and
style. Helvetica 12-point bold would be an example of a font. The typeface (Helvetica), size
(12 points), and style (bold) are all included. Below is an example of two different fonts.
Another way to categorize text has to do with its historical origin. A typeface can be
either Serif or Sans Serif. Serif characters have a little "flag" or decoration at the end of
the letter stroke. It could be said that Serif characters are embellished. Below is an
example of the letter "T" as a Serif character. Notice the "flag" or decoration.
Sans Serif (sans is French for "without") characters don't have these decorations. The
example below shows several characters as Serif characters and as Sans Serif characters.
Notice the more basic look to the Sans Serif characters.
Certain rules about the use of text for print don't work well in the electronic world. For
example, in print it has mostly been assumed that headlines are best in Sans Serif and that
body text is best when done in a Serif typeface. On a computer screen or television set it's
often best to go with something simple and bold. Therefore, it is generally recommended
that multimedia/web developers reverse the print rule. In other words, because body text
is often small, it is best to use a Sans Serif typeface in a bold style. Only when the
characters will be large (such as headings) is it a good idea to use Serif characters.
Text on the Web
Now that we understand some of the terms and rules for text, let's look at how text is
used on the World Wide Web. The WWW is often called a user-definable interface. In
other words, much of what a web browser shows on the computer screen can be changed
to fit the particular tastes of the user. Software such as Netscape allows the user to
determine what font is used for general text. So, the basic textual information you write
as part of a web presentation is something you, as a web creator, don't determine.
However, text on a web page is often part of an inline image. In other words, you
might utilize text as part of an inline graphic image which is displayed. In this case you,
the web creator, do determine what font the user sees. You control everything from font
to color to drop shadow.
Below is an example of text that is used as part of a graphic image. In this case it's the
graphic that's used at the top of most of these pages.
Notice that the image contains a particular font, different colors and includes a drop
shadow. This is different from the text you're reading now. This text is something the
user can control. The image above is something you control.
Computers and Text
About fonts and faces
A typeface is a family of graphic characters that usually includes many type sizes and
styles. A font is a collection of characters of a single size and style belonging to a
particular typeface family. The usual styles are boldface and italic. Type sizes are
expressed in points (1 point ≈ 0.0138 inch, about 1/72 inch). A few examples of typefaces are:
Bookman Old Style
Times New Roman
Courier
Times 12-point italic is a font. The term "font" is commonly used where "typeface"
would be more correct.
Font size alone differs between typefaces, so it cannot be used to describe the exact
height or width of a character. Tools usually add space automatically to provide
appropriate line spacing, or "leading". Leading can be adjusted in most programs on both
Mac and Windows; we can usually find this either as a fine-tuning adjustment or in the
paragraph menu. For best results, we need to experiment and find out. With a
font-editing program like Fontographer (from Macromedia), adjustments can also be
made along the horizontal axis of text: the character metrics of each character and the
kerning of character pairs can be altered. Character metrics are the general measurements
applied to individual characters. Kerning is the spacing between character pairs.
Parts of a typeface design.
[Figure: parts of a typeface design - arm, ascender, ear, bracketed serif, stem, counter,
loop, tail, terminal, serifs.]
Typefaces can be described in many ways. One simple way of categorizing them is "serif"
versus "sans serif". The type either has a serif or it does not! ("Sans" is French for
"without".) The serif is the little decoration at the end of a letter stroke.
Example (each typeface was shown at 12-point and 24-point size):
Serif fonts: Times, Courier
Sans-serif fonts: Helvetica, MS Reference Sans Serif
Script fonts: Lucida, Monotype Corsiva
Display fonts: Matisse ITC, Westminster
Symbol fonts: (symbol examples not reproducible in plain text)
Font Editing And Design Tools
Font Technology
When a computer displays a character on a monitor or prints it on a laser, inkjet, or dot-
matrix printer, the character is nothing more than a collection of dots in an invisible grid.
Bit-mapped fonts store characters in this way, with each pixel represented as a black or
white bit in a matrix. A bit-mapped font usually looks fine on screen in the intended point
size but doesn't look smooth when printed on a high-resolution printer or enlarged on screen.
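A minimal sketch of why bit-mapped fonts degrade when enlarged: nearest-neighbour scaling simply repeats each pixel, so every stair step grows with the glyph. The 5x5 "o" bitmap below is invented for illustration.

```python
# An invented 5x5 bitmap glyph: '#' is an inked pixel, '.' is background.
GLYPH_O = [
    ".###.",
    "#...#",
    "#...#",
    "#...#",
    ".###.",
]

def enlarge(glyph, factor):
    """Nearest-neighbour scaling: each bit becomes a factor-by-factor block,
    so curves keep (and magnify) their stair-stepped, jagged edges."""
    out = []
    for row in glyph:
        wide = "".join(ch * factor for ch in row)  # widen each pixel
        out.extend([wide] * factor)                # repeat each row
    return out

big = enlarge(GLYPH_O, 2)  # the 10x10 result is just as blocky, only bigger
```

A scalable outline font avoids this by re-rasterising the character's mathematical outline at the new size instead of repeating pixels.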
Most computer systems now use scalable outline fonts to represent type in memory until
it is displayed or printed. A scalable font represents each character as an outline that can
be scaled (increased or decreased in size) without distortion. Curves and lines are smooth
and don't have stair-stepped, jagged edges when they're resized. The outline is stored
inside the computer or printer as a series of mathematical statements about the positions
of points and the shapes of the lines connecting those points.
Downloadable fonts (soft fonts) are stored in the computer system (not the printer) and
downloaded to the printer only when needed. These fonts usually have matching screen
fonts and are easily moved to different computer systems. Most importantly, you can use
the same downloadable font on many printer models.
The problem with most of the programs listed below is that they don't deal natively with
TrueType. Instead, as they load the font, they convert the outlines into PostScript-style
cubic Bézier curves, and discard all the hints. For high quality fonts at low resolution, this
is a tragic loss.
TrueType hinting takes the form of little programs attached to each glyph, and it is
admittedly hard, in fact virtually impossible, to work out automatically which program
instructions can remain, and which must change when a glyph is modified. However, it's
a shame that you can't leave alone any glyphs you don't modify: these programs affect
every glyph.
Failing the introduction of affordable native TrueType editing tools, if you're making
TrueType fonts from scratch, or converting PostScript Type 1 fonts to TrueType, then the
following programs are certainly worth a look. But be warned of the serious problem that
arises when editing existing TrueType fonts: total loss of all hints (followed by
semi-automatic, almost always inferior, hint regeneration); and of the not-so-serious
problem: conversion of quadratic curves into cubics and back again, with probable loss
of precision.
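The quadratic-to-cubic conversion mentioned above can be sketched directly. Elevating a quadratic Bézier (TrueType's curve form) to a cubic (PostScript's) is exact; it is the reverse, cubic-to-quadratic, direction that loses precision. This is an illustrative sketch, not a font library's API; points are plain (x, y) tuples.

```python
def quadratic_to_cubic(p0, p1, p2):
    """Exact degree elevation of a quadratic Bezier to a cubic:
    c1 = p0 + 2/3 (p1 - p0), c2 = p2 + 2/3 (p1 - p2)."""
    def lerp(a, b, t):
        return (a[0] + (b[0] - a[0]) * t, a[1] + (b[1] - a[1]) * t)
    return p0, lerp(p0, p1, 2 / 3), lerp(p2, p1, 2 / 3), p2

def cubic_point(c, t):
    """Evaluate a cubic Bezier at parameter t (Bernstein form)."""
    (x0, y0), (x1, y1), (x2, y2), (x3, y3) = c
    mt = 1 - t
    x = mt**3 * x0 + 3 * mt**2 * t * x1 + 3 * mt * t**2 * x2 + t**3 * x3
    y = mt**3 * y0 + 3 * mt**2 * t * y1 + 3 * mt * t**2 * y2 + t**3 * y3
    return x, y
```

The elevated cubic traces exactly the same curve as the original quadratic, which is why TrueType-to-Type 1 conversion is lossless for the curve shapes themselves (the hints are what get lost).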
MacroMedia Fontographer
Fontographer is a very old program that Macromedia took over from the original
developers, Altsys, and didn't do very much with. It was last updated in 1996 and is now
effectively dead in the water. It's still available for Mac OS 9 and
[Figure caption: BitFonter gives FontLab pixel-editing facilities.]
Letraset FontStudio
Written by Ernie Brock, Harold Grey and others at Ares (long before it was taken over
by Adobe), it is no longer marketed.
From version 3.0 onwards, the Corel behemoth has occasionally been used to create and
edit TrueType and Type 1 fonts. But most people find it lacks too many features and
move to a dedicated font editor if they are serious about type design.
Drawing a Font
There are several font formats in use these days, but the two main ones are Adobe's
Type 1 PostScript format and TrueType, invented by Apple.
Type 1 fonts are generally used for print in conjunction with DTP programs like Quark
XPress and Adobe InDesign. They come in two parts, a vector outline (printer) font and
bitmap (screen) font. The bitmap fonts are only used on older systems to give a rough
approximation of what the printed result will look like. On modern systems, the screen
display is generated on-the-fly from the vector font.
More common these days, TrueType fonts are outline (vector) fonts and don't require a
separate bitmap screen font; the screen display is rendered from the vector outline.
In principle, TrueType fonts are capable of higher quality than the older PostScript fonts
because they have more 'sample' points. In reality, the quality of the 'cut' is generally
better for PostScript fonts because of the skill of the designers. For pixel fonts, TrueType
is the obvious choice.
Whichever program you choose to use, making a pixel font is only a matter of
transferring your penciled design to corresponding square shapes in the font editor. If a
number of squares run together, you can draw a single rectangle instead. But before you
start, you should make a grid of guidelines.
If you count the number of pixels from the top of the highest character to the bottom of
the lowest, that is the 'pixel height' of your font. You can add a few extra pixels of line
spacing above and below if you like.
Draw horizontal guidelines for each row of pixels and, similarly, enough vertical
guidelines to allow for the widest character - generally the per-mille (‰) symbol. If
you then switch on snap-to-guides, your drawing with the rectangle tool will lock on
to perfect pixels.
You need to add a certain amount of space at one or both sides of each character. This is
called the 'sidebearing' and depends on the size and style of the font - but it must always
be an exact number of pixels, so there is no possibility of subtle kerning. It's best to
start with one pixel of space on the left and one on the right of each character. It's not
until you do a test setting that you can decide whether to increase or reduce the space on
one side or both. It's something that you learn from experience.
When you have drawn all the characters, then you can generate the font.
Generating Font
When it comes to saving the actual font, you have a number of options to consider.
Firstly, will it be used on a Mac or a PC? Fonts are different between Mac OS and
Windows – both in the file format and in the order of the characters. Characters with
ASCII values between 32 and 127 are common to both platforms but the characters from
128 up are in different 'encodings' or character order, and different again according to
your language.
The most common encoding for Macintosh computers is 'MacRoman' and the Windows
'standard' encoding provides slots for characters from 128 to 255 with some slots
'reserved' for control characters.
More recent 'Unicode' fonts have slots for thousands of characters so that you can include
characters or 'glyphs' for multiple languages if you like.
When you have chosen the appropriate encoding, you can generate the font file.
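The encoding split described above can be demonstrated with Python's standard codecs; `mac_roman` and `cp1252` stand in here for the MacRoman and Windows "standard" encodings.

```python
# Characters in the shared ASCII range (32-127) produce identical bytes
# under both encodings...
ascii_text = "Hello, font!"
assert ascii_text.encode("mac_roman") == ascii_text.encode("cp1252")

# ...but characters above 127 sit in different slots in each encoding.
e_acute = "é"
mac_byte = e_acute.encode("mac_roman")  # b"\x8e" in MacRoman
win_byte = e_acute.encode("cp1252")     # b"\xe9" in Windows cp1252
assert mac_byte != win_byte
```

This is why a font (or a text file) built around one platform's 8-bit encoding shows the wrong accented characters on the other platform, and why Unicode fonts, with slots for thousands of glyphs, sidestep the problem.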
Testing and fine tuning
Install your font in the usual way and try some test settings. Copy and paste a chunk of
text from anywhere into your paint program. Select it and apply the font you have
designed, whatever you called it when you saved it. Make sure that the document
resolution is 72 pixels per inch and set the font size to the same size as the number of
vertical pixels that you used for your grid. Make sure anti-aliasing is turned off and that
no stretching or kerning is in operation.
All being well, you should have a crisp, sharp font with no blurring anywhere. You now
have to look at the character shapes and the spacing between them and make a note of
anything that needs to be fixed.
When you have done that, uninstall the font before going any further and put it away
somewhere out of the way – or better still, trash it completely. Go back to the font editor
and make the adjustments you noted down, regenerate the font, reinstall it and try again.
You will probably have to go round this sequence of events a number of times until all
the quirks are ironed out.
Hypermedia and hypertext
Text becomes hypertext with the addition of links, which connect separate locations
within a collection of hypertext documents. Links are active: using some simple gesture,
usually a mouse click, a user can follow a link to read the hypertext it points to. To
make this happen, a piece of software called a browser is required. Usually, when you
follow a link, the browser remembers where you came from, so that you can backtrack if
you need to. The World Wide Web is an example of a (distributed) hypertext system and
a Web browser is a particular sort of a browser.
Adobe’s Portable Document Format (PDF) supports hypertext linkage. PDF links are
uni-directional, but not quite simple, since a restricted form of regional link is provided;
each end of a link is a rectangular area on a single page. Since Acrobat Distiller and
PDFWriter make it possible to convert just about any text document to PDF, and links
can then be added using Acrobat Exchange, this means that hypertext links can be added
to any document, however it was originally prepared.
Acrobat Reader may display links in a variety of styles. The default representation
consists of a rectangle outlining the region that is the source of the link. When a user
clicks within this rectangle it is highlighted – by default, colors within the region are
inverted – and then the viewer displays the page containing the destination region. The
actual destination, always a rectangular region, is highlighted. When the link is created,
the magnification to be used when the destination is displayed can be specified, which
makes it possible to zoom in to the destination region as the link is followed.
The links that can be embedded in a Web page composed in HTML are simple and uni-
directional. What distinguishes them from the links of earlier hypertext systems is the use
of Uniform Resource Locators (URLs) to identify destinations. The URL syntax provides
a general mechanism for specifying the information required to access a resource over a
network. For Web pages, three pieces of information are required: the protocol to use
when transferring the data, which is always HTTP, a domain name identifying a network
host running a server using that protocol, and a path describing the whereabouts on the
host of the page or a script that can be run to generate it dynamically. The basic syntax
will be familiar: every Web page URL begins with the prefix http://, identifying the
HTTP protocol. Next is the domain name, a sequence of sub-names separated by dots,
for example, www.chennaionline.com.
After the domain name in a URL comes the path, giving the location of the page on the
host identified by the preceding domain name. A path looks very much like a UNIX
pathname: it consists of a /, followed by an arbitrary number of segments separated by
/ characters. These segments identify components within some hierarchical naming
scheme. In practice, they will usually be the names of directories in a hierarchical
directory tree, but this does not mean that the path part of a URL is the same as the
pathname of a file on the host – not even after the minor cosmetic transformations
necessary for operating systems that use a character other than / to separate pathname
components. For security and other reasons, URL paths are usually resolved relative to
some directory other than the root of the entire directory tree.
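The three pieces of a Web page URL described above can be pulled apart with Python's standard `urllib.parse`; the path in the example URL is invented for illustration.

```python
from urllib.parse import urlparse

# Protocol, domain name, and path, as described in the text.
url = "http://www.chennaionline.com/news/today.html"  # path is hypothetical
parts = urlparse(url)

scheme = parts.scheme   # the protocol prefix before "://"
domain = parts.netloc   # sub-names separated by dots
path = parts.path       # /-separated segments after the domain
```

Note that, just as the text warns, `parts.path` is a location within the server's naming scheme, not necessarily the pathname of a file on the host.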
Hypermedia
A Brief History of Hypermedia
Definition of the Hypertext:
"Information is linked and cross-referenced in many different ways and is widely
available to end users" (Hooper, 1990).
"Hypertext means a database in which information (text) has been organised nonlinearly.
The database consists of nodes and links between nodes" (Multisilta, 1995).
A Hypermedia Timeline
Ted Nelson: Xanadu has been Ted Nelson's dream since the early '60s: all the world's
literature in one publicly accessible global online system (analogy: you can today get a
telephone link from anywhere to anywhere, so why not from any text to any other?).
Every reference to a text would lead to royalties being paid automatically to the author. It
includes the use of full versioning (claimed to be horrifyingly complex), "hot links"
(called transclusions) and zippered texts (e.g. parallel texts for translations or
annotations). A few of the ideas in Xanadu are now implemented in the WWW.
Doug Engelbart: Can be called the father of hypertext. He invented the mouse and is
also the creator of one of the first hypermedia systems, NLS/Augment.
Definitions of Concepts
A link is defined by source and destination nodes, and by an anchor in the source node.
The destination of a link can be a file (so-called string-to-lexia link) or a string in a file
(string-to-string link).
With a string-to-lexia link it is not possible to reference a certain part of a file. This
kind of link can make hypermedia easily navigable, especially if the destination nodes are
"short" documents. String-to-string links permit the destination to be a string in a
file, but this kind of link requires more planning in the design process.
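A minimal sketch of these definitions as a data structure; the field names and file names are invented for illustration.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Link:
    """A link is defined by source and destination nodes, plus an anchor
    in the source node, as described above."""
    source_node: str                    # file containing the anchor
    anchor: str                         # string in the source that carries the link
    dest_node: str                      # destination file
    dest_string: Optional[str] = None   # None => whole file is the destination

    @property
    def kind(self):
        # string-to-lexia: destination is a file; string-to-string: a string in a file
        return "string-to-lexia" if self.dest_string is None else "string-to-string"

lexia_link = Link("intro.html", "matrix", "matrix.html")
string_link = Link("intro.html", "matrix", "algebra.html", "Definition 2.1")
```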
String-to-lexia links also support implicit linking. Implicit links are generated by the
hypermedia software at run time, for example referential links from a concept to the
definition of the concept. To some extent, hypermedia software should generate links
from the inflected forms of concepts (e.g. a link from both "matrix" and "matrices"). In
contrast to implicit links, explicit links are created by the hypermedia author.
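Implicit link generation can be sketched as follows. The suffix-stripping rule is deliberately naive and the concept table is invented, but it shows how "matrix" and "matrices" can be routed to the same destination without the author creating an explicit link.

```python
# Hypothetical concept table: concept -> definition node.
concepts = {"matrix": "matrix.html"}

def normalise(word):
    """Crudely map an inflected form back to its base concept.
    A real system would use proper stemming."""
    w = word.lower()
    if w.endswith("ices"):      # matrices -> matrix
        w = w[:-4] + "ix"
    elif w.endswith("s"):
        w = w[:-1]
    return w

def implicit_links(text):
    """Return (word, destination) pairs generated at run time from the
    concept table - no explicit link was authored for any of them."""
    links = []
    for raw in text.split():
        word = raw.strip(".,")
        dest = concepts.get(normalise(word))
        if dest:
            links.append((word, dest))
    return links
```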
The nodes and links form a network structure in the database. Hypermedia is a database,
which contains pictures, digitised videos, sound and animations in addition to text.
Document Markup Languages
Why do we need document markup? Bryan defines markup as: "Markup is the term used
to describe codes added to electronically prepared text to define the structure of the text
or the format in which it is to appear." There can be two types of markup: specific
markup and generalized markup. Specific markup describes the format of the document,
whereas generalized markup describes the structure of the document (headings, citations,
etc.). For example, Rich Text Format (RTF) is a specific markup language, and TeX,
LaTeX, SGML and HTML are generalized markup languages.
Standard Generalized Markup Language (SGML)
SGML is an international standard (ISO 8879) for document markup. An SGML
document contains a document type definition (DTD) and a set of elements that are
defined in the DTD. Each element has a name and can be used as a tag in an SGML
document.
HyperText Markup Language (HTML)
HTML is an SGML-based markup language for WWW documents. HTML is actually a
DTD, a set of definitions of how to interpret HTML tags.
HyTime
HyTime is an international standard for hypermedia documents. It is based on SGML, but
it can reference data in almost any format. Only the hypertext link information is
required to be in SGML format.
TeX and LaTeX
TeX and LaTeX are also general markup languages in the sense that we only describe
document structures with LaTeX macros. The definition of the macros can be later
changed and the document could be formatted differently.
There is a HyperTeX variant that adds limited hypertext capability by implementing
special keywords so that it supports, for example, URLs. A DVI viewer is then used to
display the files containing URLs, and the viewer can call a WWW browser to follow a URL.
Rich Text Format (RTF)
The difference between SGML and RTF is that SGML describes the structure of a
document, whereas RTF describes mainly the physical characteristics of the text (text
face, size, etc.). However, RTF also includes certain tags that describe document
structure. The author can define a set of styles for the document (heading 1, heading 3,
abstract, etc.) that are written into the beginning of the RTF file and have a special tag in
the RTF markup. An RTF file contains all text formatting, pictures and formulas and it is
a standard defined by Microsoft.
OpenMath
OpenMath consortium is an international group of researchers designing a protocol for
exchanging mathematical information between applications. For example, a general-
purpose computer algebra system could call a specific purpose application to execute an
algorithm implemented only in this application. OpenMath tries to preserve semantic
information in addition to the structural information of the formula. For example, TeX
describes only the visual appearance of a formula, not the semantic structure of the
formula. A similar visual representation of mathematical formulas has been planned for
SGML. MathLink is a communications protocol for exchanging Mathematica expressions
and data between Mathematica and external applications. The difference between
MathLink and OpenMath is that MathLink does not define the semantic information of
a formula.
OpenMath will include SGML compatibility, so that OpenMath objects can also be
included in SGML documents.
Hypermedia Models
Hypertext Abstract Machine
Hypermedia is divided into:
1) User interface,
2) hypermedia application (client),
3) HAM: the hypermedia "engine" (server) that retrieves link and node information from
the database and passes it to the hypermedia application,
4) database.
Dexter Hypertext Reference Model
The purpose of the Dexter model is to provide standard hypertext terminology coupled
with a formal model of the important abstractions commonly found in a wide range of
hypertext systems. The Dexter model is actually a formal specification of a generic
hypermedia system, written in Z.
Run-time layer: presentation of the hypertext, user interaction, dynamics
|
(Presentation specifications)
|
Storage layer: database containing network of nodes and links
|
(Anchoring)
|
Within-component layer: the contents/structure of nodes.
Components are the Dexter model's counterpart to the nodes, frames, cards and links of
other systems.
Hypermedia Systems
Intermedia
A well-known hypermedia system is Intermedia, developed at Brown University's
Institute for Research in Information and Scholarship (IRIS) between 1985 and 1990.
Intermedia is a multiuser hypermedia framework where hypermedia functionality is
handled at the system level. Intermedia presents the user with a graphical file-system
browser and a set of applications that can handle text, graphics, timelines, animations
and videodisc data. There is also a browser for link information, a set of linguistic tools
and the ability to create and traverse links. Link information is isolated from the
documents and is saved in a separate database. The start and end positions of a link are
called anchors.
World Wide Web
The World Wide Web (WWW) is a global hypermedia system on the Internet. It can be
described as a wide-area hypermedia information-retrieval initiative aiming to give
universal access to a large universe of documents. It was originally developed at CERN
for sharing research and ideas effectively throughout the organization. Through the
WWW it is possible to deliver hypertext, graphics, animation and sound between
different computer environments. To use the WWW the user needs a browser, for
example NCSA Mosaic, and a set of viewers that are used to display complex graphics,
animation and sound. NCSA Mosaic is currently available on X Windows, Windows
and Macintosh.
MetaCard
Interestingly, HyperCard stacks can be imported into MetaCard. However, there are
some incompatibilities between HyperTalk and MetaTalk, so advanced stacks don't run
without modifications.
LinksWare
LinksWare is commercial hypermedia authoring software for the Macintosh that can
create hypertext links between text files created with different word processors. LinksWare
uses a set of translators to convert files to its own format (Claris XTND system). This can
make the opening of a file very slow. LinksWare can open files that contain mathematical
text, but files may be formatted differently than in the original document; in particular,
formulae do not appear to have proper line heights. In addition, it cannot create links to other
applications. However, it can create links to AppleScript command files that can open an
application and execute commands for that application.
Hyper-G
Hyper-G is the name of a hypermedia project currently under development at the IICM.
Like other hypermedia undertakings, Hyper-G will offer facilities to access a diversity of
databases with very heterogeneous information (from textual data, to vector graphics and
digitized pictures, courseware and software, digitized speech and sound, synthesized
music and speech, and digitized movie-clips). Like other hypermedia-systems it will
allow browsing, searching, hyperlinking, and annotation. Like no other big hypermedia
system known today, it will also support automatic indexing and link-generation, a
variety of automatic consistency-checks, a built-in messaging and computer conferencing
system, a special editor allowing the incorporation of animation sequences,
question/answer dialogues, and a number of unorthodox man-machine interfaces. Further,
and maybe most important of all, it is built on the basis of already existing large
databases: hundreds of CAI lessons, a large general-purpose encyclopaedia in
hypermedia form, a number of smaller special-purpose lexica, a database of thousands of
pictures, some pieces of digitized sound and movie-clips, and links to other databases in
other networks. A number of smaller spin-off applications are surfacing, which are mainly
pursued by IMMIS and have led to research in the area of computerisation of various
aspects of museums.
Designing Hypermedia
Important questions in designing hypermedia are:
Converting linear text to hypertext
Text format conversions
Dividing the text into nodes
Link structures, automatic generation of links
Are nodes in a database or are they separate files on the file system
Client-server or standalone
Text indexing is a well-known problem area, and results from it can be used to study
automatic generation of links. In principle, a document can be analysed semantically
(with the help of AI), statistically, or lexically (by computing the occurrences of words).
The problem with semantic analysis is that natural language is not easy for a computer to
understand. In lexical analysis, typical problems are the conflation of word forms and the
recognition of phrases (for example, the Finnish forms matriisi, matriisin, matriisilla
should conflate, but not jälki, jälkeen).
Solutions:
Conflation algorithm
Stemming algorithm
Stopword list
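These solutions can be sketched in a few lines of code; the suffix list and stopwords below are illustrative assumptions, not a complete stemming algorithm such as Porter's:

```python
# A toy suffix-stripping stemmer with a stopword list, in the spirit of the
# conflation/stemming/stopword solutions listed above. The suffixes and
# stopwords are illustrative only; real systems use much richer rules.

SUFFIXES = ["ing", "ed", "es", "s"]      # tried longest-first
STOPWORDS = {"the", "a", "an", "of", "and", "to"}

def stem(word: str) -> str:
    """Strip the first matching suffix so inflected forms conflate."""
    for suffix in sorted(SUFFIXES, key=len, reverse=True):
        if word.endswith(suffix) and len(word) > len(suffix) + 2:
            return word[: -len(suffix)]
    return word

def index_terms(text: str) -> list[str]:
    """Lowercase, drop stopwords, and conflate the remaining words."""
    words = text.lower().split()
    return [stem(w) for w in words if w not in STOPWORDS]

terms = index_terms("Linking the linked links of a document")
```

Here the three inflected forms of "link" conflate to a single index term, while the stopwords are discarded before indexing.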
Hypermedia Applications
Hypermedia is applied in many areas, especially in education and technical
documentation.
Future Directions of Hypermedia
There is a trend for hypertext features to appear in ordinary applications such as word
processors and spreadsheets. This is called hypertext functionality within an application.
Good examples of this are Microsoft Internet Assistant, MathBrowser and MatSyma.
Eventually, this will lead to system software containing support for hypertext features:
nodes, links, and browsing.
4.2.2 Sound
Physically, sound is vibration of some medium. The word is also used to describe the
sensation of this vibration when received by the ear.
Sound is created when some object vibrates. Consider a guitar string that has been
plucked. The string is stretched in one direction and then the elasticity of the string
forces it back to its original straight position. The momentum of the string carries it
past the original position in the opposite direction. This back and forth motion
continues until the energy has dissipated. As the string moves, it pushes air molecules
in front of it and compresses them together, creating a high-pressure area. Also, air
molecules behind the string are drawn into the space vacated by the string, creating a
low-pressure area.
Air itself is elastic. The high-pressure area pushes the molecules next to it and this sends
a wave of compression outward from the string. As the string reverses direction, a low-
pressure area is sent out following the high-pressure area. This flow of high and low-
pressure areas continues to move away from the vibrating string at a high velocity,
spreading out in all directions. When these sound waves reach an object, that object is
also forced to vibrate in a pattern closely resembling the vibration of the string that
originally created the sound. Thus, the sound is transmitted from the source to the
listener's ear.
How do we represent it?
Sound can be represented as a graph of the air pressure created by the vibrating object
over time. By convention, high pressure is represented by positive numbers (above the
centerline) and low pressure by negative numbers. The centerline itself represents normal
air pressure with no sound.
An object vibrating more rapidly produces waves that are shorter and closer together.
Slower vibration results in longer waves spaced farther apart. This change in
vibration speed is perceived as pitch; faster vibrations are higher pitches and slower
vibrations are lower pitches.
An object that vibrates more forcefully will produce more pressure and will result in
waves that are "taller" on the graph. This is perceived as loudness.
Converting sound energy
Energy can be converted from one form to another. Electricity can be converted to light,
chemical energy can be converted to heat, and so forth. Sound waves are energy and they
can be converted to different forms as well.
Consider a thin membrane attached to a coil of wire suspended in a magnetic field. When
sound waves make contact with this membrane, it will vibrate. This vibration moves the
coil of wire back and forth through the magnetic field and this produces a movement of
electrons in the wire. This movement is electricity and the pressure (voltage) of the
electricity will be proportional to the pressure of the sound wave. Such a device is called
a microphone and it is commonly used to pick up sound waves and convert them to
electrical energy.
A similar device can be used to convert this electrical energy back into sound by having
the electricity flow through another coil and making this coil move in another magnetic
field. The coil is attached to a membrane that will vibrate against the air and set up sound
waves similar to the original sound. This device is called a loudspeaker.
Typically, the electrical energy put out by a microphone is insufficient to move a
loudspeaker enough to be heard, so an additional device is used to amplify the level of
the signal. These three devices (microphone, amplifier, and loudspeaker) can be used to
make a quiet sound loud enough to be heard over a large room, or to carry sound to
distant locations.
Recording
It is often desired to preserve sound and recreate it later. Processes for recording sound
waves for later playback were developed to accomplish this.
Analog methods
Early methods for preserving sound were analog. This means that some pattern was
created by the sound that contained a form similar to the sound wave. The electrical
waveform from the microphone was used to vibrate a cutting device or create a magnetic
pattern. The goal was to create a recording of the original sound in some medium that
follows a pattern analogous to the original sound wave.
Analog media
The earliest device used for recording sound was the phonograph. This device created a
groove in the medium that had a shape modulated by the sound wave. Phonograph
records are played back by having a needle follow the groove. The needle will vibrate in
the same pattern that was used to cut the groove, and this vibration could be amplified
and output through loudspeakers. Another common analog recording device is the tape
recorder. A thin strip of plastic (tape) coated with a magnetic material is passed by an
electromagnet that is modulated by the sound wave. This creates magnetic patterns on the
tape that may be reproduced by reversing the process; the tape is drawn past a coil and
the changing magnetic patterns induce an electric current, which is then amplified. These
recording techniques have several problems. At each step, sound to microphone,
microphone to electricity, electricity to magnetism or groove, and then back to sound
afterwards, errors can accumulate. The microphone diaphragm may not vibrate in exactly
the same pattern as the sound wave. There may be outside interference in the cables. But
the majority of the problems are in the recording medium itself. If the groove of the
record is cut too slowly, then there is not enough room to accurately represent the detail
of the higher frequencies. If the groove is cut too fast, then noise from the record rubbing
against the needle becomes apparent. There may be spots in the plastic records that are
malformed. Dust can accumulate and cause a hissing noise. Similar problems also exist
for magnetic tape. Even in the best possible circumstances, the quality of the sound
degrades with each step since the physical media used to preserve it contains flaws and
imperfections. If the recording is copied to new media (such as for editing or
reproduction/marketing) then these flaws accumulate.
Digital methods
Since most of the problems with recording sound accurately are due to the medium used
for analog recording, methods were sought to prevent these problems. The single largest
problem with analog recording is that the information being recorded must be represented
as an analog to the original sound wave. What is needed is a different way to represent
the sound; a way that doesn't suffer from the flaws of the recording media.
With the advent of the computer age, it became quite easy to represent waveform
information as a series of numbers rather than as an analogous pattern. The voltage level of
the waveform could be measured, and "samples" taken every so often. These
measurements were numerical (digital) and these numbers could be converted to pulses
that could be more reliably recorded than analog waveforms. To play back the digitally
recorded sound, the numbers are read back from the recording medium and the voltage of
an electrical signal is varied in precisely the same way as the original signal.
The numbers representing the strength of the waveform are set up on a scale from -32,768
to 32,767. This gives a fine enough gradation that listeners can't tell the difference
between digitally recorded sound and analog recordings. This range of numbers can be
represented in binary (base 2) with 16 bits (a bit is 0 or 1, off or on). Since a bit is either
on or off, it is much more reliable to read it from a tape than an analog signal. Even a
large amount of noise or imperfections on the medium won't interfere with distinguishing
between a 1 and a 0. This avoids the single biggest source of poor quality that had been
present with analog recording.
Since sound waves vibrate rapidly, the waveform must also be sampled very rapidly. The
more often the waveform is sampled, the closer the reproduction will be to the original.
Of course, as the waveform is sampled more often, more data must be stored. A sample
rate must be chosen that is fast enough to accurately represent the sound without resulting
in more data than necessary. Experimentation determined that sampling just over twice
the rate of the highest frequency to be reproduced is sufficient. Humans can hear a
maximum frequency of 20,000 cycles per second (20,000 Hertz). A standard sample rate
of 44,100 Hertz was chosen.
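This sampling scheme can be illustrated with a short sketch that measures a sine wave 44,100 times per second and quantizes each measurement to a 16-bit value; the 440 Hz tone and function name are chosen for illustration:

```python
import math

SAMPLE_RATE = 44_100          # samples per second (the CD standard)
MAX_AMPLITUDE = 32_767        # largest positive value of a 16-bit sample

def sample_sine(freq_hz: float, duration_s: float) -> list[int]:
    """Measure a sine wave's pressure at regular intervals and quantize
    each measurement to a 16-bit integer, as a digital recorder does."""
    n = int(SAMPLE_RATE * duration_s)
    return [
        round(MAX_AMPLITUDE * math.sin(2 * math.pi * freq_hz * i / SAMPLE_RATE))
        for i in range(n)
    ]

samples = sample_sine(440.0, 0.01)   # 10 ms of an A-440 tone -> 441 samples
```

Because each sample is one of a finite set of integers, reading it back is a matter of distinguishing discrete values rather than reproducing an exact analog voltage.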
Digital Audio
Preparing Digital Audio Files:
Preparing digital audio files is fairly straightforward. If you have analog source material -
music or sound effects that you have recorded on analog media such as cassette tapes -
the first step is to digitize the analog material by recording it onto computer-readable
digital media. In most cases, this just means playing the sound from one device (such as a
tape recorder) right into your computer, using appropriate audio digitizing software.
You want to focus on two crucial aspects of preparing digital audio files:
Balancing the need for sound quality against your available RAM and hard disk
resources.
Setting proper recording levels to get a good, clean recording.
Setting Proper Recording Levels:
A distorted recording sounds terrible. If the signal you feed into your computer is too
“hot” to handle, the result will be an unpleasant crackling or background ripping noise.
Conversely, recordings that are made at too low a level are often unusable because the
amount of sound recorded does not sufficiently exceed the residual noise levels of the
recording process itself. The trick is to set the right levels when you record.
Any good piece of digital audio recording and editing software will display digital meters
to let you know how loud your sound is. Watch the meters closely during recording, and
you’ll never have a problem. Unlike analog meters, which usually have a 0 setting
somewhere in the middle and extend up into ranges like +5, +8, or even higher, digital
meters peak out at the top of their scale. To avoid distortion, do not cross over this limit;
if this happens, lower your volume and try again. Try to keep peak levels between –3 and
–10 dB. Any time you go over the peak, whether you can hear it or not, you introduce
distortion into the recording.
Editing Digital Recordings:
Once a recording has been made, it will almost certainly need to be edited. Apple’s
QuickTime Pro, shown in Figure, provides a basic look at a sound file’s structure and
allows for primitive playback and editing. A more serious sound editor is Sonic
Foundry’s Sound Forge for Windows, shown in Figure with its special effects menu.
With this tool you can create professional sound tracks and digital mixes.
Trimming:
Removing “dead air” or blank space from the front of a recording, and any unnecessary
extra time off the end, is your first sound-editing task. Trimming even a few seconds here
and there might make a big difference in your file size. Trimming is typically
accomplished by selecting the unwanted portion in a graphic representation of the
recording, or by entering precise start and stop times in the editing software.
Resampling or Downsampling:
If you have recorded and edited your sound at a 16-bit resolution and a high sampling
rate, but are using lower rates and resolutions in your project, you must resample or
downsample the file. The process will save considerable disk space.
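A naive sketch of downsampling, assuming a simple halving of the sample rate by averaging adjacent pairs; production tools also apply a low-pass filter first to avoid aliasing:

```python
def downsample_by_two(samples: list[int]) -> list[int]:
    """Halve the sample rate, e.g. 44,100 Hz -> 22,050 Hz, by averaging
    adjacent pairs of samples. Real resamplers low-pass filter first."""
    return [
        (samples[i] + samples[i + 1]) // 2
        for i in range(0, len(samples) - 1, 2)
    ]

out = downsample_by_two([0, 10, 20, 30, 40, 50])
```

The output holds half as many samples, which is exactly where the disk-space saving comes from.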
audience. This type of file, like the Microsoft WAV format, can be larger than
other types of audio files, so it is best used for short sound clips to keep
download times reasonable. When using AU files on a Web page, you will need to use an
outside program (such as the shareware package Cool Edit or the commercially
available program Sound Forge) to load the WAV file you have recorded and
convert it to AU format.
Real Audio (RA) - Real Audio and Real Video are formats that were developed
by an independent company, Progressive Networks. They were among the first audio
and video formats specifically designed for Internet use and have gained widespread
use. The files are played using a free “plug-in” that is available from the company's
Web site, and you can obtain a free encoder to convert files from WAV to
Real Audio format. Real Audio and Real Video files use compression schemes to
make the files small for Internet use - some loss of quality is apparent, but you can
control what type of compression you wish to use for the file, depending on the
target audience and the recorded contents.
MP3 (MPEG Audio) – MPEG audio is a standard for high-quality audio and
video files that has gained widespread use. MPEG (Motion Picture Experts Group)
is a family of compression formats and audio/video storage formats developed by
a cooperative effort under the joint direction of the International Standards
Organization (ISO) and the International Electrotechnical Commission (IEC).
MP3 offers good quality with a smaller file size. CD-quality MP3 sound files can
still be large, however, and users will have to have an audio player on their
computer capable of handling the format. MP3 has found a receptive audience
among musicians on the Net, who offer samples of their works at sites across the
Internet, and the music industry has been lobbying for standards in the format
that will prevent illegal copying of copyrighted works.
4.2.3 Images
Still Images
Major components of most multimedia productions are images, as opposed to movie or
sound files. There are basically two types in use: bitmapped and vector graphics.
Computer Generated Images:
These are images generated by 3D modeling and rendering programs. The artist builds a
model in a virtual world within the computer. Sophisticated rendering algorithms within
the software then produce high-quality rendered images, which can approach
photorealism. Because of the unlimited level of control over image quality, lighting and
creative input, many images traditionally obtained by photography are now created by
computer artists using virtual computer graphics software and techniques. Again, these
images can be further modified or “retouched” in graphics editing programs.
Vector based graphics
Vector graphics are also referred to as object-oriented graphics and consist of
mathematically exact curves (basic geometrical forms: primitives) and lines that are
described as “vectors”.
For example, in order to define a line, only three pieces of information are necessary:
The coordinates of the starting point (origin),
The coordinates of the end point (vector top) and
The line width (attribute).
Similarly, the center coordinates (origin), the radius (vector top) and the line width
(attribute) suffice for a circle. Such graphics are scalable: simply by modifying the
measurements, the object’s size can be varied arbitrarily. Vector graphics are also
resolution independent; they are displayed or printed according to the resolution settings
of the respective printer or computer screen.
Advantages of vector graphics:
Vector graphics require less memory space than bitmap images.
Vector graphics are arbitrarily exact; the resolution is irrelevant.
Vector graphics are scalable without loss.
Vector graphics are ideal for storage of images containing line-based information
or elements, or images which can easily be converted into line-based information
(e.g. text).
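The origin / vector-top / attribute description above can be sketched as a data structure; the class and field names here are illustrative assumptions:

```python
from dataclasses import dataclass

@dataclass
class Line:
    """A line primitive: origin, end point (vector top), and an attribute."""
    x0: float
    y0: float
    x1: float
    y1: float
    width: float      # the line-width attribute

    def scaled(self, factor: float) -> "Line":
        """Lossless scaling: multiply the defining coordinates.
        No pixels are duplicated or deleted, so nothing degrades."""
        return Line(self.x0 * factor, self.y0 * factor,
                    self.x1 * factor, self.y1 * factor,
                    self.width * factor)

big = Line(0, 0, 3, 4, 1).scaled(10)   # still mathematically exact
```

Because only the defining numbers change, the scaled line is rendered at whatever resolution the output device offers.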
can easily work with photographs with so many colors and can create photo-realistic
effects such as shadowing and enhancing color by manipulating select areas, one pixel at
a time.
Advantages of bitmap images:
Manipulation of pixels is very simple, singly or in groups (for example, partial
color changes).
When the output device works directly with pixels, the bitmap image can be
optimally constructed (e.g. for a printer).
Bitmaps are particularly suited (in contrast to vector graphics) to the realistic
representation of objects.
Bitmap programs are ideal for retouching photographs, editing images and video files,
and creating original artwork. A variety of changes to photographs can be made, such as
adjusting the lighting; removing scratches, people, and things; swapping details between
images; adding text and objects; adjusting color; and applying combinations of special
effects.
Disadvantages of bitmap images:
Each pixel must be represented individually.
Smooth curves are represented through approximation of pixels on the raster grid,
producing “indents” and “steps”: aliasing.
Large files, especially when many colors are used.
Bitmaps are not easily reduced or enlarged: individual pixels are simply duplicated by
enlargement, changing the picture or its proportions, while reducing the image merely
deletes individual pixels. This can best be seen by reducing and re-enlarging an image to
its original size, then comparing the result with the original image.
Vector-based versus Bitmap Images
As stated before, vector-based images are resolution independent. You can easily resize
vector images to a thumbnail sketch or a billboard-sized graphic. They keep their
smoothness when resized and do not lose detail or proportion. Smooth curves are easy
to define in vector-based programs, and they retain their smoothness and continuity even
when enlarged. You can also change vector-based images into bitmap formats when
needed. Bitmap images, on the other hand, provide photo-realistic images that require
complex color variations. They are not easily scalable, though. The disadvantage of
bitmap images becomes apparent when you want to resize the picture.
Increasing the size of a bitmap has the effect of enlarging the individual pixels, making
lines and shapes appear rough and chunky. Reducing the size of a bitmap also distorts the
original image, because pixels are removed to reduce the overall image size. Moreover,
since a bitmap image is created as a set of arranged pixels, its parts cannot be
manipulated individually.
4.2.4 Color
The color of an object is the result of certain wavelengths of electromagnetic radiation
being absorbed by that object when light falls upon it. The eye receives the remaining
wavelengths of electromagnetic radiation reflected off the object. When the light enters
the eye, it falls onto the retina, where special cells called cones and rods are stimulated
and transmit corresponding signals to the brain. The brain interprets the signals coming
from the retina, so our sense of vision is therefore subjective. There are three types of
cone, each stimulated by different wavelengths of light corresponding approximately to
red, green and blue. We perceive different colors by the addition of different strengths of
signals of red, green and blue coming from the cone cells.
Screen color is created in a similar way to cone cells of the eye by adding varying
intensities of the color component to each pixel and is referred to as additive color. Black
is created by the absence of any color component and white is created by the maximum
intensity of the color components. Orange, for instance, is created by the addition of red
and green with no blue. Printed color, on the other hand, is created by taking color away
and is referred to as subtractive color. White is created by the absence of color (just the
white paper) and black is created by the addition of the maximum values of the color
components. Subtracting different amounts of the three color components creates all the
other colors.
There are several models for representing the color in an image. The most common of
these is the Red, Green and Blue or RGB model, where the color of each pixel is made up
of values for each of the three colors. Other commonly used color models are:
HSB- Hue (color), Saturation (intensity of color) and Brightness (amount of
black and white mixed with color).
HSL – Same as above only it refers to Brightness as Lightness
L*a*b – L (luminance) is the brightness of the color (from white to black); ‘a’
defines a color range between green and red, and ‘b’ defines a range between
blue and yellow.
CMYK – An example of a subtractive color model, used in printing, where Cyan,
Magenta, Yellow and Black are used to produce the color separations used in
the printing process.
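The relationship between the additive RGB model and the subtractive CMYK model can be sketched as a conversion. This is the simplified textbook formula; real prepress work uses calibrated color profiles:

```python
def rgb_to_cmyk(r: float, g: float, b: float) -> tuple:
    """Convert additive RGB (each channel 0..1) to subtractive CMYK (0..1).
    Simplified formula for illustration; real printing uses ICC profiles."""
    k = 1 - max(r, g, b)              # black: the absence of any light
    if k == 1.0:                      # pure black, avoid division by zero
        return (0.0, 0.0, 0.0, 1.0)
    c = (1 - r - k) / (1 - k)
    m = (1 - g - k) / (1 - k)
    y = (1 - b - k) / (1 - k)
    return (c, m, y, k)

red_cmyk = rgb_to_cmyk(1.0, 0.0, 0.0)
```

Pure red on screen becomes full magenta plus full yellow in ink: the subtractive model removes green and blue from the white paper.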
Color Depth
Most modern computers equipped with color monitors are capable of creating and
displaying millions of colors. Images are generated using the RGB color system. This
system uses three color channels (Red, Green and Blue), each displaying 256 intensity
levels. This requires 8 bits of color information per channel; three channels therefore
require 24 bits of color information. This is referred to as 24-bit color depth, which
allows for up to 16,777,216 possible colors for any given pixel (determined by
multiplying 256*256*256 intensity levels).
24-bit color depth is sometimes referred to as “millions of colors”. When red, green and
blue light are mixed together, it is possible to create all the colors of the visible
spectrum. On an RGB monitor, when all three channels are at maximum intensity the
resulting color is white. When all three channels are at minimum intensity the
resulting color is black.
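The arithmetic behind 24-bit color can be shown directly; the packing order below (red in the high bits) is one common convention, assumed here for illustration:

```python
# 24-bit color: three 8-bit channels packed into one integer.
def pack_rgb(r: int, g: int, b: int) -> int:
    """Pack 0..255 channel intensities into a single 24-bit value."""
    assert all(0 <= c <= 255 for c in (r, g, b))
    return (r << 16) | (g << 8) | b

TOTAL_COLORS = 256 * 256 * 256          # 16,777,216 possible colors
white = pack_rgb(255, 255, 255)         # all channels at maximum intensity
black = pack_rgb(0, 0, 0)               # all channels at minimum intensity
```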
Less sophisticated systems utilize video display systems with a lower color depth.
Resolution
One of the more confusing concepts in still imaging is the relationship between image size
and resolution. There are a number of different terms to be considered:
Screen Resolution - most computer screens (either CRT or LCD/TFT) operate at a
“native” screen resolution, which is optimized for the dot pitch or pixel pitch of the
screen (the aperture grille for CRTs, or the number of pixel elements in LCD/TFT
screens). The refresh rate and color depth of the graphics card also play a role in
determining screen resolution. For example, many CRT screens operate at 72 dpi (dots
per inch). This means that there are 72 screen pixels per screen inch; there are, in effect, a
predetermined number of screen pixel elements in any given “square inch” of screen
material.
Graphics Display Resolution - Working at different graphics resolutions (e.g. VGA,
SVGA, XGA or XVGA) determines how many pixels are contained in the graphics array
of the monitor. But this does not affect the monitor's actual screen resolution, only the
resolution of the display field array projected onto the screen. Multiscan monitors may
project different display resolutions, but depending on the actual screen resolution these
may be sharp or slightly fuzzy. Images projected at higher density appear sharper but
smaller in overall size.
a. Scan resolution - When an image is scanned, it can be of any resolution the
scanner can handle, e.g. 300 dpi, 600 dpi or even 2400 dpi and beyond. Regardless of
scanning resolution, these images are still displayed on screen at the resolution
determined by the application, graphics card and monitor. For example, an image
scanned at 600 dpi and displayed at “full size” on a monitor screen will look
no sharper than one scanned at 300 dpi and displayed at the same size on the same
screen. Only when you “zoom in” on both images will you notice the higher
image quality of the 600 dpi image.
b. Print Resolution - If you were to print out both images at the same size on a high-
resolution printer, the difference would be readily noticeable. For multimedia
productions, producers tend to work at 72 dpi because this most closely
approximates normal display or projection resolution. In the end, it is the total
number of pixels which is important. Higher “scan resolutions” should only be
used at the point of scanning and then quickly converted to 72 dpi for any
multimedia work.
c. Anti-aliasing - Aliasing is an artifact inherent in digital images. Whenever there is
a transition of contrast between two colors or shades of gray, there is a potential
for “stair-shaped” jagged edges along any diagonal transition. These do not occur
on truly vertical or horizontal transitions. The jagged edge is generated by the
square nature of the pixels which make up the image. Anti-aliasing refers to software
algorithms which automatically remove or reduce the visual impact of aliasing.
This is usually achieved by sampling adjacent pixels and filling in the pixels
adjacent to the transition, subtly blurring the image.
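One common way to implement this, supersampling, can be sketched as follows; the grayscale 2x2 averaging here is a simplified illustration of the general idea:

```python
def antialias_2x(hi_res: list[list[int]]) -> list[list[int]]:
    """Average each 2x2 block of a high-resolution grayscale image into
    one output pixel, softening jagged diagonal transitions."""
    h, w = len(hi_res) // 2, len(hi_res[0]) // 2
    return [
        [
            (hi_res[2*y][2*x] + hi_res[2*y][2*x+1] +
             hi_res[2*y+1][2*x] + hi_res[2*y+1][2*x+1]) // 4
            for x in range(w)
        ]
        for y in range(h)
    ]

# A hard diagonal edge (0 = black, 255 = white) becomes a graded one.
edge = antialias_2x([[0, 0, 255, 255],
                     [0, 0, 0, 255],
                     [0, 0, 0, 0],
                     [255, 0, 0, 0]])
```

The intermediate gray values along the edge are exactly the subtle blur described above.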
d. Applications - For bitmapped images, Adobe Photoshop is far and away the
world's most widely used image editing program. Available on both Mac and
Windows, it is the industry standard. Other programs like Live Image, Corel,
Canvas, Painter etc. attempt to compete by standardizing on all of the third-party
Photoshop plug-ins. These programs can be used both to scan (with the
appropriate plug-in scanner driver) and to edit bitmapped images. This means
being able to digitally retouch, color correct and composite.
e. Modes - Because of the on-screen nature of multimedia, all work is normally
processed in RGB mode. CMYK is not normally used except for pre-press work
(printing industry).
f. Conversion - There are a number of utility programs (some shareware, some
freeware) which allow image files of differing formats to be converted from one to the
other. In addition, many image editing programs (such as Photoshop) will allow the
user to import or open formats other than native Photoshop files and resave
them in a variety of other formats. As a general rule, it is best to maintain original
documents as “working files” in Photoshop format and then save copies of the
completed work in a more compressed form. This may mean smaller size,
lower resolution, and higher compression settings. If the work ever needs to be
revised, you still have the original Photoshop file to fall back on.
g. Dithering - Blending colors to modify colors or produce new ones.
Not all image formats support alpha channels, so if you need this capability make sure
you choose a format that will suit your needs. You should note that a 32-bit image is not
visually superior to a 24-bit image; the additional information is technical data, not
increased visual or color quality.
i. Hardware considerations
Most multimedia professionals insist on working in full color (24-bit color depth) and at
least SVGA resolution (800x600 pixels). This requires a suitable monitor and an
appropriate video graphics card (or graphics accelerator board) with at least 2 MB of
video RAM (VRAM). Higher resolutions will require more VRAM if sufficient color
depth is to be maintained. It is not uncommon to supply up to 32 MB of VRAM for a
high-end multimedia system. Most personal computers rely on VRAM on board a video
graphics card (or graphics accelerator board) to minimize the memory bottleneck
between the PC and the graphics memory controller. VRAM is a special type of DRAM
that has a serial interface, which allows data to be accessed simultaneously by the PC
interface and the graphics side. These video cards are usually PCI (Peripheral Component
Interconnect) boards.
4. Legibility
The colors of your text and links are an important consideration in ensuring that your
page is legible; choose a text color that does not make the text difficult to read. Also,
don’t make your audience go treasure hunting for your text.
Links should be distinguishable from the body text, even after they have been clicked.
Try not to select colors that match the body text around the link. Also make sure that
after a link has been visited, the link color does not turn into the surrounding text color
or blend into the background.
5. Consistent color schemes
Consistent color schemes give your page a sense of familiarity and professionalism that
the audience can recognize right away. For example, in order to get people to associate
certain colors with their company, businesses must be consistent in the way they splash
their colors over their promotional materials, correspondence, commercials, packaging,
signs, and so on. As designers, we have to take some of that mentality and apply it to our
presentation design.
6. Accessibility
Increasing accessibility for a colorblind audience is an important part of developing
professional Web pages. There are some considerations we should keep in mind to
increase accessibility for colorblind readers.
It is strongly recommended that you use a strong, bright contrast between foreground
and background colors, not only for your screen text but also in your images. Even a
totally colorblind audience can differentiate similar colors that contrast bright with
dark. It is good to use blue, yellow, white and black if you really must use colors to
distinguish items; these combinations are less likely to be confused than others.
Color Tables
Color tables are used for storing color values. Frame buffer values are used as indices
into the color table. A color table thus allows us to map between a color index in the
frame buffer and a color specification.
Suppose our frame buffer has 8 bits per pixel. This allows 2^8, i.e. 256, simultaneous colors.
If each color table entry is 24 bits wide, 2^24 = 16,777,216 colors are possible.
By changing the values in the color table, we can choose which 256 of these colors are
available at any time.
Changing a value in the color table can alter the appearance of large portions of the
display. Because it is the hardware that makes these changes, they usually occur very
fast.
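The indexed-color idea above can be sketched in a few lines of Python (an illustrative example, not part of the original text; the names are ours):

```python
# Sketch of indexed ("color table") display, assuming an 8-bit frame buffer:
# each pixel stores an index, and the display hardware looks the index up in
# a 256-entry table of RGB triples.

color_table = [(i, i, i) for i in range(256)]   # a gray ramp to start with

frame_buffer = [0, 128, 255, 128]               # four pixels, 8-bit indices

def resolve(frame_buffer, color_table):
    """Map each frame-buffer index to its RGB triple via the table."""
    return [color_table[i] for i in frame_buffer]

print(resolve(frame_buffer, color_table))       # four gray pixels

# Changing one table entry instantly recolors every pixel using that index --
# this is why a palette change can alter large parts of the display at once
# (the same mechanism underlies color cycling and pseudo-color display).
color_table[128] = (255, 0, 0)
print(resolve(frame_buffer, color_table))       # pixels 1 and 3 turn red
```

Note that the frame buffer itself never changes; only the table entries do, which is why the update is so fast.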
Color tables are used to give false color or pseudo color to gray scale images.
They have also been used for gamma correction, lighting and shading and color model
transformations.
A user can set color table entries in a PHIGS application program with the function
Set Color Representation (Ws, Ci, Colorptr)
Ws -------- -> workstation output device
Ci -------- -> color index
Colorptr -- -> pointer to the trio of RGB color values (r, g, b), each specified in the
range from 0 to 1
The RGB color scheme is an additive model: intensities of the primary colors are added
to produce other colors.
A color C is expressed in terms of its RGB components as
C = R·R + G·G + B·B
where R, G and B are the amounts of the red, green and blue primaries.
The magenta vertex is obtained by adding red and blue to produce the triple (1,0,1), and
white is at (1,1,1), the sum of the red, green and blue vertices.
Shades of gray are represented along the main diagonal of the cube from the origin
(black) to the white vertex. Each point along this diagonal has an equal contribution
from each primary color, so the gray shade (0.5, 0.5, 0.5) is halfway between black and white.
o At the top of the hex cone, colors have their maximum intensity.
o When V = 1 and S = 1, we have the pure hues.
o Starting with a selection from a pure hue, which specifies the hue angle H and sets
V = S = 1, we describe the color we want in terms of adding either black or white to
the pure hue.
o Adding black decreases the setting of V while S is held constant.
o Adding white decreases the setting of S while V is held constant.
o Thus, various shades are represented with values S = 1 and 0 < V < 1
(by adding black to a pure hue).
o Adding white to a pure hue produces different tints across the top plane of the hex
cone, where the parameter values are V = 1 and 0 < S < 1.
o Various tones are specified by adding both black and white, producing color points
within the triangular cross-sectional area of the hex cone. A cross section of the
HSV hex cone thus represents the color concepts of shades, tints and tones.
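The shade and tint behaviour described above can be checked with Python's standard colorsys module (which works with r, g, b and h, s, v all scaled to the 0-1 range, rather than degrees):

```python
import colorsys

# A pure hue has S = 1 and V = 1.
pure_red = colorsys.rgb_to_hsv(1.0, 0.0, 0.0)
print(pure_red)            # (0.0, 1.0, 1.0)

# "Adding black" (halving the intensity) lowers V while S stays at 1: a shade.
shade = colorsys.rgb_to_hsv(0.5, 0.0, 0.0)
print(shade)               # (0.0, 1.0, 0.5)

# "Adding white" lowers S while V stays at 1: a tint.
tint = colorsys.rgb_to_hsv(1.0, 0.5, 0.5)
print(tint)                # (0.0, 0.5, 1.0)
```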
CMY Color Model
A color model defined with the primary colors cyan, magenta and yellow (CMY) is useful
for describing color output to hard-copy devices. Unlike monitors, which produce a color
pattern by combining light from the screen phosphors, hard-copy devices such as plotters
produce a color picture by coating a paper with color pigments.
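Because CMY is the subtractive complement of RGB (a pigment shows the light it does not absorb), each component is simply one minus the corresponding RGB value. A minimal sketch:

```python
# Convert an additive RGB color (0-1 per channel) to subtractive CMY.
def rgb_to_cmy(r, g, b):
    return (1.0 - r, 1.0 - g, 1.0 - b)

print(rgb_to_cmy(1.0, 0.0, 0.0))   # red   -> (0.0, 1.0, 1.0): magenta + yellow ink
print(rgb_to_cmy(1.0, 1.0, 1.0))   # white -> (0.0, 0.0, 0.0): no ink at all
print(rgb_to_cmy(0.0, 0.0, 0.0))   # black -> (1.0, 1.0, 1.0): all three inks
```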
H L S Color Model
HLS stands for Hue (H), Lightness (L) and Saturation (S). It is a model based on
intuitive color parameters used by Tektronix.
It has a double-cone representation, as shown in the figure: H specifies an angle about
the vertical axis that locates a chosen hue.
In this model H = 0º corresponds to blue. The remaining colors are specified as in the
HSV model: magenta is at 60º, red is at 120º and cyan is at 180º.
Complementary colors are 180º apart on the double cone.
The vertical axis is called lightness, L.
At L = 0, we have black.
At L = 1, we have white.
The gray scale lies along the L axis, and the pure hues lie on the L = 0.5 plane. The
saturation parameter S specifies the relative purity of the color.
This parameter varies from 0 to 1; as S decreases, the hues are said to be less pure.
At S = 0, we have the gray scale.
A hue is selected with the hue angle H, and the desired shade, tint or tone is obtained by
adjusting L and S. Colors are made lighter by increasing L and darker by decreasing L.
When S is decreased, the colors move toward gray.
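Python's standard colorsys module implements an HLS conversion of this kind. Note one difference from the Tektronix convention above: colorsys places H = 0 at red rather than blue, and scales H, L and S to the 0-1 range instead of degrees.

```python
import colorsys

print(colorsys.rgb_to_hls(0.0, 0.0, 1.0))   # pure blue: L = 0.5, S = 1.0
print(colorsys.rgb_to_hls(1.0, 1.0, 1.0))   # white:     L = 1.0
print(colorsys.rgb_to_hls(0.5, 0.5, 0.5))   # mid gray:  L = 0.5, S = 0.0
```

As the text says, the pure hue sits on the L = 0.5 plane with S = 1, and every gray has S = 0.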
H S I Model:
HSI is a color model used for describing the color components for output devices. It has
three components, namely Hue (H), Saturation (S) and Intensity (I).
H specifies a dominant pure color perceived by an observer (e.g. red, yellow, blue), and
S measures the degree to which that color has been 'diluted' by white light. The model is
more suitable than RGB for many image-processing tasks: because color and intensity
are independent, we can manipulate one without affecting the other.
The HSI color space is described by a cylindrical coordinate system and is commonly
represented as a double cone. A color is a single point inside or on the surface of the
double cone. The height of the cone corresponds to intensity. If we imagine that the
point lies in a horizontal plane, we can define a vector in this plane from the axis of the
cones to the point. Saturation is then the length of this vector, and hue is its orientation,
expressed as an angle in degrees.
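One common formulation of these components (there are several in the image-processing literature; this sketch is ours and omits the hue angle, which needs a trigonometric formula): intensity is the mean of R, G and B, and saturation measures how far the color lies from gray.

```python
def rgb_to_si(r, g, b):
    """Return (saturation, intensity) for r, g, b in the 0-1 range."""
    i = (r + g + b) / 3.0                      # intensity: mean of the channels
    s = 0.0 if i == 0 else 1.0 - min(r, g, b) / i   # 0 = gray, 1 = pure hue
    return s, i

print(rgb_to_si(1.0, 0.0, 0.0))   # pure red:  S = 1.0, I = 1/3
print(rgb_to_si(0.5, 0.5, 0.5))   # mid gray:  S = 0.0, I = 0.5
```

Changing I alone rescales all three channels equally, which is exactly the color/intensity independence the text describes.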
4.2.5 Animation
Animations are created from a sequence of still images. The images are displayed rapidly
in succession so that the eye is fooled into perceiving continuous motion. This is because
of a phenomenon called persistence of vision: the tendency of the eye and brain to
continue to perceive an image even after it has disappeared.
For example, a sequence of images of a bee showing various positions of its wings, when
displayed rapidly one after another, gives the illusion of the bee flapping its wings.
Animation generally deals with hand-drawn images, in contrast to motion video, which
deals with actual photographic images of real-world objects taken through a camera,
although both use the concept of displaying a sequence of images one after another to
depict motion.
Animation on the Web
The World Wide Web, developed in the early 1990s, was initially created to serve
hypertext documents, but later animated graphics files were added to it. The biggest
obstacles to the use of animation on the web are bandwidth limitations and the
differences in platforms and browser support. Typically web animations are computer
files that must be completely downloaded to the client machine before playback, which
can take a long time. A way around this problem is streaming: the capability of specially
formatted animation files to begin playback before the entire file has been completely
downloaded. As the animation plays, the rest of the file is downloaded in the background.
Another problem with web animations is that once the animation has been delivered to
the user, the user must have the proper helper application or plug-in to display it. Several
formats exist today: GIF animation, based on extensions to the GIF specification;
QuickTime animation, based on the QuickTime movie format; Java animation, based on
the Java programming language; Shockwave animation, based on the Macromedia
Director file format; and so on.
To the average home user with a 28.8 kbps modem, download speed is around 2.5
KB/second, while corporate and university LANs support around 10 to 50 KB/sec.
Types of Animations
Cel Animation
Cel animation is a term from traditional animation. Cel comes from the word celluloid,
the material that made up early motion picture film, and refers to the transparent piece of
film that is used in hand-drawn animation. Animation cels are generally layered, one on
top of the other, to produce a single animation frame. Layering enables the animator to
isolate and redraw only the parts of the image that change between successive frames. A
frame consists of the background cel and the overlying cels, and is like a snapshot of the
action at one instant of time. By drawing each frame on transparent layers, the animator
can lay successive frames one on top of the other and see at a glance how the animation
progresses through time.
Flip-Book Animation
Flip-book animation, or frame-based animation, is the simplest kind of animation to
visualize. Here a series of graphic images is displayed in rapid succession, each image
slightly different from the one before. The images are displayed so fast that the viewer is
fooled into perceiving a moving image. In film this display rate is 24 images, or frames,
per second. For playback on a computer, each image has to be displayed on the screen in
quick succession. The biggest problem with this form of animation, on bandwidth-
sensitive mediums like the web, is updating each frame fast enough that the viewer
perceives smooth continuous motion.
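The fixed-rate playback described above can be sketched as a simple schedule (an illustrative Python fragment; the frame names are placeholders of our own):

```python
# Frame-based ("flip-book") playback: step through a list of frames at film's
# rate of 24 frames per second, so each frame is shown for 1/24 of a second.

FPS = 24
FRAME_DURATION = 1.0 / FPS                       # about 0.0417 s per frame

frames = [f"frame_{n:03d}" for n in range(5)]    # placeholder images

def schedule(frames, fps=FPS):
    """Return (time_in_seconds, frame) pairs for the whole sequence."""
    return [(n / fps, frame) for n, frame in enumerate(frames)]

for t, frame in schedule(frames):
    print(f"t = {t:.4f}s  show {frame}")
```

On a bandwidth-limited medium, every one of those frames must arrive in time for its slot, which is exactly the problem the text identifies.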
Sprite Animation
A sprite is any part of the animation that moves as an independent object, like a flying
bird, a rotating planet, a bouncing ball or a spinning logo. A single image or series of
images can be attached to a sprite, which can animate either at one place or move along a
path. Sprite based animation is different from flip-book animation in that for each
successive frame only the part of the screen that contains the sprite is updated. File sizes
and bandwidth requirements for sprite-based animations are typically less than for
flip-book animation.
Special Effects:
Color cycling allows you to change the color of an object by cycling through a range of
colors in the color wheel. The software provides smooth color transitions from one color
to another.
Morphing
Morphing is probably most noticeably used to produce incredible special effects in the
entertainment industry. It is often used in movies such as Terminator and The Abyss, in
commercials, and in music videos such as Michael Jackson's Black or White. Morphing
is also used in the gaming industry to add engaging animation to video games and
computer games. However, morphing techniques are not limited to entertainment
purposes. Morphing is a powerful tool that can enhance many multimedia projects such
as presentations, education, electronic book illustrations, and computer-based training.
The word morph derives from the word metamorphosis, meaning to change shape,
appearance or form. According to Vaughan, morphing is defined as 'an animation
technique that allows you to dynamically blend two still images, creating a sequence of
in-between pictures that, when played in QuickTime, metamorphoses the first image into
the second'.
Morphing Techniques:
Image morphing techniques can be classified into two categories, mesh-based and
feature-based, according to the way they specify features.
The simplest way to morph one image into another is to take a cross-dissolve between
them. The color of each pixel is interpolated over time from the first image value to the
corresponding second image value. However, this is not very effective in portraying an
actual metamorphosis, and a metamorphosis between faces does not look good if the two
faces do not have about the same shape. This method also tends to wash away the
features of the images.
A second way to achieve morphing is feature interpolation, which is performed by
combining warps with color interpolation. An animator specifies the features of the two
images and their correspondence with a set of points or line segments. Warps are then
computed to distort the images so that the features have intermediate positions and
shapes. Color interpolation between the distorted images finally gives an in-between
image.
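The color-interpolation step is plain per-pixel blending over time: at t = 0 you see the first image, at t = 1 the second. A minimal sketch (a full morph would also warp the pixel positions before blending; only the blend is shown here, on a toy grayscale "image"):

```python
def cross_dissolve(img_a, img_b, t):
    """Blend two equal-sized images (lists of 0-255 pixel values) at time t."""
    return [round((1.0 - t) * a + t * b) for a, b in zip(img_a, img_b)]

a = [0, 100, 200]      # a tiny 3-pixel grayscale image
b = [200, 100, 0]

print(cross_dissolve(a, b, 0.0))   # [0, 100, 200]   -> the first image
print(cross_dissolve(a, b, 0.5))   # [100, 100, 100] -> halfway in-between
print(cross_dissolve(a, b, 1.0))   # [200, 100, 0]   -> the second image
```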
In morphing, the most difficult task is the warping of one image into another. It is the
stretching and pulling of the images that makes the morphing effect so realistic. The
actual morphing of the image can be accomplished using either morph points or morph
lines. Morph points are markers that you set up on the start image and the end image.
The morphing program then uses these markers to calculate how the initial image should
bend/warp to match the shape of the final image. The second method uses lines (edges)
instead of individual points. Both methods produce very realistic morphing effects. One
of the most time-consuming tasks in morphing is selecting the points or lines in the
initial and final images so that the metamorphosis is smooth and natural.
There are several useful tips to remember when morphing objects. The first is to choose
carefully the pictures to morph (Morphing Software). For example, if you wish to morph
two animals, it is best to use close-up pictures of each head to obtain successful results.
A second tip is to carefully select the background (Morphing Software). If a single-color
background is used, the morphing effect focuses on the object. Ideally, it is best to use
the same background for each picture.
Rotoscoping is the process of drawing on top of existing video, film or animation frames.
Particle system animation is used to simulate natural phenomena like rain, smoke, fire
etc. Here the characteristics of a swarm of particles are defined.
Inverse Kinematics is a special way of linking separate pieces of a 3D model so that
they follow certain predefined rules of behavior. Usually used in character animation,
the motions are constrained based on real-world objects.
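A toy particle system of the kind used for rain can be sketched as follows (an illustrative example of our own; real systems add rendering, lifetimes and wind):

```python
import random

# Each particle has a position and velocity; every frame it moves, gravity
# tweaks its velocity, and particles that fall off screen respawn at the top.

GRAVITY = 0.5
SCREEN_HEIGHT = 100.0

def make_particle():
    return {"x": random.uniform(0, 100), "y": 0.0,
            "vx": random.uniform(-1, 1), "vy": random.uniform(1, 3)}

def update(particles):
    for p in particles:
        p["x"] += p["vx"]
        p["y"] += p["vy"]
        p["vy"] += GRAVITY            # accelerate downward
        if p["y"] > SCREEN_HEIGHT:    # fell off screen: respawn at the top
            p.update(make_particle())

particles = [make_particle() for _ in range(50)]
for frame in range(10):
    update(particles)
print(all(0.0 <= p["y"] <= SCREEN_HEIGHT for p in particles))  # True
```

Only the swarm's statistical behaviour is authored; the individual drops take care of themselves.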
Animation Techniques:
Onion Skinning:
Onion skinning is a drawing technique borrowed from traditional cel animation.
Because cels are transparent, animators can lay them one on top of the other, which
enables them to see previous and following frames while they are drawing the current
frame. Onion skinning is an easy way to view a complete sequence of frames at a glance
and to see how each frame flows into the frames before and after.
Cut-Outs:
When the motion of a character is limited, say the wave of a hand, it is easier to just
redraw the hand and arm rather than redraw the entire character for each frame. The
character can be drawn once and used as a background. The separate cut-outs are
composited on the background figure to simulate motion.
Velocity Curves:
The gradual slowing down and speeding up as objects approach and leave key frames is
called ease-in and ease-out. In traditional animation, slow movement is caused by small
changes between frames, while fast movement is caused by large changes. Computer
animation programs enable you to control the deceleration and acceleration of objects by
using velocity curves that define the velocity of an object over time.
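Ease-in and ease-out can be sketched as a velocity curve in a few lines (an illustrative example of our own, using the common "smoothstep" curve; animation packages offer many such curves):

```python
def ease_in_out(t):
    """Smoothstep curve: slow at both ends, fast in the middle (t in 0-1)."""
    return t * t * (3.0 - 2.0 * t)

def position(start, end, t):
    """Object position at normalized time t between two key frames."""
    return start + (end - start) * ease_in_out(t)

for t in (0.0, 0.25, 0.5, 0.75, 1.0):
    print(f"t = {t:.2f}  x = {position(0.0, 100.0, t):.1f}")
# Near t = 0 and t = 1 the position barely changes between frames (ease-in,
# ease-out); around t = 0.5 it changes fastest.
```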
Squash and Stretch: Show an object stretching or elongating while it is in motion. Then,
when the object stops, changes direction or hits an immovable object, show the object
compressing or squashing. Squash and stretch is a simple way to give a feeling of weight
to an object in motion. It is also a good way to show anticipation and recoil.
Motion Cycling: Many actions are repetitive and can be decomposed into a single
cycling or looping action over a few frames. The classic example of motion cycling is a
walking two-legged figure. The walking figure can then be given translatory motion to
complete the animation.
Secondary and Overlapping Actions: One way to create interesting animation is to add
secondary actions to the main action. Secondary actions can be simple: a flickering
flame or flowing water can be added with a two- or three-frame animation loop.
Overlapping actions add a dimension of time to secondary actions. Loose or flowing
parts like cloth or hair can be arranged to come to a stop slightly after the main character
comes to a stop.
Hierarchical Motion: Hierarchical motion is created by attaching or linking an object or
animation loop to another object or animation loop, so that the first loop moves with the
second. The flying bee animation is an example of hierarchical motion: first a loop of
the bee flapping its wings is created, and then it is attached to a second object, in this
case a motion path, so that the flapping bee flies across the screen. Another example of
hierarchical motion is the solar system: moons revolve around the planets, and the
planets revolve around the sun.
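The flying-bee example amounts to nesting one motion inside another: the wing loop is defined relative to the body, and the body follows the motion path, so the wing's screen position is the path position plus the local offset. A minimal sketch (the path and flap rate are invented for illustration):

```python
import math

def body_position(t):
    """Motion path of the bee across the screen (a simple straight line)."""
    return (100.0 * t, 50.0)

def wing_offset(t):
    """Local flapping loop relative to the body (oscillates up and down)."""
    return (0.0, 5.0 * math.sin(2 * math.pi * 4 * t))   # 4 flaps per unit time

def wing_position(t):
    """Screen position of the wing tip: path position + local offset."""
    bx, by = body_position(t)
    ox, oy = wing_offset(t)
    return (bx + ox, by + oy)

for t in (0.0, 0.0625, 0.125):
    print(wing_position(t))   # (0.0, 50.0), (6.25, 55.0), (12.5, 50.0)
```

The solar-system example works the same way, with the moon's loop nested inside the planet's loop around the sun.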
Anticipation and Exaggeration: Anticipation helps to set up the viewer's mind by
showing some small movement prior to the primary motion. For example, a character
may retrace a few steps before making a long jump, or an object may stretch and bend
before breaking up.
CGI blockbusters such as Toy Story and Antz were rendered using software originally
written on the now defunct Amiga platform.
Model/Database
In order to create a 3D computer-generated image you must first build a 3D model. This
usually involves meticulously manipulating virtual geometry via the graphical user
interface of 3D modeling software. However, for more complex forms it is sometimes
preferable to digitize a physical 3D model into the system. The end result is sometimes
referred to as the 3D database, as it is in effect a collection of code that describes the
positioning of all the surface geometry of the virtual objects in a single database.
4.2.6 Video
Video can be the most impressive feature of a multimedia application and it is likely to be
the key medium in the next generation of applications. A moving image can convey
information much more powerfully than, say, a still image with sound, and certain types
of information can only really be communicated in video form. This is particularly true if
we want to convey information about dynamic events in the real world, for example a
volcano erupting. While we can convey important information about how a volcano is
formed using animation and audio narration, or what one looks like using an image, or a
text describing what happens during an eruption, only a video can really convey what an
erupting volcano is actually like.
However, video is also the most challenging medium to include, due to the demands it
places on the delivery platform in terms of storage, processing and data transfer rates.
As a result, we must carefully consider the way we use video in our multimedia
application. Often we will need to compromise between what we would ideally like and
what is actually practical. We may, for example, replace a video sequence with a
sequence of still images with narration.
Video production is a highly skilled and technically demanding process that includes
scripting, direction, lighting, sound recording and so on. It must also be considered in the
multimedia design process, for instance by planning it using a storyboard. Video
production raises a number of project management questions:
Will it be necessary to employ consultant video professionals?
Will lower in-house production values be sufficient?
Can existing video be reused?
Video can be classified according to the way it is used within a multimedia application:
content video, interface video and incidental video.
Content video communicates the content of the multimedia application to the user. It
can be used in a variety of situations:
Narration – video is used to present the subject matter. The talent (actor) used
for the narration is not important, aside from the fact that they must be able to act,
or may need to have a certain accent, be a specific age, or have a specific look
if the application requires it.
Testimonials – a video of a specific person or an historical event. The difference
between a narration and a testimonial is that in a narration the person delivers the
content, in a testimonial they are the content.
Visualization – video can be used to show the visual layout and organization of
real world objects, such as building interiors.
Processes – video is particularly well suited to showing processes that take a
number of steps, or events that occur over time.
Reinforcement – video can be used to complement or add emphasis to the content
provided in another medium.
Interface video is part of the interface, such as instructions, rather than being part of the
content:
HDTV (2): Frame aspect ratio: 16:9, Pixel aspect ratio: 1, Frame Size: 1920 x 1035,
Frame rate: 30 fps.
Integrating Computers And Television
Television is perhaps the most important form of communication ever invented. It is
certainly the most popular and influential in our society. It is an effortless window on the
world, requiring of the viewer only the right time and the right channel, or for the
undiscriminating viewer, any time and any channel (except channel one).
Computer presentation of information could certainly benefit from the color, motion, and
sound that television offers. Television viewers could similarly benefit from the control
and personalization that is promised by computer technology.
Combining the two seems irresistible. They already seem to have much in common, such
as CRT screens and programs and power cords. But they are different in significant ways,
and those differences are barriers to reasonable integration.
The problems on the computer side will get fixed in the course of technical evolution,
which should continue into the next century. We've been fortunate so far that not one of
the early computer systems has been so popular that it couldn't be obsoleted (although we
are dangerously close to having that happen with UNIX, and there is now some doubt as
to whether even IBM can displace the PC). The worst features of computers, that they are
underpowered and designed to be used by nerds, will improve over the long haul.
Television, unfortunately, has been spectacularly successful, and so is still crippled by the
limitations of the electronics industry of 40 years ago. There are many new television
systems on the horizon, a few of which promise to solve the integration problem, but for
the time being we are stuck with what we've got.
These limitations are not noticed by audiences, and could be completely ignored if they
were merely the esoterica of television engineers. Unfortunately, the television medium is
far more specialized than you might suppose. Interface designers who ignore its
limitations do so at their own peril.
Venue
Computer displays are generally designed for close viewing, usually in an office
environment--most often as a solitary activity. The display is sharp and precise. Displays
strongly emphasize text, sometimes exclusively so. Graphics and color are sometimes
available. Displays are generally static. Only recently have computers been given
interesting sound capabilities. There is still little understanding of how to use sound
effectively beyond BEEPs, which usually indicate when the machine wants a human to
perform an immediate action.
Television, on the other hand, was designed for distant viewing, usually in a living room
environment, often as a group activity. The screen is alive with people, places, and
products. The screen can present text, but viewers are not expected to receive much
information by reading. The sound track is an essential part of the viewing experience.
Indeed, most of the information is carried audibly. (You can prove this yourself. Try this
demonstration: Watch a program with the sound turned all the way down. Then watch
another program with the sound on, but with the picture brightness turned all the way
down. Then stop and think.)
Television was designed for distant viewing because the electronics of the 1940s couldn't
handle the additional information required to provide sufficient detail for close viewing.
Television has lower resolution than most computer displays, so you have to get some
distance from it for it to look good.
The correct viewing distance for a television viewer is as much as ten times what it is for
a computer user. Where is the best place to sit in order to enjoy fully integrated
interactive television, the arm chair or the desk chair? Many of the current generation of
multimedia products, such as Compact Disc-Interactive, suffer from this ambiguity. The
color images are best viewed from a distance, but the cursor-oriented interface wants to
be close.
Overscan
Every pixel on a computer display is precious. Because the visible window is a rectangle,
and the corners of CRTs are curved, the visible rectangle is inset, with sufficient black
border to assure that even the corner pixels will be visible. Television, unfortunately,
does not use such a border.
The first picture tubes used in television were more oval than rectangular. It was decided
that the picture should fill every bit of the face of the screen, even if that meant that
viewers would be unable to see the portions of the images that were near the edges,
particularly in the corners.
This was well suited to the distant viewing assumption, but the uncertainty of what is
visible on a viewer's screen (it can vary from set to set) causes problems even for the
producers of television programs. They have to accept the conventions of Safe Action
Area and Safe Title Area, which are smaller rounded rectangles within the television
frame. Most action that happens within the Safe Action Area will be visible on most
sets. All text should be confined to the Safe Title Area, which is visible on virtually all
sets.
30 fps
Many computer systems have displays that run 30 or 60 frames per second, because it is
commonly believed that television runs at a rate of 30 frames per second. This is
incorrect for two reasons:
Television doesn't really have frames, it has fields. A field is a half of a picture,
every other line of a picture (sort of like looking through blinds). There is no
guarantee that two fields make a coherent picture, or even which fields (this one
and that one, or that one and the next one) make up a frame. This is the field
dominance problem, and it makes television hostile to treating individual frames
as discrete units of information.
If television did have a frame rate, it would be 29.97 frames per second. The
original black and white system was 30, but it was changed when color was
introduced. This can make synchronization difficult. Movies transferred to
television play a little longer, and the pitch in the sound track is lowered slightly.
It also causes problems with timecode.
Timecode is a scheme for identifying every frame with a unique number, in the
form hour:minute:second:frame, similar in function to the sector and track
numbers on computer disk drives. For television, there are assumed to be 30
frames per second, but because the true rate is 29.97, over the course of a half
hour you would go over by a couple of seconds. There is a special form of
timecode called Drop Frame Timecode, which skips two frame numbers at the
start of every minute, except every tenth minute, so that the final time comes out
right. However, it can be madness dealing with a noncontiguous number system
in a linear medium, particularly if frame accuracy is required.
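The drop-frame bookkeeping can be sketched in a short routine (an illustrative Python function of our own, following the SMPTE convention of skipping frame numbers 00 and 01 each minute except minutes divisible by ten):

```python
def to_dropframe(frame):
    """Convert a 0-based frame count at 29.97 fps to HH:MM:SS;FF timecode."""
    fp10m = 17982            # real frames in 10 minutes (10*60*30 - 18)
    fpm = 1798               # real frames in a non-tenth minute (60*30 - 2)
    d, m = divmod(frame, fp10m)
    frame += 18 * d          # 18 numbers skipped per full 10-minute block
    if m >= 2:
        frame += 2 * ((m - 2) // fpm)   # 2 more per elapsed non-tenth minute
    hh, rest = divmod(frame, 30 * 3600)
    mm, rest = divmod(rest, 30 * 60)
    ss, ff = divmod(rest, 30)
    return f"{hh:02d}:{mm:02d}:{ss:02d};{ff:02d}"

print(to_dropframe(0))        # 00:00:00;00
print(to_dropframe(1800))     # 00:01:00;02 -- ;00 and ;01 were skipped
print(to_dropframe(17982))    # 00:10:00;00 -- tenth minute: nothing skipped
```

The jump from ;29 straight to ;02 at most minute boundaries is exactly the noncontiguous numbering the text warns about.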
Interlace
Computers want to be able to deal with images as units. Television doesn't, because it
interlaces. Interlace is a scheme for doubling the apparent frame rate at the price of a loss
of vertical resolution and a lot of other problems. Pictures are transmitted as alternating
fields of even lines and fields of odd lines.
Images coming from a television camera produce 59.94 fields per second. Each field is
taken from a different instant in time. If there is any motion in the scene, it is not possible
to do a freeze frame, because the image will be made up of two fields, forcing the image
to flutter forward and backward in time. A still can be made by taking a single field and
doubling it to make a frame, with a loss of image quality.
Twitter is a disturbing flicker caused by the content of one line being significantly
different from its interfield neighbors. In extreme cases, it can cause the fields to separate
visibly. Twitter can be a big problem for computer-generated graphics, because twittery
patterns are extremely common, particularly in text, boxes, and line drawings. The
horizontal stripes in the Macintosh title bar cause terrible twitter. Twitter can be removed
by filtering, but with a loss of detail and clarity.
Field dominance, as mentioned above, is the convention of deciding what a frame is: an
odd field followed by an even, or an even followed by an odd. There are two possible
ways to do it; neither is better than the other, and neither is generally agreed upon. Some
equipment is even, some is odd, and some is random. This can be critical when dealing
with frames as discrete objects, as in collections of stills. If the field dominance is wrong,
instead of getting the two fields of a single image, you will get a field each of two
different images, which looks sort of like a superimposition, except that it flickers like
crazy.
Color
RCA Laboratories came up with an ingenious method for inserting color into a television
channel that could still be viewed by unmodified black and white sets. But it didn't come
for free. The placing of all of the luminance and color information into a single composite
signal causes some special problems.
The color space of television is not the same as that in a computer RGB system. A
computer can display colors that television can't, and trying to encode those colors into a
composite television signal can cause aliasing. (Aliasing means "something you don't
want.")
Television cannot change colors as quickly as a computer display can. This can also
cause aliasing and detail loss in computer-generated pictures on television. There are
other problems, such as chroma crawl and cross-color, which are beyond the scope of this
article. But they're there.
Videotape
In the Golden Age, there was no good way to save programs, so all programs were
produced live. Videotape was developed years later.
Our problems with videotape are due to two sources: First, the design of television gave
no thought to videotape or videodisc, which results in the generation loss problem.
Second, the control aspects of interactive television require greater precision than
broadcasters require, which creates the frame accuracy problem.
Generation loss is the degradation in the quality of a program every time it is copied.
Because videotape is not spliced, the only way to assemble material is by copying it, and
with each copy it gets worse. This problem is being corrected by the application of digital
technology, and can be considered solved, at least at some locations. It remains to make
digital video recording cheap and widely available.
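The effect of generation loss can be illustrated with a toy simulation. The noise level and the five-sample "signal" below are arbitrary assumptions; real analog degradation is far more complex, but the contrast with digital copying holds:

```python
import random

def analog_copy(signal, noise=0.02):
    """Each analog dub adds a little noise on top of the previous copy."""
    return [s + random.uniform(-noise, noise) for s in signal]

def digital_copy(signal):
    """A digital copy is a bit-for-bit clone: no generation loss."""
    return list(signal)

master = [0.0, 0.5, 1.0, 0.5, 0.0]

analog = master
for _ in range(10):                # ten analog generations
    analog = analog_copy(analog)

digital = master
for _ in range(10):                # ten digital generations
    digital = digital_copy(digital)

# The digital chain is identical to the master; the analog chain has drifted.
analog_error = max(abs(a - m) for a, m in zip(analog, master))
digital_error = max(abs(d - m) for d, m in zip(digital, master))
```

After ten generations the digital copy matches the master exactly, while every sample of the analog copy carries accumulated noise.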
The frame accuracy problem is another story. A computer storage device that, when
requested to deliver a particular sector, instead delivered a different sector would be
considered defective. In the world of videotape editing, no one can notice that an edit is
off by 1/29.97 seconds, so precise, accurate-to-the-frame behavior is not always
demanded of professional video gear. This can make the production of computer-controlled,
frame-accurate video programs difficult.
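The 1/29.97-second figure comes from NTSC's frame rate of 30000/1001 frames per second. The sketch below shows the frame/time arithmetic behind frame accuracy; it is a simplified, non-drop-frame calculation:

```python
NTSC_FPS = 30000 / 1001            # approximately 29.97 frames per second

def seconds_to_frames(t: float) -> int:
    """Nearest whole frame to a given time offset."""
    return round(t * NTSC_FPS)

def frames_to_seconds(n: int) -> float:
    """Duration of n frames, in seconds."""
    return n / NTSC_FPS

# An edit that lands one frame late is off by about 33.4 milliseconds:
# invisible to a casual viewer, unacceptable to a frame-accurate controller.
one_frame = frames_to_seconds(1)
```

One second of NTSC video is 30 frames (rounded), and a single-frame error is roughly a thirtieth of a second.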
Linear Vs Non-Linear:
Video can be categorized as being one of two fundamental configurations, either linear or
non-linear.
Linear video refers to traditional videotape recording systems. Reel-to-reel VTRs and
videocassettes are both defined as linear in that the image and audio signals are recorded
onto continuously moving videotape. If you need to edit such video content in a
traditional linear system you need to replay the original videotape whilst re-recording it
onto a special editing video recorder.
Dedicated recording features on edit VTRs called flying-erase heads allow for frame-accurate
editing. If special effects such as dissolves or wipes are required, at least three
machines are needed (two replay and one record), as well as specialized sync pulse
generators and video mixing consoles. In such a configuration the video editor needs to
record alternate video tracks onto two “source” tapes, with each segment accurately
aligned to overlap the opposite track so that the video signal can be “mixed” between the
A & B replay machines. This method is complex and time consuming and is often
referred to as “A & B roll” or “chequerboard” editing. The concept is a legacy from
manually edited motion picture film. Computer controllers are often used to speed up the
process, but the essential criterion is that it remains tape-to-tape and therefore a linear
process.
Advances in video technology now allow the video signal to be transferred to a
random access medium such as a computer hard disk, DVD-RAM or magneto-optical
storage system for subsequent replay or editing. Such a system may incorporate a
videotape recorder to either replay the source or record the final product, but at least
part of the process must utilize a random access system in order for it to be described as
non-linear. Many modern configurations are referred to as tapeless systems; these
record directly to a random access disc. Television news was the first medium to utilize
tapeless video.
Multimedia Video:
Defining multimedia video is a moveable feast. Whatever definitions we use and
whatever examples we give can only represent a snapshot in time; be assured they will be
obsolete within a very short timeframe. For now, we can best describe it as any digital
video able to be captured to and replayed from a computer system, or that which is in a
digital form able to be interactively controlled such as in a video game console or DVD
player.
The most common examples are small video clips or animations which can be replayed
from the desktop of a PC. These tend to be smaller than full-screen movies, with or
without synchronized sound, and can be replayed by themselves or embedded
inside a larger multimedia production such as an interactive presentation or game. These
clips can be of virtually any frame size or frame rate as long as they are not to be used as
a medium for non-linear video editing. However, if non-linear off-line video editing is to
be achieved, strict adherence to the relevant video standards must be observed.
Video Capture:
Video capture is the process of converting linear video (such as
a signal from analog or digital videotape) into non-linear video
such as that stored on a computer hard disk in a digital format.
This video signal is captured and stored in either compressed
or non-compressed form and can be replayed, edited or re-
processed in a wide variety of ways without the inherent
quality loss of analog linear editing systems. Video capture is
sometimes called digitizing or sampling.
FireWire:
The most common form of non-broadcast quality video capture is via a FireWire
configuration. This involves the use of a suitably configured personal computer which
has a FireWire connection (IEEE 1394 standard port), a FireWire cable and a digital
record/replay device such as a digital camcorder. The FireWire-enabled software allows
the replayed signal from the camcorder to be loaded and stored onto the computer's
hard disk or disk array. Both the original signal and the stored data are heavily
compressed using hardware compression built into the camcorder, thereby reducing the
data throughput to a manageable level.
Video Capture Card:
Prior to the introduction of the FireWire system, most video capture was facilitated by
plug-in cards such as NuBus or PCI cards, which are inserted into the main computer
bus from the rear slots of the computer. These cards capture analog or digital video and
store it as either
compressed or non-compressed digital video on either the hard disk or external RAID
array. Video capture cards are still used particularly for professional non-compressed
non-linear editing systems because of their faster processing and real-time effects
capabilities.
Video Formats:
There are a number of industry standard video formats in use around the World as well as
a range of non-standard formats that can be used in multimedia video clips. The
following are examples of the various formats available:
Analog Broadcast Systems:
PAL (Phase Alternation Line): Analog color video standard used in Australia,
most of Europe and parts of Asia and Africa. It has a frame size of 768 x 576
pixels (when converted to digital) and a frame rate of 25 frames per second.
NTSC (National Television Systems Committee): Analog color video standard
used in America, Japan and some parts of Asia. It has a frame size of 640 x 480
pixels (when converted to digital) and a frame rate of 29.97 frames per second.
decompressed. Here again, concatenation rears its ugly head, so most broadcasters and
DVD producers leave encoding to the last possible moment.
Several manufacturers have developed workarounds to deliver editable MPEG-2 systems.
Sony, for instance, has introduced a format for professional digital camcorders and VCRs
called SX, which uses very short GOPs (four or fewer frames) of only I and P frames. It
runs at 18 Mbit/s, equivalent to 10:1 compression, but with an image quality comparable
to M-JPEG at 5:1. More recently, Pinnacle has enabled the editing of short-GOP, IP-
frame MPEG-2 within Adobe Premiere in conjunction with its DC1000 MPEG-2 video
capture board. Pinnacle claims its card needs half the bandwidth of equivalent M-JPEG
video, allowing two video streams to be played simultaneously on a low-cost platform
with less storage.
Faced with the problem of editing MPEG-2, many broadcast manufacturers sitting on the
ProMPEG committee agreed on a professional version that could be more easily handled,
known as MPEG-2 4:2:2 Profile@Main Level. It is I-frame only and allows for high data
rates of up to 50 Mbit/s which have been endorsed by the European Broadcasting Union
and its US counterpart, the Society of Motion Picture Television Engineers (SMPTE), for
a broad range of production applications. Although there's no bandwidth advantage over
M-JPEG, and conversion to and from other MPEG-2 streams requires recompression, this
I-frame-only version of MPEG-2 is an agreed standard, allowing material to be shared
between systems. By contrast, NLE systems that use M-JPEG tend to use slightly
different file formats, making their data incompatible.
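Why long GOPs frustrate editing can be shown with a toy model: only I frames are self-contained pictures, so a cut that avoids re-encoding must land on one. The GOP strings below are illustrative patterns, not output from any particular encoder:

```python
def legal_cut_points(gop_pattern: str):
    """Indices where a frame sequence can be cut without re-encoding.
    Only I frames carry a complete picture; P and B frames depend on
    neighbours, so cutting there would orphan their reference frames."""
    return [i for i, f in enumerate(gop_pattern) if f == "I"]

long_gop  = "IBBPBBPBBPBBIBBPBBPBBPBB"   # typical 12-frame broadcast GOP
short_gop = "IPIPIPIPIPIP"               # SX-style short GOP (I and P only)
i_only    = "IIIIIIIIIIII"               # 4:2:2P@ML style: cut anywhere
```

With the long GOP a legal cut occurs only every 12 frames; the short GOP allows one every 2 frames; the I-frame-only stream can be cut at any frame, which is exactly why the professional profiles favour it.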
In the mid-1990s the DV format was initially pitched at the consumer marketplace.
However, the small size of DV-based camcorders coupled with their high-quality
performance soon led to the format being adopted by enthusiasts and professionals alike.
The result was that by the early 2000s - when even entry-level PCs were more than
capable of handling DV editing - the target market for NLE hardware and software was a
diverse one, encompassing broadcasters, freelance professionals, marketers and home
enthusiasts.
Despite all their advantages, DV files are still fairly large, and therefore need a fast
interface to facilitate the transfer from the video camera to a PC. Fortunately, the answer
to this problem has existed for a number of years. Apple Computer originally developed
the FireWire interface technology, which has since been ratified as international standard
IEEE 1394. Since FireWire remains an Apple trademark, most other companies use the
IEEE 1394 label on their products; Sony refers to it as "i.LINK". When it was first
developed, digital video was in its infancy and there simply wasn't any need for such a
fast interface technology. So, for several years it was a solution to a problem that
didn't exist. Originally representing the high end of the digital video market, IEEE 1394
editing systems have gradually followed digital camcorders into the consumer arena.
Since FireWire carries DV in its compressed digital state, copies made in this manner
ought, in theory, to be exact clones of the original. In most cases this is true. However,
whilst the copying process has effective error masking, it doesn't employ any error
correction techniques. Consequently, it's not unusual for video and audio dropouts to be
present after half a dozen or so generations. It is therefore preferred practice to avoid
making copies from copies wherever possible.
By the end of 1998 IEEE 1394-based editing systems remained expensive and aimed
more at the professional end of the market. However, with the increasing emphasis on
handling audio, video and general data types, the PC industry worked closely with
consumer giants, such as Sony, to incorporate IEEE 1394 into PC systems in order to
bring the communication, control and interchange of digital, audio and video data into the
mainstream. Whilst not yet ubiquitous, the interface had become far more common by the
early 2000s, not least through the efforts of audio specialist Creative, which effectively
provided a "free" FireWire adapter on its Audigy range of sound cards, introduced in late
2001.
Digital Video
Understanding what digital video is first requires an understanding of its ancestor -
broadcast television or analogue video. The invention of radio demonstrated that sound
waves can be converted into electromagnetic waves and transmitted over great distances
to radio receivers. Likewise, a television camera converts the color and brightness
information of individual optical images into electrical signals to be transmitted through
the air or recorded onto videotape. Similar to a movie, television signals are converted
into frames of information and projected at a rate fast enough to fool the human eye into
perceiving continuous motion. When viewed by an oscilloscope, the unprojected
analogue signal looks like a brain wave scan - a continuous landscape of jagged hills and
valleys, analogous to the ever-changing brightness and color information.
There are three forms of TV signal encoding:
most of Europe uses the PAL system
France, Russia and some Eastern European countries use SECAM, which differs
from the PAL system only in detail, though sufficiently to make the two
incompatible.
The USA and Japan use a system called NTSC.
With PAL (Phase-Alternation-Line) each complete frame is drawn line-by-line, from top
to bottom. Europe uses an AC electric current that alternates 50 times per second (50Hz),
and the PAL system ties in with this to perform 50 passes (fields) each second. It takes
two passes to draw a complete frame, so the picture rate is 25 fps. The odd lines are
drawn on the first pass, the even lines on the second. This is known as interlaced, as
opposed to an image on a computer monitor, which is drawn in one pass, known as non-interlaced.
Interlaced signals, particularly at 50Hz, are prone to unsteadiness and flicker,
and are not good for displaying text or thin horizontal lines.
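The two-pass interlaced scan described above can be sketched as a simple "weave" of odd and even scan lines. This is a toy model (real deinterlacing must also cope with motion between fields):

```python
def weave(odd_field, even_field):
    """Interleave two fields (lists of scan lines) into one full frame.
    PAL draws the odd lines on the first pass and the even lines on the
    second, 50 fields per second giving 25 complete frames."""
    frame = []
    for odd, even in zip(odd_field, even_field):
        frame.append(odd)
        frame.append(even)
    return frame

# A toy 6-line frame: lines 1, 3, 5 arrive in field one; lines 2, 4, 6 in field two.
field1 = ["line1", "line3", "line5"]
field2 = ["line2", "line4", "line6"]
full_frame = weave(field1, field2)
```

If the two fields are swapped (wrong field dominance), every pair of lines comes from the wrong moment in time, producing the flickering superimposition described earlier.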
PCs, by contrast, deal with information in digits - ones and zeros, to be precise. To store
visual information digitally, the hills and valleys of the analogue video signal have to be
translated into the digital equivalent - ones and zeros - by a sophisticated computer-on-a-
chip, called an analogue-to-digital converter (ADC). The conversion process is known as
sampling or video capture. Since computers have the capability to deal with digital
graphics information, no other special processing of this data is needed to display digital
video on a computer monitor. However, to view digital video on a traditional television
set, the process has to be reversed. A digital-to-analogue converter (DAC) is required to
decode the binary information back into the analogue signal.
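As a rough sketch of what an ADC does, the toy function below samples a "continuous" signal at regular intervals and quantizes each sample to an 8-bit level. Real converters do this in dedicated hardware, with far more care about filtering and noise; the 1 kHz tone and 8,000 Hz rate are illustrative choices:

```python
import math

def sample_and_quantize(signal, sample_rate, duration, bits=8):
    """Sample a continuous signal at regular intervals and quantize each
    sample to an integer level: the essence of analogue-to-digital conversion."""
    levels = 2 ** bits
    samples = []
    n = int(sample_rate * duration)
    for i in range(n):
        t = i / sample_rate
        v = signal(t)                          # value in the range -1.0 .. 1.0
        level = int((v + 1.0) / 2.0 * (levels - 1))
        samples.append(level)
    return samples

# A 1 kHz sine "analogue" signal, sampled 8,000 times per second.
wave = lambda t: math.sin(2 * math.pi * 1000 * t)
digital = sample_and_quantize(wave, 8000, 0.001)   # 8 samples of one cycle
```

The jagged analogue waveform becomes a list of integers in the range 0 to 255; a DAC performs the reverse mapping to recover an analogue signal.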
4.5) Summary
Multimedia building blocks are the technologies used to manipulate text, data, images,
voice and full-motion video objects.
Multimedia is a system that supports the interactive use of text, audio, still images, video
and graphics. Each of these elements must be converted in some way from analog form to
digital form before it can be used in a computer application.
4.8) Assignments
1. Discuss the steps in adding sound to multimedia project.
2. Discuss the method of preparing HTML documents.
4.10) Learning Activities
1. Brief out the stages in capturing and editing images.
2. Brief out the steps in preparing digital audio files.
4.11) Key words
DirectSound3D (DS3D) was introduced in DirectX 3.0 and allowed developers to place
a sound anywhere in 3D space.
Dolby Digital (AC-3): The AC-3 (Audio Code number 3) surround sound standard was
created by Dolby Laboratories.
Dolby Digital EX and DTS ES: These are new surround audio standards that add an
additional surround sound channel to the 5.1 picture – the rear centre channel.
Dolby Pro Logic: This is an older standard that packs audio information for a centre
and a surround channel into your normal stereo channel.
DTS: An acronym for Digital Theater Systems, this is a surround sound standard developed
with the backing of Steven Spielberg and is a competitor to Dolby Digital.
Duplex: A full-duplex soundcard can play and record sounds at the same time.
MIDI Channels: These channels offer a musician greater control over the instruments
connected to the soundcard.
Polyphony: The maximum number of voices a synthesizer can play at any one time.
Signal – to – Noise Ratio (SNR): SNR is the ratio of the largest sound signal that can be
handled by a card with minimum distortion, to the noise that is present at that time.
Sony/Philips Digital Interface (S/PDIF): A standard for transmitting data in a
lossless digital format to preserve sound quality.
Stereo Crosstalk: Crosstalk is the unwanted mixing of the left and right channel sound
information.
THX: An acronym for Tomlinson Holman's eXperiment, a quality-certification standard
for cinema and home-theater audio playback.
Total Harmonic Distortion (THD): Non-linear distortion is a processing error that
creates output signals at frequencies that are not necessarily present in the input.
Wavetable: A Wavetable stores the digitized samples of actual recorded instruments,
which are then combined during music creation and playback.
UNIT-V
5.0) Introduction
Internet multimedia is a rapidly evolving field. A great deal of work is going on to improve
the quality of multimedia on the network. The challenge is massive, all the more so because
of the limited access devices already in our hands.
Media on the Internet
Because audio and video files can be large, and to avoid a long wait before you can start
playing them, streaming was invented. Streaming enables your computer to play the
beginning of an audio or video file while the rest of the file is still downloading. If the
file arrives more slowly than your computer plays it, the playback has gaps while your
computer waits for more data to play. (Players usually display a "Buffering..."message
when this happens.) Several streaming formats are widely used on the Web, and you can
install plug-ins and ActiveX controls to enable your browser to play them.
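The buffering behaviour described above can be sketched with a toy simulation: if data arrives slower than it plays, playback stalls. All rates and figures below are illustrative, not measurements of any real player:

```python
def simulate_playback(arrival_rate, playback_rate, duration, buffer_start=0.0):
    """Count seconds of stalled ('Buffering...') time when a stream arrives
    slower than it plays. Rates are in the same units (e.g. kilobits/second)."""
    buffered = buffer_start
    stalled = 0
    for _ in range(duration):              # one-second simulation steps
        buffered += arrival_rate
        if buffered >= playback_rate:
            buffered -= playback_rate      # enough data: play this second
        else:
            stalled += 1                   # starved: wait for more data
    return stalled

# A 128 kbit/s stream over a fast link never stalls; over a 64 kbit/s
# link it spends half its time waiting for data.
fast_link = simulate_playback(arrival_rate=300, playback_rate=128, duration=60)
slow_link = simulate_playback(arrival_rate=64,  playback_rate=128, duration=60)
```

This is also why players pre-fill a buffer before starting: a larger `buffer_start` absorbs short dips in arrival rate without interrupting playback.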
5.1) Objective
To understand the role of multimedia in Internet and its applications
5.2) Content
5.2.1. Multimedia and the Internet
Once upon a time, not so long ago, the words Internet and multimedia were rarely mentioned
in the same sentence. Although you could download GIF images or sound files from an
FTP site and then view or listen to them on your PC, the Internet experience itself was far
from a multimedia extravaganza. Indeed, until the advent of Mosaic and the phenomenal
popularity of the World Wide Web, accessing the Internet was like reading the front page
of the Wall Street Journal: lots of good information, but gray, without pictures, and dull
on the eyes.
In 1993, a computer program called Mosaic changed all that. Mosaic is a browser -- a
program that allows users to use the Internet's World Wide Web. For the first time, true
multimedia -- the mixing of various media such as text, images, sounds, and movies --
came to the Internet. Today, not only can you download those sorts of files, but you can
also experience them while you are online. And, if you have anything to say, you can
even present your information, complete with mixed media, on your own Web page.
The World Wide Web continues to grow in popularity, but most of us have limited
bandwidth resources. We use poky 9600 bps and 14.4 Kbps modems to send and receive
data, but in the world of full multimedia we're going to need much faster access. After all,
14.4 Kbps means 14,400 bits of information every second, and even with good data
compression technology, we're lucky to hit 38,800 bits per second regularly. At these
speeds, video or audio files that are more than a few minutes long can take an hour or
more to transfer to a PC, so if you're waiting to see Gone with the Wind or hear Wagner's
entire Ring cycle, forget it. Even users who are lucky enough to access the Internet with a
28.8 Kbps modem get tired of waiting for things to download.
As a result of this bottleneck, most people get only text and graphics files from the Web.
Text and still image files are generally small, so you don't need to wait too long to view
them, but anyone who has waited for a graphically heavy Web site, such as Time
Warner's Pathfinder, soon realizes how frustrating even this experience can be. Although
audio and animation are both possible on the Web, you need a much faster connection (or
the patience of a saint) to send and receive the huge audio and video files that would
enable you to take full advantage of Internet multimedia.
Figure 1-1: Time Warner's Pathfinder Web site -- a great resource, but the images can
make it slow going
On the Internet, and typically in real life, new technologies are first available to a core
group of inventors and experimenters. If the new technology is good enough, or
interesting enough, or worthwhile enough, word gets out. Other folks begin to hear about
the wonders of the new technology, and they want to try it. They find out what they need,
and then they spend whatever time and money is necessary. Slowly, the technology gains
wider and broader acceptance, with more and more people taking part, until at last it
becomes so common that it's practically a household word. Consider, for example,
electronic mail or the World Wide Web.
The key advantage of the Internet is that it provides a widely used and uniform
communication medium to link users together to access or deliver multimedia
information. However, when using the Internet as a vehicle for multimedia delivery, one
must be aware of the following considerations.
Bandwidth:
Determines how much information can be transmitted efficiently. The bandwidth
required depends primarily on the type of data being transmitted. Text has the lowest
bandwidth requirement, at one byte per character, with graphics, audio and video
requiring significant increases in bandwidth to move information.
Application:
The type of software used for delivery of the information. The Internet has spawned the
development of a number of facilities, such as mail services, file transfer and the Web, to
store and deliver information. Traditional multimedia products can benefit from the real-time
nature of data delivery across the Internet.
Bandwidth considerations:
Bandwidth is based on how much information can be transmitted in a given period of
time. It is dependent on the communication devices and the transmission medium. Data
within the computer moves at rates of 10 to 50 megabits per second, often using parallel
connections. Unfortunately, outside the computer, transmission speeds are much slower,
because they rely on much slower, serial-based local area networks and commercial
telephone lines linking vast numbers of users from home-based computers to Internet
service providers. Typical Internet connectivity speeds therefore range as follows:
Low-end (modem): 14,400 and 28,800 bits per second (1,800 to 3,600 bytes
per second under ideal conditions). Modem speeds over analog commercial
phone systems are limited to speeds below 56,000 bits per second.
Mid-range (ISDN, or Integrated Services Digital Network): 56,000 bits per
second (digital transmission). May include a second channel, for 112,000
bits per second.
High-speed (Ethernet network): 10,000,000 bits per second (1,250,000
bytes per second under ideal conditions), most commonly found in office-based
network systems. The speed of the Internet connection is also
dependent on the number of users and how these networks are configured.
Before we proceed any further, let us review the different multimedia data types and
their average file sizes. Typical file sizes for several types of multimedia data are shown
below.
It should be noted that the transmission speeds listed for the modem and the Ethernet
network are ideal and do not account for actual conditions such as telephone transmission
limitations, the number of users sharing a network, and physical hardware limitations.
Actual network performance would probably be 5 to 20 times less than the values shown
in the chart. For example, downloading a full-screen bitmap picture could take up to five
minutes via a modem operating at 28,800 bits per second. The table below summarizes
typical sizes and ideal transfer times.
Data type, size and typical sample, with ideal transfer times (modem at 28.8 Kbps,
about 3,600 bytes per second; Ethernet at 10 Mbps, about 1,250,000 bytes per second):

Text: 1 character (ASCII) = 1 byte. Typical sample: 1 page of text (100
characters/line, 30 lines) = 3,000 bytes.

Pictures: 1 pixel, 256 colors = 1 byte. Typical sample: full screen (640 x 480
pixels), 256-color bitmap = 330,000 bytes. Transfer time: 91 seconds by modem;
0.26 seconds by Ethernet.

Compressed graphics: compressed file = 75,000 to 200,000 bytes (depending on
detail and colors). Transfer time: 20 to 55 seconds by modem; 0.06 to 0.13
seconds by Ethernet.

Audio: voice, one minute = 600,000 bytes (166 seconds by modem; about 0.5
seconds by Ethernet). Music, one minute = 10.5 megabytes (3,000 seconds, or 50
minutes, by modem; 8.4 seconds by Ethernet).

Video: one minute, uncompressed = 414 megabytes. Transfer time: 115,000
seconds (about 32 hours) by modem; 330 seconds (5.5 minutes) by Ethernet.
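The transfer times above follow from simple arithmetic: bytes times eight, divided by the link's bit rate. A minimal sketch (the `efficiency` factor is an illustrative assumption modelling the 5-to-20-times overhead mentioned earlier, not a measured figure):

```python
def transfer_seconds(size_bytes, link_bps, efficiency=1.0):
    """Ideal transfer time: file size in bits over usable link speed.
    efficiency < 1.0 models real-world overheads (the text suggests actual
    throughput may be 5 to 20 times below the ideal figure)."""
    return size_bytes * 8 / (link_bps * efficiency)

bitmap = 330_000                                   # full-screen 256-color bitmap
modem = transfer_seconds(bitmap, 28_800)           # ~91.7 s, the table's "91 seconds"
ethernet = transfer_seconds(bitmap, 10_000_000)    # ~0.26 s
```

Passing, say, `efficiency=0.2` reproduces the "5 times slower" realistic estimate for a shared or noisy link.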
Access to “real time” multimedia data (text, pictures, audio, and video).
This content complements or replaces existing application content that is
originally delivered on CD-ROM.
Dynamic reorganization of content; that is, downloading new instructions
to change the layout, presentation and even the look and feel of a
multimedia product.
Internet-specific capabilities of these functions include:
Access to and downloading of new files for a multimedia presentation.
Multimedia content can be transparently replaced in multimedia products
without the user ever knowing.
Mail access to distribute new information. End users of multimedia
products can be notified of updates to their applications, or of new
applications that might be of interest.
Web page presentation to take advantage of Hypertext Markup Language
(HTML) document encoding.
Browser based multimedia delivery
Browser-based (web-based) multimedia delivery technology offers a number of
advantages over local data delivery via CD-ROM or other high-density storage media:
Access to server-based resources such as applications and databases.
Information on servers can be continuously updated and distributed to a
large number of end users.
Add-on software modules that enhance the behavior or performance of the
browser. Browser software applications can be given new functionality,
such as the ability to present multimedia animation files, with add-on
software modules.
Programming and scripting languages that add functionality to the
browser. Web documents with embedded scripts enable programmed
functions to be added to browser applications.
The ISP may then connect to a larger network and become part of their network. The
Internet is simply a network of networks.
Most large communications companies have their own dedicated backbones connecting
various regions. In each region, the company has a Point of Presence (POP). The POP is
a place for local users to access the company's network, often through a local phone
number or dedicated line. The amazing thing here is that there is no overall controlling
network. Instead, there are several high-level networks connecting to each other through
Network Access Points or NAPs.
Internet Functioning
The reason that the Internet works at all is that every computer connected follows a
common protocol.
5.2.3. Internetworking
Note: Routers were originally called gateways, but that term was discarded in this
context, due to confusion with functionally different devices using the same name.
It is interesting to note that some people inaccurately refer to the connecting together of
networks with bridges as internetworking, but the resulting system mimics a single
subnetwork, and no internetworking protocol (such as IP) is required to traverse it.
However, a single computer network may be converted into an internetwork by dividing
the network into segments and then adding routers or other layer 3 devices between the
segments.
The original term for an internetwork was catenet. Internetworking started as a way to
connect disparate types of networking technology, but it became widespread through the
developing need to connect two or more local area networks via some sort of wide area
network. The definition now includes the connection of other types of computer networks
such as personal area networks.
IP only provides an unreliable packet service across an internet. To transfer data reliably,
applications must utilize a Transport layer protocol, such as TCP, which provides a
reliable stream. (These terms do not mean that IP is actually unreliable, but that it
sends packets without first contacting and establishing a connection with the destination
beforehand; "reliable" implies the opposite.) Since TCP is the most widely used
transport protocol, people commonly refer to TCP and IP together, as "TCP/IP". Some
applications occasionally use a simpler transport protocol (called UDP) for tasks which
do not require absolutely reliable delivery of data, such as video streaming.
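The difference can be sketched with Python's standard socket module: TCP establishes a connection and delivers a reliable byte stream, while UDP simply fires datagrams with no guarantee of delivery. The loopback addresses and ports are illustrative (port 9 is the traditional "discard" port):

```python
import socket
import threading

def tcp_roundtrip(payload: bytes) -> bytes:
    """TCP: a connection is established first, then delivery is reliable and ordered."""
    server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    server.bind(("127.0.0.1", 0))          # 0 = let the OS pick a free port
    server.listen(1)
    port = server.getsockname()[1]

    def echo():
        conn, _ = server.accept()
        conn.sendall(conn.recv(1024))      # echo whatever arrives
        conn.close()

    t = threading.Thread(target=echo)
    t.start()
    client = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    client.connect(("127.0.0.1", port))    # the handshake happens here
    client.sendall(payload)
    reply = client.recv(1024)
    client.close()
    t.join()
    server.close()
    return reply

def udp_send(payload: bytes) -> int:
    """UDP: no connection, no delivery guarantee -- fire and forget."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sent = sock.sendto(payload, ("127.0.0.1", 9))  # may vanish silently
    sock.close()
    return sent
```

For streaming video, UDP's willingness to drop a late packet and move on is exactly the behaviour wanted; a retransmitted frame that arrives after its play time is useless.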
5.2.4. Connections
Types of Internet Connections
There are several types of Internet connections available for home and small office
connectivity. The following section will address fundamentals of installation and
security for the primary connections available today.
Dial Up Connections
Dial-up Internet connections are the most common and most readily available for home
and small business users. Dial up connections are easy to set up and use, and generally
inexpensive.
Setting up a dial-up connection requires:
A Windows dial-up networking session installed and configured for the ISP. This
would include the access phone number for the ISP, your account name and
password.
An analog phone line connected to the modem.
The configuration for a dial-up networking session in Windows 2000 is found under
Start/Control Panel/Network and Dial-up Connections.
Bonded Analog Dial-up is relatively new to the market, and not supported by all ISPs.
Bonded dial-up requires two phone lines, and an ISP account supporting Multilink PPP
(Multilink Point-to-Point Protocol).
Bonded analog dials two phone lines simultaneously, and links them into one larger pipe
for Internet connectivity. Typical connection speeds for this service range from 76kbps to
98kbps.
One of the most desirable new options for agency connectivity is DSL (Digital
Subscriber Line) service, which operates on the same copper wire transmission lines as
Plain Old Telephone Service (POTS). DSL provides practical connection speeds of up to
1.5 Mbps in areas where the service is available.
Advantages of DSL
Providers offer several options for connection speed, up to 1.5 Mbps.
"Always on" connection --no dialing in
Can support a large number of users in the office from one connection.
Limitations of DSL
Not available in all areas.
In areas where available, the transmission format of DSL limits connections to within
approximately 18,000 feet of a Local Exchange Carrier’s Central Office (CO).
How DSL works
DSL is transmitted over Plain Old Telephone (POTS) lines. Part of the bandwidth of the
normal line, outside the range of normal voice communication, is used to transmit a
digital signal, which is decoded by a DSL modem on the receiving end.
Because analog transmission uses only a small portion of the capacity available on
copper wires, the maximum amount of data that you can receive using ordinary modems
is about 56 Kbps.
Normal dial-up transmissions are analog. This means the ability of your computer to send
and receive information is limited, because the telephone company takes information
from the Internet that arrives as digital data, puts it into analog form for your telephone
line, and requires your modem to change it back to digital. In other words, the limited
bandwidth of the analog transmission between your home or business and the phone
company is a bandwidth bottleneck.
DSL does not require data to be changed into analog form and back. Digital data is
transmitted directly to your computer as digital data, allowing the Phone Company to use
more bandwidth for transmitting it to you.
If you choose, the signal can be separated so some of the bandwidth is used to
simultaneously transmit an analog signal so you may use your telephone and computer on
the same line at the same time.
Types of DSL
ADSL
ADSL (Asymmetric Digital Subscriber Line) is the form of DSL most familiar to home
and small business users. ADSL is called "asymmetric" because most of its two-way or
duplex bandwidth is devoted to the downstream direction, sending data to the user. Only
a small portion of bandwidth is available for upstream or user-interaction messages.
Most Internet sites, especially with graphics- or multi-media intensive data, need lots of
downstream bandwidth, but user requests and responses are small and require little
upstream bandwidth. Using ADSL, up to 6.1 megabits per second of data can be sent
downstream and up to 640 Kbps upstream.
The high downstream bandwidth means your telephone line will be able to bring motion
video, audio, and 3-D images to your computer or hooked-in TV set. In addition, a small
portion of the downstream bandwidth can be devoted to voice rather than data, and you
can use your phone without requiring a separate line.
In many cases, your existing telephone lines will work with ADSL. In some areas, they
may need upgrading.
CDSL
CDSL (Consumer DSL) is a trademarked version of DSL that is somewhat slower than
ADSL (1 Mbps downstream, less upstream) but has the advantage that a "splitter" does
not need to be installed at the user's end. Rockwell, which owns the technology and
makes a chipset for it, believes that phone companies should be able to deliver it in the
$40-45 a month price range. CDSL uses its own carrier technology rather than DMT or
CAP ADSL technology.
G.Lite or DSL Lite
G.Lite (also known as DSL Lite, Splitterless ADSL, and Universal ADSL) is essentially a
slower ADSL that doesn't require splitting of the line at the user end but manages to split
it for the user remotely at the telephone company. This saves the cost of what the phone
companies call "the truck roll." G.Lite (officially, ITU-T standard G.992.2) provides a
data rate from 1.544 Mbps to 6 Mbps downstream and from 128 Kbps to 384 Kbps
upstream. G.Lite is expected to become the most widely installed form of DSL.
HDSL
The earliest variation of DSL to be widely deployed has been HDSL (High bit-rate DSL),
used for wideband digital transmission within a corporate site and between the Telephone
Company and a customer. The main characteristic of HDSL is that it is symmetrical: an
equal amount of bandwidth is available in both directions. For this reason, the maximum
data rate is lower than for ADSL. HDSL can carry as much on a single wire of twisted-
pair as can be carried on a T1 line in North America or an E1 line in Europe (2,320
Kbps).
IDSL
IDSL (ISDN DSL) is somewhat of a misnomer since it's really closer to ISDN data rates
and service at 128 Kbps than to the much higher rates of ADSL.
RADSL
RADSL (Rate-Adaptive DSL) is an ADSL technology in which the modem determines
the rate at which signals can be transmitted on a given line and adjusts the delivery
rate accordingly.
VDSL
VDSL (Very high data rate DSL) is a developing technology that promises much higher
data rates over relatively short distances (between 51 and 55 Mbps over lines up to 1,000
feet or 300 meters in length). It's envisioned that VDSL may emerge somewhat after
ADSL is widely deployed and co-exist with it. The transmission technology (CAP, DMT,
or other) and its effectiveness in some environments are not yet determined. A number of
standards organizations are working on it.
If you plan to share a DSL connection between several PCs in an office, the best solution
is to purchase a DSL-ready router. This device allows one Internet IP address to be
shared across your local area network, so you only need to pay for a single connection.
Security Considerations
Since a DSL connection is 'always on', meaning you do not have to dial a number to use
it, it is also a good idea to consider some firewall features when selecting a DSL router.
There are many excellent DSL routers on the market from vendors such as Nortel,
Cayman, Cisco and Flowpoint. All of these manufacturers make DSL routers which
operate in a similar fashion. If the DSL provider in your area does not provide a router as
part of the installation agreement, you will have to select and purchase a router yourself.
The Routers and Modems tutorial has more in depth information regarding routers and
modems.
If you are selecting a DSL router for use on a peer-to-peer or client-server based
Windows network, it is important to make sure it includes these features:
Even if you are running 100 Mbps Ethernet on the local area network, a 10 Mbps
connection to the router is all that is necessary for DSL, since the maximum bandwidth of
the DSL service will be under 1.5 Mbps in any case.
Firewall features on the router are imperative, since DSL is a constant connection. This
protects your network from attack by outside intruders. NAT also provides some
protection for your network, since it "hides" the internal network addresses from the
Internet. Please see the Firewalls and Virus protection tutorial for more information.
If you are purchasing a router separately from your DSL provider, you also need to make
sure the model you choose is compatible with any other equipment the provider may be
installing. The DSL provider and the router manufacturer should be able to verify this.
1. Construct a peer to peer network in the office. See the Small Network
Fundamental module for more information.
2. Set up each network client to use DHCP to obtain an IP address on the network.
Go to Start/Settings/Control Panel/Network and open the TCP/IP properties.
Under the DHCP tab, enable DHCP for obtaining client address. Do this on each
PC in the office.
3. Follow the instructions provided with your DSL router for enabling DHCP.
Assign an IP address range to the DHCP pool for your network clients.
4. Make sure NAT (network address translation) is set up on the router. See the
Routers and Modems tutorial.
5. Assign a static IP address for your DSL router to be used as the Default Gateway.
This Default Gateway is the device on your network that clients will use to access the
Internet. In this case, it is your DSL router. Make sure the static IP address is in
the SAME SUBNET as the range you used for the DHCP client pool.
For example, if the range for the DHCP pool was 10.0.0.2 to 10.0.0.10, then use
10.0.0.1 for the default gateway (your DSL router). It is common practice in TCP/IP
usage to use the first address on a subnet for the router.
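The subnet rule above can be checked programmatically. This Python sketch uses the standard ipaddress module and mirrors the example addresses (10.0.0.1 gateway, 10.0.0.2 to 10.0.0.10 pool):

```python
# Sketch of the subnet check described above, using Python's ipaddress
# module. Addresses mirror the example: DHCP pool 10.0.0.2-10.0.0.10,
# gateway (DSL router) at 10.0.0.1.
import ipaddress

network = ipaddress.ip_network("10.0.0.0/24")
gateway = ipaddress.ip_address("10.0.0.1")
pool = [ipaddress.ip_address(f"10.0.0.{i}") for i in range(2, 11)]

# The gateway must sit on the same subnet as every pool address,
# or clients will be unable to reach the Internet.
assert gateway in network
assert all(addr in network for addr in pool)
print("Gateway and DHCP pool share subnet", network)
```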
You should be in business for shared Internet access! Go to a client computer, restart it,
open up Internet Explorer and make sure it works. If it does not, check the following:
1. Check the steps above. Did you use IP addresses from the same subnet? (use the
10.0.0.0 subnet -- this is easiest, and correct usage of the protocol).
2. Check the settings in Internet Explorer to make sure it is not forcing a dial-up
connection. Go to Tools/Internet Options/Connections and make sure "Never Dial
a Connection" is selected.
Cable Internet
Cable modems have some positive points, and several negative ones. At the present time,
cable companies offer Internet service to many areas where DSL is not yet available.
However, over the next few years, DSL will probably become more pervasive than cable
Internet service.
These items, while not necessarily reasons "not to use cable Internet access", are
important considerations when discussing this option with a customer or weighing a cable
modem solution against DSL.
This means that when the cable between a local cable company (usually fiber optic) and
your neighborhood or business area is split into many connections to various homes and
businesses, you are on the same network as everyone connected to the main feed. This is
not necessarily bad, but it does require some considerations regarding security and
bandwidth.
For security reasons, the cable modem system is designed so any modem on the network
can communicate only with the CMTS (Cable Modem Termination System) at the cable
company’s office, and not with other cable modems. The newest version of the
DOCSIS protocol used for cable Internet communication is encrypted, so this will
eliminate some of the security concerns. New equipment is required for the cable
companies to take advantage of this protocol, and it will be some time before this
equipment is installed on a widespread basis.
Because of the shared connections, and the fluctuating nature of bandwidth availability,
most cable companies are unable to offer "business class service" which would guarantee
a certain Quality of Service (bandwidth and availability) with a Service Level
Agreement. A few companies "guarantee" available bandwidth by placing equipment on
their systems which monitors usage and availability, and by restricting the number of
users it installs on a segment. In theory, this should work OK, but it is generally more
expensive than regular residential class cable service, since the number of customers on a
segment needs to be restricted.
This is really the least problematic issue regarding cable modem connections, because if
you have taken the proper steps to secure your Internet connection with a firewall, it
doesn’t really matter if you are on a shared segment or not. One exception is with email.
On the shared network portion of the cable segment it is theoretically possible for
other users to intercept unencrypted traffic such as e-mail.
ISDN:
ISDN (Integrated Services Digital Network) is a switched digital service that provides
data rates of up to 128 Kbps over ordinary telephone lines.
Web pages are connected to one another using hypertext. This is a method of presenting
information in which certain text is highlighted. The highlighted text is a link to other
pages that have more information on that particular topic. Thus the user can move from
one page to other linked pages via the hypertext link.
The World Wide Web is non-linear; that is, it has no top and no bottom. This implies
that one does not have to follow a fixed path to access information. Thus, a user can:
move from one link to another,
go directly to a link if its address is known, or
jump to a specific part of a document.
To navigate the World Wide Web, the users need to have browser software like the
Internet Explorer and Netscape Navigator.
Web Applications

Application: Service

Information Retrieval: Exploring the Web and retrieving information from the Net.
Electronic Mail: The most widely used tool to send and receive messages electronically
on a network.
Search Engine: A program that searches through a database of Web pages for particular
information.
Chat: Online textual talk is called chatting.
Video Conferencing: A two-way videophone conversation among multiple participants.
FTP: File Transfer Protocol, which defines a method for transferring files from one
computer to another over a network.
Telnet: An Internet utility that lets you log onto remote computer systems.
Newsgroup: A newsgroup or forum is an online community bulletin board, where users
can post messages, respond to posted messages, or just read them. Groups of related
messages are known as threads.
Elements of the Web
Element: Function

Clients and Servers: A Web server is a computer connected to the Internet that runs a
program responsible for storing, retrieving, and distributing some of the Web’s files. A
Web client, or Web browser, is a computer that requests files from the Web.

Web Languages and Protocols: Computers connected to the Internet must share a
well-defined set of languages and protocols that are independent of the hardware or
operating systems on which they run.

URLs and Transfer Protocols: Each file on the Internet has an address, called a Uniform
Resource Locator (URL). The first part of a URL specifies the transfer protocol, the
method a computer uses to access the file (e.g. HTTP, FTP).

HTML: The Hypertext Markup Language (HTML) is the universal language of the Web.
It is used to lay out pages capable of displaying all the diverse kinds of information the
Web contains.

Java and JavaScript: Java is a language for sending small applications (called applets)
over the Web so that your computer can execute them. JavaScript is a language for
extending HTML to embed small programs called scripts in Web pages.

VBScript and ActiveX Controls: VBScript and ActiveX controls are Microsoft
technologies that work with Internet Explorer. VBScript, a language that resembles
Microsoft’s Visual Basic, can be used to add scripts to pages displayed by Internet
Explorer. ActiveX controls, like Java applets, are used to embed executable programs in
a Web page. When Internet Explorer encounters a page that uses an ActiveX control, it
checks whether that control is already installed on your computer and, if it isn’t,
installs it.

XML and Other Advanced Web Languages: The Extensible Markup Language (XML) is
a very powerful language that may replace HTML as the language of the Web. Currently,
XML is little more than a specification at the W3C, but it is expected to be implemented
in fifth-generation browsers. Besides XML, Cascading Style Sheets (CSS), the
Extensible Style Language (XSL), and Dynamic HTML are also in use.

Image Formats: Pictures, drawings, charts, and diagrams are available on the Web in a
variety of formats. The most popular formats for graphical information are JPEG and
GIF.

Audio and Video Formats: Some files on the Web represent audio or video, and they can
be played by browser plug-ins.

VRML: The Virtual Reality Modeling Language is the Web’s way of describing
three-dimensional scenes and objects. Given a VRML file, a browser can display the
scene or object as it would appear from any particular viewing location, and you can
rotate an object to view it from different angles.
Web servers are typically powerful computers capable of handling a large number of
telecommunication connections at any given point of time. Usually, they also have
gigabytes of hard disk storage, considerable random access memory (RAM) and a very
high-speed processor. In certain cases Web servers might actually be several computers
linked together, with each handling incoming Web page requests.
A Web server runs special Web server software that reads requests sent from Web
browsers, and retrieves and sends the appropriate information to the computer from
where the request has come (called a client computer). Web servers normally have
dedicated links to the Internet backbone.
Web sites generally consist of several displays called pages. Accessing a site brings the
first page to the screen via the telephone connection. This page contains text and, in
many cases, illustrations. Certain illustrations and passages of text are "hot"--they
contain electronic links to other pages or even other Web sites. The user can access
another page or site by selecting the "hot" area--with a mouse, for example. The browser
manages all the switching between links.
Many browsers store a user's most frequently accessed sites in files called caches in the
user's computer. When a user visits a site, the browser checks to see whether that site has
changed since the user's last visit. If the site has not changed, the browser loads it from
the cache, creating a copy of it in the computer's memory. Loading from the cache is
much faster than loading via the telephone connection. Browsers constantly update the
caches, removing sites that have not been visited recently to make room for other sites.
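The cache-checking behaviour described above can be sketched in a few lines of Python (the URLs, version tags, and page contents are invented for illustration):

```python
# Minimal sketch of the cache behaviour described above: a browser-style
# cache that reloads a page only when its recorded version has changed.
# Page contents and version tags are invented for illustration.

cache = {}  # url -> (version_tag, content)

def fetch(url, server):
    """Return page content, using the cache when the server copy is unchanged."""
    tag, content = server[url]
    if url in cache and cache[url][0] == tag:
        return cache[url][1] + " (from cache)"
    cache[url] = (tag, content)   # update the cache with the new copy
    return content + " (from network)"

server = {"http://example.com/": ("v1", "Hello")}
print(fetch("http://example.com/", server))  # first visit: network
print(fetch("http://example.com/", server))  # unchanged: cache
server["http://example.com/"] = ("v2", "Hello again")
print(fetch("http://example.com/", server))  # changed: network again
```

Real browsers compare timestamps or validators supplied by the server rather than version tags, but the decision logic is the same.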
Elements of a Browser
Platform Support: Platform support is often confused with operating system support.
Examples of platforms are Windows, Unix, Mac, and OS/2, but there are different
variations of each platform, and these variations are the operating systems.
Interface: All browsers have a similar type of interface, with a menu and button bar
above the browser window.
Bookmarks: Bookmarks, while not the most important feature in a browser, are certainly
a very convenient item. Given the amount of information being placed on the web daily,
imagine having to write down every URL you wanted to remember, or trying to find the
same URL again! Browsers also let the user categorize and sort bookmarks into
sections, so their URLs are much easier to find in the bookmark list.
Mail and News: Browsers support sending and receiving HTML e-mail, so messages can
be sent with images, sound, and even Java effects.
HTML Support: Style sheets give users the same flexibility of design and layout that
desktop publishing programs do, by enabling them to attach styles (such as fonts, colors,
and spacing) to HTML pages. By applying separate style tags to HTML, web page
designers ensure that all browsers (that support CSS) can view the basic text and structure
of the Web page while more sophisticated designs can be presented.
The Internet Explorer Interface
The basic look and feel of Internet Explorer (IE) is very similar to Netscape, so users
making the transition from Netscape should experience very little difficulty using IE.
The main elements of the IE interface are described below.
Status Indicator
The Status Indicator lets you know when the Web page you want to view has
fully loaded into the browser window. If the Status Indicator looks like a spinning
globe, the page has not fully loaded. It's best to wait for the indicator to
stop spinning before you begin to interact with the Web page (clicking on links, scrolling,
etc.) to ensure the browser won't freeze.
Standard Toolbar
The Standard toolbar provides a series of buttons representing the most commonly used
features of IE 5.5.
Address Toolbar
The Address toolbar provides a textbox that allows you to enter the URL, or Web
address, of the site you would like to visit. The Address toolbar also displays the URL of
the page you are currently viewing.
Links Toolbar
The Links toolbar is a customizable bar that contains buttons that allow you to
quickly view sites you visit frequently.
Scroll Bar
The Scroll Bar appears when a Web page contains more information than can be seen
at a glance. It also gives you a hint of how big the page you are viewing is: if the box in
the Scroll Bar is more than half the size of the entire bar, you are currently viewing
more than half of the information contained on the Web page. If you plan to print the
current page, it's a good idea to use the Scroll Bar (if one is visible) to determine just
how big the page is.
Status Bar
The Status Bar shows the progress of the browser as it loads a page -- for example,
that a page is still being opened. Also, if you hover your mouse over a link on a page,
the Status Bar displays the URL of that link.
Netscape Features
Application Window
The main elements of the Netscape Navigator window are described below.
Page Window
This area contains information, animation, graphics, and links to other sites.
Links may be indicated only by color. The rule is that if the mouse pointer changes to a
pointing hand, you can be sure that whatever it's pointing to is a link.
Button Bar
Preferences
Inside your preferences, you can change your starting page, home page, button bar
appearance, and browsing history duration.
To change your starting page and home page, click Navigator from the menu on
the left. On the right, in the section called Navigator starts with, select Blank
page, Home page, or Last page visited.
To change your home page, enter a URL in the field in the Home page section.
To change the duration of your browsing history, enter a number of days in the
field in the History section.
To change the appearance of your button bar, click Appearances from the menu
on the left. On the right, in the section called Show toolbars as select Pictures
and Text, Pictures Only, or Text Only.
Using Netscape
Opening a Location
If you know a specific URL you would like to visit, click the Location field, type the
address, and press Enter.
Searching
To search using Netscape’s search site, click the Search button on the toolbar and
type your query.
To search using other search engines, type the URL of the site in the location bar and
perform a search the same way as stated above.
Bookmarks
A bookmark allows you to revisit a page later. The URL is added to a list of bookmarked
URLs found under the Bookmark menu next to the Location field.
History
History keeps a record of the pages that you’ve visited over the amount of time specified
in your preferences.
To view your history, choose History from the Communicator menu. A window will
appear. From here you can scroll through and double-click a page you would like to
revisit.
Saving
You can save the source of a Web page by selecting Save As from the File menu.
To save images,
1. Position the mouse pointer over the image and right click. (On a Macintosh, click
and hold down the mouse button for a second or two.)
2. Choose Save this Image as... from the menu that appears.
3. Enter a file name (if you wish to change it) and select a destination.
4. Click OK.
Remember that copyright laws apply to Web pages and images as well as to paper
publications.
Printing
To print a web page, choose Print from the File menu.
A Web developer should be familiar with the following standards and techniques:
HTML 4.01
The use of CSS (style sheets)
XHTML
XML and XSLT
Client-side scripting
Server-side scripting
HTML 4.01
HTML is the language of the Web, and every Web developer should have a basic
understanding of it.
HTML 4.01 is an important Web standard and very different from HTML 3.2.
When tags like <font> and color attributes were added to HTML 3.2, it started a
developer's nightmare: building Web sites where font information must be added to
every single page is a long and expensive process.
With HTML 4.01 all formatting can be moved out of the HTML document and into a
separate style sheet.
HTML 4.01 is also important because XHTML 1.0 (the latest HTML standard) is HTML
4.01 "reformulated" as an XML application. Using HTML 4.01 in your pages makes the
future upgrade from HTML to XHTML a very simple process.
CSS (Style Sheets)
Styles define how HTML elements are displayed, just like the font tag in HTML 3.2.
Styles are normally saved in files external to HTML documents. External style sheets
enable you to change the appearance and layout of all the pages in your Web, just by
editing a single CSS document. If you have ever tried changing something like the font or
color of all the headings in all your Web pages, you will understand how CSS can save
you a lot of work.
XHTML
XHTML 1.0 is now the latest HTML standard from W3C. It became an official
Recommendation January 26, 2000. A W3C Recommendation means that the
specification is stable and that the specification is now a Web standard.
XHTML is a reformulation of HTML 4.01 in XML and can be put to immediate use with
existing browsers by following a few simple guidelines.
XML and XSLT
The Extensible Markup Language (XML) is NOT a replacement for HTML. In future
Web development, XML will be used to describe and carry the data, while HTML will be
used to display the data.
We believe that XML is as important to the Web as HTML was to the foundation of the
Web and that XML will be the most common tool for all data manipulation and data
transmission.
Future Web sites will have to deliver data in different formats, to different browsers, and
to other Web servers. To transform XML data into different formats, XSLT is the new
W3C standard.
XSLT can transform an XML file into a format that is recognizable to a browser. One
such format is HTML. Another format is WML - the mark-up language used in many
handheld devices.
XSLT can also add elements, remove, rearrange and sort elements, test and make
decisions about which elements to display, and a lot more.
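Python's standard library has no XSLT engine, but the same idea -- transforming XML data into HTML for display -- can be sketched by hand with xml.etree.ElementTree (the XML document below is invented for illustration):

```python
# Python's standard library has no XSLT processor, so this sketch shows
# the same kind of transformation -- XML data rendered as HTML -- done
# manually with xml.etree.ElementTree. The XML document is invented.
import xml.etree.ElementTree as ET

xml_data = """<catalog>
  <item><title>DSL Basics</title></item>
  <item><title>Cable Internet</title></item>
</catalog>"""

root = ET.fromstring(xml_data)
html = "<ul>" + "".join(
    f"<li>{item.findtext('title')}</li>" for item in root.findall("item")
) + "</ul>"
print(html)
```

An XSLT stylesheet would express the same mapping declaratively, so the identical XML could also be transformed into WML or another format by swapping stylesheets.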
Client-Side Scripting
JavaScript gives HTML designers a programming tool - HTML authors are
normally not programmers, but JavaScript is a scripting language with a very
simple syntax! Almost anyone can put small "snippets" of code into their HTML
pages.
JavaScript can put dynamic text into an HTML page - A JavaScript statement
like this: document.write("<h1>" + name + "</h1>") can write a variable text into
an HTML page.
JavaScript can react to events - A JavaScript can be set to execute when
something happens, like when a page has finished loading or when a user clicks
on an HTML element.
JavaScript can read and write HTML elements - A JavaScript can read and
change the content of an HTML element.
Server-Side Scripting
The Structured Query Language (SQL) is the common standard for accessing databases
such as SQL Server, Oracle, Sybase, and Access.
Knowledge of SQL is invaluable for anyone wanting to store or retrieve data from a
database.
Any webmaster should know that SQL is the true engine for interacting with databases on
the Web.
A plugin (or plug-in) is a computer program that interacts with a main (or host)
application (a web browser or an email program, for example) to provide a certain,
usually very specific, function on-demand. Typical uses include plugins to:
read or edit specific types of files (for instance, decode multimedia files)
encrypt or decrypt email (for instance, PGP)
filter images in graphic programs in ways that the host application could not
normally do
play and watch Flash presentations in a web browser
The host application provides services which the plugins can use, including a way for
plugins to register themselves with the host application and a protocol by which data is
exchanged with plugins. Plugins are dependent on these services provided by the main
application and do not usually work by themselves. Conversely, the main application is
independent of the plugins, making it possible for plugins to be added and updated
dynamically without changes to the main application.
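The host/plugin relationship described above can be sketched as a toy registry; all names here are invented for illustration:

```python
# Toy sketch of the host/plugin relationship described above: the host
# exposes a registration service, plugins register themselves, and the
# host delegates work to them on demand. All names are invented.

class Host:
    def __init__(self):
        self._plugins = {}            # file type -> handler function

    def register(self, file_type, handler):
        """Service the host provides: plugins register themselves here."""
        self._plugins[file_type] = handler

    def open(self, file_type, data):
        handler = self._plugins.get(file_type)
        if handler is None:
            return "no plugin installed for " + file_type
        return handler(data)          # delegate to the plugin

host = Host()
host.register("wav", lambda data: f"playing {len(data)} bytes of audio")
print(host.open("wav", b"\x00" * 16))   # handled by the registered plugin
print(host.open("flash", b""))          # no plugin: host degrades gracefully
```

Note how the host never imports the plugin's code directly; it only calls through the registry, which is what lets plugins be added or updated without changing the host.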
Plugins are slightly different from extensions, which modify or add to existing
functionality. The main difference is that plugins generally rely on the main application's
user interface and have a well-defined boundary to their possible set of actions.
Extensions generally have fewer restrictions on their actions, and may provide their own
user interfaces. They sometimes are used to decrease the size of the main application and
offer optional functions. Mozilla Firefox uses a well-developed extension system to
reduce the feature creep that plagued the Mozilla Application Suite.
Perhaps the first software applications to include a plugin function were HyperCard and
QuarkXPress on the Macintosh, both released in 1987. In 1988, Silicon Beach Software
included plugin functionality in Digital Darkroom and SuperPaint, and the term plug-in
was coined by Ed Bomke. Currently, plugins are typically implemented as shared
libraries that must be installed in a place prescribed by the main application. HyperCard
supported a similar facility, but it was more common for the plugin code to be included in
the HyperCard documents (called stacks) themselves. This way, the HyperCard stack
became a self-contained application in its own right, which could be distributed as a
single entity that could be run by the user without the need for additional installation
steps.
Delivery Vehicles
Delivery vehicles include face-to-face instruction, online (synchronous or
asynchronous) delivery, audio conferences, Web seminars, CD-ROM,
audiotapes/videotapes, and printed publications/self-study workbooks.
The Internet and intranets, which use the TCP/IP protocol suite, are the most important
delivery vehicles for multimedia objects. TCP provides communication sessions between
applications on hosts, sending streams of bytes for which delivery is always guaranteed
by means of acknowledgments and retransmission. User Datagram Protocol (UDP) is a
"best-effort" delivery protocol (some messages may be lost) that sends individual
messages between hosts. Internet technology is used on single LANs and on connected
LANs within an organization, which are sometimes called intranets, and on "backbones"
that link different organizations into one single global network. Internet technology
allows LANs and backbones of totally different technologies to be joined together into a
single, seamless network.
Part of this is achieved through communications processors called routers. Routers can be
accessed from two or more networks, passing data back and forth as needed. The routers
communicate information on the current network topology among themselves in order to
build routing tables within each router. These tables are consulted each time a message
arrives, in order to send it to the next appropriate router, eventually resulting in delivery.
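The table lookup a router performs can be sketched with longest-prefix matching; the routes and next-hop names below are invented:

```python
# Sketch of the routing-table lookup described above, using longest-prefix
# matching with the ipaddress module. Routes and next hops are invented.
import ipaddress

routing_table = {
    ipaddress.ip_network("10.0.0.0/8"):  "router-A",
    ipaddress.ip_network("10.1.0.0/16"): "router-B",
    ipaddress.ip_network("0.0.0.0/0"):   "default-gateway",
}

def next_hop(destination):
    """Pick the most specific (longest-prefix) matching route."""
    addr = ipaddress.ip_address(destination)
    matches = [net for net in routing_table if addr in net]
    best = max(matches, key=lambda net: net.prefixlen)
    return routing_table[best]

print(next_hop("10.1.2.3"))    # most specific match: 10.1.0.0/16
print(next_hop("8.8.8.8"))     # falls through to the default route
```

Real routers build these tables dynamically by exchanging topology information with their neighbours, as the paragraph above describes; only the per-message lookup is shown here.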
Multimedia and the Internet: Many web pages now include sound and video.
With the increased popularity of broadband connections, many sites feature
music, movie, and television clips you can view or download. However, even
with the broadband connection, audio or video files that are more than a few
seconds long can be large and take a long time to download to your computer.
Internet: The term comes from internetworking, i.e., the linking of many networks,
including private networks, into the single network named the Internet.
Internetworking involves connecting two or more distinct computer networks or
network segments together to form an internetwork.
Internet Connections: primary connections available today are Dial-up
connections, DSL, Cable internet, ISDN, Wireless internet
Internet Services: Search engines, Home Page, E-learning, Access to publishing,
File Transfer Protocol, E-mail, Finding People on the Net, Chat, Video
Conferencing, Telnet, Newsgroup.
World Wide Web: As its name implies, the World Wide Web is a globally
connected network.
Web Servers: A Web server is a computer on the Internet that stores Web pages.
Web Browsers: Web browser is a software package used to access locations on
the World Wide Web, part of the global computer network called the Internet.
Plug-Ins: A plugin (or plug-in) is a computer program that interacts with a main
(or host) application (a web browser or an email program, for example) to provide
a certain, usually very specific, function on-demand.
Delivery Vehicles: Delivery vehicles include face-to-face instruction, online
(synchronous or asynchronous) delivery, audio conferences, Web seminars,
CD-ROM, audiotapes/videotapes, and printed publications/self-study workbooks.
5.5) Summary
Today individuals, companies and institutions use the Internet in many ways as
mentioned below:
Business uses the Internet to provide access to complex databases, such as financial
databases.
Companies carry out electronic commerce (commerce on the Internet), including
advertising, selling, buying, distributing products and providing after-sales services.
Businesses and institutions use the Internet for voice and video conferencing and other
forms of communication that enable people to telecommute, or work from a distance.
The use of electronic mail (e-mail) over the Internet has greatly sped up communication
between companies, among co-workers, and between individuals.
Media and entertainment companies use the Internet to broadcast audio and video,
including live radio and television programs. They also offer online chat groups, in
which people carry on discussions using written text, and online news and weather
programs.
Scientists and scholars use the Internet to communicate with colleagues, to perform
research, to distribute lecture notes and course materials to students, and to publish
papers and articles.
Individuals use the Internet for communication, entertainment, finding information, and
to buy and sell goods and services.
2. http://multimedia.expert-answers.net/multimedia-glossary/en/
3. http://nrg.cs.usm.my/~tcwan/Notes/MM-BldgBlocks-I.doc
4. www.edb.utexas.edu/multimedia/PDFfolder/WEBRES~1.PDF
5.8) Assignments
1. Explain the possible ways to connect to internet using the wizard.
2. What media are used in the Internet? How does the medium affect the performance
of the Internet?