
P.O. Box 342-01000 Thika  Email: [email protected]  Web: www.mku.ac.ke

DEPARTMENT OF INFORMATION TECHNOLOGY

COURSE CODE: BIT 4202


COURSE TITLE: DISTRIBUTED MULTIMEDIA SYSTEMS

Instructional Manual for BBIT Distance Learning


Prepared by Mr. Paul Mutinda E-mail: [email protected]

CONTENTS
Chapter 1: Introduction to Multimedia
    1.0 Aims and Objectives
    1.1 Introduction
    1.2 Elements of Multimedia System
    1.3 Categories of Multimedia
    1.4 Features of Multimedia
    1.5 Applications of Multimedia
    1.6 Convergence of Multimedia (Virtual Reality)
    1.7 Stages of Multimedia Application Development
    1.8 Let us sum up
    1.9 Lesson-end Activities
Chapter 2: Working with Text
    2.0 Aims and Objectives
    2.1 Introduction
    2.2 Multimedia Building Blocks
    2.3 Text in Multimedia
    2.4 About Fonts and Faces
    2.5 Computers and Text
    2.6 Character Sets and Alphabets (ASCII Character Set, The Extended Character Set, Unicode)
    2.7 Font Editing and Design Tools
Chapter 3: Audio
    3.0 Aims and Objectives
    3.1 Introduction
    3.2 Power of Sound
    3.3 Multimedia Sound Systems
    3.4 Digital Audio (3.4.1 Preparing Digital Audio Files)
    3.5 Editing Digital Recordings
    3.6 Making MIDI Audio
    3.7 Audio File Formats
    3.8 Red Book Standard
    3.9 Software Used for Audio
Chapter 4: Images
    4.0 Aims and Objectives
    4.1 Introduction
    4.2 Digital Image (4.2.1 Digital Image Format, 4.2.2 Captured Image Format, 4.2.3 Stored Image Format)
    4.3 Bitmaps
    4.4 Making Still Images (4.4.1 Bitmap Software, 4.4.2 Capturing and Editing Images)
    4.5 Vector Drawing
    4.8 Let us sum up
    4.9 Lesson-end Activities
Chapter 5: Animation and Video
    5.0 Aims and Objectives
    5.1 Introduction
    5.2 Principles of Animation
    5.3 Animation Techniques (5.3.1 Cel Animation, 5.3.2 Computer Animation, 5.3.3 Kinematics, 5.3.4 Morphing)
    5.4 Animation File Formats
    5.5 Video
    5.6 Broadcast Video Standards
    5.7 Shooting and Editing Video
    5.8 Video Compression
    5.9 Let us sum up
Chapter 6: Multimedia Hardware - Connecting Devices
    6.0 Aims and Objectives
    6.1 Introduction
    6.2 Multimedia Hardware
    6.3 Connecting Devices
    6.4 SCSI (6.4.1 SCSI Interfaces, 6.4.2 SCSI Cabling, 6.4.3 SCSI Command Protocol, 6.4.4 SCSI Device Identification, 6.4.5 SCSI Enclosure Services)
    6.5 Media Control Interface (MCI) (6.5.1 MCI Devices, 6.5.2 Playing Media through the MCI Interface)
    6.6 IDE
    6.7 USB
    6.8 Let us sum up
    6.9 Lesson-end Activities
Chapter 7: Multimedia Workstation
    7.0 Aims and Objectives
    7.1 Introduction
    7.2 Communication Architecture
    7.3 Hybrid Systems
    7.4 Digital Systems
    7.5 Multimedia Workstation
    7.6 Preference of Operating System for Workstation (7.6.1 The Macintosh Platform, 7.6.2 The Windows Platform, 7.6.3 Networking Macintosh and Windows Computers)
    7.7 Let us sum up
    7.8 Lesson-end Activities
Chapter 8: Documents, Hypertext, Hypermedia
    8.0 Aims and Objectives
    8.1 Introduction
    8.2 Documents (8.2.1 Document Architecture)
    8.3 Hypertext
    8.4 Hypermedia
    8.5 Hypertext and Hypermedia
    8.6 Hypertext, Hypermedia and Multimedia
    8.7 Hypertext and the World Wide Web
Chapter 9: Document Architecture and MPEG
    9.0 Aims and Objectives
    9.1 Introduction
    9.2 Document Architecture - SGML (9.2.1 SGML and Multimedia)
    9.3 Open Document Architecture (ODA) (9.3.1 Details of ODA, 9.3.2 Layout Structure and Logical Structure, 9.3.3 ODA and Multimedia)
    9.4 MPEG (9.4.2 Derivation of a Class Hierarchy)
Chapter 10: Basic Tools for Multimedia Objects
    10.0 Aims and Objectives
    10.1 Introduction
    10.2 Text Editing and Word Processing Tools
    10.3 OCR Software
    10.4 Image-Editing Tools
    10.6 Sound Editing Tools
    10.7 Animation, Video and Digital Movie Tools (10.7.1 Video Formats, 10.7.2 Common Organization of Video Formats, 10.7.3 QuickTime)
Chapter 11: User Interface
    11.0 Aims and Objectives
    11.1 Introduction
    11.2 User Interfaces
    11.3 General Design Issues (11.3.1 Information Characteristics for Presentation, 11.3.2 Presentation Function, 11.3.3 Presentation Design Knowledge)
    11.4 Effective Human-Computer Interaction
    11.5 Video at the User Interface
    11.6 Audio at the User Interface
    11.7 User-friendliness as the Primary Goal (11.7.1 Easy to Learn Instructions, 11.7.2 Context-sensitive Help Functions, 11.7.3 Easy to Remember Instructions, 11.7.4 Effective Instructions, 11.7.5 Aesthetics, 11.7.6 Entry Elements, 11.7.7 Presentation, 11.7.8 Dialogue Boxes, 11.7.9 Additional Design Criteria)
Chapter 12: Multimedia Communication Systems
    12.0 Aims and Objectives
    12.1 Introduction
    12.2 Application Subsystem (12.2.1 Collaborative Computing, 12.2.2 Collaborative Dimensions, 12.2.3 Group Communication Architecture)
    12.3 Application Sharing Approach
    12.4 Conferencing
    12.5 Session Management (12.5.1 Architecture, 12.5.2 Session Control)
Chapter 13: Quality of Service and Resource Management
    13.0 Aims and Objectives
    13.1 Introduction
    13.2 Quality of Service and Process Management
    13.3 Translation
    13.4 Managing Resources during Multimedia Transmission
    13.5 Architectural Issues
Chapter 14: Synchronisation
    14.0 Aims and Objectives
    14.1 Introduction
    14.2 Notion of Synchronization
    14.3 Basic Synchronization Issues
    14.4 Intra- and Inter-Object Synchronization
    14.5 Lip Synchronization Requirements
    14.6 Pointer Synchronization Requirements
    14.7 Reference Model for Multimedia Synchronization (14.7.1 The Synchronization Reference Model)
    14.8 Synchronization Specification
Chapter 15: Multimedia Networking System
    15.0 Aims and Objectives
    15.1 Introduction
    15.2 Layers, Protocols and Services (15.2.1 Physical Layer, 15.2.3 Network Layer, 15.2.4 Transport Layer, 15.2.5 Session Layer, 15.2.7 Application Layer)
    15.3 Multimedia on Networks
    15.4 FDDI (15.4.1 Topology of FDDI, 15.4.2 FDDI Architecture, 15.4.3 Further Properties of FDDI)
Chapter 16: Multimedia Operating System
    16.0 Aims and Objectives
    16.1 Introduction
    16.2 Multimedia Operating System
    16.3 Real Time Process (16.3.1 Characteristics of Real Time Systems, 16.3.2 Real Time and Multimedia)
    16.4 Resource Management (16.4.1 Resources, 16.4.2 Requirements, 16.4.3 Components of the Resources, 16.4.4 Phases of the Resource Reservation and Management Process)
Chapter 17: Multimedia OS - Process Management
    17.0 Aims and Objectives
    17.3 Real-time Processing Requirements
    17.4 Traditional Real-time Scheduling (17.4.1 Earliest Deadline First Algorithm, 17.4.2 Rate Monotonic Algorithm, 17.4.3 Other Approaches to Rate Monotonic Algorithm, 17.4.4 Other Approaches for In-Time Scheduling)
Chapter 18: Multimedia OS File System
    18.0 Aims and Objectives
    18.1 Introduction
    18.2 File Systems
    18.3 File Structure
    18.4 Disk Scheduling
    18.5 Multimedia File Systems (18.5.1 Disk Scheduling Algorithms in Multimedia File System)
    18.6 Additional Operating System Issues
References


CHAPTER 1 Introduction to Multimedia


1.0 Aims and Objectives
In this lesson we will learn the preliminary concepts of multimedia. We will discuss the various benefits and applications of multimedia. After going through this chapter the reader will be able to:
i. Define multimedia
ii. List the elements of multimedia
iii. Enumerate the different applications of multimedia
iv. Describe the different stages of multimedia software development

1.1 Introduction
Multimedia has become an inevitable part of any presentation. It has found a variety of applications, from entertainment to education, and the evolution of the Internet has further increased the demand for multimedia content.

Definition
Multimedia is media that uses multiple forms of information content and information processing (e.g. text, audio, graphics, animation, video and interactivity) to inform or entertain the user. Multimedia also refers to the use of electronic media to store and experience multimedia content. Multimedia is similar to traditional mixed media in fine art, but with a broader scope. The term "rich media" is synonymous with interactive multimedia.

1.2 Elements of Multimedia System


Multimedia means that computer information can be represented through audio, graphics, image, video and animation in addition to traditional media (text and graphics). Hypermedia can be considered one particular type of multimedia application.


Multimedia is a combination of content forms: text, audio, images, animation and video.

1.3 Categories of Multimedia


Multimedia may be broadly divided into linear and non-linear categories. Linear content progresses without any navigational control for the viewer, such as a cinema presentation. Non-linear content offers user interactivity to control progress, as in a computer game or self-paced computer-based training; non-linear content is also known as hypermedia content. Multimedia presentations can be live or recorded. A recorded presentation may allow interactivity via a navigation system, while a live multimedia presentation may allow interactivity via interaction with the presenter or performer.


1.4 Features of Multimedia


Multimedia presentations may be viewed in person on stage, projected, transmitted, or played locally with a media player. A broadcast may be a live or recorded multimedia presentation, and broadcasts and recordings can use either analog or digital electronic media technology. Digital online multimedia may be downloaded or streamed, and streaming multimedia may be live or on-demand. Multimedia games and simulations may be used in a physical environment with special effects, with multiple users in an online network, or locally with an offline computer, game system or simulator.

Enhanced levels of interactivity are made possible by combining multiple forms of media content, although the degree of interactivity varies with the content. Online multimedia is increasingly becoming object-oriented and data-driven, enabling applications with collaborative end-user innovation and personalization of multiple forms of content over time. Examples range from web sites such as photo galleries, where both the images (pictures) and the titles (text) are user-updated, to simulations whose coefficients, events, illustrations, animations or videos are modifiable, allowing the multimedia "experience" to be altered without reprogramming.

1.5 Applications of Multimedia


Multimedia finds its application in various areas including, but not limited to, advertisements, art, education, entertainment, engineering, medicine, mathematics, business, scientific research and spatial-temporal applications. A few application areas of multimedia are listed below:

Creative industries
Creative industries use multimedia for a variety of purposes ranging from fine arts, to entertainment, to commercial art, to journalism, to media and software services provided for any of the industries listed below. An individual multimedia designer may cover the whole spectrum throughout their career. Requests for their skills range from technical, to analytical, to creative.

Commercial
Much of the electronic old and new media utilized by commercial artists is multimedia. Exciting presentations are used to grab and keep attention in advertising. Industrial, business-to-business and interoffice communications are often developed by creative services firms as advanced multimedia presentations, going beyond simple slide shows to sell ideas or liven up training. Commercial multimedia developers may be hired to design for governmental and nonprofit services applications as well.

Entertainment and Fine Arts


Multimedia is heavily used in the entertainment industry, especially to develop special effects in movies and animations. Multimedia games are a popular pastime; they are software programs available either on CD-ROM or online. Some video games also use multimedia features. Multimedia applications that allow users to participate actively, instead of just sitting by as passive recipients of information, are called interactive multimedia.

Education
In education, multimedia is used to produce computer-based training courses (popularly called CBTs) and reference books such as encyclopedias and almanacs. A CBT lets the user go through a series of presentations, text about a particular topic, and associated illustrations in various information formats. Edutainment is an informal term used to describe the combination of education with entertainment, especially multimedia entertainment.

Engineering
Software engineers may use multimedia in computer simulations for anything from entertainment to training, such as military or industrial training. Multimedia for software interfaces is often created as a collaboration between creative professionals and software engineers.

Industry
In the Industrial sector, multimedia is used as a way to help present information to shareholders, superiors and coworkers. Multimedia is also helpful for providing employee training, advertising and selling products all over the world via virtually unlimited web-based technologies.

Mathematical and Scientific Research


In mathematical and scientific research, multimedia is mainly used for modeling and simulation. For example, a scientist can look at a molecular model of a particular substance and manipulate it to arrive at a new substance. Representative research can be found in journals such as the Journal of Multimedia.

Medicine
In medicine, doctors can be trained by watching a virtual surgery, or they can simulate how the human body is affected by diseases spread by viruses and bacteria and then develop techniques to prevent them.

Multimedia in Public Places


In hotels, railway stations, shopping malls, museums and grocery stores, multimedia will become available at stand-alone terminals or kiosks to provide information and help. Such installations reduce demand on traditional information booths and personnel, add value, and can work around the clock, even in the middle of the night when live help is off duty. A menu screen from a supermarket kiosk might provide services ranging from meal planning to coupons. Hotel kiosks list nearby restaurants, maps of the city and airline schedules, and provide guest services such as automated checkout. Printers are often attached so users can walk away with a printed copy of the information. Museum kiosks are not only used to guide patrons through the exhibits; when installed at each exhibit they provide great added depth, allowing visitors to browse through richly detailed information specific to that display.

Exercise. List five applications of multimedia.

1.6 Convergence of Multimedia (Virtual Reality)


At the convergence of technology and creative invention in multimedia is virtual reality, or VR. Goggles, helmets, special gloves and other unusual human interfaces attempt to place you inside a lifelike experience. Take a step forward and the view gets closer; turn your head and the view rotates. Reach out and grab an object; your hand moves in front of you. Maybe the object explodes in a 90-decibel crescendo as you wrap your fingers around it. Or it slips out of your grip, falls to the floor, and hurriedly escapes through a mouse hole at the bottom of the wall.

VR requires terrific computing horsepower to be realistic. In VR, your cyberspace is made up of many thousands of geometric objects plotted in three-dimensional space: the more objects and the more points that describe the objects, the higher the resolution and the more realistic your view. As the user moves about, each motion or action requires the computer to recalculate the position, angle, size and shape of all the objects that make up the view, and many thousands of computations must occur as fast as 30 times per second to seem smooth. On the World Wide Web, standards have been developed for transmitting virtual reality worlds or scenes in VRML (Virtual Reality Modeling Language) documents, with the file name extension .wrl.

Using high-speed dedicated computers, multi-million-dollar flight simulators built by Singer, RediFusion and others have led the way in the commercial application of VR. Pilots of F-16s, Boeing 777s and Rockwell space shuttles have made many dry runs before doing the real thing. At the California Maritime Academy and other merchant marine officer training schools, computer-controlled simulators teach the intricate loading and unloading of oil tankers and container ships. Specialized public game arcades have been built to offer VR combat and flying experiences for a price. From Virtual World Entertainment in Walnut Creek, California, and Chicago, for example, BattleTech is a ten-minute interactive video encounter with hostile robots. You compete against others, perhaps your friends, who share couches in the same containment bay, and the computer keeps score in a fast and sweaty firefight. Similar attractions will bring VR to the public, particularly a youthful public, with increasing presence.

The technology and methods for working with three-dimensional images and for animating them are discussed in later chapters. VR is an extension of multimedia: it uses the basic multimedia elements of imagery, sound and animation. Because it requires instrumented feedback from a wired-up person, VR is perhaps interactive multimedia at its fullest extension.
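To get a rough feel for the computational load described above, the short sketch below multiplies an assumed object count by the 30 updates per second mentioned in the text. The 10,000-object figure is an illustrative assumption, not a number from this manual.

    # Rough illustration of the per-second recalculation load in a VR scene.
    # The object count is an assumed example; 30 updates per second comes from the text.
    objects_in_scene = 10_000        # geometric objects plotted in 3-D space
    updates_per_second = 30          # recalculations needed for motion to appear smooth
    recalculations = objects_in_scene * updates_per_second
    print(recalculations)            # 300000 position/angle/size/shape updates every second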

1.7 Stages of Multimedia Application Development


A multimedia application is developed in stages, as all other software is. In multimedia application development some stages must be completed before others begin, and some stages may be skipped or combined. The four basic stages of multimedia project development are:

1. Planning and Costing: This first stage begins with an idea or need, which can be refined by outlining its messages and objectives. Before starting to develop the multimedia project, it is necessary to plan what writing skills, graphic art, music, video and other multimedia expertise will be required. It is also necessary to estimate the time needed to prepare all the elements of multimedia and to prepare a budget accordingly. After preparing a budget, a prototype or proof of concept can be developed.
2. Designing and Producing: The next stage is to execute each of the planned tasks and create a finished product.
3. Testing: Testing a project ensures that the product is free from bugs. Apart from bug elimination, another aspect of testing is to ensure that the multimedia application meets the objectives of the project. It is also necessary to test whether the project works properly on the intended delivery platforms and meets the needs of the clients.
4. Delivering: The final stage is to package the project and deliver it to the end user. This stage has several steps, such as implementation, maintenance, shipping and marketing the product.

1.8 Let us sum up


In this lesson we have discussed the following points:
i) Multimedia is a woven combination of text, audio, video, images and animation.
ii) Multimedia systems find a wide variety of applications in different areas such as education and entertainment.
iii) The categories of multimedia are linear and non-linear.
iv) The stages of multimedia application development are planning and costing, designing and producing, testing, and delivering.

1.9 Lesson-end Activities


i) Create the credits for an imaginary multimedia production. Include several outside organizations, such as audio mixing, video production and text-based dialogues.
ii) Review two educational CD-ROMs and enumerate their features.


Chapter 2 Working with Text


2.0 Aims and Objectives
In this lesson we will learn the different multimedia building blocks. Later we will learn the significant features of text. At the end of the lesson you will be able to:
i. List the different multimedia building blocks
ii. Enumerate the importance of text
iii. List the features of different font editing and designing tools

2.1 Introduction
All multimedia content contains text in some form. Even menu text is accompanied by a single action such as a mouse click, a keystroke or a finger press on the monitor (in the case of a touch screen). Text in multimedia is used to communicate information to the user. Proper use of text and words in a multimedia presentation helps the content developer communicate the idea and message to the user.

2.2 Multimedia Building Blocks


Any multimedia application consists of any or all of the following components:
1. Text: Text and symbols are very important for communication in any medium. With the recent explosion of the Internet and the World Wide Web, text has become more important than ever. The language of the Web is HTML (HyperText Markup Language), originally designed to display simple text documents on computer screens, with occasional graphic images thrown in as illustrations.
2. Audio: Sound is perhaps the most sensuous element of multimedia. It can provide the listening pleasure of music, the startling accent of special effects or the ambience of a mood-setting background.
3. Images: Images, whether analog or digital, play a vital role in multimedia. They are expressed in the form of still pictures, paintings or photographs taken with a digital camera.
4. Animation: Animation is the rapid display of a sequence of images of 2-D artwork or model positions in order to create an illusion of movement. It is an optical illusion of motion due to the phenomenon of persistence of vision, and can be created and demonstrated in a number of ways.
5. Video: Digital video has supplanted analog video as the method of choice for making video for multimedia use. Video in multimedia is used to portray real-time moving pictures in a multimedia project.

2.3 Text in Multimedia


Words and symbols in any form, spoken or written, are the most common system of communication. They deliver the most widely understood meaning to the greatest number of people. Most academic text, such as journals and e-magazines, is available in web-browser-readable form.

2.4 About Fonts and Faces


A typeface is a family of graphic characters that usually includes many type sizes and styles. A font is a collection of characters of a single size and style belonging to a particular typeface family. Typical font styles are boldface and italic. Other style attributes, such as underlining and outlining of characters, may be added at the user's choice. The size of text is usually measured in points; one point is approximately 1/72 of an inch, i.e. about 0.0138 inch (a point-to-pixel conversion is sketched at the end of this section). The size of a font does not exactly describe the height or width of its characters, because the x-height (the height of the lowercase character x) of two fonts may differ.

Typefaces can be described in many ways, but the most common characterization of a typeface is serif versus sans serif. The serif is the little decoration at the end of a letter stroke. Times, Times New Roman and Bookman are fonts in the serif category; Arial, Optima and Verdana are examples of sans serif fonts. Serif fonts are generally used for the body of text, for better readability, while sans serif fonts are generally used for headings.

Selecting text fonts
Choosing the fonts to be used in a multimedia presentation can be difficult. The following guidelines help:
- Although many typefaces can be used in a single presentation, using many fonts on a single page is called ransom-note typography and should be used sparingly.
- For small type, use the most legible font available.
- In large headlines, adjust the kerning (spacing between the letters).
- In text blocks, adjust the leading for the most pleasing line spacing.
- Use drop caps and initial caps to accent words.
- Choose the effects and colors of a font to give the text a distinct look.
- Use anti-aliasing to make text look gentle and blended.
- For special attention to the text, words can be wrapped onto a sphere or bent like a wave.
- Use meaningful words and phrases for links and menu items.
- In the case of text links (anchors) on web pages, accent the messages.
- Put the most important text in a web page, such as a menu, in the top 320 pixels.

Exercise: List a few fonts available on your computer.
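As a quick illustration of the point measurement above, the sketch below converts a point size to an approximate pixel height for a screen of a given resolution. The 96-dpi screen is an assumed example, not a figure from the text.

    POINTS_PER_INCH = 72                 # one point is approximately 1/72 of an inch

    def points_to_pixels(point_size, screen_dpi=96):
        # Convert a font size in points to pixels for a screen of the given dpi.
        return point_size * screen_dpi / POINTS_PER_INCH

    print(points_to_pixels(12))          # a 12-point font is about 16 pixels tall at 96 dpi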

2.5 Computers and text:


Fonts:


PostScript fonts describe character outlines in terms of mathematical constructs (Bezier curves), so PostScript is used not only to describe the individual characters of a font but also to describe illustrations and whole pages of text. Because PostScript uses mathematical formulas, characters can easily be scaled bigger or smaller. Apple and Microsoft announced a joint effort to develop a better and faster outline font methodology based on quadratic curves, called TrueType. In addition to printing smooth characters on printers, TrueType can draw characters at the low resolution (72 dpi or 96 dpi) of a monitor.
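Because an outline font is just a formula over control points, scaling the control points scales the whole character with no loss of quality. The sketch below evaluates a single cubic Bezier curve; the control points are invented values standing in for one stroke of a glyph outline.

    # Evaluate a cubic Bezier curve at parameter t (0 <= t <= 1).
    def cubic_bezier(p0, p1, p2, p3, t):
        x = ((1 - t) ** 3 * p0[0] + 3 * (1 - t) ** 2 * t * p1[0]
             + 3 * (1 - t) * t ** 2 * p2[0] + t ** 3 * p3[0])
        y = ((1 - t) ** 3 * p0[1] + 3 * (1 - t) ** 2 * t * p1[1]
             + 3 * (1 - t) * t ** 2 * p2[1] + t ** 3 * p3[1])
        return x, y

    # Hypothetical control points for one stroke of a glyph outline.
    stroke = [(0, 0), (10, 40), (30, 40), (40, 0)]
    doubled = [(2 * x, 2 * y) for x, y in stroke]   # scale the font to twice the size
    print(cubic_bezier(*stroke, 0.5))               # a point on the original outline
    print(cubic_bezier(*doubled, 0.5))              # the same point, exactly twice as far out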

2.6 Character set and alphabets:


ASCII Character set
The American Standard Code for Information Interchange (ASCII) is the 7-bit character coding system most commonly used by computer systems in the United States and abroad. ASCII assigns numeric values to 128 characters, including lowercase and uppercase letters, punctuation marks, Arabic numerals and mathematical symbols. Thirty-two control characters are also included; these are used for device control messages such as carriage return, line feed, tab and form feed.
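A minimal sketch of the ASCII mapping just described, using Python's built-in ord and repr functions:

    # ASCII assigns each of its 128 characters a value from 0 to 127; codes 0-31 are
    # control characters, e.g. tab (9), line feed (10) and carriage return (13).
    for ch in ['A', 'a', '0', '\t', '\n']:
        print(repr(ch), ord(ch))                        # 'A' -> 65, 'a' -> 97, '0' -> 48, ...

    # Plain English text fits within 7 bits, leaving the 8th bit of each byte unused.
    print(all(ord(c) < 128 for c in "Hello, world!"))   # True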

The Extended Character set


A byte, which consists of 8 bits, is the most commonly used building block for computer processing. ASCII uses only 7 bits to code its 128 characters; the 8th bit of the byte is unused. This extra bit allows another 128 characters to be encoded before the byte is used up, and computer systems today use these extra 128 values for an extended character set. The extended character set is commonly filled with ANSI (American National Standards Institute) standard characters, including frequently used symbols.

Unicode
Unicode makes use of a 16-bit architecture for multilingual text and character encoding. Unicode accommodates about 65,000 characters from all known languages and alphabets in the world. Several languages share a set of symbols that have a historically related derivation; the shared symbols of each language are unified into collections of symbols (called scripts). A single script can work for tens or even hundreds of languages. Microsoft, Apple, Sun, Netscape, IBM, Xerox and Novell participated in the development of this standard, and Microsoft and Apple have incorporated Unicode into their operating systems.
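The short C sketch below prints a few character codes to make the difference between the 7-bit ASCII range, the 8-bit extended range and Unicode code points concrete. The extended value shown assumes the ISO-8859-1 fill-in for the upper 128 codes; other ANSI code pages assign those values differently.

#include <stdio.h>

int main(void) {
    /* 7-bit ASCII: codes 0-127. 'A' is 65, 'a' is 97, '0' is 48. */
    printf("ASCII: A=%d  a=%d  0=%d\n", 'A', 'a', '0');

    /* Extended 8-bit sets use codes 128-255; in ISO-8859-1 the code 0xE9
       is the character e-acute. */
    unsigned char extended = 0xE9;
    printf("Extended code: %d\n", (int)extended);

    /* Unicode assigns a code point to every character: U+00E9 is e-acute
       and U+0905 is the Devanagari letter A.  16 bits cover the basic
       range of roughly 65,000 characters discussed in the text. */
    unsigned int e_acute = 0x00E9, devanagari_a = 0x0905;
    printf("Unicode code points: U+%04X  U+%04X\n", e_acute, devanagari_a);
    return 0;
}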

2.7 Font Editing and Design tools


There are several software packages that can be used to create customized fonts. These tools help a multimedia developer to communicate an idea or a graphic feeling. Using this software, different typefaces can be created. In some multimedia projects it may be necessary to create special characters; using font editing tools it is possible to create special symbols and use them throughout the text. The following software can be used for editing and creating fonts:
Fontographer
Fontmonger
Cool 3D Text
Special font editing tools can be used to make your own type so you can communicate an idea or graphic feeling exactly. With these tools professional typographers create distinct text and display faces.
1. Fontographer: A Macromedia product, it is a specialized graphics editor for both Macintosh and Windows platforms. You can use it to create PostScript, TrueType and bitmapped fonts for Macintosh and Windows.
2. Making Pretty Text: To make your text look pretty you need a toolbox full of fonts and special graphics applications that can stretch, shade, color and anti-alias your words into real artwork. Pretty text can be found in bitmapped drawings where characters have been tweaked, manipulated and blended into a graphic image.
3. Hypermedia and Hypertext: Multimedia, the combination of text, graphic and audio elements into a single collection or presentation, becomes interactive multimedia when you give the user some control over what information is viewed and when it is viewed. When a hypermedia project includes large amounts of text or symbolic content, this content can be indexed and its elements then linked together to afford rapid electronic retrieval of the associated information. When text is stored in a computer instead of on printed pages, the computer's powerful processing capabilities can be applied to make the text more accessible and meaningful. Such text is called hypertext.
4. Hypermedia Structures: Two buzzwords used often in hypertext are link and node. Links are connections between the conceptual elements, that is, the nodes, which may consist of text, graphics, sounds or related information in the knowledge base.
5. Searching for Words: Typical methods for word searching in hypermedia systems are: categories, word relationships, adjacency, alternates, association, negation, truncation, intermediate words and frequency.
Exercise. List a few font editing tools.


Chapter 3 Audio
3.0 Aims and Objectives
In this lesson we will learn the basics of audio. We will learn how digital audio is prepared and embedded in a multimedia system. At the end of the chapter the learner will be able to:
i. Distinguish audio and sound
ii. Prepare the audio required for a multimedia system
iii. List the different audio editing software packages
iv. List the different audio file formats

3.1 Introduction
Sound is perhaps the most important element of multimedia. It is meaningful speech in any language, from a whisper to a scream. It can provide the listening pleasure of music, the startling accent of special effects or the ambience of a mood-setting background. Sound is the terminology used for the analog form, and the digitized form of sound is called audio.

3.2 Power of Sound


When something vibrates in the air, moving back and forth, it creates waves of pressure. These waves spread like the ripples from a pebble tossed into a still pool, and when they reach the eardrums, the change of pressure, or vibration, is experienced as sound. Acoustics is the branch of physics that studies sound. Sound pressure levels are measured in decibels (dB); a decibel measurement is actually the ratio between a chosen reference point on a logarithmic scale and the level that is actually experienced.

3.3 Multimedia Sound Systems


The multimedia application user can use sound right off the bat on both the Macintosh and on a multimedia PC running Windows, because beeps and warning sounds are available as soon as the operating system is installed. On the Macintosh you can choose one of several sounds for the system alert. In Windows, system sounds are WAV files and they reside in the windows\Media subdirectory. There are still more choices of audio if Microsoft Office is installed. Windows makes use of WAV files as the default file format for audio, and Macintosh systems use SND as the default file format for audio.

3.4 Digital Audio


Digital audio is created when a sound wave is converted into numbers, a process referred to as digitizing. It is possible to digitize sound from a microphone, a synthesizer, existing tape recordings, live radio and television broadcasts, and popular CDs. You can digitize sound from a natural source or from prerecorded material. Digitized sound is sampled sound. Every nth fraction of a second, a sample of sound is taken and stored as digital information in bits and bytes. The quality of this digital recording depends upon how often the samples are taken.

3.4.1 Preparing Digital Audio Files


Preparing digital audio files is fairly straightforward. If you have analog source materials, such as music or sound effects recorded on analog media like cassette tapes, the first step is to digitize the analog material by recording it onto computer-readable digital media. It is necessary to focus on two crucial aspects of preparing digital audio files:
o Balancing the need for sound quality against the available RAM and hard disk resources.
o Setting proper recording levels to get a good, clean recording.
Remember that the sampling rate determines the frequency at which samples will be drawn for the recording. Sampling at higher rates more accurately captures the high-frequency content of the sound. Audio resolution determines the accuracy with which a sound can be digitized.
Formula for determining the size (in bytes) of a digital audio recording:
Monophonic = sampling rate x duration of recording in seconds x (bit resolution / 8) x 1
Stereo = sampling rate x duration of recording in seconds x (bit resolution / 8) x 2
The sampling rate is how often the samples are taken. The sample size is the amount of information stored per sample; this is called the bit resolution. The number of channels is 2 for stereo and 1 for monophonic. The time span of the recording is measured in seconds.
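The following C sketch simply evaluates the formula above for a 10-second recording sampled at 44.1 kHz with 16-bit resolution; the rate, duration and resolution are example values, not requirements.

#include <stdio.h>

/* Size in bytes = sampling rate * duration (s) * (bit resolution / 8) * channels */
double audio_size_bytes(double rate_hz, double seconds, int bits, int channels) {
    return rate_hz * seconds * (bits / 8.0) * channels;
}

int main(void) {
    double mono   = audio_size_bytes(44100.0, 10.0, 16, 1);
    double stereo = audio_size_bytes(44100.0, 10.0, 16, 2);
    printf("10 s mono,   44.1 kHz / 16-bit: %.0f bytes (about %.2f MB)\n",
           mono, mono / (1024.0 * 1024.0));
    printf("10 s stereo, 44.1 kHz / 16-bit: %.0f bytes (about %.2f MB)\n",
           stereo, stereo / (1024.0 * 1024.0));
    return 0;
}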


3.5 Editing Digital Recordings


Once a recording has been made, it will almost certainly need to be edited. The basic sound editing operations that most multimedia producers need are described in the paragraphs that follow.
1. Multiple Tracks: Being able to edit and combine multiple tracks, then merge the tracks and export them in a final mix to a single audio file.
2. Trimming: Removing dead air or blank space from the front of a recording and trimming unnecessary extra time off the end is your first sound editing task.
3. Splicing and Assembly: Using the same tools mentioned for trimming, you will probably want to remove the extraneous noises that inevitably creep into a recording.
4. Volume Adjustments: If you are trying to assemble ten different recordings into a single track, there is little chance that all the segments have the same volume.
5. Format Conversion: In some cases your digital audio editing software might read a format different from that read by your presentation or authoring program.
6. Resampling or Downsampling: If you have recorded and edited your sounds at 16-bit sampling rates but are using lower rates in your project, you must resample or downsample the file.
7. Equalization: Some programs offer digital equalization capabilities that allow you to modify a recording's frequency content so that it sounds brighter or darker.
8. Digital Signal Processing: Some programs allow you to process the signal with reverberation, multitap delay, and other special effects using DSP routines.
9. Reversing Sounds: Another simple manipulation is to reverse all or a portion of a digital audio recording. Sounds can produce a surreal, otherworldly effect when played backward.
10. Time Stretching: Advanced programs let you alter the length of a sound file without changing its pitch. This feature can be very useful, but watch out: most time-stretching algorithms will severely degrade the audio quality.
Exercise. List a few audio editing features.

3.6 Making MIDI Audio


MIDI (Musical Instrument Digital Interface) is a communication standard developed for electronic musical instruments and computers. MIDI files allow music and sound synthesizers from different manufacturers to communicate with each other by sending messages along cables connected to the devices. Creating your own original score can be one of the most creative and rewarding aspects of building a multimedia project, and MIDI is the quickest, easiest and most flexible tool for this task. The process of creating MIDI music is quite different from digitizing existing audio. To make MIDI scores, however, you will need sequencer software and a sound synthesizer. A MIDI keyboard is also useful to simplify the creation of musical scores. An advantage of structured data such as MIDI is the ease with which the music director can edit the data.
A MIDI file format is used in the following circumstances:
o When digital audio will not work because of memory constraints or processing power requirements
o When there is a high-quality MIDI source
o When there is no requirement for dialogue
A digital audio file format is preferred in the following circumstances:
o When there is no control over the playback hardware
o When the computing resources and bandwidth needed for digital audio are available
o When dialogue is required

3.7 Audio File Formats


A file format determines the application that is to be used for opening a file. Following is a list of different audio file formats and the systems or software with which they are used:
1. *.AIF, *.SDII: Macintosh systems
2. *.SND: Macintosh systems
3. *.WAV: Windows systems
4. MIDI files: used by both Macintosh and Windows
5. *.WMA: Windows Media Player
6. *.MP3: MP3 audio
7. *.RA: RealPlayer
8. *.VOC: VOC sound
9. AIFF: sound format for Macintosh sound files
10. *.OGG: Ogg Vorbis


3.8 Red Book Standard


The method for digitally encoding the high-quality stereo of the consumer CD music market is an international standard, ISO 10149. This is also called the Red Book standard. The developers of this standard claim that the digital audio sample size and sample rate of Red Book audio allow accurate reproduction of all sounds that humans can hear. The Red Book standard recommends audio recorded at a sample size of 16 bits and a sampling rate of 44.1 kHz.
Exercise. Write the specifications used in the Red Book standard.
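As a quick check of what those numbers imply, the C sketch below computes the raw data rate of Red Book audio (16-bit samples, 44.1 kHz, two channels) and the amount of data one minute of such audio produces.

#include <stdio.h>

int main(void) {
    const double rate = 44100.0;     /* samples per second (Red Book) */
    const int bytes_per_sample = 2;  /* 16-bit resolution */
    const int channels = 2;          /* stereo */

    double bytes_per_second = rate * bytes_per_sample * channels;
    double bytes_per_minute = bytes_per_second * 60.0;

    printf("Red Book audio data rate: %.0f bytes per second\n", bytes_per_second);
    printf("One minute of CD audio:   %.1f MB\n",
           bytes_per_minute / (1024.0 * 1024.0));
    return 0;
}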

3.9 Software used for Audio


Software such as Toast and CD Creator from Adaptec can translate the digital files of Red Book audio format on consumer compact discs directly into a digital sound editing file, or decompress MP3 files into CD audio. There are several tools available for recording audio. Following is a list of software that can be used for recording and editing audio:
Sound Recorder from Microsoft
Apple's QuickTime Player Pro
Sonic Foundry's Sound Forge for Windows
SoundEdit 16


Chapter 4 Images
4.0 Aims and Objectives
In this lesson we will learn how images are captured and incorporated into a multimedia presentation. Different image file formats and the different color representations are discussed in this lesson. At the end of this lesson the learner will be able to:
i) Create his own image
ii) Describe the use of colors and palettes in multimedia
iii) Describe the capabilities and limitations of vector images
iv) Use clip art in multimedia presentations

4.1 Introduction
Still images are an important element of a multimedia project or a web site. In order to make a multimedia presentation look elegant and complete, it is necessary to spend an ample amount of time designing the graphics and the layouts. Competent, computer-literate skills in graphic art and design are vital to the success of a multimedia project.

4.2 Digital Image


A digital image is represented by a matrix of numeric values each representing a quantized intensity value. When I is a two-dimensional matrix, then I(r,c) is the intensity value at the position corresponding to row r and column c of the matrix. The points at which an image is sampled are known as picture elements, commonly abbreviated as pixels. The pixel values of intensity images are called gray scale levels (we encode here the color of the image). The intensity at each pixel is represented by an integer and is determined from the continuous image by averaging over a small neighborhood around the pixel location. If there are just two intensity values, for example, black, and white, they are represented by the numbers 0 and 1; such images are called binary-valued images. If 8-bit integers are used to store each pixel value, the gray levels range from 0 (black) to 255 (white).
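A small C sketch of this idea follows: a tiny intensity image stored as a two-dimensional array of 8-bit gray levels, alongside a binary-valued image that uses only 0 and 1. The pixel values are made-up sample data.

#include <stdio.h>

#define ROWS 4
#define COLS 4

int main(void) {
    /* An 8-bit intensity image I: gray[r][c] holds the gray level at
       row r, column c (0 = black, 255 = white). */
    unsigned char gray[ROWS][COLS] = {
        {  0,  64, 128, 255},
        { 32,  96, 160, 224},
        {  0,   0, 255, 255},
        { 17,  85, 170, 238}
    };

    /* A binary-valued image uses only the two values 0 and 1. */
    unsigned char binary[ROWS][COLS] = {
        {0, 1, 1, 0},
        {1, 0, 0, 1},
        {1, 0, 0, 1},
        {0, 1, 1, 0}
    };

    printf("Gray level at row 2, column 3: %d\n", gray[2][3]);
    printf("Binary pixel at row 0, column 1: %d\n", binary[0][1]);
    return 0;
}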


4.2.1 Digital Image Format


There are different kinds of image formats in the literature. We shall consider the image format that comes out of an image frame grabber, i.e., the captured image format, and the format when images are stored, i.e., the stored image format.

4.2.2 Captured Image Format


The captured image format is specified by two main parameters: spatial resolution, which is specified as pixels x pixels (e.g. 640x480), and color encoding, which is specified in bits per pixel. Both parameter values depend on the hardware and software used for input/output of images.
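To see how these two parameters determine storage, the C sketch below computes the uncompressed size of a 640x480 capture at a few common color-encoding depths; the resolution and depths are illustrative values.

#include <stdio.h>

/* Bytes needed for an uncompressed image:
   width * height * bits_per_pixel / 8 */
long image_bytes(int width, int height, int bits_per_pixel) {
    return (long)width * height * bits_per_pixel / 8;
}

int main(void) {
    int w = 640, h = 480;        /* common captured-image resolution */
    int depths[] = {1, 8, 24};   /* monochrome, 256 colors, true color */
    for (int i = 0; i < 3; i++) {
        long bytes = image_bytes(w, h, depths[i]);
        printf("%dx%d at %2d bits/pixel: %8ld bytes (%.2f MB)\n",
               w, h, depths[i], bytes, bytes / (1024.0 * 1024.0));
    }
    return 0;
}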

4.2.3 Stored Image Format


When we store an image, we are storing a two-dimensional array of values, in which each value represents the data associated with a pixel in the image. For a bitmap, this value is a binary digit.

4.3 Bitmaps
A bitmap is a simple information matrix describing the individual dots that are the smallest elements of resolution on a computer screen or other display or printing device. A one-dimensional matrix (one bit per dot) is enough for monochrome (black and white); greater depth (more bits of information per dot) is required to describe the more than 16 million colors the picture elements may have, as illustrated in the following figure. The state of all the pixels on a computer screen makes up the image seen by the viewer, whether in combinations of black and white or colored pixels in a line of text, a photograph-like picture, or a simple background pattern.


Where do bitmaps come from? How are they made?
o Make a bitmap from scratch with a paint or drawing program.
o Grab a bitmap from an active computer screen with a screen capture program, and then paste it into a paint program or your application.
o Capture a bitmap from a photo, artwork, or a television image using a scanner or video capture device that digitizes the image.
Once made, a bitmap can be copied, altered, emailed, and otherwise used in many creative ways.
Clip Art
A clip art collection may contain a random assortment of images, or it may contain a series of graphics, photographs, sound, and video related to a single topic. For example, Corel, Micrografx, and Fractal Design bundle extensive clip art collections with their image-editing software.
Multiple Monitors
When developing multimedia, it is helpful to have more than one monitor, or a single high-resolution monitor with lots of screen real estate, hooked up to your computer. In this way, you can display the full-screen working area of your project or presentation and still have space to put your tools and other menus. This is particularly important in an authoring system such as Macromedia Director, where the edits and changes you make in one window are immediately visible in the presentation window, provided the presentation window is not obscured by your editing tools.
Exercise
List a few software packages that can be used for creating images.

4.4 Making Still Images


Still images may be small or large, or even full screen. Whatever their form, still images are generated by the computer in two ways: as bitmaps (or paint graphics) and as vector-drawn (or just plain drawn) graphics. Bitmaps are used for photo-realistic images and for complex drawings requiring fine detail. Vector-drawn objects are used for lines, boxes, circles, polygons, and other graphic shapes that can be mathematically expressed in angles, coordinates, and distances. A drawn object can be filled with color and patterns, and you can select it as a single object. Typically, image files are compressed to save memory and disk space; many image formats already use compression within the file itself, for example GIF, JPEG, and PNG. Still images may be the most important element of your multimedia project. If you are designing multimedia by yourself, put yourself in the role of graphic artist and layout designer.

4.4.1 Bitmap Software


The abilities and features of image-editing programs for both the Macintosh and Windows range from simple to complex. The Macintosh does not ship with a painting tool, and Windows provides only the rudimentary Paint (see the following figure), so you will need to acquire this very important software separately; often bitmap editing or painting programs come as part of a bundle when you purchase your computer, monitor, or scanner.


4.4.2 Capturing and Editing Images


The image that is seen on a computer monitor is a digital bitmap stored in video memory, updated about every 1/60 second or faster, depending upon the monitor's scan rate. When images are assembled for a multimedia project, it may often be necessary to capture and store an image directly from the screen. It is possible to use the Prt Scr key available on the keyboard to capture an image.
Scanning Images
After scanning through countless clip art collections, you may still not be able to find the unusual background you want for a screen about gardening. Sometimes when you search for something too hard, you don't realize that it's right in front of your face. Open the scan in an image-editing program and experiment with different filters, the contrast, and various special effects. Be creative, and don't be afraid to try strange combinations; sometimes mistakes yield the most intriguing results.

4.5 Vector Drawing


Most multimedia authoring systems provide for the use of vector-drawn objects such as lines, rectangles, ovals, polygons, and text. Computer-aided design (CAD) programs have traditionally used vector-drawn object systems for creating the highly complex and geometric renderings needed by architects and engineers. Graphic artists designing for print media use vector-drawn objects because the same mathematics that puts a rectangle on your screen can also place that rectangle on paper without jaggies. This requires the higher resolution of the printer, using a page description language such as PostScript. Programs for 3-D animation also use vector-drawn graphics, for example for the various changes of position, rotation, and shading of light required to spin an extruded shape.
How Vector Drawing Works
Vector-drawn objects are described and drawn to the computer screen using a fraction of the memory space required to describe and store the same object in bitmap form. A vector is a line that is described by the location of its two endpoints. A simple rectangle, for example, might be defined as follows: RECT 0,0,200,200
4.6 Color
Color is a vital component of multimedia. Management of color is both a subjective and a technical exercise. Picking the right colors and combinations of colors for your project can involve many tries until you feel the result is right.

Understanding Natural Light and Color
The letters of the mnemonic ROY G. BIV, learned by many of us to remember the colors of the rainbow, are the ascending frequencies of the visible light spectrum: red, orange, yellow, green, blue, indigo, and violet. Ultraviolet light, on the other hand, is beyond the higher end of the visible spectrum and can be damaging to humans. The color white is a noisy mixture of all the color frequencies in the visible spectrum. The cornea of the eye acts as a lens to focus light rays onto the retina. The light rays stimulate many thousands of specialized nerves, called rods and cones, that cover the surface of the retina. The eye can differentiate among millions of colors, or hues, consisting of combinations of red, green, and blue.
Additive Color
In the additive color model, a color is created by combining colored light sources in three primary colors: red, green and blue (RGB). This is the process used for a TV or computer monitor.
Subtractive Color
In the subtractive color method, a new color is created by combining colored media such as paints or inks that absorb (or subtract) some parts of the color spectrum of light and reflect the others back to the eye. Subtractive color is the process used to create color in printing. The printed page is made up of tiny halftone dots of three primary colors: cyan, magenta and yellow (CMY).
Exercise
Distinguish additive and subtractive colors and write their areas of use.
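A minimal C sketch of the relationship between the two models follows. With channel values scaled 0-255, each subtractive primary is simply the complement of the corresponding additive primary; real printing pipelines add a black (K) channel and device-specific corrections, which are omitted here.

#include <stdio.h>

/* Convert an additive RGB color (as shown on a monitor) to the
   subtractive CMY primaries used in printing: CMY = 255 - RGB. */
void rgb_to_cmy(int r, int g, int b, int *c, int *m, int *y) {
    *c = 255 - r;
    *m = 255 - g;
    *y = 255 - b;
}

int main(void) {
    int c, m, y;
    rgb_to_cmy(255, 128, 0, &c, &m, &y);   /* an orange */
    printf("RGB(255,128,0) -> CMY(%d,%d,%d)\n", c, m, y);
    return 0;
}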

4.7 Image File Formats
There are many file formats used to store bitmaps and vector drawings. A few common image file formats are *.bmp, *.gif, *.jpg, *.png and *.tif.


4.8 Let us sum up


In this lesson the following points have been discussed:
o Competent, computer-literate skills in graphic art and design are vital to the success of a multimedia project.
o A digital image is represented by a matrix of numeric values, each representing a quantized intensity value.
o A bitmap is a simple information matrix describing the individual dots that are the smallest elements of resolution on a computer screen or other display or printing device.
o In the additive color model, a color is created by combining colored light sources in three primary colors: red, green and blue (RGB).
o Subtractive colors are used in printers, and additive color concepts are used in monitors and television.

4.9 Lesson-end activities


1. Discuss the difference between bitmap and vector graphics.
2. Open an image in an image-editing program capable of identifying colors. Select three different pixels in the image. Sample the color and write down its value in RGB, HSB, CMYK and hexadecimal color.


Chapter 5 Animation and Video


5.0 Aims and Objectives
In this lesson we will learn the basics of animation and video. At the end of this lesson the learner will be able to:
i. List the different animation techniques.
ii. Enumerate the software used for animation.
iii. List the different broadcasting standards.
iv. Describe the basics of video recording and how they relate to multimedia production.
v. Have knowledge of different video formats.

5.1 Introduction
Animation makes static presentations come alive. It is visual change over time and can add great power to our multimedia projects. Carefully planned, well-executed video clips can make a dramatic difference in a multimedia project. Animation is created from drawn pictures and video is created using real time visuals.

5.2 Principles of Animation


Animation is the rapid display of a sequence of images of 2-D artwork or model positions in order to create an illusion of movement. It is an optical illusion of motion due to the phenomenon of persistence of vision, and can be created and demonstrated in a number of ways. The most common method of presenting animation is as a motion picture or video program, although several other forms of presenting animation also exist. Animation is possible because of a biological phenomenon known as persistence of vision and a psychological phenomenon called phi. An object seen by the human eye remains chemically mapped on the eye's retina for a brief time after viewing. Combined with the human mind's need to conceptually complete a perceived action, this makes it possible for a series of images that are changed very slightly and very rapidly, one after the other, to seemingly blend together into a visual illusion of movement. The following shows a few cels, or frames, of a rotating logo. When the images are progressively and rapidly changed, the arrow of the compass is perceived to be spinning.

Television video builds entire frames or pictures many times every second (30 times per second in the NTSC standard); the speed with which each frame is replaced by the next one makes the images appear to blend smoothly into movement. To make an object travel across the screen while it changes its shape, just change the shape and also move, or translate, it a few pixels for each frame.
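The C sketch below shows the idea of translating an object a few pixels per frame; the frame rate, step size and starting position are arbitrary example values, and the drawing itself is represented only by a print statement.

#include <stdio.h>

int main(void) {
    /* Move an object across the screen by translating it a few pixels
       per frame.  At 30 frames per second, 4 pixels per frame covers
       120 pixels of travel every second. */
    const int frames_per_second = 30;
    const int pixels_per_frame = 4;
    int x = 0;

    for (int frame = 0; frame <= frames_per_second; frame++) {
        /* In a real program the object would be redrawn here,
           possibly with a slightly changed shape as well. */
        if (frame % 10 == 0)
            printf("frame %2d: x = %3d\n", frame, x);
        x += pixels_per_frame;
    }
    return 0;
}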

5.3 Animation Techniques


When you create an animation, organize its execution into a series of logical steps. First, gather up in your mind all the activities you wish to provide in the animation; if it is complicated, you may wish to create a written script with a list of activities and required objects. Choose the animation tool best suited for the job. Then build and tweak your sequences; experiment with lighting effects. Allow plenty of time for this phase when you are experimenting and testing. Finally, post-process your animation, doing any special rendering and adding sound effects.

5.3.1 Cel Animation


The term cel derives from the clear celluloid sheets that were used for drawing each frame, which have been replaced today by acetate or plastic. Cels of famous animated cartoons have become sought-after, suitable-for-framing collectors' items. Cel animation artwork begins with keyframes (the first and last frame of an action). For example, when an animated figure of a man walks across the screen, he balances the weight of his entire body on one foot and then the other in a series of falls and recoveries, with the opposite foot and leg catching up to support the body. The animation techniques made famous by Disney use a series of progressively different drawings on each frame of movie film, which plays at 24 frames per second. A minute of animation may thus require as many as 1,440 separate frames.


5.3.2 Computer Animation


Computer animation programs typically employ the same logic and procedural concepts as cel animation, using layer, keyframe, and tweening techniques, and even borrowing from the vocabulary of classic animators. On the computer, paint is most often filled or drawn with tools using features such as gradients and anti-aliasing. The word inks, in computer animation terminology, usually means special methods for computing RGB pixel values, providing edge detection, and layering so that images can blend or otherwise mix their colors to produce special transparencies, inversions, and effects. The primary difference among animation software programs is in how much must be drawn by the animator and how much is automatically generated by the software. In 2-D animation the animator creates an object and describes a path for the object to follow; the software takes over, actually creating the animation on the fly as the program is being viewed by your user. In 3-D animation the animator puts his effort into creating models of individual objects and designing the characteristics of their shapes and surfaces.
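Tweening, mentioned above, can be reduced to interpolating a property between two keyframe values. The C sketch below does the simplest, linear version for an object's x position; the keyframe numbers and values are invented for the example.

#include <stdio.h>

/* Linear in-betweening ("tweening"): given a property value at two
   keyframes, generate the value at any intermediate frame. */
double tween(double start, double end, int frame, int total_frames) {
    double t = (double)frame / (double)total_frames;
    return start + (end - start) * t;
}

int main(void) {
    /* Keyframe 0: x = 10.  Keyframe 24: x = 250.  Print a few of the
       in-between frames, as a 2-D animation program would compute them. */
    for (int frame = 0; frame <= 24; frame += 6)
        printf("frame %2d: x = %6.1f\n", frame, tween(10.0, 250.0, frame, 24));
    return 0;
}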

5.3.3 Kinematics
Kinematics is the study of the movement and motion of structures that have joints, such as a walking man. Inverse kinematics, provided in high-end 3-D programs, is the process by which you link objects such as hands to arms and define their relationships and limits. Once those relationships are set, you can drag these parts around and let the computer calculate the result.


5.3.4 Morphing
Morphing is a popular effect in which one image transforms into another. Morphing applications and other modeling tools that offer this effect can perform the transition not only between still images but often between moving images as well. The morphed images were built at a rate of 8 frames per second, with each transition taking a total of 4 seconds. Some products that offer morphing features are:
o Black Belt's EasyMorph and WinImages
o Human Software's Squizz
o Valis Group's Flo, MetaFlo, and MovieFlo.
Exercise
List the different animation techniques.

5.4 Animation File Formats


Some file formats are designed specifically to contain animations, and they can be ported among applications and platforms with the proper translators.
Director: *.dir, *.dcr
Animator Pro: *.fli, *.flc
3D Studio Max: *.max
SuperCard and Director: *.pics
CompuServe: *.gif
Flash: *.fla, *.swf
Following is a list of a few software packages used for computerized animation:
3D Studio Max
Flash
Animator Pro

5.5 Video
Analog versus Digital
Digital video has supplanted analog video as the method of choice for making video for multimedia use. While broadcast stations and professional production and postproduction houses remain greatly invested in analog video hardware (according to Sony, there are more than 350,000 Betacam SP devices in use today), digital video gear produces excellent finished products at a fraction of the cost of analog. A digital camcorder directly connected to a computer workstation eliminates the image-degrading analog-to-digital conversion step typically performed by expensive video capture cards, and brings the power of nonlinear video editing and production to everyday users.

5.6 Broadcast Video Standards


Four broadcast and video standards and recording formats are commonly in use around the world: NTSC, PAL, SECAM, and HDTV. Because these standards and formats are not easily interchangeable, it is important to know where your multimedia project will be used.
NTSC
The United States, Japan, and many other countries use a system for broadcasting and displaying video that is based upon the specifications set forth by the 1952 National Television Standards Committee. These standards define a method for encoding information into the electronic signal that ultimately creates a television picture. As specified by the NTSC standard, a single frame of video is made up of 525 horizontal scan lines drawn onto the inside face of a phosphor-coated picture tube every 1/30th of a second by a fast-moving electron beam.
PAL
The Phase Alternate Line (PAL) system is used in the United Kingdom, Europe, Australia, and South Africa. PAL is an integrated method of adding color to a black-and-white television signal that paints 625 lines at a frame rate of 25 frames per second.

SECAM The Sequential Color and Memory (SECAM) system is used in France, Russia, and few other countries. Although SECAM is a 625-line, 50 Hz system, it differs greatly from both the NTSC and the PAL color systems in its basic technology and broadcast method.

HDTV
High Definition Television (HDTV) provides high resolution in a 16:9 aspect ratio (see the following figure). This aspect ratio allows the viewing of Cinemascope and Panavision movies. There is contention between the broadcast and computer industries about whether to use interlacing or progressive-scan technologies.

Exercise
List the different broadcast video standards and compare their specifications.

5.7 Shooting and Editing Video


To add full-screen, full-motion video to your multimedia project, you will need to invest in specialized hardware and software or purchase the services of a professional video production studio. In many cases, a professional studio will also provide editing tools and post-production capabilities that you cannot duplicate with your Macintosh or PC. NTSC television overscan is approximately 648x480 (4:3).
Video Tips
A useful tool easily implemented in most digital video editing applications is blue screen, Ultimatte, or chroma key editing. Blue screen is a popular technique for making multimedia titles because expensive sets are not required. Incredible backgrounds can be generated using 3-D modeling and graphics software, and one or more actors, vehicles, or other objects can be neatly layered onto that background. Applications such as VideoShop, Premiere, Final Cut Pro, and iMovie provide this capability.
Recording Formats

S-VHS video
In S-VHS video, color and luminance information are kept on two separate tracks. The result is a definite improvement in picture quality. This standard is also used in Hi-8. Still, if your ultimate goal is to have your project accepted by broadcast stations, this would not be the best choice.
Component (YUV)
In the early 1980s, Sony began to experiment with a new portable professional video format based on Betamax. Panasonic developed its own standard based on a similar technology, called MII. Betacam SP has become the industry standard for professional video field recording. This format may soon be eclipsed by a new digital version called Digital Betacam.
Digital Video
Full integration of motion video on computers eliminates the analog television form of video from the multimedia delivery platform. If a video clip is stored as data on a hard disk, CD-ROM, or other mass-storage device, that clip can be played back on the computer's monitor without overlay boards, videodisc players, or second monitors. This playback of digital video is accomplished using software architectures such as QuickTime or AVI. As a multimedia producer or developer, you may need to convert video source material from its still common analog form (videotape) to a digital form manageable by the end user's computer system. So an understanding of analog video and some special hardware must remain in your multimedia toolbox. Analog-to-digital conversion of video can be accomplished using the video overlay hardware described above, or video can be delivered direct to disk using FireWire cables. To repetitively digitize a full-screen color video image every 1/30 second and store it to disk or RAM severely taxes both Macintosh and PC processing capabilities; special hardware, compression firmware, and massive amounts of digital storage space are required.

5.8 Video Compression


To digitize and store a 10-second clip of full-motion video in your computer requires the transfer of an enormous amount of data in a very short amount of time. Reproducing just one frame of digital component video at 24 bits requires almost 1 MB of computer data; 30 seconds of video will fill a gigabyte hard disk. Full-size, full-motion video requires that the computer deliver data at about 30 MB per second. This overwhelming technological bottleneck is overcome using digital video compression schemes, or codecs (coders/decoders). A codec is the algorithm used to compress a video for delivery and then decode it in real time for fast playback. Real-time video compression algorithms such as MPEG, P*64, DVI/Indeo, JPEG, Cinepak, Sorenson, ClearVideo, RealVideo, and VDOwave are available to compress digital video information. Compression schemes use the Discrete Cosine Transform (DCT), an encoding algorithm that quantifies the human eye's ability to detect color and image distortion. All of these codecs employ lossy compression algorithms. In addition to compressing video data, streaming technologies are being implemented to provide reasonable-quality low-bandwidth video on the Web. Microsoft, RealNetworks, VXtreme, VDOnet, Xing, Precept, Cubic, Motorola, Viva, Vosaic, and Oracle are actively pursuing the commercialization of streaming technology on the Web. QuickTime, Apple's software-based architecture for seamlessly integrating sound, animation, text, and video (data that changes over time), is often thought of as a compression standard, but it is really much more than that.
MPEG
The MPEG standard has been developed by the Moving Picture Experts Group, a working group convened by the International Standards Organization (ISO) and the International Electrotechnical Commission (IEC) to create standards for the digital representation of moving pictures and associated audio and other data. MPEG-1 and MPEG-2 are the current standards. Using MPEG-1, you can deliver 1.2 Mbps of video and 250 Kbps of two-channel stereo audio using CD-ROM technology. MPEG-2, a completely different system from MPEG-1, requires higher data rates (3 to 15 Mbps) but delivers higher image resolution, picture quality, interlaced video formats, multiresolution scalability, and multichannel audio features.
DVI/Indeo
DVI is a proprietary, programmable compression/decompression technology based on the Intel i750 chip set. This hardware consists of two VLSI (Very Large Scale Integration) chips to separate the image processing and display functions. Two levels of compression and decompression are provided by DVI: Production Level Video (PLV) and Real Time Video (RTV). PLV and RTV both use variable compression rates. DVI's algorithms can compress video images at ratios between 80:1 and 160:1. DVI will play back video in full-frame size and in full color at 30 frames per second.
Optimizing Video Files for CD-ROM

CD-ROMs provide an excellent distribution medium for computer-based video: they are inexpensive to mass produce, and they can store great quantities of information. CD-ROM players offer slow data transfer rates, but adequate video transfer can be achieved by taking care to properly prepare your digital video files. Limit the amount of synchronization required between the video and audio. With Microsoft's AVI files, the audio and video data are already interleaved, so this is not a necessity, but with QuickTime files, you should flatten your movie. Flattening means you interleave the audio and video segments together. Use regularly spaced key frames, 10 to 15 frames apart, so that temporal compression can correct for seek-time delays. Seek time is how long it takes the CD-ROM player to locate specific data on the CD-ROM disc. Even fast 56x drives must spin up, causing some delay (and occasionally substantial noise). The size of the video window and the frame rate you specify dramatically affect performance. In QuickTime, 20 frames per second played in a 160x120-pixel window is equivalent to playing 10 frames per second in a 320x240 window. The more data that has to be decompressed and transferred from the CD-ROM to the screen, the slower the playback.
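The arithmetic behind these data-rate concerns is easy to reproduce. The C sketch below computes the uncompressed size of one 640x480 frame at 24 bits per pixel and the resulting data rate at 30 frames per second; broadcast-resolution component frames are somewhat larger, which is why the text quotes roughly 1 MB per frame and about 30 MB per second.

#include <stdio.h>

int main(void) {
    /* Uncompressed video at 24 bits per pixel. */
    long width = 640, height = 480;
    long bytes_per_frame = width * height * 24 / 8;               /* one frame  */
    double mb_per_second = bytes_per_frame * 30.0 / (1024 * 1024); /* at 30 fps */
    double mb_per_30s    = mb_per_second * 30.0;

    printf("One %ldx%ld frame:       %ld bytes (about %.2f MB)\n",
           width, height, bytes_per_frame,
           bytes_per_frame / (1024.0 * 1024.0));
    printf("Data rate at 30 fps:     %.1f MB per second\n", mb_per_second);
    printf("30 seconds of raw video: %.0f MB\n", mb_per_30s);
    return 0;
}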

5.9 Let us sum up


In this lesson we have learnt the use of animation and video in multimedia presentations. The following points have been discussed in this lesson:
o Animation is created from drawn pictures, and video is created using real-time visuals.
o Animation is possible because of a biological phenomenon known as persistence of vision.
o The different techniques used in animation are cel animation, computer animation, kinematics and morphing.
o Four broadcast and video standards and recording formats are commonly in use around the world: NTSC, PAL, SECAM, and HDTV.
o Real-time video compression algorithms such as MPEG, P*64, DVI/Indeo, JPEG, Cinepak, Sorenson, ClearVideo, RealVideo, and VDOwave are available to compress digital video information.


Chapter 6 Multimedia Hardware Connecting Devices


6.0 Aims and Objectives
In this lesson we will learn about the multimedia hardware required for multimedia production. At the end of the lesson the learner will be able to identify the proper hardware required for connecting various devices.

6.1 Introduction
The hardware required for a multimedia PC depends on personal preference, budget, project delivery requirements and the type of material and content in the project. Multimedia production was once much smoother and easier on the Macintosh than in Windows, but multimedia content production in Windows has been made easy with additional storage and lower computing cost. The right selection of multimedia hardware results in a good-quality multimedia presentation.

6.2 Multimedia Hardware


The hardware required for multimedia can be classified into five categories:
1. Connecting devices
2. Input devices
3. Output devices
4. Storage devices
5. Communicating devices

6.3 Connecting Devices


Among the many pieces of hardware (computers, monitors, disk drives, video projectors, light valves, players, VCRs, mixers and sound speakers) there are plenty of wires that connect these devices. The data transfer speed the connecting devices provide will determine how fast the multimedia content can be delivered. The most popularly used connecting devices are:
SCSI
USB
MCI
IDE

6.4 SCSI
SCSI (Small Computer System Interface) is a set of standards for physically connecting and transferring data between computers and peripheral devices. The SCSI standards define commands, protocols, and electrical and optical interfaces. SCSI is most commonly used for hard disks and tape drives, but it can connect a wide range of other devices, including scanners and optical drives (CD, DVD, etc.). SCSI is most commonly pronounced "scuzzy". Since its standardization in 1986, SCSI has been commonly used in the Apple Macintosh and Sun Microsystems computer lines and in PC server systems. SCSI has never been popular in the low-priced IBM PC world, owing to the lower cost and adequate performance of its ATA hard disk standard. SCSI drives and even SCSI RAIDs became common in PC workstations for video or audio production, but the appearance of large, cheap SATA drives means that SATA is rapidly taking over this market. Currently, SCSI is popular on high-performance workstations and servers. RAIDs on servers almost always use SCSI hard disks, though a number of manufacturers offer SATA-based RAID systems as a cheaper option. Desktop computers and notebooks more typically use the ATA/IDE or the newer SATA interfaces for hard disks, and USB and FireWire connections for external devices.

6.4.1 SCSI interfaces


SCSI is available in a variety of interfaces. The first, still very common, was parallel SCSI (also called SPI), which uses a parallel electrical bus design. The traditional SPI design is making a transition to Serial Attached SCSI, which switches to a serial point-to-point design but retains other aspects of the technology. iSCSI drops the physical implementation entirely, and instead uses TCP/IP as a transport mechanism. Finally, many other interfaces which do not rely on complete SCSI standards still implement the SCSI command protocol.


The following table compares the different types of SCSI.

6.4.2 SCSI cabling


Internal SCSI cables are usually ribbon cables that have multiple 68-pin or 50-pin connectors. External cables are shielded and only have connectors on the ends.

iSCSI
iSCSI preserves the basic SCSI paradigm, especially the command set, almost unchanged. iSCSI advocates project the iSCSI standard, an embedding of SCSI-3 over TCP/IP, as displacing Fibre Channel in the long run, arguing that Ethernet data rates are currently increasing faster than data rates for Fibre Channel and similar disk-attachment technologies. iSCSI could thus address both the low-end and high-end markets with a single commodity-based technology.
Serial SCSI
Four recent versions of SCSI (SSA, FC-AL, FireWire, and Serial Attached SCSI (SAS)) break from the traditional parallel SCSI standards and perform data transfer via serial communications. Although much of the documentation of SCSI talks about the parallel interface, most contemporary development effort is on serial SCSI. Serial SCSI has a number of advantages over parallel SCSI: faster data rates, hot swapping, and improved fault isolation. The primary reason for the shift to serial interfaces is the clock skew issue of high-speed parallel interfaces, which makes the faster variants of parallel SCSI susceptible to problems caused by cabling and termination. Serial SCSI devices are more expensive than the equivalent parallel SCSI devices.

6.4.3 SCSI command protocol


In addition to many different hardware implementations, the SCSI standards also include a complex set of command protocol definitions. The SCSI command architecture was originally defined for parallel SCSI buses but has been carried forward with minimal change for use with iSCSI and serial SCSI. Other technologies which use the SCSI command set include the ATA Packet Interface, the USB Mass Storage class and FireWire SBP-2. In SCSI terminology, communication takes place between an initiator and a target. The initiator sends a command to the target, which then responds. SCSI commands are sent in a Command Descriptor Block (CDB). The CDB consists of a one-byte operation code followed by five or more bytes containing command-specific parameters. At the end of the command sequence the target returns a Status Code byte, which is usually 00h for success, 02h for an error (called a Check Condition), or 08h for busy. When the target returns a Check Condition in response to a command, the initiator usually then issues a SCSI Request Sense command in order to obtain a Key Code Qualifier (KCQ) from the target. The Check Condition and Request Sense sequence involves a special SCSI protocol called a Contingent Allegiance Condition. There are 4 categories of SCSI commands: N (non-data), W (writing data from initiator to target), R (reading data), and B (bidirectional). There are about 60 different SCSI commands in total, with the most common being:
Test unit ready: Queries the device to see if it is ready for data transfers (disk spun up, media loaded, etc.).
Inquiry: Returns basic device information; also used to "ping" the device since it does not modify sense data.
Request sense: Returns any error codes from the previous command that returned an error status.
Send diagnostic and Receive diagnostic results: Runs a simple self-test, or a specialized test defined in a diagnostic page.
Start/Stop unit: Spins disks up and down, loads/unloads media.
Read capacity: Returns storage capacity.
Format unit: Sets all sectors to all zeroes, and also allocates logical blocks avoiding defective sectors.
Read Format Capacities: Reads the capacity of the sectors.
Read (four variants): Reads data from a device.
Write (four variants): Writes data to a device.
Log sense: Returns current information from log pages.
Mode sense: Returns current device parameters from mode pages.
Mode select: Sets device parameters in a mode page.
Each device on the SCSI bus is assigned at least one Logical Unit Number (LUN). Simple devices have just one LUN; more complex devices may have multiple LUNs. A "direct access" (i.e. disk-type) storage device consists of a number of logical blocks, usually referred to by the term Logical Block Address (LBA). A typical LBA equates to 512 bytes of storage. The usage of LBAs has evolved over time, and so four different command variants are provided for reading and writing data. The Read(6) and Write(6) commands contain a 21-bit LBA address. The Read(10), Read(12), Read Long, Write(10), Write(12), and Write Long commands all contain a 32-bit LBA address plus various other parameter options. A "sequential access" (i.e. tape-type) device does not have a specific capacity because it typically depends on the length of the tape, which is not known exactly. Reads and writes on a sequential access device happen at the current position, not at a specific LBA. The block size on sequential access devices can either be fixed or variable, depending on the specific device. (Earlier devices, such as 9-track tape, tended to be fixed block, while later types, such as DAT, almost always supported variable block sizes.)
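As an illustration of the CDB layout described above, the C sketch below fills in a 10-byte READ(10) command block with a 32-bit logical block address and a 16-bit transfer length, both stored most-significant byte first; the LBA and block count used in main are arbitrary example values.

#include <stdio.h>
#include <string.h>
#include <stdint.h>

/* Build a 10-byte Command Descriptor Block for the SCSI READ(10)
   command: operation code 0x28, a 32-bit LBA and a 16-bit transfer
   length, both big-endian. */
void build_read10(uint8_t cdb[10], uint32_t lba, uint16_t blocks) {
    memset(cdb, 0, 10);
    cdb[0] = 0x28;                    /* operation code: READ(10) */
    cdb[2] = (uint8_t)(lba >> 24);    /* logical block address, MSB first */
    cdb[3] = (uint8_t)(lba >> 16);
    cdb[4] = (uint8_t)(lba >> 8);
    cdb[5] = (uint8_t)(lba);
    cdb[7] = (uint8_t)(blocks >> 8);  /* transfer length in blocks */
    cdb[8] = (uint8_t)(blocks);
    /* byte 9 is the control byte, left at 0 here */
}

int main(void) {
    uint8_t cdb[10];
    build_read10(cdb, 1000, 8);       /* read 8 blocks starting at LBA 1000 */
    for (int i = 0; i < 10; i++)
        printf("%02X ", cdb[i]);
    printf("\n");
    return 0;
}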

6.4.4 SCSI device identification


In the modern SCSI transport protocols, there is an automated process of "discovery" of the IDs. SSA initiators "walk the loop" to determine what devices are there and then assign each one a 7-bit "hop-count" value. FC-AL initiators use the LIP (Loop Initialization Protocol) to interrogate each device port for its WWN (World Wide Name). For iSCSI, because of the unlimited scope of the (IP) network, the process is quite complicated. These discovery processes occur at power-on/initialization time and also if the bus topology changes later, for example if an extra device is added. On a parallel SCSI bus, a device (e.g. host adapter, disk drive) is identified by a "SCSI ID", which is a number in the range 0-7 on a narrow bus and in the range 0-15 on a wide bus. On earlier models a physical jumper or switch controls the SCSI ID of the initiator (host adapter). On modern host adapters (since about 1997), doing I/O to the adapter sets the SCSI ID; for example, the adapter often contains a BIOS program that runs when the computer boots up, and that program has menus that let the operator choose the SCSI ID of the host adapter. Alternatively, the host adapter may come with software that must be installed on the host computer to configure the SCSI ID. The traditional SCSI ID for a host adapter is 7, as that ID has the highest priority during bus arbitration (even on a 16-bit bus). The SCSI ID of a device in a drive enclosure that has a backplane is set either by jumpers or by the slot in the enclosure the device is installed into, depending on the model of the enclosure. In the latter case, each slot on the enclosure's backplane delivers control signals to the drive to select a unique SCSI ID. A SCSI enclosure without a backplane often has a switch for each drive to choose the drive's SCSI ID. The enclosure is packaged with connectors that must be plugged into the drive where the jumpers are typically located; the switch emulates the necessary jumpers. While there is no standard that makes this work, drive designers typically set up their jumper headers in a consistent format that matches the way these switches are implemented. Note that a SCSI target device (which can be called a "physical unit") is often divided into smaller "logical units." For example, a high-end disk subsystem may be a single SCSI device but contain dozens of individual disk drives, each of which is a logical unit (more commonly, it is not that simple: virtual disk devices are generated by the subsystem based on the storage in those physical drives, and each virtual disk device is a logical unit). The SCSI ID, WWNN, etc. in this case identifies the whole subsystem, and a second number, the logical unit number (LUN), identifies a disk device within the subsystem. It is quite common, though incorrect, to refer to the logical unit itself as a "LUN." Accordingly, the actual LUN may be called a "LUN number" or "LUN id". Setting the bootable (or first) hard disk to SCSI ID 0 is an accepted IT community recommendation. SCSI ID 2 is usually set aside for the floppy drive, while SCSI ID 3 is typically for a CD-ROM drive.


6.4.5 SCSI enclosure services


In larger SCSI servers, the disk-drive devices are housed in an intelligent enclosure that supports SCSI Enclosure Services (SES). The initiator can communicate with the enclosure using a specialized set of SCSI commands to access power, cooling, and other non-data characteristics.
Exercise
List a few types of SCSI.

6.5 Media Control Interface (MCI)


The Media Control Interface (MCI) is an aging API for controlling multimedia peripherals connected to a Microsoft Windows or OS/2 computer. MCI makes it very simple to write a program which can play a wide variety of media files, and even to record sound, by just passing commands as strings. It uses relations described in the Windows registry or in the [MCI] section of the file SYSTEM.INI. The MCI interface is a high-level API developed by Microsoft and IBM for controlling multimedia devices, such as CD-ROM players and audio controllers. The advantage is that MCI commands can be transmitted both from a programming language and from a scripting language (OpenScript, Lingo). For a number of years, the MCI interface has been phased out in favor of the DirectX APIs.

6.5.1 MCI Devices


The Media Control Interface consists of 4 parts:
AVIVideo
CDAudio
Sequencer
WaveAudio
Each of these so-called MCI devices can play a certain type of file; e.g. AVIVideo plays AVI files and CDAudio plays CD tracks, among others. Other MCI devices have also been made available over time.

6.5.2 Playing media through the MCI interface


To play a type of media, it needs to be initialized correctly using MCI commands. These commands are subdivided into categories:
System Commands
Required Commands
Basic Commands
Extended Commands
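As an illustration of the command-string style described above, the short Windows C sketch below opens, plays and closes a WAV file through the WaveAudio MCI device using mciSendStringA from the winmm library. The file name clip.wav and the alias are placeholders; the program must be built with a Windows compiler and linked against winmm.lib.

/* Windows-only example: link with winmm.lib. */
#include <windows.h>
#include <mmsystem.h>
#include <stdio.h>

int main(void) {
    /* Open the file with the waveaudio MCI device and give it an alias. */
    if (mciSendStringA("open clip.wav type waveaudio alias clip", NULL, 0, NULL)) {
        printf("could not open clip.wav\n");
        return 1;
    }
    /* Play it, wait until playback finishes, then release the device. */
    mciSendStringA("play clip wait", NULL, 0, NULL);
    mciSendStringA("close clip", NULL, 0, NULL);
    return 0;
}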

6.6 IDE
Usually storage devices connect to the computer through an Integrated Drive Electronics (IDE) interface. Essentially, an IDE interface is a standard way for a storage device to connect to a computer. IDE is actually not the true technical name for the interface standard. The original name, AT Attachment (ATA), signified that the interface was initially developed for the IBM AT computer. IDE was created as a way to standardize the use of hard drives in computers. The basic concept behind IDE is that the hard drive and the controller should be combined. The controller is a small circuit board with chips that provide guidance as to exactly how the hard drive stores and accesses data. Most controllers also include some memory that acts as a buffer to enhance hard drive performance. Before IDE, controllers and hard drives were separate and often proprietary. In other words, a controller from one manufacturer might not work with a hard drive from another manufacturer. The distance between the controller and the hard drive could result in poor signal quality and affect performance. Obviously, this caused much frustration for computer users. IDE devices use a ribbon cable to connect to each other. Ribbon cables have all of the wires laid flat next to each other instead of bunched or wrapped together in a bundle. IDE ribbon cables have either 40 or 80 wires. There is a connector at each end of the cable and another one about two-thirds of the distance from the motherboard connector. This cable cannot exceed 18 inches (46 cm) in total length (12 inches from first to second connector, and 6 inches from second to third) to maintain signal integrity. The three connectors are typically different colors and attach to specific items: the blue connector attaches to the motherboard, the black connector attaches to the primary (master) drive, and the grey connector attaches to the secondary (slave) drive. Enhanced IDE (EIDE), an extension to the original ATA standard again developed by Western Digital, allowed the support of drives having a storage capacity larger than 504 MiB (528 MB), up to 7.8 GiB (8.4 GB). Although these new names originated in branding convention and not as an official standard, the terms IDE and EIDE often appear as if interchangeable with ATA. This may be attributed to the two technologies being introduced with the same consumable devices, these "new" ATA hard drives. With the introduction of Serial ATA around 2003, conventional ATA was retroactively renamed Parallel ATA (P-ATA), referring to the method in which data travels over the wires in this interface.

6.7 USB
Universal Serial Bus (USB) is a serial bus standard to interface devices. A major component in the legacy-free PC, USB was designed to allow peripherals to be connected using a single standardized interface socket and to improve plug-and-play capabilities by allowing devices to be connected and disconnected without rebooting the computer (hot swapping). Other convenient features include providing power to low-consumption devices without the need for an external power supply and allowing many devices to be used without requiring manufacturer specific, individual device drivers to be installed. USB is intended to help retire all legacy varieties of serial and parallel ports. USB can connect computer peripherals such as mouse devices, keyboards, PDAs, gamepads and joysticks, scanners, digital cameras, printers, personal media players, and flash drives. For many of those devices USB has become the standard connection method. USB is also used extensively to connect non-networked printers; USB simplifies connecting several printers to one computer. USB was originally designed for personal computers, but it has become commonplace on other devices such as PDAs and video game consoles. The design of USB is standardized by the USB Implementers Forum (USB-IF), an industry standards body incorporating leading companies from the computer and electronics industries. Notable members have included Apple Inc., Hewlett-Packard, NEC, Microsoft, Intel, and Agere. A USB system has an asymmetric design, consisting of a host, a multitude of downstream USB ports, and multiple peripheral devices connected in a tiered-star topology. Additional USB hubs may be included in the tiers, allowing branching into a tree structure, subject to a limit of 5 levels of tiers. USB host may have multiple host controllers and each host controller may provide one or more USB ports. Up to 127 devices, including the hub devices, may be connected to a single host controller. USB devices are linked in series through hubs. There always exists one hub known as the root hub, which is built-in to the host controller. So-called "sharing hubs" also exist; allowing multiple computers to access the same peripheral device(s), either switching access between PCs
automatically or manually. They are popular in small-office environments. In network terms, they converge branches rather than diverge them. A single physical USB device may consist of several logical sub-devices that are referred to as device functions, because each individual device may provide several functions, such as a webcam (video device function) with a built-in microphone (audio device function).

Exercise
List the connecting devices discussed in this lesson.
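The tiered-star topology just described can be pictured as a small data structure. The following sketch is illustrative only; it enforces just the two limits named in the text (at most 127 devices, hubs included, per host controller, and at most 5 hub tiers):

# Minimal model of a USB tiered-star topology.
MAX_DEVICES_PER_CONTROLLER = 127
MAX_HUB_TIERS = 5

class UsbNode:
    def __init__(self, name, is_hub=False, tier=1):
        self.name, self.is_hub, self.tier = name, is_hub, tier
        self.children = []

    def attach(self, name, is_hub=False):
        if not self.is_hub:
            raise ValueError("devices can only be attached to hubs")
        if is_hub and self.tier >= MAX_HUB_TIERS:
            raise ValueError("maximum hub tier depth exceeded")
        child = UsbNode(name, is_hub, self.tier + 1)
        self.children.append(child)
        return child

    def count(self):
        # Devices reachable from this node, itself included.
        return 1 + sum(c.count() for c in self.children)

root = UsbNode("root hub", is_hub=True)        # built into the host controller
hub = root.attach("external hub", is_hub=True)
hub.attach("keyboard")
hub.attach("webcam")
assert root.count() <= MAX_DEVICES_PER_CONTROLLER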

6.8 Let us sum up


In this lesson we have learnt the different hardware required for multimedia production. We have discussed the following points related to the connecting devices used in a multimedia computer. SCSI (Small Computer System Interface) is a set of standards for physically connecting and transferring data between computers and peripheral devices. On a parallel SCSI bus, a device (e.g. host adapter, disk drive) is identified by a "SCSI ID", which is a number in the range 0-7 on a narrow bus and in the range 0-15 on a wide bus. The Media Control Interface, MCI in short, is an aging API for controlling multimedia peripherals connected to a Microsoft Windows computer.

6.9 Lesson-end activities


1. Identify the SYSTEM.INI file present in a computer and find the list of devices installed in a computer. Try to identify the settings for each device.


CHAPTER 7 Multimedia Workstation


7.0 Aims and Objectives
In this lesson we will learn the different requirements for a computer to become a multimedia workstation. At the end of this chapter the learner will be able to identify the requirements for making a computer, a multimedia workstation.

7.1 Introduction
A multimedia workstation is a computer with facilities to handle multimedia objects such as text, audio, video, animation and images. A multimedia workstation was earlier identified as an MPC (Multimedia Personal Computer). In the current scenario all computers are prebuilt with multimedia processing facilities, hence it is no longer necessary to identify a computer as an MPC. A multimedia system is comprised of both hardware and software components, but the major driving force behind multimedia development is research and development in hardware capabilities. Besides the multimedia hardware capabilities of current personal computers (PCs) and workstations, computer networks with their increasing throughput and speed have started to offer services which support multimedia communication systems. In this area, too, computer networking technology advances faster than the software.

7.2 Communication Architecture


Local multimedia systems (i.e., multimedia workstations) frequently include a network interface (e.g., an Ethernet card) through which they can communicate with each other. However, the transmission of audio and video cannot be carried out with only the conventional communication infrastructure and network adapters. Until now, the solution was that continuous and discrete media were handled in different environments, independently of each other. This means that entirely different systems were built. For example, on the one hand, the analog telephone system provides audio transmission services using its original dial devices connected by copper wires to the telephone company's nearest end office. The end offices are connected to switching centers, called toll offices, and these centers are connected through high-bandwidth intertoll trunks to intermediate switching offices. This hierarchical structure allows for reliable audio communication. On the other hand, digital computer networks provide data transmission services at lower data rates using network adapters connected by copper wires to switches and routers. Even today, professional radio and television studios transmit audio and video streams in the form of analog signals, although most network components (e.g., switches), over which these signals are transmitted, work internally in a digital mode.

7.3 Hybrid Systems


By using existing technologies, integration and interaction between analog and digital environments can be implemented. This integration approach is called the hybrid approach. The main advantage of this approach is the high quality of audio and video, and the fact that all the necessary devices for input, output, storage and transfer are available. The hybrid approach is used for studying application user interfaces, application programming interfaces or application scenarios.

Integrated Device Control
One possible integration approach is to provide control of analog input/output audio-video components in the digital environment. Moreover, the connection between the sources (e.g., CD player, camera, microphone) and destinations (e.g., video recorder, writeable CD), or the switching of audio-video signals, can be controlled digitally.

Integrated Transmission Control
A second possibility to integrate digital and analog components is to provide a common transmission control. This approach implies that analog audio-video sources and destinations are connected to the computer for control purposes, to transmit continuous data over digital networks, such as a cable network.

Integrated Transmission
The next possibility to integrate digital and analog components is to provide a common transmission network. This implies that external analog audio-video devices are connected to computers using A/D (D/A) converters outside of the computer, not only for control, but also for processing purposes. Continuous data are transmitted over shared data networks.

7.4 Digital Systems


Connection to Workstations


In digital systems, audio-video devices can be connected directly to the computers (workstations), and digitized audio-video data are transmitted over shared data networks. Audio-video devices in these systems can be either analog or digital.

Connection to Switches
Another possibility to connect audio-video devices to a digital network is to connect them directly to the network switches.

7.5 Multimedia Workstation


Current workstations are designed for the manipulation of discrete media information. The data should be exchanged as quickly as possible between the involved components, often interconnected by a common bus. Computationally intensive and dedicated processing requirements lead to dedicated hardware, firmware and additional boards. Examples of these components are hard disk controllers and FDDI adapters. A multimedia workstation is designed for the simultaneous manipulation of discrete and continuous media information. The main components of a multimedia workstation are:
Standard Processor(s) for the processing of discrete media information.
Main Memory and Secondary Storage with corresponding autonomous controllers.
Universal Processor(s) for processing of data in real-time (signal processors).
Special-Purpose Processors designed for graphics, audio and video media (containing, for example, a micro-code decompression method for DVI processors).
Graphics and Video Adapters.
Communications Adapters (for example, the Asynchronous Transfer Mode host interface).
Further special-purpose adapters.

Bus
Within current workstations, data are transmitted over the traditional asynchronous bus, meaning that if audio-video devices are connected to a workstation, continuous data are processed in the workstation, and the data transfer is done over this bus, which provides low and unpredictable time guarantees. In multimedia workstations, in addition to this bus, the data will be transmitted over a second bus which can keep time guarantees. In later technical implementations, a bus may be developed which transmits the two kinds of data according to their requirements (this is known as a multi-bus system). The notion of a bus has to be divided into system bus and periphery bus. In their current versions, system busses such as ISA, EISA, Microchannel, Q-bus and VME-bus support only limited transfer of continuous data. The further development of periphery busses, such as SCSI, is aimed at the development of data transfer for continuous media.

Multimedia Devices
The main peripheral components are the necessary input and output multimedia devices. Most of these devices were developed for or by consumer electronics, resulting in the relatively low cost of the devices. Microphones, headphones, as well as passive and active speakers, are examples. For the most part, active speakers and headphones are connected to the computer because it, generally, does not contain an amplifier. The camera for video input is also taken from consumer electronics. Hence, a video interface in a computer must accommodate the most commonly used video techniques/standards, i.e., NTSC, PAL and SECAM with FBAS, RGB, YUV and YIQ modes. A monitor serves for video output. Besides Cathode Ray Tube (CRT) monitors (e.g., current workstation terminals), more and more terminals use the color-LCD technique (e.g., a projection TV monitor uses the LCD technique). Further, to display video, monitor characteristics such as color, high resolution, and a flat and large shape are important.

Primary Storage
Audio and video data are copied among different system components in a digital system. Examples of tasks where copying of data is necessary are the segmentation of LDUs or the appending of a header and trailer. The copying operation uses system software-specific memory management designed for continuous media. This kind of memory management needs sufficient main memory (primary storage). Besides ROMs, PROMs, EPROMs and partially static memory elements, the low cost of these modules, together with steadily increasing storage capacities, benefits the multimedia world.

Secondary Storage
The main requirements put on secondary storage and the corresponding controller are a high storage density and a low access time, respectively. On the one hand, to achieve a high storage
density, for example, a Constant Linear Velocity (CLV) technique was defined for the CD-DA (Compact Disc Digital Audio). CLV guarantees that the data density is kept constant for the entire optical disk at the expense of a higher mean access time. On the other hand, to achieve time guarantees, i.e., a lower mean access time, a Constant Angular Velocity (CAV) technique could be used. Because the time requirement is more important, systems with CAV are more suitable for multimedia than systems with CLV.

Processor
In a multimedia workstation, the necessary work is distributed among different processors, although currently, and for the near future, this does not mean that all multimedia workstations must be multi-processor systems. The processors are designed for different tasks. For example, a Digital Signal Processor (DSP) allows compression and decompression of audio in real-time. Moreover, special-purpose processors can be employed for video. The following figure shows an example of a multiprocessor for multimedia workstations envisioned for the future.
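To give a feel for the continuous data rates that the CD-DA medium and a real-time audio processor discussed above must sustain, the following quick calculation derives the raw CD-DA payload rate from its standard sampling parameters:

# Raw CD-DA (audio CD) payload rate: 44,100 samples/s, 2 channels,
# 16 bits per sample. This is the rate a drive reading at 1x and a
# DSP handling real-time audio must sustain continuously.
sample_rate = 44_100       # samples per second per channel
channels = 2
bits_per_sample = 16

bytes_per_second = sample_rate * channels * bits_per_sample // 8
print(bytes_per_second)                  # 176400 bytes/s
print(round(bytes_per_second / 1024))    # about 172 KiB/s
print(round(bytes_per_second * 8 / 1000))  # about 1411 kbit/s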

Operating System
Another possible variant to provide computation of discrete and continuous data in a multimedia workstation could be to distinguish between processes for discrete data computation and processes for continuous data processing. These processes could run on separate processors. Given an adequate operating system, perhaps even one processor could be shared, according to the requirements, between processes for discrete and continuous data.

Exercise

List a few components required for a multimedia workstation.

7.6 Preference of Operating System for Workstation.


Selection of the proper platform for developing a multimedia project may be based on your personal preference of computer, your budget constraints, the project delivery requirements, and the type of material and content in the project. Many developers believe that multimedia project development is smoother and easier on the Macintosh than in Windows, even though projects destined to run in Windows must then be ported across platforms. But hardware and authoring software tools for Windows have improved; today you can produce many multimedia projects with equal ease in either the Windows or Macintosh environment.

7.6.1 The Macintosh Platform


All Macintoshes can record and play sound. Many include hardware and software for digitizing and editing video and producing DVD discs. High-quality graphics capability is available out of the box. Unlike the Windows environment, where users can operate any application with keyboard input, the Macintosh requires a mouse. The Macintosh computer you will need for developing a project depends entirely upon the project's delivery requirements, its content, and the tools you will need for production.

7.6.2 The Windows Platform


Unlike the Apple Macintosh computer, a Windows computer is not a computer per se, but rather a collection of parts that are tied together by the requirements of the Windows operating system. Power supplies, processors, hard disks, CD-ROM players, video and audio components, monitors, keyboards and mice: it doesn't matter where they come from or who makes them. Made in Texas, Taiwan, Indonesia, Ireland, Mexico, or Malaysia by widely known or little-known manufacturers, these components are assembled and branded by Dell, IBM, Gateway, and others into computers that run Windows. In the early days, Microsoft organized the major PC hardware manufacturers into the Multimedia PC Marketing Council to develop a set of specifications that would allow Windows to deliver a dependable multimedia experience.


7.6.3 Networking Macintosh and Windows Computers


When you work in a multimedia development environment consisting of a mixture of Macintosh and Windows computers, you will want them to communicate with each other. It may also be necessary to share other resources among them, such as printers. Local area networks (LANs) and wide area networks (WANs) can connect the members of a workgroup. In a LAN, workstations are usually located within a short distance of one another, on the same floor of a building, for example. WANs are communication systems spanning great distances, typically set up and managed by large corporations and institutions for their own use, or to share with other users. LANs allow direct communication and sharing of peripheral resources such as file servers, printers, scanners, and network modems. They use a variety of proprietary technologies, most commonly Ethernet or Token Ring, to perform the connections. They can usually be set up with twisted-pair telephone wire, but be sure to use data-grade level 5 (Cat 5) wire; it makes a real difference, even if it is a little more expensive! Bad wiring will give the user never-ending headaches of intermittent and often untraceable crashes and failures.

7.7 Let us sum up


In this lesson we have learnt the different requirements for a multimedia workstation. A multimedia workstation is a computer with facilities to handle multimedia objects such as text, audio, video, animation and images. The Macintosh was a pioneer among multimedia operating systems.

7.8 Lesson-end activities


1. Identify the workstation components installed in a computer and list the multimedia component/object associated with each device.


Chapter 8: Documents, Hypertext, Hypermedia


8.0 Aims and Objectives
This lesson aims at introducing the concepts of hypertext and hypermedia. At the end of this chapter the learner will be able to : i) Understand the concepts of hypertext and hypermedia ii) Distinguish hypertext and hypermedia

8.1 Introduction
A document consists of a set of structured information that can be represented in different forms of media, and that can be generated or recorded during presentation. A document is aimed at the perception of a human, and is accessible for computer processing.

8.2 Documents
A multimedia document is a document which is comprised of information coded in at least one continuous (time-dependent) medium and in one discrete (time-independent) medium. Integration of the different media is given through a close relation between information units. This is also called synchronization. A multimedia document is closely related to its environment of tools, data abstractions, basic concepts and document architecture.

8.2.1 Document Architecture:


Exchanging documents entails exchanging the document content as well as the document structure. This requires that both documents have the same document architecture. The current standardized architectures, or architectures in the process of standardization, are the Standard Generalized Markup Language (SGML) and the Open Document Architecture (ODA). There are also proprietary document architectures, such as DEC's Document Content Architecture (DCA) and IBM's Mixed Object Document Content Architecture (MO:DCA). Information architectures use their own data abstractions and concepts. A document architecture describes the connections among the individual elements represented as models (e.g., presentation model, manipulation model). The elements in the document architecture and their relations are shown in the following figure. The figure shows a multimedia document architecture including relations between individual discrete media units and continuous media units. The manipulation model describes all the operations allowed for the creation, change and deletion of multimedia information. The representation model defines: (1) the protocols for exchanging this information among different computers; and (2) the formats for storing the data. It includes the relations between the individual information elements which need to be considered during presentation. It is important to mention that an architecture may not include all of the described properties and models.

Document architecture and its elements.

8.3 HYPERTEXT
Hypertext most often refers to text on a computer that will lead the user to other, related information on demand. Hypertext represents a relatively recent innovation to user interfaces, which overcomes some of the limitations of written text. Rather than remaining static like traditional text, hypertext makes possible a dynamic organization of information through links and connections (called hyperlinks). Hypertext can be designed to perform various tasks; for instance when a user "clicks" on it or "hovers" over it, a bubble with a word definition may appear, or a web page on a related subject may load, or a video clip may run, or an application may open. The prefix hyper ("over" or "beyond") signifies the overcoming of the old linear
constraints of written text.

Types and uses of hypertext
Hypertext documents can either be static (prepared and stored in advance) or dynamic (continually changing in response to user input). Static hypertext can be used to cross-reference collections of data in documents, software applications, or books on CD. A well-constructed system can also incorporate other user-interface conventions, such as menus and command lines. Hypertext can develop very complex and dynamic systems of linking and cross-referencing. The most famous implementation of hypertext is the World Wide Web.
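The non-linear organization described above can be pictured as a small graph of nodes and links. The following sketch is purely illustrative (the node names and link targets are invented for the example); it shows how activating a hyperlink jumps to another node rather than to the "next page":

# A toy hypertext: nodes hold content, links map anchor text to other nodes.
nodes = {
    "home":     {"text": "Welcome. See the glossary or the demo video.",
                 "links": {"glossary": "glossary", "demo video": "video"}},
    "glossary": {"text": "Hypertext: text with machine-followable links.",
                 "links": {"back": "home"}},
    "video":    {"text": "(video clip placeholder)", "links": {"back": "home"}},
}

def follow(current, anchor):
    # Return the node reached by activating a link, or stay put if it is unknown.
    return nodes[current]["links"].get(anchor, current)

page = "home"
page = follow(page, "glossary")   # a non-linear jump, not the "next page"
print(nodes[page]["text"])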

8.4 Hypermedia
Hypermedia is used as a logical extension of the term hypertext, in which graphics, audio, video, plain text and hyperlinks intertwine to create a generally nonlinear medium of information. This contrasts with the broader term multimedia, which may be used to describe non-interactive linear presentations as well as hypermedia. Hypermedia should not be confused with hypergraphics or super-writing which is not a related subject. The World Wide Web is a classic example of hypermedia, whereas a noninteractive cinema presentation is an example of standard multimedia due to the absence of hyperlinks. Most modern hypermedia is delivered via electronic pages from a variety of systems. Audio hypermedia is emerging with voice command devices and voice browsing.

8.5 Hypertext and Hypermedia


Communication reproduces knowledge stored in the human brain via several media. Documents are one method of transmitting information. Reading a document is an act of reconstructing knowledge. In an ideal case, knowledge transmission starts with an author and ends with a reconstruction of the same ideas by a reader. Today's ordinary documents (excluding hypermedia), with their linear form, neither support the reconstruction of knowledge nor simplify its reproduction. Knowledge must be artificially serialized before the actual exchange. Hence, it is transformed into a linear document and the structural information is integrated into the actual content. In the case of hypertext and hypermedia, a graphical structure is possible in a document, which may simplify the writing and reading processes.


Problem Description

Exercise Distinguish hypertext and hypermedia.


8.6 Hypertext, Hypermedia and multimedia


A book or an article on paper has a given structure and is represented in a sequential form. Although it is possible to read individual paragraphs without reading previous paragraphs, authors mostly assume a sequential reading. Therefore many paragraphs refer to earlier parts of the document. Novels, as well as movies, for example, always assume a purely sequential reception. Scientific literature can consist of independent chapters, although mostly a sequential reading is assumed. Technical documentation (e.g., manuals) often consists of a collection of relatively independent information units. A lexicon or a reference book about the Airbus, for example, is generated by several authors and only parts of it are ever read sequentially. There are also many cross-references in such documentation, which lead the reader to multiple searches at different places. Here, an electronic help facility, consisting of information links, can be very significant. The following figure shows an example of such a link. The arrows point to such a relation between the information units (Logical Data Units, LDUs). In a text (top left in the figure), a reference to the landing properties of aircraft is given. These properties are demonstrated through a video sequence (bottom left in the figure). At another place in the text, sales of landing rights for the whole USA are shown (this is visualized in the form of a map, using graphics, bottom right in the figure). Further information about the airlines with their landing rights can be made visible graphically through the selection of a particular city. Specific information about the number of the different airplanes sold with landing rights in Washington is shown at the top right in the figure with a bar diagram. Internally, the diagram information is presented in table form. The left bar points to the plane, which can be demonstrated with a video clip.


Hypertext System: A hypertext system is mainly determined through non-linear links of information. Pointers connect the nodes. The data of different nodes can be represented with one or several media types. In a pure text system, only text parts are connected. We understand hypertext as an information object which includes links to several media.


Multimedia System: A multimedia system contains information which is coded in at least one continuous and one discrete medium. For example, if only links to text data are present, then this is not a multimedia system, it is a hypertext. A video conference, with simultaneous transmission of text and graphics generated by a document processing program, is a multimedia application, although it does not have any relation to hypertext and hypermedia.

Hypermedia System: As the above figure shows, a hypermedia system includes the non-linear information links of hypertext systems and the continuous and discrete media of multimedia systems. For example, if a non-linear link consists of text and video data, then this is a hypermedia, multimedia and hypertext system.

8.7 Hypertext and the World Wide Web


In the late 1980s, Berners-Lee, then a scientist at CERN, invented the World Wide Web to meet the demand for automatic information-sharing among scientists working in different universities and institutes all over the world. In 1992, Lynx was born as an early text-based Internet web browser. Its ability to provide hypertext links within documents that could reach into documents anywhere on the Internet began the creation of the web on the Internet. After the release of web browsers for both the PC and Macintosh environments, traffic on the World Wide Web quickly exploded from only 500 known web servers in 1993 to over 10,000 in 1994. Thus, all earlier hypertext systems were overshadowed by the success of the web, even though it originally lacked many features of those earlier systems, such as an easy way to edit what you were reading, typed links, back links, transclusion, and source tracking.


Chapter 9 Document Architecture and MPEG


9.0 Aims and Objectives
This lesson aims at teaching the different document architecture followed in Multimedia. At the end of this lesson the learner will be able to : i) learn different document architectures. ii) enumerate the architecture of MHEG.

9.1 Introduction
Exchanging documents entails exchanging the document content as well as the document structure. This requires that both documents have the same document architecture. The current standards in the document architecture are 1. Standard Generalized Markup Language 2. Open Document Architecture

9.2 Document Architecture - SGML


The Standard Generalized Markup Language (SGML) was supported mostly by American publishers. Authors prepare the text, i.e., the content. They specify in a uniform way the title, tables, etc., without a description of the actual representation (e.g., script type and line distance). The publisher specifies the resulting layout. The basic idea is that the author uses tags for marking certain text parts. SGML determines the form of tags, but it does not specify their location or meaning. User groups agree on the meaning of the tags. SGML makes a frame available with which the user specifies the syntax description in an object-specific system. Here, classes and objects, hierarchies of classes and objects, inheritance and the link to methods (processing instructions) can be used by the specification. SGML specifies the syntax, but not the semantics. For example:
<title>Multimedia-Systems</title>
<author>Felix Gatou</author>
<side>IBM</side>
<summary>This exceptional paper from Peter

This example shows an application of SGML in a text document. The following figure shows the processing of an SGML document. It is divided into two processes:

SGML: Document processing from the information to the presentation.
Only the formatter knows the meaning of the tags, and it transforms the document into a formatted document. The parser uses the tags occurring in the document in combination with the corresponding document type. Specification of the document structure is done with tags. Here, parts of the layout are linked together. This is based on the joint context between the originator of the document and the formatter process, which is defined through SGML.
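As a toy illustration of the parser/formatter split described above, the following sketch scans a tagged fragment with a regular expression and hands each tag to a "formatter" that knows its meaning. It is not a real SGML parser; the tag names and styling rules are invented for the example:

import re

# Invented formatting rules standing in for the formatter's knowledge of tags.
RULES = {"title": str.upper, "author": str.title, "side": str.strip}

def format_document(source):
    # Very small "parser + formatter": find <tag>content</tag> pairs
    # and apply the rule registered for each tag name.
    out = []
    for tag, content in re.findall(r"<(\w+)>(.*?)</\1>", source, re.S):
        rule = RULES.get(tag, lambda s: s)   # unknown tags pass through
        out.append(rule(content))
    return "\n".join(out)

sample = "<title>Multimedia-Systems</title><author>felix gatou</author>"
print(format_document(sample))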

9.2.1 SGML and Multimedia


Multimedia data are supported in the SGML standard only in the form of graphics. A graphical image as a CGM (Computer Graphics Metafile) is embedded in an SGML document. The standard does not refer to other media:
<!ATTLIST video id ID #IMPLIED>
<!ATTLIST video synch synch #IMPLIED>
<!ELEMENT video (audio, movpic)>
<!ELEMENT audio (#NDATA)> -- non-text media
<!ELEMENT movpic (#NDATA)> -- non-text media
..
<!ELEMENT story (preamble, body, postamble)> : A link to concrete data can be specified through #NDATA. The data are stored mostly externally in a separate file. The above example shows the definition of video which consists of audio and motion pictures. Multimedia information units must be presented properly. The synchronization between the components is very important here.

9.3 Open Document Architecture ODA


The Open Document Architecture (ODA) was initially called the Office Document Architecture because it supports mostly office-oriented applications. The main goal of this document architecture is to support the exchange, processing and presentation of documents in open systems. ODA has been endorsed mainly by the computer industry, especially in Europe.

9.3.1 Details of ODA


The main property of ODA is the distinction among content, logical structure and layout structure. This is in contrast to SGML, where only a logical structure and the contents are defined. ODA also defines semantics. The following figure shows these three aspects linked to a document. One can imagine these aspects as three orthogonal views of the same document. Each of these views represents one aspect; together they give the actual document. The content of the document consists of Content Portions. These can be manipulated according to the corresponding medium.


A content architecture describes for each medium: (1) the specification of the elements, (2) the possible access functions and, (3) the data coding. Individual elements are the Logical Data Units (LDUs), which are determined for each medium. The access functions serve for the manipulation of individual elements. The coding of the data determines the mapping with respect to bits and bytes. ODA has content architectures for media text, geometrical graphics and raster graphics. Contents of the medium text are defined through the Character Content Architecture. The Geometric Graphics Content Architecture allows a content description of still images. It also takes into account individual graphical objects. Pixel-oriented still images are described through Raster Graphics Content Architecture. It can be a bitmap as well as a facsimile.

9.3.2 Layout structure and Logical Structure


The structure and presentation models describe, according to the information architecture, the cooperation of information units. These kinds of meta-information distinguish the layout and the logical structure. The layout structure specifies mainly the representation of a document. It is related to a two-dimensional representation with respect to a screen or paper. The presentation model is a tree. Using frames, the position and size of individual layout elements are established. For example, the page size and type style are also determined. The logical structure includes the partitioning of the content. Here, paragraphs and individual headings are specified according to the tree structure. Lists with their entries are defined as:
Paper = preamble body postamble
Body = heading paragraph picture
Chapter2 = heading paragraph picture paragraph
The above example describes the logical structure of an article. Each article consists of a preamble, a body and a postamble. The body includes two chapters; both of them start with headings. Content is assigned to each element of this logical structure. The information architecture of ODA includes the cooperative models shown in the following figure. The fundamental descriptive means of the structural and presentational models are linked to the individual nodes which build a document. The document is seen as a tree. Each node (also a document) is a constituent, or an object. It consists of a set of attributes, which represent the properties of the nodes. A node itself includes a concrete value or it defines relations between other nodes. Hereby, relations and operators, as shown in the following table, are allowed.
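A small sketch can make this tree view of the logical structure concrete. The class name and the sample constituents below are illustrative only; they mirror the "Paper = preamble body postamble" grammar above rather than the exact ODA attribute set:

from dataclasses import dataclass, field

@dataclass
class Constituent:
    # One node of the logical structure: a name, attributes, and either
    # child constituents or a concrete content value.
    name: str
    attributes: dict = field(default_factory=dict)
    children: list = field(default_factory=list)
    content: str = ""

paper = Constituent("paper", children=[
    Constituent("preamble", content="title and abstract"),
    Constituent("body", children=[
        Constituent("chapter", children=[
            Constituent("heading", content="Introduction"),
            Constituent("paragraph", content="..."),
            Constituent("picture", attributes={"medium": "raster graphics"}),
        ]),
    ]),
    Constituent("postamble", content="references"),
])

def walk(node, depth=0):
    # Depth-first print of the tree, mirroring the "document as a tree" view.
    print("  " * depth + node.name)
    for child in node.children:
        walk(child, depth + 1)

walk(paper)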

The simplified distinction is between editing, formatting (Document Layout Process and Content Layout Process) and the actual presentation (Imaging Process). Current WYSIWYG (What You See Is What You Get) editors include these in one single step. It is important to mention that the processing assumes a linear reproduction. Therefore, this is only partially suitable as a document architecture for a hypertext system. Hence, work is occurring on Hyper-ODA. A formatted document includes the specific layout structure, and possibly the generic layout structure. It can be printed directly or displayed, but it cannot be changed. A processable document consists of the specific logical structure, possibly the generic logical structure, and possibly the generic layout structure. The document cannot be printed directly or displayed, but change of content is possible. A formatted processable document is a mixed form. It can be printed, displayed, and the content can be changed. For the communication of an ODA document, the representation model, shown in the following figure, is used. This can be either the Open Document Interchange Format (ODIF), based on ASN.1, or the Open Document Language (ODL), based on SGML.


ODA information architecture with structure, content, presentation and representation models.
The manipulation model in ODA, shown in the above figure, makes use of Document Application Profiles (DAPs). These profiles are subsets of ODA (Text Only, Text + Raster Graphics + Geometric Graphics, Advanced Level).

Exercise
Distinguish additive and subtractive colors and write their areas of use.

9.3.3 ODA and Multimedia


Multimedia requires, besides spatial representational dimensions, the time as a main part of a document. If ODA should include continuous media, further extensions in the standard are necessary. Currently, multimedia is not part of the standard. All further paragraphs discuss only possible extensions, which formally may or may not be included in ODA in this form.


Contents
The content portions will change to timed content portions. Hereby, the duration does not have to be specified a priori. These types of content portions are called Open Timed Content Portions. Let us consider the example of an animation which is generated during the presentation time depending on external events. The information which can be included during the presentation time is images taken from the camera. In the case of a Closed Timed Content Portion, the duration is fixed. A suitable example is a song.

Structure
Operations between objects must be extended with a time dimension, where the time relation is specified in the parent node.

Content Architecture
Additional content architectures for audio and video must be defined. Hereby, the corresponding elements, LDUs, must be specified. For the access functions, a set of generally valid functions for the control of the media streams needs to be specified. Such functions are, for example, Start and Stop. Many functions are very often device-dependent. One of the most important aspects is a compatibility provision among different systems implementing ODA.

Logical Structures
Extensions of the logical structure for multimedia also need to be considered. For example, a film can include a logical structure. It could be a tree with the following components:
1. Prelude (introductory movie segment, participating actors in the second movie segment)
2. Scene 1
3. Scene 2
4.
5. Postlude
Such a structure would often be desirable for the user. This would allow one to deterministically skip some areas and to show or play other areas.

Layout Structure


The layout structure needs extensions for multimedia. The time relation between a motion picture and audio must be included. Further questions must be answered, such as: When will something be played? From which point? And with which attributes and dependencies? The spatial relation can specify, for example, relative and absolute positions of the audio object. Additionally, the volume and all other attributes and dependencies should be determined.

9.4 MPEG
The committee ISO/IEC JTC1/SC29 (Coding of Audio, Picture, Multimedia and Hypermedia Information) works on the standardization of the exchange format for multimedia systems. The actual standards are developed at the international level in three working groups cooperating with research and industry. The following figure shows that the three standards deal with the coding and compression of individual media. The results of the working groups, the Joint Photographic Experts Group (JPEG) and the Moving Picture Experts Group (MPEG), are of special importance in the area of multimedia systems.

In a multimedia presentation, the contents, in the form of individual information objects, are described with the help of the above-named standards. The structure (e.g., processing in time) is specified through temporal and spatial relations between the information objects. The standard for this structure description is the subject of the working group WG12, which is known as the Multimedia and Hypermedia Information Coding Expert Group (MHEG). The developed standard is officially called Information Technology - Coding of Multimedia and Hypermedia Information (MHEG). The final MHEG standard will be described in three documents. The first part will discuss the concepts, as well as the exchange format. The second part describes an alternative syntax for the exchange format that is semantically isomorphic to the first part. The third part should present a reference architecture for a linkage to script languages. The main concepts are covered in the first document, and the last two documents are still in progress; therefore, we will focus on the first document with the basic concepts. Further discussions about MHEG are based mainly on the committee draft version, because: (1) all related experiences have been gained on this basis; (2) the basic concepts of the final standard and this committee draft remain the same; and (3) the finalization of this standard is still in progress.

9.4.1 Example of an Interactive Multimedia Presentation
Before a detailed description of the MHEG objects is given, we will briefly examine the individual elements of a presentation using a small scenario. The following figure presents a time diagram of an interactive multimedia presentation. The presentation starts with some music. As soon as the voice of a news-speaker is heard in the audio sequence, a graphic should appear on the screen for a couple of seconds. After the graphic disappears, the viewer carefully reads a text. After the text presentation ends, a Stop button appears on the screen. With this button the user can abort the audio sequence. Now, using a displayed input field, the user enters the title of a desired video sequence.


These video data are displayed immediately after the modification.

Content
A presentation consists of a sequence of information representations. For the representation of this information, media with very different properties are available. Because of later reuse, it is useful to capture each information LDU as an individual object. The contents in our example are: the video sequence, the audio sequence, the graphics and the text.

Behavior
The notion of behavior means all information which specifies the representation of the contents as well as defines the run of the presentation. The first part is controlled by actions such as start, set volume, set position, etc. The second part is generated by the definition of temporal, spatial and conditional links between individual elements. If the state of the contents presentation changes, then this may result in further commands on other objects (e.g., the deletion of the graphic causes the display of the text). Another way in which the behavior of a presentation can be determined is when external programs or functions (scripts) are called.

User Interaction
In the discussed scenario, the running audio sequence could be aborted by a corresponding user interaction. There can be two kinds of user interactions. The first one is the simple selection, which controls the run of the presentation through a pre-specified choice (e.g., pushing the Stop button). The second kind is the more complex modification, which gives the user the possibility to enter data during the run of the presentation (e.g., editing of a data input field). Merging together several elements as discussed above, a presentation which progresses in time can be achieved. To be able to exchange this presentation between the involved systems, a composite element is necessary. This element is comparable to a container. It links together all the objects into a unit. With respect to hypertext/hypermedia documents, such containers can be ordered into a complex structure if they are linked together through so-called hypertext pointers.

9.4.2 Derivation of a Class Hierarchy


The following figure summarizes the individual elements in the MHEG class hierarchy in the form of a tree. Instances can be created from all leaves (roman printed classes). All internal nodes, including the root (italic printed classes), are abstract classes, i.e., no instances can be generated from them. The leaves inherit some attributes from the root of the tree as an abstract basic class. The internal nodes do not include any further functions. Their task is to unify individual classes into meaningful groups. The action, the link and the script classes are grouped under the behavior class, which defines the behavior in a presentation. The interaction class includes the user interaction, which is again modeled through the selection and modification class. All the classes together with the content and composite classes specify the individual components in the presentation and determine the component class. Some properties of the particular MHEG engine can be queried by the descriptor class. The macro class serves as the simplification of the access, respectively reuse of objects. Both classes play a minor role; therefore, they will not be discussed further.


The development of the MHEG standard uses the techniques of object-oriented design. Although a class hierarchy is considered a kernel of this technique, a closer look shows that the MHEG class hierarchy does not have the meaning it is often assigned.

MH-Object-Class
The abstract MH-Object class includes the two data structures MHEG Identifier and Descriptor. MHEG Identifier consists of an application identifier and an Object Number, and it serves for the addressing of MHEG objects. The first attribute identifies a specific application. The Object Number is a number which is defined only within the application. The data structure Descriptor provides the possibility to characterize each MHEG object more precisely through a number of optional attributes. For example, this can become meaningful if a presentation is decomposed into individual objects and the individual MHEG objects are stored in a database. Any author, supported by a proper search function, can reuse existing MHEG objects.
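The grouping of classes described in this section can be sketched as a small class skeleton. The class names follow the hierarchy as summarized above; the attributes and constructor are simplified for illustration and do not reflect the full MHEG object definitions:

# Illustrative skeleton of the MHEG class hierarchy described above.
class MHObject:                        # abstract root: identifier + descriptor
    def __init__(self, application_id, object_number, descriptor=None):
        self.mheg_identifier = (application_id, object_number)
        self.descriptor = descriptor or {}

class Component(MHObject): pass        # abstract group
class Content(Component): pass         # e.g. an audio clip, a graphic, a text
class Composite(Component): pass       # container linking objects into a unit

class Behavior(MHObject): pass         # abstract group
class Action(Behavior): pass           # start, set volume, set position, ...
class Link(Behavior): pass             # temporal / spatial / conditional links
class Script(Behavior): pass           # call to an external program or function

class Interaction(MHObject): pass      # abstract group
class Selection(Interaction): pass     # pre-specified choice (e.g. Stop button)
class Modification(Interaction): pass  # user-entered data (e.g. an input field)

stop_button = Selection(application_id=1, object_number=42)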


Chapter 10 Basic Tools for Multimedia Objects

10.0 Aims and Objectives


This lesson is intended to teach the learner the basic tools (software) used for creating and capturing multimedia. The second part of this lesson educates the user on various video file formats. At the end of the lesson the learner will be able to:
i. identify software for creating multimedia objects
ii. locate software used for editing multimedia objects
iii. understand different video file formats

10.1 Introduction
The basic tools set for building multimedia project contains one or more authoring systems and various editing applications for text, images, sound, and motion video. A few additional applications are also useful for capturing images from the screen, translating file formats and tools for the making multimedia production easier.

10.2 Text Editing and Word Processing Tools


A word processor is usually the first software tool computer users rely upon for creating text. The word processor is often bundled with an office suite. Word processors such as Microsoft Word and WordPerfect are powerful applications that include spellcheckers, table formatters, thesauruses and prebuilt templates for letters, resumes, purchase orders and other common documents.

10.3 OCR Software


Often there will be multimedia content and other text to incorporate into a multimedia project, but no electronic text file. With optical character recognition (OCR) software, a flat-bed scanner, and a computer, it is possible to save many hours of rekeying printed words, and get the job done faster and more accurately than a roomful of typists. OCR software turns bitmapped characters into electronically recognizable ASCII text. A scanner is typically used to create the bitmap. Then the software breaks the bitmap into chunks according
to whether it contains text or graphics, by examining the texture and density of areas of the bitmap and by detecting edges. The text areas of the image are then converted to ASCII characters using probability and expert-system algorithms.
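In practice, this recognition step is usually delegated to an OCR engine. The following sketch assumes the third-party pytesseract wrapper (with the Tesseract engine installed) and the Pillow imaging library; the file name is hypothetical:

from PIL import Image        # Pillow, for loading and preparing the bitmap
import pytesseract           # wrapper around the Tesseract OCR engine

# Load the scanned page and convert it to grayscale, which typically
# helps the engine separate text from background.
scan = Image.open("scanned_page.png").convert("L")

# Recognize the bitmapped characters and return plain text.
text = pytesseract.image_to_string(scan)
print(text)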

10.4 Image-Editing Tools


Image-editing applications are specialized and powerful tools for enhancing and retouching existing bitmapped images. These applications also provide many of the features and tools of painting and drawing programs, and can be used to create images from scratch as well as images digitized from scanners, video frame-grabbers, digital cameras, clip art files, or original artwork files created with a painting or drawing package. Here are some features typical of image-editing applications and of interest to multimedia developers:
Multiple windows that provide views of more than one image at a time
Conversion of major image-data types and industry-standard file formats
Direct inputs of images from scanner and video sources
Employment of a virtual memory scheme that uses hard disk space as RAM for images that require large amounts of memory
Capable selection tools, such as rectangles, lassos, and magic wands, to select portions of a bitmap
Image and balance controls for brightness, contrast, and color balance
Good masking features
Multiple undo and restore features
Anti-aliasing capability, and sharpening and smoothing controls
Color-mapping controls for precise adjustment of color balance
Tools for retouching, blurring, sharpening, lightening, darkening, smudging, and tinting
Geometric transformations such as flip, skew, rotate, and distort, and perspective changes
Ability to resample and resize an image
24-bit color, 8- or 4-bit indexed color, 8-bit gray-scale, black-and-white, and customizable color palettes
Ability to create images from scratch, using line, rectangle, square, circle, ellipse, polygon, airbrush, paintbrush, pencil, and eraser tools, with customizable brush shapes and user-definable bucket and gradient fills
Multiple typefaces, styles, and sizes, and type manipulation and masking routines
Filters for special effects, such as crystallize, dry brush, emboss, facet, fresco, graphic pen, mosaic, pixelize, poster, ripple, smooth, splatter, stucco, twirl, watercolor, wave, and wind
Support for third-party special-effect plug-ins
Ability to design in layers that can be combined, hidden, and reordered

Plug-Ins
Image-editing programs usually support powerful plug-in modules available from third-party developers that allow you to wrap, twist, shadow, cut, diffuse, and otherwise filter your images for special visual effects.
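Several of the operations listed above (resizing, geometric transformation, brightness and contrast adjustment, special-effect filters, palette reduction) can also be scripted. The sketch below uses the Pillow library; the file names are hypothetical and the parameter values are arbitrary examples:

from PIL import Image, ImageEnhance, ImageFilter

img = Image.open("photo.png")

img = img.resize((800, 600))                      # resample / resize
img = img.rotate(90, expand=True)                 # geometric transformation
img = ImageEnhance.Brightness(img).enhance(1.2)   # brightness control
img = ImageEnhance.Contrast(img).enhance(0.9)     # contrast control
img = img.filter(ImageFilter.EMBOSS)              # special-effect filter

# Reduce to an 8-bit indexed-color image with an adaptive palette.
indexed = img.convert("P", palette=Image.ADAPTIVE, colors=256)
indexed.save("photo_edited.png")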

Exercise
List a few image editing features that an image editing tool should possess.

10.5 Painting and Drawing Tools

Painting and drawing tools, as well as 3-D modelers, are perhaps the most important items in the toolkit because, of all the multimedia elements, the graphical impact of the project will likely have the greatest influence on the end user. If the artwork is amateurish, or flat and uninteresting, both the creator and the users will be disappointed. Painting software, such as Photoshop, Fireworks, and Painter, is dedicated to producing crafted bitmap images. Drawing software, such as CorelDraw, FreeHand, Illustrator, Designer, and Canvas, is dedicated to producing vector-based line art easily printed to paper at high resolution. Some software applications combine drawing and painting capabilities, but many authoring systems can import only bitmapped images. Typically, bitmapped images provide the greatest choice and power to the artist for rendering fine detail and effects, and today bitmaps are used in multimedia more often than drawn objects. Some vector-based packages such as Macromedia's Flash are aimed at reducing file download times on the Web, and may contain both bitmaps and drawn art. The anti-aliased character shown in the bitmap of Color Plate 5 is an example of the fine touches that improve the look of an image. Look for these features in a drawing or painting package:
An intuitive graphical user interface with pull-down menus, status bars, palette control, and dialog boxes for quick, logical selection
Scalable dimensions, so you can resize, stretch, and distort both large and small bitmaps
Paint tools to create geometric shapes, from squares to circles and from curves to complex polygons
Ability to pour a color, pattern, or gradient into any area
Ability to paint with patterns and clip art
Customizable pen and brush shapes and sizes
Eyedropper tool that samples colors
Auto trace tool that turns bitmap shapes into vector-based outlines
Support for scalable text fonts and drop shadows
Multiple undo capabilities, to let you try again
Painting features such as smoothing coarse-edged objects into the background with anti-aliasing; airbrushing in variable sizes, shapes, densities, and patterns; washing colors in gradients; blending; and masking
Support for third-party special-effect plug-ins
Object and layering capabilities that allow you to treat separate elements independently
Zooming, for magnified pixel editing
All common color depths: 1-, 4-, 8-, 16-, 24-, or 32-bit color, and grayscale
Good color management and dithering capability among color depths using various color models such as RGB, HSB, and CMYK
Good palette management when in 8-bit mode
Good file importing and exporting capability for image formats such as PIC, GIF, TGA, TIF, WMF, JPG, PCX, EPS, PTN, and BMP

10.6 Sound Editing Tools


Sound editing tools for both digitized and MIDI sound let you hear music as well as create it. By drawing a representation of a sound in fine increments, whether a score or a waveform, it is possible to cut, copy, paste and otherwise edit segments of it with great precision. System sounds are shipped with both Macintosh and Windows systems, and they are available as soon as the operating system is installed. For MIDI sound, a MIDI synthesizer is required to play and record sounds from musical instruments. For ordinary sound there is a variety of software, such as SoundEdit, MP3cutter and WaveStudio.
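The cut-and-paste editing of digitized sound described above amounts to copying ranges of samples. A minimal sketch using only Python's standard wave module is shown below; the file names and time points are hypothetical:

import wave

# Cut the segment between two time points out of a WAV file and save it.
def cut_segment(src, dst, start_s, end_s):
    with wave.open(src, "rb") as wf:
        params = wf.getparams()                 # channels, sample width, rate, ...
        rate = wf.getframerate()
        wf.setpos(int(start_s * rate))          # seek to the first frame
        frames = wf.readframes(int((end_s - start_s) * rate))
    with wave.open(dst, "wb") as out:
        out.setparams(params)                   # keep the original format
        out.writeframes(frames)

cut_segment("narration.wav", "narration_clip.wav", start_s=2.0, end_s=5.5)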


10.7 Animation, Video and Digital Movie Tools


Animation and digital movies are sequences of bitmapped graphic scenes (frames), rapidly played back. Most authoring tools adopt either a frame-oriented or an object-oriented approach to animation. Moviemaking tools typically take advantage of QuickTime for Macintosh and Microsoft Video for Windows, and let the content developer create, edit and present digitized motion video segments.

10.7.1 Video formats


A video format describes how one device sends video pictures to another device, such as the way that a DVD player sends pictures to a television or a computer to a monitor. More formally, the video format describes the sequence and structure of frames that create the moving video image. Video formats are commonly known in the domain of commercial broadcast and consumer devices; most notably to date, these are the analog video formats of NTSC, PAL, and SECAM. However, video formats also describe the digital equivalents of the commercial formats, the aging custom military uses of analog video (such as RS-170 and RS-343), the increasingly important video formats used with computers, and even such offbeat formats such as color field sequential. Video formats were originally designed for display devices such as CRTs. However, because other kinds of displays have common source material and because video formats enjoy wide adoption and have convenient organization, video formats are a common means to describe the structure of displayed visual information for a variety of graphical output devices.

10.7.2 Common organization of video formats


A video format describes a rectangular image carried within an envelope containing information about the image. Although video formats vary greatly in organization, there is a common taxonomy: A frame can consist of two or more fields, sent sequentially, that are displayed over time to form a complete frame. This kind of assembly is known as interlace. An interlaced video frame is distinguished from a progressive scan frame, where the entire frame is sent as a single intact entity. A frame consists of a series of lines, known as scan lines. Scan lines have a regular and consistent length in order to produce a rectangular image. This is because in analog
formats, a line lasts for a given period of time; in digital formats, the line consists of a given number of pixels. When a device sends a frame, the video format specifies that devices send each line independently from any others and that all lines are sent in top-to-bottom order. As above, a frame may be split into fields, odd and even (by line numbers) or upper and lower, respectively. In NTSC, the lower field comes first, then the upper field, and that is the whole frame. The basics of a format are aspect ratio, frame rate, and interlacing with field order if applicable. Video formats use a sequence of frames in a specified order. In some formats, a single frame is independent of any other (such as those used in computer video formats), so the sequence is only one frame. In other video formats, frames have an ordered position. Individual frames within a sequence typically have similar construction. However, depending on its position in the sequence, a frame may vary small elements within it to represent additional information. For example, MPEG-2 compression may eliminate the information that is redundant frame-to-frame in order to reduce the data size, preserving the information relating to changes between frames.
Analog video formats: NTSC, PAL, SECAM.
Digital video formats: these are MPEG-2-based terrestrial broadcast video formats, namely the ATSC Standards, DVB and ISDB. These are strictly formats of the video itself, and not of the modulation used for transmission.
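To see why such compression matters, a quick calculation of the raw, uncompressed data rate of a standard-definition frame sequence is instructive. The resolution and frame rate below are standard PAL values; the 24-bit RGB assumption is only for illustration:

# Uncompressed data rate of a 720x576 PAL picture at 25 frames/s,
# assuming 24-bit RGB (3 bytes per pixel). Interlacing sends each frame
# as two fields, but the total pixel count per second stays the same.
width, height, fps, bytes_per_pixel = 720, 576, 25, 3

bytes_per_frame = width * height * bytes_per_pixel
bytes_per_second = bytes_per_frame * fps
print(bytes_per_second)                      # 31,104,000 bytes/s
print(round(bytes_per_second / 2**20, 1))    # about 29.7 MiB/s
print(round(bytes_per_second * 8 / 10**6))   # about 249 Mbit/s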


10.7.3 QuickTime
QuickTime is a multimedia framework developed by Apple Inc. capable of handling various formats of digital video, media clips, sound, text, animation, music, and several types of interactive panoramic images. Available for Classic Mac OS, Mac OS X and Microsoft Windows operating systems, it provides essential support for software packages including iTunes, QuickTime Player (which can also serve as a helper application for web browsers to play media files that might otherwise fail to open) and Safari. The QuickTime technology consists of the following: 1. The QuickTime Player application created by Apple, which is a media player. 2. The QuickTime framework, which provides a common set of APIs for encoding and decoding audio and video. 3. The QuickTime Movie (.mov) file format, an openly-documented media container. QuickTime is integral to Mac OS X, as it was with earlier versions of Mac OS. All Apple systems ship with QuickTime already installed, as it represents the core media framework for Mac OS X. QuickTime is optional for Windows systems, although many software applications

require it. Apple bundles it with each iTunes for Windows download, but it is also available as a stand-alone installation.

QuickTime players
QuickTime is distributed free of charge, and includes the QuickTime Player application. Some other free player applications that rely on the QuickTime framework provide features not available in the basic QuickTime Player. For example, iTunes can export audio in WAV, AIFF, MP3, AAC, and Apple Lossless. In Mac OS X, a simple AppleScript can be used to play a movie in full-screen mode. However, since version 7.2 the QuickTime Player also supports full-screen viewing in the non-pro version.

QuickTime framework
The QuickTime framework provides the following: encoding and transcoding video and audio from one format to another; decoding video and audio, and then sending the decoded stream to the graphics or audio subsystem for playback (in Mac OS X, QuickTime sends video playback to the Quartz Extreme (OpenGL) Compositor); and a plug-in architecture for supporting additional codecs (such as DivX). The framework supports the following file types and codecs natively:
3GPP & 3GPP2 file formats

Audio
Apple Lossless
Audio Interchange (AIFF)
Digital Audio: Audio CD 16-bit (CDDA), 24-bit, 32-bit integer & floating point, and 64-bit floating point
MIDI
MPEG-1 Layer 3 Audio (.mp3)
MPEG-4 AAC Audio (.m4a, .m4b, .m4p)
Sun AU Audio
ULAW and ALAW Audio
Waveform Audio (WAV)

Video
AVI file format
Bitmap (BMP) codec and file format
DV file (DV NTSC/PAL and DVC Pro NTSC/PAL codecs)
Flash & FlashPix files
GIF and Animated GIF files
H.261, H.263, and H.264 codecs
JPEG, Photo JPEG, and JPEG-2000 codecs and file formats
MPEG-1, MPEG-2, and MPEG-4 video file formats and associated codecs (such as AVC)
QuickTime Movie (.mov) and QTVR movies
Other video codecs: Apple Video, Cinepak, Component Video, Graphics, and Planar RGB
Other still image formats: PNG, TIFF, and TGA

Specification for QuickTime file format

The QuickTime (.mov) file format functions as a multimedia container file that contains one or more tracks, each of which stores a particular type of data: audio, video, effects, or text (for subtitles, for example). Other file formats that QuickTime supports natively (to varying degrees) include AIFF, WAV, DV, MP3, and MPEG-1. With additional QuickTime Extensions, it can also support Ogg, ASF, FLV, MKV, DivX Media Format, and others.
Exercise: List the different file formats supported by QuickTime.
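To make the container structure concrete, the following is a minimal sketch (in Python, not part of QuickTime itself) that walks the top-level atoms of a .mov file. It relies only on the documented atom layout: a 4-byte big-endian size followed by a 4-byte type code such as ftyp, moov or mdat. The file name sample.mov is a hypothetical placeholder.

import struct

def list_top_level_atoms(path):
    """List the top-level atoms of a QuickTime (.mov) container.

    Each atom starts with a 4-byte big-endian size (which includes the
    8-byte header) followed by a 4-byte type code such as 'ftyp',
    'moov' or 'mdat'.  A size of 1 means a 64-bit extended size follows;
    a size of 0 means the atom extends to the end of the file.
    """
    atoms = []
    with open(path, "rb") as f:
        while True:
            header = f.read(8)
            if len(header) < 8:
                break
            size, kind = struct.unpack(">I4s", header)
            if size == 1:                       # 64-bit extended size follows
                size = struct.unpack(">Q", f.read(8))[0]
                payload = size - 16
            elif size == 0:                     # atom extends to end of file
                atoms.append((kind.decode("latin-1"), None))
                break
            else:
                payload = size - 8
            atoms.append((kind.decode("latin-1"), size))
            f.seek(payload, 1)                  # skip the atom body
    return atoms

if __name__ == "__main__":
    for kind, size in list_top_level_atoms("sample.mov"):  # hypothetical file
        print(kind, size)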

CHAPTER 11 User Interface


11.0 Aims and Objectives
In this lesson we will learn how the human-computer interface is designed effectively when a multimedia presentation is created. At the end of the lesson the reader will be able to understand user interfaces and the various criteria that must be satisfied in order to have an effective human-computer interface.

11.1 Introduction
In computer science, we understand the user interface as the interactive input and output of a computer as it is perceived and operated on by users. Multimedia user interfaces are used for making the multimedia content active. Without a user interface the multimedia content is considered to be linear or passive.

11.2 User Interfaces


Multimedia user interfaces are computer interfaces that communicate with users using multiple media modes, such as written text together with spoken language. Multimedia would be of little value without user interfaces. The input media determine not only how human-computer interaction occurs but also how well. Graphical user interfaces using the mouse as the main input device have greatly simplified human-machine interaction.

11.3 General Design Issues


The main emphasis in the design of a multimedia user interface is multimedia presentation. There are several issues which must be considered:
1. To determine the appropriate information content to be communicated.
2. To represent the essential characteristics of the information.
3. To represent the communicative intent.
4. To choose the proper media for information presentation.
5. To coordinate different media and assembling techniques within a presentation.
6. To provide interactive exploration of the information presented.

11.3.1 Information Characteristics for presentation:


A complete set of information characteristics makes knowledge definition and representation easier because it allows for appropriate mapping between information and presentation techniques. The information characteristics specify:
Types
Characterization schemes are based on ordering information. There are two types of ordered data: (1) coordinate versus amount, which signify points in time, space or other domains; and (2) intervals versus ratio, which suggest the types of comparisons meaningful among elements of coordinate and amount data types.
Relational Structures
This group of characteristics refers to the way in which a relation maps among its domain sets (dependency). There are functional dependencies and non-functional dependencies. An example of a relational structure which expresses functional dependency is a bar chart. An example of a relational structure which expresses non-functional dependency is a student entry in a relational database.
Multi-domain Relations
Relations can be considered across multiple domains, such as: (1) multiple attributes of a single object set (e.g., positions, colors, shapes, and/or sizes of a set of objects in a chart); (2) multiple object sets (e.g., a cluster of text and graphical symbols on a map); and (3) multiple displays.
Large Data Sets
Large data sets refer to numerous attributes of collections of heterogeneous objects (e.g., presentations of semantic networks, databases with numerous object types, attributes of technical documents for large systems, etc.).
Exercise
List a few information characteristics required for presentation.

11.3.2 Presentation Function


A presentation function is a program which displays an object. It is important to specify the presentation function independently from the presentation form, style or the information it conveys. Several approaches consider the presentation function from different points of view.


11.3.3 Presentation Design Knowledge


To design a presentation, issues like content selection, media and presentation technique selection, and presentation coordination must be considered.
Content selection is the key to conveying the information to the user. However, we are not free in this selection because content can be influenced by constraints imposed by the size and complexity of the presentation.
Media selection partly determines the information characteristics. For selecting presentation techniques, rules can be used. For example, rules for selection methods, i.e., for supporting the user's ability to locate facts in a presentation, may specify a preference for graphical techniques.
Coordination can be viewed as a process of composition. Coordination needs mechanisms such as (1) encoding techniques, (2) presentation objects that represent facts and (3) multiple displays. Coordination of multimedia employs a set of composition operators for merging, aligning and synthesizing different objects to construct displays that convey multiple attributes of one or more data sets.

11.4 Effective Human-Computer Interaction


One of the most important issues regarding multimedia interfaces is effective human-computer interaction, i.e., user-friendliness. The main issues the user interface designer should keep in mind are: (1) context; (2) linkage to the world beyond the presentation display; (3) evaluation of the interface with respect to other human-computer interfaces; (4) interactive capabilities; and (5) separability of the user interface from the application.

11.5 Video at the User Interface


A continuous sequence of at least 15 individual images per second gives a rough perception of a continuous motion picture. At the user interface, video is implemented through a continuous sequence of individual images. Hence, video can be manipulated at this interface similarly to the manipulation of individual still images. When an individual image consisting of pixels (unlike graphics, which consist of defined objects) can be presented and modified, this should also be possible for video (e.g., to create special effects in a movie). However, these functionalities for video are not as simple to deliver because the high data transfer rate necessary is not guaranteed by most of the hardware in current graphics systems.


11.6 Audio at the User Interface


Audio can be implemented at the user interface for application control. For this, speech analysis is necessary. Speech analysis is either speaker-dependent or speaker-independent. Speaker-dependent solutions allow the input of approximately 25,000 different words with a relatively low error rate. Here, an intensive learning phase to train the speech analysis system for speaker-specific characteristics is necessary prior to the speech analysis phase. A speaker-independent system needs no training phase but can recognize only a limited set of words.

Audio Tool User Interface
During audio output, the additional presentation dimension of space can be introduced by using two or more separate channels to give a more natural distribution of sound. The best-known example of this technique is stereo. In the case of monophony, all audio sources have the same spatial location. A listener can only properly understand the loudest audio signal. The same effect can be simulated by closing one ear. Stereophony allows listeners with bilateral hearing capabilities to hear lower intensity sounds. It is important to mention that the main advantage of bilateral hearing is not the spatial localization of audio sources, but the extraction of less intensive signals in a loud environment.

11.7 User-friendliness as the Primary Goal


User-friendliness is the main property of a good user interface. In a multimedia environment in the office or at home, the following user-friendliness properties could be implemented:

11.7.1 Easy to Learn Instructions


Application instructions must be easy-to-learn.


11.7.2 Context-sensitive Help Functions


A context-sensitive help function using hypermedia techniques is very helpful, i.e., according to the state of the application, different help-texts are displayed.

11.7.3 Easy to Remember Instructions


A user-friendly interface must also have the property that the user easily remembers the application instruction rules. Easily remembered instructions might be supported by the intuitive association to what the user already knows.

11.7.4 Effective Instructions


The user interface should enable effective use of the application. This means:
Logically connected functions should be presented together and similarly.
Graphical symbols or short clips are more effective than textual input and output; they trigger faster recognition.
Different media should be able to be exchanged among different applications.
Actions should be activated quickly.
A configuration of a user interface should be usable by both professional and occasional users.

11.7.5 Aesthetics
With respect to aesthetics, the color combination, character sets, resolution and form of the window need to be considered. They determine a user's first and lasting impressions.

11.7.6 Entry elements


User interfaces use different ways to specify entries for the user:
Entries in a menu: in menus there are visible and non-visible entry elements. Entries which are relevant to the task should be made available for easy menu selection.
Entries on a graphical interface: if the interface includes text, the entries can be marked through color and/or a different font; if the interface includes images, the entries can be written over the image.


11.7.7 Presentation
The presentation, i.e., the optical image at the user interface, can have the following variants:
Full text
Abbreviated text
Icons, i.e., graphics
Micons, i.e., motion video

11.7.8 Dialogue Boxes


Different dialogue boxes should have similar constructions. This requirement applies to the design of: (1) The buttons OK and Abort; (2) Joined windows; and (3) Other applications in the same window system.

11.7.9 Additional Design Criteria


A few additional hints for designing a user-friendly interface are:
The form of the cursor can change to visualize the current state of the system. For example, an hourglass can be shown for a processing task in progress.
When time-intensive tasks are performed, the progress of the task should be presented.
The selected entry should be immediately highlighted as work in progress before the action actually starts.
The main emphasis has been on video and audio media because they represent live information. At the user interface, these media become important because they help users learn by enabling them to choose how to distribute research responsibilities among applications (e.g., on-line encyclopedias, tutors, simulations), to compose and integrate results and to share learned material with colleagues (e.g., video conferencing). Additionally, computer applications can effectively do less reasoning about the selection of a multimedia element (e.g., text, graphics, animation or sound), since alternative media can be selected by the user.
Exercise
Distinguish additive and subtractive colors and write their area of use.


Chapter 12 Multimedia Communication Systems


12.0 Aims and Objectives
In this lesson we discuss the important issues related to multimedia communication systems which are present above the data link layer. In this lesson application subsystems, and management and service issues for group collaboration and session orchestration, are presented. At the end of this lesson the learner will be able:
i. To understand the various layers of the communication subsystem
ii. To understand the transport subsystem and its features
iii. To understand the group communication architecture
iv. To understand the concepts behind conferencing
v. To enumerate the concepts of session management

12.1 Introduction
The consideration of multimedia applications supports the view that local systems expand toward distributed solutions. Applications such as kiosks, multimedia mail, collaborative work systems, virtual reality applications and others require high-speed networks with a high transfer rate and communication systems with adaptive, lightweight transmission protocols on top of the networks. From the communication perspective, we divide the higher layers of the Multimedia Communication System (MCS) into two architectural subsystems: an application subsystem and a transport subsystem.

12.2 Application Subsystem
12.2.1 Collaborative Computing


The current infrastructure of networked workstations and PCs, and the availability of audio and video at these end-points, makes it easier for people to cooperate and bridge space and time. In this way, network connectivity and end-point integration of multimedia provides users with a collaborative computing environment. Collaborative computing is generally known as Computer-Supported Cooperative Work (CSCW). There are many tools for collaborative

computing, such as electronic mail, bulletin boards (e.g., Usenet news), screen sharing tools (e.g., ShowMe from Sunsoft), text-based conferencing systems (e.g., Internet Relay Chat, CompuServe, America Online), telephone conference systems, conference rooms (e.g., VideoWindow from Bellcore), and video conference systems (e.g., MBone tools nv, vat). Further, there are many implemented CSCW systems that unify several tools, such as Rapport from AT&T, MERMAID from NEC and others.

12.2.2 Collaborative Dimensions


Electronic collaboration can be categorized according to three main parameters: time, user scale and control. Therefore, the collaboration space can be partitioned into a three-dimensional space.
Time
With respect to time, there are two modes of cooperative work: asynchronous and synchronous. Asynchronous cooperative work specifies processing activities that do not happen at the same time; synchronous cooperative work happens at the same time.
User Scale
The user scale parameter specifies whether a single user collaborates with another user or whether a group of more than two users collaborate together. Groups can be further classified as follows:
A group may be static or dynamic during its lifetime. A group is static if its participating members are pre-determined and membership does not change during the activity. A group is dynamic if the number of group members varies during the collaborative activity, i.e., group members can join or leave the activity at any time.
Group members may have different roles in the CSCW, e.g., a member of a group (if he or she is listed in the group definition), a participant of a group activity (if he or she successfully joins the conference), a conference initiator, a conference chairman, a token holder or an observer.
Groups may consist of members who have homogeneous or heterogeneous characteristics and requirements of their collaborative environment.
Control
Control during collaboration can be centralized or distributed. Centralized control means that there is a chairman (e.g., a main manager) who controls the collaborative work and every group member (e.g., user agent) reports to him or her. Distributed control means that every group


member has control over his/her own tasks in the collaborative work, and distributed control protocols are in place to provide consistent collaboration. Other partition parameters may include locality and collaboration awareness. Locality partitioning means that collaboration can occur either in the same place (e.g., a group meeting in an office or conference room) or among users located in different places through tele-collaboration. Group communication systems can be further categorized into computer-augmented collaboration systems, where collaboration is emphasized, and collaboration-augmented computing systems, where the concentration is on computing.

12.2.3 Group Communication Architecture


Group communication (GC) involves the communication of multiple users in a synchronous or an asynchronous mode with centralized or distributed control. A group communication architecture consists of a support model, a system model and an interface model. The GC support model includes group communication agents that communicate via a multi-point multicast communication network, as shown in the following Figure. Group communication agents may use the following for their collaboration:
Group Rendezvous
Group rendezvous denotes a method which allows one to organize meetings, and to get information about the group, ongoing meetings and other static and dynamic information.
Shared Applications
Application sharing denotes techniques which allow one to replicate information to multiple users simultaneously. The remote users may point to interesting aspects of the information (e.g., via tele-pointing) and modify it so that all users can immediately see the updated information (e.g., joint editing). Shared applications mostly belong to collaboration-transparent applications.
Conferencing
Conferencing is a simple form of collaborative computing. This service provides the management of multiple users for communicating with each other using multiple media. Conferencing applications belong to collaboration-aware applications.


Group communication support model
The GC system model is based on a client-server model. Clients provide user interfaces for smooth interaction between group members and the system. Servers supply functions for accomplishing the group communication work, and each server specializes in its own function.

Exercise List the collaborations used by group communication agents.

12.3 Application Sharing Approach


Sharing applications is recognized as a vital mechanism for supporting group communication activities. Sharing an application means that when the shared application program (e.g., an editor) executes any input from a participant, all execution results performed on the shared object (e.g., document text) are distributed among all the participants. Shared objects are generally displayed in shared windows. Application sharing is most often implemented in collaboration-transparent systems, but can also be developed through collaboration-aware, special-purpose

applications. An example of a software toolkit that assists in the development of shared computer applications is Bellcore's Rendezvous system (language and architecture). Shared applications may be used as conversational props in tele-conferencing situations for collaborative document editing and collaborative software development. An important issue in application sharing is shared control. The primary design decision in sharing applications is to determine whether they should be centralized or replicated:
Centralized Architecture
In a centralized architecture, a single copy of the shared application runs at one site. All participants' input to the application is then distributed to all sites. The advantage of the centralized approach is easy maintenance because there is only one copy of the application that updates the shared object. The disadvantage is high network traffic because the output of the application needs to be distributed every time.
Replicated Architecture
In a replicated architecture, a copy of the shared application runs locally at each site. Input events to each application are distributed to all sites and each copy of the shared application is executed locally at each site. The advantages of this architecture are low network traffic, because only input events are distributed among the sites, and low response times, since all participants get their output from local copies of the application. The disadvantages are the requirement of the same execution environment for the application at each site, and the difficulty in maintaining consistency.
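The following minimal sketch illustrates the replicated approach just described: only input events travel between sites, and each site applies them to its own copy of the shared object. The class and function names are hypothetical, and an in-process loop stands in for the real event-distribution network.

class SharedEditorReplica:
    """A local copy of a shared application in a replicated architecture.

    Only input events are distributed; each site applies them to its own
    copy, so every replica must process events in the same order to stay
    consistent.
    """
    def __init__(self, site_id):
        self.site_id = site_id
        self.document = []          # the shared object (a list of lines)

    def apply(self, event):
        op, line = event
        if op == "append":
            self.document.append(line)
        elif op == "delete" and self.document:
            self.document.pop()

def broadcast(event, replicas):
    """Stand-in for the event distribution channel between sites."""
    for replica in replicas:
        replica.apply(event)

# Two sites sharing the same editor; input at one site is sent to all.
sites = [SharedEditorReplica("A"), SharedEditorReplica("B")]
broadcast(("append", "Section 1: Product overview"), sites)
broadcast(("append", "Section 2: Design notes"), sites)
assert sites[0].document == sites[1].document   # replicas stay consistent

Because every replica applies the same events in the same order, the copies stay consistent; this ordering requirement is exactly what makes consistency maintenance difficult once events can arrive in different orders at different sites.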

12.4 Conferencing
Conferencing supports collaborative computing and is also called synchronous tele-collaboration. Conferencing is a management service that controls the communication among multiple users via multiple media, such as video and audio, to achieve simultaneous face-to-face communication. More precisely, video and audio have the following purposes in a tele-conferencing system: video is used in technical discussions to display view-graphs and to indicate how many users are still physically present at a conference. For visual support, workstations, PCs or video walls can be used. For conferences with more than three or four participants, the screen resources on a PC or workstation run out quickly, particularly if other applications, such as shared editors or


drawing spaces, are used. Hence, mechanisms which quickly resize individual images should be used.
Conferencing services control a conference, i.e., a collection of shared state information such as who is participating in the conference, the conference name, the start of the conference, policies associated with the conference, etc. Conference control includes several functions:
Establishing a conference, where the conference participants agree upon a common state, such as the identity of a chairman (moderator), access rights (floor control) and audio encoding. Conference systems may perform registration, admission and negotiation services during the conference establishment phase, but they must be flexible and allow participants to join and leave individual media sessions or the whole conference. The flexibility depends on the control model.
Closing a conference.
Adding new users and removing users who leave the conference.
Conference state can be stored either on a central machine (centralized control), where a central application acts as the repository for all information related to the conference, or in a distributed fashion.
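As an illustration of conference control as shared-state manipulation, here is a minimal sketch of a centrally stored conference state with establish, join, leave and close operations. The class and its policies are hypothetical simplifications, not a standard conferencing protocol.

class Conference:
    """Centralized conference control: one repository holds all shared state."""
    def __init__(self, name, chairman):
        self.name = name
        self.chairman = chairman
        self.participants = {chairman}   # shared state: who is participating
        self.open = True

    def join(self, user):
        if not self.open:
            raise RuntimeError("conference already closed")
        self.participants.add(user)      # admission/registration checks could go here

    def leave(self, user):
        self.participants.discard(user)

    def close(self):
        self.open = False
        self.participants.clear()

conf = Conference("design-review", chairman="alice")
conf.join("bob")
conf.join("carol")
conf.leave("bob")
print(conf.participants)   # {'alice', 'carol'}
conf.close()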

12.5 Session Management


Session management is an important part of the multimedia communication architecture. It is the core part which separates the control, needed during the transport, from the actual transport. Session management is extensively studied in the collaborative computing area; therefore we concentrate on architectural and management issues in this area.

12.5.1 Architecture
A session management architecture is built around an entity, the session manager, which separates the control from the transport. By creating a reusable session manager which is separated from the user interface, conference-oriented tools avoid a duplication of effort. The session control architecture consists of the following components:
Session Manager
The session manager includes local and remote functionalities. Local functionalities may include: (1) Membership control management, such as participant authentication or presentation of coordinated user interfaces; (2) Control management for shared workspace, such as floor control

(3) Media control management, such as inter-communication among media agents or synchronization; (4) Configuration management, such as an exchange of interrelated QoS parameters or selection of appropriate services according to QoS; and (5) Conference control management, such as the establishment, modification and closing of a conference.
Media Agents
Media agents are separate from the session manager and are responsible for decisions specific to each type of media. This modularity allows replacement of agents. Each agent performs its own control mechanism over the particular medium, such as mute, unmute, change video quality, start sending, stop sending, etc.
Shared Workspace Agent
The shared workspace agent transmits shared objects (e.g., telepointer coordinates, graphical or textual objects) among the shared applications.

12.5.2 Session Control


Each session is described through the session state. This state information is either private or shared among all session participants. Depending on the functions which an application requires and a session control provides, several control mechanisms are embedded in session management:
Floor control: In a shared workspace, floor control is used to provide access to the shared workspace. Floor control in shared applications is often used to maintain data consistency (a sketch of a simple token-based mechanism follows this list).
Conference control: This control is used in conferencing applications.
Media control: This control mainly includes functionality such as the synchronization of media streams.
Configuration control: Configuration control includes control of media quality, QoS handling, resource availability and other system components to provide a session according to the user's requirements.
Membership control: This may include services such as invitation to a session, registration into a session, and modification of the membership during the session.
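The floor control mentioned above can be sketched as a simple token-passing mechanism: one participant holds the floor, others queue for it. This is a hypothetical simplification for illustration, not a standardized floor control protocol.

class FloorControl:
    """Simple token-based floor control for a shared workspace.

    Only the current floor holder may modify the shared object; other
    participants queue until the floor is released and passed on.
    """
    def __init__(self):
        self.holder = None
        self.waiting = []

    def request(self, user):
        """Return True if the user now holds the floor, False if queued."""
        if self.holder is None:
            self.holder = user
        elif user != self.holder and user not in self.waiting:
            self.waiting.append(user)
        return self.holder == user

    def release(self, user):
        """Release the floor and hand it to the next waiting participant."""
        if user != self.holder:
            return
        self.holder = self.waiting.pop(0) if self.waiting else None

floor = FloorControl()
assert floor.request("alice") is True    # alice gets the floor
assert floor.request("bob") is False     # bob has to wait
floor.release("alice")
assert floor.holder == "bob"             # floor passed to the next requester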


CHAPTER 13: Quality of Service and Resource Management


13.0 Aims and Objectives
In this chapter we will learn how to measure and improve the quality of service in multimedia transmission. At the end of this lesson, the learner will have a clear idea of the parameters used in measuring the quality of service and of the various measures that are taken to ensure quality in the multimedia content and its transmission.

13.1 Introduction
Every product is expected to have a certain quality apart from satisfying its requirements. This quality is measured by various parameters. Parameterization of services is defined in ISO (International Organization for Standardization) standards through the notion of Quality of Service (QoS). The ISO standard defines QoS as a concept for specifying how good the offered networking services are. QoS can be characterized by a number of specific parameters. There are several important issues which need to be considered with respect to QoS:

13.2 Quality of Service and Process Management


The user/application requirements on the Multimedia Communication System (MCS) are mapped into communication services which make the effort to satisfy the requirements. Because of the heterogeneity of the requirements, coming from different distributed multimedia applications, the services in multimedia systems need to be parameterized. Parameterization allows for flexibility and customization of the services, so that each application does not result in the implementation of a new set of service providers.
QoS Layering
Traditional QoS (ISO standards) was provided by the network layer of the communication system. An enhancement of QoS was achieved by introducing QoS into the transport services. For MCS, the QoS notion must be extended because many other services contribute to the end-to-end service quality. To discuss QoS and resource management further, we need a layered model of the MCS with respect to QoS; we refer throughout this lesson to the model shown in the following Figure. The MCS consists of three layers: application, system (including communication services and

operating system services), and devices (network and Multimedia (MM) devices). Above the application may or may not reside a human user. This implies the introduction of QoS in the application (application QoS), in the system (system QoS) and in the network (network QoS). In the case of a human user, the MCS may also have a user QoS specification. At the device layer, we concentrate on the network device and its QoS because it is of primary interest in the MCS; the MM devices find their representation (partially) in the application QoS.
QoS Description
The set of parameters chosen for a particular service determines what will be measured as the QoS. Most current QoS parameters differ from the parameters described in ISO standards because of the variety of applications, media sent and the quality of the networks and end-systems. This also leads to many different QoS parameterizations in the literature. We give here one possible set of QoS parameters for each layer of the MCS. The application QoS parameters describe requirements on the communication services and OS services resulting from the application QoS.

QoS layered model for Multimedia Communication System

They may be specified in terms of both quantitative and qualitative criteria. Quantitative criteria are those which can be evaluated in terms of certain measures, such as bits per second, number of errors, task processing time, PDU size, etc. The QoS parameters include throughput, delay, response time, rate, data corruption at the system level and task and buffer specification.


13.3 Translation
It is widely accepted that different MCS components require different QoS parameters; for example, the mean loss rate, known from packet networks, has no meaning as a QoS parameter for a video capture device. Likewise, frame quality is of little use to a link layer service provider, whereas the frame quality in terms of the number of pixels in both axes is a QoS value used to initialize frame capture buffers. We always distinguish between user, application, system and network, with different QoS parameters at each level. However, in future systems, there may be even more layers or there may be a hierarchy of layers, where some QoS values are inherited and others are specific to certain components. In any case, it must always be possible to derive all QoS values from the user and application QoS values. This derivation, known as translation, may require additional knowledge stored together with the specific component. Hence, translation is an additional service for layer-to-layer communication during the call establishment phase. The split of parameters requires translation functions as follows:
Human Interface - Application QoS
The service which may implement the translation between human user and application QoS parameters is called a tuning service. A tuning service provides a user with a Graphical User Interface (GUI) for input of application QoS, as well as output of the negotiated application QoS. The translation is represented through video and audio clips (in the case of audio-visual media), which will run at the negotiated quality corresponding to, for example, the video frame resolution that the end-system and the network can support.
Application QoS - System QoS
Here, the translation must map the application requirements into the system QoS parameters, which may lead to translations such as from a high-quality synchronization user requirement to a small (milliseconds) synchronization skew QoS parameter, or from video frame size to transport packet size. It may also be connected with possible segmentation/reassembly functions.
System QoS - Network QoS
This translation maps the system QoS (e.g., transport packet end-to-end delay) into the underlying network QoS parameters (e.g., in ATM, the end-to-end delay of cells) and vice versa.
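A tiny example of an application QoS to system/network QoS translation is sketched below for uncompressed video: frame geometry and frame rate (application parameters) are mapped to required throughput and packet rate (system/network parameters). The formula ignores compression, protocol headers and jitter, and all numbers are illustrative assumptions.

def translate_video_qos(width, height, bits_per_pixel, frame_rate,
                        packet_size_bytes):
    """Map application-level video QoS parameters onto system/network-level
    parameters (a simplified translation that ignores compression and headers).
    """
    frame_size_bits = width * height * bits_per_pixel
    throughput_bps = frame_size_bits * frame_rate             # required bandwidth
    packets_per_frame = -(-frame_size_bits // (packet_size_bytes * 8))  # ceiling division
    packet_rate = packets_per_frame * frame_rate
    return {
        "throughput_bps": throughput_bps,
        "packets_per_frame": packets_per_frame,
        "packet_rate_pps": packet_rate,
    }

# Uncompressed CIF video (352x288, 16 bits/pixel) at 25 frames/s over a
# network with 1500-byte packets; purely illustrative numbers.
print(translate_video_qos(352, 288, 16, 25, 1500))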


13.4 Managing Resources during Multimedia Transmission


QoS guarantees must be met in the application, system and network to gain the acceptance of the users of the MCS. There are several constraints which must be satisfied to provide guarantees during multimedia transmission: (1) time constraints, which include delays; (2) space constraints, such as system buffers; (3) device constraints, such as frame grabber allocation; (4) frequency constraints, which include network bandwidth and system bandwidth for data transmission; and (5) reliability constraints. These constraints can be satisfied only if proper resource management is available at the end-points, as well as in the network.
Rate Control
If we assume an MCS to be a tightly coupled system, which has a central process managing all system components, then this central instance can impose synchronous data handling over all resources; in effect we encounter a fixed, imposed data rate. However, an MCS usually comprises loosely coupled end-systems which communicate over networks. In such a setup, rates must be imposed. Here, we make use of all available strategies in the communications environment. A rate-based service discipline is one that provides a client with a minimum service rate independent of the traffic characteristics of other clients. Such a discipline, operating at a switch, manages the following resources: bandwidth, service time (priority) and buffer space. Several rate-based scheduling disciplines have been developed.
Fair Queuing
If N channels share an output trunk, then each one should get 1/Nth of the bandwidth. If any channel uses less bandwidth than its share, then this portion is shared among the rest equally. This behaviour can be achieved by Bit-by-bit Round Robin (BR) service among the channels. The BR discipline serves the queues in round robin order, sending one bit from each queue that has a packet in it. Clearly, this scheme is not efficient; hence, fair queuing emulates BR as follows: each packet is given a finish number, which is the round number at which the packet would have received service if the server had been doing BR. The packets are served in the order of their finish numbers. Channels can be given different fractions of the bandwidth by assigning them weights, where a weight corresponds to the number of bits of service the channel receives per round of BR service.


Virtual Clock

This discipline emulates Time Division Multiplexing (TDM). A virtual transmission time is allocated to each packet; it is the time at which the packet would have been transmitted if the server were actually doing TDM.
Delay Earliest-Due-Date (Delay EDD)
Delay EDD is an extension of EDF (Earliest Deadline First) scheduling, where the server negotiates a service contract with each source. The contract states that if a source obeys a peak and average sending rate, then the server provides a bounded delay. The key then lies in the assignment of deadlines to packets. The server sets a packet's deadline to the time at which it should be sent had it been received according to the contract. This is the expected arrival time added to the delay bound at the server. By reserving bandwidth at the peak rate, Delay EDD can assure each channel a guaranteed delay bound.
Jitter Earliest-Due-Date (Jitter EDD)
Jitter EDD extends Delay EDD to provide delay-jitter bounds. After a packet has been served at each server, it is stamped with the difference between its deadline and its actual finishing time. A regulator at the entrance of the next switch holds the packet for this period before it is made eligible to be scheduled. This provides minimum and maximum delay guarantees.
Stop-and-Go
This discipline preserves the smoothness property of the traffic as it traverses the network. The main idea is to treat all traffic as frames of length T bits, meaning the time is divided into frames. At each frame time, only packets that have arrived at the server in the previous frame time are sent. It can be shown that the delay and delay-jitter are bounded, although the jitter bound does not come for free. The reason is that under Stop-and-Go rules, packets arriving at the start of an incoming frame must be held for a full frame time T before being forwarded. So, all the packets that would otherwise arrive quickly are instead delayed. Further, since the delay and delay-jitter bounds are linked to the length of the frame time, improvement of Stop-and-Go can be achieved by using multiple frame sizes, which means it may operate with various frame sizes.
Hierarchical Round Robin (HRR)
An HRR server has several service levels where each level provides round robin service to a fixed number of slots. Some number of slots at a selected level are allocated to a channel and the

server cycles through the slots at each level. The time a server takes to service all the slots at a level is called the frame time at that level. The key idea of HRR is that it gives each level a constant share of the bandwidth. Higher levels get more bandwidth than lower levels, so the frame time at a higher level is smaller than the frame time at a lower level. Since a server always completes one round through its slots once every frame time, it can provide a maximum delay bound to the channels allocated to that level.
Exercise
Enumerate a few rate-based scheduling disciplines.
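The finish-number idea behind fair queuing can be sketched as follows. This is a simplified illustration, not a full weighted fair queuing implementation (it tracks only the last finish number per channel rather than the system's round number), and the channel names are hypothetical.

import heapq

class WeightedFairQueue:
    """Simplified weighted fair queuing based on per-packet finish numbers.

    Each arriving packet gets a finish number approximating the round at
    which bit-by-bit round robin would finish sending it; packets are then
    served in increasing finish-number order.  Weights give channels
    different shares of the bandwidth.
    """
    def __init__(self):
        self.last_finish = {}      # last finish number per channel
        self.queue = []            # heap of (finish, seq, channel, size)
        self.seq = 0               # tie-breaker to keep ordering stable

    def enqueue(self, channel, size_bits, weight=1.0):
        start = self.last_finish.get(channel, 0.0)
        finish = start + size_bits / weight     # heavier weight => smaller increment
        self.last_finish[channel] = finish
        heapq.heappush(self.queue, (finish, self.seq, channel, size_bits))
        self.seq += 1

    def dequeue(self):
        finish, _, channel, size = heapq.heappop(self.queue)
        return channel, size

wfq = WeightedFairQueue()
wfq.enqueue("video", 8000, weight=2.0)   # video channel gets twice the share
wfq.enqueue("data", 8000, weight=1.0)
wfq.enqueue("video", 8000, weight=2.0)
order = [wfq.dequeue()[0] for _ in range(3)]
print(order)   # ['video', 'data', 'video']: video is served more often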

13.5 Architectural Issues


Networked multimedia systems work in connection-oriented mode, although the Internet is an example of a connectionless network where QoS is introduced on a packet basis (every IP packet carries type-of-service parameters because the Internet does not have a service notion). An MCS based on the Internet protocol stack uses RSVP, the resource reservation protocol, which accompanies the IP protocol and provides a kind of connection along the path where resources are allocated. QoS description, distribution and provision, and the connected resource admission, reservation, allocation and provision, must be embedded in the different components of the multimedia communication architecture. This means that proper services and protocols in the end-points and the underlying network architectures must be provided. In particular, the system domain needs to have QoS and resource management. Several important issues, as described in detail in previous sections, must be considered in the end-point architectures: (1) QoS specification, negotiation and provision; (2) resource admission and reservation for end-to-end QoS; and (3) QoS-configurable transport systems. Some examples of architectural choices where QoS and resource management are designed and implemented include the following:
1. The OSI architecture provides QoS in the network layer and some enhancements in the transport layer. The OSI 95 project considers integrated QoS specification and negotiation in the transport protocols.
2. Lancaster's QoS-Architecture (QoS-A) offers a framework to specify and implement the required performance properties of multimedia applications over high-performance ATM-based

networks. QoS-A incorporates the notions of flow, service contract and flow management. The Multimedia Enhanced Transport Service (METS) provides the functionality to contract QoS.
3. The Heidelberg Transport System (HeiTS), based on the ST-II network protocol, provides continuous-media exchange with QoS guarantees, an upcall structure, resource management and real-time mechanisms. HeiTS transfers continuous media data streams from one origin to one or multiple targets via multicast. HeiTS nodes negotiate QoS values by exchanging flow specifications to determine the required resources: delay, jitter, throughput and reliability.
4. UC Berkeley's Tenet Protocol Suite, with the protocol set RCAP, RTIP, RMTP and CMTP, provides network QoS negotiation, reservation and resource administration through the RCAP control and management protocol.
5. The Internet protocol stack, based on the IP protocol, provides resource reservation if the RSVP control protocol is used.
6. QoS handling and management is provided in UPenn's end-point architecture (OMEGA Architecture) at the application and transport subsystems, where the QoS Broker, as the end-to-end control and management protocol, implements QoS handling over both subsystems and relies on control and management in ATM networks.
7. The Native-Mode ATM Protocol Stack, developed in the IDLInet (IIT Delhi Low-cost Integrated Network) testbed at the Indian Institute of Technology, provides network QoS guarantees.


CHAPTER 14 Synchronisation
14.0 Aims and Objectives
This lesson aims at introducing the concept of synchronisation. At the end of this lesson the learner will be able to:
i. Know the meaning of synchronisation
ii. Understand synchronisation in audio and video
iii. Understand the implementation of a reference model for synchronisation

14.1 Introduction
Advanced multimedia systems are characterized by the integrated computer-controlled generation, storage, communication, manipulation and presentation of independent time-dependent and time-independent media. The key issue which provides integration is the digital representation of any data and the synchronization of and between the various kinds of media and data.
The word synchronization refers to time. Synchronization in multimedia systems refers to the temporal relations between media objects in the multimedia system. In a more general and widely used sense, some authors use synchronization in multimedia systems as comprising content, spatial and temporal relations between media objects. We differentiate between time-dependent and time-independent media objects. If the presentation durations of all units of a time-dependent media object are equal, it is called a continuous media object; a video, for example, consists of a number of ordered frames, and each of these frames has a fixed presentation duration. A time-independent media object is any kind of traditional medium like text and images; the semantics of its content do not depend upon a presentation along the time domain.
Synchronization between media objects comprises relations between time-dependent media objects and time-independent media objects. A daily example of synchronization between continuous media is the synchronization between the visual and acoustic information in television. In a multimedia system, similar synchronization must be provided for audio and moving pictures.


Synchronization is addressed and supported by many system components, including the operating system, communication systems, databases and documents, and often even by applications. Hence, synchronization must be considered at several levels in a multimedia system.

14.2 Notion of Synchronization


Several definitions of the terms multimedia application and multimedia system are described in the literature. Three criteria for the classification of a system as a multimedia system can be distinguished: the number of media, the types of supported media and the degree of media integration. The simplest criterion is the number of media used in an application; using only this criterion, even a document processing application that supports text and graphics can be regarded as a multimedia system. The following figure classifies applications according to the three criteria. The arrows indicate the increasing degree of multimedia capability for each criterion.


Classification of media use in multimedia systems
Integrated digital systems can support all types of media and, due to digital processing, may provide a high degree of media integration. Systems that handle time-dependent analog media objects and time-independent digital media objects are called hybrid systems. The disadvantage of hybrid systems is that they are restricted with regard to the integration of time-dependent and time-independent media because, for example, audio and video are stored on different devices than time-independent media objects, and multimedia workstations must comprise both types of devices.

14.3 Basic Synchronization Issues


Integrated media processing is an important characteristic of a multimedia system. The main reasons for these integration demands are the inherent dependencies between the information coded in the media objects. These dependencies must be reflected in the integrated processing

including storage, manipulation, communication, capturing and, in particular, the presentation of the media objects.
Content Relations
Content relations define a dependency of media objects on some data.
Spatial Relations
The spatial relations, usually known as layout relationships, define the space used for the presentation of a media object on an output device at a certain point of time in a multimedia presentation. If the output device is two-dimensional (e.g., monitor or paper), the layout specifies the two-dimensional area to be used. In desktop-publishing applications, this is usually expressed using layout frames. A layout frame is placed and content is assigned to this frame. The position of a layout frame in a document may be fixed to a position in the document, to a position on a page, or it may be relative to the positioning of other frames.
Temporal Relations
Temporal relations define the temporal dependencies between media objects. They are of interest whenever time-dependent media objects exist. An example of temporal relations is the relation between a video and an audio object that are recorded during a concert. If these objects are presented, the temporal relation during the presentation of the two media objects must correspond to the temporal relation at the recording moment.

14.4 Intra and Inter Object Synchronization


We distinguish between time relations within the units of one time-dependent media object itself and time relations between media objects. This separation helps to clarify the mechanisms supporting both types of relations, which are often very different.
Intra-object synchronization: Intra-object synchronization refers to the time relations between the various presentation units of one time-dependent media object. An example is the time relation between the single frames of a video sequence. For a video with a rate of 25 frames per second, each of the frames must be displayed for 40 ms. The following Figure shows this for a video sequence presenting a bouncing ball.
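The intra-object timing described above can also be expressed as a presentation deadline per frame; a minimal sketch for a 25 frames-per-second sequence such as the bouncing ball:

def frame_presentation_times(frame_count, frame_rate=25, start_ms=0):
    """Presentation deadline (in ms) of each frame of a continuous media
    object: intra-object synchronization places an LDU every 1/frame_rate s.
    """
    period_ms = 1000 / frame_rate            # 40 ms for 25 frames/s
    return [start_ms + i * period_ms for i in range(frame_count)]

# The first five frames of a 25 frames/s sequence:
print(frame_presentation_times(5))    # [0.0, 40.0, 80.0, 120.0, 160.0]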


Inter-object synchronization: Inter-object synchronization refers to the synchronization between media objects. The following figure shows an example of the time relations of a multimedia synchronization that starts with an audio/video sequence, followed by several pictures and an animation that is commented by an audio sequence.

Live and Synthetic Synchronization
The distinction between live and synthetic synchronization refers to the way the temporal relations are determined. In the case of live synchronization, the goal of the synchronization is to reproduce at the presentation exactly the temporal relations as they existed during the capturing process. In the case of synthetic synchronization, the temporal relations are specified artificially.
Live Synchronization
A typical application of live synchronization is conversational services. In the scope of a source/sink scenario, volatile data streams (i.e., data captured from the environment) are created at the source and presented at the sink. The common context of several streams on the source site must be preserved at the sink. The source may be comprised of acoustic and optical sensors, as well as media conversion units. The connection offers a data path between source and sink. The sink presents the units to the user. The source and sink may be located at different sites.


Synthetic Synchronization
The emphasis of synthetic synchronization is to support flexible synchronization relations between media. In synthetic synchronization, two phases can be distinguished: in the specification phase, temporal relations between the media objects are defined; in the presentation phase, a run-time system presents data in a synchronized mode.

The following example shows aspects of live synchronization: two persons located at different sites of a company discuss a new product. For this purpose, they use a video conference application for person-to-person discussion. In addition, they share a blackboard where they can display parts of the product, and they can point with their mouse pointers to details of these parts and discuss issues such as: "This part is designed to ...".
In the case of synthetic synchronization, temporal relations are assigned to media objects that were created independently of each other. Synthetic synchronization is often used in presentation and retrieval-based systems with stored data objects that are arranged to provide new combined multimedia objects. A media object may be part of several multimedia objects.

14.5 Lip synchronization Requirements


Lip synchronization refers to the temporal relationship between an audio and a video stream for the particular case of human speech. The time difference between related audio and video LDUs is known as the skew. Streams which are perfectly in sync have no skew, i.e., 0 ms. Experiments at the IBM European Networking Center measured skews that were perceived as out of sync. In these experiments, users often mentioned that something was wrong with the

synchronization, but this did not disturb their feeling for the quality of the presentation. Therefore, the experimenters additionally evaluated the tolerance of the users by asking whether the data being out of sync affected the quality of the presentation. Steps of 40 ms were chosen because of:
1. The difficulty of human perception in distinguishing lip synchronization skews with a higher resolution.
2. The capability of multimedia software and hardware devices to refresh motion video data every 33 ms/40 ms.
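A small sketch of how such a skew measurement might be classified is given below. The skew is simply the difference between the timestamps of related audio and video LDUs; the 80 ms / 160 ms thresholds are illustrative defaults in the spirit of commonly reported tolerance bands, not values taken from this text.

def lip_sync_status(video_time_ms, audio_time_ms,
                    in_sync_ms=80, out_of_sync_ms=160):
    """Classify the audio/video skew for lip synchronization.

    A positive skew means the audio lags the video.  The default
    thresholds are illustrative assumptions; adjust them to the
    tolerance required by the application.
    """
    skew = audio_time_ms - video_time_ms
    if abs(skew) <= in_sync_ms:
        return skew, "in sync"
    if abs(skew) <= out_of_sync_ms:
        return skew, "transient: may be noticed by some viewers"
    return skew, "out of sync"

print(lip_sync_status(1000, 1040))   # (40, 'in sync')
print(lip_sync_status(1000, 1250))   # (250, 'out of sync')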

14.6 Pointer synchronization Requirements


In a Computer-Supported Co-operative Work (CSCW) environment, cameras and microphones are usually attached to the users' workstations. In the next experiment, the experimenters looked at a business report that contained some data with accompanying graphics. All participants had a window with these graphics on their desktop, where a shared pointer was used in the discussion. Using this pointer, speakers pointed out individual elements of the graphics which were relevant to the discussion taking place. This obviously requires synchronization of the audio and the remote telepointer.

14.7 Reference Model for Multimedia Synchronization


A reference model is needed to understand the various requirements for multimedia synchronization, to identify and structure run-time mechanisms that support the execution of the synchronization, to identify interfaces between run-time mechanisms, and to compare system solutions for multimedia synchronization systems. First, the existing classification and structuring methods are described; then, a four-layer reference model is presented and used for the classification of multimedia synchronization systems in our case studies. As many multimedia synchronization mechanisms operate in a networked environment, we also discuss special synchronization issues in a distributed environment and their relation to the reference model.

14.7.1 The Synchronization Reference Model


A four-layer synchronization reference model is shown in the following Figure. Each layer implements synchronization mechanisms which are provided by an appropriate interface. These

interfaces can be used to specify and/or enforce the temporal relationships. Each interface defines services, i.e., it offers the user a means to define his/her requirements. Each interface can be used by an application directly, or by the next higher layer to implement an interface. Higher layers offer higher programming and Quality of Service (QoS) abstractions. For each layer, typical objects and operations on these objects are described in the following. The semantics of the objects and operations are the main criteria for assigning them to one of the layers.

Media Layer: At the media layer, an application operates on a single continuous media stream, which is treated as a sequence of LDUs.
Stream Layer: The stream layer operates on continuous media streams, as well as on groups of media streams. In a group, all streams are presented in parallel by using mechanisms for inter-stream synchronization. The abstraction offered by the stream layer is the notion of streams with timing parameters concerning the QoS for intra-stream synchronization within a stream and inter-stream synchronization between streams of a group. Continuous media is seen in the stream layer as a data flow with implicit time constraints; individual LDUs are not visible. The streams are executed in a Real-Time Environment (RTE), where all processing is constrained by well-defined time specifications.
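To illustrate the media-layer abstraction described above, here is a minimal sketch of an application-level loop that reads single LDUs of one continuous stream and hands them to an output device. The source and presenter classes are toy stand-ins for real device interfaces, and the fixed sleep is only a crude form of intra-stream timing.

import time

class ListSource:
    """Toy stream source: yields pre-recorded LDUs (here, frame labels)."""
    def __init__(self, ldus):
        self._ldus = list(ldus)
    def read_ldu(self):
        return self._ldus.pop(0) if self._ldus else None

class PrintPresenter:
    """Toy output device: 'presents' an LDU by printing it."""
    def show(self, ldu):
        print("presenting", ldu)

def play_stream(source, presenter, ldu_duration_ms=40):
    """Media-layer style loop: the application itself handles single LDUs
    of one continuous media stream and passes them to the output device."""
    while True:
        ldu = source.read_ldu()
        if ldu is None:            # end of stream
            break
        presenter.show(ldu)
        time.sleep(ldu_duration_ms / 1000)   # crude intra-stream timing

play_stream(ListSource(["frame-1", "frame-2", "frame-3"]), PrintPresenter())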


Object Layer: The object layer operates on all types of media and hides the differences between discrete and continuous media. The abstraction offered to the application is that of a complete, synchronized presentation. This layer takes a synchronization specification as input and is responsible for the correct schedule of the overall presentation. The task of this layer is to close the gap between the needs for the execution of a synchronized presentation and the stream-oriented services. The functions located at the object layer are to compute and execute complete presentation schedules that include the presentation of the non-continuous media objects and the calls to the stream layer.
Specification Layer: The specification layer is an open layer. It does not offer an explicit interface. This layer contains applications and tools that are allowed to create synchronization specifications. Such tools are synchronization editors, multimedia document editors and authoring systems. Also located at the specification layer are tools for converting specifications to an object layer format. The specification layer is also responsible for mapping QoS requirements of the user level to the qualities offered at the object layer interface. Synchronization specification methods can be classified into the following main categories:
Interval-based specifications, which allow the specification of temporal relations between the time intervals of the presentations of media objects (a sketch is given below).
Axes-based specifications, which relate presentation events to axes that are shared by the objects of the presentation.
Control flow-based specifications, in which the flow of the presentations is synchronized at given synchronization points.
Event-based specifications, in which events in the presentation of media trigger presentation actions.
Exercise 1
Write down the four layers present in the synchronisation reference model.
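As a small illustration of the interval-based category, the sketch below assigns each media object a presentation interval and derives start times from two common interval relations ("meets" and "starts with"); the object model is a hypothetical simplification, not a standardized specification language.

class MediaObject:
    """A presentation interval: a named media object with a start time and a
    duration (both in seconds), as used by interval-based specifications."""
    def __init__(self, name, duration):
        self.name = name
        self.duration = duration
        self.start = None

    @property
    def end(self):
        return None if self.start is None else self.start + self.duration

def meets(a, b):
    """Interval relation 'a meets b': b starts exactly when a ends."""
    b.start = a.end

def starts_with(a, b):
    """Interval relation 'a starts b': both begin at the same instant."""
    b.start = a.start

# A presentation that starts with an audio/video clip, followed by a
# picture, accompanied by a commenting audio track.
video = MediaObject("video clip", 30)
video.start = 0
picture = MediaObject("picture", 10)
comment = MediaObject("audio comment", 10)
meets(video, picture)          # the picture follows the video
starts_with(picture, comment)  # the comment starts with the picture

for obj in (video, picture, comment):
    print(f"{obj.name}: {obj.start}s - {obj.end}s")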

14.8 Synchronization Specification


The synchronization specification of a multimedia object describes all temporal dependencies of the included objects in the multimedia object. It is produced using tools at the specification

layer and is used at the interface to the object layer. Because the synchronization specification determines the whole presentation, it is a central issue in multimedia systems. In the following, requirements for synchronization specifications are listed and specification methods are described and evaluated. A synchronization specification should comprise:
Intra-object synchronization specifications for the media objects of the presentation.
QoS descriptions for intra-object synchronization.
Inter-object synchronization specifications for the media objects of the presentation.
QoS descriptions for inter-object synchronization.

The synchronization specification is part of the description of a multimedia object.


Chapter 15 Multimedia Networking System

15.0 Aims and Objectives


We will examine in the remainder of this chapter different networks with respect to their multimedia transmission capabilities. At the end of this lesson the learner will be able to:
i. Understand the networking concepts
ii. Identify the features available in FDDI

15.1 Introduction
A multimedia networking system allows for the exchange of discrete and continuous media data among computers. This communication requires proper services and protocols for data transmission. Multimedia networking enables the distribution of media to different workstations.

15.2 Layers, Protocols and Services


A service provides a set of operations to the requesting application. Logically related services are grouped into layers according to the OSI reference model. Therefore, each layer is a service provider to the layer lying above it. The services describe the behavior of the layer and its service elements (Service Data Units = SDUs). A proper service specification contains no information concerning any aspects of the implementation. A protocol consists of a set of rules which must be followed by peer layer instances during any communication between these two peers. It comprises the format (syntax) and the meaning (semantics) of the exchanged data units (Protocol Data Units = PDUs). The peer instances of different computers cooperate to provide a service.
Multimedia communication puts several requirements on services and protocols which are independent of the layer in the network architecture. In general, this set of requirements depends to a large extent on the respective application. However, without defining precise values for individual parameters, the following requirements must be taken into account:


Audio and video data processing needs to be bounded by deadlines or even defined by a time interval. The data transmission, both between applications and the transport layer interfaces of the involved components, must comply with the demands concerning the time domain.
End-to-end jitter must be bounded. This is especially important for interactive applications such as the telephone. Large jitter values would mean large buffers and higher end-to-end delays.
All guarantees necessary for achieving the data transfer within the required time span must be met. This includes the required processor performance, as well as the data transfer over a bus and the available storage for protocol processing.
Cooperative work scenarios using multimedia conference systems are the main application areas of multimedia communication systems. These systems should support multicast connections to save resources. The sender instance may often change during a single session. Further, a user should be able to join or leave a multicast group without having to request a new connection setup, which would need to be handled by all other members of this group.
The services should provide mechanisms for synchronizing different data streams, or alternatively perform the synchronization using available primitives implemented in another system component.
The multimedia communication must be compatible with the most widely used communication protocols and must make use of existing, as well as future, networks. Communication compatibility means that different protocols at least coexist and run on the same machine simultaneously. The relevance of envisaged protocols can only be achieved if the same protocols are widely used. Many of the current multimedia communication systems are, unfortunately, proprietary experimental systems.
The communication of discrete data should not starve because of preferred or guaranteed video/audio transmission. Discrete data must be transmitted without any penalty. The fairness principle among different applications, users and workstations must be enforced.
The actual audio/video data rate varies strongly. This leads to fluctuations of the data rate, which need to be handled by the services.


15.2.1 Physical Layer


The physical layer defines the transmission method of individual bits over the physical medium, such as fiber optics. For example, the type of modulation and bit synchronization are important issues. With respect to the particular modulation, delays during the data transmission arise due to the propagation speed of the transmission medium and the electrical circuits used. They determine the maximal possible bandwidth of this communication channel. For audio/video data in general, the delays must be minimized and a relatively high bandwidth should be achieved.

15.2.2 Data Link Layer

The data link layer provides the transmission of information blocks known as data frames. Further, this layer is responsible for access protocols to the physical medium, error recognition and correction, flow control and block synchronization. Access protocols are very much dependent on the network. Networks can be divided into two categories: those using point-to-point connections and those using broadcast channels, sometimes called multi-access channels or random access channels. In a broadcast network, the key issue is how to determine, in the case of competition, who gets access to the channel. To solve this problem, the Medium Access Control (MAC) sublayer was introduced and MAC protocols, such as the Timed Token Rotation Protocol and Carrier Sense Multiple Access with Collision Detection (CSMA/CD), were developed. Continuous data streams require reservation and throughput guarantees over a line. To avoid larger delays, the error control for multimedia transmission needs a different mechanism than retransmission, because a late frame is a lost frame.

15.2.3. Network Layer


The network layer transports information blocks, called packets, from one station to another. The transport may involve several networks. Therefore, this layer provides services such as addressing, internetworking, error handling, network management with congestion control and sequencing of packets. Again, continuous media require resource reservation and guarantees for transmission at this layer. A request for reservation for later resource guarantees is defined through Quality of Service (QoS) parameters, which correspond to the requirements for continuous data stream transmission. The reservation must be done along the path between the communicating stations.


15.2.4. Transport Layer


The transport layer provides a process-to-process connection. At this layer, the QoS, which is provided by the network layer, is enhanced, meaning that if the network service is poor, the transport layer has to bridge the gap between what the transport users want and what the network layer provides. Large packets are segmented at this layer and reassembled into their original size at the receiver. Error handling is based on process-to-process communication.

15.2.5 Session Layer


In the case of continuous media, multimedia sessions, which reside over one or more transport connections, must be established. This introduces a more complex view on connection reconstruction in the case of transport problems.

15.2.6 Presentation Layer

The presentation layer abstracts from different formats (the local syntax) and provides common formats (the transfer syntax). Therefore, this layer must provide services for transformation between the application-specific formats and the agreed-upon format. An example is the different representation of a number on Intel or Motorola processors. The multitude of audio and video formats also requires conversion between formats. This problem also comes up outside of the communication components during exchange between data carriers, such as CD-ROMs, which store continuous data. Thus, format conversion is often discussed in other contexts.

15.2.7 Application Layer


The application layer considers all application-specific services, such as the file transfer service embedded in the File Transfer Protocol (FTP) and the electronic mail service. With respect to audio and video, special services for the support of real-time access and transmission must be provided.

Exercise 1
List the different layers used in networking.

15.3 Multimedia on Networks


The main goal of distributed multimedia communication systems is to transmit all their media over the same network. Depending mainly on the distance between end-points (stations/computers), networks are divided into three categories: Local Area Networks (LANs), Metropolitan Area Networks (MANs), and Wide Area Networks (WANs).

Local Area Networks (LANs)
A LAN is characterized by (1) its extension over a few kilometers at most, (2) a total data rate of at least several Mbps, and (3) its complete ownership by a single organization. Further, the number of stations connected to a LAN is typically limited to 100. However, the interconnection of several LANs allows the number of connected stations to be increased. The basis of LAN communication is broadcasting using a broadcast channel (multi-access channel). Therefore, the MAC sublayer is of crucial importance in these networks.

High-speed Ethernet
Ethernet is the most widely used LAN. Currently available Ethernet offers a bandwidth of at least 10 Mbps, but new fast LAN technologies for Ethernet with bandwidths in the range of 100 Mbps are starting to come on the market. This bus-based network uses the CSMA/CD protocol for resolution of multiple access to the broadcast channel in the MAC sublayer: before data transmission begins, the network state is checked by the sender station. Each station may try to send its data only if, at that moment, no other station transmits data. Therefore, each station can simultaneously listen and send.

Dedicated Ethernet
Another possibility for the transmission of audio/video data is to dedicate a separate Ethernet LAN to the transmission of continuous data. This solution requires compliance with a proper additional protocol. Further, end-users need at least two separate networks for their communications: one for continuous data and another for discrete data. This approach makes sense for experimental systems, but means additional expense in the end-systems and cabling.

Hub
A very pragmatic solution can be achieved by exploiting an installed network configuration. Most Ethernet cables are not installed in the form of a bus system. They make up a star (i.e., cables radiate from a central room to each station). In this central room, each cable is attached to its own Ethernet interface.


Instead of configuring a bus, each station is connected via its own Ethernet to a hub. Hence, each station has the full Ethernet bandwidth available, and a new network for multimedia transmission is not necessary.

Fast Ethernet
Fast Ethernet, known as 100Base-T, offers throughput of up to 100 Mbit/s and permits users to move gradually into the world of high-speed LANs. The Fast Ethernet Alliance, an industry group with more than 60 member companies, began work on the 100-Mbit/s 100Base-TX specification in the early 1990s. The alliance submitted the proposed standard to the IEEE and it was approved. During the standardization process, the alliance and the IEEE also defined a Media-Independent Interface (MII) for Fast Ethernet, which enables it to support various cabling types on the same Ethernet network. Therefore, Fast Ethernet offers three media options: 100Base-T4 for half-duplex operation on four pairs of UTP (Unshielded Twisted Pair cable), 100Base-TX for half- or full-duplex operation on two pairs of UTP or STP (Shielded Twisted Pair cable), and 100Base-FX for half- and full-duplex transmission over fiber optic cable.

Token Ring
The Token Ring is a LAN with 4 or 16 Mbit/s throughput. All stations are connected to a logical ring. In a Token Ring, a special bit pattern (3 bytes), called a token, circulates around the ring whenever all stations are idle. When a station wants to transmit a frame, it must get the token and remove it from the ring before transmitting. Ring interfaces have two operating modes: listen and transmit. In the listen mode, input bits are simply copied to the output. In the transmit mode, which is entered only after the token has been seized, the interface breaks the connection between the input and the output, entering its own data onto the ring. As the bits that were inserted and subsequently propagated around the ring come back, they are removed from the ring by the sender. After a station has finished transmitting the last bit of its last frame, it must regenerate the token. When the last bit of the frame has gone around and returned, it must be removed, and the interface must immediately switch back into the listen mode to avoid a duplicate transmission of the data. Each station receives, reads and sends frames circulating in the ring according to the Token Ring MAC Sublayer Protocol (IEEE standard 802.5). Each frame includes a Sender Address (SA) and a Destination Address (DA). When the sending station drains the frame from the ring, the Frame Status field is updated, i.e., the A and C bits of the field are examined. Three combinations are allowed:
A=0, C=0 : destination not present or not powered up.
A=1, C=0 : destination present but frame not accepted.
A=1, C=1 : destination present and frame copied.

15.4 FDDI

The Fiber Distributed Data Interface (FDDI) is a high-performance fiber optic LAN, which is configured as a ring. It is often seen as the successor of the Token Ring IEEE 802.5 protocol. The standardization began in the American National Standards Institute (ANSI), in the group X3T9.5, in 1982. Early implementations appeared in 1988. Compared to the Token Ring, FDDI is more a backbone than just a LAN, because it runs at 100 Mbps over distances of up to 100 km with up to 500 stations, whereas the Token Ring typically supports between 50 and 250 stations. The distance between neighboring stations is less than 2 km in FDDI. The FDDI design specification calls for no more than one error in 2.5 * 10^10 bits; many implementations will do much better. The FDDI cabling consists of two fiber rings, one transmitting clockwise and the other transmitting counter-clockwise. If either one breaks, the other can be used as backup. FDDI supports different transmission modes which are important for the communication of multimedia data. The synchronous mode allows a bandwidth reservation; the asynchronous mode behaves similarly to the Token Ring protocol. Many current implementations support only the asynchronous mode. Before diving into a discussion of the different modes, we will briefly describe the topologies and FDDI system components.


15.4.1 Topology of FDDI


The main topology features of FDDI are the two fiber rings, which operate in opposite directions (dual ring topology). The primary ring provides the data transmission; the secondary ring improves the fault tolerance. Individual stations can be, but do not have to be, connected to both rings. FDDI defines two classes of stations, A and B:
A class A station (Dual Attachment Station) connects to both rings. It is connected either directly to the primary and secondary ring, or via a concentrator to the primary and secondary ring.
A class B station (Single Attachment Station) connects to only one of the rings. It is connected via a concentrator to the primary ring.

15.4.2 FDDI Architecture


FDDI includes the following components, which are shown in the following figure:

PHYsical Layer Protocol (PHY)
Defined in the standard ISO 9314-1 Information Processing Systems: Fiber Distributed Data Interface - Part 1: Token Ring Physical Layer Protocol.

Physical Layer Medium-Dependent (PMD)
Defined in the standard ISO 9314-3 Information Processing Systems: Fiber Distributed Data Interface - Part 3: Token Ring Physical Layer, Medium Dependent.

Station Management (SMT)
Defines the management functions of the ring according to the ANSI Preliminary Draft Proposal American National Standard X3T9.5/84-49 Revision 6.2, FDDI Station Management.

Media Access Control (MAC)
Defines the network access according to ISO 9314-2 Information Processing Systems: Fiber Distributed Data Interface - Part 2: Token Ring Media Access Control.

Exercise Explain the different components of FDDI.


15.4.3 Further properties of FDDI


Multicasting: The multicasting service has become one of the most important aspects of networking. FDDI supports group addressing, which enables multicasting.
Synchronisation: Synchronisation among different data streams is not part of the network; therefore, it must be solved separately.
Packet size: The size of the packets can directly influence the data delay in applications.
Implementations: Many FDDI implementations do not support the synchronous mode, which is very useful for the transmission of continuous media. In the asynchronous mode, the same methods can additionally be used as described for the Token Ring.
Restricted tokens: If only two stations interact by transmitting continuous media data, then one can also use the asynchronous mode with restricted tokens.
Several new protocols at the network/transport layers in the Internet, and at higher layers in B-ISDN, are currently centers of research to support more efficient transmission of multimedia and multiple types of service.


CHAPTER 16 Multimedia Operating System


16.0 Aims and Objectives
In this lesson we will learn the concepts behind multimedia operating systems. Various issues related to the handling of resources are discussed in this lesson.

16.1 Introduction
The operating system is the shield of the computer hardware against all software components. It provides a comfortable environment for the execution of programs, and it ensures effective utilization of the computer hardware. The operating system offers various services, related to the essential resources of a computer: CPU, main memory, storage and all input and output devices.

16.2 Multimedia Operating System


For the processing of audio and video, multimedia applications demand that humans perceive these media in a natural, error-free way. These continuous media data originate at sources like microphones, cameras and files. From these sources, the data are transferred to destinations like loudspeakers, video windows and files located at the same computer or at a remote station. The major aspect in this context is real-time processing of continuous media data.
Process management must take into account the timing requirements imposed by the handling of multimedia data. Appropriate scheduling methods should be applied. In contrast to traditional real-time operating systems, multimedia operating systems also have to consider tasks without hard timing restrictions under the aspect of fairness. The communication and synchronization between single processes must meet the restrictions of real-time requirements and the timing relations among different media.
The main memory is available as a shared resource to single processes. In multimedia systems, memory management has to provide access to data with a guaranteed timing delay and efficient data manipulation functions. For instance, physical data copy operations must be avoided due to their negative impact on performance; buffer management operations (such as are known from communication systems) should be used.
Database management is an important component in multimedia systems. However, database management abstracts the details of storing data on secondary media storage. Therefore, database management should rely on file management services provided by the multimedia operating system to access single files and file systems. Since the operating system shields devices from application programs, it must provide services for device management too. In multimedia systems, the important issue is the integration of audio and video devices in a similar way to any other input/output device. The addressing of a camera could be performed similarly to the addressing of a keyboard in the same system, although most current systems do not apply this technique.

16.3 Real Time Process


A real-time process is a process which delivers the results of the processing within a given time span. Programs for the processing of data must be available during the entire run-time of the system. The data may require processing at an a priori known point in time, or it may be demanded without any previous knowledge. The system must enforce externally defined time constraints. Internal dependencies and their related time limits are implicitly considered. External events occur deterministically (at a predetermined instant) or stochastically (randomly). The real-time system has the permanent task of receiving information from the environment, occurring spontaneously or in periodic time intervals, and/or delivering it to the environment given certain time constraints.

16.3.1 Characteristics of Real Time Systems


The necessity of deterministic and predictable behavior of real-time systems requires processing guarantees for time-critical tasks. Such guarantees cannot be assured for events that occur at random intervals with unknown arrival times, processing requirements or deadlines. The main characteristics are:
Predictably fast response to time-critical events and accurate timing information.
A high degree of schedulability. Schedulability refers to the degree of resource utilization at which, or below which, the deadline of each time-critical task can be taken into account.
Stability under transient overload. Under system overload, the processing of critical tasks must be ensured.

16.3.2 Real Time and Multimedia


Audio and video data streams consist of single, periodically changing values of continuous media data, e.g., audio samples or video frames. Each Logical Data Unit (LDU) must be presented by a well-determined deadline. Jitter is allowed only before the final presentation to the user. A piece of music, for example, must be played back at a constant speed. To fulfill the timing requirements of continuous media, the operating system must use real-time scheduling techniques. These techniques must be applied to all system resources involved in the continuous media data processing, i.e., the entire end-to-end data path is involved. Traditional real-time scheduling techniques (used for command and control systems in application areas such as factory automation or aircraft piloting) have a high demand for security and fault-tolerance. The fault-tolerance requirements of multimedia systems are usually less strict than those of real-time systems that have a direct physical impact. The short-time failure of a continuous media system will not directly lead to the destruction of technical equipment or constitute a threat to human life. Note that this is a general statement which does not always apply; for example, the support of remote surgery by video and audio has stringent delay and correctness requirements. For many multimedia system applications, missing a deadline is not a severe failure, although it should be avoided. It may even go unnoticed, e.g., if an uncompressed video frame (or parts of it) is not available on time, it can simply be omitted. The viewer will hardly notice this omission, assuming it does not happen for a contiguous sequence of frames. A sequence of digital continuous media data is the result of periodically sampling a sound or image signal. Hence, in processing the data units of such a data sequence, all time-critical operations are periodic. The bandwidth demand of continuous media is not always that stringent; it need not be fixed a priori, but may eventually be lowered. As some compression algorithms are capable of using different compression ratios, leading to different qualities, the required bandwidth can be negotiated. If not enough bandwidth is available for full quality, the application may also accept reduced quality (instead of no service at all).

16.4 Resource Management


Multimedia systems with integrated audio and video processing are at the limit of their capacity, even with data compression and utilization of new technologies. Current computers do not allow processing of data according to their deadlines without any resource reservation and real-time process management. Resource management in distributed multimedia systems covers several computers and the involved communication networks. It allocates all resources involved in the data transfer process between sources and sinks. In an integrated distributed multimedia system, several applications compete for system resources. This shortage of resources requires careful allocation. The system management must employ adequate scheduling algorithms to serve the requirements of the applications. Thereby, the resource is first allocated and then managed.

16.4.1 Resources
A resource is a system entity required by tasks for manipulating data. Each resource has a set of distinguishing characteristics, classified using the following scheme:
A resource can be active or passive. An active resource, such as the CPU or a network adapter for protocol processing, provides a service. A passive resource, such as the main memory, communication bandwidth or a file system (whenever we do not take care of the processing of the adapter), denotes some system capability required by active resources.
A resource can be either used exclusively by one process at a time or shared between various processes. Active resources are often exclusive; passive resources can usually be shared among processes.
A resource that exists only once in the system is known as a single resource, otherwise it is a multiple resource. In a transputer-based multiprocessor system, the individual CPU is a multiple resource.

16.4.2 Requirements
The requirements of multimedia applications and data streams must be served for the single components of a multimedia system. The resource management maps these requirements on the respective capacity. The transmission and processing requirements of local and distributed multimedia applications can be specified according to the following characteristics :


1. The throughput is determined by the data rate a connection needs in order to satisfy the application requirements. It also depends on the size of the data units.
2. It is possible to distinguish between local and global end-to-end delay:
a. The delay at the resource is the maximum time span for the completion of a certain task at this resource.
b. The end-to-end delay is the total delay for a data unit to be transmitted from the source to its destination. For example, the source of a video telephone is the camera, the destination is the video window on the screen of the partner.
3. The jitter determines the maximum allowed variance in the arrival of data at the destination.
4. The reliability defines the error detection and correction mechanisms used for the transmission and processing of multimedia tasks. Errors can be ignored, indicated and/or corrected. It is important to notice that error correction through retransmission is rarely appropriate for time-critical data because the retransmitted data will usually arrive late. Forward error correction mechanisms are more useful.
In accordance with communication systems, these requirements are also known as Quality of Service (QoS) parameters.
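These four characteristics are typically bundled into a single QoS specification that the application hands to the resource manager. A minimal sketch of such a structure is shown below; the field names and sample values are illustrative assumptions and are not taken from any particular system:

```python
from dataclasses import dataclass

@dataclass
class QoSSpec:
    throughput_bps: int     # required data rate of the connection
    unit_size_bytes: int    # size of one logical data unit (e.g., a video frame)
    max_delay_ms: float     # end-to-end delay bound per data unit
    max_jitter_ms: float    # allowed variance in the arrival of data
    reliability: str        # how errors are handled: "ignore", "indicate" or "correct"

# Hypothetical example: 25 frames/s video with 7000-byte frames (about 1.4 Mbit/s)
video_qos = QoSSpec(throughput_bps=1_400_000, unit_size_bytes=7_000,
                    max_delay_ms=250.0, max_jitter_ms=10.0,
                    reliability="indicate")
```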

16.4.3 Components of the Resources


One possible realization of resource allocation and management is based on the interaction between clients and their respective resource managers. The client selects the resource and requests a resource allocation by specifying its requirements through a QoS specification. This is equivalent to a workload request. First, the resource manager checks its own resource utilization and decides if the reservation request can be served or not. All existing reservations are stored. This way, their share in terms of the respective resource capacity is guaranteed. Moreover, this component negotiates the reservation request with other resource managers, if necessary. In the following figure two computers are connected over a LAN. The transmission of video data between a camera connected to a computer server and the screen of the computer user involves, for all depicted components, a resource manager.


16.4.4 Phases of the Resource Reservation and Management Process


A resource manager provides components for the different phases of the allocation and management process:
1. Schedulability test: The resource manager checks, with the given QoS parameters (e.g., throughput and reliability), whether there is enough remaining resource capacity available to handle this additional request.
2. Quality of Service calculation: After the schedulability test, the resource manager calculates the best possible performance (e.g., delay) the resource can guarantee for the new request.
3. Resource reservation: The resource manager allocates the required capacity to meet the QoS guarantees for each request.
4. Resource scheduling: Incoming messages from connections are scheduled according to the given QoS guarantees. For process management, for instance, the allocation of the resource is done by the scheduler at the moment the data arrive for processing.
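A hypothetical sketch of the schedulability-test and reservation phases for a single CPU is given below. The utilization-based admission test, the 80% reservable capacity and all names are assumptions made for illustration; they are not part of the text above:

```python
class CpuResourceManager:
    """Toy resource manager for periodic tasks on one CPU (illustrative only)."""

    def __init__(self, capacity=0.8):              # fraction of the CPU that may be reserved
        self.capacity = capacity
        self.reservations = {}                      # task id -> reserved utilization

    def schedulability_test(self, processing_time, period):
        # phase 1: is enough remaining capacity available for this request?
        new_utilization = processing_time / period
        return sum(self.reservations.values()) + new_utilization <= self.capacity

    def reserve(self, task_id, processing_time, period):
        # phase 3: allocate the required capacity only if the test succeeds
        if not self.schedulability_test(processing_time, period):
            return False                            # request rejected
        self.reservations[task_id] = processing_time / period
        return True


mgr = CpuResourceManager()
print(mgr.reserve("video", processing_time=10, period=40))   # True  (25% of the CPU)
print(mgr.reserve("audio", processing_time=5,  period=20))   # True  (another 25%)
print(mgr.reserve("bulk",  processing_time=30, period=40))   # False (would exceed 80%)
```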


Exercise
List the phases of the resource reservation and management process.

16.4.5 Resource Allocation Scheme

Reservation of resources can be made either in a pessimistic or an optimistic way:
The pessimistic approach avoids resource conflicts by making reservations for the worst case, i.e., resource bandwidth for the longest processing time and the highest rate which might ever be needed by a task is reserved. Resource conflicts are therefore avoided. This potentially leads to an underutilization of resources.
With the optimistic approach, resources are reserved according to an average workload only. This means that the CPU is only reserved for the average processing time. This approach may overbook resources, with the possibility of unpredictable packet delays. QoS parameters are met as far as possible. Resources are highly utilized, though an overload situation may result in failure. To detect an overload situation and to handle it accordingly, a monitor can be implemented. The monitor may, for instance, preempt processes according to their importance.
The optimistic approach is considered to be an extension of the pessimistic approach. It requires that additional mechanisms to detect and solve resource conflicts be implemented.

Exercise
Distinguish between the pessimistic and the optimistic resource reservation schemes and name a drawback of each.


Chapter 17 Multimedia OS - Process Management


17.0 Aims and Objectives
This lesson aims at teaching the concepts of process management. At the end of this lesson the learner will be able to:
i) Identify the requirements for processor management
ii) Enumerate the different real-time processor scheduling algorithms

17.1 Introduction

One of the main activities of an operating system is managing the multimedia processes and managing the processor. Effective management of the processor is necessary to enhance multimedia production and multimedia playback.

17.2 Process Management

Process management deals with the administration of the main processor as a resource. The capacity of this resource is specified as processor capacity. The process manager maps single processes onto resources according to a specified scheduling policy such that all processes meet their requirements. In most systems, a process under the control of the process manager can adopt one of the following states:
In the initial state, no process is assigned to the program; the process is in the idle state.
If a process is waiting for an event, i.e., the process lacks one of the necessary resources for processing, it is in the blocked state.
If all necessary resources are assigned to the process, it is ready to run; the process only needs the processor for the execution of its program.
A process is running as long as the system processor is assigned to it.
The process manager is the scheduler. This component transfers a process into the ready-to-run state by assigning it a position in the respective queue of the dispatcher, which is the essential part of the operating system kernel. The dispatcher manages the transition from ready-to-run to run. In most operating systems, the next process to run is chosen according to a priority policy. Between processes with the same priority, the one with the longest ready time is chosen.
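The states and transitions described above can be captured in a few lines. The sketch below is only an illustration of the state model; the event names ("admit", "dispatch", and so on) are invented for this example and do not belong to any particular kernel:

```python
from enum import Enum, auto

class State(Enum):
    IDLE = auto()        # initial state: no process assigned to the program
    BLOCKED = auto()     # waiting for an event or a missing resource
    READY = auto()       # all resources assigned, waiting only for the processor
    RUNNING = auto()     # the processor is currently assigned to the process

# allowed transitions: (current state, event) -> next state
TRANSITIONS = {
    (State.IDLE,    "admit"):    State.READY,     # scheduler places it in the dispatcher queue
    (State.READY,   "dispatch"): State.RUNNING,   # dispatcher assigns the processor
    (State.RUNNING, "preempt"):  State.READY,
    (State.RUNNING, "wait"):     State.BLOCKED,   # a needed resource or event is missing
    (State.BLOCKED, "event"):    State.READY,
}

def next_state(state, event):
    try:
        return TRANSITIONS[(state, event)]
    except KeyError:
        raise ValueError(f"illegal transition: {event!r} in state {state.name}")

print(next_state(State.READY, "dispatch"))   # State.RUNNING
```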


17.3 Real-time Processing Requirements


Continuous media data processing must occur in exactly predetermined, usually periodic, intervals. Operations on these data recur over and over and must be completed at certain deadlines. The real-time process manager determines a schedule for the resource CPU that allows it to make reservations and to give processing guarantees. The problem is to find a feasible schedule which schedules all time-critical continuous media tasks in a way that each of them can meet its deadlines. This must be guaranteed for all tasks in every period for the whole run-time of the system. In a multimedia system, continuous and discrete media data are processed concurrently. For the scheduling of multimedia tasks, two conflicting goals must be considered:
An uncritical process should not suffer from starvation because time-critical processes are executed. Multimedia applications rely as much on text and graphics as on audio and video. Therefore, not all resources should be occupied by the time-critical processes and their management processes.
On the other hand, a time-critical process must never be subject to priority inversion. The scheduler must ensure that any priority inversion (also between time-critical processes with different priorities) is avoided or reduced as much as possible.

17.4 Traditional Real-time Scheduling


A few real-time scheduling methods are employed in operations research. They differ from computer science real-time scheduling because they operate in a static environment where no adaptation to changes of the workload is necessary. The goal of traditional scheduling on time-sharing computers is optimal throughput, optimal resource utilization and fair queuing. In contrast, the main goal of real-time scheduling is to provide a schedule that allows all, or at least as many as possible, time-critical processes to be processed in time, according to their deadlines. The scheduling algorithm must map tasks onto resources such that all tasks meet their time requirements. Therefore, it must be possible to show, or to prove, that a scheduling algorithm applied to real-time systems fulfills the timing requirements of the tasks. There are several attempts to solve real-time scheduling problems. Many of them are just variations of basic algorithms. To find the best solutions for multimedia systems, two basic algorithms are analyzed, the Earliest Deadline First Algorithm and Rate Monotonic Scheduling, and their advantages and disadvantages are elaborated.

17.4.1 Earliest Deadline First Algorithm


The Earliest Deadline First (EDF) algorithm is one of the best-known algorithms for real-time processing. At every new ready state, the scheduler selects the task with the earliest deadline among the tasks that are ready and not fully processed. The requested resource is assigned to the selected task. At any arrival of a new task, EDF must be computed immediately, leading to a new order, i.e., the running task is preempted and the new task is scheduled according to its deadline. The new task is processed immediately if its deadline is earlier than that of the interrupted task. The processing of the interrupted task is continued according to the EDF algorithm later on. EDF is not only an algorithm for periodic tasks, but also for tasks with arbitrary requests, deadlines and service execution times. In this case, no guarantee about the processing of any task can be given. EDF is an optimal, dynamic algorithm, i.e., it produces a valid schedule whenever one exists. A dynamic algorithm schedules every instance of each incoming task according to its specific demands. Tasks of periodic processes must be scheduled in each period again. With n tasks which have arbitrary ready times and deadlines, the complexity is O(n^2). EDF is used by different models as a basic algorithm. An extension of EDF is the Time-Driven Scheduler (TDS). Tasks are scheduled according to their deadlines. Further, the TDS is able to handle overload situations. If an overload situation occurs, the scheduler aborts tasks which cannot meet their deadlines anymore. If there is still an overload situation, the scheduler removes tasks which have a low value density. The value density corresponds to the importance of a task for the system. Another priority-driven EDF scheduling algorithm has also been introduced. Here, every task is divided into a mandatory and an optional part. A task is terminated according to the deadline of the mandatory part, even if it is not completed at this time. Tasks are scheduled with respect to the deadlines of the mandatory parts. A set of tasks is said to be schedulable if all tasks can meet the deadlines of their mandatory parts, even when the processor is fully utilized.
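A minimal, non-preemptive sketch of EDF selection is shown below: each task is described by a release time, an absolute deadline, an execution time and a name, and the scheduler always runs the ready task with the earliest deadline next. Preemption on the arrival of a more urgent task, as described above, is deliberately left out to keep the example short, and all task values are made up:

```python
import heapq

def edf_order(tasks):
    """tasks: list of (release_time, deadline, exec_time, name).
    Returns the execution order on a single CPU under non-preemptive EDF."""
    tasks = sorted(tasks)                    # by release time
    ready, order, now, i = [], [], 0, 0
    while i < len(tasks) or ready:
        while i < len(tasks) and tasks[i][0] <= now:
            release, deadline, exec_time, name = tasks[i]
            heapq.heappush(ready, (deadline, exec_time, name))   # keyed by deadline
            i += 1
        if not ready:                        # nothing ready: idle until the next release
            now = tasks[i][0]
            continue
        deadline, exec_time, name = heapq.heappop(ready)
        order.append(name)
        now += exec_time
        if now > deadline:
            print(f"deadline miss: {name}")
    return order

print(edf_order([(0, 10, 3, "video"), (1, 5, 2, "audio"), (2, 20, 4, "file")]))
# ['video', 'audio', 'file']; with preemption, 'audio' would interrupt 'video'
```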


17.4.2 Rate Monotonic Algorithm


The rate monotonic scheduling principle was introduced by Liu and Layland in 1973. It is an optimal, static, priority-driven algorithm for preemptive, periodic jobs. Optimal in this context means that there is no other static algorithm that is able to schedule a task set which cannot be scheduled by the rate monotonic algorithm. A process is scheduled by a static algorithm at the beginning of the processing. Subsequently, each task is processed with the priority calculated at the beginning. No further scheduling is required. The following five assumptions are necessary prerequisites to apply the rate monotonic algorithm:
1. The requests for all tasks with deadlines are periodic, i.e., they have constant intervals between consecutive requests.
2. The processing of a single task must be finished before the next task of the same data stream becomes ready for execution. Deadlines consist of run-ability constraints only, i.e., each task must be completed before the next request occurs.
3. All tasks are independent. This means that the requests for a certain task do not depend on the initiation or completion of requests for any other task.
4. The run-time for each request of a task is constant. Run-time denotes the maximum time which is required by a processor to execute the task without interruption.
5. Any non-periodic task in the system has no required deadline. Typically, such tasks initiate periodic tasks or are tasks for failure recovery. They usually displace periodic tasks.
Static priorities are assigned to tasks, once at the connection set-up phase, according to their request rates. The priority corresponds to the importance of a task relative to other tasks. Tasks with higher request rates will have higher priorities. The task with the shortest period gets the highest priority and the task with the longest period gets the lowest priority.
The rate monotonic algorithm is a simple method to schedule time-critical, periodic tasks on the respective resource. A task will always meet its deadline if this can be proven to be true for the longest response time. The response time is the time span between the request and the end of processing the task. This time span is maximal when all processes with a higher priority request to be processed at the same time. This case is known as the critical instant, shown in the following figure. In this figure, the priority of a is, according to the rate monotonic algorithm, higher than that of b, and b's priority is higher than that of c. The critical time zone is the time interval between the critical instant and the completion of a task.
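Liu and Layland also derived a simple sufficient schedulability test for the rate monotonic algorithm: a set of n periodic tasks is schedulable if the total processor utilization does not exceed n(2^(1/n) - 1). The sketch below applies this well-known bound; the task parameters are invented for illustration:

```python
def rm_schedulable(tasks):
    """tasks: list of (execution_time, period) pairs.
    Sufficient (not necessary) test: U <= n * (2**(1/n) - 1)."""
    n = len(tasks)
    utilization = sum(c / p for c, p in tasks)
    bound = n * (2 ** (1 / n) - 1)
    return utilization <= bound, utilization, bound

ok, u, bound = rm_schedulable([(10, 40), (5, 20), (20, 100)])
print(ok, round(u, 3), round(bound, 3))   # True 0.7 0.78, so the set is schedulable
```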

17.4.3 Other Approaches to Rate Monotonic Algorithm


There are several approaches to this algorithm. One of them divides a task into a mandatory and an optional part. The processing of the mandatory part delivers a result which can be accepted by the user. The optional part only refines the result. The mandatory part is scheduled according to the rate monotonic algorithm. For the scheduling of the optional part, other, different policies are suggested.
In some systems there are aperiodic tasks next to periodic ones. To meet the requirements of periodic tasks and the response-time requirements of aperiodic requests, it must be possible to schedule both aperiodic and periodic tasks. If the aperiodic request is an aperiodic continuous stream (e.g., video images as part of a slide show), we have the possibility to transform it into a periodic stream. Every timed data item can be substituted by n items. The new items have the duration of the minimal life span. The number of streams is increased, but since the life span is decreased, the semantics remain unchanged. The stream is now periodic because every item has the same life span. If the stream is not continuous, we can apply a sporadic server to respond to aperiodic requests. The server is provided with a computation budget. This budget is refreshed t units of time after it has been exhausted. Earlier refreshing is also possible. The budget represents the computation time reserved for aperiodic tasks.

Exercise
Explain how an aperiodic continuous stream can be transformed into a periodic one.

17.4.4 Other Approaches for In-Time Scheduling


Apart from the two methods previously discussed, further scheduling algorithms have been evaluated regarding their suitability for the processing of continuous media data.
Least Laxity First (LLF). The laxity is the time between the actual time t and the deadline, minus the remaining processing time. The laxity of the k-th instance of a periodic task is:
laxity_k = (s + (k - 1) * p + d) - (t + e)
where s is the start time of the first period, p the period, d the relative deadline and e the remaining execution time. LLF is an optimal, dynamic algorithm for exclusive resources. Furthermore, it is an optimal algorithm for multiple resources if the ready times of the real-time tasks are the same. The laxity is a function of the deadline, the processing time and the current time. Thereby, the processing time cannot be exactly specified in advance. When calculating the laxity, the worst case is assumed. Therefore, the determination of the laxity is inexact. The laxity of waiting processes dynamically changes over time.
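The laxity expression can be evaluated directly; the helper below simply follows the formula above, and the sample numbers are purely illustrative:

```python
def laxity(k, s, p, d, t, e):
    """Laxity of the k-th instance of a periodic task: time left until the
    instance's absolute deadline minus the remaining processing time."""
    absolute_deadline = s + (k - 1) * p + d
    return absolute_deadline - (t + e)

# Instance 3 of a task started at time 0 with period 40 ms and relative deadline 40 ms,
# observed at t = 100 ms with 15 ms of processing still remaining:
print(laxity(k=3, s=0, p=40, d=40, t=100, e=15))   # 5, i.e., 5 ms of slack
```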

Deadline Monotone Algorithm. If the deadlines of tasks are less than their periods (di < pi), the prerequisites of the rate monotonic algorithm are violated. In this case, a fixed priority assignment according to the deadlines of the tasks is optimal: a task Ti gets a higher priority than a task Tj if di < dj. No effective schedulability test for the deadline monotone algorithm exists. To determine the schedulability of a task set, each task must be checked as to whether it meets its deadline in the worst case, i.e., when all tasks request execution at their critical instant. Tasks with a deadline shorter than their period arise, for example, during the measurement of temperature or pressure in control systems. In multimedia systems, deadlines equal to period lengths can be assumed.
Shortest Job First (SJF). The task with the shortest remaining computation time is chosen for execution. This algorithm guarantees that as many tasks as possible meet their deadlines under an overload situation if all of them have the same deadline. In multimedia systems where the resource management allows overload situations, this might be a suitable algorithm.


CHAPTER 18 MULTIMEDIA OS FILE SYSTEM


18.0 Aims and Objectives
This lesson aims at teaching the learner how file systems are organised in the Multimedia based Operating Systems. At the end of this chapter the learner will: i) be able to identify different file systems available for Multimedia OS. ii) Learn how the files are managed in the Operating System. iii) Understand the algorithms used for disk scheduling.

18.1 Introduction
The file system is said to be the most visible part of an operating system. Most programs write or read files. Their program code, as well as user data, is stored in files. The organization of the file system is an important factor for the usability and convenience of the operating system. A file is a sequence of information held as a unit for storage and use in a computer system.

18.2 File Systems


Files are stored in secondary storage, so they can be used by different applications. The life-span of files is usually longer than the execution of a program. In traditional file systems, the information types stored in files are sources, objects, libraries and executables of programs, numeric data, text, payroll records, etc. In multimedia systems, the stored information also covers digitized video and audio with their related real-time read and write demands. Therefore, additional requirements in the design and implementation of file systems must be considered. The file system provides access and control functions for the storage and retrieval of files. From the user's viewpoint, it is important how the file system allows file organization and structure. The internals, which are more important in our context, i.e., the organization of the file system, deal with the representation of information in files, their structure and organization in secondary storage.

Traditional File Systems
The two main goals of traditional file systems are: (1) to provide a comfortable interface for file access to the user and (2) to make efficient use of storage media.


18.3 File Structure


We commonly distinguish between two methods of file organization. In sequential storage, each file is organized as a simple sequence of bytes or records. Files are stored consecutively on the secondary storage media as shown in the following figure.

Contiguous and non-contiguous storage

Files are separated from each other by a well-defined end-of-file bit pattern, character or character sequence. A file descriptor is usually placed at the beginning of the file and is, in some systems, repeated at the end of the file. Sequential storage is the only possible way to organize the storage on tape, but it can also be used on disks. The main advantage is its efficiency for sequential access, as well as for direct access. Disk access time for reading and writing is minimized.
In non-sequential storage, the data items are stored in a non-contiguous order. There exist mainly two approaches:
One way is to use linked blocks, where physical blocks containing consecutive logical locations are linked using pointers. The file descriptor must contain the number of blocks occupied by the file and the pointer to the first block; it may also have the pointer to the last block. A serious disadvantage of this method is the cost of the implementation for random access, because all prior data must be read. In MS-DOS, a similar method is applied. A File Allocation Table (FAT) is associated with each disk. One entry in the table represents one disk block. The directory entry of each file holds the block number of the first block. The number in the slot of an entry refers to the next block of a file. The slot of the last block of a file contains an end-of-file mark.
Another approach is to store block information in mapping tables. Each file is associated with a table where, apart from the block numbers, information like owner, file size, creation time and last access time is stored. Those tables usually have a fixed size, which means that the number of block references is bounded. Files with more blocks are referenced indirectly by additional tables assigned to the files. In UNIX, a small table (on disk) called an i-node is associated with each file (see the following figure). The indexed sequential approach is an example of multi-level mapping; here, logical and physical organizations are not clearly separated.

The UNIX inode
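Following a linked-block (FAT-style) chain, as described above, simply means walking the table from the first block of the file until the end-of-file mark is reached. The sketch below assumes the table is already in memory; the block numbers and the end-of-file marker value are made up for illustration:

```python
EOF = -1   # stand-in for the reserved end-of-file bit pattern

def fat_chain(fat, first_block):
    """Return all block numbers of a file by following its FAT chain."""
    blocks, current = [], first_block
    while current != EOF:
        blocks.append(current)
        current = fat[current]       # each slot holds the number of the next block
    return blocks

# one file occupies blocks 4 -> 7 -> 2; the entries 5 -> 9 belong to another file
fat = {4: 7, 7: 2, 2: EOF, 5: 9, 9: EOF}
print(fat_chain(fat, first_block=4))   # [4, 7, 2]
```

Reaching the n-th block of a file requires following n - 1 table entries, which is exactly the random-access cost mentioned above.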


Directory Structure
Files are usually organized in directories. Most of today's operating systems provide tree-structured directories in which the user can organize the files according to his/her personal needs. In multimedia systems, it is important to organize the files in a way that allows easy, fast and contiguous data access.

Disk Management
Disk access is a slow and costly transaction. In traditional systems, a common technique to reduce disk accesses is the block cache. Using a block cache, blocks are kept in memory because it is expected that future read or write operations will access these data again. Thus, performance is enhanced due to shorter access times. Another way to enhance performance is to reduce disk arm motion. Blocks that are likely to be accessed in sequence are placed together on one cylinder. To refine this method, rotational positioning can be taken into account: consecutive blocks are placed on the same cylinder, but in an interleaved way, as shown in the following figure.

Interleaved and non-interleaved storage
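Interleaving leaves a gap of one or more sectors between consecutive logical blocks so that the controller has time to process one block before the next one rotates under the head. A small sketch of such a mapping is given below; the track size and interleave factor are assumed values for illustration:

```python
def interleave_layout(sectors_per_track, factor):
    """Map logical block i to a physical sector on one track,
    skipping `factor - 1` sectors between consecutive logical blocks."""
    layout = [None] * sectors_per_track
    sector = 0
    for logical in range(sectors_per_track):
        while layout[sector] is not None:          # skip sectors that are already taken
            sector = (sector + 1) % sectors_per_track
        layout[sector] = logical
        sector = (sector + factor) % sectors_per_track
    return layout

print(interleave_layout(8, 1))   # [0, 1, 2, 3, 4, 5, 6, 7]  non-interleaved
print(interleave_layout(8, 2))   # [0, 4, 1, 5, 2, 6, 3, 7]  2:1 interleaving
```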


Another important issue is the placement of the mapping tables (e.g., i-nodes in UNIX) on the disk. If they are placed near the beginning of the disk, the distance between them and the blocks will be, on average, half the number of cylinders. To improve this, they can be placed in the middle of the disk; hence, the average seek time is roughly reduced by a factor of two. In the same way, consecutive blocks should be placed on the same cylinder. The use of the same cylinder for the storage of mapping tables and referenced blocks also improves performance.

18.4 Disk Scheduling


Sequential storage devices (e.g., tapes) do not have a scheduling problem. For random-access storage devices, however, every file operation may require movement of the read/write head. This operation, known as a seek, is very time consuming; seek times in the order of 250 ms for CDs are still state of the art. The actual time to read or write a disk block is determined by:
The seek time (the time required for the movement of the read/write head).
The latency time or rotational delay (the time during which the transfer cannot proceed until the right block or sector rotates under the read/write head).
The actual data transfer time needed for the data to be copied from disk into main memory.
Usually the seek time is the largest factor of the actual transfer time. Most systems try to keep the cost of seeking low by applying special algorithms to the scheduling of disk read/write operations. The access of the storage device is a problem greatly influenced by the file allocation method. Most systems apply one of the following scheduling algorithms:

First-Come-First-Served (FCFS)
With this algorithm, the disk driver accepts requests one at a time and serves them in incoming order. This is easy to program and an intrinsically fair algorithm. However, it is not optimal with respect to head movement because it does not consider the location of the other queued requests. This results in a high average seek time.

Shortest-Seek-Time First (SSTF)
At every point in time when a data transfer is requested, SSTF selects, among all requests, the one with the minimum seek time from the current head position. Therefore, the head is moved to the closest track in the request queue. This algorithm was developed to minimize seek time and it is, in this sense, optimal. SSTF is a modification of Shortest Job First (SJF), and like SJF, it may cause starvation of some requests. Request targets in the middle of the disk will get immediate service at the expense of requests in the innermost and outermost disk areas.

SCAN
Like SSTF, SCAN orders requests to minimize seek time. In contrast to SSTF, it takes the direction of the current disk movement into account. It first serves all requests in one direction until it does not have any requests in this direction anymore. The head movement is then reversed and service is continued. SCAN provides a very good seek time because the edge tracks get better service times. Note that middle tracks still get better service than edge tracks: when the head movement is reversed, it first serves tracks that have recently been serviced, whereas the heaviest density of requests, assuming a uniform distribution, is at the other end of the disk.

C-SCAN
C-SCAN also moves the head in one direction, but it offers fairer service with more uniform waiting times. It does not alter the direction, as in SCAN. Instead, it scans in cycles, always increasing or decreasing, with one idle head movement from one edge to the other between two consecutive scans. The performance of C-SCAN is somewhat lower than that of SCAN.
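The differences between these policies show up directly in the order in which queued tracks are visited and in the total head movement. The request queue, head position and the simplified SCAN below (which only sweeps once in each direction) are all assumptions made for illustration:

```python
def fcfs(requests, head):
    return list(requests)                          # serve strictly in arrival order

def sstf(requests, head):
    pending, order = list(requests), []
    while pending:
        nearest = min(pending, key=lambda track: abs(track - head))
        pending.remove(nearest)
        order.append(nearest)
        head = nearest
    return order

def scan(requests, head):
    upper = sorted(t for t in requests if t >= head)            # sweep upwards first ...
    lower = sorted((t for t in requests if t < head), reverse=True)
    return upper + lower                                        # ... then reverse direction

queue, start = [98, 183, 37, 122, 14, 124, 65, 67], 53
for policy in (fcfs, sstf, scan):
    order = policy(queue, start)
    movement = sum(abs(b - a) for a, b in zip([start] + order, order))
    print(policy.__name__, order, "total head movement:", movement)
```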

18.5 Multimedia File systems


Compared to the increased performance of processors and networks, storage devices have become only marginally faster. The effect of this increasing speed mismatch is the search for new storage structures, and storage and retrieval mechanisms with respect to the file system. Continuous media data differ from discrete data in the following respects:

Real-Time Characteristics
As mentioned previously, the retrieval, computation and presentation of continuous media are time-dependent. The data must be presented (read) before a well-defined deadline with small jitter only.

File Size
Compared to text and graphics, video and audio have very large storage space requirements. Since the file system has to store information ranging from small, unstructured units like text files to large, highly structured data units like video and associated audio, it must organize the data on disk in a way that efficiently uses the limited storage.

Multiple Data Streams
A multimedia system must support different media at one time. It does not only have to ensure that all of them get a sufficient share of the resources, it also must consider tight relations between streams arriving from different sources.

There are different ways to support continuous media in file systems. Basically, there are two approaches. With the first approach, the organization of files on disk remains as it is. The necessary real-time support is provided through special disk scheduling algorithms and sufficient buffering to avoid jitter. In the second approach, the organization of audio and video files on disk is optimized for their use in multimedia systems. Scheduling of multiple data streams still remains an issue of research.

18.5.1 Disk Scheduling Algorithms in Multimedia File System


The main goals of traditional disk scheduling algorithms are to reduce the cost of seek operations, to achieve a high throughput and to provide fair disk access for every process. The additional real-time requirements introduced by multimedia systems make traditional disk scheduling algorithms, such as those described previously, unsuitable for multimedia systems. Systems without any optimized disk layout for the storage of continuous media depend far more on reliable and efficient disk scheduling algorithms than others. In the case of contiguous storage, scheduling is only needed to serve requests from multiple streams concurrently. A round-robin scheduler can be employed that is able to serve hard real-time tasks. Here, additional optimization is provided through the close physical placement of streams that are likely to be accessed together.

Earliest Deadline First
The EDF scheduling strategy, as described for CPU scheduling, is also used for the file system. Here, the block of the stream with the nearest deadline is read first. The employment of EDF in the strict sense results in poor throughput and excessive seek time. Further, as EDF is most often applied as a preemptive scheduling scheme, the costs for preemption of a task and scheduling of another task are considerably high. The overhead caused by this is in the same order of magnitude as at least one disk seek. Hence, EDF must be adapted or combined with file system strategies.

SCAN-Earliest Deadline First
The SCAN-EDF strategy is a combination of the SCAN and EDF mechanisms. The seek optimization of SCAN and the real-time guarantees of EDF are combined in the following way: as in EDF, the request with the earliest deadline is always served first; among requests with the same deadline, the one that comes first according to the scan direction is served first; among the remaining requests, this principle is repeated until no request with this deadline is left.

Group Sweeping Scheduling
With Group Sweeping Scheduling (GSS), requests are served in cycles, in round-robin manner. To reduce disk arm movements, the set of n streams is divided into g groups. Groups are served in a fixed order. Individual streams within a group are served according to SCAN; therefore, it is not fixed at which time or in which order individual streams within a group are served. In one cycle, a specific stream may be the first to be served; in another cycle, it may be the last in the same group. A smoothing buffer, which is sized according to the cycle time and data rate of the stream, assures continuity.

Mixed Strategy
The mixed strategy was introduced based on the shortest seek (also called greedy strategy) and the balanced strategy. As shown in the following figure, every time data are retrieved from disk they are transferred into buffer memory allocated for the respective data stream. From there, the application process removes them one at a time. The goals of the scheduling algorithm are:
o To maximize transfer efficiency by minimizing seek time and latency.
o To serve process requirements with a limited buffer space.


With shortest seek, the first goal is served, i.e., the process whose data block is closest is served first. The balanced strategy chooses the process which has the least amount of buffered data for service, because this process is likely to run out of data. The crucial part of this algorithm is the decision of which of the two strategies must be applied (shortest seek or balanced strategy). For the employment of shortest seek, two criteria must be fulfilled: the number of buffers for all processes should be balanced (i.e., all processes should have nearly the same amount of buffered data) and the overall required bandwidth should be sufficient for the number of active processes, so that none of them will try to immediately read data out of an empty buffer. The urgency is introduced as an attempt to measure both. The urgency is the sum of the reciprocals of the current fullness (amount of buffered data). This number measures both the relative balance of all read processes and the number of read processes. If the urgency is large, the balanced strategy will be used; if it is small, it is safe to apply the shortest seek algorithm.

Exercise
Explain under which conditions the mixed strategy applies the shortest seek algorithm rather than the balanced strategy.
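As a hint for the exercise above, the urgency is simply the sum of the reciprocals of the current buffer fullness of all streams, so a nearly empty buffer dominates the sum. A hypothetical sketch of the strategy switch follows; the threshold value and the buffer figures are invented for illustration:

```python
def urgency(buffer_fullness):
    """buffer_fullness: amount of buffered data per stream (e.g., in blocks)."""
    return sum(1.0 / filled if filled > 0 else float("inf")
               for filled in buffer_fullness)

def choose_strategy(buffer_fullness, threshold=1.0):
    # large urgency -> some buffer is nearly empty  -> serve the neediest stream
    # small urgency -> all buffers comfortably full -> minimize seek time instead
    return "balanced" if urgency(buffer_fullness) > threshold else "shortest seek"

print(choose_strategy([8, 7, 9]))   # shortest seek: buffers are well filled and balanced
print(choose_strategy([8, 1, 9]))   # balanced: one stream is about to run out of data
```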

Continuous Media File System:



CMFS disk scheduling is a non-preemptive disk scheduling scheme designed for the Continuous Media File System (CMFS) at UC Berkeley. Different policies can be applied in this scheme. Here, the notion of the slack time H is introduced. The slack time is the time during which the CMFS is free to do non-real-time operations or to work ahead for real-time processes, because the current work-ahead of each process is sufficient so that no process would starve, even if it were not served for H seconds. Several real-time scheduling policies, such as the static/minimum policy, the greedy policy and the cyclical plan policy, have been implemented and tested in a prototype file system.

18.6 Additional Operating System Issues


Interprocess Communication and Synchronization
In multimedia systems, interprocess communication refers to the exchange of different data between processes. This data transfer must be very efficient because continuous media require the transfer of a large amount of data in a given time span. For the exchange of discrete media data, the same mechanisms are used as in traditional operating systems. Data interchange of continuous media is closely related to memory management. Synchronization guarantees timing requirements between different processes; in the context of multimedia, this is an especially interesting aspect.

Memory Management
The memory manager assigns the physical resource memory to a single process. Virtual memory is mapped onto memory that is actually available. With paging, less frequently used data are swapped between main memory and external storage. Pages are transferred back into main memory when data on them are required by a process. Note that continuous media data must not be swapped out of main memory. If a page of virtual memory containing code or data required by a real-time process is not in real memory when it is accessed by the process, a page fault occurs, meaning that the page must be read from disk. Page faults affect real-time performance very seriously, so they must be avoided. A possible approach is to lock code and/or data into real memory. However, care should be taken when locking code and/or data into real memory.

Device Management
Device management and the actual access to a device allow the operating system to integrate all hardware components. The physical device is represented by an abstract device driver. The physical characteristics of devices are hidden. In a conventional system, such devices include a graphics adapter card, disk, keyboard and mouse. In multimedia systems, additional devices like cameras, microphones, speakers and dedicated storage devices for audio and video must be considered. In most existing multimedia systems, such devices are not often integrated by device management and the respective device drivers. Existing operating system extensions for multimedia usually provide one common system-wide interface for the control and management of data streams and devices.


References
1. Multimedia: Making It Work, by Tay Vaughan
2. Multimedia in Practice: Technology and Applications, by Jeffcoat
3. Multimedia: Computing, Communications and Applications, by Ralf Steinmetz and Klara Nahrstedt
4. Multimedia Systems, Standards, and Networks, by Atul Puri and Tsuhan Chen
5. Multimedia Storage and Retrieval: An Algorithmic Approach, by Jan Korst and Verus Pronk
6. Multimedia Servers: Applications, Environments and Design, by Dinkar Sitaram and Asit Dan
7. Semantic Models for Multimedia Database Searching and Browsing, by Shu-Ching Chen, Rangasami L. Kashyap and Arif Ghafoor
8. Multimedia: Concepts and Practice, by Stephen McGloughlin

