PAM Finals

Multimedia Information System

What is Multimedia?
“Multimedia” has no strict definition.
Multimedia has many definitions; these include:
 Computer professional: uses computers to present and combine text, images, audio, video, and
interactive features in several ways.
 Consumer entertainment vendor: interactive cable TV with hundreds of digital channels
available, or a cable TV-like service delivered over a high-speed Internet connection.
A Multimedia Application is an application which uses a collection of multiple media sources e.g. text,
graphics, images, sound/audio, animation and/or video.

History of Multimedia

Multimedia information / System


 Multimedia information can be defined as information that consists of one or more different
media types.
 Today, multimedia information consists of text, audio, video, 2D and 3D graphics, and animation.
 Multimedia System is a system capable of processing multimedia data and applications.
 Multimedia System is characterized by the processing, storage, generation, manipulation and
rendition of Multimedia information.
Characteristics of Multimedia Systems
A Multimedia system has four basic characteristics:
 Multimedia systems must be computer controlled.
 Multimedia systems are integrated.
 The information they handle must be represented digitally.
 The interface to the final presentation of media is usually interactive.
Key Issues for Multimedia Systems
Three main processes inherent to multimedia systems:
 Content creation or authoring.
Capturing, digitization, rendering, and filtering.
 Storage and compression.
Available storage medium and the size of stored media.
 Distribution.
How multimedia content is distributed.

Components of a Multimedia System


The Components (Hardware and Software) required for a multimedia system:
 Capture devices—Camera, Microphone, Keyboards, graphics tablets, 3D input devices.
 Storage Devices —Hard disks, CD-ROMs, DVD-ROM, etc.
 Communication Networks —Local Networks, Internet, special high speed networks.
 Computer Systems —Multimedia Desktop machines, Workstations,
 Display Devices — CD-quality speakers, HDTV,SVGA, Hi-Res monitors, Color printers etc.

Classification of Multimedia Systems


Depending on the application, multimedia systems can be classified in a variety of ways, such as
interaction style, the number of users, when the content is live, and so on.
 Static versus dynamic
 Real time versus orchestrated
 Linear versus nonlinear
 Person-to-machine versus person-to-person
 Single user, peer-to-peer, peer-multi-peer, and broadcast.

Media Types Used Today


Two broad classes of media types:
 Static, time-independent discrete media: Text, graphics, images. Information in these media
consists exclusively of a sequence of individual elements without a time component.
 Dynamic, time-dependent continuous media: Sound, video. Information is expressed not only
by its individual values, but also by the time of their occurrence.
Text
 Text has been commonly used to express information, not just today but from the early days.
 Hypertext is a text which contains links to other texts, allowing nonlinear access to information.
Hypermedia
 HyperMedia is not constrained to be text-based. It can include other media, e.g., graphics,
images.
 The World Wide Web (WWW) is the best example of a hypermedia application.
Images
 An image consists of a set of units called pixels, organized in the form of a two-dimensional array
(a grid of pixels). Each pixel has a bit depth.
 Bit depth: the number of bits assigned to each pixel. Binary images (1 bit/pixel), gray images
(8 bits/pixel), color images (24 bits/pixel).
Video
 A sequence of images (frames) having the same spatial parameters. Number of frames displayed
per second is called the frame rate (fps).
 Aspect ratio: A common aspect ratio for video is 4:3, which defines the ratio of the width to
height.
 Scanning format: Scanning helps convert the frames of video into a one dimensional signal for
broadcast.
Audio
 Digital audio is characterized by a sampling rate in hertz. A sample can be defined as an
individual unit of audio information.
 Sampling rate: how often the samples are taken (measured in kilohertz, or thousands of samples
per second).
 Sample size: how many numbers are used to represent the value of each sample, 8-bits to 16-
bits depending on the application.
 Dimensionality: the number of channels contained in the signal.

2D and 3D graphics
 2D/ 3D graphic elements are represented by 2D/3D vector coordinates.
 Have properties such as a fill color, boundary thickness, and so on.

Inherent Qualities of Multimedia Data


 Nature of media (analogue or digital).
Our concern is the digital form.
 Voluminous.
Concerns the size of the data that results from combining different types of media.
 Interactive.
Interacting with multimedia content.
 Real time and synchronization.
Transmitting multimedia in real time.
Understanding these properties is essential to designing good, working multimedia applications.
Multimedia Revolution
What are the main reasons for this revolution?
 Digitization of virtually any and every device.
 Digitization of libraries of information.
 Evolution of communication and data networks.
 New algorithms for compression.
 Better hardware performance.
 Smarter user interface paradigms to view/interact with multimedia information.
 Standardizations.
Digital Data Acquisition
Analog and Digital Signals
The physical world around us exists in a continuous form.
 Sensing light
 Sensing sound energy.
 Sensing pressure.
 Sensing temperature.
 Sensing motion.
On the other hand, digital recording instruments attempt to measure information in an electrical,
digital form.
Analog and Digital Signals
Analogue signal: represented by a continuous function.
Digital signal: represented by a discrete set of values.

Advantages of digital signals over analog ones


 It is possible to create complex, interactive content.
 Stored digital signals do not degrade over time or distance as analog signals do
 Digital data can be efficiently compressed and transmitted across digital networks.
 It is easy to store all types of digital media on a common storage medium.

Analog-to-Digital Converter (ADC)


 Special hardware devices : Analog-to-Digital converters.
 Take analog signals from an analog sensor (e.g., a microphone) and digitally sample the data.

Digital-to-Analog Converter (DAC)


 Playback – the converse operation to analog-to-digital conversion.
 Takes a digital signal, possibly after modification by the computer (e.g., volume change,
equalization).
 Outputs an analog signal that may be played by an analog output device (e.g., a loudspeaker or
CRT display).
Analog-to-Digital Conversion
 The conversion of signals from analog to digital occurs via two main processes: sampling and
quantization.
 The reverse process of converting digital signals to analog is known as interpolation.
Sampling
 Sampling basically involves measuring the analog signal at regular discrete intervals and
recording the value at these points.
 If you reduce T (increase f), the number of samples increases and, correspondingly, so does the
storage requirement, and vice versa.
 T is clearly a critical parameter. Should it be the same for every signal?

Hence, x_s(n) = x(nT), i.e., x_s(1) = x(T), x_s(2) = x(2T), x_s(3) = x(3T), …


Quantization
 Quantization deals with encoding the signal value at every sampled location with a predefined
precision, defined by a number of levels.

More bits → larger number of levels → better quality, but larger storage requirements, and vice
versa.
 How many bits should be used to represent each sample?
 Is this number the same for all signals?
 It actually depends on the type of signal and what its intended use is.
 For example, audio signals that represent music must be quantized with 16 bits, whereas
speech only requires 8 bits.

Bit Rate
 Bit rate is the number of bits produced per second. It is very important for storage and distribution.

Examples
Signal        Sampling rate   Quantization   Bit rate
Speech        8 kHz           8 bits         64 kbps
AM Radio      11 kHz          8 bits         88 kbps
HDTV (1080)   1920 × 1080     12 bits        24.88 Mbits/frame
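The table follows directly from bit rate = sampling rate × bits per sample (× channels), and, for a video frame, bits per frame = width × height × bits per pixel. A quick illustrative check in Python:

```python
def bit_rate(sampling_rate_hz, bits_per_sample, channels=1):
    """Bits produced per second = samples/s x bits/sample x channels."""
    return sampling_rate_hz * bits_per_sample * channels

print(bit_rate(8_000, 8))    # speech: 64,000 bps = 64 kbps
print(bit_rate(11_000, 8))   # AM radio: 88,000 bps = 88 kbps
print(1920 * 1080 * 12)      # HDTV: 24,883,200 bits/frame = 24.88 Mbits/frame
```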
Sampling theorem and Aliasing
What is the rate at which sampling should occur?
To determine the correct number of samples, you have to calculate
 what is called the Nyquist rate, which is twice the maximum frequency occurring in the signal.
 If a signal has a maximum frequency of 10 kHz, it should be sampled at a frequency of 20 kHz.

Sampling theorem and Aliasing


What happens if your sampling frequency is higher than the Nyquist frequency?
 The same analog signal will be reproduced; however, the unnecessary samples will increase
storage and transmission requirements.
 What happens if your sampling frequency is lower than the Nyquist frequency?
 The reproduced signal will differ from the original one because not all the frequency content is
captured during the digitization process.
 This results in artifacts termed aliasing (a term used to describe loss of information during
digitization).
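As an illustrative sketch (not from the slides): a 6 kHz cosine needs a sampling rate of at least 12 kHz. Sampled at only 8 kHz, it produces exactly the same samples as a 2 kHz cosine (6 kHz folds back to 8 − 6 = 2 kHz), so the two tones become indistinguishable after digitization:

```python
import math

f_sample = 8_000                 # sampling rate (Hz); can capture at most 4 kHz
f_tone, f_alias = 6_000, 2_000   # 6 kHz aliases to |8000 - 6000| = 2000 Hz

for n in range(5):
    t = n / f_sample
    high = math.cos(2 * math.pi * f_tone * t)    # under-sampled 6 kHz tone
    low = math.cos(2 * math.pi * f_alias * t)    # genuine 2 kHz tone
    print(f"n={n}: 6 kHz sample {high:+.4f}  vs  2 kHz sample {low:+.4f}")
```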

FILTERING
 From a general point of view, it is a methodology for keeping some frequencies and removing all
other frequencies.
 Analog filter: uses analog electronic circuits, made up of components such as resistors,
capacitors, and operational amplifiers, to produce the required filtering effect.
 Digital filter: uses digital numerical computations on sampled, quantized values of the signal.
 ADC → Processor → DAC

Filtering
Filters are classified into three categories according to their responses:
 Low-pass filters: remove high-frequency content from the input signal while keeping the rest.
 High-pass filters: remove low-frequency content from the input signal while keeping the rest.
 Band-pass filters: output signals containing only the frequencies belonging to a defined band.
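As a hedged sketch of the digital-filter idea, the moving average below is one of the simplest low-pass filters (it is not a specific filter named in the slides): averaging neighboring samples smooths out rapid, high-frequency variation while preserving the slow trend.

```python
def moving_average(signal, window=5):
    """A simple low-pass filter: each output sample is the mean of the
    current sample and up to (window - 1) preceding input samples."""
    out = []
    for i in range(len(signal)):
        start = max(0, i - window + 1)
        chunk = signal[start : i + 1]
        out.append(sum(chunk) / len(chunk))
    return out

# A ramp with alternating +/-3 noise: the output follows the ramp,
# while the high-frequency noise is strongly attenuated.
noisy = [i + ((-1) ** i) * 3 for i in range(10)]
print([round(v, 2) for v in moving_average(noisy)])
```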

Analog and Digital Filters


Advantages of digital filters over analog filters
 A digital filter is programmable.
 Digital filters are easily designed, tested, and implemented on a general-purpose computer or
workstation.
 Digital filters can be combined in parallel or cascaded in series with relative ease by imposing
minimal software requirements.
 Analog filters are subject to drift and are dependent on temperature. Digital filters do not suffer
from these problems.
 Digital filters are more versatile in their ability to process signals in a variety of ways.
Media Representation and Formats
Different Types of Media
 Text
 Digital Images
 Digital Video
 Digital Audio
 Graphics
Digital Image
 An image is a single picture which represents something.
 It may be a picture of a person, of people or animals, or of an outdoor scene, or a
microphotograph of an electronic component, or the result of medical imaging.
 Two very popular methods of producing a digital image are with a digital camera or a flat-bed
scanner.
Image Acquisition Devices
1. CCD (charge-coupled device) camera. Inside any device that has a camera is a sensor matrix that
captures values as picture elements (pixels).
2. Flat-bed scanner. Scans pictures row by row from beginning to end.

Digital Images
 Still or static images.
 They can be combined to create interesting applications, like:
Panoramic photography (Panoramas)
 Segmented panoramas, also called stitched panoramas, are made by joining multiple
photographs with slightly overlapping fields of view to create a panoramic image.
Types of Image
 Bitmap Image(Graphics)
 Vector Image(Graphics)
Bitmap Images
 The most common and comprehensive form of storage for images on computers is bitmap image.
 Bitmaps use combinations of blocks of different colors (known as pixels) to represent an image. Each
pixel is assigned a specific location and color value.
 These are also called pixelized or raster images.
 Software packages for editing bitmapped graphics include:
 Adobe Photoshop
 Paint Shop Pro
A bitmap image consists of a set of pixels; it has a height and width (its dimensions), and the pixels
have a bit depth.
Advantage
 Can have different textures on the drawings; detailed and comprehensive.
Disadvantage
 Large file size.
 Not easy to make modification to objects/drawings.
 Graphics become "blocky" when the size is increased.
Digital Images
Bit depth: refers to the number of bits used to represent each pixel and divided into channels.
 1 bit (1 channel) → binary image (black or white)
 8 bits (1 channel) → grayscale image
 24 bits (3 channels: R, G, B) → color image

One more channel can be used; it is called the α channel.


The α channel suggests a measure of the transparency of the pixel value and is used in image
compositing applications.
1-bit Images
 Each pixel is stored as a single bit (0 or 1), so also referred to as binary image.
 Such an image is also called a 1-bit monochrome image since it contains no color. Also known
as a bi-level image.
8-bit Gray Level Images
 Each pixel has a gray-value between 0 and 255. Each pixel is represented by a single byte; e.g.,
a dark pixel might have a value of 10, and a bright one might be 230.
 Each pixel is usually stored as a byte (a value between 0 to 255), so a 640 480 gray scale image
requires 300 kB of storage (640 * 480 = 307200).
8-bit color Images
 Many systems can make use of 8 bits of color information (the so-called “256 colors”) in
producing a screen image.
 Such image files use the concept of a lookup table to store color information.
 Basically, the image stores not color, but instead just a set of bytes, each of which is actually an
index into a table with 3-byte values that specify the color for a pixel with that lookup table index.
 Great saving in space for 8-bit images over 24-bit ones: a 640 * 480 8-bit color image only
requires 300 kB of storage, compared to 921.6 kB for a 24-bit color image.

Color Look-up Tables (LUTs)


 The idea used in 8-bit color images is to store only the index, or code value, for each pixel. Then,
e.g., if a pixel stores the value 25, the meaning is to go to row 25 in a color look-up table (LUT).
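A minimal sketch of the LUT idea, with a hypothetical 4-entry palette standing in for a full 256-entry table (the palette and image values below are invented for illustration):

```python
# Index -> RGB triple; a real 8-bit LUT would have 256 rows of 3 bytes each.
palette = [
    (0, 0, 0),        # 0: black
    (255, 0, 0),      # 1: red
    (0, 255, 0),      # 2: green
    (255, 255, 255),  # 3: white
]

# An indexed image stores one small index per pixel...
indexed_image = [
    [0, 1, 1],
    [2, 3, 0],
]

# ...and each index is resolved through the LUT at display time.
rgb_image = [[palette[idx] for idx in row] for row in indexed_image]
print(rgb_image[0][1])   # (255, 0, 0)
```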

24 Bit Color Images


 In a color 24-bit image, each pixel is represented by three bytes, usually representing RGB. Also
known as Truecolor.
 This format supports 256 * 256 * 256 possible combined colors, or a total of 16,777,216
possible colors.
 A 640 * 480 24-bit color image would require 921.6 kB of storage without any compression.
 An important point: many 24-bit color images are actually stored as 32-bit images, with the
extra byte of data for each pixel used to store an alpha value representing special effect
information (e.g., transparency).
Aspect Ratio
 Aspect Ratios: Image aspect ratio refers to the width/height ratio of the images, and plays an
important role in standards.
 Different applications require different aspect ratios. Some of the commonly used aspect ratios for
images are:
 3:2 (when developing and printing photographs)
 4:3 (television images)
 16:9 (high-definition images)
 47:20 (anamorphic formats used in cinemas).

Vector Image
 Vector images are based on drawing elements/objects to create an image.
 The elements and objects are stored as a series of commands that define the individual objects.
 Packages for creating vector graphics include:
 Macromedia Freehand MX
 Macromedia Flash MX
 Adobe Illustrator

Advantage
 Small file size.
 Maintain quality as the size of the graphics is increased.
 Easy to edit the drawings as each object is independent of the other.
Disadvantage
 Objects/drawings cannot have texture; they can only have plain colors or gradients, which limits the
level of detail that can be presented in an image.

Resolution
Image resolution is a measure of how finely a device approximates continuous images using a finite
number of pixels.
Different concepts of resolution
 Resolution of scanners and printers is represented by their pixel density (dots per inch or dpi).
 Resolution of video frames and monitors is represented by their pixel dimensions (width × height).
Monitors also have a pixel density measured in dpi.
 Resolution of a digital still camera is represented by the total number of pixels in the largest image
it can record.
A bitmapped (or raster) image has pixel dimensions but no pixel density; its physical size depends
on the pixel density of the device it is to be displayed on:

Physical dimension = Pixel dimension / Device resolution

For example, an image 3000 pixels wide printed at 300 dpi is 10 inches wide.
Image File Sizes
 For a 512 X 512 binary image
The number of bits used in this image
512 X 512 X 1 = 262,144 bits
= 32,768 bytes
= 32.768 Kb
≈ 0.033 Mb
 For a 512 X 512 Greyscale image
The number of bits used in this image
512 X 512 X 8 = 2,097,152 bits
= 262,144 bytes
= 262.14 Kb
≈ 0.262 Mb
 For a 512 X 512 RGB image
The number of bits used in this image
512 X 512 X 8 X 3 = 6,291,456 bits
= 786,432 bytes
= 786.432 Kb
≈ 0.786 Mb
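All three figures come from the same formula, size in bytes = width × height × bits per pixel ÷ 8. A small illustrative helper, using the decimal convention (1 kB = 1,000 bytes) that the numbers above use:

```python
def image_size_bytes(width, height, bits_per_pixel):
    """Uncompressed image size: width x height pixels at the given depth."""
    return width * height * bits_per_pixel / 8

for name, bpp in [("binary", 1), ("greyscale", 8), ("RGB", 24)]:
    size = image_size_bytes(512, 512, bpp)
    print(f"512 x 512 {name}: {size:,.0f} bytes = {size / 1000:.2f} kB")
```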
Image Dithering
 Dithering is often used for displaying monochrome images
 Dithering is used to calculate patterns of dots such that values from 0 to 255 correspond to
patterns that are more and more filled at darker pixel values, for printing on a 1-bit printer.
 To convert color to black and white, first convert to grayscale:
I = 0.299R + 0.587G + 0.114B
 This formula reflects the fact that green is more representative of perceived brightness than blue.
Image Dithering
 Reducing Effects of Quantization by Dithering
 Threshold dithering
 Error diffusion dither (Floyd-Steinberg)
 Ordered dithering
 Pattern dithering

Threshold Dithering
 For every pixel: If the intensity < 128, replace with black, else replace with white
- 128 is the threshold
- This is the naïve version of the algorithm
 To keep the overall image brightness the same, you should:
- Compute the average intensity over the image
- Use a threshold that gives that average
- For example, if the average intensity is 150, choose the threshold so that the dithered image keeps
an average near 150: pixels above the threshold are replaced with white, the rest with black
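A minimal Python sketch of both variants; the brightness-preserving threshold choice below is one plausible reading of the rule above (pick the cut-off so the fraction of white pixels matches the mean intensity), not a prescribed algorithm:

```python
def threshold_dither(gray, threshold=128):
    """Naive threshold dithering: map each 0-255 gray value to black (0)
    or white (255) by comparing against one global threshold."""
    return [[255 if p >= threshold else 0 for p in row] for row in gray]

def brightness_preserving_threshold(gray):
    """Choose a threshold so the fraction of pixels turned white roughly
    equals mean_intensity / 255, keeping overall brightness similar."""
    flat = sorted(p for row in gray for p in row)
    mean = sum(flat) / len(flat)
    cut = int(len(flat) * (1 - mean / 255))   # index of first "white" pixel
    return flat[min(cut, len(flat) - 1)]

gray = [[20, 75, 150, 230], [90, 180, 40, 200]]
print(threshold_dither(gray, brightness_preserving_threshold(gray)))
```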
Ordered Dithering
 Break the image into small blocks (n x n)
 Define a threshold matrix (n x n):
 Use a different threshold for each pixel of the block
 Compare each pixel to its own threshold (if pixel_old ≥ threshold then pixel_new = 255, else
pixel_new = 0), as in the example below:
Input block (pixel intensities):
 20  75  80  90   150 113  50 160
150  90 180  84   155  80 220 100
 50 176  16 200   220 180  15 130
235 128 190  70   220 110 220  85

Threshold matrix (tiled over the image):
  0 128  32 160
192  64 224  96
 48 176  16 144
240 112 208  80

Output (255 where pixel ≥ threshold, else 0):
255   0 255   0   255   0 255 255
  0 255 255   0     0 255   0 255
255 255 255 255   255 255   0   0
  0 255   0   0     0   0 255 255

Pseudocode:
An algorithm for ordered dither, with an n × n dither matrix, is as follows:
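The slide's own algorithm listing was not preserved in this text; below is a minimal Python sketch of the ordered-dither rule described above, assuming the 4 × 4 threshold matrix from the worked example:

```python
# The 4 x 4 threshold (dither) matrix from the example above.
DITHER = [
    [0, 128, 32, 160],
    [192, 64, 224, 96],
    [48, 176, 16, 144],
    [240, 112, 208, 80],
]

def ordered_dither(gray, matrix=DITHER):
    """Tile the n x n dither matrix over the image: pixel (r, c) is
    compared against matrix[r % n][c % n] and set to 255 or 0."""
    n = len(matrix)
    return [
        [255 if gray[r][c] >= matrix[r % n][c % n] else 0
         for c in range(len(gray[r]))]
        for r in range(len(gray))
    ]
```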

Pattern Dithering
 Compute the intensity of each sub-block and index a pattern.
 NOT the same as before
 Here, each sub-block has one of a fixed number of patterns – pixel is determined only by
average intensity of sub-block
 In ordered dithering, each pixel is checked against the dithering matrix before being turned
on.

Video Concept
 Video is an excellent tool for delivering multimedia.
 Video places the highest performance demand on the computer and its memory and storage.
 Digital video has replaced analog video as the method of choice for making and delivering video
for multimedia.
Video
 Since video is created from a variety of sources, we begin with the signals themselves
 Analog video is represented as a continuous (time-varying) signal
 Digital video is represented as a sequence of digital images.
Analogue Video
 Video information is stored using television video signals, film, videotape, or other non-computer
media.
 Each frame is represented by a fluctuating voltage signal known as an analogue waveform.

Digitizing Video
 Digital video combines features of graphics and audio to create dynamic content for multimedia
products.
 Video is simply moving pictures.
 Digitized video can be edited more easily.
 Digitized video files can be extremely large.
Digitizing Video
 A video source (video camera ,VCR, TV or videodisc) is connected to a video capture card in a
computer
 As the video source is played, the analog signal is sent to the video card and converted into a
digital file (including sound from the video).
Digital Video
Video is represented as a sequence of discrete images (frames) shown in quick succession.

The Video Image


 A video image is a projection of a 3-D scene onto a 2-D plane.
 A 3-D scene consisting of a number of objects, each with depth, texture, and illumination, is
projected onto a plane to form a 2-D representation of the scene.
 The 2-D representation contains varying texture and illumination but no depth information.
In order to represent and process a visual scene digitally, it is necessary to sample the real scene
spatially and temporally.

Representation of Digital Video


Two important properties govern video representation
1. Frame rate: rate at which the images are shown or the number of frames shown per second.
2. Scanning format: converting the video to a 1D signal

Analog Video Scanning


 Interlaced Scanning
Requires lower bandwidth; however, it may produce flicker and artifacts.
 Progressive Scanning
Requires more bandwidth; however, it does not produce flicker or artifacts.

Why is RGB converted to YUV before sending, and why is YUV converted back to RGB at the
receiver?
1. Better compression efficiency
2. Video standard compatibility
3. Supports both color and grayscale
YUV is turned back to RGB because display devices operate in the RGB color model; RGB is
necessary for screen rendering, since it directly controls how pixels emit light.
Conversion to YUV
 Decouple the intensity information (Y, or luminance) from the color information (UV, or
chrominance).
 The separation was intended to reduce the transmission bandwidth and is based on experiments
with the human visual system, which suggests that humans are more tolerant to color distortions.
 In other words, reducing the color resolution does not affect our perception.

YUV color space


 If we have the R, G, and B channels, Y, U, and V are calculated as shown below.
 If we have Y, U, and V, then R, G, and B are recovered with the inverse transform.
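The formulas themselves were not preserved in this text. The commonly used analog YUV equations are Y = 0.299R + 0.587G + 0.114B, U = 0.492(B − Y), V = 0.877(R − Y), with the inverse R = Y + 1.140V, G = Y − 0.395U − 0.581V, B = Y + 2.032U. A small sketch assuming these coefficients (the slides may use slightly different constants):

```python
def rgb_to_yuv(r, g, b):
    """Analog YUV: luminance Y plus chrominance scaled from (B-Y), (R-Y)."""
    y = 0.299 * r + 0.587 * g + 0.114 * b
    u = 0.492 * (b - y)
    v = 0.877 * (r - y)
    return y, u, v

def yuv_to_rgb(y, u, v):
    """Inverse transform, used at the receiver for display."""
    r = y + 1.140 * v
    g = y - 0.395 * u - 0.581 * v
    b = y + 2.032 * u
    return r, g, b

print(rgb_to_yuv(255, 0, 0))   # pure red: high V, negative U
```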

Analog Display Interfaces


Analog video signals are often transmitted in one of three different interfaces:
1. Composite Video
 When connecting to TVs or VCRs, composite video uses only one wire (and hence one
connector, such as a BNC connector at each end of a coaxial cable), and video color signals are
mixed, not sent separately.
 The audio signal is also added to this single signal.
2. S-Video
 S-video (separated video, or super-video) uses two wires: one for luminance and another for a
composite chrominance signal.
 The reason for placing luminance into its own part of the signal is that black-and-white
information is most important for visual perception.
 Therefore, the color information transmitted can be much less accurate than the intensity
information.
3. Component Video
 Higher end video systems, such as for studios, make use of three separate video signals for
the red, green, and blue image planes.
 This is referred to as component video.
 This kind of system has three wires (and connectors) connecting the camera or other devices
to a TV or monitor.
Digital Display Interfaces
 With the rise of digital video processing and of monitors that directly accept digital video signals,
there is great demand for display interfaces that transmit digital video signals.
 Today, the most widely used digital video interfaces include Digital Visual Interface (DVI), High-
Definition Multimedia Interface (HDMI), and DisplayPort.

Digital Visual Interface (DVI)


 Digital Visual Interface (DVI) was developed by the Digital Display Working Group for transferring
digital video signals from a computer’s video card to a monitor.
 It carries uncompressed digital video and can be configured to support multiple modes, including
DVI-D (digital only), DVI-A (analog only), or DVI-I (digital and analog).
 DVI allows a maximum 16:9 screen resolution of 1920×1080 pixels.

High-Definition Multimedia Interface (HDMI)


 HDMI is a newer digital audio/video interface developed to be backward-compatible with DVI.
 HDMI, however, differs from DVI in the following aspects:
1) HDMI does not carry analog signals and hence is not compatible with VGA.
2) DVI is limited to the RGB color range (0–255).
3) HDMI supports digital audio, in addition to digital video.
 HDMI allows a maximum screen resolution of 2560×1600 pixels.

DisplayPort
 DisplayPort is a digital display interface. It is the first display interface that uses packetized data
transmission, like the Internet or Ethernet.
 DisplayPort can achieve a higher resolution with fewer pins than the previous technologies.
 The use of data packets also allows DisplayPort to be extensible, i.e., new features can be
added over time without significant changes to the physical interface itself.
 DisplayPort can be used to transmit audio and video together, or either of them alone.
 Compared with HDMI, DisplayPort has slightly more bandwidth.

Introduction of video compression


There are two main types of video compression
 Spatial compression (intra-frame): each frame is compressed individually using image
compression techniques.
 Temporal compression (inter-frame): a group of frames is compressed by only storing the
differences between them.
 Chrominance subsampling is nearly always applied before any compression.
 Spatial compression: usually based on the discrete cosine transform (DCT) like JPEG.
 Temporal compression: in which certain frames are selected as key frames. Often, key
frames are specified to occur at regular intervals. These key frames are left without compression.
Each frame between the key frames is replaced by a difference frame, as sketched below.
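As an illustrative sketch of the difference-frame idea (not any specific codec):

```python
def difference_frame(prev, curr):
    """Store only the per-pixel change from the previous frame; the runs
    of zeros in unchanged areas then compress very well."""
    return [[c - p for p, c in zip(pr, cr)] for pr, cr in zip(prev, curr)]

def reconstruct(prev, diff):
    """Receiver side: previous frame + stored differences = current frame."""
    return [[p + d for p, d in zip(pr, dr)] for pr, dr in zip(prev, diff)]

key = [[10, 10, 10], [10, 10, 10]]       # key frame, stored in full
frame2 = [[10, 10, 12], [10, 11, 10]]    # only two pixels changed
delta = difference_frame(key, frame2)
print(delta)                             # [[0, 0, 2], [0, 1, 0]] -- mostly zeros
assert reconstruct(key, delta) == frame2
```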
What is Sound
 Sound is a continuous wave that travels through the air.
 The wave is made up of pressure differences.
 Sound is detected by measuring the pressure level at a location.
 Sound waves have normal wave properties (reflection, refraction, diffraction, etc.).

Sound Facts
Wave Characteristics
 Frequency: Represents the number of periods in a second and is measured in hertz (Hz) or
cycles per second.
 Human hearing frequency range: 20Hz to 20kHz (audio)
 Amplitude: The measure of displacement of the air pressure wave from its mean.

Digital Representation of Audio


 Acoustics is the study of sound: generation, transmission, and reception of sound waves
 The process of conversion to digital sound is known as pulse code modulation (PCM).
Sampling process basically involves:
 Measuring the analog signal at regular discrete intervals
 Recording the values at these points
Sampling Rate
 For audio, typical sampling rates are from 8 kHz (8,000 samples per second) to 48 kHz.

Nyquist’s Sampling Theorem


The Sampling Frequency is critical to the accurate reproduction of a digital version of an analog
waveform
 Nyquist’s Sampling Theorem states that the sampling frequency for a signal must be at least
twice the highest frequency component in the signal.
Quantization
Quantization: Divide the vertical axis (signal strength - voltage) into pieces. For example, 8-bit
quantization divides the vertical axis into 256 levels. 16 bit gives you 65536 levels.
 The lower the quantization, the lower the quality of the sound.
Linear vs. Non-Linear quantization:
 If the scale used for the vertical axis is linear, we say it is linear quantization;
 If it is logarithmic, then we call it non-linear (μ-law, or A-law in Europe).
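As a hedged sketch of non-linear quantization: μ-law companding applies a logarithmic curve before uniform quantization, so quiet samples get proportionally more of the available levels than they would on a linear scale (the formula below is the standard μ-law compression curve with the usual μ = 255):

```python
import math

def mu_law_compress(x, mu=255):
    """Map x in [-1, 1] through the logarithmic mu-law curve:
    sgn(x) * ln(1 + mu*|x|) / ln(1 + mu)."""
    return math.copysign(math.log1p(mu * abs(x)) / math.log1p(mu), x)

print(mu_law_compress(0.01))   # ~0.23: quiet samples are boosted
print(mu_law_compress(0.5))    # ~0.88: loud samples are squeezed together
```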

Characteristic of Sound
 The common characteristic used to describe audio signals is the number of channels
(Dimensionality), which may be one (mono), two (stereo), or multichannel (surround sound).
 Mono and stereo sound technology are the most commonly used.
Surround Sound
 Surround sound aims to create a Multi-dimensional sound experience.
 It uses multiple audio tracks to engulf the audience in many sources of sound, making them feel
as if they are in the middle of the action.
 Surround sound is mostly used in movie theaters. It allows the audience to hear sounds coming
from all around them.
 The audience becomes completely captivated by the movie and is no longer aware of the real
world.
Surround sound formats rely on multiple dedicated speakers that physically surround the audience.
 Movie theaters today use the THX 10.2 surround sound system.
 One center speaker carries most of the dialog.
 The left and right front speakers carry most of the soundtrack (music and sound effects).
 A pair of surround sound speakers are placed to the side of (and slightly above) the audience to
provide the surround sound and ambient effects.
 Finally, a subwoofer can be used to reproduce the low and very low frequency effects.

Multimedia Authoring
What is meant by multimedia authoring tools?
 Authoring tools provide an integrated environment for binding together the different elements of a
Multimedia production.

What are the requirements of multimedia authoring tools?


 Creating and editing the individual media items that make up the presentation, and making them
production-ready.
 Assembling the items into a coherent presentation, including the specification of the temporal and
spatial layout of the media elements.
 Specifying the interaction between the media elements, which often amounts to also defining the
flow of content as a viewer interacts with the presentation.
Intramedia Processing
 Related to the production, enhancement, or editing of individual media type.
Raw Media → Intramedia Processing → Production-Ready Media
 For each media type, there are specific operations and dedicated software.

Intramedia Issues Related to Images


 Captured or scanned Images may be edited in the following forms
o Cropping.
o Scaling and Resizing.
o Filtering.
o Retouching.
o Adding text.
o Multiple image composition.
 Adobe Photoshop and Paint Shop Pro are two famous image-editing software packages.

Intramedia Issues Related to Video


 Video may be captured or produced by an animation software, and some standard operations can
be performed during its editing.
o Changing the video properties.
o Cutting, editing out, and combining parts of video into a tight sequence.
o Creating titles that can scroll on and off screen.
o Creating transitions.
o Using filters.
o Synchronizing audio to video.
o Compressing video for a required bandwidth.
 Adobe Premiere, Apple Final Cut Pro, and Avid Media Composer are three famous video-editing
software packages.

Intramedia Issues Related to Audio


 It is recorded in conjunction with the video or separately, and it may be edited in the following
forms
o Synchronizing it to video.
o Noise reduction.
o Downsampling and compression.

Intermedia Processing
 Responsible for assembling different media types into one product. An example of this software is
Adobe Director.
 Common requirements of intermedia authoring tools
 Spatial placement control
 Temporal control
 Interactivity setup
Multimedia Authoring Paradigms
An authoring paradigm or an authoring metaphor can be referred to as the methodology by which an
authoring tool allows an author to accomplish creation of content. There are several metaphors.
Timeline
 Timelines are a useful way of representing multimedia data during the course of a presentation or
application.
o Time is represented along the x-axis
o Tracks are represented along the y-axis
 The developer can move objects left and right to change the order of the information and can
lengthen or shorten the bars to change their duration
Scripting
 Scripting languages are cut-down versions of complete programming languages
- They tend to have fewer features and are therefore easier to learn
 Scripting models allow the developer to write small scripts (programs) which can be associated
with a multimedia object
- e.g. you may write a script to make a graphic image move across the screen or to make a
window pop up when an item is clicked.
Flow Control
 It looks like a flowchart.
 Each part is represented by an icon (a symbolic picture)
 Each icon does a specific task, e.g. plays a sound
 Icons are then linked together to form complete applications
 Can easily visualise the structure and navigation of the final application
Cards
 In these authoring systems, elements are organized as pages of a book or a stack of cards.
 The authoring system lets you link these pages or cards into organized sequences.
 You can jump, on command, to any page you wish in a structured navigation pattern.
 A page may contain hyperlinks to other pages to provide navigation or pages may be sequentially
viewed
 There may be global parameters that can be set to affect the entire application
- e.g. background colour, default font, etc

Nature of the Color


Light is an electromagnetic wave; its color is characterized by its wavelength.

Color Spaces
 There are different color spaces for images and video, such as:
o RGB
o CMYK
o YUV
o HSV
RGB Color Spaces
 The RGB color space is a linear color space that formally uses single wavelength primaries
(645.16 nm for R, 526.32 nm for G, and 444.44 nm for B).
 The RGB color space is common in display devices, and it is device-dependent.

CMYK Color Spaces


 Subtractive color space.
 Used in printing devices and it is device dependent.
 Uses cyan, magenta, and yellow, which are the complements of red, green, and blue.

HSV Color Spaces


 Unlike RGB and CMYK, it is not a linear system. It is a color space that reflects an artistic
view. Any color is described by its hue, saturation, and value.
o Hue - The color we see (red, green, purple).
o Saturation - How far is the color from gray (pink is less saturated than red, sky blue is less
saturated than royal blue).
o Brightness/Lightness (Luminance) - How bright is the color.
