SCS1302 COMPUTER GRAPHICS & MULTIMEDIA SYSTEMS

(Common to CSE & IT)

UNIT V MULTIMEDIA BASICS AND TOOLS

Introduction to multimedia - Compression & Decompression - Data & File Format standards -
Digital voice and audio - Video image and animation. Introduction to Photoshop – Workplace
– Tools – Navigating window – Importing and exporting images – Operations on Images –
resize, crop, and rotate. Introduction to Flash – Elements of flash document – flash
environment – Drawing tools – Flash animations – Importing and exporting - Adding sounds –
Publishing flash movies – Basic action scripts – GoTo, Play, Stop, Tell Target.

INTRODUCTION TO MULTIMEDIA

Multimedia is a combination of text, graphic art, sound, animation and video elements.

The IBM Dictionary of Computing describes multimedia as "comprehensive material,
presented in a combination of text, graphics, video, animation and sound. Any system that is
capable of presenting multimedia is called a multimedia system". A multimedia application
accepts input from the user by means of a keyboard, voice or pointing device. Multimedia
applications involve using multimedia technology for business, education and entertainment.
Multimedia is now available on standard computer platforms. It is the best way to gain the
attention of users and is widely used in many fields as follows:

• Business - In any business enterprise, multimedia exists in the form of
advertisements, presentations, video conferencing, voice mail, etc.
• Schools - Multimedia tools for learning are widely used these days. People of all
ages learn easily and quickly when information is presented to them as a visual
treat.
• Home - PCs equipped with CD-ROMs and game machines hooked up with TV
screens have brought home entertainment to new levels. These multimedia titles
viewed at home would probably be available on the multimedia highway soon.
• Public places - Interactive maps at public places like libraries, museums, airports
and stand-alone terminals.
• Virtual Reality (VR) - This technology helps us feel a 'real life-like' experience.
Games using virtual reality effects are very popular.

Multimedia Elements

High-impact multimedia applications, such as presentations, training and messaging, require


the use of moving images such as video and image animation, as well as sound (from the
video images as well as overlaid sound by a narrator) intermixed with document images and
graphical text displays. Multimedia applications require dynamic handling of data consisting
of a mix of text, voice, audio components, video components, and image animation.
Integrated multimedia applications allow the user to cut sections of all or any of these
components and paste them in a new document or in another application such as an animated
sequence of events, a desktop publishing system, or a spreadsheet.
Facsimile Facsimile transmissions were the first practical means of transmitting document
images over telephone lines. The basic technology, now widely used, has evolved to allow
higher scanning density for better-quality fax.

Document images Document images are used for storing business documents that must be
retained for long periods of time or may need to be accessed by a large number of people.
Providing multimedia access to such documents removes the need for making several copies
of the original for storage or distribution.

Photographic images Photographic images are used for a wide range of applications, such
as employee records for instant identification at a security desk, real estate systems with
photographs of houses in the database containing the description of houses, medical case
histories, and so on.

Geographic information systems (GIS) maps

Maps created in a GIS are widely used for natural resources and wildlife
management as well as urban planning. These systems store the geographical information of
the map along with a database containing information relating highlighted map elements
with statistical or item information such as wildlife statistics or details of the floors and
rooms and workers in an office building.

Voice commands and voice synthesis Voice commands and voice synthesis are used for
hands-free operation of a computer program. Voice synthesis is used for presenting the
results of an action to the user in a synthesized voice. Applications such as a patient
monitoring system in a surgical theatre will be prime beneficiaries of these capabilities.
Voice commands allow the user to direct computer operation by spoken commands.

Audio message Annotated voice mail already uses audio or voice messages as attachments to
memos and documents such as maintenance manuals.

Video messages Video messages are being used in a manner similar to annotated voice mail.

Holographic images

All of the technologies so far essentially present a flat view of information. Holographic
images extend the concept of virtual reality by allowing the user to get "inside" a part, such
as an engine, and view its operation from the inside.

Fractals

Fractals started as a technology in the early 1980s but have received serious attention only
recently. This technology is based on synthesizing and storing algorithms that describe the
information.

COMPRESSION AND DECOMPRESSION


Compression is a way of making files take up less space. In multimedia systems, in
order to manage large multimedia data objects efficiently, these data objects need to be
compressed to reduce the file size for storage of these objects.
Compression tries to eliminate redundancies in the pattern of data.
For example, if a black pixel is followed by 20 white pixels, there is no need to store all 20
white pixels. A coding mechanism can be used so that only the count of the white pixels is
stored. Once such redundancies are removed, the data object requires less time for
transmission over a network. This in turn significantly reduces storage and transmission
costs.

TYPES OF COMPRESSION
Compression and decompression techniques are utilized for a number of applications, such
as facsimile system, printer systems, document storage and retrieval systems, video
teleconferencing systems, and electronic multimedia messaging systems. An important
standardization of compression algorithm was achieved by the CCITT when it specified
Group 2 compression for facsimile systems. When information is compressed, the
redundancies are removed. Sometimes removing redundancies is not sufficient to reduce the
size of the data object to manageable levels. In such cases, some real information is also
removed. The primary criterion is that removal of the real information should not perceptibly
affect the quality of the result. In the case of video, compression causes some information to
be lost; some information at a detail level is considered not essential for a reasonable
reproduction of the scene. This type of compression is called lossy compression. Audio
compression, on the other hand, is not lossy. It is called lossless compression.

Lossless Compression
In lossless compression, data is not altered or lost in the process of compression or
decompression. Decompression generates an exact replica of the original object. Text
compression is a good example of lossless compression. The repetitive nature of text, sound
and graphic images allows replacement of repeated strings of characters or bits by codes.
Lossless compression techniques are good for text data and for repetitive data in images such
as binary images and gray-scale images.

Some of the commonly accepted lossless standards are given below:


• Packbits encoding (Run-length encoding)
• CCITT Group 3 1D
• CCITT Group 3 2D
• CCITT Group 4
• Lempel-Ziv and Welch (LZW) algorithm.
Lossy compression means that some loss occurs while compressing information objects.
Lossy compression is used for compressing audio, gray-scale or color images, and video
objects in which absolute data accuracy is not necessary. The idea behind lossy
compression is that the human eye fills in the missing information in the case of video. But
an important consideration is how much information can be lost so that the result is not
noticeably affected. For example, in a grayscale image, if several bits are missing, the information is still
perceived in an acceptable manner as the eye fills in the gaps in the shading gradient. Lossy
compression is applicable in medical screening systems, video tele-conferencing, and
multimedia electronic messaging systems.
Lossy compression techniques can be used alone or in combination with other
compression methods in a multimedia object consisting of audio, color images, and video as
well as other specialized data types. The following lists some of the lossy compression
mechanisms:
o Joint Photographic Experts Group (JPEG)
o Moving Picture Experts Group (MPEG)
o Intel DVI
o CCITT H.261 (P * 24) Video Coding Algorithm
o Fractals.
Binary Image compression schemes

Binary Image Compression Scheme is a scheme by which a binary image containing black
and white pixels is generated when a document is scanned in a binary mode. The schemes are
used primarily for documents that do not contain any continuous-tone information or where
the continuous-tone information can be captured in a black and white mode to serve the
desired purpose. The schemes are applicable in office/business documents, handwritten text,
line graphics, engineering drawings, and so on. Let us view the scanning process. A scanner
scans a document as sequential scan lines, starting from the top of the page. A scan line is a
complete line of pixels, of height equal to one pixel, running across the page. It scans the
first line of pixels (scan line), then scans the second line, and works its way down to the last scan
line of the page. Each scan line is scanned from left to right of the page, generating black and
white pixels for that scan line.
This uncompressed image consists of a single bit per pixel containing black and white
pixels. Binary 1 represents a black pixel, binary 0 a white pixel. Several schemes have been
standardized and used to achieve various levels of compressions. Let us review the more
commonly used schemes.

1. Packbits Encoding (Run-Length Encoding)

It is a scheme in which a consecutive repeated string of characters is replaced by two bytes.
It is the simplest and earliest of the data compression schemes developed. It does not need a
standard. It is used to compress black and white (binary) images. Of the two bytes which
replace the run, the first byte contains a number representing the number of times the
character is repeated, and the second byte contains the character itself. In some cases, one
bit is used to represent the pixel value, and the other seven bits to represent the run length.
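As a rough illustration of the idea (this sketch is not part of the original notes and does not follow the exact Packbits byte layout), a minimal run-length encoder for a row of binary pixels could look like the following Python sketch; the function name is illustrative.

    def rle_encode(pixels):
        # Encode a sequence of binary pixels (0 = white, 1 = black) as (count, value) pairs.
        runs = []
        if not pixels:
            return runs
        current, count = pixels[0], 1
        for p in pixels[1:]:
            if p == current and count < 255:   # keep each count within one byte
                count += 1
            else:
                runs.append((count, current))
                current, count = p, 1
        runs.append((count, current))
        return runs

    # Example: 1 black pixel followed by 20 white pixels
    print(rle_encode([1] + [0] * 20))   # [(1, 1), (20, 0)]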

2. CCITT Group 3 1-D Compression


This scheme is based on run-length encoding and assumes that a typical scan line has long
runs of the same color. This scheme was designed for black and white images only, not for
gray-scale or color images. The primary application of this scheme is in facsimile and early
document imaging systems.

Huffman Encoding
A modified version of run-length encoding is Huffman encoding. It is used in many
software-based document imaging systems. It is used for encoding the pixel run lengths in
CCITT Group 3 1-D and Group 4. It is a variable-length encoding: it generates the shortest code for
frequently occurring run lengths and longer codes for less frequently occurring run lengths.

Mathematical Algorithm for Huffman encoding:

The Huffman encoding scheme is based on a coding tree. It is constructed based on the
probability of occurrence of white pixels or black pixels in the run length or bit stream.
The table below shows the CCITT Group 3 tables giving codes for white run lengths and black
run lengths.
For example, from the above table, the run-length code of 16 white pixels is 101010, and of
16 black pixels 0000010111. Statistically, the occurrence of 16 white pixels is more frequent
than the occurrence of 16 black pixels. Hence, the code generated for 16 white pixels is
much shorter. This allows for quicker decoding. For this example, the tree structure could be
constructed.
The codes for run lengths greater than 1792 pixels are identical for black and white pixels. A new
code indicates reversal of color, that is, the pixel color code is relative to the color of the
previous pixel sequence. The following table shows the codes for pixel sequences larger
than 1792 pixels.

CCITT Group 3 compression utilizes Huffman coding to generate a set of make-up codes
and a set of terminating codes for a given bit stream. Make-up codes are used to represent
run length in multiples of 64 pixels. Terminating codes are used to represent run lengths of
less than 64 pixels. As shown in the above table, run-length codes for black pixels are
different from the run-length codes for white pixels. For example, the run-length code for 64
white pixels is 11011. The run length code for 64 black pixels is 0000001111. Consequently,
the run length of 132 white pixels is encoded by the following two codes: Makeup code for
128 white pixels - 10010 Terminating code for 4 white pixels - 1011

The compressed bit stream for 132 white pixels is 100101011, a total of nine bits. Therefore
the compression ratio is approximately 14.7, the total number of bits in the uncompressed run
(132, at one bit per pixel) divided by the number of bits used to code it (9).
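The arithmetic of this example can be checked with a short sketch (not part of the original notes); the two code strings are the ones quoted above.

    # Worked check of the 132-white-pixel example
    makeup_128_white = "10010"       # make-up code for 128 white pixels
    terminating_4_white = "1011"     # terminating code for 4 white pixels

    encoded = makeup_128_white + terminating_4_white
    print(encoded, len(encoded))     # 100101011  9 bits

    # 132 pixels at 1 bit/pixel uncompressed versus 9 compressed bits
    print(132 / len(encoded))        # about 14.7 : 1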

JOINT PHOTOGRAPHIC EXPERTS GROUP COMPRESSION (JPEG)

The ISO and CCITT working committees joined together and formed the Joint Photographic Experts
Group. It is focused exclusively on still image compression. Another joint committee,
known as the Motion Picture Experts Group (MPEG), is concerned with full motion video
standards. JPEG is a compression standard for still color images and grayscale images,
otherwise known as continuous tone images.
JPEG has been released as an ISO standard in two parts:
Part 1 specifies the modes of operation, the interchange formats, and the
encoder/decoder specifications for these modes, along with substantial implementation
guidelines.
Part 2 describes compliance tests which determine whether the implementation of an
encoder or decoder conforms to the standard specification of Part 1 to ensure
interoperability of systems compliant with JPEG standards.
Requirements addressed by JPEG:
o The design should address image quality.
o The compression standard should be applicable to practically any kind of
continuous-tone digital source image.
o It should be scalable from completely lossless to lossy ranges to adapt it.
o It should provide sequential encoding.
o It should provide for progressive encoding.
o It should also provide for hierarchical encoding.
o The compression standard should provide the option of lossless encoding so
that images can be guaranteed to provide full detail at the selected resolution
when decompressed.

Definitions in the JPEG Standard


The JPEG standards have three levels of definition as follows:
* Baseline system
* Extended system
* Special lossless function.

The baseline system must reasonably decompress color images, maintain a high
compression ratio, and handle from 4 bits/pixel to 16 bits/pixel. The extended system covers
the various encoding aspects such as variable-length encoding, progressive encoding, and
the hierarchical mode of encoding. The special lossless function is also known as predictive
lossless coding. It ensures that, at the resolution at which the image is decompressed, there is
no loss of any detail that was in the original source image.

Overview of JPEG Components JPEG standard components are:

(i) Baseline Sequential Codec (ii) DCT Progressive Mode
(iii) Predictive Lossless Encoding (iv) Hierarchical Mode.

These four components describe four different levels of JPEG compression. The
baseline sequential codec defines a rich compression scheme; the other three modes describe
enhancements to this baseline scheme for achieving different results. Some of the terms used
in JPEG methodologies are:

Discrete Cosine Transform (DCT)

DCT is closely related to Fourier transforms. Fourier transforms are used to represent a two-
dimensional sound signal. DCT uses a similar concept to reduce the gray-scale level or color
signal amplitudes to equations that require very few points to locate the amplitude; the
Y-axis locates the amplitude and the X-axis locates the frequency.

DCT Coefficients

The output amplitudes of the set of 64 orthogonal basis signals are called DCT
coefficients. Quantization This is a process that attempts to determine what information can
be safely discarded without a significant loss in visual fidelity. It uses the DCT coefficients and
provides many-to-one mapping. The quantization process is fundamentally lossy due to its
many-to-one mapping.

De Quantization This process is the reverse of quantization. Note that since quantization
used a many-to-one mapping, the information lost in that mapping cannot be fully recovered.
Entropy Encoder / Decoder Entropy is defined as a measure of randomness, disorder, or
chaos, as well as a measure of a system's ability to undergo spontaneous change. The
entropy encoder compresses quantized DCT coefficients more compactly based on their
spatial characteristics. The baseline sequential codec uses Huffman coding. Arithmetic
coding is another type of entropy encoding.
Huffman Coding Huffman coding requires that one or more sets of Huffman code tables be
specified by the application for encoding as well as decoding. The Huffman tables may be
pre-defined and used within an application as defaults, or computed specifically for a given image.

JPEG Methodology The JPEG compression scheme is lossy, and utilizes a forward discrete
cosine transform (forward DCT mathematical function), a uniform quantizer, and entropy
encoding. The DCT function removes data redundancy by transforming data from a spatial
domain to a frequency domain; the quantizer quantizes DCT coefficients with weighting
functions to generate quantized DCT coefficients optimized for the human eye; and the
entropy encoder minimizes the entropy of the quantized DCT coefficients. The JPEG method is
a symmetric algorithm. Here, decompression is the exact reverse process of compression.
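A minimal sketch of the forward DCT / quantization / dequantization / inverse DCT path for one 8x8 block is shown below (not part of the original notes). It assumes NumPy and SciPy are available and uses a single placeholder quantization step instead of the standard JPEG quantization tables.

    import numpy as np
    from scipy.fft import dctn, idctn   # type-II DCT and its inverse

    def jpeg_like_block(block, q=16):
        # Forward DCT -> uniform quantization (lossy, many-to-one) -> dequantization -> inverse DCT
        coeffs = dctn(block - 128.0, norm="ortho")           # level shift, then 2-D DCT
        quantized = np.round(coeffs / q)
        restored = idctn(quantized * q, norm="ortho") + 128.0
        return quantized, restored

    block = np.random.randint(0, 256, (8, 8)).astype(float)
    quantized, restored = jpeg_like_block(block)
    print(np.count_nonzero(quantized), "non-zero coefficients out of 64")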

Moving Picture Experts Group Compression


The MPEG standards consist of a number of different standards. The MPEG-2 suite of
standards consists of standards for MPEG-2 Video, MPEG-2 Audio and MPEG-2 Systems.
It is also defined at different levels, called profiles. The main profile is designed to cover the
largest number of applications. It supports digital video compression in the range of 2 to 15
Mbits/sec. It also provides a generic solution for television worldwide, including cable,
direct broadcast satellite, fibre optic media, and optical storage media (including digital
VCRs).

MPEG Coding Methodology

The above requirements can be achieved only by incremental coding of successive
frames, known as interframe coding. If information is to be accessed randomly by frame,
coding confined to a specific frame is required; this is known as intraframe coding. The
MPEG standard addresses these two requirements by providing a balance between
interframe coding and intraframe coding. The MPEG standard also provides for recursive
and non-recursive temporal redundancy reduction.

The MPEG video compression standard provides two basic schemes: discrete-transform-
based compression for the reduction of spatial redundancy and block-based motion
compensation for the reduction of temporal (motion) redundancy. During the initial stages of
DCT compression, both the full motion MPEG and still image JPEG algorithms are
essentially identical. First an image is converted to the YUV color space (a
luminance/chrominance color space similar to that used for color television). The pixel data
is then fed into a discrete cosine transform, which creates a scalar quantization (a two-
dimensional array representing various frequency ranges represented in the image) of the
pixel data.

Following quantization, a number of compression algorithms are applied, including run-
length and Huffman encoding. For full motion video (MPEG 1 and 2), several more levels of
block-based motion-compensated techniques are applied to reduce temporal redundancy,
with both causal and noncausal coding to further reduce spatial redundancy. The MPEG
algorithm for spatial reduction is lossy and is defined as a hybrid which employs motion
compensation, forward discrete cosine transform (DCT), a uniform quantizer, and Huffman
coding. Block-based motion compensation is utilized for reducing temporal redundancy (i.e.
to reduce the amount of data needed to represent each picture in a video sequence). Motion-
compensated reduction is a key feature of MPEG.

MPEG-2

It is defined to include current television broadcasting compression and decompression
needs, and attempts to include hooks for HDTV broadcasting.
The MPEG-2 standard supports:
1. Video Coding: * MPEG-2 profiles and levels.
2. Audio Coding: * MPEG-1 audio standard for backward compatibility.
* Layer-2 audio definitions for MPEG-2 and stereo sound.
* Multichannel sound.
3. Multiplexing: MPEG-2 definitions.

MPEG-2, "The Grand Alliance"


It consists of the following companies: AT&T, MIT, Philips, Sarnoff Labs, GI, Thomson, and
Zenith. The MPEG-2 committee and the FCC formed this alliance. These companies together
have defined the advanced digital television system that includes the US and European
HDTV systems. The outline of the advanced digital television system is as follows:
1. Format: 1080/2:1/60 or 720/1:1/60
2. Video coding: MPEG-2 main profile and high level
3. Audio coding: Dolby AC3
4. Multiplexor: As defined in MPEG-2
5. Modulation: 8-VSB for terrestrial and 64-QAM for cable.

Vector Quantization

Vector quantization provides a multidimensional representation of information stored in
look-up tables. Vector quantization is an efficient pattern-matching algorithm in which an
image is decomposed into two or more vectors, each representing particular features of the
image that are matched to a code book of vectors. These are coded to indicate the best fit.
In image compression, source samples such as pixels are blocked into vectors so that each
vector describes a small segment or sub-block of the original image. The image is then
encoded by quantizing each vector separately.
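The following sketch (not from the original notes) illustrates the encoding side of vector quantization under the assumption that the image dimensions are multiples of the block size; the codebook here is random and purely hypothetical.

    import numpy as np

    def vq_encode(image, codebook, block=4):
        # Split the image into block x block vectors and replace each with the
        # index of the nearest codebook vector (minimum squared Euclidean distance).
        h, w = image.shape
        indices = []
        for y in range(0, h, block):
            for x in range(0, w, block):
                v = image[y:y+block, x:x+block].reshape(-1)
                d = np.sum((codebook - v) ** 2, axis=1)
                indices.append(int(np.argmin(d)))
        return indices

    codebook = np.random.randint(0, 256, (8, 16)).astype(float)   # 8 code vectors of length 16
    image = np.random.randint(0, 256, (16, 16)).astype(float)
    print(vq_encode(image, codebook))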

DATA AND FILE FORMATS STANDARDS


There are a large number of formats and standards available for multimedia systems. Let us
discuss the following file formats:

• Rich-Text Format (RTF)
• Tagged Image File Format (TIFF)
• Resource Interchange File Format (RIFF)
• Musical Instrument Digital Interface (MIDI)
• Joint Photographic Experts Group (JPEG)
• Audio Video Interleaved (AVI) Indeo file format
• TWAIN.
Rich Text Format

This format extends the range of information from one word processor application or DTP
system to another. The key format information carried across in RTF documents is given
below:
Character Set: It determines the characters that are supported in a particular implementation.

Font Table: This lists all fonts used. Then, they are mapped to the fonts available in
receiving application for displaying text.

Color Table: It lists the colors used in the documents. The color table is then mapped for
display by the receiving application to the nearest set of colors available to that application.

Document Formatting: Document margins and paragraph indents are specified here.

Section Formatting: Section breaks are specified to define separation of groups of
paragraphs.

Paragraph Formatting: It specifies style sheets. It specifies control characters for
specifying paragraph justification, tab positions, left, right and first indents relative to
document margins, and the spacing between paragraphs.

General Formatting: It includes footnotes, annotations, bookmarks and pictures.

Character Formatting: It includes bold, italic, underline (continuous, dotted or word), strike
through, shadow text, outline text, and hidden text.

Special Characters: It includes hyphens, spaces, backslashes, underscore and so on

TIFF File Format

TIFF is an industry-standard file format designed to represent raster image data generated by
scanners, frame grabbers, and paint/ photo retouching applications.

TIFF Version 6.0


It offers the following formats:

(i) Grayscale, palette color, RGB full-color images and black and white.
(ii) Run-length encoding, uncompressed images and modified Huffman data
compression schemes.
The additional formats are:
(i) Tiled images, compression schemes, images using CMYK, YCbCr color
models.

TIFF Structure
A TIFF file consists of a header. The header consists of a byte ordering flag, the TIFF file format
version number, and a pointer to a table. The pointer points to the image file directory. This
directory contains a table of entries of the various tags and their information.
TIFF file format header:
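A small sketch of reading that header (added here for illustration, not part of the original notes; the file name is hypothetical):

    import struct

    def read_tiff_header(path):
        # 8-byte TIFF header: byte-order flag, version number (42), offset of the first IFD
        with open(path, "rb") as f:
            order = f.read(2)                    # b'II' = little-endian, b'MM' = big-endian
            endian = "<" if order == b"II" else ">"
            version, ifd_offset = struct.unpack(endian + "HI", f.read(6))
            return order, version, ifd_offset

    # print(read_tiff_header("scan.tif"))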

TIFF Tags
The first two bytes of each directory entry contain a field called the Tag ID.

Tag IDs are grouped into several categories. They are Basic, Informational, Facsimile,
Document storage and Retrieval.

TIFF Classes: (Version 5.0)


It has five classes
1. Class B for binary images
2. Class F for Fax
3. Class G for gray-scale images
4. Class P for palette color images
5. Class R for RGB full-color images.

Resource Interchange File Format (RIFF)

The RIFF file format consists of blocks of data called chunks. They are:
RIFF Chunk - defines the content of the RIFF file.
List Chunk - allows embedding of additional file information such as archival location,
copyright information and creation date.
Subchunk - allows adding additional information to a primary chunk.
The first chunk in a RIFF file must be a RIFF chunk, and it may contain one or more
subchunks.

The first four bytes of the RIFF chunk data field are allocated for the form type field
containing four characters to identify the format of the data stored in the file: AVI, WAV,
RMI, PAL and so on.
The subchunk contains a four-character ASCII string ID to identify the type of data.

Four bytes of size contain the count of data values, followed by the data. The data structure of a
chunk is the same as that of all other chunks.

RIFF Chunk The first 4 characters of the RIFF chunk are reserved for the "RIFF" ASCII
string. The next four bytes define the total data size.

The first four characters of the data field are reserved for the form type. The rest of the data field
contains two subchunks:

(i) fmt - defines the recording characteristics of the waveform.

(ii) data - contains the data for the waveform.
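As an illustration of this chunk layout (a sketch added to these notes, with a hypothetical file name), the subchunks of a RIFF/WAVE file can be walked like this:

    import struct

    def list_riff_chunks(path):
        # Report the four-character ID and data size of every subchunk in a RIFF file.
        with open(path, "rb") as f:
            riff_id, total_size, form_type = struct.unpack("<4sI4s", f.read(12))
            assert riff_id == b"RIFF"
            chunks = []
            while True:
                header = f.read(8)
                if len(header) < 8:
                    break
                chunk_id, size = struct.unpack("<4sI", header)
                chunks.append((chunk_id.decode("ascii"), size))
                f.seek(size + (size & 1), 1)     # chunk data is padded to an even byte count
            return form_type.decode("ascii"), chunks

    # print(list_riff_chunks("sound.wav"))   # e.g. ('WAVE', [('fmt ', 16), ('data', ...)])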

LIST Chunk
A RIFF chunk may contain one or more list chunks.

List chunks allow embedding of additional file information such as archival location, copyright
information, creation date, and a description of the content of the file.

RIFF MIDI FILE FORMAT

RIFF MIDI contains a RIFF chunk with the form type "RMID" and a subchunk called "data"
for MIDI data.

4 bytes are for the ID of the RIFF chunk, 4 bytes for the size, 4 bytes for the form type, 4
bytes for the ID of the subchunk data, and 4 bytes for the size of the MIDI data.

MIDI File Format

The MIDI file format follows the music recording metaphor to provide the means of storing
separate tracks of music for each instrument so that they can be read and synchronized when
they are played.

The MIDI file format also contains chunks (i.e., blocks) of data. There are two types of
chunks: (i) header chunks (ii) track chunks.

Header Chunk
It is made up of 14 bytes.
The first four-character string is the identifier string, "MThd".

The second four bytes contain the data size for the header chunk. It is set to a fixed value of
six bytes.
The last six bytes contain data for the header chunk.

Track chunk
The Track chunk is organized as follows:

• The first 4-character string is the identifier.

• The second 4 bytes contain the track length.
MIDI Communication Protocol
This protocol uses messages of 2 or more bytes.

The number of bytes depends on the types of message. There are two types of messages:
(i) Channel messages and (ii) System messages.

Channel Messages
A channel message can have up to three bytes in a message. The first byte is called a status
byte, and the other two bytes are called data bytes. The channel number, which addresses one of
the 16 channels, is encoded by the lower nibble of the status byte. Each MIDI voice has a
channel number, and messages are sent to the channel whose channel number matches the
channel number encoded in the lower nibble of the status byte. There are two types of
channel messages: voice messages and mode messages.
Voice messages
Voice messages are used to control the voice of the instrument (or device); that is, to switch the
notes on or off, to send key pressure messages indicating that the key is depressed, and to send
control messages to control effects like vibrato, sustain, and tremolo. Pitch wheel messages
are used to change the pitch of all notes.
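For illustration (a sketch added to these notes, not part of the original), a Note On voice message can be built from the status byte and two data bytes exactly as described above:

    def note_on(channel, key, velocity):
        # Status byte: upper nibble 0x9 (Note On), lower nibble = channel number (0-15).
        # The two data bytes carry the key number and the key velocity (0-127 each).
        status = 0x90 | (channel & 0x0F)
        return bytes([status, key & 0x7F, velocity & 0x7F])

    print(note_on(0, 60, 100).hex())   # '903c64' - middle C on channel 1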
Mode messages
Mode messages are used for assigning voice relationships for up to 16 channels; that is, to
set the device to MONO mode or POLY mode. Omni Mode On enables the device to
receive voice messages on all channels.

System Messages
System messages apply to the complete system rather than specific channels and do not
contain any channel numbers. There are three types of system messages: common messages,
real-time messages, and exclusive messages. In the following, we will see how these
messages are used.
Common Messages These messages are common to the complete system. These messages
provide for functions such as selecting a song, setting the song position pointer with a number of
beats, and sending a tune request to an analog synthesizer.
System Real Time Messages
These messages are used for setting the system's real-time parameters. These parameters
include the timing clock, starting and stopping the sequencer, resuming the sequencer from
a stopped position, and resetting the system.
System Exclusive messages
These messages contain manufacturer-specific data such as identification, serial number,
model number, and other information. Here, a standard file format is generated which can be
moved across platforms and applications.

JPEG Motion Image:

JPEG motion images can be embedded in the AVI RIFF file format. There are two standards
available:

(i) MPEG - In this, there are patent and copyright issues.

(ii) MPEG-2 - It provides better resolution and picture quality.
TWAIN

To address the problem of custom interfaces, the TWAIN working group was formed to
define an open industry standard interface for input devices. They designed a standard
interface called the generic TWAIN interface. It allows applications to interface with scanners,
digital still cameras, and video cameras.

TWAIN ARCHITECTURE:

o The TWAIN architecture defines a set of application programming interfaces (APIs) and a
protocol to acquire data from input devices.

o It is a layered architecture.

o It has an application layer, a protocol layer, an acquisition layer and a device layer.

o Application Layer: This layer sets up a logical connection with a device. The application
layer interfaces with protocol layer.

o Protocol Layer: This layer is responsible for communications between the application and
acquisition layers.
o The main part of the protocol layer is the Source Manager.
o The Source Manager manages all sessions between an application and the sources, and
monitors data acquisition transactions. The protocol layer is a complex layer.

It provides the important aspects of device and application interfacing functions. The
Acquisition Layer: It contains the virtual device driver.

It interacts directly with the device driver. This layer is also known as source. It performs the
following functions:

1.Control of the device.


2.Acquisition of data from the device.
3.Transfer of data in agreed format.
4.Provision of user interface to control the device.

The Device Layer: The device layer receives software commands and controls the device
hardware.

New WAVE RIFF File Format: This format contains two subchunks:
(i) Fmt (ii) Data.
It may contain optional subchunks:

(i) Fact (ii) Cue points (iii) Play list (iv) Associated data list.

Fact Chunk: It stores file-dependent information about the contents of the WAVE file.
Cue Points Chunk: It identifies a series of positions in the waveform data stream.
Playlist Chunk: It specifies a play order for a series of cue points.
Associated Data Chunk: It provides the ability to attach information, such as labels, to
sections of the waveform data stream.
Inst Chunk: It stores a sampled sound synthesizer's samples.

DIGITAL VOICE AND AUDIO

Digital Audio
Sound is made up of continuous analog sine waves that tend to repeat depending on the
music or voice. The analog waveforms are converted into digital format by an analog-to-digital
converter (ADC) using a sampling process.
Sampling process
Sampling is a process where the analog signal is sampled over time at regular intervals to
obtain the amplitude of the analog signal at the sampling time.
Sampling rate
The regular interval at which the sampling occurs is called the sampling rate.
Digital Voice
Speech is analog in nature and is converted to digital form by an analog-to-digital converter
(ADC). An ADC takes an input signal from a microphone and converts the amplitude of the
sampled analog signal to an 8, 16 or 32 bit digital value.
The four important factors governing the ADC process are sampling rate, resolution,
linearity and conversion speed.
• Sampling Rate: The rate at which the ADC takes a sample of an analog signal.
• Resolution: The number of bits utilized for conversion determines the resolution of
ADC.
• Linearity: Linearity implies that the sampling is linear at all frequencies and that the
amplitude truly represents the signal.
• Conversion Speed: It is the speed at which the ADC converts an analog signal into a digital
signal. It must be fast enough.
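A minimal sketch of sampling and quantization (added for illustration, assuming NumPy is available; the parameter values are arbitrary):

    import numpy as np

    def sample_and_quantize(freq_hz=440.0, rate_hz=8000, bits=8, duration_s=0.01):
        # Sample a sine wave at a fixed sampling rate and quantize each sample
        # to the resolution of an n-bit ADC.
        t = np.arange(0, duration_s, 1.0 / rate_hz)     # sampling instants
        analog = np.sin(2 * np.pi * freq_hz * t)        # amplitude in [-1, 1]
        levels = 2 ** bits
        return np.round((analog + 1.0) / 2.0 * (levels - 1)).astype(int)

    print(sample_and_quantize()[:10])   # first ten 8-bit sample values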

VOICE Recognition System

Voice Recognition Systems can be classified into three types.


1.Isolated-word Speech Recognition.
2.Connected-word Speech Recognition.
3.Continuous Speech Recognition.

1. Isolated-word Speech Recognition.

It provides recognition of a single word at a time. The user must separate every word by a
pause. The pause marks the end of one word and the beginning of the next word.

Stage 1: Normalization

The recognizer's first task is to carry out amplitude and noise normalization to minimize the
variation in speech due to ambient noise, the speaker's voice, the speaker's distance from and
position relative to the microphone, and the speaker's breath noise.

Stage2: Parametric Analysis

It is a preprocessing stage that extracts relevant time-varying sequences of speech
parameters. This stage serves two purposes: (i) It extracts time-varying speech parameters.
(ii) It reduces the amount of data by extracting the relevant speech parameters.

Training mode: In training mode of the recognizer, the new frames are added to the
reference list.
Recognizer mode: If the recognizer is in recognizer mode, then dynamic time warping is
applied to the unknown patterns to average out the phoneme (the smallest distinguishable sound;
spoken words are constructed by concatenating basic phonemes) time duration. The
unknown pattern is then compared with the reference patterns.

A speaker-independent isolated word recognizer can be achieved by grouping a large
number of samples corresponding to a word into a single cluster.

2. Connected-Word Speech Recognition: Connected-word speech consists of spoken
phrases, each consisting of a sequence of words. It may not contain long pauses between words.
The method using Word Spotting technique
It recognizes words in a connected-word phrase. In this technique, recognition is carried out
by compensating for rate of speech variations by the process called dynamic time warping
(this process is used to expand or compress the time duration of the word), and sliding the
adjusted connected-word phrase representation in time past a stored word template for a
likely match.
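Dynamic time warping itself can be sketched in a few lines (an illustrative addition, not the recognizer described in these notes); it aligns two feature sequences spoken at different rates by allowing each sample to match one or more samples of the other sequence.

    import numpy as np

    def dtw_distance(a, b):
        # Minimal dynamic time warping distance between two 1-D feature sequences.
        n, m = len(a), len(b)
        d = np.full((n + 1, m + 1), np.inf)
        d[0, 0] = 0.0
        for i in range(1, n + 1):
            for j in range(1, m + 1):
                cost = abs(a[i - 1] - b[j - 1])
                d[i, j] = cost + min(d[i - 1, j], d[i, j - 1], d[i - 1, j - 1])
        return d[n, m]

    # Two utterances of the "same word" spoken at different rates
    print(dtw_distance([1, 2, 3, 4], [1, 1, 2, 3, 3, 4]))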

Continuous Speech Recognition


This system can be divided into three sections:
(i) A section consisting of digitization, amplitude normalization, time normalization and
parametric representation.
(ii) A second section consisting of segmentation and labeling of the speech segment into a
symbolic string based on knowledge-based or rule-based systems.
(iii) The final section is to match speech segments to recognize word sequences.

Voice Recognition performance


It is categorized into two measures: Voice recognition performance and system performance.
The following four measures are used to determine voice recognition performance.

Voice Recognition Applications


Voice mail integration: The voice-mail message can be integrated with e-mail messages to
create an integrated message.

DataBase Input and Query Applications


A number of applications are developed around the voice recognition and voice synthesis
function. The following lists a few applications which use Voice recognition.

• Application such as order entry and tracking

It is a server function; It is centralized; Remote users can dial into the system to enter an
order or to track the order by making a Voice query.

• Voice-activated rolodex or address book

When a user speaks the name of the person, the rolodex application searches the name and
address and voice-synthesizes the name, address, telephone numbers and fax numbers of a
selected person. In medical emergency, ambulance technicians can dial in and register
patients by speaking into the hospital's centralized system.

Police can make a voice query through the central database to take follow-up action if they catch
any suspect.

Language-teaching systems are an obvious use for this technology. The system can ask the
student to spell or speak a word. When the student speaks or spells the word, the system
performs voice recognition and measures the student's ability to spell. Based on the student's
ability, the system can adjust the level of the course. This creates a self-adjusting learning
system that follows the individual's pace.

Foreign language learning is another good application where an individual student can
input words and sentences into the system. The system can then correct for pronunciation or
grammar.

Musical Instrument Digital Interface (MIDI)


The MIDI interface was developed by Dave Smith of Sequential Circuits, Inc. in 1982. It is a
universal synthesizer interface.

MIDI Specification 1.0


MIDI is a system specification consisting of both hardware and software components which
define inter-connectivity and a communication protocol for electronic synthesizers,
sequencers, rhythm machines, personal computers, and other electronic musical instruments.
The inter-connectivity defines the standard cabling scheme, connector type and input/output
circuitry which enable these different MIDI instruments to be interconnected. The
communication protocol defines standard multibyte messages that allow controlling the
instrument's voice and messages, including messages to send responses, to send status and to
send exclusive data.

MIDI Input and output circuitry:

MIDI Hardware Specification


The MIDI hardware specification requires five-pin panel-mount receptacle DIN connectors
for MIDI IN, MIDI OUT and MIDI THRU signals. The MIDI IN connector is for input
signals, the MIDI OUT connector is for output signals, and the MIDI THRU connector is for
daisy-chaining multiple MIDI instruments.
MIDI Interconnections
The MIDI IN port of an instrument receives MIDI messages to play the instrument's internal
synthesizer. The MIDI OUT port sends MIDI messages to play on an
external synthesizer. The MIDI THRU port outputs MIDI messages received by the MIDI
IN port for daisy-chaining external synthesizers.

Communication Protocol
The MIDI communication protocol uses multibyte messages; There are two types of
messages:
(i) Channel messages
(ii) System messages

The channel messages have three bytes. The first byte is called a status byte, and the other
two bytes are called data bytes. The two types of channel messages: (i) Voice messages (ii)
Mode messages.

System messages: There are three types of system messages.

Common messages: These messages are common to the complete system. These messages
provide for functions.

System real-time messages: These messages are used for setting the system's real-time
parameters. These parameters include the timing clock, starting and stopping the sequencer,
resuming the sequencer from a stopped position and restarting the system.

System exclusive message: These messages contain manufacturer specific data such as
identification, serial number, model number and other information.

SOUND BOARD ARCHITECTURE


A sound card consists of the following components: MIDI input/output circuitry, a MIDI
synthesizer chip, input mixer circuitry to mix CD audio input with LINE IN input and
microphone input, an analog-to-digital converter with a pulse code modulation circuit to
convert analog signals to digital to create WAV files, a decompression and compression chip
to compress and decompress audio files, a speech synthesizer to synthesize speech output,
speech recognition circuitry to recognize speech input, and output circuitry to output stereo
audio OUT or LINE OUT.

AUDIO MIXER
The audio mixer component of the sound card typically has external inputs for stereo CD
audio, stereo LINE IN, and stereo microphone MIC IN. These are analog inputs, and they go
through analog-to-digital conversion in conjunction with PCM or ADPCM to generate
digitized samples.

SOUND BOARD ARCHITECTURE:

Analog-to-Digital Converters: The ADC gets its input from the audio mixer and converts
the amplitude of a sampled analog signal to either an 8-bit or 16-bit digital value.

Digital-to-Analog Converter (DAC): A DAC converts digital input in the form of WAVE
files, MIDI output and CD audio to analog output signals.
Sound Compression and Decompression: Most sound boards include a codec for sound
compression and decompression. ADPCM for windows provides algorithms for sound
compression.

CD-ROM Interface: The CD-ROM interface allows connecting a CD-ROM drive to the
sound board.

VIDEO IMAGES AND ANIMATION


VIDEO FRAME GRABBER ARCHITECTURE
A video frame grabber is used to capture, manipulate and enhance video images.
A video frame grabber card consists of video channel multiplexer, Video ADC, Input look-
up table with arithmetic logic unit, image frame buffer, compression-decompression
circuitry, output color look-up table, video DAC and synchronizing circuitry.

Video Channel Multiplexer:


A video channel multiplexer has multiple inputs for different video inputs. The video
channel multiplexer allows the video channel to be selected under program control and
switches to the control circuitry appropriate for the selected channel in a TV with
multi-system inputs.

Analog to Digital Converter: The ADC takes inputs from video multiplexer and converts
the amplitude of a sampled analog signal to either an 8-bit digital value for monochrome or a
24 bit digital value for color.

Input lookup table: The input lookup table along with the arithmetic logic unit (ALU)
allows performing image processing functions on a pixel basis and an image frame basis.
The pixel image-processing functions are histogram stretching or histogram shrinking for
image brightness and contrast, and histogram sliding to brighten or darken the image. The
frame-basis image-processing functions perform logical and arithmetic operations.

Image Frame Buffer Memory: The image frame buffer is organized as a 1024 x 1024 x 24
storage buffer to store an image for image processing and display.

Video Compression-Decompression: The video compression-decompression processor is


used to compress and decompress still image data and video data.

Frame Buffer Output Lookup Table: The frame buffer data represents the pixel data and
is used to index into the output lookup table. The output lookup table generates either an 8
bit pixel value for monochrome or a 24 bit pixel value for color.

SVGA Interface: This is an optional interface for the frame grabber. The frame grabber
can be designed to include an SVGA frame buffer with its own output lookup table and
digital-to-analog converter.

Analog Output Mixer: The output from the SVGA DAC and the output from the image frame
buffer DAC are mixed to generate overlay output signals. The primary components involved
include the display image frame buffer and the display SVGA buffer. The display SVGA
frame buffer is overlaid on the image frame buffer or live video. This allows SVGA to
display live video.

Video and Still Image Processing


Video image processing is defined as the process of manipulating a bit map image so that
the image can be enhanced, restored, distorted, or analyzed.

The terms used in video and still image processing are:

Pixel point to point processing: In pixel point-to-point processing, operations are carried
out on individual pixels one at a time.

Histogram Sliding: It is used to change the overall visible effect of brightening or


darkening of the image. Histogram sliding is implemented by modifying the input look-up
table values and using the input lookup table in conjunction with arithmetic logic unit.

Histogram Stretching and Shrinking: It is to increase or decrease the contrast.


In histogram shrinking, the brighter pixels are made less bright and the darker pixels are
made less dark.
Pixel Threshold: Setting pixel threshold levels sets a limit on the bright or
dark areas of a picture. Pixel threshold setting is also achieved through the input lookup
table.
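A small sketch of histogram sliding through an input lookup table (added for illustration, assuming NumPy and an 8-bit grayscale image):

    import numpy as np

    def slide_histogram(image, offset):
        # Brighten (positive offset) or darken (negative offset) an 8-bit image by
        # building a 256-entry lookup table and applying it to every pixel.
        lut = np.clip(np.arange(256) + offset, 0, 255).astype(np.uint8)
        return lut[image]

    image = np.random.randint(0, 256, (4, 4), dtype=np.uint8)
    print(slide_histogram(image, 40))   # every pixel brightened by 40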

Inter-frame image processing


Inter-frame image processing is the same as point-to-point image processing, except that the
image processor operates on two images at the same time. The equation of the image
operations is as follows:
Pixel output(x, y) = Image 1(x, y) Operator Image 2(x, y)
Image Averaging: Image averaging minimizes or cancels the effects of random noise.
Image Subtraction: Image subtraction is used to determine the change from one frame to the
next for image comparisons for key frame detection or motion detection.
Logical Image Operation: Logical image processing operations are useful for comparing
image frames and masking a block in an image frame.
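The two-frame operations above can be sketched as follows (an illustrative addition, assuming NumPy and 8-bit grayscale frames):

    import numpy as np

    def frame_subtract(frame1, frame2):
        # Pixel-by-pixel absolute difference, useful for motion or key-frame detection.
        return np.abs(frame1.astype(int) - frame2.astype(int)).astype(np.uint8)

    def frame_average(frame1, frame2):
        # Pixel-by-pixel average, which reduces random noise.
        return ((frame1.astype(int) + frame2.astype(int)) // 2).astype(np.uint8)

    a = np.random.randint(0, 256, (4, 4), dtype=np.uint8)
    b = np.random.randint(0, 256, (4, 4), dtype=np.uint8)
    print(frame_subtract(a, b))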

Spatial Filter Processing The rate of change of shades of gray or colors is called spatial
frequency. The process of generating images with either low-spatial-frequency components
or high-frequency components is called spatial filter processing.

Low Pass Filter: A low pass filter causes blurring of the image and appears to cause a
reduction in noise.

High Pass Filter: The high-pass filter causes edges to be emphasized. The high-pass filter
attenuates low-spatial frequency components, thereby enhancing edges and sharpening the
image.

Laplacian Filter: This filter sharply attenuates low-spatial-frequency components without
affecting high-spatial-frequency components, thereby enhancing edges sharply.
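Typical 3x3 kernels for these filters can be applied with a 2-D convolution, as in the sketch below (an addition for illustration, assuming NumPy and SciPy; the kernel values are common textbook choices, not taken from these notes):

    import numpy as np
    from scipy.ndimage import convolve

    low_pass = np.ones((3, 3)) / 9.0                 # averaging kernel: blurs and suppresses noise
    laplacian = np.array([[0, -1, 0],
                          [-1, 4, -1],
                          [0, -1, 0]], dtype=float)  # emphasizes edges (high spatial frequencies)

    image = np.random.randint(0, 256, (8, 8)).astype(float)
    blurred = convolve(image, low_pass)
    edges = convolve(image, laplacian)
    print(blurred.shape, edges.shape)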

Frame Processing: Frame processing operations are most commonly used for geometric
operations, image transformation, and image data compression and decompression. Frame
processing operations are very compute-intensive, requiring many multiply and add operations,
similar to spatial filter convolution operations.

Image scaling: Image scaling allows enlarging or shrinking the whole or part of an image.
Image rotation: Image rotation allows the image to be rotated about a center point. The
operation can be used to rotate the image orthogonally to reorient the image if it was
scanned incorrectly. The operation can also be used for animation.

The rotation formula is:

pixel output(x, y) = pixel input(x cos Q + y sin Q, -x sin Q + y cos Q), where Q is the
orientation angle and x, y are the spatial co-ordinates of the original pixel.

Image translation: Image translation allows the image to be moved up and down or side to
side. Again, this function can be used for animation.
The translation formula is:
Pixel output(x, y) = Pixel input(x + Tx, y + Ty), where
Tx and Ty are the horizontal and vertical translation offsets, and x, y are the spatial
coordinates of the original pixel.
Image transformation: An image contains varying degrees of brightness or colors defined
by the spatial frequency. The image can be transformed from spatial domain to the
frequency domain by using frequency transform.
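The rotation and translation formulas above can be applied pixel by pixel, as in the following sketch (an illustrative addition, assuming NumPy and a single-channel image; pixels that map outside the source are left at zero):

    import numpy as np

    def rotate(image, angle_deg):
        # output(x, y) = input(x cos Q + y sin Q, -x sin Q + y cos Q), taken about the image centre
        q = np.radians(angle_deg)
        h, w = image.shape
        cy, cx = (h - 1) / 2.0, (w - 1) / 2.0
        out = np.zeros_like(image)
        for y in range(h):
            for x in range(w):
                xs, ys = x - cx, y - cy
                xi = int(round(xs * np.cos(q) + ys * np.sin(q) + cx))
                yi = int(round(-xs * np.sin(q) + ys * np.cos(q) + cy))
                if 0 <= xi < w and 0 <= yi < h:
                    out[y, x] = image[yi, xi]
        return out

    def translate(image, tx, ty):
        # output(x, y) = input(x + Tx, y + Ty)
        h, w = image.shape
        out = np.zeros_like(image)
        for y in range(h):
            for x in range(w):
                xi, yi = x + tx, y + ty
                if 0 <= xi < w and 0 <= yi < h:
                    out[y, x] = image[yi, xi]
        return out

    image = np.arange(16, dtype=np.uint8).reshape(4, 4)
    print(rotate(image, 90))
    print(translate(image, 1, 0))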

Image Animation Techniques


Animation: Animation is an illusion of movement created by sequentially playing still
image frames at the rate of 15-20 frames per second.
The illusion of motion created by the consecutive display of images of static
elements.
In multimedia, animation is used to further enhance and enrich the experience of the
user and to further the understanding of the information conveyed.
When you create an animation, organize its execution into a series of logical steps.
First, gather up in your mind all the activities you wish to provide in the animation; if it is
complicated, you may wish to create a written script with a list of activities and required
objects.
Choose the animation tool best suited for the job. Then build and tweak your
sequences; experiment with lighting effects. Allow plenty of time for this phase when you
are experimenting and testing. Finally, post-process your animation, doing any special
rendering and adding sound effects.
Types of animation
1. Cel Animation
2. Computer animation
3. Kinematics
4. Morphing

1. Cel animation
The term cel derives from the clear celluloid sheets that were used for drawing each frame,
which have been replaced today by acetate or plastic. Cel animation artwork begins with
keyframes (the first and last frame of an action). For example, when an animated figure of a
man walks across the screen, he balances the weight of his entire body on one foot and then
the other in a series of falls and recoveries, with the opposite foot and leg catching up to
support the body.

2. Computer Animation
Computer animation programs typically employ the same logic and procedural concepts as
cel animation, using layer, keyframe, and tweening techniques, and even borrowing from the
vocabulary of classic animators. The primary difference between animation software
programs is in how much must be drawn by the animator and how much is automatically
generated by the software.
• In 2D animation the animator creates an object and describes a path for the object to
follow. The software takes over, actually creating the animation on the fly as the
program is being viewed by your user.
• In 3D animation the animator puts his effort into creating the models of individual objects
and designing the characteristics of their shapes and surfaces.
• Paint is most often filled or drawn with tools using features such as gradients and
anti- aliasing.
3. Kinematics
• It is the study of the movement and motion of structures that have joints, such as a
walking man.
• Inverse Kinematics is in high-end 3D programs, it is the process by which you link
objects such as hands to arms and define their relationships and limits.
• Once those relationships are set you can drag these parts around and let the computer
calculate the result.
4. Morphing
Morphing is a popular effect in which one image transforms into another. Morphing
applications and other modeling tools that offer this effect can perform a transition not only
between still images but often between moving images as well.
Animation File Formats
Some file formats are designed specifically to contain animations, and they can be
ported among applications and platforms with the proper translators.
• Director *.dir, *.dcr
• AnimationPro *.fli, *.flc
• 3D Studio Max *.max
• SuperCard and Director *.pics
• CompuServe *.gif
• Flash *.fla, *.swf
Following is the list of few Software used for computerized animation:
• 3D Studio Max
• Flash
• AnimationPro

Toggling between image frames: We can create simple animation by changing images at
display time. The simplest way is to toggle between two different images. This approach is
good to indicate a "Yes" or "No" type situation.

Rotating through several image frames: The animation contains several frames displayed
in a loop. Since the animation consists of individual frames, the playback can be paused and
resumed at any time.
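Both of these playback patterns reduce to looping over a list of frames at a fixed frame rate, as in this sketch (an illustrative addition; real code would draw each frame instead of printing its name):

    import itertools, time

    def play_animation(frames, fps=15, cycles=2):
        # Cycle through the frame list 'cycles' times, pausing 1/fps seconds per frame.
        delay = 1.0 / fps
        for frame in itertools.islice(itertools.cycle(frames), len(frames) * cycles):
            print("showing", frame)
            time.sleep(delay)

    # Toggling between two images is just the special case of a two-frame loop:
    play_animation(["yes.png", "no.png"], fps=2, cycles=1)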

PHOTOSHOP
INTRODUCTION

Photoshop is a graphics-based program that creates images known as raster
graphics. Other graphic applications, i.e. Illustrator, Corel Draw and Freehand, create vector
graphics. Vector graphics are composed of solid lines, curves and other geometric shapes
that are defined by a set of mathematical instructions. Vector images work best for type and
other shapes that require clear crisp boundaries. Raster images work best with photographs.
Raster graphics are comprised of a raster (a grid) of small squares called pixels. Objects in
Photoshop are groups of many pixels – each of which can be a different color. Raster
images require more memory and storage than vector images. Photoshop is a
memory-hungry program.
Photoshop is unlike other common software interfaces which emulate virtual
typewriters or graphing paper. Photoshop creates an artist's virtual studio/darkroom. When
you open the program you see a toolbox on the left with tools you will use to manipulate
your images, and on the right, a white square which is your "canvas" or work area. The gray
area surrounding the canvas is not part of your image, but only defines its edges.

WORKPLACE (ACTUALLY WORKSPACE)

Generally, there are four components in your workspace that you will use while creating
or modifying graphics. These components are as follows:
• The Menu Bar
• The Drawing Canvas
• The Toolbox
• Palettes (There are five palettes by default)
The figure below shows each of these components similar to how they will appear on your
screen.

NAVIGATION WINDOW

It is a roadmap to your image document.


The Navigator panel is one panel that you probably want readily accessible. It’s most useful
when it’s visible at all times. Undock the Navigator panel by pulling the tab of the panel to
the left. Position the Navigator panel to one side of your image so it’s ready for instant use.
View the thumbnail. The entire Navigator window shows the full document image, with an
outline called a View box showing the amount of image visible in the document window at
the current zoom level.
Change the view. Click anywhere in the thumbnail outside the View box to center the box
at that position. The comparable view in your main document window changes to match.
Move the view. Click anywhere in the thumbnail inside the View box and then drag to
move the box to a new position. The main document window changes to match the new
view.
Zoom in or out. Click the Zoom In button (which has an icon of two large pyramids) or
Zoom Out button (which has an icon of two smaller pyramids) to zoom in or out. Or drag
the Zoom slider that resides between the two icons.

TOOLS

Marquee Tool : The images in Photoshop are stored pixel by pixel, with a code
indicating the color of each. The image is just a big mosaic of dots. Therefore, before you
can do anything in Photoshop, you first need to indicate which pixels you want to change.
The selection tool is one way of doing this. Click on this tool to select it, then click and drag
on your image to make a dotted selection box. Hold shift while you drag if you want a
perfect square or circle. Any pixels within the box will be affected when you make your next
move. If you click and hold on this tool with your mouse button down, you will see that
there is also an oval selection shape, and a crop tool.
Crop Tool: To crop your image, draw a box with the crop tool. Adjust the selection
with the selection points, and then hit return to crop.
Lasso Tool : The lasso tool lets you select freeform shapes, rather than just rectangles
and ovals.
Magic Wand: Yet another way to select pixels is with the magic wand. When you
click on an area of the image with this tool, all pixels that are the same color as the pixel you
clicked will be selected. Double click on the tool to set the level of tolerance you would like
(i.e. how similar in color the pixels must be to your original pixel color. A higher tolerance
means a broader color range).

The Move Tool: This is a very important tool, because up until now all you have been
able to do is select pixels, and not actually move them. The move tool not only allows you to
move areas you have selected, but also to move entire layers without first making a
selection. If you hold the option (or alt) key while clicking and dragging with the move tool,
you can copy the selection.
Airbrush, Paintbrush and Pencil tools can be used to draw with the
foreground color on whichever layer is selected. To change the foreground color, double-click
on it in the toolbox. You will then see a palette of colors from which to choose. Select
one and click OK. To change the brush size, go to Window > Show Brushes.
Eraser Tool: Erases anything on the selected layer. You can change the eraser size by
going to Window > Show Brushes.
Line Tool: Can be used to draw straight lines. Click on the tool to select it, then click
with the tool on the canvas area and drag to draw a line. When you release the mouse button,
the line will end. You can change the thickness of the line or add arrowheads to it by double-
clicking on the tool to see this dialog box:

Text tool: Click on this tool to select it, then click in the Canvas area. You will be
given a dialog box in which to type your text, and choose its attributes. Each new block of
text goes on its own layer, so you can move it around with the Move Tool. Once you have
placed the text, however, it is no longer editable. To correct mistakes, you must delete the
old version (by deleting its layer) and replace it.
Eyedropper: Click with this tool on any color in the canvas to make that color the
foreground color. (You can then paint or type with it).
Magnifier: Click with this tool on a part of your image you want to see closer, or drag
with it to define the area you want to expand to the size of the window. Hold down
the Option or Alt key to make it a "reducer" instead and zoom back out.
Grabber: Click with this and drag to move the entire page for better viewing.
Options Bar

The Options bar appears at the top of the screen and is context sensitive, changing as you
change tools. The tool in use is shown in the left corner, and options relating to the tool
appear to the right of that.

IMPORT FROM PHOTOSHOP

Flash can import still images in many formats, but you usually use the native
Photoshop PSD format when importing still images from Photoshop into Flash.
When importing a PSD file, Flash can preserve many of the attributes that were
applied in Photoshop, and provides options for maintaining the visual fidelity of the image
and further modifying the image. When you import a PSD file into Flash, you can choose
whether to represent the Photoshop layers as Flash layers or as individual keyframes.

SAVE AND EXPORT

There are two methods of saving a photo in Photoshop, and each has a specific purpose. One
way is to use the typical Save As... dialogue; the other is a special feature in Photoshop
called Save for Web & Devices..., which is used to save your photos in preparation for
publication to the Web.
1) Save as: Use this method when saving your photo for archiving or if you plan to work on
it later. We recommend saving the file type as a Photoshop or .PSD file, which will also
save extra Photoshop-specific information about your photo.
2) Save for Web: Use this when you are ready to export your photo for publication to the
Web. While it's possible to save a photo with the regular "Save As..." option and still publish
it to the Web, the Photoshop built-in "Save for Web" feature specifically prepares your
photo for the Web and has added features that allow you to see how it will appear once it's
on a Web site. This ensures your photos will show up properly on the Web.

OPERATIONS ON IMAGES – RESIZE, CROP, ROTATE

Crop images

Cropping is the process of removing portions of an image to create focus or


strengthen the composition. You can crop an image using the Crop tool and the Crop
command. You can also trim pixels using the Crop And Straighten and the Trim commands.
Crop an image using the Crop tool | CS5
1. Select the Crop tool .
2. (Optional) Set resample options in the options bar.
• To crop the image without resampling (default), make sure that the
Resolution text box in the options bar is empty. You can click the Clear
button to quickly clear all text boxes.
• To resample the image during cropping, enter values for height, width, and
resolution in the options bar. To switch the height and width dimensions,
click the Swaps Height And Width icon .
• To resample an image based on the dimensions and resolution of another
image, open the other image, select the Crop tool, and click Front Image in
the options bar. Then make the image you want to crop active.
3. Drag over the part of the image you want to keep to create a marquee.
4. Select Hide to preserve the cropped area in the image file. You can make the hidden area
visible by moving the image with the Move tool . Select Delete to discard the
cropped area.
5. To complete the crop, press Enter (Windows) or Return (Mac OS), click the Commit
button in the options bar, or double-click inside the cropping marquee.
6. To cancel the cropping operation, press Esc or click the Cancel button in the options
bar.

Crop an image using the Crop command


1. Use a selection tool to select the part of the image you want to keep.
2. Choose Image > Crop.

Resize & Rotate

Creating an aesthetically pleasing design in Adobe Photoshop is dependent on the ability to


properly composite and arrange image layers. Therefore, it's vital to know how to resize and
rotate layers using Photoshop's "Free Transform" command. Free Transform combines
several "Transform" options into one tool, letting you quickly scale and rotate layers. This
command has been a continuous and preferred feature in Photoshop.

STEPS:
1. Open a file containing layers.
2. Click on the layer you want to resize and rotate.
3. Go to "Edit," "Free Transform" or press "Ctrl" and "T."
4. Click and drag the corner handles to resize the layer. Hold down the "Shift" key while
you do this to constrain the image's proportions. You can also scale the layer using the
top, bottom or side handles.
5. Hold the mouse pointer just outside one of the corner handles until it becomes a double-
sided arrow. Click and drag to rotate the layer. Holding "Shift" limits the rotation to
15-degree increments.
6. Double-click within the "Free Transform" box to apply the changes to the layer.

FLASH
INTRODUCTION

Flash can be used for creating games, making presentations, animations,


visualizations, webpage components, and many other interactive applications. Flash is
multimedia software that is used to design user interfaces and applications. Flash packs a lot
of functionality into one easy-to-use program.
In Flash you can:
- create animated movies from scratch
- import graphic content created in other programs and use Flash to animate it
- control how the site will function and appear regardless of the browser used.

FLASH ELEMENTS

To design and deliver Flash documents, you work within the Adobe Flash authoring
environment. The Quick Start page provides easy access to your most frequently used
actions. The stage contains the visible elements, such as text, components, and images of the
document. The Timeline represents different phases, or frames, of an animation. The Tools
panel provides the tools used to create and manipulate objects on the stage. The Edit bar tells
you what you are currently working on and gives you the ability to change the magnification
level of the stage. The panels provide access to a wide variety of authoring tools.
The Tools Panel
The Tools panel is divided into four sections. You can use the tools in the Tools panel to draw,
paint, select, and modify artwork, as well as change the view of the stage.

Areas of the Tools Panel:
Tools: Contains drawing, painting, and selection tools.
View: Contains zooming and panning tools.
Colors: Contains tools for setting stroke and fill colors.
Options: Displays options for the selected tool that affect the tool's painting or editing
operations.

Expanded Stage Work Area


All applications in the Adobe CS3 family have similar user interfaces, similar tools, familiar
icons, and customizable workspaces that enable you to move smoothly between Flash and other
Adobe design software. The gray area around the stage can be used as the expanded stage to
store graphics for future use.

Panels
Panels are Flash screen elements that give you easy access to the most commonly used features
in Flash. Panels help you view, organize, and change elements in your Flash document. Panels
can have different appearances or states and are categorized into different types based on
function.

Panel States
Panels have three states:
1. Open—Visible in the interface with a content window.
2. Collapsed—Visible in the interface as only a title bar with the content window hidden.
3. Closed—Not visible in the interface.
Panel Types
Panels are divided into three different types.
Panel Type and Included Panels:
Design
• Align
• Color
• Swatches
• Info
• Scene
• Transform
Development
• Actions
• Behaviors
• Components
• Component Inspector
• Debugger
• Output
• Web Services
Other
• Accessibility
• History
• Movie Explorer
• Strings
• Common Libraries

FLASH FILE FORMATS

There are two main file types used in Flash.


1. FLA is the file type you create in the authoring environment. You can think of it as your
source file. The term document is associated with this file type.
2. When you publish your document, a SWF file is created. The creation process is similar
to a compilation process in other languages. This file is actually viewed by users of a web
page or Flash application. The term application is associated with this file type. When
you publish a Flash document, an HTML file is also created along with the SWF.

How to Produce a Flash Application File

To produce a Flash application file:


1. If necessary, launch Adobe Flash CS3.
2. Open an FLA file.
a. Choose File→Open or in the Quick Start page, click Open.
b. In the Open dialog box, navigate to the location where the desired FLA file is stored.
c. Select the file, and click Open to open the file.
3. Choose File→Publish to publish the file.
4. Minimize the Adobe Flash CS3 Professional window.
5. View the published file.
a. Navigate to the location where the original FLA file is stored. A SWF file is generated
with the same name as that of the FLA file.
b. Double-click the SWF file to launch and test the application.
c. Close the Adobe Flash Player 9 window.
6. Restore the Adobe Flash CS3 Professional window.
7. Close the FLA file

FLASH ANIMATIONS

TWEENING: Short for in-betweening, the process of generating intermediate frames between
two images to give the appearance that the first image evolves smoothly into the second image.
Tweening is a key process in all types of animation, including computer animation. Sophisticated
animation software enables you to identify specific objects in an image and define how they
should move and change during the tweening process.
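Tweening is normally set up on the timeline, but the same idea can be scripted. The following is a minimal ActionScript 2.0 sketch using the mx.transitions.Tween class that ships with Flash; the movie clip instance name ball_mc is an assumption made for illustration, not something from this document.

// ActionScript 2.0 -- place on a frame of the main timeline.
// Assumes a movie clip on the stage with the instance name "ball_mc".
import mx.transitions.Tween;
import mx.transitions.easing.*;

// Tween the _x property from 0 to 400 over 2 seconds, easing out
// so the motion slows as it approaches the end position.
var moveTween:Tween = new Tween(ball_mc, "_x", Strong.easeOut, 0, 400, 2, true);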

IMPORT

When you work with Flash, you'll often import assets into a document. Perhaps you have
a company logo, or graphics that a designer has provided for your work. You can import a
variety of assets into Flash, including sound, video, bitmap images, and other graphic formats
(such as PNG, JPEG, AI, and PSD). Imported graphics are stored in the document's library. The
library stores both the assets that you import into the document, and symbols that you create
within Flash. A symbol is a vector graphic, button, font, component, or movie clip that you
create once and can reuse multiple times.
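To illustrate the idea of reuse, a symbol can also be placed on the stage at runtime with a short ActionScript 2.0 frame script. This is only a sketch: it assumes a movie clip symbol that has been given the linkage identifier "GnomeSymbol" in the Library panel, which is not part of the tutorial files.

// ActionScript 2.0 frame script.
// "GnomeSymbol" is an assumed linkage identifier set on a library symbol.
var gnome1:MovieClip = this.attachMovie("GnomeSymbol", "gnome1",
    this.getNextHighestDepth(), {_x: 100, _y: 80});
var gnome2:MovieClip = this.attachMovie("GnomeSymbol", "gnome2",
    this.getNextHighestDepth(), {_x: 260, _y: 80});
// The same symbol is used twice; each copy can be positioned independently.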
So you don’t have to draw your own graphics in Flash, you can import an image of a pre-drawn
gnome from the tutorial source file. Before you proceed, make sure that you save the source files
for this tutorial as described in “Open the finished FLA file”, and save the images to the same
directory as your banner.fla file.

1. Select File > Import > Import to Library to import an image into the current document.
You'll see the Import dialog box, which enables you to browse to the file you want to import.
Browse to the folder on your hard disk that contains an image to import into your Flash
document.
2. Navigate to the directory where you saved the tutorial’s source files, and locate the bitmap
image saved in the FlashBanner/Part1 directory.
3. Select the gnome.png image, and click Open (Windows) or Import (Macintosh). The image is
imported into the document's library.
4. Select Window > Library to open the Library panel. You'll see the image you just imported,
gnome.png, in the document's library.
5. Select the imported image in the library and drag it onto the Stage. Don't worry about where
you put the image on the Stage, because you'll set the coordinates for the image later. When you
drag something onto the Stage, you will see it in the SWF file when the file plays.
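As an aside, an image does not have to be imported at authoring time at all; it can be loaded when the SWF runs. The following is a hedged ActionScript 2.0 sketch of that alternative (PNG loading at runtime requires Flash Player 8 or later, and gnome.png is assumed to sit in the same folder as the published SWF).

// ActionScript 2.0 frame script -- runtime alternative to a library import.
// Requires Flash Player 8+ for PNG loading; gnome.png is assumed to be
// in the same directory as the published SWF.
var holder:MovieClip = this.createEmptyMovieClip("holder_mc", this.getNextHighestDepth());
var loader:MovieClipLoader = new MovieClipLoader();
loader.loadClip("gnome.png", holder);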
ADDING SOUNDS

The first step of this process is to prepare the audio file you will be importing. Flash is
most compatible with uncompressed audio formats such as WAV and AIFF files, and
automatically compresses the audio to MP3 when you publish your movie. If Flash returns an
error when you attempt to import an MP3 file, convert it to either WAV or AIFF in a 3rd-party
audio conversion application and try again.
Once your audio file is ready, the next step will be to make a new layer and name it
audio. This naming system is of course only a suggestion.

In the next step we will locate the file we wish to import. Go to 'File>Import>Import to
Library' and browse to your audio file. Selecting it once it is located will import the file into the
library and a new audio icon will appear with your other files.
Once the audio file has been imported into the library, select the audio layer in your
timeline, and drag your file from the library directly onto the work area of your main stage. You
should see a thin waveform appear in those few frames in the audio layer of your timeline. Now
select the end frame and drag it further down the timeline, this should reveal the rest of the
waveform in the audio layer.
These waveform spikes will assist you in timing your audio to your animation. If you
want to move the beginning of the audio around in the layer, you can select the first frame, and
drag it to the frame where you would like the audio to begin. Then just grab the last frame and
drag it to where you want your audio to end. One thing to know is that the audio file on the layer
cannot have a Keyframe between the beginning and the end of the waveform. If you insert one it
will stop the audio at that point and clear any audio after it. You can still move the frame at the
end of the audio to reveal the waveform again however. Also it would likely make it easier if you
reserve this layer only for the audio you have applied to it.
If you are importing an audio file that is longer than your animation, you will need to fade it
out or cut it down. If you do not, your .swf file will get to the end of the animation and repeat
while the previous audio continues to play. This will result in multiple layers of audio playing at
once, and will distort the audio. To fade the audio out simply select it in the "audio" layer, and
you will notice the right hand side of your properties inspector will display the audio file name.
(audio inspector)

Select edit from your properties inspector. This will open the edit envelope window,
which displays the left and right channel waveforms up close. The first thing to do is check the
bottom of the window and make sure Frames is selected instead of Time. Doing this will change
the numbers between the waveforms to correspond to the frame on which the audio is happening
instead of the time at which it happens.
The dark line across the top of the waveform represents the audio level. If you click on it,
you will create a control point. I created two control points, then left one at the top around frame
158, and dragged the other to the bottom around frame 171.

(fading the audio out)

This will cause the audio to fade out when it gets to these frames. My audio is now where
I want it to be. You can also use this technique to edit the level of the left and right channels
throughout your project if you wanted certain parts to be louder or softer.
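The same fade can also be produced in code instead of the Edit Envelope window. A minimal ActionScript 2.0 sketch follows; it assumes the imported sound has been given the linkage identifier "backingTrack" in the library, which you would set yourself in the sound's Linkage properties.

// ActionScript 2.0 frame script.
// "backingTrack" is an assumed linkage identifier on the imported sound.
var bgSound:Sound = new Sound();
bgSound.attachSound("backingTrack");
bgSound.start(0, 1);              // start at offset 0, play once

// Fade the volume from 100 to 0 a little at a time on every frame.
var vol:Number = 100;
this.onEnterFrame = function() {
    vol -= 2;                     // about 2 seconds at 24 frames per second
    bgSound.setVolume(Math.max(vol, 0));
    if (vol <= 0) {
        delete this.onEnterFrame; // stop updating once the sound is silent
    }
};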

ACTION SCRIPTS

You’ll need to give Flash instructions. Among other things, you’ll tell Flash to stop the
movie, play the movie, or jump to a specific place in the timeline. These tasks that Flash will
perform at your request are called "actions".

There are two types of actions.


Button actions perform a specific task when a button is clicked. Among other things, you
can use button actions to jump to a different part of the timeline, stop the movie, or instruct a
movie clip to start playing. We're not going to mess with buttons too much right now, but
we'll get to them later.
The other type of action is a frame action. A frame action is automatically triggered when
the playback head reaches a certain frame. Among other things, frame actions can be used to stop
the movie, jump the playback head to a different part of the timeline, or preload frames to ensure
a smooth playing animation.
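In ActionScript 2.0 the two types look like this: the frame action sits on a keyframe (usually in its own Actions layer), while the button action is attached directly to a button instance on the stage. The frame number 5 below is only an example.

// Frame action -- placed on a keyframe:
stop();                  // halt the playback head on this frame

// Button action -- attached to a button instance on the stage:
on (release) {
    gotoAndPlay(5);      // jump to frame 5 and resume playing
}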

GO TO & PLAY / GO TO & STOP

USING FRAME ACTIONS

In this section, we'll go through an exercise to use the stop action to keep a movie from
looping. The stop action can also be used to halt the movie while you're waiting for a user to
make a choice about what button they want to click or what they're going to do next.
1. Build a small, twenty-frame movie. It doesn’t have to be anything special. A symbol
animating across the screen will do fine.
2. Hit F12 to test the movie.
Notice that it loops.
Now we’re going to place a stop action on the last frame of our twenty-frame movie. The stop
action will halt the playback head when it reaches frame 20, thus keeping the movie from
looping.
3. Add a new layer and name it Actions.
Remember that frame actions should get their own top layer.
4. On the Actions layer, add a keyframe (F6) on frame 20.

5. Open the Actions palette (Window > Actions).

6. Open the Basic Actions category by clicking on the plus sign. You'll be able to see the actions.
7. Choose Stop.
You'll notice that the code for your action was added to the right side of the Frame Actions palette.

You may also notice that a small "a" appeared in your keyframe.

8. Hit F12 to test your movie. It should stop at the end!
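For reference, the code that choosing Stop adds to frame 20 is simply the stop action. If you would rather have the movie jump back and replay from a particular frame instead of halting, gotoAndPlay or gotoAndStop can be used on the same keyframe; the frame numbers below are only examples.

// Frame 20 of the Actions layer -- what choosing Stop produces:
stop();

// Alternatives for the same keyframe:
// gotoAndPlay(1);     // loop back to frame 1 and keep playing
// gotoAndStop(10);    // jump to frame 10 and hold there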

