Unit 5 - II
Introduction to multimedia - Compression & Decompression - Data & File Format standards -
Digital voice and audio - Video image and animation. Introduction to Photoshop – Workplace
– Tools – Navigating window – Importing and exporting images – Operations on Images –
resize, crop, and rotate. Introduction to Flash – Elements of flash document – flash
environment – Drawing tools – Flash animations – Importing and exporting - Adding sounds –
Publishing flash movies – Basic action scripts – GoTo, Play, Stop, Tell Target.
INTRODUCTION TO MULTIMEDIA
Multimedia is a combination of text, graphic art, and sound, animation and video elements.
Multimedia Elements
Document images: Document images are used for storing business documents that must be retained for long periods of time or may need to be accessed by a large number of people. Providing multimedia access to such documents removes the need for making several copies of the original for storage or distribution.
Photographic images: Photographic images are used for a wide range of applications, such as employee records for instant identification at a security desk, real estate systems with photographs of houses in the database containing the descriptions of houses, medical case histories, and so on.
Voice commands and voice synthesis: Voice commands and voice synthesis are used for hands-free operation of a computer program. Voice commands allow the user to direct computer operation by spoken commands. Voice synthesis is used for presenting the results of an action to the user in a synthesized voice. Applications such as a patient monitoring system in a surgical theatre will be prime beneficiaries of these capabilities.
Audio messages: Annotated voice mail already uses audio or voice messages as attachments to memos and documents such as maintenance manuals.
Video messages: Video messages are being used in a manner similar to annotated voice mail.
Holographic images
All of the technologies so far essentially present a flat view of information. Holographic images extend the concept of virtual reality by allowing the user to get "inside" a part, such as an engine, and view its operation from the inside.
Fractals
Fractals started as a technology in the early 1980s but have received serious attention only recently. This technology is based on synthesizing and storing algorithms that describe the information.
TYPES OF COMPRESSION
Compression and decompression techniques are utilized for a number of applications, such as facsimile systems, printer systems, document storage and retrieval systems, video teleconferencing systems, and electronic multimedia messaging systems. An important standardization of compression algorithms was achieved by the CCITT when it specified Group 2 compression for facsimile systems. When information is compressed, the redundancies are removed. Sometimes removing redundancies is not sufficient to reduce the size of the data object to manageable levels. In such cases, some real information is also removed. The primary criterion is that removal of the real information should not perceptibly affect the quality of the result. In the case of video, compression causes some information to be lost; some information at a detail level is considered not essential for a reasonable reproduction of the scene. This type of compression is called lossy compression. Audio compression, on the other hand, is not lossy. It is called lossless compression.
Lossless Compression
In lossless compression, data is not altered or lost in the process of compression or
decompression. Decompression generates an exact replica of the original object. Text
compression is a good example of lossless compression. The repetitive nature of text, sound
and graphic images allows replacement of repeated strings of characters or bits by codes.
Lossless compression techniques are good for text data and for repetitive data in images, such as binary images and gray-scale images.
Binary Image Compression Scheme is a scheme by which a binary image containing black and white pixels is generated when a document is scanned in a binary mode. The schemes are used primarily for documents that do not contain any continuous-tone information or where the continuous-tone information can be captured in a black and white mode to serve the desired purpose. The schemes are applicable to office/business documents, handwritten text, line graphics, engineering drawings, and so on. Let us view the scanning process. A scanner scans a document as sequential scan lines, starting from the top of the page. A scan line is a complete line of pixels, of height equal to one pixel, running across the page. The scanner scans the first line of pixels (scan line), then scans the second line, and works its way down to the last scan line of the page. Each scan line is scanned from left to right, generating black and white pixels for that scan line.
This uncompressed image consists of a single bit per pixel containing black and white
pixels. Binary 1 represents a black pixel, binary 0 a white pixel. Several schemes have been
standardized and used to achieve various levels of compressions. Let us review the more
commonly used schemes.
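To make the idea concrete, here is a minimal sketch (in Python, not part of the original text) of run-length encoding a single scan line, the basic step underlying these binary compression schemes:

```python
def run_lengths(scan_line):
    """Collapse a scan line of pixels (1 = black, 0 = white) into (value, count) runs."""
    runs = []
    for pixel in scan_line:
        if runs and runs[-1][0] == pixel:
            runs[-1] = (pixel, runs[-1][1] + 1)   # extend the current run
        else:
            runs.append((pixel, 1))               # start a new run
    return runs

line = [0, 0, 0, 0, 1, 1, 0, 0, 0, 1]
print(run_lengths(line))   # [(0, 4), (1, 2), (0, 3), (1, 1)]
```

The standardized schemes below differ mainly in how these run counts are then encoded into bits.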
Huffman Encoding
A modified version of run-length encoding is Huffman encoding. It is used in many software-based document imaging systems and for encoding the pixel run lengths in CCITT Group 3 1-D and Group 4. It is a variable-length encoding scheme: it generates the shortest codes for frequently occurring run lengths and longer codes for less frequently occurring run lengths.
CCITT Group 3 compression utilizes Huffman coding to generate a set of make-up codes
and a set of terminating codes for a given bit stream. Make-up codes are used to represent
run length in multiples of 64 pixels. Terminating codes are used to represent run lengths of
less than 64 pixels. As shown in the above table, run-length codes for black pixels are different from the run-length codes for white pixels. For example, the run-length code for 64 white pixels is 11011, while the run-length code for 64 black pixels is 0000001111. Consequently, a run of 132 white pixels is encoded by the following two codes:
Make-up code for 128 white pixels - 10010
Terminating code for 4 white pixels - 1011
The compressed bit stream for 132 white pixels is 100101011, a total of nine bits. The compression ratio is therefore about 14.7: the number of bits in the uncompressed run (132, at one bit per pixel) divided by the number of bits used to code them (9).
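The encoding of that example can be sketched as follows (Python; the two code-table entries are only the ones quoted in the text, not the full CCITT tables):

```python
# Subset of the CCITT Group 3 white-run code tables, taken from the text's example
makeup_white = {128: "10010"}       # make-up codes cover run lengths in multiples of 64
terminating_white = {4: "1011"}     # terminating codes cover run lengths below 64

run = 132
# A run is coded as the make-up code for the largest multiple of 64,
# followed by the terminating code for the remainder
code = makeup_white[(run // 64) * 64] + terminating_white[run % 64]
print(code, len(code))              # 100101011 9
print(round(run / len(code), 1))    # compression ratio: 14.7
```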
The baseline system must reasonably decompress color images, maintain a high compression ratio, and handle from 4 bits/pixel to 16 bits/pixel. The extended system covers various encoding aspects such as variable-length encoding, progressive encoding, and the hierarchical mode of encoding. The special lossless function is also known as predictive lossless coding. It ensures that, at the resolution at which the image is decompressed, there is no loss of any detail that was there in the original source image.
These four components describe four different levels of JPEG compression. The baseline sequential code defines a rich compression scheme; the other three modes describe enhancements to this baseline scheme for achieving different results. Some of the terms used in JPEG methodologies are:
DCT is closely related to the Fourier transform. Fourier transforms are used to represent a two-dimensional signal. DCT uses a similar concept to reduce the gray-scale level or color signal amplitudes to equations that require very few points to locate the amplitude: the Y-axis locates the amplitude, and the X-axis locates the frequency.
DCT Coefficients
The output amplitudes of the set of 64 orthogonal basis signals are called DCT coefficients.
Quantization This is a process that attempts to determine what information can be safely discarded without a significant loss in visual fidelity. It uses the DCT coefficients and provides a many-to-one mapping. The quantization process is fundamentally lossy due to its many-to-one mapping.
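The many-to-one nature of quantization can be illustrated with a minimal sketch (Python; the step size of 16 is just an illustrative value):

```python
def quantize(coeff, step):
    """Map a DCT coefficient to a quantized level: many inputs share one level."""
    return round(coeff / step)

def dequantize(level, step):
    """Reverse of quantization; the rounding loss cannot be recovered."""
    return level * step

step = 16
for coeff in (96, 100, 104):         # three different input coefficients...
    level = quantize(coeff, step)
    print(coeff, "->", level, "->", dequantize(level, step))
# ...all quantize to the same level (6) and dequantize back to the same value (96),
# so the differences between the original coefficients are permanently lost
```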
Dequantization This process is the reverse of quantization. Note that since quantization uses a many-to-one mapping, the information lost in that mapping cannot be fully recovered.
Entropy Encoder/Decoder Entropy is defined as a measure of randomness, disorder, or chaos, as well as a measure of a system's ability to undergo spontaneous change. The entropy encoder compresses quantized DCT coefficients more compactly based on their spatial characteristics. The baseline sequential codec uses Huffman coding. Arithmetic coding is another type of entropy encoding.
Huffman Coding Huffman coding requires that one or more sets of Huffman code tables be specified by the application for encoding as well as decoding. The Huffman tables may be pre-defined and used within an application as defaults, or computed specifically for a given image.
JPEG Methodology The JPEG compression scheme is lossy, and utilizes forward discrete
cosine transform (or forward DCT mathematical function), a uniform quantizer, and entropy
encoding. The DCT function removes data redundancy by transforming data from a spatial
domain to a frequency domain; the quantizer quantizes DCT co-efficients with weighting
functions to generate quantized DCT co-efficients optimized for the human eye; and the
entropy encoder minimizes the entropy of quantized DCT co-efficients. The JPEG method is
a symmetric algorithm. Here, decompression is the exact reverse process of compression.
The above requirements can be achieved only by incremental coding of successive frames; this is known as interframe coding. Accessing information randomly by frame requires coding confined to a specific frame; this is known as intraframe coding. The
MPEG standard addresses these two requirements by providing a balance between
interframe coding and intraframe coding. The MPEG standard also provides for recursive
and non-recursive temporal redundancy reduction.
The MPEG video compression standard provides two basic schemes: discrete-transform-based compression for the reduction of spatial redundancy and block-based motion compensation for the reduction of temporal (motion) redundancy. During the initial stages of DCT compression, both the full-motion MPEG and still-image JPEG algorithms are essentially identical. First an image is converted to the YUV color space (a
luminance/chrominance color space similar to that used for color television). The pixel data
is then fed into a discrete cosine transform, which creates a scalar quantization (a two-
dimensional array representing various frequency ranges represented in the image) of the
pixel data.
MPEG-2
Vector Quantization
RICH TEXT FORMAT (RTF)
This format extends the range of information from one word processor application or DTP system to another. The key format information carried across in RTF documents is given below:
Character Set: It determines the characters supported in a particular implementation.
Font Table: This lists all fonts used. Then, they are mapped to the fonts available in
receiving application for displaying text.
Color Table: It lists the colors used in the documents. The color table is then mapped for display by the receiving application to the nearest set of colors available to that application.
Document Formatting: Document margins and paragraph indents are specified here.
Character Formatting: It includes bold, italic, underline (continuous, dotted or word), strike
through, shadow text, outline text, and hidden text.
TIFF FILE FORMAT
TIFF is an industry-standard file format designed to represent raster image data generated by scanners, frame grabbers, and paint/photo retouching applications. The basic formats it supports are:
(i) Grayscale, palette color, RGB full-color, and black and white images.
(ii) Run-length encoding, uncompressed images, and modified Huffman data compression schemes.
The additional formats are:
(i) Tiled images, compression schemes, images using CMYK, YCbCr color
models.
TIFF Structure
A TIFF file consists of a header. The header consists of a byte-ordering flag, the TIFF file format version number, and a pointer to a table. The pointer points to the image file directory. This directory contains a table of entries of various tags and their information.
TIFF file format Header:
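As a rough sketch (Python; the byte values form a hypothetical minimal header, not one taken from the text), the three header fields can be unpacked like this:

```python
import struct

# Hypothetical minimal TIFF header: byte-order flag "II", version number 42, IFD offset 8
header = b"II" + struct.pack("<H", 42) + struct.pack("<I", 8)

byte_order = header[0:2]                          # b"II" little-endian, b"MM" big-endian
version = struct.unpack("<H", header[2:4])[0]     # TIFF file format version number
ifd_offset = struct.unpack("<I", header[4:8])[0]  # pointer to the image file directory
print(byte_order, version, ifd_offset)            # b'II' 42 8
```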
TIFF Tags
The first two bytes of each directory entry contain a field called the Tag ID.
Tag IDs are grouped into several categories. They are Basic, Informational, Facsimile, and Document Storage and Retrieval.
RIFF FILE FORMAT
The RIFF file formats consist of blocks of data called chunks. They are:
RIFF Chunk - defines the content of the RIFF file.
List Chunk - allows embedding archival location, copyright information, and creation date.
Subchunk - allows adding additional information to a primary chunk.
The first chunk in a RIFF file must be a RIFF chunk, and it may contain one or more subchunks.
The first four bytes of the RIFF chunk data field are allocated for the form type field, containing four characters to identify the format of the data stored in the file: AVI, WAV, RMI, PAL, and so on.
A subchunk contains a four-character ASCII string ID to identify the type of data, four bytes of size containing the count of data values, and the data itself. The data structure of a chunk is the same for all chunks.
RIFF Chunk: The first four characters of the RIFF chunk are reserved for the "RIFF" ASCII string. The next four bytes define the total data size. The first four characters of the data field are reserved for the form type. The rest of the data field contains two subchunks:
LIST Chunk
A RIFF chunk may contain one or more LIST chunks.
LIST chunks allow embedding additional file information such as archival location, copyright information, creation date, and a description of the content of the file.
RIFF MIDI contains a RIFF chunk with the form type "RMID" and a subchunk called "data" for MIDI data.
Four bytes are for the ID of the RIFF chunk, 4 bytes for its size, 4 bytes for the form type, 4 bytes for the ID of the subchunk data, and 4 bytes for the size of the MIDI data.
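That 20-byte layout can be sketched in Python (the two-byte MIDI payload here is a hypothetical placeholder):

```python
import struct

midi_data = b"\x90\x3c"   # hypothetical placeholder MIDI payload (2 bytes)

# Build the chunk: RIFF ID, total size, form type "RMID", "data" subchunk ID, data size
rmid = (b"RIFF" + struct.pack("<I", 4 + 8 + len(midi_data))
        + b"RMID"
        + b"data" + struct.pack("<I", len(midi_data))
        + midi_data)

# Parse it back field by field, following the byte layout described above
riff_id = rmid[0:4]                              # b"RIFF"
total_size = struct.unpack("<I", rmid[4:8])[0]
form_type = rmid[8:12]                           # b"RMID"
sub_id = rmid[12:16]                             # b"data"
data_size = struct.unpack("<I", rmid[16:20])[0]  # size of the MIDI data
print(riff_id, total_size, form_type, sub_id, data_size)
```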
The MIDI file format follows the music recording metaphor to provide a means of storing separate tracks of music for each instrument so that they can be read and synchronized when they are played.
The MIDI file format also contains chunks (i.e., blocks) of data. There are two types of
chunks: (i) header chunks (ii) track chunks.
Header Chunk
It is made up of 14 bytes:
The first four-character string is the identifier string, "MThd".
The second four bytes contain the data size for the header chunk. It is set to a fixed value of six bytes.
The last six bytes contain the data for the header chunk.
Track chunk
The Track chunk is organized as follows:
The number of bytes depends on the type of message. There are two types of messages:
(i) Channel messages and (ii) System messages.
Channel Messages
A channel message can have up to three bytes in a message. The first byte is called a status
byte, and other two bytes are called data bytes. The channel number, which addresses one of
the 16 channels, is encoded by the lower nibble of the status byte. Each MIDI voice has a
channel number; and messages are sent to the channel whose channel number matches the
channel number encoded in the lower nibble of the status byte. There are two types of
channel messages: voice messages and the mode messages.
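The nibble arithmetic described above can be sketched as follows (Python; the example status byte 0x93 is illustrative):

```python
def parse_status(status_byte):
    """Split a MIDI status byte: upper nibble = message type, lower nibble = channel."""
    message_type = status_byte >> 4    # e.g. 0x9 identifies a Note On voice message
    channel = status_byte & 0x0F       # addresses one of the 16 channels (0-15)
    return message_type, channel

print(parse_status(0x93))   # (9, 3): a Note On message addressed to channel 3
```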
Voice messages
Voice messages are used to control the voice of the instrument (or device); that is, to switch the notes on or off, to send key-pressure messages indicating that the key is depressed, and to send control messages to control effects like vibrato, sustain, and tremolo. Pitch-wheel messages are used to change the pitch of all notes.
Mode messages
Mode messages are used for assigning voice relationships for up to 16 channels; that is, to set the device to MONO mode or POLY mode. Omni mode on enables the device to receive voice messages on all channels.
System Messages
System messages apply to the complete system rather than specific channels and do not
contain any channel numbers. There are three types of system messages: common messages,
real-time messages, and exclusive messages. In the following, we will see how these
messages are used.
Common Messages These messages are common to the complete system. They provide for functions such as selecting a song, setting the song position pointer with a number of beats, and sending a tune request to an analog synthesizer.
System Real Time Messages
These messages are used for setting the system's real-time parameters. These parameters include the timing clock, starting and stopping the sequencer, resuming the sequencer from a stopped position, and resetting the system.
System Exclusive messages
These messages contain manufacturer-specific data such as identification, serial number,
model number, and other information. Here, a standard file format is generated which can be
moved across platforms and applications.
JPEG motion images will be embedded in the AVI RIFF file format. There are two standards available:
To address the problem of custom interfaces, the TWAIN working group was formed to define an open industry-standard interface for input devices. They designed a standard interface called the generic TWAIN interface. It allows applications to interface with scanners, digital still cameras, and video cameras.
TWAIN ARCHITECTURE:
o The TWAIN architecture defines a set of application programming interfaces (APIs) and a protocol to acquire data from input devices.
o It is a layered architecture.
o It has an application layer, a protocol layer, an acquisition layer, and a device layer.
o Application Layer: This layer sets up a logical connection with a device. The application layer interfaces with the protocol layer.
o Protocol Layer: This layer is responsible for communications between the application and acquisition layers.
o The main part of the protocol layer is the Source Manager.
o The Source Manager manages all sessions between an application and the sources, and monitors data acquisition transactions. The protocol layer is a complex layer.
It provides the important aspects of device and application interfacing functions.
The Acquisition Layer: This layer contains the virtual device driver. It interacts directly with the device driver. This layer is also known as the source. It performs the following functions:
The Device Layer: The device layer receives software commands and controls the device hardware.
NEW WAVE RIFF File Format: This format contains two subchunks:
(i) Fmt (ii) Data.
It may contain optional subchunks:
(i) Fact (ii) Cue points (iii) Play list (iv) Associated data list.
Fact Chunk: It stores file-dependent information about the contents of the WAVE file.
Cue Points Chunk: It identifies a series of positions in the waveform data stream.
Playlist Chunk: It specifies a play order for a series of cue points.
Associated Data Chunk: It provides the ability to attach information, such as labels, to sections of the waveform data stream.
Inst Chunk: It stores the sampled sound synthesizer's samples.
Digital Audio
Sound is made up of continuous analog sine waves that tend to repeat, depending on the music or voice. The analog waveforms are converted into digital format by an analog-to-digital converter (ADC) using a sampling process.
Sampling process
Sampling is a process where the analog signal is sampled over time at regular intervals to
obtain the amplitude of the analog signal at the sampling time.
Sampling rate
The regular interval at which the sampling occurs is called the sampling rate.
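A minimal sketch of the sampling process (Python; the 8 kHz rate and 440 Hz tone are illustrative values, not from the text):

```python
import math

SAMPLE_RATE = 8000   # samples per second; a common rate for telephone-quality voice
FREQ = 440.0         # frequency of the analog tone being sampled, in Hz
DURATION = 0.01      # seconds of signal to capture

# Sample the analog sine wave at regular intervals of 1/SAMPLE_RATE seconds
samples = [math.sin(2 * math.pi * FREQ * n / SAMPLE_RATE)
           for n in range(int(SAMPLE_RATE * DURATION))]

# Quantize each amplitude to an 8-bit value (0..255), as an 8-bit ADC would
digital = [int((s + 1) / 2 * 255) for s in samples]
print(len(digital), digital[0])   # 80 samples for 10 ms; first sample is mid-scale (127)
```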
Digital Voice
Speech is analog in nature and is converted to digital form by an analog-to-digital converter
(ADC). An ADC takes an input signal from a microphone and converts the amplitude of the
sampled analog signal to an 8, 16 or 32 bit digital value.
The four important factors governing the ADC process are sampling rate, resolution,
linearity and conversion speed.
• Sampling Rate: The rate at which the ADC takes a sample of an analog signal.
• Resolution: The number of bits utilized for conversion determines the resolution of
ADC.
• Linearity: Linearity implies that the sampling is linear at all frequencies and that the amplitude truly represents the signal.
• Conversion Speed: It is the speed at which the ADC converts an analog signal into digital values. It must be fast enough to keep up with the sampling rate.
It provides recognition of a single word at a time. The user must separate every word by a
pause. The pause marks the end of one word and the beginning of the next word.
Stage 1: Normalization
The recognizer's first task is to carry out amplitude and noise normalization to minimize the
variation in speech due to ambient noise, the speaker's voice, the speaker's distance from and
position relative to the microphone, and the speaker's breath noise.
Training mode: In training mode of the recognizer, the new frames are added to the
reference list.
Recognizer mode: If the recognizer is in recognizer mode, then dynamic time warping is applied to the unknown patterns to average out the phoneme (the smallest distinguishable sound; spoken words are constructed by concatenating basic phonemes) time duration. The unknown pattern is then compared with the reference patterns.
It is a server function and is centralized; remote users can dial into the system to enter an order or to track the order by making a voice query.
When a user speaks the name of the person, the rolodex application searches the name and
address and voice-synthesizes the name, address, telephone numbers and fax numbers of a
selected person. In medical emergency, ambulance technicians can dial in and register
patients by speaking into the hospital's centralized system.
Police can make a voice query through a central database to take follow-up action if they catch a suspect.
Language-teaching systems are an obvious use for this technology. The system can ask the
student to spell or speak a word. When the student speaks or spells the word, the systems
performs voice recognition and measures the student's ability to spell. Based on the student's
ability, the system can adjust the level of the course. This creates a self-adjustable learning
system to follow the individual's pace.
Foreign language learning is another good application, where an individual student can input words and sentences into the system. The system can then correct the student's pronunciation or grammar.
Communication Protocol
The MIDI communication protocol uses multibyte messages; There are two types of
messages:
(i) Channel messages
(ii) System messages
The channel messages have three bytes. The first byte is called a status byte, and the other
two bytes are called data bytes. The two types of channel messages: (i) Voice messages (ii)
Mode messages.
Common messages: These messages are common to the complete system and provide for various functions.
System real-time messages: These messages are used for setting the system's real-time parameters. These parameters include the timing clock, starting and stopping the sequencer, resuming the sequencer from a stopped position, and restarting the system.
System exclusive message: These messages contain manufacturer specific data such as
identification, serial number, model number and other information.
AUDIO MIXER
The audio mixer component of the sound card typically has external inputs for stereo CD audio, stereo LINE IN, and a stereo microphone (MIC IN). These are analog inputs, and they go through analog-to-digital conversion in conjunction with PCM or ADPCM to generate digitized samples.
Analog-to-Digital Converters: The ADC gets its input from the audio mixer and converts
the amplitude of a sampled analog signal to either an 8-bit or 16-bit digital value.
Digital-to-Analog Converter (DAC): A DAC converts digital input in the form of WAVE files, MIDI output, and CD audio to analog output signals.
Sound Compression and Decompression: Most sound boards include a codec for sound
compression and decompression. ADPCM for windows provides algorithms for sound
compression.
CD-ROM Interface: The CD-ROM interface allows connecting a CD-ROM drive to the sound board.
Analog-to-Digital Converter: The ADC takes input from the video multiplexer and converts the amplitude of a sampled analog signal to either an 8-bit digital value for monochrome or a 24-bit digital value for color.
Input Lookup Table: The input lookup table, along with the arithmetic logic unit (ALU), allows performing image-processing functions on a pixel basis and an image-frame basis. The pixel-basis image-processing functions are histogram stretching or histogram shrinking for image brightness and contrast, and histogram sliding to brighten or darken the image. The frame-basis image-processing functions perform logical and arithmetic operations.
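Histogram sliding, for instance, can be sketched as a per-pixel offset (Python; clamping to the 0-255 range is assumed for 8-bit pixels):

```python
def histogram_slide(image, offset):
    """Brighten (positive offset) or darken (negative offset) by sliding all pixel values."""
    return [[max(0, min(255, pixel + offset)) for pixel in row] for row in image]

img = [[10, 120], [250, 60]]
print(histogram_slide(img, 20))    # [[30, 140], [255, 80]]: 250 clamps at 255
```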
Image Frame Buffer Memory: The image frame buffer is organized as a 1024 x 1024 x 24 storage buffer to store images for image processing and display.
Frame Buffer Output Lookup Table: The frame buffer data represents the pixel data and is used to index into the output lookup table. The output lookup table generates either an 8-bit pixel value for monochrome or a 24-bit pixel value for color.
SVGA Interface: This is an optional interface for the frame grabber. The frame grabber
can be designed to include an SVGA frame buffer with its own output lookup table and
digital-to-analog converter.
Analog Output Mixer: The output from the SVGA DAC and the output from the image frame buffer DAC are mixed to generate overlay output signals. The primary components involved are the display image frame buffer and the display SVGA buffer. The display SVGA frame buffer is overlaid on the image frame buffer or live video. This allows SVGA graphics to be displayed over live video.
Pixel point to point processing: In pixel point-to-point processing, operations are carried
out on individual pixels one at a time.
Spatial Filter Processing: The rate of change of shades of gray or colors is called spatial frequency. The process of generating images with either low or high spatial-frequency components is called spatial filter processing.
Low Pass Filter: A low pass filter causes blurring of the image and appears to cause a
reduction in noise.
High Pass Filter: The high-pass filter causes edges to be emphasized. The high-pass filter
attenuates low-spatial frequency components, thereby enhancing edges and sharpening the
image.
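Both kinds of filter are commonly implemented as 3x3 convolutions. A minimal sketch (Python; the kernels and the tiny test image are illustrative, and border pixels are skipped for brevity):

```python
def convolve3x3(image, kernel):
    """Apply a 3x3 spatial filter to the interior pixels of a grayscale image."""
    h, w = len(image), len(image[0])
    return [[sum(image[y + dy][x + dx] * kernel[dy + 1][dx + 1]
                 for dy in (-1, 0, 1) for dx in (-1, 0, 1))
             for x in range(1, w - 1)]
            for y in range(1, h - 1)]

low_pass = [[1 / 9] * 3 for _ in range(3)]          # averaging kernel: blurs, cuts noise
high_pass = [[0, -1, 0], [-1, 5, -1], [0, -1, 0]]   # sharpening kernel: emphasizes edges

img = [[10, 10, 10, 90],
       [10, 10, 10, 90],
       [10, 10, 10, 90]]
print(convolve3x3(img, low_pass))    # smooths the 10-to-90 edge
print(convolve3x3(img, high_pass))   # exaggerates it: [[10, -70]]
```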
Frame Processing: Frame processing operations are most commonly used for geometric operations, image transformation, and image data compression and decompression. Frame processing operations are very compute-intensive, requiring many multiply and add operations similar to spatial filter convolution operations.
Image scaling: Image scaling allows enlarging or shrinking the whole or part of an image.
Image rotation: Image rotation allows the image to be rotated about a center point. The
operation can be used to rotate the image orthogonally to reorient the image if it was
scanned incorrectly. The operation can also be used for animation.
Image translation: Image translation allows the image to be moved up and down or side to
side. Again, this function can be used for animation.
The translation formula is:
Pixel output (x, y) = Pixel input (x + Tx, y + Ty)
where Tx and Ty are the horizontal and vertical translation offsets, and x, y are the spatial coordinates of the original pixel.
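The formula maps directly to code. A sketch (Python; pixels shifted in from outside the image are filled with a background value, an assumption the text does not specify):

```python
def translate(image, tx, ty, background=0):
    """Pixel output (x, y) = Pixel input (x + Tx, y + Ty); out-of-range reads get background."""
    h, w = len(image), len(image[0])
    return [[image[y + ty][x + tx] if 0 <= y + ty < h and 0 <= x + tx < w else background
             for x in range(w)]
            for y in range(h)]

img = [[1, 2],
       [3, 4]]
print(translate(img, 1, 0))   # [[2, 0], [4, 0]]: contents shift left by one pixel
```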
Image transformation: An image contains varying degrees of brightness or colors defined
by the spatial frequency. The image can be transformed from spatial domain to the
frequency domain by using frequency transform.
1. Cel animation
The term cel derives from the clear celluloid sheets that were used for drawing each frame, which have been replaced today by acetate or plastic. Cel animation artwork begins with keyframes (the first and last frame of an action). For example, when an animated figure of a man walks across the screen, he balances the weight of his entire body on one foot and then the other in a series of falls and recoveries, with the opposite foot and leg catching up to support the body.
2. Computer Animation
Computer animation programs typically employ the same logic and procedural concepts as cel animation, using layer, keyframe, and tweening techniques, and even borrowing from the vocabulary of classic animators. The primary difference between animation software programs is in how much must be drawn by the animator and how much is automatically generated by the software.
• In 2D animation the animator creates an object and describes a path for the object to follow. The software takes over, actually creating the animation on the fly as the program is being viewed by the user.
• In 3D animation the animator puts his effort into creating models of individual objects and designing the characteristics of their shapes and surfaces.
• Paint is most often filled or drawn with tools using features such as gradients and anti-aliasing.
3. Kinematics
• It is the study of the movement and motion of structures that have joints, such as a
walking man.
• Inverse kinematics, available in high-end 3D programs, is the process by which you link objects such as hands to arms and define their relationships and limits.
• Once those relationships are set, you can drag these parts around and let the computer calculate the result.
4. Morphing
Morphing is a popular effect in which one image transforms into another. Morphing applications and other modeling tools that offer this effect can perform transitions not only between still images but often between moving images as well.
Animation File Formats
Some file formats are designed specifically to contain animations, and they can be ported among applications and platforms with the proper translators.
• Director *.dir, *.dcr
• AnimationPro *.fli, *.flc
• 3D Studio Max *.max
• SuperCard and Director *.pics
• CompuServe *.gif
• Flash *.fla, *.swf
Following is the list of few Software used for computerized animation:
• 3D Studio Max
• Flash
• AnimationPro
Toggling between image frames: We can create a simple animation by changing images at display time. The simplest way is to toggle between two different images. This approach is good for indicating a "Yes" or "No" type situation.
Rotating through several image frames: The animation contains several frames displayed
in a loop. Since the animation consists of individual frames, the playback can be paused and
resumed at any time.
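Both techniques can be sketched as frame generators (Python; the frame names are hypothetical):

```python
import itertools

def frame_loop(frames):
    """Rotate through a sequence of frames endlessly; toggling is the two-frame case."""
    return itertools.cycle(frames)

toggle = frame_loop(["yes.png", "no.png"])         # two-frame "Yes/No" toggle
print([next(toggle) for _ in range(4)])            # ['yes.png', 'no.png', 'yes.png', 'no.png']

loop = frame_loop(["f1.png", "f2.png", "f3.png"])  # rotating through several frames
print([next(loop) for _ in range(4)])              # ['f1.png', 'f2.png', 'f3.png', 'f1.png']
```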
PHOTOSHOP
INTRODUCTION
Generally, there are four components in your workspace that you will use while creating
or modifying graphics. These components are as follows:
• The Menu Bar
• The Drawing Canvas
• The Toolbox
• Palettes (There are five palettes by default)
The figure below shows each of these components similar to how they will appear on your
screen.
NAVIGATION WINDOW
TOOLS
Marquee Tool: The images in Photoshop are stored pixel by pixel, with a code indicating the color of each. The image is just a big mosaic of dots. Therefore, before you can do anything in Photoshop, you first need to indicate which pixels you want to change. The selection tool is one way of doing this. Click on this tool to select it, then click and drag on your image to make a dotted selection box. Hold Shift while you drag if you want a perfect square or circle. Any pixels within the box will be affected when you make your next move. If you click and hold on this tool with your mouse button down, you will see that there is also an oval selection shape, and a crop tool.
Crop Tool: To crop your image, draw a box with the crop tool. Adjust the selection with the selection points, and then hit Return to crop.
Lasso Tool: The lasso tool lets you select freeform shapes, rather than just rectangles and ovals.
Magic Wand: Yet another way to select pixels is with the magic wand. When you click on an area of the image with this tool, all pixels that are the same color as the pixel you clicked will be selected. Double-click on the tool to set the level of tolerance you would like (i.e., how similar in color the pixels must be to your original pixel color; a higher tolerance means a broader color range).
The Move Tool: This is a very important tool, because up until now all you have been
able to do is select pixels, and not actually move them. The move tool not only allows you to
move areas you have selected, but also to move entire layers without first making a
selection. If you hold the Option (or Alt) key while clicking and dragging with the move tool,
you can copy the selection.
Airbrush, Paintbrush, and Pencil Tools: These can be used to draw with the
foreground color on whichever layer is selected. To change the foreground color, double-click
on it in the toolbox. You will then see a palette of colors from which to choose. Select
one and click OK. To change the brush size, go to Window > Show Brushes.
Eraser Tool: Erases anything on the selected layer. You can change the eraser size by
going to Window > Show Brushes.
Line Tool: Can be used to draw straight lines. Click on the tool to select it, then click
with the tool on the canvas area and drag to draw a line. When you release the mouse button,
the line will end. You can change the thickness of the line or add arrowheads to it by
double-clicking on the tool to open its options dialog box.
Text tool: Click on this tool to select it, then click in the Canvas area. You will be
given a dialog box in which to type your text, and choose its attributes. Each new block of
text goes on its own layer, so you can move it around with the Move Tool. Once you have
placed the text, however, it is no longer editable. To correct mistakes, you must delete the
old version (by deleting its layer) and replace it.
Eyedropper: Click with this tool on any color in the canvas to make that color the
foreground color. (You can then paint or type with it).
Magnifier: Click with this tool on a part of your image you want to see closer, or drag
with it to define the area you want to expand to the size of the window. Hold down
the Option or Alt key to make it a "reducer" instead and zoom back out.
Grabber: Click with this and drag to move the entire page for better viewing.
Options Bar
The Options bar appears at the top of the screen and is context sensitive, changing as you
change tools. The tool in use is shown in the left corner, and options relating to the tool
appear to the right of that.
IMPORTING AND EXPORTING IMAGES
Flash can import still images in many formats, but you usually use the native
Photoshop PSD format when importing still images from Photoshop into Flash.
When importing a PSD file, Flash can preserve many of the attributes that were
applied in Photoshop, and provides options for maintaining the visual fidelity of the image
and further modifying the image. When you import a PSD file into Flash, you can choose
whether to represent each Photoshop layer as a Flash layer or as individual keyframes.
There are two methods of saving a photo in Photoshop, and each has a specific purpose. One
way is to use the typical Save As... dialog; the other is a special feature in Photoshop
called Save for Web & Devices..., which is used to save your photos in preparation for
publication to the Web.
1) Save as: Use this method when saving your photo for archiving or if you plan to work on
it later. We recommend saving the file as a Photoshop (.PSD) file, which will also
save extra Photoshop-specific information about your photo.
2) Save for Web: Use this when you are ready to export your photo for publication to the
Web. While it's possible to save a photo with the regular "save as..." option and still publish
it to the Web, Photoshop's built-in "Save for Web" feature specifically prepares your
photo for the Web and has added features that allow you to see how it will appear once it's
on a Web site. This ensures your photos will show up properly on the Web.
Resize and rotate images
STEPS:
1. Open a file containing layers.
2. Click on the layer you want to resize and rotate.
3. Go to "Edit," "Free Transform" or press "Ctrl" and "T."
4. Click and drag the corner handles to resize the layer. Hold down the "Shift" key while
you do this to constrain the image's proportions. You can also scale the layer using the
top, bottom or side handles.
5. Hold the mouse pointer just outside one of the corner handles until it becomes a double-
sided arrow. Click and drag to rotate the layer. "Shift" will limit the rotation to every 15
degrees.
6. Double-click within the "Free Transform" box to apply the changes to the layer.
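The Shift-constrained rotation in step 5 can be sketched as a simple snapping rule. This is an illustrative model only (not Photoshop's actual code): when Shift is held, the dragged angle snaps to the nearest multiple of 15 degrees.

```javascript
// Illustrative sketch of Free Transform's Shift-constrained rotation:
// with Shift held, the drag angle snaps to the nearest 15-degree step.
function snapAngle(dragAngleDegrees, shiftHeld) {
  if (!shiftHeld) return dragAngleDegrees;
  return Math.round(dragAngleDegrees / 15) * 15;
}
```

So a free drag of 37 degrees stays at 37 degrees, but with Shift held it snaps to 30.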
FLASH
INTRODUCTION
FLASH ELEMENTS
To design and deliver Flash documents, you work within the Adobe Flash authoring
environment. The Quick Start page provides easy access to your most frequently used
actions. The stage contains the visible elements, such as text, components, and images of the
document. The Timeline represents different phases, or frames, of an animation. The Tools
panel provides the tools used to create and manipulate objects on the stage. The Edit bar tells
you what you are currently working on and gives you the ability to change the magnification
level of the stage. The panels provide access to a wide variety of authoring tools.
The Tools Panel
The Tools panel is divided into four sections. You can use the tools in the Tools panel to draw,
paint, select, and modify artwork, as well as change the view of the stage.
Area Contents
Tools Contains drawing, painting, and selection tools.
View Contains zooming and panning tools.
Colors Contains tools for setting stroke and fill colors.
Options Displays options for the selected tool that affect the tool's painting or editing
operations.
Panels
Panels are Flash screen elements that give you easy access to the most commonly used features
in Flash. Panels help you view, organize, and change elements in your Flash document. Panels
can have different appearances or states and are categorized into different types based on
function.
Panel States
Panels have three states:
1. Open—Visible in the interface with a content window.
2. Collapsed—Visible in the interface as only a title bar with the content window hidden.
3. Closed—Not visible in the interface.
Panel Types
Panels are divided into three different types.
Panel Type Included Panels
Design
• Align
• Color
• Swatches
• Info
• Scene
• Transform
Development
• Actions
• Behaviors
• Components
• Component Inspector
• Debugger
• Output
• Web Services
Other
• Accessibility
• History
• Movie Explorer
• Strings
• Common Libraries
FLASH ANIMATIONS
TWEENING: Short for in-betweening, tweening is the process of generating intermediate frames
between two images to give the appearance that the first image evolves smoothly into the second.
Tweening is a key process in all types of animation, including computer animation. Sophisticated
animation software enables you to identify specific objects in an image and define how they
should move and change during the tweening process.
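In its simplest form, in-betweening is linear interpolation between two keyframes. The sketch below is a minimal illustration (not Flash's actual tween engine, which also interpolates scale, rotation, and color, and can apply easing curves): it computes an object's position on any intermediate frame between a start and an end keyframe.

```javascript
// Minimal sketch of in-betweening: linearly interpolate a position
// between a start keyframe and an end keyframe. Real tween engines
// handle many properties and easing curves; this shows only the core idea.
function tweenPosition(start, end, frame, totalFrames) {
  const t = frame / totalFrames;          // 0 at first frame, 1 at last
  return {
    x: start.x + (end.x - start.x) * t,
    y: start.y + (end.y - start.y) * t,
  };
}

// Frame 5 of a 10-frame tween from (0, 0) to (100, 50):
tweenPosition({ x: 0, y: 0 }, { x: 100, y: 50 }, 5, 10); // { x: 50, y: 25 }
```

The animator supplies only the two keyframes; the software generates every frame in between.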
IMPORT
When you work with Flash, you'll often import assets into a document. Perhaps you have
a company logo, or graphics that a designer has provided for your work. You can import a
variety of assets into Flash, including sound, video, bitmap images, and other graphic formats
(such as PNG, JPEG, AI, and PSD). Imported graphics are stored in the document's library. The
library stores both the assets that you import into the document, and symbols that you create
within Flash. A symbol is a vector graphic, button, font, component, or movie clip that you
create once and can reuse multiple times.
So you don’t have to draw your own graphics in Flash, you can import an image of a pre-drawn
gnome from the tutorial source file. Before you proceed, make sure that you save the source files
for this tutorial as described in “Open the finished FLA file”, and save the images to the same
directory as your banner.fla file.
1. Select File > Import > Import to Library to import an image into the current document.
You'll see the Import dialog box, which enables you to
browse to the file you want to import. Browse to the folder on your hard disk that contains an
image to import
into your Flash document.
2. Navigate to the directory where you saved the tutorial’s source files, and locate the bitmap
image saved in the FlashBanner/Part1 directory.
3. Select the gnome.png image, and click Open (Windows) or Import (Macintosh). The image is
imported into the document's library.
4. Select Window > Library to open the Library panel. You'll see the image you just imported,
gnome.png, in the document's library.
5. Select the imported image in the library and drag it onto the Stage. Don't worry about where
you put the image on the Stage, because you'll set the coordinates for the image later. When you
drag something onto the Stage, you will see it in the SWF file when the file plays.
ADDING SOUNDS
The first step of this process is to prepare the audio file you will be importing. Flash is
most compatible with uncompressed audio formats such as WAV and AIFF files, and
automatically compresses the audio to MP3 when you publish your movie. If Flash returns an
error when you attempt to import an MP3 file, convert it to either WAV or AIFF in a 3rd-party
audio conversion application and try again.
Once your audio file is ready, the next step will be to make a new layer and name it
audio. This naming system is of course only a suggestion.
In the next step we will locate the file we wish to import. Go to 'File>Import>Import to
Library' and browse to your audio file. Selecting it once it is located will import the file into the
library and a new audio icon will appear with your other files.
Once the audio file has been imported into the library, select the audio layer in your
timeline, and drag your file from the library directly onto the work area of your main stage. You
should see a thin waveform appear in those few frames in the audio layer of your timeline. Now
select the end frame and drag it further down the timeline; this should reveal the rest of the
waveform in the audio layer.
These waveform spikes will assist you in timing your audio to your animation. If you
want to move the beginning of the audio around in the layer, you can select the first frame, and
drag it to the frame where you would like the audio to begin. Then just grab the last frame and
drag it to where you want your audio to end. One thing to know is that the audio file on the layer
cannot have a Keyframe between the beginning and the end of the waveform. If you insert one it
will stop the audio at that point and clear any audio after it. You can still move the frame at the
end of the audio to reveal the waveform again however. Also it would likely make it easier if you
reserve this layer only for the audio you have applied to it.
If you are importing an audio file longer than your animation, you will have to fade it out. If
you do not cut the audio down, your .swf file will get to the end of the animation and repeat
while the previous audio continues to play. This will result in multiple layers of audio playing at
once, and will distort the audio. To fade the audio out simply select it in the "audio" layer, and
you will notice the right hand side of your properties inspector will display the audio file name.
(audio inspector)
Select edit from your properties inspector. This will open the edit envelope window,
which displays the left and right channel waveforms up close. The first thing to do is check the
bottom of the window and make sure Frames is selected instead of Time. Doing this will change
the numbers between the waveforms to correspond to the frame on which the audio is happening
instead of the time at which it happens.
The dark line across the top of the waveform represents the audio level. If you click on it,
you will create a control point. I created two control points, then left one at the top around frame
158, and dragged the other to the bottom around frame 171.
This will cause the audio to fade out when it gets to these frames. My audio is now where
I want it to be. You can also use this technique to edit the level of the left and right channels
throughout your project if you wanted certain parts to be louder or softer.
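The envelope line can be described as piecewise linear interpolation between control points. The sketch below is an illustrative model (not Flash's edit envelope itself): given control points mapping frames to gain levels (1 = full volume, 0 = silent), it computes the gain at any frame. The frames 158 and 171 mirror the example above; they are specific to that project, not fixed values.

```javascript
// Hedged sketch of an audio edit envelope: control points map frames to
// gain levels, and the gain at any frame is found by linear interpolation
// between the surrounding points.
function gainAtFrame(controlPoints, frame) {
  // controlPoints: array of { frame, level }, sorted by frame
  if (frame <= controlPoints[0].frame) return controlPoints[0].level;
  const last = controlPoints[controlPoints.length - 1];
  if (frame >= last.frame) return last.level;
  for (let i = 0; i < controlPoints.length - 1; i++) {
    const a = controlPoints[i], b = controlPoints[i + 1];
    if (frame >= a.frame && frame <= b.frame) {
      const t = (frame - a.frame) / (b.frame - a.frame);
      return a.level + (b.level - a.level) * t;
    }
  }
}

// Fade out between frame 158 (full volume) and frame 171 (silent):
const fade = [{ frame: 158, level: 1 }, { frame: 171, level: 0 }];
gainAtFrame(fade, 100); // 1 -- before the fade starts
gainAtFrame(fade, 171); // 0 -- fully faded out
```

Adding more control points between the first and last lets you make certain parts louder or softer, as the text describes.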
ACTION SCRIPTS
You’ll need to give Flash instructions. Among other things, you’ll tell Flash to stop the
movie, play the movie, or jump to a specific place in the timeline. These tasks that Flash will
perform at your request are called "actions".
In this section, we'll go through an exercise that uses the stop action to keep a movie from
looping. The stop action can also be used to halt the movie while you're waiting for a user to
make a choice about which button they want to click or what they're going to do next.
1. Build a small, twenty-frame movie. It doesn’t have to be anything special. A symbol
animating across the screen will do fine.
2. Hit F12 to test the movie.
Notice that it loops.
Now we’re going to place a stop action on the last frame of our twenty-frame movie. The stop
action will halt the playback head when it reaches frame 20, thus keeping the movie from
looping.
3. Add a new layer and name it Actions.
Remember that frame actions should get their own top layer.
4. On the Actions layer, add a keyframe (F6) on frame 20.
5. Open Basic Actions by clicking on the plus sign. You'll be able to see the actions.
6. Choose Stop.
You'll notice that code for your action was added to the right side of the Frame Actions palette.
You may also notice that a small "a" appeared in your keyframe.
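What the stop action does can be shown with a toy model of the playhead. This is not real ActionScript, just a JavaScript sketch of the behavior: the playhead advances one frame per tick and loops by default, but a frame carrying a stop action halts playback on that frame.

```javascript
// Toy model (not ActionScript) of frame actions: the playhead advances
// each tick and loops, unless a frame carries a "stop" action, which
// halts playback on that frame -- preventing the loop.
function makePlayhead(totalFrames, actions) {
  let frame = 1;        // Flash numbers frames from 1
  let playing = true;
  return {
    tick() {
      if (!playing) return frame;
      frame = frame < totalFrames ? frame + 1 : 1;  // loop by default
      if (actions[frame] === "stop") playing = false;
      return frame;
    },
    isPlaying() { return playing; },
  };
}

// A 20-frame movie with stop on frame 20 plays once and halts there,
// instead of looping back to frame 1:
const movie = makePlayhead(20, { 20: "stop" });
```

The same model extends naturally to the other basic actions the unit lists: play would set `playing` back to true, and a goto action would jump `frame` to a specific number before continuing.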