MM Unit 1 - 4


MULTIMEDIA

UNIT – I Introduction: Multimedia - Characteristics of Multimedia Presentation - Hardware and Software Requirements - Steps for Creating a Multimedia Presentation. Analog Representation: Waves - Digital Representation and its Needs.

UNIT – II Text: Introduction - Types of Text: Unformatted, Formatted and Hypertext. Font: Appearance, Size & Style - Insertion of Text: Using Keyboard - Copying & Pasting - Using OCR Software - Text Compression - File Formats: TXT - DOC - RTF - PDF.

UNIT – III Image: Introduction - Image Types - Color Models: RGB - CMYK - Device Dependency & Gamut - Basic Steps for Image Processing - Scanner: Working Principle - Scanner Types - Color Scanning - Digital Camera: Working Principle - Storage and Software Utility.

UNIT – IV Audio: Introduction - Acoustics - Nature of Sound Waves - Fundamental Characteristics of Sound: Amplitude - Frequency - Waveform - Speed - Microphone: Types of Microphone - Dynamic, Condenser, Omnidirectional, Bidirectional, Unidirectional, Polar Plot - Loudspeaker.

UNIT – V Video: Introduction - Analog Video Camera: Monochrome Video Camera, Color Video Camera - Transmission of Video Signals - Video Signal Formats: Component, Composite, S-Video, SCART Connector - Video File Formats.
UNIT – I

Multimedia: An Overview

Introduction: The word 'multimedia' comes from the Latin words multus, which means 'numerous', and media, which means 'middle' or 'center'. Multimedia therefore means 'multiple intermediaries' or 'multiple means'. Multimedia is a combination of the following elements:
Text (e.g. books, letters, newspapers)
Images and graphics (e.g. photographs, charts, maps, logos, sketches)
Sound (e.g. radio, gramophone records and audio cassettes)
Video and animation (e.g. TV, video cassettes and motion pictures)

Characteristics of Multimedia Presentation: Multimedia is any combination of text, graphics, art, sound and video elements. The following are the important characteristics of a multimedia presentation:
Multiple media
Non-linearity
Interactivity
Digital representation
Integrity

MULTIPLE MEDIA:
In addition to text, pictures also started being used to communicate ideas. Pictures are sub-divided into two types:
I. Real-world pictures captured by a camera are called images.
II. Hand-drawn pictures like sketches, diagrams and portraits are called graphics.

Text, images and graphics are together referred to as static elements, because they do not change over time. With further improvements in technology, time-varying elements like sound and movies came into use. Movies are again divided into two classes:
Motion pictures
Animation

A legitimate multimedia presentation should contain at least one static medium like text, images or graphics and at least one time-varying medium like audio, video or animation.

NON-LINEARITY: Non-linearity is the capability of jumping or navigating from one point of a presentation to another without appreciable delay. TV shows and motion pictures are considered linear presentations because the viewer has to watch the information in the order in which it is presented and cannot modify the content. In a multimedia presentation the user can instantly navigate to different parts of the presentation and display the frames in any order, without appreciable delay, due to which it is called a non-linear presentation.
INTERACTIVITY: In a non-linear presentation the user has to specify what he or she wants to watch. The presentation should therefore be capable of accepting user inputs and of changing its content accordingly. Interactivity is considered to be one of the salient features on which next-generation e-learning tools are expected to rely for greater effectiveness.
DIGITAL REPRESENTATION: Magnetic tapes are sequential access storage devices, i.e. data is recorded sequentially along the length of the tape. When a specific portion of the data is required to be played back, the portion before it needs to be skipped. Multimedia requires instant access to different portions of the presentation. This is provided by random access storage devices like hard disks, floppy disks and compact discs. Digital representation has other advantages: software-based programs can be used to edit the digitized media in various ways to alter their appearance, and to compress the file sizes to increase performance efficiency.
INTEGRITY:
An important characteristic of a multimedia presentation is integrity. This means that although there may be several media types present and playing simultaneously, they need to be integrated into, or be part of, a single entity which is the presentation. It should not be possible to separate out the various media and control them independently; rather they should be controlled from within the framework of the presentation. Moreover, the presentation should decide how the individual elements can be controlled.

HARDWARE & SOFTWARE REQUIREMENTS: Hardware and software requirements of a multimedia personal computer can be classified into two classes:
a. Multimedia playback
b. Multimedia production

Multimedia playback:
Processor – at least Pentium class, with a minimum of 8 MB to 32 MB of RAM.
Hard disk drive (HDD) – at least 540 MB with 15 ms access time, able to provide 1.5 MB per second sustained throughput.
The monitor and video display adapter should conform to SVGA standards and support an 800x600 display mode with true color.
CD-ROM drive having a speed of at least 4X, but higher speeds like 36X are recommended.
The PC should have a sound card with attached speakers, a standard 101-key keyboard and a mouse.
Multimedia PC system software should be compatible with Windows 95 or higher, with standard software for playback of media files in standard formats, e.g. Windows Media Player.

Multimedia production:
Processor – Pentium II or higher; memory should be at least 128 MB, with 256 MB recommended.
Hard disk drive (HDD) – typical requirements would be around 10 GB, with 40 GB recommended.
The monitor and video display adapter should conform to SVGA standards and support an 800x600 display mode with true color; the display adapter RAM should be 4 MB to 8 MB.
CD-ROM drive having a speed of 4X to 36X; the PC should also have a CD writer.
The PC should have a sound card with attached speakers, a standard 101-key keyboard and a mouse.
Multimedia PC system software should be compatible with Windows 95 or higher, with standard software for playback of media files in standard formats, e.g. Windows Media Player.
Editing software is used to manipulate media components to suit the developer's requirements, e.g. Adobe Photoshop, Flash, Cool Edit and Sound Forge.
Authoring software is used to integrate all the edited media into a single presentation and build navigational pathways for accessing the media.
To display web content, web browsers will be required (e.g. MS Internet Explorer); to create web content, HTML and JavaScript editors might be required (e.g. Macromedia Dreamweaver).

STEPS FOR CREATING A MULTIMEDIA PRESENTATION: Here are the basic steps for creating a multimedia presentation:

(i) Choosing a Topic
(ii) Writing a Story
(iii) Writing a Script
(iv) Preparing a Storyboard
(v) Preparing a Flow Line
(vi) Implementation
(vii) Testing and Feedback
(viii) Final Delivery

Choosing a Topic: The first task is to choose a topic on which to create the presentation. In principle, one can select any topic, but topics which can be explained or demonstrated using various media types are more conducive to a multimedia presentation. Use of text is not prohibited, but should be kept to a minimum, for example not more than a few lines per page of the presentation. When choosing a topic one should make a mental note of how the subject matter should be divided and which entry points should give access to which module. The author should also decide who the target audience is, and the objectives of the presentation, i.e. what the audience is expected to learn after going through the presentation.

Writing a Script: Once the overall subject matter has been finalized, the next step is to create a script. A script emphasizes how the subject matter unfolds. While writing a script, the author visualizes the content in terms of frames, for example, what is to be displayed on the first screen. This requires the subject matter of the story to be divided into small modules, one for each screen. The script could also include other accessory information like how the elements are displayed on the screen.

Preparing a Storyboard: Once the script has been prepared, the author needs to prepare the storyboard. The storyboard depicts the layout of each screen within the presentation. The screens should have an aesthetic feel about them and should be pleasant to look at.

Preparing a Flow Line: Alongside the storyboard, the author should also prepare a flow line. A flow line tells us at a glance how the user can access different pages of the presentation.

Implementation: Implementation means actually creating the physical presentation using the required hardware and software. Implementation has a number of sub-steps. The first step is the collection of media items; the author can also use software to create their own items. There are two types of implementation software:
(i) The first type is editing software, which is used to edit the digitized items.
(ii) The second type is authoring software, which is used to integrate all the edited media into a single presentation. The output of the authoring software is usually an executable file (EXE) which contains its own runtime engine and can therefore be played without the help of any other software.

Testing and Feedback: After the implementation phase is completed, an important step of testing and feedback should be carried out to improve the quality of the presentation. This step involves distributing the whole (or part) of the presentation to sections of the target audience and gathering feedback from them about the possible areas which need improvement. Developers always work under various constraints and do not have indefinite time on their hands.

Final Delivery: The final phase in the production schedule is the delivery of the application to the intended client. Usually the runtime version of the application files is copied onto a CD-ROM and physically handed over to the customer. It is also important for the author to state clearly the hardware and software requirements which should be present on the client machine to run the application smoothly.
UNIT-II
Text:

Introduction

In multimedia presentations, text can be combined with other media in a powerful way to present
information and express moods. Text can be of various types:

Plaintext, consisting of fixed-size characters having essentially the same type of appearance.
Formatted text, where the appearance can be changed using font parameters.
Hypertext, which can serve to link different electronic documents and enable the user to jump from one to the other in a non-linear way.

Internally text is represented via binary codes as per the ASCII table. The ASCII table is
however quite limited in its scope and a new standard has been developed to eventually replace
the ASCII standard. This standard is called the Unicode standard and is capable of representing
international characters from various languages throughout the world. We can also generate text automatically from a scanned version of a paper document or image using Optical Character Recognition (OCR) software.

TYPES OF TEXT:

There are three types of text that can be used to produce pages of a document:
 Unformatted text
 Formatted text
 Hypertext

I. Unformatted Text:

Also known as plaintext, this comprise of fixed sized characters from a limited character set. The
character set is called ASCII table which is short for American Standard Code for Information
Interchange and is one of the most widely used character sets. It basically consists of a table
where each character is represented by a unique 7-bit binary code. The characters include a to z,
A to Z, 0 to 9, and other punctuation characters like parenthesis, ampersand, single and double
quotes, mathematical operators, etc. All the characters are of the same height. In addition, the
ASCII character set also includes a number of control characters. These include BS (backspace),
LF (linefeed), CR (carriage return), SP (space), DEL (delete), ESC (escape), FF (form feed) and
others.
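The 7-bit ASCII codes mentioned above are easy to inspect programmatically. The following short Python sketch (an illustration added here, not part of the original text) prints the decimal code and the 7-bit binary pattern of a few characters, including two control characters:

    # Print each character's ASCII code as a decimal number and as 7 binary bits.
    for ch in "Az9&":
        print(ch, ord(ch), format(ord(ch), '07b'))

    # Control characters have codes too, e.g. LF (linefeed) = 10, CR (carriage return) = 13.
    print(ord('\n'), ord('\r'))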

II. Formatted Text:

Formatted text is text where, apart from the actual alphanumeric characters, other control characters are used to change the appearance of the characters, e.g. bold, underline, italics, varying shapes, sizes and colors. Most text processing software uses such formatting options to change the text appearance. Formatting is also extensively used in the publishing sector for the preparation of papers, books, magazines, journals and so on.

III. Hypertext:

The term hypertext is used to mean certain extra capabilities imparted to normal or standard text. Like normal text, a hypertext document can be used to reconstruct knowledge through sequential reading, but additionally it can be used to link multiple documents in such a way that the user can navigate non-sequentially from one document to the other for cross-references. These links are called hyperlinks. For example, in a link labelled "Microsoft Home Page", the underlined text string on which the user clicks the mouse is called an anchor, and the document which opens as a result of clicking is called the target document. On the web, target documents are specified by a specific nomenclature called the web site address, technically known as a Uniform Resource Locator or URL.
Node or Anchor: The anchor is the actual visual element (text) which provides an entry point to another document. In most cases the appearance of the anchor text is changed from the surrounding text to designate a hyperlink; by default it is colored blue with an underline. Moreover, the mouse pointer changes to a finger icon when placed over it. The user usually clicks on the hypertext in order to activate it and open a new document in the document viewer. In some cases, instead of text, an anchor can be an image, a video or some other non-textual element (hypermedia).

Pointer or Link: These provide connections to other information units known as target documents. A link has to be defined at the time of creating the hyperlink, so that when the user clicks on an anchor the appropriate target document can be fetched and displayed. Usually some information about the target document should be available to the user before clicking on the anchor. If the destination is a text document, a short description of the content can be presented.
FONT:
Insertion of Text
Text can be inserted in a document using a variety of methods. These are:

1) Using a keyboard

The most common process of inserting text into a digital document is by typing the text using an
input device like the keyboard. Usually a text editing software, like Microsoft Word, is used to
control the appearance of text which allows the user to manipulate variables like the font, size,
style, color, etc.,

2) Copying and Pasting

Another way of inserting text into a document is by copying text from a pre-existing digital document. The existing document is opened using the corresponding text processing program and portions of the text may be selected using the keyboard or mouse. Using the Copy command, the selected text is copied to the clipboard; by choosing the Paste command, the text is copied from the clipboard into the target document.

3) Using an OCR Software

A third way of inserting text into a digital document is by scanning it from a paper document. Text in a paper document, including books, newspapers, magazines, letterheads, etc., can be converted into electronic form using a device called the scanner. The electronic representation of the paper document can then be saved as a file on the hard disk of the computer. To be able to edit the text, it needs to be converted from the image format into an editable text format using Optical Character Recognition (OCR) software. OCR software traditionally works by a method called pattern matching. Recent research on OCR is based on another technology called feature extraction.
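To give a rough idea of what pattern matching means here, the toy Python sketch below (a deliberately simplified assumption, not how real OCR engines work) compares a scanned glyph, stored as a tiny 3x3 bitmap, against stored templates and picks the closest match:

    # Toy templates: 3x3 bitmaps, one 3-bit integer per row.
    TEMPLATES = {
        "I": [0b010, 0b010, 0b010],
        "L": [0b100, 0b100, 0b111],
    }

    def recognise(glyph):
        # Count differing pixels against every template; fewest differences wins.
        def distance(a, b):
            return sum(bin(x ^ y).count("1") for x, y in zip(a, b))
        return min(TEMPLATES, key=lambda ch: distance(TEMPLATES[ch], glyph))

    print(recognise([0b100, 0b100, 0b110]))   # prints 'L', the closest template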

TEXT COMPRESSION:

Large text documents covering a number of pages may take up a lot of disk space. We can apply compression algorithms to reduce the size of the text file during storage. A reverse algorithm must be applied to decompress the file before its contents can be displayed on screen. The following compression methods are applied to text:
a. Huffman Coding:

This type of coding is intended for applications in which the text to be compressed has known
characteristics in terms of the characters used and their relative frequencies of occurrences. An
optimum set of variable-length code words is derived such that the shortest code word is used to
represent the most frequently occurring characters. This approach is called the Huffman coding
method.
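A minimal Python sketch of the idea is given below (an illustrative implementation added here, not the algorithm text of the book): character frequencies are counted, and the two least frequent groups are repeatedly merged, so frequent characters end up with shorter code words and rare characters with longer ones.

    import heapq
    from collections import Counter

    def huffman_codes(text):
        freq = Counter(text)
        # Each heap entry: (frequency, tie-breaker, {character: code-so-far}).
        heap = [(f, i, {ch: ""}) for i, (ch, f) in enumerate(freq.items())]
        heapq.heapify(heap)
        if len(heap) == 1:                        # degenerate single-character text
            return {ch: "0" for ch in heap[0][2]}
        tie = len(heap)
        while len(heap) > 1:
            f1, _, c1 = heapq.heappop(heap)       # two least frequent groups
            f2, _, c2 = heapq.heappop(heap)
            merged = {ch: "0" + code for ch, code in c1.items()}
            merged.update({ch: "1" + code for ch, code in c2.items()})
            heapq.heappush(heap, (f1 + f2, tie, merged))
            tie += 1
        return heap[0][2]

    codes = huffman_codes("this is an example")
    print(sorted(codes.items(), key=lambda kv: len(kv[1])))   # frequent chars get shorter codes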
b. Lempel-Ziv (LZ) Coding

In the second approach, followed by the Lempel-Ziv (LZ) method, instead of using a single character as the basis of the coding operation, a string of characters is used. For example, a table containing all the possible words that occur in a text document is held by both the encoder and the decoder.
c. Lempel-Ziv-Welch (LZW) Coding

Most word processing packages have a dictionary associated with them which is used for both spell checking and compression of text. A variation of the above algorithm called the Lempel-Ziv-Welch (LZW) method allows the dictionary to be built up dynamically by the encoder and decoder for the document under processing.
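The sketch below is a bare-bones Python version of LZW compression (an assumed illustration; real implementations add details such as code-size management). The dictionary starts with single characters and grows dynamically as longer strings are encountered:

    def lzw_compress(text):
        # Initial dictionary: one entry per possible single character (codes 0-255).
        dictionary = {chr(i): i for i in range(256)}
        next_code = 256
        w = ""
        output = []
        for ch in text:
            wc = w + ch
            if wc in dictionary:
                w = wc                           # keep extending the current string
            else:
                output.append(dictionary[w])     # emit code for the longest known string
                dictionary[wc] = next_code       # add the new string dynamically
                next_code += 1
                w = ch
        if w:
            output.append(dictionary[w])
        return output

    print(lzw_compress("TOBEORNOTTOBEORTOBEORNOT"))   # repeated substrings reuse dictionary codes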

FILE FORMATS:

The following text formats are usually used for textual documents.

TXT (Text)

Unformatted text document created by an editor like Notepad on the Windows platform. These documents can be used to transfer textual information between different platforms like Windows, DOS and UNIX.

DOC (Document)

Developed by Microsoft as a native format for storing documents created by the MS Word
package. Contains a rich set of formatting capabilities.

RTF (Rich Text Format)

Developed by Microsoft in 1987 for cross-platform document exchange. It is the default format for Mac OS X's default editor TextEdit. RTF control codes are human readable, similar to HTML code.

PDF (Portable Document Format)

Developed by Adobe Systems for cross platform exchange of documents. In addition to text the
format also supports images and graphics. PDF is an open standard and anyone may write
programs that can read and write PDFs without any associated royalty charges.
PostScript (PS)

PostScript is a page description language used mainly for desktop publishing. A page description language is a high-level language that can describe the contents of a page such that it can be accurately displayed on output devices, usually a printer. A PostScript interpreter inside the printer converts the vectors back into the raster dots to be printed. This allows arbitrary scaling, rotation and other transformations.
UNIT-III
IMAGES: INTRODUCTION
The pictures that we see in our everyday life can be broadly classified into two groups:

 Images
 Graphics

Images can either be pure black and white, or grayscale having a number of grey shades, or color, containing a number of color shades. Color is a sensation that light of different frequencies generates in our eyes, the higher frequencies producing the blue end and the lower frequencies producing the red end of the visible spectrum. White light is a combination of all the colors of the spectrum. To recognize and communicate color information we need to have color models.

The two most well known color models are the RGB model, used for colored lights like images on a monitor screen, and the CMYK model, used for colored inks like images printed on paper. One of the most well known device-independent color models is the HSB model, where the primaries are hue, saturation and brightness. The total range of colors in a color model is known as its gamut. The input stage deals with the issues of converting hardcopy paper images into electronic versions. This is usually done via a device called the scanner. While scanners are used to digitize paper documents, another device called the digital camera can convert a real-world scene into a digital image. A digital camera also contains a number of electronic sensors, known as Charge-Coupled Devices (CCD), which essentially operate on the same principle as those in a scanner. The editing stage involves operations like selecting, copying, scaling, rotating, trimming, and changing the brightness, contrast and color tones of an image to transform it as per the requirements of the application. The output stage involves saving the transformed image in a file format which can be displayed on the monitor screen or printed on a printer. To save the image, it is frequently compressed by a compression algorithm, and the final image can be saved in a variety of file formats.

IMAGE TYPES: Images that we see in our everyday lives can be categorized into various types.

1. Hard Copy vs. Soft Copy

The typical images that we usually come across are the pictures that have been printed on paper or some other kind of surface like plastic, cloth, wood, etc. These are called hard copy images because they have been printed on solid surfaces. Images that have been transformed from hard copy images or real objects into electronic form, using specialized procedures, are referred to as soft copy images.

2. Continuous Tone, Half-tone and Bitone

Photographs are also known as continuous tone images because they are usually composed of a large number of varying tones or shades of colors. Sometimes, due to limitations of the display or printing devices, all the colors of the photograph cannot be represented adequately. In those cases a subset of the total number of colors is displayed. Such images are called partial tone or half-tone images. A third category of images is called bitonal images, which use only two colors, typically black and white, and do not use any shades of grey.

SEEING COLOR
The phenomenon of seeing color is dependent on a triad of factors: the nature of light, the interaction of light and matter, and the physiology of human vision. Light is a form of energy known as electromagnetic radiation. It consists of a large number of waves with varying frequencies and wavelengths. Out of the total electromagnetic spectrum, a small range of waves causes sensations of light in our eyes. This is called the visible spectrum of waves. The second part of the color triad is human vision. The retina is the light-sensitive part of the eye and its surface is composed of photoreceptors or nerve endings. The third factor is the interaction of light with matter. Whenever light waves strike an object, part of the light energy gets absorbed and/or transmitted, while the remaining part gets reflected back to our eyes. The Refractive Index (RI) is the ratio of the speed of light in a vacuum to its speed in a given medium. A beam of transmitted light changes direction according to the difference in refractive index and also the angle at which it strikes the transparent object. This is called refraction. If light is only partly transmitted by the object, the object is translucent.

COLOR MODELS:
Researchers have found that most of the colors that we see around us can be derived from mixing a few elementary colors. These elementary colors are known as primary colors. Primary colors mixed in varying proportions produce other colors called composite colors. Two primary colors mixed in equal proportions produce a secondary color. The primary colors, along with the total range of composite colors they can produce, constitute a color model.

RGB Model

The RGB color model is used to describe the behavior of colored lights like those emitted from a TV screen or a computer monitor. This model has three primary colors: red, green and blue, in short RGB. Proportions of colors are determined by the beam strength. An electron beam having the maximum intensity falling on a phosphor dot creates 100% of the corresponding color; 50% of the color results from a beam having half the peak strength. All three primary colors at full intensities combine together to produce white, i.e. their brightness values are added up. Because of this the RGB model is called an additive model. Lower intensity values produce shades of grey. A color present at 100% of its intensity is called saturated; otherwise the color is said to be unsaturated.
CMYK Model

The RGB model is only valid for describing the behavior of colored lights. A second model, named the CMYK model, is used to specify printed colors. The primary colors of this model are cyan, magenta and yellow. These colors, when mixed together in equal proportions, produce black, due to which the model is known as a subtractive model. Mixing cyan and magenta in equal proportions produces blue, magenta and yellow produce red, and yellow and cyan produce green. Thus, the secondary colors of the CMYK model are the same as the primary colors of the RGB model and vice versa. These two models are therefore known as complementary models.
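The complementary relationship between the two models can be sketched with a deliberately simplified Python conversion (an assumption added here: it ignores the black (K) component and real ink and gamut behaviour, and the function names are our own):

    def rgb_to_cmy(r, g, b):
        # Values in 0-255; cyan absorbs red, magenta absorbs green, yellow absorbs blue.
        return 255 - r, 255 - g, 255 - b

    def cmy_to_rgb(c, m, y):
        return 255 - c, 255 - m, 255 - y

    print(rgb_to_cmy(255, 0, 0))   # pure red light -> (0, 255, 255): no cyan, full magenta and yellow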

Device Dependency and Gamut

It is to be noted that both the RGB and the CMYK models do not have universal or absolute color values: different devices will give rise to slightly different sets of colors. For this reason both the RGB and the CMYK models are known as device-dependent color models. Another issue of concern here is the total range of colors supported by each color model. This is known as the gamut of the model.

BASIC STEPS FOR IMAGE PROCESSING:

Image processing is the name given to the entire process involved with the input, editing and output
of images from a system. There are three basic steps:

a. Input
Image input is the first stage of image processing. It is concerned with getting natural images
into a computer system for subsequent work. Essentially it deals with the conversion of analog
images into digital forms using two devices. The first is the scanner which can convert a printed
image or document into the digital form. The second is the digital camera which digitizes real-world
images, similar to how a conventional camera works.

b. Editing
After the images have been digitized and stored as files on the hard disk of a computer, they are changed or manipulated to make them more suitable for specific requirements. This step is called editing. Before the actual editing process can begin, an important step called color calibration needs to be performed to ensure that the image looks consistent when viewed on multiple monitors.

c. Output
Image output is the last stage in image processing concerned with displaying the edited image
to the user. The image can either be displayed in a stand-alone manner or as part of some application
like a presentation or web-page.

SCANNER

For images, digitization involves physical devices like the scanner or digital camera. The scanner is a device used to convert analog images into digital form. The most common type of scanner for the office environment is called the flatbed scanner. The traditional way of attaching a scanner to the computer is through an interface cable connected to the parallel port of the PC.

Construction and Working Principle: To start a scanning operation, the paper document to be scanned is placed face down on the glass panel of the scanner, and the scanner is activated using software from the host computer. The light, on getting reflected by the paper image, is made to fall on a grid of electronic sensors by an arrangement of mirrors and lenses. The electronic sensors are called Charge Coupled Devices (CCD) and are basically converters of light energy into voltage pulses. After a complete scan, the image is converted from a continuous entity into a discrete form represented by a series of voltage pulses. This process is called sampling. The voltage signals are temporarily stored in a buffer inside the scanner. The next step, called quantization, involves representing the voltage pulses as binary numbers and is carried out by an ADC inside the scanner in conjunction with software bundled with the scanner called the scanning software. Since each number has been derived from the intensity of the incident light, these numbers essentially represent brightness values at different points of the image and are known as pixels.
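A tiny Python sketch of the sampling-and-quantization idea (a simplified assumption: the sampled voltages are already normalised to the range 0.0 to 1.0, and an 8-bit ADC is used):

    samples = [0.0, 0.25, 0.5, 0.91, 1.0]        # sampled light intensities / voltages
    pixels = [round(v * 255) for v in samples]   # 8-bit quantization, as an ADC would do
    print(pixels)                                # [0, 64, 128, 232, 255]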

Scanner Types: Scanners can be of various types each designed for specific purposes.

a. Flatbed scanners:

The flatbed scanner is the most common type in office environments and has been described above.
It looks like a photocopying machine with a glass panel on which the document to be scanned is
placed face down. Below the glass panel is a moving head with a source of white light usually xenon
lamps.
b. Drum Scanners:

A drum scanner is used to obtain good quality scans for professional purposes and generally provides better performance than flatbed scanners. It consists of a cylindrical drum made out of a highly translucent plastic-like material. The fluid used to mount the original on the drum can be either oil-based or alcohol-based. For the sensing element, drum scanners use a Photo-Multiplier Tube (PMT) instead of a CCD. An amplifier gain of the order of 10^8 can be achieved in multiplier tubes containing about 14 dynodes, which can provide measurable pulses from even single photons.

c. Bar-code Scanners:

A barcode scanner is designed specifically to read barcodes printed on various surfaces. A barcode is a machine-readable representation of information in a visual format. Nowadays barcodes also come in other forms like dots and concentric circles. Barcodes relieve the operator of typing strings into a computer; the encoded information is directly read by the scanner. A laser barcode scanner is more expensive than an LED one but is capable of scanning barcodes at a distance of about 25 cm. Most barcode scanners use the PS/2 port for connecting to the computer.

d. Color Scanning

Since the CCD elements are sensitive to the brightness of light, the pixels essentially store only the brightness information of the original image. This is also known as luminance (or luma) information. To include the color or chrominance (chroma) information, there are three CCD elements for each pixel of the image formed. White light reflected off the paper document is split into the primary color components by a glass prism and made to fall on the corresponding CCD sub-components.

e. Pixel Information: To describe a color digital image, the pixels need to contain both the luma and the chroma values, i.e. the complete RGB information of each color. To represent an orange color we might write: R = 245 (96% of 255), G = 102 (40% of 255), B = 36 (14% of 255). This is called an RGB triplet; a more compact notation writes the same triplet in hexadecimal, e.g. #F56624. These values are also called the RGB attributes of a pixel.
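For completeness, the compact hexadecimal form can be produced with a one-line Python conversion (the function name below is our own, added purely as an illustration):

    def rgb_to_hex(r, g, b):
        # Pack each 0-255 component as two hexadecimal digits.
        return "#{:02X}{:02X}{:02X}".format(r, g, b)

    print(rgb_to_hex(245, 102, 36))   # '#F56624' for the orange example above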

f. Scan quality:

The quality of a scanned image is determined mostly by its resolution and color depth. The scanner resolution pertains to the resolution of the CCD elements inside a scanner, measured in dots per inch (dpi). Scanner resolution can be classified into two categories. The optical resolution refers to the actual number of sensor elements per inch on the scan head. Scanners, however, are often rated with resolution values higher than the optical resolution, e.g. 5400, 7200 or 9600 dpi. These resolutions are called interpolated resolutions and basically involve an interpolation process for generating new pixel values.
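As a rough illustration of how interpolation manufactures extra pixels, the Python sketch below assumes a simple linear scheme (actual scanner firmware may use more elaborate filters):

    def interpolate_row(row):
        out = []
        for a, b in zip(row, row[1:]):
            out.append(a)
            out.append((a + b) // 2)   # new pixel halfway between two optical samples
        out.append(row[-1])
        return out

    print(interpolate_row([10, 20, 40]))   # [10, 15, 20, 30, 40]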

g. Scanning Software:

To scan an image, the user needs scanning software to be installed on the computer. The software lets the user interact with the scanner and set parameters like bit depth and resolution. A typical scanning software should allow the user to do the following:
i. Set the bit depth of the image file, which in turn determines the total number of colors (see the sketch after this list).
ii. Set the output path of the scanned image.
iii. Set the file type of the scanned image. Most scanners nowadays support standard file types like BMP, JPG, TIFF, etc.
iv. Adjust the brightness and contrast parameters usually by dragging sliders.
v. Change the size of the image by specifying a scale factor.
vi. Adjust the color of the scanned image by manipulating the amounts of red, green and blue
primaries.
vii. Adjust the resolution value.

The 'final' button instructs the scanner to save the updated pixel values in a file whose type and location have been previously specified.
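The bit depth set in item (i) above directly determines how many distinct colors the file can hold, namely 2 raised to the bit depth, as the small Python sketch below (an added illustration) shows:

    for depth in (1, 8, 24):
        print(depth, "bits ->", 2 ** depth, "colors")
    # 1 bit -> 2 (bitonal), 8 bits -> 256, 24 bits -> 16,777,216 (true color)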

DIGITAL CAMERA:

Construction and working principle:

Apart from the scanner, used to digitize paper documents and film, another device used to digitize real-world images is the digital camera. Unlike a scanner, a digital camera is usually not attached to the computer via a cable. The camera has its own storage facility inside it, usually in the form of a floppy drive which can save the images created onto a floppy disk. The captured images are compressed to reduce their file sizes and are usually stored in the JPEG format. This is a lossy compression technique and results in a slight loss of image quality. Most digital cameras have an LCD screen at the back which serves two important purposes: first, it can be used as a viewfinder for composition and adjustment; secondly, it can be used for viewing the images stored inside the camera. The recent innovation of built-in microphones provides for sound annotation, in standard WAV format. After recording, this sound can be sent to an external device for playback on headphones using an ear socket.

Storage and Software utility

Digital cameras also have a software utility resident in a ROM chip inside them which allows the user to toggle between the CAMERA mode and the PLAY mode. In the PLAY mode the user is presented with a menu structure having some of the following functionalities: displaying all the images on the floppy, selecting a particular image, deleting selected images, write-protecting important images against deletion, setting the date and time, displaying how much of the floppy disk space is free, and even allowing a floppy to be formatted in the drive.
UNIT-IV

Audio

INTRODUCTION
