Graphics and Images

• Graphics and images are both non-textual information that can be displayed and
printed.
• They may appear on screens as well as on printers but cannot be displayed with
devices only capable of handling characters.
• Graphics are normally created in a graphics application and internally
represented as an assemblage of objects such as lines, curves, or circles.
• Attributes such as style, width, and color define the appearance of graphics.
• We say that the representation is aware of the semantic contents. The objects that graphics are composed of can be individually added, deleted, moved, or modified later.
• In contrast, images can be from the real world or virtual and are not editable in
the sense given above.
• Their representation ignores the semantic contents. Images are described as spatial arrays of values. The smallest addressable image element is called a pixel.
• The array, and thus the set of pixels, is called a bitmap. Object-based editing is
not possible, but image editing tools exist for enhancing and retouching bitmap
images.
• The drawback of bitmaps is that they need much more storage capacity than graphics.
• Their advantage is that no processing is necessary before displaying them, unlike
graphics where the abstract definition must be processed first to produce a
bitmap.
• Of course, images captured from an analog signal, via scanners or video
cameras, are represented as bitmaps, unless semantic recognition takes place
such as in optical character recognition.
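The contrast between the two representations can be made concrete with a short sketch. The following Python fragment is illustrative only — the object fields and the 8 × 8 bitmap are invented for the example: a graphic is an editable list of objects with attributes, while an image is just an array of pixel values with no retained object semantics.

# A graphic: an editable assemblage of objects with attributes.
graphic = [
    {"type": "line",   "start": (0, 0), "end": (7, 7), "width": 1, "color": "black"},
    {"type": "circle", "center": (4, 4), "radius": 2,  "style": "dashed"},
]
graphic[0]["color"] = "red"   # objects can be modified individually ...
del graphic[1]                # ... or deleted

# An image: a bitmap, i.e. a spatial array of pixel values (no object semantics).
WIDTH, HEIGHT = 8, 8
bitmap = [[0] * WIDTH for _ in range(HEIGHT)]
bitmap[3][5] = 1              # only individual pixels can be addressed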
Digital Image Representation
For computer representation, a continuous image function (e.g. intensity) must be sampled at discrete intervals, and each sampled value must be quantized into one of a finite set of levels.
• Points at which an image is sampled are called picture elements, or pixels.
• Resolution specifies the distance between sample points and thus the accuracy of the representation. A digital image is represented by a matrix of numeric values, each representing a quantized intensity value.
A digital image is a numeric (normally binary) representation of a two-dimensional image. If I is a two-dimensional matrix, then I(r, c) is the intensity value at the position corresponding to row r and column c of the matrix. The intensity value can be represented by 1 bit per pixel for black-and-white (binary-valued) images, 8 bits per pixel to encode grayscale or indexed color levels, or 24 bits per pixel for RGB color.
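As a minimal sketch (the pixel values are invented for illustration), such a matrix can be held directly as a nested list in Python, with I(r, c) read off by row and column indices:

# A tiny 3 x 4 grayscale image, 8 bits per pixel (values 0..255).
I = [
    [  0,  64, 128, 255],
    [ 32,  96, 160, 224],
    [ 16,  80, 144, 208],
]
r, c = 1, 2
print(I[r][c])   # intensity value at row r, column c -> 160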
Image Formats
There are different kinds of image formats in the literature. We shall consider the
image format that comes out of an image frame grabber, i.e., the captured image
format, and the format when images are stored, i.e., the stored image format.
(i) Captured Image Format
(ii) Stored Image Format
Captured Image Format: The captured image format is specified by two main parameters: spatial resolution, which is specified as pixels × pixels (e.g. 640 × 480), and color encoding, which is specified in bits per pixel. Both parameter values depend on the hardware and software used for input/output of images.
Stored Image Format: When we store an image, we are storing a two-dimensional array of values, in which each value represents the data associated with one pixel of the image. For a bitmap, this value is a binary digit.
A bitmap is a simple information matrix describing the individual dots that are the
smallest elements of resolution on a computer screen or other display or printing
device.
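Since a stored bitmap holds one value per pixel, its uncompressed size follows directly from the resolution and the bit depth. A small illustrative helper (a sketch, not part of any file format specification):

def stored_size_bytes(width, height, bits_per_pixel):
    # One value per pixel; eight bits per byte.
    return width * height * bits_per_pixel // 8

print(stored_size_bytes(640, 480, 1))    # 1-bit bitmap:  38,400 bytes
print(stored_size_bytes(640, 480, 8))    # grayscale:    307,200 bytes
print(stored_size_bytes(640, 480, 24))   # RGB pixmap:   921,600 bytes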
There are many file formats used to store bitmaps and vector drawings. Common image file formats include:
• GIF (Graphics Interchange Format)
• X11 bitmap
• PostScript
• JPEG (Joint Photographic Experts Group)
• TIFF (Tagged Image File Format)
Graphics Format
Graphic image formats are specified through graphics primitives and their attributes.
Graphic primitives – line, rectangle, circle, ellipse; specifications of 2D and 3D objects.
Graphic attributes – line style, line width, color.
Graphics formats represent a higher level of image representation, i.e., the image is not initially represented by a pixel matrix. Examples include:
• PHIGS (Programmer's Hierarchical Interactive Graphics System)
• GKS (Graphical Kernel System)
Computer Image Processing
• Image processing is any form of signal processing for which the input is an
image, such as a photograph or video frame; the output of image processing
may be either an image or a set of characteristics or parameters related to the
image.
• Image processing usually refers to digital image processing, but optical and analog image processing are also possible.
• Computer graphics concern the pictorial synthesis of real or imaginary objects
from their computer-based models.
• The related field of image processing treats the converse process: the analysis of scenes, or the reconstruction of models of 2D or 3D objects from their pictures.
Image Synthesis
• Image synthesis is an integral part of all computer user interfaces and is indispensable for visualizing 2D, 3D, and higher-dimensional objects. Areas as diverse as education, science, engineering, medicine, advertising, and entertainment all rely on graphics.
Dynamics in Graphics
• Graphics are not confined to static pictures. Pictures can be varied dynamically; for example, a user can control an animation by adjusting the speed, the portion of the total scene in view, the amount of detail shown, etc.
Motion Dynamics:
• With motion dynamics, objects can be moved and tumbled with respect to a stationary observer.
Update Dynamics:
• Update dynamics is the actual change of the shape, color, or other properties of the objects being viewed.
The Framework of Interactive Graphics System
• Images can be generated by video digitizer cards that capture NTSC (or PAL) analog signals and create a digital image.
• Graphical images are generated using interactive graphics systems.
• The high-level conceptual framework of almost any interactive graphics system
consists of three software components: an application model, an application
program and a graphics system, and a hardware component: graphics hardware.
Application Model:
• The application model represents the data or objects to be pictured on the screen; it is stored in an application database. The model is application-specific and is created independently of any particular display system.
Application Program:
• The application program handles user input. It produces views by sending to the
third component, the graphics system, a series of graphics output commands that
contain both a detailed geometric description of what is to be viewed and the
attributes describing how the objects should appear.
Graphics System:
• The graphics system is responsible for actually producing the picture from the
detailed descriptions and for passing the user’s input to the application program
for processing. The graphics system is an intermediary component between the
application program and the display hardware.
Graphics Hardware:
• At the hardware level, a computer receives input from interaction devices and outputs images to display devices.
Input:
Current input technology provides us with the ubiquitous mouse, the data tablet, and the transparent, touch-sensitive panel mounted on the screen. Other graphics input devices are trackballs, space-balls, and the data glove. A trackball can be made to sense rotation about the vertical axis in addition to that about the two horizontal axes.
A space-ball is a rigid sphere containing strain gauges. The user pushes or pulls the
sphere in any direction, providing 3D translation and orientation. The data glove
records hand position and orientation as well as finger movements. It is a glove
covered with small, lightweight sensors.
Output: Raster Display
• The most common type of graphics monitor, based on the raster-scan CRT
• Point plotting device
• Based on TV technology
• Electron beam is swept across the screen, one row at a time from top to
bottom, starting at the upper left corner of the display
• Process is repeated until the entire screen is covered, and the beam is then
returned to the upper left corner to start a new scan
• Beam intensity is turned on and off to create a pattern of illuminated spots
• Pictures are dynamically stored in a piece of memory known as the frame buffer or refresh buffer
• This buffer holds the intensity values for all the screen points (pixels)
• Requirements to control the intensity of the screen positions:
– Simple black-and-white system: 1 bit per pixel (bitmap)
– Color system: 24 bits per pixel (maximum color representation; pixmap)
• Frame buffer (refresh buffer) storage requirements are large:
– e.g. 24 bits/pixel at a screen resolution of 1024 × 1024 requires 3 MB of RAM
• Refresh rate: 60 to 80 frames per second
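The 3 MB figure above can be checked with a short calculation; the 60 frames/second readout rate is taken from the refresh-rate bullet (a sketch of the arithmetic, not a hardware specification):

# Frame buffer storage for a 1024 x 1024 display at 24 bits per pixel.
width = height = 1024
bits_per_pixel = 24
frame_buffer_bytes = width * height * bits_per_pixel // 8
print(frame_buffer_bytes)          # 3,145,728 bytes = 3 MB

# Data that must be read out per second at a 60 Hz refresh rate:
print(frame_buffer_bytes * 60)     # ~180 MB per second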
Advantages of Raster Scan Display:
• Capable of presenting bright pictures
• Unaffected by picture complexity
• Suitable for showing dynamic motion
• Lower cost
• Ability to display areas filled with solid colors or patterns
Disadvantages of Raster Scan Display:
• Requires a large amount of memory (RAM)
• Produces a “stair-stepped” appearance of diagonal lines on the image (known as the aliasing effect)
• A true line cannot be represented exactly due to the discretization of the display surface (the discrete nature of the pixel representation)
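The stair-stepping can be reproduced with a naive line rasterizer. The sketch below samples the ideal line and rounds each sample to the nearest pixel; it is a simplified DDA-style illustration, not the algorithm an actual display system uses:

def rasterize_line(x0, y0, x1, y1):
    # Sample the true line and round to the nearest pixel position.
    steps = max(abs(x1 - x0), abs(y1 - y0), 1)
    pixels = []
    for i in range(steps + 1):
        t = i / steps
        pixels.append((round(x0 + t * (x1 - x0)), round(y0 + t * (y1 - y0))))
    return pixels

# A shallow diagonal: the y coordinate advances in discrete jumps (aliasing).
print(rasterize_line(0, 0, 8, 3))
# -> [(0, 0), (1, 0), (2, 1), (3, 1), (4, 2), (5, 2), (6, 2), (7, 3), (8, 3)]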
Image Analysis
Image analysis is concerned with techniques for extracting descriptions from
images that are necessary for high-level scene analysis methods.
Image analysis techniques include computation of perceived brightness and color,
partial or complete recovery of three-dimensional data in the scene, location of
discontinuities corresponding to objects in the scene and characterization of the
properties of uniform regions in the image.
• Image processing includes image enhancement, pattern detection and recognition, and scene analysis and computer vision.
• Image enhancement deals with improving image quality by eliminating noise or
by enhancing contrast.
• Pattern detection and recognition deal with detecting and clarifying standard
patterns and finding distortions from these patterns. Scene analysis and
computer vision deal with recognizing and reconstructing 3D models of a scene
from several 2D images.
Image Recognition Steps
Formatting
Formatting means capturing an image from a camera and bringing it into digital form, i.e., obtaining a digital representation of the image in the form of pixels.
Conditioning
An image usually contains uninteresting variations, such as noise introduced during digitization. In conditioning, the interesting objects are highlighted by suppressing or analyzing these uninteresting, systematic, or patterned variations. Conditioning is typically applied uniformly and is context-independent.
Labeling
The informative pattern has structure as a spatial arrangement of events, each
spatial event being a set of connected pixels. Labeling determines in what kinds of
spatial events each pixel participates. An example is edge detection, which identifies adjacent pixel pairs that differ significantly in intensity or color. Another labeling operation that must occur after edge detection is thresholding. Thresholding specifies which edges should be accepted and which should not; the thresholding operation filters only the significant edges from the image and labels them.
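A toy version of this edge detection plus thresholding step might look as follows; the image values and the threshold of 50 are invented for illustration:

# Label pixels where the intensity difference to the right neighbour is large.
image = [
    [10, 10, 10, 200, 200],
    [10, 10, 10, 200, 200],
    [10, 10, 10, 200, 200],
]
THRESHOLD = 50   # assumed value: only "significant" edges are accepted

edges = [
    [abs(row[c + 1] - row[c]) > THRESHOLD for c in range(len(row) - 1)]
    for row in image
]
for row in edges:
    print(row)   # True labels the edge between columns 2 and 3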
Grouping
Grouping can turn edges into lines by determining that different edges belong to the same spatial event. A grouping operation in which edges are grouped into lines is called line fitting. The grouping operation involves a change of logical data structure.
Extracting
Extracting generates a list of properties from each set of pixels in a spatial event. Extraction can also measure topological or spatial relationships between two or more groupings.
Matching
After the completion of the extracting operation, the events occurring on the image
have been identified and measured but the events in and of themselves have no
meaning. It is the matching operation that determines the interpretation of some
related set of image events, associating these events with some given three-dimensional object or two-dimensional shape. The classic example is template
matching, which compares the examined pattern with stored models (templates) of
known patterns and chooses the best match.
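A minimal template-matching sketch, using the sum of squared differences as the (assumed) similarity measure; the image and template values are invented for the example:

def ssd(patch, template):
    # Sum of squared differences: 0 means a perfect match.
    return sum((p - t) ** 2
               for prow, trow in zip(patch, template)
               for p, t in zip(prow, trow))

def best_match(image, template):
    # Slide the template over every position; keep the lowest-scoring offset.
    th, tw = len(template), len(template[0])
    best_score, best_pos = None, None
    for r in range(len(image) - th + 1):
        for c in range(len(image[0]) - tw + 1):
            patch = [row[c:c + tw] for row in image[r:r + th]]
            score = ssd(patch, template)
            if best_score is None or score < best_score:
                best_score, best_pos = score, (r, c)
    return best_pos

image    = [[0, 0, 0, 0],
            [0, 9, 8, 0],
            [0, 7, 9, 0],
            [0, 0, 0, 0]]
template = [[9, 8],
            [7, 9]]
print(best_match(image, template))   # -> (1, 1), the best-matching position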
Image Transmission
Image transmission takes into account transmission of digital images through
computer networks. There are several requirements on the networks when images
are transmitted:
• The network must accommodate bursty data transport, because image transmission is bursty (the bursts are caused by the large size of the image).
• Image transmission requires reliable transport.
• Time-dependence is not a dominant characteristic of the image in contrast to
audio/video transmission.
Image size depends on the image representation format used for transmission.
There are several possibilities:
Raw Image Data Transmission
The image is generated through a video digitizer and transmitted in its digital
format.
Size = Spatial_resolution x Pixel_quantization
For example, the transmission of an image with a resolution of 640 x 480 pixels and
pixel quantization of 8 bit per pixel requires transmission of 307,200 bytes through
the network.
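The arithmetic, with an assumed 10 Mbit/s link added to show the resulting transmission time (the link speed is not from the original text):

width, height, bits_per_pixel = 640, 480, 8
size_bits = width * height * bits_per_pixel     # 2,457,600 bits
print(size_bits // 8)                           # 307,200 bytes, as above

bandwidth_bps = 10_000_000                      # assumed 10 Mbit/s network link
print(size_bits / bandwidth_bps)                # ~0.25 s to transmit the raw image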
Compressed Image Data Transmission
The image is generated through a video digitizer and compressed before
transmission. The reduction in image size depends on the compression method and compression ratio. Examples of such compression standards are JPEG (Joint Photographic Experts Group) and MPEG (Moving Picture Experts Group).
Symbolic Image Data Transmission
The image is represented through symbolic data representation as image primitives
(e.g. 2D or 3D geometric representation), attributes and other control information.
Storage Space and Coding Requirements
Storage Requirement
Uncompressed graphics, audio, and video data require considerable storage capacity. For example, an image represented at 640 × 480 resolution with 8 bits per pixel requires about 300 kilobytes of storage space. To reduce the storage requirements of multimedia data, the data should be compressed. The compression should not cause a perceptible loss of quality, i.e., it should not reduce the perceived information content of the data. For example, up to 90% of raw audio data can be removed without audibly affecting the quality of the audio.
Bandwidth Requirement
Uncompressed data transfer requires greater bandwidth or data rate for its
communication. If the data is compressed there can be a considerable reduction in
the bandwidth requirement for the transmission of the data.
The following examples specify continuous media and derive the amount of storage
required for one second of playback:
• An uncompressed audio signal of telephone quality, sampled at 8 kHz and quantized with 8 bits per sample, requires 64 kbits to store one second of playback and a bandwidth of 64 kbits/sec.
• An uncompressed stereo audio signal of CD quality, sampled at 44.1 kHz and quantized with 16 bits per sample, requires 705.6 × 10³ bits per channel (1,411.2 × 10³ bits in total) to store one second of playback, with a corresponding bandwidth requirement of 1,411.2 × 10³ bits/sec.
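These data rates follow from sampling rate × sample size × number of channels; a minimal sketch of the calculation (the helper name is illustrative):

def audio_bits_per_second(sample_rate_hz, bits_per_sample, channels):
    # Uncompressed PCM data rate: sampling rate x sample size x channels.
    return sample_rate_hz * bits_per_sample * channels

print(audio_bits_per_second(8_000, 8, 1))     # telephone quality:   64,000 bit/s
print(audio_bits_per_second(44_100, 16, 1))   # CD, one channel:    705,600 bit/s
print(audio_bits_per_second(44_100, 16, 2))   # CD stereo:        1,411,200 bit/s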
As mentioned above compression in multimedia systems is subject to certain
constraints.
• The quality of the compressed data should be as good as possible.
• To make cost-effective implementation possible, the complexity of the
technique used should be minimal.
• The processing of the algorithm must not exceed a certain time span.
In retrieval mode application, the following demands arise:
• Fast forward and backward data retrieval should be possible with simultaneous
display.
• Random access to single images and audio frames of a data stream should be
possible without extending the access time more than 0.5 second.
• Decompression of data should be independent of other data units.
For both dialogue and retrieval mode, the following requirements apply:
• The format should be independent of frame size and video frame rate.
• The format should support various data rates.
• There should be synchronization between the audio and video.
• Compression and decompression should not require additional hardware.
• Data compressed on one multimedia system should be decompressible on any other system.
Source, Entropy and Hybrid Coding
Source coding takes into account the semantics and the characteristics of the data.
Thus, the degree of compression that can be achieved depends on the data
contents. Source coding is a lossy coding process in which there is some loss of
information content. For example, in the case of speech, the signal is transformed from the time domain to the frequency domain. In a psychoacoustic coder, the encoder analyzes the
incoming audio signals to identify perceptually important information by
incorporating several psychoacoustic principles of the human ear. One is the
critical-band spectral analysis, which accounts for the ear’s poorer discrimination in
higher frequency regions than in lower-frequency regions. The encoder performs
the psychoacoustic analysis based on either a side-chain FFT analysis or the output
of the filter bank.
Examples: Differential Pulse Code Modulation, Delta Modulation, Fast Fourier Transform, Discrete Fourier Transform, sub-band coding, etc.
Entropy Coding
Entropy coding can be used for different media regardless of the medium’s specific
characteristics. The data to be compressed are viewed as a sequence of digital data
values, and their semantics are ignored. It is lossless because the data prior to
encoding is identical to the data after decoding; no information is lost. Thus run-length encoding, for example, can be used to compress any type of data in a file system, such as text, still images for facsimile, or parts of motion picture or audio coding.
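A minimal run-length encoder/decoder illustrates why entropy coding is lossless; the [count, value] pair representation used here is just one of several possible conventions:

def rle_encode(data):
    # Collapse runs of equal values into [count, value] pairs.
    runs = []
    for value in data:
        if runs and runs[-1][1] == value:
            runs[-1][0] += 1
        else:
            runs.append([1, value])
    return runs

def rle_decode(runs):
    return [value for count, value in runs for _ in range(count)]

data = [0, 0, 0, 0, 0, 7, 7, 1, 0, 0, 0]
encoded = rle_encode(data)
print(encoded)                       # [[5, 0], [2, 7], [1, 1], [3, 0]]
assert rle_decode(encoded) == data   # lossless: decoding restores the input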
Major Steps of Data Compression
The typical sequence of operations performed in the compression of still images and of video and audio data streams is described below. The following example describes the compression of one image:
1. The preparation step (here picture preparation) generates an appropriate
digital representation of the information in the medium being compressed. For
example, a picture might be divided into blocks of 8×8 pixels with a fixed
number of bits per pixel.
2. The processing step (here picture processing) is the first step that makes use of
the various compression algorithms. For example, a transformation from the
time domain to the frequency domain can be performed using the Discrete
Cosine Transform (DCT). In the case of interframe coding, motion vectors can
be determined here for each 8×8-pixel block.
3. Quantization takes place after the mathematically exact picture processing
step. Values determined in the previous step cannot and should not be
processed with full exactness; instead, they are quantized according to a
specific resolution and characteristic curve. This can also be considered
equivalent to µ-law and A-law coding, which are used for audio data. In the
transformed domain, the results can be treated differently depending on their
importance (e.g., quantized with different numbers of bits).
4. Entropy coding starts with a sequential data stream of individual bits and bytes. Different techniques can be used here to perform a final, lossless compression. For example, frequently occurring long sequences of zeroes can be compressed by specifying the number of occurrences followed by the zero itself.
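As a sketch of how the quantization step (step 3) creates the zero runs that the entropy coding step (step 4) exploits — the step size of 16 and the coefficient values are invented for illustration:

def quantize(value, step):
    # Uniform quantizer: map to the nearest multiple of the step size.
    return round(value / step) * step

coefficients = [312.7, -41.3, 12.9, 3.2, -0.8, 0.4, -0.2, 0.1]
quantized = [quantize(v, 16) for v in coefficients]
print(quantized)   # [320, -48, 16, 0, 0, 0, 0, 0]
# The long run of zeroes is exactly what step 4 compresses, e.g. with the
# run-length encoder sketched in the entropy coding section above.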

Decompression is the inverse process of compression. Specific coders and decoders can be implemented very differently. Symmetric coding is characterized by comparable costs for encoding and decoding, which is especially desirable for dialogue applications. In an asymmetric technique, the decoding process is considerably less costly than the coding process. This is intended for applications where compression is performed once and decompression takes place very frequently, or where decompression must take place very quickly. For example, an audio-visual course module is produced once but subsequently decoded by the many students who use it; the main requirement is real-time decompression. An asymmetric technique can also be used to increase the quality of the compressed images.