
FM MOD-4

PART-A

1) Define Multimedia Database. Explain different types of multimedia applications

A multimedia database is a collection of interrelated multimedia data, including text, graphics (sketches, drawings), images, animations, video, and audio, often comprising vast amounts of multisource data. The framework that manages these different types of multimedia data, so that they can be stored, delivered, and utilized in different ways, is known as a multimedia database management system. Multimedia databases fall into three classes: static media, dynamic media, and dimensional media.

Types of multimedia applications, based on their data-management characteristics, are:

1. Repository applications – A large amount of multimedia data, as well as metadata (media format data, media keyword data, media feature data), is stored for retrieval purposes. Examples: repositories of satellite images, engineering drawings, and radiology scans.
2. Presentation applications – These involve delivery of multimedia data subject to temporal constraints. Optimal viewing or listening requires the DBMS to deliver data at a certain rate, offering quality of service above a certain threshold. Here data is processed as it is delivered. Examples: annotation of video and audio data, real-time editing and analysis.
3. Collaborative work using multimedia information – This involves executing a complex task by merging drawings and exchanging change notifications. Example: an intelligent healthcare network.
2) Describe the contents of a multimedia database and explain the challenges in multimedia databases
Contents of a multimedia database management system:
1. Media data – The actual data representing an object.
2. Media format data – Information about the format of the media data after it goes through the acquisition, processing, and encoding phases, such as sampling rate, resolution, and encoding scheme.
3. Media keyword data – Keyword descriptions relating to the generation of the data, also known as content-descriptive data. Example: the date, time, and place of recording.
4. Media feature data – Content-dependent data, such as the distribution of colors, the kinds of texture, and the different shapes present in the data.

There are still many challenges in multimedia databases, some of which are:

1. Modelling – Work in this area must reconcile database techniques with information-retrieval techniques; multimedia documents constitute a specialized area and deserve special consideration.
2. Design – The conceptual, logical, and physical design of multimedia databases has not yet been fully addressed. Performance and tuning issues at each level are far more complex, since the data comes in a variety of formats (JPEG, GIF, PNG, MPEG) that are not easy to convert from one form to another.
3. Storage – Storing a multimedia database on a standard disk presents problems of representation, compression, mapping to device hierarchies, archiving, and buffering during input-output operations. In a DBMS, a "BLOB" (Binary Large Object) facility allows untyped bitmaps to be stored and retrieved.
4. Performance – For applications involving video playback or audio-video synchronization, physical limitations dominate. The use of parallel processing may alleviate some problems, but such techniques are not yet fully developed. Apart from this, multimedia databases consume a lot of processing time as well as bandwidth.
5. Queries and retrieval – For multimedia data such as images, video, and audio, accessing data through queries opens up many issues, such as efficient query formulation, query execution, and optimization, which still need to be worked on.

3) Discuss the concept of content-based information retrieval. Explain the different types of indexing in content-based information retrieval

Content-based information retrieval (CBIR) is a type of information retrieval system that allows users to search for and retrieve multimedia documents based on their content, rather than metadata such as file names or keywords. CBIR systems use algorithms to analyze and extract features from the content of multimedia documents, and then use these features to search and retrieve relevant documents.

There are several different types of indexing that can be used in CBIR systems,
including:

1. Feature-based indexing: In feature-based indexing, the features of the multimedia documents are extracted and indexed using techniques such as color histograms, texture analysis, or edge detection. These features are then used to search and retrieve relevant documents based on their visual or auditory characteristics.
2. Text-based indexing: In text-based indexing, the text content of multimedia
documents is extracted and indexed using techniques such as keyword
extraction or Optical Character Recognition (OCR). These text-based features
are then used to search and retrieve relevant documents based on their
textual content.
3. Semantic indexing: In semantic indexing, the meaning of the content in
multimedia documents is analyzed and indexed using techniques such as
natural language processing or machine learning. These semantic features
are then used to search and retrieve relevant documents based on their
meaning or context.

Indexing is an important aspect of CBIR systems, as it allows the system to efficiently search and retrieve relevant multimedia documents based on their content.
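As a minimal sketch of feature-based indexing (the index contents and all names here are invented for illustration), an index can map each document ID to a feature vector, and a query is answered by ranking documents by Euclidean distance to the query's feature vector:

```python
import math

# Toy feature index: document ID -> feature vector (e.g., a tiny color histogram).
index = {
    "img1.jpg": [0.8, 0.1, 0.1],
    "img2.jpg": [0.2, 0.7, 0.1],
    "img3.jpg": [0.1, 0.1, 0.8],
}

def euclidean(a, b):
    """Distance between two feature vectors."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def retrieve(query_features, k=2):
    """Return the k document IDs whose features are closest to the query."""
    ranked = sorted(index, key=lambda doc: euclidean(index[doc], query_features))
    return ranked[:k]

print(retrieve([0.75, 0.15, 0.10]))  # img1.jpg is the nearest neighbour
```

Real systems replace the linear scan with specialized index structures, but the principle of ranking by distance in feature space is the same.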
4) Briefly describe content-based image retrieval

Content-based image retrieval (CBIR) is a type of information retrieval system that allows users to search for and retrieve images based on their visual content, rather than metadata such as file names or keywords. CBIR systems use algorithms to analyze and extract features from the visual content of images, and then use these features to search and retrieve relevant images.

CBIR systems can be used to search and retrieve images based on various visual
features, including:

1. Color: CBIR systems can search and retrieve images based on the colors
present in the images, such as specific colors or color ranges.
2. Shape: CBIR systems can search and retrieve images based on the shapes
present in the images, such as circles, squares, or lines.
3. Texture: CBIR systems can search and retrieve images based on the texture
of the images, such as smooth, rough, or patterned textures.
4. Edge detection: CBIR systems can search and retrieve images based on the
edges present in the images, such as sharp or blurry edges.

Overall, CBIR systems are used to search and retrieve images based on their visual
content, and they are used in a wide range of applications including image
databases, online image libraries, and visual search engines.

5) What are the different techniques in content-based image retrieval? Explain in detail

There are several different techniques that are used in content-based image retrieval
(CBIR) systems to analyze and extract features from the visual content of images,
and to search and retrieve relevant images based on these features. Some of the
main techniques used in CBIR systems include:

1. Color histogram: A color histogram is a representation of the distribution of colors in an image, used to extract and index the image's color content. To create a color histogram, the color space is divided into a set of bins, and the number of pixels whose color falls into each bin is counted. The resulting counts can be plotted as a graph, with the x-axis representing the color bins and the y-axis representing the number of pixels in each bin.
2. Texture analysis: Texture analysis is a technique that is used to extract and
index the texture of an image, which is the visual pattern of the image. To
perform texture analysis, the image is divided into a grid of cells, and the
texture of each cell is analyzed using techniques such as Gabor filters or
Local Binary Patterns (LBP). The resulting texture features are then indexed
and used to search and retrieve relevant images.
3. Edge detection: Edge detection is a technique that is used to extract and
index the edges of an image, which are the points in the image where there is
a significant change in the pixel values. To perform edge detection, the image
is processed using algorithms such as the Sobel operator or the Canny edge
detector, which detect the edges in the image based on the gradient of the
pixel values. The resulting edge features are then indexed and used to search
and retrieve relevant images.
4. Feature extraction: Feature extraction is a technique that is used to extract
and index the features of an image, which are the unique characteristics of
the image. To perform feature extraction, the image is processed using
algorithms such as Scale-Invariant Feature Transform (SIFT) or Speeded Up
Robust Features (SURF), which extract the features of the image based on the
scale and orientation of the image features. The resulting features are then
indexed and used to search and retrieve relevant images.

These are some of the main techniques used in CBIR systems to analyze and extract
features from the visual content of images, and to search and retrieve relevant
images based on these features.
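For illustration, the color histogram technique can be sketched with only the standard library by quantizing each RGB channel into a few bins and counting pixels per bin (the 2-bins-per-channel quantization and the toy "image" are arbitrary choices for this sketch):

```python
from collections import Counter

def color_histogram(pixels, bins_per_channel=2):
    """Count pixels per quantized (R, G, B) bin.

    pixels: iterable of (r, g, b) tuples with channel values 0-255.
    Each channel is quantized into bins_per_channel equal-width bins,
    so the histogram has bins_per_channel**3 possible bins.
    """
    width = 256 // bins_per_channel
    hist = Counter()
    for r, g, b in pixels:
        hist[(r // width, g // width, b // width)] += 1
    return hist

# A tiny 2x2 "image": two reddish pixels, one greenish, one bluish.
image = [(250, 10, 10), (200, 30, 20), (10, 240, 10), (5, 5, 250)]
hist = color_histogram(image)
print(hist[(1, 0, 0)])  # 2 -- both reddish pixels fall in the "high red" bin
```

Two images can then be compared by the distance between their histograms, which is exactly how histogram-based CBIR ranks candidate images.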

6)Draw an architecture of content-based video retrieval and explain in detail

The architecture of a content-based video retrieval (CBVR) system typically consists of the following
components:

1. Video capture: This component is responsible for capturing video data from
various sources, such as video cameras or video files.
2. Video analysis: This component is responsible for analyzing the video data
and extracting relevant features from it, such as color, texture, shape, and
motion.
3. Indexing: This component is responsible for creating an index of the extracted
features, which can be used to search and retrieve relevant videos.
4. Query processing: This component is responsible for processing user queries
and searching the feature index to retrieve relevant videos.
5. Video display: This component is responsible for displaying the retrieved
videos to the user.

Overall, the architecture of a CBVR system is designed to capture, analyze, index, and
retrieve video data based on its content, rather than metadata such as file names or
keywords. This allows users to search and retrieve relevant videos based on their
visual or auditory characteristics, rather than relying on metadata alone.
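The five components above can be sketched as a pipeline of stages, each handing its output to the next. All function names are hypothetical placeholders, and the frame "feature" here is just the string length, standing in for real visual features:

```python
def capture(source):
    # Video capture stage: stand-in returning a list of "frames".
    return [f"{source}-frame{i}" for i in range(3)]

def analyze(frames):
    # Video analysis stage: extract a feature per frame (placeholder feature).
    return {frame: len(frame) for frame in frames}

def build_index(features):
    # Indexing stage: invert the feature map for fast lookup.
    index = {}
    for frame, value in features.items():
        index.setdefault(value, []).append(frame)
    return index

def query(index, wanted_value):
    # Query-processing stage: find frames whose feature matches the query.
    return index.get(wanted_value, [])

# Wire the stages together, as the architecture describes.
frames = capture("cam0")
index = build_index(analyze(frames))
print(query(index, len("cam0-frame0")))
```

The video-display component would consume the returned frame list; it is omitted here since it is purely presentational.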

7)What is the major motivation behind the development of MPEG-7? Give three examples of
real-world applications that may benefit from MPEG-7

The major motivation behind the development of MPEG-7 (Multimedia Content Description Interface) was to provide a standardized way to describe the content of multimedia documents in a way that is machine-readable, so that multimedia content can be searched, retrieved, and used more efficiently.

Three examples of real-world applications that may benefit from MPEG-7 are:

1. Digital libraries: Digital libraries are collections of digital multimedia documents, such as images, videos, or audio files, that are stored and managed electronically. MPEG-7 can be used to describe the content of these multimedia documents in a standardized way, allowing them to be searched and retrieved more efficiently.
2. Video search engines: Video search engines are websites or applications that
allow users to search for and retrieve videos based on their content, rather
than metadata such as file names or keywords. MPEG-7 can be used to
describe the content of the videos in a standardized way, allowing them to be
searched and retrieved more efficiently.
3. Multimedia content management systems: Multimedia content management
systems are software applications that are used to manage, store, and
retrieve multimedia documents such as images, videos, or audio files.
MPEG-7 can be used to describe the content of these multimedia documents
in a standardized way, allowing them to be searched and retrieved more
efficiently.

Overall, MPEG-7 is a valuable tool for a wide range of applications that involve the
search, retrieval, and use of multimedia content, and it is widely used in digital
libraries, video search engines, and multimedia content management systems.

8)Explain MPEG-7 descriptor

An MPEG-7 descriptor is a standardized way to describe a feature or characteristic of a multimedia document using the MPEG-7 (Multimedia Content Description Interface) standard. MPEG-7 descriptors are used to describe various aspects of multimedia documents, including visual, auditory, and semantic features.

There are several different types of MPEG-7 descriptors, including:

1. Visual descriptors: Visual descriptors are used to describe the visual content
of multimedia documents, such as images or videos. Examples of visual
descriptors include color histograms, edge detection, and texture analysis.
2. Auditory descriptors: Auditory descriptors are used to describe the auditory
content of multimedia documents, such as audio or music. Examples of
auditory descriptors include spectral analysis, pitch detection, and tempo
estimation.
3. Semantic descriptors: Semantic descriptors are used to describe the meaning
or context of multimedia documents, such as text or speech. Examples of
semantic descriptors include natural language processing and machine
learning algorithms.

Overall, MPEG-7 descriptors are a standardized way to describe the content of multimedia documents, and they are used to facilitate the search, retrieval, and use of multimedia content in a wide range of applications.

9)Discuss MPEG-7 Description Schemes


MPEG-7 Description Schemes are standardized ways to describe the content of
multimedia documents using the MPEG-7 (Multimedia Content Description
Interface) standard. Description Schemes define the structure and format of the
descriptors that are used to describe the multimedia content, and they provide a
common framework for organizing and accessing multimedia content.

There are several different types of Description Schemes in MPEG-7, including:

1. Visual Description Schemes: Visual Description Schemes are used to describe the visual content of multimedia documents, such as images or videos. Examples include Color and Shape Descriptors and Texture Description Schemes.
2. Audio Description Schemes: Audio Description Schemes are used to describe
the auditory content of multimedia documents, such as audio or music.
Examples of Audio Description Schemes include Audio Spectral Envelope
Descriptors and Audio Temporal Descriptors.
3. Multimodal Description Schemes: Multimodal Description Schemes are used
to describe multimedia documents that contain multiple modalities, such as
audio, video, and text. Examples of Multimodal Description Schemes include
Audio-Visual Descriptors and Textual Descriptors.

Overall, Description Schemes in MPEG-7 provide a standardized way to describe the content of multimedia documents, and they are used to facilitate the search, retrieval, and use of multimedia content in a wide range of applications.

10)Describe design of video-on-demand systems

Video-on-demand (VOD) systems are designed to allow users to watch video content on demand, rather than being tied to a fixed broadcast schedule. VOD systems typically have the following components:

1. Video server: The video server is a computer or group of computers that stores and manages the video content that is available for streaming. The video server is responsible for storing the video files, organizing them into a database or library, and making them available for streaming to users.
2. Network infrastructure: The network infrastructure is the hardware and
software that is used to transmit the video data from the video server to the
user's device. This may include local area networks (LANs), wide area
networks (WANs), and the Internet.
3. User device: The user device is the device that the user uses to access and
watch the video content, such as a computer, smartphone, or smart TV. The
user device must be capable of connecting to the network infrastructure and
streaming the video data.
4. User interface: The user interface is the interface that the user uses to
browse and select the video content that they want to watch. This may be a
web-based interface, a mobile app, or a user interface built into the user
device.

The design of a VOD system is focused on providing users with access to a wide
variety of video content on demand, and on making it easy for users to browse and
select the content that they want to watch.
PART-B

1)Design and construct multimedia database

2)Explain the different types of multimedia databases

Multimedia databases are databases that are designed to store, manage, and
retrieve multimedia documents such as images, videos, or audio files. There are
several different types of multimedia databases, including:
1. Image databases: Image databases are databases that are specifically
designed to store and manage images. Image databases may include
features such as image metadata (e.g., file size, resolution, format), image
annotation (e.g., tags, labels, descriptions), and image processing algorithms
(e.g., image resizing, image enhancement).
2. Video databases: Video databases are databases that are specifically
designed to store and manage videos. Video databases may include features
such as video metadata (e.g., file size, resolution, format), video annotation
(e.g., tags, labels, descriptions), and video processing algorithms (e.g., video
transcoding, video compression).
3. Audio databases: Audio databases are databases that are specifically
designed to store and manage audio files. Audio databases may include
features such as audio metadata (e.g., file size, format, duration), audio
annotation (e.g., tags, labels, descriptions), and audio processing algorithms
(e.g., audio transcoding, audio compression).
4. Multimodal databases: Multimodal databases are databases that are
designed to store and manage multimedia documents that contain multiple
modalities, such as audio, video, and text. Multimodal databases may include
features such as multimodal annotation (e.g., tags, labels, descriptions) and
multimodal processing algorithms (e.g., audio-video synchronization).

Multimedia databases are designed to store, manage, and retrieve multimedia documents, and they offer a range of features and capabilities depending on the specific type of multimedia content they are designed to handle.

3)Write about the content-based information retrieval

Refer part A qn 3
4)Explain about content-based image retrieval

Refer part A qn 4,5

5) What are image retrieval techniques?

Refer part A qn 5

6)What are the advantages and disadvantages of Video-on-demand systems?

Advantages
Convenience: Through keyword searches, viewers can search through the video library
and watch their choice of content whenever and wherever they please, without being
bound to any broadcast schedules.

Sharing: Videos can be shared with intended audiences and they can watch them at any
time at their convenience.

Navigation: Viewers can use chapter markers (indicators of sequences) to jump to a specific section in the video.

Content variety: Viewers can search and access a myriad of content topics. Results from
searches are generated in seconds with rapid stream delivery.

Reach: Content creators can tap into any demographic segment without geographical and time restrictions, due to the prevalent availability of screens in the internet space.

Affordable promotions: Launching commercials in an online space is cheaper than buying prime-time spots for TV commercials.

Compatibility: VOD provides a high-quality viewing experience over broadband internet, with quick start-up and minimal buffering time, on various gadgets on the go such as tablets, PCs, phones, and smart TVs.

Viewership metrics: You can easily gauge viewer activity through analytics figures, unlike TV metrics, where determining the target audience and their viewing behaviors is difficult.

Interactivity: Modern VOD platforms include various features that enable viewers to interact with each other, including likes, comments, timed comments, and in-video elements such as quizzes, surveys, or file attachments.


Disadvantages

1. Cost: VOD systems can be expensive to set up and maintain, as they require
specialized hardware and software, as well as a network infrastructure to
transmit the video data to users.
2. Limited content: VOD systems may have a limited selection of video content,
as they rely on the video content being uploaded to the video server and made
available for streaming. This may be less comprehensive than the selection of
content available through traditional television broadcasting or
subscription-based streaming services.
3. Limited accessibility: VOD systems may not be accessible to users who do
not have a compatible device or a stable Internet connection. This can limit
the audience for VOD content and may exclude certain groups of users.
4. Quality issues: VOD systems may experience quality issues, such as buffering
or low resolution, due to network congestion or limited bandwidth. This can
affect the user experience and make it difficult to watch the video content.
5. Lack of social interaction: VOD systems do not typically provide a platform
for social interaction or community-building, as users are typically watching
the video content individually rather than in a group setting. This can reduce
the sense of community or shared experience that is often associated with
traditional television viewing.

7)Discuss the typical features of MPEG-7

Refer part A qn 7 8 9

8)How are multimedia databases organized? Give examples.

Multimedia databases are organized in a variety of ways, depending on the specific needs and goals of the database. Some common ways that multimedia databases are organized include:

1. By type of multimedia content: Multimedia databases may be organized based on the type of multimedia content they contain, such as images, videos, or audio files. Within each category, the content may be further organized by metadata such as file format, resolution, or duration.
2. By subject matter: Multimedia databases may be organized based on the
subject matter of the content, such as art, history, science, or sports. This can
make it easier for users to find and retrieve content that is relevant to their
interests or research needs.
3. By source: Multimedia databases may be organized based on the source of
the content, such as a specific photographer, artist, or media organization.
This can be useful for tracking the origin of the content or for identifying the
rights holder.
4. By annotation: Multimedia databases may be organized based on the
annotation or metadata associated with the content, such as tags, labels, or
descriptions. This can make it easier for users to search and retrieve content
based on specific keywords or criteria.

Overall, multimedia databases are organized in a variety of ways to facilitate the storage, management, and retrieval of multimedia content, and the specific organization scheme will depend on the specific needs and goals of the database.

9)Discuss about video retrieval techniques

TECHNIQUES FOR VIDEO RETRIEVAL

Video retrieval techniques are methods and algorithms that are used to search and retrieve video content from a video database. Some common video retrieval techniques include:
1. Keyword-based search: Keyword-based search is a simple but effective video retrieval
technique that allows users to search for video content based on specific keywords or
phrases. This can be done using a search bar or query form, and the results may be
ranked based on relevance or other criteria.
2. Content-based retrieval: Content-based retrieval is a more advanced video retrieval
technique that uses algorithms to analyze the content of the video itself (e.g., visual
features, audio features) rather than relying on metadata or annotation. This can be
useful for retrieving video content that is not well-described or annotated, or for finding
similar video content based on visual or auditory features.
3. Context-based retrieval: Context-based retrieval is a video retrieval technique that takes
into account the context or environment in which the video is being viewed, such as the
user's location, device, or language. This can be useful for personalized or customized
video recommendations or search results.
4. Collaborative filtering: Collaborative filtering is a video retrieval technique that uses data
from other users (e.g., ratings, views, likes) to recommend video content to a particular
user. This can be useful for discovering new video content that is similar to content that
the user has previously watched or liked.

Overall, video retrieval techniques are used to facilitate the search and retrieval of video content from a video database, and different techniques may be more appropriate for different types of video content and users.
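As a toy sketch of the collaborative-filtering technique (the users, videos, and ratings are invented), one can recommend to a user the unseen videos liked by the other user whose high ratings overlap most with theirs:

```python
# User -> {video ID: rating} (invented example data).
ratings = {
    "alice": {"v1": 5, "v2": 4, "v3": 1},
    "bob":   {"v1": 5, "v2": 5, "v4": 4},
    "carol": {"v3": 5, "v5": 4},
}

def overlap(u, v):
    """Crude similarity: number of videos both users rated 4 or higher."""
    return sum(1 for vid in ratings[u]
               if ratings[u][vid] >= 4 and ratings[v].get(vid, 0) >= 4)

def recommend(user):
    """Suggest unseen videos from the most similar other user."""
    others = [u for u in ratings if u != user]
    nearest = max(others, key=lambda u: overlap(user, u))
    return sorted(vid for vid in ratings[nearest] if vid not in ratings[user])

print(recommend("alice"))  # bob is most similar, so alice gets ["v4"]
```

Production recommenders use richer similarity measures (e.g., cosine similarity over rating vectors), but the structure of the computation is the same: find similar users, then surface what they liked.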

10) What do you understand by benchmarking of multimedia databases? Distinguish between the relational and object-oriented models of multimedia databases.

Benchmarking of multimedia databases refers to the process of evaluating the performance of a multimedia database system. This can involve measuring various characteristics of the system, such as its speed, accuracy, reliability, and scalability. The goal of benchmarking is to identify the strengths and weaknesses of a multimedia database system and to compare it to other systems in order to determine which one is best suited for a particular task or application.

There are several different approaches to benchmarking multimedia databases, depending on the specific goals of the benchmarking process and the characteristics of the system being evaluated. Some common methods of benchmarking include:

1. Testing the performance of the system under various workloads, including different types
and amounts of data, to determine how well it handles different levels of demand.
2. Comparing the system to other multimedia database systems using standardized
benchmarks or test cases.
3. Measuring the system's ability to perform common multimedia database tasks, such as
searching, indexing, and querying.
4. Evaluating the system's usability and user experience to determine how well it meets the
needs of different users and applications.

Overall, benchmarking is an important tool for evaluating the performance and capabilities of multimedia database systems and helping organizations choose the best one for their needs.
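A minimal sketch of workload-based benchmarking (method 1 above) measures the time a system takes to answer a batch of queries. Here the "database" is just an in-memory list and the query is a linear substring scan; both are stand-ins for a real media catalog and retrieval engine:

```python
import time

# Stand-in media catalog: 50,000 synthetic item IDs.
database = [f"video-{i:05d}" for i in range(50_000)]

def run_query(term):
    # Stand-in query: a linear scan, as a placeholder for real retrieval.
    return [item for item in database if term in item]

def benchmark(queries):
    """Return total elapsed seconds and total hit count for a query workload."""
    start = time.perf_counter()
    hits = sum(len(run_query(q)) for q in queries)
    elapsed = time.perf_counter() - start
    return elapsed, hits

elapsed, hits = benchmark(["001", "4999", "video-00000"])
print(f"{hits} hits in {elapsed:.4f}s")
```

Running the same workload against two candidate systems, with the same data sizes and query mix, gives directly comparable numbers, which is the point of benchmarking.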

Relational Database
A relational database is a database that stores data in tables that consist of
rows and columns. Each row has a primary key and each column has a unique
name. A file processing environment uses the terms file, record, and field to
represent data. A relational database uses terms different from a file processing
system. A developer of a relational database refers to a file as a relation, a record
as a tuple, and a field as an attribute. A user of a relational database, by contrast,
refers to a file as a table, a record as a row, and a field as a column.

In addition to storing data, a relational database also stores data relationships. A relationship is a link within the data. In a relational database, you can set up a relationship between tables at any time; the tables must have a common column (field). In a relational database, the only data redundancy (duplication) exists in the common columns (fields). The database uses these common columns for relationships. Many organizations use relational databases for payroll, accounts receivable, accounts payable, general ledger, inventory, order entry, invoicing, and other business-related functions.

OODB
An object-oriented database (OODB) stores data in objects. An object is an
item that contains data, as well as the actions that read or process the data. A
Student object, for example, might contain data about a student such as Student
ID, First Name, Last Name, Address, and so on. It also could contain instructions
about how to print a student transcript or the formula required to calculate a
student’s grade point average.

Object-oriented databases have several advantages compared with relational databases: they can store more types of data, access this data faster, and allow programmers to reuse objects. An object-oriented database stores unstructured data more efficiently than a relational database. Unstructured data includes photos, video clips, audio clips, and documents. When users query an object-oriented database, the results often are displayed more quickly than the same query of a relational database. If an object already exists, programmers can reuse it instead of creating a new one, saving program development time.
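The contrast can be sketched with Python's built-in sqlite3 module: the relational model stores a student as a row in a table with named columns, while the object-oriented model bundles the same data with its behaviour in a class (the schema, the Student class, and the grade data are all illustrative):

```python
import sqlite3

# Relational model: data lives in rows of a table with named columns.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE student (student_id INTEGER PRIMARY KEY, "
             "first_name TEXT, last_name TEXT)")
conn.execute("INSERT INTO student VALUES (1, 'Ada', 'Lovelace')")
row = conn.execute("SELECT first_name, last_name FROM student "
                   "WHERE student_id = 1").fetchone()
print(row)  # ('Ada', 'Lovelace')

# Object-oriented model: the same data plus the actions that process it.
class Student:
    def __init__(self, student_id, first_name, last_name, grades):
        self.student_id = student_id
        self.first_name = first_name
        self.last_name = last_name
        self.grades = grades  # list of (course, grade_points) pairs

    def gpa(self):
        """Behaviour stored with the data: compute grade point average."""
        return sum(g for _, g in self.grades) / len(self.grades)

ada = Student(1, "Ada", "Lovelace", [("Math", 4.0), ("CS", 3.0)])
print(ada.gpa())  # 3.5
```

The relational row holds only typed column values, so computing a GPA requires separate application code; the object carries that logic with it, which is the reuse advantage described above.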

11) What do you understand by benchmarking of multimedia databases? Distinguish between the relational and object-oriented models of multimedia databases.

Refer part b 10th qn

Why is synchronization important for the delivery of multimedia data?

Multimedia refers to the integration of text, images, audio, and video in a variety of application environments. These data can be heavily time-dependent, such as the audio and video in a movie, and can require time-ordered presentation during use. The task of coordinating such sequences is called multimedia synchronization. Synchronization can be applied to the playout of concurrent or sequential streams of data, and also to the external events generated by a human user.

12) Explain how video-conferencing standards are different from video and/or audio compression standards.

Video-conferencing standards are technical specifications that define how different devices and systems can communicate and exchange video and audio data for the purpose of conducting real-time, interactive communication. These standards typically define protocols for establishing and maintaining a video-conferencing connection, as well as for exchanging data such as audio, video, and control signals.

Video and audio compression standards, on the other hand, are technical
specifications that define how to efficiently encode and compress digital video and
audio data for storage and transmission. These standards specify algorithms and
techniques for reducing the size of the data while maintaining its quality.

There are several key differences between video-conferencing standards and video
and audio compression standards:

1. Purpose: Video-conferencing standards are designed to enable real-time, interactive communication, while video and audio compression standards are designed to reduce the size of video and audio data for storage and transmission.
2. Scope: Video-conferencing standards typically cover a wide range of topics,
including protocols for establishing and maintaining a connection, as well as
data exchange formats for audio, video, and control signals. Video and audio
compression standards, on the other hand, are typically focused on the
algorithms and techniques used to compress and encode data.
3. Compatibility: Video-conferencing standards are designed to ensure that
different devices and systems can communicate and exchange data with
each other. Video and audio compression standards, on the other hand, are
typically focused on the encoding and decoding of data, rather than the
communication between different devices and systems.

Overall, video-conferencing standards and video and audio compression standards


are two different sets of technical specifications that serve different purposes in the
field of multimedia communication and storage.

13)Explain MPEG-7.

Refer part a qn 7

14) What is the difference between video conferencing and videophone service? Show major
components of each?

The difference between a "videophone" service and "video conferencing": a videophone call is
point-to-point, person to person, just like an ordinary phone call (except with video, of course).
Provided you have the necessary hardware (microphone, speakers, video camera), there are many
services you can use (Skype, Yahoo, MSN, etc.).

Video conferencing is different in that there are usually many participants, all capable of talking to
and seeing each other at the same time, sometimes across multiple (more than two) locations. That
requires a completely different, and usually costlier, service.
Major components of video conferencing:

1. Camera

Specialized and document cameras may also be used in conjunction with video conferencing to convey
information whose clarity must be preserved, as in education and medical applications. High-definition
(HD) cameras are usually preferred, as they offer the highest resolutions and the largest images.

2. Video Display

The most common displays are (a) LCD or HD plasma displays and (b) LCD/DLP projector / XGA PC-type
displays. Video conferencing systems may use more than one display option. In fact, many
enterprise-level collaboration systems and large-venue video conferencing systems have several display
tools that present different endpoints and data together. The preferred video displays are
high-definition displays between 720p and 1080i/1080p, as they provide the best resolution and allow
about 20 percent more viewing area than standard-definition display devices.

3. Video Conferencing Codec Unit

Often called the "heart and brain" of the video conferencing system, the codec (coder-decoder) takes
the audio and video from the microphone and camera, compresses it, transmits it over an IP network,
and decompresses (expands) the incoming audio and video signal for viewing on the display device.

4. Microphone / Audio Sub-System

Basic enterprise-level video conferencing and collaboration systems use analog microphone pods, which
are optimal for small groups. Intermediate video collaboration systems usually include a conference
phone with a gated array of digital microphones, designed to run on integrated software that enhances
the system's audio capabilities. When video conferencing is applied to larger rooms or venues, an
independent acoustic echo cancellation system is needed, and many microphones are typically connected
to the integrated collaboration system to facilitate large-group interaction.

5. Other Equipment
Video conferencing equipment should be neatly organized in a cart designed especially for housing the
collaboration system and its ancillary devices. The flat-panel display, camera, and codec are usually
placed on top, and other equipment (PC, surge suppressor, DVR, switcher, etc.) is properly stored in
the cabinet below. It is also a good idea to invest in diffuse directional lighting, as the fluorescent
lighting found in most offices tends to be inadequate for video conferencing environments. Fluorescent
and other overhead lighting are usually poorly located and lack both adequate intensity and the correct
color temperature. Poorly located lighting can cast unwanted shadows on participants' faces, making
them appear dark and blurry at the far end and creating a poor video conferencing experience for both
local and far-end parties.

Correct lighting used for video conferencing will also help the video display systems perform better, and
likewise allow high-definition cameras – which require more light – to reach optimum potential.

Major components of a videophone service — the videophone incorporates:

● a personal video camera and display,
● a microphone and speaker,
● a data-conversion device.

15)Discuss the design of video-on-demand systems

Refer part A qn 10

16)What are the kinds of redundancy that are considered for compressing video data? How does a
motion-compensated predictive scheme work for videoconferencing?
There are several kinds of redundancy that can be exploited for compressing video
data:

1. Spatial redundancy: This refers to the repetition of patterns within a single


frame of video. For example, a region of a frame may contain multiple pixels
with the same color.
2. Temporal redundancy: This refers to the repetition of patterns across multiple
frames of video. For example, if a person is standing still in a video, their
appearance will be largely unchanged from one frame to the next.
3. Statistical redundancy: This refers to the fact that certain patterns or values
in a video are more likely to occur than others. For example, in a video with a
lot of sky, most of the pixels will be blue.
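As a concrete illustration of exploiting spatial redundancy (a sketch of the general idea, not a scheme named in the text): run-length encoding replaces a run of identical pixel values with a single (value, count) pair.

```python
# Illustrative sketch: run-length encoding (RLE) exploits spatial
# redundancy by replacing runs of identical pixel values with
# (value, count) pairs.
def rle_encode(row):
    runs = []
    for pixel in row:
        if runs and runs[-1][0] == pixel:
            runs[-1][1] += 1          # extend the current run
        else:
            runs.append([pixel, 1])   # start a new run
    return [tuple(r) for r in runs]

# A row of mostly "blue" sky pixels compresses to just two pairs.
encoded = rle_encode(["blue"] * 6 + ["white"] * 2)
# → [("blue", 6), ("white", 2)]
```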

One common approach to compressing video data is the motion compensated


predictive scheme, which exploits temporal redundancy to reduce the amount of
data that needs to be transmitted. This scheme works by predicting the motion of
objects in a video and sending only the difference between the predicted motion and
the actual motion. This can significantly reduce the amount of data that needs to be
transmitted, as most of the information in a video frame is often unchanged from
one frame to the next.

In a video-conferencing application, the motion compensated predictive scheme can


be used to compress the video data that is being transmitted between two or more
devices. By predicting the motion of objects in the video and sending only the
differences, it is possible to significantly reduce the amount of data that needs to be
transmitted, which can improve the quality and speed of the video-conferencing
connection.
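The prediction step can be sketched as block matching against the previous frame. The toy example below, using 1-D "frames", is an illustration of the idea only (real codecs match 2-D blocks such as 16×16 macroblocks): find the offset in the previous frame that best matches the current block, then transmit only that motion vector plus the residual.

```python
# Sketch of motion-compensated prediction via exhaustive block matching.
def best_match(prev, block):
    """Return the offset in `prev` minimising the sum of absolute differences."""
    best_off, best_sad = 0, float("inf")
    for off in range(len(prev) - len(block) + 1):
        sad = sum(abs(p - b) for p, b in zip(prev[off:off + len(block)], block))
        if sad < best_sad:
            best_off, best_sad = off, sad
    return best_off

def encode_block(prev, block):
    off = best_match(prev, block)
    residual = [b - p for b, p in zip(block, prev[off:off + len(block)])]
    return off, residual  # transmit these instead of the raw pixels

prev_frame = [10, 10, 50, 60, 70, 10]
curr_block = [50, 60, 70]            # the same "object", shifted within the frame
offset, residual = encode_block(prev_frame, curr_block)
# offset == 2 and residual == [0, 0, 0]: only a motion vector need be sent.
```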
17)Discuss the various types of frames used for video encoding in MPEG.

In the Moving Picture Experts Group (MPEG) standard for digital video compression,
there are several different types of frames that are used for encoding video data:

1. I-frames (intra-coded frames): These are self-contained frames that do not


rely on any other frames for their decoding. I-frames contain all of the
information needed to recreate the video frame, and are typically used at
regular intervals throughout the video to provide a reference point for
decoding.
2. P-frames (predictive-coded frames): These frames are predicted from one or
more previous frames using motion compensation. P-frames contain only the
differences between the predicted frame and the actual frame, and are
typically used between I-frames to reduce the amount of data that needs to be
transmitted.
3. B-frames (bidirectionally predictive-coded frames): These frames are
predicted from both a previous and a future frame using motion
compensation. B-frames contain only the differences between the predicted
frame and the actual frame, and are typically used between I- and P-frames to
further reduce the amount of data that needs to be transmitted.

In an MPEG video, the frames are typically arranged in a hierarchical structure, with
I-frames at the top, followed by P-frames and B-frames. This allows the video
decoder to use the information in the I-frames as a reference point for decoding the
P- and B-frames, which helps to improve the efficiency of the compression process.
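Because B-frames reference a future I- or P-frame, the order in which frames are transmitted and decoded differs from the order in which they are displayed. The sketch below is an illustration of that reordering (not code from the MPEG specification): each reference frame is emitted before the B-frames that depend on it.

```python
# Sketch: convert a display-order GOP (e.g. I B B P B B P) into decode
# order.  Each B-frame depends on the *next* reference frame (I or P),
# so that reference must be transmitted before the Bs.
def decode_order(display_order):
    out, pending_b = [], []
    for frame in display_order:
        if frame == "B":
            pending_b.append(frame)   # hold Bs until their future reference
        else:                         # I or P: a reference frame
            out.append(frame)
            out.extend(pending_b)     # the held Bs follow the reference they need
            pending_b = []
    return out + pending_b

order = decode_order(["I", "B", "B", "P", "B", "B", "P"])
# → ["I", "P", "B", "B", "P", "B", "B"]
```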

18)Describe local area network architecture for delivering multimedia information

A local area network (LAN) is a computer network that connects devices in a limited
geographical area, such as a single building or a campus. LANs are often used to
deliver multimedia information, such as audio, video, and images, to users within the
network.

Here is an overview of the architecture of a LAN for delivering multimedia


information:

1. Network devices: The LAN typically consists of a variety of devices, including


computers, servers, switches, routers, and hubs, that are connected together
to form the network.
2. Network media: The LAN uses various types of media, such as cables and
wireless signals, to connect the devices together and transmit data.
3. Network protocols: The LAN uses a set of protocols, such as Ethernet and
TCP/IP, to establish communication between devices and to control the flow
of data on the network.
4. Network services: The LAN may provide a variety of services, such as file
sharing, email, and video-conferencing, to users within the network.
5. Network security: The LAN may include security measures, such as firewalls
and access control, to protect against unauthorized access and ensure the
privacy and integrity of the data being transmitted.

Overall, the architecture of a LAN for delivering multimedia information involves a


combination of hardware, software, and protocols that work together to provide fast
and reliable access to multimedia content for users within the network.
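As a minimal sketch of delivering a media chunk across such a LAN (an illustration under assumed hostnames and port numbers, not a description of any particular product): real-time audio and video are commonly carried over UDP, since it avoids the retransmission delays of TCP.

```python
# Sketch: one media chunk sent over UDP on a LAN.  The host/port values
# are placeholders for illustration.
import socket

def send_chunk(data, host="127.0.0.1", port=50007):
    """Fire-and-forget delivery of one media chunk as a UDP datagram."""
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
        s.sendto(data, (host, port))

def receive_chunk(port=50007, bufsize=2048):
    """Block (briefly) for one incoming media chunk."""
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
        s.bind(("127.0.0.1", port))
        s.settimeout(2.0)             # a real player would not wait forever
        data, _addr = s.recvfrom(bufsize)
        return data
```

In a real system this per-datagram delivery would be wrapped in a protocol such as RTP, which adds the sequence numbers and timestamps needed for reordering and synchronization.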

19)Discuss Descriptor Techniques

Descriptor techniques are methods for extracting and representing the


characteristics or features of multimedia data, such as images, audio, and video. The
goal of descriptor techniques is to provide a compact and meaningful representation
of the data that can be used for tasks such as indexing, retrieval, and classification.

There are several different types of descriptor techniques, including:

1. Color descriptor techniques: These techniques extract and represent the


color characteristics of an image, such as the dominant colors or color
histograms.
2. Texture descriptor techniques: These techniques extract and represent the
texture characteristics of an image, such as the arrangement of pixels or the
frequency of certain patterns.
3. Shape descriptor techniques: These techniques extract and represent the
shape characteristics of an image, such as the outline or contour of an object.
4. Audio descriptor techniques: These techniques extract and represent the
characteristics of an audio signal, such as the pitch, volume, or spectral
content.
5. Video descriptor techniques: These techniques extract and represent the
characteristics of a video signal, such as the motion, appearance, or spatial
layout of objects.

Overall, descriptor techniques are an important tool for representing the


characteristics of multimedia data and enabling various applications such as
indexing, retrieval, and classification.
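The color histogram mentioned under color descriptor techniques above can be sketched as follows (an illustrative implementation, not a standard algorithm from the text): quantize each pixel's channels into a few bins and use the normalized bin counts as a compact feature vector.

```python
# Sketch: a coarse colour-histogram descriptor for an RGB image.
def color_histogram(pixels, bins=4):
    """pixels: list of (r, g, b) tuples with channel values 0-255.

    Returns a normalised histogram of bins**3 entries, usable as a
    compact feature vector for indexing and retrieval.
    """
    hist = [0.0] * (bins ** 3)
    step = 256 // bins                # width of each quantisation bin
    for r, g, b in pixels:
        idx = (r // step) * bins * bins + (g // step) * bins + (b // step)
        hist[idx] += 1
    total = len(pixels) or 1
    return [h / total for h in hist]

h = color_histogram([(0, 0, 255), (10, 10, 250), (255, 0, 0)])
# The two blue-ish pixels fall into one bin, the red pixel into another,
# so two images with similar colour content yield similar vectors.
```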

20)Discuss Image Retrieval techniques

Refer Part A qn 5
