Multimedia Databases
A multimedia database (MMDB) is a collection of related multimedia data. The multimedia
data include one or more primary media data types such as text, images, graphic objects
(including drawings, sketches and illustrations), animation sequences, audio and video.
Hardware-Compiler architecture design
This advanced computer architecture course explores the nature of and the motivation for
recent trends in uniprocessor computing. Emphasis is placed on a commonality among many
of these trends: increased reliance on sophisticated compilation. The compiler, no longer just
a consideration in instruction-set architecture design, has become the driving factor in many
architectural innovations. Predication, speculation, value prediction, and other
hardware/compiler techniques that exploit instruction-level parallelism will be explored
using real codes. The course includes a project involving the IMPACT Research Compiler and
a working EPIC (Explicitly Parallel Instruction Computing) architecture similar to Intel's IA-64.
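As a small illustration of one technique named above, the C sketch below shows if-conversion, the transformation behind predication: a control dependence (a branch) becomes a data dependence (a predicate selecting a value), which an EPIC-style compiler can lower to predicated instructions or conditional moves. The function names are illustrative only and not taken from the course materials.

/* A minimal sketch of if-conversion, the transformation behind predication.
 * Function names are illustrative only. */

/* Branchy form: the hardware has to predict the branch. */
int clamp_branchy(int x, int limit)
{
    if (x > limit)
        x = limit;
    return x;
}

/* If-converted form: both values are available and the predicate selects
 * the result, so no branch is needed. */
int clamp_predicated(int x, int limit)
{
    int p = (x > limit);        /* would live in a predicate register on IA-64 */
    return p ? limit : x;       /* typically lowered to a conditional move */
}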
Algorithmic Problems of the Internet
First, prefixes are used in the Internet today to aggregate forwarding entries at routers. Such
aggregation reduces memory and control traffic, but requires that every router perform a
longest-matching-prefix lookup to process each received packet. Internet backbone routers have a
database of roughly 40,000 prefixes, each of which is a binary string of anywhere from 8 to
32 bits. When your computer sends an Internet message to, say, Joe@JoeHost, your computer
asks something akin to directory assistance (called DNS, or the Domain Name System) to
translate the host name (JoeHost) to a 32-bit Internet address. Each packet then carries a 32-bit
IP destination address D; a router must find the best matching prefix corresponding
to D in its database of prefixes in (hopefully) hundreds of nanoseconds.
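As a rough illustration of what such a lookup involves, here is a minimal C sketch of a one-bit-at-a-time (unibit) trie over 32-bit IPv4 addresses. Production routers use compressed multibit tries, hashing, or TCAMs to reach the speeds mentioned above; the structure and function names here are illustrative only.

#include <stdint.h>
#include <stdlib.h>

/* One node per prefix bit; port >= 0 marks a stored prefix (its next hop). */
struct trie_node {
    struct trie_node *child[2];
    int port;                       /* -1 if no prefix ends at this node */
};

static struct trie_node *node_new(void)
{
    struct trie_node *n = calloc(1, sizeof *n);   /* error handling omitted */
    n->port = -1;
    return n;
}

/* Insert prefix/len (e.g. 0x0A000000 with len 8 for 10.0.0.0/8) -> port. */
void trie_insert(struct trie_node *root, uint32_t prefix, int len, int port)
{
    struct trie_node *n = root;
    for (int i = 0; i < len; i++) {
        int bit = (prefix >> (31 - i)) & 1;
        if (!n->child[bit])
            n->child[bit] = node_new();
        n = n->child[bit];
    }
    n->port = port;
}

/* Walk the trie along the destination address, remembering the last
 * (and therefore longest) matching prefix; return its port, or -1. */
int trie_lookup(const struct trie_node *root, uint32_t dst)
{
    int best = root->port;
    const struct trie_node *n = root;
    for (int i = 0; i < 32 && n; i++) {
        n = n->child[(dst >> (31 - i)) & 1];
        if (n && n->port >= 0)
            best = n->port;
    }
    return best;
}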
A third interesting problem is that of scheduling at output links to provide service
guarantees. Consider the example of a number of chefs who share an oven; some of the
chefs prepare fast food and require a guaranteed response time, others make restaurant
meals and need adequate response times, and still others make frozen dinners in bulk for
which throughput is more important. In our metaphor, the oven corresponds to an output
link at a router, the fast-food chef corresponds to, say, video, the restaurant chef to, say,
remote login, and the frozen-food chef to file transfer.
The problem is to have a fast scheduler at the router, with decision times comparable to a
lookup, that can provide guaranteed response times for delay-critical traffic and yet
provide fair throughput for other kinds of traffic. The problem is distinguished from standard
real-time schedulers such as EDF (earliest deadline first) by the throughput fairness requirement.
There are a number of good solutions, such as Weighted Fair Queuing and Worst-Case Fair
Weighted Fair Queuing, but finding faster solutions is still an interesting research problem.
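As a highly simplified sketch of the weighted fair queuing idea (not the full WFQ or worst-case fair algorithms named above), the C demo below stamps each packet with a virtual finish time F = max(V, F_prev) + length/weight and relies on the link serving packets in increasing F order. Real schedulers track the virtual time V carefully; this demo keeps V at 0, which suffices when all packets arrive at an idle link at once. The flow weights and packet sizes are made up for illustration.

#include <stdio.h>

#define NFLOWS 3

struct flow {
    double weight;      /* share of the output link                     */
    double last_finish; /* finish time of the flow's most recent packet */
};

static double virtual_time = 0.0;   /* scheduler's virtual clock (fixed here) */

/* Stamp a packet of 'len' bytes arriving on flow 'f'; return its finish time. */
static double stamp_packet(struct flow *f, double len)
{
    double start = f->last_finish > virtual_time ? f->last_finish : virtual_time;
    f->last_finish = start + len / f->weight;
    return f->last_finish;
}

int main(void)
{
    struct flow flows[NFLOWS] = {
        { 4.0, 0.0 },   /* delay-critical traffic, e.g. video      */
        { 2.0, 0.0 },   /* interactive traffic, e.g. remote login  */
        { 1.0, 0.0 },   /* bulk traffic, e.g. file transfer        */
    };

    /* One 1000-byte packet arrives on each flow; the scheduler would serve
     * them in increasing finish-time order, so the heavily weighted
     * (delay-critical) flow goes first. */
    for (int i = 0; i < NFLOWS; i++)
        printf("flow %d: virtual finish time %.1f\n", i, stamp_packet(&flows[i], 1000.0));

    return 0;
}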
Besides these three important problems, there are a number of other algorithmic problems
that arise in protocol implementations. These include computing fast checksums, load
balancing, and sequence number bookkeeping.
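One concrete example of such a per-packet computation is the one's-complement Internet checksum of RFC 1071. The C routine below is a plain, unoptimized version: it sums the data as 16-bit words, folds the carries back in, and returns the complement; production stacks add optimizations such as wider accumulators and incremental update.

#include <stddef.h>
#include <stdint.h>

/* One's-complement Internet checksum (RFC 1071). */
uint16_t internet_checksum(const void *data, size_t len)
{
    const uint8_t *p = data;
    uint32_t sum = 0;

    while (len > 1) {                       /* add 16-bit words */
        sum += ((uint32_t)p[0] << 8) | p[1];
        p += 2;
        len -= 2;
    }
    if (len == 1)                           /* pad an odd trailing byte */
        sum += (uint32_t)p[0] << 8;

    while (sum >> 16)                       /* fold carries into the low 16 bits */
        sum = (sum & 0xFFFF) + (sum >> 16);

    return (uint16_t)~sum;
}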
Expert Systems and decision making
The goal of knowledge-based systems is to make the critical information required for the
system to work explicit rather than implicit. In a traditional computer program the logic is
embedded in code that can typically be reviewed only by an IT specialist. With an expert
system the goal is to specify the rules in a format that is intuitive and easily understood,
reviewed, and even edited by domain experts rather than IT experts. The benefits of this
explicit knowledge representation are rapid development and ease of maintenance.
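As a toy illustration of this idea (not an actual expert system shell), the C sketch below keeps the decision rules in a plain data table that a domain expert could read and edit, with a few lines of generic code to apply them. The rule fields, thresholds, and actions are invented for the example.

#include <stdio.h>
#include <string.h>

struct rule {
    const char *field;      /* which input the rule tests        */
    double      threshold;  /* fire when the input exceeds this  */
    const char *action;     /* what to recommend when it fires   */
};

/* The "knowledge base": reviewable data, not code. */
static const struct rule rules[] = {
    { "temperature", 38.5, "suspect fever"    },
    { "heart_rate", 120.0, "flag tachycardia" },
};

/* Stand-in for reading input data; fixed values for the demo. */
static double get_input(const char *field)
{
    if (strcmp(field, "temperature") == 0)
        return 39.1;
    return 95.0;    /* heart_rate */
}

int main(void)
{
    /* The tiny "inference engine": apply every rule to the inputs. */
    for (size_t i = 0; i < sizeof rules / sizeof rules[0]; i++)
        if (get_input(rules[i].field) > rules[i].threshold)
            printf("rule fired: %s\n", rules[i].action);
    return 0;
}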

A claim often made for expert system shells was that they removed the need for
trained programmers and that experts could develop systems themselves. In reality this was
seldom, if ever, true. While the rules for an expert system were more comprehensible than
typical computer code, they still had a formal syntax in which a misplaced comma or other
character could cause havoc, as with any other computer language. In addition, as expert
systems moved from prototypes in the lab to deployment in the business world, issues of
integration and maintenance became far more critical. Inevitably, demands arose to integrate
with and take advantage of large legacy databases and systems. Accomplishing this
integration required the same skills as building any other type of system.
Agent-based systems and applications
AI is the science and engineering of making intelligent machines, and the agent is central to AI for
obvious reasons. AI always tries to make things that are intelligent. Such a thing is not
necessarily a machine; it can be considered an agent. We can therefore conclude that
the agent is the ultimate objective of AI.
Multicasting in Ad hoc networks
An ad hoc network is a dynamic wireless network with the engagement of cooperative nodes
without a fixed infrastructure. Multicasting is intended for group communication that
supports the dissemination of information from a sender to all the receivers in a group.
Problems in ad hoc networks include the scarcity of bandwidth, the short lifetime of nodes due to
power constraints, and the dynamic topology caused by the mobility of nodes. These problems make
it necessary to design simple, scalable, robust and energy-efficient routing protocols for multicast
environments. In this project I discuss different multicasting protocols and their deployment
issues, and provide some guidelines for researchers in this field.
Mobile Ad hoc networks
A mobile ad hoc network (MANET) is a continuously self-configuring, infrastructure-less
network of mobile devices connected without wires. "Ad hoc" is Latin and means "for this
purpose".
Each device in a MANET is free to move independently in any direction, and will therefore
change its links to other devices frequently. Each device must forward traffic unrelated to its own
use, and therefore act as a router. The primary challenge in building a MANET is equipping each
device to continuously maintain the information required to properly route traffic. Such
networks may operate by themselves or may be connected to the larger Internet. They may
contain one or more transceivers, possibly of different types, between nodes. This results in a
highly dynamic, autonomous topology.
MANETs are a kind of wireless ad hoc network that usually has a routable networking
environment on top of a link-layer ad hoc network. MANETs consist of a peer-to-peer,
self-forming, self-healing network, in contrast to a mesh network, which has a central controller (to
determine, optimize, and distribute the routing table). MANETs circa 2000-2015 typically
communicate at radio frequencies (30 MHz to 5 GHz).
Next Generation Protocol (IPv6)
Internet Protocol version 6 (IPv6) is the latest version of the Internet Protocol (IP),
the communications protocol that provides an identification and location system for
computers on networks and routes traffic across the Internet. IPv6 was developed by
the Internet Engineering Task Force (IETF) to deal with the long-anticipated problem of IPv4
address exhaustion.
IPv6 is intended to replace IPv4, which still carries more than 96% of Internet
traffic worldwide as of May 2014. As of June 2014, the percentage of users
reaching Google services with IPv6 surpassed 4% for the first time.
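As a small illustration of what the larger address space looks like in code, the C sketch below parses and prints an IPv6 address with the standard POSIX inet_pton()/inet_ntop() calls; the address 2001:db8::1 comes from the prefix reserved for documentation and is used purely as an example.

#include <stdio.h>
#include <arpa/inet.h>

int main(void)
{
    struct in6_addr addr;
    char text[INET6_ADDRSTRLEN];

    if (inet_pton(AF_INET6, "2001:db8::1", &addr) != 1) {
        fprintf(stderr, "not a valid IPv6 address\n");
        return 1;
    }

    /* 16 bytes = 128 bits, versus 4 bytes for an IPv4 address. */
    printf("parsed %zu-byte address\n", sizeof addr.s6_addr);

    inet_ntop(AF_INET6, &addr, text, sizeof text);
    printf("canonical form: %s\n", text);
    return 0;
}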
Privacy preserving data mining
A key problem that arises in any en masse collection of data is that of confidentiality. The
need for privacy is sometimes due to law (e.g., for medical databases) or can be motivated
by business interests. However, there are situations where the sharing of data can lead to
mutual gain. A key utility of large databases today is research, whether it be scientific or
economic and market oriented. Thus, for example, the medical field has much to gain by
pooling data for research, as can even competing businesses with mutual interests. Despite
the potential gain, this is often not possible due to the confidentiality issues that arise.
