Seminar Topics List
3D PC Glasses
Only a few years ago, seeing in 3-D meant peering through a pair of red-and-blue glasses, or trying not to go cross-eyed in front of a page of fuzzy dots. It was great at the time, but 3-D technology has moved on. Scientists know more about how our vision works than ever before, and our computers are more powerful than ever before -- most of us have sophisticated components in our computer that are dedicated to producing realistic graphics. Put those two things together, and you'll see how 3-D graphics have really begun to take off.
Most computer users are familiar with 3-D games. Back in the 90s, computer enthusiasts were stunned by the game Wolfenstein 3D, which took place in a maze-like castle. It may have been constructed from blocky tiles, but the castle existed in three dimensions -- you could move forward and backward, or hold down the appropriate key and see your viewpoint spin through 360 degrees. Back then, it was revolutionary and quite amazing. Nowadays, gamers enjoy ever more complicated graphics -- smooth, three-dimensional environments complete with realistic lighting and complex simulations of real-life physics grace our screens.
But that's the problem -- the screen. The game itself may be in three dimensions, and the player may be able to look wherever he wants with complete freedom, but at the end of the day the picture is displayed on a computer monitor... and that's a flat surface.
That's where PC 3-D glasses come in. They're designed to convince your brain that your monitor is showing a real, three-dimensional object. In order to understand quite how this works, we need to know what sort of work our brain does with the information our eyes give it. Once we know about that, we'll be able to understand just how 3-D glasses do their job.
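To make the idea concrete, here is a minimal Python sketch of the oldest trick the passage mentions: a red-cyan anaglyph that routes a slightly different view to each eye. It assumes numpy and Pillow are installed; the image file names are hypothetical.

```python
# A minimal red-cyan anaglyph sketch (hypothetical filenames; needs numpy + Pillow).
import numpy as np
from PIL import Image

def make_anaglyph(left_path, right_path, out_path):
    """Combine a left/right stereo pair into one red-cyan anaglyph image.

    The left view supplies the red channel and the right view supplies the
    green and blue channels; red-cyan glasses then deliver one view to each
    eye, which the brain fuses into apparent depth.
    """
    left = np.asarray(Image.open(left_path).convert("RGB"))
    right = np.asarray(Image.open(right_path).convert("RGB"))

    anaglyph = right.copy()
    anaglyph[..., 0] = left[..., 0]   # red channel taken from the left eye's view
    Image.fromarray(anaglyph).save(out_path)

make_anaglyph("left.png", "right.png", "anaglyph.png")  # hypothetical files
```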
ATM
These computers include the entire spectrum of PCs, through professional workstations, up to super-computers. As the performance of computers has increased, so too has the demand for communication between all systems for exchanging data, or between central servers and the associated host computer system. The replacement of copper with fiber and the advancements in digital communication and encoding are at the heart of several developments that will change the communication infrastructure. The former development has provided us with a huge amount of transmission bandwidth, while the latter has made the transmission of all information, including voice and video, through a packet-switched network possible.
With continuous work sharing over large distances, including international communication, the systems must be interconnected via wide area networks with increasing demands for higher bit rates.
For the first time, a single communications technology meets LAN and WAN requirements and handles a wide variety of current and emerging applications. ATM is the first technology to provide a common format for bursts of high-speed data and the ebb and flow of the typical voice phone call. Seamless ATM networks provide desktop-to-desktop multimedia networking over a single-technology, high-bandwidth, low-latency network, removing the boundary between LAN and WAN.
ATM is simply a Data Link Layer protocol. It is asynchronous in the sense that the recurrence of the cells containing information from an individual user is not necessarily periodic. It is the technology of choice for the evolving B-ISDN (Broadband Integrated Services Digital Network) and for next-generation LANs and WANs. ATM supports transmission speeds of 155 Mbits/sec. Photonic approaches have made the advent of ATM switches feasible, and an evolution towards an all-packetized, unified, broadband telecommunications and data communication world based on ATM is taking place.
MGCP. Registration, Admission and Status (RAS). H.323. The Media Gateway Control Protocol (Megaco). The Real-time Transport Protocol (RTP). On-Board Diagnostics. CDMA2000. AppleTalk. FUNI. MPEG-2. ISO/IEC 14496 - MPEG-4. Data Over Cable Service Interface Specification (DOCSIS). VoDSL. Frame Relay. CSS and DeCSS. i-mode. ShotCode. Mathematical Markup Language (MathML). VoIP in mobile phones. Differential (OPTV). DVB. 3GP. Ogg. Vorbis.
Acoustic cryptanalysis
Acoustic cryptanalysis is a side channel attack which exploits sounds, audible or not, produced during a computation or input-output operation. In 2004, Dmitri Asonov and Rakesh Agrawal of the IBM Almaden Research Center announced that computer keyboards and keypads used on telephones and automated teller machines (ATMs) are vulnerable to attacks based on differentiating the sound produced by different keys. Their attack employed a neural network to recognize the key being pressed. By analyzing recorded sounds, they were able to recover the text of data being entered. These techniques allow an attacker using covert listening devices to obtain passwords, passphrases, personal identification numbers (PINs) and other security information. Also in 2004, Adi Shamir and Eran Tromer demonstrated that it may be possible to conduct timing attacks against a CPU performing cryptographic operations by analyzing variations in its humming noise. In his book Spycatcher, former MI5 operative Peter Wright discusses the use of an acoustic attack against Egyptian Hagelin cipher machines in 1956. The attack was codenamed 'ENGULF'.
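Asonov and Agrawal's attack used a trained neural network; as a rough illustration of the same idea, the sketch below uses the simplest possible stand-in, a nearest-centroid classifier over FFT spectra. The "recordings" here are synthetic placeholders; the sketch only shows the shape of such a pipeline.

```python
# Hedged sketch of acoustic keystroke recognition: classify keys by the
# spectrum of their click sounds. Clips and labels are hypothetical
# stand-ins for real microphone recordings.
import numpy as np

def spectrum(clip):
    """Log-magnitude FFT of one keystroke clip, used as a feature vector."""
    mag = np.abs(np.fft.rfft(clip))
    return np.log1p(mag / (mag.max() + 1e-12))

def train_centroids(clips, labels):
    """Average the spectra of labelled clips into one centroid per key."""
    feats = np.array([spectrum(c) for c in clips])
    return {k: feats[np.array(labels) == k].mean(axis=0) for k in set(labels)}

def classify(clip, centroids):
    """Assign the key whose centroid spectrum is nearest to this clip."""
    f = spectrum(clip)
    return min(centroids, key=lambda k: np.linalg.norm(f - centroids[k]))

# Hypothetical usage with synthetic clips of 2048 samples each:
rng = np.random.default_rng(0)
clips = [rng.normal(size=2048) for _ in range(20)]
labels = ["a", "b"] * 10
model = train_centroids(clips, labels)
print(classify(clips[0], model))
```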
Adaptive Partition Scheduler
Adaptive Partition Schedulers are a relatively new type of partition scheduler, pioneered with the most recent version of the QNX operating system. Adaptive Partitioning (or AP) allows the real-time system designer to request that a percentage of processing resources be reserved for a particular subsystem (group of threads and/or processes). The operating system's priority-driven pre-emptive scheduler will behave in the same way that a non-AP system would until the system is overloaded (i.e. system-wide there is more computation to perform than the processor is capable of sustaining over the long term). During overload, the AP scheduler enforces hard limits on total run-time for the subsystems within a partition (as dictated by the allocated percentage of processor bandwidth for the particular partition). If the system is not overloaded, a partition that is allocated (for example) 10% of the processor bandwidth can, in fact, use more than 10%, as it will borrow from the spare budget of other partitions (but will be required to pay it back later). This is very useful for non-real-time subsystems that experience variable load, since these subsystems can make use of spare budget from hard real-time partitions in order to make more forward progress than they would in a Fixed Partition Scheduler such as ARINC-653, but without impacting the hard real-time subsystems' deadlines.
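A minimal Python sketch of the budget accounting described above. The partition names, percentages and the per-tick model are illustrative; this is not QNX's actual API.

```python
# Illustrative adaptive-partition budget accounting over one averaging window.
from dataclasses import dataclass

WINDOW_TICKS = 100  # length of one accounting window, in scheduler ticks

@dataclass
class Partition:
    name: str
    budget_pct: int   # guaranteed share of each window
    used: int = 0     # ticks consumed in the current window

def schedule_tick(ready):
    """Pick the partition that runs this tick. Under overload (every
    partition always ready), partitions still inside their guaranteed
    budget win; spare budget is lent out only when no partition with
    remaining budget wants the CPU."""
    in_budget = [p for p in ready if p.used < p.budget_pct * WINDOW_TICKS // 100]
    chosen = in_budget[0] if in_budget else (ready[0] if ready else None)
    if chosen is not None:
        chosen.used += 1
    return chosen

parts = [Partition("hard-rt", 70), Partition("gui", 20), Partition("logging", 10)]
for _ in range(WINDOW_TICKS):            # overload: everyone is always ready
    schedule_tick(parts)
print({p.name: p.used for p in parts})   # usage matches the guaranteed budgets
```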
Ajax
XML is sometimes used as the format for transferring data between the server and client, although any format will work, including preformatted HTML, plain text, JSON and even EBML.
Like DHTML, LAMP and SPA, Ajax is not a technology in itself, but a term that refers to the use of a group of technologies together.
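As a small illustration of "any format will work", the following sketch serializes the same record as both JSON and XML using only the Python standard library; the record itself is hypothetical.

```python
# The same payload in two of the formats the text mentions: JSON and XML.
import json
import xml.etree.ElementTree as ET

record = {"user": "alice", "unread": 3}   # hypothetical payload

as_json = json.dumps(record)

root = ET.Element("response")
for key, value in record.items():
    ET.SubElement(root, key).text = str(value)
as_xml = ET.tostring(root, encoding="unicode")

print(as_json)  # {"user": "alice", "unread": 3}
print(as_xml)   # <response><user>alice</user><unread>3</unread></response>
```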
Elliptic Curve Cryptography (ECC)
ECC is a public key encryption technique based on elliptic curve theory. ECC can be used to create faster, smaller and more efficient cryptographic keys. It generates keys through the properties of the elliptic curve equation rather than the traditional method of generation as the product of very large prime numbers. This technology can be used in conjunction with most public key encryption methods, such as RSA and Diffie-Hellman.
ECC can yield a level of security with a 164-bit key that other systems require a 1,024-bit key to achieve. Since ECC provides equivalent security at lower computing power and battery resource usage, it is widely used for mobile applications. ECC was developed by Certicom, a mobile e-business security provider, and was recently licensed by Hifn, a manufacturer of integrated circuitry and network security products. Many manufacturers, including 3COM, Cylink, Motorola, Pitney Bowes, Siemens, TRW and VeriFone, have incorporated support for ECC in their products.
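A toy sketch of the elliptic curve group law that such keys are built on, using a deliberately tiny curve over a small prime field. Real ECC uses standardized curves with very large primes; the parameters here are for illustration only.

```python
# Toy elliptic curve arithmetic: points on y^2 = x^3 + ax + b over GF(p).
P_MOD, A, B = 97, 2, 3          # tiny prime field and curve coefficients (toy)

def add(p, q):
    """Add two curve points (None represents the point at infinity)."""
    if p is None: return q
    if q is None: return p
    (x1, y1), (x2, y2) = p, q
    if x1 == x2 and (y1 + y2) % P_MOD == 0:
        return None
    if p == q:                   # tangent slope for point doubling
        s = (3 * x1 * x1 + A) * pow(2 * y1, -1, P_MOD) % P_MOD
    else:                        # chord slope for distinct points
        s = (y2 - y1) * pow(x2 - x1, -1, P_MOD) % P_MOD
    x3 = (s * s - x1 - x2) % P_MOD
    return (x3, (s * (x1 - x3) - y1) % P_MOD)

def scalar_mul(k, p):
    """k*P by double-and-add; this one-way step is what ECC keys rely on."""
    result = None
    while k:
        if k & 1:
            result = add(result, p)
        p, k = add(p, p), k >> 1
    return result

G = (3, 6)                       # on the toy curve: 6^2 = 36 = 27 + 6 + 3 (mod 97)
print(scalar_mul(5, G))          # "public key" for toy private key 5
```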
Generic Visual Perception Processor
The generic visual perception processor is a single chip modeled on the perception capabilities of the human brain, which can detect objects in a motion video signal and then locate and track them in real time. Imitating the neural networks of the human eye and the brain, the chip can handle about 20 billion instructions per second. This electronic eye on a chip can handle tasks that range from sensing variable parameters, in the form of video signals, to processing them for control purposes.
Hyper Transport Technology
This describes AMD's Hyper Transport™ technology, a new I/O architecture for personal computers, workstations, servers, high-performance networking and communications systems, and embedded applications. This scalable architecture can provide significantly increased bandwidth over existing bus architectures and can simplify in-the-box connectivity by replacing legacy buses and bridges. The programming model used in Hyper Transport technology is compatible with existing models and requires little or no change to existing operating system and driver software.
It provides a universal connection designed to reduce the number of buses within the system. It is designed to enable the chips inside of PCs and networking and communications devices to communicate with each other up to 48 times faster than with existing technologies. Hyper Transport technology is truly the universal solution for in-the-box connectivity.
>> It is a new I/O architecture for personal computers, workstations, servers, embedded applications, etc.
>> It is a scalable architecture that can provide significantly increased bandwidth over existing bus architectures.
>> It simplifies in-the-box connectivity by replacing legacy buses and bridges.
>> The programming model used in Hyper Transport technology is compatible with existing models and requires little or no change to existing operating system and driver software.
Hyper Transport technology provides high speeds while maintaining full software and operating system compatibility with the Peripheral Component Interconnect (PCI) interface that is used in most systems today. In older multi-drop bus architectures like PCI, the addition of hardware devices affects the overall electrical characteristics and bandwidth of the entire bus. Even with PCI-X 1.0, the maximum supported clock speed of 133 MHz must be reduced when more than one PCI-X device is attached. Hyper Transport technology uses a point-to-point link that is connected between two devices, enabling the overall speed of the link to transfer data much faster.
Kerberos
In a non-networked personal computing environment, resources and information can be protected by physically securing the personal computer. But in a network of users requiring services from many computers, the identity of each user has to be accurately verified. Kerberos is used for this authentication: it is a third-party authentication technology used to identify a user requesting a service.
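The sketch below is not the real Kerberos protocol; it is a deliberately simplified illustration of the third-party idea, in which an authentication server that shares a secret with a service vouches for the user by issuing a signed, expiring ticket. All names and the key are hypothetical.

```python
# Simplified third-party authentication sketch (NOT real Kerberos): the
# auth server issues an HMAC-signed, expiring ticket; the service verifies it.
import hashlib
import hmac
import json
import time

SERVICE_KEY = b"secret shared by auth server and service"  # hypothetical

def issue_ticket(user, lifetime_s=300):
    """Auth server: bind the user's name and an expiry time together
    under the key it shares with the service."""
    body = json.dumps({"user": user, "expires": time.time() + lifetime_s})
    mac = hmac.new(SERVICE_KEY, body.encode(), hashlib.sha256).hexdigest()
    return body + "|" + mac

def verify_ticket(ticket):
    """Service: accept the claimed identity only if the MAC is genuine
    and the ticket has not expired; returns the user name or None."""
    body, mac = ticket.rsplit("|", 1)
    expected = hmac.new(SERVICE_KEY, body.encode(), hashlib.sha256).hexdigest()
    claims = json.loads(body)
    if hmac.compare_digest(mac, expected) and claims["expires"] > time.time():
        return claims["user"]
    return None

print(verify_ticket(issue_ticket("alice")))  # -> alice
```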
Metasploit
The Metasploit Project is an open source computer security project which provides information about security vulnerabilities and aids in penetration testing and IDS signature development. Its most well-known sub-project is the Metasploit Framework, a tool for developing and executing exploit code against a remote target machine.
Wearable Computers
SIP
DNA Based Computing
Wi-Fi (802.11b)
High Performance DSP Architectures
Java Cryptography Architecture (JCA)
Future of Satellite Communication
Tablet PC
Image compression
4G Wireless Technology
Choreography
Mobile agent
MPEG-7
Curl: A Gentle Slope Language For The Web
Genetic programming
High Speed Data In Mobile Networks
JIRO Technology
Future of business Computing
Packet Interception
Internet Telephony
Agile Software development
Crusoe Processors
Peer to peer Networking
Clustering
Augmented Reality
Encrypted Text chat Using Bluetooth
Ovonic Unified Memory
Real Time Operating System
A real-time system is defined as follows: a real-time system is one in which the correctness of the computations not only depends upon the logical correctness of the computation but also upon the time at which the result is produced. If the timing constraints of the system are not met, system failure is said to have occurred.
There are two types. A hard real-time operating system has strict time constraints, secondary storage that is limited or absent, and conflicts with time-sharing systems; it is not supported by general-purpose OSes. A soft real-time operating system has reduced time constraints and limited utility in industrial control or robotics, but is useful in applications (multimedia, virtual reality) requiring advanced operating-system features. In the robot example, it would be hard real time if the robot arriving late causes completely incorrect operation. It would be soft real time if the robot arriving late meant a loss of throughput. Much of what is done in real time programming is actually soft real time. Good system design often implies a level of safe/correct behaviour even if the computer system never completes the computation, so if the computer is only a little late, the system effects may be somewhat mitigated.
What makes an OS an RTOS?
1. An RTOS (Real-Time Operating System) has to be multi-threaded and preemptible.
2. The notion of thread priority has to exist, as there is for the moment no deadline-driven OS.
3. The OS has to support predictable thread synchronisation mechanisms.
4. A system of priority inheritance has to exist.
5. For every system call, the maximum time it takes should be predictable and independent of the number of objects in the system.
6. The maximum time the OS and drivers mask the interrupts should be known.
The following points should also be known by the developer:
1. System interrupt levels.
2. Device driver IRQ levels, the maximum time they take, etc.
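Returning to the hard/soft distinction above, the small Python sketch below runs a periodic task and treats a missed deadline either as a fatal failure (hard) or as degraded throughput (soft). The period and the stand-in workload are illustrative.

```python
# Illustrative hard vs. soft deadline policy for a periodic task.
import time

PERIOD_S = 0.010          # 10 ms period (illustrative)
HARD = False              # True: a miss is a failure; False: degrade instead

def work():
    time.sleep(0.002)     # stand-in for the real computation

next_release = time.monotonic()
for _ in range(5):
    work()
    deadline = next_release + PERIOD_S
    now = time.monotonic()
    if now > deadline:
        if HARD:
            # hard real time: a late result is a wrong result
            raise SystemExit("hard deadline missed: result is useless")
        # soft real time: the system keeps going with reduced throughput
        print(f"soft deadline missed by {(now - deadline) * 1e3:.1f} ms")
    next_release += PERIOD_S
    time.sleep(max(0.0, next_release - time.monotonic()))
```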
The MBMS
The MBMS is a unidirectional point-to-multipoint bearer service in which data is transmitted from a single source entity to multiple recipients. These services will typically be in the form of streaming video and audio and should not be confused with the CBS (Cell Broadcast Service) that is currently supported. This paper describes the architecture of the MBMS along with its functional notes and its integration into 3G and GERAN (GSM & EDGE Radio Access Network), with the Core Network, UTRAN (UMTS Terrestrial Radio Access Network) and radio aspects being explained.
Voice Over Internet Protocol
VoIP, or Voice over Internet Protocol, refers to sending voice and fax phone calls over data networks, particularly the Internet. This technology offers cost savings by making more efficient use of the existing network. Traditionally, voice and data were carried over separate networks optimized to suit the differing characteristics of voice and data traffic. With advances in technology, it is now possible to carry voice and data over the same networks whilst still catering for the different characteristics required by voice and data.
Voice-over-Internet-Protocol (VOIP) is an emerging technology that allows telephone calls or faxes to be transported over an IP data network. The IP network could be:
A local area network in an office
A wide area network linking the sites of a large international organization
A corporate intranet
The internet
Any combination of the above
There can be no doubt that IP is here to stay. The explosive growth of the Internet, making IP the predominant networking protocol globally, presents a huge opportunity to dispense with separate voice and data networks and use IP technology for voice traffic as well as data. As voice and data network technologies merge, massive infrastructure cost savings can be made, as the need to provide separate networks for voice and data can be eliminated.
Most traditional phone networks use the Public Switched Telephone Network (PSTN). This system employs circuit-switched technology that requires a dedicated voice channel to be assigned to each particular conversation. Messages are sent in analog format over this network.
Today, phone networks are on a migration path to VoIP. A VoIP system employs a packet-switched network, where the voice signal is digitized, compressed and packetized. This compressed digital message no longer requires a voice channel. Instead, a message can be sent across the same data lines that are used for the intranet or Internet, and a dedicated channel is no longer needed. The message can now share bandwidth with other messages in the network.
Normal data traffic is carried between PCs, servers, printers, and other networked devices through a company's worldwide TCP/IP network. Each device on the network has an IP address, which is attached to every packet for routing. Voice-over-IP packets are no different.
Users may use appliances such as Symbol's NetVision phone to talk to other IP phones or desktop PC-based phones located at company sites worldwide, provided that a voice-enabled network is installed at the site. Installation simply involves assigning an IP address to each wireless handset.
VOIP lets you make toll-free long distance voice and fax calls over existing IP data networks instead of the public switched telephone network (PSTN). Today, businesses that implement their own VOIP solution can dramatically cut long distance costs between two or more locations.
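A hedged sketch of the packetization step described above: a chunk of digitized voice is wrapped in an RTP-style header and sent as a UDP datagram. The address and payload are placeholders; a real VoIP stack adds codecs, jitter buffers and call signalling on top.

```python
# Wrap digitized voice in an RTP-style header and send it over UDP.
import socket
import struct

def rtp_packet(payload, seq, timestamp, ssrc=0x1234, payload_type=0):
    """12-byte RTP header (version 2) followed by the voice payload."""
    header = struct.pack("!BBHII",
                         2 << 6,                 # V=2, no padding/extension/CSRC
                         payload_type,           # 0 = PCMU (G.711 u-law)
                         seq & 0xFFFF,
                         timestamp & 0xFFFFFFFF,
                         ssrc)
    return header + payload

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
samples = bytes(160)                             # 20 ms of 8 kHz u-law silence
for seq in range(3):
    pkt = rtp_packet(samples, seq, timestamp=seq * 160)
    sock.sendto(pkt, ("127.0.0.1", 5004))        # hypothetical receiver
```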
Wireless Markup Language
When it's time to find out how to make content available over WAP, we need to get to grips with its markup language, i.e., WML. WML was designed from the start as a markup language to describe the display of content on small-screen devices.
It is a markup language enabling the formatting of text in the WAP environment, using a variety of markup tags to determine the display appearance of content. WML is defined using the rules of XML (Extensible Markup Language) and is therefore an XML application. WML provides a means of allowing the user to navigate around the WAP application and supports the use of anchored links as found commonly in web pages. It also provides support for images and layout within the constraints of the device.
B-ISDN Reference Model
ATM makes B-ISDN a reality. The Integrated Services Digital Network (ISDN) evolved during the 80s. It carried a basic channel that could operate at 64 kbps (B-channel), and combinations of this and others (D-channels) formed the basis of communication on the network. In the new B-ISDN world, this is supposed to supply data, voice and other communication services over a common network with a wide range of data speeds. To understand a lot of the terminology in ATM-land, it is necessary to understand the B-ISDN Reference Model. Just as the ISO seven-layer model defines the layers for network software, this model defines layers for the ATM network.
The header is broken up into the following fields.
Generic Flow Control (GFC)
Virtual Channel Identifier (VCI)
Virtual Path Identifier (VPI)
Payload type (PT)
Cell Loss Priority (CLP)
Header Error Control (HEC)
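Those fields can be pulled out of the five header octets with a few shifts and masks. A small Python sketch follows, using the UNI header layout (4-bit GFC, 8-bit VPI) and an illustrative cell:

```python
# Decode the five header octets of an ATM cell at the UNI into the
# fields listed above (bit widths per the UNI cell format).
def parse_atm_header(header):
    assert len(header) == 5, "ATM cell header is 5 octets"
    b0, b1, b2, b3, b4 = header
    return {
        "GFC": b0 >> 4,                                      # 4 bits
        "VPI": ((b0 & 0x0F) << 4) | (b1 >> 4),               # 8 bits at the UNI
        "VCI": ((b1 & 0x0F) << 12) | (b2 << 4) | (b3 >> 4),  # 16 bits
        "PT":  (b3 >> 1) & 0x07,                             # 3 bits
        "CLP": b3 & 0x01,                                    # 1 bit
        "HEC": b4,                                           # CRC-8 over bytes 0-3
    }

# Illustrative cell: VPI 1, VCI 5 (a signalling channel), CLP 0
print(parse_atm_header(bytes([0x00, 0x10, 0x00, 0x50, 0x00])))
```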
Network-to-Network Interface
It is necessary for the switches to know how to send the calls along. There are several techniques that could be adopted, but the most useful one for IP users is called the Private Network-to-Network Interface (PNNI). The PNNI is an interface between switches used to distribute information about the state and structure of the network, to establish circuits so that a reasonable bandwidth and QoS contract can be established, and to provide for some network management functions.
Convergence Sublayer: The functions provided at this layer differ depending on the service provided. It provides bit error correction and may use explicit time stamps to transfer timing information.
Segmentation and Reassembly Sublayer:
At this layer the convergence sublayer protocol data unit is segmented and a header added. The header contains 3 fields: Sequence Number, used to detect cell insertion and cell loss; Sequence Number Protection, used to correct and detect errors that occur in the sequence number; and Convergence Sublayer Indication, used to indicate the presence of the convergence sublayer function.
Future use of biometric technology for security and authentication
Biometric technology is based on samples of the human body, on characteristics which differ from one person to any other person. Using this technology for security and authentication is far better than using any other technology.
Genetic programming
Genetic programming (GP) is an automated methodology inspired by biological evolution to find computer programs that best perform a user-defined task. It is therefore a particular machine learning technique that uses an evolutionary algorithm to optimize a population of computer programs according to a fitness landscape determined by a program's ability to perform a given computational task. The first experiments with GP were reported by Stephen F. Smith (1980) and Nichael L. Cramer (1985), as described in the famous book Genetic Programming: On the Programming of Computers by Means of Natural Selection by John Koza (1992).
Computer programs in GP can be written in a variety of programming languages. In the early (and traditional) implementations of GP, program instructions and data values were organized in tree structures, thus favoring the use of languages that naturally embody such a structure (an important example pioneered by Koza is Lisp). Other forms of GP have been suggested and successfully implemented, such as the simpler linear representation which suits the more traditional imperative languages [see, for example, Banzhaf et al. (1998)]. The commercial GP software Discipulus, for example, uses linear genetic programming combined with machine code language to achieve better performance. By contrast, MicroGP uses an internal representation similar to linear genetic programming to generate programs that fully exploit the syntax of a given assembly language.
GP is very computationally intensive, and so in the 1990s it was mainly used to solve relatively simple problems. However, more recently, thanks to various improvements in GP technology and to the well-known exponential growth in CPU power, GP has started delivering a number of outstanding results. At the time of writing, nearly 40 human-competitive results have been gathered, in areas such as quantum computing, electronic design, game playing, sorting, searching and many more. These results include the replication or infringement of several post-year-2000 inventions, and the production of two patentable new inventions.
Developing a theory for GP has been very difficult, and so in the 1990s genetic programming was considered a sort of pariah amongst the various techniques of search. However, after a series of breakthroughs in the early 2000s, the theory of GP has had a formidable and rapid development. So much so that it has been possible to build exact probabilistic models of GP (schema theories and Markov chain models) and to show that GP is more general than, and in fact includes, genetic algorithms.
Genetic programming techniques have now been applied to evolvable hardware as well as computer programs. Meta-genetic programming is the technique of evolving a genetic programming system using genetic programming itself. Critics have argued that it is theoretically impossible, but more research is needed.
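A minimal tree-based GP sketch in Python: a population of random arithmetic expressions is evolved toward the target function x^2 + x on a few sample points. For brevity it uses mutation-only variation (classical GP also uses subtree crossover); the operator set, population size and rates are all illustrative.

```python
# Minimal tree-based genetic programming: evolve expressions toward x^2 + x.
import random

OPS = {"+": lambda a, b: a + b, "-": lambda a, b: a - b, "*": lambda a, b: a * b}
TERMS = ["x", 1.0, 2.0]          # terminal set: the variable and two constants

def rand_tree(depth=3):
    """Grow a random expression tree up to the given depth."""
    if depth == 0 or random.random() < 0.3:
        return random.choice(TERMS)
    return [random.choice(list(OPS)), rand_tree(depth - 1), rand_tree(depth - 1)]

def evaluate(tree, x):
    if tree == "x":
        return x
    if isinstance(tree, float):
        return tree
    op, left, right = tree
    return OPS[op](evaluate(left, x), evaluate(right, x))

def fitness(tree):
    """Sum of squared errors against the target x^2 + x (lower is better)."""
    return sum((evaluate(tree, x) - (x * x + x)) ** 2 for x in range(-5, 6))

def mutate(tree, depth=3):
    """Replace a randomly chosen subtree with a fresh random one."""
    if not isinstance(tree, list) or random.random() < 0.3:
        return rand_tree(depth)
    new = list(tree)
    i = random.randint(1, 2)
    new[i] = mutate(tree[i], depth - 1)
    return new

random.seed(1)
pop = [rand_tree() for _ in range(200)]
for gen in range(30):
    pop.sort(key=fitness)                 # select the fittest expressions
    pop = pop[:50] + [mutate(random.choice(pop[:50])) for _ in range(150)]
best = min(pop, key=fitness)
print(fitness(best), best)
```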
Inferno (new operating system)
Inferno is answering the current and growing need in the marketplace for distributed computing solutions. Based on more than 20 years of Bell Labs research into operating systems and programming languages, Inferno is poised to propel network computing into the 21st century. Bell Labs will continue to support the evolution of Inferno under a joint development agreement with Vita Nuova. Inferno is an operating system for creating and supporting distributed services. It was originally developed by the Computing Science Research Center of Bell Labs, the R&D arm of Lucent Technologies, and further developed by other groups in Lucent. Inferno was designed specifically as a commercial product, both for licensing in the marketplace and for use within new Lucent offerings. It encapsulates many years of Bell Labs research in operating systems, languages, on-the-fly compilers, graphics, security, networking and portability.
Lightweight Directory Access Protocol
LDAP is actually a simple protocol that is used to access directory services. It is an open, vendor-neutral protocol for accessing directory information such as e-mail addresses and public keys for secure transmission of data. The information contained within an LDAP directory could be ASCII text files, JPEG photographs or sound files. One way to reduce the time taken to search for information is to replicate the directory information over different platforms, so that the process of locating a specific piece of data is streamlined and more resilient to failure of connections and computers. This is what is done with information in an LDAP structure.
LDAP, the Lightweight Directory Access Protocol, is an Internet protocol that runs over TCP/IP and that e-mail programs use to look up contact information from a server. A directory is a specialized database which is optimized for browsing, searching, locating and reading information. Thus LDAP makes it possible to obtain directory information such as e-mail addresses and public keys. LDAP can handle other information, but at present it is typically used to associate names with phone numbers and e-mail addresses.
LDAP is a directory structure and is completely based on entries for each piece of information. An entry is a collection of attributes that has a globally unique Distinguished Name (DN). The information in LDAP is arranged in a hierarchical tree-like structure. LDAP services are implemented by using the client-server architecture. There are options for referencing and accessing information within the LDAP structure. An entry is referenced by its unique distinguished name. Unlike other directory structures, which allow the user access to all the information available, LDAP allows information to be accessed only after authenticating the user. It also supports privacy and integrity security services. There are two daemons for LDAP, which are slapd and slurpd.
THE COMPONENTS OF AN LDAP DOMAIN: A small domain may have a single LDAP server and a few clients. The server commonly runs slapd, which will serve LDAP requests and update data. The client software comprises system libraries that translate normal library calls into LDAP data requests and provide some form of update functionality. Larger domains may have several LDAP slaves (read-only replicas of a master read/write LDAP server). For large installations, the domain may be divided into sub-domains, with referrals to 'glue' the sub-domains together.
THE STRUCTURE OF AN LDAP DOMAIN: A simple LDAP domain is structured on the surface in a manner similar to an NIS domain; there are masters, slaves, and clients. The clients may query masters or slaves for information, but all updates must go to the masters. The 'domain name' under LDAP is slightly different than that under NIS. LDAP domains may use an organization name and country. The clients may or may not authenticate themselves to the server when performing operations, depending on the configuration of the client and the type of information requested. Commonly, access to non-sensitive information (such as port-to-service mappings) will be via unauthenticated requests, while password information requests or any updates are authenticated. Larger organizations may subdivide their LDAP domain into sub-domains. LDAP allows for this type of scalability, and uses 'referrals' to allow the passing off of clients from one server to the next (the same method is used by slave servers to pass modification requests to the master).
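A hedged sketch of the kind of address-book lookup described above, using the third-party ldap3 package for Python; the host, base DN, filter and credentials are hypothetical placeholders.

```python
# Authenticated LDAP search sketch with the ldap3 package (pip install ldap3).
from ldap3 import Server, Connection, ALL

server = Server("ldap.example.com", get_info=ALL)        # hypothetical server
conn = Connection(server,
                  user="cn=reader,dc=example,dc=com",    # authenticated bind,
                  password="secret",                     # as the text describes
                  auto_bind=True)

# Look up a person entry and read mail/phone attributes: the classic
# name-to-address lookup LDAP is typically used for.
conn.search(search_base="dc=example,dc=com",
            search_filter="(cn=Alice*)",
            attributes=["cn", "mail", "telephoneNumber"])

for entry in conn.entries:
    print(entry.entry_dn, entry.mail)
conn.unbind()
```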
Mesotechnology
Mesotechnology describes a budding research field which could replace nanotechnology in the future as the primary means to control matter at length scales ranging from a cluster of atoms to microscopic elements. The prefix meso- comes from the Greek word mesos, meaning middle; hence the technology spans a range of length scales, as opposed to nanotechnology, which is concerned only with the smallest atomic scales.
Although the term itself is still quite new, the general concept is not. Many fields of science have traditionally focused either on single discrete elements or on large statistical collections, where many theories have been successfully applied. In the field of physics, for example, Quantum Mechanics describes very well phenomena on the atomic to nanoscale, while classical Newtonian Mechanics describes the behavior of objects on the microscale and up. However, the length scale in the middle (the mesoscale) is not well described by either theory. Similarly, psychologists focus heavily on the behavior and mental processes of the individual while sociologists study the behavior of large societal groups, but what happens when only 3 people are interacting? This is the mesoscale.
PLAN 9 Operating System
By the mid 1980s, the trend in computing was away from large centralized time-shared computers towards networks of smaller, personal machines, typically UNIX 'workstations'. People had grown weary of overloaded, bureaucratic timesharing machines and were eager to move to small, self-maintained systems, even if that meant a net loss in computing power. As microcomputers became faster, even that loss was recovered, and this style of computing remains popular today.
Plan 9 began in the late 1980s as an attempt to have it both ways: to build a system that was centrally administered and cost-effective using cheap modern microcomputers as its computing elements. The idea was to build a time-sharing system out of workstations, but in a novel way. Different computers would handle different tasks: small, cheap machines in people's offices would serve as terminals providing access to large, central, shared resources such as computing servers and file servers. For the central machines, the coming wave of shared-memory multiprocessors seemed obvious candidates.
Plan 9 is designed around the basic principle that all resources appear as files in a hierarchical file system, which is unique to each process. As for the design of any operating system, various things such as the design of the file and directory system implementation and the various interfaces are important. Plan 9 has all these well-designed features. All these help to provide a strong base for the operating system that could be well suited to a distributed and networked environment.
The different features of the Plan 9 operating system are:
The dump file system makes a daily snapshot of the file store available to the users.
The Unicode character set is supported throughout the system.
Advanced kernel synchronization facilities for parallel processing.
Security: there is no super-user or root user, and passwords are never sent over the network.
SALT (Speech Application Language Tags)
SALT stands for Speech Application Language Tags. It consists of a small set of XML elements, with associated attributes and DOM object properties, events and methods, which apply a speech interface to web pages. SALT allows applications to be run on a wide variety of devices and also through different methods for inputting data.
The main design principles of SALT include reuse of the existing standards for grammar and speech output, and also separation of the speech interface from business logic and data. SALT is designed to run inside different Web execution environments, so SALT does not have any predefined execution model; it uses an event-wiring model instead.
It contains a set of tags for inputting data as well as storing and manipulating that data. The main elements of a SALT document are the tags for speech output, speech input and DTMF input. Using these elements we can specify grammars for inputting data, inspect the results of recognition, copy those results properly, and provide the output the application needs. The architecture of SALT mainly contains four components.
The SAT (SIM Application Toolkit)
The SAT (SIM Application Toolkit) provides a flexible interface through which developers can build services and MMI (Man Machine Interface) in order to enhance the functionality of the mobile. This module is not designed for service developers, but for network engineers who require a grounding in the concepts of the SAT and how it may impact network architecture and performance. It explores the basic SAT interface along with the architecture required in order to deliver effective SAT-based services to the handset.
Wireless Application Protocol
The Wireless Application Protocol (WAP) is a result of the WAP Forum's effort to promote industry-wide specifications for technology useful in developing applications and services that operate over wireless communication networks. WAP specifies an application framework and network protocols for wireless devices such as mobile telephones, pagers, and personal digital assistants (PDAs). The specifications extend and leverage mobile networking technologies (such as digital data networking standards) and Internet technologies (such as XML, URLs, scripting, and various content formats). The effort is aimed at enabling operators, manufacturers, and content developers to meet the challenges in building advanced differentiated services and implementations in a fast and flexible manner.
The objectives of the WAP Forum are: to bring Internet content and advanced data services to digital cellular phones and other wireless terminals; to create a global wireless protocol specification that will work across differing wireless network technologies; to enable the creation of content and applications that scale across a very wide range of bearer networks and device types; and to embrace and extend existing standards and technology wherever appropriate.
The WAP Architecture specification is intended to present the system and protocol architectures essential to achieving the objectives of the WAP Forum.
WAP is positioned at the convergence of two rapidly evolving network technologies, wireless data and the Internet. Both the wireless data market and the Internet are growing very quickly and are continuously reaching new customers. The explosive growth of the Internet has fuelled the creation of new and exciting information services.
Most of the technology developed for the Internet has been designed for desktop and larger computers and medium to high bandwidth, generally reliable data networks. Mass-market, hand-held wireless devices present a more constrained computing environment compared to desktop computers. Because of fundamental limitations of power and form factor, mass-market handheld devices tend to have: less powerful CPUs, less memory (ROM and RAM), restricted power consumption, smaller displays, and different input devices (e.g. a phone keypad). Similarly, wireless data networks present a more constrained communication environment compared to wired networks. Because of fundamental limitations of power, available spectrum, and mobility, wireless data networks tend to have: less bandwidth, more latency, less connection stability, and less predictable availability.
Mobile networks are growing in complexity, and the cost of all aspects of provisioning more value-added services is increasing. In order to meet the requirements of mobile network operators, solutions must be:
Interoperable - terminals from different manufacturers communicate with services in the mobile network;
Scalable - mobile network operators are able to scale services to customer needs;
Efficient - provides quality of service suited to the behaviour and characteristics of the mobile network;
Reliable - provides a consistent and predictable platform for deploying services; and
Secure - enables services to be extended over potentially unprotected mobile networks while still preserving the integrity of user data, and protects the devices and services from security problems such as denial of service.
The WAP specifications address mobile network characteristics and operator needs by adapting existing network technology to the special requirements of mass-market, hand-held wireless data devices and by introducing new technology where appropriate.
The requirements of the WAP Forum architecture are to:
Leverage existing standards where possible;
Define a layered, scalable and extensible architecture;
Support as many wireless networks as possible;
Optimise for narrow-band bearers with potentially high latency;
Optimise for efficient use of device resources (low memory / CPU usage / power consumption);
Provide support for secure applications and communications;
Enable the creation of Man Machine Interfaces (MMIs) with maximum flexibility and vendor control;
Provide access to local handset functionality, such as logical indication of an incoming call;
Facilitate network-operator and third-party service provisioning;
Support multi-vendor interoperability by defining the optional and mandatory components of the specification.
UMA (Unlicensed Mobile Access)
UMA (Unlicensed Mobile Access) is an industry collaboration to extend GSM and GPRS services into customer sites by utilizing unlicensed radio technologies such as Wi-Fi (Wireless Fidelity) and Bluetooth®. This is achieved by tunnelling GSM and GPRS protocols through a broadband IP network towards the Access Point situated in the customer site and across the unlicensed radio link to the mobile device. Thus UMA provides an additional access network to the existing GERAN (GSM EDGE Radio Access Network) and UTRAN (UMTS Terrestrial Radio Access Network).
WDDX
WDDX (Web Distributed Data eXchange) is a programming-language-neutral data interchange mechanism to pass data between different environments and different computers. It supports simple data types such as number, string, boolean, etc., and complex aggregates of these in forms such as structures and arrays. There are WDDX interfaces for a wide variety of languages. The data is encoded into XML using an XML 1.0 DTD, producing a platform-independent but relatively bulky representation. The XML-encoded data can then be sent to another computer using HTTP, FTP, or another transmission mechanism. The receiving computer must have WDDX-aware software to translate the encoded data into the receiver's native data representation. The WDDX protocol was developed in connection with the ColdFusion server environment. Python, PHP, Java, C++, .NET, Lisp and Haskell, among other languages and platforms, support it very well.
New Age Graphics
Real Time Speech Translation
3D Internet
New Dimension of Data Security using Neural Networks and Numerical Functions
HomeRF- localized wireless technology optimized for the home environment
NVSRAM- Non Volatile Static RAM
Fusion Memory
Earth Simulator- Fastest Supercomputer
Graphics Processing Unit
Open-Rar
High Altitude Aeronautical Platforms
Aspect-oriented programming (AOP)
Intel MMX Technology
Voice Over Internet Protocol
Internet Searching
Wireless Technologies (Bluetooth, 802.11x, IrDA)
Tracking and Positioning of Mobiles in Telecommunication
DNA Based computer
ATM Virtual connections
Botnet Security Threats
VPN Server
Advanced Mobile Presence Technology
Power of Grid Computing
Embedded web server for remote access
Bio-metrics
Magnetic Random Access Memory
Intrusion Detection System
Multiterabit Networks
Printed Memory Technology
High Capacity Flash Chips
Self Healing Computers
Mind Reading Phones
Blade Servers
Near Field Communication (NFC)
UMA (Unlicensed Mobile Access)
Assisted GPS
Diskless Network storage Controller
Digital Hubbub
HCI (Human Computer Interaction ) in software applications
Embedded systems
InfiniBand
The SAT (SIM Application Toolkit)
3D Object Extraction Using GIS Database
Page Stealer Process
3D Printers
Web Services in Gridcomputing
Qubit PC
CD Based Firewall
Decision diagrams in VLSI CAD
Bandwidth Aggregator
Atomic CPU
Fluorescent Multilayer Optical Data Storage
Email-Service & Webhosting
Virtual Integration
SMART Programming
Object Relational Mapping
Aspect Oriented Programming
Steganography and digital watermarking
Verifying Infinite State Systems
New Generation Of Chips
Precision Image Search
Evolution of Bluetooth
Nanocrystal Memory Devices
Ultra Wideband Networking
3D Searching
Biological Computers
Rover Technology
Self Defending Networks
Computer Intelligence Application
Digital Rights Management
Digital Scent Technology
Distributed Interactive Virtual Environment
Wireless LAN Security
Chameleon Chip
Intelligent RAM
iSCSI
Linux Kernel 2.6
Mesh Radio
Linux Virtual Server
Smart Client Application Development using .NET
Spawning Networks
StrataFlash Memory
Swarm Intelligence
The Callpaper Concept
IP spoofing
Internet Access via Cable TV Network
Face Recognition Technology
VoiceXML
Wireless USB
Cisco IOS Firewall
Socket Programming
Ubiquitous Networking
Touch Screens
Tempest and Echelon
Synthetic Aperture Radar System
Unlicensed Mobile Access
Light emitting polymers
Sensors on 3D Digitization
Robotic Surgery
Quantum Information Technology
Gaming Consoles
MiniDisc system
Code Division Duplexing
Cluster Computing
Firewalls
DVD Technology
Night Vision Technology
Parasitic Computing
RDRAM
Data Security in Local Network using Distributed Firewalls
Computerized Paper Evaluation using Neural Network
Bluetooth Based Smart Sensor Networks
Laser Communications
Implementation Of Zoom FFT
Image Processing
Optical Networking and Dense Wavelength Division Multiplexing
Optical Burst Switching
Cyberterrorism
IPv6 - The Next Generation Protocol
Space Mouse
Hyper Transport Technology
Aeronautical Communication
Blu-ray Disc
64-Bit Computing
Bio-Molecular Computing
Studying in a "Virtual University"
AppleTalk
Combinatorial Optimization
Quantum Software And Quantum Computer Development
Metadata application profile
XML Query Languages
AMD Processors
Digital Video Encoding Formats
3-D Assembly Of Magnetic And Semiconducting Nanoparticles
Service oriented Architectures
Enterprise Service Bus
Phase Change Memory Technology
Object Oriented Design using Verilog HDL
WiBro
Zero Knowledge proofs
3-D Chip Stacking Technique
Integrating Structural Design and Formal Methods in RealTime System Design
Glass Glue
The Interactive Classroom
Embedded Computing
Wireless Internet
Quadrics Interconnection Networks
Home Automation using Handspring PDA