UNIT 4 Material
Mobile Computing
R13
IV B.Tech. - I Semester
Mobile Transport Layer: Conventional TCP/IP Protocols, Indirect TCP, Snooping TCP,
Mobile TCP, Other Transport Layer Protocols for Mobile Networks.
Database Issues: Database Hoarding & Caching Techniques, Client-Server Computing &
Adaptation, Transactional Models, Query processing, Data Recovery Process & QoS Issues.
- Different protocols
- Different direct and indirect protocols
- Database Issues
- Different hoarding and caching techniques
- Client Server computing
- Different transactional models
1.1.3.1. Lecture-1:
The sender notices the missing acknowledgement for the lost packet and assumes a packet loss due to
congestion. Retransmitting the missing packet and continuing at full sending rate would now be
unwise, as this might only increase the congestion. To mitigate congestion, TCP slows down the
transmission rate dramatically. All other TCP connections experiencing the same congestion do
exactly the same, so the congestion is soon resolved.
Unit 4 material of Mobile Computing by Dr. N. Sharmili
Slow start
TCP’s reaction to a missing acknowledgement is quite drastic, but it is necessary to get rid of congestion
quickly. The behavior TCP shows after the detection of congestion is called slow start. The sender always
calculates a congestion window for a receiver. The start size of the congestion window is one segment
(TCP packet). The sender sends one packet and waits for acknowledgement. If this acknowledgement
arrives, the sender increases the congestion window by one, now sending two packets (congestion
window = 2). This scheme doubles the congestion window every time the acknowledgements come
back, which takes one round trip time (RTT). This is called the exponential growth of the congestion
window in the slow start mechanism.
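The doubling behaviour described above can be sketched in a few lines. This is a simplified illustration of the slow-start phase only (no loss detection or congestion-avoidance threshold), and the function name is illustrative:

```python
def slow_start(rtts, initial_cwnd=1):
    """Return the congestion window size (in segments) after each RTT.

    Each acknowledged segment increases the window by one segment, so a
    full window of acknowledgements doubles the window once per round
    trip -- the exponential growth of the slow start mechanism.
    """
    cwnd = initial_cwnd
    history = []
    for _ in range(rtts):
        history.append(cwnd)
        cwnd *= 2  # all ACKs for this window arrived: double the window
    return history

print(slow_start(5))  # [1, 2, 4, 8, 16]
```

In real TCP, this growth stops at the slow-start threshold or on a detected loss, after which the sender increases the window far more cautiously.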
The database architecture shown below is for two-tier or multi-tier databases. Here, the databases
reside at the remote servers and the copies of these databases are cached at the client tiers. This is
known as client-server computing architecture.
A cache is a list or database of items or records stored at the device. Databases are hoarded at the
application or enterprise tier, where the database server uses business logic and connectivity for
retrieving the data and then transmitting it to the device. The server provides and updates local copies
of the database at each mobile device connected to it. The computing API at the mobile device (first
tier) uses the cached local copy. At first tier (tier 1), the API uses the cached data records using the
computing architecture as explained above. From tier 2 or tier 3, the server retrieves and transmits the
data records to tier 1 using business logic and synchronizes the local copies at the device. These local
copies function as device caches.
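The tiered scheme above can be sketched as follows: the server (tier 2 or 3) hoards a local copy at each connected device (tier 1) and synchronizes it on every update, while the device API reads only its cache. All class and record names here are illustrative, not part of any real framework:

```python
class Server:
    """Tier 2/3 database server that hoards copies at connected devices."""

    def __init__(self, database):
        self.database = dict(database)
        self.devices = []

    def connect(self, device):
        self.devices.append(device)
        device.cache = dict(self.database)   # hoard a full local copy

    def update(self, key, value):
        self.database[key] = value
        for device in self.devices:          # synchronize local copies
            device.cache[key] = value


class MobileDevice:
    """Tier 1 device whose API reads only the cached local copy."""

    def __init__(self):
        self.cache = {}

    def read(self, key):
        # No round trip to the server -- the hoarded copy is used directly.
        return self.cache.get(key)


server = Server({"price": 100})
phone = MobileDevice()
server.connect(phone)
server.update("price", 120)
print(phone.read("price"))  # 120
```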
The advantage of hoarding is that there is no access latency (delay in retrieving the queried record from
the server over wireless mobile networks). The client device API has instantaneous data access to
hoarded or cached data. After a device caches the data distributed by the server, the data is hoarded at
the device. The disadvantage of hoarding is that the consistency of the cached data with the database at
the server needs to be maintained.
Data Caching
Hoarded copies of the databases at the servers are distributed or transmitted to the mobile devices
from the enterprise servers or application databases. The copies cached at the devices are equivalent to
the cache memories at the processors in a multiprocessor system with a shared main memory and
copies of the main memory data stored at different locations.
Cache Access Protocols: A client device caches the pushed (disseminated) data records from a server.
Caching of the pushed data leads to a reduced access interval as compared to the pull (on-demand)
mode of data fetching. Caching of data records can be based on pushed 'hot records' (the most
frequently needed database records at the client device). Alternatively, caching can be based on the
ratio of two parameters: the access probability (at the device) and the pushing rate (from the server)
for each record. This method is called cost-based data replacement or caching.
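A minimal sketch of this cost-based idea, under the assumption that the retention value of a record is its access probability divided by its pushing rate (a record that is pushed often is cheap to re-acquire, so it is the better eviction candidate). The record names and numbers are made up for illustration:

```python
def evict_candidate(records):
    """records: {name: (access_probability, push_rate)} -> name to evict.

    The record with the lowest access_probability / push_rate ratio has
    the least retention value and is evicted first when the cache is full.
    """
    return min(records, key=lambda name: records[name][0] / records[name][1])


cache = {
    "stock_quote": (0.9, 0.5),  # hot record, rarely pushed -> keep
    "weather":     (0.2, 0.8),  # rarely read, pushed often -> evict first
    "news":        (0.5, 0.5),
}
print(evict_candidate(cache))  # weather
```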
Pre-fetching: Pre-fetching is an alternative to caching of disseminated data. Pre-fetching entails
requesting and pulling records that may be required later. Instead of merely caching whatever is
pushed, the client device can pre-fetch records from the pushed data, keeping future needs in view.
Pre-fetching reduces server load and can also reduce the cost of cache misses. The term 'cost of
cache misses' refers to the time taken to access a record at the server when that record is not found
in the device database at the moment the device API requires it.
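The pre-fetching trade-off can be expressed as a simple expected-cost comparison: pre-fetch a record now if the expected cost of a later cache miss (access probability times the miss penalty) exceeds the cost of pulling the record up front. The function, costs, and threshold below are illustrative assumptions, not a prescribed algorithm:

```python
def should_prefetch(access_probability, miss_cost_ms, prefetch_cost_ms):
    """Decide whether pre-fetching a record is worthwhile.

    access_probability: chance the device API will need the record later.
    miss_cost_ms: time to fetch it from the server on a cache miss.
    prefetch_cost_ms: time to pull it now, before it is needed.
    """
    expected_miss_cost = access_probability * miss_cost_ms
    return expected_miss_cost > prefetch_cost_ms


print(should_prefetch(0.8, 500, 200))  # True: likely needed, misses costly
print(should_prefetch(0.1, 500, 200))  # False: rarely needed
```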
A cache consists of several records. Each record is called a cache-line, copies of which can be stored at
other devices or servers. At any given instant, a cached record at a mobile device or in a server
database can be assigned one of four possible tags indicating its state: modified (after rewriting),
exclusive, shared, and invalidated (after expiry or when new data becomes available). These four
states are indicated by the letters M, E, S, and I, respectively (MESI). The states indicated by the
various tags are as follows:
a) The E tag indicates the exclusive state, which means that the data record is for internal use and
cannot be used by any other device or server.
b) The S tag indicates the shared state, which means that the data record can be used by others.
c) The M tag indicates the modified state, which means that the copy in the device cache has been
rewritten and no longer matches the copy at the server.
d) The I tag indicates the invalidated state, which means that the cached copy is no longer valid,
either because it has expired or because new data has become available; the copy which was shared
and used for computations earlier can no longer be used.
The following figure shows the four possible states of a data record i at any instant in the server
database and its copy at the cache of the mobile device j.
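The four tags can be sketched as a small state machine. The transitions below are a simplified illustration of the states named in the text, not a complete cache-coherence protocol; the function names are illustrative:

```python
from enum import Enum


class CacheState(Enum):
    MODIFIED = "M"   # rewritten locally; differs from the server copy
    EXCLUSIVE = "E"  # for internal use only; no other device may use it
    SHARED = "S"     # may be used by other devices or servers
    INVALID = "I"    # expired, or superseded by newer data


def on_local_write(state):
    # Writing a cached record marks the local copy as modified.
    return CacheState.MODIFIED


def on_remote_update(state):
    # When the server distributes newer data, stale copies are invalidated.
    return CacheState.INVALID


state = CacheState.SHARED
state = on_local_write(state)
print(state.value)  # M
```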
ACID Rules
Atomicity: All operations of a transaction must complete; if a transaction cannot be completed, it
must be undone (rolled back). The operations in a transaction are treated as one indivisible
(atomic) unit.
Consistency: A transaction must preserve the integrity constraints and follow the declared
consistency rules of the database. Consistency means that the data is not left in a contradictory
state after the transaction, i.e., data validity is guaranteed even in the event of errors or power failures.
Isolation: If two transactions are carried out simultaneously, there should not be any interference
between the two (i.e., multiple transactions can occur at the same time without impacting each
other's execution). Further, any intermediate results of a transaction should be invisible to any
other transaction.