Assignment No: 4
DBMS
PART-A
Question 1: Why is there a need for concurrency control protocols when we already have the serializability concept? Ans: With concurrent execution, waiting time can be reduced: a short transaction can be executed alongside a long one instead of waiting for it to finish, which is not possible with purely serial execution. Concurrent execution also increases the number of transactions completed per unit of time, that is, it increases the throughput of the system. Serializability only defines which schedules are correct; concurrency control protocols are what ensure, during execution, that the interleaved schedule actually remains serializable. This is why concurrency control protocols are needed in addition to the serializability concept.
Question 2: The Thomas write rule modifies the timestamp-ordering protocol. Do you agree? Justify your answer. Answer: Yes, it modifies the timestamp-ordering protocol. When Ti attempts to write data item Q and TS(Ti) < W-timestamp(Q), Ti is attempting to write an obsolete value of Q. Hence, rather than rolling back Ti as the basic timestamp-ordering protocol would have done, this write operation can simply be ignored. In all other cases the protocol behaves exactly like the timestamp-ordering protocol. For example, suppose Ti starts before Tj, so TS(Ti) < TS(Tj), and Tj has already executed write(Q). When Ti later executes write(Q), we find TS(Ti) < W-timestamp(Q): under basic timestamp ordering the write would be rejected and Ti rolled back, but under the Thomas write rule the obsolete write is ignored and Ti continues. Any transaction Tk with TS(Tk) < TS(Tj) that attempts to read Q will still be rolled back, and any transaction Tl with TS(Tl) > TS(Tj) must read the value of Q written by Tj rather than the value Ti attempted to write, so correctness is preserved.
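A minimal sketch of the write check under the Thomas write rule, assuming a simple in-memory data item that tracks read and write timestamps; the class and function names here are illustrative, not from any particular DBMS.

# Sketch of the Thomas write rule check (illustrative names only).
class DataItem:
    def __init__(self, value):
        self.value = value
        self.read_ts = 0    # largest timestamp of any transaction that read the item
        self.write_ts = 0   # largest timestamp of any transaction that wrote the item

def thomas_write(ts_ti, item, new_value):
    """Attempt write(Q) by transaction Ti with timestamp ts_ti."""
    if ts_ti < item.read_ts:
        # A younger transaction has already read Q; the value Ti would
        # produce was needed earlier, so Ti must be rolled back.
        return "rollback"
    if ts_ti < item.write_ts:
        # A younger transaction has already written Q, so Ti's write is
        # obsolete. Basic timestamp ordering would roll Ti back here;
        # the Thomas write rule simply ignores the write.
        return "ignored"
    # Otherwise the write is applied as in basic timestamp ordering.
    item.value = new_value
    item.write_ts = ts_ti
    return "written"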
Question 3: Why do we term the validation-based protocol an optimistic protocol? Explain the protocol with concurrent transactions. Answer: It is called optimistic concurrency control since each transaction executes fully in the hope that all will go well during validation.
Execution of transaction Ti is done in three phases:
1. Read and execution phase: Transaction Ti reads data items and writes only to temporary local variables.
2. Validation phase: Transaction Ti performs a "validation test" to determine whether the local variables can be written to the database without violating serializability.
3. Write phase: If Ti is validated, the updates are applied to the database; otherwise, Ti is rolled back.
The three phases of concurrently executing transactions can be interleaved, but each transaction must go through the three phases in that order. This protocol is useful and gives a greater degree of concurrency when the probability of conflicts is low, because the serializability order is not pre-decided and relatively few transactions will have to be rolled back. An example schedule of two concurrent transactions is shown below.
T1                        T2
read(a)
                          read(a)
                          a := a + 455
                          read(b)
                          b := b - 590
read(b)
(validate)
display(a+b)
                          (validate)
                          write(a)
                          write(b)
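A minimal sketch of the validation test used in this protocol, assuming each transaction records its read set, write set, and start/validation/finish times; all names here are illustrative.

# Sketch of the validation test for the optimistic protocol.
class Txn:
    def __init__(self, name):
        self.name = name
        self.read_set = set()
        self.write_set = set()
        self.start = None       # time the read phase began
        self.validation = None  # time the validation test ran
        self.finish = None      # time the write phase completed

def validate(ti, earlier_txns):
    """Return True if Ti may enter its write phase."""
    for tj in earlier_txns:                      # all Tj with TS(Tj) < TS(Ti)
        if tj.finish is not None and tj.finish < ti.start:
            continue                             # Tj finished before Ti started
        if (tj.finish is not None
                and tj.finish < ti.validation
                and not (tj.write_set & ti.read_set)):
            continue                             # Tj's writes do not touch Ti's reads
        return False                             # otherwise Ti must be rolled back
    return True

In the schedule above, T1 validates before T2 has written anything to the database, so both transactions pass the test and no rollback is needed.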
PART-B
Question 4: Do we need recovery in a database management system? Justify your answer with the techniques you would use.
Answer: Recovery is an essential part of a database management system, without which the safety of the data cannot be assured. So yes, we need recovery in our database. We have lots of valuable data stored; if there were no recovery mechanism, a failure could leave the database in an inconsistent state and we could lose our valuable data. We generally use two techniques for this purpose:
1. Log-based recovery
2. Shadow paging
Log-based recovery: in this technique we maintain a log of every transaction, with the help of which we can recover the database after a failure. It is performed in two ways:
a) Deferred log-based recovery: the log is maintained, and only after the transaction is partially committed are the logged updates written to the database; if a failure occurs before that point, the database is left unaltered.
b) Immediate log-based recovery: the log is maintained and updates are written to the database immediately, but the log also records the old value of each data item; if a failure occurs, the old values are used to recover.
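A minimal sketch of recovery from an immediate-modification log, assuming log records of the form ("start", txn), ("write", txn, item, old_value, new_value) and ("commit", txn), with the database held as a simple dictionary; all names are illustrative.

# Sketch of redo/undo recovery over an immediate-modification log.
def recover(log, db):
    committed = {rec[1] for rec in log if rec[0] == "commit"}
    started = {rec[1] for rec in log if rec[0] == "start"}

    # Redo pass: reapply the new values of committed transactions, in log order.
    for rec in log:
        if rec[0] == "write" and rec[1] in committed:
            _, txn, item, old, new = rec
            db[item] = new

    # Undo pass: restore the old values of transactions that never committed,
    # scanning the log backwards.
    for rec in reversed(log):
        if rec[0] == "write" and rec[1] in started - committed:
            _, txn, item, old, new = rec
            db[item] = old
    return db

# Example: T1 committed, T2 did not, so A is redone and B is undone.
# log = [("start", "T1"), ("write", "T1", "A", 100, 50), ("commit", "T1"),
#        ("start", "T2"), ("write", "T2", "B", 200, 300)]
# recover(log, {"A": 100, "B": 200})  ->  {"A": 50, "B": 200}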
Shadow paging: This recovery scheme does not require the use of a log.
This scheme is very similar to the paging schemes used by the operating system for memory management. The idea is to maintain two page tables during the life of a transaction: the current page table and the shadow page table. When the transaction starts, both tables are identical. The shadow page table is never changed during the life of the transaction. The current page table is updated with each write operation. Each table entry points to a page on the disk. When the transaction is committed, the shadow page table entry becomes a copy of the current page table entry and the disk block with the old data is released. If the shadow page table is stored in nonvolatile memory and a system crash occurs, then the shadow page table is copied to the current page table. This guarantees that the shadow page table will point to the database pages corresponding to the state of the database prior to any transaction that was active at the time of the crash, making aborts automatic.
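A minimal sketch of the copy-on-write behaviour of shadow paging, assuming the disk is modelled as a dictionary of page number to data; the class and method names are illustrative only.

# Sketch of shadow paging with a current and a shadow page table.
class ShadowPagingTxn:
    def __init__(self, shadow_table, disk):
        self.disk = disk                       # page_number -> data block
        self.shadow = shadow_table             # never changed during the transaction
        self.current = dict(shadow_table)      # starts identical to the shadow table

    def write(self, page_no, data):
        # Copy-on-write: the new data goes to a fresh disk page, and only
        # the current page table is redirected to it.
        new_page = max(self.disk, default=0) + 1
        self.disk[new_page] = data
        self.current[page_no] = new_page

    def commit(self):
        # On commit the current table becomes the new shadow table
        # (written atomically to nonvolatile storage in a real system).
        return self.current

    def abort(self):
        # On failure the unchanged shadow table still describes the old state,
        # so aborting costs nothing.
        return self.shadow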
Question 5: Assume that the railway reservation system is implemented using an RDBMS. What concurrency control measures does one have to take in order to avoid concurrency-related problems in this system? How can deadlock be avoided in this system? Answer: We have various concurrency control protocols; we can use the multiple-granularity locking protocol to avoid inconsistency in the railway reservation system. When a seat is being booked, locks are applied at different levels of granularity (for example database, train, coach, seat). The lock prevents another user from booking a seat that is already booked. To avoid deadlock, the simplest way is for each transaction to lock the data it needs before it executes and to request its locks in a fixed order, so that a cyclic wait can never arise. By this we can avoid deadlock in the system; a sketch of this lock-ordering idea is given below.
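A minimal sketch of deadlock avoidance by acquiring seat locks in a fixed global order, assuming one exclusive lock per seat; the seat identifiers and function names are illustrative only.

# Sketch of lock ordering to prevent cyclic waits when booking seats.
import threading

seat_locks = {seat: threading.Lock() for seat in ["S1", "S2", "S3"]}
booked = set()

def book_seats(passenger, seats):
    # Acquiring locks in a single global order (sorted seat ids) means no
    # two booking transactions can ever wait on each other in a cycle.
    ordered = sorted(seats)
    acquired = []
    try:
        for seat in ordered:
            seat_locks[seat].acquire()
            acquired.append(seat)
        if any(seat in booked for seat in ordered):
            return False                      # some seat is already booked
        booked.update(ordered)                # book all requested seats
        return True
    finally:
        for seat in reversed(acquired):
            seat_locks[seat].release()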
Question 6: Shadow paging uses the concept of the paging scheme (in operating systems). Do you agree? Justify your answer. Answer: In an operating system, paging divides memory into fixed-size units, and this is where the shadow paging mechanism is inherited from. The operating system divides memory into:
a) Pages (logical memory)
b) Frames (physical memory)
Shadow paging also divides the database into fixed-size blocks called pages, and it maintains two page tables:
a) Current page table
b) Shadow page table
By this we can very well say that shadow paging uses the concept of the paging scheme from operating systems.