Describe The Relationship Between Data Security and Data Integrity, With The Help of A Diagram?

This document provides information about database management systems and concepts like data integrity, indexing, transactions, and concurrency control. It contains 10 questions with detailed explanations about the relationships between data security and integrity, different types of integrity constraints, indexes and their utility, strong and weak entities, problems with concurrent transactions, and more. The roles and components of a database manager are also explained with diagrams. Key concepts covered include locking protocols like two-phase locking and two-phase commit protocols.

Uploaded by

Sayan Das

🔥 MCS-023 🔥
Database Management System ✔
✔ @Bca studies
👍 100% guaranteed 👍
● 🔥 Most Important Questions 🔥
1. Describe the relationship between Data Security and Data Integrity, with the help of a
diagram?
Sol. Relationship between Security and Integrity: Database security usually refers to controlling access to data, while database integrity refers to avoiding accidental loss of consistency. In practice, however, the dividing line between security and integrity is not always clear. The figure shows the relationship between data security and data integrity.

2. What are integrity constraints ? Why are they required in databases ? Briefly discuss
the different types of integrity constraints?
Sol. Integrity Constraints
● Integrity constraints are a set of rules. It is used to maintain the quality of information.
● Integrity constraints ensure that the data insertion, updating, and other processes have to
be performed in such a way that data integrity is not affected.
● Thus, integrity constraints are used to guard against accidental damage to the database.
Types of Integrity Constraint :-
1. Domain constraints:-
● Domain constraints can be defined as the definition of a valid set of values for an
attribute.
● The data type of domain includes string, character, integer, time, date, currency,
etc. The value of the attribute must be available in the corresponding domain.
2. Entity integrity Constraints:-

● The entity integrity constraint states that a primary key value can't be null.
● This is because the primary key value is used to identify individual rows in a
relation, and if the primary key had a null value, we couldn't identify those rows.
● Fields other than the primary key may contain null values.
3. Referential Integrity Constraints
● A referential integrity constraint is specified between two tables.
● In the Referential integrity constraints, if a foreign key in Table 1 refers to the Primary Key
of Table 2, then every value of the Foreign Key in Table 1 must be null or be available in
Table 2.
4. Key constraints
● A key is an attribute (or set of attributes) used to uniquely identify an entity within its entity set.
● An entity set can have multiple keys, out of which one is chosen as the primary key. A
primary key value must be unique and can never be null in the relational table.
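The constraint types above can be tried out in code. Below is a minimal sketch using Python's built-in sqlite3 module (the table and column names are invented for illustration): CHECK enforces a domain constraint, PRIMARY KEY enforces entity integrity and the key constraint, and REFERENCES enforces referential integrity.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("PRAGMA foreign_keys = ON")  # SQLite enforces FKs only when this is on

# Entity integrity + key constraint: PRIMARY KEY must be unique and non-null.
conn.execute("CREATE TABLE department (dept_id INTEGER PRIMARY KEY, dname TEXT NOT NULL UNIQUE)")

# Domain constraint via CHECK, referential integrity via REFERENCES.
conn.execute("""CREATE TABLE student (
    roll_no INTEGER PRIMARY KEY,
    age     INTEGER CHECK (age BETWEEN 17 AND 60),
    dept_id INTEGER REFERENCES department(dept_id))""")

conn.execute("INSERT INTO department VALUES (1, 'CSE')")
conn.execute("INSERT INTO student VALUES (101, 20, 1)")   # satisfies all constraints

violations = []
for row in [(102, 5, 1),      # domain violation: age outside 17..60
            (103, 20, 99),    # referential violation: no department 99
            (101, 21, 1)]:    # entity/key violation: duplicate primary key
    try:
        conn.execute("INSERT INTO student VALUES (?, ?, ?)", row)
    except sqlite3.IntegrityError:
        violations.append(row)

print(len(violations))  # all three bad rows are rejected
```

Each rejected insert raises `sqlite3.IntegrityError`, which is exactly the "guard against accidental damage" role described above.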
3. What are Indexes in DBMS ? What is the utility of Indexes in DBMS ? Under what
situations are B-tree Indexes preferable over Binary Search Tree Indexes ?
Sol. Indexing in DBMS:-
● Indexing is used to optimize the performance of a database by minimizing the number of
disk accesses required when a query is processed.
● An index is a type of data structure. It is used to locate and access the data in a
database table quickly.
● B-tree indexes are preferable over binary search tree indexes when the index resides on
disk: each B-tree node holds many keys (high fan-out), so the tree stays shallow and a
search needs only a few disk accesses, whereas a binary search tree holds one key per
node, grows much deeper, and may also become unbalanced.
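As a toy illustration of why an index speeds up lookups, the sketch below (hypothetical data) keeps a separate key-to-position map so a query touches one row instead of scanning the whole table. A real DBMS index is a disk-based structure such as a B-tree, not an in-memory dict.

```python
# toy "table" stored as a list of rows
rows = [(101, "Alice"), (205, "Bob"), (309, "Carol"), (412, "Dave")]

# the index maps a key value to the row's position, avoiding a full scan
index = {key: pos for pos, (key, _) in enumerate(rows)}

def lookup(key):
    # one index probe + one row fetch instead of scanning every row
    return rows[index[key]] if key in index else None

print(lookup(309))  # (309, 'Carol')
```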

4. What is the difference between strong and weak entities ? Specify strong and weak
entities in the above E-R diagram?
Sol. A strong entity has a primary key of its own and exists independently of other entities. A weak entity does not have a sufficient key of its own; it depends on an identifying (owner) strong entity and is identified by its partial key combined with the owner's primary key. In an E-R diagram, a weak entity is drawn as a double rectangle and its identifying relationship as a double diamond.

5. What are concurrent transactions ? Briefly discuss the problems encountered by
concurrent transactions?
Sol. Concurrent transactions are transactions that execute at the same time on the same database. Concurrency Control is the management procedure required for controlling the concurrent execution of the operations that take place on a database.
Problems with concurrent execution:-
In a database transaction, the two main operations are READ and WRITE. These operations need to be managed during the concurrent execution of transactions because, if they are interleaved without proper control, the data may become inconsistent.
So, the following problems occur with the concurrent execution of operations:
1. Lost Update Problem (W-W Conflict)
This problem occurs when two different database transactions perform read/write operations on the same database items in an interleaved manner (i.e., concurrent execution), which makes the values of the items incorrect and hence leaves the database inconsistent.
2. Dirty Read Problem (W-R Conflict)
The dirty read problem occurs when one transaction updates an item of the database, then the transaction fails, and before the data is rolled back, the updated database item is accessed by another transaction. This creates a Write-Read conflict between the two transactions.
3. Unrepeatable Read Problem (R-W Conflict)
Also known as the Inconsistent Retrievals Problem, this occurs when, within one transaction, two different values are read for the same database item.
6. Consider two transactions TX and TY in the below diagram performing read/write operations on
account A where the available balance in account A is $300:
(Lost Update Problem (W-W Conflict))

● At time t1, transaction TX reads the value of account A, i.e., $300 (read only).
● At time t2, TX deducts $50 from account A, making it $250 in its local buffer (only
deducted, not yet written).
● Meanwhile, at time t3, TY reads the value of account A, which is still $300 because
TX has not written its update yet.
● At time t4, TY adds $100 to account A, making it $400 in its local buffer (only added,
not yet written).
● At time t6, TX writes its value of account A, so A is updated to $250, as TY has not
written its value yet.
● Similarly, at time t7, TY writes its value of account A, computed at time t4, so A
becomes $400. The update written by TX is lost, i.e., the $250 is lost.
Hence the data becomes incorrect, and the database is left inconsistent.
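The interleaving above can be replayed deterministically in a few lines of Python (local variables stand in for each transaction's buffer):

```python
# replay of the schedule: balance A starts at $300
A = 300
tx_buf = A          # t1: TX reads A (300)
tx_buf -= 50        # t2: TX computes 250 in its buffer
ty_buf = A          # t3: TY reads the still-unchanged A (300)
ty_buf += 100       # t4: TY computes 400 in its buffer
A = tx_buf          # t6: TX writes 250
A = ty_buf          # t7: TY writes 400, overwriting TX's update
print(A)            # 400: TX's $50 deduction is lost (a serial schedule gives 350)
```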
7. Consider two transactions TX and TY in the below diagram performing read/write
operations on account A where the available balance in account A is $300:
(Dirty Read Problems (W-R Conflict))

● At time t1, transaction TX reads the value of account A, i.e., $300.


● At time t2, transaction TX adds $50 to account A that becomes $350.
● At time t3, transaction TX writes the updated value in account A, i.e., $350.
● Then at time t4, transaction TY reads account A that will be read as $350.
● Then at time t5, transaction TX rolls back due to a server problem, and the value changes
back to $300 (the initial value).
● But TY has already read $350, a value that was never committed. This uncommitted read is
the dirty read, and the situation is therefore known as the Dirty Read Problem.
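This schedule, too, can be replayed in a short Python sketch (variables stand in for the database value and TY's buffer):

```python
A = 300
committed_A = 300     # last committed value of A
A = A + 50            # t2-t3: TX updates and writes A = 350 (uncommitted)
ty_read = A           # t4: TY reads the uncommitted 350 -> dirty read
A = committed_A       # t5: TX rolls back; A returns to 300
print(ty_read, A)     # TY acted on 350, but the database holds 300
```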
8. Consider two transactions, TX and TY, performing the read/write operations on account A,
having an available balance = $300. The diagram is shown below:

Sol.
● At time t1, transaction TX reads the value from account A, i.e., $300.
● At time t2, transaction TY reads the value from account A, i.e., $300.
● At time t3, transaction TY updates the value of account A by adding $100 to the available
balance, and then it becomes $400.
● At time t4, transaction TY writes the updated value, i.e., $400.
● After that, at time t5, transaction TX reads the available value of account A, and that will
be read as $400.
● It means that within the same transaction, TX reads two different values of account A:
$300 initially and, after the update made by transaction TY, $400. This is an
unrepeatable read and is therefore known as the Unrepeatable Read Problem.
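The same replay technique shows the unrepeatable read (two reads by TX bracketing TY's write):

```python
A = 300
tx_read1 = A        # t1: TX reads 300
ty_buf = A + 100    # t2-t3: TY reads A and adds 100
A = ty_buf          # t4: TY writes 400
tx_read2 = A        # t5: TX reads again and now sees 400
print(tx_read1, tx_read2)  # two different values inside one transaction
```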
9. What is the role of Database Manager ?
Sol. A database manager is responsible for developing and maintaining an organization's
systems that store and organize data. By implementing security programs, the database
manager ensures the safety of the stored data.
10. Explain the important components of database manager with the help of a diagram?
Sol. Components of a database management system
All DBMSs come with the various integrated components and tools necessary to carry out almost all
database management tasks. Some DBMS software even extends beyond the core functionality by
integrating with third-party tools and services, directly or via plugins.

In this section, we will look at the common components that are universal across all DBMS
software, including:

● Storage engine
● Query language
● Query processor
● Optimization engine
● Metadata catalog
● Log manager
● Reporting and monitoring tools
● Data utilities
11. Write short notes on the following :
(i) 2-Phase locking protocol
Sol.
● The two-phase locking protocol divides the execution of a transaction into three
parts.
● In the first part, when the execution of the transaction starts, it seeks permission for
the locks it requires.
● In the second part, the transaction acquires all the locks. The third part starts as
soon as the transaction releases its first lock.
● In the third part, the transaction cannot demand any new locks; it only releases the
acquired locks.

DBMS Lock-Based Protocol


There are two phases of 2PL:

● Growing phase: In the growing phase, a new lock on the data item may be acquired by
the transaction, but none can be released.

● Shrinking phase: In the shrinking phase, existing locks held by the transaction may be
released, but no new locks can be acquired.
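The rule the two phases enforce can be captured in a minimal sketch (the class name and API are invented for illustration; a real lock manager also handles shared/exclusive modes and blocking):

```python
class TwoPhaseTxn:
    """Enforces 2PL: once any lock is released, no new lock may be acquired."""
    def __init__(self):
        self.held = set()
        self.shrinking = False   # flips to True at the first unlock

    def lock(self, item):
        if self.shrinking:
            raise RuntimeError("2PL violated: lock requested after first unlock")
        self.held.add(item)      # growing phase

    def unlock(self, item):
        self.shrinking = True    # shrinking phase begins
        self.held.discard(item)

t = TwoPhaseTxn()
t.lock("A"); t.lock("B")   # growing phase
t.unlock("A")              # shrinking phase starts
try:
    t.lock("C")            # illegal under 2PL
except RuntimeError as e:
    print(e)
```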
(ii) 2-Phase commit protocol
Sol. Two-Phase Commit Protocol: This protocol is designed to commit a transaction atomically across multiple distributed databases operated from different servers (sites), say S1, S2, S3, ..., Sn. Every Si maintains a separate log record of all its activities, and the transaction T is divided into subtransactions T1, T2, T3, ..., Tn, with each Ti assigned to Si. All of this is maintained by a separate transaction manager at each Si, and one site is designated as the coordinator.

Some points to be considered regarding this protocol:

a) In a two-phase commit, we assume that each site logs actions at that site, but there is no
global log.

b) The coordinator (Ci) plays a vital role in deciding whether the distributed transaction
will commit or abort.

c) In this protocol, messages are sent between the coordinator (Ci) and the other sites.
As each message is sent, it is logged at the sending site, to aid in recovery should it be
necessary.
(iii) Time-stamping protocol
Sol. Time stamping protocol:-
● The Timestamp Ordering Protocol is used to order transactions based on their
timestamps. The order of the transactions is simply the ascending order of their
creation times.
● The older transaction has the higher priority, which is why it executes first. To
determine the timestamp of a transaction, this protocol uses system time or a logical
counter.
● Lock-based protocols manage the order between conflicting pairs of transactions at
execution time, whereas timestamp-based protocols start working as soon as a
transaction is created.
● Let's assume there are two transactions, T1 and T2. Suppose transaction T1 entered the
system at time 007 and transaction T2 entered the system at time 009. T1 has the higher
priority, so it executes first, as it entered the system first.
● The timestamp ordering protocol also maintains the timestamps of the last 'read' and
'write' operations on each data item.
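The basic read/write timestamp tests can be sketched as follows (a simplified version of the ordering rules; names are invented, and the 007/009 timestamps follow the example above):

```python
class Item:
    def __init__(self):
        self.r_ts = 0   # timestamp of the last read of this item
        self.w_ts = 0   # timestamp of the last write of this item

def read(item, ts):
    if ts < item.w_ts:                    # item was already written by a younger txn
        return "abort"
    item.r_ts = max(item.r_ts, ts)
    return "ok"

def write(item, ts):
    if ts < item.r_ts or ts < item.w_ts:  # a younger txn already read or wrote it
        return "abort"
    item.w_ts = ts
    return "ok"

x = Item()
r1 = read(x, 7)    # T1 (timestamp 007) reads  -> ok
w1 = write(x, 9)   # T2 (timestamp 009) writes -> ok
w2 = write(x, 7)   # T1 now tries to write     -> abort: a younger txn got there first
print(r1, w1, w2)
```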
(iv) Checkpoints
Sol. Checkpoint:-
● A checkpoint is a mechanism in which all the previous logs are removed from the
system and stored permanently on the storage disk.
● The checkpoint is like a bookmark. During the execution of transactions, such
checkpoints are marked, and as the transaction executes, log files are created for its
steps.
● When the checkpoint is reached, the transaction's updates are written to the database,
and the entire log file up to that point is removed. The log file is then updated with
the new transaction steps until the next checkpoint, and so on.
● The checkpoint is used to declare a point before which the DBMS was in a consistent
state and all transactions were committed.
(v)Full functional dependency
Sol. A non-key attribute is fully functionally dependent on the primary key if it depends on the
whole key and not on any proper subset of it. This is the condition behind Second Normal Form
(2NF): a relation is in 2NF if it meets the requirements of First Normal Form (1NF) and all
non-key attributes are fully functionally dependent on the primary key.
(vi) Partial functional dependency
Sol. Partial Dependency occurs when a non-prime attribute is functionally dependent on part of a
candidate key.

The 2nd Normal Form (2NF) eliminates the Partial Dependency.
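Full versus partial dependency can be checked mechanically on sample data. The helper below (with a hypothetical relation and attribute names) tests whether an FD X → Y holds in a relation stored as a list of dicts: grade depends on the full key (roll, course), while fee depends on course alone, which is a partial dependency.

```python
def holds(rows, lhs, rhs):
    """Check whether the FD lhs -> rhs holds in a relation (list of dicts)."""
    seen = {}
    for row in rows:
        key = tuple(row[a] for a in lhs)
        val = tuple(row[a] for a in rhs)
        if seen.setdefault(key, val) != val:   # same lhs value, different rhs value
            return False
    return True

enrol = [
    {"roll": 1, "course": "DBMS", "fee": 500, "grade": "A"},
    {"roll": 1, "course": "OS",   "fee": 400, "grade": "B"},
    {"roll": 2, "course": "DBMS", "fee": 500, "grade": "C"},
]
# grade is fully dependent on the whole key (roll, course)...
print(holds(enrol, ["roll", "course"], ["grade"]))  # True
print(holds(enrol, ["roll"], ["grade"]))            # False: neither part alone suffices
# ...but fee depends on course alone: a partial dependency, removed by 2NF
print(holds(enrol, ["course"], ["fee"]))            # True
```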


(vii) Transitive functional dependency
Sol. When an indirect relationship causes a functional dependency, it is called a Transitive
Dependency.

If P → Q and Q → R hold, then P → R is a transitive dependency.

To achieve 3NF, eliminate the Transitive Dependency.


(viii) Trivial functional dependency
Sol. Trivial functional dependency:-
● A → B has trivial functional dependency if B is a subset of A.
● The following dependencies are also trivial: A → A, B → B
(ix) Wait-for Graph
Sol. A wait-for graph is one of the methods for detecting a deadlock situation. This method is
suitable for smaller databases. A graph is drawn based on the transactions and their locks on
resources: an edge is added from one transaction to another when the first is waiting for a
resource the second holds. If the resulting graph contains a closed loop, i.e., a cycle, then
there is a deadlock.
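Deadlock detection on a wait-for graph is just cycle detection. A minimal depth-first-search sketch (the transaction names are invented):

```python
def has_deadlock(wait_for):
    """wait_for maps each txn to the set of txns it waits for; deadlock = cycle."""
    WHITE, GREY, BLACK = 0, 1, 2          # unvisited / on current path / done
    colour = {t: WHITE for t in wait_for}

    def dfs(t):
        colour[t] = GREY
        for u in wait_for.get(t, ()):
            if colour.get(u, WHITE) == GREY:               # back edge -> cycle
                return True
            if colour.get(u, WHITE) == WHITE and dfs(u):
                return True
        colour[t] = BLACK
        return False

    return any(colour[t] == WHITE and dfs(t) for t in wait_for)

print(has_deadlock({"T1": {"T2"}, "T2": {"T1"}}))  # True: T1 and T2 wait on each other
print(has_deadlock({"T1": {"T2"}, "T2": set()}))   # False: no cycle
```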
(x) Wound-Wait Protocol
Sol. In this scheme, if an older transaction requests a resource held by a younger transaction,
the older transaction wounds the younger one, i.e., forces it to abort and release the resource.
The wounded younger transaction is restarted after a small delay, but with its original
timestamp. If, instead, a younger transaction requests a resource held by an older one, it
simply waits.
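The decision rule reduces to a single comparison of timestamps (smaller timestamp = older transaction; the function name is invented):

```python
def wound_wait(requester_ts, holder_ts):
    """Wound-Wait: an older requester wounds the younger holder;
    a younger requester simply waits for the older holder."""
    if requester_ts < holder_ts:      # requester is older (smaller timestamp)
        return "wound: abort and restart the holder"
    return "wait"

print(wound_wait(5, 9))   # older requests from younger -> wound
print(wound_wait(9, 5))   # younger requests from older -> wait
```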
(xi) Data Replication in DDBMS
Sol. Data Replication is the process of storing data at more than one site or node. It is useful
for improving the availability of data. It simply means copying data from a database on one
server to another server so that all users can share the same data without any inconsistency.
(xii)Deadlock and its prevention in DBMS
Sol. Deadlock in DBMS
A deadlock is a condition where two or more transactions are waiting indefinitely for one another
to give up locks. Deadlock is said to be one of the most feared complications in a DBMS, as no
task ever gets finished and every transaction waits forever.
For example: In the student table, transaction T1 holds a lock on some rows and needs to update
some rows in the grade table. Simultaneously, transaction T2 holds locks on some rows in the
grade table and needs to update the rows in the Student table held by Transaction T1.

Now the main problem arises: transaction T1 is waiting for T2 to release its lock and, similarly,
transaction T2 is waiting for T1 to release its lock. All activity comes to a standstill and
remains so until the DBMS detects the deadlock and aborts one of the transactions.
(xiii) Distributed DBMS
Sol. A distributed database (DDB) is a collection of multiple, logically interrelated databases
distributed over a computer network. A distributed database management system (D–DBMS) is
the software that manages the DDB and provides an access mechanism that makes this
distribution transparent to the users.
(xiv) Clustering Indexes and their Implementation
Sol. A clustered index is a type of index that sorts the data rows in the table on their key
values. In a database, there is only one clustered index per table.

A clustered index defines the order in which data is stored in the table, and the data can be
sorted in only one way; hence there can be only a single clustered index for every table. In an
RDBMS, the primary key usually lets you create a clustered index based on that specific column.
(xv) Shadow Paging
Sol. Shadow paging is one of the techniques used to recover from failure. Recovery means getting
back information that has been lost. Shadow paging helps to maintain database consistency in
case of failure.

Concept of shadow paging


Now let see the concept of shadow paging step by step −

● Step 1 − Page is a segment of memory. Page table is an index of pages. Each table entry
points to a page on the disk.

● Step 2 − Two page tables are used during the life of a transaction: the current page table
and the shadow page table. Shadow page table is a copy of the current page table.

● Step 3 − When a transaction starts, both tables look identical; only the current table is
updated for each write operation.

● Step 4 − The shadow page is never changed during the life of the transaction.

● Step 5 − When the current transaction is committed, the shadow page entry becomes a
copy of the current page table entry and the disk block with the old data is released.

● Step 6 − The shadow page table is stored in non-volatile memory. If the system crash
occurs, then the shadow page table is copied to the current page table.
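The steps above can be sketched in a few lines (the page contents and names are invented; a real implementation works with disk blocks and a durable page table, not Python dicts):

```python
# toy shadow paging: "disk" pages plus two page tables
disk = {0: "old-A", 1: "old-B"}          # page id -> page contents
shadow_table = {"A": 0, "B": 1}          # durable table, never changed mid-transaction
current_table = dict(shadow_table)       # working copy for this transaction

def txn_write(name, value):
    new_page = max(disk) + 1             # each write goes to a fresh page
    disk[new_page] = value
    current_table[name] = new_page       # only the current table is redirected

txn_write("A", "new-A")                  # shadow_table still maps A -> old page 0

# commit: atomically install the current table as the new shadow table
shadow_table = dict(current_table)
print(disk[shadow_table["A"]])           # a crash before commit would have kept old-A
```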
Advantages
The advantages of shadow paging are as follows −

● No need for log records.
● No undo/redo algorithm is needed.
● Recovery is faster.
Disadvantages
The disadvantages of shadow paging are as follows −

● Data is fragmented or scattered.

● Garbage collection problem. Database pages containing old versions of modified data
need to be garbage collected after every transaction.

● Concurrent transactions are difficult to execute.


(xvi) Weak Entity along with an example.
Sol. A weak entity is one that can only exist when owned by another entity.
For example, a ROOM can only exist in a BUILDING. On the other hand, a TIRE might be
considered a strong entity because it can also exist without being attached to a CAR.
(xvii) Specialisation in ERD with an appropriate example.
Sol. Specialization is a top-down approach, and it is opposite to Generalization.
● In specialization, one higher level entity can be broken down into two lower level entities.
● Specialization is used to identify the subset of an entity set that shares some
distinguishing characteristics.
Normally, the superclass is defined first, the subclass and its related attributes are defined
next, and the relationship sets are then added.
12. What do you understand by the term "Normalization" in DBMS ? Write the
statement for Second Normal Form (2NF), and discuss the insert, delete and update
anomalies associated with 2NF?
13. Differentiate between the concepts of Logical data independence and Physical data
independence in DBMS?
14. Explain ANSI SPARC 3 level architecture of DBMS, with the details of languages
associated at different levels and the type of data independence in between different
levels. Give a suitable diagram in support of your explanation.
15. Discuss the term wait-for graph. What is the utility of wait-for graphs in describing
deadlocks ? Give suitable examples in support of your discussion?
16. Do the last 3 years' question papers, June and December both.
17. Also practice the practical questions.
18. Focus more on Blocks 1 and 2.
