Concurrency control ensures accurate and timely results from concurrent operations in data management programming. In distributed systems it is implemented with exclusive locks and lock managers, and two-phase locking enforces an order in which resources are accessed. However, the shrinking (contraction) phase of two-phase locking can cause problems if transactions abort.
In data management programming, concurrency control is a mechanism designed to ensure that concurrent operations generate accurate results, and that those results are obtained in a timely manner. Concurrency control is most often found in databases, which hold stores of searchable information that many users retrieve and modify at the same time.
Programmers try to design a database such that the effect of important transactions on shared data is serially equivalent. This means that the combined effect of a set of concurrent transactions on shared data is the same as if the transactions had been performed one at a time, in some particular order. Without this guarantee, data can become invalid when two transactions modify it simultaneously, as the sketch below illustrates.
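A minimal sketch of the problem, assuming a hypothetical shared value and deposit routine (the names `balance` and `deposit` are illustrative, not from the article). Two concurrent updates interleave so that the result is not equivalent to any serial execution:

```python
import threading
import time

balance = 100  # shared value; the name is illustrative

def deposit(amount):
    global balance
    current = balance           # read the shared value
    time.sleep(0.01)            # force an unlucky interleaving for the demo
    balance = current + amount  # write back, based on a now-stale read

threads = [threading.Thread(target=deposit, args=(10,)) for _ in range(2)]
for t in threads:
    t.start()
for t in threads:
    t.join()

# Running both deposits one after another would always give 120; the
# interleaved run prints 110 because one update overwrites the other.
print(balance)
```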
There are several ways to force transactions to execute one after another, including using mutual exclusion or a single server that decides which transactions have access to which resources. This is excessive, however, and throws away the benefits of concurrency in a distributed system. Concurrency control instead allows multiple transactions to run at the same time while keeping them from interfering with one another, so that their combined effect remains serially equivalent. One way to implement this is with an exclusive lock on each shared resource: a transaction locks an object before using it, and if another transaction requests the locked object, it must wait until the object is unlocked, as shown in the sketch below.
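A minimal sketch of per-object exclusive locking, assuming a hypothetical `LockedObject` wrapper (the class and transaction names are illustrative). The second transaction blocks until the first releases the object's lock, so the two updates are serialized:

```python
import threading
import time

class LockedObject:
    def __init__(self, value):
        self.value = value
        self.lock = threading.Lock()  # exclusive lock guarding this object

account = LockedObject(100)

def transfer_in(obj, amount, name):
    with obj.lock:                    # block here if another transaction holds the lock
        current = obj.value
        time.sleep(0.01)              # simulate work inside the transaction
        obj.value = current + amount
        print(f"{name} committed, value = {obj.value}")

t1 = threading.Thread(target=transfer_in, args=(account, 10, "T1"))
t2 = threading.Thread(target=transfer_in, args=(account, 10, "T2"))
t1.start(); t2.start()
t1.join(); t2.join()
print(account.value)  # always 120: the lock serializes the two updates
```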
Implementing this method in a distributed system involves lock managers, which are servers that issue locks on resources. This is very similar to a centralized mutual exclusion server, where clients request locks on a particular resource and later send messages to release them. Serial equivalence still has to be preserved, however: if two separate transactions access the same set of objects, the results should be the same as if the transactions had been executed in some particular order. To enforce such ordering of access to resources, two-phase locking is introduced, under which a transaction may not acquire any new lock after it has released a lock. A lock-manager sketch follows.
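A minimal lock-manager sketch under assumed names (`LockManager`, `acquire`, `release` are illustrative, not a specific product's API). A single manager grants exclusive locks on named resources, and two-phase locking is enforced by refusing new lock requests from a transaction once it has released any lock:

```python
import threading

class LockManager:
    def __init__(self):
        self._locks = {}          # resource name -> owning transaction id
        self._released = set()    # transactions already in their shrinking phase
        self._cond = threading.Condition()

    def acquire(self, txn, resource):
        with self._cond:
            if txn in self._released:
                raise RuntimeError(f"{txn} violated two-phase locking: "
                                   "no new locks after a release")
            # Wait until no other transaction holds the resource.
            while self._locks.get(resource) not in (None, txn):
                self._cond.wait()
            self._locks[resource] = txn

    def release(self, txn, resource):
        with self._cond:
            if self._locks.get(resource) == txn:
                del self._locks[resource]
                self._released.add(txn)   # txn is now in its shrinking phase
                self._cond.notify_all()

# Usage: a transaction acquires every lock it needs (growing phase),
# does its work, then releases the locks (shrinking phase).
lm = LockManager()
lm.acquire("T1", "account_A")
lm.acquire("T1", "account_B")
# ... read and write account_A and account_B ...
lm.release("T1", "account_A")
lm.release("T1", "account_B")
```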
In two-phase locking, the first phase is the growing phase, during which a transaction acquires the locks it needs. The next phase is the shrinking (contraction) phase, during which the transaction releases its locks. There is a problem with this scheme: if a transaction aborts, other transactions may already have used data from objects that the aborted transaction modified and unlocked. Those transactions would then have to abort as well.
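A minimal sketch of that cascading-abort scenario, with illustrative variable names and the lock steps shown only as comments (no real database is involved). T1 updates an object and releases its lock before committing, T2 reads the uncommitted value, and when T1 aborts, T2's work is based on data that never existed:

```python
committed_value = 100
working_value = committed_value

# T1 (growing phase): locks the object and updates it.
working_value = 150
# T1 (shrinking phase): releases the lock *before* committing.

# T2: acquires the now-free lock and reads the uncommitted value.
t2_snapshot = working_value        # T2 sees 150

# T1 aborts: its update is rolled back.
working_value = committed_value    # back to 100

# T2 computed with 150, a value that was never committed, so T2 must
# also abort -- the cascading abort described above.
print(t2_snapshot, working_value)  # 150 100
```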