Bakery Algorithm in Process Synchronization
Prerequisite - Critical Section, Process Synchronization, Inter Process Communication
The Bakery Algorithm (due to Leslie Lamport) is one of the simplest known solutions to the mutual exclusion problem for the general case of N processes. It is a critical section solution for N processes and preserves the first come, first served property.
How does the Bakery Algorithm work?
In the Bakery Algorithm, each process receives a number (a ticket) before it tries to enter the critical section, much like a customer taking a numbered token at a bakery. The process holding the smallest ticket number enters the critical section next. If two processes receive the same ticket number, the process with the lower process ID is given priority. For example, if P1 holds ticket 3 while P2 and P5 both hold ticket 4, the entry order is P1, then P2, then P5.
How does the Bakery Algorithm ensure fairness?
The Bakery Algorithm ensures fairness by serving processes in non-decreasing order of their (ticket number, process ID) pairs. Because a process that arrives later always draws a ticket at least as large as those already waiting, processes are served roughly in the order they arrive, and every process is guaranteed to eventually enter the critical section.
- Before entering its critical section, a process receives a ticket number. The holder of the smallest number enters the critical section.
- If processes Pi and Pj receive the same number, the one with the smaller index is served first: if i < j, Pi is served first; otherwise Pj is served first.
- The numbering scheme always generates numbers in non-decreasing order; e.g., 1, 2, 3, 3, 3, 3, 4, 5, ...
Notation - The lexicographical order is defined on pairs (ticket #, process ID #): the ticket numbers are compared first, and if they are equal, the process IDs are compared next, i.e.,
– (a, b) < (c, d) if a < c, or if a = c and b < d
– max(a[0], ..., a[n - 1]) is a number k such that k >= a[i] for i = 0, ..., n - 1
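This pairwise comparison is straightforward to express in code. Below is a minimal C sketch (the function name ticket_less and the sample values are illustrative, not part of the original pseudocode) showing how (ticket number, process ID) pairs are ordered:
C
#include <stdio.h>

/* Lexicographic comparison of (ticket, process ID) pairs:
   (a, b) < (c, d)  iff  a < c, or a == c and b < d. */
int ticket_less(int ticket_a, int id_a, int ticket_b, int id_b)
{
    return ticket_a < ticket_b
           || (ticket_a == ticket_b && id_a < id_b);
}

int main(void)
{
    /* P1 holds ticket 3; P2 and P5 both hold ticket 4. */
    printf("%d\n", ticket_less(3, 1, 4, 2)); /* 1: P1 goes before P2 */
    printf("%d\n", ticket_less(4, 2, 4, 5)); /* 1: same ticket, so P2 goes before P5 */
    printf("%d\n", ticket_less(4, 5, 3, 1)); /* 0: P5 must wait for P1 */
    return 0;
}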
Shared data - choosing is an array [0..n - 1] of boolean values and number is an array [0..n - 1] of integers, initialized to false and zero respectively.
Algorithm Pseudocode -
repeat
    choosing[i] := true;
    number[i] := max(number[0], number[1], ..., number[n - 1]) + 1;
    choosing[i] := false;
    for j := 0 to n - 1
    do begin
        while choosing[j] do no-op;
        while number[j] != 0
              and (number[j], j) < (number[i], i) do no-op;
    end;
    critical section
    number[i] := 0;
    remainder section
until false;
Explanation - First, the process sets its "choosing" variable to TRUE, indicating its intent to enter the critical section. It then takes a ticket number equal to one more than the largest ticket number currently held by any process. After that, the "choosing" variable is set back to FALSE, indicating that the process now holds a valid ticket. This is in fact the most important and most confusing part of the algorithm: it is a small critical section in itself. The purpose of these first three lines is to ensure that while a process is updating its ticket value, no other process is allowed to examine its old, now-obsolete ticket. That is why, inside the for loop, before checking another process's ticket we first wait until its "choosing" variable is FALSE. Only then do we compare tickets; the process with the smallest (ticket number, process ID) pair proceeds into the critical section. The exit section simply resets the process's ticket value to zero.
Code - Here are implementations of the Bakery Algorithm. The C and C++ versions are intended to be compiled and run in a UNIX-like environment -
C++
#include <algorithm>
#include <atomic>
#include <chrono>
#include <iostream>
#include <mutex>
#include <thread>
#include <vector>

#define THREAD_COUNT 8

std::vector<int> tickets(THREAD_COUNT);  // Ticket number of each thread (0 = no ticket)
std::vector<int> choosing(THREAD_COUNT); // 1 while a thread is picking its ticket
volatile int resource = 0;               // The shared resource guarded by the lock
std::mutex mtx;                          // Mutex for resource access

// ENTRY section: acquire the bakery lock
void lock(int thread)
{
    // Announce that this thread is choosing a ticket
    choosing[thread] = 1;
    std::atomic_thread_fence(std::memory_order_seq_cst);

    // Take a ticket one greater than the current maximum
    int max_ticket = 0;
    for (int i = 0; i < THREAD_COUNT; ++i) {
        int ticket = tickets[i];
        max_ticket = ticket > max_ticket ? ticket : max_ticket;
    }
    tickets[thread] = max_ticket + 1;

    std::atomic_thread_fence(std::memory_order_seq_cst);
    choosing[thread] = 0;

    for (int other = 0; other < THREAD_COUNT; ++other) {
        // Wait until the other thread has finished choosing its ticket
        while (choosing[other]) {
        }
        std::atomic_thread_fence(std::memory_order_seq_cst);
        // Wait while the other thread holds a smaller (ticket, id) pair
        while (tickets[other] != 0
               && (tickets[other] < tickets[thread]
                   || (tickets[other] == tickets[thread]
                       && other < thread))) {
        }
    }
}

// EXIT section: release the bakery lock
void unlock(int thread)
{
    std::atomic_thread_fence(std::memory_order_seq_cst);
    tickets[thread] = 0;
}

// CRITICAL section: use the shared resource
void use_resource(int thread)
{
    std::lock_guard<std::mutex> lock(mtx);
    if (resource != 0) {
        std::cout << "Resource was acquired by " << thread
                  << ", but is still in-use by " << resource
                  << "!\n";
    }
    resource = thread;
    std::cout << thread << " using resource...\n";
    std::atomic_thread_fence(std::memory_order_seq_cst);
    std::this_thread::sleep_for(std::chrono::seconds(2));
    resource = 0;
}

// Each thread acquires the lock, uses the resource, then releases the lock
void thread_body(int thread)
{
    lock(thread);
    use_resource(thread);
    unlock(thread);
}

int main()
{
    std::fill(tickets.begin(), tickets.end(), 0);
    std::fill(choosing.begin(), choosing.end(), 0);
    resource = 0;

    std::vector<std::thread> threads;
    for (int i = 0; i < THREAD_COUNT; ++i) {
        threads.emplace_back(thread_body, i);
    }
    for (auto& thread : threads) {
        thread.join();
    }
    return 0;
}
// Compile this code using the following command to link
// against the pthread library: g++ -std=c++11 -pthread
// Solution.cpp -o Solution
// Note: Ensure that you have the '-pthread' option to
// properly link against the pthread library.
C
// Including the POSIX threads library
#include <pthread.h>
#include <stdio.h>
// Including the POSIX operating system API and string utilities
#include <unistd.h>
#include <string.h>
// This is a memory barrier instruction.
// Causes compiler to enforce an ordering
// constraint on memory operations.
// This means that operations issued prior
// to the barrier will be performed
// before operations issued after the barrier.
#define MEMBAR __sync_synchronize()
#define THREAD_COUNT 8
volatile int tickets[THREAD_COUNT];
volatile int choosing[THREAD_COUNT];
// VOLATILE used to prevent the compiler
// from applying any optimizations.
volatile int resource;
void lock(int thread)
{
// Before getting the ticket number
//"choosing" variable is set to be true
choosing[thread] = 1;
MEMBAR;
// Memory barrier applied
int max_ticket = 0;
// Finding Maximum ticket value among current threads
for (int i = 0; i < THREAD_COUNT; ++i) {
int ticket = tickets[i];
max_ticket
= ticket > max_ticket ? ticket : max_ticket;
}
// Allotting a new ticket value as MAXIMUM + 1
tickets[thread] = max_ticket + 1;
MEMBAR;
choosing[thread] = 0;
MEMBAR;
// The ENTRY Section starts from here
for (int other = 0; other < THREAD_COUNT; ++other) {
// Applying the bakery algorithm conditions
while (choosing[other]) {
}
MEMBAR;
while (tickets[other] != 0
&& (tickets[other] < tickets[thread]
|| (tickets[other] == tickets[thread]
&& other < thread))) {
}
}
}
// EXIT Section
void unlock(int thread)
{
MEMBAR;
tickets[thread] = 0;
}
// The CRITICAL Section
void use_resource(int thread)
{
if (resource != 0) {
printf("Resource was acquired by %d, but is still "
"in-use by %d!\n",
thread, resource);
}
resource = thread;
printf("%d using resource...\n", thread);
MEMBAR;
sleep(2);
resource = 0;
}
// A simplified function to show the implementation
void* thread_body(void* arg)
{
long thread = (long)arg;
lock(thread);
use_resource(thread);
unlock(thread);
return NULL;
}
int main(int argc, char** argv)
{
memset((void*)tickets, 0, sizeof(tickets));
memset((void*)choosing, 0, sizeof(choosing));
resource = 0;
// Declaring the thread variables
pthread_t threads[THREAD_COUNT];
for (int i = 0; i < THREAD_COUNT; ++i) {
// Creating a new thread with the function
//"thread_body" as its thread routine
pthread_create(&threads[i], NULL, &thread_body,
(void*)((long)i));
}
for (int i = 0; i < THREAD_COUNT; ++i) {
// Reaping the resources used by
// all threads once their task is completed !
pthread_join(threads[i], NULL);
}
return 0;
}
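The C version must be linked against the pthread library. Assuming the source file is saved as bakery.c (the filename is only an example), a command along these lines should work on a UNIX-like system:
gcc -pthread bakery.c -o bakery
./bakery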
Java
import java.util.concurrent.atomic.AtomicInteger;
import java.util.concurrent.locks.Lock;
import java.util.concurrent.locks.ReentrantLock;
public class Main {
// Define the number of threads
private static final int THREAD_COUNT = 8;
// Define tickets for each thread
private static AtomicInteger[] tickets
= new AtomicInteger[THREAD_COUNT];
// Define choosing array to indicate if a thread is
// trying to enter the critical section
private static AtomicInteger[] choosing
= new AtomicInteger[THREAD_COUNT];
// Define the shared resource
private static AtomicInteger resource
= new AtomicInteger(0);
// Mutex for resource access
private static Lock mtx = new ReentrantLock();
public static void main(String[] args)
{
// Initialize tickets and choosing arrays
for (int i = 0; i < THREAD_COUNT; i++) {
tickets[i] = new AtomicInteger(0);
choosing[i] = new AtomicInteger(0);
}
// Initialize the shared resource
resource.set(0);
// Create threads
Thread[] threads = new Thread[THREAD_COUNT];
for (int i = 0; i < THREAD_COUNT; i++) {
final int thread = i;
threads[i] = new Thread(() -> {
lock(thread); // Acquire the lock
useResource(
thread); // Use the shared resource
unlock(thread); // Release the lock
});
threads[i].start(); // Start the thread
}
// Wait for all threads to finish
for (Thread thread : threads) {
try {
thread.join();
}
catch (InterruptedException e) {
e.printStackTrace();
}
}
}
// Method to acquire the lock
private static void lock(int thread)
{
choosing[thread].set(
1); // Indicate that the thread is trying to
// enter the critical section
// Find the maximum ticket number and assign the
// next number to the current thread
int maxTicket = 0;
for (int i = 0; i < THREAD_COUNT; i++) {
int ticket = tickets[i].get();
maxTicket = Math.max(ticket, maxTicket);
}
tickets[thread].set(
maxTicket + 1); // Assign the next ticket number
// to the current thread
choosing[thread].set(0); // Indicate that the thread
// has got its ticket
// Wait until all other threads have got their
// tickets and it's the current thread's turn
for (int other = 0; other < THREAD_COUNT; other++) {
while (choosing[other].get() != 0) {
}
while (tickets[other].get() != 0
&& (tickets[other].get()
< tickets[thread].get()
|| (tickets[other].get()
== tickets[thread].get()
&& other < thread))) {
}
}
}
// Method to release the lock
private static void unlock(int thread)
{
tickets[thread].set(
0); // Reset the ticket of the current thread
}
// Method to use the shared resource
private static void useResource(int thread)
{
mtx.lock(); // Acquire the mutex lock
try {
// Check if the resource is already in use
if (resource.get() != 0) {
System.out.println(
"Resource was acquired by " + thread
+ ", but is still in-use by "
+ resource.get() + "!");
}
// Use the resource
resource.set(thread);
System.out.println(thread
+ " using resource...");
// Simulate the usage of the resource
try {
Thread.sleep(2000);
}
catch (InterruptedException e) {
e.printStackTrace();
}
// Release the resource
resource.set(0);
}
finally {
mtx.unlock(); // Release the mutex lock
}
}
}
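The Java version needs no external dependencies; java.util.concurrent and java.util.concurrent.atomic are part of the standard library. Assuming the file is saved as Main.java (the class is named Main), it can be compiled and run with:
javac Main.java
java Main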
C#
using System;
using System.Threading;
public class Solution {
private const int THREAD_COUNT = 8;
private static int[] tickets
= new int[THREAD_COUNT]; // Ticket array for each
// thread
private static int[] choosing
= new int[THREAD_COUNT]; // Array to indicate if a
// thread is choosing
private static volatile int resource
= 0; // Volatile resource variable
private static object lockObj
= new object(); // Lock object for synchronization
// Memory barrier instruction.
private static void Membar() { Thread.MemoryBarrier(); }
// Function to acquire the lock
private static void Lock(int thread)
{
choosing[thread]
= 1; // Indicate that this thread is choosing
Membar(); // Memory barrier
int maxTicket = 0;
// Find the maximum ticket value
for (int i = 0; i < THREAD_COUNT; i++) {
int ticket = tickets[i];
maxTicket = Math.Max(ticket, maxTicket);
}
// Assign ticket to this thread
tickets[thread] = maxTicket + 1;
Membar(); // Memory barrier
choosing[thread] = 0; // Done choosing
Membar(); // Memory barrier
// The ENTRY Section starts from here
for (int other = 0; other < THREAD_COUNT; ++other) {
// Applying the bakery algorithm conditions
while (choosing[other] != 0) {
}
Membar();
while (tickets[other] != 0
&& (tickets[other] < tickets[thread]
|| (tickets[other] == tickets[thread]
&& other < thread))) {
}
}
}
// EXIT Section
private static void Unlock(int thread)
{
Membar();
tickets[thread] = 0;
}
// The CRITICAL Section
private static void UseResource(int thread)
{
lock(lockObj) // Lock to ensure thread-safe access
// to resource
{
// Check if resource is already in use
if (resource != 0) {
Console.WriteLine(
    $"Resource was acquired by {thread}, but is still in-use by {resource}!");
}
// Acquire resource
resource = thread;
Console.WriteLine($"{thread} using resource...");
}
// Simulate resource usage
Thread.Sleep(TimeSpan.FromSeconds(2));
// Release resource
lock(lockObj) { resource = 0; }
}
// A simplified function to show the implementation
private static void ThreadBody(object arg)
{
    int thread = (int)arg; // Unbox the thread index passed to Start()
    Lock(thread);          // Acquire lock
    UseResource(thread);   // Use resource
    Unlock(thread);        // Release lock
}
public static void Main(string[] args)
{
Array.Clear(
tickets, 0,
THREAD_COUNT); // Initialize ticket array
Array.Clear(
choosing, 0,
THREAD_COUNT); // Initialize choosing array
resource = 0; // Initialize resource
Thread[] threads = new Thread[THREAD_COUNT];
for (int i = 0; i < THREAD_COUNT; ++i) {
threads[i] = new Thread(ThreadBody);
threads[i].Start(i);
}
for (int i = 0; i < THREAD_COUNT; ++i) {
threads[i].Join();
}
}
}
JavaScript
const THREAD_COUNT = 8;
const tickets = new Array(THREAD_COUNT).fill(0);
const choosing = new Array(THREAD_COUNT).fill(0);
let resource = 0;
const lockObj = {};
// Memory barrier instruction.
function membar() {
// JavaScript doesn't have explicit memory barrier instructions.
// In most cases, JavaScript's single-threaded
//nature makes explicit memory barriers unnecessary.
// If working with Web Workers or other asynchronous operations,
// additional synchronization may be required.
}
// Function to acquire the lock
function lock(thread) {
choosing[thread] = 1;
membar();
let maxTicket = 0;
// Find the maximum ticket value
for (let i = 0; i < THREAD_COUNT; i++) {
const ticket = tickets[i];
maxTicket = Math.max(ticket, maxTicket);
}
// Assign ticket to this thread
tickets[thread] = maxTicket + 1;
membar();
choosing[thread] = 0;
membar();
// The ENTRY Section starts from here
for (let other = 0; other < THREAD_COUNT; ++other) {
while (choosing[other] !== 0) {}
membar();
while (tickets[other] !== 0 && (tickets[other] <
tickets[thread] || (tickets[other] === tickets[thread] &&
other < thread))) {}
}
}
// EXIT Section
function unlock(thread) {
membar();
tickets[thread] = 0;
}
// The CRITICAL Section
function useResource(thread) {
lockObj.lock = true;
// Check if resource is already in use
if (resource !== 0) {
console.log(`Resource was acquired by ${thread}, but is still in-use by ${resource}!`);
}
// Acquire resource
resource = thread;
console.log(`${thread} using resource...`);
// Simulate resource usage
setTimeout(() => {
// Release resource
resource = 0;
lockObj.lock = false;
}, 2000);
}
// A simplified function to show the implementation
function threadBody(thread) {
lock(thread);
useResource(thread);
unlock(thread);
}
const threads = [];
for (let i = 0; i < THREAD_COUNT; ++i) {
threads[i] = setTimeout(threadBody, 0, i);
}
// Note: This code uses setTimeout to simulate threads,
// but for actual multithreading in JavaScript,
// you would need to use a library or environment that supports it,
// such as Node.js with Worker Threads
// or a web browser environment with Web Workers.
// To run this file with Node.js: node multithreading.js
Output: Each thread enters the critical section one at a time and prints a line of the form "<id> using resource...".
Advantages of the Bakery Algorithm:
- Fairness: The Bakery Algorithm provides fairness, as it ensures that all processes get a fair chance to access the critical section, and no process will be left waiting indefinitely.
- Easy to Implement: The algorithm is easy to understand and implement, as it relies on simple concepts such as ticket numbers and choosing flags to ensure mutual exclusion.
- No Deadlock: The Bakery Algorithm ensures that there is no deadlock situation in the system.
- No starvation: The algorithm also ensures that there is no starvation of any process, as every process gets a fair chance to enter the critical section.
Disadvantages of the Bakery Algorithm:
- Not Scalable: The Bakery Algorithm is not scalable, as the overhead of the algorithm increases with the number of processes in the system.
- High Time Complexity: The algorithm has a high time complexity, which increases as the number of processes in the system increases. This can result in performance issues in systems with a large number of processes.
- Busy Waiting: The algorithm requires busy waiting, which can lead to wastage of CPU cycles and increased power consumption.
- Memory Overhead: The algorithm requires extra memory to store the ticket number and choosing flag of every process, which leads to increased memory overhead in systems with a large number of processes.