Rtos Unit 2 Notes
Kernel services:
Kernel
At the core of any RTOS lies the kernel. It is the piece of software responsible for providing
secure access to the system's hardware and for running the programs. A kernel is common to
every operating system, whether real-time or non-real-time.
Kernel is responsible for providing essential services such as task scheduling, interrupt
handling, memory management, and inter-task communication.
The kernel typically comes in different configurations tailored to the specific requirements of
the application, such as pre-emptive or cooperative scheduling, fixed or variable task
priorities, and the level of real-time determinism.
Task Management
Inside the kernel is the scheduler. It is basically a set of algorithms that manages the order in which
tasks run. Multitasking refers to the kernel's ability to control multiple tasks that must run within
time deadlines.
Multitasking may give the impression that multiple threads are running concurrently; in fact, the
processor runs one task at a time, switching between them according to the task schedule.
Tasks: In an RTOS, the basic unit of execution is a task, also known as a thread or
process. Tasks represent individual activities or functions within the system and can
execute concurrently or sequentially depending on their scheduling policy.
Task Control Block (TCB): Each task is associated with a data structure known as a
Task Control Block (TCB), which contains information about the task’s state, priority,
stack pointer, program counter, and other relevant context information.
Task Scheduling: The RTOS scheduler is responsible for determining which task should
execute next based on their priority, scheduling algorithm (such as preemptive or
cooperative), and real-time constraints. Task scheduling can be preemptive, where higher-
priority tasks can interrupt lower-priority ones, or cooperative, where tasks yield control
voluntarily.
Interrupt Handling
Interrupt Service Routines (ISRs): RTOSes must efficiently handle interrupts generated
by hardware peripherals, timers, or external events. Interrupt Service Routines (ISRs) are
special functions invoked in response to interrupts, and they typically have higher priority
than tasks to ensure timely handling of critical events.
Interrupt Latency: Minimizing interrupt latency is crucial in real-time systems to ensure
timely response to external stimuli. RTOS kernels are optimized to keep interrupt latency
low by prioritizing interrupt handling and minimizing non-deterministic delays.
Memory Management
Dynamic Memory Allocation: Some RTOS kernels support dynamic memory allocation
to manage memory resources efficiently. However, dynamic memory allocation can
introduce non-deterministic behavior and fragmentation issues, so its usage in real-time
systems is often limited or carefully managed.
Static Memory Allocation: Many RTOS applications use static memory allocation,
where memory for tasks, stacks, and other data structures is allocated statically at compile
time. Static allocation eliminates the overhead and unpredictability associated with
dynamic memory management, making it well-suited for real-time systems.
Timer Management
RTOS kernels often provide timer services to facilitate time-based operations such as
periodic task execution, timeouts, and scheduling of time-critical events.
Timers can be implemented using hardware timers or software counters managed by the
kernel.
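As an illustration of kernel timer services, the sketch below shows a periodic task written against a FreeRTOS-style API (vTaskDelayUntil, pdMS_TO_TICKS); the task name and 100 ms period are hypothetical, and other RTOSs provide equivalent delay-until calls under different names.

    #include "FreeRTOS.h"
    #include "task.h"

    /* Hypothetical periodic task: runs every 100 ms using the kernel tick timer. */
    void vSensorTask(void *pvParameters)
    {
        TickType_t xLastWakeTime = xTaskGetTickCount();
        const TickType_t xPeriod = pdMS_TO_TICKS(100);

        for (;;)
        {
            /* Block until exactly one period has elapsed since the last wake-up,
               so the task runs at a fixed rate regardless of its execution time. */
            vTaskDelayUntil(&xLastWakeTime, xPeriod);

            /* ... do the periodic work here ... */
        }
    }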
Task States:
1. Running: which means that the microprocessor is executing the instructions that make up
this task. Unless yours is a multiprocessor system, there is only one microprocessor, and
hence only one task that is in the running state at any given time.
2. Ready: which means that some other task is in the running state but that this task has
things that it could do if the microprocessor becomes available. Any number of tasks can be in
this state.
3. Blocked: which means that this task hasn't got anything to do right now, even if the
microprocessor becomes available. Tasks get into this state because they are waiting for some
external event. For example, a task that handles data coming in from a network will have
nothing to do when there is no data. A task that responds to the user when he presses a
button has nothing to do until the user presses the button. Any number of tasks can be in this
state as well.
4. Suspended: Tasks in the Blocked state normally have a 'timeout' period, after which the task times
out and is unblocked, even if the event the task was waiting for has not occurred.
• Tasks in the Suspended state, by contrast, have no such timeout: they cannot be selected to enter the
Running state until they are explicitly resumed.
Most RTOSs offer a double handful of other task states. Included among the offerings are
suspended, pended, waiting, dormant, and delayed.
The Scheduler:
A part of the RTOS called the scheduler keeps track of the state of each task and decides which one
task should go into the running state. Unlike the scheduler in Unix or Windows, the schedulers in
most RTOSs are entirely simpleminded about which task should get the processor: they look at
priorities you assign to the tasks, and among the tasks that are not in the blocked state, the one with
the highest priority runs, and the rest of them wait in the ready state.
A task will only block because it decides for itself that it has run out of things
to do. Other tasks in the system or the scheduler cannot decide for a task that it needs to
wait for something. As a consequence of this, a task has to be running just before it is
blocked: it has to execute the instructions that figure out that there's nothing more to do.
While a task is blocked, it never gets the microprocessor. Therefore, an interrupt routine or
some other task in the system must be able to signal that whatever the task was waiting for
has happened. Otherwise, the task will be blocked forever.
The shuffling of tasks between the ready and running states is entirely the work of the
scheduler. Tasks can block themselves, and tasks and interrupt routines can move other tasks
from the blocked state to the ready state, but the scheduler has control over the running
state.
Since you can share data variables among tasks, it is easy to move data from one task to
another: the two tasks need only have access to the same variables. You can easily
accomplish this by having the two tasks in the same module in which the variables are
declared, or you can make the variables public in one of the tasks and declare them extern in the
other. A sketch of how the former might be accomplished follows.
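A minimal sketch of the first approach (both tasks in the same module as the shared variables); the task and helper names are hypothetical, and the task entry points are assumed to be registered with the RTOS elsewhere:

    /* vtasks.c -- both tasks live in the module that declares the shared data. */
    extern void SendReport(long lValue);   /* hypothetical reporting helper */

    static long lSecondsToday;             /* shared data, visible to both tasks */

    void vCountSecondsTask(void *pvParameters)
    {
        for (;;)
        {
            /* ... wait for the one-second timer event ... */
            lSecondsToday++;               /* writer */
        }
    }

    void vReportTimeTask(void *pvParameters)
    {
        for (;;)
        {
            /* ... wait for a report request ... */
            SendReport(lSecondsToday);     /* reader */
        }
    }

As the semaphore discussion later in this unit points out, shared variables accessed this way still need protection against simultaneous access.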
Task control block:
A Task Control Block (TCB) is a data structure used by the operating system to manage and
keep track of information about a process or thread. It contains important details such as the task's
state, its priority, its stack pointer, the saved program counter and register values, and other context
information needed to suspend and resume the task.
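A minimal sketch of what a TCB might contain is given below; the structure name and fields are illustrative only, since every RTOS defines its own TCB layout.

    /* Illustrative TCB layout (field names are hypothetical). */
    typedef enum { TASK_READY, TASK_RUNNING, TASK_BLOCKED, TASK_SUSPENDED } TaskState;

    typedef struct TaskControlBlock
    {
        unsigned int   taskId;              /* unique identifier for the task         */
        TaskState      state;               /* Ready, Running, Blocked or Suspended   */
        unsigned int   priority;            /* scheduling priority                    */
        unsigned int  *stackPointer;        /* saved stack pointer for context switch */
        void         (*entryPoint)(void *); /* task entry function                    */
        struct TaskControlBlock *next;      /* link used by the scheduler's lists     */
    } TaskControlBlock;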
3. PROCESS SYNCHRONIZATION
Process synchronization in OS is the task of coordinating the execution of processes in such a way
that no two processes can access the same shared data and resources. It is a critical part of operating
system design, as it ensures that processes can safely share resources without interfering with each
other.
It helps maintain the consistency of data by using variables or hardware so that only one process can
make changes to the shared memory at a time. There are various solutions to this problem, such
as semaphores, mutex locks, synchronization hardware, etc.
Process synchronization is very helpful when multiple processes are running at the same time
and more than one process has access to the same data or resources at the same time.
Process synchronization is generally used in multi-process systems. When two or more
processes have access to the same data or resources at the same time, it can cause data
inconsistency, so to remove this data inconsistency the processes should be synchronized with
each other.
EXAMPLE:
Let us see how process synchronization in an OS works with the help of an example in which
different processes try to access the same data at the same time.
Suppose there are three processes: Process 1 is trying to write the shared data while Process 2 and
Process 3 are trying to read the same data, so there is a high chance that Process 2 and Process 3
might read wrong (inconsistent) data.
Entry Section:- This section decides whether a process may enter the critical section.
Critical Section:- This section ensures that only one process at a time accesses and modifies the
shared data or resources.
Exit Section:- This section releases the critical section when a process finishes, allowing a process
waiting in the entry section to proceed.
Remainder Section:- The remainder section contains the other parts of the code which are not in the
entry, critical, or exit sections.
Race Condition:
When more than one process is either running the same code or modifying the same memory or any
shared data, there is a risk that the result or value of the shared data may be incorrect because all
processes try to access and modify this shared resource. Thus, all the processes race to say that my
result is correct. This condition is called the race condition. Since many processes use the same data,
the results of the processes may depend on the order of their execution.
This is mostly a situation that can arise within the critical section. In the critical section, a race
condition occurs when the end result of multiple thread executions varies depending on the
sequence in which the threads execute.
A race condition can be avoided by treating the critical section as a section that can be accessed by
only a single process at a time. This kind of section is called an atomic section.
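As a small illustration of a race condition (a hypothetical sketch, not taken from the notes), suppose two tasks both increment a shared counter without any protection:

    int sharedCount = 0;        /* shared data accessed by both tasks */

    void vTaskA(void)           /* hypothetical task bodies */
    {
        /* sharedCount++ actually compiles to read, add, write.
           If a context switch occurs between the read and the write,
           the other task's update can be lost. */
        sharedCount++;
    }

    void vTaskB(void)
    {
        sharedCount++;
    }

If both tasks run "at the same time", the final value may be 1 instead of 2 depending on the interleaving, which is exactly the order-dependent result described above.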
Critical Section:
The critical section is a code segment where the shared variables can be accessed. An atomic action
is required in a critical section i.e. only one process can execute in its critical section at a time. All the
other processes have to wait to execute in their critical sections. The general structure of a process
with a critical section is as follows: the entry section handles the entry into the critical section and
acquires the resources needed for execution by the process; the exit section handles the exit from the
critical section, releases the resources, and informs the other processes that the critical section is free.
The wait() function mainly handles the entry to the critical section, while the signal() function
handles the exit from the critical section.
If we remove the critical section, we cannot guarantee the consistency of the end outcome after
all the processes finish executing simultaneously.
4. SEMAPHORE:
A semaphore is a signaling mechanism. It is available in all real-time operating systems with
some small differences in its implementation. Semaphores are used for synchronization (between
tasks or between tasks and interrupts) and managing allocation and access to shared resources.
• It's a technique to manage concurrent processes by using a simple integer value, which
is known as a semaphore.
• A semaphore is simply a non-negative variable shared between threads. This variable is used
to solve the critical section problem and to achieve process synchronization in a
multiprocessing environment.
• A semaphore 'S' is an integer variable that, apart from initialization, is accessed only through
two standard atomic operations: wait() and signal(). In the code below, wait() corresponds to the
classical operation 'P' and signal() to 'V'.
• RTOSs use many pairs of terms for these operations: raise and lower, get and give, take and release,
pend and post, p and v, wait and signal, and any number of other combinations.
• All modifications to the integer value of the semaphore in the wait() and signal() operations
must be executed indivisibly. That is, when one process modifies the semaphore value, no
other process can simultaneously modify that same semaphore value.
Definition of wait():
When a task invokes the wait() operation on the semaphore, it first checks whether the variable 'S' is
less than or equal to 0. If the condition is true, the task is stuck in the while loop; otherwise it
decrements 'S' and enters the critical section (or uses the shared resource). When another task then
invokes wait(), the condition is now true (S = 0), so that task is stuck in the loop because the
semaphore is held by the first task.
P(Semaphore S) {
    while (S <= 0)
        ;        // busy-wait: no operation
    S--;         // take the semaphore
}
Definition of signal():
After successfully using the critical section or shared resource, the task increments the variable 'S';
this is called releasing the semaphore. The other task is now free to access the resource by taking the semaphore.
V(Semaphore S) {
    S++;         // release the semaphore
}
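Putting the two operations together, here is a sketch of two tasks protecting a shared variable with the pseudocode operations defined above (the task bodies and the ReadSensor/Display helpers are hypothetical; a real RTOS semaphore would block the waiting task instead of busy-waiting):

    Semaphore S = 1;                 /* binary semaphore, initially available */
    int sharedTemperature;           /* shared data */

    void Task1(void) {
        P(S);                        /* wait(): take the semaphore        */
        sharedTemperature = ReadSensor();   /* critical section           */
        V(S);                        /* signal(): release the semaphore   */
    }

    void Task2(void) {
        P(S);
        Display(sharedTemperature);  /* critical section                  */
        V(S);
    }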
MUTEX:
• The mutex behaves like a token (key) and it restricts access to a resource.
• If a task wants to access the protected resource it must first acquire the token.
• Once obtained, the token is owned by the task and is released once the task is finished using the
protected resource.
These are the common operations that an RTOS task can perform with a mutex:
• Create/Delete a mutex
• Tasks can call two RTOS functions, TakeSemaphore and ReleaseSemaphore. If one task has
called TakeSemaphore to take the semaphore and has not called ReleaseSemaphore to release it,
then any other task that calls TakeSemaphore will block until the first task calls ReleaseSemaphore.
Only one task can have the semaphore at a time (a sketch of this pattern is given after this list).
• The semaphores are all independent of one another: if one task takes semaphore A, another task
can take semaphore B without blocking.
• Similarly, if one task is waiting for semaphore C, that task will still be blocked even if some
other task releases semaphore D.
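A minimal sketch of the pattern described above, using the TakeSemaphore/ReleaseSemaphore names from these notes; the SEMAPHORE handle type, the shared data, and the ReadHardware/RaiseAlarm helpers are hypothetical:

    SEMAPHORE semTemperatures;             /* protects the shared temperatures */
    int iTemperatures[2];                  /* shared data                      */

    void vReadTemperaturesTask(void)
    {
        for (;;)
        {
            TakeSemaphore(semTemperatures);       /* blocks if another task holds it */
            iTemperatures[0] = ReadHardware(0);   /* critical section                */
            iTemperatures[1] = ReadHardware(1);
            ReleaseSemaphore(semTemperatures);
        }
    }

    void vCheckTemperaturesTask(void)
    {
        for (;;)
        {
            TakeSemaphore(semTemperatures);
            if (iTemperatures[0] != iTemperatures[1])
                RaiseAlarm();                     /* readings are inconsistent */
            ReleaseSemaphore(semTemperatures);
        }
    }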
Semaphore problems:
When first reading about semaphores, it is very tempting to conclude that they represent the
solutions to all of our shared-data problems. This is not true. The problem is that semaphores
work only if you use them perfectly, and there are no guarantees that you will do that.
Forgetting to take the semaphore: Semaphores only work if every task that accesses
the shared data, for read or for write, uses the semaphore. If anybody forgets, then the RTOS
may switch away from the code that forgot to take the semaphore and cause an ugly shared-data
bug.
Forgetting to release the semaphore: If any task fails to release the semaphore, then
every other task that ever uses the semaphore will sooner or later block waiting to take that
semaphore and will be blocked forever.
Taking the wrong semaphore: If you are using multiple semaphores, then taking the wrong
one is as bad as forgetting to take one.
Holding a semaphore for too long: Whenever one task takes a semaphore, every other
task that subsequently wants that semaphore has to wait until the semaphore is released. If one
task takes the semaphore and then holds it for too long, other tasks may miss real-time deadlines.
Priority inversion: suppose a low-priority Task C holds a semaphore that a high-priority Task A is
waiting for, while a medium-priority Task B preempts Task C. Task C can't release the semaphore
until it gets the microprocessor back. No matter how carefully you code Task C, Task B can prevent
Task C from releasing the semaphore and can thereby hold up Task A indefinitely. This problem is
called priority inversion.
Some RTOSs resolve this problem with priority inheritance: they temporarily boost the priority
of Task C to that of Task A whenever Task C holds the semaphore and Task A is waiting for it.
Priority Ceiling Protocol:
The Priority Ceiling Protocol is designed to prevent priority inversion by setting a ceiling
priority for each resource (mutex).
Each resource (e.g., a mutex) is assigned a priority ceiling. The priority ceiling is typically
set to the priority of the highest-priority task that might lock the resource.
When a task locks a resource, it temporarily inherits the resource's priority ceiling if its
current priority is lower than the ceiling.
This ensures that the task holding the resource cannot be preempted by any other task that has
a lower priority than the ceiling, thus preventing other tasks from causing a priority inversion.
If a task attempts to lock a resource and its priority is lower than the current priority ceiling
(i.e., another task with a higher priority ceiling is holding a resource), the task is blocked until
the resource becomes available.
When a task releases a resource, it resumes its original priority. If other tasks are waiting for
the resource, they are allowed to proceed based on their priority and the priority ceiling rules.
Let's consider an example of the Priority Ceiling Protocol in a real-time operating system
(RTOS) with three tasks that share a resource, Resource X (e.g., protected by a mutex):
Task A has Priority 1 (lowest), Task C has Priority 2, and Task B has Priority 3 (highest), so the
priority ceiling of Resource X is Priority 3.
Task A starts executing and locks Resource X. Since Task A has the lowest priority, it
continues to execute normally.
According to the priority ceiling protocol, Task A’s priority is temporarily raised to Priority
3 (the priority ceiling of Resource X) while it holds the resource.
Task C becomes ready to execute. Normally, since Task C has a higher priority (Priority 2)
than Task A’s original priority (Priority 1), it would preempt Task A.
However, because Task A’s priority was raised to Priority 3 (the priority ceiling of Resource
X), Task C cannot preempt Task A. Task C is blocked and must wait until Task A releases
Resource X.
Task B, the highest-priority task, becomes ready and attempts to lock Resource X.
Task B is blocked because Task A is currently holding Resource X.
Task A continues to execute and eventually releases Resource X.
Upon releasing Resource X, Task A’s priority returns to its original level (Priority 1).
Now that Resource X is free, Task B, the highest-priority task, can proceed and lock
Resource X.
After Task B is done and releases Resource X, Task C, which was blocked earlier, can now
proceed.
If the priority ceiling protocol were not used, Task C could have preempted Task A, delaying
the release of Resource X and thus delaying Task B (the highest-priority task). This would
result in priority inversion.
INTER-PROCESS COMMUNICATION:
Inter process Communication (IPC) is a mechanism which allows the exchange of data between
processes. It enables resource and data sharing between the processes without interference.
Processes that execute concurrently in the operating system may be either independent processes or
cooperating processes.
A process is independent if it cannot affect, and is not affected by, the other processes executing in
the system. Any process that does not share data with any other process is independent.
A cooperating process, on the other hand, can affect or be affected by other processes executing in
the system. Any process that shares data with another process is called a cooperating process.
Processes cooperate for several reasons:
Information sharing − Several users may be interested in the same piece of information. We must
provide an environment that allows concurrent access to such information.
Computation speed-up − If we want a particular task to run faster, we must break it into
subtasks, each of which will execute in parallel with the others. Such a speed-up can be achieved
only if the computer has multiple processing elements.
Modularity − A system can be constructed in a modular fashion dividing the system functions
into separate processes or threads.
Convenience − An individual user may work on many tasks at the same time. For example, a user
may be editing, compiling, and printing in parallel.
Common IPC mechanisms include the following:
Pipes − A pipe is a one-way channel of communication that enables one process to transmit
data to another. Pipes can be named or unnamed (anonymous). The processes using an anonymous
pipe must be related (e.g., a parent and its child process); named pipes, on the other hand, may be
used by processes that are unrelated to one another.
Message Queues − Message queues are employed for inter-process interaction when the
sending and receiving processes do not need to be active at the same time; messages may be sent
and received asynchronously. A message in a queue has a particular destination, and the queue is
accessible to multiple processes.
Shared Memory − Shared memory is an inter-process communication method that enables
various programs to make use of a single storage region. This allows them to share data
efficiently, so shared memory is frequently employed in applications that require very fast data
exchange.
Semaphores − Semaphores are used to keep the utilization of shared resources
synchronized. They act as counters that limit the number of processes that
may use a shared resource at any given time. Semaphores are useful for
implementing critical sections in which only one process at a time has access to a
shared resource.
Sockets − Sockets constitute a communication mechanism that enables
processes to interact with one another over a network. Processes can communicate both
locally and remotely. Sockets are frequently used in client-server applications.
Remote Procedure Call (RPC) − RPC is a mechanism that enables one process to call an
operation in another. It allows a process to call procedures on remote systems as though they
were local, enabling distributed computing. RPC is frequently used in systems with distributed
components.
Signals − Signals are an asynchronous IPC mechanism used to notify a process of an
event or exception. The Operating System (OS) delivers signals to processes, and signals can
also be sent from one process to another. Event-driven behavior can be implemented
using signals.
SHARED MEMORY
Let us see the working condition of the shared memory system step by step.
Working:
In the Shared Memory system, the cooperating processes communicate, to exchange the data with
each other. Because of this, the cooperating processes establish a shared region in their memory. The
processes share data by reading and writing the data in the shared segment of the processes.
Let us discuss it by considering two processes. The diagram is shown below −
Consider two cooperating processes, P1 and P2. Both processes have their own, separate
address spaces. Now assume that P1 wants to share some data with P2.
Step 1 − Process P1 has some data to share with process P2. First P1 takes initiative and establishes a
shared memory region in its own address space and stores the data or information to be shared in its
shared memory region.
Step 2 − Now, P2 requires the information stored in the shared segment of P1. So, process P2 needs
to attach itself to the shared address space of P1. Now, P2 can read out the data from there.
Step 3 − The two processes can exchange information by reading and writing data in the shared
segment of the process.
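As a concrete illustration (not part of the original notes), here is a hedged sketch of these steps using the POSIX shared-memory API (shm_open, ftruncate, mmap); the name "/demo_shm", the region size, and the message are hypothetical, and error handling is omitted:

    #include <fcntl.h>
    #include <string.h>
    #include <sys/mman.h>
    #include <unistd.h>

    int main(void)
    {
        /* Step 1: P1 creates a shared memory region and writes data into it. */
        int fd = shm_open("/demo_shm", O_CREAT | O_RDWR, 0666);
        ftruncate(fd, 4096);                        /* size the region */
        char *region = mmap(NULL, 4096, PROT_READ | PROT_WRITE,
                            MAP_SHARED, fd, 0);
        strcpy(region, "hello from P1");            /* data to be shared */

        /* Step 2 (in process P2): call shm_open("/demo_shm", O_RDWR, 0666),
           mmap the same region, and read the string out of it. */
        return 0;
    }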
Advantages:
Shared memory is one of the fastest IPC mechanisms: once the shared region is established, data does
not have to be copied through the kernel for every exchange; the processes simply read and write the
shared segment directly.
MESSAGE QUEUE:
A message queue is a kernel object (i.e., a data structure) through which messages are sent
(i.e., posted) from either interrupt service routines (ISRs) or tasks to another task (i.e., pending).
An application can have any number of message queues, each one having its own purpose.
For example, a message queue can be used to pass packets received from a communication
interface ISR to a task, which in turn would be responsible for processing the packet.
Another queue can be used to pass content to a display task that will be responsible for
properly updating a display.
Figure 1
Messages are typically void pointers to a storage area containing the actual message.
However, the pointer can point to anything, even a function for the receiving task to execute.
The meaning of the message is thus application-dependent.
Each message queue is configurable in the amount of storage it will hold.
A message queue can be configured to hold a single message (a.k.a., a mailbox)
or N messages. The size of the queue depends on the application and how fast the receiving
task can process messages before the queue fills up.
If a task pends (i.e., waits) for a message and there are no messages in the queue, then the task
will block until a message is posted (i.e., sent) to the queue.
The waiting task consumes no CPU time while waiting for messages since the RTOS runs
other tasks.
As shown in Figure 1, the pending task can specify a timeout. If a message is not received
within the specified timeout, the task will be allowed to resume execution (i.e., unblock)
when that task becomes the highest priority task.
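A hedged sketch of the post and pend-with-timeout behaviour described above, written against a FreeRTOS-style queue API (xQueueCreate, xQueueSend, xQueueReceive); the queue depth, message type, and function names are hypothetical:

    #include "FreeRTOS.h"
    #include "queue.h"
    #include "task.h"

    static QueueHandle_t xPacketQueue;   /* holds pointers to received packets */

    /* Producer side: post a packet pointer without blocking if the queue is full. */
    void vPostPacket(void *pxPacket)
    {
        xQueueSend(xPacketQueue, &pxPacket, 0);
    }

    /* Consumer task: pend on the queue with a 100 ms timeout. */
    void vProcessingTask(void *pvParameters)
    {
        void *pxPacket;
        for (;;)
        {
            if (xQueueReceive(xPacketQueue, &pxPacket, pdMS_TO_TICKS(100)) == pdTRUE)
            {
                /* a message arrived: process the packet pointed to by pxPacket */
            }
            else
            {
                /* timeout expired with no message: do housekeeping instead */
            }
        }
    }

    /* At start-up (e.g., in main): xPacketQueue = xQueueCreate(8, sizeof(void *)); */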
A message queue is typically implemented as first-in-first-out (FIFO), meaning that the first
message received will be the first message extracted from the queue.
However, some kernels allow you to send messages that are deemed more important than
others and thus post them at the head of the queue, i.e., in last-in-first-out (LIFO) order,
making that message the first one to be extracted by the receiving task.
In many implementations of message queues in an RTOS, a message being sent to a queue is
discarded if the queue is already full.
Oftentimes this is not an issue and the logic of the application can recover from such
situations.
However, it’s fairly easy to implement a mechanism such that a sending task will block until
the receiver extracts one of the messages, as shown in Figure 2:
Figure 2
1. The counting semaphore is initialized with a value corresponding to the maximum number of
entries that the queue can accept.
2. The sending task pends on the semaphore before it’s allowed to post the message to the queue.
If the semaphore value is zero, the sender waits.
3. If the value is non-zero, the semaphore count is decremented, and the sender posts its message
to the queue.
4. The recipient of the message pends on the message queue as usual.
5. When a message is received, the recipient extracts the pointer to the message from the queue
and signals the semaphore, indicating that an entry in the queue has been freed up.
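A hedged sketch of this flow-control pattern using a FreeRTOS-style counting semaphore alongside the queue (handles and names are hypothetical; FreeRTOS itself can also achieve the same effect by giving xQueueSend a non-zero timeout, so this is purely illustrative of steps 1-5 above):

    #include "FreeRTOS.h"
    #include "queue.h"
    #include "semphr.h"

    #define QUEUE_DEPTH 8

    static QueueHandle_t     xMsgQueue;   /* the message queue                */
    static SemaphoreHandle_t xFreeSlots;  /* counts free entries in the queue */

    /* At start-up:
       xMsgQueue  = xQueueCreate(QUEUE_DEPTH, sizeof(void *));
       xFreeSlots = xSemaphoreCreateCounting(QUEUE_DEPTH, QUEUE_DEPTH);  (step 1) */

    void vSend(void *pvMsg)
    {
        xSemaphoreTake(xFreeSlots, portMAX_DELAY);  /* step 2: block while queue is full */
        xQueueSend(xMsgQueue, &pvMsg, 0);           /* step 3: post the message          */
    }

    void *pvReceive(void)
    {
        void *pvMsg;
        xQueueReceive(xMsgQueue, &pvMsg, portMAX_DELAY);  /* step 4: pend as usual      */
        xSemaphoreGive(xFreeSlots);                       /* step 5: one entry freed up */
        return pvMsg;
    }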
MAIL BOX
A mailbox (for a message) is an IPC mechanism in which a message posted by one of the tasks can be
received by only a single destined task.
The tasks rely on the kernel to allow them to write to the mailbox via a post operation, or extract a
message from it via a pend operation.
Generally, three types of operations can be performed on a mailbox
• Initialize (with or without a message)
• Deposit a message (POST)
• Wait for a message (PEND)
Through an OS function call, a task puts (posts, i.e., sends) into the mailbox only a pointer to the
mailbox message.
A mailbox message may also include a header to identify the message type.
The OS provides functions for inserting and deleting a message via the mailbox message pointer;
deleting means the message pointer points to Null.
Each mailbox needs initialization (creation) before the scheduler's message functions can be used,
with its message pointer initially pointing to Null.
There may be a provision for multiple mailboxes for the multiple types or destinations
of messages.
Each mailbox has an ID.
Each mailbox usually has one message pointer only, which can point to a single message.
Example:
If we want to communicate with different kinds of data packets - say process A sends message
type 1 to process B, message type 10 to process C, and message type 20 to process D - then it is
simpler to implement this with message queues, since a mailbox usually holds only a single message
for a single destination.
OS Mailbox functions:
Create: OSMBoxCreate - creates a box and initializes the mailbox contents with a NULL
pointer.
Post: OSMBoxPost - sends the message pointer, which now no longer points to Null.
Pend: OSMBoxWait (Pend) - waits for the message pointer to become non-Null; the message is read
once the pointer is not Null.
Accept: OSMBoxAccept - reads the message at the message pointer after checking whether a message
is present (yes or no).
Query: OSMBoxQuery - queries the mailbox message pointer.
Delete: OSMBoxDelete - deletes (reads) the mailbox message, after which the message pointer
again points to Null.
The typical RTOS has functions to create, to write to, and to read from mailboxes,
and perhaps functions to check whether the mailbox contains any messages and to
destroy the mailbox if it is no longer needed. The details of mailboxes, however,
are different in different RTOSs.
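A minimal sketch of how these calls might be combined, using the function names listed above; the OS_MBOX handle type, the message structure, and the exact signatures are hypothetical and vary between RTOSs:

    typedef struct {               /* hypothetical message layout            */
        int type;                  /* header identifying the message type    */
        int value;                 /* payload                                */
    } Msg;

    static OS_MBOX *mbox;          /* hypothetical mailbox handle type       */
    static Msg      msg;           /* storage for the message being passed   */

    void SenderTask(void)
    {
        msg.type  = 1;
        msg.value = 42;
        OSMBoxPost(mbox, &msg);            /* post only a pointer to the message */
    }

    void ReceiverTask(void)
    {
        Msg *received = OSMBoxWait(mbox);  /* pend until the pointer is non-Null */
        /* ... use received->type and received->value ... */
    }

    /* At start-up: mbox = OSMBoxCreate(NULL);  -- mailbox initially empty */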
PIPE:
Pipe is a communication medium between two or more related or interrelated processes. It can be
either within one process or a communication between the child and the parent processes.
Communication can also be multi-level such as communication between the parent, the child and the
grand-child, etc. Communication is achieved by one process writing into the pipe and the other
reading from the pipe. The pipe system call creates two file descriptors, one used to write into the
pipe and the other to read from it.
• Pipes are kernel objects that provide unstructured data exchange and facilitate
synchronization among tasks.
• In a traditional implementation, a pipe is a unidirectional data exchange facility, as shown in
the figure.
• Two descriptors, one for each end of the pipe (one end for reading and one for writing), are
returned when the pipe is created.
• Data is written via one descriptor and read via the other.
• The data remains in the pipe as an unstructured byte stream.
• Data is read from the pipe in FIFO order.
• A pipe provides a simple data flow facility so that the reader becomes blocked when the pipe
is empty, and the writer becomes blocked when the pipe is full.
• Typically, a pipe is used to exchange data between a data-producing task and a data-
consuming task, as shown in the figure.
• It is also permissible to have several writers for the pipe with multiple readers on it.
• The kernel creates and maintains pipe-specific information in an internal data structure called
a pipe control block.
• The structure of the pipe control block varies from one implementation to another.
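As an illustration (not from the notes), here is a hedged sketch of the classic POSIX usage: pipe() returns the two descriptors, the parent writes via one end and the child reads via the other; the message text is hypothetical:

    #include <stdio.h>
    #include <unistd.h>

    int main(void)
    {
        int fd[2];                       /* fd[0] = read end, fd[1] = write end */
        char buf[32];

        pipe(fd);                        /* create the pipe */

        if (fork() == 0) {               /* child: the data-consuming process   */
            close(fd[1]);                /* child only reads                    */
            read(fd[0], buf, sizeof(buf));
            printf("child read: %s\n", buf);
        } else {                         /* parent: the data-producing process  */
            close(fd[0]);                /* parent only writes                  */
            write(fd[1], "hello", 6);    /* data flows through the pipe (FIFO)  */
        }
        return 0;
    }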
CIRCULAR BUFFER:
A circular buffer is an array of constant length, and we use it to store data in a continuous loop. It is
also known as a ring buffer because it stores the data circularly. Data is read from the buffer in a
FIFO (first in, first out) manner, meaning that the oldest data is read first.
The circular buffer is particularly useful in scenarios where data is produced and consumed at
different rates.
Circular buffers have a pointer that points to the next empty position of the buffer, and
we increment this pointer with each new entry.
A circular buffer does not require shifting elements to make room for new data when the
buffer is full. Instead, when the buffer is full, new data is written over the oldest data.
Structure of a Circular Buffer:
Key Characteristics:
Fixed Size: The buffer has a fixed capacity, which means it can hold a limited amount of data
at any given time.
Wrap-around: When the head or tail pointer reaches the end of the array, it wraps around to
the beginning, hence the term "circular."
Working:
1. Initialization:
o The head and tail pointers are both initialized to the start of the buffer (e.g., index 0).
2. Data Insertion (Write Operation):
o Data is written to the buffer at the position indicated by the head pointer.
o After writing the data, the head pointer is incremented.
o If the head pointer reaches the end of the buffer, it wraps around to the beginning.
3. Data Retrieval (Read Operation):
o Data is read from the buffer at the position indicated by the tail pointer.
o After reading the data, the tail pointer is incremented.
o If the tail pointer reaches the end of the buffer, it wraps around to the beginning.
4. Full Buffer Condition:
o The buffer is considered full when advancing the head pointer would make it equal to
the tail pointer (after wrapping around). In this case, no more data can be written until
some data is read.
5. Empty Buffer Condition:
o The buffer is considered empty when the head pointer is equal to the tail pointer. In
this case, no data can be read until more data is written.
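A minimal C sketch of these operations (an illustrative implementation, not from the notes; the buffer size and names are hypothetical). It follows the full/empty conditions given above by keeping one slot free, so head == tail means empty and advancing head onto tail means full:

    #define BUF_SIZE 8                      /* fixed capacity (one slot kept free) */

    typedef struct {
        unsigned char data[BUF_SIZE];
        int head;                           /* next position to write              */
        int tail;                           /* next position to read               */
    } CircBuf;

    void cb_init(CircBuf *cb)               /* 1. Initialization                   */
    {
        cb->head = cb->tail = 0;
    }

    int cb_write(CircBuf *cb, unsigned char byte)   /* 2. Data insertion           */
    {
        int next = (cb->head + 1) % BUF_SIZE;       /* wrap-around                 */
        if (next == cb->tail)                       /* 4. full: would catch tail   */
            return -1;                              /* reject (or overwrite oldest)*/
        cb->data[cb->head] = byte;
        cb->head = next;
        return 0;
    }

    int cb_read(CircBuf *cb, unsigned char *byte)   /* 3. Data retrieval           */
    {
        if (cb->head == cb->tail)                   /* 5. empty condition          */
            return -1;
        *byte = cb->data[cb->tail];
        cb->tail = (cb->tail + 1) % BUF_SIZE;       /* wrap-around                 */
        return 0;
    }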
Advantages:
Efficient Use of Memory: Circular buffers make efficient use of a fixed amount of memory
without the need for dynamic memory allocation.
Low Overhead: The operations of reading from and writing to a circular buffer are simple
and fast, typically involving only pointer increments and simple checks.
Concurrency: Circular buffers are well-suited for concurrent operations where one task (or
ISR) writes data into the buffer while another reads data from it. This is common in producer-
consumer scenarios.
Applications:
Data Stream Buffering: Circular buffers are commonly used to buffer data streams in
communication protocols, such as UART or SPI, where data is received and needs to be
stored temporarily before processing.
Task Communication: In a multi-tasking environment, one task might produce data (e.g.,
sensor readings), and another task might consume it (e.g., logging or processing). A circular
buffer allows for smooth communication between these tasks without requiring complex
synchronization mechanisms.
Interrupt Handling: Circular buffers are often used to store data from interrupt service
routines (ISRs) until the main processing task is ready to handle it. This helps in decoupling
the timing of data production (interrupts) from data consumption.
Example:
UART ISR: When data arrives from a serial port, an ISR writes the received data into a
circular buffer.
Processing Task: A separate task reads data from the circular buffer and processes it (e.g.,
parsing commands or storing it in a file).
This setup ensures that the ISR can quickly store incoming data without being blocked by the
processing task, and the processing task can consume the data at its own pace.
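Building on the cb_write/cb_read sketch above, the UART example might look like the following; the ISR entry point and the task loop are hypothetical, and the hardware register read is only indicated in a comment:

    static CircBuf uartRxBuf;                 /* shared between the ISR and the task */

    void UART_RX_ISR(void)                    /* hypothetical ISR entry point        */
    {
        unsigned char byte = 0;               /* in real code: read the UART data register */
        cb_write(&uartRxBuf, byte);           /* store quickly; the ISR never blocks */
    }

    void vUartProcessingTask(void)
    {
        unsigned char byte;
        for (;;)
        {
            while (cb_read(&uartRxBuf, &byte) == 0)
            {
                /* parse or log the byte at the task's own pace */
            }
            /* ... then pend/sleep until more data is expected ... */
        }
    }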