Thread libraries and implicit threading
A thread is a lightweight process and the basic unit of CPU utilization; it consists of a program
counter, a stack, and a set of registers.
A traditional process has a single thread of control: one program counter, and one sequence of
instructions carried out at any given time. By dividing an application or a program into multiple
sequential threads that run in quasi-parallel, the programming model becomes simpler. Threads share
an address space and all of its data among themselves, which is essential for some applications.
Threads are lighter weight than processes, and they are faster to create and destroy than processes.
Thread Library
A thread library provides the programmer with an application programming interface (API) for
creating and managing threads.
There are two primary ways of implementing thread library, which are as follows −
The first approach is to provide a library entirely in user space with no kernel support. All code
and data structures for the library exist in user space, and invoking a function in the library
results in a local function call in user space rather than a system call.
The second approach is to implement a kernel-level library supported directly by the
operating system. In this case, the code and data structures for the library exist in kernel space,
and invoking a function in the API for the library typically results in a system call to the kernel.
The main thread libraries which are used are given below −
POSIX threads − Pthreads, the threads extension of the POSIX standard, may be provided as
either a user-level or a kernel-level library (a minimal Pthreads sketch follows this list).
Win32 threads − The Windows thread library is a kernel-level library available on Windows
systems.
Java threads − The Java thread API allows threads to be created and managed directly in
Java programs.
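The sketch below, assuming a hypothetical worker() start routine and message (neither is from the
original text), shows the basic shape of the Pthreads API: a thread is created with pthread_create()
and waited on with pthread_join(). Compile with gcc -pthread.
#include <pthread.h>
#include <stdio.h>

/* Thread start routine: prints a message and returns. */
void *worker(void *arg)
{
    printf("Hello from the worker thread\n");
    return NULL;
}

int main(void)
{
    pthread_t tid;

    /* Create the thread; pthread_create returns 0 on success. */
    if (pthread_create(&tid, NULL, worker, NULL) != 0) {
        fprintf(stderr, "pthread_create failed\n");
        return 1;
    }

    /* Wait for the worker thread to finish before exiting. */
    pthread_join(tid, NULL);

    return 0;
}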
Implicit Threading
One way to address the difficulties of multithreaded programming and better support the design of
multithreaded applications is to transfer the creation and management of threads from application
developers to compilers and run-time libraries. This approach, termed implicit threading, is a
popular trend today.
Implicit threading is mainly the use of libraries or other language support to hide the management of
threads. The most common implicit threading library is OpenMP, in the context of C.
OpenMP is a set of compiler directives as well as an API for programs written in C, C++, or
FORTRAN that provides support for parallel programming in shared-memory environments. OpenMP
identifies parallel regions as blocks of code that may run in parallel. Application developers insert
compiler directives into their code at parallel regions, and these directives instruct the OpenMP
run-time library to execute the region in parallel. The following C program illustrates a compiler directive
above the parallel region containing the printf() statement:
#include <omp.h>
#include <stdio.h>

int main(int argc, char *argv[])
{
    /* sequential code */

    /* Each thread in the team executes the block below. */
    #pragma omp parallel
    {
        printf("I am a parallel region\n");
    }

    /* sequential code */

    return 0;
}
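As a further sketch (the array name and size are illustrative, not from the original text), OpenMP
also provides work-sharing directives such as #pragma omp parallel for, which divides the iterations
of a loop among the threads of the parallel region. Compile with gcc -fopenmp.
#include <omp.h>
#include <stdio.h>

#define N 8

int main(void)
{
    int a[N];

    /* Work-sharing directive: loop iterations are divided among
     * the threads of the enclosing parallel region. */
    #pragma omp parallel for
    for (int i = 0; i < N; i++) {
        a[i] = i * i;
    }

    /* Sequential code: print the results computed in parallel. */
    for (int i = 0; i < N; i++)
        printf("a[%d] = %d\n", i, a[i]);

    return 0;
}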
Multicore and FPGA processing help to increase the performance of an embedded system.
They also help to achieve scalability, so the system can take advantage of increasing numbers of
cores and of FPGA processing power over time.
Concurrent systems created with multicore programming have multiple tasks executing in
parallel; this is known as concurrent execution. When a processor executes multiple parallel tasks,
it is known as multitasking. A CPU scheduler handles the tasks that execute in parallel, and the
tasks are implemented as operating system threads. Tasks execute independently but may still need
to transfer data between them, for example between a data acquisition module and the controller of
the system. Such data transfer occurs when there is a data dependency, as the sketch below illustrates.
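A minimal sketch of such a data dependency, assuming hypothetical acquisition_task() and
controller_task() functions and a shared sample variable (none of which come from the original
text): the producing thread is joined before the consuming thread starts, and a mutex guards the
shared data.
#include <pthread.h>
#include <stdio.h>

static int shared_sample = 0;                 /* data passed between tasks */
static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

/* Producer task: writes a new sample into the shared variable. */
void *acquisition_task(void *arg)
{
    pthread_mutex_lock(&lock);
    shared_sample = 42;                       /* stand-in for a sensor reading */
    pthread_mutex_unlock(&lock);
    return NULL;
}

/* Consumer task: reads the sample produced by the acquisition task. */
void *controller_task(void *arg)
{
    pthread_mutex_lock(&lock);
    printf("controller received sample: %d\n", shared_sample);
    pthread_mutex_unlock(&lock);
    return NULL;
}

int main(void)
{
    pthread_t producer, consumer;

    pthread_create(&producer, NULL, acquisition_task, NULL);
    pthread_join(producer, NULL);             /* enforce the data dependency */

    pthread_create(&consumer, NULL, controller_task, NULL);
    pthread_join(consumer, NULL);

    return 0;
}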