Linux CT2


Q. 1 to Q. 7
1. What is a process
2. Process descriptor
3. Allocation and storing the process descriptor
4. Process state
5. Process context / context switching
6. Process creation (fork(), vfork(), and clone() system calls)
7. Process termination
12. User preemption

https://www.informit.com/articles/article.aspx?p=368650
(5 page article covers all above topics)
1. What is a Process?
An instance of a running program is called a process. Every time you run a shell
command, a program is run and a process is created for it. Each process in Linux has a
process ID (PID) and is associated with a particular user and group account.

Linux is a multitasking operating system, which means that multiple programs can be
running at the same time (processes are also known as tasks). Each process has the
illusion that it is the only process on the computer. The tasks share common processing
resources (like CPU and memory).

2. Process Descriptor
Process Descriptor and the Task Structure
The kernel stores the list of processes in a circular doubly linked list called the task list.
Each element in the task list is a process descriptor of type struct task_struct, which
is defined in <linux/sched.h>. The process descriptor contains all the information about a
specific process.

The task_struct is a relatively large data structure, at around 1.7 kilobytes on a 32-bit
machine. This size, however, is quite small considering that the structure contains all the
information that the kernel has and needs about a process. The process descriptor
contains the data that describes the executing program—open files, the process's
address space, pending signals, the process's state, and much more.
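For orientation, here is a heavily abridged sketch of what such a descriptor looks like; the
real struct task_struct in <linux/sched.h> has hundreds of fields, and the selection below is
only illustrative of the items listed above.

struct task_struct {
    volatile long         state;     /* -1 unrunnable, 0 runnable, >0 stopped */
    int                   prio;      /* dynamic priority */
    struct mm_struct      *mm;       /* the process's address space */
    pid_t                 pid;       /* process identifier */
    struct task_struct    *parent;   /* parent process */
    struct list_head      children;  /* list of this process's children */
    struct files_struct   *files;    /* open file information */
    struct signal_struct  *signal;   /* signal information */
    /* ...many more fields... */
};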

3. Allocation and Storing the Process Descriptor


Allocating the Process Descriptor
The task_struct structure is allocated via the slab allocator to provide object reuse and
cache coloring (see Chapter 11, "Memory Management"). Prior to the 2.6 kernel series,
struct task_struct was stored at the end of the kernel stack of each process. This allowed
architectures with few registers, such as x86, to calculate the location of the process
descriptor via the stack pointer without using an extra register to store the location. With
the process descriptor now dynamically created via the slab allocator, a new structure,
struct thread_info, was created that again lives at the bottom of the stack (for stacks that
grow down) and at the top of the stack (for stacks that grow up). See Figure 3.2.

The new structure also makes it rather easy to calculate offsets of its values for use in
assembly code.

The thread_info structure is defined on x86 in <asm/thread_info.h> as

struct thread_info {
    struct task_struct   *task;          /* main task structure */
    struct exec_domain   *exec_domain;   /* execution domain */
    unsigned long        flags;          /* low-level flags */
    unsigned long        status;         /* thread-synchronous flags */
    __u32                cpu;            /* current CPU */
    __s32                preempt_count;  /* 0 => preemptible */
    mm_segment_t         addr_limit;     /* thread address space limit */
    struct restart_block restart_block;  /* for restarting interrupted system calls */
    unsigned long        previous_esp;   /* ESP of the previous stack (nested stacks) */
    __u8                 supervisor_stack[0];
};
Each task's thread_info structure is allocated at the end of its stack. The task element of
the structure is a pointer to the task's actual task_struct.

Storing the Process Descriptor


The system identifies processes by a unique process identification value, or PID. The
PID is a numerical value represented by the opaque type pid_t, which is typically
an int. Because of backward compatibility with earlier Unix and Linux versions, however,
the default maximum value is only 32,768 (that of a short int), although the value can
optionally be increased to the full range afforded by the type. The kernel stores this value
as pid inside each process descriptor.

This maximum value is important because it is essentially the maximum number of
processes that may exist concurrently on the system. Although 32,768 might be
sufficient for a desktop system, large servers may require many more processes. The
lower the value, the sooner the values will wrap around, destroying the useful notion that
higher values indicate later run processes than lower values. If the system is willing to
break compatibility with old applications, the administrator may increase the maximum
value via /proc/sys/kernel/pid_max.
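As a quick illustration (a user-space sketch, not kernel code), the current limit can be read
back from procfs:

#include <stdio.h>

int main(void)
{
    FILE *f = fopen("/proc/sys/kernel/pid_max", "r");
    long pid_max;

    if (f && fscanf(f, "%ld", &pid_max) == 1)
        printf("pid_max = %ld\n", pid_max);   /* 32768 unless the admin raised it */
    if (f)
        fclose(f);
    return 0;
}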
Inside the kernel, tasks are typically referenced directly by a pointer to their task_struct
structure. In fact, most kernel code that deals with processes works directly with struct
task_struct. Consequently, it is very useful to be able to quickly look up the process
descriptor of the currently executing task, which is done via the current macro. This
macro must be separately implemented by each architecture. Some architectures save a
pointer to the task_struct structure of the currently running process in a register, allowing
for efficient access. Other architectures, such as x86 (which has few registers to waste),
make use of the fact that struct thread_info is stored on the kernel stack to calculate the
location of thread_info and subsequently the task_struct.

On x86, current is calculated by masking out the 13 least significant bits of the stack
pointer to obtain the thread_info structure. This is done by the current_thread_info()
function. The assembly is shown here:

movl $-8192, %eax
andl %esp, %eax

This assumes that the stack size is 8KB. When 4KB stacks are enabled, 4096 is used in
lieu of 8192.

Finally, current dereferences the task member of thread_info to return the task_struct:

current_thread_info()->task;
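Putting the two steps together, a minimal C rendering of this x86 scheme might look as
follows. This is only a sketch assuming 8KB stacks; the inline assembly and masking
constant mirror the snippet above rather than any particular kernel version.

static inline struct thread_info *current_thread_info(void)
{
    unsigned long esp;

    /* grab the stack pointer and mask off the 13 least significant bits */
    asm("movl %%esp, %0" : "=r" (esp));
    return (struct thread_info *)(esp & ~(8192UL - 1));
}

#define current (current_thread_info()->task)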
Contrast this approach with that taken by PowerPC (IBM's modern RISC-based
microprocessor), which stores the current task_struct in a register. Thus, current on PPC
merely returns the value stored in the register r2. PPC can take this approach because,
unlike x86, it has plenty of registers. Because accessing the process descriptor is a
common and important job, the PPC kernel developers deem using a register worthy for
the task.

4. Process States
The state field of the process descriptor describes the current condition of the process
(see Figure 3.3). Each process on the system is in exactly one of five different states.
This value is represented by one of five flags:

TASK_RUNNING—The process is runnable; it is either currently running or on a
runqueue waiting to run (runqueues are discussed in Chapter 4, "Scheduling"). This is
the only possible state for a process executing in user-space; it can also apply to a
process in kernel-space that is actively running.

TASK_INTERRUPTIBLE—The process is sleeping (that is, it is blocked), waiting for
some condition to exist. When this condition exists, the kernel sets the process's state to
TASK_RUNNING. The process also awakes prematurely and becomes runnable if it
receives a signal.

TASK_UNINTERRUPTIBLE—This state is identical to TASK_INTERRUPTIBLE except
that it does not wake up and become runnable if it receives a signal. This is used in
situations where the process must wait without interruption or when the event is
expected to occur quite quickly. Because the task does not respond to signals in this
state, TASK_UNINTERRUPTIBLE is less often used than TASK_INTERRUPTIBLE.

TASK_ZOMBIE—The task has terminated, but its parent has not yet issued a wait4()
system call. The task's process descriptor must remain in case the parent wants to
access it. If the parent calls wait4(), the process descriptor is deallocated.

TASK_STOPPED—Process execution has stopped; the task is not running nor is it
eligible to run. This occurs if the task receives the SIGSTOP, SIGTSTP, SIGTTIN, or
SIGTTOU signal or if it receives any signal while it is being debugged.
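Kernel code changes these states with helpers such as set_current_state(); the fragment
below is only a sketch of the common "sleep until woken" idiom, shown out of context
rather than as a complete wait loop.

set_current_state(TASK_INTERRUPTIBLE);  /* mark the current task as sleeping */
schedule();                             /* give up the processor until woken */
/* back in TASK_RUNNING once a wake-up (or a signal) arrives */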

5. Process Context / Context Switching


Process Context
One of the most important parts of a process is the executing program code. This code
is read in from an executable file and executed within the program's address space.
Normal program execution occurs in user-space. When a program executes a system
call (see Chapter 5, "System Calls") or triggers an exception, it enters kernel-space. At
this point, the kernel is said to be "executing on behalf of the process" and is in process
context. When in process context, the current macro is valid. Upon exiting the kernel,
the process resumes execution in user-space, unless a higher-priority process has
become runnable in the interim, in which case the scheduler is invoked to select the
higher priority process.

System calls and exception handlers are well-defined interfaces into the kernel. A
process can begin executing in kernel-space only through one of these interfaces—all
access to the kernel is through these interfaces.

6. Process Creation: fork() and clone() System Calls


a. Fork():
Processes execute the fork() system call to create a new child process.
The process executing the fork() call is called a parent process. The child
process created receives a unique Process Identifier (PID) but retains the
parent’s PID as its Parent Process Identifier (PPID).
The child process has identical data to its parent process. However, both
processes have separate address spaces.
After the creation of the child process, both the parent and child processes
execute simultaneously. They execute the next step after the fork() system call.
Since the parent and child processes have different address spaces, any
modifications made to one process will not reflect on the other.
Later improvements introduced the copy-on-write mechanism, which lets the
parent and child processes initially share the same physical pages instead of
copying all of the parent's data at fork time. If either process modifies a
shared page, the kernel copies just that page, so the two processes continue
to run independently with logically separate address spaces.
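A minimal user-space sketch of fork() in action: parent and child both continue after the
call, distinguished by fork()'s return value, and the parent reaps the child with wait().

#include <stdio.h>
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void)
{
    pid_t pid = fork();

    if (pid < 0) {                    /* fork failed */
        perror("fork");
        return 1;
    } else if (pid == 0) {            /* child: new PID, PPID is the parent's PID */
        printf("child: pid=%d ppid=%d\n", getpid(), getppid());
    } else {                          /* parent: fork() returned the child's PID */
        printf("parent: pid=%d child=%d\n", getpid(), pid);
        wait(NULL);                   /* reap the child so it does not linger as a zombie */
    }
    return 0;
}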
b. Clone():
The clone() system call is an upgraded version of the fork call. It’s powerful since
it creates a child process and provides more precise control over the data shared
between the parent and child processes. The caller of this system call can control
the table of file descriptors, the table of signal handlers, and whether the two
processes share the same address space.
The clone() system call allows the child process to be placed in different
namespaces. With the flexibility that comes with using the clone() system call, we
can choose to share an address space with the parent process, emulating the
vfork() system call. We can also choose to share file system information, open
files, and signal handlers using different flags available.
This is the signature of the clone() system call:

int clone(int (*fn)(void *), void *stack, int flags, void *arg, ...
                 /* pid_t *parent_tid, void *tls, pid_t *child_tid */ );
Let's break down some parts to understand more:

• *fn: pointer to the function the child process will execute
• *stack: points to the smallest byte of the stack set up for the child
• pid_t: process identifier (PID)
• *parent_tid: points to the storage location of the child process's thread identifier
  (TID) in the parent process's memory
• *child_tid: points to the storage location of the child process's thread identifier
  (TID) in the child process's memory
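The sketch below shows one illustrative use of the glibc clone() wrapper: the child runs
child_fn on a separate stack while sharing the parent's address space (CLONE_VM), roughly
the vfork()-style sharing mentioned above. The stack size and flag choice are assumptions
made for the example.

#define _GNU_SOURCE
#include <sched.h>
#include <signal.h>
#include <stdio.h>
#include <stdlib.h>
#include <sys/wait.h>
#include <unistd.h>

static int child_fn(void *arg)
{
    printf("child: pid=%d arg=%s\n", getpid(), (const char *)arg);
    return 0;
}

int main(void)
{
    const size_t stack_size = 1024 * 1024;
    char *stack = malloc(stack_size);
    if (!stack) { perror("malloc"); return 1; }

    /* Stacks grow downward on most architectures, so pass the top of the buffer. */
    pid_t pid = clone(child_fn, stack + stack_size, CLONE_VM | SIGCHLD, "hello");
    if (pid == -1) { perror("clone"); return 1; }

    waitpid(pid, NULL, 0);   /* wait for the child to finish */
    free(stack);
    return 0;
}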
7. Process Termination
I/O-Bound vs. Processor-Bound (CPU-Bound) Processes
Processes can be classified as either I/O-bound or processor-bound. The former is
characterized as a process that spends much of its time submitting and waiting on I/O
requests.
Consequently, such a process is runnable for only short durations, because it eventually
blocks waiting on more I/O. (Here, by I/O, we mean any type of blockable resource,
such as keyboard input or network I/O, and not just disk I/O.) Most graphical user
interface (GUI) applications, for example, are I/O-bound, even if they never read from or
write to the disk, because they spend most of their time waiting on user interaction via
the keyboard and mouse.
Conversely, processor-bound processes spend much of their time executing code. They
tend to run until they are preempted because they do not block on I/O requests very
often. Because they are not I/O-driven, however, system response does not dictate that
the scheduler run them often. A scheduler policy for processor-bound processes,
therefore, tends to run such processes less frequently but for longer durations. The
ultimate example of a processor-bound process is one executing an infinite loop. More
palatable examples include programs that perform a lot of mathematical calculations,
such as ssh-keygen or MATLAB.
Of course, these classifications are not mutually exclusive. Processes can exhibit both
behaviors simultaneously: The X Window server, for example, is both processor- and
I/O-intense. Other processes can be I/O-bound but dive into periods of intense processor
action. A good example of this is a word processor, which normally sits waiting for key
presses but at any moment might peg the processor in a rabid fit of spell checking or
macro calculation.
The scheduling policy in a system must attempt to satisfy two conflicting goals: fast
process response time (low latency) and maximal system utilization (high throughput). To
satisfy these at-odds requirements, schedulers often employ complex algorithms to
determine the most worthwhile process to run while not compromising fairness to other,
lower priority, processes. The scheduler policy in Unix systems tends to explicitly favor
I/O-bound processes, thus providing good process response time. Linux, aiming to
provide good interactive response and desktop performance, optimizes for process
response (low latency), thus favoring I/O-bound processes over processor-bound
processes. As we will see, this is done in a creative manner that does not neglect
processor-bound processes.

Nice Values
The Linux kernel implements two separate priority ranges. The first is the nice value, a
number from –20 to +19 with a default of 0. Larger nice values correspond to a lower
priority—you are being "nice" to the other processes on the system. Processes with a
lower nice value (higher priority) receive a larger proportion of the system's processor
compared to processes with a higher nice value (lower priority). Nice values are the
standard priority range used in all Unix systems, although different Unix systems apply
them in different ways, reflective of their individual scheduling algorithms. In other Unix-
based systems, such as Mac OS X, the nice value is a control over the absolute
timeslice allotted to a process; in Linux, it is a control over the proportion of
timeslice. You can see a list of the processes on your system and their respective nice
values (under the column marked NI) with the command ps -el.
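Besides ps, the nice value can be inspected and changed programmatically; the sketch below
uses the standard getpriority()/setpriority() calls, with the value 10 chosen arbitrarily for
illustration.

#include <errno.h>
#include <stdio.h>
#include <sys/resource.h>

int main(void)
{
    /* PRIO_PROCESS with who == 0 means "the calling process". */
    if (setpriority(PRIO_PROCESS, 0, 10) == -1)
        perror("setpriority");

    errno = 0;
    int nice_val = getpriority(PRIO_PROCESS, 0);
    if (errno != 0)
        perror("getpriority");
    else
        printf("nice value is now %d\n", nice_val);
    return 0;
}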

RT Priority
The second range is the real-time priority. The values are configurable, but by default
range from 0 to 99, inclusive. Opposite from nice values, higher real-time priority values
correspond to a greater priority. All real-time processes are at a higher priority than
normal processes; that is, the real-time priority and nice value are in disjoint value
spaces. Linux implements real-time priorities in accordance with the relevant Unix
standards, specifically POSIX.1b. All modern Unix systems implement a similar
scheme. You can see a list of the processes on your system and their respective real-
time priority (under the column marked RTPRIO) with the command ps -eo
state,uid,pid,ppid,rtprio,time,comm. A value of "-" means the process is not real-time.
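Programmatically, a process can request a real-time policy with sched_setscheduler(); the
sketch below asks for SCHED_FIFO at priority 50 (an arbitrary example value) and normally
needs root or CAP_SYS_NICE to succeed.

#include <sched.h>
#include <stdio.h>

int main(void)
{
    struct sched_param sp = { .sched_priority = 50 };

    /* pid 0 means "the calling process" */
    if (sched_setscheduler(0, SCHED_FIFO, &sp) == -1) {
        perror("sched_setscheduler");
        return 1;
    }
    printf("running as SCHED_FIFO, priority %d\n", sp.sched_priority);
    return 0;
}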

Time Slice
The timeslice is the numeric value that represents how long a task can run until it is
preempted. The scheduler policy must dictate a default timeslice, which is not a trivial
exercise. Too long a timeslice causes the system to have poor interactive performance;
the system will no longer feel as if applications are concurrently executed. Too short a
timeslice causes significant amounts of processor time to be wasted on the overhead of
switching processes because a significant percentage of the system's time is spent
switching from one process with a short timeslice to the next. Furthermore, the
conflicting goals of I/O-bound versus processor-bound processes again arise: I/O-bound
processes do not need longer timeslices (although they do like to run often), whereas
processor-bound processes crave long timeslices (to keep their caches hot). With this
argument, it would seem that any long timeslice would result in poor interactive
performance. In many operating systems, this observation is taken to heart, and the
default timeslice is rather low—for example, 10 milliseconds. Linux's CFS scheduler,
however, does not directly assign timeslices to processes. Instead, in a novel approach,
CFS assigns processes a proportion of the processor. On Linux, therefore, the amount
of processor time that a process receives is a function of the load of the system. This
assigned proportion is further affected by each process's nice value. The nice value acts
as a weight, changing the proportion of the processor time each process receives.
Processes with higher nice values (a lower priority) receive a deflationary weight,
yielding them a smaller proportion of the processor; processes with smaller nice values
(a higher priority) receive an inflationary weight, netting them a larger proportion of the
processor.
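As a back-of-the-envelope illustration of this proportional weighting (the kernel uses a
fixed lookup table; the 1.25-per-nice-level factor below is only an approximation of it):

#include <math.h>
#include <stdio.h>

/* Rough stand-in for the kernel's nice-to-weight table: each nice level
 * changes the weight by roughly a factor of 1.25, with nice 0 near 1024. */
static double nice_to_weight(int nice)
{
    return 1024.0 / pow(1.25, nice);
}

int main(void)
{
    double w0 = nice_to_weight(0);   /* about 1024 */
    double w5 = nice_to_weight(5);   /* about 335  */

    printf("nice 0 share: %.1f%%, nice 5 share: %.1f%%\n",
           100.0 * w0 / (w0 + w5), 100.0 * w5 / (w0 + w5));
    return 0;
}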

User Preemption and Kernel Preemption


User Preemption
User preemption occurs when the kernel is about to return to user-space, need_resched
is set, and therefore, the scheduler is invoked. If the kernel is returning to user-space, it
knows it is in a safe quiescent state. In other words, if it is safe to continue executing the
current task, it is also safe to pick a new task to execute. Consequently, whenever the
kernel is preparing to return to user-space either on return from an interrupt or after a
system call, the value of need_resched is checked. If it is set, the scheduler is invoked to
select a new (more fit) process to execute. Both the return paths for return from interrupt
and return from system call are architecture-dependent and typically implemented in
assembly in entry.S (which, aside from kernel entry code, also contains kernel exit
code).
In short, user preemption can occur:
• When returning to user-space from a system call
• When returning to user-space from an interrupt handler
Kernel Preemption
The Linux kernel, unlike most other Unix variants and many other operating systems, is
a fully preemptive kernel. In nonpreemptive kernels, kernel code runs until completion.
That is, the scheduler cannot reschedule a task while it is in the kernel—kernel code is
scheduled cooperatively, not preemptively. Kernel code runs until it finishes (returns to
user-space) or explicitly blocks. In the 2.6 kernel, however, the Linux kernel became
preemptive: It is now possible to preempt a task at any point, so long as the kernel is in a
state in which it is safe to reschedule.
So when is it safe to reschedule? The kernel can preempt a task running in the kernel
so long as it does not hold a lock. That is, locks are used as markers of regions of
nonpreemptibility. Because the kernel is SMP-safe, if a lock is not held, the current code
is reentrant and capable of being preempted.
The first change in supporting kernel preemption was the addition of a preemption
counter, preempt_count, to each process's thread_info. This counter begins at zero and
increments once for each lock that is acquired and decrements once for each lock that is
released. When the counter is zero, the kernel is preemptible. Upon return from interrupt,
if returning to kernel-space, the kernel checks the values of need_resched and
preempt_count. If need_resched is set and preempt_count is zero, then a more
important task is runnable, and it is safe to preempt. Thus, the scheduler is invoked. If
preempt_count is nonzero, a lock is held, and it is unsafe to reschedule. In that case, the
interrupt returns as usual to the currently executing task. When all the locks that the
current task is holding are released, preempt_count returns to zero. At that time, the
unlock code checks whether need_resched is set. If so, the scheduler is invoked. Enabling
and disabling kernel preemption is sometimes required in kernel code and is discussed in
Chapter 9.
Kernel preemption can also occur explicitly, when a task in the kernel blocks or
explicitly calls schedule(). This form of kernel preemption has always been supported
because no additional logic is required to ensure that the kernel is in a state that is safe
to preempt. It is assumed that the code that explicitly calls schedule() knows it is safe to
reschedule.

Kernel preemption can occur:
• When an interrupt handler exits, before returning to kernel-space
• When kernel code becomes preemptible again
• If a task in the kernel explicitly calls schedule()
• If a task in the kernel blocks (which results in a call to schedule())
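The decision described above boils down to a check on two values; the fragment below is a
plain user-space model of it (the names mirror the text, but this is not kernel code).

#include <stdbool.h>
#include <stdio.h>

static bool need_resched  = true;   /* a higher-priority task became runnable */
static int  preempt_count = 0;      /* zero means no locks are held */

int main(void)
{
    if (need_resched && preempt_count == 0)
        puts("safe to preempt: invoke the scheduler");
    else
        puts("unsafe to preempt: return to the interrupted task");
    return 0;
}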

Locking
The fundamental issue surrounding locking is the need to provide synchronization in
certain code paths in the kernel. These code paths, called critical sections, require some
combination of concurrency or re-entrancy protection and proper ordering with respect to
other events. The typical result without proper locking is called a race condition. Realize
how even a simple i++ is dangerous if i is shared! Consider the case where one
processor reads i, then another, then they both increment it, then they both write i back
to memory. If i were originally 2, it should now be 4, but in fact it would be 3!
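The same i++ race can be reproduced and fixed in user space; the sketch below protects the
read-modify-write with a pthread mutex purely for illustration, whereas inside the kernel
the equivalent protection would come from spinlocks or atomic operations.

#include <pthread.h>
#include <stdio.h>

static int i = 2;
static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

static void *incr(void *arg)
{
    (void)arg;
    pthread_mutex_lock(&lock);    /* protect the read-modify-write of i */
    i++;
    pthread_mutex_unlock(&lock);
    return NULL;
}

int main(void)
{
    pthread_t a, b;

    pthread_create(&a, NULL, incr, NULL);
    pthread_create(&b, NULL, incr, NULL);
    pthread_join(a, NULL);
    pthread_join(b, NULL);
    printf("i = %d\n", i);        /* always 4 with the lock; could be 3 without it */
    return 0;
}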

This is not to say that the only locking issues arise from SMP (symmetric
multiprocessing). Interrupt handlers create locking issues, as does the new preemptible
kernel, and any code can block (go to sleep). Of these, only SMP is considered true
concurrency, i.e., only with SMP can two things actually occur at the exact same time.
The other situations—interrupt handlers, the preemptible kernel, and blocking methods—provide
pseudo-concurrency: code is not actually executed concurrently, but separate code paths
can still mangle one another's data.

These critical regions require locking. The Linux kernel provides a family of locking
primitives that developers can use to write safe and efficient code.

SMP Locks in a Uniprocessor Kernel


Whether or not you have an SMP machine, people who use your code may. Further,
code that does not handle locking issues properly is typically not accepted into the Linux
kernel. Finally, with a preemptible kernel even UP (uniprocessor) systems require proper
locking. Thus, do not forget: you must implement locking.

1. Priority arrays
2. User-level threads and kernel-level threads
3. What is a shell
4. Types of shell
5. How to execute and run a shell script
6. Shell programming (variables in Linux, rules for naming variables, system variables)
7. Quotes in shell scripts
8. echo options in shell scripts
9. Shell arithmetic
10. Command-line arguments in shell scripts, the bc command, if statement, expr or test statement
11. If-else statement
12. For loop, case statement, user interface

8. What is the difference between I/O-bound and CPU-bound processes?


Comparison with CPU-bound: The CPU-bound process will get and hold the CPU.
Eventually, the CPU-bound process finishes its CPU burst and moves to an I/O device.
All the I/O-bound processes, which have short CPU bursts, execute quickly and move
back to the I/O queues.

9. Nice Values
Nice value — Nice values are user-space values that we can use to control the
priority of a process. The nice value range is -20 to +19, where -20 is the highest
priority, 0 is the default, and +19 is the lowest priority.
Source:
https://medium.com/@chetaniam/a-brief-guide-to-priority-and-nice-values-in-the-linux-ecosystem-fb39e49815e0

10. RT Priority
Linux Real-Time Scheduling

All tasks with a static priority less than 100 are real-time tasks.
The highest priority is a FIFO task, which runs until it suspends —
this prevents all other (lower-priority) tasks from running. The next
highest priority is an RR task, which runs until its timeslice expires.

RT in the top command:
In the top and htop tools, processes (and/or threads, depending on display settings)
that have the highest real-time priority (99 from the userland API point of view)
under either the SCHED_RR or SCHED_FIFO scheduling policy are displayed with a
priority of RT.

11. Time Slice
A time slice is the short period of CPU time allocated to each process in a preemptive
multitasking system; each time the scheduler runs, it picks a process and lets it execute
for at most its time slice.

What Is the Time Slice in Linux?
In Linux, the timeslice for each process is based on the current load and is weighted
according to the priority of the process. When using SCHED_RR for special-purpose real-time
processes, the Linux kernel defines the default timeslice as RR_TIMESLICE in
include/linux/sched/rt.h.

13. Run Queue
Active processes are placed in an array called a run queue, or runqueue. The run queue
may contain priority values for each process, which will be used by the scheduler to
determine which process to run next. To ensure each program has a fair share of
resources, each one is run for some time period (quantum) before it is paused and
placed back into the run queue. When a program is stopped to let another run, the
program with the highest priority in the run queue is then allowed to execute.
Processes are also removed from the run queue when they ask to sleep, are waiting on a
resource to become available, or have been terminated.
In the Linux operating system (prior to kernel 2.6.23), each CPU in the system is given a
run queue, which maintains both an active and expired array of processes. Each array
contains 140 (one for each priority level) pointers to doubly linked lists, which in turn
reference all processes with the given priority. The scheduler selects the next process
from the active array with highest priority. When a process' quantum expires, it is placed
into the expired array with some priority. When the active array contains no more
processes, the scheduler swaps the active and expired arrays, hence the name O(1)
scheduler.
In UNIX or Linux, the sar command is used to check the run queue.
The vmstat UNIX or Linux command can also be used to determine the number of
processes that are queued to run or waiting to run. These appear in the 'r' column.
There are two models for run queues: one assigns a run queue to each physical
processor, and the other has only one run queue in the whole system.

14. Priority Arrays
The Priority Arrays

Each runqueue contains two priority arrays, the active and the expired array. Priority arrays
are defined in kernel/sched.c as struct prio_array. Priority arrays are the data
structures that provide O(1) scheduling. Each priority array contains one queue of runnable
processes per priority level. These queues contain lists of the runnable processes at each
priority level. The priority arrays also contain a priority bitmap used to efficiently discover
the highest-priority runnable task in the system.

struct prio_array {
    int              nr_active;           /* number of tasks in the queues */
    unsigned long    bitmap[BITMAP_SIZE]; /* priority bitmap */
    struct list_head queue[MAX_PRIO];     /* priority queues */
};

MAX_PRIO is the number of priority levels on the system. By default, this is 140. Thus, there
is one struct list_head for each priority. BITMAP_SIZE is the size that an array
of unsigned long typed variables would have to be to provide one bit for each valid priority
level. With 140 priorities and 32-bit words, this is five. Thus, bitmap is an array with five
elements and a total of 160 bits.

Each priority array contains a bitmap field that has at least one bit for every priority on the
system. Initially, all the bits are zero. When a task of a given priority becomes runnable
(that is, its state is set to TASK_RUNNING), the corresponding bit in the bitmap is set to one.
For example, if a task with priority seven is runnable, then bit seven is set. Finding the
highest priority task on the system is therefore only a matter of finding the first set bit in
the bitmap. Because the number of priorities is static, the time to complete this search is
constant and unaffected by the number of running processes on the system. Furthermore,
each supported architecture in Linux implements a fast find first set algorithm to quickly
search the bitmap. This method is called sched_find_first_bit(). Many architectures
provide a find-first-set instruction that operates on a given word[4]. On these systems,
finding the first set bit is as trivial as executing this instruction at most a couple of times.

[4] On the x86 architecture, this instruction is called bsfl. On PPC, cntlzw is used for this purpose.

Each priority array also contains an array named queue of struct list_head queues, one
queue for each priority. Each list corresponds to a given priority and in fact contains all the
runnable processes of that priority that are on this processor's runqueue. Finding the next
task to run is as simple as selecting the next element in the list. Within a given priority,
tasks are scheduled round robin.

The priority array also contains a counter, nr_active. This is the number of runnable tasks
in this priority array.
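To make the O(1) lookup concrete, here is a plain C sketch of the bitmap search the text
describes; the kernel's sched_find_first_bit() and the architecture-specific find-first-set
instructions are replaced with an ordinary loop over a fixed-size bitmap.

#include <stdio.h>

#define MAX_PRIO     140
#define BITMAP_WORDS ((MAX_PRIO + 31) / 32)   /* 5 words of 32 bits */

static int find_first_set(const unsigned int bitmap[BITMAP_WORDS])
{
    /* Both loops have fixed bounds, so the search time does not depend
     * on how many processes are runnable. */
    for (int word = 0; word < BITMAP_WORDS; word++)
        if (bitmap[word])
            for (int bit = 0; bit < 32; bit++)
                if (bitmap[word] & (1u << bit))
                    return word * 32 + bit;
    return -1;   /* no runnable task at any priority */
}

int main(void)
{
    unsigned int bitmap[BITMAP_WORDS] = { 0 };

    bitmap[0] |= 1u << 7;    /* a priority-7 task became runnable */
    bitmap[3] |= 1u << 4;    /* and another at priority 100 */

    printf("highest-priority runnable level: %d\n", find_first_set(bitmap));
    return 0;
}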

15. User-Level Threads and Kernel-Level Threads

A thread is a lightweight process that can be managed independently by a scheduler. It
improves application performance through parallelism.
A thread shares information such as the data segment, code segment, and open files with
its peer threads, while it has its own registers, stack, program counter, etc.
The two main types of threads are user-level threads and kernel-level threads.

User-Level Threads
The user-level threads are implemented by users, and the kernel is not aware of the
existence of these threads. It handles them as if they were single-threaded processes.
User-level threads are small and much faster than kernel-level threads. They are
represented by a program counter (PC), stack, registers, and a small process control
block. Also, there is no kernel involvement in synchronization for user-level threads.

Advantages of User-Level Threads
Some of the advantages of user-level threads are as follows:

• User-level threads are easier and faster to create than kernel-level threads. They can
  also be more easily managed.
• User-level threads can be run on any operating system.
• No kernel-mode privileges are required for thread switching in user-level threads.
Disadvantages of User-Level Threads
Some of the disadvantages of user-level threads are as follows:

• Multithreaded applications using user-level threads cannot take advantage of
  multiprocessing.
• The entire process is blocked if one user-level thread performs a blocking operation.
Kernel-Level Threads
Kernel-level threads are handled by the operating system directly, and thread
management is done by the kernel. The context information for the process as well as
the process's threads is all managed by the kernel. Because of this, kernel-level threads
are slower than user-level threads.

Advantages of Kernel-Level Threads
Some of the advantages of kernel-level threads are as follows:

• Multiple threads of the same process can be scheduled on different processors with
  kernel-level threads.
• The kernel routines can also be multithreaded.
• If a kernel-level thread is blocked, another thread of the same process can be scheduled
  by the kernel.
Disadvantages of Kernel-Level Threads
Some of the disadvantages of kernel-level threads are as follows:

• A mode switch to kernel mode is required to transfer control from one thread to another
  within a process.
• Kernel-level threads are slower to create and manage compared to user-level threads.

Difference between User-Level Threads and Kernel-Level Threads

• User threads are implemented by users; kernel threads are implemented by the OS.
• The OS does not recognize user-level threads; kernel threads are recognized by the OS.
• Implementation of user threads is easy; implementation of kernel threads is complicated.
• Context switch time is less for user-level threads; it is more for kernel-level threads.
• Context switching of user-level threads requires no hardware support; kernel-level
  threads need hardware support.
• If one user-level thread performs a blocking operation, the entire process is blocked;
  if one kernel-level thread performs a blocking operation, another thread can continue
  execution.
• User-level threads are designed as dependent threads; kernel-level threads are designed
  as independent threads.
• Examples: Java threads and POSIX threads (user level); Windows and Solaris (kernel level).
