422s Exam1

Monday, February 28, 2022 7:50 PM

Futexes — three components:
(1) an atomic integer variable in userspace
(2) a system call to sleep/wake up contending processes (the state of the atomic variable guards the sleep call)
(3) a kernel-level wait queue for sleeping threads/processes
Can be used to synchronize multiple processes or threads.
In multiprocess settings, processes must explicitly share memory (e.g., the lock variable's location) via mmap(), etc.
In multithread settings, threads implicitly share memory.
In their basic form, futexes offer primitive operations on which other (sleep) lock mechanisms can be built: semaphores, condition variables, readers/writer locks, etc.

Scheduling policies:
SCHED_FIFO: first in, first out; a task runs to completion (or until it blocks/yields).
SCHED_RR: round robin.
SCHED_DEADLINE: earliest deadline first.
O(1) scheduler: inverted switching rates and variance across nice values; needs absolute timeslices, limited by HW.
Completely Fair Scheduler (CFS):
virtual runtime = (actual runtime) * 1024 / weight
timeslice = (scheduling period) * (weight of task) / (weight of all tasks)

01 Linux Intro:
The OS is a program with high enough privilege to access hardware: processors, I/O, memory.
Offers abstractions to make applications more portable across different types of hardware (see arch-specific parts of the Linux source)
Provides interfaces to applications to safely access hardware (e.g., isolated vs. shared memory address spaces)
Implements policies that efficiently utilize hardware resources
Kernel code: 1. No libc or standard headers. 2. Coded in GNU C. 3. No memory protection. 4. No use of floating point. 5. Small, fixed-size stack. 6. Synchronization and concurrency are constant concerns. 7. Portability is important.

Program Execution in Linux:

The compiler translates source code to machine code (.o file); the linker connects binary files to libraries to create an executable.
Static linking: required code and data are copied into the executable at compile time (and loaded into virtual memory).
fork(): creates a new process address space (duplicates file descriptors, signal handlers, address space, namespace, etc.); uses copy-on-write.
execve(): reads the program into memory; starts executing at _start() in the C runtime (which sets up the environment); the C runtime calls main(); after main() returns, the C runtime does cleanup.
exec() family: executes a new program image. Called after fork(); returns -1 on error; keeps the pid, parent pid, priority, and owning user and group; resets most other attributes.
l versus v: arguments provided via list or vector; p: the user's PATH is searched for the specified file; e: a new environment is supplied.
strace: in the simplest case, strace runs the specified command until it exits. It intercepts and records the system calls made by a process and the signals received by the process. The name of each system call, its arguments, and its return value are printed on standard error or to the file specified with the -o option.

Dynamic linking: required code and data are linked to the executable at runtime (lib.so).
The dynamic linker (ld.so) maps these into the memory map segment on demand.
Global Offset Table (GOT)
Used to resolve locations of global library objects (functions and variables). All references to variables in shared libraries are replaced with references to the GOT.
Procedure Linkage Table (PLT)
All function calls to shared libraries are replaced by stub functions that query the PLT at runtime for the address of the function. The stub calls the dynamic linker to resolve locations of library functions if unknown.
Lazy binding: the program runs as normal until it encounters an unresolved function; execution jumps to the linker; the linker maps the shared library into the process's address space and replaces the linker entry in the GOT with the resolved address.

How & When The Kernel Runs

Process vs program: A process is an execution context for a program. Kernel sees all processes in the system. Kernel is a program.
System Call: An operation that requires a higher level of privilege than is granted to user applications.
From user space (processes) --> System Call Interface /System Call Handlers --> kernel space
Pros: better isolation, security and stability
Kernel Execution: Power on -->(Bootloader loads kernel)--> Initialize System -->(Initial kernel never returns)--> Idle Task(pid 0)
The kernel runs at boot time and is otherwise entirely event driven; init creates all other user processes.
Kernel threads (scheduled & preempted): perform delayed interrupt handling, inter-processor load balancing, and misc. tasks.
When the kernel runs: 1. explicitly invoked by processes via syscalls; 2. in response to hardware interrupts; 3. when kernel threads are scheduled.

Cons: using syscalls requires more work (a mode switch into the kernel), so it is less efficient than a plain function call.

Kernel Structure and Infrastructure


Kernel vs. Application Coding: the core kernel has no standard libraries. The core kernel must be a monolithic, statically linked binary.
Kernel libraries re-implement much of the functionality programmers expect in user space: 1. statically compiled into the kernel;
2. automatically available just by including the relevant header; 3. built to be kernel-safe (sleeping, waiting, locking, etc. are done properly)
Features — utilities: kmalloc, kthreads, string parsing, etc.; containers: hash tables, binary trees, etc.; algorithms: sorting, compression

Disadvantage of not using them: you must "roll your own code" for each list you create: 1. duplicate code throughout the kernel; 2. introduced bugs;
3. lost optimizations (placement within cache lines, etc.)

struct fox *new_fox;
new_fox = kmalloc(sizeof(struct fox), GFP_KERNEL);
new_fox->tail_length = 40;
new_fox->weight = 6;
new_fox->is_fantastic = false;
INIT_LIST_HEAD(&new_fox->list_node);
Or statically at compile time: static LIST_HEAD(fox_list_head);
Adding: list_add(&new_fox->list_node, &fox_list_head);
Deleting: list_del(&new_fox->list_node); kfree(new_fox); /* if dynamically allocated */

Kernel modules — reasons not to use them: waste of memory (embedded systems); slower boot time; larger trusted computing base (TCB), more room for bugs.
Pros: customized functionality with no reboot; device drivers; architecture-specific code.
Loaded and unloaded with insmod and rmmod.
Must define: 1. an initialization function called on load; 2. an exit function called on unload.
Parameters can be passed at load time (insmod foo.ko mystring="bebop" mybyte=255) or changed later by echoing into sysfs.
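The required load/unload hooks can be sketched as a minimal module (this builds against kernel headers, not as a normal userspace program; the names mymod and myparam are made up for illustration):

```c
#include <linux/module.h>
#include <linux/init.h>
#include <linux/moduleparam.h>

static int myparam = 0;                 /* hypothetical parameter */
module_param(myparam, int, 0644);       /* exposed under /sys/module/.../parameters */

static int __init mymod_init(void)      /* called on insmod */
{
    pr_info("mymod: loaded, myparam=%d\n", myparam);
    return 0;                           /* nonzero would fail the load */
}

static void __exit mymod_exit(void)     /* called on rmmod */
{
    pr_info("mymod: unloaded\n");
}

module_init(mymod_init);
module_exit(mymod_exit);
MODULE_LICENSE("GPL");
```

The pr_info() output lands in the kernel log and can be read with dmesg.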

Time Sources and Timing
Tracking real time: the hardware real-time clock (RTC) keeps time even while the machine is powered off; at boot, the kernel reads the RTC and sets the system clock.
System timer: programmed so an interrupt fires at known points; higher-precision hardware timers also exist.
HZ: frequency of the timer interrupt. Every 1/HZ seconds, stop the currently running process and figure out what to do next.
• HZ is found in include/asm-generic/param.h. The kernel function tick_periodic() runs every 1/HZ seconds.
• The jiffies variable tracks the number of ticks since boot; the xtime variable tracks wall-clock time.
• Current time since boot: jiffies * 1/HZ. Current wall time: boot_time + jiffies * 1/HZ.
• Timers are checked for expiry each tick.
schedule(): puts the current process to sleep and tells the kernel to pick another process to run.
trace-cmd is a command-line tool used to capture and filter kernel trace data. It allows users to specify trace points and events to monitor, and to capture trace data to a file or stream. KernelShark is the GUI.
Real-time clock: tracks system time even when the machine is off; low precision (~0.5 s).

hrtimers: high-resolution timers. Used by the RTC, scheduler, and network stack; can also be used in kernel modules.
1. Write a callback function for timer expiration. Return type: enum hrtimer_restart; parameter: a pointer to struct hrtimer. Body: use hrtimer_forward() and the module's static timer interval variable to reschedule the timer's next expiration one timer interval in the future.
The system clock is low resolution: because it is driven by HZ, its resolution is limited to 1/HZ seconds, which is not accurate enough; high resolution requires hardware-level support.
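The callback pattern in step 1 can be sketched as follows (kernel code, builds against kernel headers; the names my_timer, my_interval, and my_callback are our own):

```c
#include <linux/hrtimer.h>
#include <linux/ktime.h>

static struct hrtimer my_timer;
static ktime_t my_interval;   /* set at init, e.g. ms_to_ktime(100) */

static enum hrtimer_restart my_callback(struct hrtimer *timer)
{
    /* push the expiry one interval into the future ... */
    hrtimer_forward_now(timer, my_interval);
    /* ... and tell the core to rearm rather than stop */
    return HRTIMER_RESTART;
}
```

Returning HRTIMER_NORESTART instead would make this a one-shot timer.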
Kernel Tracing and Debugging
An oops communicates that something bad happened, but the kernel tries to continue executing.
• An oops means the kernel is not totally broken, but it is probably in an inconsistent state.
• An oops in interrupt context, the idle task (pid 0), or the init task (pid 1) results in a panic.
Kernel panic: unrecoverable; results in an instant halt.
Example oops message: BUG: unable to handle kernel NULL pointer dereference

printk() prints information to the system log: 1. messages are stored in a circular buffer; 2. they can be read with dmesg;
3. there are eight possible log levels (filter with dmesg -n).
strace (built on ptrace): allows one userspace process (the tracer) to inspect the system calls made by another thread (the tracee).
1. The tracer calls ptrace() on the tracee. 2. The tracee halts at every system call, system call return, and signal (except SIGKILL).
3. The tracer records info and releases the tracee to continue.
Ftrace: tracing beyond system calls; many features:
• Event tracepoints (scheduler, interrupts, etc.)
• Trace any kernel function
• Call graphs
• Latency tracing (how long interrupts are disabled; how long preemption is disabled)
trace-cmd: sudo trace-cmd record -e sched_switch ./program --> trace.dat.
Use trace-cmd report to inspect trace.dat; KernelShark visualizes trace.dat.

Process
Signals: software interrupts, each defined as an integer with manifest constant
Resource Virtualization: The operating system virtualizes physical hardware resources into virtual resources in the form of processes
Memory Virtualization: Translate private memory addresses to physical addresses: Virtual memory
Processes in Linux: 1. Original: each process behaves as though it has total control over the system.
2. Modern: threads still execute as though they monopolize the system, but the process abstraction also facilitates multithreading, signals, and inter-process communication.

Tasks in Linux:
thread_info: small, efficient; task_struct: large, dynamically allocated by the slab allocator, points to other data structs.
Threads are like user processes, but they share their parent's address space.
Kernel threads don't have a user-side address space (i.e., the mm_struct pointer is NULL) and not all data values are used.
Related functions: get_task_pid(), kernel_thread()
Exit: free dynamically allocated memory; close files and sockets.

Kernel Synchronization I
Critical regions: code paths that access and manipulate shared data.
Race condition: two threads of execution simultaneously execute within a critical region.
What needs locking: 1. any data that can be accessed by multiple threads of execution; 2. shared memory regions; 3. global variables in multithreaded code.
Deadlock: occurs when processes can never make progress because each is waiting for a lock another holds. Avoid by acquiring locks in the same order in every thread (e.g., both threads lock A then B, never B then A), and by disabling preemption and interrupts where appropriate.
Pseudo-concurrency: two things do not actually happen at the same time, but interleave with each other such that they might as well.
True concurrency: two processes actually execute in critical regions at the exact same time (symmetric multiprocessing).
Causes of concurrency: interrupts, kernel preemption, sleeping, and symmetric multiprocessing.

Test and set: atomically store a new value, return the old one.
Compare and exchange: atomically compare against an expected value; if equal, store the new value.
ARM-specific: ldrex loads a word from memory; strex conditionally stores one.
Atomic variables: atomic_t, atomic_set(), atomic_add(), atomic_add_return()
Spin lock: 0 = unlocked; spin until the state is 0, then set it to 1.
void spin_lock(spinlock_t *lock) { while (test_and_set(&lock->value, LOCKED) == LOCKED); }
void spin_unlock(spinlock_t *lock) { assert(test_and_set(&lock->value, UNLOCKED) == LOCKED); }
Or with compare-and-exchange:
void spin_lock(spinlock_t *lock) { while (compare_and_exchange(&lock->value, UNLOCKED, LOCKED) == LOCKED); }
void spin_unlock(spinlock_t *lock) { assert(compare_and_exchange(&lock->value, LOCKED, UNLOCKED) == LOCKED); }

Synchronization Design tradeoffs: balance readers and writers


Synchronization can prevent concurrency (readers/writer locks; mutual exclusion)…
…or it can allow concurrency at the expense of overheads (lock-free / wait-free algorithms; transactional memory).

Module macros: module_init(), module_exit(), MODULE_AUTHOR(), MODULE_DESCRIPTION(), MODULE_VERSION()
dmesg: view the kernel's most recent log output; dmesg > dmesg.log saves it to the file dmesg.log.
Why a module instead of a system call? A syscall: 1. permanently occupies memory; 2. cannot guarantee security; 3. makes system stability hard to guarantee.
Kernel module advantages: 1. easy to update; 2. more flexible.
Passing parameters: command line (insmod mymodule.ko myparam=123); configuration files (mymodule.conf: options mymodule myparam=123); sysfs interface (/sys/module/mymodule/parameters/myparam); environment variables.

Task states:
1. TASK_RUNNING — the process is runnable; it is either currently running or on a runqueue waiting to run. This is the only possible state for a process executing in user-space; it can also apply to a process in kernel-space that is actively running.
2. TASK_INTERRUPTIBLE — the process is sleeping, waiting for some condition; it becomes runnable when the condition occurs or when it receives a signal.
3. TASK_UNINTERRUPTIBLE — identical to TASK_INTERRUPTIBLE except that it does not wake up and become runnable if it receives a signal. Used in situations where the process must wait without interruption or when the event is expected to occur quite quickly. Because the task does not respond to signals in this state, TASK_UNINTERRUPTIBLE is used less often than TASK_INTERRUPTIBLE.
4. __TASK_TRACED — the process is being traced by another process, such as a debugger, via ptrace.
5. __TASK_STOPPED — process execution has stopped; the task is not running and is not eligible to run. This occurs if the task receives the SIGSTOP, SIGTSTP, SIGTTIN, or SIGTTOU signal, or if it receives any signal while it is being debugged.
State transitions: fork → runnable; running ↔ preempted; running → sleeping while waiting for an event; event occurs → runnable again; exit via do_exit().
