
CSCI 4061: Introduction to Operating Systems

Spring 2023
Final Exam – Practice Questions

1. Sockets
Say that Cloudflare has decided to deploy a new, much simpler version of DNS on its
server with the famous IP address 1.1.1.1. The new protocol is as follows:
1. A client sends a request to the server to look up the IP address of a host name by
sending two data items:
(a) A 4-byte integer giving the length of the host name to look up
(b) A 128-character sequence representing the host name as a null-terminated
string.
2. The server replies with a response consisting of two data items:
(a) A 4-byte integer giving the status code of the lookup operation. The integer is
0 if the lookup was successful and -1 on error.
(b) The host’s IP address. We’ll assume IPv4 here, meaning the IP address is
represented as a 4-byte number.
As with traditional DNS, Cloudflare requires clients to use UDP. Additionally, you do
not need to worry about endianness or error handling for your code in this question.
Complete the code below for the implementation of the simple_dns_lookup function and
related struct definitions.
• The function takes a single const char * argument containing the host name for
which to look up an IP address.
• The function should return the host’s IP address on success or -1 on error.
(a) First, complete definitions of struct data types you can use to represent a request
to the server and a response from the server, respectively.
#define MAX_NAME_LEN 128
#define HOST "1.1.1.1"
#define PORT "53"
typedef struct {
    int name_len;
    char name[MAX_NAME_LEN];
} request_t;

typedef struct {
    int status;
    int ip_address;
} response_t;

(b) Now, complete the implementation of the simple_dns_lookup function.


int simple_dns_lookup(const char *host_name) {
    // Set up a UDP socket to the server
    struct addrinfo hints;
    memset(&hints, 0, sizeof(struct addrinfo));
    hints.ai_family = AF_UNSPEC;
    hints.ai_socktype = SOCK_DGRAM;

    struct addrinfo *server;
    getaddrinfo(HOST, PORT, &hints, &server);
    int sock_fd = socket(server->ai_family, server->ai_socktype,
                         server->ai_protocol);

    // Build and send the request
    request_t request;
    request.name_len = strlen(host_name);
    strncpy(request.name, host_name, MAX_NAME_LEN - 1);
    request.name[MAX_NAME_LEN - 1] = '\0';
    sendto(sock_fd, &request, sizeof(request_t), 0,
           server->ai_addr, server->ai_addrlen);
    freeaddrinfo(server);

    // Receive and unpack the response
    response_t response;
    recvfrom(sock_fd, &response, sizeof(response_t), 0, NULL, NULL);
    close(sock_fd);

    if (response.status == 0) {
        return response.ip_address;
    }
    return -1;
}
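For context, a minimal caller might look like the sketch below. This is not part of the exam solution; it assumes the usual headers (stdio.h, string.h, netdb.h, unistd.h) are included, the host name "example.com" is just a placeholder, and, per the problem statement, endianness is ignored when splitting the returned address into octets.

int main(void) {
    // Hypothetical host name; any name the server can resolve works here.
    int ip = simple_dns_lookup("example.com");
    if (ip == -1) {
        printf("Lookup failed\n");
        return 1;
    }
    // Print the 4-byte address one octet at a time, in memory order.
    unsigned char *octets = (unsigned char *) &ip;
    printf("%d.%d.%d.%d\n", octets[0], octets[1], octets[2], octets[3]);
    return 0;
}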

2. Reader/Writer Synchronization
We want to implement our own read/write lock instead of using the provided pthread_rwlock_t
implementation. It should support the following operations:

1. lock_init: Initialize the read/write lock
2. read_lock: Acquire a read lock
3. read_unlock: Release a read lock
4. write_lock: Acquire a write lock
5. write_unlock: Release a write lock

We will define a new type, my_rwlock_t, and a set of functions to operate on instances
of this type, one for each of the operations listed above.
We have decided that our lock’s behavior should adhere to the following rules:

1. A write lock cannot be acquired if any other threads already hold a read lock or a
write lock.
2. A read lock cannot be acquired if any other threads already hold a write lock but
can be acquired if other threads already hold read locks.
3. Writers are prioritized over readers. If both are waiting to acquire their respective
type of lock, a write lock should be given out rather than read locks.

Complete the definition of the my_rwlock_t data type and of the lock_init, read_lock,
read_unlock, write_lock, and write_unlock functions below.

You do not need to perform any error handling in your code below.

typedef struct {
    pthread_mutex_t mutex;
    pthread_cond_t readers;
    pthread_cond_t writers;
    unsigned waiting_readers;
    unsigned waiting_writers;
    unsigned active_readers;
    unsigned active_writers;
} my_rwlock_t;

void lock_init(my_rwlock_t *lock) {
    pthread_mutex_init(&lock->mutex, NULL);
    pthread_cond_init(&lock->readers, NULL);
    pthread_cond_init(&lock->writers, NULL);
    lock->waiting_readers = 0;
    lock->waiting_writers = 0;
    lock->active_readers = 0;
    lock->active_writers = 0;
}

void read_lock(my_rwlock_t *lock) {
    pthread_mutex_lock(&lock->mutex);
    lock->waiting_readers++;
    // Wait while a writer is active or waiting (writers have priority)
    while (lock->active_writers > 0 || lock->waiting_writers > 0) {
        pthread_cond_wait(&lock->readers, &lock->mutex);
    }
    lock->waiting_readers--;
    lock->active_readers++;
    pthread_mutex_unlock(&lock->mutex);
}

void read_unlock(my_rwlock_t *lock) {
    pthread_mutex_lock(&lock->mutex);
    lock->active_readers--;
    // Last reader out wakes a waiting writer, if any
    if (lock->active_readers == 0 && lock->waiting_writers > 0) {
        pthread_cond_signal(&lock->writers);
    }
    pthread_mutex_unlock(&lock->mutex);
}

void write_lock(my_rwlock_t *lock) {
    pthread_mutex_lock(&lock->mutex);
    lock->waiting_writers++;
    // Wait while any readers or another writer hold the lock
    while (lock->active_readers > 0 || lock->active_writers > 0) {
        pthread_cond_wait(&lock->writers, &lock->mutex);
    }
    lock->waiting_writers--;
    lock->active_writers++;
    pthread_mutex_unlock(&lock->mutex);
}

void write_unlock(my_rwlock_t *lock) {
    pthread_mutex_lock(&lock->mutex);
    lock->active_writers--;
    // Prefer waking a waiting writer; otherwise wake all waiting readers
    if (lock->waiting_writers > 0) {
        pthread_cond_signal(&lock->writers);
    } else if (lock->waiting_readers > 0) {
        pthread_cond_broadcast(&lock->readers);
    }
    pthread_mutex_unlock(&lock->mutex);
}
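As a usage sketch (not required by the question), the functions above could protect a shared counter that many threads read and a few threads update. The thread functions below are illustrative only; they assume the usual pthread and stdio headers, and that lock_init(&lock) is called before any thread starts.

my_rwlock_t lock;
int counter = 0;

void *reader_thread(void *arg) {
    read_lock(&lock);
    int value = counter;    /* many readers may hold the lock at once */
    read_unlock(&lock);
    printf("read %d\n", value);
    return NULL;
}

void *writer_thread(void *arg) {
    write_lock(&lock);
    counter++;              /* exclusive access while writing */
    write_unlock(&lock);
    return NULL;
}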

3. Deadlock
Say we have an array of six mutexes, declared like so:

#define NUM_LOCKS 6
pthread_mutex_t locks[NUM_LOCKS];

Assume that each mutex in the array is correctly initialized. Consider a group of threads
where each thread needs to acquire all six mutexes to do its work.
(a) When the mutexes are unlocked after a thread no longer needs them, does the
sequence in which they are unlocked matter with respect to the possibility of a
deadlock? Explain.

Solution: No, releasing a resource (like a mutex) cannot cause a deadlock. A thread already “owns” the mutexes it is releasing and does not need to wait to acquire some other resource.

(b) When the mutexes are unlocked after a thread no longer needs them, does the
sequence in which they are released matter with respect to performance?

Solution: Yes. Assuming all threads acquire the mutexes in the same order, it is best to release the mutexes in reverse order. This prevents the program from switching to a newly woken thread after an unlock, only to have that thread immediately block on another mutex still held by the original thread, which amounts to unnecessary context switches.
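As a concrete illustration (not part of the original solution), if every thread acquires locks[0] through locks[5] in ascending order, releasing them in descending order looks like this:

for (int i = NUM_LOCKS - 1; i >= 0; i--) {
    pthread_mutex_unlock(locks + i);
}

With this ordering, a thread blocked on locks[0] is only woken once all six mutexes are free, so it can then run through the whole acquisition loop without blocking again.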

(c) Say all threads run the following code to acquire all six mutexes. Assuming no
errors can occur, is a deadlock possible with this code? Explain why or why not.
for (int i = 0; i < NUM_LOCKS; i++) {
pthread_mutex_lock(locks + i);
}

Solution: A deadlock is not possible here. All threads acquire the mutexes in
the same order, so there is no possibility of a circular wait.

(d) Say all threads run the following code to acquire all six mutexes. Assuming no
errors can occur, is a deadlock possible with this code? Explain why or why not.
for (int i = NUM_LOCKS - 1; i >= 0; i--) {
pthread_mutex_lock(locks + i);
}

Solution: A deadlock is not possible here. All threads acquire the mutexes in
the same order, so there is no possibility of a circular wait.

(e) Assume the gettid() function returns a unique thread ID (similar to how getpid()
works for processes). Say all threads run the following code to acquire all six
mutexes. Assuming no errors can occur, is a deadlock possible with this code?
Explain why or why not.
int start = gettid() % NUM_LOCKS;
for (int i = 0; i < NUM_LOCKS; i++) {
int index = (start + i) % NUM_LOCKS;
pthread_mutex_lock(locks + index);
}

Solution: Yes, deadlock may occur here. Now that different threads acquire the locks in different orders, it is possible that two threads are each waiting to acquire a lock held by the other; in other words, a circular wait may occur. For example, a thread with start = 0 may hold locks[0] through locks[2] while waiting on locks[3], while a thread with start = 3 holds locks[3] through locks[5] and waits on locks[0].

4. Distributed Systems
Say we want to build a global email system that will potentially serve billions of users.
(a) How will partitioning be helpful in implementing this system?

Solution: Partitioning storage of users’ emails will be necessary because they will consume too much space to be accommodated by a single machine. Additionally, we may want to partition users by geography so that each can interact with the nearest server to minimize response time.

(b) How will replication be helpful in implementing this system?

Solution: Replication provides fault tolerance. If a subset of the email system’s servers goes down, the system can remain available to all users with a proper replication strategy.

(c) Your initial design requires each user’s emails to be replicated on N independent
servers. A request is sent to all replica servers when a user wants to check her inbox,
and operations like deleting or archiving an email also generate requests that are
sent to all replica servers.
What is an issue with this design?

Solution: The system is doing unnecessary work, specifically by sending requests to more servers than is necessary. Intuitively, when all writes go to all replicas, meaning all replicas are always fully up to date, a read can safely contact just one replica.

(d) You decide to use quorum consistency among each user’s replica set. If you expect
users to frequently check the status of their respective inboxes and only infrequently
delete or archive emails, what is the best set of quorum consistency parameters?

Solution: It is best to set W = N and R = 1. While this makes writes expensive, reads are now efficient because they only involve communication with one replica. As reads are expected to be the dominant operation, we want this operation to be fast.

(e) If you instead expect users to check the status of their respective inboxes infrequently and to delete or archive emails frequently, what is the best set of quorum consistency parameters?

Solution: It is best to set W = 1 and R = N. This makes writes efficient and reads expensive, but we expect writes to be more frequent and therefore to be the dominant factor in the system’s general performance.

(f) When might other quorum consistency parameter values (besides the two sets you
identified above) be useful?

Solution: These parameters form a spectrum, from cheap reads and expensive
writes to expensive reads and cheap writes. If we don’t expect either type of
operation to be more frequent than the other, we might choose some intermedi-
ate point on the spectrum (say N = 4, R = 3, W = 2) so that they both have
roughly the same cost. If we want even more resilience for our system, we could
also choose a combination of values such that W + R exceeds the minimum value
of N + 1.
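The W + R > N overlap condition underlying all of these choices can be written as a one-line check. The helper below is purely illustrative and not something the exam asks for:

/* Returns 1 if every read quorum of size r must intersect every write
   quorum of size w in a replica set of size n, so a read always sees
   at least one copy of the latest write. */
int quorums_overlap(int n, int r, int w) {
    return r + w > n;
}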

Strings
size_t strlen(const char *s);
char *strcpy(char *dest, const char *src);
char *strtok(char *str, const char *delim);
int strcmp(const char *s1, const char *s2);
Memory
void *memset(void *s, int c, size_t n);
void *memcpy(void *dest, const void *src, size_t n);
void *malloc(size_t size);
void free(void *ptr);
stdio Operations
FILE *fopen(const char *pathname, const char *mode);
int fclose(FILE *stream);
size_t fread(void *ptr, size_t size, size_t nmemb, FILE *stream);
size_t fwrite(const void *ptr, size_t size, size_t nmemb, FILE *stream);
int fscanf(FILE *stream, const char *format, ...);
int fprintf(FILE *stream, const char *format, ...);
int fseek(FILE *stream, long offset, int whence);
long ftell(FILE *stream);
int feof(FILE *stream);
int ferror(FILE *stream);
void perror(const char *s);
Process Creation and Management
pid_t getpid();
pid_t getppid();
pid_t fork();
int execl(const char *pathname, const char *arg, ..., NULL);
int execlp(const char *file, const char *arg, ..., NULL);
int execle(const char *pathname, const char *arg, ..., NULL, char *const envp[]);
int execv(const char *pathname, char *const argv[]);
int execvp(const char *file, char *const argv[]);
int execvpe(const char *file, char *const argv[], char *const envp[]);
pid_t wait(int *wstatus);
pid_t waitpid(pid_t pid, int *wstatus, int options);
void exit(int status);
Environment Variables
char *getenv(const char *name);
int setenv(const char *name, const char *value, int overwrite);
int unsetenv(const char *name);
I/O System Calls
int open(const char *pathname, int flags);
int open(const char *pathname, int flags, mode_t mode);
ssize_t read(int fd, void *buf, size_t count);
ssize_t write(int fd, const void *buf, size_t count);
off_t lseek(int fd, off_t offset, int whence);
int dup(int oldfd);
int dup2(int oldfd, int newfd);
File System Operations
int mkdir(const char *pathname, mode_t mode);
int rmdir(const char *pathname);
int stat(const char *pathname, struct stat *statbuf);
int fstat(int fd, struct stat *statbuf);
DIR *opendir(const char *name);
struct dirent *readdir(DIR *dirp);
int closedir(DIR *dirp);
void rewinddir(DIR *dirp);

Memory-Mapped I/O
void *mmap(void *addr, size_t length, int prot, int flags, int fd, off_t offset);
int munmap(void *addr, size_t length);
Signals
int kill(pid_t pid, int sig);
int sigemptyset(sigset_t *set);
int sigfillset(sigset_t *set);
int sigaddset(sigset_t *set, int signum);
int sigdelset(sigset_t *set, int signum);
int sigismember(sigset_t *set, int signum);
int sigprocmask(int how, const sigset_t *set, sigset_t *oldset);
int sigaction(int sig, const struct sigaction *act, struct sigaction *oact);
int pause();
int sigsuspend(const sigset_t *sigmask);
int sigwait(const sigset_t *sigmask, int *signo);
Pipes
int pipe(int filedes[2]);
int mkfifo(const char *path, mode_t mode);
Multiplexed I/O
int poll(struct pollfd *fds, int nfds, int timeout);
Shared Memory
int shmget(key_t key, size_t size, int shmflg);
void *shmat(int shmid, const void *shmaddr, int shmflg);
int shmdt(const void *shmaddr);
int shmctl(int shmid, int cmd, struct shmid_ds *buf);
key_t ftok(const char *pathname, int proj_id);
Socket Programming
int getaddrinfo(const char *node, const char *service,
const struct addrinfo *hints, struct addrinfo **res);
int socket(int domain, int type, int protocol);
void freeaddrinfo(struct addrinfo *res);
int bind(int sockfd, const struct sockaddr *addr, socklen_t addrlen);
ssize_t sendto(int sockfd, const void *buf, size_t len, int flags,
const struct sockaddr *dest_addr, socklen_t addrlen);
ssize_t recvfrom(int sockfd, void *buf, size_t len, int flags,
struct sockaddr *src_addr, socklen_t *addrlen);
int listen(int sockfd, int backlog);
int accept(int sockfd, struct sockaddr *addr, socklen_t *addrlen);
Thread Creation and Management
int pthread_create(pthread_t *thread, const pthread_attr_t *attr,
void *(*start_routine) (void *), void *arg);
int pthread_join(pthread_t thread, void **retval);
pthread_t pthread_self();
void pthread_exit(void *retval);
int pthread_cancel(pthread_t thread);
Mutexes
int pthread_mutex_init(pthread_mutex_t *mutex, const pthread_mutexattr_t *attr);
int pthread_mutex_destroy(pthread_mutex_t *mutex);
int pthread_mutex_lock(pthread_mutex_t *mutex);
int pthread_mutex_unlock(pthread_mutex_t *mutex);
Condition Variables
int pthread_cond_init(pthread_cond_t *cond, const pthread_condattr_t *attr);
int pthread_cond_destroy(pthread_cond_t *cond);
int pthread_cond_wait(pthread_cond_t *cond, pthread_mutex_t *mutex);
int pthread_cond_signal(pthread_cond_t *cond);
int pthread_cond_broadcast(pthread_cond_t *cond);

Read/Write Locks
int pthread_rwlock_init(pthread_rwlock_t *rwlock,
const pthread_rwlockattr_t *attr);
int pthread_rwlock_destroy(pthread_rwlock_t *rwlock);
int pthread_rwlock_rdlock(pthread_rwlock_t *rwlock);
int pthread_rwlock_wrlock(pthread_rwlock_t *rwlock);
int pthread_rwlock_unlock(pthread_rwlock_t *rwlock);
Semaphores
int sem_init(sem_t *sem, int pshared, unsigned int value);
int sem_destroy(sem_t *sem);
int sem_wait(sem_t *sem);
int sem_post(sem_t *sem);
