ESD Question Bank 2
UNIT-III
Short Answer Type Questions
1. How does a brown-out protection circuit evaluate and respond to a voltage drop in an
appliance?
A brown-out protection circuit continuously monitors the supply voltage. If the voltage drops
below a predefined threshold (set by components like Zener diodes or supervisor ICs), it triggers
a reset signal to halt the system, preventing malfunction or data corruption until power
stabilizes.
4. Which of the following memory management routines is used to change the size of a dynamically allocated memory block?
(a) malloc() (b) realloc() (c) calloc() (d) free()
Option: b
7. Explain the different types of special keywords used in inline assembly for C51.
1. `#pragma asm` & `#pragma endasm`: Mark the start and end of an inline assembly block.
2. `__asm` (Alternative Syntax in Some Compilers): Similar to `#pragma asm` but may vary across
compilers.
3. Register Access Keywords: Access 8051 registers (e.g., `ACC`, `PSW`, `DPTR`) directly in C.
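A minimal Keil C51 sketch showing these keywords together (a sketch under assumptions: the module is translated with Keil's SRC directive so the `#pragma asm`/`#pragma endasm` block is passed through to the assembler; the port and values are illustrative):

```c
#include <reg51.h>           /* 8051 SFR definitions (ACC, PSW, P1, ...) */

void toggle_port(void)
{
    ACC = 0x55;              /* SFRs such as ACC/PSW can be used directly in C */
#pragma asm
        MOV  P1, A           ; inline 8051 assembly between asm/endasm
        CPL  A
        MOV  P1, A
#pragma endasm
}
```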
8. What will be the memory allocated on successful execution of the following memory allocation
request? Assume the size of int as 2 bytes.
x = (int *) malloc(100);
(a) 2 Bytes (b) 100 Bytes (c) 200 Bytes (d) 4 Bytes
Option: b
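A small hosted-C illustration of the sizing rules behind these two questions (on an 8051 target, Keil C51 would additionally require the heap to be set up with `init_mempool()` first; the variable names here are illustrative):

```c
#include <stdlib.h>

int main(void)
{
    /* malloc(100) requests 100 bytes, regardless of sizeof(int);     */
    /* to hold 100 ints (2 bytes each here) you would need 200 bytes. */
    int *x = (int *)malloc(100);                 /* 100 bytes         */
    int *y = (int *)malloc(100 * sizeof(int));   /* room for 100 ints */

    /* realloc() changes the size of an already-allocated block,      */
    /* preserving its contents up to the smaller of the two sizes.    */
    y = (int *)realloc(y, 200 * sizeof(int));

    free(x);
    free(y);
    return 0;
}
```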
Long Answer Questions
A watchdog timer monitors firmware execution and resets the system if it hangs. It works by
counting up or down and generating a reset signal when the count limit is reached. The firmware
must reset the watchdog timer regularly to prevent unintended resets.
Most processors have a built-in watchdog timer with control and status registers.
If not built-in, an external watchdog timer IC can be used for the same function.
The DS1232 microprocessor supervisor IC integrates a hardware watchdog timer.
In modern embedded systems, watchdog timeout can trigger an interrupt instead of a reset.
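A minimal sketch of the "reset the watchdog regularly" idea, assuming an external DS1232 whose ST (strobe) input is wired to a spare port pin (the pin choice and task names are illustrative):

```c
#include <reg51.h>

sbit WDI = P1^0;     /* assumed: port pin wired to the DS1232 ST (strobe) input */

static void kick_watchdog(void)
{
    WDI = !WDI;      /* any edge on ST restarts the DS1232 watchdog interval */
}

void main(void)
{
    while (1) {
        /* do_control_task();  do_comm_task();  -- application work here */
        kick_watchdog();   /* must run more often than the watchdog timeout */
    }
}
```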
2. Draw a diagram of the assembly language to machine language conversion process and
explain it in detail.
1. Source File to Object File Translation
- Overall flow: `.asm`/`.src` source → assembler → relocatable `.obj` → linker → absolute object file → object-to-hex converter → `.hex` image flashed to the target.
- Tools:
- Assembler (e.g., Keil A51 for 8051) converts `.asm`/`.src` files to relocatable `.obj` files.
- Linking Dependencies:
- Linker errors occur if cross-module dependencies are unresolved (e.g., a symbol declared `EXTRN` in one module with no matching `PUBLIC` definition in another).
2. Library File Creation
- Purpose:
- Reuse code without exposing source (e.g., math functions in `.lib` files).
- Tool Example:
- A librarian tool (e.g., Keil LIB51 for the 8051) packages several `.obj` modules into a single `.lib` file.
3. Linking and Hex File Generation
- Functions:
- Linker (e.g., Keil BL51): Combines multiple `.obj` files and libraries into one absolute object file, resolving all cross-module references.
- Final Output:
- The absolute object file is converted (e.g., with Keil OH51) into a hex file (e.g., `Intel HEX` format), which is flashed onto the microcontroller.
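The unresolved-dependency point above also appears at the C level; a hedged two-module sketch (file names illustrative, embedded-style `void main`) where the linker reports an unresolved external if `sensor.obj` is left out of the link step:

```c
/* main.c -- references a symbol defined elsewhere */
extern unsigned char read_sensor(void);   /* like EXTRN: resolved at link time */

void main(void)
{
    unsigned char v = read_sensor();
    (void)v;
}

/* sensor.c -- provides the definition (like PUBLIC) */
unsigned char read_sensor(void)
{
    return 0x2A;   /* placeholder value */
}

/* If sensor.obj is omitted when linking, the linker reports an
   unresolved external for read_sensor.                           */
```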
3. Explain the different approaches to embedded firmware design in detail.
1. Super Loop Approach
- Design: The firmware is written as a single infinite loop that calls each task routine in a fixed sequence, with no operating system (a minimal sketch follows this answer).
- Pros: Simple to write and debug, very small memory footprint, no OS cost or scheduling overhead.
- Cons: No prioritization or preemption; one slow or faulty task stalls everything, so responsiveness is poor and the design scales badly.
2. OS-Based Approach
- Types:
- GPOS (e.g., Windows Embedded): For complex, non-real-time apps (POS terminals).
- RTOS (e.g., FreeRTOS, VxWorks): For time-critical applications that need deterministic, priority-based scheduling.
- Pros: Multitasking with priority scheduling, better modularity, and built-in services such as IPC, timers, and device drivers.
- Cons: Extra memory and processing overhead, plus added cost and complexity compared with a super loop.
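The super-loop sketch referred to above, with illustrative task names:

```c
#include <reg51.h>

/* assumed task routines -- each must return quickly */
static void read_inputs(void)    { /* poll switches / sensors */ }
static void update_outputs(void) { /* drive LEDs / actuators  */ }
static void service_comms(void)  { /* handle UART traffic     */ }

void main(void)
{
    /* one-time hardware initialization would go here */
    while (1) {                 /* the "super loop": runs forever      */
        read_inputs();          /* tasks execute in a fixed sequence,  */
        update_outputs();       /* with no OS and no preemption        */
        service_comms();
    }
}
```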
4. Explain the ‘High Level Language’ based embedded firmware development technique and its
key features.
This technique uses languages like C, C++, or Python to write firmware, which is then translated into
machine code for the target hardware. Below are its key features:
1. Hardware Abstraction
- The compiler handles low-level details (e.g., `printf()` translates to UART commands).
2. Cross-Compiler Toolchain
- A cross-compiler converts HLL code to machine code for the target microcontroller (e.g., Keil for
8051, GCC for ARM).
3. Faster Development Cycle
- Reduced coding effort: Complex tasks (e.g., floating-point math) need fewer lines than assembly.
4. Portability
- The same code can be recompiled for different architectures (e.g., ARM, AVR) with minor
adjustments (e.g., changing header files).
5. Modularity
- Code is split into modules (`.c`/`.h` files) and libraries (e.g., `math.h`).
6. Scalability
- Suitable for both simple (e.g., LED blink) and complex systems (e.g., IoT devices with RTOS).
1. Hardware Abstraction
Uses languages like C/C++/Python to write code without needing deep knowledge of registers or memory maps (e.g., `HAL_GPIO_WritePin()` for GPIO control); a minimal sketch follows this list.
2. Cross-Compilation
A cross-compiler (e.g., GCC for ARM) converts HLL code into machine-specific binary for the target
MCU.
3. Faster Development
4. Portability
Same code can be recompiled for different MCUs (e.g., STM32 to ESP32) with minor HAL
adjustments.
5. Modularity
Supports split code into reusable modules (`.c`/`.h` files) and third-party libraries (e.g., FreeRTOS).
6. Performance Trade-off
- Cons: Slightly less efficient than hand-optimized Assembly for timing-critical tasks.
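The HAL-style abstraction listed above can be made concrete with a minimal sketch; the header name, pin, and clock-enable macro assume an STM32F4 board with an LED on PA5 and the STM32 HAL library:

```c
#include "stm32f4xx_hal.h"   /* assumed target: an STM32F4 part with the LED on PA5 */

int main(void)
{
    HAL_Init();                                   /* HAL and SysTick setup         */

    __HAL_RCC_GPIOA_CLK_ENABLE();                 /* enable the GPIOA clock        */
    GPIO_InitTypeDef led = {0};
    led.Pin   = GPIO_PIN_5;
    led.Mode  = GPIO_MODE_OUTPUT_PP;
    led.Pull  = GPIO_NOPULL;
    led.Speed = GPIO_SPEED_FREQ_LOW;
    HAL_GPIO_Init(GPIOA, &led);

    while (1) {
        HAL_GPIO_TogglePin(GPIOA, GPIO_PIN_5);    /* no register-level code needed */
        HAL_Delay(500);                           /* 500 ms blink                  */
    }
}
```

The application never touches GPIO registers directly; recompiling for another HAL-supported part mainly means changing the device header and the pin definitions.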
6. With a neat diagram, explain the conversion process from assembly language to machine
language, and discuss the advantages and disadvantages of assembly language.
Same as Q.2
| Advantages of Assembly | Disadvantages of Assembly |
| --- | --- |
| Reverse engineering resistance | Poor scalability for large systems |
UNIT-IV
Short Answer Type questions
1. Explain the key functionality of process management.
Process Creation: Initializing new processes, allocating memory, and loading the process
code into memory.
Process Scheduling: Deciding which process runs next based on scheduling algorithms
(e.g., FCFS, Round Robin, Priority-based).
Resource Allocation: Assigning CPU time, memory, and I/O devices to processes.
5. Classify the types of Real-Time Operating Systems (RTOS).
Hard RTOS & Soft RTOS
7. Explain the main principle of Round Robin (RR) scheduling in CPU process management.
Equal Time Allocation: Each process gets a fixed time slice (quantum).
Preemption: If a process doesn’t finish in its quantum, it’s moved to the end of the ready queue.
Performance: Gives good response times and fair CPU sharing for interactive workloads, at the cost of frequent context switches; average waiting time depends heavily on the chosen quantum.
9. Identify and explain the key processes involved in context switching, context saving, and
context retrieval.
Context Switching: The act of suspending the currently running process and resuming another, performed by saving the outgoing process’s context and then retrieving the incoming one’s.
Context Saving: Saves the current process’s state (registers, PC, stack pointer) to its PCB.
Context Retrieval: Loads the next process’s state from its PCB into the CPU.
10. Explain the different thread binding models for user-level and kernel-level threads.
Many-to-One: Many user-level threads are bound to a single kernel-level thread; switching is fast, but one blocking system call blocks all of them.
One-to-One: Each user-level thread is bound to its own kernel-level thread; gives true concurrency but with higher creation overhead.
Many-to-Many: Many user-level threads are multiplexed over a smaller or equal number of kernel-level threads, combining the benefits of both models.
11. Compare and contrast threads and processes by stating two key differences.
Resource Ownership:
Process: Has its own memory (code, data, stack).
Thread: Shares memory with other threads in the same process.
Creation Cost:
Process: Heavyweight (high OS overhead).
Thread: Lightweight (faster creation/switching).
Long Answer Questions
1. Compare and contrast multiprocessing, multitasking, and multiprogramming, analyzing their
key differences, advantages, and suitable use cases.
| Feature | Multiprocessing | Multitasking | Multiprogramming |
| --- | --- | --- | --- |
| Definition | Uses multiple CPUs/cores to execute processes simultaneously. | A single CPU switches between tasks rapidly (time-sharing). | Holds multiple programs in memory; the CPU runs one at a time and switches only when the running program waits (no time-slicing). |
| Parallelism | True parallelism (tasks run concurrently on separate CPUs). | Pseudo-parallelism (tasks appear concurrent due to fast switching). | No parallelism (only one task runs at a time). |
| CPU Usage | Maximizes CPU utilization by leveraging multiple processors. | Maximizes CPU usage by overlapping I/O waits with computation. | Keeps the CPU busy when one task waits for I/O. |
| Advantages | High performance for CPU-bound tasks. Fault tolerance (if one CPU fails, others continue). | Responsive for interactive systems (e.g., GUIs). Fair resource allocation. | Simple to implement. Efficient for batch processing. |
| Use Cases | Scientific computing. Server farms. | Operating systems (Windows, Linux). Real-time systems. | Early batch-processing systems (e.g., IBM OS/360). |
2. Three processes with process IDs P1, P2, P3 and estimated completion times of 5, 10, and 7 milliseconds respectively enter the ready queue together in the order P1, P2, P3. Process P4 with an estimated execution completion time of 2 milliseconds enters the ready queue after 5 milliseconds. Calculate the waiting time and Turn Around Time (TAT) for each process and the average waiting time and Turn Around Time (assuming there is no I/O waiting for the processes) under FIFO scheduling.
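No worked answer is given here in the original; the following minimal C sketch (hosted C, names illustrative) computes the FCFS figures for the stated workload, with the resulting values in the closing comment:

```c
/* FCFS (FIFO) schedule: P1, P2, P3 arrive at t = 0 ms (in that order), P4 at t = 5 ms. */
#include <stdio.h>

int main(void)
{
    const char *name[] = { "P1", "P2", "P3", "P4" };
    int arrival[]      = { 0, 0, 0, 5 };     /* ms */
    int burst[]        = { 5, 10, 7, 2 };    /* ms */
    int n = 4, t = 0;
    double sum_wt = 0, sum_tat = 0;

    /* Processes are already listed in FIFO (arrival) order. */
    for (int i = 0; i < n; i++) {
        if (t < arrival[i]) t = arrival[i];  /* CPU idles until the process arrives */
        int start = t;
        t += burst[i];                       /* completion time                     */
        int tat = t - arrival[i];            /* turnaround = completion - arrival   */
        int wt  = start - arrival[i];        /* waiting = start - arrival           */
        printf("%s: waiting = %d ms, TAT = %d ms\n", name[i], wt, tat);
        sum_wt += wt; sum_tat += tat;
    }
    printf("Average waiting = %.2f ms, Average TAT = %.2f ms\n", sum_wt / n, sum_tat / n);
    /* Expected: P1 (0, 5), P2 (5, 15), P3 (15, 22), P4 (17, 19);
       average waiting = 9.25 ms, average TAT = 15.25 ms.          */
    return 0;
}
```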
3. Draw the Operating System architecture and explain the role of each component.
1. User Applications
Role: End-user programs (e.g., browsers, games) that request OS services.
4. Device Drivers
Role: Translate OS commands to hardware-specific instructions (e.g., printer drivers).
5. Interrupt Handler
Role: Responds to hardware/software events (e.g., keyboard input, errors).
6. Hardware Abstraction
Role: Hides hardware complexity, providing uniform interfaces to apps.
4. With a neat diagram explain the structure of a process, its memory organization, and the
various process states along with the transitions between these states.
1. Structure of a Process
A process consists of the following components:
Process Control Block (PCB): Kernel data structure storing process metadata (PID, state,
priority, registers, etc.).
Code Segment: Instructions to be executed (read-only).
Data Segment: Global and static variables.
Heap: Dynamically allocated memory (malloc(), new).
Stack: Temporary data (function calls, local variables).
CPU Registers: Program Counter (PC), stack pointer, status registers.
Code segment: stores executable instructions; its size is fixed (read-only).
Key States:
1. New: Process is being created.
2. Ready: Loaded in memory, awaiting CPU allocation.
3. Running: Instructions are executed on CPU.
4. Blocked/Waiting: Paused for I/O or resource.
5. Terminated: Process finishes or is killed.
State Transitions:
New → Ready: OS initializes PCB and resources.
Ready → Running: Scheduler assigns CPU.
Running → Ready: Time slice expires (preemption).
Running → Blocked: Process requests I/O.
Blocked → Ready: I/O completes.
Running → Terminated: Process exits.
5. Analyze the different types of non-preemptive scheduling algorithms, comparing their merits
and demerits in various system environments.
1. First-Come, First-Served (FCFS)
| Aspect | Details |
| --- | --- |
| Mechanism | Executes processes in the order they arrive in the ready queue. |
| Merits | Simple to implement. No starvation (fair for long processes). |
2. Shortest Job First (SJF)
| Aspect | Details |
| --- | --- |
| Mechanism | Picks the process with the shortest burst time next. |
3. Priority Scheduling
| Aspect | Details |
| --- | --- |
| Mechanism | Picks the ready process with the highest priority next. |
6. Three processes with process IDs P1, P2, P3 and estimated completion times of 12, 10, and 2 milliseconds respectively enter the ready queue together in the order P2, P3, P1. Process P4 with an estimated execution completion time of 4 milliseconds enters the ready queue after 8 milliseconds. Calculate the waiting time and Turn Around Time (TAT) for each process and the average waiting time and Turn Around Time (assuming there is no I/O waiting for the processes) under LIFO scheduling.
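A worked sketch, assuming non-preemptive LIFO (when the CPU becomes free, the most recently arrived ready process runs next):
- Arrivals: P2, P3, P1 at t = 0 ms (in that order); P4 at t = 8 ms. Bursts: P1 = 12, P2 = 10, P3 = 2, P4 = 4 ms.
- Execution order: P1 (0–12), P4 (12–16), P3 (16–18), P2 (18–28).
- Waiting time = start − arrival: P1 = 0, P4 = 12 − 8 = 4, P3 = 16, P2 = 18 ms.
- TAT = completion − arrival: P1 = 12, P4 = 16 − 8 = 8, P3 = 18, P2 = 28 ms.
- Average waiting time = (0 + 4 + 16 + 18) / 4 = 9.5 ms; Average TAT = (12 + 8 + 18 + 28) / 4 = 16.5 ms.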
UNIT-V
Short answer type questions
1. What is the key difference between a unidirectional and a bidirectional ‘Pipe’ in process communication?
A unidirectional pipe carries data in one direction only (one end is written, the other is read), so two-way communication needs a second pipe; a bidirectional pipe lets both ends send and receive data over the same channel.
A message queue is like a mailbox that stores messages temporarily in a First-In-First-Out (FIFO)
order. It helps processes or threads send and receive messages asynchronously or synchronously.
Message queue vs. shared memory:
| Aspect | Message Queue | Shared Memory |
| --- | --- | --- |
| Speed | Fast, but involves copying messages | Very fast (direct memory access) |
| Data Amount | Suitable for small messages | Suitable for large data |
| Synchronization | Less complex; synchronization is built in | Requires extra synchronization (e.g., semaphores) |
| Security | Safer, as there is no direct memory access | Risky without proper access control |
4. Choose the key parameters involved in memory-mapping and explain how each one affects the
mapping process.
Anonymous (unnamed) pipe vs. named pipe:
| Aspect | Anonymous Pipe | Named Pipe |
| --- | --- | --- |
| Visibility | Can only be used between related processes (e.g., parent and child) | Can be used between unrelated processes |
| Direction | Usually unidirectional | Can be unidirectional or bidirectional |
6. Apply your understanding of the Mailbox concept in inter-process communication to explain
how it works and provide an example of its use in an embedded system.
1. A server (publisher) task creates the mailbox and posts messages to it.
2. One or more clients (subscriber threads) wait to receive messages from it.
Example: In MicroC/OS-II, a sensor task (server) sends temperature data to a display task (client) via a mailbox.
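A minimal sketch of that example, assuming MicroC/OS-II's mailbox API (task names, the delay, and the placeholder reading are illustrative):

```c
#include "ucos_ii.h"   /* assumed MicroC/OS-II environment */

static OS_EVENT *TempMbox;              /* mailbox holding one message pointer */
static INT16U    temperature;           /* storage for the value being passed  */

void SensorTask(void *pdata)            /* "server": posts readings            */
{
    (void)pdata;
    for (;;) {
        temperature = 25;               /* read_sensor() in a real system      */
        OSMboxPost(TempMbox, &temperature);
        OSTimeDly(OS_TICKS_PER_SEC);    /* post roughly once per second        */
    }
}

void DisplayTask(void *pdata)           /* "client": waits for a reading       */
{
    INT8U err;
    (void)pdata;
    for (;;) {
        INT16U *t = (INT16U *)OSMboxPend(TempMbox, 0, &err);  /* block until posted */
        if (t != (void *)0) {
            /* show_on_display(*t); */
            (void)t;
        }
    }
}

/* During start-up: TempMbox = OSMboxCreate((void *)0); then create both tasks. */
```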
Deadlocks can be handled by ignoring, detecting and recovering, avoiding, or preventing them.
Detection uses resource graphs; recovery may involve killing or restarting processes.
Avoidance plans safe resource allocation; prevention blocks deadlock conditions.
Each method has trade-offs based on system needs and complexity.
8. Explain the concept of a race condition and explain how it can lead to data inconsistency in a
multi-process system.
A race condition occurs when multiple processes access and modify shared data at the same time.
Due to non-atomic operations like counter++, results may become inconsistent.
Context switches during these operations cause data overwrites or loss.
Using mutexes or semaphores can prevent race conditions.
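A hedged POSIX-threads sketch of the `counter++` race and its mutex fix (shown in hosted C rather than on a specific RTOS):

```c
/* Without the mutex, the two threads race on `counter++` (a read-modify-write),
   and the final value is often below 200000.                                    */
#include <pthread.h>
#include <stdio.h>

static long counter = 0;
static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

static void *worker(void *arg)
{
    (void)arg;
    for (int i = 0; i < 100000; i++) {
        pthread_mutex_lock(&lock);     /* make the read-modify-write atomic */
        counter++;
        pthread_mutex_unlock(&lock);
    }
    return NULL;
}

int main(void)
{
    pthread_t t1, t2;
    pthread_create(&t1, NULL, worker, NULL);
    pthread_create(&t2, NULL, worker, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    printf("counter = %ld\n", counter);   /* 200000 with the mutex in place */
    return 0;
}
```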
9. Explain signaling as a method for inter-process communication and explain how it is used in
RTX51 Tiny OS.
Signaling is a simple IPC method to send alerts between processes without sending data.
In RTX51 Tiny OS, os_send_signal() triggers a signal to a task, and os_wait() waits for it.
It supports asynchronous task notification, especially in lightweight systems.
It is useful for ISR-to-task communication in embedded applications.
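A minimal RTX51 Tiny sketch of this signaling pattern (task numbers and the delay are illustrative):

```c
#include <rtx51tny.h>    /* Keil RTX51 Tiny API */

/* Task 0 sends a signal periodically; task 1 waits for it. */

void job0 (void) _task_ 0 {
    os_create_task (1);              /* start the waiting task                */
    while (1) {
        os_wait (K_TMO, 50, 0);      /* delay ~50 system ticks                */
        os_send_signal (1);          /* notify task 1 that an event occurred  */
    }
}

void job1 (void) _task_ 1 {
    while (1) {
        os_wait (K_SIG, 0, 0);       /* block until a signal arrives          */
        /* handle the event here (no data is transferred, only the alert)     */
    }
}
```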
Long Answer Questions
1. With a neat diagram, explain the concept of Remote Procedure Call (RPC) in inter-process
communication.
RPC enables communication between processes on the same or different CPUs over a
network.
Uses Interface Definition Language (IDL) for standardization (MIDL for Windows).
Communication happens via sockets with port numbers, and security is handled using
IDs and encryption methods like DES/3DES.
2. With the help of a neat diagram, explain the Dining Philosophers Problem and provide a
solution using semaphores to avoid deadlock.
Dining Philosophers Problem – Explanation
Imagine 5 philosophers sitting around a circular table. Each has a plate of spaghetti and
needs two forks (left and right) to eat.
A fork is placed between each pair of philosophers (5 forks in total).
Philosophers alternate between thinking and eating. To eat, they need to pick up both forks.
Problem arises when all philosophers pick up their left fork at the same time and wait for the
right fork. Since no one puts down a fork, everyone waits forever → Deadlock.
Solutions:
• Imposing Rules: Philosophers must release a fork if they cannot acquire both within a fixed
time and wait before retrying.
• Using Semaphores (Mutex): Before picking up forks, a philosopher checks if neighbors are using
them. If forks are unavailable, they wait, preventing deadlocks and ensuring fair resource
allocation.
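A hedged sketch of one standard semaphore-based, deadlock-free variant (resource ordering: each philosopher picks up the lower-numbered fork first), written with POSIX threads and semaphores for illustration:

```c
#include <pthread.h>
#include <semaphore.h>
#include <stdio.h>
#include <unistd.h>

#define N 5
sem_t fork_sem[N];               /* one binary semaphore per fork */

void *philosopher(void *arg)
{
    int i = *(int *)arg;
    int left = i, right = (i + 1) % N;
    int first  = left < right ? left  : right;   /* lower-numbered fork  */
    int second = left < right ? right : left;    /* higher-numbered fork */
    for (int round = 0; round < 3; round++) {
        /* think */
        sem_wait(&fork_sem[first]);    /* ordered acquisition breaks the */
        sem_wait(&fork_sem[second]);   /* circular-wait condition        */
        printf("Philosopher %d is eating\n", i);
        usleep(1000);
        sem_post(&fork_sem[second]);
        sem_post(&fork_sem[first]);
    }
    return NULL;
}

int main(void)
{
    pthread_t t[N];
    int id[N];
    for (int i = 0; i < N; i++) sem_init(&fork_sem[i], 0, 1);
    for (int i = 0; i < N; i++) { id[i] = i; pthread_create(&t[i], NULL, philosopher, &id[i]); }
    for (int i = 0; i < N; i++) pthread_join(t[i], NULL);
    return 0;
}
```

Because forks are always acquired in a fixed global order, at least one philosopher can obtain both forks, so the circular wait that causes deadlock cannot form.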
3. Illustrate the concept of deadlock in an operating system, identify the conditions that lead to a
deadlock situation, and discuss the methods that can be used to prevent it.
What is Deadlock?
A deadlock is a state where two or more processes are blocked forever, each waiting for the other to release a resource.
Deadlock can arise only when four conditions hold simultaneously: mutual exclusion, hold and wait, no preemption, and circular wait; prevention techniques work by ensuring that at least one of these conditions cannot occur.
4. Infer the role of device drivers in an embedded operating system in managing communication
between user applications and hardware peripherals.
Device drivers abstract low-level hardware details (e.g., registers, memory addresses, interrupt
handling) from user applications.
Applications interact with hardware through standardized OS APIs, avoiding direct hardware
manipulation.
Drivers initialize hardware peripherals (e.g., configuring I/O ports, setting up communication
protocols like UART/SPI).
Interrupt Handling
For complex tasks, ISRs delegate processing to Interrupt Service Threads (ISTs) to maintain system
responsiveness.
Facilitate data exchange between applications and hardware (e.g., reading from a sensor, writing to a
display).
Use IPC mechanisms (e.g., shared memory, message queues) to pass data to user applications.
Prevent applications from directly accessing hardware, reducing risks of crashes or corruption.
Kernel-mode drivers (high-performance) and user-mode drivers (safer) balance speed and reliability.
Provide a uniform interface for diverse hardware, enabling OS portability across devices.
5. Describe the methods used to choose an RTOS for an embedded system in detail.
1. Functional Requirements
a) Processor Support
Ensure the RTOS supports the target processor architecture (e.g., ARM, x86, RISC-V).
Some RTOSs are optimized for specific microcontrollers (e.g., FreeRTOS for ARM Cortex-M).
b) Memory Requirements
ROM/Flash: Needed for storing the OS kernel and services.
RAM: Required for runtime tasks and dynamic memory.
c) Real-Time Capabilities
Hard vs. Soft Real-Time:
o Hard RTOS (e.g., VxWorks, QNX): Guarantees strict deadline adherence (critical for
aerospace, medical devices).
o Soft RTOS (e.g., FreeRTOS, Zephyr): Tolerates minor delays (used in IoT, consumer
electronics).
Scheduling Policies: Check for priority-based preemption, round-robin, or time-
slicing support.
d) Kernel and Interrupt Latency
Low interrupt latency is crucial for time-sensitive applications (e.g., motor control, robotics).
Some RTOSs (e.g., RTX, µC/OS-II) minimize latency by reducing kernel overhead.
e) IPC and Task Synchronization: Supported mechanisms: Message queues, mailboxes,
semaphores, mutexes.
f) Modularization Support: Ability to include/exclude OS components (e.g., file system,
networking stack).
2. Non-Functional Requirements
a) Custom vs. Off-the-Shelf
Commercial RTOS (e.g., QNX, VxWorks): High reliability, vendor support, but costly.
Open-Source RTOS (e.g., FreeRTOS, Zephyr): Free, customizable, but lacks formal support.
b) Cost Considerations: Licensing fees (per-unit royalty vs. one-time purchase).
c) Development & Debugging Tools: Availability of IDEs, simulators, profilers, and trace tools.
d) Ease of Use: Documentation quality, community support, and learning curve.
e) After-Sales Support: Vendor-provided bug fixes, updates, and technical assistance.
6. Apply the concept of the Producer-Consumer problem to design a solution for a bounded
buffer system, considering synchronization issues and potential deadlock scenarios. Explain
how you would implement the solution in an embedded system.
Producer-Consumer Problem:
A classic synchronization issue where a producer generates data and places it in a shared
buffer, while a consumer retrieves and processes the data.
The challenge arises due to the difference in processing speeds of the two.
Potential Issues:
If the producer generates data faster than the consumer consumes it, buffer overrun
occurs, leading to data loss.
If the consumer processes data faster than the producer generates it, buffer underrun
occurs, leading to inaccurate data retrieval.
Scheduling Impact:
If the producer is scheduled more frequently, it may overwrite data before the consumer
reads it.
If the consumer is scheduled more frequently, it may read old or incomplete data.
Solutions:
Sleep-and-wake-up mechanisms together with synchronization primitives such as semaphores, mutexes, and monitors coordinate access to the shared buffer, ensuring proper data flow (a bounded-buffer sketch follows).
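A minimal bounded-buffer sketch using two counting semaphores and a mutex (POSIX primitives shown for illustration; an RTOS such as FreeRTOS or MicroC/OS-II provides equivalent semaphore and mutex services, and the buffer size and names here are illustrative):

```c
#include <pthread.h>
#include <semaphore.h>

#define BUF_SIZE 8

static int buffer[BUF_SIZE];
static int in = 0, out = 0;

static sem_t empty_slots;        /* counts free slots, starts at BUF_SIZE */
static sem_t filled_slots;       /* counts filled slots, starts at 0      */
static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

void init_buffer(void)
{
    sem_init(&empty_slots, 0, BUF_SIZE);
    sem_init(&filled_slots, 0, 0);
}

void produce(int item)
{
    sem_wait(&empty_slots);              /* block if the buffer is full  */
    pthread_mutex_lock(&lock);
    buffer[in] = item;
    in = (in + 1) % BUF_SIZE;
    pthread_mutex_unlock(&lock);
    sem_post(&filled_slots);             /* signal: data available       */
}

int consume(void)
{
    sem_wait(&filled_slots);             /* block if the buffer is empty */
    pthread_mutex_lock(&lock);
    int item = buffer[out];
    out = (out + 1) % BUF_SIZE;
    pthread_mutex_unlock(&lock);
    sem_post(&empty_slots);              /* signal: free slot            */
    return item;
}
```

Waiting on the counting semaphore before taking the mutex avoids the classic deadlock of holding the lock while waiting for a free or filled slot, and the semaphore counts prevent both buffer overrun and underrun.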