
=> For example, *f(int) is a declarator specifying that f is a "function ... returning a pointer ...". In contrast, (*f)(int) specifies that f is a "pointer to a function ...".
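
For illustration, a minimal sketch with int as the base type of the declaration (the names f and g are made up):

int *f(int);        /* f is a function taking an int and returning a pointer to int    */
int (*g)(int);      /* g is a pointer to a function taking an int and returning an int */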
=> const char * myPtr = &char_A;            /* pointer to const data: *myPtr cannot be modified    */
char * const myPtr = &char_A;               /* const pointer: myPtr itself cannot be re-pointed    */
const char * const myPtr = &char_A;         /* const pointer to const data: neither may be changed */
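
A short sketch of what each form allows, assuming char_A and char_B are ordinary char variables (the names p1, p2 and const_demo are made up):

char char_A = 'A', char_B = 'B';

void const_demo(void)
{
    const char *p1 = &char_A;
    char *const p2 = &char_A;

    p1 = &char_B;           /* OK: p1 itself may be re-pointed              */
    /* *p1 = 'C'; */        /* error: cannot write through pointer-to-const */
    *p2 = 'C';              /* OK: the pointed-to char is writable          */
    /* p2 = &char_B; */     /* error: p2 itself is const                    */
}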

=> #include <stdint.h>

typedef volatile uint32_t vuint32_t;

struct DSPI_tag {
    union {
        vuint32_t R;                  /* access the whole 32-bit register           */
        struct {
            vuint32_t MSTR:1;
            vuint32_t CONT_SCKE:1;
            vuint32_t DCONF:2;
            vuint32_t PCSIS1:1;
            vuint32_t PCSIS0:1;
            vuint32_t MDIS:1;
            vuint32_t:1;              /* reserved; bit-field list abridged here, so  */
        } B;                          /* field positions are not exact               */
    } MCR;                            /* Module Configuration Register @ baseaddress + 0x00 */
    /* ... other DSPI registers omitted ... */
};

#define DSPI_B (*(volatile struct DSPI_tag *) 0xFFF94000)

/* e.g. inside an initialisation routine: */
DSPI_B.MCR.R = 0x873F0102;            /* write the whole register in one access */
DSPI_B.MCR.B.MDIS = 0;                /* write a single bit field               */

=> What is interrupt latency?

First of all, we have to understand what happens when a device generates an interrupt.
1. Consider a processor with a 3-stage pipeline: in one stage it is fetching an instruction, in the second it is decoding, and in the third it is executing.
2. When an interrupt is generated, the processor needs to save its current state and jump to the ISR written to handle that particular interrupt.
3. To do this, the processor generally has to flush its instruction pipeline (the fetch and decode stages; execution is an atomic operation on many architectures). This is known as hardware latency and we cannot reduce it.
4. The area we can improve is software latency. The question is how? On ARM, the processor saves its current state by copying the CPSR (Current Program Status Register) into the SPSR, saving the PC into the link register, and pushing other registers onto the stack.
5. Saving the other registers onto the stack means push operations, and that is where we can choose to reduce work so that fewer push operations happen during this context switch. But how?
6. The answer is that the processor provides two interrupt modes: a fast mode (FIQ) and a normal mode (IRQ). The process explained above is the normal mode, where on an interrupt you get two additional registers (the SPSR and the link register) to hold the CPSR and PC values; everything else you need to save goes on the stack.
7. If you use FIQ mode, you get a fresh, banked copy of registers r8 to r14. These banked registers are only available in this mode, so instead of saving state on the stack you can keep your working state in them. We can therefore reduce interrupt latency by using FIQ mode.

7. What is a NULL pointer and what is its use?

A "null pointer" is a pointer that has the value zero (NULL). Conceptually, when a pointer has that null value it is not pointing anywhere. Example: void *pointer = NULL;
USE - You can use a null pointer as a placeholder to remind yourself (or, more importantly, to help your program remember) that a pointer variable does not point anywhere at the moment, and that you should not use the "contents of" (dereference) operator on it.
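
A minimal sketch of that placeholder use (the function print_name is made up for illustration):

#include <stddef.h>
#include <stdio.h>

void print_name(const char *name)
{
    if (name == NULL) {                 /* the pointer points nowhere yet          */
        printf("(no name set)\n");
        return;                         /* never apply the "contents of" operator  */
    }
    printf("%s\n", name);
}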

8. What is a void pointer and what is its use?

A void pointer is a specific pointer type - void * - a pointer that points to some data location in storage which doesn't have any specific type.
USE - A void pointer is usually a way of deferring or turning off compiler type checking, for instance when you want to return a pointer to one type, or an unknown type, to be used as another type. For instance, malloc() returns a void pointer to a type-less chunk of memory, which you can later cast to use as a pointer to bytes, short ints, double floats, or whatever type you need.
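
A minimal sketch of the malloc() case (the variable names are made up for illustration):

#include <stdlib.h>

int main(void)
{
    void *block = malloc(4 * sizeof(int));  /* malloc returns a type-less void *  */
    int  *nums  = (int *)block;             /* reinterpret it as the type we need */
    if (nums != NULL)
        nums[0] = 42;
    free(block);
    return 0;
}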

Swap two numbers without using a third variable.

1) XOR swap (written as separate statements, since modifying x twice in one expression such as x^=y^=x^=y is undefined behaviour in C):
x ^= y;
y ^= x;
x ^= y;

2) Arithmetic swap:
a = a + b;
b = a - b;
a = a - b;

3) Through pointers (this variant still uses a temporary):
int *ptr1, *ptr2;
int a;
a = *ptr1;
*ptr1 = *ptr2;
*ptr2 = a;
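
A quick runnable check of the XOR variant, assuming the sample values 3 and 7:

#include <stdio.h>

int main(void)
{
    int x = 3, y = 7;
    x ^= y;                     /* x now holds x XOR y      */
    y ^= x;                     /* y becomes the original x */
    x ^= y;                     /* x becomes the original y */
    printf("%d %d\n", x, y);    /* prints: 7 3              */
    return 0;
}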

Assume that 0x7600 is an address. How will you store 50 in that address?
*((int *) 0x7600) = 50;    /* if 0x7600 is a hardware register, cast to (volatile int *) instead */

How to find whether the machine is little endian or big endian?

#include <stdio.h>
int main()
{
    unsigned int n = 1;
    char *p;
    p = (char *)&n;                     /* look at the first byte of n  */
    if (*p == 1)
        printf("Little Endian\n");      /* least significant byte first */
    else if (*(p + sizeof(int) - 1) == 1)
        printf("Big Endian\n");         /* most significant byte first  */
    else
        printf("Surprise output!!!!\n");
    return 0;
}
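
An alternative sketch that inspects the bytes through a union instead of a cast (the variable name u is made up):

#include <stdio.h>

int main(void)
{
    union {
        unsigned int  i;
        unsigned char c[sizeof(unsigned int)];
    } u = { 1 };

    if (u.c[0] == 1)
        printf("Little Endian\n");      /* least significant byte stored first */
    else
        printf("Big Endian\n");
    return 0;
}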

Reverse a single linked list


Node *Reverse (Node *p)
{
    Node *pr = NULL;                /* the part of the list already reversed */
    while (p != NULL)
    {
        Node *tmp = p->next;        /* remember the rest of the list         */
        p->next = pr;               /* point the current node backwards      */
        pr = p;
        p = tmp;
    }
    return pr;                      /* new head of the reversed list         */
}

What is Node?
typedef struct node {
    int data;
    struct node *next;
} Node;                             /* matches the Node type used by Reverse() above */
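
A small driver sketch showing Reverse() in use, assuming the typedef above (the build-and-print code is made up for illustration):

#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    Node *head = NULL;

    /* build the list 1 -> 2 -> 3 by pushing at the front */
    for (int i = 3; i >= 1; i--) {
        Node *n = malloc(sizeof(Node));
        n->data = i;
        n->next = head;
        head = n;
    }

    head = Reverse(head);
    for (Node *p = head; p != NULL; p = p->next)
        printf("%d ", p->data);     /* prints: 3 2 1 */
    printf("\n");
    return 0;
}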

11. Can we use any function inside an ISR?

You can call a function from an ISR if it is only the ISR that calls the function; then it can be considered part of the ISR. If some other part of the program also calls the function, then it must be a re-entrant function, because the ISR might interrupt the program while it is inside that function. A re-entrant function is a function that does not use global or static variables and does not call other non-re-entrant functions. Some compilers will flag an error if an ISR calls a function that is also called from another part of the program.

Re-entrancy

Some of the answers above aren't totally clear, so I hope to clarify some of that here. Re-entrancy is only an issue if a function is shared across multiple execution contexts. I could give a formal definition, but I think it is better to demonstrate with an example. Say we have the following (pseudo-)code:

int x;

void foo(int z)
{
    x = z;
}

void ISR()
{
    x++;
}
Now, depending upon the architecture, this code could create strange values for x depending
upon when foo() and ISR() run. For example, if x is a 32-bit integer and we are on an 8-bit
platform, it usually requires several instructions to update x. If the interrupt occurred in the
middle of that update to x, the contents of x could be corrupted.
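
For illustration, one common way to make foo() safe on a bare-metal target is to briefly disable interrupts around the non-atomic update; disable_interrupts() and enable_interrupts() below are hypothetical placeholders for whatever your platform provides:

#include <stdint.h>

void disable_interrupts(void);      /* hypothetical platform intrinsics */
void enable_interrupts(void);

volatile uint32_t x;                /* shared between the main code and the ISR */

void foo(uint32_t z)
{
    disable_interrupts();
    x = z;                          /* the multi-instruction update can no longer be interrupted */
    enable_interrupts();
}

void ISR(void)
{
    x++;                            /* can never observe a half-written x now */
}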

13. Can we put a breakpoint inside an ISR?


Yes - in an emulator.
Otherwise, no. It's difficult to pull off, and a bad idea in any case. ISRs are (usually) supposed to
work with the hardware, and hardware can easily behave very differently when you leave a gap
of half a second between each instruction.
Set up some sort of logging system instead.
ISRs also ungracefully "steal" the CPU from other processes, so many operating systems
recommend keeping your ISRs extremely short and doing only what is strictly necessary (such as
dealing with any urgent hardware stuff, and scheduling a task that will deal with the event
properly). So in theory, ISRs should be so simple that they don't need to be debugged.
If it's hardware behaviour that's the problem, use some sort of logging instead, as I've suggested.
If the hardware doesn't really mind long gaps of time between instructions, then you could just
write most of the driver in user space - and you can use a debugger on that!
15. What is the top half & bottom half of a kernel?
Linux (along with many other systems) resolves this problem by splitting the
interrupt handler into two halves. The so-called top half is the routine that
actually responds to the interrupt—the one you register with request_irq. The
bottom half is a routine that is scheduled by the top half to be executed later, at a
safer time. The big difference between the top-half handler and the bottom half is
that all interrupts are enabled during execution of the bottom half—that's why it
runs at a safer time. In the typical scenario, the top half saves device data to a
device-specific buffer, schedules its bottom half, and exits: this operation is very
fast. The bottom half then performs whatever other work is required, such as
awakening processes, starting up another I/O operation, and so on. This setup
permits the top half to service a new interrupt while the bottom half is still
working.
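
For illustration, a minimal Linux-flavoured sketch of this split, using request_irq for the top half and a workqueue as one possible bottom-half mechanism; the device-specific names (my_top_half, my_bh_work, irq_number, "my_device") are hypothetical and kernel APIs vary between versions:

#include <linux/interrupt.h>
#include <linux/workqueue.h>

static struct work_struct my_bottom_half;

static void my_bh_work(struct work_struct *work)
{
    /* bottom half: runs later with interrupts enabled; do the slow processing here */
}

static irqreturn_t my_top_half(int irq, void *dev_id)
{
    /* top half: acknowledge the device, grab the urgent data, then defer the rest */
    schedule_work(&my_bottom_half);
    return IRQ_HANDLED;
}

static int my_init(int irq_number)
{
    INIT_WORK(&my_bottom_half, my_bh_work);
    return request_irq(irq_number, my_top_half, 0, "my_device", NULL);
}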
16. Difference between RISC and CISC processors.

CISC                                                RISC
Emphasis on hardware                                Emphasis on software
Includes multi-clock complex instructions           Single-clock, reduced instructions only
Memory-to-memory: "LOAD" and "STORE"                Register-to-register: "LOAD" and "STORE"
incorporated in the instructions                    are independent instructions
Small code size, high cycles per instruction        Low cycles per instruction, large code size
Transistors used for storing complex instructions   More transistors spent on registers

Reference: http://www-cs-faculty.stanford.edu/~eroberts/courses/soco/projects/2000-01/risc/risccisc/

17. What is an RTOS?

18. What is the difference between a hard real-time and a soft real-time OS?

A real-time operating system (RTOS) is an operating system (OS) intended to serve real-time
application requests.
A key characteristic of an RTOS is the level of its consistency concerning the amount of time it
takes to accept and complete an application's task; the variability is jitter.[1] A hard real-time
operating system has less jitter than a soft real-time operating system. The chief design goal is
not high throughput, but rather a guarantee of a soft or hard performance category. An RTOS
that can usually or generally meet a deadline is a soft real-time OS, but if it can meet a deadline
deterministically it is a hard real-time OS.
An RTOS has an advanced algorithm for scheduling. Key factors in a real-time OS are minimal
interrupt latency and minimal thread-switching latency; a real-time OS is valued more for how
quickly or how predictably it can respond than for the amount of work it can perform in a given
period of time.
As the name suggests, there is a deadline associated with tasks, and an RTOS adheres to this
deadline because missing a deadline can cause effects ranging from undesired to catastrophic. An
airbag controller is a classic example where an RTOS missing a deadline has a catastrophic effect.

19. What type of scheduling is there in an RTOS?

A scheduler is the heart of every RTOS. It provides the algorithms to select the task for execution. Three
common scheduling algorithms are

> Cooperative scheduling


> Round-robin scheduling
> Preemptive scheduling

In typical designs, a task has three states:


1. Running (executing on the CPU);
2. Ready (ready to be executed);
3. Blocked (waiting for an event, I/O for example).

20. What is priority inversion?

Priority inversion is a situation where a lower-priority task runs while blocking a higher-priority task that is waiting for a resource (e.g. a mutex). For example, consider three tasks A, B and C, with A the highest-priority task and C the lowest. A and C share a resource. Look at the sequence of context switches:
A blocks waiting for I/O.
C is ready to run, so C starts running and locks the shared resource.
B becomes ready to run. B does not use the resource shared by A and C, so B preempts C and starts executing.
Now A becomes ready to run, but the resource it needs is still locked by C.
Only after B finishes can C continue, release the resource, and finally let A run. A medium-priority task (B) has effectively delayed the highest-priority task (A).

21. What is priority inheritance?

To avoid this situation, the priority of C is temporarily raised to that of A (C "inherits" A's priority). C then becomes the highest-priority ready task and gets the CPU, preempting task B. Task C runs, finishes with the resource it shares with task A, and releases it. As soon as task C releases the shared resource, its priority drops back to its original level. Now A is again the highest-priority ready task and the CPU starts running task A.
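
For illustration, POSIX exposes priority inheritance as a mutex attribute. A minimal sketch, assuming the platform supports _POSIX_THREAD_PRIO_INHERIT (the names shared_lock and init_shared_lock are made up); many RTOSes offer an equivalent option on their mutex objects:

#include <pthread.h>

pthread_mutex_t shared_lock;

void init_shared_lock(void)
{
    pthread_mutexattr_t attr;

    pthread_mutexattr_init(&attr);
    /* a low-priority task holding this mutex temporarily inherits the
       priority of any higher-priority task that blocks on it */
    pthread_mutexattr_setprotocol(&attr, PTHREAD_PRIO_INHERIT);
    pthread_mutex_init(&shared_lock, &attr);
    pthread_mutexattr_destroy(&attr);
}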

What is a semaphore?
A semaphore is a location in memory whose value can be tested and set by more than one process. The semaphore is a non-negative value maintained by the kernel that reflects the number of shared resources available at any given point in time. If a process occupies a shared resource, the value of the semaphore is decremented (the P operation) by one, and so on until it becomes zero when all resources are occupied. When a process releases the resource, the semaphore value is incremented (the V operation).

Suppose a library has 10 identical study rooms, intended to be used by one student at a time. To prevent disputes, students must request a room from the front counter if they wish to use a study room. When a student has finished using a room, the student must return to the counter and indicate that one room has become free. If no rooms are free, students wait at the counter until someone relinquishes a room.

The librarian at the front desk does not keep track of which room is occupied, only the number of free rooms available. When a student requests a room, the librarian decreases this number; when a student releases a room, the librarian increases it. Once access to a room is granted, the room can be used for as long as desired, and so it is not possible to book rooms ahead of time.

In this scenario the front desk represents a semaphore, the rooms are the resources, and the students represent processes. The value of the semaphore is initially 10. When a student requests a room, he or she is granted access and the value of the semaphore is changed to 9. After the next student comes it drops to 8, then 7, and so on. If someone requests a room and the resulting value of the semaphore would be negative,[2] they are forced to wait. When multiple people are waiting, they will either wait in a queue, or use round-robin scheduling and race back to the counter when someone releases a room (depending on the nature of the semaphore).
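
A small sketch of the library analogy with a POSIX counting semaphore (the function names are made up for illustration):

#include <semaphore.h>

sem_t rooms;                        /* the front desk: counts free study rooms */

void library_open(void)  { sem_init(&rooms, 0, 10); }   /* 10 rooms free initially    */
void request_room(void)  { sem_wait(&rooms); }          /* P: blocks while count is 0 */
void release_room(void)  { sem_post(&rooms); }          /* V: one more room is free   */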
What is the difference between a binary semaphore and a mutex?

When the number of resources a semaphore protects is greater than 1, it is called a counting semaphore. When it controls one resource, it is called a binary (Boolean) semaphore. A binary semaphore is equivalent to a mutex.

Mutex:

Is a key to a toilet. One person can have the key - occupy the toilet - at the time. When finished, the
person gives (frees) the key to the next person in the queue.

A mutex object only allows one thread into a controlled section, forcing other threads which attempt to
gain access to that section to wait until the first thread has exited from that section.

Semaphore:

Is the number of free identical toilet keys. For example, say we have four toilets with identical locks and keys. The semaphore count - the count of keys - is set to 4 at the beginning (all four toilets are free), then the count value is decremented as people come in. If all toilets are full, i.e. there are no free keys left, the semaphore count is 0. Now, when e.g. one person leaves the toilet, the semaphore is increased to 1 (one free key) and given to the next person in the queue.

Officially: "A semaphore restricts the number of simultaneous users of a shared resource up to a
maximum number. Threads can request access to the resource (decrementing the semaphore), and can
signal that they have finished using the resource (incrementing the semaphore)." Ref: Symbian
Developer Library

49. How to implement a watchdog (WD) timer in software?


A watchdog timer is a hardware circuit that counts input pulses up to a certain limit. The input pulses
usually come from a very reliable clock source such as the main system clock generator. If the limiting
count is reached, the circuit generates an output signal that is treated as a fault and can be used to
initiate recovery. The fault signal is usually tied into the system reset circuitry and causes the system
hardware and software to reset. To keep the counter from reaching the limit, the software must toggle
another input signal to reset the counter. This is called retriggering the watchdog timer, and periodic
retriggering keeps the watchdog timer from expiring and resetting the system.

If the watchdog timer expires, it is because the software hasn't retriggered it often enough. The
software might be stuck in a loop somewhere, or it might have crashed altogether. In any event, it is a
problem, and you have to recover.
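
Since the question asks about a software implementation, here is a minimal sketch of the same idea without a watchdog peripheral: a periodic timer interrupt counts up, the main loop keeps clearing the count, and an overflow triggers recovery. All names (wdt_kick, periodic_timer_isr, system_reset, WDT_LIMIT_TICKS) are hypothetical:

#include <stdint.h>

#define WDT_LIMIT_TICKS 100u        /* this many timer ticks without a kick => fault */

void system_reset(void);            /* hypothetical: jump to reset / recovery code */

static volatile uint32_t wdt_count;

void wdt_kick(void)                 /* call regularly from the main loop */
{
    wdt_count = 0;
}

void periodic_timer_isr(void)       /* fires at a fixed rate */
{
    if (++wdt_count >= WDT_LIMIT_TICKS)
        system_reset();
}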

http://gelliphanindraviswanadhaprasad.org/interview-questions/embedded-interview-questions/

To study
1. RISC vs CISC
2. typedef
3. Difference between union and structure
4. Hardware latency
5. What is re-entrant?
