RTS Slides 1
Fail-safe state: a state of the system which, if entered when the system fails, results in no
damage.
Safety-critical system: a system whose failure can cause severe damage.
How to achieve high reliability
Steps to achieve highly reliable software
1. Error avoidance
2. Error detection and removal
3. Fault tolerance
Hardware Fault Tolerance
Types of Timing Constraints
1. Delay Constraint
2. Duration Constraint
3. Deadline constraint
Deadline Constraint: captures the permissible maximum
separation between two events e1 and e2, i.e. the second event must follow the
first event within the permissible maximum separation time:
∆ = t(e2) − t(e1) ≤ d,
where t(e1) and t(e2) are the timestamps of e1 and e2, d is the deadline,
and ∆ is the actual separation in time between the occurrences of the two events.
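As a minimal illustration (the timestamp values and the check function below are assumptions, not part of the slides), the deadline constraint can be evaluated directly from the two event timestamps:

```c
#include <stdio.h>
#include <stdbool.h>

/* Deadline constraint: the separation between e1 and e2 must not exceed d. */
static bool deadline_met(double t_e1, double t_e2, double d)
{
    double delta = t_e2 - t_e1;   /* actual separation between the two events */
    return delta <= d;            /* satisfied iff delta <= d                 */
}

int main(void)
{
    /* Example: e1 at t = 10.0 ms, e2 at t = 14.5 ms, deadline d = 5 ms. */
    printf("constraint met: %s\n", deadline_met(10.0, 14.5, 5.0) ? "yes" : "no");
    return 0;
}
```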
Duration Constraint: specifies the time period over which the
event acts. It can be either a minimum or a maximum duration.
Modeling timing constraints
A model of the timing constraints can serve as a formal specification of the system.
Finite State Machine (FSM)
Used to model traditional systems.
At any point of time, the system can be in exactly one of its states.
A state is represented by a circle.
A state change is called a state transition.
A transition from one state to another is represented by
drawing a directed arc from the source state to the destination state.
Extended Finite State Machine (EFSM)
Used to model timing constraints.
It extends the FSM by incorporating the action of setting a timer
and the event of a timer expiring.
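A small sketch of how the timer extension works in practice (the states, events, tick-driven timer, and time units below are illustrative assumptions, not taken from the slides):

```c
#include <stdio.h>

/* Toy EFSM: a request must be answered before a timer expires. */
typedef enum { IDLE, WAITING, DONE, TIMED_OUT } state_t;
typedef enum { EV_REQUEST, EV_RESPONSE, EV_TIMER_EXPIRED } event_t;

static int timer = 0;                 /* remaining ticks; 0 means not running */

static state_t step(state_t s, event_t ev)
{
    switch (s) {
    case IDLE:
        if (ev == EV_REQUEST) { timer = 5; return WAITING; } /* set timer on transition */
        break;
    case WAITING:
        if (ev == EV_RESPONSE)      { timer = 0; return DONE; }
        if (ev == EV_TIMER_EXPIRED) { return TIMED_OUT; }    /* timing constraint violated */
        break;
    default:
        break;
    }
    return s;
}

int main(void)
{
    state_t s = step(IDLE, EV_REQUEST);
    /* Each loop iteration models one clock tick; expiry raises EV_TIMER_EXPIRED. */
    while (s == WAITING) {
        if (timer > 0 && --timer == 0)
            s = step(s, EV_TIMER_EXPIRED);
    }
    printf("final state: %d\n", s);   /* prints 3 (TIMED_OUT) since no response arrives */
    return 0;
}
```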
SS (stimulus-stimulus) constraints
RS (response-stimulus) constraints
SR (stimulus-response) constraints
RR (response-response) constraints
Real-Time Task (RTT) Scheduling
Task instance: an instance of a task is generated when some specific event occurs
Relative deadline and absolute deadline
The absolute deadline of a task is the absolute time value by which the results from
the task are expected.
The relative deadline is the time interval between the start of the task and the
instant at which its deadline occurs.
Response time: the time a task takes to produce its results, measured from the instant the task arrives.
Task precedence: a task is said to precede another task if
the first task must complete before the second task can start.
Data sharing
Types of Real-Time Tasks
Periodic tasks
- Time-driven.
-repeats after a certain fixed time interval
- represented by the tuple Ti = (pi, ei, di): its period, worst-case execution time, and deadline
E.g.: monitoring temperature of a patient in an ICU.
Aperiodic tasks
-Event-driven.
-arise at random instants
-E.g.: Task activated upon detecting change in patient’s condition.
Sporadic Tasks
Recurs at random instants
Represented by the tuple Ti = (ei, gi, di): worst-case execution time, minimum separation between instances, and deadline
E.g: emergency message sending
pi: task period
ai: arrival time
ri: ready time
di: deadline
gi: minimum separation between consecutive instances of a task
ei: worst-case execution time
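These parameters can be collected into a simple record; the struct below is only an illustrative grouping, not a prescribed representation:

```c
/* Illustrative grouping of the task parameters listed above. */
typedef struct {
    double p;   /* pi: task period (periodic tasks)                     */
    double a;   /* ai: arrival time                                     */
    double r;   /* ri: ready time                                       */
    double d;   /* di: deadline                                         */
    double g;   /* gi: minimum separation between consecutive instances */
    double e;   /* ei: worst-case execution time                        */
} rt_task_t;
```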
RTT scheduling basic concepts
• Proficient scheduler
• Optimal scheduler
• Scheduling points
• Preemptive scheduler
• Utilization
Classification of RTT Scheduling algorithms
❖ Another classification is based on task acceptance test
1. Planning Based
2. Best Effort
Schedulability test
1. Necessary condition: the total CPU utilization due to all the tasks in
the task set must not exceed 1.
2. Sufficient condition (for RMA): the total utilization must not exceed the Liu-Layland bound n(2^(1/n) − 1), where n is the number of tasks.
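A compact sketch of both checks (the task parameters are made-up placeholders; the sufficient check is the Liu-Layland utilization bound for RMA):

```c
#include <stdio.h>
#include <math.h>

/* Necessary condition: total utilization must not exceed 1.
 * Sufficient condition (RMA): utilization <= n * (2^(1/n) - 1). */
static void rma_schedulability(const double e[], const double p[], int n)
{
    double u = 0.0;
    for (int i = 0; i < n; i++)
        u += e[i] / p[i];                          /* utilization of task i */

    double bound = n * (pow(2.0, 1.0 / n) - 1.0);  /* Liu-Layland bound     */

    printf("U = %.3f, bound = %.3f\n", u, bound);
    if (u > 1.0)
        printf("fails the necessary condition: not schedulable\n");
    else if (u <= bound)
        printf("passes the sufficient condition: schedulable under RMA\n");
    else
        printf("inconclusive: an exact (response-time) analysis is needed\n");
}

int main(void)
{
    double e[] = {20, 30, 50};     /* worst-case execution times (ms), assumed */
    double p[] = {100, 150, 300};  /* task periods (ms), assumed               */
    rma_schedulability(e, p, 3);   /* U = 0.567 <= 0.780, so schedulable       */
    return 0;
}
```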
Advantages of RMA
1. Transient overload handling: when a lower priority
task does not complete within its planned completion time, it cannot cause any
higher priority task to miss its deadline.
Disadvantages
1. It is difficult to support aperiodic and sporadic tasks.
2. It is not optimal when task periods and deadlines
differ.
Deadline Monotonic Algorithm (DMA)
A variant of RMA that assigns priorities to tasks based on
their deadlines rather than their periods.
It assigns higher priority to tasks with shorter deadlines.
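For illustration only, deadline-monotonic priority assignment amounts to sorting tasks by relative deadline (shorter deadline gets higher priority); the task names and deadlines below are assumptions:

```c
#include <stdio.h>
#include <stdlib.h>

typedef struct { const char *name; double deadline; } task_t;

/* Sort ascending by relative deadline: earlier in the array = higher priority. */
static int by_deadline(const void *a, const void *b)
{
    double da = ((const task_t *)a)->deadline, db = ((const task_t *)b)->deadline;
    return (da > db) - (da < db);
}

int main(void)
{
    task_t t[] = { {"T1", 50.0}, {"T2", 20.0}, {"T3", 35.0} };  /* assumed deadlines (ms) */
    qsort(t, 3, sizeof t[0], by_deadline);
    for (int i = 0; i < 3; i++)
        printf("priority %d: %s (deadline %.0f ms)\n", i + 1, t[i].name, t[i].deadline);
    return 0;
}
```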
Issues in using RMA in practical situations
1. Handling critical tasks with long periods (period transformation technique)
2. Handling aperiodic and sporadic tasks
Aperiodic servers
1. Deferrable server (tickets are replenished at regular intervals,
independent of actual ticket usage)
2. Sporadic server (replenishment time depends on the exact ticket
usage time; guarantees a minimum separation between two
instances of a task)
3. Coping with limited priority levels (assigning priorities to tasks)
a. Uniform scheme (all the tasks in the application are uniformly
divided among the available priority levels; if this is not possible, the extra
tasks are made to share the lower priority levels; see the sketch after this list)
b. Arithmetic scheme (the numbers of tasks assigned to the different priority
levels form an arithmetic progression. If N is the number of tasks, then
N = r + 2r + 3r + ... + nr, where n is the total number of priority levels)
c. Geometric scheme (the numbers of tasks assigned to the different priority
levels form a geometric progression,
N = r + kr + kr^2 + ... + kr^(n-1))
d. Logarithmic scheme (shorter period tasks are allotted distinct
priority levels, while many lower priority tasks can be clubbed together at the
same priority level without affecting the schedulability of the higher
priority tasks)
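A minimal sketch of the uniform scheme (item a above), assuming N tasks already ordered from highest to lowest priority are folded onto k available levels, with any remainder pushed to the lower levels as the slide suggests:

```c
#include <stdio.h>

/* Map task index (0 = highest priority) to one of 'levels' priority levels.
 * Extra tasks (the remainder) are absorbed by the lower priority levels. */
static int uniform_level(int task_index, int num_tasks, int levels)
{
    int per_level = num_tasks / levels;
    int extra     = num_tasks % levels;            /* shared by the lowest levels  */
    int boundary  = (levels - extra) * per_level;  /* tasks placed before overload */

    if (task_index < boundary)
        return task_index / per_level;
    return (levels - extra) + (task_index - boundary) / (per_level + 1);
}

int main(void)
{
    int n = 10, k = 4;   /* assumed: 10 tasks, 4 priority levels */
    for (int i = 0; i < n; i++)
        printf("task %d -> level %d\n", i, uniform_level(i, n, k));
    return 0;
}
```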
Resource Sharing Among RTT
● We first consider how to adapt the analysis discussed previously when tasks share resources.
● Later, in our discussion of distributed systems, we will consider tasks that have
precedence constraints.
Resource Access Control Protocols
● We now consider several protocols for allocating resources that control priority inversions
and/or deadlocks.
● From now on, the term “critical section” is taken to mean “outermost critical section” unless
specified otherwise.
The simplest protocol: just execute each critical section nonpreemptively. If tasks are indexed
by priority (or relative deadline in the case of EDF), then task Ti has a blocking term equal to
the length of the longest critical section of any lower priority task.
PRIORITY INVERSION
● Simple priority inversion
● Unbounded priority inversion
Unbounded priority inversion
Priority Inheritance Protocol (PIP)
The Priority Inheritance Protocol (PIP) is a technique used for sharing critical resources
among different tasks. It allows the sharing of critical resources among different tasks without the
occurrence of unbounded priority inversions.
It is a simple technique to share critical resources (CRs) among tasks without incurring unbounded priority inversions.
The essence of this protocol is that whenever a task suffers priority inversion, the priority of the
lower priority task holding the resource is raised through a priority inheritance mechanism. This
enables it to complete its usage of the CR as early as possible without having to suffer preemption
by intermediate priority tasks.
When several tasks are waiting for a resource, the task holding the resource inherits the highest
priority of all the tasks waiting for the resource (if this priority is greater than its own priority).
Basic Concept of PIP :
The basic concept of PIP is that when a task goes through priority inversion, the
priority of the lower priority task which has the critical resource is increased by the
priority inheritance mechanism.
It allows this task to complete its use of the critical resource as early as possible without
being preempted by intermediate priority tasks.
It avoids the unbounded priority inversion.
Working of PIP :
● When several tasks are waiting for the same critical resource, the task which is currently holding this
critical resource is given the highest priority among all the tasks which are waiting for the same critical
resource.
● Now after the lower priority task having the critical resource is given the highest priority then the
intermediate priority tasks can not preempt this task. This helps in avoiding the unbounded priority
inversion.
● When the task which was given the highest priority finishes its use of the critical resource and
releases it, it gets back its original priority value (which may be lower than or equal to the inherited one).
● If a task is holding multiple critical resources, then after releasing one critical resource it cannot go back to
its original priority value. In that case it inherits the highest priority among all the tasks waiting for the
critical resources it still holds.
If the critical resource is free:
    allocate the resource to the requesting task
Else if the critical resource is held by a higher priority task:
    the requesting task waits for the resource
Else (the critical resource is held by a lower priority task):
{
    the lower priority task inherits the highest priority among the waiting tasks
    the other tasks wait for the resource
}
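The rules above can be sketched in code; the task and resource records below (and the single-resource simplification) are assumptions for illustration, not a real kernel API:

```c
#include <stdio.h>
#include <stddef.h>

typedef struct {
    const char *name;
    int nominal_prio;   /* priority assigned by the scheduler (higher = more urgent) */
    int current_prio;   /* may be raised temporarily by inheritance                  */
} task_t;

typedef struct { task_t *holder; } resource_t;

/* Request under PIP: if a lower priority task holds the resource, it inherits
 * the requester's priority so that intermediate tasks cannot preempt it. */
static int pip_request(resource_t *r, task_t *t)
{
    if (r->holder == NULL) {          /* resource free: allocate it          */
        r->holder = t;
        return 1;
    }
    if (r->holder->current_prio < t->current_prio)
        r->holder->current_prio = t->current_prio;   /* priority inheritance */
    return 0;                         /* requester must wait                 */
}

static void pip_release(resource_t *r)
{
    r->holder->current_prio = r->holder->nominal_prio; /* restore (single-resource case) */
    r->holder = NULL;
}

int main(void)
{
    task_t t_low = {"T2", 1, 1}, t_high = {"T1", 3, 3};
    resource_t cr = { NULL };

    pip_request(&cr, &t_low);      /* T2 locks the critical resource        */
    pip_request(&cr, &t_high);     /* T1 blocks; T2 inherits priority 3     */
    printf("T2 now runs at priority %d\n", t_low.current_prio);
    pip_release(&cr);              /* T2 releases and returns to priority 1 */
    return 0;
}
```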
Advantages of PIP :
Priority Inheritance protocol has the following advantages:
● It allows the different priority tasks to share the critical resources.
● The most prominent advantage with Priority Inheritance Protocol is that it
avoids the unbounded priority inversion.
Disadvantages of PIP :
Priority Inheritance Protocol has two major problems which may occur:
Deadlock –
► There is possibility of deadlock in the priority inheritance protocol.
For example, there are two tasks T1 and T2. Suppose T1 has the higher
priority than T2. T2 starts running first and holds the critical resource CR2.
► After that, T1 arrives and preempts T2. T1 holds critical resource CR1 and
also tries to hold CR2 which is held by T2. Now T1 blocks and T2 inherits
the priority of T1 according to PIP. T2 starts execution and now T2 tries to
hold CR1 which is held by T1.
► Thus, both T1 and T2 are deadlocked.
● Chain Blocking –
► When a task undergoes priority inversion each time it needs a resource,
the task is said to undergo chain blocking.
► For example, there are two tasks T1 and T2, where T1 has higher
priority than T2. T2 holds the critical resources CR1 and CR2. T1 arrives and
requests CR1; T1 undergoes priority inversion and T2 inherits T1's priority according to PIP.
► Now T1 requests CR2, and T1 again undergoes priority inversion according to PIP.
Hence, multiple priority inversions while acquiring the critical resources lead to
chain blocking.
PIP- Two important problems
1. Deadlock
consider following sequence of actions by two tasks
T1 and T2,which need access to two shared CR1 and CR2
T1: Lock CR1, Lock CR2, Unlock CR2, Unlock CR1
T2: Lock CR2, Lock CR1, Unlock CR1, Unlock CR2
i. T1 has higher priority than T2
ii. T2 starts running first and locks CR2
iii. T1 arrives, preempts T2 and starts executing
iv. T1 locks CR1 and then tries to lock CR2, which is being held by T2
v. T1 blocks and T2 inherits T1's priority according to the PIP
vi. T2 resumes its execution and after some time needs to lock
the resource CR1 being held by T1
vii. T1 and T2 are both deadlocked
2. Chain Blocking
A task is said to undergo chain blocking if, each time it needs a
resource, it undergoes priority inversion.
Example: T2 is holding CR1 and CR2 when T1 arrives and requests to lock CR1; T1
undergoes priority inversion and causes T2 to inherit its priority. When T1 later
requests CR2, it undergoes priority inversion once again.
Highest Locker Protocol (HLP)
► The basic concept of the Highest Locker Protocol is based on the ceiling priority
value.
► When a task holds a critical resource, its priority is changed to the ceiling
priority value of that critical resource.
► If a task holds multiple critical resources, then the maximum of all their ceiling
priority values is assigned as the priority of the task.
Working of HLP :
● The resources required by each task are determined before compile time.
● Initially a ceiling priority value is assigned to each critical resource.
● Ceiling priority value of a critical resource is calculated as the maximum of priorities
of all those tasks which may request to hold this critical resource.
● When a task holds a critical resource, corresponding ceiling priority value is
assigned as priority to the task.
● A task acquiring multiple critical resources is assigned the maximum of all their ceiling priority
values.
● Further the execution is done on the basis of allotted priorities.
Features of HLP :
● When HLP is used for resource sharing, once the task holds the required critical
resource then it is not blocked any further.
● Before a task can hold one resource, all the resources that may be required by
this task should be free.
● It prevents tasks from going into deadlock or chain blocking.
Advantages of HLP :
Following are the advantages of Highest Locker Protocol:
● It is useful for critical resource sharing among several tasks.
● It avoids unbounded priority inversion among tasks.
● It overcomes the limitations of the priority inheritance protocol.
● It prevents deadlock: before a task can hold one resource, all the other resources required by
this task must be free.
● A task cannot undergo chain blocking under the Highest Locker Protocol.
Disadvantages of HLP :
The major disadvantage of Highest Locker Protocol is Inheritance related Priority Inversion.
Inheritance-related priority inversion occurs when the priority of a low priority task
acquiring a critical resource is raised to a high value by the ceiling rule; the
intermediate priority tasks that do not need the resource then cannot execute and are
said to undergo inheritance-related priority inversion.
Highest Locker Protocol (HLP):
When a task acquires a resource, its priority is set equal to the ceiling
priority of its locked resource; if the task holds multiple resources, then
it inherits the highest ceiling priority among all its locked resources.
Let the ceiling priority of a resource Ri be Ceil(Ri) and the priority
of a task Tj be pri(Tj). Then
Ceil(Ri) = max({ pri(Tj) : Tj needs Ri })
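The ceiling computation is just a maximum over the priorities of the tasks that may use the resource; the data layout below (a needs matrix and numeric priorities) is an assumption for illustration:

```c
#include <stdio.h>

/* Ceil(Ri) = max{ pri(Tj) : Tj may need Ri }.
 * needs[j][i] is nonzero if task Tj may request resource Ri. */
static int ceiling_priority(int resource, const int prio[], int needs[][2], int num_tasks)
{
    int c = 0;
    for (int j = 0; j < num_tasks; j++)
        if (needs[j][resource] && prio[j] > c)
            c = prio[j];
    return c;
}

int main(void)
{
    int prio[3] = {3, 2, 1};        /* assumed: pri(T0) > pri(T1) > pri(T2) */
    int needs[3][2] = { {1, 0},     /* T0 uses R0                           */
                        {1, 1},     /* T1 uses R0 and R1                    */
                        {0, 1} };   /* T2 uses R1                           */
    printf("Ceil(R0) = %d, Ceil(R1) = %d\n",
           ceiling_priority(0, prio, needs, 3),
           ceiling_priority(1, prio, needs, 3));   /* prints 3 and 2 */
    return 0;
}
```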
Theorem
When HLP is used for resource sharing, once a task gets a resource required by
it, it is not blocked any further.
Corollary 1.
Under HLP, before a task can acquire one resource, all the resources that might
be required by it must be free.
Corollary 2.
A task cannot undergo chain blocking under HLP.
Shortcomings
1. Inheritance-related inversion
When the priority value of a low priority task holding a resource is raised to
a high value by the ceiling rule, the intermediate priority tasks not needing the
resource cannot execute and are said to undergo inheritance-related inversion.
Priority Ceiling Protocol (PCP)
Minimizes inheritance-related inversions.
A resource may not be granted to a requesting task even if the resource is free.
PCP associates a ceiling value with every resource: the maximum of the
priority values of all tasks that might use the resource.
The Current System Ceiling (CSC) is initialized to 0 (a priority lower than that of the lowest priority task in the system).
Resource sharing among tasks under PCP is regulated using two
rules.
Resource Grant Rule: consists of two clauses; these clauses are
applied when a task requests to lock a resource.
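The two clauses are not spelled out on the slide; the sketch below follows the usual formulation of PCP's grant check (compare the requester's priority with the Current System Ceiling) and should be read as an assumption-laden illustration. The full protocol also grants the request when the requester itself holds the resource that set the CSC; that exception is omitted here.

```c
#include <stdio.h>
#include <stdbool.h>

/* Sketch of a PCP-style grant check. csc is the Current System Ceiling:
 * the highest ceiling among all currently locked resources (0 if none). */
static bool pcp_may_grant(bool resource_free, int requester_prio, int csc)
{
    if (!resource_free)
        return false;              /* clause 1: the resource must be free          */
    return requester_prio > csc;   /* clause 2: priority must exceed the CSC,      */
                                   /* otherwise the request is blocked even though */
                                   /* the resource itself is free                  */
}

int main(void)
{
    int csc = 3;   /* assumed: some resource with ceiling 3 is already locked */
    printf("prio 2 task requesting a free resource: %s\n",
           pcp_may_grant(true, 2, csc) ? "granted" : "blocked");
    printf("prio 4 task requesting a free resource: %s\n",
           pcp_may_grant(true, 4, csc) ? "granted" : "blocked");
    return 0;
}
```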
HIGHEST LOCKER PROTOCOL
● It is a critical resource sharing protocol which is an extension of PIP.
● It overcomes the limitations of PIP.
● It is the least efficient among all resource sharing protocols.
PRIORITY CEILING PROTOCOL
● It is a critical resource sharing protocol which is an extension of PIP and HLP.
● It overcomes the limitations of PIP and HLP.
● It is the most efficient among all resource sharing protocols.
Fault Detection
1) Arriving task
A task has four properties when it arrives, arrival time (ai), Ready time (ri), Deadline – (di) and worst
case computation time (ci) represented as Ti = (ai, ri, di, ci)
2) EDF schedulability
Check if all the tasks can be scheduled successfully using the earliest deadline first algorithm. If the
schedulability test fails, then reject the set of tasks saying that they are not schedulable.
3) Searching for timeslot
When task Ti arrives, check each processor to find if the primary copy (Pri) of the task can be
scheduled between ri and di. Say it is scheduled on processor Pi.
4) Try overloading
Try to overload the backup copy (Bki) onto an existing backup slot on any processor other than Pi. Note:
the backups of two primary tasks that are scheduled on the same processor must not overlap. If that
processor fails, it will not be possible to execute the two backups simultaneously, since they would occupy
the same (overloaded) time slot.
5) EDF Algorithm
If there is no existing backup slot that can be overloaded, then schedule the backup in the latest possible
free slot before the deadline of the task. The task with the earliest deadline is scheduled first.
6) De-Allocation of backups
If a schedule has been found for both the primary and backup copy for a task, commit the task, otherwise
reject it. If the primary copy executes successfully, the corresponding backup copy is deallocated.
7) Backup execution
If there is a permanent or transient fault in a processor, the processor crashes, and then all the backups
of the tasks that were running on that processor are executed on different processors.
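A minimal sketch of the overloading rule from step 4 (the record layout and time values are assumptions): two backup slots may overlap in time only when their primaries run on different processors, so that a single processor failure never needs both backups at once.

```c
#include <stdio.h>
#include <stdbool.h>

/* One scheduled backup slot: its time window plus the processor its primary runs on. */
typedef struct { double start, end; int primary_cpu; } backup_t;

/* Overloading check: overlapping backups are allowed only if their primary
 * copies are scheduled on different processors. */
static bool may_overload(const backup_t *existing, const backup_t *candidate)
{
    bool overlap = candidate->start < existing->end &&
                   existing->start < candidate->end;
    return !overlap || existing->primary_cpu != candidate->primary_cpu;
}

int main(void)
{
    backup_t b1 = {10.0, 20.0, 0};   /* existing backup; its primary is on CPU 0  */
    backup_t b2 = {15.0, 25.0, 0};   /* overlaps, primary also on CPU 0: rejected */
    backup_t b3 = {15.0, 25.0, 1};   /* overlaps, primary on CPU 1: allowed       */
    printf("overload b2: %s\n", may_overload(&b1, &b2) ? "ok" : "not allowed");
    printf("overload b3: %s\n", may_overload(&b1, &b3) ? "ok" : "not allowed");
    return 0;
}
```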
Clocks in a distributed system
► A logical clock is a mechanism for capturing chronological and causal relationships
in a distributed system.
► Often, distributed systems may have no physically synchronous global clock.
Clock synchronization
► Master clock sends its time to all other clocks (slave clocks) for synchronization.
► Server broadcasts its time after each ‘t’ time interval.
► Slave clocks receive time from master clock and set their time accordingly.
► Time interval ‘t’ is chosen quite carefully.
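A slave-side adjustment might look like the sketch below; compensating for half the measured round-trip delay is an assumption (Cristian-style), not something stated on the slide:

```c
#include <stdio.h>

/* Adjust the slave clock on receiving the master's broadcast time. */
static double adjust_clock(double local_time, double master_time, double round_trip_delay)
{
    double estimated_master_now = master_time + round_trip_delay / 2.0;
    double offset = estimated_master_now - local_time;
    return local_time + offset;       /* slave jumps to the corrected time */
}

int main(void)
{
    /* Assumed values: local clock reads 100.8 s, master sent 100.0 s, RTT 0.4 s. */
    printf("slave clock set to %.2f\n", adjust_clock(100.8, 100.0, 0.4));  /* 100.20 */
    return 0;
}
```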
Clocks in DRTS
Clocks in a system are useful for two main purposes
1. Determine time out
2. Time stamping
Clock synchronization(external and internal)
1. Centralized clock synchronization
2. Distributed clock synchronization
• Byzantine clock
A Byzantine clock is a two-faced clock: it can transmit different values
to different clocks at the same time.
Fault-tolerant scheduling
Fault Evasion: Means to estimate the present number, the future incidence, and the
likely consequences of faults.
Scheduling Algorithms are used for fault tolerance as well as fault
avoidance which may be classified as
1. First Come First Serve
2. Shortest Job First
3. Preemptive
4. Non-Preemptive
5. Round-Robin Technique
Basics of Fault-Tolerant Scheduling
The general method of responding to a failure is as follows:
► Transient Failures: If the system is designed only to withstand transients that go away
quickly, reexecution of the failed task or of a shorter, more basic, version of that task is
carried out. The scheduling problem reduces to ensuring that there is always enough time
to carry out such an execution before the deadline
► Software Failure: Here, the failure is that of the software, not of the processor. Software
diversity is used: backup software which is different from the failed software is invoked.
Again, we have to make sure, in preparing for software faults, that there is enough time for
the backup version to meet the original task deadline.
► Permanent Failure: Backup versions of the tasks assigned to the failed
processor must be invoked.
The steps are: –
► Provide each task with a backup copy.
► Place the backups in the schedule, either prior to operation for offline
scheduling or before guaranteeing the task for online scheduling.
► If a processor fails, activate one backup for each of the tasks that
have been affected.
Commercial Real-time Operating
Systems – An Introduction
► Introduction
► LynxOS
► QNX/Neutrino
► VRTX
► VxWorks
► Spring Kernel
► Commercial RTOSs differ from traditional OSs in that they give more
predictability
► Used in the following areas such as:
► Embedded Systems or Industrial Control Systems
► Parallel and Distributed Systems
► E.g., LynxOS, VxWorks, pSOS, QNX, BlueCat
► Traditionally these systems can be classified into a Uniprocessor,
Multiprocessor or Distributed Real-Time OS
Features of RTOS
Clock and timer support: clock and timer services with adequate
resolution:
Clock resolution denotes the time granularity provided by the system
clock of a computer. Thus the resolution of a system clock corresponds
to the duration of time that elapses between two successive clock
ticks.
Real time priority levels: static priority levels
Fast task preemption: the task preemption time is the duration for which a higher priority task
waits before it is allowed to execute; an RTOS should keep this time small.
Predictable and fast interrupt latency:
interrupt latency is the time delay between the occurrence of an interrupt and
the running of the corresponding ISR
Support for resource sharing among RTT
Requirements on memory management
Support for asynchronous I/O: non blocking I/O
Additional requirements for an embedded RTOS: cost, size, power consumption
UNIX as a RTOS
A process running in kernel mode cannot be preempted by other processes; UNIX systems
preempt only processes running in user mode.
A consequence of this is that even when a low priority process makes a system call, the high
priority processes have to wait until the system call made by the low priority process
completes.
For RT applications this causes a priority inversion.
When a kernel routine starts to execute, all interrupts are disabled;
interrupts are enabled only after the OS routine completes.
Dynamic Priority Levels
At every preemption point, the scheduler scans the multilevel queue from the
top (highest priority) and selects the first task of the topmost non-empty queue.
Each task is allowed to run for a fixed time quantum at a time; UNIX normally
uses a one-second time slice.
The kernel preempts a process that does not complete within its assigned time
quantum, recomputes its priority, and inserts it back into one of the priority queues.
The basic philosophy of the UNIX operating system is that interactive tasks are
made to assume higher priority levels and are processed at the earliest. This gives
Nice(Ti) =
Host-target approach
3. The RT application is developed on the host and is then cross-compiled to generate code
for the target processor. The developed application is downloaded onto the target board
that is to be embedded in an RTS via a serial port or a TCP/IP connection.
4. A ROM-resident small RT kernel is used in the target board; once the program works
successfully, it is fused into the ROM and becomes ready to be deployed in applications.
5. Ex: VxWorks, pSOS, VRTX
It needs a cross-compiler and a cross-debugger.
Extensions to the traditional unix kernel for RT Applications
I. By adding some RT capabilities (RT timer support, RT task
scheduler) on top of the kernel.
Preemption Point approach
1. Preemption points in the execution of a system routine are
the instants at which the kernel data structures are consistent.
2. At these points the kernel can safely be preempted to make way
for any waiting higher priority RTT to run, without
corrupting any kernel data structures.
3. When the execution of a system call reaches a preemption point,
the kernel checks whether any higher priority tasks
have become ready; if there is at least one, it preempts the
processing of the kernel routine and dispatches the waiting
highest priority task immediately.
4. Ex: HP-UX, Windows CE
Self host systems
1. An RTA is developed on the same OS on which the RTA
would finally run.
2. Once the application runs satisfactorily on the host, it is
fused on a ROM or flash memory on the target board along
with a possibly stripped down version of the OS.
3. While deploying the application the OS modules that are
not essential during task execution are excluded to
minimize the size of the OS
4. Based on a microkernel architecture: only the core
functionalities such as interrupt handling and process
management are implemented as kernel routines. All other
functionalities such as memory management, file
management, device management, etc. are implemented as
add-on modules which operate in user mode.
Non Preemptive kernel
▪ It is necessary to use locks at appropriate places in the kernel
code to overcome the problem. Two types of locks are used in
fully preemptive UNIX systems:
1. Kernel-level locks
Similar to traditional locks
Inefficient due to context switch overhead
2. Spin locks
RT priorities
UNIX-based RTSs support dynamic, RT, and idle priorities.
Idle: the lowest priority level; idle tasks run at this level; these priorities are static and are
not recomputed periodically.
Dynamic: recomputed periodically.
RT: static priorities that are not recomputed at run time; hard
RTTs operate at these levels.
Windows as a RTOS
▪ Reduces the cost of development, increases the availability of add-on software packages,
enhances ease of programming, and facilitates easy integration of separately developed
modules.
▪ POSIX stands for Portable Operating System Interface, and is an IEEE standard designed to
facilitate application portability.
▪ If these standards are successful, it will be easier to port applications between hardware platforms.
▪ POSIX is an evolving group of standards, each of which covers different
aspects of the operating systems.
▪ Open software Standards
1. Open source: provides portability at the source code level
2. Open object: provides portability of unlinked object modules across
different platforms
3. Open binary: provides complete software portability across hardware platforms
based on a common binary language structure
Overview of posix
▪ POSIX standard defines only interfaces to OS services and the semantics of these
services, but doesn’t specify how exactly the services are to be implemented
4. POSIX.4: RT extensions
RT POSIX Standard
▪ Memory locking
▪ Multithreading support
Benchmarking real time systems
FreeRTOS takes twice as many cycles to take and block on a semaphore as to signal and
switch to a new task.
○ Other OSs do not exhibit the same behavior.
● Tracing the execution can provide insight: uCOS-III's maximum time is 4.5x higher than
its average time.
● FreeRTOS's maximum time is 2x higher than its average time.
● uCOS-III schedules a Tick Task in the tick interrupt handler.
Benchmarking real time systems
MIPS (Million Instructions Per Second) and FLOPS (Floating-Point
Operations Per Second)
Rhealstone Metric
Six parameters of the RTS are considered:
1. Task Switching Time (tts )
2. Task Preemption Time(ttp )
3. Interrupt Latency Time (til)
4. Semaphore shuffling time(tss)
5. Unbounded Priority Inversion Time(tup)
6. Datagram Throughput time(tdt)
1. Task Switching Time (tts ):
It is defined as the time it takes for one context switch among equal priority
tasks
2. Task Preemption Time (ttp):
Defined as the time it takes to start execution of a higher priority task after the condition enabling the
task occurs. It consists of the following three components:
I. Task switching time
II. Time to recognize the event enabling the higher
priority task
III. Time to dispatch
3. Interrupt Latency Time (til):
Consists of the following components
1. Hardware delay in the CPU recognizing the interrupt.
2. Time to complete the current instruction.
3. Time to save the context of the currently running task.
4. Start the ISR
4. Semaphore Shuffling Time (tss):
The time that elapses between a lower priority task releasing a semaphore and a higher
priority task waiting on that semaphore starting to run.
5. Unbounded Priority Inversion Time (tup):
tup = t1 + t2. It is computed as the time it takes for the OS to recognize the priority inversion (t1)
plus the time to run the task holding the resource and start the higher priority task after it completes (t2).
6. Datagram Throughput Time (tdt):
Indicates the number of kilobytes of data that can be transferred between two tasks.
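The slides do not say how the six measurements are combined; one simple possibility, shown purely as an assumption, is an equal-weight average whose reciprocal gives a rhealstones-per-second figure (real benchmarks pick application-specific weights):

```c
#include <stdio.h>

/* Combine the six Rhealstone component times (seconds) into one figure of merit.
 * Equal weights are an assumption made for this sketch. */
static double rhealstones_per_second(const double t[6])
{
    double avg = 0.0;
    for (int i = 0; i < 6; i++)
        avg += t[i] / 6.0;
    return 1.0 / avg;                 /* larger is better */
}

int main(void)
{
    /* tts, ttp, til, tss, tup, tdt -- illustrative values only */
    double t[6] = {5e-6, 8e-6, 3e-6, 6e-6, 12e-6, 20e-6};
    printf("%.0f rhealstones/second\n", rhealstones_per_second(t));
    return 0;
}
```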
Real-Time Databases
► Real-time databases use timing constraints that represent a certain range of values for which the data are valid.
This range is called temporal validity.
► A conventional database cannot work under these circumstances because the inconsistencies
between the real world objects and the data that represents them are too severe for simple
modifications.
► An effective system needs to be able to handle time-sensitive queries, return only temporally
valid data, and support priority scheduling.
► To enter the data in the records, often a sensor or an input device monitors the state of the
physical system and updates the database with new information to reflect the physical system
more accurately.
► When designing a real-time database system, one should consider how to represent valid time and
how facts are associated with the real-time system.
► Also, consider how to represent attribute values in the database so that process transactions and
data consistency have no violations.
► In real-time databases, deadlines are formed and different kinds of systems respond to data that
does not meet its deadline in different ways.
► In a real-time system, each transaction uses a timestamp to schedule the transactions
► A priority mapper unit assigns a level of importance to each transaction upon its arrival in the
database system that is dependent on how the system views times and other priorities.
► The timestamp method relies on the arrival time in the system.
Real-time databases are useful for
► accounting,
► banking,
► law,
► medical records,
► multi-media,
► process control,
► reservation systems,
► and scientific data analysis
Examples of database applications
● Amazon
● CNN
● eBay
● Facebook
● Fandango
● Filemaker (Mac OS)
● Microsoft Access
● Oracle relational database
● SAP (Systems, Applications & Products in Data Processing)
● Ticketmaster
● Wikipedia
● Yelp
● YouTube
● Google
● MySQL
Responses :
Hard deadline
► If not meeting deadlines creates problems, a hard deadline is best.
► It is periodic, meaning that it enters the database on a regular rhythmic pattern.
► An example is data gathered by a sensor.
► These are often used in life critical systems
Firm deadline
► Firm deadlines appear to be similar to hard deadlines yet they differ from hard
deadlines because firm deadlines measure how important it is to complete the
transaction at some point after the transaction arrives.
► Sometimes completing a transaction after its deadline has expired may be harmful
or not helpful, and both the firm and hard deadlines consider this.
► An example of a firm deadline is an autopilot system.
Soft deadline
► If meeting time constraints is desirable but missing deadlines does not cause
serious damage, a soft deadline may be best.
► It operates on an aperiodic or irregular schedule.
► In fact, the arrival of each time for each task is unknown.
► An example is an operator switchboard for a telephone.
► Hard deadline processes abort transactions that have passed the deadline,
improving the system by cleaning out clutter that needs to be processed.
► Processes can clear out not only the transactions with expired deadlines but
also transactions with the longest deadlines, assuming that once they reach
the processor they would be obsolete.
► This means other transactions should be of higher priority.
► The goal of scheduling periods and deadlines is to update transactions guaranteed to
complete before their deadline in such a way that the workload is minimal.
► With large real-time databases, buffering functions can help improve performance
tremendously.
► A buffer is part of the database that is stored in main memory to reduce transaction
response time.
► In order to reduce disk input and output transactions, a certain number of buffers
should be allocated
Temporal database
A uni-temporal database has one axis of time, either the validity range or the system
time range
Bi-Temporal: a database with both axes of time (a validity range and a system time range)
Absolute Validity :
This is the notion of consistency between the environment and its reflection in the
database given by the data collected by the system about the environment
Relative Consistency
This is the notion of consistency among the data items that are used to derive new data
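A minimal sketch of an absolute-validity check, assuming each data item carries its observation timestamp and an absolute validity interval (avi); the field names are assumptions:

```c
#include <stdio.h>
#include <stdbool.h>

/* A sensor-derived data item: it reflects the environment only while
 * now - timestamp <= avi (its absolute validity interval). */
typedef struct { double value, timestamp, avi; } data_item_t;

static bool absolutely_valid(const data_item_t *d, double now)
{
    return (now - d->timestamp) <= d->avi;
}

int main(void)
{
    data_item_t temperature = { 37.2, /*taken at*/ 100.0, /*avi*/ 5.0 };
    printf("valid at t=103: %s\n", absolutely_valid(&temperature, 103.0) ? "yes" : "no");
    printf("valid at t=110: %s\n", absolutely_valid(&temperature, 110.0) ? "yes" : "no");
    return 0;
}
```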
Concurrency control in real-time databases
● Optimistic - Delay the checking of whether a transaction meets the isolation and other
integrity rules until its end, without blocking any of its operations , and then abort a
transaction to prevent the violation, if the desired rules are to be violated upon its
commit. An aborted transaction is immediately restarted and re-executed, which incurs
an obvious overhead. If not too many transactions are aborted, then being optimistic is
usually a good strategy.
Both the update and query operations on the tracking data for a missile must be
processed within a given deadline: otherwise, the information provided could be
of little value
Commercial real-time databases
► A commercial real-time database needs to avoid using anything that can introduce
unpredictable latency.
► Real-time databases are useful for accounting, banking, law, medical records,
multi-media, process control, reservation systems, and scientific data analysis.
Some of the Commercial Databases are :
► Aerospike DBS
► ArangoDB
► eXtremeDB
► Ehcache
► GigaSpaces
► InfinityDB
► MonetDB
► solidDB etc.
Real-time Communication
► Real-time communications (RTC) is a term used to refer to any live telecommunications that occur
without transmission delays.
► RTC is nearly instant with minimal latency. RTC data and messages are not stored between transmission
and reception.
► RTC is generally a peer-to-peer, rather than broadcasting or multicasting, transmission.
► Examples of RTC include
► The Internet, land lines, mobile/cell phones, instant messaging (IM), Internet relay chat, video
conferencing, teleconferencing and robotic telepresence. Emails, bulletin boards and blogs are not RTC
channels but occur in time-shifting mode, where there is a significant delay between data transmission
and reception.
► RTC features were first introduced in Windows XP and included Microsoft Office Communicator,
MSN Messenger, Windows Messenger, real-time voice and video, and IM.
► Microsoft operating systems and software applications include RTC platforms composed of
RTC-enabled component sets.
► In RTC, there is always a direct path between the source and the destination. Although the link
might contain several intermediate nodes, the data goes from source to destination without
being stored in between them.
Real-time communications can take place in half-duplex or full-duplex modes:
● Half-duplex RTC. Data transmission can happen in both directions on a single carrier
or circuit but not at the same time.
● Full-duplex RTC. Data transmission can occur in both directions simultaneously on a
single carrier or circuit.
Real-time communications examples
Real-time communications tools and applications are many and varied, ranging from
old-school telephony to cloud communications services.