Unit 5

The document discusses wireless sensor network platforms and the TinyOS operating system. It describes typical components of a WSN platform including microcontrollers, radios, power supplies, and sensors. It then discusses the original TinyOS platform developed at UC Berkeley and commercialized by Crossbow. It explains that TinyOS uses an event-driven programming model with microthreads and two-level scheduling to support concurrency on resource-constrained sensor nodes. Application programs are constructed as graphs of software components that communicate through asynchronous events and commands.


WSN Platform


A hardware device that enables wireless
sensor network research

Typical components
– Microcontroller
– Radio
– Power supply
– Sensors and/or actuators
– Peripherals

USB interface

Storage
Crossbow
● Original design: UC Berkeley
– Commercial product: Crossbow (MicaZ, Mica2)
– Other products from Crossbow
  ● Cricket
  ● Imote2
  ● IRIS


Traditional Systems

• Well established layers of abstractions
• Strict boundaries
• Ample resources
• Independent applications at endpoints communicate pt-pt through routers
• Well attended

[Figure: layered stack – User; Application; System (threads, address space, files, drivers); Network Stack (Transport, Network, Data Link, Physical Layer); Routers]
Sensor Network Systems

• Highly constrained resources
  – processing, storage, bandwidth, power, limited hardware parallelism, relatively simple interconnect
• Applications spread over many small nodes
  – self-organizing collectives
  – highly integrated with changing environment and network
  – diversity in design and usage
• Concurrency intensive in bursts
  – streams of sensor data & network traffic
• Appl'n-specific processing
  – allow abstractions to emerge
• Robust
  – inaccessible, critical operation

⇒ Need a framework for:
• Resource-constrained concurrency
• Defining boundaries
Choice of Programming Primitives

• Traditional approaches
 command processing loop (wait request, act, respond)

 monolithic event processing

 full thread/socket posix regime

• Alternative
 provide framework for concurrency and modularity

 never poll, never block

 interleaving flows, events


TinyOS


Microthreaded OS (lightweight thread support) and efficient
network interfaces


Two level scheduling structure
– Long running tasks that can be interrupted by hardware events


Small, tightly integrated design that allows crossover of
software components into hardware
Tiny OS Concepts
• Scheduler + Graph of Components
  – constrained two-level scheduling model: threads + events
• Component:
  – Commands
  – Event Handlers
  – Frame (storage)
  – Tasks (concurrency)
• Constrained Storage Model
  – frame per component, shared stack, no heap
• Very lean multithreading

[Figure: Messaging Component with internal state and an internal thread; commands accepted: init, power(mode), send_msg(addr, type, data); events signaled: msg_rec(type, data), msg_send_done; lower-level commands used: init, power(mode), TX_packet(buf); lower-level events handled: RX_packet_done(buffer), TX_packet_done(success)]
Application = Graph of Components

Example: ad hoc, multi-hop routing of photo sensor readings
• 3450 B code, 226 B data
• Graph of cooperating state machines on a shared stack

[Figure: component graph – application level (Route map, Router, Sensor Appln); Active Messages; packet level (Radio Packet, Serial Packet, Temp, Photo); byte level (Radio byte, UART, ADC); bit level (RFM, clock); SW components above the UART/ADC line, HW below]
TOS Execution Model

• commands request action
  – ack/nack at every boundary
  – call command or post task
• events notify occurrence
  – HW interrupt at lowest level
  – may signal events
  – call commands
  – post tasks
• tasks provide logical concurrency

[Figure: message-event driven stack – application comp (data processing); active message; Radio Packet (event-driven packet-pump, crc); Radio byte (event-driven byte-pump, encode/decode); RFM (event-driven bit-pump)]

Event-Driven Sensor Access Pattern

command result_t StdControl.start() {
  return call Timer.start(TIMER_REPEAT, 200);
}
event result_t Timer.fired() {
  return call sensor.getData();
}
event result_t sensor.dataReady(uint16_t data) {
  display(data);
  return SUCCESS;
}

• clock event handler initiates data collection
• sensor signals data ready event
• data event handler calls output command
• device sleeps or handles other activity while waiting
• conservative send/ack at component boundary
TinyOS Commands and Events
{
...
status = call CmdName(args)
...
}

command CmdName(args) {
...
return status;
}

event EvtName(args) {
...
return status;
}

{
...
status = signal EvtName(args)
...
}
TinyOS Execution Contexts
[Figure: Tasks at the top; events and commands in the middle; Interrupts below; Hardware at the bottom]
• Events generated by interrupts preempt tasks
• Tasks do not preempt tasks
• Both essentially process state transitions
Handling Concurrency: Async or Sync
Code

Async methods call only async methods (interrupts are async)

Sync methods/tasks call only sync methods

Potential race conditions:


any update to shared state from async code
any update to shared state from sync code that is
also updated from async code

Compiler rule:
if a variable x is accessed by async code, then any access
of x outside of an atomic statement is a compile-time error

Race-Free Invariant:
any update to shared state is either not a potential race
condition (sync code only) or is within an atomic section
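
A minimal nesC-style fragment illustrating the compiler rule and the race-free invariant (hypothetical code, not from a TinyOS distribution; display() is the output call used elsewhere in these slides):

  uint16_t lastReading;    // shared with async code, so every access must be protected

  async event result_t ADC.dataReady(uint16_t data) {
    atomic {               // async update wrapped in an atomic section
      lastReading = data;
    }
    return SUCCESS;
  }

  task void report() {
    uint16_t copy;
    atomic {               // sync read of async-updated state, also atomic
      copy = lastReading;
    }
    display(copy);         // safe: uses the private copy outside the atomic block
  }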
Tasks

• provide concurrency internal to a component


 longer running operations

• are preempted by events


• not preempted by tasks
• able to perform operations beyond event context
• may call commands
• may signal events

{
  ...
  post TskName();
  ...
}

task void TskName() {
  ...
}
Typical Application Use of Tasks

• event driven data acquisition
• schedule task to do computational portion

event result_t sensor.dataReady(uint16_t data) {
  putdata(data);
  post processData();
  return SUCCESS;
}

task void processData() {
  int16_t i, sum = 0;
  for (i = 0; i < maxdata; i++)
    sum += (rdata[i] >> 7);
  display(sum >> shiftdata);
}

• 128 Hz sampling rate
• simple FIR filter
• dynamic software tuning for centering the magnetometer signal (1208 bytes)
• digital control of analog, not DSP
• ADC (196 bytes)
Task Scheduling

• Typically simple FIFO scheduler


• Bound on number of pending tasks
• When idle, shuts down node except clock

• Uses non-blocking task queue data structure

• Simple event-driven structure + control over complete


application/system graph
 instead of complex task priorities and IPC
Maintaining Scheduling Agility

• Need logical concurrency at many levels of the graph


• While meeting hard timing constraints
 sample the radio in every bit window

⇒ Retain event-driven structure throughout application


⇒ Tasks extend processing outside event window
⇒ All operations are non-blocking
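
A minimal C sketch of the kind of bounded, run-to-completion FIFO task queue described in the Task Scheduling slide above. This is an illustrative model with made-up names, not the actual TinyOS scheduler:

  #include <stdint.h>

  #define MAX_TASKS 8
  typedef void (*task_t)(void);

  static task_t queue[MAX_TASKS];            /* bound on number of pending tasks   */
  static uint8_t head = 0, count = 0;

  int post_task(task_t t) {                  /* returns 0 if the queue is full     */
    /* in a real kernel this can run from an interrupt, so the updates to
       head/count would need an atomic section */
    if (count == MAX_TASKS) return 0;
    queue[(head + count) % MAX_TASKS] = t;
    count++;
    return 1;
  }

  void run_next_task(void) {                 /* called from the scheduler loop     */
    if (count == 0) return;                  /* idle: the node could sleep here    */
    task_t t = queue[head];
    head = (head + 1) % MAX_TASKS;
    count--;
    t();                                     /* tasks run to completion            */
  }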
The Complete Application

[Figure: full component graph for SenseToRfm – SenseToRfm built from generic comm (IntToRfm) and AMStandard; packet level: RadioCRCPacket, UARTnoCRCPacket, CRCfilter, noCRCPacket, Timer, photo/phototemp, MicaHighSpeedRadioM; byte level: SecDedEncode, ChannelMon, RadioTiming, SPIByteFIFO, RandomLFSR, ADC, UART, ClockC; bit level: SlavePin; SW components above the HW boundary]
Programming Syntax

• TinyOS 2.0 is written in an extension of C, called nesC

• Applications are too


 just additional components composed with OS components

• Provides syntax for TinyOS concurrency and storage model


 commands, events, tasks
 local frame variable

• Compositional support
 separation of definition and linkage
 robustness through narrow interfaces and reuse
 interpositioning

• Whole system analysis and optimization


Component Interface

• logically related set of commands and events


StdControl.nc

interface StdControl {
command result_t init();
command result_t start();
command result_t stop();
}
Clock.nc

interface Clock {
command result_t setRate(char interval, char scale);
event result_t fire();
}
Component Types

• Configuration
 links together components to compose new component

 configurations can be nested

 complete “main” application is always a configuration

• Module
 provides code that implements one or more interfaces and

internal behavior
Example of Top Level Configuration

configuration SenseToRfm {
  // this module does not provide any interface
}
implementation
{
  components Main, SenseToInt, IntToRfm, ClockC, Photo as Sensor;

  Main.StdControl -> SenseToInt;
  Main.StdControl -> IntToRfm;

  SenseToInt.Clock -> ClockC;
  SenseToInt.ADC -> Sensor;
  SenseToInt.ADCControl -> Sensor;
  SenseToInt.IntOutput -> IntToRfm;
}

[Figure: wiring diagram – Main (StdControl) above SenseToInt (Clock, ADC, ADCControl, IntOutput), which is wired to ClockC, Photo, and IntToRfm]
Nested Configuration

includes IntMsg;
configuration IntToRfm
{
  provides {
    interface IntOutput;
    interface StdControl;
  }
}
implementation
{
  components IntToRfmM, GenericComm as Comm;

  IntOutput = IntToRfmM;
  StdControl = IntToRfmM;

  IntToRfmM.Send -> Comm.SendMsg[AM_INTMSG];
  IntToRfmM.SubControl -> Comm;
}

[Figure: IntToRfmM provides StdControl and IntOutput, and uses SubControl and SendMsg[AM_INTMSG] from GenericComm]
IntToRfm Module

includes IntMsg;

module IntToRfmM
{
  uses {
    interface StdControl as SubControl;
    interface SendMsg as Send;
  }
  provides {
    interface IntOutput;
    interface StdControl;
  }
}
implementation
{
  bool pending;
  struct TOS_Msg data;

  command result_t StdControl.init() {
    pending = FALSE;
    return call SubControl.init();
  }

  command result_t StdControl.start()
  { return call SubControl.start(); }

  command result_t StdControl.stop()
  { return call SubControl.stop(); }

  command result_t IntOutput.output(uint16_t value)
  {
    ...
    if (call Send.send(TOS_BCAST_ADDR, sizeof(IntMsg), &data))
      return SUCCESS;
    ...
  }

  event result_t Send.sendDone(TOS_MsgPtr msg, result_t success)
  {
    ...
  }
}
Atomicity Support in nesC

• Split phase operations require care to deal with pending operations

• Race conditions may occur when shared state is accessed by preemptible executions, e.g. when an event accesses shared state, or when a task updates state (preemptible by an event which then uses that state)

• nesC supports atomic blocks
  – implemented by turning off interrupts
  – for efficiency, no calls are allowed inside the block
  – access to shared variables outside an atomic block is not allowed
Supporting HW Evolution

• Distribution broken into


 apps: top-level applications
 tos:

lib: shared application components

system: hardware independent system components

platform: hardware dependent system components
o includes HPLs and hardware.h

interfaces
 tools: development support tools
 contrib
 beta

• Component design so HW and SW look the same


 example: temp component

may abstract particular channel of ADC on the microcontroller

may be a SW I2C protocol to a sensor board with digital sensor or ADC

• HW/SW boundary can move up and down with minimal changes


Example: Radio Byte Operation

• Pipelines transmission: transmits byte while encoding next byte

• Trades 1 byte of buffering for easy deadline

• Encoding task must complete before byte transmission completes

• Decode must complete before next byte arrives

• Separates high level latencies from low level real-time rqmts


[Figure: pipeline timing – the Encode Task for Byte 1..4 overlaps Bit transmission of Byte 1..3 over the RFM bits]
Dynamics of Events and Threads

[Figure: bit events are filtered at the byte layer; bit event ⇒ end of byte ⇒ end of packet ⇒ end of msg send ⇒ a thread is posted to start sending the next message; the radio takes clock events to detect receive]

Sending a Message

bool pending;
struct TOS_Msg data;

command result_t IntOutput.output(uint16_t value) {
  IntMsg *message = (IntMsg *)data.data;
  if (!pending) {
    pending = TRUE;
    message->val = value;
    message->src = TOS_LOCAL_ADDRESS;
    if (call Send.send(TOS_BCAST_ADDR, sizeof(IntMsg), &data))  // destination, length
      return SUCCESS;
    pending = FALSE;
  }
  return FAIL;
}

• Refuses to accept the command if the buffer is still full or the network refuses to accept the send command
• User component provides structured msg storage
Send done Event

event result_t IntOutput.sendDone(TOS_MsgPtr msg, result_t success)
{
  if (pending && msg == &data) {
    pending = FALSE;
    signal IntOutput.outputComplete(success);
  }
  return SUCCESS;
}

• Send done event fans out to all potential senders
• Originator determined by match
  – free buffer on success, retry or fail on failure
• Others use the event to schedule pending communication

Receive Event

event TOS_MsgPtr ReceiveIntMsg.receive(TOS_MsgPtr m) {


IntMsg *message = (IntMsg *)m->data;
call IntOutput.output(message->val);
return m;
}

• Active message automatically dispatched to associated handler


 knows format, no run-time parsing
 performs action on message event

• Must return free buffer to the system


 typically the incoming buffer if processing complete
Tiny Active Messages

• Sending
 declare buffer storage in a frame
 request transmission
 name a handler
 handle completion signal

• Receiving
 declare a handler
 firing a handler: automatic

• Buffer management
 strict ownership exchange
 tx: send done event ⇒ reuse
 rx: must return a buffer
Tasks in Low-level Operation

• transmit packet
 send command schedules task to calculate CRC
 task initiates byte-level data pump
 events keep the pump flowing

• receive packet
 receive event schedules task to check CRC
 task signals packet ready if OK

• byte-level tx/rx
 task scheduled to encode/decode each complete byte
TinyOS tools

• TOSSIM: a simulator for TinyOS programs
• ListenRaw, SerialForwarder: Java tools to receive raw packets on a PC from a base node
• Oscilloscope: Java tool to visualize (sensor) data in real time
• Memory usage: breaks down memory usage per component (in contrib)
• Peacekeeper: detects RAM corruption due to stack overflows (in lib)
• Stopwatch: measures execution time of a code block by timestamping at entry and exit (in the OSU CVS server)
• Makedoc and graphviz: generate and visualize the component hierarchy
• Surge, Deluge, SNMS, TinyDB
Scalable Simulation Environment

• target platform: TOSSIM


 whole application compiled for host native instruction set
 event-driven execution mapped into event-driven simulator machinery
 storage model mapped to thousands of virtual nodes

• radio model and environmental


model plugged in
 bit-level fidelity

• Sockets = basestation

• Complete application
 including GUI
Simulation Scaling
TinyOS 2.0: basic changes

Scheduler: improve robustness and flexibility
– Reserved tasks by default (⇒ fault tolerance)
– Priority tasks


New nesC 1.2 features:
– Network types enable link level cross-platform interoperability
– Generic (instantiable) components, attributes, etc.


Platform definition: simplify porting
– Structure OS to leverage code reuse
– Decompose h/w devices into 3 layers: presentation, abstraction, device-independent
– Structure common chips for reuse across platforms

so platforms are a collection of chips: msp430 + CC2420 +


Power mgmt architecture for devices controlled by resource reservation


Self-initialisation


App-level notion of instantiable services
TinyOS Limitations


Static allocation allows for compile-time analysis, but can make programming
harder


No support for heterogeneity
 Support for other platforms (e.g. stargate)
 Support for high data rate apps (e.g. acoustic beamforming)
 Interoperability with other software frameworks and languages


Limited visibility
 Debugging
 Intra-node fault tolerance


Robustness solved in the details of implementation
 nesC offers only some types of checking
Em*


Software environment for sensor networks built from Linux-
class devices

Claimed features:
– Simulation and emulation tools
– Modular, but not strictly layered architecture
– Robust, autonomous, remote operation
– Fault tolerance within node and between nodes
– Reactivity to dynamics in environment and task
– High visibility into system: interactive access to all services
Contrasting Emstar and TinyOS


Similar design choices
 programming framework
− Component-based design
− “Wiring together” modules into an application

 event-driven
− reactive to “sudden” sensor events or triggers

 robustness
− Nodes/system components can fail

Differences

 hardware platform-dependent constraints


− Emstar: Develop without optimization
− TinyOS: Develop under severe resource-constraints

 operating system and language choices


− Emstar: easy to use C language, tightly coupled to linux (devfs,redhat,…)
Em* Transparently Trades-off Scale vs. Reality

Em* code runs transparently at many degrees of "reality": high-visibility debugging before low-visibility deployment

[Figure: scale vs. reality spectrum – Pure Simulation, Data Replay, Ceiling Array, Portable Array, Deployment; scale decreases as reality increases]
Em* Modularity

Dependency DAG

Each module (service)
– Manages a resource & resolves contention
– Has a well defined interface
– Has a well scoped task
– Encapsulates mechanism
– Exposes control of policy
– Minimizes work done by client library

Application has same structure

[Figure: example dependency DAG – Collaborative Sensor Processing Application on top of State Sync, 3d Multi-Lateration, Topology Discovery, Acoustic Ranging, Neighbor Discovery, Leader Election, Reliable Unicast, Time Sync, down to Hardware, Radio, Audio, Sensors]
Em* Robustness

Fault isolation via multiple processes

Active process management (EmRun)

Auto-reconnect built into libraries
– "Crashproofing" prevents cascading failure

Soft state design style
– Services periodically refresh clients
– Avoid "diff protocols"

[Figure: EmRun supervising a process tree – scheduling, depth map, path_plan, camera, motor_x, motor_y]
Em* Reactivity

Event-driven software structure
– React to asynchronous notification
– e.g. reaction to change in neighbor list

Notification through the layers
– Events percolate up
– Domain-specific filtering at every level
– e.g. neighbor list membership hysteresis

[Figure: notify/filter chain from motor_y up through path_plan to scheduling]
EmStar Components


Tools
– EmRun
– EmProxy/EmView
– EmTOS


Standard IPC
– FUSD
– Device patterns


Common Services
– NeighborDiscovery
– TimeSync
– Routing
EmRun: Manages Services


Designed to start, stop, and monitor services


EmRun config file specifies service dependencies


Starting and stopping the system
– Starts up services in correct order
– Can detect and restart unresponsive services
– Respawns services that die
– Notifies services before shutdown, enabling graceful shutdown and
persistent state


Error/Debug Logging
– Per-process logging to in-memory ring buffers
– Configurable log levels, at run time
EmSim/EmCee

● Em* supports a variety of types of


simulation and emulation, from simulated
radio channel and sensors to emulated
radio and sensor channels (ceiling array)

● In all cases, the code is identical

● Multiple emulated nodes run in their own


spaces, on the same physical machine
EmView/EmProxy: Visualization

[Figure: the emview visualizer queries emproxy on the emulator host; emproxy collects neighbor and linkstat state from each emulated node (nodeN, motenic), which is backed by a physical mote]

Inter-module IPC : FUSD

Creates device file interfaces

Text/Binary on same file

Standard interface
– Language independent
– No client library required

[Figure: a user-space Client talks to a Server through /dev/servicename and /dev/fusd, mediated by the kfusd.o kernel module]
Device Patterns


FUSD can support virtually any semantics
– What happens when client calls read()?


But many interfaces fall into certain patterns


Device Patterns
– encapsulate specific semantics
– take the form of a library:

objects, with method calls and callback functions

priority: ease of use
Status Device

Designed to report current state
– no queuing: clients not guaranteed to see every intermediate state

Supports multiple clients

Interactive and programmatic interface
– ASCII output via "cat"
– binary output to programs

Supports client notification
– notification via select()

Client configurable
– client can write a command string
– server parses it to enable per-client behavior

[Figure: a Server's Config Handler and State Request Handler behind a Status Device serving Client1, Client2, Client3]
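
As a concrete illustration of the text-mode interface, a client can read a status device with ordinary POSIX calls. This is a hedged sketch; "/dev/status/neighbor" is an illustrative path, not a specific Em* service name:

  #include <fcntl.h>
  #include <stdio.h>
  #include <unistd.h>

  int main(void) {
    char buf[512];
    int fd = open("/dev/status/neighbor", O_RDONLY);  /* device file served via FUSD */
    if (fd < 0) { perror("open"); return 1; }
    ssize_t n = read(fd, buf, sizeof(buf) - 1);       /* ASCII snapshot of current state */
    if (n > 0) { buf[n] = '\0'; fputs(buf, stdout); }
    close(fd);
    return 0;
  }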
Packet Device

Designed for message streams

Supports multiple clients

Supports queuing
– Round-robin service of output queues
– Delivery of messages to all/specific clients

Client-configurable:
– Input and output queue lengths
– Input filters
– Optional loopback of outputs to other clients (for snooping)

[Figure: a Server behind a Packet Device with per-client input/output queues and filters for Client1, Client2, Client3]
Device Files vs Regular Files


Regular files:
– Require locking semantics to prevent race conditions between readers and
writers
– Support “status” semantics but not queuing
– No support for notification, polling only


Device files:
– Leverage kernel for serialization: no locking needed
– Arbitrary control of semantics:

queuing, text/binary, per client configuration
– Immediate action, like a function call:
   a system call on the device triggers an immediate response from the service, rather than setting a request and waiting for the service to poll
Interacting With em*

● Text/Binary on same device file


– Text mode enables interaction from
shell and scripts
– Binary mode enables easy
programmatic access to data as C
structures, etc.

● EmStar device patterns support


multiple concurrent clients
– IPC channels used internally can be
viewed concurrently for debugging
– “Live” state can be viewed in the shell
(“echocat –w”) or using emview
SOS: Motivation and Key Feature

• Post-deployment software updates are necessary to


• customize the system to the environment
• upgrade features
• remove bugs
• re-task system

• Remote reprogramming is desirable

• Approach: Remotely insert binary modules into running kernel


⇒ software reconfiguration without interrupting system operation
⇒ no stop and re-boot unlike differential patching

• Performance should be superior to virtual machines


Architecture Overview

[Figure: SOS architecture – dynamically loadable modules (e.g. Tree Routing, Light Sensor, Application) on top of a static kernel providing function pointer control blocks, dynamic memory, the scheduler, kernel services (serial framer, comm. stack, sensor manager, timer, low-level device drivers), and a hardware abstraction layer (Clock, UART, ADC, SPI, I2C)]

Static Kernel
• Provides hardware abstraction & common services
• Maintains data structures to enable module loading
• Costly to modify after deployment

Dynamic Modules
• Drivers, protocols, and applications
• Inexpensive to modify after deployment
• Position independent
SOS Kernel


Hardware Abstraction Layer (HAL)

Clock, UART, ADC, SPI, etc.


Low layer device drivers interface with HAL

Timer, serial framer, communications stack, etc.


Kernel services

Dynamic memory management

Scheduling

Function control blocks
Kernel Services: Memory Management


Fixed-partition dynamic memory allocation

Constant allocation time

Low overhead


Memory management features

Guard bytes for run-time memory overflow checks

Ownership tracking

Garbage collection on completion


pkt = (uint8_t*)ker_malloc(hdr_size + sizeof(SurgeMsg), SURGE_MOD_PID);
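
Expanding the allocation line above into a hedged sketch: ker_free is assumed to be the matching release call, and the failure handling is illustrative rather than taken from the SOS sources.

  uint8_t *pkt = (uint8_t *)ker_malloc(hdr_size + sizeof(SurgeMsg), SURGE_MOD_PID);
  if (pkt == NULL) {
    return -ENOMEM;          /* the constant-time, fixed-partition allocator can run out of blocks */
  }
  /* ... fill in the header and SurgeMsg payload, then either hand the buffer
     off (transferring ownership) or release it ... */
  ker_free(pkt);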
Kernel Services: Scheduling


SOS implements non-preemptive priority scheduling via priority
queues


Event served when there is no higher priority event

Low priority queue for scheduling most events

High priority queue for time critical events, e.g., h/w interrupts &
sensitive timers


Prevents execution in interrupt contexts


post_long(TREE_ROUTING_PID, SURGE_MOD_PID, MSG_SEND_PACKET,
hdr_size + sizeof(SurgeMsg), (void*)packet, SOS_MSG_DYM_MANAGED);
Modules


Each module is uniquely identified by its ID or pid


Has private state


Represented by a message handler & has prototype:
int8_t handler(void *private_state, Message *msg)


Return value follows errno

– SOS_OK for success. -EINVAL, -ENOMEM, etc. for

failure
Kernel Services: Module Linking

Orthogonal to module distribution protocol

Kernel stores the new module in a free block located in program memory, and critical information about the module in the module table

Kernel calls the initialization routine for the module

Publish functions for other parts of the system to use
char tmp_string[] = {'C', 'v', 'v', 0};
ker_register_fn(TREE_ROUTING_PID, MOD_GET_HDR_SIZE, tmp_string, (fn_ptr_t)tr_get_header_size);

Subscribe to functions supplied by other modules
char tmp_string[] = {'C', 'v', 'v', 0};
s->get_hdr_size = (func_u8_t*)ker_get_handle(TREE_ROUTING_PID, MOD_GET_HDR_SIZE, tmp_string);

Set initial timers and schedule events
Module–to–Kernel Communication

[Figure: Module A makes system calls through the SOS kernel jump table; the kernel uses an HW-specific API to reach the hardware; interrupts and system messages reach the module via a high-priority message buffer]

Kernel provides system services and access to hardware
ker_timer_start(s->pid, 0, TIMER_REPEAT, 500);
ker_led(LED_YELLOW_OFF);

Kernel jump table re-directs system calls to handlers
– upgrade kernel independent of the modules

Interrupts & messages from kernel dispatched by a high priority message buffer
– low latency
– concurrency safe operation
Inter-Module Communication

[Figure: Module A calls Module B through the module function pointer table (indirect function call); Module A posts to Module B through the message buffer]

Inter-Module Function Calls
• Synchronous communication
• Kernel stores pointers to functions registered by modules
• Blocking calls with low latency
• Type-safe runtime function binding

Inter-Module Message Passing
• Asynchronous communication
• Messages dispatched by a two-level priority scheduler
• Suited for services with long latency
• Type-safe binding through publish / subscribe interface
Synchronous Communication

A module can register a function for low-latency blocking calls (1)

Modules which need such a function can subscribe to it by getting a function pointer pointer (i.e. **func) (2)

When the service is needed, the subscribing module dereferences the function pointer pointer (3)

[Figure: Module A and Module B connected through the module function pointer table; steps (1) register, (2) subscribe, (3) dereference and call]
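
A hedged C sketch of step (3), calling through the function pointer pointer obtained earlier with ker_get_handle; the field name and return type are taken from the subscription example above, the rest is illustrative:

  /* s->get_hdr_size was filled in by ker_get_handle() at subscription time (2);
     calling through the double pointer lets the kernel re-point it if the
     provider module is replaced */
  uint8_t hdr_size = 0;
  if (s->get_hdr_size != NULL) {
    hdr_size = (*(s->get_hdr_size))();   /* step (3): dereference, then call */
  }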
Asynchronous Communication

[Figure: messages arrive from the network (1) into the message queue, are handled by Module A (2), which may post a message to Module B (3), handled in turn (4), or send a message to the network via the send queue (5)]

• A module is active while it is handling a message (2)(4)
• Message handling runs to completion and can only be interrupted by hardware interrupts
• A module can send a message to another module (3) or to the network (5)
• Messages can come from both the network (1) and the local host (3)
Module Safety

Problem: Modules can be remotely added, removed, & modified on
deployed nodes

Accessing a module

If module doesn't exist, kernel catches messages sent to it & handles
dynamically allocated memory

If module exists but can't handle the message, then module's default
handler gets message & kernel handles dynamically allocated memory


Subscribing to a module’s function

Publishing a function includes a type description that is stored in a
function control block (FCB) table

Subscription attempts include type checks against corresponding FCB

Type changes/removal of published functions result in subscribers being
redirected to system stub handler function specific to that type

Updates to functions w/ same type assumed to have same semantics
Module Library

Some applications created by combining already written and tested modules

SOS kernel facilitates loosely coupled modules
– Passing of memory ownership
– Efficient function and messaging interfaces

[Figure: Surge application with debugging – Surge, Memory Debug, Photo Sensor, and Tree Routing modules]
Module Design

• Uses standard C
• Programs created by "wiring" modules together

#include <module.h>

typedef struct {
  uint8_t pid;
  uint8_t led_on;
} app_state;

DECL_MOD_STATE(app_state);
DECL_MOD_ID(BLINK_ID);

int8_t module(void *state, Message *msg) {
  app_state *s = (app_state*)state;
  switch (msg->type) {
    case MSG_INIT: {
      s->pid = msg->did;
      s->led_on = 0;
      ker_timer_start(s->pid, 0, TIMER_REPEAT, 500);
      break;
    }
    case MSG_FINAL: {
      ker_timer_stop(s->pid, 0);
      break;
    }
    case MSG_TIMER_TIMEOUT: {
      if (s->led_on == 1) {
        ker_led(LED_YELLOW_ON);
      } else {
        ker_led(LED_YELLOW_OFF);
      }
      s->led_on++;
      if (s->led_on > 1) s->led_on = 0;
      break;
    }
    default: return -EINVAL;
  }
  return SOS_OK;
}
Sensor Manager

• Enables sharing of sensor data between multiple modules (periodic or polled access)

• Presents a uniform data access API to diverse sensors

• Underlying device specific drivers register with the sensor manager

• Device specific sensor drivers control
  – Calibration
  – Data interpolation

• Sensor drivers are loadable: enables
  – post-deployment configuration of sensors
  – hot-swapping of sensors on a running node

[Figure: Module A (periodic access, signal data ready) and Module B (polled access) above the Sensor Manager (getData / dataReady), which talks to a MagSensor driver over ADC and I2C]
Application Level Performance

Comparison of application performance in SOS, TinyOS, and Mate VM
[Figure: Surge tree formation latency, Surge forwarding delay, Surge packet delivery ratio]

Memory footprint for base operating system with the ability to distribute and update node programs:

Platform              ROM        RAM
SOS Core              20464 B    1163 B
Dynamic Memory Pool   -          1536 B
TinyOS with Deluge    21132 B    597 B
Mate VM               39746 B    3196 B

CPU active time for the Surge application:

System    Active Time (in 1 min)   Active Time (%)   Overhead relative to TOS (%)
TinyOS    3.31 sec                 5.22%             NA
SOS       3.50 sec                 5.84%             5.70%
Mate VM   3.68 sec                 6.13%             11.00%
Reconfiguration Performance

Module size and energy profile for installing Surge under SOS:

Module Name     Code Size (Bytes)
sample_send     568
tree_routing    2242
photo_sensor    372
Energy (mJ)     2312.68
Latency (sec)   46.6

Energy cost of light sensor driver update:

System    Code Size (Bytes)   Cost (mJ/page)   Write Energy (mJ)
SOS       1316                0.31             1.86
TinyOS    30988               1.34             164.02
Mate VM   NA                  NA               NA

Energy cost of surge application update:

System    Code Size (Bytes)   Cost (mJ/page)   Write Energy (mJ)
SOS       566                 0.31             0.93
TinyOS    31006               1.34             164.02
Mate VM   17                  0                0

• Energy trade offs
  – SOS has slightly higher base operating cost
  – TinyOS has significantly higher update cost
  – SOS is more energy efficient when the system is updated one or more times a week
Platform Support

Supported microcontrollers
• Atmel ATmega128
  – 4 KB RAM, 128 KB FLASH
• Oki ARM
  – 32 KB RAM, 256 KB FLASH

Supported radio stacks
• Chipcon CC1000
  – BMAC
• Chipcon CC2420
  – IEEE 802.15.4 MAC (NDA required)
Simulation Support

Source code level network simulation

Pthread simulates hardware concurrency

UDP simulates perfect radio channel

Supports user defined topology & heterogeneous software configuration

Useful for verifying the functional correctness


Instruction level simulation with Avrora

Instruction cycle accurate simulation

Simple perfect radio channel

Useful for verifying timing information

See http://compilers.cs.ucla.edu/avrora/


EmStar integration under development
Contiki

Dynamic loading of programs (vs. static)

Multi-threaded concurrency managed execution


(in addition to event driven)

Available on MSP430, AVR, HC12, Z80, 6502, x86, ...

Simulation environment available for


BSD/Linux/Windows
Key ideas

• Dynamic loading of programs


 Selective reprogramming
 Static/pre-linking (early work: EmNets)
 Dynamic linking (recent work: SENSYS)
− Key difference from SOS:
no assumption of position independence

• Concurrency management mechanisms


 Events and threads
 Trade-offs: preemption, size
Loadable programs

• One-way dependencies
  – Core resident in memory
    − Language run-time, communication
  – If programs "know" the core
    − Can be statically linked
    − And call core functions and reference core variables freely

• Individual programs can be loaded/unloaded
  − Need to register their variable and function information with core

[Figure: loadable programs layered on top of the Core]
Loadable programs (contd.)

• Programs can be loaded from anywhere
  – Radio (multi-hop, single-hop), EEPROM, etc.

• During software development, usually change only one module
Core Symbol Table


Registry of names and addresses of
all externally visible variables and functions
of core modules and run-time libraries


Offers API to linker to search registry and to
update registry


Created when Contiki core binary image is
compiled
– multiple pass process
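
A hedged C sketch of what such a registry might look like; the struct fields and the lookup function are illustrative, not the actual Contiki symbol-table API:

  #include <string.h>

  struct symbol {
    const char *name;   /* externally visible variable or function name */
    void       *addr;   /* its address in the core image                */
  };

  /* table generated when the Contiki core binary image is compiled */
  extern const struct symbol core_symbols[];
  extern const int core_symbols_count;

  /* lookup API offered to the dynamic linker */
  void *symtab_lookup(const char *name) {
    for (int i = 0; i < core_symbols_count; i++) {
      if (strcmp(core_symbols[i].name, name) == 0)
        return core_symbols[i].addr;
    }
    return NULL;
  }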
Linking and relocating a module

1. Parse the payload into code, data, symbol table, and a list of "relocation entries", which
  − correspond to an instruction or address in code or data that needs to be updated with a new address
  − consist of
    o a pointer to a symbol, such as a variable name or a function name, or a pointer to a place in the code or data
    o the address of the symbol
    o a relocation type which specifies how the data or code should be updated

2. Allocate memory for code & data in flash ROM and RAM

3. Link and relocate code and data segments
  – for each relocation entry, search the core symbol table and the module symbol table
  – if the relocation is relative, calculate the absolute address

4. Write code to flash ROM and data to RAM

Contiki size (bytes)

Module                    Code (MSP430)   Code (AVR)   RAM
Kernel                    810             1044         10 + e + p
Program loader            658             -            8
Multi-threading library   582             678          8 + s
Timer library             60              90           0
Memory manager            170             226          0
Event log replicator      1656            1934         200
µIP TCP/IP stack          4146            5218         18 + b
How well does it work?

• Works well
 Program typically much smaller than entire system image (1-
10%)
− Much quicker to transfer over the radio
 Reprogramming takes seconds

• Static linking can be a problem


 Small differences in core means module cannot be run
 Implementation of dynamic linker is in SENSYS paper
Revisiting Multi-threaded Computation

 Threads blocked, waiting for events
 Kernel unblocks threads when event occurs
 Thread runs until next blocking statement
 Each thread requires its own stack
  − Larger memory usage

[Figure: several threads running on top of the kernel, each with its own stack]
Event-driven vs multi-threaded

Event-driven Multi-threaded
- No wait() statements + wait() statements
- No preemption + Preemption possible
- State machines + Sequential code flow
+ Compact code - Larger code overhead
+ Locking less of a problem - Locking problematic
+ Memory efficient - Larger memory requirements

How to combine them?


Contiki: event-based kernel with threads

• Kernel is event-based
 Most programs run directly on top of the kernel

• Multi-threading implemented as a library

• Threads only used if explicitly needed


 Long running computations, ...

• Preemption possible
 Responsive system with running computations
Responsiveness

Computation in a thread
Threads implemented atop an event-based kernel

[Figure: events handled by the kernel interleave with two long-running threads, keeping the system responsive during computation]
Implementing preemptive threads 1

[Figure: a timer IRQ arrives while a thread runs; the IRQ handler switches the stack, sets up and runs the event handler, then switches the stack back to resume the thread]

Implementing preemptive threads 2

[Figure: the thread calls yield(); the stack is switched, the event handler is set up and run, and the stack is switched back]
Memory management

• Memory allocated when module is loaded


 Both ROM and RAM
 Fixed block memory allocator

• Code relocation made by module loader


 Exercises flash ROM evenly
Protothreads: light-weight stackless threads

• Protothreads: a mixture between event-driven and threaded
  – A third concurrency mechanism

• Allows blocked waiting

• Requires no per-thread stack

• Each protothread runs inside a single C function

• 2 bytes of per-protothread state
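
A minimal sketch using Contiki's protothread macros from pt.h (PT_BEGIN / PT_WAIT_UNTIL / PT_END); the button flag and LED helper are hypothetical:

  #include "pt.h"

  static struct pt blink_pt;              /* holds the 2 bytes of protothread state */
  static volatile int button_pressed;     /* hypothetical event flag set elsewhere  */

  static void toggle_led(void) { /* hypothetical LED helper */ }

  static PT_THREAD(blink_thread(struct pt *pt))
  {
    PT_BEGIN(pt);
    while (1) {
      /* blocked waiting without a per-thread stack: the macro returns to the
         caller here and resumes at this point on the next invocation */
      PT_WAIT_UNTIL(pt, button_pressed);
      button_pressed = 0;
      toggle_led();
    }
    PT_END(pt);
  }

  /* PT_INIT(&blink_pt) is called once; an event handler then repeatedly
     invokes blink_thread(&blink_pt) to drive it */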


Mate: A Virtual Machine for Sensor
Networks

Why VM?

Large number (100’s to 1000’s) of nodes in a coverage area

Some nodes will fail during operation

Change of function during the mission

Related Work

PicoJava
assumes Java bytecode execution hardware

K Virtual Machine
requires 160 – 512 KB of memory

XML
too complex and not enough RAM

Scylla
VM for mobile embedded system
Mate features


Small (16KB instruction memory, 1KB RAM)


Concise (limited memory & bandwidth)


Resilience (memory protection)


Efficient (bandwidth)


Tailorable (user defined instructions)
Mate in a Nutshell


Stack architecture


Three concurrent execution contexts


Execution triggered by predefined events


Tiny code capsules; self-propagate into network


Built in communication and sensing instructions
When is Mate Preferable?


For small number of executions


GDI example:
Bytecode version is preferable for a program
running less than 5 days


In energy constrained domains


Use Mate capsule as a general RPC engine
Mate Architecture

 Stack based architecture

 Single shared variable
  • gets/sets

 Three events:
  • Clock timer
  • Message reception
  • Message send

 Hides asynchrony
  • Simplifies programming
  • Less prone to bugs

[Figure: the Clock, Send, and Receive execution contexts and subroutines 0-3 share the heap via gets/sets; each context has a PC, operand stack, return stack, and code]
Instruction Set

 One byte per instruction


 Three classes: basic, s-type, x-type
• basic: arithmetic, halting, LED operation
• s-type: messaging system
• x-type: pushc, blez
 8 instructions reserved for users to define
 Instruction polymorphism
• e.g. add(data, message, sensing)
Code Example(1)


Display Counter to LED

gets # Push heap variable on stack


pushc 1 # Push 1 on stack
add # Pop twice, add, push result
copy # Copy top of stack
sets # Pop, set heap
pushc 7 # Push 0x0007 onto stack
and # Take bottom 3 bits of value
putled # Pop, set LEDs to bit pattern
halt #
Code Capsules


One capsule = 24 instructions


Fits into single TOS packet


Atomic reception


Code Capsule

– Type and version information

– Type: send, receive, timer, subroutine


Viral Code


Capsule transmission: forw

Forwarding other installed capsule: forwo (use within
clock capsule)

Mate checks the version number on reception of a capsule
-> if it is newer, install it



Versioning: 32bit counter

Disseminates new code over the network
Component Breakdown

● Mate runs on mica with 7286 bytes code, 603 bytes RAM
Network Infection Rate

● 42 node network in 3 by 14
grid
● Radio transmission: 3 hop
network
● Cell size: 15 to 30 motes
● Every mote runs its clock
capsule every 20 seconds
● Self-forwarding clock capsule
Bytecodes vs. Native Code

● Mate IPS: ~10,000

● Overhead: Every instruction executed as separate TOS task


Installation Costs

● Bytecodes have computational overhead

● But this can be compensated by using small packets on


upload (to some extent)
Customizing Mate


Mate is general architecture; user can build customized
VM


User can select bytecodes and execution events


Issues:
– Flexibility vs. Efficiency
Customizing increases efficiency w/ cost of changing requirements
– Java’s solution:
General computational VM + class libraries
– Mate’s approach:
More customizable solution -> let user decide
How to …


Select a language
-> defines VM bytecodes


Select execution events
-> execution context, code image


Select primitives
-> beyond language functionality
Constructing a Mate VM

 Constructing a VM generates a set of files, which are used to build the TOS application and to configure the script program
Compiling and Running a Program

 Write programs in the scripter
 The scripter produces VM-specific binary code
 Send it over the network to a VM
Bombilla Architecture

 Once context: performs operations that only need a single execution
 16 word heap shared among the contexts; setvar, getvar
 Buffer holds up to ten values; bhead, byank, bsorta
Bombilla Instruction Set

 basic: arithmetic, halt, sensing


 m-class: access message header
 v-class: 16 word heap access
 j-class: two jump instructions
 x-class: pushc
Enhanced Features of Bombilla


Capsule Injector: programming environment

Synchronization: 16-word shared heap; locking scheme

Provide synchronization model: handler, invocations,

resources, scheduling points, sequences



Resource management: prevent deadlock

Random and selective capsule forwarding

Error State
Discussion


Compared to the traditional VM concept, is Mate platform independent? Can
we have it run on heterogeneous hardware?


Security issues:
How can we trust the received capsule? Is there a way to prevent version
number race with adversary?


In viral programming, is there a way to forward messages other than
flooding? After a certain number of nodes are infected by new version
capsule, can we forward based on need?


Bombilla has some sophisticated OS features. What is the size of the
program? Does sensor node need all those features?
.NET MicroFramework (MF) Architecture

• .NET
MF is a bootable runtime environment tailored for
embedded development

• MF services include:

 Boot Code

 Code Execution

 Thread Management

 Memory Management

 Hardware I/O
.NET MF Hardware Abstraction Layer (HAL)


Provides an interface to access hardware and
peripherals
– Relevant only for system, not application developers


Does not require operating system
– Can run on top of one if available


Interfaces include:
– Clock Management
– Core CPU
– Communications
– External Bus Interface Unit (EBIU)

.NET MF Platform Abstraction Layer (PAL)


Provides hardware independent abstractions
– Used by application developers to access system
resources
– Application calls to PAL managed by Common
Language Runtime (CLR)
– In turn calls HAL drivers to access hardware


PAL interfaces include:
– Time
– PAL Memory Management
– Input/Output
– Events
– Debugging
Threading Model


User applications may have multiple threads

– Represented in the system as Managed Threads serviced by the


CLR

– Time sliced context switching with (configurable) 20ms quantum

– Threads may have priorities


CLR has a single thread of execution at the system level

– Uses cooperative multitasking


Explicitly yields execution periodically to interrupt service routine
continuations
Timer Module


MF provides support for accessing timers from C#


Enables execution of a user specified method

– At periodic intervals or one-time

– Callback method can be selected when timer is constructed


Part of the System.Threading namespace

– Callback method executes in a thread pool thread


provided by the system
Timer Interface
● Callback: user specified
method to be executed

● State: information used by


callback method
– May be null

● Duetime: delay before the


timer first fires

● Period: time interval


between callback
invocations

● Change method allows user


to stop timer
– Change period to -1
ADC Extension to the HAL


Extended MF HAL to support ADC API’s
– High-precision, low latency sampling using hardware clock

Critical for many signal processing applications


Supported API functions include
– Initialize: initialize ADC peripheral registers and the clocks
– UnInitialize: reset ADC peripheral registers and uninitialize clocks
– ConfigureADC: select ADC parameters (mode, input channels, etc)
– StartSampling: starts conversion on selected ADC channel
– GetSamplingStatus: whether in progress or complete
– GetData: returns data stored in ADC data register
Radio Extension to the HAL


Extended the MF HAL to support radio API’s


Supported API functions include
– On: powers on radio, configures registers, SPI bus, initializes clocks
– Off: powers off radio, resets registers, clocks and SPI bus
– Configure: sets radio options for 802.15.4 radio
– BuildFrame: constructs data frame with specified parameters

destination address, data, ack request
– SendPacket: sends data frame to specified address
– ReceivePacket: receives packet from a specified source address
MAC Extension to PAL


Built-in, efficient wireless communication protocol

– OMAC (Cao, Parker, Arora: ICNP 2006)



Receiver centric MAC protocol

Highly efficient for low duty cycle applications
– Implemented as a PAL component natively on top of HAL radio
extensions for maximum efficiency
– Exposes rich set of wireless communication interfaces

OMACSender

OMACReceiver

OMACBroadcast
– Easy, out-of-the-box Wireless Communication

Complete abstraction of native, platform or protocol specific code from
application developers
