Unit 5
● A hardware device that enables wireless sensor network research
● Typical components
  – Microcontroller
  – Radio
  – Power supply
  – Sensors and/or actuators
  – Peripherals
● USB interface
● Storage
Crossbow
● Original design: UC Berkeley
  – Commercial products: Crossbow Mica2 and MicaZ
  – Other products from Crossbow
    ● Cricket
    ● Imote2
    ● IRIS
• Traditional approach
  – command processing loop (wait for request, act, respond)
• Alternative
  – provide a framework for concurrency and modularity
● Microthreaded OS (lightweight thread support) and efficient network interfaces
● Two-level scheduling structure
  – Long-running tasks that can be interrupted by hardware events
● Small, tightly integrated design that allows crossover of software components into hardware
TinyOS Concepts
• Scheduler + Graph of Components
  – constrained two-level scheduling model: threads + events
• Component:
  – Commands
  – Event handlers
  – Frame (storage)
  – Tasks (concurrency)
  [Diagram: a messaging component with an internal frame, internal state, and internal thread; commands such as init, power(mode), TX_packet(buf); events such as msg_rec(type, data), msg_send_done, RX_packet_done(buffer), TX_packet_done(success); it issues send_msg(addr, type, data)]
• Constrained storage model
  – frame per component, shared stack, no heap
• Very lean multithreading
Application = Graph of Components
  [Diagram: an application as a graph of cooperating state machines on a shared stack, layered over Active Messages, packet, byte (Radio byte, UART HW), and bit (RFM, clock) levels]
TOS Execution Model
  [Diagram: data flows from the RFM (bit level) through Radio byte (byte level, encode/decode, event-driven byte-pump) up to Radio Packet (packet level, crc); HW interrupt at the lowest level may signal events; handlers may call commands]
command CmdName(args) {
  ...
  return status;
}

event EvtName(args) {
  ...
  return status;
}

{
  ...
  status = signal EvtName(args);
  ...
}
TinyOS Execution Contexts
  [Diagram: Hardware → Interrupts → events → commands → Tasks]
• Events generated by interrupts preempt tasks
• Tasks do not preempt tasks
• Both essentially process state transitions
Handling Concurrency: Async or Sync Code
Compiler rule:
  if a variable x is accessed by async code, then any access of x outside of an atomic statement is a compile-time error
Race-Free Invariant:
  any update to shared state is either not a potential race condition (sync code only) or is within an atomic section
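
As an illustration, a minimal nesC sketch of the rule (module and variable names are hypothetical; ADC.dataReady is an async event in TinyOS 1.x):

module SampleM {
  uses interface ADC;
}
implementation {
  uint16_t lastReading;   // shared between async and sync code

  async event result_t ADC.dataReady(uint16_t data) {
    atomic { lastReading = data; }     // async write must be atomic
    return SUCCESS;
  }

  task void process() {
    uint16_t snapshot;
    atomic { snapshot = lastReading; } // sync read also guarded
    // ... use snapshot; without the atomic blocks, nesC rejects the access
  }
}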
Tasks
  [Diagram: the SenseToRfm application component graph — AMStandard over RadioCRCPacket and UARTnoCRCPacket; CRCfilter and noCRCPacket; MicaHighSpeedRadioM with SecDedEncode, ChannelMon, RadioTiming, and SlavePin; Timer, photo, and phototemp — spanning packet, byte, and SW/HW layers]
Programming Syntax
• Compositional support
  – separation of definition and linkage
  – robustness through narrow interfaces and reuse
  – interpositioning

StdControl.nc:
interface StdControl {
  command result_t init();
  command result_t start();
  command result_t stop();
}

Clock.nc:
interface Clock {
  command result_t setRate(char interval, char scale);
  event result_t fire();
}
Component Types
• Configuration
  – links together components to compose a new component
• Module
  – provides code that implements one or more interfaces and internal behavior
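
For instance, a minimal module sketch providing StdControl and using the Clock interface shown above (the module name and rate arguments are illustrative):

module SenseM {
  provides interface StdControl;
  uses interface Clock;
}
implementation {
  command result_t StdControl.init()  { return SUCCESS; }
  command result_t StdControl.start() {
    return call Clock.setRate(128, 6);   // illustrative interval/scale
  }
  command result_t StdControl.stop()  { return SUCCESS; }

  event result_t Clock.fire() {
    // periodic work goes here (e.g. post a task)
    return SUCCESS;
  }
}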
Example of Top Level Configuration

configuration SenseToRfm {
  // this module does not provide any interface
}
implementation
{
  components Main, SenseToInt, IntToRfm, ClockC, Photo as Sensor;
  …
}

  [Diagram: IntToRfmM wired to GenericComm; provides StdControl and IntOutput, uses SubControl and SendMsg[AM_INTMSG]]

includes IntMsg;
configuration IntToRfm
{
  provides {
    interface IntOutput;
    interface StdControl;
  }
}
implementation
{
  components IntToRfmM, GenericComm as Comm;
  IntOutput = IntToRfmM;
  StdControl = IntToRfmM;
  …
Dynamics of Events and Threads
  [Timeline diagram: bit events from the RFM are filtered at the byte layer; end of byte ⇒ thread posted to start encoding (Encode Task, Bytes 1–4, RFM bits); end of packet ⇒ end of msg send ⇒ send next message]
• Sending
  – declare buffer storage in a frame
  – request transmission
  – name a handler
  – handle completion signal
• Receiving
  – declare a handler
  – firing a handler: automatic
• Buffer management
  – strict ownership exchange
  – tx: send done event ⇒ reuse
  – rx: must return a buffer
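
A sketch of this pattern, assuming the TinyOS 1.x SendMsg/ReceiveMsg interfaces (the module name and wiring are illustrative):

module IntSendM {
  uses interface SendMsg;
  uses interface ReceiveMsg;
}
implementation {
  TOS_Msg buf;    // buffer storage declared in the frame
  bool busy;

  void sendReading(uint16_t val) {
    if (!busy) {
      memcpy(buf.data, &val, sizeof val);
      if (call SendMsg.send(TOS_BCAST_ADDR, sizeof val, &buf) == SUCCESS)
        busy = TRUE;   // buffer ownership passes to the radio stack
    }
  }

  event result_t SendMsg.sendDone(TOS_MsgPtr msg, result_t success) {
    if (msg == &buf) busy = FALSE;   // send done ⇒ buffer may be reused
    return SUCCESS;
  }

  event TOS_MsgPtr ReceiveMsg.receive(TOS_MsgPtr m) {
    // ... consume m ...
    return m;   // must hand a buffer back to the stack (ownership exchange)
  }
}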
Tasks in Low-level Operation
• transmit packet
  – send command schedules task to calculate CRC
  – task initiates byte-level data pump
  – events keep the pump flowing
• receive packet
  – receive event schedules task to check CRC
  – task signals packet ready if OK
• byte-level tx/rx
  – task scheduled to encode/decode each complete byte
TinyOS tools
● TOSSIM: a simulator for TinyOS programs
● ListenRaw, SerialForwarder: Java tools to receive raw packets on a PC from a base node
● Oscilloscope: Java tool to visualize (sensor) data in real time
● Memory usage: breaks down memory usage per component (in contrib)
● Peacekeeper: detects RAM corruption due to stack overflows (in lib)
● Stopwatch: tool to measure execution time of a code block by timestamping at entry and exit (in osu CVS server)
● Makedoc and graphviz: generate and visualize the component hierarchy
● Surge, Deluge, SNMS, TinyDB
Scalable Simulation Environment
• Sockets = basestation
• Complete application
  – including GUI
Simulation Scaling
TinyOS 2.0: basic changes
● Scheduler: improve robustness and flexibility
  – Reserved tasks by default (⇒ fault tolerance)
  – Priority tasks
● New nesC 1.2 features:
  – Network types enable link-level cross-platform interoperability
  – Generic (instantiable) components, attributes, etc. (see the sketch below)
● Platform definition: simplify porting
  – Structure OS to leverage code reuse
  – Decompose h/w devices into 3 layers: presentation, abstraction, device-independent
  – Structure common chips for reuse across platforms
    ● so platforms are a collection of chips: msp430 + CC2420 + …
● Power mgmt architecture for devices controlled by resource reservation
● Self-initialisation
● App-level notion of instantiable services
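
For example, generic components in TinyOS 2.0 are instantiated with new inside a configuration (the message type and wiring names here are illustrative):

configuration MyAppC { }
implementation {
  components MyAppP;
  components new TimerMilliC() as Timer0;   // fresh timer instance
  components new AMSenderC(AM_MY_MSG);      // instantiated per AM type

  MyAppP.Timer0 -> Timer0;
  MyAppP.AMSend -> AMSenderC;
}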
TinyOS Limitations
● Static allocation allows for compile-time analysis, but can make programming harder
● No support for heterogeneity
  – Support for other platforms (e.g. Stargate)
  – Support for high data rate apps (e.g. acoustic beamforming)
  – Interoperability with other software frameworks and languages
● Limited visibility
  – Debugging
  – Intra-node fault tolerance
● Robustness solved in the details of implementation
  – nesC offers only some types of checking
Em*
● Software environment for sensor networks built from Linux-class devices
● Claimed features:
  – Simulation and emulation tools
  – Modular, but not strictly layered architecture
  – Robust, autonomous, remote operation
  – Fault tolerance within node and between nodes
  – Reactivity to dynamics in environment and task
  – High visibility into system: interactive access to all services
Contrasting Em* and TinyOS
● Similar design choices
  – programming framework
    − component-based design
    − “wiring together” modules into an application
  – event-driven
    − reactive to “sudden” sensor events or triggers
  – robustness
    − nodes/system components can fail
● Differences
  [Diagram: Em* spans a scale-vs-reality spectrum — Pure Simulation, Data Replay, Ceiling Array, Portable Array, Deployment — trading scale against realism]
Em* Modularity
● Dependency DAG
  [Diagram: a Collaborative Sensor Processing Application over services — State Sync, 3d Multilateration, Topology Discovery, Acoustic Ranging, Neighbor Discovery, Leader Election, Reliable Unicast, Time Sync — down to Hardware, Radio, Audio, Sensors]
● Each module (service)
  – Manages a resource & resolves contention
  – Has a well defined interface
  – Has a well scoped task
  – Encapsulates mechanism
  – Exposes control of policy
  – Minimizes work done by client library
● Application has the same structure
Em* Robustness
● Fault isolation via multiple processes
● Active process management (EmRun)
● Auto-reconnect built into libraries
● Event-driven software structure
  – React to asynchronous notification
  – e.g. reaction to change in neighbor list
● Notification through the layers
  [Diagram: notifications propagate up through scheduling, path_plan, and filter layers]
● Tools
  – EmRun
  – EmProxy/EmView
  – EmTOS
● Standard IPC
  – FUSD
  – Device patterns
● Common Services
  – NeighborDiscovery
  – TimeSync
  – Routing
EmRun: Manages Services
● Designed to start, stop, and monitor services
● EmRun config file specifies service dependencies
● Starting and stopping the system
  – Starts up services in correct order
  – Can detect and restart unresponsive services
  – Respawns services that die
  – Notifies services before shutdown, enabling graceful shutdown and persistent state
● Error/Debug Logging
  – Per-process logging to in-memory ring buffers
  – Configurable log levels, at run time
EmSim/EmCee
  [Diagram: the emview GUI connects to emproxy, which gathers state (neighbor, linkstat, motenic, …) from emulated nodes node1 … nodeN]
FUSD
● Creates device file interfaces
● Text/Binary on same file
● Standard interface
  – Language independent
  – No client library required
  [Diagram: user-space Client and Server connected through /dev/servicename and /dev/fusd, mediated by kfusd.o in the kernel]
Device Patterns
● FUSD can support virtually any semantics
  – What happens when client calls read()?
● But many interfaces fall into certain patterns
● Device Patterns
  – encapsulate specific semantics
  – take the form of a library:
    ● objects, with method calls and callback functions
    ● priority: ease of use
Status Device
● Designed to report current state
  – no queuing: clients not guaranteed to see every intermediate state
● Supports multiple clients
● Interactive and programmatic interface
  – ASCII output via “cat”
  – binary output to programs
● Supports client notification
  – notification via select()
● Client configurable
  – client can write a command string
  – server parses it to enable per-client behavior
  [Diagram: server-side Config Handler and State Request Handler feed the Status Device’s output (O) and input (I) channels, serving Client1–Client3]
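
A sketch of a programmatic Status Device client in C (the device path is hypothetical); select() blocks until the service publishes new state:

#include <stdio.h>
#include <fcntl.h>
#include <unistd.h>
#include <sys/select.h>

int main(void) {
  char buf[512];
  int fd = open("/dev/status/neighbors", O_RDONLY);  /* hypothetical path */
  if (fd < 0) return 1;
  for (;;) {
    fd_set rfds;
    FD_ZERO(&rfds);
    FD_SET(fd, &rfds);
    if (select(fd + 1, &rfds, NULL, NULL, NULL) < 0)  /* wait for new state */
      break;
    ssize_t n = read(fd, buf, sizeof buf - 1);        /* read current state */
    if (n <= 0) break;
    buf[n] = '\0';
    printf("%s", buf);   /* same ASCII that `cat` on the device would show */
  }
  close(fd);
  return 0;
}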
Packet Device
● Designed for message streams
● Supports multiple clients
● Supports queuing
  – Round-robin service of output queues
  – Delivery of messages to all/specific clients
● Client-configurable:
  – Input and output queue lengths
  – Input filters
  – Optional loopback of outputs to other clients (for snooping)
  [Diagram: the server-side Packet Device maintains per-client input/output queues with filters (F) for Client1–Client3]
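
By contrast, a Packet Device client exchanges whole messages per system call; a C sketch with a hypothetical device path:

#include <stdio.h>
#include <fcntl.h>
#include <unistd.h>
#include <stdint.h>

int main(void) {
  uint8_t pkt[128];
  int fd = open("/dev/radio/packet", O_RDWR);    /* hypothetical path */
  if (fd < 0) return 1;
  write(fd, "hello", 5);                 /* queue one outbound message */
  ssize_t n = read(fd, pkt, sizeof pkt); /* dequeue one whole inbound message */
  if (n > 0)
    printf("received %zd-byte packet\n", n);
  close(fd);
  return 0;
}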
Device Files vs Regular Files
● Regular files:
  – Require locking semantics to prevent race conditions between readers and writers
  – Support “status” semantics but not queuing
  – No support for notification, polling only
● Device files:
  – Leverage kernel for serialization: no locking needed
  – Arbitrary control of semantics:
    ● queuing, text/binary, per-client configuration
  – Immediate action, like a function call:
    ● a system call on the device triggers an immediate response from the service, rather than setting a request and waiting for the service to poll
Interacting With em*

SOS
  [Diagram: dynamically loadable modules — e.g. a Tree Routing application with a Light Sensor module — atop the static kernel]
• Hardware Abstraction Layer (HAL)
  – Clock, UART, ADC, SPI, etc.
• Low layer device drivers interface with HAL
  – Timer, serial framer, communications stack, etc.
• Kernel services
  – Dynamic memory management
  – Scheduling
  – Function control blocks
Kernel Services: Memory Management
• Fixed-partition dynamic memory allocation
  – Constant allocation time
  – Low overhead
• Memory management features
  – Guard bytes for run-time memory overflow checks
  – Ownership tracking
  – Garbage collection on completion
• Example:
  pkt = (uint8_t*)ker_malloc(hdr_size + sizeof(SurgeMsg), SURGE_MOD_PID);
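
Putting this together with the post_long call shown under Scheduling below (ownership semantics of the flag are as described in the SOS papers):

uint8_t *pkt = (uint8_t *)ker_malloc(hdr_size + sizeof(SurgeMsg), SURGE_MOD_PID);
if (pkt != NULL) {
  /* ... fill in header and SurgeMsg payload ... */
  post_long(TREE_ROUTING_PID, SURGE_MOD_PID, MSG_SEND_PACKET,
            hdr_size + sizeof(SurgeMsg), pkt, SOS_MSG_DYM_MANAGED);
  /* SOS_MSG_DYM_MANAGED hands ownership of pkt to the receiver, so the
     sender must not free or reuse it afterwards */
}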
Kernel Services: Scheduling
• SOS implements non-preemptive priority scheduling via priority queues
  – Event served when there is no higher priority event
  – Low priority queue for scheduling most events
  – High priority queue for time critical events, e.g., h/w interrupts & sensitive timers
  – Prevents execution in interrupt contexts
• Example:
  post_long(TREE_ROUTING_PID, SURGE_MOD_PID, MSG_SEND_PACKET,
            hdr_size + sizeof(SurgeMsg), (void*)packet, SOS_MSG_DYM_MANAGED);
Modules
● Each module is uniquely identified by its ID or pid
● Has private state
● Represented by a message handler with the prototype:
  int8_t handler(void *private_state, Message *msg)
● Return value follows errno semantics: SOS_OK for success, negative values (e.g. -EINVAL) for failure
Kernel Services: Module Linking
• Orthogonal to the module distribution protocol
• Kernel stores a new module in a free block located in program memory, and critical information about the module in the module table
• Kernel calls the module’s initialization routine, which typically:
  – Publishes functions for other parts of the system to use
  – Subscribes to functions supplied by other modules
  – Sets initial timers and schedules events
• Published and subscribed functions carry a type signature string, e.g.:
  char tmp_string[] = {'C', 'v', 'v', 0};
Module–to–Kernel Communication
  [Diagram: Module A calls into the SOS kernel through a jump table; kernel messages return through a high-priority system message buffer]
• Kernel provides system services and access to hardware
  ker_timer_start(s->pid, 0, TIMER_REPEAT, 500);
  ker_led(LED_YELLOW_OFF);
• Kernel jump table redirects system calls to handlers
  – kernel can be upgraded independent of the modules
• Interrupts & messages from the kernel are dispatched by a high priority message buffer
  – low latency
  – concurrency safe operation
Inter-Module Communication
• Inter-module function calls
  – Kernel stores pointers to functions registered by modules
  – Blocking calls with low latency
  – Type-safe runtime function binding through a publish/subscribe interface
• Inter-module message passing
  – Messages dispatched by a two-level priority scheduler
  – Suited for services with long latency
Synchronous Communication
● Module can register a function for low latency blocking calls (1)
● Modules which need such a function can subscribe to it by getting a pointer to the function pointer (i.e. **func) (2)
● When the service is needed, the module dereferences the function pointer (3)
  [Diagram: Module A and Module B linked through the module function pointer table, steps 1–3]
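
A C sketch of why subscribers hold a pointer into the table (**func) rather than the function pointer itself (structures are illustrative, not the actual SOS tables):

#include <stdint.h>

typedef int8_t (*fn_ptr_t)(uint8_t arg);

static int8_t stub(uint8_t arg) { (void)arg; return -1; } /* system stub */
static fn_ptr_t fn_table[16];    /* module function pointer table */

/* (1) provider registers its function under a function id */
void provide(uint8_t fid, fn_ptr_t f) { fn_table[fid] = f; }

/* (2) subscriber gets a pointer INTO the table, not the function itself */
fn_ptr_t *subscribe(uint8_t fid) { return &fn_table[fid]; }

/* on provider removal, the kernel swaps in the stub; existing
   subscribers are redirected without relinking */
void unprovide(uint8_t fid) { fn_table[fid] = stub; }

/* (3) caller dereferences at call time, so it always sees the current target */
int8_t invoke(fn_ptr_t *handle, uint8_t arg) { return (*handle)(arg); }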
Asynchronous Communication
  [Diagram: Module A posts a message (1, 2) that is queued and delivered to Module B (3); outbound messages pass through the send queue to the network (4, 5)]
• Subscribing to a module’s function
  – Publishing a function includes a type description that is stored in a function control block (FCB) table
  – Subscription attempts include type checks against the corresponding FCB
  – Type changes/removal of published functions result in subscribers being redirected to a system stub handler function specific to that type
  – Updates to functions w/ the same type are assumed to have the same semantics
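
A sketch of the FCB type check (layout illustrative); the prototype string is the same kind of type description as the {'C','v','v',0} string shown earlier:

#include <string.h>
#include <stdint.h>

typedef int8_t (*fn_ptr_t)(void);

struct fcb {                 /* function control block (illustrative) */
  uint8_t  pid, fid;         /* publishing module and function id */
  char     proto[4];         /* type description, e.g. "Cvv" */
  fn_ptr_t fptr;             /* current target, or a typed stub */
};

/* subscription succeeds only if the requested prototype matches */
fn_ptr_t *fcb_subscribe(struct fcb *f, const char *proto) {
  if (strncmp(f->proto, proto, sizeof f->proto) != 0)
    return NULL;             /* type mismatch: subscription refused */
  return &f->fptr;
}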
Module Library
• Some applications are created by combining already written and tested modules
• SOS kernel facilitates loosely coupled modules
  – Passing of memory ownership
  – Efficient function and messaging interfaces
  [Diagram: the Surge application with debugging — Surge, Debug, Photo Sensor, Tree Routing, and Memory modules]
Module Design
• Uses standard C
• Programs created by “wiring” modules together

#include <module.h>

typedef struct {
  uint8_t pid;
  uint8_t led_on;
} app_state;

DECL_MOD_STATE(app_state);
DECL_MOD_ID(BLINK_ID);

int8_t module(void *state, Message *msg) {
  app_state *s = (app_state*)state;
  switch (msg->type) {
  case MSG_INIT: {
    s->pid = msg->did;
    s->led_on = 0;
    ker_timer_start(s->pid, 0, TIMER_REPEAT, 500);
    break;
  }
  case MSG_FINAL: {
    ker_timer_stop(s->pid, 0);
    break;
  }
  case MSG_TIMER_TIMEOUT: {
    if (s->led_on == 1) {
      ker_led(LED_YELLOW_ON);
    } else {
      ker_led(LED_YELLOW_OFF);
    }
    s->led_on++;
    if (s->led_on > 1) s->led_on = 0;
    break;
  }
  default:
    return -EINVAL;
  }
  return SOS_OK;
}
Sensor Manager
  [Diagram: Modules A and B access the MagSensor through the Sensor Manager — periodic access via getData, polled access via a signalled dataReady — with device-specific drivers below on ADC and I2C]
• Enables sharing of sensor data between multiple modules
• Presents a uniform data access API to diverse sensors
• Underlying device specific drivers register with the sensor manager
• Device specific sensor drivers control
  – Calibration
  – Data interpolation
• Sensor drivers are loadable: enables
  – post-deployment configuration of sensors
  – hot-swapping of sensors on a running node
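
A sketch of the registration/dispatch idea in the figure (all names are hypothetical; the real SOS sensor API differs in detail):

#include <stdint.h>

#define MAG_SENSOR_SID 1                 /* hypothetical sensor id */

typedef int8_t (*get_data_fn)(uint8_t sid);
static get_data_fn drivers[8];           /* sensor manager registry */

/* driver side: device-specific getData, registered with the manager */
static int8_t mag_get_data(uint8_t sid) {
  (void)sid;
  /* start an ADC/I2C conversion; the driver signals dataReady later,
     which the manager relays to the requesting module as a message */
  return 0;
}

static void sensor_register(uint8_t sid, get_data_fn fn) { drivers[sid] = fn; }

/* client side: uniform access regardless of the underlying device */
static int8_t sensor_get_data(uint8_t sid) { return drivers[sid](sid); }

static void example(void) {
  sensor_register(MAG_SENSOR_SID, mag_get_data);  /* done by the driver */
  sensor_get_data(MAG_SENSOR_SID);                /* done by a client */
}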
Application Level Performance
  [Graphs: Surge tree formation latency, Surge forwarding delay, Surge packet delivery ratio]

Memory footprint for the base operating system with the ability to distribute and update node programs:

  System               ROM        RAM
  SOS Core             20464 B    1163 B
  Dynamic Memory Pool  -          1536 B
  TinyOS with Deluge   21132 B    597 B
  Mate VM              39746 B    3196 B

CPU active time for the Surge application:

  System    Active Time (in 1 min)   Active Time (%)   Overhead relative to TOS (%)
  TinyOS    3.31 sec                 5.22%             NA
  SOS       3.50 sec                 5.84%             5.70%
  Mate VM   3.68 sec                 6.13%             11.00%
Reconfiguration Performance

Module size and energy profile for installing Surge under SOS:

  Module Name    Code Size (Bytes)
  sample_send    568
  tree_routing   2242
  photo_sensor   372
  Energy (mJ)    2312.68
  Latency (sec)  46.6

Energy cost of a light sensor driver update:

  System    Code Size (Bytes)   Write Cost (mJ/page)   Energy (mJ)
  SOS       1316                0.31                   1.86
  TinyOS    30988               1.34                   164.02
  Mate VM   NA                  NA                     NA

Energy cost of a Surge application update:

  System    Code Size (Bytes)   Write Cost (mJ/page)   Energy (mJ)
  SOS       566                 0.31                   0.93
  TinyOS    31006               1.34                   164.02
  Mate VM   17                  0                      0
Supported platforms
• Atmel ATmega128: 4 KB RAM, 128 KB FLASH; Chipcon CC1000 radio; B-MAC
• Oki ARM: 32 KB RAM, 256 KB FLASH; Chipcon CC2420 radio; IEEE 802.15.4 MAC (NDA required)
Simulation Support
• Source code level network simulation
  – Pthreads simulate hardware concurrency
  – UDP simulates a perfect radio channel
  – Supports user defined topology & heterogeneous software configuration
  – Useful for verifying functional correctness
• Instruction level simulation with Avrora
  – Instruction cycle accurate simulation
  – Simple perfect radio channel
  – Useful for verifying timing information
  – See http://compilers.cs.ucla.edu/avrora/
• EmStar integration under development
Contiki
● One-way dependencies: loaded programs depend on the core, never the reverse
● Registry of names and addresses of all externally visible variables and functions of core modules and run-time libraries
● Offers API to the linker to search and update the registry (a sketch follows below)
● Created when the Contiki core binary image is compiled
  – multiple pass process
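
A sketch of such a registry (Contiki’s actual tables differ in detail):

#include <string.h>
#include <stddef.h>

struct symbol { const char *name; void *addr; };

/* generated when the core binary image is compiled */
static const struct symbol symbols[] = {
  /* { "process_start", &process_start }, ... */
  { NULL, NULL }
};

/* linker-facing lookup used while relocating a loaded module */
void *symtab_lookup(const char *name) {
  const struct symbol *s;
  for (s = symbols; s->name != NULL; s++)
    if (strcmp(s->name, name) == 0)
      return s->addr;
  return NULL;   /* unresolved reference: loading fails */
}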
Linking and relocating a module
2. Allocate memory for code & data in flash ROM and RAM
• Works well
  – Program typically much smaller than the entire system image (1–10%)
  – Much quicker to transfer over the radio
  – Reprogramming takes seconds
  Event-driven                   Multi-threaded
  - No wait() statements         + wait() statements
  - No preemption                + Preemption possible
  - State machines               + Sequential code flow
  + Compact code                 - Larger code overhead
  + Locking less of a problem    - Locking problematic
  + Memory efficient             - Larger memory requirements

• Kernel is event-based
  – Most programs run directly on top of the kernel
• Preemption possible
  – Responsive system with running computations (see the sketch below)
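
The two columns above correspond to code shaped roughly like this C sketch (event names and helper functions are illustrative):

#define EV_START   1
#define EV_IO_DONE 2
void start_io(void);    /* provided elsewhere */
void use_result(void);
void wait(int ev);      /* blocks the calling thread */

/* event-driven: no wait(), control flow as an explicit state machine */
enum { IDLE, BUSY } state = IDLE;

void on_event(int ev) {
  switch (state) {
  case IDLE: if (ev == EV_START)   { start_io();   state = BUSY; } break;
  case BUSY: if (ev == EV_IO_DONE) { use_result(); state = IDLE; } break;
  }
}

/* multi-threaded: sequential flow, blocking wait(), own stack */
void thread_main(void) {
  for (;;) {
    wait(EV_START);
    start_io();
    wait(EV_IO_DONE);
    use_result();
  }
}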
Responsiveness
  [Diagram: a long computation runs in a thread while the kernel keeps dispatching events — threads are implemented atop the event-based kernel]
Implementing preemptive threads 1
  [Diagram: a timer IRQ interrupts the thread; the IRQ handler switches the stack, sets up an event handler, runs it, then switches the stack back]
Implementing preemptive threads 2
  [Diagram: the same stack-switch path, entered voluntarily via yield() instead of a timer IRQ]
Memory management

Why VM?
● Large number (100’s to 1000’s) of nodes in a coverage area
● Some nodes will fail during operation
● Change of function during the mission
Related Work
● PicoJava
  – assumes Java bytecode execution hardware
● K Virtual Machine
  – requires 160–512 KB of memory
● XML
  – too complex and not enough RAM
● Scylla
  – VM for mobile embedded systems
Mate features
● Small (16 KB instruction memory, 1 KB RAM)
● Concise (limited memory & bandwidth)
● Resilient (memory protection)
● Efficient (bandwidth)
● Tailorable (user defined instructions)
Mate in a Nutshell
● Stack architecture
● Three concurrent execution contexts
● Execution triggered by predefined events
● Tiny code capsules; self-propagate into network
● Built-in communication and sensing instructions
When is Mate Preferable?
● For small numbers of executions
● GDI example: the bytecode version is preferable for a program running less than 5 days
● In energy constrained domains
● Use a Mate capsule as a general RPC engine
Mate Architecture
  [Diagram: code capsules 0–3, heap access via gets/sets, code and return stacks; execution contexts triggered by Clock, Receive, and Send]
• Triggering events: clock timer, message reception, message send
• Simplifies programming
• Less prone to bugs
Instruction Set
● Example: Display Counter to LED (capsule shown below)
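
The capsule behind that example, as presented in the Mate paper, is nine instructions run on each clock event:

gets    # Push heap variable on stack
pushc 1 # Push 1 on stack
add     # Pop twice, add, push result
copy    # Copy top of stack
sets    # Pop, set heap
pushc 7 # Push 0x0007 onto stack
and     # Take bottom 3 bits of value
putled  # Pop, set LEDs to bit pattern
halt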
Code Capsule
● One capsule = 24 instructions
● Fits into a single TOS packet
● Atomic reception
● Capsule transmission: forw
● Forwarding other installed capsules: forwo (use within clock capsule)
● Mate checks the version number on reception of a capsule
● Mate runs on mica with 7286 bytes code, 603 bytes RAM
Network Infection Rate
● 42 node network in a 3 by 14 grid
● Radio transmission: 3 hop network
● Cell size: 15 to 30 motes
● Every mote runs its clock capsule every 20 seconds
● Self-forwarding clock capsule
Bytecodes vs. Native Code
● Mate is a general architecture; users can build customized VMs
● Users can select bytecodes and execution events
● Issues:
  – Flexibility vs. efficiency
    ● customizing increases efficiency, at the cost of adapting to changing requirements
  – Java’s solution: a general computational VM + class libraries
  – Mate’s approach: a more customizable solution -> let the user decide
How to …
● Select a language
  -> defines VM bytecodes
● Select execution events
  -> execution context, code image
● Select primitives
  -> beyond language functionality
Constructing a Mate VM
  [Diagram: the VM builder generates a set of files, which are used to build the TOS application and to configure the script program]
Compiling and Running a Program
  [Diagram: write programs in the Scripter → compile to VM-specific binary code → send it over the network to a VM]
Bombilla Architecture
● Capsule Injector: programming environment
● Synchronization: 16-word shared heap; locking scheme
● Provides synchronization model: handler, invocations, …

Discussion
● Compared to the traditional VM concept, is Mate platform independent? Can we have it run on heterogeneous hardware?
● Security issues: how can we trust a received capsule? Is there a way to prevent a version-number race with an adversary?
● In viral programming, is there a way to forward messages other than flooding? After a certain number of nodes are infected by a new version capsule, can we forward based on need?
● Bombilla has some sophisticated OS features. What is the size of the program? Does a sensor node need all those features?
.NET Micro Framework (MF) Architecture
• .NET MF is a bootable runtime environment tailored for embedded development
• MF services include:
  – Boot Code
  – Code Execution
  – Thread Management
  – Memory Management
  – Hardware I/O
.NET MF Hardware Abstraction Layer (HAL)
● Provides an interface to access hardware and peripherals
  – Relevant only for system, not application developers
● Does not require an operating system
  – Can run on top of one if available
● Interfaces include:
  – Clock Management
  – Core CPU
  – Communications
  – External Bus Interface Unit (EBIU)
  – …
.NET MF Platform Abstraction Layer (PAL)
● Provides hardware independent abstractions
  – Used by application developers to access system resources
  – Application calls to the PAL are managed by the Common Language Runtime (CLR)
  – The PAL in turn calls HAL drivers to access hardware
● PAL interfaces include:
  – Time
  – PAL Memory Management
  – Input/Output
  – Events
  – Debugging
Threading Model
● User applications may have multiple threads
● CLR has a single thread of execution at the system level
  – Explicitly yields execution periodically to interrupt service routine continuations
Timer Module
● MF provides support for accessing timers from C#
● Enables execution of a user specified method
● Part of the System.Threading namespace
ADC Extension to the HAL
● Extended the MF HAL to support ADC APIs
  – High-precision, low-latency sampling using the hardware clock
  – Critical for many signal processing applications
● Supported API functions include
  – Initialize: initialize ADC peripheral registers and the clocks
  – UnInitialize: reset ADC peripheral registers and uninitialize clocks
  – ConfigureADC: select ADC parameters (mode, input channels, etc.)
  – StartSampling: starts conversion on selected ADC channel
  – GetSamplingStatus: whether in progress or complete
  – GetData: returns data stored in ADC data register
Radio Extension to the HAL
● Extended the MF HAL to support radio APIs
● Supported API functions include
  – On: powers on radio, configures registers, SPI bus, initializes clocks
  – Off: powers off radio, resets registers, clocks and SPI bus
  – Configure: sets radio options for 802.15.4 radio
  – BuildFrame: constructs data frame with specified parameters (destination address, data, ack request)
  – SendPacket: sends data frame to specified address
  – ReceivePacket: receives packet from a specified source address
MAC Extension to the PAL
● Built-in, efficient wireless communication protocol