VxWorks®
Kernel Programmer's Guide, 6.6
Copyright © 2007 Wind River Systems, Inc.
All rights reserved. No part of this publication may be reproduced or transmitted in any
form or by any means without the prior written permission of Wind River Systems, Inc.
Wind River, Tornado, and VxWorks are registered trademarks of Wind River Systems, Inc.
The Wind River logo is a trademark of Wind River Systems, Inc. Any third-party
trademarks referenced are the property of their respective owners. For further information
regarding Wind River trademarks, please see:
http://www.windriver.com/company/terms/trademark.html
This product may include software licensed to Wind River by third parties. Relevant
notices (if any) are provided in your product installation at the following location:
installDir/product_name/3rd_party_licensor_notice.pdf.
Wind River may refer to third-party documentation by listing publications or providing
links to third-party Web sites for informational purposes. Wind River accepts no
responsibility for the information provided in such third-party documentation.
Corporate Headquarters
Wind River Systems, Inc.
500 Wind River Way
Alameda, CA 94501-1153
U.S.A.
For additional contact information, please visit the Wind River URL:
http://www.windriver.com
For information on how to contact Customer Support, please visit the following URL:
http://www.windriver.com/support
13 Nov 07
Part #: DOC-16075-ND-00
Contents
2 Kernel .................................................................................................... 7
3.11 Booting From the Host File System Using TSFS ............................................. 155
7.9 Differences Between VxWorks and Host System I/O ..................................... 424
12.4.5 Synchronizing Host and Kernel Modules List and Symbol Table .... 625
12.4.6 Creating and Using User Symbol Tables .............................................. 625
PART I
Core Technologies
1 Overview ............................................................. 3
2 Kernel .................................................................. 7
1
Overview
1.1 Introduction 3
1.2 Related Documentation Resources 4
1.3 VxWorks Configuration and Build 5
1.1 Introduction
This guide describes the VxWorks operating system, and how to use VxWorks
facilities in the development of real-time systems and applications. The first part,
Core Technologies, covers the following topics:
■ kernel facilities, kernel-based applications, and kernel customization
■ boot loader
■ multitasking facilities
■ POSIX facilities
■ memory management
■ I/O system
■ local file systems
■ Network File System (NFS)
■ flash file system support with TrueFFS
■ error detection and reporting
■ target tools, such as the kernel shell, kernel object-module loader, and target symbol table
■ C++ development
The second part of this guide describes VxWorks multiprocessor technologies. For
an introduction to this material, see 14. Overview of Multiprocessing Technologies.
NOTE: This book provides information about facilities available in the VxWorks
kernel. For information about facilities available to real-time processes, see the
VxWorks Application Programmer’s Guide.
1.3 VxWorks Configuration and Build
This document describes VxWorks features; it does not go into detail about the
mechanisms by which VxWorks-based systems and applications are configured
and built. The tools and procedures used for configuration and build are described
in the Wind River Workbench User’s Guide and the VxWorks Command-Line Tools
User’s Guide.
2
Kernel
2.1 Introduction 7
2.2 Kernel Architecture 9
2.3 System Startup 13
2.4 VxWorks Configuration 14
2.5 Power Management 40
2.6 Kernel Applications 51
2.7 Custom Kernel Libraries 67
2.8 Custom VxWorks Components and CDFs 67
2.9 Custom System Calls 100
2.10 Custom Scheduler 118
2.1 Introduction
This chapter provides an overview of the VxWorks kernel architecture and
detailed discussions of those features of interest to developers who work directly
with kernel facilities. In general, kernel developers can modify and extend the
VxWorks kernel in the following ways:
■ By reconfiguring and rebuilding VxWorks with various standard components to suit the needs of their application development environment, as well as the needs of their deployed products.
■ By creating kernel applications that can either be interactively downloaded and run on a VxWorks target system, or configured to execute at boot time and linked with the operating system image.
■ By creating custom kernel libraries that can be built into the operating system.
■ By creating custom VxWorks components—such as file systems or networking
protocols—that can be configured into VxWorks using the operating system
configuration utilities.
■ By extending the kernel system-call interface with custom APIs that should be
accessible to applications running in user space (as real-time process—RTP—
applications).
■ By creating a custom scheduler for use in place of the traditional VxWorks
scheduler or the POSIX thread scheduler.
See 2.4 VxWorks Configuration, p.14, as well as other chapters throughout this book,
for information about VxWorks facilities and their use. Chapter 4. Multitasking, for
example, includes discussion of features that are available only in the kernel (such
as ISRs and watchdog timers).
Section 2.6 Kernel Applications, p.51 provides information about creating kernel
applications. For information about RTP applications, see the VxWorks Application
Programmer’s Guide: Applications and Processes.
Instructions for creating custom kernel libraries are provided in the VxWorks
Command-Line Tools User’s Guide. Only brief mention of this topic is given in this
book in 2.7 Custom Kernel Libraries, p.67.
See 2.8 Custom VxWorks Components and CDFs, p.67, 2.9 Custom System Calls, p.100,
and 2.10 Custom Scheduler, p.118 for information about extending the operating
system.
Developers can also write or port drivers and BSPs for VxWorks. These topics are
covered by other books in the VxWorks documentation set; see the VxWorks Device
Driver’s Guide and the VxWorks BSP Developer’s Guide.
2.2 Kernel Architecture
1. The VxWorks 5.x optional product VxVMI provides write protection of text segments and
the VxWorks exception vector table, as well as an architecture-independent interface to the
CPU’s memory management unit (MMU). In addition, specialized variants of VxWorks
such as VxWorks AE and VxWorks AE653 provide memory protection, but in a manner
different from that provided in the current release.
■ Kernel-only features. Features such as watchdog timers, ISRs, and VxMP are available only in the kernel. In some cases, however, there are alternatives for process-based applications (POSIX timers, for example).
■ Hardware access. If the application requires direct access to hardware, it can only do so from within the kernel.
VxWorks is flexible in terms of both the modularity of its features and its
extensibility. The operating system can be configured as a minimal kernel that
provides a task scheduler, interrupt handling, dynamic memory management,
and little else. Or, it can be configured with components for executing applications
as processes, file systems, networking, error detection and reporting, and so on.
The operating system can also be extended by adding custom components or
modules to the kernel itself (for example, for new file systems, networking
protocols, or drivers). The system call interface can then be extended by adding
custom APIs, which makes them available to process-based applications.
VxWorks provides a core set of facilities that are commonly provided by the kernel
of a multitasking operating system:
■ Startup facilities for system initialization (see 2.3 System Startup, p.13).
■ Clocks and timers (see 4.19 Watchdog Timers, p.240 and 5.7 POSIX
Asynchronous I/O, p.265).
■ Exception and interrupt handling (see Exception Task, p.12, 4.6 Task Exception
Handling, p.187, 4.18 Signals, p.227, and 4.20 Interrupt Service Routines, p.242).
■ Task management (see 4.2 Tasks and Multitasking, p.160).
■ Process management (see the VxWorks Application Programmer’s Guide:
Applications and Processes).
■ A system call interface for applications executing in processes (see VxWorks Application Programmer's Guide: Applications and Processes and 2.9 Custom System Calls, p.100).
■ Intertask and interprocess communication (see 4.8 Intertask and Interprocess Communication, p.194).
■ Signals (see 4.18 Signals, p.227).
■ Resource reclamation (see VxWorks Application Programmer's Guide: Applications and Processes).
■ Memory management (see 6. Memory Management).
■ I/O system (see 7. I/O System).
■ File systems (see 8. Local File Systems).
■ NFS (see 9. Network File System: NFS).
In addition, the VxWorks kernel also provides:
■ The WDB target agent, which is required for using the host development tools
with VxWorks. It carries out requests transmitted from the tools (by way of the
target server) and replies with the results (see 12.6 WDB Target Agent, p.628).
■ Facilities for error detection and reporting (see 11. Error Detection and
Reporting).
■ A target-based shell for direct user interaction, with a command interpreter
and a C-language interpreter (see 12.2 Kernel Shell, p.579).
■ A specialized facility for multi-processor intertask communication through
shared memory (see 16. Shared-Memory Objects: VxMP).
For information about basic networking facilities, see the Wind River Network Stack
for VxWorks 6 Programmer’s Guide.
Root Task
The root task, tRootTask, is the first task executed by the kernel. The entry point of
the root task is usrRoot( ), which initializes most VxWorks facilities. It spawns such
tasks as the logging task, the exception task, the network task, and the tRlogind
daemon. Normally, the root task terminates and is deleted after all initialization
has completed. For more information about tRootTask and usrRoot( ), see the
VxWorks BSP Developer's Guide.
Logging Task
The log task, tLogTask, is used by VxWorks modules to log system messages
without having to perform I/O in the current task context. For more information,
see 7.7 Asynchronous Input/Output, p.383 and the API reference entry for logLib.
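The deferred-I/O pattern behind tLogTask can be sketched in portable C. This is an analogy, not logLib's actual implementation; the names log_post( ) and log_drain( ) are invented for the example. In VxWorks, the drain loop would run in the dedicated tLogTask context, so callers never block on the output device.

```c
#include <stdio.h>
#include <string.h>

#define LOG_QUEUE_LEN 16
#define LOG_MSG_MAX   64

static char queue[LOG_QUEUE_LEN][LOG_MSG_MAX];
static int  q_head, q_tail, q_count;

/* Called from time-critical code: queue the message, perform no I/O. */
int log_post(const char *msg)
{
    if (q_count == LOG_QUEUE_LEN)
        return -1;                          /* queue full: message dropped */
    strncpy(queue[q_tail], msg, LOG_MSG_MAX - 1);
    queue[q_tail][LOG_MSG_MAX - 1] = '\0';
    q_tail = (q_tail + 1) % LOG_QUEUE_LEN;
    q_count++;
    return 0;
}

/* Body of the logging task: drain the queue and do the slow I/O here. */
int log_drain(FILE *out)
{
    int n = 0;
    while (q_count > 0) {
        fprintf(out, "log: %s\n", queue[q_head]);
        q_head = (q_head + 1) % LOG_QUEUE_LEN;
        q_count--;
        n++;
    }
    return n;
}
```

The key property is that log_post( ) touches only memory; all device I/O is deferred to the drain loop running in the logger's own context.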
Exception Task
The exception task, tExcTask, supports the VxWorks exception handling package
by performing functions that cannot occur at interrupt level. It is also used for
actions that cannot be performed in the current task’s context, such as task suicide.
It must have the highest priority in the system. Do not suspend, delete, or change
the priority of this task. For more information, see the reference entry for excLib.
Network Task
The tNet0 task is the default network daemon. It handles the task-level (as
opposed to interrupt-level) processing required by the VxWorks network. For
systems that have been configured with more than one network daemon, the task
names are tNetn. The task is primarily used by network drivers. Configure
VxWorks with the INCLUDE_NET_DAEMON component to spawn the tNet0 task.
For more information on tNet0, see the Wind River Network Stack for VxWorks 6
Programmer’s Guide.
WDB Target Agent Task
The WDB target agent task, tWdbTask, is created if the target agent is set to run in
task mode. It services requests from the host tools (by way of the target server); for
information about this server, see the host development environment
documentation. Configure VxWorks with the INCLUDE_WDB component to
include the target agent. See 12.6 WDB Target Agent, p.628 for more information
about WDB.
The following VxWorks system tasks are created if their components are included
in the operating system configuration.
tShellnum
If you have included the kernel shell in the VxWorks configuration, it is
spawned as a task. Any routine or task that is invoked from the kernel shell,
rather than spawned, runs in the tShellnum context.
The task name for a shell on the console is tShell0. The kernel shell is
re-entrant, and more than one shell task can run at a time (hence the number
suffix). In addition, if a user logs in remotely (using rlogin or telnet) to a
VxWorks target, the name reflects that fact as well. For example, tShellRem1.
For more information, see 12.2 Kernel Shell, p.579. Configure VxWorks with
the INCLUDE_SHELL component to include the kernel shell.
tRlogind
If you have included the kernel shell and the rlogin facility in the VxWorks
configuration, this daemon allows remote users to log in to VxWorks. It
accepts a remote login request from another VxWorks or host system and
spawns tRlogInTask_hexNumber and tRlogOutTask_hexNumber (for
example, tRlogInTask_5c4d0). These tasks exist as long as the remote user is
logged on. Configure VxWorks with the INCLUDE_RLOGIN component to
include the rlogin facility.
tTelnetd
If you have included the kernel shell and the telnet facility in the VxWorks
configuration, this daemon allows remote users to log in to VxWorks with
telnet. It accepts a remote login request from another VxWorks or host system
and spawns the input task tTelnetInTask_hexNumber and output task
tTelnetOutTask_hexNumber. These tasks exist as long as the remote user is
logged on. Configure VxWorks with the INCLUDE_TELNET component to
include the telnet facility.
tPortmapd
If you have included the RPC facility in the VxWorks configuration, this
daemon is an RPC server that acts as a central registrar for RPC services running
on the same machine. RPC clients query the tPortmapd daemon to find out
how to contact the various servers. Configure VxWorks with the
INCLUDE_RPC component to include the portmap facility.
tJobTask
The tJobTask executes jobs—that is, function calls—on the behalf of tasks.
(The tExcTask task executes jobs on the behalf of ISRs.) It runs at priority 0
while waiting for a request, and dynamically adjusts its priority to match that
of the task that requests job execution. Configure VxWorks with the
INCLUDE_JOB_TASK component to include the job facility. For more
information, see 4.4.6 Task Deletion and Deletion Safety, p.180.
case during development—or stored in ROM with the boot loader, as is often the
case with production units. The VxWorks boot loader is actually a scaled-down
version of VxWorks itself, whose sole purpose is to load a system image and
initiate its execution. (See 3. Boot Loader.) For more information about system
startup, see the VxWorks BSP Developer's Guide: Overview of a BSP.
2.4 VxWorks Configuration
process-based applications (see 2.9 Custom System Calls, p.100), create your own
scheduler (see 2.10 Custom Scheduler, p.118), and so on.
Finally, for production systems, you will want to reconfigure VxWorks with only
those components needed for deployed operation, and to build it as the
appropriate type of system image (see 2.4.1 VxWorks Image Types, p.15). For
production systems you will likely want to remove components required for host
development support, such as the WDB target agent and debugging components
(INCLUDE_WDB and INCLUDE_DEBUG), as well as to remove any other
operating system components not required to support your application. Other
considerations include reducing the memory requirements of the system,
speeding up boot time, and addressing security issues.
For information about using the Workbench and command-line tools to configure
and build VxWorks, see the Wind River Workbench User’s Guide and the VxWorks
Command-Line Tools User’s Guide.
has a slower startup time, but it has a faster execution time than
vxWorks_romResident.
vxWorks_romCompress
A VxWorks image that is stored in ROM on the target. It is almost entirely
compressed, but has a small uncompressed portion that is executed by the
processor immediately after power up or reboot. This small portion is
responsible for decompressing the compressed section of the ROM image into
RAM and for making the processor switch execution to RAM. The
compression of the image allows it to be much smaller than other images.
However, the decompression operation increases the boot time. It takes longer
to boot than vxWorks_rom but takes up less space than other ROM-based
images. The run-time execution is the same speed as vxWorks_rom.
vxWorks_romResident
A VxWorks image that is stored in ROM on the target. It copies only the data
segment to RAM on startup; the text segment stays in ROM. Thus it is
described as being ROM-resident. It has the fastest startup time and uses the
smallest amount of RAM, but it runs slower than the other image types
because the ROM access required for fetching instructions is slower than
fetching them from RAM. It is obviously useful for systems with constrained
memory resources.
The default VxWorks image files can be found in sub-directories under
installDir/vxworks-6.x/target/proj/projName. For example:
/home/moi/myInstallDir/vxworks-6.x/target/proj/wrSbc8260_diab/default_rom/vxWorks_rom
information, see 12.6 WDB Target Agent, p.628. Also note that before you use the
host development tools such as the shell and debugger, you must start a target
server that is configured for the same mode of communication.
A VxWorks component is the basic unit of functionality with which VxWorks can
be configured. While some components are autonomous, others may have
dependencies on other components, which must be included in the configuration
of the operating system for run-time operation. The kernel shell is an example of a
component with many dependencies. The symbol table is an example of a
component upon which other components depend (the kernel shell and module
loader; for more information, see 12. Target Tools).
The names, descriptions, and configurable features of VxWorks can be displayed
with the GUI configuration facilities in Workbench. Workbench provides facilities
for configuring VxWorks with selected components, setting component
parameters, as well as automated mechanisms for determining dependencies
between components during the configuration and build process.
The command-line operating system configuration tool—vxprj—uses the naming
convention that originated with configuration macros to identify individual
operating system components. The convention identifies components with names
that begin with INCLUDE. For example, INCLUDE_MSG_Q is the message queue
component. In addition to configuration facilities, the vxprj tool provides
associated features for listing the components included in a project, and so on.
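As a sketch of the vxprj workflow (command forms per the VxWorks Command-Line Tools User's Guide; exact arguments vary by installation, and the BSP, toolchain, and project names here are examples only, not verified against any particular release):

```shell
# Create a kernel project from a BSP using the Diab toolchain
# (BSP name wrSbc8260 and project name myProj are hypothetical).
vxprj create wrSbc8260 diab myProj.wpj

# Add the message queue component, then list the project's components.
vxprj component add myProj.wpj INCLUDE_MSG_Q
vxprj component list myProj.wpj

# Build the VxWorks image for the project.
vxprj build myProj.wpj
```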
For information about the Workbench and command-line facilities used for
configuring and building VxWorks, see the Wind River Workbench User’s Guide and
the VxWorks Command-Line Tools User’s Guide.
Textual configuration files identify components with macro names that begin with
INCLUDE, as well as with user-friendly descriptions. (For information about
configuration files, see 2.8.4 Component Description Language, p.79.)
In this book, components are identified by their macro name. The GUI
configuration facilities provide a search facility for finding individual components
in the GUI component tree based on the macro name.
Some of the commonly used VxWorks components are described in Table 2-1.
Names that end in XXX represent families of components, in which the XXX is
replaced by a suffix for individual component names. For example,
INCLUDE_CPLUS_XXX refers to a family of components that includes
INCLUDE_CPLUS_MIN and others.
Note that Table 2-1 does not include all components provided in the default
configuration of VxWorks, and that the VxWorks simulator provides more
components by default.
By default, VxWorks includes both libc and GNU libgcc, which are provided with
the INCLUDE_ALL_INTRINSICS component. If you wish to exclude one or the
other library, you can do so by reconfiguring the kernel with either
INCLUDE_DIAB_INTRINSICS or INCLUDE_GNU_INTRINSICS, respectively. Note
that these libraries are available in the kernel to enable dynamically downloading
and running kernel object modules.
Note that the component names for VxBus drivers do not have the leading
INCLUDE_ element (for example, DRV_SIO_NS16550), whereas the names for
non-VxBus drivers do (for example, INCLUDE_ELT_3C509_END).
For information about the VxBus facility, see the VxWorks Device Driver Developer’s
Guide.
VxWorks can be scaled below the size of the default operating system using special
configuration profiles (sometimes referred to as source-scalable or scalable profiles).
They provide three levels of operating system functionality, ranging from a
minimal kernel to a basic operating system. These profiles do not support
networking facilities. The small configuration profiles are as follows:
■ PROFILE_MINIMAL_KERNEL—The minimal kernel profile provides the
lowest level of services at which a VxWorks system can operate. It consists of
the microkernel, and basic CPU and BSP support. This profile creates a very
small VxWorks system that can support multitasking and interrupt
management. It does not provide support for dynamic memory allocation; all
objects must be statically instantiated. For more information see Minimal
Kernel Profile, p.28.
■ PROFILE_BASIC_KERNEL—The basic kernel profile builds on the minimal
kernel profile, adding support for message queues, task hooks, dynamic
memory allocation and de-allocation, and a basic I/O system. For more
information, see Basic Kernel Profile, p.33.
■ PROFILE_BASIC_OS—The basic OS profile provides a small operating system.
It supports a full I/O system, file descriptors, and related ANSI routines. It
also supports task and environment variables, signals, pipes, coprocessor
management, and the ROMFS file system. This profile is close to a VxWorks
5.5 configuration, but without the network stack and debugging assistance
tools. It can also be viewed as a lightweight version of the default VxWorks 6.x
configuration, but without memory protection, the network stack, System
Viewer instrumentation, and debugging assistance support (the WDB target
agent). For more information, see Basic OS Profile, p.35.
Each of the small VxWorks profiles produces an operating system that is smaller
than the current default VxWorks 6.x configuration. The smallest configuration
can build an image of 100 KB or less. The profiles are not monolithic. Each can be
used as a base upon which additional functionality can be added with other
VxWorks components, using standard configuration facilities (Workbench and
vxprj). Custom facilities and applications can also be added, as with VxWorks
configurations based on components and other profiles. And, additional
measures—such as static instantiation of objects—can be used to keep the system
as lean and efficient as possible.
Figure 2-1 illustrates the relationship between the three profiles and the
functionality they provide.
NOTE: The small VxWorks profiles are built from source code, so you must install
VxWorks source to use them. For this release they can only be built with the Wind
River compiler.
NOTE: For this release, the small VxWorks profiles are not available for all BSPs.
They are provided for wrSbcPowerQuiccII (for PowerPC) and integrator1136jfs
(for ARM).
[Figure 2-1: Small VxWorks Profiles. The minimal kernel provides interrupts, multitasking, the system clock, watchdogs, simple drivers, semaphores, cache support, exception handling, ANSI ctype and string routines, events, and static allocation of system resources. The basic kernel and basic OS profiles layer additional functionality on top of it.]
The small VxWorks profiles are built from source code, so you must install
VxWorks source to use them. You must also create a project using the appropriate
configuration profile and select the option for building from source. For the vxprj
command-line tool, use the -source option; for Workbench, select the source build
option during project creation.
Each small VxWorks profile has a predefined set of components that belong to it.
If you choose to change the default configuration of a profile, you should be aware
of the following guidelines and behavior:
■ If components are not needed, they can be removed.
■ If a component that is used in a higher-level profile is added, the profile is
automatically elevated to the next level that uses the component. For example,
if you add a message queue component to the PROFILE_MINIMAL_KERNEL
profile the project is automatically expanded to the PROFILE_BASIC_KERNEL
profile, with all the components provided by that profile.
■ If you add components that are not intended to be used with the (source)
small profiles—such as networking components—the project defaults to using
the standard binary components. In effect, the advantage of using a small
VxWorks profile is lost.
Systems produced with small VxWorks profiles can be further optimized when
code is developed using the following methods:
■ Static instantiation of kernel objects—which is required for PROFILE_MINIMAL_KERNEL applications—can be useful with the other profiles as well. For information in this regard, see 2.6.4 Static Instantiation of Kernel Objects, p.56.
27
VxWorks
Kernel Programmer's Guide, 6.6
■ Using the VX_GLOBAL_NO_STACK_FILL configuration parameter disables stack filling on all tasks in the system (for information about the parameter, see Filling Task Stacks, p.175).
related routines are not supported). Both system and application components are
expected to declare storage for all objects at compile time (that is, statically). The
absence of dynamic allocation in this profile also implies that the kernel cannot
dynamically instantiate objects like tasks, semaphores, and watchdogs. APIs such
as taskSpawn( ), taskDelete( ), semXCreate( ), semDelete( ), and so on, are not
available. However, the same kinds of objects can be instantiated statically.
Unsupported Facilities
The minimal kernel profile is a very small, limited environment that is suitable
only for small, fixed-function systems. Consequently, it is also a very limited
programming environment. Significant capabilities that would otherwise be
present in more feature-rich configurations of VxWorks are absent from a minimal
kernel. These features are as follows:
■ Dynamic memory allocation and de-allocation, and the ability to destroy or
terminate kernel objects.
■ Memory protection (that is, detection of null pointer and other invalid
memory accesses).
■ Support for task hooks (that is, taskHookLib).
■ Floating point and other coprocessor support.
■ I/O system.
■ Signals.
■ Processes (RTPs).
■ ISR Objects (that is, isrLib).
■ WDB target agent.
■ Networking.
■ C++ support.
■ System Viewer logging.
Since there is no I/O system in this profile, there is no support for traditional
device access APIs like open( ), close( ), read( ), write( ), and so on. Device drivers
written for such systems must be standalone programs that manage devices
directly. DMA-safe buffers (if needed) must be allocated at compile time. Since
there is no support for malloc( ) or free( ), there is correspondingly no support for
cacheDmaMalloc( ) or cacheDmaFree( ) either.
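The compile-time buffer allocation described above can be sketched in portable C. The 64-byte alignment and the helper name are assumptions for illustration only; on a real target you would match the CPU's cache-line size and follow the BSP's conventions for DMA-safe memory.

```c
#include <stdint.h>

#define DMA_BUF_SIZE 512
#define DMA_ALIGN    64    /* assumed cache-line size for this example */

/* With no malloc( ) or cacheDmaMalloc( ), the buffer is reserved at
 * compile time, aligned for a (hypothetical) DMA engine. */
static uint8_t dma_buf[DMA_BUF_SIZE] __attribute__((aligned(DMA_ALIGN)));

/* Hand the statically reserved buffer to a driver. */
uint8_t *dma_buf_get(int *len)
{
    *len = DMA_BUF_SIZE;
    return dma_buf;
}
```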
outputting only integer values and strings. The supported formats are %d, %s, %c,
%x, %X, %u, and %o only. Floating point, 64-bit, or vector type formats are not
supported with this profile.
Because there is no I/O system in the minimal kernel profile, there are no file
descriptors, and the assumption that printf( ) output is sent to file descriptor 1 is
not true for this profile. The printf( ) routine works for the formats described
above, but its output is sent to the console device through a dedicated function.
Do not attempt to make use of file descriptors, standard output, or standard error
with this profile, because they do not operate in the standard manner.
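A sketch of formatting within the minimal kernel's constraints (the function name and message layout are invented for the example; snprintf( ) itself stands in for whatever formatting routine the profile provides):

```c
#include <stdio.h>
#include <stddef.h>

/* Build a status string using only the conversions the minimal kernel's
 * printf( ) supports: %d, %s, %c, %x, %X, %u, %o.  Floating point,
 * 64-bit, and vector formats are unavailable in that profile. */
int format_status(char *buf, size_t n, const char *dev, int unit, unsigned flags)
{
    return snprintf(buf, n, "%s%d: flags=0x%x (%u, %o)",
                    dev, unit, flags, flags, flags);
}
```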
The components and libraries that make up the VxWorks minimal kernel profile
are listed in Table 2-2.
Table 2-2 Minimal Kernel Profile Components
Basic Kernel Profile
The level above the minimal kernel is provided by the basic kernel profile
(PROFILE_BASIC_KERNEL). The basic kernel profile produces small VxWorks
systems that build on the minimal kernel to provide support for moderately
complex applications. Systems based on the basic kernel profile are still not much
more than a kernel. But in addition to what a minimal kernel system provides, the
basic kernel profile offers support for the following facilities:
■ Basic I/O system
■ Inter-task communication using message queues.
■ Support for task hooks.
■ Memory allocation and free (using memPartLib).
■ Ability to dynamically create and delete kernel objects such as tasks,
semaphores, watchdogs and message queues (enabled by memPartLib).
■ Support for ANSI string routine strdup( ), which relies on malloc( ).
The most notable additions to this profile are support for basic I/O facilities,
support for message queues and task hooks, and support for memory allocation
and de-allocation. This allows applications based on this profile to be more
dynamic and feature-rich than the minimal kernel. What this profile provides,
however, is still a kernel and not an operating system. It has a full complement of
intertask communications mechanisms and other kernel features, but does not
have operating system capabilities such as memory protection, file system
support, or higher-level constructs such as pipes and so on.
Like the minimal kernel profile, there is no I/O system present in the basic kernel
profile. Hence device drivers for such systems must be standalone programs,
managing devices directly. Since malloc( ) and free( ) are supported,
cacheDmaMalloc( ) and cacheDmaFree( ) are available starting with this profile.
The same limitations on formatted output apply to the basic kernel profile as to
the minimal kernel profile. See Formatted Character String Output, p.29.
In addition to the components and libraries provided by the minimal kernel profile
(listed in Table 2-2), the basic kernel profile provides those listed in Table 2-3.
Table 2-3 Basic Kernel Profile Components—in Addition to Minimal Kernel Profile
Basic OS Profile
The basic OS profile (PROFILE_BASIC_OS) builds upon the basic kernel profile to
offer a relatively simple real-time operating system. It does not, for example,
provide support for networking or real-time processes (RTPs). This configuration
is similar to a VxWorks 5.5 configuration, but without the network stack and
debugging assistance tools (the WDB target agent). The profile provides an
operating system instead of simply a kernel. The new capabilities added in this
profile are the following:
■ Full I/O system, which includes file system and POSIX support.
■ Standard I/O file descriptors and associated API support.
■ APIs for directory and path manipulations, and disk utilities.
■ Support for select( ).
■ TTY and pipe driver support.
■ Support for logging (logLib).
■ Support for task and environment variables (envLib, taskVarLib).
■ Support for coprocessor management (coprocLib) and floating point.
■ Full-featured memory partition manager (memLib).
■ Full ANSI library support. Adds support for assert( ), setjmp( ) and longjmp( ), stdio, stdlib, and time library routines.
Device drivers for the basic OS can now use additional I/O system features and associated capabilities such as select( ). File-descriptor-based I/O and full ANSI formatted I/O routines are available starting with the basic OS profile. Formatted output routines such as printf( ) and sprintf( ) can handle floating-point or vector types where applicable. Formatted input routines such as scanf( ) are also available. These routines use the standard I/O file descriptors as expected. These capabilities are provided by the INCLUDE_FORMATTED_IO component.
In addition to the components and libraries provided by the minimal kernel profile
and basic kernel profile (listed in Table 2-2 and Table 2-3), the basic OS profile
provides those listed in Table 2-4.
Table 2-4 Basic OS Profile Components—in Addition to the Minimal and Basic Kernel Profiles
VxWorks operating system code can itself be customized. This section introduces customization of usrClock( ), hardware initialization, and more general features of the operating system.
During system initialization at boot time, the system clock ISR—usrClock( )—is
attached to the system clock timer interrupt. For every system clock interrupt,
usrClock( ) is called to update the system tick counter and to run the scheduler.
You can add application-specific processing to usrClock( ). However, you should
keep in mind that this is executed in interrupt context, so only limited functions
can be safely called. See 4.20.5 Special Limitations of ISRs, p.246 for a list of routines
that can be safely used in interrupt context.
Long power management, if used, allows the processor to sleep for multiple ticks.
See 2.5 Power Management, p.40. The usrClock( ) routine, and therefore
tickAnnounce( ), is not called while the processor is sleeping. Instead, usrClock( )
is called only once, after the processor wakes, if at least one tick has expired.
Application code in usrClock( ) must verify the tick counter each time it is called,
to avoid losing time due to setting the processor into sleep mode.
When the application requires custom hardware, or custom initialization of existing hardware, the BSP must be modified to perform the required initialization. Which BSP modifications are needed depends on the type of hardware and the type of initialization that must be performed. For information about adding or customizing device drivers, see the VxWorks Device Driver Developer’s Guide, particularly the introductory sections. For information about custom modifications to BSP code, see the VxWorks BSP Developer’s Guide.
Other Customization
This step recompiles all modified files in the directory, and replaces the
corresponding object code in the appropriate architecture-dependent directory.
After that, the next time you rebuild VxWorks, the resulting system image includes
your modified code.
The following example illustrates replacing usrLib with a modified version,
rebuilding the archives, and then rebuilding the VxWorks system image. For the
sake of conciseness, the make output is not shown. The example assumes the
pcPentium BSP; replace the BSP directory name and CPU value as appropriate for
your environment.
c:\> cd installDir\vxworks-6.1\target\src\usr
c:\installDir\vxworks-6.1\target\src\usr> copy usrLib.c usrLib.c.orig
c:\installDir\vxworks-6.1\target\src\usr> copy develDir\usrLib.c usrLib.c
c:\installDir\vxworks-6.1\target\src\usr> make CPU=PENTIUM TOOL=diab
...
c:\installDir\vxworks-6.1\target\src\usr> cd installDir\vxworks-6.1\target\config\pcPentium
c:\installDir\vxworks-6.1\target\config\pcPentium> make
...
2.5 Power Management
VxWorks power management facilities provide for managing the power and performance states of the CPU. These facilities can be used to control CPU power use based on the following:
■ CPU utilization
■ CPU temperature thresholds
■ task- and ISR-specific performance states
The VxWorks power management facilities utilize key concepts of the Advanced
Configuration and Power Interface (ACPI) Specification, version 3.0. The ACPI
specification has not been implemented for VxWorks because it is not suitable for hard real-time systems or for all the architectures supported by VxWorks.
However, the ACPI specification provides useful definitions of power states and
power-state transitions, as well as of thermal management, and these definitions
have been incorporated into the design of the VxWorks power management
facilities.
The ACPI 3.0 specification defines processor power states as well as the transitions
that take the processor from one state to the other. Essentially it defines the
processor power state machine. This aspect of the specification enables the
mapping of the power management features of a CPU to one of the defined states,
whether the CPU is ACPI compliant or not. For example, ACPI defines that in
power state C1, the processor does not execute instructions but the contents of the
caches remain valid (bus snooping still takes place). Many CPUs support such a
power state, but manufacturers often use different names to identify that state.
In addition to defining processor power states, ACPI defines performance states
where the CPU executes instructions, but not necessarily at its maximum
throughput. These states correspond to voltage and/or frequency scaling
capabilities present on some CPUs, and provide a power management scheme
with which power can be managed even if the system is not idle.
Figure 2-2 illustrates the ACPI-defined power states that apply to the CPU power
management for VxWorks.
The G0 global system state is also known as the working state. ACPI requires that
all processor power states reside under the G0 state, which is why other G states
are not deemed relevant to this feature. The C0 to Cn states are processor power
states. Key features of these processor power states are as follows:
■ In the C0 state, the processor is fetching and executing instructions.
■ In the C1 to Cn states, the processor is not executing instructions.
■ The higher the power state number, the greater the power saving, but at the cost of greater latency in reaction to external events.
■ State transitions occur to and from the C0 state. For example, after going from the C0 state to the C1 state, the processor must transition back to the C0 state before going to the C2 state. This is because transitions are triggered by software, and only the C0 state is allowed to execute instructions.
Under the C0 power state reside the processor performance states. In each of these
states the CPU is executing instructions but the performance and power savings in
each P-state vary. The higher the performance state number, the greater the power
saving, but the slower the rate of instruction execution. For example, the SpeedStep technology of the Pentium processor defines various voltage-frequency settings that are mapped to the ACPI-defined P-states.
Note that unlike the C-state transitions, P-state transitions can occur between any
two states.
See the VxWorks Architecture Supplement for information about which states are supported.
Much like the processor power management concepts, the thermal management concepts defined in ACPI 3.0 are applicable to non-ACPI-compliant hardware. Some of these concepts are:
■ Differentiation between active cooling and passive cooling. Actively cooling a system consists of activating a cooling device such as a fan. Passive cooling is achieved by reducing the power consumed by a device. In this context, device includes processors, which makes passive cooling relevant to this feature.
■ Critical shutdown. This is the temperature threshold at which a device or system is shut down to protect it from heat-induced damage.
■ Notification of temperature changes. This allows a power management entity to actively or passively manage the temperature of a system or device without having to poll the devices for their operating temperature.
■ Processor throttling. This is the link between thermal management and the processor power states. ACPI equations define how to manage the performance states of a processor so as to keep it inside a temperature zone.
The power management framework is designed to accommodate two use cases for
controlling power consumption: one minimizes the power level of the CPU based
on how much work it has to do (and its temperature); the other runs tasks and ISRs
at different performance states based on their priority.
Wind River provides the power management framework and two power managers—only one of which can be used at a time (see Wind River Power Managers, p.47).
You can develop your own power manager for either of the two use cases
supported by the power management framework (see Power Management
Framework and Use Cases, p.44). The APIs provided by the power management
framework give you the control mechanisms for a power manager (see Power
Management Framework API, p.48).
The VxWorks power management framework is designed to serve two use cases:
one bases the control of power consumption on how much work must be done;
and the other bases the control of power consumption on task-specific
performance states.
One use case involves controlling the power consumed by the CPU based on how
much work the CPU has to do. The idea is to keep the power level of the CPU as
low as possible while preventing the system from going into overload. That is,
prevent running at 100% CPU utilization for longer than a user-defined period of time. It is quite clear that the writers of the ACPI specification had this use case in mind.
The second use case is based on the premise that power can be controlled by
having tasks and ISRs execute at different performance states (P-states). For
example, a task that performs work queued up by interrupts may need to run at
the P0 performance state (highest performance) while a maintenance task with no
hard deadlines can afford to run at the lowest performance state.
The first use case is more global in nature in that the entire system is running in a
certain power state. It is also a scenario in which the power consumption is
dynamically adapted to the work that is required of the processor. One of the
drawbacks of this approach, however, is that it makes it difficult to guarantee that deadlines can be met, as a piece of code is not guaranteed to run in the same
performance state on every invocation. The second use case provides a finer
granularity of control and can be more deterministic, since each task can be set to
run in the same performance state at all times. This comes at the price of increased
context switching and interrupt handling times.
NOTE: While the power management framework allows for various power management methods to be used, the power manager itself must be designed to use the capabilities of the framework in a coherent manner. For example, the framework cannot prevent contention for the CPU if both of the use cases described above are implemented at the same time. For the same reason, only one power manager should be included in the configuration of VxWorks (the two power managers provided by Wind River are mutually exclusive).
A CPU utilization based power manager is one that uses CPU utilization and CPU
temperature to control the power consumption of the CPU. There are really two
aspects to this approach. One is to transition the CPU from the C0 power state
(executing state) to one of the other C-states (non-executing states) when the
VxWorks kernel becomes idle. The other aspect is to control the performance state
(P-state) of the CPU so as to keep it inside a specified range of CPU utilization and,
optionally, inside a temperature range. In order to support a power manager using
this approach, the power management framework has the following features:
■ The framework notifies the power manager when the VxWorks kernel goes idle.
■ The framework notifies the power manager when the VxWorks kernel comes out of the idle state.
■ The framework allows the power manager to transition the CPU from the C0 state to any of the non-executing power states: C1, C2, ... Cn. Note that the transition back to the C0 state occurs when an external event takes place (that is, an interrupt); it is therefore not a state transition that the framework can allow the power manager to control.
■ The framework allows the power manager to transition the CPU between performance states (P-states) based on the CPU utilization over a user-defined time interval. This is achieved by the framework keeping track of CPU utilization and reporting that figure to the power manager.
■ The framework computes the CPU utilization over two user-specified time
intervals. Having two intervals makes it easier for the power manager to
implement a quick ramp up, slow ramp down policy through the performance
states. The sampling intervals can be modified dynamically.
■ The framework notifies the power manager when a CPU-utilization interval
has elapsed and provides the CPU utilization figure to the power manager at
that time.
■ The framework allows the power manager to specify a high and a low
temperature threshold for the purpose of being notified whenever the
temperature of the CPU crosses either threshold. These thresholds can be
modified dynamically. The purpose for these is to allow the power manager to
implement a cooling policy such as reducing the CPU performance state to
lower power consumption, hence lowering temperature.
The full-featured CPU utilization power manager provided by Wind River is an
example of this type of power management. See Wind River Power Managers, p.47.
The per-task performance power manager is based on the premise that power can
be controlled by having tasks execute at different performance states (P-states). For
example, a task that performs work queued up by interrupts may need to run at
the P0 performance state (highest performance) while a maintenance task with no
hard deadlines can afford to run at the lowest performance state. In order to
support a power manager using this approach, the power management
framework has the following features:
■ The framework allows a performance state (P-state) to be assigned to each task and allows that state to be set during context switches.
■ The framework allows a single performance state to be assigned for all interrupts in the system so that execution of ISRs can be performed in a performance state other than the one of the interrupted task.
Table 2-5 describes the API provided by the power management framework.
Power managers use this API to plug into the framework. The routines are
available only in the kernel.
For more information about the routines and the power states supported with
VxWorks, see the API reference for cpuPwrLib.
Also see the VxWorks Architecture Supplement for the mappings between the ACPI
specification C-states and P-states and the power modes supported by the CPU in
question.
NOTE: For this release, the power management facilities available for architectures
other than IA are the same as provided for VxWorks 6.0 and 6.1. These facilities are
described in this section.
The features described in 2.5.1 Power Management for IA Architecture, p.41 will be
available for other architectures in future releases.
■ Short Sleep Mode
In short sleep mode, the CPU is put in a low power state until the next
interrupt occurs, including the system clock interrupt. The maximum amount
of time the CPU can sleep is therefore the interval in between system clock
interrupts. The short sleep mode does not require special BSP support. It is
provided at the architecture level. See the VxWorks Architecture Supplement to
determine if this mode is supported for your CPU.
■ Long Sleep Mode
In long sleep mode, the CPU is put in a low power state until the next interrupt occurs, excluding the system clock interrupt. This allows the CPU to sleep until the next system event is scheduled to occur, such as a task timing out on a semaphore take operation or a watchdog firing. This mode requires BSP support, because the system clock interrupt source must be turned off and a mechanism must be provided to wake the CPU after a specified amount of time.
To provide power management support for your system, configure VxWorks with
the INCLUDE_POWER_MGMT_BSP_SUPPORT component.
For more information, see the VxWorks BSP Developer’s Guide. Also see System Clock
Modification, p.39.
2.6 Kernel Applications
Note that VxWorks can also be configured with support for applications that
execute in user space as processes. See VxWorks Application Programmer’s Guide:
Applications and Processes.
Both VxWorks native C libraries and Dinkum C and C++ libraries are provided for VxWorks application development. As shown in Table 2-6, VxWorks native libraries are used for C kernel application development, and Dinkum libraries are used in all other cases.
The VxWorks native C libraries provide routines outside the ANSI specification.
Note that they provide no support for wide or multi-byte characters.
For more information about these libraries, see the VxWorks and Dinkum API
references. For more information about C++ facilities, see 13. C++ Development.
An application that executes in the kernel does not require a main( ) routine (unlike a process-based application). It simply requires an entry-point routine that starts all the tasks required to get the application running.
NOTE: If your kernel application includes a main( ) routine, do not assume that it
will start automatically. Kernel application modules that are downloaded or
simply stored in the system image must be started interactively (or be started by
another application that is already running). The operating system can also be
configured to start applications automatically at boot time (see 2.6.10 Configuring VxWorks to Run Applications Automatically, p.66).
The entry-point routine performs any data initialization that is required, and starts
all the tasks that the running application uses. For example, a kernel application
might have an entry-point routine named myAppStartUp( ), which could look something like this:
void myAppStartUp (void)
    {
    runFoo ();
    tidThis = taskSpawn ("tThis", 200, 0, STACK_SIZE,
                         (FUNCPTR) thisRoutine, 0,0,0,0,0,0,0,0,0,0);
    tidThat = taskSpawn ("tThat", 220, 0, STACK_SIZE,
                         (FUNCPTR) thatRoutine, 0,0,0,0,0,0,0,0,0,0);
    tidAnother = taskSpawn ("tAnother", 230, 0, STACK_SIZE,
                            (FUNCPTR) anotherRoutine, 0,0,0,0,0,0,0,0,0,0);
    }
For information about VxWorks tasks and multitasking, see 4. Multitasking. For
information about working with C++ see 13. C++ Development.
The header file vxWorks.h must be included first by every kernel application
module that uses VxWorks facilities. It contains many basic definitions and types
that are used extensively by other VxWorks modules. Many other VxWorks
header files require these definitions. Include vxWorks.h with the following line:
#include <vxWorks.h>
Kernel applications can include other VxWorks header files, as needed, to access
VxWorks facilities. For example, a module that uses the VxWorks linked-list
subroutine library must include the lstLib.h file with the following line:
#include <lstLib.h>
The API reference entry for each library lists all header files necessary to use that
library.
All ANSI-specified header files are included in VxWorks. Those that are
compiler-independent or more VxWorks-specific are provided in
installDir/vxworks-6.x/target/h while a few that are compiler-dependent (for
example stddef.h and stdarg.h) are provided by the compiler installation. Each
toolchain knows how to find its own internal headers; no special compile flags are
needed.
Each compiler has its own C++ libraries and C++ headers (such as iostream and
new). The C++ headers are located in the compiler installation directory rather
than in installDir/vxworks-6.x/target/h. No special flags are required to enable the
compilers to find these headers. For more information about C++ development,
see 13. C++ Development.
NOTE: In releases prior to VxWorks 5.5, Wind River recommended the use of the
flag -nostdinc. This flag should not be used with the current release since it prevents
the compilers from finding headers such as stddef.h.
By default, the compiler searches for header files first in the directory of the source
module and then in its internal subdirectories. In general,
installDir/vxworks-6.x/target/h should always be searched before the compilers’
other internal subdirectories; to ensure this, always use the following flags when compiling for VxWorks:
-I %WIND_BASE%/target/h -I %WIND_BASE%/target/h/wrn/coreip
Some header files are located in subdirectories. To refer to header files in these
subdirectories, be sure to specify the subdirectory name in the include statement,
so that the files can be located with a single -I specifier. For example:
#include <vxWorks.h>
#include <sys/stat.h>
Some VxWorks facilities make use of other, lower-level VxWorks facilities. For
example, the tty management facility uses the ring buffer subroutine library. The
tty header file tyLib.h uses definitions that are supplied by the ring buffer header
file rngLib.h.
It would be inconvenient to require you to be aware of such include-file
interdependencies and ordering. Instead, all VxWorks header files explicitly
include all prerequisite header files. Thus, tyLib.h itself contains an include of
rngLib.h. (The one exception is the basic VxWorks header file vxWorks.h, which
all other header files assume is already included.)
Generally, explicit inclusion of prerequisite header files can pose a problem: a
header file could get included more than once and generate fatal compilation
errors (because the C preprocessor regards duplicate definitions as potential
sources of conflict). However, all VxWorks header files contain conditional
compilation statements and definitions that ensure that their text is included only
once, no matter how many times they are specified by include statements. Thus, a
kernel application module can include just those header files it needs directly,
without regard to interdependencies or ordering, and no conflicts will arise.
Some elements of VxWorks are internal details that may change and so should not
be referenced in a kernel application. The only supported uses of a module’s
facilities are through the public definitions in the header file, and through the
module’s subroutine interfaces. Your adherence ensures that your application
code is not affected by internal changes in the implementation of a VxWorks
module.
Some header files mark internal details using HIDDEN comments:
/* HIDDEN */
...
/* END HIDDEN */
Internal details are also hidden with private header files: files that are stored in the
directory installDir/vxworks-6.x/target/h/private. The naming conventions for
these files parallel those in installDir/vxworks-6.x/target/h with the library name
followed by P.h. For example, the private header file for semLib is
installDir/vxworks-6.x/target/h/private/semLibP.h.
The VxWorks APIs have a long established convention for the creation and
deletion of kernel entities. Objects such as tasks, semaphores, message queues and
watchdogs are instantiated using their respective creation APIs (for example,
taskSpawn( ), semXCreate( ), and so on) and deleted using their respective delete
APIs (for example, msgQDelete( ), wdDelete( ), and so on.). Object creation is a
two-step process: first the memory for the object is allocated from the system,
which is then initialized appropriately before the object is considered usable.
Object deletion involves invalidation of the object, followed by freeing its memory
back to the system. Thus, object creation and deletion are dependent on dynamic
memory allocation, usually through the malloc( ) and free( ) routines.
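For instance, the familiar dynamic pattern for a binary semaphore might look like the following sketch. The semBCreate( ) and semDelete( ) routines are standard semLib APIs, but the surrounding function is illustrative:

```c
#include <vxWorks.h>
#include <semLib.h>

/* Sketch of the dynamic two-step pattern: creation allocates and
 * initializes; deletion invalidates and frees. */
STATUS example (void)
    {
    SEM_ID semId = semBCreate (SEM_Q_FIFO, SEM_EMPTY); /* allocate + initialize */
    if (semId == NULL)
        return (ERROR);         /* allocation can fail under memory pressure */
    /* ... use the semaphore ... */
    return (semDelete (semId)); /* invalidate + free */
    }
```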
Dynamic creation and deletion of objects at run-time is a convenient programming
paradigm, though it has certain disadvantages for some real-time critical
applications. First, the allocation of memory from the system cannot always be
guaranteed. Should the system run out of memory, the application cannot create the resources it must have to function. The application must then resort to a suitable
error recovery process if any exists, or abort in some fashion. Second, dynamic
allocation of memory is a relatively slow operation that may potentially block the
calling task. This makes dynamic allocation non-deterministic in performance.
Static instantiation of objects is a faster, more deterministic alternative to dynamic
creation. In static instantiation, the object is declared as a compile time variable.
Thus the compiler allocates storage for the object in the program being compiled.
No further allocation needs to be done. At run-time, the object's memory is available immediately at startup for the initialization step. Initialization of pre-allocated memory is much faster and more deterministic than dynamic creation. Such static declaration of objects cannot fail, unless the program itself is too large to fit in the system's memory.
Many applications can exploit static instantiation of objects to varying degrees. Most applications require some resources that are created at startup and last for the lifetime of the application; these resources are never deleted. Such objects are ideally suited for static (that is, compile-time) allocation. To the extent that objects are instantiated statically, the application is that much more fail-safe and faster to launch.
See 2.4.5 Small VxWorks Configuration Profiles, p.24 for information about operating
system profiles of particular relevance for static instantiation.
Kernel objects such as tasks, semaphores, message queues and watchdogs can be
instantiated statically using the same principles outlined above.
Normally these objects are created using the appropriate create routines for that
type of object, and deleted using the appropriate delete routine. As mentioned
before, creation and deletion involve dynamic memory allocation and free
respectively.
Static instantiation of objects is a two-step process. First the object to be created is
declared, usually at global scope. Next the declared object is initialized using an
initialization routine, which makes it ready for use. In contrast, dynamic creation
with create routines is a one-step process. Static instantiation of objects is thus a
little less convenient, but more deterministic. Users can choose the style that suits
their purpose.
NOTE: Static instantiation should only be used for objects that are kernel-resident.
It is not meant to be used to create objects in a process (RTP).
The following sections describe an alternative static instantiation method for each
of these entities.
The taskSpawn( ) routine has been the standard method for instantiating tasks.
This API relies on dynamic allocations. In order to instantiate tasks statically
several macros have been provided to emulate the dynamic instantiation
capability provided by taskSpawn( ) and related routines.
The VX_TASK macro declares a task object at compilation time. It takes two
arguments: the task name and its stack size. When calling taskSpawn( ) the name
may be a NULL pointer, but when using the VX_TASK macro, a name is mandatory. The stack size must evaluate to a non-zero integer value and must be a compile-time constant.
The VX_TASK_INSTANTIATE macro is the static equivalent of the taskSpawn( )
routine. It initializes and schedules the task, making it run according to its priority.
VX_TASK_INSTANTIATE evaluates to the task ID of the spawned task if it was
successful, or ERROR if not.
The following example illustrates spawning a task statically (pEntry is the task's entry-point routine; see the taskLib API reference for the complete argument list):
#include <vxWorks.h>
#include <taskLib.h>

VX_TASK(myTask,4096);
int myTaskId;

STATUS initializeFunction (void)
    {
    myTaskId = VX_TASK_INSTANTIATE(myTask, 100, 0, 4096, pEntry,
                                   0,1,2,3,4,5,6,7,8,9);
    if (myTaskId != ERROR)
        return (OK); /* instantiation succeeded */
    else
        return (ERROR);
    }
Sometimes users may prefer to initialize a task, but keep it suspended until needed
later. This can be achieved by using the VX_TASK_INITIALIZE macro, as
illustrated below. Since the task is left suspended, users are responsible for calling
taskActivate( ) in order to run the task.
#include <vxWorks.h>
#include <taskLib.h>

VX_TASK(myTask,4096);
int myTaskId;

STATUS initializeFunction (void)
    {
    /* initialize the task, but leave it suspended */
    myTaskId = VX_TASK_INITIALIZE(myTask, 100, 0, 4096, pEntry,
                                  0,1,2,3,4,5,6,7,8,9);
    if (myTaskId != NULL)
        {
        taskActivate (myTaskId);
        return (OK);
        }
    else
        return (ERROR);
    }
The macro VX_MSG_Q is used to declare a message queue at compile time. It takes
three parameters: the name, the maximum number of messages in the message
queue, and the maximum size of each message.
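After this declaration, the msgQInitialize( ) routine makes the declared queue ready for use. The following sketch shows the pattern; the argument order shown here follows the conventions of the other static instantiation examples, so consult the msgQLib reference for the exact signature:

```c
#include <vxWorks.h>
#include <msgQLib.h>

/* declare a queue of up to 100 messages of 16 bytes each */
VX_MSG_Q(myMsgQ,100,16);
MSG_Q_ID myMsgQId;

STATUS initializeFunction (void)
    {
    /* same name and sizes as in the VX_MSG_Q declaration */
    myMsgQId = msgQInitialize (myMsgQ, 100, 16, MSG_Q_FIFO);
    if (myMsgQId == NULL)
        return (ERROR); /* initialization failed */
    else
        return (OK);
    }
```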
As with other static instantiation macros, it is crucial to pass exactly the same name used in the VX_MSG_Q declaration to the msgQInitialize( ) routine, or else compilation errors will result. It is equally crucial to pass exactly the same values for the message size and maximum number of messages as were passed to the VX_MSG_Q macro.
For more information, see the API reference for msgQLib.
The macro VX_WDOG is used to declare a watchdog at compile time. It takes one parameter, the name of the watchdog. After this declaration, the routine wdInitialize( ) is used to initialize the watchdog and make it ready for use, as in the following sketch (which follows the conventions of the other static instantiation examples; see the wdLib reference for the exact signature):
#include <vxWorks.h>
#include <wdLib.h>

VX_WDOG(myWdog);
WDOG_ID myWdogId;

STATUS initializeFunction (void)
    {
    myWdogId = wdInitialize (myWdog);   /* initialize the declared watchdog */
    if (myWdogId == NULL)
        return (ERROR);                 /* initialization failed */
    else
        return (OK);
    }

As with other static instantiation macros, it is crucial to pass exactly the same name used in the VX_WDOG declaration to the wdInitialize( ) routine, or else compilation errors will result.
For more information, see the API reference for wdLib.
You can make use of the VxWorks makefile structure to put together your own
application makefiles quickly and efficiently. If you build your application directly
in a BSP directory (or in a copy of one), you can use the makefile in that BSP, by
specifying variable definitions that include the components of your application.
You can specify values for these variables either from the make command line, or
from your own makefiles (when you take advantage of the predefined VxWorks
make include files).
ADDED_CFLAGS
Application-specific compiler options for C programs.
ADDED_C++FLAGS
Application-specific compiler options for C++ programs.
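For example, to pass extra options to the C compiler from the make command line (the flag values shown here are illustrative):

```
% make ADDED_CFLAGS="-g -DMYAPP_DEBUG"
```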
Additional variables can be used to link kernel application modules with the VxWorks image; see 2.6.8 Linking Kernel Application Object Modules with VxWorks, p.64.
For more information about makefiles, see the VxWorks Command-Line Tools User’s
Guide.
You can also take advantage of the makefile structure if you develop kernel
application modules in separate directories. Example 2-1 illustrates the general
scheme. Include the makefile headers that specify variables, and list the object
modules you want built as dependencies of a target. This simple scheme is usually
sufficient, because the makefile variables are carefully designed to fit into the
default rules that make knows about.2
include $(WIND_BASE)/target/h/make/defs.bsp
exe : myApp.o
For information about build options, see the VxWorks Architecture Supplement for
the target architecture in question. For information about using makefiles to build
applications, see the VxWorks Command-Line Tools User’s Guide.
2. However, if you are working with C++, it may be also convenient to copy the .cpp.out rule
from installDir/vxworks-6.x/target/h/make/rules.bsp into your application’s makefile.
The following example is a command to link several modules, using the Wind River (Diab) linker for the PowerPC family of processors:
c:\> dld -o applic.o -r applic1.o applic2.o applic3.o
A comparable GNU toolchain command uses the GNU linker with the same -r and -o options.
In either case, the command creates the object module applic.o from the object
modules applic1.o, applic2.o, and applic3.o. The -r option is required, because the
object-module output must be left in relocatable form so that it can be downloaded
and linked to the target VxWorks image.
Any VxWorks facilities called by the kernel application modules are reported by
the linker as unresolved externals. These are resolved by the loader when the
module is loaded into VxWorks memory.
! WARNING: Do not link each kernel application module with the VxWorks libraries.
Doing this defeats the load-time linking feature of the loader, and wastes space by
writing multiple copies of VxWorks system modules on the target.
In order to produce complete systems that include kernel application modules, the
modules must be statically linked with the VxWorks image. The makefile
EXTRA_MODULES variable can be used to do so. It can be used from the command
line as follows:
% make EXTRA_MODULES="foo.o"
For more information about using makefile variables, see 2.6.6 Building Kernel
Application Modules, p.62.
To include your kernel application modules in the system image using a makefile,
identify the names of the application object modules (with the .o suffix) with
EXTRA_MODULES. For example, to link the module myMod.o with the operating
system, add a line like the following:
EXTRA_MODULES = myMod.o
Building the system image with the module linked in is the final part of this step.
In the project directory, execute the following command:
c:\myVxProj\osProj> make vxWorks
For information about how to have kernel applications start automatically at boot
time, see 2.6.10 Configuring VxWorks to Run Applications Automatically, p.66.
Generally, VxWorks boot loader code is copied to a start address in RAM above
the constant RAM_HIGH_ADRS, and the boot loader in turn copies the
downloaded system image starting at RAM_LOW_ADRS. The values of these
constants are architecture dependent, but in any case the system image must not
exceed the space between the two. Otherwise the system image will overwrite the
boot loader code while downloading, potentially killing the booting process.
To help avoid this, the last command executed when you build a new VxWorks
image is vxsize, which shows the size of the new executable image and how much
space (if any) is left in the area below the space used for boot ROM code:
vxsize 386 -v 00100000 00020000 vxWorks
vxWorks: 612328(t) + 69456(d) + 34736(b) = 716520 (235720 bytes left)
(In this output, t stands for text segment, d for data segment, and b for bss.)
Self-Booting Image
myAppStartUp();
3. Link the kernel-based application object modules with the kernel image (see 2.6.8 Linking Kernel Application Object Modules with VxWorks, p.64).
2.7 Custom Kernel Libraries
NOTE: Functionality can be added to VxWorks in the form of kernel modules that
do not have to be defined as VxWorks components (see 2.6 Kernel Applications,
p.51). However, in order to make use of either Workbench or the vxprj
command-line configuration facilities, to define dependencies between
components, and so on, extensions to the operating system should be developed
using CDFs.
A CDF identifies the binary and source code elements that make up the
component, its configuration parameters, relationship to other components, and
so on. CDFs also define information about how components are displayed in the
Workbench kernel configuration facility, and they can be used to group
components into predefined sets to facilitate VxWorks configuration. While some
components are autonomous, some have dependencies on other components,
which must be included in the configuration of the operating system for run-time
operation.
Both Workbench and the vxprj command-line facility use CDFs for configuring the
operating system with selected components, for setting component parameter
values, and so on. Workbench also uses information in CDFs to display the names
and descriptions of components, and to provide links to online help. For
Defining a Component
This section describes the process of defining your own component. To allow for
the greatest flexibility, there is no convention governing the order in which
properties describe a component or the sequence in which CDL objects are entered
into a component description file. The following steps taken to create the
component INCLUDE_FOO are a suggested order only; the sequence is not
mandatory. Nor is there meant to be any suggestion that you use all of the
properties and object classes described.
The naming conventions for CDL are described in 2.8.5 CDF Naming Conventions,
p.80. Note that CDF files must have a .cdf suffix.
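Putting the steps together, a minimal component description for INCLUDE_FOO might look like the following sketch. The file names, routine name, and the folder referenced here are assumptions for illustration only:

```
Component INCLUDE_FOO {
    NAME        foo support
    SYNOPSIS    Example foo component
    MODULES     fooLib.o
    INIT_RTN    fooLibInit ();
    HDR_FILES   foo.h
    _CHILDREN   FOLDER_APPLICATION
}
```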
2.8 Custom VxWorks Components and CDFs
If your module is not located in the standard path, use the ARCHIVE property to
specify the archive name, for example, /somePath/fooLib.a. (For additional
instructions concerning the use of ARCHIVE, see Step 10.)
NOTE: Developers should create their own archives for custom components. Do
not modify Wind River archives.
Use the HDR_FILES property to specify any .h files associated with the component, such as foo.h. These files are included in the generated prjConfig.c file, providing a declaration for the initialization routine.
If there is source code that should be compiled as part of the component, put it in
a .c file and specify the file name in the CONFIGLETTES property, for example,
fooConfig.c. Component parameters should be referenced in the configlette or in
the initialization routine; otherwise they have no effect.
For detailed information about the properties of the component object (MODULES
and so on), see 2.8.6 Component CDF Object, p.81.
If you are not using the MODULES property of the component object, use the
LINK_SYMS property to include your object module from a linked archive. The
system generates an unresolved reference to the symbol (fooRtn1 in this example),
causing the linker to resolve it by extracting your module from the archive.
...
LINK_SYMS fooRtn1
...
For more information about the INIT_RTN property of the component object, see 2.8.6 Component CDF Object, p.81.
NOTE: INIT_BEFORE only affects ordering within the initialization group. Do not reference a component that is not in the same initialization group; doing so has no effect.
Use the DEFAULT property to specify a default value for each parameter.
Parameter FOO_MAX_COUNT {
NAME Foo maximum
TYPE uint
DEFAULT 50
}
For information about the parameter object and its properties, see 2.8.7 Parameter
CDF Object, p.87.
In a selection group, the COUNT property specifies the minimum and maximum
number of included components. If the user exceeds these limits the system flags
the selection as misconfigured.
Selection SELECT_FOO {
NAME Foo type
SYNOPSIS Select the type of desired FOO support
COUNT 0-1
CHILDREN INCLUDE_FOO_TYPE1 \
INCLUDE_FOO_TYPE2 \
INCLUDE_FOO_TYPE3
DEFAULTS INCLUDE_FOO_TYPE1
}
For information about the folder and selection objects and their properties, see 2.8.11 Folder CDF Object, p.92, and 2.8.12 Selection CDF Object, p.94.
! CAUTION: Do not alter Wind River-supplied CDFs directly. Use the naming
convention to create a file whose higher precedence overrides the default
properties of Wind River-supplied components.
Modifying a Component
CDFs in the project directory are by convention read last and therefore have the highest priority. Use the naming convention to create a high-precedence CDF file that overrides the default properties of Wind River components.
For detailed information, see 2.8.2 CDF Precedence and CDF Installation, p.75.
In the following example, the default number of open file descriptors
(NUM_FILES) in the standard Wind River component INCLUDE_IO_SYSTEM has
been modified. The normal default value is 50.
Parameter NUM_FILES {
DEFAULT 75
}
Adding these example lines to a third-party CDF file, removing and re-adding the component if it is already in the configuration, and rebuilding the project changes the value of NUM_FILES to 75. The original Wind River CDF file, 00vxWorks.cdf, is not changed; the default property value is overridden because the third-party file has higher precedence. Other property values remain the same unless specifically redefined.
More than one CDF may define a given component and its properties. The
precedence of multiple definitions is determined by a numbering scheme used
with the CDF naming convention and by the order in which directories containing
CDFs are read by the configuration facility.
The component description files are read at two points by the configuration
facility:
■ When a project is created.
■ After component description files are changed and the project build occurs.
The order in which CDFs are read is significant. If more than one file describes the
same property of the same component, the one read last overrides all earlier ones.
The intent is to allow component description files to have some precedence level.
Files read later have higher precedence than those read earlier.
Precedence is established in two complementary ways:
■ CDF files reside in certain directories, and those directories are read in a
specified order.
■ Within one of these directories, CDFs are read in alphanumeric order.
The configuration facility sources all .cdf files in any of the following directories.
These directories are read in the order in which they are presented:
1. installDir/vxworks-6.x/target/config/comps/vxWorks
Contains all generic VxWorks components.
2. installDir/vxworks-6.x/target/config/comps/vxWorks/arch/arch
Contains all architecture-specific VxWorks components (or component
overrides).
3. installDir/vxworks-6.x/target/config/bspName
Contains all BSP-specific components.
4. the project directory
Contains all other components.
Wind River delivers the parts of its components in the following locations:
■ Source code modules are usually found in the
installDir/vxworks-6.x/target/src or target/config directories.
■ Headers are found in installDir/vxworks-6.x/target/h; object modules are
delivered in installDir/vxworks-6.x/target/lib/objARCH.
■ Component description files are in
installDir/vxworks-6.x/target/config/comps/vxWorks.
■ Component configlettes (source fragments) are in
installDir/vxworks-6.x/target/config/comps/src.
Third parties are not limited to this arrangement, and the location of component
elements can be fully described in the component description file.
If you have created a new CDF, you must place it in the appropriate path, based
on the nature of the contents and the desired level of precedence. Your choices of
paths (as described in CDF Directories and Precedence, p.76) are as follows:
■ installDir/vxworks-6.x/target/config/comps/vxWorks for generic VxWorks
component descriptions only
■ installDir/vxworks-6.x/target/config/comps/vxWorks/arch/arch for
architecture-specific VxWorks component descriptions
■ installDir/vxworks-6.x/target/config/bspName for board-specific component descriptions
■ the project directory for all other files
Wind River recommends that third parties place their component source code and
object elements in a directory, such as
installDir/vxworks-6.x/target/config/vendorName. The location of the component
description file (CDF) depends on where in the system the components should be
integrated.
To be able to integrate a new general-purpose VxWorks component into the
system, the CDF must be located in
installDir/vxworks-6.x/target/config/comps/vxWorks. If it is a BSP-specific
component, the file should be located in the BSP directory. If it is specific to a single
project, it should be located in the project directory
(installDir/vxworks-6.x/target/proj/projectName).
Be sure to follow the proper naming and numbering conventions, which are
described in 2.8.5 CDF Naming Conventions, p.80.
There are several tests that you can run to verify that components have been written
correctly:
■ Check Syntax and Semantics
For example:
% vxprj component check MyProject.wpj
If no project file is specified, vxprj looks for a .wpj file in the current directory.
If no component is specified, vxprj checks every component in the project. This
command invokes the cmpTest routine, which tests for syntactic and semantic errors.
Based on test output, make any required modifications. Keep running the
script until you have removed the errors.
■ Check Component Dependencies
You can test for scalability bugs in your component by running a second vxprj
command, which has the following syntax:
vxprj component dependencies [projectFile] component [component ... ]
If no project file is specified, vxprj looks for a .wpj file in the current directory.
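For example, to check the dependencies of a hypothetical INCLUDE_FOO component in a specific project:

```
% vxprj component dependencies MyProject.wpj INCLUDE_FOO
```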
■ Check the Component Hierarchy in Workbench
Verify that selections, folders, and new components you have added are
properly included by making a visual check of the Workbench component
hierarchy.
Look at how your new elements appear in the folder tree. Check the
parameters associated with a component and their parameter default values.
If you have added a folder containing components, and have included that
folder in your configuration, the Workbench component hierarchy should
display in boldface all components listed as defaults for that folder (that is,
values for the DEFAULTS property).
■ Build and Boot the System
Based on the operating system components selected by the user, the configuration
facilities (Workbench or vxprj) create the system configuration files prjComps.h,
prjParams.h, and prjConfig.c, which are used in building the specified system.
For information about how the configuration and code generation facilities work,
see the Wind River Workbench User’s Guide and the VxWorks Command-Line Tools
User’s Guide.
Follow these conventions when creating CDFs, where FOO (or Foo) is the variable
element of the naming convention:
■ All bundle names are of the form BUNDLE_FOO.
■ All component names are of the form INCLUDE_FOO.
■ All VxBus driver component names are of the form DRV_driverType_driverName (for example, DRV_SIO_NS16550).
■ All folder names are of the form FOLDER_FOO.
■ All selection names are of the form SELECT_FOO.
■ Parameter names should not match the format of any other object type, but are
otherwise unrestricted. For example, you can use FOO_XXX, but not
INCLUDE_FOO.
■ All initialization group names should be of the form initFoo. However, Wind
River initialization groups use the form usrFoo for backwards compatibility
reasons.
■ All component description files have a .cdf suffix.
■ All .cdf file names begin with two decimal digits; for example,
00comp_foo.cdf. These first two digits control the order in which .cdf files are
read within a directory. See 2.8.2 CDF Precedence and CDF Installation, p.75 for
more information.
Note that more than one component can be defined in a single CDF.
New component description files should be independent of existing files, with two
exceptions:
■ New component objects should be bound into an existing folder or selection for GUI display purposes.
■ New component object initialization routines must be associated with an existing initialization group.
A new component object can be bound to an existing folder or selection, and to an
existing initialization group, without modifying the existing elements. By
prepending an underscore (“_”) to certain component properties, you can reverse
the meaning of the property. For example, if there is already a folder
FOLDER_EXISTING and an initialization group initExisting, you can bind a new
component (which is defined in a different file) to it as follows:
Component INCLUDE_FOO {
...
_CHILDREN FOLDER_EXISTING
_INIT_ORDER initExisting
}
Components are the basic units of configurable software. They are the smallest scalable units in a system. With either Workbench or the command-line vxprj
facility, a user can reconfigure VxWorks by including or excluding a component,
as well as modifying some of its characteristics. The properties of a component
include:
■ Identification of the object code (modules) and source code (configlettes) used
in the build of a project.
■ Identification of configuration parameters, which are typically preprocessor macros used within a component's configuration code.
■ Integration information that controls how a component is integrated into an executable system image (for example, an initialization routine).
For information about how to group components into a bundle, see
2.8.9 Bundle CDF Object, p.90.
Component Properties
The component object class defines the source and object code associated with a
component, much of the integration information, and any associated parameters.
Dependencies among components can be detected, and the related components automatically added, by the configuration facility (Workbench or vxprj). The facility does so by analyzing the global symbols in each object module that belongs to the component, and then determining which other components provide the required functionality.
As an example, a message logging component could be defined in CDL as follows:
Component INCLUDE_LOGGING {
NAME message logging
SYNOPSIS Provides logMsg support
MODULES logLib.o
INIT_RTN logInit (consoleFd, MAX_LOG_MSGS);
CFG_PARAMS MAX_LOG_MSGS
HDR_FILES logLib.h
}
The component object includes the greatest number of properties. The more
commonly used properties are described below.
NAME
The name that appears next to the component icon in the component tree of
the Workbench kernel configuration facility.
SYNOPSIS
A brief description of the component’s functionality, which is used in
Workbench.
The next four properties are all related to the configuring of code in a project:
MODULES
The names of the object files that constitute the component's code (along with any source file configlettes). For example, semShow.o is identified in the MODULES property of the INCLUDE_SEM_SHOW component.
! CAUTION: An object file must only be associated with one component, and its
name must be globally unique (across the entire system).
CONFIGLETTES
Configlettes are source file fragments used in conjunction with parameters
(see CFG_PARAMS, p.83 and 2.8.7 Parameter CDF Object, p.87). They are
typically used for parameter switches (for example, to specify a number, a
binary setting to true or false, and so on).
Configlette definitions may include the use of macros. For example:
CONFIGLETTES $(MY_PARAM)\myFile.c
HDR_FILES
Header files associated with your configlette code or initialization routine.
These are header files that must be included in order for your configlette or
initialization routine to compile.
ARCHIVE
The archive file in which to find object modules stored other than in the
standard location.
The following property provides configuration information:
CFG_PARAMS
A list of configuration parameters associated with the component, typically a
list of preprocessor macros. Each must be described separately by its own
parameter object. Also see CONFIGLETTES, p.83 and 2.8.7 Parameter CDF
Object, p.87.
The next group of properties control integration of the component into the system
image, including initialization and dependency information.
INIT_RTN
A one-line initialization routine. Also see INIT_BEFORE, p.84, _INIT_ORDER,
p.84, and 2.8.8 Initialization Group CDF Object, p.88.
LINK_SYMS
A list of symbols to look up in order to include components from an archive.
REQUIRES
A list of component(s) that do not otherwise have structural dependencies and
must be included if this component is included. List only those components
that cannot be detected from MODULES; that is, by their associated object files.
For example, components with configlettes only or those that reference other
components through function pointers. In this latter case, REQUIRES can
specify a selection.
INCLUDE_WHEN
Sets a dependency to automatically include the specified component(s) when
this component is included (that is, it handles nested includes).
INIT_BEFORE
Call the initialization routine of this component before the one specified by this
property. This property is effective only in conjunction with _INIT_ORDER.
Also see INIT_RTN, p.83, _INIT_ORDER, p.84, and 2.8.8 Initialization Group
CDF Object, p.88.
_INIT_ORDER
The component belongs to the specified initialization group. This property places the specified component at the end of the INIT_ORDER property list of the initialization group object. Also see INIT_RTN, p.83, INIT_BEFORE, p.84, and 2.8.8 Initialization Group CDF Object, p.88.
Like NAME and SYNOPSIS before them, the following properties also affect user presentation in Workbench:
HELP
List reference pages associated with the component.
_CHILDREN
This component is a child component of the specified folder (or selection). Also
see CHILDREN, p.93.
_DEFAULTS
This component is a default component of the specified folder (or selection).
This property must be used in conjunction with _CHILDREN.
Normally, a component is listed in its container (folder or selection) with the CHILDREN and DEFAULTS properties. Using the complementary properties (_CHILDREN and _DEFAULTS) allows you to add components to an existing container without modifying the container's CDF.
Component Template
Parameter Properties
DEFAULT
The default value of the parameter.
Parameter Template
Parameter parameter {
NAME name // readable name (e.g., "max open files")
InitGroup group {
INIT_RTN rtn(..) // initialization routine definition
A bundle object can be used to associate components that are often used together, which facilitates configuration of the operating system with generic sets of facilities.
For example, the network kernel shell bundle (BUNDLE_NET_SHELL) includes all the components required to use the kernel shell with a network symbol table:
Bundle BUNDLE_NET_SHELL {
NAME network kernel shell
SYNOPSIS Kernel shell tool with networking symbol table
HELP shell windsh tgtsvr
COMPONENTS INCLUDE_SHELL \
INCLUDE_LOADER \
INCLUDE_DISK_UTIL \
INCLUDE_SHOW_ROUTINES \
INCLUDE_STAT_SYM_TBL \
INCLUDE_DEBUG \
INCLUDE_UNLOADER \
INCLUDE_MEM_SHOW \
INCLUDE_SYM_TBL_SHOW \
INCLUDE_CPLUS \
INCLUDE_NET_SYM_TBL
_CHILDREN FOLDER_BUNDLES
}
For information about components, see 2.8.6 Component CDF Object, p.81.
Bundle Properties
NAME
The name of the bundle as it should appear in Workbench.
SYNOPSIS
A description of the bundle’s functionality, as it should appear in Workbench.
HELP
HTML help topics that the bundle is related to.
COMPONENTS
A list of components to add to the kernel configuration when this bundle is
added.
_CHILDREN
This bundle is a child of the specified folder.
Profile Properties
NAME
The name of the profile as it should appear in Workbench.
SYNOPSIS
A brief description of the profile's functionality, which is used in Workbench.
PROFILES
A list of base profiles.
COMPONENTS
A list of components to add to the kernel configuration when this profile is
added.
Profile Template
Profile profileName {
NAME nameForGui
SYNOPSIS description
PROFILES p1 p2 p3...
COMPONENTS comp1 comp2 comp3...
}
Folder FOLDER_ANSI {
NAME ANSI C components (libc)
SYNOPSIS ANSI libraries
CHILDREN INCLUDE_ANSI_ASSERT \
INCLUDE_ANSI_CTYPE \
INCLUDE_ANSI_LOCALE \
INCLUDE_ANSI_MATH \
INCLUDE_ANSI_STDIO \
INCLUDE_ANSI_STDLIB \
INCLUDE_ANSI_STRING \
INCLUDE_ANSI_TIME \
INCLUDE_ANSI_STDIO_EXTRA
DEFAULTS INCLUDE_ANSI_ASSERT INCLUDE_ANSI_CTYPE \
INCLUDE_ANSI_MATH INCLUDE_ANSI_STDIO \
INCLUDE_ANSI_STDLIB INCLUDE_ANSI_STRING \
INCLUDE_ANSI_TIME
}
Folder Properties
NOTE: Use folder objects only when creating a new grouping of components (new
or existing). Do not modify existing folders in order to include new components.
CDL accommodates that by prepending an underscore to a property name, for
example, _CHILDREN.
Folder Template
Folder folder {
NAME name // readable name (e.g., "foo libraries").
Unlike folders, selections do not contain folders or other selections; nor do they have parameter or initialization group objects associated with them.
Selection Properties
There are three timestamp drivers available, as indicated by the three values for
the CHILDREN property. The COUNT property permits a choice of exactly one, that is, a minimum and a maximum of 1.
Selection Template
Selection selection {
NAME name // readable name (for example, "foo
// communication path")
This template includes all objects and properties that can be used in a CDF.
Generally, CDFs use only a portion of them.
Bundle MY_BUNDLE
{
COMPONENTS :
HELP :
NAME :
SYNOPSIS :
_CHILDREN :
}
Component MY_COMPONENT
{
ARCHIVE :
CFG_PARAMS :
CONFIGLETTES :
ENTRY_POINTS :
EXCLUDE_WHEN :
HDR_FILES :
HELP :
HIDE_UNLESS :
INCLUDE_WHEN :
INIT_AFTER :
INIT_BEFORE :
INIT_RTN :
LINK_DATASYMS :
LINK_SYMS :
MODULES :
NAME :
PREF_DOMAIN :
PROJECT :
PROTOTYPE :
REQUIRES :
SHUTDOWN_RTN :
SYNOPSIS :
TERM_RTN :
USES :
_CHILDREN :
_COMPONENTS :
_DEFAULTS :
_EXCLUDE_WHEN :
_HIDE_UNLESS :
_INCLUDE_WHEN :
_INIT_AFTER :
_INIT_BEFORE :
_INIT_ORDER :
_LINK_DATASYMS :
_LINK_SYMS :
_REQUIRES :
_USES :
}
EntryPoint MY_ENTRYPOINT
{
NAME :
PRIVILEGED :
SYNOPSIS :
TYPE :
_ENTRY_POINTS :
}
EntryPointType MY_ENTRYPOINTTYPE
{
SYNOPSIS :
_TYPE :
}
Folder MY_FOLDER
{
CHILDREN :
DEFAULTS :
HELP :
NAME :
SYNOPSIS :
_CHILDREN :
_DEFAULTS :
}
InitGroup MY_INITGROUP
{
HELP :
INIT_AFTER :
INIT_BEFORE :
INIT_ORDER :
INIT_RTN :
NAME :
PROTOTYPE :
SHUTDOWN_RTN :
SYNOPSIS :
TERM_RTN :
_INIT_AFTER :
_INIT_BEFORE :
_INIT_ORDER :
}
Module MY_MODULE
{
ENTRY_POINTS :
NAME :
SRC_PATH_NAME :
_MODULES :
}
Parameter MY_PARAMETER
{
DEFAULT :
HELP :
NAME :
STORAGE :
SYNOPSIS :
TYPE :
VALUE :
_CFG_PARAMS :
}
Profile MY_PROFILE
{
COMPONENTS :
HELP :
NAME :
SYNOPSIS :
_CHILDREN :
}
Selection MY_SELECTION
{
ARCHIVE :
CFG_PARAMS :
CHILDREN :
CONFIGLETTES :
COUNT :
DEFAULTS :
HDR_FILES :
HELP :
HIDE_UNLESS :
INIT_AFTER :
INIT_BEFORE :
INIT_RTN :
LINK_DATASYMS :
LINK_SYMS :
MODULES :
NAME :
PROTOTYPE :
REQUIRES :
SHUTDOWN_RTN :
SYNOPSIS :
USES :
_CHILDREN :
_DEFAULTS :
_HIDE_UNLESS :
_INIT_AFTER :
_INIT_BEFORE :
_INIT_ORDER :
_LINK_DATASYMS :
_LINK_SYMS :
_REQUIRES :
_USES :
}
Symbol MY_SYMBOL
{
_LINK_DATASYMS :
_LINK_SYMS :
}
System calls are C-callable routines. They are implemented as short pieces of
assembly code called system call stubs. The stubs execute a trap instruction, which
switches execution mode from user mode to kernel mode. All stubs are identical
to each other except for the unique system call number that they pass to the kernel
to identify the system call.
In kernel mode, a trap handler copies any system call arguments from the user
stack to the kernel stack, and then calls the system call handler.
Each system call handler is given only one argument—the address of its argument
array. Handler routines interpret the argument area as a structure whose members
are the arguments.
System call handlers may call other routines in the kernel to service the system call
request. They must validate the parameters of the system call, and return errors if
necessary.
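As an illustration, a handler for a hypothetical system call foo (int fd, char *buf, int len) might look like the following sketch. The structure layout, names, and error conventions are assumptions for illustration; a real handler must additionally verify that pointer arguments refer to the calling task's address space.

```c
#include <stddef.h>
#include <string.h>
#include <errno.h>

/* Hypothetical argument structure for foo (int fd, char *buf, int len).
 * The trap handler copies the user-supplied arguments into an array;
 * the handler views that array as this structure. */
typedef struct
    {
    int   fd;
    char *buf;
    int   len;
    } FOO_SC_ARGS;

/* Hypothetical system call handler: validates its parameters and
 * returns -1 (with errno set) on error, or the length on success. */
int fooSc (FOO_SC_ARGS *pArgs)
    {
    if (pArgs == NULL || pArgs->buf == NULL || pArgs->len < 0)
        {
        errno = EINVAL;
        return -1;
        }

    /* Service the request; clearing the buffer stands in for real work. */
    memset (pArgs->buf, 0, (size_t) pArgs->len);
    return pArgs->len;
    }
```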
2.9 Custom System Calls
The architecture of the system call dispatcher allows system call handlers to be
installed at either compile time or run-time.
The various elements associated with a system call must derive their names from that of the system call itself. It is important to adhere to this convention in order to avoid compilation errors when using the automated mechanisms provided for adding system calls. See Table 2-7.
The system call name is used by the developer in system call definition files.
system call stub is generated automatically from the information in the definition
files. The developer must write the system call handler routine, which includes the
system call argument structure.
For example, if the name of the system call is foo( ), then:
■ The system call stub is named SYSCALL_STUB_foo.s. The stub implements the routine foo( ) in user mode.
■ The system call handler routine for system call foo must be named fooSc( ). Routine fooSc( ) is called when an application makes a call to foo( ) in user space. Writing a routine with this name is the kernel developer's responsibility.
Each system call must have a unique system call number. The system call number
is passed by the system call stub to the kernel, which then uses it to identify and
execute the appropriate system call handler.
A system call number is a concatenation of two numbers:
■ the system call group number
■ the routine number within the system call group
The group number is implemented as a ten-bit field, and the routine number as a
six-bit field. This allows for up to 1024 system call groups, each with 64 routines in
it. The total system-call number-space can therefore accommodate 65,536 system
calls.
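The concatenation can be illustrated with a small sketch. The exact encoding is defined by the SYSCALL_NUMBER( ) macro in syscall.h; the macros below are a hypothetical reconstruction that simply assumes the routine number occupies the low six bits.

```c
/* Hypothetical reconstruction of the system call number encoding: a
 * 10-bit group field concatenated with a 6-bit routine field.  The
 * authoritative layout is defined by SYSCALL_NUMBER() in syscall.h. */
#define SYSCALL_NUMBER(group, rtn)  (((group) << 6) | ((rtn) & 0x3f))
#define SYSCALL_GROUP_NO(num)       ((num) >> 6)
#define SYSCALL_RTN_NO(num)         ((num) & 0x3f)

/* Demo: group 2, routine 5 packs to (2 << 6) | 5 = 133. */
static int demoSyscallNum (void)
    {
    return SYSCALL_NUMBER (2, 5);
    }
```

Under this layout the largest possible number, SYSCALL_NUMBER(1023, 63), is 65535, which is consistent with the 65,536-call number space described above.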
Six system call groups—numbers 2 through 7—are reserved for customer use.
(Customers may request a formal system call group allocation from Wind River.)
All other system call groups are reserved for Wind River use.
! WARNING: Do not use any system call group numbers other than those reserved
for customer use. Doing so may conflict with Wind River or Wind River partner
implementations of system calls.
Wind River system call group numbers and system call routine numbers are
defined in the syscallNum.def file, which should not be modified by customers.
Customer system call group numbers and system call routine numbers are
defined in the syscallUsrNum.def file.
The Wind River system call number definition file, and a template for the customer
system call definition file are located in installDir/vxworks-6.x/target/share/h.
A given system call group is simply a collection of related system calls offering
complementary functionality. For example, the VxWorks SCG_STANDARD group
includes system calls that are commonly found in UNIX-like (POSIX) systems, and
the SCG_VXWORKS group includes system calls that are unique to VxWorks or
that are otherwise dissimilar to UNIX-like system calls.
For information about using the system call definition files to generate system
calls, see Adding System Calls Statically, p.107.
System calls may only take up to eight arguments. Special consideration must be
given to 64-bit arguments on 32-bit systems. Floating point and vector-type
arguments are not permitted.
Wind River system calls are defined in the syscallApi.def file. It should not be
modified by customers.
Customer system calls are defined in the syscallUsrApi.def file. See Adding System
Calls Statically, p.107 for information about editing this file.
Number of Arguments
System calls can take a maximum of eight arguments (the most that the trap
handler can accommodate). Each argument is expected to be one native word in
size. The size of a native word is 32 bits for a 32-bit architecture and 64 bits for a
64-bit architecture. For the great majority of system calls, whose arguments are all
32-bit, the number of words in the argument list is therefore equal to the number
of parameters the routine takes.
In cases where more than eight arguments are required, the arguments should be
packed into a structure whose address is the parameter to the system call.
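The packing pattern can be sketched as follows. The structure layout and names here are invented for illustration; in practice the kernel-side handler must validate the user-supplied structure address before dereferencing it.

```c
/* Hypothetical: ten logical arguments packed into one structure, so the
 * system call itself takes only the structure's address. */
struct tenArgs
    {
    int a0, a1, a2, a3, a4, a5, a6, a7, a8, a9;
    };

/* Kernel-side consumer (sketch): reads the packed arguments. */
int tenArgSum (const struct tenArgs *pArgs)
    {
    return pArgs->a0 + pArgs->a1 + pArgs->a2 + pArgs->a3 + pArgs->a4
         + pArgs->a5 + pArgs->a6 + pArgs->a7 + pArgs->a8 + pArgs->a9;
    }

/* User-side caller (sketch): fills the structure and would pass its
 * address as the single system call argument. */
int demoTenArgs (void)
    {
    struct tenArgs args = { 1, 2, 3, 4, 5, 6, 7, 8, 9, 10 };

    return tenArgSum (&args);
    }
```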
64-bit arguments are permitted, but they may only be of the type long long. For
32-bit architectures, a 64-bit argument takes up two native-words on the argument
list, although it is still only one parameter to the routine.
There are other complications associated with 64-bit arguments to routines. Some
architectures require 64-bit arguments to be aligned to either even or odd
numbered registers, while some architectures have no restrictions.
It is important for system call developers to take into account the subtleties of
64-bit argument passing on 32-bit systems. The definition of a system call for
VxWorks requires identification of how many words are in the argument list, so
that the trap handler can transfer the right amount of data from the user-stack to
the kernel-stack. Alignment issues may make this less than straightforward.
Consider for example, the following routine prototypes:
int foo (int a, int b, long long c, int d);
int bar (int p, long long q, int r);
The ARM and Intel x86 architectures have no alignment constraints for 64-bit
arguments, so the size of the argument list for foo( ) would be five words, while
the size of the argument for bar( ) would be four words.
PowerPC requires long long arguments to be aligned on eight-byte boundaries.
Parameter c to routine foo( ) is already at an eight-byte offset with respect to the
start of the argument list and is hence aligned. So for PowerPC, the argument list
size for foo( ) is five words.
However, in the case of bar( ) the long long argument q is at offset four from the
first argument, and is therefore not aligned. When passing arguments to bar, the
compiler will skip one argument register and place q at offset eight so that it is
aligned. This alignment pad is ignored by the called routine, though it still
occupies space in the argument list. Hence for PowerPC, the argument list for bar
is five words long. When describing a system call such as bar( ), it is thus advised
that the argument list size be set to five for it to work correctly on all architectures.
Consult the architecture ABI documentation for more information. There are only
a few routines that take 64-bit arguments.
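The word counting for foo( ) and bar( ) can be modeled with a small illustration (not VxWorks code; the function name is invented). Each argument contributes its size in 32-bit words, and under a PowerPC-style rule a 64-bit argument that would start at an odd word offset is preceded by a pad word.

```c
/* Models argument-list sizing on a 32-bit system.  wordsPerArg[i] is an
 * argument's size in 32-bit words (1 for int, 2 for long long); align64
 * selects the PowerPC-style rule that 64-bit arguments must start at an
 * even word offset.  ARM and x86 correspond to align64 == 0. */
int argListWords (const int *wordsPerArg, int nArgs, int align64)
    {
    int off = 0;
    int i;

    for (i = 0; i < nArgs; i++)
        {
        if (align64 && wordsPerArg[i] == 2 && (off & 1))
            off++;                      /* alignment pad word */
        off += wordsPerArg[i];
        }
    return off;
    }

/* foo (int a, int b, long long c, int d) */
const int fooWords[] = { 1, 1, 2, 1 };
/* bar (int p, long long q, int r) */
const int barWords[] = { 1, 2, 1 };
```

With this model, foo( ) is five words under both rules, while bar( ) is four words without the alignment rule and five with it, matching the discussion above.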
System calls may return only a native word as a return value (that is, integer values
or pointers, and so on).
64-bit return values are not permitted directly, though they may be emulated by
using private routines. To do so, a system call must have a name prefixed by an
underscore, and it must take a pointer to the return value as one of its parameters.
For example, consider the routine:
long long get64BitValue (void)
Routine _get64BitValue( ) is the actual system call that should be defined in the
syscallUsrNum.def and syscallUsrApi.def files. The routine get64BitValue( ) can
then be written as follows:
long long get64BitValue (void)
{
long long value;
_get64BitValue (&value);
return value;
}
(The get64BitValue( ) routine would be written by the user and placed in a user
mode library, and the _get64BitValue( ) routine would be generated
automatically; see 2.9.4 Adding System Calls, p.107.)
The value -1 (ERROR) is the only permitted error return value from a system call.
No system call should treat -1 as a valid return value. When a return value of -1 is
generated, the operating system transfers the errno value correctly across the trap
boundary so that the user-mode code has access to it.
If NULL must be the error return value, then the system call itself must be
implemented by another routine that returns -1 as an error return. The -1 value
from the system call can then be translated to NULL by another routine in user
mode.
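A sketch of this translation pattern follows. All names are invented, and the stub stands in for the real underscore-prefixed system call that would trap into the kernel.

```c
#include <stddef.h>

/* Stand-in for an underscore-prefixed system call that returns an object
 * address on success and -1 (ERROR) on failure. */
static long _objAddrGetStub (int id)
    {
    return (id > 0) ? 0x1000 + id : -1;
    }

/* User-mode wrapper: presents NULL, rather than -1, as the error value. */
void * objAddrGet (int id)
    {
    long result = _objAddrGetStub (id);

    return (result == -1) ? NULL : (void *)result;
    }
```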
! CAUTION: In order to enforce isolation between kernel and user space, not all
kernel APIs may be called from a system call handler. In particular, APIs cannot be
called if their operation involves passing a user-side task ID or an RTP ID. APIs
also cannot be called to create objects in the kernel if those APIs are already directly
accessible in user space by way of the standard system call interface. Examples
include taskSpawn( ), taskCreate( ), msgQCreate( ), pthread_create( ), the
various semaphore creation routines, and so on.
System call handlers must be named in accordance with the system call naming
conventions, which means that they must use the same name as the system call,
but with an Sc appended. For example, the foo( ) system call must be serviced by
the fooSc( ) system call handler.
All system call handlers take a single parameter, which is a pointer to their
argument structure. The argument structure must also be named in accordance
with the system call naming conventions, which means that they must use the
same name as the system call handler, but with Args appended. For example, the
argument structure for fooSc must be declared as struct fooScArgs.
For example, the write( ) system call is declared as:
int write (int fd, char * buf, int nbytes)
The system call handler routine for write is therefore named writeSc( ), and it is
declared as:
int writeSc (struct writeScArgs * pArgs)
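The following sketch shows the shape of such a handler. The real struct writeScArgs is generated by scgen and the real handler calls the kernel I/O subsystem; here a stub stands in for the kernel side so the control flow can be shown, and the comment marks validation work a real handler must perform.

```c
#include <string.h>

/* Sketch of the generated argument structure for write(). */
struct writeScArgs
    {
    int    fd;
    char * buf;
    int    nbytes;
    };

static char demoSink[64];               /* stands in for the real device */

static int kernelWriteStub (int fd, const char *buf, int nbytes)
    {
    (void)fd;
    if (nbytes < 0 || nbytes > (int)sizeof (demoSink))
        return -1;                      /* ERROR */
    memcpy (demoSink, buf, (size_t)nbytes);
    return nbytes;                      /* number of bytes written */
    }

/* Handler sketch: a real writeSc() must first validate that pArgs->buf
 * lies within the calling process's address space. */
int writeSc (struct writeScArgs *pArgs)
    {
    return kernelWriteStub (pArgs->fd, pArgs->buf, pArgs->nbytes);
    }

int demoWriteSc (void)
    {
    char msg[] = "hello";
    struct writeScArgs args = { 1, msg, 5 };

    return writeSc (&args);
    }
```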
At the end of the system call, in the case of failure, the system call handler should
ensure errno is set appropriately, and then return -1 (ERROR). If the return value
is -1 (ERROR), the kernel errno value is then copied into the calling process's errno.
If there is no error, the handler simply returns a value that is copied to user mode.
As long as handlers set their errno before returning ERROR, user-mode code sees
the same errno value.
System calls can be added both statically and dynamically. This means that they
can be either configured and built into the VxWorks operating system image, or
they can be added interactively to the operating system while it is running on a
target.
Dynamic addition is useful for rapid prototyping and debugging of system calls.
Static configuration is useful for more stable development efforts, and production
systems.
The process of adding system calls statically is based on the use of the
syscallUsrNum.def and syscallUsrApi.def system call definition files.
The files define the system call names and numbers, their prototypes, the system
call groups to which they belong, and (optionally) the components with which
they should be associated. The scgen utility program uses these files—along with
comparable files for standard VxWorks system calls—to generate the system call
apparatus required to work with the system call handler written by the developer.
The scgen program is integrated into the build system, and is run automatically
when the build system detects that changes have been made to
syscallUsrNum.def and syscallUsrApi.def.
The template files syscallUsrNum.def.template and syscallUsrApi.def.template
are in installDir/vxworks-6.x/target/share/h. Make copies of the files in the same
directory, without the .template extension, and create the appropriate entries in
them, as described below.
After you have written a system call handler, the basic steps required to add a new
system call to VxWorks are:
1. If you are creating a new system call group, add an entry for the group to
syscallUsrNum.def. See Defining a New System Call Group, p.109. Remember
that only groups 2 through 7 are available to developers; do not use any other
group numbers. (Contact Wind River if you need to have a group formally
added to VxWorks.)
2. Add an entry to syscallUsrNum.def to assign the system call to a system call
group, and to associate the system call name with a system call number. See
Defining a New System Call, p.110.
3. Add an entry to syscallUsrApi.def to define the system call name and its
arguments. See Defining a New System Call, p.110.
4. Write the system call handler routine. See 2.9.3 System Call Handler
Requirements, p.105.
5. Rebuild the kernel-mode and user-mode source code trees in
installDir/vxworks-6.x/target/src and installDir/vxworks-6.x/target/usr/src.
Use the following command in each directory:
make CPU=cpuType TOOL=toolType
Using the system call definitions in both the Wind River and customer system
call definition files, scgen generates the following:
1. The files installDir/vxworks-6.x/target/h/syscall.h and
installDir/vxworks-6.x/target/usr/h/syscall.h. The contents of both files are
identical. They define all system call numbers and group numbers in the
system. These files provide information shared between kernel and user space
code.
2. One system call assembly stub file for each system call. The stubs are placed
into the appropriate architecture directory under
installDir/vxworks-6.x/target/usr/src/arch for compilation into libvx.a or
libc.so.
3. A file containing argument structures for all system calls in the system. This
file is architecture/ABI specific, and is used by the system call handlers located
in the kernel. This file is named syscallArgsArchAbi.h under
installDir/vxworks-6.x/target/h/arch/archName (for example,
installDir/vxworks-6.x/target/h/arch/ppc/syscallArgsppc.h).
4. A file containing a pre-initialized system call group table for all system call
groups known at compile-time. This file is
installDir/vxworks-6.x/target/h/syscallTbl.h.
All of this output is then used by the build system automatically; no user
intervention is required to build the appropriate system call infrastructure into
the system.
The scgen utility can also be run from the command line for debugging purposes.
If you need to define a new system call group, add it to syscallUsrNum.def using
the following syntax:
SYSCALL_GROUP SCG_groupName groupNum componentNames
Six system call groups—numbers 2 through 7—are reserved for customer use. All
other system call groups are reserved for Wind River use. (See System Call
Numbering Rules, p.102.) Group names must be unique.
! WARNING: Do not use any system call group numbers other than those reserved
for customer use. Doing so may conflict with Wind River or Wind River partner
implementations of system calls.
The system calls that are part of the system call group are identified below the
SYSCALL_GROUP definition line. Up to 64 system calls can be identified within
each group. See Defining a New System Call, p.110.
To define a new system call, you must create entries in two different files:
■ One entry in syscallUsrNum.def, which assigns it to a system call group and
associates the system call name and number.
■ One entry in syscallUsrApi.def, which defines the system call name and its
arguments.
To add a system call to a call group, add an entry to syscallUsrNum.def under the
appropriate system call group name, using the following syntax:
sysCallNum sysCallName
Note that it is important to add system calls to the end of a system call group; do
not use numbers that have already been assigned. Reusing an existing number
breaks binary compatibility with existing binaries, and all existing applications
must be recompiled. System call numbers need not be strictly sequential (that is,
there can be gaps in the series for future use).
To define a system call itself, add an entry to syscallUsrApi.def using the
following syntax:
sysCallName numArgs [ argType arg1; argType arg2; argType argN; ] \
CompName INCLUDE headerFileName.h
System call definition lines can be split over multiple lines by using the backslash
character as a connector.
The name of the system call used in syscallUsrApi.def must match the name used
in syscallUsrNum.def.
When defining the number of arguments, take into consideration any 64-bit
arguments and adjust the number accordingly (for issues related to 64-bit
arguments, see System Call Argument Rules, p.103).
The arguments to the system call are described in the bracket-enclosed list. The
opening bracket must be followed by a space, and the closing bracket preceded by
one. Each argument must be followed by a semicolon and then at least one space.
If the system call does not take any arguments, nothing should be listed—not even
the bracket pair.
More than one component name can be listed. If any of the components is included
in the operating system configuration, the system call will be included when the
system is built. (For information about custom components, see 2.8 Custom
VxWorks Components and CDFs, p.67.)
The following mistakes are commonly made when editing syscallUsrApi.def and
syscallUsrNum.def, and can confuse the scgen utility:
■ No space after the opening bracket of an argument list.
■ No space before the closing bracket of an argument list.
■ No backslash at the end of a line (if the argument list continues onto the next
line).
■ An empty pair of brackets that encloses no arguments at all. This will cause the
generated temporary C file to have a compile error.
Bear in mind that there can be no more than 64 routines in any system call group.
If the system call includes the definition of a new type in a header file, the header
file must be identified with the INCLUDE statement. The scgen utility must resolve
all types before generating the argument structures, and this is the mechanism by
which it is informed of custom definitions.
For examples of how this syntax is used, see System Call Definition Example, p.111.
Also consult the Wind River system call definitions files (syscallNum.def and
syscallApi.def), but do not modify these files.
Assume that we want to add the custom system call myNewSyscall( ) to a new
system call group SCG_USER0 (which is defined in syscallNum.def).
First, create syscallUsrNum.def file by copying syscallUsrNum.def.template.
Then edit the file syscallUsrNum.def, adding a system call group entry for the
appropriate group, and the system call number and name under it. System call
groups 2 through 7 are reserved for customer use; do not use any other group
numbers.
For example:
SYSCALL_GROUP SCG_USER0 2
1 myNewSyscall
The call has three arguments, and a type defined in a custom header file. Assume
that we also want to implement the system call conditionally, depending on
whether or not the component INCLUDE_FOO is configured into the operating
system.
The entry in syscallUsrApi.def would therefore look like this:
INCLUDE <myNewType.h>
myNewSyscall 3 [ MY_NEW_TYPE a; int b; char *c; ] INCLUDE_FOO
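Given this entry, scgen generates an argument structure and the developer writes the matching handler. The sketch below invents a stand-in for MY_NEW_TYPE and a trivial handler body purely to show the naming and validation pattern; the real generated structure lives in the architecture-specific syscallArgs header.

```c
#include <stddef.h>

/* Stand-in for the type that myNewType.h would define. */
typedef struct
    {
    int value;
    } MY_NEW_TYPE;

/* Sketch of the argument structure scgen would generate for
 * "myNewSyscall 3 [ MY_NEW_TYPE a; int b; char *c; ]". */
struct myNewSyscallScArgs
    {
    MY_NEW_TYPE a;
    int         b;
    char *      c;
    };

/* Handler: validates its arguments and returns -1 (ERROR) on failure. */
int myNewSyscallSc (struct myNewSyscallScArgs *pArgs)
    {
    if (pArgs->c == NULL)
        return -1;                      /* ERROR: invalid pointer argument */
    return pArgs->a.value + pArgs->b;   /* trivial stand-in for real work */
    }

int demoMyNewSyscallSc (void)
    {
    char buf[4];
    struct myNewSyscallScArgs args = { { 40 }, 2, buf };

    return myNewSyscallSc (&args);
    }
```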
You can dynamically extend the system call interface on a target by downloading
a kernel object module that includes code for installing system call handlers as well
as the system call handler routines themselves. You do not need to modify the
system call definition files, to run scgen, or to rebuild the kernel.
This approach is useful for rapid prototyping. It would rarely be useful or
advisable with a deployed system.
The code required to install your system call handlers in the kernel consists of:
■ an initialized table for the system call handler routines
■ a call to a system call registration routine
This code should be included in the same module with the system call handlers.
You must identify a system call group for the system calls, and it should be a group
that is otherwise unused in the target system.
Routine Table
The system call handler routine table is used to register the system call handler
routines with the system call infrastructure when the module is downloaded.
For example, if the system call handler routines are testFunc0( ), testFunc1( ),
testFunc2( ), and testFunc3( ), the table should be declared as follows:
_WRS_DATA_ALIGN_BYTES(16) SYSCALL_RTN_TBL_ENTRY testScRtnTbl [] =
{
{(FUNCPTR) testFunc0, 1, "testFunc0", 0}, /* routine 0 */
{(FUNCPTR) testFunc1, 2, "testFunc1", 1}, /* routine 1 */
{(FUNCPTR) testFunc2, 3, "testFunc2", 2}, /* routine 2 */
{(FUNCPTR) testFunc3, 4, "testFunc3", 3} /* routine 3 */
};
Build the object module containing the system call handlers and registration code
as you would any module. See 2.6.6 Building Kernel Application Modules, p.62.
After you have built the module, download it, register it, and check that
registration has been successful:
1. Download it to the target system with the debugger, host shell, or kernel shell.
From the shell (using the C interpreter) the module foo.o could be loaded as
follows:
-> ld < foo.o
2. Register the new handlers with the system call infrastructure before any
system calls are routed to your new handlers. This is done by calling
syscallGroupRegister( ). For example:
-> syscallGroupRegister (2, "testGroup", 4, &testScRtnTbl, 0)
The first argument is the group number (an integer); the second is the group
name; the third is the number of system call handler routines, as defined in the
table; the fourth is the address of the table; and the last is zero, so that the
registration does not forcibly overwrite an existing entry. (Note that you use the
ampersand address operator with the fourth argument when you execute the call
from the shell, which you would not do when executing it from a program.)
It is important to check the return value from syscallGroupRegister( ) and
print an error message if an error was returned. See the API reference for
syscallGroupRegister( ) for more information.
3. Verify that the group is registered by running syscallShow( ) from the shell
(host or kernel).
The system call infrastructure is now ready to route system calls to the newly
installed handlers.
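Registration from program code follows the same pattern; the sketch below checks the return status as recommended. syscallGroupRegister( ) is stubbed here with invented behavior so the error path can be exercised outside the kernel.

```c
typedef int STATUS;
#define OK      0
#define ERROR (-1)

/* Stub with invented behavior: pretend only groups 2 through 7 (the
 * customer-reserved group numbers) can be registered. */
static STATUS syscallGroupRegisterStub (int groupNum, const char *pName,
                                        int nRtns, void *pRtnTbl, int force)
    {
    (void)pName; (void)nRtns; (void)pRtnTbl; (void)force;
    return (groupNum >= 2 && groupNum <= 7) ? OK : ERROR;
    }

/* Module initialization sketch: register the group and check the result. */
STATUS usrSyscallGroupInit (int groupNum, void *pRtnTbl, int nRtns)
    {
    if (syscallGroupRegisterStub (groupNum, "testGroup", nRtns,
                                  pRtnTbl, 0) == ERROR)
        {
        /* A real module would report the failure, for example with
         * printErr(), and propagate the error to its caller. */
        return ERROR;
        }
    return OK;
    }
```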
The quickest method of testing a new system call is to create and run a simple RTP
application.
First, calculate the system call numbers for your new system calls. In order to do
so, use the SYSCALL_NUMBER( ) utility macro (defined in syscall.h). For example,
if you used group number 2 for your test group and the routine number for
testFunc0( ) is 0 (as described above), then the system call number for this routine
is the value returned by the following call:
SYSCALL_NUMBER (2, 0)
The system call number for testFunc1( ) is the value returned by this call:
SYSCALL_NUMBER (2, 1)
And so on.
To make the actual system calls, the application calls the syscall( ) routine. The first
eight arguments (all integers) are the arguments passed to your system call, and
the ninth argument is the system call number.
For example, to have your user-mode applications call testFunc0( ) from a
process, you could implement testFunc0( ) like this:
int testFunc0
(
int arg1,
int arg2,
int arg3,
int arg4,
int arg5
)
{
return syscall (arg1, arg2, arg3, arg4, arg5, 0, 0, 0,
SYSCALL_NUMBER(2,0));
}
Note that you must use nine arguments with syscall( ). The last argument is the
system call number, and the preceding eight are for the system call arguments. If
your routine takes fewer than eight arguments, you must use zeros as placeholders
for the remainder.
This section discusses using show routines, syscallMonitor( ), and hooks for
obtaining information about, and debugging, system calls.
If show routines are included in your VxWorks configuration (with the
INCLUDE_SHOW_ROUTINES component), the set of system calls currently
available can be displayed with the syscallShow( ) command, using the shell's
C interpreter:
-> syscallShow
Group Name GroupNo NumRtns Rtn Tbl Addr
-------------------- ------- ------- ------------
TEMPGroup 7 6 0x001dea50
STANDARDGroup 8 48 0x001deab0
VXWORKSGroup 9 31 0x001dedb0
value = 55 = 0x37 = '7'
Routines provided :
Rtn# Name Address # Arguments
---- ---------------------- ---------- -----------
0 (null) 0x00000000 0
1 (null) 0x00000000 0
2 (null) 0x00000000 0
3 msgQSend 0x001d9464 5
4 msgQReceive 0x001d94ec 4
5 _msgQOpen 0x001d9540 5
6 objDelete 0x001d95b8 2
7 objInfoGet 0x001d9bf8 4
8 _semTake 0x001d9684 2
9 _semGive 0x001d96d0 1
10 _semOpen 0x001d970c 5
11 semCtl 0x001d9768 4
12 _taskOpen 0x001d98b8 1
13 taskCtl 0x001d99dc 4
14 taskDelay 0x001d99d4 1
15 rtpSpawn 0x001a2e14 6
16 rtpInfoGet 0x001a2e60 2
17 taskKill 0x001a2ec8 2
18 taskSigqueue 0x001a2f00 3
19 _timer_open 0x0018a860 4
20 timerCtl 0x0018a8c0 4
21 pxOpen 0x0018a960 4
22 pxClose 0x0018acf4 1
23 pxUnlink 0x0018ae44 2
24 pxCtl 0x0018b334 4
25 pxMqReceive 0x0018aea0 6
26 pxMqSend 0x0018afcc 6
27 pxSemWait 0x0018b1fc 3
28 pxSemPost 0x0018b0f8 1
29 pipeDevCreate 0x001971a8 3
30 pipeDevDelete 0x001971c4 2
value = 50 = 0x32 = '2'
->
The syscallMonitor( ) routine allows truss-style monitoring of system calls from
kernel mode, on a global or per-process basis. It lists (on the console) every system
call made, and their arguments. The routine synopsis is:
syscallMonitor (level, rtpId)
If the level argument is set to 1, the system call monitor is turned on; if it is set to 0,
it is turned off. If rtpId is set to the ID of a process (RTP), only the system calls
made from that process are monitored; if rtpId is set to 0, all system calls are
monitored.
The sysCallHookLib library provides routines for adding extensions to the
VxWorks system call library with hook routines. Hook routines can be added
without modifying kernel code. The kernel provides call-outs whenever system
call groups are registered, and on entry and exit from system calls. Each hook type
is represented as an array of function pointers. For each hook type, hook functions
are called in the order they were added. For more information, see the
syscallHookLib API reference.
Since system calls are not functions written in C, the apigen documentation
generation utility cannot be used to generate API references from source code
comments. You can, however, create a function header in a C file that can be read
by apigen. The function header for system calls is no different from that for other
C functions.
NOTE: The scheduler framework is not supported for the symmetric
multiprocessing (SMP) configuration of VxWorks. For general information about
VxWorks SMP and about migration, see 15. VxWorks SMP and 15.15 Migrating
Code to VxWorks SMP, p.704.
Code Requirements
A custom scheduler must manage the set of tasks that are in the READY state; that
is, the tasks that are eligible for execution. At a minimum, a custom scheduler must
define a ready queue structure for all ready tasks, or a hook routine that is
executed at every clock tick.
A custom scheduler may also specify other class-specific structures, manage data
in the task control block, and so on.
Users must define a ready-queue class for all READY tasks. A set of routines
required by the Q_CLASS structure must be implemented. For more information
about the Q_CLASS structure, see Multi-way Queue Structure, p.124.
For a custom scheduler that must store user specific information in tasks, the
pSchedInfo member of the task control block (TCB) may be used. The pSchedInfo
member of the TCB is a void pointer.
There are two ways to access pSchedInfo, as follows:
■ If the qNode is given, the TASK_QNODE_TO_PSCHEDINFO( ) macro may be
used to get the address of pSchedInfo. The file
installDir/vxworks-6.x/target/h/taskLib.h provides the definition of this
macro. The macro is typically used in the user-defined queue management
functions. For example:
void customQPut
(
CUSTOM_Q_HEAD *pQHead, /* head of readyQ */
CUSTOM_NODE *pQNode, /* node to insert */
ULONG key /* key for insert */
)
{
void **ppSchedInfo;

ppSchedInfo = TASK_QNODE_TO_PSCHEDINFO (pQNode);

/* ... use *ppSchedInfo and insert pQNode into the ready queue ... */
}
can be used for getting the value of pSchedInfo. Both macros are defined in
installDir/vxworks-6.x/target/h/taskUtilLib.h.
The custom scheduler may use pSchedInfo as a pointer to a user-specific data
structure for each task. If so, it must allocate memory for the data structure using
a task hook routine that calls malloc( ) or memalign( ). This approach, however,
makes the task creation process less deterministic.
The memory can also be statically allocated (using global variables) for
user-specified storage, and then used during task initialization.
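The static-allocation approach can be sketched as follows. All names and the pool size are invented; the point is that the task-creation hook draws per-task storage for pSchedInfo from a fixed pool instead of calling malloc( ).

```c
#include <stddef.h>

#define MAX_SCHED_TASKS 32              /* invented pool size */

typedef struct
    {
    unsigned long vruntime;             /* example per-task scheduler field */
    int           inUse;
    } SCHED_INFO;

static SCHED_INFO schedInfoPool[MAX_SCHED_TASKS];

/* Called from a task-creation hook: returns storage for pSchedInfo,
 * or NULL when the pool is exhausted.  No dynamic allocation occurs. */
SCHED_INFO * schedInfoAlloc (void)
    {
    int i;

    for (i = 0; i < MAX_SCHED_TASKS; i++)
        if (!schedInfoPool[i].inUse)
            {
            schedInfoPool[i].inUse = 1;
            schedInfoPool[i].vruntime = 0;
            return &schedInfoPool[i];
            }
    return NULL;
    }

/* Called from a task-deletion hook: returns the slot to the pool. */
void schedInfoFree (SCHED_INFO *pInfo)
    {
    pInfo->inUse = 0;
    }

int demoSchedInfo (void)
    {
    SCHED_INFO *pA = schedInfoAlloc ();
    SCHED_INFO *pB = schedInfoAlloc ();
    int ok = (pA != NULL && pB != NULL && pA != pB);

    schedInfoFree (pA);
    schedInfoFree (pB);
    return ok;
    }
```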
2 Kernel
2.10 Custom Scheduler
/* call kernelRoundRobinHook() */
if (_func_kernelRoundRobinHook != NULL)
_func_kernelRoundRobinHook (tid);
/* other work */
...
}
If the custom scheduler does not take advantage of the VxWorks round-robin
scheduling policy by way of kernelRoundRobinHook( ), the kernelTimeSlice( )
routine must not be used to adjust the system time slice, nor to enable or
disable round-robin scheduling. (The kernelTimeSlice( ) routine dynamically
enables round-robin scheduling and sets the system time slice, or disables
round-robin scheduling.)
For more information about VxWorks round-robin scheduling, see Round-Robin
Scheduling, p.169.
Configuration Requirements
To enable the custom scheduler framework, VxWorks must be configured with the
INCLUDE_CUSTOM_SCHEDULER component.
tickAnnounceHookAdd ((FUNCPTR)usrTickHook);
kernelRoundRobinInstall();
}
The usrTickHook argument is the hook to be called at each tick interrupt. The
kernelRoundRobinInstall( ) call enables use of the VxWorks round-robin
scheduling scheme.
The usrCustomSchedulerInit( ) routine is in the
installDir/vxworks-6.x/target/config/comps/src/usrCustomerScheduler.c file.
For information about the vxKernelSchedDesc variable, see Scheduler
Initialization, p.123. This variable must be initialized for a custom scheduler.
There are several ways for users to link the definitions and implementations of the
Q_NODE, Q_HEAD, and Q_CLASS structures into VxWorks. For example, the custom
scheduler configuration file
scheduler configuration file
installDir/vxworks-6.x/target/config/comps/src/usrCustomerScheduler.c can be
the placeholder for the Q_NODE and Q_HEAD type definitions and user specified
Q_CLASS implementation.
Another way is to create a new header file for Q_NODE and Q_HEAD definitions
and a new source file for Q_CLASS implementation, and then link the new object
file to VxWorks using the EXTRA_MODULES macro. For example:
EXTRA_MODULES = qUserPriLib.o
For information about using the EXTRA_MODULES macro, see 2.6.8 Linking Kernel
Application Object Modules with VxWorks, p.64.
The traditional VxWorks scheduler is the default, and is included in the system
with the INCLUDE_VX_TRADITIONAL_SCHEDULER component. The traditional
scheduler has a priority-based preemptive scheduling policy. A round-robin
scheduling extension can be enabled with the kernelTimeSlice( ) routine. For
more information about these options, see 4.3.3 VxWorks Traditional Scheduler,
p.168.
This section provides information about key features of the traditional VxWorks
scheduler that can be useful in designing a custom scheduler.
Scheduler Initialization
#endif /* INCLUDE_VX_TRADITIONAL_SCHEDULER */
The readyQClassId is a pointer to a ready queue class. The ready queue class is a
structure with a set of pointers to routines that manage tasks that are in the READY
state. The readyQInitArg1 and readyQInitArg2 are the input arguments for the
initRtn( ) routine of the ready queue class.
The first field in the Q_HEAD contains the highest priority node.
NOTE: Both the qFirst( ) routine and Q_FIRST( ) macro simply read the first four
bytes of the Q_HEAD structure (the pFirstNode field) to determine the head of the
queue. There is, therefore, no need for a queue-class specific routine to determine
which node is the head of the queue.
Each task control block contains a Q_NODE structure for use by a multi-way queue
class to manage the set of ready tasks. This same Q_NODE is used to manage a task
when it is in a pend queue.
Note that a custom implementation of a multi-way queue class may define
class-specific Q_HEAD and Q_NODE structures. The size of the class-specific
structures must not exceed 16 bytes, which is the current size of both the Q_HEAD
and Q_NODE structures.
Q_CLASS Structure
The kernel interacts with a multi-way queue class through a Q_CLASS structure. A
Q_CLASS structure contains function pointers to the class-specific operators. For
example, the address of the class specific put routine is stored in the putRtn field.
As described in Scheduler Initialization, p.123, the qInit( ) routine is used to
initialize a multi-way queue head to a specified queue type. The second parameter
specifies the queue class (that is, the type of queue), and is merely a pointer to a
Q_CLASS structure. All kernel invocations of the queue class operators are
performed indirectly through the Q_CLASS structure.
The Q_CLASS structure is defined in qClass.h.
The restoreRtn operator is used in VxWorks SMP, which does not support custom
schedulers. The operator must therefore be set to NULL.
The following operators are not applicable to a queue class that is used to manage
the set of ready tasks: advanceRtn, getExpiredRtn, and calibrateRtn.
The signatures of the expected Q_CLASS operators are as follows:
Q_HEAD * createRtn (... /* optional arguments */);
STATUS initRtn (Q_HEAD *pQHead, ... /* optional arguments */);
STATUS deleteRtn (Q_HEAD *pQHead);
STATUS terminateRtn (Q_HEAD *pQHead);
void putRtn (Q_HEAD *pQHead, Q_NODE *pQNode, ULONG key);
Q_NODE * getRtn (Q_HEAD *pQHead);
STATUS removeRtn (Q_HEAD *pQHead, Q_NODE *pQNode);
void resortRtn (Q_HEAD *pQHead, Q_NODE *pQNode, ULONG newKey);
ULONG keyRtn (Q_HEAD *pQHead, Q_NODE *pQNode, int keyType);
int infoRtn (Q_HEAD *pQHead, Q_NODE *nodeArray [ ], int maxNodes);
Q_NODE * eachRtn (Q_HEAD *pQHead, FUNCPTR routine, int routineArg);
Q_CLASS Operators
This section provides descriptions of each Q_CLASS operator that pertains to the
management of ready tasks. Each description provides information about when
the kernel invokes the operator for managing ready tasks.
Descriptions of the advanceRtn, getExpiredRtn, and calibrateRtn operators are
not provided as they are not applicable to managing the set of ready tasks.
Some Q_CLASS operators are invoked within the kernel context. Each operator description indicates whether or not the operator is invoked within kernel context. The operators that are invoked within kernel context do not have access to all VxWorks facilities. Table 2-8 lists the routines that are available from within kernel context.
vxLib: vxTas( )
! WARNING: The use of any VxWorks APIs that are not listed in Table 2-8 from an
operator that is invoked from kernel context results in unpredictable behavior.
Typically the target will hang or reboot.
createRtn
Allocates a multi-way queue head structure from the system memory pool.
The dynamically-allocated head structure is subsequently initialized.
Currently the kernel does not utilize this operator. Instead, the ready task
queue is initialized by statically allocating the head structure, and using the
initRtn operator.
initRtn
Initializes a multi-way queue head. Up to ten optional arguments can be
passed to the initRtn. The kernel initializes the ready task queue from the
usrKernelInit( ) routine as described in Scheduler Initialization, p.123. This
operator is not called from within kernel context.
deleteRtn
Deallocates (frees) the multi-way queue head. All queued nodes are lost.
Currently the kernel does not utilize this operator.
terminateRtn
Terminates a multi-way queue head. All queued nodes will be lost. Currently
the kernel does not utilize this operator.
putRtn
Inserts a node into a multi-way queue. The insertion is based on the key and
the underlying queue class. The second parameter is the Q_NODE structure
pointer of the task to be inserted into the queue. Recall that each task control
block contains a Q_NODE structure for use by a multi-way queue class to
manage the set of ready tasks.
The third parameter (the key) is the task’s current priority. Note that a task’s current priority may differ from its normal priority due to the mutex semaphore priority inheritance protocol.
The pFirstNode field of the Q_HEAD structure must be updated to contain the
first node in the queue (if any change has occurred).
The putRtn operator is called whenever a task becomes ready; that is, a task is
no longer suspended, pended, delayed, or stopped (or a combination thereof).
The VxWorks round-robin policy performs a removeRtn operation followed
by a putRtn when a task has exceeded its time slice. In this case, the task does
not change state. However, the expectation after performing a removeRtn
operation followed by a putRtn operation is that the task appears as the last
task in the list of tasks with the same priority, if there are any.
Performing a taskDelay(0) operation also results in a removeRtn operation
followed by a putRtn. Again, in this case the task does not change state, and
the expectation after performing a removeRtn operation followed by a putRtn
operation is that the task appears as the last task in the list of tasks with the
same priority, if there are any.
This operator is called from within kernel context.
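To make the required behavior concrete, here is a minimal sketch of a putRtn for a hypothetical key-sorted, singly linked queue class (the structures and names are invented for this example and are not the kernel's). Inserting after every node whose key is less than or equal to the new key places a re-inserted task at the tail of its priority group, which is what the round-robin and taskDelay(0) cases expect:

```c
#include <assert.h>
#include <stddef.h>

typedef unsigned long ULONG;

/* Hypothetical class-specific layouts (invented for this sketch). */
typedef struct my_q_node
    {
    struct my_q_node *next;
    ULONG             key;      /* the task's current priority */
    } MY_Q_NODE;

typedef struct my_q_head
    {
    MY_Q_NODE *pFirstNode;      /* must always point at the first node */
    } MY_Q_HEAD;

/* Sketch of a putRtn: walk past all nodes with a key less than or
 * equal to the new key, then link the node in. Inserting through a
 * pointer-to-pointer also updates pFirstNode whenever the new node
 * becomes the first in the queue. */
void myPutRtn (MY_Q_HEAD *pQHead, MY_Q_NODE *pQNode, ULONG key)
    {
    MY_Q_NODE **pp = &pQHead->pFirstNode;

    pQNode->key = key;
    while (*pp != NULL && (*pp)->key <= key)
        pp = &(*pp)->next;

    pQNode->next = *pp;
    *pp = pQNode;
    }
```

Inserting three nodes with keys 100, 50, and 100 (in that order) yields the queue 50, 100, 100, with the second key-100 node behind the first, as the round-robin semantics require.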
getRtn
Removes and returns the first node in a multi-way queue. Currently the kernel
does not utilize this operator.
removeRtn
Removes the specified node from the specified multi-way queue.
The removeRtn operator is called whenever a task is no longer ready; that is,
it is no longer eligible for execution, since it has become suspended, pended,
delayed, or stopped (or a combination thereof).
See the discussion of the putRtn operator above for more information about
situations in which the kernel performs a removeRtn operation followed by a
putRtn without the task’s state actually changing.
This operator is called from within kernel context.
resortRtn
Resorts a node to a new position based on a new key.
The resortRtn operator is called whenever a task’s priority changes, either due
to an explicit priority change with the taskPrioritySet( ) API, or an implicit
priority change due to the mutex semaphore priority inheritance protocol.
The difference between invoking the resortRtn operator and a removeRtn/putRtn combination is that the former does not change the position of the task in the list of tasks with the same priority (if any) when the new priority is the same as the old priority.
This operator is called from within kernel context.
keyRtn
Returns the key of a node currently in a multi-way queue. The keyType
parameter determines key style on certain queue classes. Currently the kernel
does not utilize this operator.
infoRtn
Gathers information about a multi-way queue. The information consists of an
array, supplied by the caller, filled with all the node pointers currently in the
queue. Currently the kernel does not utilize this operator.
eachRtn
Calls a user-supplied routine once for each node in the multi-way queue. The
routine should be declared as follows:
BOOL routine
(
Q_NODE *pQNode, /* pointer to a queue node */
int arg /* arbitrary user-supplied argument */
);
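For illustration, a hypothetical each-routine that simply counts the nodes it visits might look like this (BOOL, TRUE, and the minimal Q_NODE here are stand-ins for the VxWorks definitions):

```c
#include <assert.h>

typedef int BOOL;
#define TRUE  1
#define FALSE 0

typedef struct q_node { int placeholder; } Q_NODE;  /* stand-in only */

static int visitCount;      /* bumped once per visited node */

/* Hypothetical each-routine: eachRtn would call this once per queued
 * node, passing along the arbitrary user-supplied argument. Returning
 * TRUE continues the traversal; FALSE would stop it early. */
BOOL countNode (Q_NODE *pQNode, int arg)
    {
    (void) pQNode;          /* this example only counts the visits */
    visitCount += arg;
    return TRUE;
    }
```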
3
Boot Loader
3.1 Introduction
A VxWorks boot loader is an application whose purpose is to load a VxWorks
image onto a target. It is sometimes called the VxWorks bootrom, but use of this
term is not encouraged (it conflates application and media). Like VxWorks, the boot loader can be configured with various facilities, such as a command shell for dynamically setting boot parameters, a network loader, and a file system loader.
The same boot loaders are used for uniprocessor (UP), symmetric multiprocessor (SMP), and asymmetric multiprocessor (AMP) configurations of VxWorks.
In a development environment, boot loaders are useful for loading a VxWorks
image from a host system, where VxWorks can be quickly modified and rebuilt.
They can also be used in production systems when the boot loader and operating
system are stored on a disk or other media.
Self-booting (standalone) VxWorks images do not require a boot loader. These
images are commonly used in production systems (stored in non-volatile devices).
For more information, see 2.4.1 VxWorks Image Types, p.15.
Typically, the boot loader is programmed into a non-volatile device (usually flash
memory or EEPROM) at an address such that it is the first code that is run by the
processor when the target is powered on or rebooted. The procedure to get the
boot loader programmed in a non-volatile device or written to a disk is dependent
on the target, and is described in the BSP reference documentation.
The VxWorks product installation includes default boot loader images for each
installed BSP. If they do not meet your needs, you can create custom boot loaders.
For example, you may need to use a different network driver to load the VxWorks
image over your development network, or you may want to remove the boot
loader shell for deployed systems.
For information beyond what is in this chapter, particularly information about
setting up a cross-development environment, see the Wind River Workbench User’s
Guide: Setting up Your Hardware.
name of the file to be loaded, the user name, and more. To use the default boot
loader, you must interactively change the default parameters using the boot loader
shell so that the loader can find the VxWorks image on the host and load it onto
the target.
After you have entered boot loader parameters, the target can be booted with the
VxWorks image. For most targets, the new settings are saved (in a non-volatile
device or to disk) so you can reboot the target without resetting them.
You interact with the boot loader shell at a terminal console that is usually
established by connecting a serial port of the target to a serial port on the host and
starting a terminal application on the host. For information about the setup
required to establish communication over the serial port of a particular target, see
the reference documentation for the BSP in question.
When you apply power to the target (or each time it is reset), the target runs the
boot loader (from ROM, flash, disk, or other media). During the boot process, the
target uses its serial port to communicate with the host system. The boot loader
displays a banner page and then starts a seven-second countdown before booting
VxWorks. You can interrupt the boot process by pressing any key in order to set
the appropriate boot parameters.
Default boot loader images are in installDir/vxworks-6.x/target/config/bspName.
The boot loader commands and parameters are described in 3.4.1 Boot Loader Shell
Commands, p.136 and 3.5 Boot Loader Parameters, p.140. The different types of boot
loader images provided as defaults are described in 3.3 Boot Loader Image Types,
p.133.
information about build methods, see 3.7 Customizing and Building Boot Loaders,
p.146.
Compressed Image
Uncompressed Image
! CAUTION: Do not add any of the following components to the boot loader:
■ INCLUDE_WDB_BANNER
■ INCLUDE_SIMPLE_BANNER
■ INCLUDE_SHELL
They conflict with the boot loader shell. If you include any of them, you will
encounter configuration errors.
When the boot loader has initiated booting the system, it prints a banner. After a
countdown elapses, the boot loader loads and runs the specified image. (If the boot
loader shell is not included, the loader executes with its current parameters and
without a countdown.) To reset the boot parameters, interrupt the boot loader
during the countdown by pressing any key before the countdown period elapses.
To access the boot-loader shell prompt, power on (or reboot) the target; then stop
the boot sequence by pressing any key during the seven-second countdown. The
appearance of the boot-loader banner followed by keystroke interruption of the
boot process looks like the following:
VxWorks System Boot
CPU: PC PENTIUM3
Version: VxWorks 6.3
BSP version: 2.0/6
Creation date: Jun 08 2006, 12:08:39
The VxWorks boot loader provides a set of commands that can be executed from
the boot loader shell, which are described in the following tables. For information
about the boot loader shell, see 3.4 Boot Loader Shell, p.135.
? Same as h.
@ Boot (that is, load and execute the VxWorks image file) using
the current boot parameters. See 3.5 Boot Loader Parameters,
p.140.
$ [ paramString ] Boot (that is, load and execute the VxWorks image file). If
used without a parameter string, the command is the same as
@. The parameter string can be used to set boot loader
parameter values all at once, instead of interactively. See
3.5 Boot Loader Parameters, p.140 and 3.5.3 Changing Boot
Loader Parameters Interactively, p.144.
p Print the current boot parameter values. See 3.5 Boot Loader
Parameters, p.140.
m adrs Modify the contents of memory interactively, one 16-bit word
at a time, starting at adrs. The responses accepted at each
prompt are:
ENTER
Pressing the ENTER key alone does not change the
address specified with adrs, but continues prompting at
the next address.
number
Entering a number sets the 16-bit contents at the
memory address to that number.
.
Entering a . (period) leaves the address unchanged, and
quits.
f adrs, nbytes, value Fill nbytes of memory, starting at adrs with value.
M [dev] [unitNo] [MAC] Set and display Ethernet MAC address. For example:
M motfcc0 00:A0:1E:00:10:0A
N [last3ByteValuesMAC] Set (last three bytes) and display Ethernet MAC address.
NOTE: The M command (see Table 3-3) is a replacement for the N command
(Table 3-4), which is maintained for backwards compatibility purposes. For
information about which of the two is supported for a given BSP, consult the BSP
reference.
The M and N commands are provided by the INCLUDE_BOOT_ETH_MAC_HANDLER and INCLUDE_BOOT_ETH_ADR_SET components, respectively. Do not use both components in the same configuration of VxWorks.
P Print the error log for the error detection and reporting
facility.
C Clear the error log for the error detection and reporting
facility.
For information about the error detection and reporting facility, see 11. Error
Detection and Reporting.
usbBulkShow nodeId Displays all the logical unit numbers of the USB device
specified by nodeId (for example, a USB memory stick). It
performs the same function as the kernel shell C interpreter
command of the same name.
If the boot loader is configured with its command shell, the current set of boot
loader parameters can be displayed interactively with the p command, as follows:
[VxWorks Boot]: p
A display similar to the following appears. Note that the p command does not
actually display unassigned parameters, although this example shows them for
completeness.
boot device : ln
unit number : 0
processor number : 0
host name : mars
file name : c:\tmp\vxWorks
[Figure: host mars (90.0.0.1, user fred, image file c:\temp\vxWorks) connected to target phobos (90.0.0.50:ffffff00) over the 90.0.0.x Ethernet subnet.]
Each of the boot loader parameters is described below, with reference to the
example shown above.
The letters in parentheses after some of the parameters are alternative names used
with the single-string interactive configuration method described in 3.5.3 Changing
Boot Loader Parameters Interactively, p.144, and the static configuration method
described in 3.7.3 Configuring Boot Loader Parameters Statically, p.147.
boot device
The type of device from which to boot. This must be one of the drivers
included in the boot loader (for example, enp for a CMC controller). Due to
limited space in boot media, only a few drivers can be included. A list of the
drivers included in the boot loader image can be displayed in the boot loader
shell with the devs or h command. For more information about boot devices,
see 3.7.5 Selecting a Boot Device, p.148.
unit number
The unit number of the boot device, starting at zero.
processor number
A unique numerical target identifier for systems with multiple targets on a
backplane. The backplane master must have its processor number set to zero.
For boards not connected to a backplane, a value of zero is typically used but
is not required.
host name
The name of the host machine to boot from. This is the name by which the host
is known to VxWorks; it need not be the name used by the host itself. (The host
name is mars in the example above.)
file name
The full path name of the VxWorks image to be booted (c:\myProj\vxWorks
in the example). This path name is also reported to the host when you start a
target server, so that it can locate the host-resident image of VxWorks. The
path name is limited to a 160-byte string, including the null terminator.
inet on ethernet (e)
The Internet Protocol (IP) address of a target system Ethernet interface, as well
as the subnet mask used for that interface. The address consists of the IP
address, in dot decimal format, followed by a colon, followed by the mask in
hex format (here, 90.0.0.50:ffffff00).
inet on backplane (b)
The Internet address of a target system with a backplane interface (blank in the
example).
host inet (h)
The Internet address of the host to boot from (90.0.0.1 in the example).
gateway inet (g)
The Internet address of a gateway node for the target if the host is not on the
same network as the target (blank in the example).
user (u)
The user ID used to access the host for the purpose of loading the VxWorks
image file (which is fred in the example). The user must have host permission
to read the VxWorks image file.
On a Windows host, the user specified with this parameter must have FTP
access to the host, and the ftp password parameter (below) must be used to
provide the associated password.
On a UNIX (Linux or Solaris) host, the user must have FTP, TFTP, or rsh
access. For rsh, the user must be granted access by adding the user ID to the
host's /etc/hosts.equiv file, or more typically to the user's .rhosts file
(~userName/.rhosts).
ftp password (pw)
For FTP or TFTP access, this field is used for the password for the user
identified with the user parameter (above). For rsh access it should be left
blank.
NOTE: If this parameter is not used, the boot loader attempts to load the
run-time system image using a protocol based on the UNIX rsh utility, which
is not available for Windows hosts.
flags (f)
Configuration options specified as a numeric value that is the sum of the
values of selected option bits defined below. (This field is zero in the example
because no special boot options were selected.)
0x01 = Do not enable the system controller, even if the processor number is 0.
(This option is board specific; refer to your target documentation.)
0x02 = Load all VxWorks symbols (see footnote a), instead of just globals.
0x04 = Do not auto-boot.
0x08 = Auto-boot fast (short countdown).
0x20 = Disable login security.
0x80 = Use TFTP to get boot image.
0x400 = Set system to debug mode for the error detection and reporting facility
(whether you are working on kernel modules or user applications; for
more information, see 11. Error Detection and Reporting).
a. Loading a very large group of symbols can cause delays of up to several minutes while
Workbench loads the symbols. For information about how to specify the size of the
symbol batch to load, see the Wind River Workbench User’s Guide.
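The option bits are summed to form the flags value. For example, auto-boot fast (0x08) plus disabled login security (0x20) plus TFTP loading (0x80) gives f=0xa8; with the $ command, using placeholder device, host, file, and address values, this might be entered as:

```
[VxWorks Boot]: $ln(0,0)mars:c:\myProj\vxWorks e=90.0.0.50 h=90.0.0.1 u=fred f=0xa8
```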
startup script (s)
The path and name of a startup script file to be executed after the
system boots. A startup script file can contain only the shell’s C interpreter
commands. (Note that you must not add the INCLUDE_SHELL,
INCLUDE_WDB_BANNER, or INCLUDE_SIMPLE_BANNER components to a
boot loader. These components conflict with the boot loader shell, and doing
so causes project configuration errors.)
This parameter can also be used to specify process-based (RTP) applications to
run automatically at boot time, if VxWorks has been configured with the
appropriate components. See VxWorks Application Programmer’s Guide:
Applications and Processes.
other (o)
This parameter is generally unused and available for applications (blank in the
example). It can be used, for example, for specifying the default network
interface when booting from a file system device. For more information, see
3.7.4 Enabling Networking for Non-Boot Interfaces, p.148.
Boot parameters can be entered interactively from the boot loader prompt, either
individually or as a string.
For information about changing boot parameters statically, see 3.7.3 Configuring
Boot Loader Parameters Statically, p.147.
To change more than one parameter at a time, use the $ boot command at the boot prompt with a parameter string. The syntax is as follows:
$dev(0,procnum)host:/file h=# e=# b=# g=# u=usr [pw=passwd] f=# tn=targetname s=script o=other
For example:
[VxWorks Boot]:$ln(0,0)mars:c:\myProj\vxWorks e=90.0.0.50 h=90.0.0.1 u=fred pw=…
The order of the parameters with assignments (those with equal signs) is not
important. Omit any assigned fields that are irrelevant. The codes for the assigned
fields correspond to the letter codes shown in parentheses by the p command and
in 3.5.2 Description of Boot Loader Parameters, p.141.
This method can be particularly useful when booting a target from a host script.
The changes made to boot parameters are retained between reboots for most types
of targets (in a non-volatile device or on disk).
For most BSPs, boot loaders can be configured and built with Wind River
Workbench or the command-line project tool vxprj, using the PROFILE_BOOTAPP
configuration profile.
For some BSPs, the legacy method using bspDir/config.h and bspDir/make must be
used. Note that the legacy method has been deprecated for most purposes, and
cannot be used for multiprocessor development. For information about this
method, see the VxWorks Command-Line Tools User’s Guide.
The INCLUDE_BOOT_APP component provides the basic facility for loading and
executing a VxWorks image.
The PROFILE_BOOTAPP configuration profile can be used with Workbench or
vxprj to create a boot loader (including INCLUDE_BOOT_APP). This profile
includes a basic set of boot loader components, such as those for the boot loader
shell, drivers, file systems, and so on. Among the components that
PROFILE_BOOTAPP provides are those for loading and executing a VxWorks
image, for booting from a network with various protocols, and for booting from
various file systems.
A boot loader needs to be configured appropriately for any device or file system
from which you want to boot. Other components, which can be used to create a
boot loader for specific boot environments and with various boot management
facilities, are described throughout this chapter.
Boot loader parameters include the boot device, IP addresses of the host and target
systems, the location of the VxWorks image file, and so on. For detailed information
about the parameters, see 3.5 Boot Loader Parameters, p.140. (For information about
configuring boot loader parameters dynamically, see 3.5.3 Changing Boot Loader
Parameters Interactively, p.144.)
Using Workbench, the DEFAULT_BOOT_LINE configuration parameter of the
INCLUDE_BSP_MACROS component can be used to change the default boot loader
parameters.
Using the legacy config.h method (which should only be used for BSPs that do not
support PROFILE_BOOTAPP), edit the DEFAULT_BOOT_LINE macro in
installDir/vxworks-6.x/target/config/bspName/config.h file to change the default
boot loader parameters. The DEFAULT_BOOT_LINE macro for a Pentium BSP
looks like the following:
#define DEFAULT_BOOT_LINE \
"fd=0,0(0,0)host:/fd0/vxWorks.st h=90.0.0.3 e=90.0.0.50 u=target"
For more information about configuration methods, see 3.7.1 Configuring Boot
Loaders, p.146.
The other (o) boot loader parameter can be used to specify a network interface in
addition to whatever device is specified for the boot device. For example, it can be
used when booting from a local SCSI disk to specify a network interface to be
included. The following example illustrates parameter settings for booting from a
SCSI device, and enabling the network with an on-board Ethernet device (here
with ln for LANCE Ethernet device) with the other field.
boot device : scsi=2,0
processor number : 0
host name : host
file name : /sd0/vxWorks
inet on ethernet (e) : 147.11.1.222:ffffff00
host inet (h) : 147.11.1.3
user (u) : jane
flags (f) : 0x0
target name (tn) : t222
other : ln
The boot devices that are included in a boot loader image can be identified at
run-time with the devs or h command from the boot loader shell (see 3.4.1 Boot
Loader Shell Commands, p.136).
In order to boot VxWorks, however, the boot loader must be configured with the
appropriate device or devices for your target hardware and desired boot options—
they may not be provided by the default boot loader. The process of configuring
the boot loader with devices is the same as for VxWorks itself, and the topic of
device configuration is covered in 2.4.3 Device Driver Selection, p.22.
Once a boot loader has been configured with the appropriate boot device (or
devices), it must also be instructed as to which device to use. This can be done
interactively or statically.
For information about interactive specification using the boot device parameter,
see 3.5.2 Description of Boot Loader Parameters, p.141 and 3.5.3 Changing Boot Loader
Parameters Interactively, p.144. For information about static configuration, see
3.7.3 Configuring Boot Loader Parameters Statically, p.147.
The boot devices that are supported by a given BSP are described in the BSP
reference. The syntax used for specifying them with the boot device boot loader
parameter is provided below.
ATA Device
PCMCIA Device
SCSI Device
TSFS Device
The syntax for specifying a Target Server File System device is simply tsfs. No
additional boot device arguments are required. The file path and name must be
relative to the root of the host file system as defined for the target server on the
host. For information about the TSFS, see 8.9 Target Server File System: TSFS, p.520.
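As an illustration, a minimal TSFS parameter set might look like the following (the file name is a placeholder, resolved relative to the target server's root directory):

```
boot device : tsfs
processor number : 0
file name : /vxWorks
```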
The VxWorks boot loader can be customized to meet the size constraints of the
non-volatile device on a particular board, as well as the manner in which it
retrieves the VxWorks image file.
The persistent memory region is an area of RAM at the top of system memory
specifically reserved for error records and core dumps. For more information
about use of persistent memory, see 11.2.2 Configuring the Persistent Memory
Region, p.567.
If you increase the size of the persistent memory region for VxWorks beyond the
default, you must also create and install a new boot loader with the same
PM_RESERVED_MEM value.
If you do not, the boot loader (image plus heap) overlays the area of VxWorks
persistent memory that extends beyond its own when the system reboots, and any
data that may have been stored in the overlapping area will be corrupted. For a
simple illustration of this problem, see Figure 3-2.
Note that when you change the value of PM_RESERVED_MEM for the boot
loader, you need to change the value of RAM_HIGH_ADRS if there is insufficient
room for the boot loader itself between RAM_HIGH_ADRS and sysMemTop( ). If
you do so, also be sure that there is sufficient room for the VxWorks image
between RAM_LOW_ADRS and RAM_HIGH_ADRS.
! WARNING: Not properly configuring the boot loader (as described above) could
corrupt the persistent memory region when the system boots.
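As an illustration, the same value must appear in both the boot loader project and the VxWorks image project; the 4 MB figure below is arbitrary, not a recommendation:

```c
/* Must be set to the same value in the boot loader project and in the
 * VxWorks image project. */
#define PM_RESERVED_MEM (4 * 1024 * 1024)
```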
[Figure 3-2: memory layout. The boot loader resides above RAM_HIGH_ADRS; the VxWorks image (image + sys mem) is loaded at RAM_LOW_ADRS; system memory begins at LOCAL_MEM_LOCAL_ADRS.]
% make bootrom
The different types of boot loader images that you can build are described in
3.3 Boot Loader Image Types, p.133.
For information about configuration methods, see 3.7.1 Configuring Boot Loaders,
p.146.
! CAUTION: Do not build boot loaders for symmetric multiprocessor (SMP) and
asymmetric multiprocessor (AMP) configurations of VxWorks with the SMP or
AMP build option—neither with Workbench nor with vxprj. The same boot
loaders are used for uniprocessor (UP), SMP, and AMP configurations of
VxWorks.
INCLUDE_BOOT_FTP_LOADER
FTP boot loader.
INCLUDE_BOOT_TFTP_LOADER
TFTP boot loader.
INCLUDE_BOOT_RSH_LOADER
RSH boot loader.
The parameters and settings specific to booting from a network with a given protocol are described below.
For general information about boot parameters and how to set them, see 3.5.2 Description of Boot
Loader Parameters, p.141, 3.5.3 Changing Boot Loader Parameters Interactively, p.144,
and 3.7.3 Configuring Boot Loader Parameters Statically, p.147.
FTP
The user and ftp password boot parameters must be set to match account settings
with the FTP server on the host.
TFTP
The flags boot parameter must be set to 0x80, and the user and ftp password
parameters must be set to match account settings with the TFTP server on the host.
RSH
The ftp password parameter must be left empty; clear it by entering . (a period).
The parameters specific to booting from a target file system are described below.
For general information about boot parameters and how to set them, see 3.5.2 Description of Boot
Loader Parameters, p.141, 3.5.3 Changing Boot Loader Parameters Interactively, p.144,
and 3.7.3 Configuring Boot Loader Parameters Statically, p.147.
ATA
FD—Floppy Disk
3.11 Booting From the Host File System Using TSFS
! WARNING: The TSFS boot facility is not compatible with WDB agent network
configurations. For information about WDB, see 12.6 WDB Target Agent, p.628.
To configure a boot loader for TSFS, the boot device parameter must be tsfs, and
the file path and name must be relative to the root of the host file system defined
for the target server.
Regardless of how you specify the boot line parameters, you must reconfigure (as
described below) and rebuild the boot image.
If two serial lines connect the host and target (one for the target console and one
for WDB communications), the following configuration parameters must be set:
■ CONSOLE_TTY = 0
■ WDB_TTY_CHANNEL = 1
■ WDB_COMM_TYPE = WDB_COMM_SERIAL
If one serial line connects the host and target, the following configuration
parameters must be set:
■ CONSOLE_TTY = NONE
■ WDB_TTY_CHANNEL = 0
■ WDB_COMM_TYPE = WDB_COMM_SERIAL
With any of these TSFS configurations, you can also use the target server console
to set the boot loader parameters by including the
INCLUDE_TSFS_BOOT_VIO_CONSOLE component in VxWorks. This disables the
auto-boot mechanism, which might otherwise boot the target before the target
server could start its virtual I/O mechanism. (The auto-boot mechanism is
similarly disabled when CONSOLE_TTY is set to NONE, or when CONSOLE_TTY is
set to WDB_TTY_CHANNEL.) Using the target server console is particularly useful
4
Multitasking
4.1 Introduction
Modern real-time systems are based on the complementary concepts of
multitasking and intertask communications. A multitasking environment allows a
real-time application to be constructed as a set of independent tasks, each with its
own thread of execution and set of system resources.
Tasks are the basic unit of scheduling in VxWorks. All tasks, whether in the kernel
or in processes, are subject to the same scheduler. VxWorks processes are not
themselves scheduled.
Intertask communication facilities allow tasks to synchronize and communicate in
order to coordinate their activity. In VxWorks, the intertask communication
facilities include semaphores, message queues, message channels, pipes,
network-transparent sockets, and signals.
For interprocess communication, VxWorks semaphores and message queues,
pipes, and events (as well as POSIX semaphores and events) can be created as
public objects to provide accessibility across memory boundaries (between the
kernel and processes, and between different processes). In addition, message
channels provide a socket-based inter-processor and inter-process
communications mechanism.
Hardware interrupt handling is a key facility in real-time systems because
interrupts are the usual mechanism to inform a system of external events. To get
the fastest possible response to interrupts, interrupt service routines (ISRs) in
VxWorks run in a special context of their own, outside any task’s context.
VxWorks includes a watchdog-timer mechanism that allows any C function to be
connected to a specified time delay. Watchdog timers are maintained as part of the
system clock ISR. For information about POSIX timers, see 5.6 POSIX Clocks and
Timers, p.261.
This chapter discusses the tasking, intertask communication, and interprocess
communication facilities that are at the heart of the VxWorks run-time
environment.
For information about POSIX support for VxWorks, see 5. POSIX Facilities.
NOTE: This chapter provides information about facilities available in the VxWorks
kernel. For information about facilities available to real-time processes, see the
corresponding chapter in the VxWorks Application Programmer’s Guide.
NOTE: This chapter provides information about multitasking facilities that are
common to both uniprocessor (UP) and symmetric multiprocessor (SMP)
configurations of VxWorks. It also provides information about those facilities that
are specific to the UP configuration. In the latter case, the alternatives available for
SMP systems are noted.
With few exceptions, the symmetric multiprocessor (SMP) and uniprocessor (UP)
configurations of VxWorks share the same API—the difference amounts to only a
few routines. Also note that some programming practices—such as implicit
synchronization techniques relying on task priority instead of explicit locking—
are not appropriate for an SMP system.
For information about SMP programming, see 15. VxWorks SMP. For information
specifically about migration, see 15.15 Migrating Code to VxWorks SMP, p.704.
Each task has its own context, which is the CPU environment and system resources
that the task sees each time it is scheduled to run by the kernel. On a context switch,
a task’s context is saved in the task control block (TCB).
A task’s context includes:
■  a thread of execution; that is, the task's program counter
■  the task's virtual memory context (if process support is included)
■  the CPU registers and (optionally) coprocessor registers
■  stacks for dynamic variables and function calls
■  I/O assignments for standard input, output, and error
■  a delay timer
■  a time-slice timer
■  kernel control structures
■  signal handlers
■  task private environment (for environment variables)
■  error status (errno)
■  debugging and performance monitoring values
If VxWorks is configured without process support (the INCLUDE_RTP
component), the context of a task does not include its virtual memory context;
all tasks run in a single common address space (the kernel).
However, if VxWorks is configured with process support—regardless of whether
or not any processes are active—the context of a kernel task does include its virtual
memory context, because the system has the potential to operate with other virtual
memory contexts besides the kernel. That is, the system could have tasks running
in several different virtual memory contexts (the kernel and one or more
processes).
For information about virtual memory contexts, see 6. Memory Management.
NOTE: The POSIX standard includes the concept of a thread, which is similar to a
task, but with some additional features. For details, see 5.10 POSIX Threads, p.266.
The kernel maintains the current state of each task in the system. A task changes
from one state to another as a result of kernel function calls made by the
application. When created, tasks enter the suspended state. Activation is necessary
for a created task to enter the ready state. The activation phase is extremely fast,
enabling applications to pre-create tasks and activate them in a timely manner. An
alternative is the spawning primitive, which allows a task to be created and
activated with a single function. Tasks can be deleted from any state.
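Task creation might look like the following sketch. The entry routine name, task name, priority, and stack size are illustrative assumptions, not values from this guide; taskSpawn( ) is the single-call create-and-activate primitive described above.

```c
#include <vxWorks.h>
#include <taskLib.h>

/* Hypothetical entry routine -- the name and body are illustrative only. */
void myEntry (void)
    {
    /* task body */
    }

void createAndRun (void)
    {
    /* taskSpawn( ) creates and activates the task in a single call.
     * Arguments: name, priority, options, stack size, entry point,
     * and up to ten integer arguments for the entry routine. */
    int tid = taskSpawn ("tMyTask", 100, 0, 4096, (FUNCPTR) myEntry,
                         0, 0, 0, 0, 0, 0, 0, 0, 0, 0);
    if (tid == ERROR)
        return;             /* creation failed */
    }
```

The priority of 100 follows the application-task range recommended later in this chapter.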
Table 4-1 describes the state symbols that you see when working with development
tools. Example 4-1 shows output from the i( ) command containing task state
information.
Table 4-1    Task State Symbols

State Symbol    Description

READY           The task is not waiting for any resource other than the CPU.
PEND            The task is blocked due to the unavailability of some resource.
DELAY           The task is asleep for some duration.
SUSPEND         The task is unavailable for execution (but not pended or
                delayed). This state is used primarily for debugging.
                Suspension does not inhibit state transition, only execution.
                Thus, pended-suspended tasks can still unblock and
                delayed-suspended tasks can still awaken.
STOP            The task is stopped by the debugger.
DELAY + S       The task is both delayed and suspended.
PEND + S        The task is both pended and suspended.
PEND + T        The task is pended with a timeout value.
STOP + P        The task is pended and stopped (by the debugger, error
                detection and reporting facilities, or SIGSTOP signal).
STOP + S        The task is stopped (by the debugger, error detection and
                reporting facilities, or SIGSTOP signal) and suspended.
STOP + T        The task is delayed and stopped (by the debugger, error
                detection and reporting facilities, or SIGSTOP signal).
PEND + S + T    The task is pended with a timeout value and suspended.
STOP + P + S    The task is pended, suspended, and stopped by the debugger.
STOP + P + T    The task is pended with a timeout and stopped by the debugger.
STOP + T + S    The task is suspended, delayed, and stopped by the debugger.
ST + P + S + T  The task is pended with a timeout, suspended, and stopped by
                the debugger.
The STOP state is used by the debugger facilities when a breakpoint is hit. It is also
used by the error detection and reporting facilities when an error condition occurs
(see 11. Error Detection and Reporting).
-> i
Figure 4-1 illustrates task state transitions for a deployed system—without the
STOP state associated with development activity. The routines listed are examples
of ones that would cause the associated transition. For example, a task that called
taskDelay( ) would move from the ready state to the delayed state.
[Figure 4-1: Task State Transitions. taskInit( ) leaves a newly created task in
the suspended state; other routines then move tasks among the ready, pended,
delayed, and suspended states.]
Task scheduling relies on a task’s priority. The VxWorks kernel provides 256
priority levels, numbered 0 through 255. Priority 0 is the highest and priority 255
is the lowest.
A task is assigned its priority at creation, but you can also change it
programmatically thereafter. For information about priority assignment, see
4.4.1 Task Creation and Activation, p.172 and 4.3.2 Task Scheduling Control, p.167.
All application tasks should be in the priority range from 100 to 255. In
contrast, driver support tasks (which are associated with an ISR) can be in the
range of 51-99.
These tasks are crucial; for example, if a support task fails while copying data from
a chip, the device loses that data. Examples of driver support tasks include tNet0
(the VxWorks network daemon task), an HDLC task, and so on.
The system task tNet0 has a priority of 50, so user tasks should not be assigned
a numerically lower (that is, higher) priority; if they are, the network
connection could die, preventing debugging with the host tools.
The routines that control task scheduling are listed in Table 4-2.
Task Priority
Tasks are assigned a priority when they are created (see 4.4.1 Task Creation and
Activation, p.172). You can change a task’s priority level while it is executing by
calling taskPrioritySet( ). The ability to change task priorities dynamically allows
applications to track precedence changes in the real world.
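For example, a task might temporarily raise its own priority around a time-critical operation. This is a sketch; the priority value is illustrative, and the task ID of 0 means the calling task, per the convention described in this chapter.

```c
#include <vxWorks.h>
#include <taskLib.h>

void boostSelf (void)
    {
    int oldPrio;

    taskPriorityGet (0, &oldPrio);   /* 0 = the calling task */
    taskPrioritySet (0, 110);        /* illustrative higher priority */

    /* ... time-critical work ... */

    taskPrioritySet (0, oldPrio);    /* restore the original priority */
    }
```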
Preemption Locks
The scheduler can be explicitly disabled and enabled on a per-task basis in the
kernel with the routines taskLock( ) and taskUnlock( ). When a task disables the
scheduler by calling taskLock( ), no priority-based preemption can take place
while that task is running.
If the task that has disabled the scheduler with taskLock( ) explicitly blocks or
suspends, the scheduler selects the next highest-priority eligible task to execute.
When the preemption-locked task unblocks and begins running again, preemption
is again disabled.
NOTE: The taskLock( ) and taskUnlock( ) routines are provided for the UP
configuration of VxWorks, but not the SMP configuration. Several alternatives are
available for SMP systems, including task-only spinlocks, which default to
taskLock( ) and taskUnlock( ) behavior in a UP system. For more information, see
15.6.2 Task-Only Spinlocks, p.684 and 15.15 Migrating Code to VxWorks SMP, p.704.
Note that preemption locks prevent task context switching, but do not lock out
interrupt handling.
Preemption locks can be used to achieve mutual exclusion; however, keep the
duration of preemption locking to a minimum. For more information, see
4.11 Mutual Exclusion, p.197.
When using taskLock( ), bear in mind that it does not achieve mutual exclusion
with respect to ISRs. Generally, if your task is interrupted by hardware, the
system eventually returns to it; however, if your task blocks, it loses the
preemption lock. Thus, call taskUnlock( ) before returning from the routine.
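The preemption-lock pattern can be sketched as follows, assuming the critical section makes no blocking calls:

```c
#include <vxWorks.h>
#include <taskLib.h>

void updateSharedData (void)
    {
    taskLock ();        /* disable preemption of this task */

    /* short critical section: must not block or suspend,
     * or the preemption lock is temporarily lost */

    taskUnlock ();      /* re-enable preemption before returning */
    }
```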
When a task is accessing a variable or data structure that is also accessed by an ISR,
you can use intLock( ) to achieve mutual exclusion. Using intLock( ) makes the
operation atomic in a single processor environment. It is best if the operation is kept
minimal, meaning a few lines of code and no function calls. If the call is too long,
it can directly impact interrupt latency and cause the system to become far less
deterministic.
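The interrupt-lock pattern for data shared with an ISR can be sketched as follows; the shared counter and routine name are hypothetical examples.

```c
#include <vxWorks.h>
#include <intLib.h>

volatile int isrCount;      /* hypothetical data also updated by an ISR */

void readAndClear (int *pResult)
    {
    int key = intLock ();   /* lock out interrupts; keep this section short */

    *pResult = isrCount;    /* atomic with respect to the ISR */
    isrCount = 0;

    intUnlock (key);        /* restore the previous interrupt state */
    }
```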
For information about interrupts, see 4.11.1 Interrupt Locks and Latency, p.198 and
4.20 Interrupt Service Routines, p.242.
For information about the POSIX thread scheduler and custom schedulers, see
5.12 POSIX and VxWorks Scheduling, p.280 and 2.10 Custom Scheduler, p.118,
respectively.
A priority-based preemptive scheduler preempts the CPU when a task has a higher
priority than the currently running task. Thus, the kernel ensures that the CPU is
always allocated to the highest priority task that is ready to run. This means that if
a task—with a higher priority than that of the current task—becomes ready to run,
the kernel immediately saves the current task’s context, and switches to the context
of the higher priority task. For example, in Figure 4-2, task t1 is preempted by
higher-priority task t2, which in turn is preempted by t3. When t3 completes, t2
continues executing. When t2 completes execution, t1 continues executing.
The disadvantage of this scheduling policy is that, when multiple tasks of equal
priority must share the processor, if a single task is never blocked, it can usurp the
processor. Thus, other equal-priority tasks are never given a chance to run.
Round-robin scheduling solves this problem.
[Figure 4-2: Priority Preemption. Task t1 is preempted by higher-priority t2,
which is in turn preempted by t3; as each higher-priority task completes, the
next lower-priority task resumes.]
Round-Robin Scheduling
A round-robin scheduling algorithm attempts to share the CPU amongst these tasks by using time-slicing.
Each task in a group of tasks with the same priority executes for a defined interval,
or time slice, before relinquishing the CPU to the next task in the group. No one of
them, therefore, can usurp the processor until it is blocked. See Figure 4-3 for an
illustration of this activity.
Note that while round-robin scheduling is used in some operating systems to
provide equal CPU time to all tasks (or processes), regardless of their priority, this
is not the case with VxWorks. Priority-based preemption is essentially unaffected
by the VxWorks implementation of round-robin scheduling. Any higher-priority
task that is ready to run immediately gets the CPU, regardless of whether or not
the current task is done with its slice of execution time. When the interrupted task
gets to run again, it simply continues using its unfinished execution time.
In most systems, it is not necessary to enable round-robin scheduling, the
exception being when multiple copies of the same code are to be run, such as in a
user interface task.
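When round-robin scheduling is needed, it is enabled by setting a time slice with kernelTimeSlice( ). This is a sketch; the slice length is an illustrative choice.

```c
#include <vxWorks.h>
#include <kernelLib.h>
#include <sysLib.h>

void enableRoundRobin (void)
    {
    /* give each same-priority task roughly 100 ms per slice,
     * without assuming a particular system clock rate */
    kernelTimeSlice (sysClkRateGet () / 10);
    }
```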
Note that the taskRotate( )routine can be used as an alternative to round-robin
scheduling. It is useful for situations in which you want to share the CPU amongst
tasks of the same priority that are ready to run, but to do so as a program requires,
rather than at predetermined equal intervals.
If a task blocks or is preempted by a higher priority task during its interval, its
time-slice count is saved and then restored when the task becomes eligible for
execution. In the case of preemption, the task will resume execution once the
higher priority task completes, assuming that no other task of a higher priority is
ready to run. In the case where the task blocks, it is placed at the tail of the list of
tasks at its priority level. If preemption is disabled during round-robin scheduling,
the time-slice count of the executing task is not incremented.
Time-slice counts are accrued by the task that is executing when a system tick
occurs, regardless of whether or not the task has executed for the entire tick
interval. Due to preemption by higher priority tasks or ISRs stealing CPU time
from the task, it is possible for a task to effectively execute for either more or less
total CPU time than its allotted time slice.
Figure 4-3 shows round-robin scheduling for three tasks of the same priority: t1,
t2, and t3. Task t2 is preempted by a higher priority task t4 but resumes at the count
where it left off when t4 is finished.
[Figure 4-3: Round-Robin Scheduling. Tasks t1, t2, and t3 of the same priority
execute in time slices; higher-priority task t4 preempts t2, which later resumes
at the slice count where it left off.]
The taskLib library provides routines for task creation and control, as well as
for retrieving information about tasks. See the VxWorks API reference for
taskLib for further information.
For interactive use, you can control VxWorks tasks with the host tools or the kernel
shell; see the Wind River Workbench User’s Guide, the VxWorks Command-Line Tools
User’s Guide, and VxWorks Kernel Programmer’s Guide: Target Tools.
Note that a task’s priority can be changed after it has been spawned; see 4.3.2 Task
Scheduling Control, p.167.
The taskSpawn( ) routine creates the new task context, which includes allocating
the stack and setting up the task environment to call the main routine (an ordinary
subroutine) with the specified arguments. The new task begins execution at the
entry to the specified routine.
Call Description
taskOpen( ) Open a task (or optionally create one, if it does not exist).
The taskOpen( ) routine provides a POSIX-like API for creating a task (with
optional activation) or obtaining a handle on existing task. It also provides for
creating a task as either a public or private object (see 4.4.4 Task Names and IDs,
p.177). The taskOpen( ) routine is the most general purpose task-creation routine.
When a task is spawned, you can pass in one or more option parameters, which are
listed in Table 4-4. The result is determined by performing a logical OR operation
on the specified options.
Name Description
You must include the VX_FP_TASK option when creating a task that does any of
the following:
■
Performs floating-point operations.
■
Calls any function that returns a floating-point value.
■
Calls any function that takes a floating-point value as an argument.
For example:
tid = taskSpawn ("tMyTask", 90, VX_FP_TASK, 20000, myFunc, 2387,
                 0, 0, 0, 0, 0, 0, 0, 0, 0);
Note that in addition to using the VX_NO_STACK_FILL task creation option for
individual tasks, you can use the VX_GLOBAL_NO_STACK_FILL configuration
parameter (when you configure VxWorks) to disable stack filling for all tasks and
interrupts in the system.
By default, task and interrupt stacks are filled with 0xEE. Filling stacks is useful
during development for debugging with the checkStack( ) routine. It is generally
not used in deployed systems because not filling stacks provides better
performance during task creation (and at boot time for statically-initialized tasks).
After a task is spawned, you can examine or alter task options by using the
routines listed in Table 4-5. Currently, only the VX_UNBREAKABLE option can be
altered.
Call Description
The size of each task’s stack is defined when the task is created (see 4.4.1 Task
Creation and Activation, p.172).
It can be difficult, however, to know exactly how much stack space to allocate. To
help avoid stack overflow and corruption, you can initially allocate a stack that is
much larger than you expect the task to require. Then monitor the stack
periodically from the shell with checkStack( ) or ti( ). When you have determined
actual usage, adjust the stack size accordingly for testing and for the deployed
system.
In addition to experimenting with task stack size, you can also configure and test
systems with guard zone protection for task stacks (for more information, see Task
Stack Protection, p.176).
Task stacks can be protected with guard zones and by making task stacks
non-executable.
■  TASK_KERNEL_EXEC_STACK_UNDERFLOW_SIZE for kernel task execution stack
   underflow size.
The value of these parameters can be modified to increase the size of the guard
zones on a system-wide basis. The size of a guard zone is rounded up to a multiple
of the CPU MMU page size. The insertion of a guard zone can be prevented by
setting the parameter to zero.
Stack guard zones in the kernel consume RAM, as guard zones correspond to
mapped memory for which accesses are made invalid.
VxWorks creates kernel task stacks with a non-executable attribute only if the
system is configured with the INCLUDE_TASK_STACK_NO_EXEC component, and
if the CPU supports making memory non-executable on an MMU-page basis. The
size of a stack is always rounded up to a multiple of an MMU page size when the
stack is made non-executable (as is also the case when guard zones are inserted).
When a task is spawned, you can specify an ASCII string of any length to be the
task name, and a task ID is returned.
Most VxWorks task routines take a task ID as the argument specifying a task.
VxWorks uses a convention that a task ID of 0 (zero) always implies the calling
task. In the kernel, the task ID is a 4-byte handle to the task’s data structures.
The following rules and guidelines should be followed when naming tasks:
■ The names of public tasks must be unique and must begin with a forward
slash; for example /tMyTask. Note that public tasks are visible throughout the
entire system—in the kernel and any processes.
■
The names of private tasks should be unique. VxWorks does not require that
private task names be unique, but it is preferable to use unique names to avoid
confusing the user. (Note that private tasks are visible only within the entity in
which they were created—either the kernel or a process.)
To use the host development tools to their best advantage, task names should not
conflict with globally visible routine or variable names. To avoid name conflicts,
VxWorks uses a convention of prefixing any kernel task name started from the
target with the letter t, and any task name started from the host with the letter u.
In addition, the name of the initial task of a real-time process is the executable file
name (less the extension) prefixed with the letter i.
Creating a task as a public object allows other tasks from outside of its process to
send signals or events to it (with the taskKill( ) or the eventSend( ) routine,
respectively).
For more information, see 4.9 Public and Private Objects, p.195.
You do not have to explicitly name tasks. If a NULL pointer is supplied for the
name argument of taskSpawn( ), then VxWorks assigns a unique name. The name
is of the form tN, where N is a decimal integer that is incremented by one for each
unnamed task that is spawned.
The taskLib routines listed in Table 4-6 manage task IDs and names.
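For example, a name can be mapped to a task ID and back. This is a sketch; the task name is illustrative.

```c
#include <vxWorks.h>
#include <taskLib.h>

void lookupExample (void)
    {
    /* map a name to a task ID, and an ID back to its name */
    int tid = taskNameToId ("tMyTask");    /* illustrative name */

    if (tid != ERROR)
        {
        char *name = taskName (tid);       /* "tMyTask" */
        (void) name;
        }
    }
```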
Call Description
The routines listed in Table 4-7 get information about a task by taking a snapshot
of a task’s context when the routine is called. Because the task state is dynamic, the
information may not be current unless the task is known to be dormant (that is,
suspended).
Call Description
taskRegsSet( ) Sets a task’s registers (cannot be used with the current task).
For information about task-specific variables and their use, see 4.7.3 Task-Specific
Variables, p.191.
Tasks can be dynamically deleted from the system. VxWorks includes the routines
listed in Table 4-8 to delete tasks and to protect tasks from unexpected deletion.
Call Description
exit( )         Terminates the calling task and frees memory (task stacks and
                task control blocks only).
taskDelete( )   Terminates a specified task and frees memory (task stacks and
                task control blocks only). The calling task may terminate itself
                with this routine.
! WARNING: Make sure that tasks are not deleted at inappropriate times. Before an
application deletes a task, the task should release all shared resources that it holds.
Tasks implicitly call exit( ) if the entry routine specified during task creation
returns.
When a task is deleted, no other task is notified of this deletion. The routines
taskSafe( ) and taskUnsafe( ) address problems that stem from unexpected
deletion of tasks. The routine taskSafe( ) protects a task from deletion by other
tasks. This protection is often needed when a task executes in a critical region or
engages a critical resource.
For example, a task might take a semaphore for exclusive access to some data
structure. While executing inside the critical region, the task might be deleted by
another task. Because the task is unable to complete the critical region, the data
structure might be left in a corrupt or inconsistent state. Furthermore, because the
semaphore can never be released by the task, the critical resource is now
unavailable for use by any other task and is essentially frozen.
Using taskSafe( ) to protect the task that took the semaphore prevents such an
outcome. Any task that tries to delete a task protected with taskSafe( ) is blocked.
When finished with its critical resource, the protected task can make itself available
for deletion by calling taskUnsafe( ), which readies any deleting task. To support
nested deletion-safe regions, a count is kept of the number of times taskSafe( ) and
taskUnsafe( ) are called. Deletion is allowed only when the count is zero, that is,
there are as many unsafes as safes. Only the calling task is protected. A task cannot
make another task safe or unsafe from deletion.
The following code fragment shows how to use taskSafe( ) and taskUnsafe( ) to
protect a critical region of code:
taskSafe ();
semTake (semId, WAIT_FOREVER); /* Block until semaphore available */
.
. /* critical region code */
.
semGive (semId); /* Release semaphore */
taskUnsafe ();
Deletion safety is often coupled closely with mutual exclusion, as in this example.
For convenience and efficiency, a special kind of semaphore, the mutual-exclusion
semaphore, offers an option for deletion safety. For more information, see
4.12.3 Mutual-Exclusion Semaphores, p.206.
The routines listed in Table 4-9 provide direct control over a task’s execution.
Call Description
Delay operations provide a simple mechanism for a task to sleep for a fixed
duration. Task delays are often used for polling applications. For example, to delay
a task for half a second without making assumptions about the clock rate, call:
taskDelay (sysClkRateGet ( ) / 2);
The routine sysClkRateGet( ) returns the speed of the system clock in ticks per
second. Instead of taskDelay( ), you can use the POSIX routine nanosleep( ) to
specify a delay directly in time units. Only the units are different; the resolution of
both delay routines is the same, and depends on the system clock. For details, see
5.6 POSIX Clocks and Timers, p.261.
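The half-second delay above could be written with nanosleep( ) as follows (a portable POSIX sketch; the routine name is illustrative):

```c
#include <time.h>

/* Sleep for 500 ms, independent of the system clock rate.
 * Returns 0 on success, as nanosleep( ) does. */
int delayHalfSecond (void)
    {
    struct timespec ts;

    ts.tv_sec  = 0;
    ts.tv_nsec = 500000000;
    return nanosleep (&ts, NULL);
    }
```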
As a side effect, taskDelay( ) moves the calling task to the end of the ready queue
for tasks of the same priority. In particular, you can yield the CPU to any other
tasks of the same priority by delaying for zero clock ticks:
taskDelay (NO_WAIT); /* allow other tasks of same priority to run */
System clock resolution is typically 60Hz (60 times per second). This is a relatively
long time for one clock tick, and would be even at 100Hz or 120Hz. Thus, since
periodic delaying is effectively polling, you may want to consider using
event-driven techniques as an alternative.
184
4 Multitasking
4.5 Task Error Status: errno
As with any other errno implementation, take care not to have a local variable of
the same name.
In VxWorks, the underlying global errno is a single predefined global variable that
can be referenced directly by application code that is linked with VxWorks (either
statically on the host or dynamically at load time).
However, for errno to be useful in the multitasking environment of VxWorks, each
task must see its own version of errno. Therefore errno is saved and restored by
the kernel as part of each task’s context every time a context switch occurs.
Similarly, interrupt service routines (ISRs) see their own versions of errno. This is
accomplished by saving and restoring errno on the interrupt stack as part of the
interrupt enter and exit code provided automatically by the kernel (see
4.20.1 Connecting Routines to Interrupts, p.243).
Thus, regardless of the VxWorks context, an error code can be stored or consulted
without direct manipulation of the global variable errno.
Almost all VxWorks functions follow a convention that indicates simple success or
failure of their operation by the actual return value of the function. Many functions
return only the status values OK (0) or ERROR (-1). Some functions that normally
return a nonnegative number (for example, open( ) returns a file descriptor) also
return ERROR to indicate an error. Functions that return a pointer usually return
NULL (0) to indicate an error. In most cases, a function returning such an error
indication also sets errno to the specific error code.
The global variable errno is never cleared by VxWorks routines. Thus, its value
always indicates the last error status set. When a VxWorks subroutine gets an error
indication from a call to another routine, it usually returns its own error indication
without modifying errno. Thus, the value of errno that is set in the lower-level
routine remains available as the indication of error type.
For example, the VxWorks routine intConnect( ), which connects a user routine to
a hardware interrupt, allocates memory by calling malloc( ) and builds the
interrupt driver in this allocated memory. If malloc( ) fails because insufficient
memory remains in the pool, it sets errno to a code indicating an
insufficient-memory error was encountered in the memory allocation library,
memLib. The malloc( ) routine then returns NULL to indicate the failure. The
intConnect( ) routine, receiving the NULL from malloc( ), then returns its own
error indication of ERROR. However, it does not alter errno, leaving it at the
insufficient-memory code set by malloc( ). For example:
if ((pNew = malloc (CHUNK_SIZE)) == NULL)
    return (ERROR);
It is recommended that you use this mechanism in your own subroutines, setting
and examining errno as a debugging technique. A string constant associated with
errno can be displayed using printErrno( ) if the errno value has a corresponding
string entered in the error-status symbol table, statSymTbl. See the VxWorks API
reference for errnoLib for details on error-status values and building statSymTbl.
VxWorks errno values encode the module that issues the error in the most
significant two bytes, and use the least significant two bytes for individual
error numbers. All VxWorks module numbers are in the range 1-500; errno values with
a module number of zero are used for source compatibility.
All other errno values (that is, positive values greater than or equal to 501<<16, and
all negative values) are available for application use.
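The encoding just described can be unpacked with simple shifts. This is a portable sketch; the helper names and the module and error numbers are made up for illustration.

```c
/* VxWorks-style errno layout: module number in the most significant
 * two bytes, individual error number in the least significant two. */
static int errnoModule (int err)  { return (err >> 16) & 0xffff; }
static int errnoNumber (int err)  { return err & 0xffff; }

/* e.g., for err = (17 << 16) | 42 (hypothetical module 17, error 42),
 * errnoModule (err) is 17 and errnoNumber (err) is 42 */
```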
See the VxWorks API reference on errnoLib for more information about defining
and decoding errno values with this convention.
4.7 Shared Code and Reentrancy

[Figure 4-4: Shared Code. Tasks taskOne( ) and taskTwo( ) both invoke the same
subroutine, myFunc( ).]

    taskOne (void)
        {
        myFunc ();
        ...
        }

    taskTwo (void)
        {
        myFunc ();
        ...
        }

    myFunc (void)
        {
        ...
        }
NOTE: In some cases reentrant code is not preferable. A critical section should
instead be guarded with a binary semaphore, or with intLock( ) and intUnlock( )
if it is accessed from an ISR.
Many subroutines are pure code, having no data of their own except dynamic stack
variables. They work exclusively on data provided by the caller as parameters. The
linked-list library, lstLib, is a good example of this. Its routines operate on lists and
nodes provided by the caller in each subroutine call.
Subroutines of this kind are inherently reentrant. Multiple tasks can use such
routines simultaneously, without interfering with each other, because each task
does indeed have its own stack. See Figure 4-5.
[Figure 4-5: Stack Variables and Shared Code. Each task that calls comFunc( )
gets its own copy of the stack variable var.]

    taskOne ( )
        {
        ...
        comFunc (1);        /* var = 1 in this task's call */
        ...
        }

    taskTwo ( )
        {
        ...
        comFunc (2);        /* var = 2 in this task's call */
        ...
        }

    comFunc (int arg)
        {
        int var = arg;
        ...
        }
Some libraries encapsulate access to common data. This kind of library requires
some caution because the routines are not inherently reentrant. Multiple tasks
simultaneously invoking the routines in the library might interfere with access to
common variables. Such libraries must be made explicitly reentrant by providing
a mutual-exclusion mechanism to prohibit tasks from simultaneously executing
critical sections of code. The usual mutual-exclusion mechanism is the mutex
semaphore facility provided by semMLib and described in
4.12.3 Mutual-Exclusion Semaphores, p.206.
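Making such a library explicitly reentrant might look like the following sketch; the library's data, routine names, and semaphore options are hypothetical choices.

```c
#include <vxWorks.h>
#include <semLib.h>

static SEM_ID libMutex;     /* guards the library's common data */
static int    commonData;   /* hypothetical shared state */

STATUS myLibInit (void)
    {
    libMutex = semMCreate (SEM_Q_PRIORITY | SEM_INVERSION_SAFE);
    return (libMutex == NULL) ? ERROR : OK;
    }

void myLibUpdate (int value)
    {
    semTake (libMutex, WAIT_FOREVER);   /* enter critical section */
    commonData = value;
    semGive (libMutex);                 /* leave critical section */
    }
```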
NOTE: Variables of the __thread storage class can be used for both UP and SMP
configurations of VxWorks, and Wind River recommends their use in both cases as
the best method of providing task-specific variables. The taskVarLib and
tlsOldLib (formerly tlsLib) facilities—for the kernel-space and user-space
respectively— are maintained primarily for backwards-compatibility, are not
compatible with VxWorks SMP, and their use is not recommended. In addition to
being incompatible with VxWorks SMP, the taskVarLib and tlsOldLib facilities
increase task context switch times. For information about migration, see
15.15 Migrating Code to VxWorks SMP, p.704.
Also note that each task has a VxWorks events register, which receives events sent
from other tasks, ISRs, semaphores, or message queues. See 4.15 VxWorks Events,
p.220 for more information about this register, and the routines used to interact
with it.
The __thread specifier may be used alone, or with the extern or static
specifiers, but with no other storage class specifier. When used with extern or
static, __thread must appear immediately after the other storage class specifier.
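A task-specific variable declared with the __thread storage class can be sketched as follows; the counter and routine are illustrative, and the same code works with host toolchains that support __thread.

```c
/* Each task (or thread, on a host) running this code sees its own
 * copy of callCount. */
static __thread int callCount = 0;

int countCall (void)
    {
    return ++callCount;     /* increments this context's private counter */
    }
```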
VxWorks provides a task variable facility (with taskVarLib) that allows 4-byte
variables to be added to a task’s context, so that the value of such a variable is
switched every time a task switch occurs to or from its owner task.
NOTE: Wind River does not recommend using the taskVarLib facility, which is
maintained primarily for backwards-compatibility. Use thread-local (__thread)
storage class variables instead.
With VxWorks, it is possible to spawn several tasks with the same main routine.
Each spawn creates a new task with its own stack and context. Each spawn can also
pass different parameters to the new task's main routine. In this case, the same
rules of reentrancy described in 4.7.3 Task-Specific Variables, p.191 apply to the
entire task.
This is useful when the same function must be performed concurrently with
different sets of parameters. For example, a routine that monitors a particular kind
of equipment might be spawned several times to monitor several different pieces
of that equipment. The arguments to the main routine could indicate which
particular piece of equipment the task is to monitor.
In Figure 4-6, multiple joints of the mechanical arm use the same code. The tasks
manipulating the joints invoke joint( ). The joint number (jointNum) is used to
indicate which joint on the arm to manipulate.
Figure 4-6 (multiple tasks using the same code): tasks joint_1, joint_2, and
joint_3 each invoke the same routine:

joint
    (
    int jointNum
    )
    {
    /* joint code here */
    }
4.9 Public and Private Objects
Public objects are always named, and the name must begin with a forward-slash.
Private objects can be named or unnamed. If they are named, the name must not
begin with a forward-slash.
Only one public object of a given class and name can be created. That is, there can
be only one public semaphore with the name /foo. But there may be a public
semaphore named /foo and a public message queue named /foo. Obviously, more
distinctive naming is preferable (such as /fooSem and /fooMQ).
The system allows creation of only one private object of a given class and name in
any given memory context; that is, in any given process or in the kernel. For
example:
■ If process A has created a private semaphore named bar, it cannot create a
second semaphore named bar.
■ However, process B could create a private semaphore named bar, as long as it
did not already own one with that same name.
Note that private tasks are an exception to this rule—duplicate names are
permitted for private tasks; see 4.4.4 Task Names and IDs, p.177.
To create a named object, the appropriate xyzOpen( ) API must be used, such as
semOpen( ). When the routine specifies a name that starts with a forward slash,
the object will be public.
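As a sketch, a public binary semaphore could be created with semOpen( ) as follows. The name, option values, and error handling here are illustrative assumptions; check the semOpen( ) API reference for the exact parameters in your VxWorks version.

```c
#include <vxWorks.h>
#include <semLib.h>

SEM_ID publicSem;

STATUS createPublicSem (void)
    {
    publicSem = semOpen ("/fooSem",          /* leading slash: public object */
                         SEM_TYPE_BINARY,    /* class of semaphore */
                         SEM_FULL,           /* initial state */
                         SEM_Q_FIFO,         /* queuing option */
                         OM_CREATE,          /* create if it does not exist */
                         NULL);              /* context (unused here) */

    return (publicSem == NULL) ? ERROR : OK;
    }
```

Another process can subsequently call semOpen( ) with the same name (without OM_CREATE) to obtain its own handle on the same public semaphore.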
To delete public objects, the xyzDelete( ) API cannot be used (it can only be used
with private objects). Instead, the xyzClose( ) and xyzUnlink( ) APIs must be used
in accordance with the POSIX standard. That is, the object must be unlinked from
the name space, and then the last close operation deletes the object (for example,
using the semUnlink( ) and semClose( ) APIs for a public semaphore).
Alternatively, all close operations can be performed first, and then the unlink
operation, after which the object is deleted. Note that if an object is created with
the OM_DELETE_ON_LAST_CLOSE flag, it is deleted with the last close operation,
regardless of whether or not it was unlinked.
All objects are owned by the process to which the creator task belongs, or by the
kernel if the creator task is a kernel task. When ownership must be changed, for
example on a process creation hook, the objOwnerSet( ) routine can be used.
However, its use is restricted—the new owner must be a process or the kernel.
All objects that are owned by a process are automatically destroyed when the
process dies.
All objects that are children of another object are automatically destroyed when the
parent object is destroyed.
Processes can share public objects through an object lookup-by-name capability
(with the xyzOpen( ) set of routines). Sharing objects between processes can only
be done by name.
When a process terminates, all the private objects that it owns are deleted,
regardless of whether or not they are named. All references to public objects in the
process are closed (an xyzClose( ) operation is performed). Therefore, a public
object is deleted during resource reclamation, regardless of which process created
it, if there are no more outstanding xyzOpen( ) calls against it (that is, no other
process or the kernel has a reference to it), and the object was already unlinked or
was created with the OM_DELETE_ON_LAST_CLOSE option. The exception to this
rule is tasks, which are always reclaimed when their creator process dies.
When the creator process of a public object dies, but the object survives because it
hasn't been unlinked or because another process has a reference to it, ownership of
the object is assigned to the kernel.
The objShowAll( ) show routine can be used to display information about
ownership relations between objects.
4.10 Shared Data Structures
Global variables, linear buffers, ring buffers, linked lists, and pointers can be
referenced directly by code running in different contexts.
For information about using shared data regions to communicate between
processes, see VxWorks Application Programmer’s Guide: Applications and Processes.
Figure (shared data structures): tasks task 1, task 2, and task 3 each access the
same sharedData structure in memory.
The most powerful method available for mutual exclusion is the disabling of
interrupts with the intLock( ) routine. Such a lock guarantees exclusive access to
the CPU:
funcA ()
{
int lock = intLock();
.
. /* critical region of code that cannot be interrupted */
.
intUnlock (lock);
}
! WARNING: Invoking a VxWorks system routine with interrupts locked may result
in interrupts being re-enabled for an unspecified period of time. If the called
routine blocks, or results in a higher priority task becoming eligible for execution
(READY), interrupts will be re-enabled while another task executes, or while the
kernel is idle.
funcA ()
{
taskLock ();
.
. /* critical region of code that cannot be interrupted */
.
taskUnlock ();
}
However, this method can lead to unacceptable real-time response. Tasks of higher
priority are unable to execute until the locking task leaves the critical region, even
though the higher-priority task is not itself involved with the critical region. While
this kind of mutual exclusion is simple, if you use it, be sure to keep the duration
short. Semaphores provide a better mechanism; see 4.12 Semaphores, p.199.
! WARNING: The critical region code should not block. If it does, preemption could
be re-enabled.
NOTE: The taskLock( ) and taskUnlock( ) routines are provided for the UP
configuration of VxWorks, but not the SMP configuration. Several alternatives are
available for SMP systems, including task-only spinlocks, which default to
taskLock( ) and taskUnlock( ) behavior in a UP system. For more information, see
15.6.2 Task-Only Spinlocks, p.684 and 15.15 Migrating Code to VxWorks SMP, p.704.
4.12 Semaphores
VxWorks semaphores are highly optimized and provide the fastest intertask
communication mechanism in VxWorks. Semaphores are the primary means for
addressing the requirements of both mutual exclusion and task synchronization,
as described below:
■ For mutual exclusion, semaphores interlock access to shared resources. They
provide mutual exclusion with finer granularity than either interrupt
disabling or preemptive locks, discussed in 4.11 Mutual Exclusion, p.197.
■ For synchronization, semaphores coordinate a task’s execution with external
events.
VxWorks provides the following types of semaphores, which are optimized for
different types of uses:
binary
The fastest, most general-purpose semaphore. Optimized for synchronization
or mutual exclusion. For more information, see 4.12.2 Binary Semaphores, p.202.
mutual exclusion
A special binary semaphore optimized for problems inherent in mutual
exclusion: priority inversion, deletion safety, and recursion. For more
information, see 4.12.3 Mutual-Exclusion Semaphores, p.206.
counting
Like the binary semaphore, but keeps track of the number of times a
semaphore is given. Optimized for guarding multiple instances of a resource.
For more information, see 4.12.4 Counting Semaphores, p.209.
read/write
A special type of semaphore that provides mutual exclusion for tasks that need
write access to an object, and concurrent access for tasks that only need read
access to the object. This type of semaphore is particularly useful for SMP
systems. For more information, see 4.12.5 Read/Write Semaphores, p.210.
VxWorks semaphores can be created as private objects, which are accessible only
within the memory space in which they were created (kernel or process); or as
public objects, which are accessible throughout the system. For more information,
see 4.9 Public and Private Objects, p.195.
VxWorks not only provides the semaphores designed expressly for VxWorks, but
also POSIX semaphores, designed for portability. An alternate semaphore library
provides the POSIX-compliant semaphore interface; see 5.13 POSIX Semaphores,
p.292.
NOTE: The semaphores described here are for use with UP and SMP
configurations of VxWorks. The optional product VxMP provides semaphores
that can be used in an asymmetric multiprocessor (AMP) system, in the VxWorks
kernel (but not in UP or SMP systems). For more information, see
16. Shared-Memory Objects: VxMP.
Table 4-12: semaphore routines (Call / Description; table body not reproduced in
this extraction).
The semaphore creation routines listed in Table 4-12 perform a dynamic, two-step
operation, in which memory is allocated for the semaphore object at runtime, and
then the object is initialized. Semaphores (and other VxWorks objects) can also be
statically instantiated—which means that their memory is allocated for the object
at compile time—and the object is then initialized at runtime with an initialization
routine.
For information about static instantiation, see 2.6.4 Static Instantiation of Kernel
Objects, p.56. For information about semaphore initialization routines, see the
VxWorks API references.
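A static instantiation might be sketched as follows, assuming the VX_BINARY_SEMAPHORE declaration macro and the semBInitialize( ) routine described for static instantiation of kernel objects; confirm the exact names and signatures against your VxWorks version.

```c
#include <vxWorks.h>
#include <semLib.h>

/* allocate storage for the semaphore object at compile time */
VX_BINARY_SEMAPHORE (mySemB);

SEM_ID mySemBId;

STATUS initStaticSem (void)
    {
    /* initialize the statically allocated object at runtime;
     * no memory allocation takes place here
     */
    mySemBId = semBInitialize (mySemB, SEM_Q_FIFO, SEM_EMPTY);

    return (mySemBId == NULL) ? ERROR : OK;
    }
```

Once initialized, mySemBId is used with semTake( ) and semGive( ) exactly like a dynamically created semaphore.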
Figure (taking a semaphore): if the semaphore is available, the task continues; if
it is not available and the timeout is NO_WAIT, the call returns immediately;
otherwise the task is pended for the timeout value.
When a task gives a binary semaphore, using semGive( ), the outcome also
depends on whether the semaphore is available (full) or unavailable (empty) at the
time of the call; see Figure 4-9. If the semaphore is already available (full), giving
the semaphore has no effect at all. If the semaphore is unavailable (empty) and no
task is waiting to take it, then the semaphore becomes available (full). If the
semaphore is unavailable (empty) and one or more tasks are pending on its
availability, then the first task in the queue of blocked tasks is unblocked, and the
semaphore is left unavailable (empty).
Figure 4-9 (giving a semaphore): if the semaphore is already available, the task
simply continues; if it is unavailable and no tasks are pended, the semaphore is
made available; if tasks are pended, the first of them is unblocked and the
semaphore remains unavailable.
Mutual Exclusion
SEM_ID semMutex;

/* create a binary semaphore that is initially full */
semMutex = semBCreate (SEM_Q_PRIORITY, SEM_FULL);
When a task wants to access the resource, it must first take that semaphore. As long
as the task keeps the semaphore, all other tasks seeking access to the resource are
blocked from execution. When the task is finished with the resource, it gives back
the semaphore, allowing another task to use the resource.
Thus, all accesses to a resource requiring mutual exclusion are bracketed with
semTake( ) and semGive( ) pairs:
semTake (semMutex, WAIT_FOREVER);
.
. /* critical region, only accessible by a single task at a time */
.
semGive (semMutex);
Synchronization
In Example 4-2, the init( ) routine creates the binary semaphore, attaches an ISR to
an event, and spawns a task to process the event. The routine task1( ) runs until it
calls semTake( ). It remains blocked at that point until an event causes the ISR to
call semGive( ). When the ISR completes, task1( ) executes to process the event.
There is an advantage to handling event processing within the context of a
dedicated task: less processing takes place at interrupt level, thereby reducing
interrupt latency. This model of event processing is recommended for real-time
applications.
/* includes */
#include <vxWorks.h>
#include <semLib.h>
#include <arch/arch/ivarch.h> /* replace arch with architecture type */

SEM_ID syncSem; /* ID of sync semaphore */

init (
    int someIntNum
    )
    {
    /* connect interrupt service routine */
    intConnect (INUM_TO_IVEC (someIntNum), eventInterruptSvcRout, 0);

    /* create semaphore */
    syncSem = semBCreate (SEM_Q_FIFO, SEM_EMPTY);

    /* spawn task used for synchronization */
    taskSpawn ("sample", 100, 0, 20000, task1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0);
    }

task1 (void)
    {
    ...
    semTake (syncSem, WAIT_FOREVER); /* wait for event to occur */
    printf ("task 1 got the semaphore\n");
    ... /* process event */
    }

eventInterruptSvcRout (void)
    {
    ...
    semGive (syncSem); /* let task 1 process event */
    ...
    }
Broadcast synchronization allows all processes that are blocked on the same
semaphore to be unblocked atomically. Correct application behavior often
requires a set of tasks to process an event before any task of the set has the
opportunity to process further events. The routine semFlush( ) addresses this class
of synchronization problem by unblocking all tasks pended on a semaphore.
Priority Inversion
Figure (priority inversion): high-priority task t1 is blocked waiting for a
resource held by low-priority task t3, while medium-priority task t2 preempts t3
and runs, delaying t1 indefinitely.
Figure (priority inheritance): while it holds the resource, t3 inherits t1's
priority, so t2 cannot preempt it; t1 obtains the resource as soon as t3 releases
it.
Deletion Safety
/* includes */
#include <vxWorks.h>
#include <semLib.h>
SEM_ID mySem;
init ()
{
mySem = semMCreate (SEM_Q_PRIORITY);
}
funcA ()
{
semTake (mySem, WAIT_FOREVER);
printf ("funcA: Got mutual-exclusion semaphore\n");
...
funcB ();
...
semGive (mySem);
printf ("funcA: Released mutual-exclusion semaphore\n");
}
funcB ()
{
semTake (mySem, WAIT_FOREVER);
printf ("funcB: Got mutual-exclusion semaphore\n");
...
semGive (mySem);
printf ("funcB: Released mutual-exclusion semaphore\n");
}
Table 4-13 shows an example time sequence of tasks taking and giving a counting
semaphore that was initialized to a count of 3.
Table 4-13: counting semaphore example (Semaphore Call / Count after Call /
Resulting Behavior; table body not reproduced in this extraction).
Counting semaphores are useful for guarding multiple copies of resources. For
example, the use of five tape drives might be coordinated using a counting
semaphore with an initial count of 5, or a ring buffer with 256 entries might be
implemented using a counting semaphore with an initial count of 256. The initial
count is specified as an argument to the semCCreate( ) routine.
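The tape-drive scenario might be sketched as follows; the routine names are hypothetical, but the semCCreate( ), semTake( ), and semGive( ) calls are those described above.

```c
#include <vxWorks.h>
#include <semLib.h>

SEM_ID tapeSem;

void initTapes (void)
    {
    /* five drives available, so the count starts at 5 */
    tapeSem = semCCreate (SEM_Q_FIFO, 5);
    }

void useTape (void)
    {
    semTake (tapeSem, WAIT_FOREVER);  /* acquire a drive; count decrements */

    /* ... use the tape drive ... */

    semGive (tapeSem);                /* release the drive; count increments */
    }
```

When all five drives are in use, the count is zero and a sixth caller of useTape( ) pends until some task gives the semaphore back.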
A read/write semaphore differs from other types of semaphore in that the access
mode must be specified when the semaphore is taken. The mode determines
whether the access is exclusive (write mode), or if concurrent access is allowed
(read mode). Different APIs correspond to the different modes of access, as
follows:
■ semRTake( ) for read (concurrent) mode
■ semWTake( ) for write (exclusive) mode
You can also use semTake( ) on a read/write semaphore, but the behavior is the
same as semWTake( ). And you can use semGive( ) on a read/write semaphore, as
long as the task that gives it owns the semaphore in that mode.
For more information about read/write semaphore APIs, see Table 4-12 and the
VxWorks API references.
When a task takes a read/write semaphore in write mode, the behavior is identical
to that of a mutex semaphore. The task owns the semaphore exclusively. An
attempt by another task to give a semaphore held in this mode results in a return
value of ERROR.
When a task takes a read/write semaphore in read mode, the behavior is different
from other semaphores. It does not provide exclusive access to a resource (does not
protect critical sections), and the semaphore may be concurrently held in read
mode by more than one task.
The maximum number of tasks that can take a read/write semaphore in read
mode can be specified when the semaphore is created with the create routine call.
The system maximum for all read/write semaphores can also be set with the
SEM_RW_MAX_CONCURRENT_READERS component parameter. By default it is
set to 32. If the number of tasks is not specified when the create routine is called,
the system default is used.
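A sketch of read/write semaphore use follows. It assumes a semRWCreate( ) creation routine taking an options argument and a maximum-reader count; confirm the exact signature against the semaphore API references, and note that the reader limit of 8 is an illustrative value.

```c
#include <vxWorks.h>
#include <semLib.h>

SEM_ID rwSem;

void initRW (void)
    {
    /* allow up to 8 tasks to hold the semaphore in read mode at once */
    rwSem = semRWCreate (SEM_Q_PRIORITY, 8);
    }

void reader (void)
    {
    semRTake (rwSem, WAIT_FOREVER);   /* concurrent read access */

    /* ... read the shared object; no writer can hold the semaphore ... */

    semGive (rwSem);
    }

void writer (void)
    {
    semWTake (rwSem, WAIT_FOREVER);   /* exclusive write access */

    /* ... modify the shared object; no readers or writers interleave ... */

    semGive (rwSem);
    }
```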
Read/write semaphores can be taken recursively in both read and write mode.
Optionally, priority inheritance and deletion safety are available for each mode.
Timeouts
A semTake( ) with a timeout of NO_WAIT (0), which means do not wait at all,
returns immediately and sets errno to S_objLib_OBJ_UNAVAILABLE if the
semaphore is not available. A semTake( ) with a positive timeout value returns
ERROR and sets errno to S_objLib_OBJ_TIMEOUT if the timeout expires before the
semaphore becomes available. A timeout value of WAIT_FOREVER (-1) means
wait indefinitely.
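A timed take might be handled as follows; sysClkRateGet( ) returns the number of ticks per second, so this sketch waits up to one second before giving up.

```c
#include <vxWorks.h>
#include <semLib.h>
#include <sysLib.h>
#include <errnoLib.h>
#include <objLib.h>

void takeWithTimeout (SEM_ID sem)
    {
    /* wait at most one second (sysClkRateGet() ticks) for the semaphore */
    if (semTake (sem, sysClkRateGet ()) == ERROR)
        {
        if (errnoGet () == S_objLib_OBJ_TIMEOUT)
            {
            /* the semaphore did not become available in time */
            }
        }
    }
```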
Queues
Figure (task queue types): the TCBs of pended tasks are queued either in priority
order (sorted by task priority) or in FIFO order (in the order in which the tasks
pended).
Priority ordering better preserves the intended priority structure of the system at
the expense of some overhead in take operations because of sorting the tasks by
priority. A FIFO queue requires no priority sorting overhead and leads to
constant-time performance. The selection of queue type is specified during
semaphore creation with the semaphore creation routine. Semaphores using the
priority inheritance option (SEM_INVERSION_SAFE) must select priority-order
queuing.
Semaphores can send VxWorks events to a specified task when they become free.
For more information, see 4.15 VxWorks Events, p.220.
4.13 Message Queues
Figure (full-duplex communication): task 1 sends messages to task 2 through
message queue 1, and task 2 sends messages to task 1 through message queue 2.
Multiple tasks can send to and receive from the same message queue. Full-duplex
communication between two tasks generally requires two message queues, one for
each direction; see Figure 4-13.
VxWorks message queues can be created as private objects, which are accessible
only within the memory space in which they were created (process or kernel); or
as public objects, which are accessible throughout the system. For more
information, see 4.9 Public and Private Objects, p.195.
There are two message-queue subroutine libraries in VxWorks. The first of these,
msgQLib, provides VxWorks message queues, designed expressly for VxWorks;
the second, mqPxLib, is compliant with the POSIX standard (1003.1b) for real-time
extensions. See 5.13.1 Comparison of POSIX and VxWorks Semaphores, p.293 for a
discussion of the differences between the two message-queue designs.
VxWorks message queues are created, used, and deleted with the routines shown
in Table 4-14. This library provides messages that are queued in FIFO order, with
a single exception: there are two priority levels, and messages marked as high
priority are attached to the head of the queue.
Table 4-14: message queue routines (Call / Description; table body not reproduced
in this extraction).
Timeouts
Urgent Messages
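As noted above, messages marked as high priority are attached to the head of the queue. A sketch of sending such a message with msgQSend( ) follows; the routine name sendUrgent( ) is illustrative, and msgQId is assumed to be a previously created queue.

```c
#include <vxWorks.h>
#include <msgQLib.h>

STATUS sendUrgent (MSG_Q_ID msgQId, char *alarmMsg, UINT nBytes)
    {
    /* MSG_PRI_URGENT places the message at the head of the queue;
     * ordinary messages are sent with MSG_PRI_NORMAL instead
     */
    return msgQSend (msgQId, alarmMsg, nBytes, WAIT_FOREVER, MSG_PRI_URGENT);
    }
```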
/* In this example, task t1 creates the message queue and sends a message
* to task t2. Task t2 receives the message from the queue and simply
* displays the message.
*/
/* includes */
#include <vxWorks.h>
#include <msgQLib.h>
/* defines */
#define MAX_MSGS (10)
#define MAX_MSG_LEN (100)
MSG_Q_ID myMsgQId;
task2 (void)
    {
    char msgBuf[MAX_MSG_LEN];

    /* get message from queue; if necessary wait until one is available */
    if (msgQReceive (myMsgQId, msgBuf, MAX_MSG_LEN, WAIT_FOREVER) == ERROR)
        return (ERROR);

    /* display message */
    printf ("Message from task 1:\n%s\n", msgBuf);
    }

#define MESSAGE "Greetings from Task 1"
task1 (void)
    {
    /* create message queue */
    if ((myMsgQId = msgQCreate (MAX_MSGS, MAX_MSG_LEN, MSG_Q_PRIORITY))
        == NULL)
        return (ERROR);

    /* send a normal priority message, blocking if queue is full */
    if (msgQSend (myMsgQId, MESSAGE, sizeof (MESSAGE), WAIT_FOREVER,
                  MSG_PRI_NORMAL) == ERROR)
        return (ERROR);
    }
Queuing
VxWorks message queues include the ability to select the queuing mechanism
employed for tasks blocked on a message queue. The MSG_Q_FIFO and
MSG_Q_PRIORITY options are provided to specify (to the msgQCreate( ) and
msgQOpen( ) routines) the queuing mechanism that should be used for tasks that
pend on msgQSend( ) and msgQReceive( ).
The VxWorks show( ) command produces a display of the key message queue
attributes, for either kind of message queue. For example, if myMsgQId is a
VxWorks message queue, the output is sent to the standard output device, and
looks like the following from the shell (using the C interpreter):
-> show myMsgQId
Message Queue Id : 0x3adaf0
Task Queuing : FIFO
Message Byte Len : 4
Messages Max : 30
Messages Queued : 14
Receivers Blocked : 0
Send timeouts : 0
Receive timeouts : 0
Real-time systems are often structured using a client-server model of tasks. In this
model, server tasks accept requests from client tasks to perform some service, and
usually return a reply. The requests and replies are usually made in the form of
intertask messages. In VxWorks, message queues or pipes (see 4.14 Pipes, p.219)
are a natural way to implement this functionality.
For example, client-server communications might be implemented as shown in
Figure 4-14. Each server task creates a message queue to receive request messages
from clients. Each client task creates a message queue to receive reply messages
from servers. Each request message includes a field containing the msgQId of the
client’s reply message queue. A server task’s main loop consists of reading request
messages from its request message queue, performing the request, and sending a
reply to the client’s reply message queue.
Figure 4-14 (client-server communications): each client sends request messages
to the server task's request queue; the server sends each reply to the requesting
client's own reply queue (reply queue 1, reply queue 2, and so on).
The same architecture can be achieved with pipes instead of message queues, or by
other means that are tailored to the needs of the particular application.
Message queues can send VxWorks events to a specified task when a message
arrives on the queue and no task is waiting on it. For more information, see
4.15 VxWorks Events, p.220.
4.14 Pipes
Pipes provide an alternative interface to the message queue facility that goes
through the VxWorks I/O system. Pipes are virtual I/O devices managed by the
driver pipeDrv. The routine pipeDevCreate( ) creates a pipe device and the
underlying message queue associated with that pipe. The call specifies the name
of the created pipe, the maximum number of messages that can be queued to it,
and the maximum length of each message:
status = pipeDevCreate ("/pipe/name", max_msgs, max_length);
The created pipe is a normal named I/O device. Tasks can use the standard I/O
routines to open, read, and write pipes, and invoke ioctl routines. As they do with
other I/O devices, tasks block when they read from an empty pipe until data is
available, and block when they write to a full pipe until there is space available.
As with message queues, ISRs can write to a pipe, but cannot read from one.
As I/O devices, pipes provide one important feature that message queues
cannot—the ability to be used with select( ). This routine allows a task to wait for
data to be available on any of a set of I/O devices. The select( ) routine also works
with other asynchronous I/O devices including network sockets and serial
devices. Thus, by using select( ), a task can wait for data on a combination of
several pipes, sockets, and serial devices; see 7.4.9 Pending on Multiple File
Descriptors with select( ), p.376.
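Once created, a pipe is used through the standard I/O routines; for example (the pipe name and sizes here are illustrative):

```c
#include <vxWorks.h>
#include <pipeDrv.h>
#include <ioLib.h>
#include <fcntl.h>

void pipeExample (void)
    {
    int  fd;
    char buf[64];

    /* create a pipe that can queue up to 10 messages of 64 bytes each */
    pipeDevCreate ("/pipe/demo", 10, 64);

    fd = open ("/pipe/demo", O_RDWR, 0);

    write (fd, "hello", 6);          /* blocks if the pipe is full */
    read (fd, buf, sizeof (buf));    /* blocks until data is available */

    close (fd);
    }
```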
Pipes allow you to implement a client-server model of intertask communications;
see 4.13.3 Servers and Clients with Message Queues, p.218.
1. VxWorks events are based on pSOS operating system events. VxWorks introduced func-
tionality similar to pSOS events (but with enhancements) with the VxWorks 5.5 release.
4.15 VxWorks Events
VxWorks events provide a means of communication and synchronization between
tasks and other tasks, ISRs and tasks, semaphores and tasks, and message queues
and tasks. A task can wait on multiple events from multiple sources. Events
thereby provide a means of coordinating a complex matrix of activity without
allocating additional system resources.
Each task has 32 event flags, bit-wise encoded in a 32-bit word (bits 25 to 32 are
reserved for Wind River use). These flags are stored in the task's event register.
Note that an event flag itself has no intrinsic meaning. The significance of each of
the 32 event flags depends entirely on how any given task is coded to respond to
their being set. There is no mechanism for recording how many times any given
event has been received by a task. Once a flag has been set, its being set again by
the same or a different sender is essentially an invisible operation.
Events are similar to signals in that they are sent to a task asynchronously; but
differ in that receipt is synchronous. That is, the receiving task must call a routine
to receive at will, and can choose to pend while waiting for events to arrive. Unlike
signals, therefore, events do not require a handler.
For a code example of how events can be used, see the eventLib API reference.
NOTE: VxWorks events, which are also simply referred to as events in this section,
should not be confused with System Viewer events.
A task can pend on one or more events, or simply check on which events have been
received, with a call to eventReceive( ). The routine specifies which events to wait
for, and provides options for waiting for one or all of those events. It also provides
various options for how to manage unsolicited events.
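A minimal sketch of the send and receive sides follows; the flag choice VXEV01, the EV_DATA_READY name, and the task ID handling are illustrative assumptions.

```c
#include <vxWorks.h>
#include <eventLib.h>

#define EV_DATA_READY VXEV01   /* application-defined meaning for this flag */

/* receiver: pend until EV_DATA_READY arrives */
void waitForData (void)
    {
    UINT32 events;

    if (eventReceive (EV_DATA_READY, EVENTS_WAIT_ANY, WAIT_FOREVER,
                      &events) == OK)
        {
        /* events now records which of the requested flags were received */
        }
    }

/* sender (task or ISR): post the event to the receiving task */
void signalData (int rcvTaskId)
    {
    eventSend (rcvTaskId, EV_DATA_READY);
    }
```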
In order for a task to receive events from a semaphore or a message queue,
however, it must first register with the specific object, using semEvStart( ) for a
semaphore or msgQEvStart( ) for a message queue. Only one task can be
registered with any given semaphore or message queue at a time.
The semEvStart( ) routine identifies the semaphore and the events that it should
send to the task when the semaphore is free. It also provides a set of options to
specify whether the events are sent only the first time the semaphore is free, or
each time; and whether to send events if the semaphore is free at the time of
registration.
Tasks and ISRs can send specific events to a task using eventSend( ), whether or
not the receiving task is prepared to make use of them.
Semaphores and message queues send events automatically to tasks that have
registered for notification with semEvStart( ) or msgQEvStart( ), respectively.
These objects send events when they are free. The conditions under which objects
are free are as follows:
Mutex Semaphore
A mutex semaphore is considered free when it no longer has an owner and no
task is pending on it. For example, following a call to semGive( ), the
semaphore will not send events if another task is pending on a semTake( ) for
the same semaphore.
Binary Semaphore
A binary semaphore is considered free when no task owns it and no task is
waiting for it.
Counting Semaphore
A counting semaphore is considered free when its count is nonzero and no
task is pending on it. Events cannot, therefore, be used as a mechanism to
compute the number of times a semaphore is released or given.
Message Queue
A message queue is considered free when a message is present in the queue
and no task is pending for the arrival of a message in that queue. Events
cannot, therefore, be used as a mechanism to compute the number of messages
sent to a message queue.
Note that just because an object has been released does not mean that it is free. For
example, if a semaphore is given, it is released; but it is not free if another task is
waiting for it at the time it is released. When two or more tasks are constantly
exchanging ownership of an object, it is therefore possible that the object never
becomes free, and never sends events.
Also note that when a semaphore or message queue sends events to a task to
indicate that it is free, it does not mean that the object is in any way reserved for the
task. A task waiting for events from an object unpends when the resource becomes
free, but the object may be taken in the interval between notification and
unpending. The object could be taken by a higher priority task if the task receiving
the event was pended in eventReceive( ). Or a lower priority task might steal the
object: if the task receiving the event was pended in some routine other than
eventReceive( ), a low priority task could execute and (for example) perform a
semTake( ) after the event is sent, but before the receiving task unpends from the
blocking call. There is, therefore, no guarantee that the resource will still be
available when the task subsequently attempts to take ownership of it.
! WARNING: Because events cannot be reserved for an application in any way, care
should be taken to ensure that events are used uniquely and unambiguously. Note
that events 25 to 32 (VXEV25 to VXEV32) are reserved for Wind River’s use, and
should not be used by customers. Third parties should be sure to document their
use of events so that their customers do not use the same ones for their
applications.
If a semaphore or message queue is deleted while a task is waiting for events from
it, the task is automatically unpended by the semDelete( ) or msgQDelete( )
implementation. This prevents the task from pending indefinitely while waiting
for events from an object that has been deleted. The pending task then returns to
the ready state (just as if it were pending on the semaphore itself) and receives an
ERROR return value from the eventReceive( ) call that caused it to pend initially.
If, however, the object is deleted between a task's registration call and its
eventReceive( ) call, the task pends anyway. For example, if a semaphore is
deleted while the task is between the semEvStart( ) and eventReceive( ) calls, the
task pends in eventReceive( ), but the event is never sent. It is important, therefore,
to use a timeout other than WAIT_FOREVER when object deletion is expected.
If a task is deleted before a semaphore or message queue sends events to it, the
events can still be sent, but are obviously not received. By default, VxWorks
handles this event-delivery failure silently.
It can, however, be useful for an application that created an object to be informed
when events were not received by the (now absent) task that registered for them.
In this case, semaphores and message queues can be created with an option that
causes an error to be returned if event delivery fails (the
SEM_EVENTSEND_ERROR_NOTIFY and MSG_Q_EVENTSEND_ERROR_NOTIFY
options, respectively). The semGive( ) or msgQSend( ) call then returns ERROR
when the object becomes free.
The error does not mean the semaphore was not given or that the message was not
properly delivered. It simply means the resource could not send events to the
registered task. Note that a failure to send a message or give a semaphore takes
precedence over an events failure.
When events are sent to a task, they are stored in the task’s events register (see
4.15.5 Task Events Register, p.225), which is not directly accessible to the task itself.
When the events specified with an eventReceive( ) call have been received and the
task unpends, the contents of the events register is copied to a variable that is
accessible to the task.
When eventReceive( ) is used with the EVENTS_WAIT_ANY option—which
means that the task unpends for the first of any of the specified events that it
receives—the contents of the events variable can be checked to determine which
event caused the task to unpend.
The eventReceive( ) routine also provides an option that allows for checking
which events have been received prior to the full set being received.
The routines used for working with events are listed in Table 4-15.
eventReceive( ) Pends a task until the specified events have been received. Can
also be used to check what events have been received in the
interim.
For more information about these routines, see the VxWorks API references for
eventLib, semEvLib, and msgQEvLib.
Each task has its own task events register. The task events register is a 32-bit field
used to store the events that the task receives from other tasks (or itself), ISRs,
semaphores, and message queues.
Events 25 to 32 (VXEV25, or 0x01000000, to VXEV32, or 0x80000000) are reserved
for Wind River use only, and should not be used by customers.
As noted above (4.15.3 Accessing Event Flags, p.224), a task cannot access the
contents of its events register directly.
Table 4-16 describes the routines that affect the contents of the events register.
eventReceive( ) Clears or leaves the contents of the task’s events register intact,
depending on the options selected.
For the purpose of debugging systems that make use of events, the taskShow,
semShow, and msgQShow libraries display event information.
The taskShow library displays the following information:
■ the contents of the event register
■ the desired events
■ the options specified when eventReceive( ) was called
The semShow and msgQShow libraries display the following information:
■ the task registered to receive events
■ the events the resource is meant to send to that task
■ the options passed to semEvStart( ) or msgQEvStart( )
4.18 Signals
Signals are an operating system facility designed for handling exceptional
conditions and asynchronously altering the flow of control. In many respects
signals are the software equivalent to hardware interrupts. Signals generated by
the operating system include those produced in response to bus errors and floating
point exceptions. The signal facility also provides APIs that can be used to generate
and manage signals programmatically.
In applications, signals are most appropriate for error and exception handling,
not for general-purpose inter-task communication. Common uses include using
signals to kill processes and tasks, to send signal events when a timer has fired or
message has arrived at a message queue, and so on.
In accordance with POSIX, VxWorks supports 63 signals, each of which has a
unique number and default action (defined in signal.h). The value 0 is reserved for
use as the NULL signal.
Signals can be raised (sent) from tasks to tasks or to processes. Signals can be either
caught (received) or ignored by the receiving task or process. Whether signals are
caught or ignored generally depends on the setting of a signal mask. In the kernel,
signal masks are specific to tasks, and if no task is set up to receive a specific signal,
it is ignored. In user space, signal masks are specific to processes; and some signals,
such as SIGKILL and SIGSTOP, cannot be ignored.
To manage responses to signals, you can create and register signal handling
routines that allow a task to respond to a specific signal in whatever way is useful
for your application.
A kernel task or interrupt service routine (ISR) can raise a signal for a specific task
or process. In the kernel, signal generation and delivery runs in the context of the
task or ISR that generates the signal. In accordance with the POSIX standard, a
signal sent to a process is handled by the first available task that has been set up to
handle the signal in the process.
Each kernel task has a signal mask associated with it. The signal mask determines
which signals the task accepts. By default, the signal mask is initialized with all
signals unblocked (there is no inheritance of mask settings in the kernel). The mask
can be changed with sigprocmask( ).
Signal handlers in the kernel can be registered for a specific task. A signal handler
executes in the receiving task’s context and makes use of that task’s execution
stack. The signal handler is invoked even if the task is blocked (suspended or
pended).
VxWorks provides a software signal facility that includes POSIX routines, UNIX
BSD-compatible routines, and native VxWorks routines. The POSIX-compliant
signal interfaces include both the basic signaling interface specified in the POSIX
standard 1003.1, and the queued-signals extension from POSIX 1003.1b.
Additional, non-POSIX APIs provide support for signals between kernel and user
applications. These non-POSIX APIs are: taskSigqueue( ), rtpSigqueue( ),
rtpTaskSigqueue( ), taskKill( ), rtpKill( ), rtpTaskKill( ), and taskRaise( ).
In the VxWorks kernel—for backward compatibility with prior versions of
VxWorks—the POSIX APIs that would take a process identifier as one of their
parameters take a task identifier instead.
228
4 Multitasking
4.18 Signals
NOTE: Wind River recommends that you do not use both POSIX APIs and
VxWorks APIs in the same application. Doing so may make a POSIX application
non-conformant.
NOTE: POSIX signals are handled differently in the kernel and in real-time
processes. In the kernel the target of a signal is always a task; but in user space, the
target of a signal may be either a specific task or an entire process.
NOTE: The VxWorks implementation of sigLib does not impose any special
restrictions on operations on SIGKILL, SIGCONT, and SIGSTOP signals such as
those imposed by UNIX. For example, the UNIX implementation of signal( )
cannot be called on SIGKILL and SIGSTOP.
The maximum number of queued signals in the kernel is set with the configuration
parameter NUM_SIGNAL_QUEUES. The default value is 16.
Signals are in many ways analogous to hardware interrupts. The basic signal
facility provides a set of 63 distinct signals. A signal handler binds to a particular
signal with sigvec( ) or sigaction( ) in much the same way that an ISR is connected
to an interrupt vector with intConnect( ). A signal can be asserted by calling kill( )
or sigqueue( ). This is similar to the occurrence of an interrupt. The sigprocmask( )
routine lets signals be selectively inhibited. Certain signals are associated with
hardware exceptions. For example, bus errors, illegal instructions, and
floating-point exceptions raise specific signals.
For a list and description of basic POSIX and BSD signal routines provided by
VxWorks in the kernel, see Table 4-17.
VxWorks also provides a POSIX and BSD-like kill( ) routine, which sends a signal
to a task.
VxWorks also provides additional routines that serve as aliases for POSIX
routines, such as rtpKill( ), that provide for sending signals from the kernel to
processes.
For more information about signal routines, see the VxWorks API reference for
sigLib and rtpSigLib.
#include <stdio.h>
#include <signal.h>
#include <taskLib.h>
#include <rtpLib.h>
#ifdef _WRS_KERNEL
#include <private/rtpLibP.h>
#include <private/taskLibP.h>
#include <errnoLib.h>
#endif
void sigMasterHandler
(
int sig, /* caught signal */
#ifdef _WRS_KERNEL
int code,
#else
siginfo_t * pInfo, /* signal info */
#endif
struct sigcontext *pContext /* unused */
);
/****************************************************************************
*
* main - entry point for the queued signal demo
*
* This routine acts the task entry point in the case of the demo spawned as a
* kernel task. It also can act as a RTP entry point in the case of RTP based
* demo.
*/
int main (void)
    {
    struct sigaction sigAction;

    /* register sigMasterHandler as the handler for SIGUSR1 */

    sigAction.sa_flags = SA_SIGINFO;
    sigemptyset (&sigAction.sa_mask);
    sigAction.sa_sigaction =
        (void (*)(int, siginfo_t *, void *)) sigMasterHandler;

    if (sigaction (SIGUSR1, &sigAction, NULL) == ERROR)
        return (ERROR);

    for (;;);
    }
/****************************************************************************
*
* sigMasterHandler - signal handler
*
* This routine is the signal handler for the SIGUSR1 signal
*/
void sigMasterHandler
(
int sig, /* caught signal */
#ifdef _WRS_KERNEL
int code,
#else
siginfo_t * pInfo , /* signal info */
#endif
struct sigcontext *pContext /* unused */
)
{
printf ("Task 0x%x got signal # %d signal value %d \n",
taskIdCurrent, sig,
#ifdef _WRS_KERNEL
code
#else
pInfo->si_value.sival_int
#endif
);
}
/****************************************************************************
*
* sig - helper routine to send a queued signal
*
* This routine can send a queued signal to a kernel task, an RTP task, or an RTP.
* <id> is the ID of the receiver entity. <value> is the value to be sent
* along with the signal. The signal number being sent is SIGUSR1.
*/
#ifdef _WRS_KERNEL
STATUS sig
(
int id,
int val
)
{
union sigval valueCode;
valueCode.sival_int = val;
    if (sigqueue (id, SIGUSR1, valueCode) == ERROR)
        return (ERROR);
return (OK);
}
#endif
The code provided in this example can be used to do any of the following:
■ Send a queued signal to a kernel task.
■ Send a queued signal to a task in a process (RTP).
■ Send a queued signal to a process.
The sig( ) routine provided in this code is a helper routine used to send a queued
signal.
To use the code as a kernel application, VxWorks must be configured with
BUNDLE_NET_SHELL and BUNDLE_POSIX.
To send a queued signal to a kernel task:
1. Build the code with the VxWorks image or as a downloadable kernel module.
Boot the system. If the code is not linked to the system image, load the module;
for example, from the kernel shell:
-> ld < signal_ex.o
value = 8382064 = 0x7fe670
2. Spawn the demo as a kernel task; for example, from the kernel shell:
-> sp main
3. Send a queued signal to the spawned kernel task. From the kernel shell, use the
command:
sig kernelTaskId, signalValue
For example:
-> sig 0x7fd620, 20
value = 0 = 0x0
-> Task 0x7fd620 got signal # 30 signal value 20
sig 0x7fd620, 20
value = 0 = 0x0
-> Task 0x7fd620 got signal # 30 signal value 20
For information on using the code in a process (as an RTP application), see the
VxWorks Application Programmer’s Guide: Multitasking.
The signal event facility allows a pthread or task to receive notification that a
particular event has occurred (such as the arrival of a message at a message queue,
or the firing of a timer) by way of a signal.
The following routines can be used to register for signal notification of their
respective event activities: mq_notify( ), timer_create( ), timer_open( ),
aio_read( ), aio_write( ) and lio_listio( ).
The POSIX 1003.1-2001 standard defines three signal event notification types:
SIGEV_NONE
Indicates that no notification is required when the event occurs. This is useful
for applications that use asynchronous I/O with polling for completion.
SIGEV_SIGNAL
Indicates that a signal is generated when the event occurs.
SIGEV_THREAD
Provides for callback functions for asynchronous notifications done by a
function call within the context of a new thread. This provides a
multi-threaded process with a more natural means of notification than signals.
VxWorks supports this option in user space (processes), but not in the kernel.
The notification type is specified using the sigevent structure, which is defined in
installDir/vxworks-6.x/target/h/sigeventCommon.h. A pointer to the structure is
used in the call to register for signal notification; for example, with mq_notify( ).
To use the signal event facility, configure VxWorks with the INCLUDE_SIGEVENT
component.
Signals are more appropriate for error and exception handling than as a
general-purpose intertask communication mechanism. And normally, signal
handlers should be treated like ISRs: no routine should be called from a signal
handler that might cause the handler to block. Because signals are asynchronous,
it is difficult to predict which resources might be unavailable when a particular
signal is raised.
To be perfectly safe, call only those routines listed in Table 4-20. Deviate from this
practice only if you are certain that your signal handler cannot create a deadlock
situation.
In addition, you should be particularly careful when using C++ for a signal
handler or when invoking a C++ method from a signal handler written in C or
assembly. Some of the issues involved in using C++ include the following:
■ The VxWorks intConnect( ) and signal( ) routines require the address of the
function to execute when the interrupt or signal occurs, but the address of a
non-static member function cannot be used, so static member functions must
be implemented.
■ Objects cannot be instantiated or deleted in signal handling code.
■ C++ code used to execute in a signal handler should restrict itself to Embedded
C++. Neither exceptions nor run-time type identification (RTTI) should be used.
Library Routines
eventLib eventSend( )
logLib logMsg( )
msgQLib msgQSend( )
sigLib kill( )
For example, if a task is interrupted in the middle of a call to a non-reentrant
routine such as malloc( ), a signal handler that calls longjmp( ) could leave the
kernel in an inconsistent state.
These scenarios are very difficult to debug, and should be avoided. One safe way
to synchronize a signal handler with other elements of the application code is to
set up dedicated flags and data structures that are written by signal handlers and
read by the other elements. This ensures consistent usage of the shared data. In
addition, because a signal can occur at any time, the other elements of the
application code must periodically check whether the synchronizing data
structure or flag has been modified in the background by a signal handler, and
then act accordingly. The volatile keyword is useful for memory locations that are
accessed from both a signal handler and other elements of the application.
Taking a mutex semaphore in a signal handler is an especially bad idea. Mutex
semaphores can be taken recursively. A signal handler can therefore easily
re-acquire a mutex that was taken by any other element of the application. Since
the signal handler is an asynchronously executing entity, it has thereby broken the
mutual exclusion that the mutex was supposed to provide.
Taking a binary semaphore in a signal handler is an equally bad idea. If any other
element has already taken it, the signal handler will cause the task to block on
itself. This is a deadlock from which no recovery is possible. Counting semaphores,
if available, suffer from the same issue as mutexes, and if unavailable, are
equivalent to the binary semaphore situation that causes an unrecoverable
deadlock.
On a general note, the signal facility should be used only for notifying/handling
exceptional or error conditions. Usage of signals as a general purpose IPC
mechanism or in the data flow path of an application can cause some of the pitfalls
described above.
4.19 Watchdog Timers
Watchdog timers allow any C function to be connected to a specified time delay.
Watchdog routines execute as part of the system clock ISR; therefore, the
restrictions that apply to ISRs also apply to routines connected to watchdog
timers. The functions in Table 4-21 are provided by the wdLib library.
Call Description
wdCreate( ) Allocates and initializes a watchdog timer.
A watchdog timer is first created by calling wdCreate( ). Then the timer can be
started by calling wdStart( ), which takes as arguments the number of ticks to
delay, the C function to call, and an argument to be passed to that function. After
the specified number of ticks have elapsed, the function is called with the specified
argument. The watchdog timer can be canceled any time before the delay has
elapsed by calling wdCancel( ).
/* includes */
#include <vxWorks.h>
#include <logLib.h>
#include <wdLib.h>
#include <sysLib.h>
/* defines */
#define SECONDS (3)
WDOG_ID myWatchDogId;
STATUS task (void)
    {
    /* Create watchdog */
    if ((myWatchDogId = wdCreate( )) == NULL)
        return (ERROR);

    /* Set timer to go off in SECONDS seconds, logging a message */
    if (wdStart (myWatchDogId, sysClkRateGet( ) * SECONDS,
                 (FUNCPTR) logMsg, (int) "watchdog timer expired\n") == ERROR)
        return (ERROR);

    return (OK);
    }
For information about POSIX timers, see 5.6 POSIX Clocks and Timers, p.261.
4.20 Interrupt Service Routines
For information about interrupt locks and latency, see 4.11.1 Interrupt Locks and
Latency, p.198.
NOTE: The intLock( ) and intUnlock( ) routines are provided for the UP
configuration of VxWorks, but not the SMP configuration. Several alternatives are
available for SMP systems, including the ISR-callable spinlock, which defaults to
intLock( ) and intUnlock( ) behavior in a UP system. For more information, see
15.6.1 ISR-Callable Spinlocks, p.684 and 15.15 Migrating Code to VxWorks SMP,
p.704.
You can use system hardware interrupts other than those used by VxWorks.
VxWorks provides the routine intConnect( ), which allows C functions to be
connected to any interrupt. The arguments to this routine are the byte offset of the
interrupt vector to connect to, the address of the C function to be connected, and 4
an argument to pass to the function. When an interrupt occurs with a vector
established in this way, the connected C function is called at interrupt level with
the specified argument. When the interrupt handling is finished, the connected
function returns. A routine connected to an interrupt in this way is called an
interrupt service routine (ISR).
Interrupts cannot actually vector directly to C functions. Instead, intConnect( )
builds a small amount of code that saves the necessary registers, sets up a stack
entry (either on a special interrupt stack, or on the current task’s stack) with the
argument to be passed, and calls the connected function. On return from the
function it restores the registers and stack, and exits the interrupt; see Figure 4-15.
For target boards with VME backplanes, the BSP provides two standard routines
for controlling VME bus interrupts, sysIntEnable( ) and sysIntDisable( ).
All ISRs use the same interrupt stack. This stack is allocated and initialized by the
system at startup according to specified configuration parameters. It must be large
enough to handle the worst possible combination of nested interrupts.
! CAUTION: Some architectures do not permit using a separate interrupt stack, and
ISRs use the stack of the interrupted task. With such architectures, make sure to
create tasks with enough stack space to handle the worst possible combination of
nested interrupts and the worst possible combination of ordinary nested calls. See
the VxWorks reference for your BSP to determine whether your architecture
supports a separate interrupt stack. If it does not, also see Task Stack Protection,
p.176.
Use the checkStack( ) facility during development to see how close your tasks and
ISRs have come to exhausting the available stack space.
In addition to experimenting with stack size, you can also configure and test
systems with guard zone protection for interrupt stacks (for more information, see
Interrupt Stack Protection, p.244).
By default, interrupt (and task) stacks are filled with 0xEE. Filling stacks is useful
during development for debugging with the checkStack( ) routine. It is generally
not used in deployed systems because not filling stacks provides better
performance. You can use the VX_GLOBAL_NO_STACK_FILL configuration
parameter (when you configure VxWorks) to disable stack filling for all interrupts
(and tasks) in the system.
The sizes of the guard zones are defined by the following configuration
parameters:
■ INTERRUPT_STACK_OVERFLOW_SIZE for interrupt stack overflow size.
■ INTERRUPT_STACK_UNDERFLOW_SIZE for interrupt stack underflow size.
The value of these parameters can be modified to increase the size of the guard
zone. The size of a guard zone is rounded up to the CPU MMU page size. The
insertion of a guard zone can be prevented by setting the parameter to zero.
There are some restrictions on the routines you can call from an ISR. For example,
you cannot use routines like printf( ), malloc( ), and semTake( ) in your ISR. You
can, however, use semGive( ), logMsg( ), msgQSend( ), and bcopy( ). For more
information, see 4.20.5 Special Limitations of ISRs, p.246.
Two basic techniques for debugging an ISR are to use logMsg( ) and to have global
variables that are incremented each time through the ISR. You can then print the
globals from the shell. You can also use global variables from the shell to turn on
different condition flows through the ISR.
In order to help reduce the occurrences of work queue overflows, system architects
can use the WIND_JOBS_MAX kernel configuration parameter to increase the size
of the kernel work queue. However, in most cases this is simply hiding the root
cause of the overflow.
Many VxWorks facilities are available to ISRs, but there are some important
limitations. These limitations stem from the fact that an ISR does not run in a
regular task context and has no task control block, so all ISRs share a single stack.
For this reason, the basic restriction on ISRs is that they must not invoke routines
that might cause the caller to block. For example, they must not try to take a
semaphore, because if the semaphore is unavailable, the kernel tries to switch the
caller to the pended state. However, ISRs can give semaphores, releasing any tasks
waiting on them.
Because the memory facilities malloc( ) and free( ) take a semaphore, they cannot
be called by ISRs, and neither can routines that make calls to malloc( ) and free( ).
For example, ISRs cannot call any creation or deletion routines.
ISRs also must not perform I/O through VxWorks drivers. Although there are no
inherent restrictions in the I/O system, most device drivers require a task context
because they might block the caller to wait for the device. An important exception
is the VxWorks pipe driver, which is designed to permit writes by ISRs.
VxWorks supplies a logging facility, in which a logging task prints text messages
to the system console. This mechanism was specifically designed for ISR use, and
is the most common way to print messages from ISRs. For more information, see
the VxWorks API reference for logLib.
An ISR also must not call routines that use a floating-point coprocessor. In
VxWorks, the interrupt driver code created by intConnect( ) does not save and
restore floating-point registers; thus, ISRs must not include floating-point
instructions. If an ISR requires floating-point instructions, it must explicitly save
and restore the registers of the floating-point coprocessor using routines in
fppArchLib.
In addition, you should be particularly careful when using C++ for an ISR or when
invoking a C++ method from an ISR written in C or assembly. Some of the issues
involved in using C++ include the following:
■ The VxWorks intConnect( ) routine requires the address of the function to
execute when the interrupt occurs, but the address of a non-static member
function cannot be used, so static member functions must be implemented.
■ Objects cannot be instantiated or deleted in ISR code.
■ C++ code used to execute in an ISR should restrict itself to Embedded C++.
Neither exceptions nor run-time type identification (RTTI) should be used.
All VxWorks utility libraries, such as the linked-list and ring-buffer libraries, can
be used by ISRs. As discussed earlier (4.5 Task Error Status: errno, p.184), the global
variable errno is saved and restored as a part of the interrupt enter and exit code
generated by the intConnect( ) facility. Thus, errno can be referenced and
modified by ISRs as in any other code. Table 4-23 lists routines that can be called
from ISRs.
Library Routine
eventLib eventSend( )
logLib logMsg( )
msgQLib msgQSend( )
pipeDrv write( )
semPxLib sem_post( )
sigLib kill( )
This exception usually occurs when kernel calls are made from interrupt level at a
very high rate. It generally indicates a problem with clearing the interrupt signal
or a similar driver problem. (See 4.20.4 ISRs and the Kernel Work Queue, p.245.)
The VxWorks interrupt support described earlier in this section is acceptable for
most applications. However, on occasion, low-level control is required for events
such as critical motion control or system failure response. In such cases it is
desirable to reserve the highest interrupt levels to ensure zero-latency response to
these events. To achieve zero-latency response, VxWorks provides the routine
intLockLevelSet( ), which sets the system-wide interrupt-lockout level to the
specified level. If you do not specify a level, the default is the highest level
supported by the processor architecture. For information about
architecture-specific implementations of intLockLevelSet( ), see the VxWorks
Architecture Supplement.
! CAUTION: Some hardware prevents masking certain interrupt levels; check the
hardware manufacturer’s documentation.
ISRs connected to interrupt levels that are not locked out (either an interrupt level
higher than that set by intLockLevelSet( ), or an interrupt level defined in
hardware as non-maskable) have special restrictions:
■ The ISR can be connected only with intVecSet( ).
■ The ISR cannot use any VxWorks operating system facilities that depend on
interrupt locks for correct operation. The effective result is that the ISR cannot
safely make any call to any VxWorks function, except reboot.
For more information, see the VxWorks Architecture Supplement for the architecture
in question.
! WARNING: The use of NMI with any VxWorks functionality, other than reboot, is
not recommended. Routines marked as interrupt safe do not imply they are NMI
safe and, in fact, are usually the very ones that NMI routines must not call (because
they typically use intLock( ) to achieve the interrupt safe condition).
While it is important that VxWorks support direct connection of ISRs that run at
interrupt level, interrupt events usually propagate to task-level code. Many
VxWorks facilities are not available to interrupt-level code, including I/O to any
device other than pipes. The following techniques can be used to communicate
from ISRs to task-level code:
■ Shared Memory and Ring Buffers
ISRs can share variables, buffers, and ring buffers with task-level code.
■ Semaphores
ISRs can give semaphores (except mutual-exclusion semaphores and VxMP
shared semaphores), releasing any tasks pended on them.
■ Message Queues
ISRs can send messages to message queues for tasks to receive (except for
shared message queues using VxMP). If the queue is full, the message is
discarded.
■ Pipes
ISRs can write messages to pipes that tasks can read. Tasks and ISRs can write
to the same pipes. However, if the pipe is full, the message written is discarded
because the ISR cannot block. ISRs must not invoke any I/O routine on pipes
other than write( ).
■ Signals
ISRs can raise a signal for a task, causing asynchronous execution of the task's
signal handler (see 4.18 Signals, p.227).
5
POSIX Facilities
5.1 Introduction
VxWorks provides extensive POSIX support in many of its native kernel libraries.
To facilitate application portability, VxWorks provides additional POSIX
interfaces as optional components. In the kernel, VxWorks implements some of the
traditional interfaces described by the POSIX standard IEEE Std 1003.1 (POSIX.1)
as well as many of the real-time interfaces in the POSIX.1 optional functionality.
For detailed information about POSIX standards and facilities, see The Open
Group Web sites at http://www.opengroup.org/ and http://www.unix.org/.
While VxWorks provides many POSIX compliant APIs, not all POSIX APIs are
suitable for embedded and real-time systems, or are entirely compatible with the
VxWorks operating system architecture. In a few cases, therefore, Wind River has
imposed minor limitations on POSIX functionality to serve either real-time
systems or VxWorks compatibility. For example:
■ Swapping memory to disk is not appropriate in real-time systems, and
VxWorks provides no facilities for doing so. It does, however, provide POSIX
page-locking routines to facilitate porting code to VxWorks. The routines
otherwise provide no useful function—pages are always locked in VxWorks
systems (for more information see 5.9 POSIX Page-Locking Interface, p.266).
■ VxWorks tasks are scheduled on a system-wide basis; processes themselves
cannot be scheduled. As a consequence, while POSIX access routines allow
two values for contention scope (PTHREAD_SCOPE_SYSTEM and
PTHREAD_SCOPE_PROCESS), only system-wide scope is implemented in
VxWorks for these routines (for more information, see 5.10 POSIX Threads,
p.266 and 5.12 POSIX and VxWorks Scheduling, p.280).
Any such limitations on POSIX functionality are identified in this chapter, or in
other chapters of this guide that provide more detailed information on specific
POSIX APIs.
POSIX and VxWorks Semaphores, p.293, although POSIX semaphores are also
implemented in VxWorks.
VxWorks extensions to POSIX are identified as such.
NOTE: This chapter provides information about POSIX facilities available in the
kernel. For information about facilities available for real-time processes (RTPs), see
the corresponding chapter in the VxWorks Application Programmer’s Guide.
! CAUTION: The set of components used for POSIX support in kernel space is not the
same as the set of components used for POSIX support in user space. For
information about the components for user space, see Table 5-1, and see the
VxWorks Application Programmer's Guide: POSIX Facilities for the appropriate
component bundle.

5.2 Configuring VxWorks with POSIX Facilities
Table 5-1 provides an overview of the individual VxWorks components that must
be configured in the kernel to provide support for the specified POSIX facilities.
Networking facilities are described in the Wind River Network Stack for VxWorks 6
Programmer’s Guide.
5.3 General POSIX Support
The following sections of this chapter describe the optional POSIX API
components that are provided in addition to the native VxWorks APIs.
! CAUTION: Wind River advises that you do not use both POSIX libraries and native
VxWorks libraries that provide similar functionality. Doing so may result in
undesirable interactions between the two, as some POSIX APIs manipulate
resources that are also used by native VxWorks APIs. For example, do not use
tickLib routines to manipulate the system's tick counter if you are also using
clockLib routines, do not use the taskLib API to change the priority of a POSIX
thread instead of the pthread API, and so on.
A POSIX application can use the following APIs at run-time to determine the
status of POSIX support in the system:
■ The sysconf( ) routine returns the current values of the configurable system
variables, allowing an application to determine whether an optional feature is
supported or not, as well as the precise values of the system's limits.
■ The confstr( ) routine returns a string associated with a system variable. With
this release, the confstr( ) routine returns a string only for the system's default
path.
■ The uname( ) routine lets an application get information about the system on
which it is running. The identification information provided for VxWorks is
the system name (VxWorks), the network name of the system, the system's
release number, the machine name (BSP model), the architecture's endianness,
the kernel version number, the processor name (CPU family), the BSP revision
level, and the system's build date.
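For example (portable POSIX C; systemInfoShow( ) is an illustrative name, and the values reported are system-specific):

```c
#include <unistd.h>
#include <sys/utsname.h>
#include <stdio.h>

/* Print one configurable system value and the uname identification.
 * Returns 0 on success. */
int systemInfoShow (void)
    {
    struct utsname info;
    long pageSize = sysconf (_SC_PAGESIZE);   /* a configurable variable */

    if (pageSize <= 0 || uname (&info) != 0)
        return -1;

    printf ("page size: %ld\n", pageSize);
    printf ("system: %s release: %s machine: %s\n",
            info.sysname, info.release, info.machine);
    return 0;
    }
```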
pthread.h pthreads
semaphore.h semaphores
signal.h signals
POSIX Clocks
POSIX defines various software (virtual) clocks, which are identified as the
CLOCK_REALTIME clock, CLOCK_MONOTONIC clock, process CPU-time clocks,
and thread CPU-time clocks. These clocks all use one system hardware timer.
The real-time clock and the monotonic clock are system-wide clocks, and are
therefore supported for both the VxWorks kernel and processes. The process
CPU-time clocks are not supported in VxWorks. The thread CPU-time clocks are
supported for POSIX threads running in processes. A POSIX thread can use the
real-time clock, the monotonic clock, and a thread CPU-time clock for its
application.
For information about thread CPU-time clocks, see the VxWorks Application
Programmer’s Guide: POSIX Facilities.
The real-time clock can be reset (but only from the kernel). The monotonic clock
cannot be reset, and provides the time that has elapsed since the system booted.
The real-time clock can be accessed with the POSIX clock and timer routines by
using the clock_id parameter CLOCK_REALTIME. A real-time clock can be reset at
run time with a call to clock_settime( ) from within the kernel (not from a process).
The monotonic clock can be accessed by calling clock_gettime( ) with a clock_id
parameter of CLOCK_MONOTONIC. A monotonic clock keeps track of the time
that has elapsed since system startup; that is, the value returned by
clock_gettime( ) is the amount of time (in seconds and nanoseconds) that has
passed since the system booted. A monotonic clock cannot be reset. Applications
can therefore rely on the fact that any measurement of a time interval that they
might make has not been falsified by a call to clock_settime( ).
Both CLOCK_REALTIME and CLOCK_MONOTONIC are defined in time.h.
See Table 5-4 for a list of the POSIX clock routines. The obsolete VxWorks-specific
POSIX extension clock_setres( ) is provided for backwards-compatibility
purposes. For more information about clock routines, see the API reference for
clockLib.
To include the clockLib library in the system, configure VxWorks with the
INCLUDE_POSIX_CLOCKS component. For thread CPU-time clocks, the
INCLUDE_POSIX_PTHREAD_SCHEDULER and
INCLUDE_POSIX_THREAD_CPUTIME components must be used as well.
5.6 POSIX Clocks and Timers
POSIX Timers
The POSIX timer facility provides routines for tasks to signal themselves at some
time in the future. Routines are provided to create, set, and delete a timer.
Timers are created based on clocks. In the kernel, the CLOCK_REALTIME and
CLOCK_MONOTONIC clocks are supported for timers. In processes, the
CLOCK_REALTIME clock, CLOCK_MONOTONIC clock, and thread CPU-time
clocks (including the CLOCK_THREAD_CPUTIME_ID clock) are supported.
When a timer goes off, the default signal, SIGALRM, is sent to the task. To install a
signal handler that executes when the timer expires, use the sigaction( ) routine
(see 4.18 Signals, p.227).
See Table 5-5 for a list of the POSIX timer routines. The VxWorks timerLib library
includes a set of VxWorks-specific POSIX extensions: timer_open( ),
timer_close( ), timer_cancel( ), timer_connect( ), and timer_unlink( ). These
routines allow for an easier and more powerful use of POSIX timers on VxWorks.
For more information, see the VxWorks API reference for timerLib.
Routine Description
timer_create( ) Allocate a timer using the specified clock for a timing base
(CLOCK_REALTIME or CLOCK_MONOTONIC).
timer_gettime( ) Get the remaining time before expiration and the reload
value.
timer_settime( ) Set the time until the next expiration and arm timer.
263
VxWorks
Kernel Programmer's Guide, 6.6
Routine Description
nanosleep( ) Suspend the current pthread (task) until the time interval
elapses.
/* includes */
#include <vxWorks.h>
#include <stdio.h>
#include <time.h>

/* createTimer - create a POSIX timer (the routine name is illustrative) */

STATUS createTimer (void)
    {
    timer_t timerid;

    /* create timer */
    if (timer_create (CLOCK_REALTIME, NULL, &timerid) == ERROR)
        {
        printf ("create FAILED\n");
        return (ERROR);
        }
    return (OK);
    }
5.7 POSIX Asynchronous I/O
To include the mmanPxLib library in the system, configure VxWorks with the
INCLUDE_POSIX_MEM component.
5.10 POSIX Threads
■ For porting POSIX applications to VxWorks.
■ To make use of the POSIX thread scheduler in real-time processes (including concurrent scheduling policies).
For information about the POSIX thread scheduler, see 5.12 POSIX and VxWorks
Scheduling, p.280.
A major difference between VxWorks tasks and POSIX threads is the way in which
options and settings are specified. For VxWorks tasks these options are set with the
task creation API, usually taskSpawn( ).
POSIX threads, on the other hand, have characteristics that are called attributes.
Each attribute contains a set of values, and a set of access routines to retrieve and set
those values. You specify all pthread attributes before pthread creation in the
attributes object pthread_attr_t. In a few cases, you can dynamically modify the
attribute values of a pthread after its creation.
Pthread Name
While POSIX threads are not named entities, the VxWorks tasks upon which they
are based are named. By default the underlying task elements are named
pthrNumber (for example, pthr3). The number part of the name is incremented
each time a new thread is created (with a roll-over at 2^32 - 1). It is, however,
possible to name these tasks using the thread name attribute.
■ Attribute Name: threadname
■ Possible Values: a null-terminated string of characters
■ Default Value: none (the default naming policy is used)
■ Access Functions (VxWorks-specific POSIX extensions): pthread_attr_setname( ) and pthread_attr_getname( )
Pthread Options
POSIX threads are agnostic with regard to target architecture. Some VxWorks
tasks, on the other hand, may be created with specific options in order to benefit
from certain features of the architecture. For example, for the Altivec-capable
PowerPC architecture, tasks must be created with the VX_ALTIVEC_TASK option
in order to make use of the Altivec unit. The pthread options attribute can be used
to set such options for the VxWorks task upon which the POSIX thread is based.
■ Attribute Name: threadoptions
■ Possible Values: the same as the VxWorks task options. See taskLib.h
■ Default Value: none (the default task options are used)
■ Access Functions (VxWorks-specific POSIX extensions):
pthread_attr_setopt( ) and pthread_attr_getopt( )
The following examples create a pthread, first using explicit attributes and then
using the default attributes. The entry routine and argument names are illustrative.

pthread_t tid;
pthread_attr_t attr;
int ret;

pthread_attr_init (&attr);
ret = pthread_create (&tid, &attr, entryFunction, entryArg);

pthread_t tid;
int ret;

ret = pthread_create (&tid, NULL, entryFunction, entryArg);
pthread_t threadId;
pthread_attr_t attr;
void * stackbase = NULL;
int stacksize = 5000;    /* arbitrary stack size */

pthread_attr_init (&attr);

/*
 * Allocate memory for a stack region for the thread. malloc() is used
 * for simplification since a real-life case is likely to use
 * memPartAlloc() on the kernel side, or mmap() on the user side.
 */

stackbase = malloc (stacksize);
if (stackbase == NULL)
    {
    printf ("FAILED: mystack: malloc failed\n");
    return (-1);
    }
VxWorks provides many POSIX thread routines. Table 5-7 lists a few that are
directly relevant to pthread creation or execution. See the VxWorks API reference
for information about the other routines, and more details about all of them.
POSIX threads can store and access private data; that is, pthread-specific data.
They use a key maintained for each pthread by the pthread library to access that
data. A key corresponds to a location associated with the data. It is created by
calling pthread_key_create( ) and released by calling pthread_key_delete( ). The
location is accessed by calling pthread_getspecific( ) and pthread_setspecific( ).
This location represents a pointer to the data, and not the data itself, so there is no
limitation on the size and content of the data associated with a key.
The pthread library supports a maximum of 256 keys for all the pthreads in the
kernel.
The pthread_key_create( ) routine has an option for a destructor function, which
is called when the creating pthread exits or is cancelled, if the value associated with
the key is non-NULL.
This destructor function frees the storage associated with the data itself, not
with the key. It is important to set a destructor function to prevent memory
leaks when the pthread that allocated memory for the data is cancelled. The key
itself should also be freed, by calling pthread_key_delete( ); otherwise the
key cannot be reused by the pthread library.
NOTE: While the msync( ), fcntl( ), and tcdrain( ) routines are mandated POSIX
1003.1 cancellation points, they are not provided with VxWorks for this release.
Library    Routine
aioPxLib   aio_suspend( )
semPxLib   sem_wait( )
Routines that can be used with cancellation points of pthreads are listed in
Table 5-10.
The routines that can be used to act directly on a mutex object and on the mutex
attribute object are listed in Table 5-11 and Table 5-12 (respectively).
5.11 POSIX Thread Mutexes and Condition Variables
The protocol mutex attribute defines how the mutex variable deals with the
priority inversion problem (which is described in the section for VxWorks
mutual-exclusion semaphores; see 4.12.3 Mutual-Exclusion Semaphores, p.206).
■ Attribute Name: protocol
■ Possible Values: PTHREAD_PRIO_NONE, PTHREAD_PRIO_INHERIT and
PTHREAD_PRIO_PROTECT
■ Access Routines: pthread_mutexattr_getprotocol( ) and
pthread_mutexattr_setprotocol( )
The PTHREAD_PRIO_INHERIT option is the default value of the protocol attribute
for pthreads created in the kernel (unlike pthreads created in processes, for which
the default is PTHREAD_PRIO_NONE).
The PTHREAD_PRIO_INHERIT value is used to create a mutex with priority
inheritance—and is equivalent to the association of SEM_Q_PRIORITY and
SEM_INVERSION_SAFE options used with semMCreate( ). A pthread owning a
mutex variable created with the PTHREAD_PRIO_INHERIT value inherits the
priority of any higher-priority pthread waiting for the mutex and executes at this
elevated priority until it releases the mutex, at which point it returns to its original
priority.
The prioceiling attribute is the POSIX priority ceiling for mutex variables created
with the protocol attribute set to PTHREAD_PRIO_PROTECT.
■ Attribute Name: prioceiling
■ Possible Values: any valid (POSIX) priority value (0-255, with zero being the
lowest).
■ Access Routines: pthread_mutexattr_getprioceiling( ) and
pthread_mutexattr_setprioceiling( )
■ Dynamic Access Routines: pthread_mutex_getprioceiling( ) and
pthread_mutex_setprioceiling( )
Note that the POSIX priority numbering scheme is the inverse of the VxWorks
scheme. For more information see 5.12.2 POSIX and VxWorks Priority Numbering,
p.282.
A priority ceiling is defined by the following conditions:
■ Any pthread attempting to acquire a mutex, whose priority is higher than the
ceiling, cannot acquire the mutex.
■ Any pthread whose priority is lower than the ceiling value has its priority
elevated to the ceiling value for the duration that the mutex is held.
■ The pthread’s priority is restored to its previous value when the mutex is
released.
The routines that can be used to act directly on a condition variable and on the
condition variable attribute object are listed in Table 5-13 and Table 5-14 (respectively).
NOTE: Wind River recommends that you do not use both POSIX APIs and
VxWorks APIs in the same application. Doing so may make a POSIX application
non-compliant.
Table 5-15 provides an overview of how scheduling works for tasks and pthreads,
for each of the schedulers, in both the kernel and processes (RTPs). The key
differences are the following:
■ The POSIX thread scheduler provides POSIX scheduling support for threads running in processes.
■ In all other cases, the POSIX thread scheduler schedules pthreads and tasks in the same (non-POSIX) manner as the traditional VxWorks scheduler. (There is a minor difference between how it handles tasks and pthreads whose priorities have been lowered; see Differences in Re-Queuing Pthreads and Tasks With Lowered Priorities, p.288.)
■ The traditional VxWorks scheduler cannot be used to schedule pthreads in processes. In fact, pthreads cannot be started in processes unless VxWorks is configured with the POSIX thread scheduler.
The information provided in Table 5-15 is discussed in detail in subsequent
sections.
5.12 POSIX and VxWorks Scheduling

Table 5-15 Task and Pthread Scheduling in the Kernel and in Processes
The POSIX priority numbering scheme is the inverse of the VxWorks priority
numbering scheme. In POSIX, the higher the number, the higher the priority. In
VxWorks, the lower the number, the higher the priority, where 0 is the highest
priority.
The priority numbers used with the POSIX scheduling library, schedPxLib, do not,
therefore, match those used and reported by all other components of VxWorks.
You can change the default POSIX numbering scheme by setting the global
variable posixPriorityNumbering to FALSE. If you do so, schedPxLib uses the
VxWorks numbering scheme (a smaller number means a higher priority) and its
priority numbers match those used by the other components of VxWorks.
In the following sections, discussions of pthreads and tasks at the same priority level
refer to functionally equivalent priority levels, and not to priority numbers.
All VxWorks tasks and pthreads are scheduled according to the system-wide
default scheduling policy. The only exception to this rule is for pthreads running
in user mode (in processes). In this case, concurrent scheduling policies that differ
from the system default can be applied to pthreads.
Note that pthreads can be run in processes only if VxWorks is configured with the
POSIX thread scheduler; they cannot be run in processes if VxWorks is configured
with the traditional scheduler.
The system-wide default scheduling policy for VxWorks, regardless of which
scheduler is used, is priority-based preemptive scheduling—which corresponds to
the POSIX SCHED_FIFO scheduling policy.
At run-time the active system-wide default scheduling policy can be changed to
round-robin scheduling with the kernelTimeSlice( ) routine. It can be changed
back by calling kernelTimeSlice( ) with a parameter of zero. VxWorks
round-robin scheduling corresponds to the POSIX SCHED_RR policy.
The kernelTimeSlice( ) routine cannot be called in user mode (that is, from a
process). A call with a non-zero parameter immediately affects all kernel and user
tasks, all kernel pthreads, and all user pthreads using the SCHED_OTHER policy.
Any user pthreads running with the SCHED_RR policy are unaffected by the call;
but those started after it use the newly defined timeslice.
The VxWorks traditional scheduler can be used with both tasks and pthreads in
the kernel. It cannot be used with pthreads in processes. If VxWorks is configured
with the traditional scheduler, a pthread_create( ) call in a process fails and the
errno is set to ENOSYS.
The traditional VxWorks scheduler schedules pthreads as if they were tasks. All
tasks and pthreads executing in a system are therefore subject to the current
default scheduling policy (either the priority-based preemptive policy or the
round-robin scheduling policy; see 5.12.3 Default Scheduling Policy, p.283), and
concurrent policies cannot be applied to individual pthreads. For general
information about the traditional scheduler and how it works with tasks, see
4.3 Task Scheduling, p.166.
The scheduling options provided by the traditional VxWorks scheduler are similar
to the POSIX ones. The following pthreads scheduling policies correspond to the
traditional VxWorks scheduling policies:
■ SCHED_FIFO is similar to VxWorks priority-based preemptive scheduling.
There are differences as to where tasks or pthreads are placed in the ready
queue if their priority is lowered; see Caveats About Scheduling Behavior with the
POSIX Thread Scheduler, p.287.
■ SCHED_RR corresponds to VxWorks round-robin scheduling.
■ SCHED_OTHER corresponds to the current system-wide default scheduling
policy. The SCHED_OTHER policy is the default policy for pthreads in
VxWorks.
There is no VxWorks traditional scheduler policy that corresponds to
SCHED_SPORADIC.
Concurrent scheduling policies are not supported for pthreads in the kernel, and
care must therefore be taken with pthread scheduling-inheritance and scheduling
policy attributes.
If the scheduling-inheritance attribute is set to PTHREAD_EXPLICIT_SCHED and
the scheduling policy to SCHED_FIFO or SCHED_RR, and this policy does not
match the current system-wide default scheduling policy, the creation of pthreads
fails.
Wind River therefore recommends that you always use
PTHREAD_INHERIT_SCHED (which is the default) as the scheduling-inheritance
attribute. In this case the current VxWorks scheduling policy applies, and the
parent pthread's priority is used. Or, if the pthread must be started with a different
priority than its parent, the scheduling-inheritance attribute can be set to
PTHREAD_EXPLICIT_SCHED and the scheduling policy attribute set to
SCHED_OTHER (which corresponds to the current system-wide default
scheduling policy).
In order to take advantage of the POSIX scheduling model, VxWorks must be
configured with the POSIX thread scheduler, and the pthreads in question must be
run in processes (RTPs). See 5.12.5 POSIX Threads Scheduler, p.285.
The POSIX thread scheduler can be used to schedule both pthreads and tasks in a
VxWorks system. Note that the purpose of the POSIX thread scheduler is to
provide POSIX scheduling support for pthreads running in processes. There is no
reason to use it in a system that does not require this support (kernel-only systems,
or systems with processes but without pthreads).
The POSIX thread scheduler is required for running pthreads in processes, where it
provides compliance with POSIX 1003.1 for pthread scheduling (including
concurrent scheduling policies). If VxWorks is not configured with the POSIX
thread scheduler, pthreads cannot be created in processes.
NOTE: The POSIX priority numbering scheme is the inverse of the VxWorks
scheme, so references to a given priority level or same level in comparisons of these
schemes refer to functionally equivalent priority levels, and not to priority
numbers. For more information about the numbering schemes see 5.12.2 POSIX
and VxWorks Priority Numbering, p.282.
The POSIX thread scheduler schedules kernel tasks and kernel pthreads in the same
manner as the traditional VxWorks task scheduler. See 4.3 Task Scheduling, p.166
for information about the traditional scheduler and how it works with VxWorks
tasks, and 5.12.4 VxWorks Traditional Scheduler, p.284 for information about how
it handles pthreads.
Scheduling in Processes
When VxWorks is configured with the POSIX thread scheduler, tasks executing in
processes are scheduled according to the system-wide default scheduling policy. On
the other hand, pthreads executing in processes are scheduled according to POSIX
1003.1. Scheduling policies can be assigned to each pthread and changed
dynamically. The scheduling policies are as follows:
■ SCHED_FIFO is a preemptive priority scheduling policy. For a given priority
level, pthreads scheduled with this policy are handled as peers of the VxWorks
tasks at the same level. There is a slight difference in how pthreads and tasks
are handled if their priorities are lowered (for more information, see Differences
in Re-Queuing Pthreads and Tasks With Lowered Priorities, p.288).
■ SCHED_RR is a per-priority round-robin scheduling policy. For a given
priority level, all pthreads scheduled with this policy are given the same time
of execution (time-slice) before giving up the CPU.
■ SCHED_SPORADIC is a policy used for aperiodic activities, which ensures that
the pthreads associated with the policy are served periodically at a high
priority for a bounded amount of time, and a low background priority at all
other times.
■ SCHED_OTHER corresponds to the scheduling policy currently in use for
VxWorks tasks, which is either preemptive priority or round-robin. Pthreads
scheduled with this policy are submitted to the system's global scheduling
policy, exactly like VxWorks tasks or kernel pthreads.
Note the following with regard to the VxWorks implementation of the
SCHED_SPORADIC policy:
■ The system periodic clock is used for time accounting.
■ Dynamically changing the scheduling policy to SCHED_SPORADIC is not supported; however, dynamically changing the policy from SCHED_SPORADIC to another policy is supported.
■ VxWorks does not impose an upper limit on the maximum number of replenishment events with the SS_REPL_MAX macro. A default of 40 events is set with the sched_ss_max_repl field of the thread attribute structure, which can be changed.
Using the POSIX thread scheduler involves a few complexities that should be
taken into account when designing your system. Care should be taken with regard
to the following:
■ Using both round-robin and priority-based preemptive scheduling policies.
■ Running pthreads with the individual SCHED_OTHER policy.
■ Differences in re-queuing pthreads and tasks with lowered priorities.
■ Backwards compatibility issues for POSIX applications designed for the VxWorks traditional scheduler.
The POSIX thread scheduler re-queues pthreads that have had their priority
lowered differently than it re-queues tasks that have had their priority lowered.
The difference is as follows:
■ When the priority of a pthread is lowered (with the pthread_setschedprio( ) routine), the POSIX thread scheduler places it at the head of the priority list.
■ When the priority of a task is lowered (with the taskPrioritySet( ) routine), the POSIX thread scheduler places it at the tail of the priority list, which is the same as what the traditional VxWorks scheduler would do.
What this means is that lowering the priority of a task and a pthread may have a
different effect on when they will run (if there are other tasks or pthreads in their
priority list). For example, if a task and a pthread each have their priority lowered
to effectively the same level, the pthread will be at the head of the priority list and
the task at the end. The pthread will run before any other pthreads or tasks at this
level, and the task after any other pthreads or tasks.
Backwards Compatibility Issues for Applications
Using the POSIX thread scheduler changes the behavior of POSIX applications that
were written to run with the traditional VxWorks scheduler. For existing POSIX
applications that require backward-compatibility, the scheduling policy can be
changed to SCHED_OTHER for all pthreads. This causes their policy to default to
the active VxWorks task scheduling policy (as was the case before the introduction
of the POSIX thread scheduler).
The POSIX 1003.1b scheduling routines provided by the schedPxLib library for
VxWorks are described in Table 5-16.
For more information about these routines, see the schedPxLib API reference.
NOTE: Several scheduling routines that were provided with schedPxLib for
VxWorks 5.x and early versions of VxWorks 6.x are not POSIX compliant, and are
maintained only for backward compatibility in the kernel. The use of these
routines is deprecated: sched_setparam( ), sched_getparam( ),
sched_setscheduler( ), and sched_getscheduler( ).
The native VxWorks routines taskPrioritySet( ) and taskPriorityGet( ) should be
used for task priorities. The POSIX routines pthread_setschedparam( ) and
pthread_getschedparam( ) should be used for pthread priorities.
For information about changing the default system scheduling policy, see
5.12.3 Default Scheduling Policy, p.283. For information about concurrent
scheduling policies, see 5.12.5 POSIX Threads Scheduler, p.285.
Note that the POSIX priority numbering scheme is the inverse of the VxWorks
scheme. For more information see 5.12.2 POSIX and VxWorks Priority Numbering,
p.282.
To include the schedPxLib library in the system, configure VxWorks with the
INCLUDE_POSIX_SCHED component.
/* includes */
#include <vxWorks.h>
#include <sched.h>

/* switch the system-wide default policy to round-robin (30-tick time slice) */
kernelTimeSlice (30);
2. Some operating systems, such as UNIX, require symbolic names for objects that are to be
shared among processes. This is because processes do not normally share memory in such
operating systems. In VxWorks, named semaphores can be used to share semaphores
between real-time processes. In the VxWorks kernel there is no need for named semaphores,
because all kernel objects have unique identifiers. However, using named semaphores of
the POSIX variety provides a convenient way of determining the object’s ID.
5.13 POSIX Semaphores
When using unnamed semaphores, typically one task allocates memory for the
semaphore and initializes it. A semaphore is represented with the data structure
sem_t, defined in semaphore.h. The semaphore initialization routine, sem_init( ),
lets you specify the initial value.
Once the semaphore is initialized, any task can use the semaphore by locking it
with sem_wait( ) (blocking) or sem_trywait( ) (non-blocking), and unlocking it
with sem_post( ).
Semaphores can be used for both synchronization and exclusion. Thus, when a
semaphore is used for synchronization, it is typically initialized to zero (locked).
The task waiting to be synchronized blocks on a sem_wait( ). The task doing the
synchronizing unlocks the semaphore using sem_post( ). If the task that is blocked
on the semaphore is the only one waiting for that semaphore, the task unblocks
and becomes ready to run. If other tasks are blocked on the semaphore, the task
with the highest priority is unblocked.
When a semaphore is used for mutual exclusion, it is typically initialized to a value
of one, meaning that the resource is available. Therefore, the first task
to lock the semaphore does so without blocking, setting the semaphore to 0
(locked). Subsequent tasks will block until the semaphore is released. As with the
previous scenario, when the semaphore is released the task with the highest
priority is unblocked.
/*
 * This example uses unnamed semaphores to synchronize an action between the
 * calling task and a task that it spawns (tSyncTask). To run from the shell,
 * spawn as a task:
 *
 * -> sp unnameSem
 */

/* includes */
#include <vxWorks.h>
#include <stdio.h>
#include <stdlib.h>
#include <semaphore.h>

/* forward declarations */
void syncTask (sem_t * pSem);

/************************************************************************
 * unnameSem - test case for unnamed semaphores
 *
 * This routine tests unnamed semaphores.
 *
 * RETURNS: N/A
 *
 * ERRNOS: N/A
 */

/* reserve memory for the semaphore */
pSem = (sem_t *) malloc (sizeof (sem_t));
if (pSem == NULL)
    {
    printf ("pSem allocation failed\n");
    return;
    }
void syncTask
    (
    sem_t * pSem
    )
    {
    /* wait for synchronization from unnameSem */
    if (sem_wait (pSem) == -1)
        printf ("syncTask: sem_wait failed\n");
    }
The sem_open( ) routine either opens a named semaphore that already exists or,
as an option, creates a new semaphore. You can specify which of these possibilities
you want by combining the following flag values:
O_CREAT
Create the semaphore if it does not already exist. If it exists, either fail or open
the semaphore, depending on whether O_EXCL is specified.
O_EXCL
Open the semaphore only if newly created; fail if the semaphore exists.
The results, based on the flags and whether the semaphore accessed already exists,
are shown in Table 5-18.
Once initialized, a semaphore remains usable until explicitly destroyed. Tasks can
explicitly mark a semaphore for destruction at any time, but the system only
destroys the semaphore when no task has the semaphore open.
If VxWorks is configured with INCLUDE_POSIX_SEM_SHOW, you can use show( )
from the shell (with the C interpreter) to display information about a POSIX
semaphore.3
3. The show( ) routine is not a POSIX routine, nor is it meant to be used programmatically. It
is designed for interactive use with the shell (with the shell’s C interpreter).
This example shows information about the POSIX semaphore mySem with two
tasks blocked and waiting for it:
-> show semId
value = 0 = 0x0
Semaphore name :mySem
sem_open() count :3
Semaphore value :0
No. of blocked tasks :2
NOTE: POSIX named semaphores may be shared between processes only if their
names start with a / (forward slash) character. They are otherwise private to the
process in which they were created, and cannot be accessed from another process.
See 4.9 Public and Private Objects, p.195.
/*
* In this example, nameSem() creates a task for synchronization. The
* new task, tSyncSemTask, blocks on the semaphore created in nameSem().
* Once the synchronization takes place, both tasks close the semaphore,
* and nameSem() unlinks it. To run this task from the shell, spawn
* nameSem as a task:
* -> sp nameSem, "myTest"
*/
/* includes */
#include <vxWorks.h>
#include <taskLib.h>
#include <stdio.h>
#include <semaphore.h>
#include <fcntl.h>
/* forward declaration */
/****************************************************************************
*
* nameSem - test program for POSIX semaphores
*
* This routine opens a named semaphore and spawns a task, tSyncSemTask, which
* waits on the named semaphore.
*
* RETURNS: N/A
*
* ERRNO: N/A
*/
void nameSem
    (
    char * name
    )
    {
    sem_t * semId;

    /* create the named semaphore with an initial value of 0 */
    if ((semId = sem_open (name, O_CREAT, 0, 0)) == (sem_t *) -1)
        {
        printf ("nameSem: sem_open failed\n");
        return;
        }

    /* spawn the task that waits on the named semaphore
     * (priority and stack size are illustrative) */
    if (taskSpawn ("tSyncSemTask", 90, 0, 4000, (FUNCPTR) syncSemTask,
        (int) name, 0, 0, 0, 0, 0, 0, 0, 0, 0) == ERROR)
        {
        printf ("nameSem: taskSpawn failed\n");
        return;
        }
/* give semaphore */
printf ("nameSem: posting semaphore - synchronizing action\n");
if (sem_post (semId) == -1)
{
printf ("nameSem: sem_post failed\n");
sem_close(semId);
return;
}
/* all done */
if (sem_close (semId) == -1)
{
printf ("nameSem: sem_close failed\n");
return;
}
/****************************************************************************
*
* syncSemTask - waits on a named POSIX semaphore
*
* This routine waits on the named semaphore created by nameSem().
*
* RETURNS: N/A
*
* ERRNO: N/A
*/
void syncSemTask
(
char * name
)
{
sem_t * semId;
/* open semaphore */
printf ("syncSemTask: opening semaphore\n");
if ((semId = sem_open (name, 0)) == (sem_t *) -1)
{
printf ("syncSemTask: sem_open failed\n");
return;
}
    /* wait on the named semaphore */
    printf ("syncSemTask: waiting on semaphore\n");
    if (sem_wait (semId) == -1)
        {
        printf ("syncSemTask: sem_wait failed\n");
        return;
        }

    /* close the semaphore */
    if (sem_close (semId) == -1)
        printf ("syncSemTask: sem_close failed\n");
    }
Note that there are behavioral differences between the kernel and user space
versions of mq_open( ). The kernel version allows for creation of a message queue
for any permission specified by the oflags parameter. The user-space version
complies with the POSIX PSE52 profile, so that after the first call, any subsequent
calls in the same process are only allowed if an equivalent or lower permission is
specified.
For information about the use of permissions with the user-space version of
mq_open( ), see the VxWorks Application Programmer’s Guide: POSIX Facilities.
The VxWorks initialization routine mqPxLibInit( ) initializes the kernel’s POSIX
message queue library (this is a kernel-only routine). It is called automatically at
boot time when the INCLUDE_POSIX_MQ component is part of the system.
For information about the VxWorks message queue library, see the msgQLib API
reference.
5.14 POSIX Message Queues
POSIX message queues are similar to VxWorks message queues, except that POSIX
message queues provide messages with a range of priorities. The differences are
summarized in Table 5-20.
/*
* This example sets the O_NONBLOCK flag and examines message queue
* attributes.
*/
/* includes */
#include <vxWorks.h>
#include <mqueue.h>
#include <fcntl.h>
#include <errno.h>
/* defines */
#define MSG_SIZE 16
int attrEx
(
char * name
)
{
mqd_t mqPXId; /* mq descriptor */
struct mq_attr attr; /* queue attribute structure */
struct mq_attr oldAttr; /* old queue attributes */
char buffer[MSG_SIZE];
int prio;
attr.mq_flags = 0;
attr.mq_maxmsg = 1;
attr.mq_msgsize = 16;
if ((mqPXId = mq_open (name, O_CREAT | O_RDWR , 0, &attr))
== (mqd_t) -1)
return (ERROR);
else
printf ("mq_open with non-block succeeded\n");
attr.mq_flags = O_NONBLOCK;
if (mq_setattr (mqPXId, &attr, &oldAttr) == -1)
return (ERROR);
else
{
/* paranoia check - oldAttr should not include non-blocking. */
if (oldAttr.mq_flags & O_NONBLOCK)
return (ERROR);
else
printf ("mq_setattr turning on non-blocking succeeded\n");
}
Before a set of tasks can communicate through a POSIX message queue, one of the
tasks must create the message queue by calling mq_open( ) with the O_CREAT flag
set. Once a message queue is created, other tasks can open that queue by name to
send and receive messages on it. Only the first task opens the queue with the
O_CREAT flag; subsequent tasks can open the queue for receiving only
(O_RDONLY), sending only (O_WRONLY), or both sending and receiving
(O_RDWR).
To put messages on a queue, use mq_send( ). If a task attempts to put a message
on the queue when the queue is full, the task blocks until some other task reads a
message from the queue, making space available. To avoid blocking on
mq_send( ), set O_NONBLOCK when you open the message queue. In that case,
when the queue is full, mq_send( ) returns -1 and sets errno to EAGAIN instead of
pending, allowing you to try again or take other action as appropriate.
One of the arguments to mq_send( ) specifies a message priority. Priorities range
from 0 (lowest priority) to 31 (highest priority).
When a task receives a message using mq_receive( ), the task receives the
highest-priority message currently on the queue. Among multiple messages with
the same priority, the first message placed on the queue is the first received (FIFO
order). If the queue is empty, the task blocks until a message is placed on the
queue.
To avoid pending (blocking) on mq_receive( ), open the message queue with
O_NONBLOCK; in that case, when a task attempts to read from an empty queue,
mq_receive( ) returns -1 and sets errno to EAGAIN.
To close a message queue, call mq_close( ). Closing the queue does not destroy it,
but only asserts that your task is no longer using the queue. To request that the
queue be destroyed once the last task has closed it, call mq_unlink( ).
NOTE: In VxWorks, a POSIX message queue whose name does not start with a
forward-slash (/) character is considered private to the process that has opened it
and cannot be accessed from another process. A message queue whose name
starts with a forward-slash (/) character is a public object, and other processes can
access it (in accordance with the POSIX standard). See 4.9 Public and Private Objects,
p.195.
/*
* In this example, the mqExInit() routine spawns two tasks that
* communicate using the message queue.
* To run this test case on the target shell:
*
* -> sp mqExInit
*/
/* defines */
/* forward declarations */
/* includes */
#include <vxWorks.h>
#include <taskLib.h>
#include <stdio.h>
#include <mqueue.h>
#include <fcntl.h>
#include <errno.h>
#include <mqEx.h>
/* defines */
#define HI_PRIO 31
#define MSG_SIZE 16
#define MSG "greetings"
/****************************************************************************
*
* mqExInit - main for message queue send and receive test case
*
* This routine spawns two tasks to perform the message queue send and receive
* test case.
*
* RETURNS: OK, or ERROR
*
* ERRNOS: N/A
*/
/****************************************************************************
*
* receiveTask - receive messages from the message queue
*
* This routine creates a message queue and calls mq_receive() to wait for
* a message arriving in the message queue.
*
* RETURNS: OK, or ERROR
*
* ERRNOS: N/A
*/
{
printf ("receiveTask: Msg of priority %d received:\n\t\t%s\n",
prio, msg);
}
}
/****************************************************************************
*
* sendTask - send a message to a message queue
*
* This routine opens an already created message queue and
* calls mq_send() to send a message to the opened message queue.
*
* RETURNS: OK, or ERROR
*
* ERRNOS: N/A
*/
void sendTask (void)
{
mqd_t mqPXId; /* msg queue descriptor */
A pthread (or task) can use the mq_notify( ) routine to request notification of the
arrival of a message at an empty queue. The pthread can thereby avoid blocking
or polling to wait for a message.
Each queue can register only one pthread for notification at a time. Once a queue
has a pthread to notify, no further attempts to register with mq_notify( ) can
succeed until the notification request is satisfied or cancelled.
/*
* In this example, a task uses mq_notify() to discover when a message
* has arrived on a previously empty queue. To run this from the shell:
*
* -> ld < mq_notify_test.o
* -> sp exMqNotify, "greetings"
* -> mq_send
*
*/
/* includes */
#include <vxWorks.h>
#include <signal.h>
#include <mqueue.h>
#include <fcntl.h>
#include <errno.h>
#include <stdio.h>
#include <string.h>
/* defines */
/* forward declarations */
/****************************************************************************
* exMqNotify - example of how to use mq_notify()
*
* This routine illustrates the use of mq_notify() to request notification
* via signal of new messages in a queue. To simplify the example, a
* single task both sends and receives a message.
*
* RETURNS: 0 on success, or -1
*
* ERRNOS: N/A
*/
int exMqNotify
(
char * pMessage, /* text for message to self */
int loopCnt /* number of times to send a msg */
)
{
struct mq_attr attr; /* queue attribute structure */
struct sigevent sigNotify; /* to attach notification */
struct sigaction mySigAction; /* to attach signal handler */
mqd_t exMqId; /* id of message queue */
int cnt = 0;
/*
* Install signal handler for the notify signal and fill in
* a sigaction structure and pass it to sigaction(). Because the handler
* needs the siginfo structure as an argument, the SA_SIGINFO flag is
* set in sa_flags.
*/
mySigAction.sa_sigaction = exNotificationHandle;
mySigAction.sa_flags = SA_SIGINFO;
sigemptyset (&mySigAction.sa_mask);
attr.mq_flags = 0;
attr.mq_maxmsg = 2;
attr.mq_msgsize = MSG_SIZE;
/*
 * Set up notification: fill in a sigevent structure and pass it
 * to mq_notify(). The queue ID is passed as an argument to the
 * signal handler.
 */
sigNotify.sigev_signo = SIGUSR1;
sigNotify.sigev_notify = SIGEV_SIGNAL;
sigNotify.sigev_value.sival_int = (int) exMqId;
/*
* We just created the message queue, but it may not be empty;
* a higher-priority task may have placed a message there while
* we were requesting notification. mq_notify() does nothing if
* messages are already in the queue; therefore we try to
* retrieve any messages already in the queue.
*/
exMqRead (exMqId);
/*
* Now we know the queue is empty, so we will receive a signal
* the next time a message arrives.
*
* We send a message, which causes the notify handler to be invoked.
* It is a little silly to have the task that gets the notification
* be the one that puts the messages on the queue, but we do it here
* to simplify the example. A real application would do other work
* instead at this point.
*/
/* Cleanup */
/* More cleanup */
return (0);
}
/****************************************************************************
* exNotificationHandle - handler to read in messages
*
* This routine is a signal handler; it reads in messages from a
* message queue.
*
* RETURNS: N/A
*
* ERRNOS: N/A
*/
/*
* Request notification again; it resets each time
* a notification signal goes out.
*/
sigNotify.sigev_signo = pInfo->si_signo;
sigNotify.sigev_value = pInfo->si_value;
sigNotify.sigev_notify = SIGEV_SIGNAL;
exMqRead (exMqId);
}
/****************************************************************************
* exMqRead - read in messages
*
* This small utility routine receives and displays all messages
* currently in a POSIX message queue; assumes queue has O_NONBLOCK.
*
* RETURNS: N/A
*
* ERRNOS: N/A
*/
/*
* Read in the messages - uses a loop to read in the messages
* because a notification is sent ONLY when a message is sent on
* an EMPTY message queue. There could be multiple msgs if, for
* example, a higher-priority task was sending them. Because the
* message queue was opened with the O_NONBLOCK flag, eventually
* this loop exits with errno set to EAGAIN (meaning we did an
* mq_receive() on an empty message queue).
*/
if (errno != EAGAIN)
{
printf ("mq_receive: errno = %d\n", errno);
}
}
6
Memory Management
Kernel Facilities
6.1 Introduction
VxWorks provides memory management facilities for all code that executes in the
kernel, as well as memory management facilities for applications that execute as
real-time processes. This chapter deals primarily with kernel-space memory
management, although it also provides information about what memory maps
look like for systems that include support for processes (and related facilities).
This chapter discusses the following topics:
■ The VxWorks components required for different types of memory management support.
■ The layout of memory for different configurations of VxWorks.
■ Excluding memory from VxWorks use.
■ Using run-time memory autosizing.
■ The kernel heap and memory partition management facilities that are available in the kernel.
■ Memory error detection facilities, including instrumentation provided by VxWorks components and the Wind River compiler.
■ Virtual memory management, both automated and programmatic.
■ Using the real-time process environment without an MMU.
For information about the memory management facilities available to
process-based applications, see VxWorks Application Programmer’s Guide: Memory
Management.
For information about additional error detection facilities useful for debugging
software faults, see 11. Error Detection and Reporting.
NOTE: This chapter provides information about facilities available in the VxWorks
kernel. For information about facilities available to real-time processes, see the
corresponding chapter in the VxWorks Application Programmer’s Guide.
6 Memory Management
6.2 Configuring VxWorks With Memory Management Facilities
Within system RAM, the elements of a VxWorks system are arranged as follows:
■ Below RAM_LOW_ADRS there is an architecture-specific layout of memory blocks used for saving boot parameters, the system exception message area, the exception or interrupt vector table, and so on. For specific details, see the VxWorks Architecture Supplement.
■ The kernel code (text, data, and bss) starting at address RAM_LOW_ADRS. ROM-resident images are an exception, for which the text segment is located outside of the system RAM (see 2.4.1 VxWorks Image Types, p.15).
■ The WDB target agent memory pool is located immediately above the kernel code, if WDB is configured into the system (see 12.6 WDB Target Agent, p.628).
■ The kernel heap follows the WDB memory pool.
■ An optional area of persistent memory.
6 Memory Management
6.3 System Memory Maps
■ An optional area of user-reserved memory may be located above the kernel heap.
Figure 6-1 illustrates a typical memory map for a system without process support.
For a comparable illustration of a system with process support—which means it
has unmapped memory available for processes that have not yet been created—see
Figure 6-2.
Note that the memory pool for the WDB target agent is present only if WDB is
configured into the kernel. Without WDB, the kernel heap starts right above the
end of the kernel BSS ELF segment.
The routine sysMemTop( ) returns the end of the kernel heap area. If both the
user-reserved memory size (USER_RESERVED_MEM) and the persistent memory
size (PM_RESERVED_MEM) are zero, then sysMemTop( ) returns the same value
as sysPhysMemTop( ), and the kernel heap extends to the end of the system
RAM area. For more information about configuring user-reserved memory and
persistent memory, see 6.6 Reserved Memory, p.330. Also see
6.3.2 System Memory Map with Process Support, p.323.
Figure 6-1 Memory Map of a System Without Process Support (regions from low to high addresses):
LOCAL_MEM_LOCAL_ADRS (start of system RAM)
RAM_LOW_ADRS (start of kernel text): Kernel Image (text + rodata + data + bss)
Kernel Heap (System Memory Partition)
sysMemTop( ) = sysPhysMemTop( ) - USER_RESERVED_MEM - PM_RESERVED_MEM
User-Reserved Memory Area (optional)
Persistent Memory Area (PM_RESERVED_MEM, optional)
sysPhysMemTop( ) (end of system RAM)
Figure 6-2 Memory Map of a System With Process Support (labels from low to high addresses):
LOCAL_MEM_LOCAL_ADRS (start of system RAM)
RAM_LOW_ADRS: Kernel Image (text + rodata + data + bss)
sysMemTop( ) = sysPhysMemTop( ) - USER_RESERVED_MEM - PM_RESERVED_MEM
User-Reserved Memory Area (optional)
Persistent Memory Area (PM_RESERVED_MEM, optional)
sysPhysMemTop( ) (end of system RAM)
A VxWorks system configured for real-time processes may have one or more
applications executing as processes at run-time. It may also have shared libraries
and shared data regions instantiated. The kernel, each of the processes, shared
libraries, and shared data regions occupy a discrete space in virtual memory.
Each VxWorks process has its own region of virtual memory; processes do not
overlap in virtual memory. This flat virtual-memory map provides advantages in
speed, in a programming model that accommodates systems with and without an
MMU, and in debugging applications (see 6.11 Processes Without MMU Support,
p.357).
The virtual space assigned to a process is not necessarily composed of one large
contiguous block of virtual memory. In some cases it is composed of several
smaller blocks of virtual space that are not contiguous with each other.
Figure 6-3 illustrates the memory map of a system with the kernel areas (RAM and
I/O), two different processes (RTP A and RTP B), as well as one shared library, and
one shared data region.
Figure 6-3 Memory Map of a System With Two Processes (regions shown in the figure):
Kernel Memory Space: Kernel Code + Kernel Heap + Kernel Task Stacks + Kernel Object Modules
RTP A Code
RTP B Code
RTP B Heap
Shared Library and Shared Data Memory Space: Shared Library
I/O Region 1
I/O Region 2
Each process has its own virtual memory context, defined by its MMU translation
table, which is used to map virtual and physical memory, and by other information
about each page of memory. This memory context describes the virtual space that
all of the tasks in the process can access. In other words, it defines the memory
view of a process.
The kernel space is mapped with supervisor access privilege in the memory
context of each process (but not with user-mode privilege). Therefore tasks
executing in a process can access kernel memory space only in system calls,
during which the execution is switched to supervisor mode. (For information
about system calls, see VxWorks Application Programmer’s Guide: Applications and
Processes.)
A shared library or shared data region is mapped into the virtual context of a
process only when the process’ application code opens or creates it, and it
effectively disappears from the process’ memory view when the application closes
or deletes the shared library or shared data region.
Figure 6-4 illustrates the different memory views of a system with two processes
(RTP A and RTP B), a shared library that both RTP A and RTP B opened, as well
as a shared data region that both a kernel application and RTP B opened.
The first memory view corresponds to the memory space accessible to kernel
tasks. The second and third memory views correspond to the memory space
accessible to tasks executing in process A and process B, respectively. Note that
the grayed areas are accessible only during system calls.
Figure 6-4 (labels preserved from the figure: RTP A Code, RTP B Heap, RTP B Code)
6 Memory Management
6.4 Shell Commands
Note that on systems without an MMU, or with the MMU disabled, there is only
one memory view, shared by the kernel and all process tasks. This memory view
corresponds to Figure 6-3. Any task in the system, whether it is a kernel task or a
task executing in a process, has access to all of the memory: kernel space, I/O
regions, any process’s memory, shared libraries, and shared data regions. In other
words, such configurations do not provide any memory protection. For more
information, see 6.11 Processes Without MMU Support, p.357.
Boot loaders may or may not clear reserved memory, depending on the
configuration that was used to create them. If the boot loader is built with both
USER_RESERVED_MEM and PM_RESERVED_MEM set to zero, the system RAM is
cleared through the address calculated as:
(LOCAL_MEM_LOCAL_ADRS + LOCAL_MEM_SIZE)
To ensure that reserved memory is not cleared, the boot loader should be created
with the USER_RESERVED_MEM and PM_RESERVED_MEM parameters set to
the desired sizes; that is, the same values that are used to build the downloaded
VxWorks image.
6 Memory Management
6.7 Kernel Heap and Memory Partition Management
NOTE: If autosizing of system RAM is enabled, the top of the system RAM
detected at run-time may differ from the address calculated as
LOCAL_MEM_LOCAL_ADRS + LOCAL_MEM_SIZE, so the memory range left
uncleared by the boot loader may not be at the expected location. For more
information about autosizing, see 6.5 System RAM Autosizing, p.329.
6.7.1 Configuring the Kernel Heap and the Memory Partition Manager
There are two kernel components for configuring the kernel heap and the memory
partition manager. The core functionality for both the kernel heap and memory
partition is provided by the INCLUDE_MEM_MGR_BASIC component (see the
VxWorks API reference for memPartLib). The INCLUDE_MEM_MGR_FULL
component extends the functionality required for a full-featured heap and
memory partition manager (see the VxWorks API reference for memLib).
The kernel heap is created automatically by the system when either one of these
components is included in the VxWorks configuration. The size of the kernel
heap is set as described in 6.3 System Memory Maps, p.319; see Figure 6-1 and
Figure 6-2.
Information about allocation statistics in the kernel heap and in kernel memory
partitions can be obtained with the show routines provided with the
INCLUDE_MEM_SHOW component. For more information, see the VxWorks API
reference for memShow.
6 Memory Management
6.8 Memory Error Detection
To supplement the error detection features built into memLib and memPartLib
(such as valid block checking), components can be added to VxWorks to perform
automatic, programmatic, and interactive error checks on memLib and
memPartLib operations.
The components help detect common programming errors such as double-freeing
an allocated block, freeing or reallocating an invalid pointer, and memory leaks. In
addition, with compiler-assisted code instrumentation, they help detect
bounds-check violations, buffer over-runs and under-runs, pointer references to
free memory blocks, pointer references to automatic variables outside the scope of
the variable, and so on. Note that compiler-assisted instrumentation must be used
in order to track buffer underruns and overruns. For information about compiler
instrumentation, see 6.8.2 Compiler Instrumentation, p.340.
Errors detected by the automatic checks are logged by the error detection and
reporting facility.
To enable the basic level of memory partition and heap instrumentation, the
following components must be included into the kernel configuration:
■ INCLUDE_MEM_EDR, includes the basic memory partition debugging
functionality and instrumentation code.
■ INCLUDE_EDR_ERRLOG, INCLUDE_EDR_POLICIES and
INCLUDE_EDR_SHOW for error detection, reporting, and persistent memory.
For more information see 11. Error Detection and Reporting.
The following component may also be included:
■ INCLUDE_MEM_EDR_SHOW, for enabling the show routines.
In addition, the following parameters of the INCLUDE_MEM_EDR component can
be modified:
MEDR_EXTENDED_ENABLE
Set to TRUE to enable logging trace information for each allocated block, but
at the cost of increased memory used to store entries in the allocation database.
The default setting is FALSE.
MEDR_FILL_FREE_ENABLE
Set to TRUE to enable pattern-filling of queued free blocks. This aids in detecting
writes into freed buffers. The default setting is FALSE.
MEDR_FREE_QUEUE_LEN
The maximum length of the free queue. When a memory block is freed, instead of
being returned immediately to the partition's memory pool, it is kept in a queue.
This is useful for detecting references to a memory block after it has been freed.
When the queue reaches the maximum length allowed, blocks are returned
to their respective memory pool in FIFO order. Queuing is disabled when this
parameter is 0. The default setting is 64.
MEDR_BLOCK_GUARD_ENABLE
Set to TRUE to enable guard signatures at the front and the end of each allocated
block. Enabling this feature aids in detecting buffer overruns, underruns, and
some heap memory corruption. The default setting is FALSE.
334
6 Memory Management
6.8 Memory Error Detection
MEDR_POOL_SIZE
The size of the memory pool used to maintain the memory block database. The
default setting in the kernel is 1 MB. The database uses 32 bytes per memory
block without extended information enabled, and 64 bytes per block with
extended information enabled (call stack trace). This pool is allocated from the
kernel heap.
Error Types
During execution, errors are automatically logged when the allocation, free, and
re-allocation functions are called. The following error types are automatically
identified and logged:
■ Allocation returns a block address within an already allocated block from the same partition. This would indicate corruption in the partition data structures.
■ Allocation returns a block address that is in the task's stack space. This would indicate corruption in the partition data structures.
■ Allocation returns a block address that is in the kernel's static data section. This would indicate corruption in the partition data structures.
■ Freeing a pointer that is in the task’s stack space.
■ Freeing memory that was already freed and is still in the free queue.
■ Freeing memory that is in the kernel’s static data section.
■ Freeing memory in a different partition than the one in which it was allocated.
■ Freeing a partial memory block.
■ Freeing a memory block whose guard zone is corrupted, when the MEDR_BLOCK_GUARD_ENABLE parameter is TRUE.
■ Corruption of the fill pattern in a memory block that is in the free queue, when the MEDR_FILL_FREE_ENABLE parameter is TRUE.
Shell Commands
The show routines and commands described in Table 6-1 are available for use with
the shell’s C and command interpreters to display information.
Code Example
The following shell session is executed with the C interpreter. The sample code
listed above is compiled and linked in the VxWorks kernel (see 2.6.8 Linking Kernel
Application Object Modules with VxWorks, p.64). The kernel must include the
INCLUDE_MEM_EDR and INCLUDE_MEM_EDR_SHOW components. In order to
enable saving call stack information, the parameter MEDR_EXTENDED_ENABLE is
set to TRUE. Also, the kernel should be configured with the error detection and
reporting facility, including the show routines, as described in 11.2 Configuring
Error Detection and Reporting Facilities, p.566.
First mark all allocated blocks:
-> memEdrBlockMark
value = 6390 = 0x18f6
Next, clear the error log. This step is optional, and is done only to start with a clean
log:
-> edrClear
value = 0 = 0x0
The kernel application may be started in a new task spawned with the sp( ) utility,
as follows:
-> taskId = sp (heapErrors)
New symbol "taskId" added to kernel symbol table.
Task spawned: id = 0x246d010, name = t1
taskId = 0x2469ed0: value = 38195216 = 0x246d010
At this point the application finished execution. The following command lists the
memory blocks allocated, but not freed by the application task. Note that the
listing shows the call stack at the time of the allocation:
-> memEdrBlockShow 0, 0, taskId, 5, 1
Errors detected while executing the application are logged in the persistent
memory region.
Display the log using edrShow( ). The first error corresponds to line 9 in the test
code; the second error corresponds to line 12.
-> edrShow
ERROR LOG
=========
Log Size: 524288 bytes (128 pages)
Record Size: 4096 bytes
Max Records: 123
CPU Type: 0x5a
Errors Missed: 0 (old) + 0 (recent)
Error count: 2
Boot count: 20
Generation count: 94
==[1/2]==============================================================
Severity/Facility: NON-FATAL/KERNEL
Boot Cycle: 20
OS Version: 6.0.0
Time: THU JAN 01 00:00:31 1970 (ticks = 1880)
Task: "t1" (0x0246d010)
<<<<<Traceback>>>>>
==[2/2]==============================================================
Severity/Facility: NON-FATAL/KERNEL
Boot Cycle: 20
OS Version: 6.0.0
Time: THU JAN 01 00:00:31 1970 (ticks = 1880)
Task: "t1" (0x0246d010)
<<<<<Traceback>>>>>
Additional errors are detected if the application is compiled using the Run-Time
Error Checking (RTEC) feature of the Wind River Compiler (Diab). The following
flag should be used:
-Xrtc=option
Code compiled with the -Xrtc flag is instrumented for run-time checks such as
pointer reference check and pointer arithmetic validation, standard library
parameter validation, and so on. These instrumentations are supported through
the memory partition run-time error detection library. Table 6-2 lists the -Xrtc
options that are supported.
Note that using the -Xrtc flag without specifying any options is the same as using
them all.
Option Description
0x80 report source code filename and line number in error logs
The errors and warnings detected by the RTEC compile-in instrumentation are
logged by the error detection and reporting facility (see 11. Error Detection and
Reporting). The following error types are identified:
■ Bounds-check violation for allocated memory blocks.
■ Bounds-check violation of static (global) variables.
■ Bounds-check violation of automatic variables.
■ Reference to a block in the free queue.
■ Reference to the free part of the task’s stack.
■ De-referencing a NULL pointer.
Shell Commands
The compiler provided instrumentation automatically logs errors detected in
applications using the error detection and reporting facility. For a list of the shell
commands available for error logs see 11.4 Displaying and Clearing Error Records,
p.570.
Code Example
%.out : %.c
	@ $(RM) $@
	$(CC) $(CFLAGS) -Xrtc=0xfb $(OPTION_OBJECT_ONLY) $<
	@ $(RM) ctdt_$(BUILD_EXT).c
	$(NM) $(basename $@).o | $(MUNCH) > ctdt_$(BUILD_EXT).c
	$(MAKE) CC_COMPILER=$(OPTION_DOLLAR_SYMBOLS) ctdt_$(BUILD_EXT).o
	$(LD_PARTIAL) $(LD_PARTIAL_LAST_FLAGS) $(OPTION_OBJECT_NAME)$@ $(basename $@).o ctdt_$(BUILD_EXT).o

include $(TGT_DIR)/h/make/rules.library
The following application code generates various errors that can be recorded
and displayed (line numbers are included for reference purposes). Its use is
illustrated in Shell Session Example, p.342.
#include <vxWorks.h>
#include <stdlib.h>
#include <string.h>
void refErrors ()
{
char name[] = "very_long_name";
char * pChar;
int state[] = { 0, 1, 2, 3 };
int ix = 0;
/* of allocated block */
free (pChar);
The following shell session log is executed with the C interpreter. The sample code
listed above is compiled and linked in the VxWorks kernel (see 2.6.8 Linking Kernel
Application Object Modules with VxWorks, p.64). The kernel must include the
INCLUDE_MEM_EDR and INCLUDE_MEM_EDR_RTC components. Also, the
kernel should be configured with the error detection and reporting facility,
including the show routines, as described in 11.2 Configuring Error Detection and
Reporting Facilities, p.566.
First, clear the error log to start with a clean log:
-> edrClear
value = 0 = 0x0
Start the kernel application in a new task spawned with the sp( ) utility:
-> sp refErrors
Task spawned: id = 0x246d7d0, name = t2
value = 38197200 = 0x246d7d0
At this point the application finished execution. Errors detected while executing
the application are logged in the persistent memory region. Display the log using
edrShow( ). In the example below, the log display is interspersed with description
of the errors.
-> edrShow
ERROR LOG
=========
Log Size: 524288 bytes (128 pages)
Record Size: 4096 bytes
Max Records: 123
CPU Type: 0x5a
Errors Missed: 0 (old) + 0 (recent)
Error count: 3
Boot count: 21
Generation count: 97
The first error corresponds to line 13 in the test code. A string of length 14 is copied
into an allocated buffer of size 13:
==[1/3]==============================================================
Severity/Facility: NON-FATAL/KERNEL
Boot Cycle: 21
OS Version: 6.0.0
Time: THU JAN 01 00:14:22 1970 (ticks = 51738)
Task: "t2" (0x0246d7d0)
Injection Point: refErr.c:13
<<<<<Traceback>>>>>
The second error refers to line 18: the local state array is referenced with index 4.
Since the array has only four elements, the range of valid indexes is 0 to 3:
==[2/3]==============================================================
Severity/Facility: NON-FATAL/KERNEL
Boot Cycle: 21
OS Version: 6.0.0
Time: THU JAN 01 00:14:22 1970 (ticks = 51738)
Task: "t2" (0x0246d7d0)
Injection Point: refErr.c:18
<<<<<Traceback>>>>>
The last error is caused by the code on line 22. A memory block that has been freed
is being modified:
==[3/3]==============================================================
Severity/Facility: NON-FATAL/KERNEL
Boot Cycle: 21
OS Version: 6.0.0
Time: THU JAN 01 00:14:22 1970 (ticks = 51739)
Task: "t2" (0x0246d7d0)
Injection Point: refErr.c:22
<<<<<Traceback>>>>>
6 Memory Management
6.9 Virtual Memory Management
NOTE: There are differences in the vmBaseLib library provided for the symmetric
multiprocessor (SMP) and uniprocessor (UP) configurations of VxWorks, and
special guidelines for its use in optimizing SMP applications. For more
information about vmBaseLib and SMP, see vmBaseLib Restrictions, p.711 and
Using vmBaseLib, p.703. For general information about VxWorks SMP and about
migration, see 15. VxWorks SMP and 15.15 Migrating Code to VxWorks SMP, p.704.
The components listed in Table 6-3 provide basic virtual memory management, as
well as show routines for use from the shell.
For information about related components see 6.10 Additional Memory Protection
Features, p.355.
The kernel virtual memory context is created automatically at boot time based on
configuration data provided by the BSP. The primary data is in the
sysPhysMemDesc[ ] table, which is usually defined in the BSP’s sysLib.c file. The
table defines the initial kernel mappings and initial attributes. The entries in this
table are of PHYS_MEM_DESC structure type, which is defined in vmLib.h.
There is usually no need to change the default sysPhysMemDesc[ ] configuration.
However, modification may be required or advisable, for example, when:
■ New driver support or new devices (for example, flash memory) are added to the system.
■ The protection or cache attributes of certain entries must be changed. For example, entries for flash memory can be read-only if the content of the flash is never written from VxWorks. However, if a flash driver such as TrueFFS is used, the protection attribute has to be set to writable.
■ There are unused entries in the table. In general, it is best to keep only those entries that actually describe the system, as each entry may require additional system RAM for page tables (depending on the size of the entry, its location relative to other entries, and architecture-specific MMU parameters). The larger the memory blocks mapped, the more memory is used for page tables.
The sysPhysMemDesc[ ] table can be modified at run-time. This is useful, for
example, with PCI drivers that can be auto-configured, which means that memory
requirements are detected at run-time. In this case the size and address fields can
be updated programmatically for the corresponding sysPhysMemDesc[ ] entries.
It is important to make such updates before the VM subsystem is initialized by
usrMmuInit( ), for example during execution of sysHwInit( ).
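As a sketch of this kind of run-time update, the following host-compilable code patches a table entry from a routine that a BSP might call out of sysHwInit( ). The PHYS_MEM_DESC structure is reduced here to a simplified stand-in (the real one comes from vmLib.h), and the entry index and the sysPciWindowFixup( ) routine name are invented for illustration:

```c
/* Sketch: updating a sysPhysMemDesc[] entry before usrMmuInit() runs.
 * PHYS_MEM_DESC is a simplified stand-in here; on a real BSP the
 * structure comes from vmLib.h and the table from sysLib.c. */
#include <stddef.h>

typedef unsigned long VIRT_ADDR;   /* stand-in type */
typedef unsigned long PHYS_ADDR;   /* stand-in type */

typedef struct {                   /* simplified PHYS_MEM_DESC stand-in */
    VIRT_ADDR virtualAddr;
    PHYS_ADDR physicalAddr;
    size_t    len;
} PHYS_MEM_DESC;

/* stand-in table; a real BSP defines this in sysLib.c */
PHYS_MEM_DESC sysPhysMemDesc[] = {
    { 0x40000000UL, 0x40000000UL, 0 },  /* PCI window, sized at run time */
};

/* Hypothetical fixup routine, called from sysHwInit() once PCI
 * auto-configuration has detected the window, and before usrMmuInit()
 * initializes the VM subsystem. */
void sysPciWindowFixup (PHYS_ADDR base, size_t size)
{
    sysPhysMemDesc[0].virtualAddr  = (VIRT_ADDR) base;  /* identity map */
    sysPhysMemDesc[0].physicalAddr = base;
    sysPhysMemDesc[0].len          = size;
}
```
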
Configuration Example
/* shared memory */
{
(VIRT_ADDR) 0x4000000, /* virtual address */
(PHYS_ADDR) 0x4000000, /* physical address */
0x20000, /* length */
/* initial state mask */
MMU_ATTR_VALID_MSK | MMU_ATTR_PROT_MSK | MMU_ATTR_CACHE_MSK,
/* initial state */
MMU_ATTR_VALID | MMU_ATTR_PROT_SUP_READ | MMU_ATTR_PROT_SUP_WRITE |
MMU_ATTR_CACHE_OFF
}
For some architectures, the system RAM (the memory used for the VxWorks
kernel image, kernel heap, and so on) must be identity mapped. This means that
for the corresponding entry in the sysPhysMemDesc[ ] table, the virtual address
must be the same as the physical address. For more information see 6.3 System
Memory Maps, p.319 and the VxWorks Architecture Supplement.
This section describes the facilities provided for manipulating the MMU
programmatically using low-level routines in vmBaseLib. You can make portions
of memory non-cacheable, write-protect portions of memory, invalidate pages,
lock TLB entries, or optimize the size of memory pages.
For more information about the virtual memory routines, see the VxWorks API
reference for vmBaseLib.
NOTE: There are differences in the vmBaseLib library provided for the symmetric
multiprocessor (SMP) and uniprocessor (UP) configurations of VxWorks, and
special guidelines for its use in optimizing SMP applications. For more
information about vmBaseLib and SMP, see vmBaseLib Restrictions, p.711 and
Using vmBaseLib, p.703. For general information about VxWorks SMP and about
migration, see 15. VxWorks SMP and 15.15 Migrating Code to VxWorks SMP, p.704.
Each virtual memory page (typically 4 KB) has a state associated with it. A page
can be valid/invalid, readable, writable, executable, or cacheable/non-cacheable.
The state of a page can be changed with the vmStateSet( ) routine. See Table 6-4
and Table 6-5 for lists of the page state constants and page state masks that can be
used with vmStateSet( ). A page state mask must be used to describe which flags
are being changed. A logical OR operator can be used with states and masks to
define both mapping protection and cache attributes.
Not all combinations of protection settings are supported by all CPUs. For
example, many processor types do not provide separate execute and non-execute
settings. On such processors, readable also means executable.
For information about architecture-specific page states and their combination, see
the VxWorks Architecture Supplement.
In this code example, a task calls dataModify( ) to modify the data structure
pointed to by pData. This routine makes the memory writable, modifies the data,
and sets the memory back to nonwritable. If a task subsequently tries to modify the
data without using dataModify( ), a data access exception occurs.
STATUS dataModify
    (
    MY_DATA * pNewData
    )
    {
    /* take semaphore for exclusive access to data */

    semTake (dataSemId, WAIT_FOREVER);

    /* make memory writable */

    if (vmStateSet (NULL, (VIRT_ADDR) pData, pageSize, MMU_ATTR_PROT_MSK,
                    MMU_ATTR_PROT_SUP_READ | MMU_ATTR_PROT_SUP_WRITE) == ERROR)
        {
        semGive (dataSemId);
        return (ERROR);
        }

    /* update data */

    bcopy ((char *) pNewData, (char *) pData, sizeof (MY_DATA));

    /* make memory not writable */

    if (vmStateSet (NULL, (VIRT_ADDR) pData, pageSize, MMU_ATTR_PROT_MSK,
                    MMU_ATTR_PROT_SUP_READ) == ERROR)
        {
        semGive (dataSemId);
        return (ERROR);
        }

    semGive (dataSemId);
    return (OK);
    }
For some processors it is possible to enable larger page sizes than the default
(defined by VM_PAGE_SIZE) for large, contiguous memory blocks that have
homogeneous memory attributes. There are several advantages to using such
optimization, including:
■ Reducing the number of page table entries (PTE) needed to map memory,
resulting in less memory used.
■ More efficient TLB entry usage, resulting in fewer TLB misses, therefore
potentially better performance.
Optimization of the entire kernel memory space (including I/O blocks) at startup
can be accomplished by configuring VxWorks with the
INCLUDE_PAGE_SIZE_OPTIMIZATION component.
Page size optimization for specific blocks of memory can be accomplished at
run-time with the vmPageOptimize( ) routine.
De-optimization is performed automatically when necessary. For example, if part
of a memory block that has been optimized is set with different attributes, the large
page is automatically broken up into multiple smaller pages and the new attribute
is set to the requested pages only.
To make sure that vmStateSet( ) can be called safely from an ISR for specific pages,
the page must first have the MMU_ATTR_NO_BLOCK attribute set. The following
code example shows how this can be done:
#include <vxWorks.h>
#include <vmLib.h>
char * pData;
void someInitFunction ()
    {
    /* allocate buffer (BUF_SIZE is assumed to be defined elsewhere) */
    pData = (char *) memalign (vmPageSizeGet (), BUF_SIZE);
    /* mark the buffer's pages so vmStateSet( ) will not block */
    vmStateSet (NULL, (VIRT_ADDR) pData, BUF_SIZE, MMU_ATTR_SPL_MSK,
                MMU_ATTR_NO_BLOCK);
    }
void someISR ()
    {
    ...
    /* now it's safe to set any attribute for the buffer in an ISR */
    vmStateSet (NULL, (VIRT_ADDR) pData, BUF_SIZE, MMU_ATTR_PROT_MSK,
                MMU_ATTR_PROT_SUP_READ);
    ...
    }
6.9.3 Troubleshooting
The show routines and commands described in Table 6-6 are available to assist
with troubleshooting virtual memory problems.
6.10 Additional Memory Protection Features
Note that protection of the kernel text segment—and the text segments of kernel
modules dynamically loaded into the kernel space—is not provided by default. On
the other hand, the text segment of processes and shared libraries is always
write-protected, whether or not VxWorks is configured with the
INCLUDE_PROTECT_TEXT component. Similarly, the execution stack of a process
task is not affected by the INCLUDE_PROTECT_TASK_STACK or
INCLUDE_TASK_STACK_NO_EXEC components—it is always protected unless
the task is spawned with the taskSpawn( ) option VX_NO_STACK_PROTECT.
VxWorks can be configured so that guard zones are inserted at the beginning and
end of task execution stacks. For more information, see Task Stack Guard Zones,
p.176.
The operating system can also be configured to insert guard zones at both ends of
the interrupt stack. For more information, see Interrupt Stack Protection, p.244.
VxWorks can be configured so that task stacks are non-executable. For more
information, see Non-Executable Task Stacks, p.177.
All text segments are write-protected when VxWorks is configured with the
INCLUDE_PROTECT_TEXT component. When VxWorks is loaded, all text
segments are write-protected. The text segments of any additional object modules
loaded into the kernel space using ld( ) are automatically marked as read-only.
When object modules are loaded, memory that is to be write-protected is allocated
in page-size increments. No additional steps are required to write-protect kernel
application code.
6.11 Processes Without MMU Support
There are no special components needed for the process environment with
software-simulated paging. As with any configurations that provide process
support, the INCLUDE_RTP component must be added to the kernel.
The steps required to enable software-simulated paging are:
7 I/O System
7.1 Introduction
The VxWorks I/O system is designed to present a simple, uniform,
device-independent interface to any kind of device, including:
■ character-oriented devices such as terminals or communications lines
■ random-access block devices such as disks
■ virtual devices such as intertask pipes and sockets
■ monitor and control devices such as digital and analog I/O devices
■ network devices that give access to remote devices
The VxWorks I/O system provides standard C libraries for both basic and
buffered I/O. The basic I/O libraries are UNIX-compatible; the buffered I/O
libraries are ANSI C-compatible.
Internally, the VxWorks I/O system has a unique design that makes it faster and
more flexible than most other I/O systems. These are important attributes in a
real-time system.
The diagram in Figure 7-1 illustrates the relationships between the different
elements of the VxWorks I/O system. All of these elements are discussed in this
chapter, except for file system routines (which are dealt with in 8. Local File
Systems), and the network elements (which are covered in the Wind River Network
Stack for VxWorks 6 Programmer’s Guide).
[Figure 7-1: Overview of the VxWorks I/O System. The figure shows applications
calling into the I/O system, which dispatches to file systems, the XBD facility,
and device drivers (for example, a driver's xyzStrategy( ) routine), down to
hardware devices such as network interfaces, disk drives, and serial devices.]
NOTE: The dotted lines in Figure 7-1 indicate that the XBD facility is required for
some file systems, but not others. For example, HRFS and dosFs require XBD,
while ROMFS has its own interface to drivers. See 7.8.8 Extended Block Device
Facility: XBD, p.404.
NOTE: This chapter provides information about facilities available in the VxWorks
kernel. For information about facilities available to real-time processes, see the
corresponding chapter in the VxWorks Application Programmer’s Guide.
7.3 Files, Devices, and Drivers
If a matching device name cannot be found, then the I/O function is directed at a
default device. You can set this default device to be any device in the system,
including no device at all, in which case failure to match a device name returns an
error. You can obtain the current default path by using ioDefPathGet( ). You can
set the default path by using ioDefPathSet( ).
Non-block devices are named when they are added to the I/O system, usually at
system initialization time. Block devices are named when they are initialized for
use with a specific file system. The VxWorks I/O system imposes no restrictions
on the names given to devices. The I/O system does not interpret device or
filenames in any way, other than during the search for matching device and
filenames.
It is useful to adopt some naming conventions for device and file names: most
device names begin with a forward-slash (/), except non-NFS network devices,
and VxWorks HRFS and dosFs file system devices.
NOTE: To be recognized by the virtual root file system, device names must begin
with a single leading forward-slash, and must not contain any other slash
characters. For more information, see 8.3 Virtual Root File System: VRFS, p.459.
By convention, NFS-based network devices are mounted with names that begin
with a slash. For example:
/usr
Non-NFS network devices are named with the remote machine name followed by
a colon. For example:
host:
The remainder of the name is the filename in the remote directory on the remote
system.
File system devices using dosFs are often named with uppercase letters and digits
followed by a colon. For example:
DEV1:
NOTE: Filenames and directory names on dosFs devices are often separated by
backslashes (\). These can be used interchangeably with forward slashes (/).
! CAUTION: Because device names are recognized by the I/O system using simple
substring matching, a slash (/ or \) should not be used alone as a device name, nor
should a slash be used as any part of a device name itself.
7.4 Basic I/O
Routine Description
open( ) Opens a file (optionally creates the file if it does not already exist).
At the basic I/O level, files are referred to by a file descriptor. A file descriptor is a
small integer returned by a call to open( ) or creat( ). The other basic I/O calls take
a file descriptor as a parameter to specify a file.
File descriptors are not global. The kernel has its own set of file descriptors, and
each process (RTP) has its own set. Tasks within the kernel, or within a specific
process, share file descriptors. The only instance in which file descriptors may be
shared across these boundaries is when one process is a child of another process
or of the kernel, and it does not explicitly close the files it inherits from its
parent. (Processes created by kernel tasks share only the spawning kernel
task's standard I/O file descriptors 0, 1, and 2.) For example:
■ If task A and task B are running in process foo, and they each perform a
write( ) on file descriptor 7, they will write to the same file (and device).
■ If process bar is started independently of process foo (it is not foo’s child) and
its tasks X and Y each perform a write( ) on file descriptor 7, they will be
writing to a different file than tasks A and B in process foo.
■ If process foobar is started by process foo (it is foo’s child) and its tasks M and
N each perform a write( ) on file descriptor 7, they will be writing to the same
file as tasks A and B in process foo. However, this is only true as long as the
tasks do not close the file. If they close it, and subsequently open file descriptor
7, they will operate on a different file.
When a file is opened, a file descriptor is allocated and returned. When the file is
closed, the file descriptor is deallocated.
The number of file descriptors available in the kernel is defined with the
NUM_FILES configuration macro. This specifies the size of the file descriptor table,
which controls how many file descriptors can be simultaneously in use. The
default number is 50, but it can be changed to suit the needs of the system.
To avoid running out of file descriptors, and encountering errors on file creation,
applications should close any descriptors that are no longer in use.
The size of the file descriptor table for the kernel can also be changed
programmatically. The rtpIoTableSizeGet( ) routine reads the size of the file
descriptor table and the rtpIoTableSizeSet( ) routine changes it. Note that these
routines can be used with both the kernel and processes (the I/O system treats the
kernel as a special kind of process).
The calling entity—whether kernel or process—can be specified with an
rtpIoTableSizeSet( ) call by setting the first parameter to zero. The new size of the
file descriptor table is set with the second parameter. Note that you can only
increase the size.
These standard file descriptors are used to make tasks and modules independent
of their actual I/O assignments. If a module sends its output to standard output
(file descriptor 1), its output can then be redirected to any file or device, without
altering the module.
VxWorks allows two levels of redirection. First, there is a global assignment of the
three standard file descriptors. Second, individual tasks can override the global
assignment of these file descriptors with assignments that apply only to that task.
When VxWorks is initialized, the global standard I/O file descriptors, stdin (0),
stdout (1) and stderr (2), are set to the system console device file descriptor by
default, which is usually the serial tty device.
Each kernel task uses these global standard I/O file descriptors by default. Thus,
any standard I/O operations like calls to printf( ) and getchar( ) use the global
standard I/O file descriptors.
Standard I/O can be redirected, however, either at the individual task level, or
globally for the kernel.
The standard I/O of a specific task can be changed with the ioTaskStdSet( )
routine. The parameters of this routine are the task ID of the task for which the
change is to be made (0 is used for the calling task itself), the standard file
descriptor to be redirected, and the file descriptor to direct it to. For example, a task
can make the following call to write standard output to the fileFd file descriptor:
ioTaskStdSet (0, 1, fileFd);
The third argument (fileFd in this case) can be any valid open file descriptor. If it
is a file system file, all the task's subsequent standard output, such as that from
printf( ), is written to it.
To reset the task standard I/O back to global standard I/O, the third argument can
be 0, 1, or 2.
The global standard I/O file descriptors can also be changed from the default
setting, which affects all kernel tasks except those that have had their
task-specific standard I/O file descriptors changed from the global ones.
Global standard I/O file descriptors are changed by calling ioGlobalStdSet( ). The
parameters to this routine are the standard I/O file descriptor to be redirected, and
the file descriptor to direct it to. For example:
ioGlobalStdSet (1, newFd);
This call sets the global standard output to newFd, which can be any valid open
file descriptor. All tasks that do not have their individual task standard output
redirected are affected by this redirection, and all their subsequent standard I/O
output goes to newFd.
The current settings of the global and any task's task standard I/O can be
determined by calling ioGlobalStdGet( ) and ioTaskStdGet( ). For more
information, see the VxWorks API references for these routines.
Be careful with file descriptors used for task standard I/O redirection to ensure
that data corruption does not occur. Before any task’s standard I/O file descriptors
are closed, they should be replaced with new file descriptors with a call to
ioTaskStdSet( ).
If a task’s standard I/O is set with ioTaskStdSet( ), the file descriptor number is
stored in that task’s memory. In some cases, this file descriptor may be closed and
released, either by some other task or by the one that opened it. Once it is
released, it may be reused and opened to track a different file. Should the task
that holds it as a task standard I/O descriptor continue to use it for I/O, data
corruption is unavoidable.
As an example, consider a task spawned from a telnet or rlogin session. The task
inherits the network session task's standard I/O file descriptors. If the session
exits, the standard I/O file descriptors of the network session task are closed.
However, the spawned task still holds those file descriptors as its task standard
I/O and continues to use them for input and output. If the closed file descriptors
are then recycled and reused by another open( ) call, data corruption results,
perhaps with serious consequences for the system. To prevent this from
happening, all spawned tasks must have their standard I/O file descriptors
redirected before the network session is terminated.
The following example illustrates this scenario, with redirection of a spawned
task’s standard I/O to the global standard I/O from the shell before logout. The
taskSpawn( ) call is abbreviated to simplify presentation.
-> taskSpawn "someTask",......
Task spawned: id = 0x52a010, name = t4
value = 5414928 = 0x52a010
-> ioTaskStdSet 0x52a010,0,0
value = 0 = 0x0
-> ioTaskStdSet 0x52a010,1,1
value = 0 = 0x0
-> ioTaskStdSet 0x52a010,2,2
value = 0 = 0x0
-> logout
The next example illustrates task standard I/O redirection to other file descriptors.
-> taskSpawn "someTask",......
Task spawned: id = 0x52a010, name = t4
value = 5414928 = 0x52a010
-> ioTaskStdSet 0x52a010,0,someOtherFdx
value = 0 = 0x0
-> ioTaskStdSet 0x52a010,1,someOtherFdy
value = 0 = 0x0
-> ioTaskStdSet 0x52a010,2,someOtherFdz
value = 0 = 0x0
-> logout
Before I/O can be performed on a device, a file descriptor must be opened to the
device by invoking the open( ) routine—or creat( ), as discussed in the next section.
The arguments to open( ) are the filename, the type of access, and the mode (file
permissions):
fd = open ("name", flags, mode);
For open( ) calls made in the kernel, the mode parameter can be set to zero if file
permissions do not need to be specified.
The file-access options that can be used with the flags parameter to open( ) are
listed in Table 7-2.
Flag Description
O_APPEND Set the file offset to the end of the file prior to each write, which
guarantees that writes are made at the end of the file. It has no
effect on devices other than the regular file system.
O_NONBLOCK Non-blocking I/O.
O_NOCTTY If the named file is a terminal device, don't make it the
controlling terminal for the process.
O_TRUNC Open with truncation. If the file exists and is a regular file, and
the file is successfully opened, its length is truncated to 0. It has
no effect on devices other than the regular file system.
! WARNING: While the third parameter to open( )—mode, for file permissions—is
usually optional for other operating systems, it is required for the VxWorks
implementation of open( ) in the kernel (but not in processes). When the mode
parameter is not appropriate for a given call, it should be set to zero. Note that this
can be an issue when porting software from UNIX to VxWorks.
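As an illustration of the flags and mode parameters, the following sketch uses only POSIX-compatible calls (VxWorks basic I/O is UNIX-compatible, so the same pattern builds on a POSIX host); the path passed in is hypothetical:

```c
/* Sketch: opening files with and without a meaningful mode argument. */
#include <fcntl.h>
#include <unistd.h>

int openExamples (const char *path)
{
    /* create-or-open for read/write; mode 0644 sets file permissions */
    int fd = open (path, O_CREAT | O_RDWR, 0644);
    if (fd < 0)
        return -1;
    close (fd);

    /* re-open an existing file read-only; permissions are not needed,
     * but the kernel-side VxWorks open() still requires the third
     * argument, so pass zero */
    fd = open (path, O_RDONLY, 0);
    if (fd < 0)
        return -1;
    close (fd);
    return 0;
}
```
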
Note the following special cases with regard to use of the file access and mode (file
permissions) parameters to open( ):
■ In general, you can open only preexisting devices and files with open( ).
However, with NFS network, dosFs, and HRFS devices, you can also create
files with open( ) by OR’ing O_CREAT with one of the other access flags.
■ HRFS directories can be opened with the open( ) routine, but only using the
O_RDONLY flag.
■ With both dosFs and NFS devices, you can use the O_CREAT flag to create a
subdirectory by setting mode to FSTAT_DIR. Other uses of the mode parameter
with dosFs devices are ignored.
■ With an HRFS device you cannot use the O_CREAT flag and the FSTAT_DIR
mode option to create a subdirectory. HRFS ignores the mode option and
simply creates a regular file.
■ The netDrv default file system does not support the FSTAT_DIR mode option
or the O_CREAT flag.
■ For NFS devices, the third parameter to open( ) is normally used to specify the
mode of the file. For example:
myFd = open ("fooFile", O_CREAT | O_RDWR, 0644);
■ While HRFS supports setting the permission mode for a file, it is not used by
the VxWorks operating system.
■ Files can be opened with the O_SYNC flag, indicating that each write should be
immediately written to the backing media. This flag is currently supported by
the dosFs file system, and includes synchronizing the FAT and the directory
entries.
■ The O_SYNC flag has no effect with HRFS because the file system is always
synchronous. HRFS updates files as though the O_SYNC flag were set.
NOTE: Drivers or file systems may or may not honor the flag values or the mode
values. A file opened with O_RDONLY mode may in fact be writable if the driver
allows it. Consult the driver or file system information for specifics.
See the VxWorks file system API references for more information about the
features that each file system supports.
The open( ) routine, if successful, returns a file descriptor. This file descriptor is
then used in subsequent I/O calls to specify that file. The file descriptor is an
identifier that is not task specific; that is, it is shared by all tasks within the memory
space. Within a given process or the kernel, therefore, one task can open a file and
any other task can then use the file descriptor. The file descriptor remains valid
until close( ) is invoked with that file descriptor, as follows:
close (fd);
At that point, I/O to the file is flushed (completely written out) and the file
descriptor can no longer be used by any task within the process (or kernel).
However, the same file descriptor number can again be assigned by the I/O
system in any subsequent open( ).
Since the kernel only terminates when the system shuts down, there is no situation
analogous to file descriptors being closed automatically when a process
terminates. File descriptors in the kernel can only be closed by direct command.
File-oriented devices must be able to create and remove files as well as open
existing files.
The creat( ) routine directs a file-oriented device to make a new file on the device
and return a file descriptor for it. The arguments to creat( ) are similar to those of
open( ) except that the filename specifies the name of the new file rather than an
existing one; the creat( ) routine returns a file descriptor identifying the new file.
fd = creat ("name", flag);
Note that with the HRFS file system the creat( ) routine is POSIX-compliant, and
the second parameter is used to specify file permissions; the file is opened in
O_RDWR mode.
With dosFs, however, the creat( ) routine is not POSIX-compliant and the second
parameter is used for open mode flags.
The remove( ) routine deletes a named file on a file-system device:
remove ("name");
After a file descriptor is obtained by invoking open( ) or creat( ), tasks can read
bytes from a file with read( ) and write bytes to a file with write( ). The arguments
to read( ) are the file descriptor, the address of the buffer to receive input, and the
maximum number of bytes to read:
nBytes = read (fd, &buffer, maxBytes);
The read( ) routine waits for input to be available from the specified file, and
returns the number of bytes actually read. For file-system devices, if the number of
bytes read is less than the number requested, a subsequent read( ) returns 0 (zero),
indicating end-of-file. For non-file-system devices, the number of bytes read can be
less than the number requested even if more bytes are available; a subsequent
read( ) may or may not return 0. In the case of serial devices and TCP sockets,
repeated calls to read( ) are sometimes necessary to read a specific number of
bytes. (See the reference entry for fioRead( ) in fioLib). A return value of ERROR
(-1) indicates an unsuccessful read.
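Because a single read( ) may return fewer bytes than requested, a helper loop in the spirit of fioRead( ) is often used for serial devices and TCP sockets. The following is an illustrative helper, not a VxWorks API:

```c
/* Sketch: loop on read() until exactly nBytes have arrived, or until
 * end-of-file or an error occurs. */
#include <stddef.h>
#include <unistd.h>

ssize_t readExact (int fd, char *buf, size_t nBytes)
{
    size_t got = 0;
    while (got < nBytes)
        {
        ssize_t n = read (fd, buf + got, nBytes - got);
        if (n < 0)
            return -1;          /* read error */
        if (n == 0)
            break;              /* end-of-file: return what we have */
        got += (size_t) n;
        }
    return (ssize_t) got;
}
```
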
The arguments to write( ) are the file descriptor, the address of the buffer that
contains the data to be output, and the number of bytes to be written:
actualBytes = write (fd, &buffer, nBytes);
The write( ) routine ensures that all specified data is at least queued for output
before returning to the caller, though the data may not yet have been written to the
device (this is driver dependent). The write( ) routine returns the number of bytes
written; if the number returned is not equal to the number requested, an error has
occurred.
7.4.7 File Truncation
It is sometimes convenient to discard part of the data in a file. After a file is open
for writing, you can use the ftruncate( ) routine to truncate a file to a specified size.
Its arguments are a file descriptor and the desired length of the file in bytes:
status = ftruncate (fd, length);
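A minimal sketch of truncation follows, using a hypothetical path and a small write to give the file some content to discard:

```c
/* Sketch: truncate an open file to 'length' bytes and report the
 * resulting size. The file must be open for writing. */
#include <fcntl.h>
#include <stdio.h>
#include <sys/stat.h>
#include <unistd.h>

long truncateTo (const char *path, off_t length)
{
    struct stat st;
    int fd = open (path, O_CREAT | O_RDWR, 0644);
    if (fd < 0)
        return -1;
    (void) write (fd, "0123456789", 10);   /* give the file some content */
    if (ftruncate (fd, length) != 0)       /* discard data past 'length' */
        {
        close (fd);
        return -1;
        }
    (void) fstat (fd, &st);                /* check the resulting size */
    close (fd);
    return (long) st.st_size;
}
```
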
The ioctl( ) routine provides a flexible mechanism for performing I/O functions
that are not performed by the other basic I/O calls. Examples include determining
how many bytes are currently available for input, setting device-specific options,
obtaining information about a file system, and positioning random-access files to
specific byte positions.
The arguments to the ioctl( ) routine are the file descriptor, a code that identifies
the control function requested, and an optional function-dependent argument:
result = ioctl (fd, function, arg);
For example, the following call uses the FIOBAUDRATE function to set the baud
rate of a tty device to 9600:
status = ioctl (fd, FIOBAUDRATE, 9600);
The discussion of specific devices in 7.8 Devices in VxWorks, p.393 summarizes the
ioctl( ) functions available for each device. The ioctl( ) control codes are defined in
ioLib.h. For more information, see the reference entries for specific device drivers
or file systems.
The select( ) routine allows a task to pend until activity occurs on any of a set
of file descriptors. Its arguments include bit fields that specify the read
and write file descriptors of interest. When select( ) returns, the bit fields are
modified to reflect the file descriptors that have become available. The macros for
building and manipulating these bit fields are listed in Table 7-3.
Applications can use select( ) with any character I/O devices that provide support
for this facility (for example, pipes, serial devices, and sockets).
For information on writing a device driver that supports select( ), see Implementing
select( ), p.443.
#include <vxWorks.h>
#include <selectLib.h>
#include <fcntl.h>
#define MAX_FDS 2
#define MAX_DATA 1024
#define PIPEHI "/pipe/highPriority"
#define PIPENORM "/pipe/normalPriority"
/************************************************************************
* selServer - reads data as it becomes available from two different pipes
*
* Opens two pipe fds, reading from whichever becomes available. The
* server code assumes the pipes have been created from either another
* task or the shell. To test this code from the shell do the following:
* -> ld < selServer.o
* -> pipeDevCreate ("/pipe/highPriority", 5, 1024)
* -> pipeDevCreate ("/pipe/normalPriority", 5, 1024)
* -> fdHi = open ("/pipe/highPriority", 1, 0)
* -> fdNorm = open ("/pipe/normalPriority", 1, 0)
* -> iosFdShow
* -> sp selServer
* -> i
* At this point you should see selServer’s state as pended. You can now
* write to either pipe to make the selServer display your message.
* -> write fdNorm, "Howdy", 6
* -> write fdHi, "Urgent", 7
*/
{
close (fds[0]);
close (fds[1]);
return (ERROR);
}
FOREVER
{
/* clear bits in read bit mask */
FD_ZERO (&readFds);
/* initialize bit mask */
/* step through array and read from fds that are ready */
The POSIX fsPxLib library provides I/O and file system routines for various file
manipulations. These routines are described in Table 7-4.
For more information, see the API references for fsPxLib and ioLib.
7.5 Buffered I/O: stdio
Although the VxWorks I/O system is efficient, some overhead is associated with
each low-level call. First, the I/O system must dispatch from the
device-independent user call (read( ), write( ), and so on) to the driver-specific
routine for that function. Second, most drivers invoke a mutual exclusion or
queuing mechanism to prevent simultaneous requests by multiple users from
interfering with each other.
This overhead is quite small because the VxWorks primitives are fast. However, an
application processing a single character at a time from a file incurs that overhead
for each character if it reads each character with a separate read( ) call:
n = read (fd, &char, 1);
To make this type of I/O more efficient and flexible, the stdio package implements
a buffering scheme in which data is read and written in large chunks and buffered
privately. This buffering is transparent to the application; it is handled
automatically by the stdio routines and macros. To access a file with stdio, a file is
opened with fopen( ) instead of open( ) (many stdio calls begin with the letter f):
fp = fopen ("/usr/foo", "r");
The returned value, a file pointer, is a handle for the opened file and its associated
buffers and pointers. A file pointer is actually a pointer to the associated data
structure of type FILE (that is, it is declared as FILE *). By contrast, the low-level I/O
routines identify a file with a file descriptor, which is a small integer. In fact, the
FILE structure pointed to by the file pointer contains the underlying file descriptor
of the open file.
A file descriptor that is already open can be associated subsequently with a FILE
buffer by calling fdopen( ):
fp = fdopen (fd, "r");
After a file is opened with fopen( ), data can be read with fread( ), or a character at
a time with getc( ), and data can be written with fwrite( ), or a character at a time
with putc( ).
The routines and macros to get data into or out of a file are extremely efficient.
They access the buffer with direct pointers that are incremented as data is read or
written by the user. They pause to call the low-level read or write routines only
when a read buffer is empty or a write buffer is full.
! WARNING: The stdio buffers and pointers are private to a particular task. They are
not interlocked with semaphores or any other mutual exclusion mechanism,
because this defeats the point of an efficient private buffering scheme. Therefore,
multiple tasks must not perform I/O to the same stdio FILE pointer at the same
time.
As discussed in 7.4 Basic I/O, p.367, there are three special file descriptors (0, 1, and
2) reserved for standard input, standard output, and standard error. Three
corresponding stdio FILE buffers (stdin, stdout, and stderr) are created
automatically when a task uses the standard file descriptors for buffered I/O.
Each task using the standard I/O file descriptors has its own stdio FILE
buffers, which are deallocated when the task exits.
Additional routines in fioLib provide formatted but unbuffered output. The
routine printErr( ) is analogous to printf( ) but outputs formatted strings to the
standard error file descriptor (2). The routine fdprintf( ) outputs formatted strings
to a specified file descriptor.
The routines printf( ), sprintf( ), and sscanf( ) are generally considered to be part
of the standard stdio package. However, the VxWorks implementation of these
routines, while functionally the same, does not use the stdio package. Instead, it
uses a self-contained, formatted, non-buffered interface to the I/O system in the
library fioLib.
Note that these routines provide the functionality specified by ANSI; however,
printf( ) is not buffered.
Because these routines are implemented in this way, the full stdio package, which
is optional, can be omitted from a VxWorks configuration without sacrificing their
availability. Applications requiring printf-style output that is buffered can still
accomplish this by calling fprintf( ) explicitly to stdout.
While sscanf( ) is implemented in fioLib and can be used even if stdio is omitted,
the same is not true of scanf( ), which is implemented in the usual way in stdio.
Another higher-level I/O facility is provided by the library logLib, which allows
formatted messages to be logged without having to do I/O in the current task’s
context, or when there is no task context. The message format and parameters are
sent on a message queue to a logging task, which then formats and outputs the
message. This is useful when messages must be logged from interrupt level, or
when it is desirable not to delay the current task for I/O or use the current task’s
stack for message formatting (which can take up significant stack space). The
message is displayed on the console unless otherwise redirected at system startup
using logInit( ) or dynamically using logFdSet( ).
7.7 Asynchronous Input/Output
The VxWorks AIO implementation meets the specification in the POSIX 1003.1b
standard.
The benefit of AIO is greater processing efficiency: it permits I/O operations to
take place whenever resources are available, rather than making them await
arbitrary events such as the completion of independent operations. AIO eliminates
some of the unnecessary blocking of tasks that is caused by ordinary synchronous
I/O; this decreases contention for resources between input/output and internal
processing, and expedites throughput.
Include AIO in your VxWorks configuration with the INCLUDE_POSIX_AIO and
INCLUDE_POSIX_AIO_SYSDRV components. The second component enables the
auxiliary AIO system driver, which is required for asynchronous I/O on all
current VxWorks devices.
The VxWorks library aioPxLib provides POSIX AIO routines. To access a file
asynchronously, open it with the open( ) routine, like any other file. Thereafter, use
the file descriptor returned by open( ) in calls to the AIO routines. The POSIX AIO
routines (and two associated non-POSIX routines) are listed in Table 7-5.
The default VxWorks initialization code calls aioPxLibInit( ) automatically when
the POSIX AIO component is included in VxWorks with INCLUDE_POSIX_AIO.
The aioPxLibInit( ) routine takes one parameter, the maximum number of
lio_listio( ) calls that can be outstanding at one time. The default
initialization call passes the constant MAX_LIO_CALLS for this parameter.
When the parameter is 0 (the default value), the limit is taken from
AIO_CLUST_MAX (defined in
installDir/vxworks-6.x/target/h/private/aioPxLibP.h).
The AIO system driver, aioSysDrv, is initialized by default with the routine
aioSysInit( ) when both INCLUDE_POSIX_AIO and
INCLUDE_POSIX_AIO_SYSDRV are included in VxWorks. The purpose of
aioSysDrv is to provide request queues independent of any particular device
driver, so that you can use any VxWorks device driver with AIO.
The routine aioSysInit( ) takes three parameters: the number of AIO system tasks
to spawn, and the priority and stack size for these system tasks. The number of
AIO system tasks spawned equals the number of AIO requests that can be handled
in parallel. The default initialization call uses three constants:
MAX_AIO_SYS_TASKS, AIO_TASK_PRIORITY, and AIO_TASK_STACK_SIZE.
When any of the parameters passed to aioSysInit( ) is 0, the corresponding value
is taken from AIO_IO_TASKS_DFLT, AIO_IO_PRIO_DFLT, and
AIO_IO_STACK_DFLT (all defined in
installDir/vxworks-6.x/target/h/aioSysDrv.h).
Table 7-6 lists these constants, shows which constants the initialization
routines use when the parameters are left at their default values of 0, and
indicates where each constant is defined.
Each of the AIO calls takes an AIO control block (aiocb) as an argument. The
calling routine must allocate space for the aiocb, and this space must remain
available for the duration of the AIO operation. (Thus the aiocb must not be
created on the task's stack unless the calling routine will not return until after the
AIO operation is complete and aio_return( ) has been called.) Each aiocb describes
a single AIO operation. Therefore, simultaneous asynchronous I/O operations
using the same aiocb are not valid and produce undefined results.
The aiocb structure is defined in aio.h. It contains the following fields:
aio_fildes
The file descriptor for I/O.
aio_offset
The offset from the beginning of the file.
aio_buf
The address of the buffer from/to which AIO is requested.
aio_nbytes
The number of bytes to read or write.
aio_reqprio
The priority reduction for this AIO request.
aio_sigevent
The signal to return on completion of an operation (optional).
aio_lio_opcode
An operation to be performed by a lio_listio( ) call.
aio_sys
The address of VxWorks-specific data (non-POSIX).
For full definitions and important additional information, see the reference entry
for aioPxLib.
! CAUTION: The aiocb structure and the data buffers referenced by it are used by the
system to perform the AIO request. Therefore, once the aiocb has been submitted
to the system, the application must not modify the aiocb structure until after a
subsequent call to aio_return( ). The aio_return( ) call retrieves the previously
submitted AIO data structures from the system. After the aio_return( ) call, the
calling application can modify the aiocb, free the memory it occupies, or reuse it
for another AIO call. If space for the aiocb is allocated from the stack, the task
should not be deleted (or complete running) until the aiocb has been retrieved
from the system with an aio_return( ) call.
The following code uses a pipe for the asynchronous I/O operations. The example
creates the pipe, submits an AIO read request, verifies that the read request is still
in progress, and submits an AIO write request. Under normal circumstances, a
synchronous read to an empty pipe blocks and the task does not execute the write,
but in the case of AIO, we initiate the read request and continue. After the write
request is submitted, the example task loops, checking the status of the AIO
requests periodically until both the read and write complete. Because the AIO
control blocks are on the stack, we must call aio_return( ) before returning from
aioExample( ).
/* includes */
#include <vxWorks.h>
#include <stdio.h>
#include <unistd.h>
#include <errno.h>
#include <string.h>
#include <aio.h>
/* defines */
/************************************************************************
* aioExample - use AIO library
*
* This example shows the basic functions of the AIO library.
*
* RETURNS: OK if successful, otherwise ERROR.
*/
/* clean up */
if (aio_return (&aiocb_read) == -1)
printf ("aioExample: aio_return for aiocb_read failed\n");
if (aio_return (&aiocb_write) == -1)
printf ("aioExample: aio_return for aiocb_write failed\n");
close (fd);
return (OK);
}
A task can determine whether an AIO request is complete in any of the following
ways:
■ Check the result of aio_error( ) periodically, as in the previous example,
until the status of an AIO request is no longer EINPROGRESS.
■ Use aio_suspend( ) to suspend the task until the AIO request is complete.
■ Use signals to be informed when the AIO request is complete.
The following example is similar to the preceding aioExample( ), except that it
uses signals for notification that the write operation has finished. If you test this
from the shell, spawn the routine to run at a lower priority than the AIO system
tasks to assure that the test routine does not block completion of the AIO request.
#include <vxWorks.h>
#include <stdio.h>
#include <aio.h>
#include <unistd.h>
#include <errno.h>
#include <string.h>
/* defines */
/* forward declarations */
/************************************************************************
* aioExampleSig - use AIO library.
*
* This example shows the basic functions of the AIO library.
* Note if this is run from the shell it must be spawned. Use:
* -> sp aioExampleSig
*
* RETURNS: OK if successful, otherwise ERROR.
*/
write_action.sa_sigaction = writeSigHandler;
write_action.sa_flags = SA_SIGINFO;
sigemptyset (&write_action.sa_mask);
sigaction (WRITE_EXAMPLE_SIG_NO, &write_action, NULL);
read_action.sa_sigaction = readSigHandler;
read_action.sa_flags = SA_SIGINFO;
sigemptyset (&read_action.sa_mask);
sigaction (READ_EXAMPLE_SIG_NO, &read_action, NULL);
aiocb_read.aio_sigevent.sigev_signo = READ_EXAMPLE_SIG_NO;
aiocb_read.aio_sigevent.sigev_notify = SIGEV_SIGNAL;
aiocb_read.aio_sigevent.sigev_value.sival_ptr =
(void *) &aiocb_read;
aiocb_write.aio_sigevent.sigev_signo = WRITE_EXAMPLE_SIG_NO;
aiocb_write.aio_sigevent.sigev_notify = SIGEV_SIGNAL;
aiocb_write.aio_sigevent.sigev_value.sival_ptr =
(void *) &aiocb_write;
close (fd);
return (OK);
}
void writeSigHandler
(
int sig,
struct siginfo * info,
void * pContext
)
{
/* print out what was written */
printf ("writeSigHandler: Got signal for aio write\n");
}
void readSigHandler
(
int sig,
struct siginfo * info,
void * pContext
)
{
/* print out what was read */
printf ("readSigHandler: Got signal for aio read\n");
}
7.8 Devices in VxWorks
! WARNING: Devices should not be given the same name, or they will overwrite
each other in core I/O.
VxWorks provides terminal and pseudo-terminal devices (tty and pty). The tty
device is for actual terminals; the pty device is for processes that simulate
terminals. These pseudo terminals are useful in applications such as remote login
facilities.
VxWorks serial I/O devices are buffered serial byte streams. Each device has a ring
buffer (circular buffer) for both input and output. Reading from a tty device
extracts bytes from the input ring. Writing to a tty device adds bytes to the output
ring. The size of each ring buffer is specified when the device is created during
system initialization.
NOTE: For the remainder of this section, the term tty is used to indicate both tty
and pty devices.
tty Options
The tty devices have a full range of options that affect the behavior of the device.
These options are selected by setting bits in the device option word using the
ioctl( ) routine with the FIOSETOPTIONS function. For example, to set all the tty
options except OPT_MON_TRAP:
status = ioctl (fd, FIOSETOPTIONS, OPT_TERMINAL & ~OPT_MON_TRAP);
Option Description
OPT_LINE Selects line mode. (See Raw Mode and Line Mode, p.395.)
OPT_ECHO Echoes input characters to the output of the same channel.
OPT_CRMOD Translates input RETURN characters into NEWLINE (\n);
translates output NEWLINE into RETURN-LINEFEED.
OPT_TANDEM Responds to software flow control characters CTRL+Q and
CTRL+S (XON and XOFF).
OPT_7_BIT Strips the most significant bit from all input bytes.
OPT_MON_TRAP Enables the special ROM monitor trap character, CTRL+X by
default.
OPT_ABORT Enables the special kernel shell abort character, CTRL+C by
default. (Only useful if the kernel shell is configured into the
system.)
OPT_TERMINAL Sets all of the above option bits.
OPT_RAW Sets none of the above option bits.
A tty device operates in one of two modes: raw mode (unbuffered) or line mode. Raw
mode is the default. Line mode is selected by the OPT_LINE bit of the device option
word (see tty Options, p.394).
In raw mode, each input character is available to readers as soon as it is input from
the device. Reading from a tty device in raw mode causes as many characters as
possible to be extracted from the input ring, up to the limit of the user’s read buffer.
Input cannot be modified except as directed by other tty option bits.
In line mode, all input characters are saved until a NEWLINE character is input; then
the entire line of characters, including the NEWLINE, is made available in the ring
at one time. Reading from a tty device in line mode causes characters up to the end
of the next line to be extracted from the input ring, up to the limit of the user’s read
buffer. Input can be modified by the special characters CTRL+H (backspace),
CTRL+U (line-delete), and CTRL+D (end-of-file), which are discussed in tty Special
Characters, p.396.
The following special characters are enabled if the tty device operates in line mode,
that is, with the OPT_LINE bit set:
■ The backspace character, by default CTRL+H, causes successive previous
characters to be deleted from the current line, up to the start of the line. It does
this by echoing a backspace followed by a space, and then another backspace.
■ The line-delete character, by default CTRL+U, deletes all the characters of the
current line.
■ The end-of-file (EOF) character, by default CTRL+D, causes the current line to
become available in the input ring without a NEWLINE and without entering
the EOF character itself. Thus if the EOF character is the first character typed
on a line, reading that line returns a zero byte count, which is the usual
indication of end-of-file.
The following characters have special effects if the tty device is operating with the
corresponding option bit set:
■ The software flow control characters CTRL+Q and CTRL+S (XON and XOFF).
Receipt of a CTRL+S input character suspends output to that channel.
Subsequent receipt of a CTRL+Q resumes the output. Conversely, when the
VxWorks input buffer is almost full, a CTRL+S is output to signal the other side
to suspend transmission. When the input buffer is empty enough, a CTRL+Q
is output to signal the other side to resume transmission. The software flow
control characters are enabled by OPT_TANDEM.
■ The ROM monitor trap character, by default CTRL+X. This character traps to the
ROM-resident monitor program. Note that this is drastic. All normal VxWorks
functioning is suspended, and the computer system is controlled entirely by
the monitor. Depending on the particular monitor, it may or may not be
possible to restart VxWorks from the point of interruption.1 The monitor trap
character is enabled by OPT_MON_TRAP.
■ The special kernel shell abort character, by default CTRL+C. This character
restarts the kernel shell if it gets stuck in an unfriendly routine, such as one
that has taken an unavailable semaphore or is caught in an infinite loop. The
abort character is enabled by OPT_ABORT.
1. It will not be possible to restart VxWorks if un-handled external interrupts occur during the
boot countdown.
The tty devices respond to the ioctl( ) functions in Table 7-10, defined in ioLib.h.
For more information, see the reference entries for tyLib, ttyDrv, and ioctl( ).
! CAUTION: To change the driver’s hardware options (for example, the number of
stop bits or parity bits), use the ioctl( ) function SIO_HW_OPTS_SET. Because this
command is not implemented in most drivers, you may need to add it to your BSP
serial driver, which resides in installDir/vxworks-6.x/target/src/drv/sio. The details
of how to implement this command depend on your board’s serial chip. The
constants defined in the header file installDir/vxworks-6.x/target/h/sioLib.h
provide the POSIX definitions for setting the hardware options.
Pipes are virtual devices by which tasks communicate with each other through the
I/O system. Tasks write messages to pipes; these messages can then be read by
other tasks. Pipe devices are managed by pipeDrv and use the kernel message
queue facility to bear the actual message traffic.
Creating Pipes
Pipes are created by calling the pipe device creation routine:
status = pipeDevCreate ("/pipe/name", maxMsgs, maxLength);
The new pipe can have at most maxMsgs messages queued at a time. Tasks that
write to a pipe that already has the maximum number of messages queued are
blocked until a message is dequeued. Each message in the pipe can be at most
maxLength bytes long; attempts to write longer messages result in an error.
Writing to Pipes from ISRs
VxWorks pipes are designed to allow ISRs to write to pipes in the same way as
task-level code. Many VxWorks facilities cannot be used from ISRs, including
output to devices other than pipes. However, ISRs can use pipes to communicate
with tasks, which can then invoke such facilities. ISRs write to a pipe using the
write( ) call. Tasks and ISRs can write to the same pipes. However, if the pipe is
full, the message is discarded because the ISRs cannot pend. ISRs must not invoke
any I/O function on pipes other than write( ). For more information about
ISRs, see 4.20 Interrupt Service Routines, p.242.
Pipe devices respond to the ioctl( ) functions summarized in Table 7-11. The
functions listed are defined in the header file ioLib.h. For more information, see
the reference entries for pipeDrv and for ioctl( ) in ioLib.
Function Description
FIONREAD Gets the size in bytes of the first message in the pipe.
The memDrv device allows the I/O system to access memory directly as a
pseudo-I/O device. Memory location and size are specified when the device is
created. The device provides a high-level means for reading and writing bytes in
absolute memory locations through I/O calls. It is useful when data must be
preserved between boots of VxWorks or when sharing data between CPUs.
The memDrv driver is initialized automatically by the system with memDrv( )
when the INCLUDE_USR_MEMDRV component is included in VxWorks. The call
for device creation must be made from the kernel:
STATUS memDevCreate
(char * name, char * base, int length)
Memory for the device is an absolute memory location beginning at base. The
length parameter indicates the size of the memory.
For additional information on the memory driver, see the memDrv( ),
memDevCreate( ), and memDevCreateDir( ) entries in the VxWorks API
reference, as well as the entry for memdrvbuild in the online Wind River Host
Utilities API Reference.
For information about creating a RAM disk, which provides support for file
systems, see the discussion of XBD RAM disk drivers later in this chapter.
The memory device responds to the ioctl( ) functions summarized in Table 7-12.
The functions listed are defined in the header file ioLib.h.
For more information, see the reference entries for memDrv, ioLib, and ioctl( ).
Network File System (NFS) devices allow files on remote hosts to be accessed with
the NFS protocol. The NFS protocol specifies both client software, to read files from
remote machines, and server software, to export files to remote machines.
The driver nfsDrv acts as a VxWorks NFS client to access files on any NFS server
on the network. VxWorks also allows you to run an NFS server to export files to
other systems.
Using NFS devices, you can create, open, and access remote files exactly as though
they were on a file system on a local disk. This is called network transparency.
For detailed information about the VxWorks implementation of NFS, see
9. Network File System: NFS.
Access to a remote NFS file system is established by mounting that file system
locally and creating an I/O device for it using nfsMount( ). Its arguments are
(1) the host name of the NFS server, (2) the name of the host file system, and (3) the
local name for the file system.
For example, the following call mounts /usr of the host mars as /vxusr locally:
nfsMount ("mars", "/usr", "/vxusr");
This creates a VxWorks I/O device with the specified local name (/vxusr, in this
example). If the local name is specified as NULL, the local name is the same as the
remote name.
After a remote file system is mounted, the files are accessed as though the file
system were local. Thus, after the previous example, opening the file /vxusr/foo
opens the file /usr/foo on the host mars.
The remote file system must be exported by the system on which it actually resides.
However, NFS servers can export only local file systems. Use the appropriate
command on the server to see which file systems are local. NFS requires
authentication parameters to identify the user making the remote access. To set
these parameters, use the routines nfsAuthUnixSet( ) and nfsAuthUnixPrompt( ).
To include NFS client support, use the INCLUDE_NFS component.
The subject of exporting and mounting NFS file systems and authenticating access
permissions is discussed in more detail in 9. Network File System: NFS. See also the
reference entries nfsLib and nfsDrv, and the NFS documentation from Sun
Microsystems.
NFS client devices respond to the ioctl( ) functions summarized in Table 7-13. The
functions listed are defined in ioLib.h. For more information, see the reference
entries for nfsDrv, ioLib, and ioctl( ).
VxWorks also supports network access to files on a remote host through the
Remote Shell protocol (RSH) or the File Transfer Protocol (FTP).
These implementations of network devices use the driver netDrv, which is
included in the Wind River Network Stack. Using this driver, you can open, read,
write, and close files located on remote systems without needing to manage the
details of the underlying protocol used to effect the transfer of information. (For
more information, see the Wind River Network Stack for VxWorks 6 Programmer’s
Guide.)
When a remote file is opened using RSH or FTP, the entire file is copied into local
memory. As a result, the largest file that can be opened is restricted by the available
memory. Read and write operations are performed on the memory-resident copy
of the file. When closed, the file is copied back to the original remote file if it was
modified.
In general, NFS devices are preferable to RSH and FTP devices for performance
and flexibility, because NFS does not copy the entire file into local memory.
However, NFS is not supported by all host systems.
To access files on a remote host using either RSH or FTP, a network device must
first be created by calling the routine netDevCreate( ). The arguments to
netDevCreate( ) are (1) the name of the device, (2) the name of the host the device
accesses, and (3) which protocol to use: 0 (RSH) or 1 (FTP).
For example, the following call creates an RSH device called mars: that accesses the
host mars. By convention, the name for a network device is the remote machine’s
name followed by a colon (:).
netDevCreate ("mars:", "mars", 0);
RSH and FTP devices respond to the same ioctl( ) functions as NFS devices except
for FIOSYNC and FIOREADDIR. The functions are defined in the header file
ioLib.h. For more information, see the API reference entries for netDrv and ioctl( ).
VxWorks provides both /null and /dev/null for null devices. The /null device is the
traditional VxWorks null device, which is provided by default for backward
compatibility. The /dev/null device is provided by the
BUNDLE_RTP_POSIX_PSE52 component bundle, and is required for conformance
with the POSIX PSE52 profile.
Note that the devs shell command lists /null and /dev/null with other devices, but
the ls command does not list /dev/null under the VRFS root directory (because the
name violates the VRFS naming scheme). Applications can, in any case, use /null
or /dev/null as required.
For information about POSIX PSE52, see the VxWorks Kernel Programmer’s Guide:
POSIX Facilities. For information about VRFS, see VxWorks Kernel Programmer’s
Guide: Local File Systems.
7.8.7 Sockets
7.8.8 Extended Block Device Facility: XBD
The extended block device (XBD) facility mediates I/O activity between file
systems and block devices. It provides a standard interface between file systems
systems and block devices. It provides a standard interface between file systems
on the one hand, and block drivers on the other.
The XBD facility also provides support for removable file systems, automatic file
system detection, and multiple file systems. For more information on these
features, see 8. Local File Systems.
NOTE: The XBD facility is required for some file systems (such as HRFS, dosFs,
cdromFs, and rawFs), but not others (such as ROMFS).
xbdPartition Module
The xbdPartition facility creates a device for each partition that is detected on
media. Each partition that is detected is probed by the file system monitor and an
I/O device is added to VxWorks. This device is an instantiation of the file system
found by the file system monitor (or rawFs if the file system is not recognized or
detected). If no partitions are detected, a single device is created to represent the
entire media. There can be up to four partitions on a single media. For information
about the file system monitor, see 8.2 File System Monitor, p.457.
The partition facility also names the partitions. The names are derived from the
base device name and the partition number. The base device name is derived from
the device driver name (for more information in this regard, see the VxWorks
Device Driver Developer’s Guide). For example, the XBD-compatible device for an
ATA hard disk would have a base name of /ata00. If it had four partitions, they
would be named as follows:
/ata00:1
/ata00:2
/ata00:3
/ata00:4
If there were no partitions, the name would be /ata00:0. For an example of how the
facility is used, see Example 8-1.
partLib Library
The partLib library provides facilities for creating PC-style partitions on media. It
can create up to four primary partitions. Note that when partitions are created, any
existing information on the media is lost. For more information see the VxWorks
API reference for xbdCreatePartition( ).
NOTE: While TRFS is an I/O layer added to dosFs, it uses a modified on-media
format that is not compatible with other FAT-based file systems, including
Microsoft Windows and the VxWorks dosFs file system without the TRFS layer. It
should not, therefore, be used when compatibility with other systems is a
requirement.
For information about dosFs, see 8.5 MS-DOS-Compatible File System: dosFs, p.481.
TRFS is automatically detected and instantiated if the media has already been
formatted for use with TRFS, in a manner very similar to the instantiation of the
dosFs or HRFS file system. The primary difference is that when TRFS is detected
by the file system monitor, it calls the TRFS creation function, and the creation
function then creates another XBD instance and generates an insertion event for it.
The monitor then detects the new XBD and begins probing. In this case, however,
the monitor does not examine the media directly—all commands are routed
through TRFS, which performs the appropriate translations. If a file system is
detected, such as dosFs, the dosFs creation function is called by the monitor and
dosFs is instantiated. If not, rawFs is instantiated.
For information about how file systems are automatically instantiated, see 8.2 File
System Monitor, p.457.
The TRFS formatter, usrFormatTrans( ), takes the name of the device to format
and two additional parameters:
overHead
An integer that identifies the portion of the disk to use as transactional
workspace, in parts-per-thousand of the disk.
type
An integer with a value of either FORMAT_REGULAR (0), which does not
reserve any blocks on the disk, or FORMAT_TFFS (1), which reserves the first
block.
Once the TRFS format is complete, a dosFs file system can be created by calling the
dosFs formatter on the same volume.
When a FAT file system is created using the function dosFsVolFormat( ) in
conjunction with TRFS, a transaction point is automatically inserted following the
format. One cannot, therefore, unformat by rolling back a transaction point.
/* Create a RAM disk with 512 byte sized sectors and 1024 sectors.*/
if (xbdRamDiskDevCreate (512, 1024 * 512, 0, "/trfs") == NULL)
{
printf ("Could not create RAM disk\n");
return;
}
/* Put TRFS on the RAM disk */
if (usrFormatTrans ("/trfs", 100, 0) != OK)
{
printf ("Could not format\n");
return;
}
Once TRFS and dosFs are created, the dosFs file system may be used with the
ordinary file creation and manipulation commands. No changes to the file system
become permanent, however, until TRFS is used to commit them.
It is important to note that the entire dosFs file system—and not individual
files—is committed. The entire disk state must therefore be consistent before
executing a commit; that is, there must not be a file system operation in
progress (by another task, for example) when the file system is committed. If
multiple tasks update the file system, care must be taken to ensure the file
data is in a known state before setting a transaction point.
There are two ways to commit the file system:
■ Using the volume name of the device formatted for TRFS.
■ Using a file descriptor which is open on TRFS.
The function usrTransCommit( ) takes the volume name of the TRFS device and
causes it to commit. The function usrTransCommitFd( ) takes a file descriptor
open on TRFS and causes a commit of the entire file system.
The following code examples illustrate creating a file system with TRFS and setting
a transaction point. The first routine creates a new TRFS layer and dosFs file
system; and the second sets a transaction point.
void createTrfs
(
void
)
{
/* Create an XBD RAM disk with 512 byte sized sectors and 1024 sectors.*/
if (xbdRamDiskDevCreate (512, 1024 * 512, 0, "/trfs") == NULL)
{
printf ("Could not create XBD RAM disk\n");
return;
}
/* Put TRFS on the RAM disk */
if (usrFormatTrans ("/trfs", 100, 0) != OK)
{
printf ("Could not format\n");
return;
}
}
void transTrfs
(
void
)
{
/* This assumes a TRFS with DosFs on "/trfs" */
... /* Perform file operations here */
usrTransCommit ("/trfs");
}
[Figure: the I/O system routes application requests through the driver table, either directly to non-block device drivers, or through a file system and the XBD layer to block device drivers, and from there to the devices.]

RAM Disk Drivers

A RAM driver emulates a disk device, but keeps all data in memory. The
INCLUDE_XBD_RAMDRV component allows the use of a file system to access data stored in RAM. RAM disks can be created using volatile as well as non-volatile RAM. A RAM disk can be used with the HRFS, dosFs, and rawFs file systems. The RAM disk facility links into the file system monitor and event framework.
For more information about RAM disks, see the API reference for
xbdRamDisk, as well as Example 7-4, Example 8-3, and Example 8-7. For
information about compatible file systems, see 8.4 Highly Reliable File System:
HRFS, p.461, 8.5 MS-DOS-Compatible File System: dosFs, p.481, and 8.6 Raw File
System: rawFs, p.507.
Note that the XBD-compatible RAM disk facility supersedes the ramDrv facility.
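The core of a RAM disk is small enough to sketch in self-contained C (a model with illustrative names, not the xbdRamDisk implementation): a memory buffer addressed in fixed-size sectors, with sector read and write routines of the kind a file system layer calls.

```c
#include <stdlib.h>
#include <string.h>

/* Model RAM disk: an in-memory array addressed in fixed-size sectors. */
typedef struct
    {
    char  *data;            /* backing store: bytesPerSector * nSectors */
    size_t bytesPerSector;
    size_t nSectors;
    } RAM_DISK;

RAM_DISK *ramDiskCreate (size_t bytesPerSector, size_t nSectors)
    {
    RAM_DISK *d = malloc (sizeof (RAM_DISK));
    if (d == NULL)
        return NULL;
    d->data = calloc (nSectors, bytesPerSector);
    if (d->data == NULL)
        {
        free (d);
        return NULL;
        }
    d->bytesPerSector = bytesPerSector;
    d->nSectors = nSectors;
    return d;
    }

/* Sector-granular read: the operation a file system layer needs. */
int ramDiskRead (RAM_DISK *d, size_t sector, char *buf)
    {
    if (sector >= d->nSectors)
        return -1;
    memcpy (buf, d->data + sector * d->bytesPerSector, d->bytesPerSector);
    return 0;
    }

/* Sector-granular write. */
int ramDiskWrite (RAM_DISK *d, size_t sector, const char *buf)
    {
    if (sector >= d->nSectors)
        return -1;
    memcpy (d->data + sector * d->bytesPerSector, buf, d->bytesPerSector);
    return 0;
    }
```

The model makes clear why a RAM disk works with any of the file systems listed above: the file system only needs sector read and write operations, and memory provides them trivially.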
SCSI Drivers
SCSI is a standard peripheral interface that allows connection with a wide variety
of hard disks, optical disks, floppy disks, tape drives, and CD-ROM devices. SCSI
block drivers are compatible with the dosFs libraries, and offer several advantages
for target configurations. They provide:
■ local mass storage in non-networked environments
■ faster I/O throughput than Ethernet networks
The SCSI-2 support in VxWorks supersedes previous SCSI support, although it
offers the option of configuring the original SCSI functionality, now known as
SCSI-1. With SCSI-2 enabled, the VxWorks environment can still handle SCSI-1
applications, such as file systems created under SCSI-1. However, applications that
directly used SCSI-1 data structures defined in scsiLib.h may require
modifications and recompilation for SCSI-2 compatibility.
The VxWorks SCSI implementation consists of two modules, one for the
device-independent SCSI interface and one to support a specific SCSI controller.
The scsiLib library provides routines that support the device-independent
interface; device-specific libraries provide configuration routines that support
specific controllers. There are also additional support routines for individual
targets in sysLib.c.
! CAUTION: Including SCSI-2 in your VxWorks image can significantly increase the
image size.
Each board in a SCSI-2 environment must define a unique SCSI bus ID for the SCSI
initiator. SCSI-1 drivers, which support only a single initiator at a time, assume an
initiator SCSI bus ID of 7. However, SCSI-2 supports multiple initiators, up to eight
initiators and targets at one time. Therefore, to ensure a unique ID, choose a value
in the range 0-7 to be passed as a parameter to the driver’s initialization routine (for
example, ncr710CtrlInitScsi2( )) by the sysScsiInit( ) routine in sysScsi.c. For
more information, see the reference entry for the relevant driver initialization
routine. If there are multiple boards on one SCSI bus, and all of these boards use
the same BSP, then different versions of the BSP must be compiled for each board
by assigning unique SCSI bus IDs.
The SCSI subsystem supports libraries and drivers for both SCSI-1 and SCSI-2. It
consists of the following six libraries which are independent of any SCSI controller:
scsiLib
routines that provide the mechanism for switching SCSI requests to either
the SCSI-1 library (scsi1Lib) or the SCSI-2 library (scsi2Lib), as configured
by the board support package (BSP).
scsi1Lib
SCSI-1 library routines and interface, used when only INCLUDE_SCSI is
used (see Configuring SCSI Drivers, p.414).
scsi2Lib
SCSI-2 library routines and all physical device creation and deletion
routines.
scsiCommonLib
commands common to all types of SCSI devices.
scsiDirectLib
routines and commands for direct access devices (disks).
scsiSeqLib
routines and commands for sequential access block devices (tapes).
Controller-independent support for the SCSI-2 functionality is divided into
scsi2Lib, scsiCommonLib, scsiDirectLib, and scsiSeqLib. The interface to any of
these SCSI-2 libraries can be accessed directly. However, scsiSeqLib is designed to
be used in conjunction with tapeFs, while scsiDirectLib works with dosFs and
rawFs. Applications written for SCSI-1 can be used with SCSI-2; however, SCSI-1
device drivers cannot.
VxWorks targets using SCSI interface controllers require a controller-specific
device driver. These device drivers work in conjunction with the
controller-independent SCSI libraries, and they provide controller configuration
and initialization routines contained in controller-specific libraries. For example,
the Western Digital WD33C93 SCSI controller is supported by the device driver
libraries wd33c93Lib, wd33c93Lib1, and wd33c93Lib2. Routines tied to SCSI-1
(such as wd33c93CtrlCreate( )) and SCSI-2 (such as wd33c93CtrlCreateScsi2( ))
are segregated into separate libraries to simplify configuration. There are also
additional support routines for individual targets in sysLib.c.
When VxWorks is built with the INCLUDE_SCSI component, the system startup
code initializes the SCSI interface by executing sysScsiInit( ) and usrScsiConfig( ).
The call to sysScsiInit( ) initializes the SCSI controller and sets up interrupt
handling. The physical device configuration is specified in usrScsiConfig( ),
which is in installDir/vxworks-6.x/target/src/config/usrScsi.c. The routine contains
an example of the calling sequence to declare a hypothetical configuration,
including:
■ definition of physical devices with scsiPhysDevCreate( )
■ creation of logical partitions with scsiBlkDevCreate( )
■ creation of an XBD block wrapper driver with xbdBlkDevCreate( ).
If a recognized file system exists on the SCSI media, it is instantiated automatically
when xbdBlkDevCreate( ) returns. If not, the file system formatter must be called
to create the file system. See the dosFsVolFormat( ) API reference for information
about creating a dosFs file system; see the hrfsFormat( ) API reference for creating
an HRFS file system.
If you are not using SCSI_AUTO_CONFIG, modify usrScsiConfig( ) to reflect your
actual configuration. For more information on the calls used in this routine, see the
reference entries for scsiPhysDevCreate( ), scsiBlkDevCreate( ), and
xbdBlkDevCreate( ).
There are numerous types of SCSI devices, each supporting its own mix of SCSI-2
features. To set device-specific options, define a SCSI_OPTIONS structure and
assign the desired values to the structure’s fields. After setting the appropriate
fields, call scsiTargetOptionsSet( ) to effect your selections. Example 7-6
illustrates one possible device configuration using SCSI_OPTIONS.
Call scsiTargetOptionsSet( ) after initializing the SCSI subsystem, but before
initializing the SCSI physical device. For more information about setting and
implementing options, see the reference entry for scsiTargetOptionsSet( ).
The SCSI subsystem performs each SCSI command request as a SCSI transaction.
This requires the SCSI subsystem to select a device. Different SCSI devices require
different amounts of time to respond to a selection; in some cases, the selTimeOut
field may need to be altered from the default.
If a device does not support SCSI messages, the boolean field messages can be set
to FALSE. Similarly, if a device does not support disconnect/reconnect, the
boolean field disconnect can be set to FALSE.
The SCSI subsystem automatically tries to negotiate synchronous data transfer
parameters. However, if a SCSI device does not support synchronous data
transfer, set the maxOffset field to 0. By default, the SCSI subsystem tries to
negotiate the maximum possible REQ/ACK offset and the minimum possible data
transfer period supported by the SCSI controller on the VxWorks target. This is
done to maximize the speed of transfers between two devices. However, speed
depends upon electrical characteristics, like cable length, cable quality, and device
termination; therefore, it may be necessary to reduce the values of maxOffset or
minPeriod for fast transfers.
The tagType field defines the type of tagged command queuing desired, using one
of the following macros:
■ SCSI_TAG_UNTAGGED
■ SCSI_TAG_SIMPLE
■ SCSI_TAG_ORDERED
■ SCSI_TAG_HEAD_OF_QUEUE
For more information about the types of tagged command queuing available, see
the ANSI X3T9-I/O Interface Specification Small Computer System Interface
(SCSI-2).
The maxTags field sets the maximum number of command tags available for a
particular SCSI device.
Wide data transfers with a SCSI target device are automatically negotiated upon
initialization by the SCSI subsystem. Wide data transfer parameters are always
negotiated before synchronous data transfer parameters, as specified by the SCSI
ANSI specification, because a wide negotiation resets any prior negotiation of
synchronous parameters. However, if a SCSI device does not support wide
parameters and there are problems initializing that device, you must set the
xferWidth field to 0. By default, the SCSI subsystem tries to negotiate the
maximum possible transfer width supported by the SCSI controller on the
VxWorks target in order to maximize the default transfer speed between the two
devices. For more information on the actual routine call, see the reference entry for
scsiTargetOptionsSet( ).
The following examples show some possible configurations for different SCSI
devices. Example 7-5 is a simple block device configuration setup. Example 7-6
involves selecting special options and demonstrates the use of
scsiTargetOptionsSet( ). Example 7-7 configures a SCSI device for synchronous
data transfer. Example 7-8 shows how to configure the SCSI bus ID. These
examples can be embedded either in the usrScsiConfig( ) routine or in a
user-defined SCSI configuration function.
If problems with your configuration occur, insert the following lines at the
beginning of usrScsiConfig( ) to obtain further information on SCSI bus activity.
#if FALSE
scsiDebug = TRUE;
scsiIntsDebug = TRUE;
#endif
Do not declare the global variables scsiDebug and scsiIntsDebug locally. They
can be set or reset from the shell.
Example 7-6 Configuring a SCSI Disk Drive with Asynchronous Data Transfer and No Tagged Command Queuing
In this example, a SCSI disk device is configured without support for synchronous
data transfer and tagged command queuing. The scsiTargetOptionsSet( ) routine
is used to turn off these features. The SCSI ID of this disk device is 2, and the LUN
is 0:
int which;
SCSI_OPTIONS option;
int devBusId;

devBusId = 2;
which = SCSI_SET_OPT_XFER_PARAMS | SCSI_SET_OPT_TAG_PARAMS;
option.maxOffset = SCSI_SYNC_XFER_ASYNC_OFFSET; /* => 0 defined in scsi2Lib.h */
option.minPeriod = SCSI_SYNC_XFER_MIN_PERIOD;   /* defined in scsi2Lib.h */
option.tagType   = SCSI_TAG_UNTAGGED;           /* defined in scsi2Lib.h */
option.maxTags   = SCSI_MAX_TAGS;

if (scsiTargetOptionsSet (pSysScsiCtrl, devBusId, &option, which) == ERROR)
    printf ("Could not set SCSI target options\n");
Example 7-7 Configuring a SCSI Disk for Synchronous Data Transfer with Non-Default Offset and Period Values
In this example, a SCSI disk drive is configured with support for synchronous data
transfer. The offset and period values are user-defined and differ from the driver
default values. The chosen period is 25, defined in SCSI units of 4 ns. Thus, the
period is actually 4 * 25 = 100 ns. The synchronous offset is chosen to be 2. Note
that you may need to adjust the values depending on your hardware environment.
int which;
SCSI_OPTIONS option;
int devBusId;

devBusId = 2;
which = SCSI_SET_OPT_XFER_PARAMS;
option.maxOffset = 2;
option.minPeriod = 25;

if (scsiTargetOptionsSet (pSysScsiCtrl, devBusId, &option, which) == ERROR)
    printf ("Could not set SCSI target options\n");
Example 7-8 Changing the Bus ID of the SCSI Controller

To change the bus ID of the SCSI controller, modify sysScsiInit( ) in sysScsi.c. Set the SCSI bus ID to a value between 0 and 7 in the call to xxxCtrlInitScsi2( ), where xxx is the controller name. The default bus ID for the SCSI controller is 7.
Troubleshooting
■ Incompatibilities Between SCSI-1 and SCSI-2
Applications written for SCSI-1 may not execute for SCSI-2 because data
structures in scsi2Lib.h, such as SCSI_TRANSACTION and SCSI_PHYS_DEV,
have changed. This applies only if the application used these structures
directly.
If this is the case, you can choose to configure only the SCSI-1 level of support,
or you can modify your application according to the data structures in
scsi2Lib.h. In order to set new fields in the modified structure, some
applications may simply need to be recompiled, and some applications will
have to be modified and then recompiled.
■ SCSI Bus Failure
If your SCSI bus hangs, it could be for a variety of reasons. Some of the more common are:
– Your cable has a defect. This is the most common cause of failure.
7.9 Differences Between VxWorks and Host System I/O

■ File Descriptors
In VxWorks, file descriptors are unique to the kernel and to each process—as
in UNIX and Windows. The kernel and each process has its own universe of
file descriptors, distinct from each other. When the process is created, its
universe of file descriptors is initially populated by duplicating the file
descriptors of its creator. (This applies only when the creator is a process. If the
creator is a kernel task, only the three standard I/O descriptors 0, 1 and 2 are
duplicated.) Thereafter, all open, close, or dup activities affect only that
process’ universe of descriptors.
In the kernel and in each process, file descriptors are global to that entity, meaning that they are accessible by any task running in it.
In the kernel, however, standard input, standard output, and standard error
(0, 1, and 2) can be made task specific.
For more information see 7.4.1 File Descriptors, p.367 and 7.4.3 Standard I/O
Redirection, p.369.
■ I/O Control
The specific parameters passed to ioctl( ) functions can differ between UNIX and VxWorks.
■ Driver Routines
7.10 Internal I/O System Structure
Example 7-9 shows the abbreviated code for a hypothetical driver that is used as
an example throughout the following discussions. This example driver is typical
of drivers for character-oriented devices.
In VxWorks, each driver has a short, unique abbreviation, such as net or tty, which
is used as a prefix for each of its routines. The abbreviation for the example driver
is xx.
/*
* xxDrv - driver initialization routine
* xxDrv() init’s the driver. It installs the driver via iosDrvInstall.
* It may allocate data structures, connect ISRs, and initialize hardware
*/
STATUS xxDrv ()
{
xxDrvNum = iosDrvInstall (xxCreat, 0, xxOpen, 0, xxRead, xxWrite, xxIoctl);
(void) intConnect (intvec, xxInterrupt, ...);
...
}
/************************************************************************
* xxDevCreate - device creation routine
*
* Called to add a device called <name> to be svced by this driver. Other
* driver-dependent arguments may include buffer sizes, device addresses.
* The routine adds the device to the I/O system by calling iosDevAdd.
* It may also allocate and initialize data structures for the device,
* initialize semaphores, initialize device hardware, and so on.
*/
/*
*
* The following routines implement the basic I/O functions.
* The xxOpen() return value is meaningful only to this driver,
* and is passed back as an argument to the other I/O routines.
*/
/*
* xxInterrupt - interrupt service routine
*
* Most drivers have routines that handle interrupts from the devices
* serviced by the driver. These routines are connected to the interrupts
* by calling intConnect (usually in xxDrv above). They can receive a
* single argument, specified in the call to intConnect (see intLib).
*/
7.10.1 Drivers
A driver for a non-block device generally implements the seven basic I/O
functions—creat( ), remove( ), open( ), close( ), read( ), write( ), and ioctl( )—for a
particular kind of device. The driver implements these general functions with
corresponding device-specific routines that are installed with iosDrvInstall( ).
Not all of the general I/O functions need be implemented if they are not supported by a particular device. For example, remove( ) is usually not supported for devices that are not used with file systems.
If any of the seven basic I/O routines are not implemented by a driver, a null
function pointer should be used for the corresponding iosDrvInstall( ) parameter
when the driver is installed. Any call to a routine that is not supported will then
fail and return an ENOTSUP error.
Drivers may (optionally) allow tasks to wait for activity on multiple file
descriptors. This functionality is implemented with the driver’s ioctl( ) routine; see
Implementing select( ), p.443.
A driver for a block device interfaces with a file system, rather than directly with
the I/O system. The file system in turn implements most I/O functions. The driver
need only supply routines to read and write blocks, reset the device, perform I/O
control, and check device status.
When an application invokes one of the basic I/O functions, the I/O system routes
the request to the appropriate routine of a specific driver, as described in the
following sections. The driver’s routine runs in the calling task’s context, as though
it were called directly from the application. Thus, the driver is free to use any
facilities normally available to tasks, including I/O to other devices. This means
that most drivers have to use some mechanism to provide mutual exclusion to
critical regions of code. The usual mechanism is the semaphore facility provided
in semLib.
In addition to the routines that implement the seven basic I/O functions, drivers
also have three other routines:
■ An initialization routine that installs the driver in the I/O system, connects to any interrupts used by the devices serviced by the driver, and performs any necessary hardware initialization. This routine is typically named xxDrv( ).
■ A routine to add devices that are to be serviced by the driver to the I/O system. This routine is typically named xxDevCreate( ).
■ Interrupt-level routines that are connected to the interrupts of the devices serviced by the driver.
The function of the I/O system is to route user I/O requests to the appropriate
routine of the appropriate driver. The I/O system does this by maintaining a table
that contains the address of each routine for each driver. Drivers are installed
dynamically by calling the I/O system internal routine iosDrvInstall( ). The
arguments to this routine are the addresses of the seven I/O routines for the new
driver. The iosDrvInstall( ) routine enters these addresses in a free slot in the
driver table and returns the index of this slot. This index is known as the driver
number and is used subsequently to associate particular devices with the driver.
Null (0) addresses can be specified for any of the seven basic I/O routines that are
not supported by a device. For example, remove( ) is usually not supported for
non-file-system devices, and a null is specified for the driver’s remove function.
When a user I/O call matches a null driver routine, the call fails and an ENOTSUP
error is returned.
VxWorks file systems (such as dosFsLib) contain their own entries in the driver
table, which are created when the file system library is initialized.
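The slot mechanism can be modeled in self-contained C (illustrative names, not the actual iosLib code; the real table holds seven routines per slot, the model two): install claims a free slot and returns its index as the driver number, and a null routine entry makes the corresponding call fail with ENOTSUP.

```c
#include <errno.h>
#include <stddef.h>

#define MAX_DRIVERS 16

typedef int (*IO_FUNC) (void *ctx);

/* One driver-table slot: addresses of the driver's I/O routines.
 * (The real table has seven routines per slot; two suffice here.) */
typedef struct
    {
    int     inUse;
    IO_FUNC openRtn;
    IO_FUNC removeRtn;  /* often NULL for non-file-system devices */
    } DRV_ENTRY;

static DRV_ENTRY drvTable[MAX_DRIVERS];

/* Model of iosDrvInstall( ): claim a free slot and return its index,
 * which becomes the driver number. */
int drvInstall (IO_FUNC openRtn, IO_FUNC removeRtn)
    {
    int i;
    for (i = 0; i < MAX_DRIVERS; i++)
        if (!drvTable[i].inUse)
            {
            drvTable[i].inUse     = 1;
            drvTable[i].openRtn   = openRtn;
            drvTable[i].removeRtn = removeRtn;
            return i;
            }
    return -1;                  /* table full */
    }

/* Model of routing a user call: a null routine yields ENOTSUP. */
int drvInvokeRemove (int drvNum, void *ctx)
    {
    if (drvTable[drvNum].removeRtn == NULL)
        {
        errno = ENOTSUP;
        return -1;
        }
    return drvTable[drvNum].removeRtn (ctx);
    }

/* Sample driver routine used in the usage example. */
int demoOpen (void *ctx)
    {
    (void) ctx;
    return 0;
    }
```

Installing a driver with a null remove routine, then invoking remove through the table, reproduces the ENOTSUP behavior described above.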
Figure 7-3 shows the actions taken by the example driver and by the I/O system
when the initialization routine xxDrv( ) runs.
The driver calls iosDrvInstall( ), specifying the addresses of the driver’s routines
for the seven basic I/O functions. Then, the I/O system:
1. Locates the next available slot in the driver table, in this case slot 2.
2. Enters the addresses of the driver routines in the driver table.
3. Returns the slot number as the driver number of the newly installed driver.
7.10.2 Devices
Some drivers are capable of servicing many instances of a particular kind of device.
For example, a single driver for a serial communications device can often handle
many separate channels that differ only in a few parameters, such as device
address.
In the VxWorks I/O system, devices are defined by a data structure called a device
header (DEV_HDR). This data structure contains the device name string and the
driver number for the driver that services this device. The device headers for all
the devices in the system are kept in a memory-resident linked list called the device
list. The device header is the initial part of a larger structure determined by the
individual drivers. This larger structure, called a device descriptor, contains
additional device-specific data such as device addresses, buffers, and semaphores.
Non-block devices are added to the I/O system dynamically by calling the internal
I/O routine iosDevAdd( ). The arguments to iosDevAdd( ) are the address of the
device descriptor for the new device, the device’s name, and the driver number of
the driver that services the device. The device descriptor specified by the driver
can contain any necessary device-dependent information, as long as it begins with
a device header. The driver does not need to fill in the device header, only the
device-dependent information. The iosDevAdd( ) routine enters the specified
device name and the driver number in the device header and adds it to the system
device list.
To add a block device to the I/O system, call the device initialization routine for
the file system required on that device—for example, dosFsDevCreate( ). The
device initialization routine then calls iosDevAdd( ) automatically.
The routine iosDevFind( ) can be used to locate the device structure (by obtaining
a pointer to the DEV_HDR, which is the first member of that structure) and to
verify that a device name exists in the table.
The following is an example using iosDevFind( ):
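A self-contained sketch of the lookup (the DEV_HDR, devAdd( ), and devFind( ) below are simplified stand-ins for ioLib's DEV_HDR, iosDevAdd( ), and iosDevFind( ), so the fragment can stand alone): the device name is matched as an initial substring of the path, and the unmatched remainder is returned through the tail pointer.

```c
#include <stddef.h>
#include <string.h>

/* Simplified device header: name plus driver number, linked in a list. */
typedef struct dev_hdr
    {
    struct dev_hdr *next;
    const char     *name;
    int             drvNum;
    } DEV_HDR;

static DEV_HDR *devList = NULL;

/* Model of iosDevAdd( ): fill in the header and link it into the list. */
void devAdd (DEV_HDR *pDev, const char *name, int drvNum)
    {
    pDev->name   = name;
    pDev->drvNum = drvNum;
    pDev->next   = devList;
    devList      = pDev;
    }

/* Model of iosDevFind( ): match the device name as an initial substring
 * of the path; return the header and the unmatched tail of the name. */
DEV_HDR *devFind (const char *path, const char **pNameTail)
    {
    DEV_HDR *p;
    for (p = devList; p != NULL; p = p->next)
        {
        size_t len = strlen (p->name);
        if (strncmp (path, p->name, len) == 0)
            {
            *pNameTail = path + len;  /* e.g. file name on a block device */
            return p;
            }
        }
    *pNameTail = path;
    return NULL;
    }
```

Because a device descriptor embeds DEV_HDR as its first member, the returned pointer can be cast back to the driver's full descriptor type.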
In Figure 7-4, the example driver’s device creation routine xxDevCreate( ) adds
devices to the I/O system by calling iosDevAdd( ).
[Figure 7-4: the device list after xxDevCreate( ) runs — device headers for "/dk0/" (driver 1), "/xx0" (driver 2), and "/xx1" (driver 2), each followed by its device-dependent data.]
Deleting Devices
A device can be deleted with iosDevDelete( ) and the associated driver removed
with iosDrvRemove( ).
Note that a device-deletion operation causes the file descriptors that are open on
the device to be invalidated, but not closed. The file descriptors can only be closed
by an explicit act on the part of an application. If this were not the case, and file
descriptors were closed automatically by the I/O system, the descriptors could be
reassigned to new files while they were still being used by an application that was
unaware of the deletion of the device. The new files could then be accessed
unintentionally by an application attempting to use the files associated with the
deleted device, as well as by an application that was correctly using the new files.
This would result in I/O errors and possible device data corruption.
Because the file descriptors of a device that has been deleted are invalid, any
subsequent I/O calls that use them—except close( )—will fail. The behavior of the
I/O routines in this regard is as follows:
■ close( ) releases the file descriptor at I/O system level and the driver close routine is not called.
■ read( ), write( ), and ioctl( ) fail with error ENXIO (no such device or address).
■ While open( ), remove( ), and creat( ) do not take an open file descriptor as input, they fail because the device name is no longer in the device list.
Note that even if a device is deleted and immediately added again with the same
device name, the file descriptors that were invalidated with the deletion are not
restored to valid status. The behavior of the I/O calls on the associated file
descriptors is the same as if the device had not been added again.
Applications that are likely to encounter device deletion should be sure to check
for ENXIO errors from read( ), write( ), and ioctl( ) calls, and to then close the
relevant file descriptors.
For situations in which devices are dynamically installed and deleted, the
iosDevDelCallback( ) routine provides the means for calling a post-deletion
handler after all driver invocations are exited.
A common use of a device deletion callback is to prevent a race condition that
would result from a device descriptor being deleted in one thread of execution
while it was still being used in another.
A device descriptor belongs to the application, and the I/O system does not control its creation and release. It is a user data structure with the DEV_HDR data structure embedded at the front of it, followed by any device-specific members. A pointer to the descriptor can be passed to any iosDevXyz( ) routine as a DEV_HDR pointer, or used as the device descriptor for user device handling.
When a device is deleted, an application should not immediately release the device
descriptor memory after iosDevDelete( ) and iosDrvRemove( ) calls because a
driver invocation of the deleted device might still be in process. If the device
descriptor is deleted while it is still in use by a driver routine, serious errors could
occur.
For example, the following would produce a race condition: task A invokes the
driver routine xyzOpen( ) by a call to open( ) and the xyzOpen( ) call does not
return before task B deletes the device and releases the device descriptor.
However, if descriptor release is not performed by task B, but by a callback
function installed with iosDevDelCallback( ), then the release occurs only after
task A’s invocation of the driver routine has finished.
A device callback routine is called immediately when a device is deleted with
iosDevDelete( ) or iosDrvRemove( ) as long as no invocations of the associated
driver are operative (that is, the device driver reference counter is zero).
Otherwise, the callback routine is not executed until the last driver call exits (and
the device driver reference counter reaches zero).
A device deletion callback routine takes a single parameter, the pointer to the DEV_HDR data structure of the device in question. For example:
devDeleteCallback (pDevHdr)
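The deferral rule can be modeled in a few lines of self-contained C (illustrative names, not the iosLib implementation): each driver invocation raises a reference count, and a pending deletion callback fires only when the count falls back to zero.

```c
#include <stddef.h>

typedef void (*DEL_CALLBACK) (void *pDevHdr);

/* Model device: reference count, pending-delete flag, deletion callback. */
typedef struct
    {
    int          refCount;      /* driver invocations currently in progress */
    int          deletePending;
    DEL_CALLBACK callback;      /* e.g. releases the device descriptor */
    void        *pDevHdr;
    } DEV_MODEL;

/* Entering a driver routine raises the reference count. */
void devEnter (DEV_MODEL *d)
    {
    d->refCount++;
    }

/* Leaving a driver routine may fire a deferred deletion callback. */
void devExit (DEV_MODEL *d)
    {
    if (--d->refCount == 0 && d->deletePending && d->callback != NULL)
        d->callback (d->pDevHdr);
    }

/* Model of iosDevDelete( ): immediate callback only if no calls in flight. */
void devDelete (DEV_MODEL *d)
    {
    d->deletePending = 1;
    if (d->refCount == 0 && d->callback != NULL)
        d->callback (d->pDevHdr);
    }

/* Example callback: records that the descriptor was released. */
static int released = 0;
static void demoRelease (void *pDevHdr)
    {
    (void) pDevHdr;
    released = 1;
    }
```

In the fsDevCreate( ) example below, iosDevDelCallback( ) installs just such a release function, so the descriptor memory is freed only at a safe time.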
STATUS fsDevCreate
(
char * pDevName, /* device name */
device_t device, /* underlying block device */
u_int maxFiles, /* max no. of simultaneously open files */
u_int devCreateOptions /* write option & volume integrity */
)
{
FS_VOLUME_DESC *pVolDesc = NULL; /* volume descriptor ptr */
. . . . . .
pVolDesc = (FS_VOLUME_DESC *) malloc (sizeof (*pVolDesc));
pVolDesc->device = device;
. . . . . .
if (iosDevAdd((void *)pVolDesc, pDevName, fsDrvNum ) == ERROR)
{
pVolDesc->magic = NONE;
goto error_iosadd;
}
/* Device deletion callback installed to release memory resource. */
iosDevDelCallback((DEV_HDR *) pVolDesc, (FUNCPTR) fsVolDescRelease);
. . . . . .
}
STATUS fsDevDelete
(
FS_VOLUME_DESC *pVolDesc /* pointer to volume descriptor */
)
{
. . . . . .
/*
* Delete the file system device from I/O device list. Callback
* fsVolDescRelease will be called from now on at a
* safe time by I/O system.
*/
iosDevDelete((DEV_HDR *) pVolDesc);
. . . . . .
}
The application should check the error returned by a deleted device, as follows:
if (write (fd, (char *)buffer, nbytes) == ERROR)
{
if (errno == ENXIO)
{
/* Device is deleted. fd must be closed by application. */
close(fd);
}
else
{
/* Write failed for some other reason; handle the error. */
. . . . . .
}
}
Several file descriptors can be open to a single device at one time. A device driver
can maintain additional information associated with a file descriptor beyond the
I/O system’s device information. In particular, devices on which multiple files can
be open at one time have file-specific information (for example, file offset)
associated with each file descriptor. You can also have several file descriptors open
to a non-block device, such as a tty; typically there is no additional information,
and thus writing on any of the file descriptors produces identical results.
Files are opened with open( ) or creat( ). The I/O system searches the device list
for a device name that matches the filename (or an initial substring) specified by
the caller. If a match is found, the I/O system uses the driver number contained in
the corresponding device header to locate and call the driver’s open routine in the
driver table.
The I/O system must establish an association between the file descriptor used by
the caller in subsequent I/O calls, and the driver that services it. Additionally, the
driver must associate some data structure per descriptor. In the case of non-block
devices, this is usually the device descriptor that was located by the I/O system.
The I/O system maintains these associations in a table called the file descriptor table.
This table contains the driver number and an additional driver-determined 4-byte
value. The driver value is the internal descriptor returned by the driver’s open
routine, and can be any value the driver requires to identify the file. In subsequent
calls to the driver’s other I/O functions (read( ), write( ), ioctl( ), and close( )), this
value is supplied to the driver in place of the file descriptor in the application-level
I/O call.
In Figure 7-5 and Figure 7-6, a user calls open( ) to open the file /xx0. The I/O
system takes the following series of actions:
1. It searches the device list for a device name that matches the specified filename
(or an initial substring). In this case, a complete device name matches.
2. It reserves a slot in the file descriptor table and creates a new file descriptor
object, which is used if the open is successful.
3. It then looks up the address of the driver’s open routine, xxOpen( ), and calls
that routine. Note that the arguments to xxOpen( ) are transformed by the I/O
system from the user’s original arguments to open( ). The first argument to
xxOpen( ) is a pointer to the device descriptor the I/O system located in the
full filename search. The next parameter is the remainder of the filename
specified by the user, after removing the initial substring that matched the
device name. In this case, because the device name matched the entire
filename, the remainder passed to the driver is a null string. The driver is free
to interpret this remainder in any way it wants. In the case of block devices,
this remainder is the name of a file on the device. In the case of non-block
devices like this one, it is usually an error for the remainder to be anything but
the null string. The third parameter is the file access flag, in this case
O_RDONLY; that is, the file is opened for reading only. The last parameter is
the mode, as passed to the original open( ) routine.
4. It executes xxOpen( ), which returns a value that subsequently identifies the
newly opened file. In this case, the value is the pointer to the device descriptor.
This value is supplied to the driver in subsequent I/O calls that refer to the file
being opened. Note that if the driver returns only the device descriptor, the
driver cannot distinguish multiple files opened to the same device. In the case
of non-block device drivers, this is usually appropriate.
5. The I/O system then enters the driver number and the value returned by
xxOpen( ) in the new file descriptor object.
Again, the value entered in the file descriptor object has meaning only for the
driver, and is arbitrary as far as the I/O system is concerned.
6. Finally, it returns to the user the index of the slot in the file descriptor table, in
this case 3.
7 I/O System
7.10 Internal I/O System Structure
[Figure: the call fd = open ("/xx0", O_RDONLY, 0), steps 1-3: [1] the I/O system finds "/xx0" in the device list; [2] it reserves a slot in the file descriptor table; [3] it calls the driver's open routine, xxdev = xxOpen (xxdev, "", O_RDONLY, 0), passing a pointer to the device descriptor. In the device list, "/dk0/" belongs to driver 1, and "/xx0" and "/xx1" to driver 2, each with its own device-dependent data.]
[Figure, continued (steps 4-6): [4] the driver returns an identifying value, in this case the pointer to the device descriptor (xxdev); [5] the I/O system enters the driver number (2) and the identifying value in the reserved file descriptor table slot; [6] it returns to the user the index of the new open file in the table (fd = 3).]
In Figure 7-7, the user calls read( ) to obtain input data from the file. The specified
file descriptor is the index into the file descriptor table for this file. The I/O system
uses the driver number contained in the table to locate the driver’s read routine,
xxRead( ). The I/O system calls xxRead( ), passing it the identifying value in the
file descriptor table that was returned by the driver’s open routine, xxOpen( ).
Again, in this case the value is the pointer to the device descriptor. The driver’s
read routine then does whatever is necessary to read data from the device. The
process for user calls to write( ) and ioctl( ) follows the same procedure.
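The dispatch just described can be modeled in a few lines of host-side C. This is a toy model, not VxWorks source; the table sizes, names, and the demo driver routine are assumptions made for illustration.

```c
#include <stdio.h>
#include <stddef.h>

/* Toy model of file descriptor table dispatch (illustrative sizes) */
#define NUM_FDS  5
#define NUM_DRVS 4

typedef int (*READ_FUNC) (void * value, char * buf, int maxBytes);

typedef struct
    {
    int    drvNum;  /* driver number entered at open time */
    void * value;   /* identifying value returned by the driver's open */
    int    inUse;   /* slot reserved? */
    } FD_ENTRY;

static FD_ENTRY  fdTable[NUM_FDS];
static READ_FUNC drvReadTable[NUM_DRVS];

/* demo driver read routine for this model: "reads" 12 bytes */
static int demoRead (void * value, char * buf, int maxBytes)
    {
    (void) buf; (void) maxBytes;
    return (value != NULL) ? 12 : -1;
    }

/* modelRead - use the driver number in the fd slot to locate the driver's
 * read routine, and pass it the identifying value (here, xxdev) */
static int modelRead (int fd, char * buf, int maxBytes)
    {
    if (fd < 0 || fd >= NUM_FDS || !fdTable[fd].inUse)
        return -1;                              /* stale or bad fd */
    return drvReadTable[fdTable[fd].drvNum] (fdTable[fd].value, buf, maxBytes);
    }

/* modelClose - mark the slot available, as described for close( ) below;
 * subsequent references to the fd then fail */
static int modelClose (int fd)
    {
    if (fd < 0 || fd >= NUM_FDS || !fdTable[fd].inUse)
        return -1;
    fdTable[fd].inUse = 0;
    return 0;
    }
```

The point of the model is only the indirection: the fd selects a slot, the slot's driver number selects the driver routine, and the slot's identifying value is what the driver sees.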
[Figure 7-7: reading from the file. The fd indexes slot 3 of the file descriptor table; the slot's driver number (2) locates xxRead( ), and its identifying value (xxdev) is passed to that routine. The device list is unchanged.]
The user terminates the use of a file by calling close( ). As in the case of read( ), the
I/O system uses the driver number contained in the file descriptor table to locate
the driver’s close routine. In the example driver, no close routine is specified; thus
no driver routines are called. Instead, the I/O system marks the slot in the file
descriptor table as being available. Any subsequent references to that file
descriptor cause an error. Subsequent calls to open( ) can reuse that slot.
Implementing select( )
Supporting select( ) in your driver allows tasks to wait for input from multiple
devices or to specify a maximum time to wait for the device to become ready for
I/O. Writing a driver that supports select( ) is simple, because most of the
functionality is provided in selectLib. You might want your driver to support
select( ) if any of the following is appropriate for the device:
■ The tasks want to specify a timeout to wait for I/O from the device. For
example, a task might want to time out on a UDP socket if the packet never
arrives.
■ The driver supports multiple devices, and the tasks want to wait
simultaneously for any number of them. For example, multiple pipes might be
used for different data priorities.
■ The tasks want to wait for I/O from the device while also waiting for I/O from
another device. For example, a server task might use both pipes and sockets.
To implement select( ), the driver must keep a list of tasks waiting for device
activity. When the device becomes ready, the driver unblocks all the tasks waiting
on the device.
For a device driver to support select( ), it must declare a SEL_WAKEUP_LIST
structure (typically declared as part of the device descriptor structure) and
initialize it by calling selWakeupListInit( ). This is done in the driver’s
xxDevCreate( ) routine. When a task calls select( ), selectLib calls the driver’s
ioctl( ) routine with the function FIOSELECT or FIOUNSELECT. If ioctl( ) is called
with FIOSELECT, the driver must do the following:
1. Add the SEL_WAKEUP_NODE (provided as the third argument of ioctl( )) to
the SEL_WAKEUP_LIST by calling selNodeAdd( ).
2. Use the routine selWakeupType( ) to check whether the task is waiting for
data to read from the device (SELREAD) or if the device is ready to be written
(SELWRITE).
3. If the device is ready (for reading or writing as determined by
selWakeupType( )), the driver calls the routine selWakeup( ) to make sure
that the select( ) call in the task does not pend. This avoids the situation where
the task is blocked but the device is ready.
If ioctl( ) is called with FIOUNSELECT, the driver calls selNodeDelete( ) to remove
the provided SEL_WAKEUP_NODE from the wakeup list.
When the device becomes available, selWakeupAll( ) is used to unblock all the
tasks waiting on this device. Although this typically occurs in the driver’s ISR, it
can also occur elsewhere. For example, a pipe driver might call selWakeupAll( )
from its xxRead( ) routine to unblock all the tasks waiting to write, now that there
is room in the pipe to store the data. Similarly the pipe’s xxWrite( ) routine might
call selWakeupAll( ) to unblock all the tasks waiting to read, now that there is data
in the pipe.
/* This code fragment shows how a driver might support select(). In this
 * example, the driver unblocks tasks waiting for the device to become ready
 * in its interrupt service routine. The listing is abridged; elided driver
 * code is indicated by "... additional driver code ...".
 */
#include <vxWorks.h>
#include <selectLib.h>

STATUS myDrvDevCreate
    (
    char * name              /* name of device to create */
    )
    {
    MY_DEV * pMyDrvDev;      /* pointer to device descriptor */

    ... additional driver code ...

    /* initialize MY_DEV */
    pMyDrvDev->myDrvDataAvailable = FALSE;
    pMyDrvDev->myDrvRdyForWriting = FALSE;

    /* initialize the wakeup list */
    selWakeupListInit (&pMyDrvDev->selWakeupList);

    ... additional driver code ...
    }

STATUS myDrvIoctl
    (
    MY_DEV * pMyDrvDev,      /* pointer to device descriptor */
    int request,             /* ioctl function */
    int arg                  /* function-specific argument */
    )
    {
    SEL_WAKEUP_NODE * pWakeupNode;

    ... additional driver code ...

    switch (request)
        {
        ... additional driver code ...

        case FIOSELECT:
            pWakeupNode = (SEL_WAKEUP_NODE *) arg;

            /* add node to the wakeup list */
            selNodeAdd (&pMyDrvDev->selWakeupList, pWakeupNode);

            /* if the device is already ready, make sure select( ) does
             * not pend */
            if (selWakeupType (pWakeupNode) == SELREAD
                && pMyDrvDev->myDrvDataAvailable)
                selWakeup (pWakeupNode);

            if (selWakeupType (pWakeupNode) == SELWRITE
                && pMyDrvDev->myDrvRdyForWriting)
                selWakeup (pWakeupNode);
            break;

        case FIOUNSELECT:
            /* delete node from the wakeup list */
            selNodeDelete (&pMyDrvDev->selWakeupList,
                (SEL_WAKEUP_NODE *) arg);
            break;
        }

    ... additional driver code ...
    }

void myDrvIsr
    (
    MY_DEV * pMyDrvDev
    )
    {
    ... additional driver code ...

    /* if there is data available for reading, wake up all pending tasks */
    if (pMyDrvDev->myDrvDataAvailable)
        selWakeupAll (&pMyDrvDev->selWakeupList, SELREAD);

    /* if the device is ready for writing, wake up all pending tasks */
    if (pMyDrvDev->myDrvRdyForWriting)
        selWakeupAll (&pMyDrvDev->selWakeupList, SELWRITE);
    }
Cache Coherency
NOTE: The cache facilities described in this section are provided for the
uniprocessor (UP) configuration of VxWorks, some of which are not appropriate—
and not provided—for the symmetric multiprocessor (SMP) configuration. For
more information in this regard, see cacheLib Restrictions, p.710. For general
information about VxWorks SMP and about migration, see 15. VxWorks SMP and
15.15 Migrating Code to VxWorks SMP, p.704.
Drivers written for boards with caches must guarantee cache coherency. Cache
coherency means data in the cache must be in sync, or coherent, with data in RAM.
The data cache and RAM can get out of sync any time there is asynchronous access
to RAM (for example, DMA device access or VMEbus access). Data caches are used
to increase performance by reducing the number of memory accesses. Figure 7-8
shows the relationships between the CPU, data cache, RAM, and a DMA device.
Data caches can operate in one of two modes: write-through and copyback.
Write-through mode writes data to both the cache and RAM; this guarantees cache
coherency on output but not input. Copyback mode writes the data only to the
cache; this makes cache coherency an issue for both input and output of data.
[Figure 7-8: the CPU accesses RAM through the data cache, while the DMA device accesses RAM directly, bypassing the cache.]
If a CPU writes data to RAM that is destined for a DMA device, the data can first
be written to the data cache. When the DMA device transfers the data from RAM,
there is no guarantee that the data in RAM was updated with the data in the cache.
Thus, the data output to the device may not be the most recent—the new data may
still be sitting in the cache. This data incoherence can be solved by making sure the
data cache is flushed to RAM before the data is transferred to the DMA device.
If a CPU reads data from RAM that originated from a DMA device, the data read
can be from the cache buffer (if the cache buffer for this data is not marked invalid)
and not the data just transferred from the device to RAM. The solution to this data
incoherence is to make sure that the cache buffer is marked invalid so that the data
is read from RAM and not from the cache.
Drivers can solve the cache coherency problem either by allocating cache-safe
buffers (buffers that are marked non-cacheable) or flushing and invalidating cache
entries any time the data is written to or read from the device. Allocating
cache-safe buffers is useful for static buffers; however, this typically requires MMU
support. Non-cacheable buffers that are allocated and freed frequently (dynamic
buffers) can result in large amounts of memory being marked non-cacheable. An
alternative to using non-cacheable buffers is to flush and invalidate cache entries
manually; this allows dynamic buffers to be kept coherent.
The routines cacheFlush( ) and cacheInvalidate( ) are used to manually flush and
invalidate cache buffers. Before a device reads the data, flush the data from the
cache to RAM using cacheFlush( ) to ensure the device reads current data. After
the device has written the data into RAM, invalidate the cache entry with
cacheInvalidate( ). This guarantees that when the data is read by the CPU, the
cache is updated with the new data in RAM.
#include <vxWorks.h>
#include <cacheLib.h>
#include <fcntl.h>
#include "example.h"
void exampleDmaTransfer
    (
    UINT8 * pExampleBuf,
    int exampleBufLen,
    int xferDirection      /* 1 = READ, 0 = WRITE */
    )
{
if (xferDirection == 1)
{
myDevToBuf (pExampleBuf);
cacheInvalidate (DATA_CACHE, pExampleBuf, exampleBufLen);
}
else
{
cacheFlush (DATA_CACHE, pExampleBuf, exampleBufLen);
myBufToDev (pExampleBuf);
}
}
#include <vxWorks.h>
#include <cacheLib.h>
#include "myExample.h"

STATUS myDmaExample (void)
    {
    void * pMyBuf;
    void * pPhysAddr;

    /* allocate a cache-safe buffer (sketch; the original listing is
     * abridged here) */
    if ((pMyBuf = cacheDmaMalloc (MY_BUF_SIZE)) == NULL)
        return (ERROR);
    pPhysAddr = CACHE_DMA_VIRT_TO_PHYS (pMyBuf);

    ... program device (using pPhysAddr) and wait for DMA input ...

    /* invalidate buffer */
    CACHE_DMA_INVALIDATE (pMyBuf, MY_BUF_SIZE);

    ... use data ...

    cacheDmaFree (pMyBuf);
    return (OK);
    }
7.11 PCMCIA
A PCMCIA card can be plugged into notebook computers to connect devices such
as modems and external hard drives.2 VxWorks provides PCMCIA facilities for
pcPentium, pcPentium2, and pcPentium3 BSPs and PCMCIA drivers that allow
VxWorks running on these targets to support PCMCIA hardware.
PCMCIA support is at the PCMCIA Release 2.1 level. It does not include socket
services or card services, which are not required by VxWorks. It does include chip
drivers and libraries. The PCMCIA libraries and drivers are also available in
source code form for VxWorks systems based on CPU architectures other than
Intel Pentium.
To include PCMCIA support in your system, configure VxWorks with the
INCLUDE_PCMCIA component. For information about PCMCIA facilities, see the
API references for pcmciaLib and pcmciaShow.
2. PCMCIA stands for Personal Computer Memory Card International Association, and refers
to both the association and the standards that it has developed.
8
Local File Systems
8.1 Introduction
VxWorks provides a variety of file systems that are suitable for different types of
applications. The file systems can be used simultaneously, and in most cases in
multiple instances, for a single VxWorks system.
Most VxWorks file systems rely on the extended block device (XBD) facility for a
standard I/O interface between the file system and device drivers. This standard
interface allows you to write your own file system for VxWorks, and freely mix file
systems and device drivers.
File systems used for removable devices make use of the file system monitor for
automatic detection of device insertion and instantiation of the appropriate file
system on the device.
The relationship between applications, file systems, I/O facilities, device drivers
and hardware devices is illustrated in Figure 8-1. Note that this illustration is
relevant for the HRFS, dosFs, rawFs, and cdromFs file systems. The dotted line
indicates the elements that must be configured and instantiated to create a specific,
functional run-time file system.
Application
I/O System
File System
HRFS, dosFs, rawFs, cdromFs
XBD Facility
Block Device
SCSI, ATA, RAM disk, Floppy, TrueFFS, and so on
Hardware
For information about the XBD facility, see 7.8.8 Extended Block Device Facility:
XBD, p.404.
This chapter discusses the file system monitor and the following VxWorks file
systems, describing how they are organized, configured, and used:
■ VRFS
A virtual root file system for use with applications that require a POSIX root
file system. The VRFS is simply a root directory from which other file systems
and devices can be accessed. See 8.3 Virtual Root File System: VRFS, p.459.
■ rawFs
Provides a simple raw file system that treats an entire disk as a single large file.
See 8.6 Raw File System: rawFs, p.507.
■ ROMFS
Designed for bundling applications and other files with a VxWorks system
image. No storage media is required beyond that used for the VxWorks boot
image. See 8.8 Read-Only Memory File System: ROMFS, p.518.
■ TSFS
Uses the host target server to provide the target with access to files on the host
system. See 8.9 Target Server File System: TSFS, p.520.
8.2 File System Monitor
VxWorks can be configured with file-system support for flash memory devices
using TrueFFS and the HRFS or dosFs file system. For more information, see
8.5 MS-DOS-Compatible File System: dosFs, p.481 and 10. Flash File System Support:
TrueFFS.
NOTE: This chapter provides information about facilities available in the VxWorks
kernel. For information about facilities available to real-time processes, see the
VxWorks Application Programmer’s Guide: Local File Systems.
6. When a file system's probe routine returns success, that file system's
instantiation routine is executed. If none of the probes are successful, or if the
file system instantiation routine fails, a rawFs file system is created on the
device by default.
When a device is removed, the following occurs:
1. The block device detects the removal of the hardware device associated with
it and generates a removal event.
2. The block device removes itself, freeing all its resources.
3. The file system associated with the block device removes itself from core I/O,
invalidating its file handles.
4. The file system removes itself, freeing all its resources.
The types of device insertion events to which the file system monitor responds are
described in more detail below.
XBD Primary Insertion Event
An XBD-compliant block device generates a primary insertion event when
media that can support partitions is inserted (that is, if a partition table is
found). In response, the file system monitor creates a partition manager, which
in turn generates secondary insertion events for each partition that it finds on
the media (see below).
Note that for block devices used with the XBD wrapper component
(INCLUDE_XBD_BLK_DEV), a primary insertion event is always generated,
regardless of the media. The wrapper element is essentially hardware
agnostic; it cannot know if the device might include partitions. For example,
the device could be a hard disk—for which partitions are expected—or it could
be a floppy device.
Note also that a RAM disk device can generate a primary insertion event,
depending on the parameters used when it was created (see XBD RAM Disk,
p.413 and the API reference for XbdRamDisk).
XBD Secondary Insertion Event
A secondary insertion event is generated either by a block device whose
media does not support partitions, or by an XBD partition manager. The
secondary event signals the file system manager to run the probe routines that
identify the file system on the device. If a probe routine returns OK, the
corresponding file system is instantiated on the device.
The file system monitor name mapping facility allows XBD names to be mapped
to more suitable names. Its primary use is with the partition manager, which
appends :x to the base XBD name when it detects a partition. Using the FSM
name facility, you can map the partition names to something more useful. For
example, the floppy drive configlette uses the name mapping component to map
the supplied floppy name, plus the :0 that the partition manager adds, to /fdx,
where x is the floppy drive number. If this were not done, the default device
names would appear in the list generated by the devs shell command. For more
information, see the API references for fsmNameInstall( ), fsmNameMap( ), and
fsmNameUninstall( ); also see Example 8-2.
8.3 Virtual Root File System: VRFS
To include the VRFS in VxWorks, configure the kernel with the INCLUDE_VRFS
component. The VRFS is created and mounted automatically if the component is
included in VxWorks.
This shell session illustrates the relationship between device names and access to
devices and file systems with the VRFS.
-> devs
drv name
0 /null
1 /tyCo/0
1 /tyCo/1
2 /aioPipe/0x1817040
6 /romfs
7 /
9 yow-build02-lx:
10 /vio
11 /shm
12 /ram0
value = 25 = 0x19
-> cd "/"
value = 0 = 0x0
-> ll
?--------- 0 0 0 0 Jan 1 00:00 null
drwxrwxr-x 0 15179 100 20 Jan 23 2098 romfs/
?--------- 0 0 0 0 Jan 1 00:00 vio
drwxrwxrwx 1 0 0 0 Jan 1 00:00 shm/
drwxrwxrwx 1 0 0 2048 Jan 1 00:00 ram0/
value = 0 = 0x0
NOTE: Configuring VxWorks with support for POSIX PSE52 conformance (using
BUNDLE_RTP_POSIX_PSE52) provides the /dev/null device. Note that the devs
shell command lists /dev/null with other devices, but the ls command does not list
/dev/null under the VRFS root directory (because the name violates the VRFS
naming scheme). Applications can, in any case, use /dev/null as required. For
information about null devices, see 7.8.6 Null Devices, p.404. For information about
POSIX PSE52, see the VxWorks Application Programmer’s Guide: POSIX Facilities.
! CAUTION: VRFS alters the behavior of other file systems because it provides a root
directory on VxWorks. Changing directory to an absolute path on a host file
system will not work when VRFS is installed without preceding the absolute path
with the VxWorks device name. For example, if the current working directory is
hostname, changing directory to /home/panloki will not work— it must be named
hostname:/home/panloki.
8.4 Highly Reliable File System: HRFS
The Highly Reliable File System (HRFS) is a transactional file system for real-time
systems. The primary features of the file system are:
■ Fault tolerance. The file system is never in an inconsistent state, and is
therefore able to recover quickly from unexpected loses of power.
■ Configurable commit policies.
■ Hierarchical file and directory system, allowing for efficient organization of
files on a volume.
■ Compatibility with widely available storage devices.
■ POSIX compliance.
For more information about the HRFS libraries see the VxWorks API references for
hrfsFormatLib, hrFsLib, and hrfsChkDskLib.
For information about using HRFS with flash memory, see 10. Flash File System
Support: TrueFFS.
To include HRFS support in VxWorks, configure the kernel with the appropriate
required and optional components.
Required Components
HRFS_DEFAULT_MAX_FILES
Defines how many files can be simultaneously open on an HRFS volume. The
minimum is 1. The default setting is 10. Note that this is not the same as the
maximum number of file descriptors.
HRFS_DEFAULT_COMMIT_POLICY
Defines the default commit policy for an HRFS volume, which is
FS_COMMIT_AUTO. Commit policies can also be changed at runtime. For
more information see 8.4.5 Transactional Operations and Commit Policies, p.471
and 8.4.6 Configuring Transaction Points at Runtime, p.474.
HRFS_DEFAULT_COMMIT_PERIOD
Defines the initial commit period of an HRFS volume if it has been configured
for periodic commits. This parameter is measured in milliseconds. The default
value is 5000 milliseconds (5 seconds). The commit period can also be changed
at runtime. For more information see 8.4.5 Transactional Operations and Commit
Policies, p.471 and 8.4.6 Configuring Transaction Points at Runtime, p.474.
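Taken together, these parameters might be set in a kernel configuration as sketched below. The values shown are illustrative only, not recommendations; whether your build uses a config.h-style header or Workbench parameter settings depends on your project. FS_COMMIT_AUTO is the documented default policy value.

```c
/* Illustrative HRFS parameter settings; values are examples, not defaults */
#define HRFS_DEFAULT_MAX_FILES     32             /* open files per volume  */
#define HRFS_DEFAULT_COMMIT_POLICY FS_COMMIT_AUTO /* documented default     */
#define HRFS_DEFAULT_COMMIT_PERIOD 2000           /* ms, if periodic commits */
```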
This section describes the process of creating an HRFS file system. It first provides
a summary overview and then a detailed description of each step. See 8.4.4 HRFS,
ATA, and RAM Disk Examples, p.465 for examples of the steps and code examples.
For information about operating system configuration, see 8.4.1 Configuring VxWorks for
HRFS, p.461. Note that the file system is initialized automatically at boot time.
The steps involved in creating an HRFS file system are as follows:
1. If you are using a custom driver, create the appropriate block device. See
Step 1:Create a Block Device, p.464.
If you are using a standard VxWorks component for the device, it is created
automatically.
2. If you are using a device driver that is not XBD-compliant, create an XBD
device wrapper. See Step 2:Create an XBD Device Wrapper, p.464 (Also see XBD
Block Device Wrapper, p.406.).
3. Optionally, create and mount partitions. See Step 3:Create Partitions, p.465.
4. If you are not using pre-formatted disks, format the volumes. See
Step 4:Formatting the Volume, p.465.
Before any other operations can be performed, the HRFS file system library,
hrFsLib, must be initialized. This happens automatically at boot time, triggered by
the required HRFS components that were included in the system.
Initializing HRFS involves the creation of a vnode layer. HRFS installs a number
of internal vnode operators into this layer. The vnode layer invokes
iosDrvInstall( ) when media is detected, which adds the driver to the I/O driver
table. The driver number assigned to vnodes—and therefore HRFS—is recorded
in a global variable, vnodeAffDriverNumber. The table specifies entry points for
the vnode file operations that access devices using HRFS.
mounted. If a file system is found, it is mounted as well. If the file system is not HRFS,
it must be formatted (see below).
This section provides examples of the steps discussed in the preceding section.
They are meant to be relatively generic, and illustrate the following:
■ Creating and working with an HRFS file system on an ATA disk with
commands from the shell.
■ Code that creates and formats partitions.
■ Code that creates and formats a RAM disk volume.
! CAUTION: Because device names are recognized by the I/O system using simple
substring matching, file systems should not use a slash (/) alone as a name;
unexpected results may otherwise occur.
This example demonstrates how to initialize an ATA disk with HRFS on two
partitions from the shell. While these steps use an ATA device, they are applicable
to other block devices.
1. If you are using a custom driver, create an ATA block device that controls the
master ATA hard disk (drive zero) on the primary ATA controller (controller
zero). This device uses the entire disk.
-> xbd = ataXbdDevCreate(0,0,0,0,"/ata")
New symbol "xbd" added to kernel symbol table.
Instantiating /ata:0 as rawFs
xbd = 0xca4fe0: value = 262145 = 0x40001
The xbd variable is of type device_t. A value of zero would indicate an error
in the ataXbdDevCreate( ) call, which usually indicates a BSP configuration or
hardware configuration error.
If you are using the standard INCLUDE_ATA device component, the block
device is created automatically. Note that in this case the default device name
(provided by the component) is /ata0a.
2. Display information about devices.
-> devs
drv name
0 /null
1 /tyCo/0
1 /tyCo/1
8 yow-grand:
9 /vio
4 /ata:0
value = 25 = 0x19
3. The new ata driver /ata:0 is listed. The zero in the name indicates that no
partitions were detected. Note that if no file system is detected on the device,
the rawFs file system is instantiated automatically and appears in the device
list. Prepare the disk for first use. Create two partitions on this disk device,
specifying 50% of the disk space for the second partition, leaving 50% for the
first partition. This step should only be performed once, when the disk is first
initialized. If partitions are already written to the disk, this step should not be
performed since it destroys any data on the disk.
-> xbdCreatePartition ("/ata:0", 2, 50, 0, 0)
value = 0 = 0x0
4. Then list the devices to display information about the new partitions.
-> devs
drv name
0 /null
1 /tyCo/0
1 /tyCo/1
8 yow-grand:
9 /vio
3 /ata:1
3 /ata:2
Note that /ata:0 does not appear in this list, and two new devices, /ata:1 and
/ata:2, have been added to represent the new partitions. Each volume has
rawFs instantiated on it, as they are new and unformatted.
5. Format the volumes for HRFS. This step need only be done once, when the
volumes are first created. If the volumes have already been formatted, then
omit this step. This example formats the file system volumes with default
options.
-> hrfsFormat ("/ata:1", 0ll, 0, 0)
Formatting /ata:1 for HRFS
Instantiating /ata:1 as rawFs
Formatting...OK.
value = 0 = 0x0
Note that in the hrfsFormat( ) call, the ll (two lower-case L letters) used with
the second parameter is required to indicate to the shell that the data type is
long long.
For more information, see the API reference for hrFsFormatLib.
6. Display information about the HRFS volumes.
-> ll "/ata:1"
-> ll "/ata:2"
If you are working with an ATA hard disk or a CD-ROM file system from an
ATAPI CD-ROM drive, you can, alternatively, use usrAtaConfig( ). This
routine processes several steps at once. For more information, see the API
reference.
This code takes the name of a block device that you have already instantiated,
creates three partitions, creates the partition handler for these partitions, and
creates the HRFS device handler for them. Then it formats the partitions using
hrfsFormat( ).
STATUS usrPartDiskFsInit
    (
    char * xbdName /* device name used during creation of XBD */
    )
    {
    devname_t xbdPartName;
    int i;

    /* Create three partitions (sketch; the sizes are illustrative
     * percentages, and the original listing's mapping of the partitions
     * to the names /sd0a, /sd0b, and /sd0c is elided here). */
    if (xbdCreatePartition (xbdName, 3, 50, 25, 0) == ERROR)
        return (ERROR);

    /* format each new partition for HRFS with default options */
    for (i = 1; i <= 3; i++)
        {
        snprintf (xbdPartName, sizeof (devname_t), "%s:%d", xbdName, i);
        if (hrfsFormat (xbdPartName, 0ll, 0, 0) == ERROR)
            return (ERROR);
        }

    return OK;
    }
Note that in most cases you would be likely to format the different partitions for
different file systems.
Example 8-3 Creating and Formatting a RAM Disk Volume and Performing File I/O
The following code creates a RAM disk, formats it for use with the HRFS file
system, and performs file system operations.
#include <vxWorks.h>
#include <stdlib.h>
#include <stdio.h>
#include <string.h>
#include <hrFsLib.h>
#include <xbdPartition.h>
#include <xbdRamDisk.h>

#define DEVNAME    "/myram"              /* name of the RAM disk device */
#define BLOCK_SIZE 512                   /* illustrative block size */
#define DISK_SIZE  (BLOCK_SIZE * 2048)   /* illustrative 1 MB disk */

STATUS hrfsSetup
    (
    void
    )
    {
    STATUS error;
    device_t xbd;

    /* Create the RAM disk (sketch; the original listing is abridged here).
     * A return value of zero indicates an error. */
    xbd = xbdRamDiskDevCreate (BLOCK_SIZE, DISK_SIZE, FALSE, DEVNAME);
    if (xbd == (device_t) 0)
        {
        printf ("Failed to create RAM disk. errno = 0x%x\n", errno);
        return (ERROR);
        }

    /*
     * Format the RAM disk for HRFS. Allow for up to 1000 files/directories
     * and let HRFS determine the logical block size.
     */
    error = hrfsFormat (DEVNAME, 0ll, 0, 1000);
    if (error != OK)
        {
        printf ("Failed to format RAM disk. errno = 0x%x\n", errno);
        return (ERROR);
        }

    printf ("%s now ready for use.\n", DEVNAME);
    return (OK);
    }
STATUS hrfsFileExample
    (
    void
    )
    {
    int fd;
    char path[PATH_MAX];
    char *testString = "hello world";
    int size = strlen (testString) + 1; /* size of test string including EOS */
    int len;

    /* create a file on the RAM disk (sketch; the original listing is
     * abridged here) */
    snprintf (path, PATH_MAX, "%s/myfile", DEVNAME);
    if ((fd = open (path, O_RDWR | O_CREAT, 0777)) < 0)
        return (ERROR);

    printf ("Writing %d bytes to file.\n", size);
    len = write (fd, testString, size);

    /* rewind and read the data back, reusing path[] as a buffer */
    lseek (fd, 0, SEEK_SET);
    printf ("Reading %d bytes from file.\n", size);
    len = read (fd, path, size);
    if (len != size)
        {
        close (fd);
        return (ERROR);
        }

    close (fd);
    return (OK);
    }
Note that to use this code, you must configure VxWorks with the
INCLUDE_HRFS_FORMAT, INCLUDE_XBD_RAMDRV and
INCLUDE_XBD_PART_LIB components.
The following illustrates running the example from the shell.
-> hrfsSetup
Instantiating /myram as rawFs
Formatting /myram for HRFS
Instantiating /myram as rawFs
Formatting...OK.
/myram now ready for use.
value = 0 = 0x0
-> hrfsFileExample
Writing 12 bytes to file.
Reading 12 bytes from file.
value = 0 = 0x0
-> ll "/myram"
Listing Directory /myram:
drwxrwxrwx 1 0 0 2048 Jan 1 00:00 ./
drwxrwxrwx 1 0 0 2048 Jan 1 00:00 ../
-rwxrwxrwx 1 0 0 12 Jan 1 00:00 myfile
value = 0 = 0x0
->
HRFS is a transactional file system. That is, transaction or commit points are set to
make disk changes permanent. Commit points can be configured to be set under
different conditions, which are referred to as policies. Some disk operations trigger
commits regardless of the policy. Under certain circumstances, HRFS rollbacks
undo disk changes since the last commit, in order to protect the integrity of the file
system.
For information about static and dynamic configuration of commit policies, see
8.4.2 Configuring HRFS, p.462 and 8.4.6 Configuring Transaction Points at Runtime,
p.474.
Commit Policies
Automatic
Any operation that changes data on the disk results in a transaction point
being set. This is the safest policy in terms of the potential for data loss. It is
also the slowest in terms of performance, as every write to disk causes a
commit. This is the default policy. There is no need for explicit action on the
part of an application to commit a change. The following routines, for
example, cause modifications to disk and result in a commit when the
automatic policy is in force:
■ write( )
■ remove( )
■ delete( )
■ mkdir( )
■ rmdir( )
■ link( )
■ unlink( )
■ truncate( )
■ ftruncate( )
■ ioctl( ) when used with a control function that requires modifying the
disk.
Manual
The application decides when a commit is to be performed. The user explicitly
sets transaction points. This is the fastest policy in terms of performance but
obviously has the potential for greater data loss. The application can, however,
decide when critical data has been written and needs to be committed. The
commit( ) routine is used with this policy.
Periodic
Transaction points are set automatically at periodic intervals. This policy is in
between automatic and manual in terms of performance and potential data
loss.
Mandatory Commits
For both manual and periodic commit policies there are circumstances under
which a commit is always performed. Mandatory commits occur under the
following circumstances:
■ Creation of a file or directory
■ Deletion of a file or directory.
■ Renaming/moving a file or directory.
■ Space in the inode journal is exhausted.
■ Commit policy is changed at runtime.
Note that mandatory commits are a subset of automatic commits—they do not, for
example, include write( ) and truncate( ).
Rollback
A rollback undoes any disk changes since the last commit. Rollbacks usually occur
when the system is unexpectedly powered down or reset. Rollbacks can also occur
when the file system encounters errors; for example, a lack of disk space to
complete a write( ), or an error reported by the underlying device driver.
Rollbacks of this nature only happen on operations that modify the media. Errors
on read operations do not force a rollback.
A rollback involves HRFS returning to the state of the disk at the last transaction
point, which thereby preserves the integrity of the file system, but at the expense
of losing file data that has changed since the last transaction point. If the manual
or periodic commit policy is specified, there is the potential for losing a lot of
data—although the integrity of the file system is preserved.
The Highly Reliable File System (HRFS) provides configurable transaction points,
which allow for finer control of how and when transaction points are set.
The HRFS_DEFAULT_COMMIT_POLICY and HRFS_DEFAULT_COMMIT_PERIOD
component configuration parameters are used to statically define the default
commit policy and period (for more information see 8.4.2 Configuring HRFS,
p.462).
Both kernel and RTP applications can change commit policies at runtime. The
following ioctl( ) functions are used to get and set commit policies:
■ FIOCOMMITPOLICYGETFS
■ FIOCOMMITPOLICYSETFS
■ FIOCOMMITPERIODGETFS
■ FIOCOMMITPERIODSETFS
The commit policy for each volume can be changed using the ioctl( ) function
FIOCOMMITPOLICYSETFS as the second parameter.
The third parameter then specifies the actual commit policy:
FS_COMMIT_POLICY_AUTO, FS_COMMIT_POLICY_MANUAL, or
FS_COMMIT_POLICY_PERIODIC.
If an HRFS volume has been configured for periodic commits, the commit period
can be changed with ioctl( ) function FIOCOMMITPERIODSETFS. The third
parameter is used to specify the commit period in milliseconds. If 0 is specified
then the default commit period is used.
The commit( ) routine can be used to commit programmatically. The routine is
provided by the INCLUDE_DISK_UTILS component.
8 Local File Systems
8.4 Highly Reliable File System: HRFS
HRFS files and directories are stored on disk in data structures called inodes.
During formatting the maximum number of inodes is specified as a parameter to
hrfsFormat( ). The total number of files and directories can never exceed the
number of inodes. Attempting to create a file or directory when all inodes are in
use generates an error. Deleting a file or directory frees the corresponding inode.
This section discusses creating and removing directories, and reading directory
entries.
Creating Subdirectories
You can create as many subdirectories as there are inodes. Subdirectories can be
created in the following ways:
■ With open( ). To create a directory, the O_CREAT option must be set in the
flags parameter and the S_IFDIR or FSTAT_DIR option must be set in the mode
parameter. The open( ) call returns a file descriptor that describes the new
directory. The file descriptor can be used for reading only, and should be
closed when it is no longer needed.
■ With mkdir( ) from usrFsLib.
When creating a directory using either of the above methods, the new directory
name must be specified. This name can be either a full pathname or a pathname
relative to the current working directory.
Removing Subdirectories
A directory that is to be deleted must be empty (except for the “.” and “..” entries).
The root directory can never be deleted. Subdirectories can be removed in the
following ways:
■ Using ioctl( ) with the FIORMDIR function and specifying the name of the
directory. The file descriptor used can refer to any file or directory on the
volume, or to the entire volume itself.
■ Using remove( ), specifying the name of the directory.
■ Using rmdir( ) from usrFsLib.
Files on an HRFS file system device are created, deleted, written, and read using
the standard VxWorks I/O routines: creat( ), remove( ), write( ), and read( ). For
more information, see 7.4 Basic I/O, p.367, and the ioLib API references.
Note that remove( ) is synonymous with unlink( ) for HRFS.
When a link is created an inode is not used. Another directory entry is created at
the location specified by the parameter to link( ). In addition, a reference count to
the linked file is stored in the file's corresponding inode. When unlinking a file, this
reference count is decremented. If the reference count is zero when unlink( ) is
called, the file is deleted, unless there are still open file descriptors on the file. In
that case the directory entry is removed but the file still exists on the disk, which
prevents tasks and processes (RTPs) from opening the file again. When the final
open file descriptor is closed, the file is fully deleted and its inode is freed.
Note that you cannot create a link to a subdirectory, only to a regular file.
File Permissions
HRFS files have POSIX-style permission bits (unlike dosFs files, which have
attributes). The bits can be changed using the chmod( ) and fchmod( ) routines. See
the API references for more information.
Crash Recovery
If a system unexpectedly loses power or crashes, HRFS rolls back to the last
transaction point when the system reboots. The rollback occurs automatically
when the file system is mounted. Any changes made after the last complete
transaction are lost, but the disk remains in a consistent state.
Consistency Checking
An HRFS file system remains in a consistent state for most media (such as hard
drives) as long as the underlying hardware is working correctly and never writes
an incomplete sector or physical block.
This is not necessarily true for RAM disks, however, because a sector write is
simply a copy of one memory location to another. The write operation may be
interrupted before completion if the system loses power or crashes.
The hrfsChkDsk( ) routine can, however, be used to check for inconsistencies in
the file system. The execution of the disk checker is not automatic; it must be done
manually.
The HRFS file system supports a number of ioctl( ) functions. These functions are
defined in the header file ioLib.h along with their associated constants, and they
are listed in Table 8-1.
For more information, see the API reference for ioctl( ) in ioLib.
8 Local File Systems
8.5 MS-DOS-Compatible File System: dosFs
For information about using dosFs with flash memory, see 10. Flash File System
Support: TrueFFS.
The dosFs file system can be used with the transaction-based reliable file system
(TRFS) facility; see 7.8.9 Transaction-Based Reliable File System Facility: TRFS, p.407.
To include dosFs support in VxWorks, configure the kernel with the appropriate
required and optional components.
Required Components
In addition, you must include the appropriate component for your block device;
for example, INCLUDE_ATA.
If you are using a device driver that is not designed for use with the XBD facility,
you must use the INCLUDE_XBD_BLK_DEV wrapper component in addition to
INCLUDE_XBD. See XBD Block Device Wrapper, p.406 for more information.
Note that you can use INCLUDE_DOSFS to automatically include the following
components:
■ INCLUDE_DOSFS_MAIN
■ INCLUDE_DOSFS_DIR_VFAT
■ INCLUDE_DOSFS_DIR_FIXED
■ INCLUDE_DOSFS_FAT
■ INCLUDE_DOSFS_CHKDSK
■ INCLUDE_DOSFS_FMT
Several dosFs component configuration parameters can be used to define how the
file system behaves when a dosFs volume is mounted. These parameters are as
follows:
DOSFS_CHK_ONLY
When a dosFs volume is mounted, the media is analyzed for errors, but no
repairs are made.
DOSFS_CHK_REPAIR
Similar to DOSFS_CHK_ONLY, but an attempt to repair the media is made if
errors are found.
DOSFS_CHK_NONE
Media is not checked for errors on mount.
DOSFS_CHK_FORCE
Used in conjunction with DOSFS_CHK_ONLY and DOSFS_CHK_REPAIR to
force a consistency check even if the disk has been marked clean.
DOS_CHK_VERB_SILENT or DOS_CHK_VERB_0
dosFs does not produce any output to the terminal when mounting.
DOS_CHK_VERB_1
dosFs produces a minimal amount of output to the terminal when mounting.
DOS_CHK_VERB_2
dosFs produces the maximum amount of output to the terminal when mounting.
Other parameters can be used to configure physical attributes of the file system.
They are as follows:
DOSFS_DEFAULT_CREATE_OPTIONS
The default parameter for the dosFsLib component. It specifies the action to be
taken when a dosFs file system is instantiated. Its default is
DOSFS_CHK_NONE.
DOSFS_DEFAULT_MAX_FILES
The maximum number of files. The default is 20.
DOSFS_DEFAULT_DATA_CACHE_SIZE
The size of the data cache. The default is 128 KB.
DOSFS_DEFAULT_FAT_CACHE_SIZE
The size of the FAT cache. The default is 16 KB.
DOSFS_DEFAULT_DIR_CACHE_SIZE
The directory cache size. The default is 64 KB.
Caches can be tuned dynamically for individual instances of the file system using
the dosFsCacheInfo( ) and dosFsCacheTune( ) routines.
The routines dosFsCacheDelete( ) and dosFsCacheCreate( ) can be used to delete
a cache and to create a new one. To change the size of a cache, first delete it, and
then re-create it with the new size.
This section describes the process of creating a dosFs file system. It first provides
an overview and then a detailed description of each step. See 8.5.4 dosFs,
ATA Disk, and RAM Disk Examples, p.489, for shell commands and code that
illustrate the steps.
For information about operating system configuration, see 8.5.1 Configuring VxWorks for
dosFs, p.481. Note that the file system is initialized automatically at boot time.
The steps involved in creating a dosFs file system are as follows:
1. If you are using a custom driver, create the appropriate block device. See
Step 1:Create a Block Device, p.464.
If you are using a standard VxWorks component for the device, it is created
automatically.
2. If you are using a device driver that is not XBD-compliant, create an XBD
device wrapper. See Step 2:Create an XBD Device Wrapper, p.486. (Also see XBD
Block Device Wrapper, p.406.)
3. Optionally, create and mount partitions. See Step 3:Create Partitions, p.465.
4. If you are not using pre-formatted disks, format the volumes. See
Step 4:Formatting the Volume, p.486.
5. Optionally, change the size of the disk cache. See Step 5:Change the Disk Cache
Size, p.488.
6. Optionally, check the disk for volume integrity. See Step 6:Check Disk Volume
Integrity, p.488.
Before any other operations can be performed, the dosFs file system library,
dosFsLib, must be initialized. This happens automatically at boot time, triggered
by the required dosFs components that were included in the system.
Initializing the file system invokes iosDrvInstall( ), which adds the driver to the
I/O system driver table. The driver number assigned to the dosFs file system is
recorded in a global variable, dosFsDrvNum. The table specifies the entry points
for the dosFs file operations that are accessed by the devices using dosFs.
A volume's FAT format is set during disk formatting, according to either the volume
size (by default) or user-defined settings passed to dosFsVolFormat( ).
FAT options are summarized in Table 8-2:
Directory Formats
FAT12
12 bits per cluster number. Appropriate for very small devices with up to
4,084 KB clusters. Typically, each cluster is two sectors large.
FAT16
16 bits per cluster number. Appropriate for small disks of up to 65,524 KB
clusters. Typically used for volumes up to 2 GB; can support up to 8 GB.
FAT32
32 bits (only 28 used) per cluster number. Appropriate for medium and larger
disk drives. By convention, used for volumes larger than 2 GB.
■
MSFT Long Names (VFAT)
1. The MSFT Long Names (VFAT) format supports 32-bit file size fields, limiting the file size
to a 4 GB maximum.
This section provides examples of the steps discussed in the preceding section.
These examples use a variety of configurations and device types. They are meant
to be relatively generic and applicable to most block devices. The examples
illustrate the following:
■ Creating and working with a dosFs file system on an ATA disk with
commands from the shell.
■ Code that creates and formats partitions.
■ Code that creates and formats a RAM disk volume.
The examples in this section require that VxWorks be configured with the
INCLUDE_DOSFS_FMT component. One example also relies on the
INCLUDE_DOSFS_CACHE component.
! CAUTION: Because device names are recognized by the I/O system using simple
substring matching, file systems should not use a slash (/) alone as a name;
unexpected results may occur.
This example demonstrates how to initialize an ATA disk with dosFs from the
shell. While these steps use an XBD-compatible ATA block device, they are
applicable to any XBD-compatible block device.
1. If you are using a custom driver, create an ATA block device that controls the
master ATA hard disk (drive zero) on the primary ATA controller (controller
zero). This device uses the entire disk.
-> xbd = ataXbdDevCreate(0,0,0,0,"/ata")
New symbol "xbd" added to kernel symbol table.
Instantiating /ata:0 as rawFs
xbd = 0xca4fe0: value = 262145 = 0x40001
The xbd variable is of type device_t. A value of zero would indicate an error
in ataXbdDevCreate( ). Such an error usually indicates a BSP configuration or
hardware configuration error.
If you are using the standard INCLUDE_ATA device component, the block
device is created automatically. Note that in this case the default device name
(provided by the component) is /ata0a.
2. Display information about devices.
-> devs
drv name
0 /null
1 /tyCo/0
1 /tyCo/1
8 yow-grand:
9 /vio
4 /ata:0
value = 25 = 0x19
The new ATA device /ata:0 is listed. The zero in the name indicates that no
partitions were detected. Note that if no file system is detected on the device,
the rawFs file system is instantiated automatically and appears in the device list.
3. Create two partitions on this disk device, specifying 50% of the disk space for
the second partition, leaving 50% for the first partition. This step should only
be performed once, when the disk is first initialized. If partitions are already
written to the disk, this step should not be performed since it destroys data.
-> xbdCreatePartition ("/ata:0", 2, 50, 0, 0)
value = 0 = 0x0
4. Then list the devices to display information about the new partitions.
-> devs
drv name
0 /null
1 /tyCo/0
1 /tyCo/1
8 yow-grand:
9 /vio
3 /ata:1
3 /ata:2
Note that /ata:0 does not appear in this list; two new devices, /ata:1 and
/ata:2, have been added to represent the new partitions. Each volume has
rawFs instantiated on it, since they are new and unformatted.
5. Format the volumes for dosFs. This step need only be done once, when the
volumes are first initialized. If the volumes have already been initialized
(formatted), then omit this step. This example formats the file system volumes
with default options.
-> dosFsVolFormat ("/ata:1", 0, 0)
Formatting /ata:1 for DOSFS
Instantiating /ata:1 as rawFs
Formatting...Retrieved old volume params with %100 confidence:
Volume Parameters: FAT type: FAT32, sectors per cluster 8
2 FAT copies, 0 clusters, 38425 sectors per FAT
Sectors reserved 32, hidden 0, FAT sectors 76850
Root dir entries 0, sysId (null) , serial number 3a80000
Label:" " ...
Disk with 40149184 sectors of 512 bytes will be formatted with:
Volume Parameters: FAT type: FAT32, sectors per cluster 8
Above, we can see the Volume parameters for the /ata:2 volume. The file
system volumes are now mounted and ready to be used.
If you are working with an ATA hard disk or a CD-ROM file system from an
ATAPI CD-ROM drive, you can, alternatively, use usrAtaConfig( ). This
routine processes several steps at once. For more information, see the API
reference.
This code example takes a pointer to a block device, creates three partitions, creates
the partition handler for these partitions, and creates the dosFs device handler for
them. Then, it formats the partitions using dosFsVolFormat( ).
STATUS usrPartDiskFsInit
(
char * xbdName /* device name used during creation of XBD */
)
{
const char * devNames[] = { "/sd0a", "/sd0b", "/sd0c" };
devname_t xbdPartName;
/* cacheParams: cache-tuning structure used below; its declaration is elided in this excerpt */
/* create partitions */
/* Retrieve the current data cache tuning parameters and double them */
if (dosFsCacheInfoGet (devNames[0], DOS_DATA_CACHE, &cacheParams) == ERROR)
return ERROR;
cacheParams.bypass = cacheParams.bypass * 2;
cacheParams.readAhead = cacheParams.readAhead * 2;
return OK;
}
Note that in most cases you would likely format the different partitions for
different file systems.
The following code creates a RAM disk and formats it for use with the dosFs file
system.
STATUS usrRamDiskInit
(
void /* no argument */
)
{
int ramDiskSize = 512 * 1024 ; /* 512KB, 512 bytes per sector */
char *ramDiskDevName = "/ram0" ;
device_t xbd;
return OK;
}
Synchronizing Volumes
When a disk is synchronized, all modified buffered data is physically written to the
disk, so that the disk is up to date. This includes data written to files, updated
directory information, and the FAT. To avoid loss of data, a disk should be
synchronized before it is removed. For more information, see the API references
for close( ) and dosFsVolUnmount( ).
This section discusses creating and removing directories, and reading directory
entries.
Creating Subdirectories
For FAT32, subdirectories can be created in any directory at any time. For FAT12
and FAT16, subdirectories can be created in any directory at any time, except in the
root directory once it reaches its maximum entry count. Subdirectories can be
created in the following ways:
■ Using ioctl( ) with the FIOMKDIR function. The name of the directory to be
created is passed as a parameter to ioctl( ).
■ Using open( ). To create a directory, the O_CREAT option must be set in the
flags parameter and the FSTAT_DIR option must be set in the mode parameter.
The open( ) call returns a file descriptor that describes the new directory. Use
this file descriptor for reading only, and close it when it is no longer needed.
■ Using mkdir( ) from usrFsLib.
When creating a directory using any of the above methods, the new directory
name must be specified. This name can be either a full pathname or a pathname
relative to the current working directory.
Removing Subdirectories
A directory that is to be deleted must be empty (except for the “.” and “..” entries).
The root directory can never be deleted. Subdirectories can be removed in the
following ways:
■ Using ioctl( ) with the FIORMDIR function, specifying the name of the
directory. The file descriptor used can refer to any file or directory on the
volume, or to the entire volume itself.
■ Using the remove( ) function, specifying the name of the directory.
■ Using rmdir( ) from usrFsLib.
Files on a dosFs file system device are created, deleted, written, and read using the
standard VxWorks I/O routines: creat( ), remove( ), write( ), and read( ). For more
information, see 7.4 Basic I/O, p.367, and the ioLib API references.
File Attributes
The file-attribute byte in a dosFs directory entry consists of a set of flag bits, each
indicating a particular file characteristic. The characteristics described by the
file-attribute byte are shown in Table 8-3.
DOS_ATTR_RDONLY
If this flag is set, files accessed with open( ) cannot be written to. If the
O_WRONLY or O_RDWR flags are set, open( ) returns ERROR, setting errno to
S_dosFsLib_READ_ONLY.
DOS_ATTR_HIDDEN
This flag is ignored by dosFsLib and produces no special handling. For
example, entries with this flag are reported when searching directories.
DOS_ATTR_SYSTEM
This flag is ignored by dosFsLib and produces no special handling. For
example, entries with this flag are reported when searching directories.
DOS_ATTR_VOL_LABEL
This is a volume label flag, which indicates that a directory entry contains the
dosFs volume label for the disk. A label is not required. If used, there can be
only one volume label entry per volume, in the root directory. The volume
label entry is not reported when reading the contents of a directory (using
readdir( )). It can only be determined using the ioctl( ) function FIOLABELGET.
The volume label can be set (or reset) to any string of 11 or fewer characters,
using the ioctl( ) function FIOLABELSET. Any file descriptor open to the
volume can be used during these ioctl( ) calls.
DOS_ATTR_DIRECTORY
This is a directory flag, which indicates that this entry is a subdirectory, and
not a regular file.
DOS_ATTR_ARCHIVE
This is an archive flag, which is set when a file is created or modified. This flag
is intended for use by other programs that search a volume for modified files
and selectively archive them. Such a program must clear the archive flag, since
VxWorks does not.
All the flags in the attribute byte, except the directory and volume label flags, can
be set or cleared using the ioctl( ) function FIOATTRIBSET. This function is called
after the opening of the specific file with the attributes to be changed. The
attribute-byte value specified in the FIOATTRIBSET call is copied directly; to
preserve existing flag settings, determine the current attributes using stat( ) or
fstat( ), then change them using bitwise AND and OR operators.
This example makes a dosFs file read-only, and leaves other attributes intact.
STATUS changeAttributes
(
void
)
{
int fd;
struct stat statStruct;
/* open file (pathname elided in this excerpt) */

/* get the file's current attribute byte */
if (fstat (fd, &statStruct) == ERROR)
    return (ERROR);

/* set the read-only flag, preserving the other attribute bits */
if (ioctl (fd, FIOATTRIBSET, statStruct.st_attrib | DOS_ATTR_RDONLY) == ERROR)
    return (ERROR);

/* close file */
close (fd);
return (OK);
}
NOTE: You can also use the attrib( ) routine to change file attributes. For more
information, see the entry in usrFsLib.
The dosFs file system allocates disk space using one of the following methods. The
first two methods are selected based upon the size of the write operation. The last
method must be manually specified.
■ single cluster allocation
Single cluster allocation uses a single cluster, which is the minimum allocation
unit. This method is automatically used when the write operation is smaller
than the size of a single cluster.
■ cluster group allocation (nearly contiguous)
Cluster group allocation uses adjacent (contiguous) groups of clusters, called
extents. Cluster group allocation is nearly contiguous allocation and is the
default method used when files are written in units larger than the size of a
disk’s cluster.
■ absolutely contiguous allocation
The dosFs file system defines the size of a cluster group based on the media’s
physical characteristics. That size is fixed for each particular media. Since seek
operations are an overhead that reduces performance, it is desirable to arrange
files so that sequential portions of a file are located in physically contiguous disk
clusters. Cluster group allocation occurs when the cluster group size is considered
sufficiently large so that the seek time is negligible compared to the read/write
time. This technique is sometimes referred to as nearly contiguous file access
because seek time between consecutive cluster groups is significantly reduced.
Because all large files on a volume are expected to have been written as a group of
extents, removing them frees a number of extents to be used for new files
subsequently created. Therefore, as long as free space is available for subsequent
file storage, there are always extents available for use. Thus, cluster group
allocation effectively prevents fragmentation (where a file is allocated in small units
spread across distant locations on the disk). Access to fragmented files can be
extremely slow, depending upon the degree of fragmentation.
2. Then, call ioctl( ). Use the file descriptor returned from open( ) or creat( ) as the
file descriptor argument. Specify FIOCONTIG as the function code argument
and the size of the requested contiguous area, in bytes, as the third argument.
The FAT is then searched for a suitable section of the disk. If found, this space is
assigned to the new file. The file can then be closed, or it can be used for further
I/O operations. The file descriptor used for calling ioctl( ) should be the only
descriptor open to the file. Always perform the ioctl( ) FIOCONTIG operation
before writing any data to the file.
To request the largest available contiguous space, use CONTIG_MAX for the size
of the contiguous area. For example:
status = ioctl (fd, FIOCONTIG, CONTIG_MAX);
Subdirectories can also be allocated a contiguous disk area in the same manner:
■ If the directory is created using the ioctl( ) function FIOMKDIR, it must be
subsequently opened to obtain a file descriptor to it.
■ If the directory is created using options to open( ), the returned file descriptor
from that call can be used.
A directory must be empty (except for the “.” and “..” entries) when it has
contiguous space allocated to it.
Fragmented files require following cluster chains in the FAT. However, if a file is
recognized as contiguous, the system can use an enhanced method that improves
performance. This applies to all contiguous files, whether or not they were
explicitly created using FIOCONTIG. Whenever a file is opened, it is checked for
contiguity. If it is found to be contiguous, the file system registers the necessary
information about that file to avoid the need for subsequent access to the FAT
table. This enhances performance when working with the file by eliminating seek
operations.
When you are opening a contiguous file, you can explicitly indicate that the file is
contiguous by specifying the DOS_O_CONTIG_CHK flag with open( ). This
prompts the file system to retrieve the section of contiguous space, allocated for
this file, from the FAT table.
To find the maximum contiguous area on a device, you can use the ioctl( ) function
FIONCONTIG. This information can also be displayed by dosFsConfigShow( ).
In this example, the size (in bytes) of the largest contiguous area is copied to the
integer pointed to by the third parameter to ioctl( ) (count).
STATUS contigTest
(
void /* no argument */
)
{
int count; /* size of maximum contiguous area in bytes */
int fd; /* file descriptor */

/* (open the device to obtain fd; the open( ) call is elided in this excerpt) */

/* copy the size of the largest contiguous area into count */
if (ioctl (fd, FIONCONTIG, (int) &count) == ERROR)
    return (ERROR);

close (fd);
printf ("largest contiguous area = %d\n", count);
return (OK);
}
The DOS file system is inherently susceptible to data structure inconsistencies that
result from interruptions during certain types of disk updates. These types of
interruptions include power failures, system crashes (for fixed disks), and the
manual removal of a disk.
NOTE: The DOS file system is not considered a fault-tolerant file system. The
VxWorks dosFs file system, however, can be used in conjunction with the
Transaction-Based Reliable File System facility; see 7.8.9 Transaction-Based Reliable
File System Facility: TRFS, p.407.
Inconsistencies occur because the file system data for a single file is stored in three
separate regions of the disk. The data stored in these regions are:
■ The file chain in the File Allocation Table (FAT), located in a region near the
beginning of the disk.
■ The directory entry, located in a region that could be anywhere on the disk.
■ File clusters containing the file data, which could be located anywhere on the
disk.
Since all three regions are not always updated before an interruption, dosFs
includes an optional integrated consistency-checking mechanism to detect and
recover from inconsistencies. For example, if a disk is removed when a file is being
deleted, a consistency check completes the file deletion operation. Or, if a file is
being created when an interruption occurs, then the file is un-created. In other
words, the consistency checker either rolls forward or rolls back the operation that
experienced the inconsistency, making whichever correction is possible.
The dosFs file system supports a number of ioctl( ) functions. These functions are
defined in the header file ioLib.h along with their associated constants, and they
are described in Table 8-4.
For more information, see the API references for dosFsLib and for ioctl( ) in ioLib.
VxWorks can be booted from a local SCSI device (such as a hard drive in the target
system). Before you can boot from SCSI, you must make a new boot loader that
contains the SCSI library. Configure VxWorks with the INCLUDE_SCSI,
INCLUDE_SCSI_BOOT, and SYS_SCSI_CONFIG components.
After creating the SCSI boot loader ROM, you can prepare the dosFs file system for
use as a boot device. The simplest way to do this is to partition the SCSI device so
that a dosFs file system starts at block 0. You can then make the new system image,
place it on your SCSI boot device, and boot the new VxWorks system. These steps
are shown in more detail below.
! WARNING: For use as a boot device, the directory name for the dosFs file system
must begin and end with slashes (as with /sd0/ used in the following example).
This is an exception to the usual naming convention for dosFs file systems and is
incompatible with the NFS requirement that device names not end in a slash.
where id is the SCSI ID of the boot device, and lun is its Logical Unit Number
(LUN). To enable use of the network, include the on-board Ethernet device (for
example, ln for LANCE) in the other field.
The following example boots from a SCSI device with a SCSI ID of 2 and a LUN of
0.
8 Local File Systems
8.6 Raw File System: rawFs
To use the rawFs file system, configure VxWorks with the INCLUDE_RAWFS and
INCLUDE_XBD components.
If you are using a device driver that is not designed for use with the XBD facility,
you must use the INCLUDE_XBD_BLK_DEV wrapper component in addition to
INCLUDE_XBD. See XBD Block Device Wrapper, p.406 for more information.
The rawFs file system is the default file system. It is created automatically when
VxWorks cannot instantiate a known file system such as dosFs, HRFS, or cdromFs.
Unlike dosFs and HRFS, rawFs does not have a formatter. There are no particular
data structures on the media that signify the disk as being raw. To create a rawFs
file system manually, the current file system must be un-instantiated and replaced
with rawFs. Having two or more file systems on the same media can produce
instabilities in the VxWorks system. Hence, when instantiating a new file system,
the previous one must be removed.
See Example 8-10, Creating a rawFs File System, p.509, for code that illustrates how
this can be done. (See 8.2 File System Monitor, p.457 for information about default
creation of rawFs.)
The rawFs library rawFsLib is initialized automatically at boot time. The
rawFsInit( ) routine is called by the usrRoot( ) task after starting the VxWorks
system. The rawFsInit( ) routine takes a single parameter, the maximum number
of rawFs file descriptors that can be open at one time. This count is used to allocate
a set of descriptors; a descriptor is used each time a rawFs device is opened. The
parameter can be set with the NUM_RAWFS_FILES configuration parameter of the
INCLUDE_RAWFS component.
The rawFsInit( ) routine also makes an entry for the rawFs file system in the I/O
system driver table (with iosDrvInstall( )). This entry specifies the entry points for
rawFs file operations, for all devices that use the rawFs file system. The driver
number assigned to the rawFs file system is placed in a global variable,
rawFsDrvNum.
After the rawFs file system is initialized, one or more devices must be created.
Devices are created with the device driver’s device creation routine
(xxDevCreate( )). The driver routine returns a pointer to a block device descriptor
structure (BLK_DEV). The BLK_DEV structure describes the physical aspects of the
device and specifies the routines in the device driver that a file system can call.
Immediately after its creation, the block device has neither a name nor a file system
associated with it. To initialize a block device for use with rawFs, the
already-created block device must be associated with rawFs and a name must be
assigned to it. This is done with the rawFsDevInit( ) routine. Its parameters are the
name to be used to identify the device and a pointer to the block device descriptor
structure (BLK_DEV):
RAW_VOL_DESC *pVolDesc;
BLK_DEV *pBlkDev;
pVolDesc = rawFsDevInit ("DEV1:", pBlkDev);
The rawFsDevInit( ) call assigns the specified name to the device and enters the
device in the I/O system device table (with iosDevAdd( )). It also allocates and
initializes the file system’s volume descriptor for the device. It returns a pointer to
the volume descriptor to the caller; this pointer is used to identify the volume
during certain file system calls.
Note that initializing the device for use with rawFs does not format the disk. That
is done using an ioctl( ) call with the FIODISKFORMAT function.
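For instance, a minimal sketch of formatting the media behind the device created above (VxWorks target only; error handling abbreviated):

```c
/* Format the media behind the rawFs device set up with rawFsDevInit( ). */
int fd = open ("DEV1:", O_RDWR, 0);

if (fd != ERROR)
    {
    /* FIODISKFORMAT invokes the block driver's low-level format routine. */
    if (ioctl (fd, FIODISKFORMAT, 0) == ERROR)
        printf ("format failed\n");
    close (fd);
    }
```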
/* Map some XBD names. Use :0 and :1 since the disk may or may not have partitions. */
/* The ejection of the current file system is asynchronous and is handled by
   another task. Depending on relative priorities this may not happen
   immediately, so the path wait event facility is used. Each file system
   trips this event when it instantiates, to let waiting tasks know that it
   is ready.
*/
/* Eject the current file system and put rawfs in its place */
ioctl (fd, XBD_SOFT_EJECT, (int)XBD_TOP);
fsWaitForPath(&waitData);
Once the call to fsWaitForPath( ) returns, the rawFs file system is ready.
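Pieced together, the ejection sequence looks roughly like the following sketch (VxWorks target only). The device name "/ata0:0", and the fsEventUtilLib type and routine names shown, are assumptions based on the path wait event facility described above:

```c
/* Replace the current file system on an XBD with rawFs. */
FS_PATH_WAIT_STRUCT waitData;
int fd;

/* Register for the path event before ejecting, so it cannot be missed. */
fsPathAddedEventSetup (&waitData, "/ata0:0");

/* Eject the current file system; rawFs is instantiated in its place. */
fd = open ("/ata0:0", O_RDWR, 0);
ioctl (fd, XBD_SOFT_EJECT, (int) XBD_TOP);
close (fd);

/* Block until the rawFs instance trips the event for this path. */
fsWaitForPath (&waitData);
```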
! CAUTION: Because device names are recognized by the I/O system using simple
substring matching, file systems should not use a slash (/) alone as a name or
unexpected results may occur.
To begin I/O operations upon a rawFs device, first open the device using the
standard open( ) routine (or the creat( ) routine). Data on the rawFs device is
written and read using the standard I/O routines write( ) and read( ). For more
information, see 7.4 Basic I/O, p.367.
The character pointer associated with a file descriptor (that is, the byte offset where
the read and write operations take place) can be set by using ioctl( ) with the
FIOSEEK function.
Multiple file descriptors can be open simultaneously for a single device. These
must be carefully managed to avoid modifying data that is also being used by
another file descriptor. In most cases, such multiple open descriptors use FIOSEEK
to set their character pointers to separate disk areas.
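A sketch of that pattern (VxWorks target only; the offsets are arbitrary examples):

```c
/* Two descriptors on one rawFs volume, each working in its own region. */
char buf1[512], buf2[512];
int  fd1 = open ("DEV1:", O_RDWR, 0);
int  fd2 = open ("DEV1:", O_RDWR, 0);

ioctl (fd1, FIOSEEK, 0);          /* fd1 works from byte 0        */
ioctl (fd2, FIOSEEK, 0x100000);   /* fd2 works from the 1 MB mark */

write (fd1, buf1, sizeof (buf1)); /* does not disturb fd2's region */
write (fd2, buf2, sizeof (buf2));

close (fd1);
close (fd2);
```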
The rawFs file system supports the ioctl( ) functions shown in Table 8-5. The
functions listed are defined in the header file ioLib.h. For more information, see
the API references for rawFsLib and for ioctl( ) in ioLib.
8.7 CD-ROM File System: cdromFs
After initializing cdromFs and mounting it on a CD-ROM block device, you can
access data on that device using the standard POSIX I/O calls: open( ), close( ),
read( ), ioctl( ), readdir( ), and stat( ). The write( ) call always returns an error.
The cdromFs utility supports multiple drives, multiple open files, and concurrent
file access. When you specify a pathname, cdromFs accepts both forward slashes
(/) and back slashes (\) as path delimiters. However, the backslash is not
recommended because it might not be supported in future releases.
The initialization sequence for the cdromFs file system is similar to installing a
dosFs file system on a SCSI or ATA device.
After you have created the CD file system device (8.7.2 Creating and Using cdromFs,
p.513), use ioctl( ) to set file system options. The file system options are described
below:
CDROMFS_DIR_MODE_SET/GET
These options set and get the directory mode. The directory mode controls
whether a file is opened with the Joliet extensions, or without them. The
directory mode can be set to any of the following:
MODE_ISO9660
Do not use the Joliet extensions.
MODE_JOLIET
Use the Joliet extensions.
MODE_AUTO
Try opening the directory first without Joliet, and then with Joliet.
! CAUTION: Changing the directory mode un-mounts the file system. Therefore,
any open file descriptors are marked as obsolete.
CDROMFS_STRIP_SEMICOLON
This option sets the readdir( ) strip semicolon setting to FALSE if arg is 0, and
to TRUE otherwise. If TRUE, readdir( ) removes the semicolon and following
version number from the directory entries retrieved.
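Both options are set with ioctl( ) on a descriptor open on the volume; for example (VxWorks target only):

```c
/* fd is an open descriptor on the cdromFs volume. Note that changing the
   directory mode un-mounts the volume (see the CAUTION above). */
ioctl (fd, CDROMFS_DIR_MODE_SET, MODE_AUTO); /* try ISO 9660, then Joliet  */
ioctl (fd, CDROMFS_STRIP_SEMICOLON, TRUE);   /* hide ";1" version suffixes */
```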
2. Therefore, mode 2/form 2 sectors are not supported, as they have 2324 bytes of user data
per sector. Both mode 1/form 1 and mode 2/form 1 sectors are supported, as they have 2048
bytes of user data per sector.
3. The first session (that is, the earliest session) is always read. The most commonly desired
behavior is to read the last session (that is, the latest session).
CDROMFS_GET_VOL_DESC
This option returns, in arg, the primary or supplementary volume descriptor
by which the volume is mounted. arg must be of type T_ISO_PVD_SVD_ID, as
defined in cdromFsLib.h. The result is the volume descriptor, adjusted for the
endianness of the processor (not the raw volume descriptor from the CD). This
result can be used directly by the processor. The result also includes some
information not in the volume descriptor, such as which volume descriptor is
in use.
For information on using cdromFs( ), see the API reference for cdromFsLib.
8.7.1 Configuring VxWorks for cdromFs
This section describes the steps for creating a block device for the CD-ROM,
creating a cdromFsLib device, mounting the file system, and accessing the media.
The steps are performed from the shell, and shell show routines are used to display
information.
The cdromFs file system supports a number of ioctl( ) functions. These functions, and their
associated constants, are defined in the header files ioLib.h and cdromFsLib.h.
Table 8-6 describes the ioctl( ) functions that cdromFsLib supports. For more
information, see the API references for cdromFsLib and for ioctl( ) in ioLib.
cdromFsLib has a 4-byte version number. The version number is composed of four
parts, from most significant byte to least significant byte:
■ major number
■ minor number
■ patch level
■ build
The version number is returned by cdromFsVersionNumGet( ) and displayed by
cdromFsVersionNumDisplay( ).
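The byte layout can be made concrete with a small helper, shown here as plain C; the helper names are illustrative and are not part of cdromFsLib:

```c
#include <stdio.h>

/* Unpack a cdromFsLib-style 4-byte version number (major.minor.patch.build,
   most significant byte first). These helper names are illustrative only. */
unsigned verMajor (unsigned v) { return (v >> 24) & 0xffu; }
unsigned verMinor (unsigned v) { return (v >> 16) & 0xffu; }
unsigned verPatch (unsigned v) { return (v >>  8) & 0xffu; }
unsigned verBuild (unsigned v) { return v         & 0xffu; }

/* Print a packed version in dotted form, e.g. 0x01020304 -> "1.2.3.4". */
void verPrint (unsigned v)
    {
    printf ("%u.%u.%u.%u\n", verMajor (v), verMinor (v),
            verPatch (v), verBuild (v));
    }
```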
8.8 Read-Only Memory File System: ROMFS
Configuring VxWorks with ROMFS and applications involves the following steps:
1. Create a ROMFS directory in the project directory on the host system, using
the name /romfs.
2. Copy the application files into the /romfs directory.
3. Rebuild VxWorks.
For example, adding a process-based application called myVxApp.vxe from the
command line would look like this:
cd c:\myInstallDir\vxworks-6.1\target\proj\wrSbc8260_diab
mkdir romfs
copy c:\allMyVxApps\myVxApp.vxe romfs
make TOOL=diab
The contents of the romfs directory are automatically built into a ROMFS file
system and combined with the VxWorks image.
The ROMFS directory does not need to be created in the VxWorks project
directory. It can also be created in any location on (or accessible from) the host
system, and the make utility’s ROMFS_DIR macro can then be used to identify its
location in the build command. For example:
make TOOL=diab ROMFS_DIR="c:\allMyVxApps"
Note that any files located in the romfs directory are included in the system image,
regardless of whether or not they are application executables.
At run-time, the ROMFS file system is accessed as /romfs. The content of the
ROMFS directory can be browsed using the ls and cd shell commands, and
accessed programmatically with standard file system routines, such as open( ) and
read( ).
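At the API level, reading a ROMFS file is ordinary file I/O. The helper below is plain POSIX-style C (the routine name is made up for this sketch); on a VxWorks target the path would be a ROMFS file such as /romfs/foo:

```c
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

/* Read up to bufLen-1 bytes of <path> into buf, NUL-terminating the result.
   Returns the byte count, or -1 on error. */
ssize_t readWholeFile (const char *path, char *buf, size_t bufLen)
    {
    int     fd = open (path, O_RDONLY, 0);
    ssize_t n;

    if (fd < 0)
        return -1;
    n = read (fd, buf, bufLen - 1);
    if (n >= 0)
        buf[n] = '\0';
    close (fd);
    return n;
    }
```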
For example, if the directory
installDir/vxworks-6.x/target/proj/wrSbc8260_diab/romfs has been created on the
host, the file foo copied to it, and the system rebuilt and booted; then using cd and
ls from the shell (with the command interpreter) looks like this:
[vxWorks *]# cd /romfs
[vxWorks *]# ls
.
..
foo
[vxWorks *]#
NOTE: TSFS is not designed for use with large files (whether application
executables or other files), and performance may suffer when they are greater than
50 KB. For large files, use FTP or NFS instead of TSFS.
TSFS provides all of the I/O features of the network driver for remote file access
(see 7.8.5 Non-NFS Network Devices, p.402), without requiring any target
resources—except those required for communication between the target system
and the target server on the host. The TSFS uses a WDB target agent driver to
transfer requests from the VxWorks I/O system to the target server. The target
server reads the request and executes it using the host file system. When you open
a file with TSFS, the file being opened is actually on the host. Subsequent read( )
and write( ) calls on the file descriptor obtained from the open( ) call read from and
write to the opened host file.
8.9 Target Server File System: TSFS
The TSFS VIO driver is oriented toward file I/O rather than toward console
operations. TSFS provides all the I/O features that netDrv provides, without
requiring any target resource beyond what is already configured to support
communication between target and target server. It is possible to access host files
randomly without copying the entire file to the target, to load an object module
from a virtual file source, and to supply the filename to routines such as ld( ) and
copy( ).
Each I/O request, including open( ), is synchronous; the calling target task is
blocked until the operation is complete. This provides flow control not available in
the console VIO implementation. In addition, there is no need for WTX protocol
requests to be issued to associate the VIO channel with a particular host file; the
information is contained in the name of the file.
Consider a read( ) call. The driver transmits the ID of the file (previously
established by an open( ) call), the address of the buffer to receive the file data, and
the desired length of the read to the target server. The target server responds by
issuing the equivalent read( ) call on the host and transfers the data read to the
target program. The return value of read( ) and any errno that might arise are also
relayed to the target, so that the file appears to be local in every way.
For detailed information, see the API reference for wdbTsfsDrv.
Socket Support
TSFS sockets are operated on in a similar way to other TSFS files, using open( ),
close( ), read( ), write( ), and ioctl( ). To open a TSFS socket, use one of the
following forms of filename:
"TCP:hostIP:port"
"TCP:hostname:port"
The flags and permissions arguments are ignored. The following examples show
how to use these filenames:
fd = open("/tgtsvr/TCP:phobos:6164",0,0); /* open socket and connect */
/* to server phobos */
The result of this open( ) call is to open a TCP socket on the host and connect it to
the target server socket at hostname or hostIP awaiting connections on port. The
resultant socket is non-blocking. Use read( ) and write( ) to read and write to the
TSFS socket. Because the socket is non-blocking, the read( ) call returns
immediately with an error and the appropriate errno if there is no data available
to read from the socket. The ioctl( ) usage specific to TSFS sockets is discussed in
the API reference for wdbTsfsDrv. This socket configuration allows VxWorks to
use the socket facility without requiring sockLib and the networking modules on
the target.
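A sketch of that non-blocking behavior (VxWorks target only; the host name and port are examples, and the errno checked may differ by configuration):

```c
void tsfsSockDemo (void)
    {
    char buf[128];
    int  n;
    int  fd = open ("/tgtsvr/TCP:phobos:6164", 0, 0);

    if (fd < 0)
        return;

    /* read( ) on a non-blocking TSFS socket fails with EWOULDBLOCK while
       no data is available; poll until something arrives. */
    while ((n = read (fd, buf, sizeof (buf))) < 0 && errno == EWOULDBLOCK)
        taskDelay (1);

    if (n > 0)
        printf ("received %d bytes\n", n);

    close (fd);
    }
```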
Error Handling
Errors can arise at various points within TSFS and are reported back to the original
caller on the target, along with an appropriate error code. The error code returned
is the VxWorks errno which most closely matches the error experienced on the
host. If a WDB error is encountered, a WDB error message is returned rather than
a VxWorks errno.
Security Considerations
While TSFS has much in common with netDrv, the security considerations are
different (also see 7.8.5 Non-NFS Network Devices, p.402). With TSFS, the host file
operations are done on behalf of the user that launched the target server. The user
name given to the target as a boot parameter has no effect. In fact, none of the boot
parameters have any effect on the access privileges of TSFS.
In this environment, it is less clear to the user what the privilege restrictions to
TSFS actually are, since the user ID and host machine that start the target server
may vary from invocation to invocation. By default, any host tool that connects to
a target server which is supporting TSFS has access to any file with the same
authorizations as the user that started that target server. However, the target
server can be locked (with the -L option) to restrict access to the TSFS.
The options which have been added to the target server startup routine to control
target access to host files using TSFS include:
-R Set the root of TSFS.
For example, specifying -R /tftpboot prepends this string to all TSFS filenames
received by the target server, so that /tgtsvr/etc/passwd maps to
/tftpboot/etc/passwd. If -R is not specified, TSFS is not activated and no TSFS
requests from the target will succeed. Restarting the target server without
specifying -R disables TSFS.
-RW Make TSFS read-write.
The target server interprets this option to mean that modifying operations
(including file create and delete or write) are authorized. If -RW is not
specified, the default is read only and no file modifications are allowed.
NOTE: For more information about the target server and the TSFS, see the tgtsvr
command reference. For information about specifying target server options from
Workbench, see the Wind River Workbench User’s Guide: Setting Up Your Hardware
and the Wind River Workbench User’s Guide: New Target Server Connections.
For information about using the TSFS to boot a target, see 3.11 Booting From the
Host File System Using TSFS, p.155.
9
Network File System: NFS
9.1 Introduction
VxWorks provides an implementation of the Network File System (NFS)
application protocol, versions 2 and 3.
The first part of this chapter describes how to configure and use an NFS client,
which enables a VxWorks target to mount remote file systems and access the
contents of those file systems as if they were local. The second part of the chapter
describes how to configure and use an NFS server, which enables a VxWorks
target to export local file systems to remote network systems.
NOTE: VxWorks does not normally provide authentication services for NFS
requests. If you need the NFS server to authenticate incoming requests, see the
nfsdInit( ) and mountdInit( ) reference entries for information on authorization
hooks.
NFS_USER_ID
NFS_GROUP_ID
NFS_MAXPATH
9.2 Configuring VxWorks for an NFS Client
NFS v2 Client
Initialization
This component also configures the VxWorks image to initialize the NFS v2 client,
which includes a call to nfsAuthUnixSet( ):
nfsAuthUnixSet ( sysBootParams.hostName, NFS_USER_ID, NFS_GROUP_ID, 0,
(int *) 0);
Values for the NFS_USER_ID and NFS_GROUP_ID parameters are taken from the
required INCLUDE_CORE_NFS_CLIENT component.
Parameters
NFS2_CLIENT_CACHE_DEFAULT_NUM_LINES
NFS2_CLIENT_CACHE_DEFAULT_LINE_SIZE
Changing the size of the cache will not affect any existing cache. It will only
impact future caches.
NFS2_CLIENT_CACHE_DEFAULT_OPTIONS
This parameter configures the default options for the NFS v2 client cache. The
two valid settings for this parameter are:
0 (no options)
The default. The cache collects written data and sends it to the server
only when the cache line is full or needs to be flushed.
1 (NFS_CACHE_WRITE_THROUGH)
The cache is write-through: written data is sent to the server
immediately.
You can modify the cache options, either at build time or at run-time. To
configure the cache at run time, call the routine usrNfs2CacheInit( ):
usrNfs2CacheInit (UINT32 numLines, UINT32 lineSize, UINT32 options);
NFS v3 Client
Initialization
This component also configures the VxWorks image to initialize the NFS v3 client,
which includes a call to nfsAuthUnixSet( ):
nfsAuthUnixSet ( sysBootParams.hostName, NFS_USER_ID, NFS_GROUP_ID, 0,
(int *) 0);
Values for the NFS_USER_ID and NFS_GROUP_ID parameters to this routine are
taken from the required INCLUDE_CORE_NFS_CLIENT component.
Parameters
NFS3_CLIENT_CACHE_DEFAULT_NUM_LINES
NFS3_CLIENT_CACHE_DEFAULT_LINE_SIZE
Changing the size of the cache will not affect any existing cache. It will only
impact future caches.
NFS3_CLIENT_CACHE_DEFAULT_OPTIONS
This parameter configures the default options for the NFS v3 client cache. The
two valid settings for this parameter are:
0 (no options)
The default. The cache collects written data and sends it to the server
only when the cache line is full or needs to be flushed.
1 (NFS_CACHE_WRITE_THROUGH)
The cache is write-through: written data is sent to the server
immediately.
You can modify the cache options, either at build time or at run-time. To
configure the cache at run time, call the routine usrNfs3CacheInit( ):
usrNfs3CacheInit (UINT32 numLines, UINT32 lineSize, UINT32 options);
The NFS v3 client has one additional configurable parameter that is not available
on NFS v2. According to the RFC, the NFS v3 client can dictate to an NFS v3 server
how it should perform the write operations. At runtime, the NFS v3 client can be
set to inform the server that it should perform writes in one of the following modes:
■ UNSTABLE
■ FILE_SYNC
■ DATA_SYNC
The default setting is UNSTABLE.
You can use two routines to configure these options at run-time:
■ nfs3StableWriteSet(stable_how mode) sets the mode.
■ nfs3StableWriteGet( ) gets the current mode.
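For example (VxWorks target only):

```c
/* Ask NFS v3 servers to commit data to stable storage before replying
   to each write, instead of the default UNSTABLE behavior. */
nfs3StableWriteSet (FILE_SYNC);

if (nfs3StableWriteGet () == FILE_SYNC)
    printf ("NFS v3 writes now use FILE_SYNC\n");
```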
9.3 Creating an NFS Client
GROUP_EXPORTS
VxWorks calls hostAdd( ) for the host system automatically at boot time. If
you want to mount file systems from other remote systems, you need to make
an explicit hostAdd( ) call for those systems.
4. Call nfsMount( ) or nfsMountAll( ) to actually mount a remote file system.
The nfsMountAll( ) routine queries the specified remote system for a list of
exported file names and then creates NFS client device instances for each
exported file system. To unmount a file system, use nfsUnmount( ). Use
nfsDevShow( ) to display a list of the mounted NFS devices.
As a convenience, the INCLUDE_NFS_MOUNT_ALL component configures an
image to make a boot time call to nfsMountAll( ) to mount all file systems
exported by the boot host.
The following sections supplement and expand on the procedure outlined above.
For a UNIX NFS server, the /etc/exports file specifies which of the server’s file
systems are exported for mounting by remote NFS clients. If a file system on a
UNIX NFS server is not listed in /etc/exports, the file system is not exported, which
means other machines cannot use NFS to mount it. For example, consider an
/etc/exports file that contains the line:
/usr
The server exports /usr without restriction. If you want to limit access to this
directory, you can include additional parameters on the line. For example:
1. On the UNIX box, login as root (super user).
2. Edit: /etc/exports
3. Specify the path and permission for the file system that you would export.
For example: /usr * (rw)
For more information on these parameters, consult your UNIX system
documentation.
4. To export the file system, run: exportfs -ra
5. On the UNIX server, run the NFS daemon: rpc.nfsd
6. Run: rpc.rquotad
7. To run the mount daemon, run: rpc.mountd
To check whether NFS is running, use: rpcinfo -p.
Internally, NFS depends upon RPC to handle the remote execution of the
commands (open, read, write, and others) that access the data in the remote file
system. Associated with the RPC protocol is an authentication system known as
AUTH_UNIX. This authentication system requires RPC peers to provide a user
name, a user ID, and a group name. The recipient of an RPC message uses this
information to decide whether to honor or ignore the RPC request.
On a VxWorks host, you can set the NFS user name, user ID, and group name
using the NFS_GROUP_ID and NFS_USER_ID parameters included in the
INCLUDE_CORE_NFS_CLIENT component. You can also set these values by calling
nfsAuthUnixSet( ) or nfsAuthUnixPrompt( ). For example, to use
nfsAuthUnixSet( ) to set the NFS user ID to 1000 and the NFS group ID to 200 for
the machine mars, you would call nfsAuthUnixSet( ) as follows:
-> nfsAuthUnixSet "mars", 1000, 200, 0
After setting your NFS client name, user ID, and group ID, you are ready to call
nfsMount( ) to mount any file system exported by a known host. To add a system
to the list of hosts known to a VxWorks system, call hostAdd( ):
hostAdd ("host", "IPaddress" )
This routine associates a host name with an IP address. Thus, if you wanted to
mount a file system exported by a system called mars, you would need to have
already called hostAdd( ) for mars. For example, if mars were at 192.168.10.1 you
would need to call hostAdd( ) as follows:
hostAdd ("mars", "192.168.10.1" )
If mars exports a file system called /usr, you can now use a call to nfsMount( ) to
create a local mount of that remotely exported file system. The syntax of an
nfsMount( ) call is as follows:
nfsMount ("hostName", "hostFileSys", "localName")
hostName
The host name of the NFS server that exports the file system you want to
mount.
hostFileSys
The name of the host file system or subdirectory as it is known on the
exporting NFS server system.
localName
The local name to assign to the file system.
For example, if you wanted to mount a remote file system, /d0/, on your target,
wrs, as a device called /myDevice0/, you would make the following call to
nfsMount( ):
nfsMount ("wrs", "/d0/", "/myDevice0/");
The VxWorks target now has access to the contents of /d0/, although using the
device name, /myDevice0/. For example, if the remote device stores the file,
/d0/bigdog, you can access this file from the wrs target using the pathname,
/myDevice0/bigdog. If you want the local device to use the same device name
as is used on the exporting system, use a NULL as the third parameter of the
nfsMount( ) call. For example:
nfsMount ("wrs", "/d0/", NULL);
On the VxWorks target, the nfsMount( ) call creates the local device, /d0/.
Thus, on the target, the pathname to bigdog is the same as on the exporting
system; that is: /d0/bigdog.
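Pulled together, the client-side setup amounts to the following sketch (VxWorks target only; names and addresses are the examples used above):

```c
/* Identify this client, make the server known, and mount its export. */
nfsAuthUnixSet ("mars", 1000, 200, 0, (int *) 0); /* AUTH_UNIX identity  */
hostAdd ("mars", "192.168.10.1");                 /* name -> IP address  */
nfsMount ("mars", "/d0/", NULL);                  /* local name is /d0/  */

int fd = open ("/d0/bigdog", O_RDONLY, 0);        /* remote file, local path */
```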
If you do not need to mount the remote file system under a new name, you should
consider using nfsMountAll( ) instead of nfsMount( ). A call to nfsMountAll( )
mounts all file systems that are exported from the remote system and that are
accessible to the specified client.
The nfsMountAll( ) routine takes three parameters:
hostName
The name of the host from which you want to mount all exported file
systems.
clientName
The name of a client specified in an access list, if any. A NULL clientName
mounts only those file systems that are accessible to any client.
quietFlag
A boolean value that tells nfsMountAll( ) whether to execute in verbose
or silent mode. FALSE indicates verbose mode, and TRUE indicates quiet
mode.
After opening a file in a mounted directory, you can work with the file using the
ioctl( ) control functions listed in Table 9-1.
Table 9-1 Supported I/O Control Functions for Files Accessed through NFS
IOCTL Description
FIOGETNAME Gets the file name of fd and copies it to the buffer referenced
by nameBuf:
status = ioctl (fd, FIOGETNAME, &nameBuf);
FIOSEEK Sets the current byte offset in the file to the position specified
by newOffset. If the seek goes beyond the end-of-file, the file
grows. The end-of-file pointer gets moved to the new
position, and the new space is filled with zeros:
status = ioctl (fd, FIOSEEK, newOffset);
FIOWHERE Returns the current byte position in the file. This is the byte
offset of the next byte to be read or written. It takes no
additional argument:
position = ioctl (fd, FIOWHERE, 0);
FIOREADDIR Reads the next directory entry. Use the third argument in the
ioctl( ) call to supply a pointer to a directory descriptor of
type DIR.
DIR dirStruct;
fd = open ("directory", O_RDONLY);
status = ioctl (fd, FIOREADDIR, &dirStruct);
FIOFSTATGET Gets file status information (directory entry data). Use the
third argument in the ioctl( ) call to supply a pointer to a stat
structure that is filled with data describing the specified file.
For example:
struct stat statStruct;
fd = open ("file", O_RDONLY);
status = ioctl (fd, FIOFSTATGET, &statStruct);
FIOFSTATFSGET Gets the file system parameters for an open file descriptor.
Use the third argument in the ioctl( ) call to supply a pointer
to a statfs structure that is filled with data describing the
underlying file system.
struct statfs statfsStruct;
fd = open ("directory", O_RDONLY);
status = ioctl (fd, FIOFSTATFSGET, &statfsStruct);
NFS Server
NFS_MAXPATH
NFS_USER_ID
NFS_MAXFILENAME
Synopsis: Maximum file name length. Valid values range from 1 to 99.
Default: 40
9.4 Configuring VxWorks for an NFS Server
NFS_GROUP_ID
NFS server V2
NOTE: VxWorks does not normally provide authentication services for NFS
requests. If you need to authenticate incoming requests, see the documentation for
nfsdInit( ) and mountdInit( ) for information about authorization hooks.
NFS server V3
NOTE: VxWorks does not normally provide authentication services for NFS
requests. If you need to authenticate incoming requests, see the documentation for
nfsdInit( ) and mountdInit( ) for information about authorization hooks.
The following requests are accepted from clients. For details of their use, see RFC
1813: NFS: Network File System Protocol Specification.
NFSPROC_NULL 0
NFSPROC_GETATTR 1
NFSPROC_SETATTR 2
NFSPROC_LOOKUP 3
NFSPROC_ACCESS 4
NFSPROC_READLINK 5 – not supported, limitation in DOSFS
NFSPROC_READ 6
NFSPROC_WRITE 7
NFSPROC_CREATE 8
NFSPROC_MKDIR 9
NFSPROC_SYMLINK 10 – not supported
NFSPROC_MKNOD 11 – not supported
NFSPROC_REMOVE 12
NFSPROC_RMDIR 13
NFSPROC_RENAME 14
NFSPROC_LINK 15 – not supported
NFSPROC_READDIR 16
NFSPROC_READDIRPLUS 17
NFSPROC_FSSTAT 18
NFSPROC_FSINFO 19
NFSPROC_PATHCONF 20
NFSPROC_COMMIT 21
9.5 Creating an NFS Server
The following code fragment creates a RAM drive, initializes it for dosFs, and
exports the file system for NFS clients on the network:
unsigned myBlockSize; /* block size in bytes */
unsigned myTotalSize; /* disk size in bytes */
myBlockSize = 512;
myTotalSize = 16777216; /* 16 MB */
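The creation calls themselves appear to be missing from this fragment. A hedged reconstruction (VxWorks target only) might look like the following; xbdRamDiskDevCreate( ) and dosFsVolFormat( ) are assumed to be available in the configured image:

```c
device_t xbd;

/* Create a 16 MB XBD-backed RAM disk named "/export" (no trailing slash). */
xbd = xbdRamDiskDevCreate (myBlockSize, myTotalSize, FALSE, "/export");

/* Put a dosFs file system on it, blank-formatting the volume. */
dosFsVolFormat ("/export", DOS_OPT_BLANK, NULL);
```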
! CAUTION: For NFS-exportable file systems, the device name absolutely must not
end in a slash.
After you have an exportable file system, call nfsExport( ) to make it available to
NFS clients on your network. Then mount the file system from the remote NFS
client using the facilities of that system. The following example shows how to
export the new file system from a VxWorks target called vxTarget, and how to
mount it from a typical UNIX system.
1. After the file system (/export in this example) is initialized, the following
routine call specifies it as a file system to be exported with NFS:
nfsExport ("/export", 0, FALSE, 0);
The first three arguments specify the name of the file system to export; the
VxWorks NFS export ID (0 means to assign one automatically); and whether
to export the file system as read-only. The last argument is a placeholder for
future extensions.
2. To mount the file system from another machine, see the system documentation
for that machine. Specify the name of the VxWorks system that exports the file
system, and the name of the desired file system. You can also specify a
different name for the file system as seen on the NFS client.
! CAUTION: On UNIX systems, you normally need root access to mount file systems.
10
Flash File System Support:
TrueFFS
10.1 Introduction
TrueFFS is a flash management facility that provides access to flash memory by
emulating disk access.
It provides VxWorks with block device functionality, which allows either the
dosFs file system (with or without TRFS support) or the HRFS file system to be
used to access flash memory in the same manner as a disk. For information about
the file system facilities, see 8.5 MS-DOS-Compatible File System: dosFs, p.481),
7.8.9 Transaction-Based Reliable File System Facility: TRFS, p.407, and 8.4 Highly
Reliable File System: HRFS, p.461.
In addition, TrueFFS provides full flash media management capabilities.
TrueFFS is a VxWorks-compatible implementation of M-Systems FLite, version
2.0. This system is reentrant, thread-safe, and supported on all CPU architectures
that host VxWorks. TrueFFS consists of the following four layers:
■ The core layer, which connects the other layers and handles global facilities,
such as backgrounding, garbage collection, timers, and other system
resources. This layer provides the block device interface for a file system.
■ The flash translation layer, which maintains the block allocation map that
associates the file system’s view of the storage medium with erase blocks in
flash.
■ The Memory Technology Device (MTD) layer, which implements the low-level
programming of the flash medium (map, read, write, and erase functions).
■ The socket layer, which provides an interface between TrueFFS and the board
hardware with board-specific hardware access routines.
Figure 10-1 illustrates the relationship between the file system, TrueFFS layers,
and the flash medium itself.
This chapter provides instructions for using TrueFFS with the MTDs and drivers
that are included in this release. It provides quick-start material for configuring
TrueFFS and formatting TrueFFS drives, and thus presents the basic steps required
to use the default TrueFFS facilities with your application.
It also provides information about creating a boot image region that excludes
TrueFFS, and about writing the boot image to that region.
If you must customize or create new socket drivers or MTDs, or would simply like
more detailed information about TrueFFS technology, see the VxWorks Device
Driver Developer’s Guide: Flash File System Support with TrueFFS.
10.2 Overview of Implementation Steps
NOTE: This version of the TrueFFS product is a block device driver for VxWorks
that, although intended to be file-system neutral, is provided for use with the
dosFs file system or the HRFS file system. The configuration steps for using
TrueFFS with dosFs and HRFS are slightly different.
Determine whether any of the MTDs provided with this release support the device
that you intend to use for TrueFFS. Devices are usually identified by their JEDEC
IDs. If you find an MTD appropriate to your flash device, you can use that MTD.
These drivers are also provided in binary form, so you do not need to compile the
MTD source code unless you have modified it.
10.3 Creating a System with TrueFFS
NOTE: For the list of the MTD components and details about adding the MTD
component to your system, see Including the MTD Component, p.552.
The socket driver that you include in your system must be appropriate for your
BSP. Some BSPs include socket drivers; others do not. The socket driver file is
sysTffs.c and, if provided, it is located in your BSP directory.
If your BSP does not provide this file, follow the procedure described in the
VxWorks Device Driver Developer’s Guide: Flash File System Support with TrueFFS,
which explains how to port a stub version to your hardware.
In either case, the build process requires that a working socket driver (sysTffs.c)
be located in the BSP directory. For more information, see Adding the Socket Driver,
p.553.
NOTE: Included with TrueFFS are sources for several MTDs and socket drivers.
The MTDs are in installDir/vxworks-6.x/target/src/drv/tffs. The socket drivers are
defined in the sysTffs.c files provided in the
installDir/vxworks-6.x/target/config/bspname directory for each BSP that supports
TrueFFS.
There are other file system components that are not required, but which may be
useful. These components add support for the basic functionality needed to use a
file system, such as the commands ls, cd, copy, and so forth (which are provided
by the INCLUDE_DISK_UTIL component).
TrueFFS provides optional utility components for automatic drive detection, show
routines, and writing a boot image to flash.
INCLUDE_TFFS_MOUNT
Including this component adds automatic detection (on booting) of existing
formatted TrueFFS drives.
INCLUDE_TFFS_SHOW
Including this component adds two TrueFFS configuration display utilities,
tffsShow( ) and tffsShowAll( ) for use from the shell.
The tffsShow( ) routine prints device information for a specified socket
interface. It is particularly useful when trying to determine the number of
erase units required to write a boot image (10.3.6 Reserving a Region in Flash for
a Boot Image, p.556). The tffsShowAll( ) routine provides the same information
for all socket interfaces registered with VxWorks. The tffsShowAll( ) routine
can be used from the shell to list the drives in the system. The drives are listed
in the order in which they were registered. This component is not included by
default.
INCLUDE_TFFS_BOOT_IMAGE
Including this component provides the tffsBootImagePut( ) routine (in
sysTffs.o). The routine is used to write a boot image to flash memory (see
Writing the Boot Image to Flash, p.558).
Add the MTD component appropriate to your flash device (10.3.1 Selecting an
MTD, p.548) to your system. The MTD components for flash devices from Intel,
AMD, Fujitsu, and Sharp, are described in Table 10-1. For more information about
support for these devices, see the VxWorks Device Driver Developer’s Guide: Flash File
System Support with TrueFFS.
[Table 10-1: Component / Device]
If you have written your own MTD, you must be sure that it is correctly defined
for inclusion in the system, and that it explicitly requires the translation layer. See
the VxWorks Device Driver Developer’s Guide: Flash File System Support with TrueFFS
for information.
Choose the translation layer appropriate to the technology used by your flash
medium. The main variants of flash devices are NOR and NAND. TrueFFS
provides support for:
■ NOR devices.
■ NAND devices that conform to the SSFDC specification.
The translation layer is provided in binary form only. The translation layer
components are listed in Table 10-2.
INCLUDE_TL_FTL
The translation layer for NOR flash devices. If you can execute code in flash,
your device uses NOR logic.
The component descriptor files specify the dependency between the translation
layers and the MTDs; therefore, when configuring through Workbench or the
command-line vxprj facility, you do not need to explicitly select a translation layer.
The build process handles it for you.
For more information about the translation layer, see the VxWorks Device Driver
Developer’s Guide: Flash File System Support with TrueFFS.
Inclusion of the socket driver is automatic for BSPs that provide the driver. When
the core TrueFFS component, INCLUDE_TFFS, is included in VxWorks, the build
process checks for a socket driver, sysTffs.c, in the BSP directory and includes that
file in the system.
If your BSP does not provide a socket driver, follow the procedure described in the
VxWorks Device Driver Developer’s Guide: Flash File System Support with TrueFFS for
writing a socket driver. To include the socket driver in your system, a working
version of the socket driver (sysTffs.c) must be located in your BSP directory.
Build the system from Workbench or from the command line with vxprj.
! WARNING: If the flash array for your system is used for a boot image (a boot loader
or self-booting VxWorks system) as well as file system media, space must be
reserved for the boot image before you format the flash for TrueFFS. For more
information, see 10.3.6 Reserving a Region in Flash for a Boot Image, p.556.
First, boot your system. After the system boots and registers the socket driver(s),
start the shell. From the shell, run tffsDevFormat( ) or sysTffsFormat( ) to format
the flash memory for use with TrueFFS—use the latter if the BSP provides it
(sysTffsFormat( ) performs some setup operations and then calls
tffsDevFormat( )).
For example, the tffsDevFormat( ) routine takes two arguments, a drive number
and a format argument:
tffsDevFormat (int tffsDriveNo, int formatArg);
NOTE: You can format the flash medium even though a block device driver has
not yet been associated with the flash.
NOTE: The size of the flash media available for use is reduced by one sector for
TrueFFS internal use.
The first argument for tffsDevFormat( ), tffsDriveNo, is the drive number (socket
driver number), which identifies the flash medium to be formatted. Most systems
have a single flash drive, but TrueFFS supports up to five.
The socket registration process determines the drive number. Drive numbers are
assigned to the flash devices on the basis of the order in which the socket drivers
are registered in sysTffsInit( ) at boot time. The first registered is drive 0, the
second is drive 1, and so on up to 4. Details of this process are described in the
VxWorks Device Driver Developer’s Guide: Flash File System Support with TrueFFS.
TFFS_STD_FORMAT_PARAMS
To facilitate calling tffsDevFormat( ) from the shell, you can simply pass zero (or
a NULL pointer) for the second argument, formatArg. Doing so makes use of the
TFFS_STD_FORMAT_PARAMS macro, which defines default values for the
tffsDevFormatParams structure. The macro defines the default values used in
formatting a flash disk device.
Do not use this macro if the flash device is shared with a boot loader. If the BSP
provides sysTffsFormat( ), use that routine instead.
TFFS_STD_FORMAT_PARAMS is defined in tffsDrv.h as:
#define TFFS_STD_FORMAT_PARAMS {{0, 99, 1, 0x10000l, NULL, {0,0,0,0},
NULL, 2, 0, NULL}, FTL_FORMAT_IF_NEEDED}
The meaning of these default values, and other possible arguments for the
members of this structure, are described below.
formatParams
formatFlags
In order to use flash media for a boot image as well as a file system, a portion of
the flash memory (a fallow region) must be reserved so that it is excluded from the
area subject to formatting and the run-time operations of TrueFFS. Note that the
fallow region can be used for purposes other than boot loader code (for example,
a system startup log or configuration data).
If TrueFFS is used with the vxWorks_romResident VxWorks image type, TrueFFS
must be read-only. For information about boot image types, see 3.3 Boot Loader
Image Types, p.133 and 2.4.1 VxWorks Image Types, p.15.
Using sysTffsFormat( )
This routine first sets up a pointer to a tffsFormatParams structure that has been
initialized with a value for bootImageLen (which defines the boot image region);
then it calls tffsDevFormat( ).
Several BSPs, among them the ads860 BSP, include a sysTffsFormat( ) routine that
reserves 0.5 MB for the boot image. For example:
STATUS sysTffsFormat (void)
    {
    STATUS status;
    tffsDevFormatParams params =
        {
#define HALF_FORMAT  /* lower 0.5MB for boot image, upper 1.5MB for TFFS */
#ifdef HALF_FORMAT
        {0x80000l, 99, 1, 0x10000l, NULL, {0,0,0,0}, NULL, 2, 0, NULL},
#else
        {0x000000l, 99, 1, 0x10000l, NULL, {0,0,0,0}, NULL, 2, 0, NULL},
#endif /* HALF_FORMAT */
        FTL_FORMAT_IF_NEEDED
        };

    /* format drive 0, preserving the boot image region */

    status = tffsDevFormat (0, (int) &params);
    return (status);
    }
If your BSP does not provide sysTffsFormat( ), you must modify the
tffsFormatParams structure to reserve a fallow region before you call
tffsDevFormat( ).
Change the bootImageLen member of the tffsFormatParams structure to a value
that is at least as large as the boot image. The area defined by bootImageLen is
excluded from TrueFFS activity (formatting and wear-leveling).
For more information about bootImageLen and other members of the structure,
see Specifying Format Options, p.554.
Once you have created a boot image region, you can write the boot image to the
flash device using tffsBootImagePut( ). This routine bypasses TrueFFS (and its
translation layer) and writes directly into any location in flash memory. However,
because tffsBootImagePut( ) relies on a call to tffsRawio( ), you cannot use this
routine once the TrueFFS volume is mounted.
The arguments to tffsBootImagePut( ) are the following:
driveNo
The same drive number as the one used as input to the format routine.
offset
Offset from the start of flash at which the image is written (most often specified
as zero).
filename
Pointer to the boot image.
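For example, a call that writes a boot image to the start of the reserved region of drive 0 might look like the following sketch. The image path is a hypothetical placeholder, and the fragment assumes VxWorks is configured with the INCLUDE_TFFS_BOOT_IMAGE component.

```c
/* Sketch only: write a boot image to the reserved region of drive 0.
 * "/romfs/bootrom.hex" is a hypothetical path; substitute the actual
 * location of your boot image.
 */
STATUS status;

status = tffsBootImagePut (0,                      /* driveNo: same as format call */
                           0,                      /* offset: start of flash */
                           "/romfs/bootrom.hex");  /* filename: boot image */
if (status != OK)
    printf ("boot image write failed\n");
```

Remember that this call must be made before the TrueFFS volume is mounted, since tffsBootImagePut( ) relies on tffsRawio( ).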
! WARNING: Because tffsBootImagePut( ) lets you write directly to any area of flash,
it is possible to accidentally overwrite and corrupt the TrueFFS-managed area of
flash if you do not specify the parameters correctly. For more information about
how to use this utility, see the reference entry for tffsBootImagePut( ) in the
VxWorks API reference.
Use the usrTffsConfig( ) routine to mount the file system on a TrueFFS flash drive.
Its arguments are the following:
drive
Specifies the drive number of the TFFS flash drive; valid values are 0 through
the number of socket interfaces in the BSP.
removable
Specifies whether the media is removable. Use 0 for non-removable, 1 for
removable.
fileName
Specifies the mount point, for example, '/tffs0/'.
The following example runs usrTffsConfig( ) to attach a drive to the file system,
and then runs devs to list all drivers:
-> usrTffsConfig 0,0,"/flashDrive0/"
-> devs
drv name
0 /null
1 /tyCo/0
1 /tyCo/1
5 host:
6 /vio
2 /flashDrive0/
One way to test your drive is by copying a text file from the host (or from another
type of storage medium) to the flash file system on the target. Then, copy the file
to the console or to a temporary file for comparison, and verify the content. The
following example (using dosFs on TrueFFS) is run from the shell:
-> copy "host:/home/myHost/.cshrc" "/flashDrive0/myCshrc"
Copy Ok: 4266 bytes copied
Value = 0 = 0x0
-> copy "/flashDrive0/myCshrc"
...
...
...
Copy Ok: 4266 bytes copied
Value = 0 = 0x0
This example formats RFA and PCMCIA flash for two drives.
The first lines of this example format the board-resident flash by calling the helper
routine, sysTffsFormat( ), which preserves the boot image region. This example
does not update the boot image. It then mounts the drive, numbering it as 0 and
passing 0 as the second argument to usrTffsConfig( ). Zero is used because RFA
is non-removable.
The last lines of the example format PCMCIA flash, passing default format values
to tffsDevFormat( ) for formatting the entire drive. Then, it mounts that drive.
Because PCMCIA is removable flash, it passes 1 as the second argument to
usrTffsConfig( ). (See 10.3.7 Mounting the Drive, p.558 for details on the arguments
to usrTffsConfig( ).)
10.4 Using TrueFFS Shell Commands
Insert a flash card in the PCMCIA socket. At the shell prompt, type the following
commands:
-> sysTffsFormat
-> usrTffsConfig 0,0,"/RFA/"
-> tffsDevFormat 1,0
-> usrTffsConfig 1,1,"/PCMCIA1/"
Target with Two PCMCIA Sockets and No Boot Image Region Created
This example formats PCMCIA flash for two drives. Neither format call preserves
a boot image region. It then mounts the drives; the first is numbered 0, and the
second is numbered 1. PCMCIA is a removable medium.
Insert a flash card in each PCMCIA socket. At the shell prompt, type the following
commands:
-> tffsDevFormat 0,0
-> usrTffsConfig 0,1,"/PCMCIA1/"
-> tffsDevFormat 1,0
-> usrTffsConfig 1,1,"/PCMCIA2/"
The following code fragment illustrates the procedure for creating a TrueFFS block
device, creating an XBD block device wrapper, and formatting the TrueFFS media
for an HRFS file system.
/* create block device for the entire disk, */
/*
* If the HRFS file system exists already, it will be
* automatically instantiated and we are done.
*/
/*
* But if a file system does not exist or it isn't HRFS
* format it.
*/
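The elided statements might read as follows. This is a sketch: the XBD base name "/tffs" is chosen to match the shell examples in this chapter, and the exact signatures of tffsDevCreate( ), xbdBlkDevCreateSync( ), and hrfsFormat( ) should be checked against the VxWorks API reference.

```c
/* assumed headers: tffsDrv.h, xbdBlkDev.h, hrFsLib.h */
BLK_DEV  *blkDev;
device_t  xbd;

/* create block device for the entire disk (drive 0, non-removable) */
blkDev = tffsDevCreate (0, 0);

/* wrap the block device with an XBD, using the base name "/tffs" */
xbd = xbdBlkDevCreateSync (blkDev, "/tffs");

/*
 * If an HRFS file system exists already, it is automatically
 * instantiated and we are done. But if a file system does not exist,
 * or it isn't HRFS, format it (zeros select the default sizes).
 */
hrfsFormat ("/tffs:0", 0ull, 0, 0);
```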
For a description of the full set of steps needed to implement HRFS with TrueFFS,
see 10.5.2 TrueFFS With HRFS Shell Command Example, p.562.
The following steps illustrate the procedure for implementing TrueFFS with HRFS
using the kernel shell.
1. Perform a low-level format of the flash device. This step must be performed
before you use the flash device for the first time.
> sysTffsFormat
10.5 Using TrueFFS With HRFS
value = 0 = 0x0
Note that the hrfsDiskFormat( ) routine is designed for convenient use from
the shell; the hrfsFormat( ) routine is used in code.
5. The TFFS device is now ready for use. List the contents.
> ll "/tffs:0"
Listing Directory /tffs:0:
drwxrwxrwx 1 0 0 8192 Jan 1 00:05 ./
drwxrwxrwx 1 0 0 8192 Jan 1 00:05 ../
value = 0 = 0x0
After the media and file system have been prepared, the procedure for subsequent
reboots is slightly simpler, as the flash media does not have to be formatted for
TrueFFS and HRFS.
1. Create the TrueFFS block device.
> dev = tffsDevCreate (0,0)
New symbol "dev" added to kernel symbol table.
dev = 0x461eb8: value = 33535048 = 0x1ffb448
3. The TFFS device is now ready for use. List the contents.
> ll "/tffs:0"
Listing Directory /tffs:0:
drwxrwxrwx 1 0 0 8192 Jan 1 00:05 ./
drwxrwxrwx 1 0 0 8192 Jan 1 00:05 ../
value = 0 = 0x0
11
Error Detection and Reporting
11.1 Introduction
VxWorks provides an error detection and reporting facility to help debug
software faults. It does so by recording software exceptions in a specially
designated area of memory that is not cleared between warm reboots. The facility
also allows for selecting system responses to fatal errors, with alternate strategies
for development and deployed systems.
The key features of the error detection and reporting facility are:
■ A persistent memory region in RAM used to retain error records across warm reboots.
■ Mechanisms for recording various types of error records.
■ Error records that provide detailed information about run-time errors and the conditions under which they occur.
■ The ability to display error records and clear the error log from the shell.
■ Alternative error-handling options for the system’s response to fatal errors.
■ Macros for implementing error reporting in user code.
The hook routines described in the edrLib API reference can be used as the basis
for implementing custom functionality for non-RAM storage for error records.
For more information about error detection and reporting routines in addition to
that provided in this chapter, see the API reference entries for edrLib, edrShow,
edrErrLogLib, and edrSysDbgLib.
For information about related facilities, see 6.8 Memory Error Detection, p.333.
NOTE: This chapter provides information about facilities available in the VxWorks
kernel. For information about facilities available to real-time processes, see the
corresponding chapter in the VxWorks Application Programmer’s Guide.
11.2 Configuring Error Detection and Reporting Facilities
To use the error detection and reporting facility, the kernel must be configured
with the following components:
■ INCLUDE_EDR_PM
■ INCLUDE_EDR_ERRLOG
■ INCLUDE_EDR_SHOW
■ INCLUDE_EDR_SYSDBG_FLAG
As a convenience, the BUNDLE_EDR component bundle may be used to include all
of the above components.
NOTE: The persistent memory region is not supported for the symmetric
multiprocessing (SMP) configuration of VxWorks. For general information about
VxWorks SMP and for information about migration, see 15. VxWorks SMP and
15.15 Migrating Code to VxWorks SMP, p.704.
A cold reboot always clears the persistent memory region. The pmInvalidate( )
routine can also be used to explicitly destroy the region (making it unusable) so
that it is recreated during the next warm reboot.
The persistent-memory area is write-protected when the target system includes an
MMU and VxWorks has been configured with MMU support.
The size of the persistent memory region is defined by the PM_RESERVED_MEM
configuration parameter. By default the size is set to six pages of memory.
By default, the error detection and reporting facility uses one-half of whatever
persistent memory is available. If no other applications require persistent memory,
the component may be configured to use almost all of it. This can be accomplished
by defining EDR_ERRLOG_SIZE to be the size of PM_RESERVED_MEM less the size
of one page of memory.
If you increase the size of the persistent memory region beyond the default, you
must create a new boot loader with the same PM_RESERVED_MEM value. The
memory area between RAM_HIGH_ADRS and sysMemTop( ) must be big enough
to copy the VxWorks boot loader. If it exceeds the sysMemTop( ) limit, the boot
loader may corrupt the area of persistent memory reserved for core dump storage
when it loads VxWorks. The boot loader must therefore be rebuilt with a lower
RAM_HIGH_ADRS value.
! WARNING: If the boot loader is not properly configured (as described above), the
persistent memory region may be corrupted when the system boots.
The EDR_RECORD_SIZE parameter can be used to change the default size of error
records. Note that for performance reasons, all records are necessarily the same
size.
The pmShow( ) shell command (for the C interpreter) can be used to display the
amount of allocated and free persistent memory.
For more information about persistent memory, see 6.6 Reserved Memory, p.330
and the pmLib API reference.
! WARNING: A VxWorks 6.x boot loader must be used to ensure that the persistent
memory region is not cleared between warm reboots. Prior versions of the boot
loader may clear this area.
The error detection and reporting facilities provide for two sets of responses to
fatal errors. See 11.5 Fatal Error Handling Options, p.571 for information about these
responses, and various ways to select one for a run-time system.
11.3 Error Records
■ severity level
The event type identifies the context in which the error occurred (during system
initialization, or in a process, and so on).
The severity level indicates the seriousness of the error. In the case of fatal errors,
the severity level is also associated with alternative system responses to the error
(see 11.5 Fatal Error Handling Options, p.571).
The event types are defined in Table 11-1, and the severity levels in Table 11-2.
[Table 11-1: Type / Description]
The information collected depends on the type of event that occurs. In general, a
complete fault record is recorded. For some events, however, portions of the
record are excluded for clarity. For example, the record for boot and reboot events
excludes the register portion of the record.
Error records hold detailed information about the system at the time of the event.
Each record includes the following generic information:
■ date and time the record was generated
■ type and severity
■ operating system version
■ task ID
■ process ID, if the failing task is in a process
■ task name
■ process name, if the failing task is in a process
■ source file and line number where the record was created
■ a free-form text message
It also optionally includes the following architecture-specific information:
■ memory map
■ exception information
■ processor registers
■ disassembly listing (surrounding the faulting address)
■ stack trace
[Table 11-3: Command / Action]
11.5 Fatal Error Handling Options
The shell’s command interpreter provides comparable commands. See the API
references for the shell, or use the help edr command.
In addition to displaying error records, each of the show commands also displays
the following general information about the error log:
The operative error handling mode is determined by the system debug flag (see
11.5.2 Setting the System Debug Flag, p.573). The default is deployed mode.
Table 11-4 describes the responses in each mode for each of the event types. It also
lists the routines that are called when fatal records are created.
The error handling routines are called in response to certain fatal errors. Only fatal
errors—and no other event types—have handlers associated with them. These
handlers are defined in installDir/vxworks-6.x/target/config/comps/src/edrStub.c.
Developers can modify the routines in this file to implement different system
responses to fatal errors. The names of the routines, however, cannot be changed.
[Table 11-4: Event Type / Debug Mode / Deployed Mode (default) / Error Handling Routine]
Note that when the debugger is attached to the target, it gains control of the system
before the error-handling option is invoked, thus allowing the system to be
debugged even if the error-handling option calls for a reboot.
In order to provide the option of debug mode error handling for fatal errors,
VxWorks must be configured with the INCLUDE_EDR_SYSDBG_FLAG
component, which it is by default. The component allows a system debug flag to
be used to select debug mode, as well as reset to deployed mode (see 11.5.2 Setting
the System Debug Flag, p.573). If INCLUDE_EDR_SYSDBG_FLAG is removed from
VxWorks, the system defaults to deployed mode (see Table 11-4).
How the error detection and reporting facility responds to fatal errors, beyond
merely recording the error, depends on the setting of the system debug flag. When
the system is configured with the INCLUDE_EDR_SYSDBG_FLAG component, the
flag can be used to set the handling of fatal errors to either debug mode or
deployed mode (the default).
For systems undergoing development, it is obviously desirable to leave the system
in a state that can be more easily debugged, while in deployed systems the aim is
to have them recover as well as possible from fatal errors and continue operation.
The debug flag can be set in any of the following ways:
■ Statically, with boot loader configuration.
■ Interactively, at boot time.
■ Programmatically, using APIs in application code.
When a system boots, the banner displayed on the console displays information
about the mode defined by the system debug flag. For example:
ED&R Policy Mode: Deployed
The system can be set to either debug mode or deployed mode with the f boot
loader parameter when a boot loader is configured and built. The value of 0x000 is
used to select deployed mode. The value of 0x400 is used to select debug mode. By
default, it is set to deployed mode.
For information about configuring and building boot loaders, see 3.7 Customizing
and Building Boot Loaders, p.146.
To change the system debug flag interactively, stop the system when it boots. Then
use the c command at the boot-loader command prompt. Change the value of the
f parameter: use 0x000 for deployed mode (the default) or 0x400 for debug
mode.
The state of the system debug flag can also be changed in user code with the
edrSysDbgLib API.
11.7 Sample Error Record
All the macros use the same parameters. The trace parameter is a boolean value
indicating whether or not a traceback should be generated for the record. The msg
parameter is a string that is added to the record.
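For example, injecting user-level records from kernel code might look like the following sketch (the macro names follow the EDR_USER_*_INJECT family declared in edrLib.h; verify the exact spellings against the edrLib API reference):

```c
/* record a fatal user-level event, with a traceback, in the error log */
EDR_USER_FATAL_INJECT (TRUE, "application detected an unrecoverable state");

/* record an informational event, without a traceback */
EDR_USER_INFO_INJECT (FALSE, "checkpoint reached");
```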
<<<<<Memory Map>>>>>
<<<<<Exception Information>>>>>
data access
Exception current instruction address: 0x002110cc
Machine Status Register: 0x0000b032
Data Access Register: 0x50000000
Condition Register: 0x20000080
Data storage interrupt Register: 0x40000000
<<<<<Registers>>>>>
<<<<<Disassembly>>>>>
<<<<<Traceback>>>>>
12
Target Tools
12.1 Introduction
The Wind River host development environment provides tools that reside and
execute on the host machine. This approach conserves target memory and
resources. However, there are many situations in which it is desirable to make use
of target-resident facilities: a target-resident shell, kernel object-module loader,
debugging facilities, and system symbol table. The uses for these target-resident
tools include the following:
■ Debugging a deployed system over a serial connection.
■ Developing and debugging network protocols, where it is useful to see the target's view of a network.
■ Loading kernel modules from a target disk, from ROMFS, or over the network, and running them interactively (or programmatically).
The target-based tools are partially independent of each other. For example, the
kernel shell may be used without the kernel object-module loader, and vice versa.
However, for any of the other individual tools to be completely functional, the
system symbol table is required.
In some situations, it may be useful to use both the host-resident development
tools and the target-resident tools at the same time. In this case, additional facilities
are required so that both environments maintain consistent views of the system.
For more information, see 12.4.5 Synchronizing Host and Kernel Modules List and
Symbol Table, p.625.
For the most part, the target-resident facilities work the same as their host
development environment counterparts. For more information, see the
appropriate chapters of the Wind River Workbench User’s Guide and the VxWorks
Command-Line Tools User’s Guide.
This chapter describes the target-resident kernel shell, kernel object-module
loader, debugging facilities, and system symbol table. It also provides an overview
of the most commonly used VxWorks show routines, which are executed from the
shell. In addition, it describes the WDB target agent. WDB is a target-resident,
run-time facility required for connecting host tools with a VxWorks target system.
12.2 Kernel Shell
NOTE: The kernel shell operates only with int, long, short, char, double, or float
data types.
The kernel shell includes both a C interpreter and a command interpreter. Their
basic differences are as follows:
■ The command interpreter is designed primarily for starting, monitoring, and
debugging real-time process (RTP) applications. It can also be used in
conjunction with the kernel object module loader to load and unload kernel
object modules. It provides a UNIX-like shell environment.
■ The C interpreter is designed primarily for monitoring and debugging
kernel-based code. It can be used for loading, running, and unloading object
modules in conjunction with the kernel object-module loader. In addition, it
provides some APIs for starting and monitoring RTP applications. The C
interpreter operates on C routines.
For detailed information about the interpreters, see the Wind River Workbench Host
Shell User’s Guide.
For information about the commands supported by each interpreter, see Interpreter
Commands and References, p.580.
1. In versions of VxWorks prior to 6.0, the kernel shell was called the target shell. The new
name reflects the fact that the target-resident shell runs in the kernel and not in a process.
For information about adding new commands to the command interpreter, and
creating interpreters for the kernel shell, see 12.2.19 Adding Custom Commands to the
Command Interpreter, p.596 and 12.2.20 Creating a Custom Interpreter, p.601.
To switch between the shell’s C and command interpreters, use the cmd command
when the C interpreter is active to invoke the command interpreter, and the C
command when the command interpreter is active to invoke the C interpreter. The
following example illustrates switching from the C interpreter to the command
interpreter and back again (note the difference in the shell prompt for each
interpreter):
-> cmd
[vxWorks *]# C
->
You can also execute a command from the interpreter that is not active.
For information about individual C interpreter routines, see the usrLib, dbgLib,
and usrShellHistLib sections in the VxWorks Kernel API Reference, as well as
entries for the various show routine libraries.
The dbgLib routines are particularly useful (for example, semaphores can be
created and manipulated from the shell). Note that the kernel shell can also call any
C routine that returns a data type supported by the shell (int, long, short, char,
double, or float).
For information about the command interpreter commands, see the VxWorks
Kernel Shell Command Reference.
For information about help available from the kernel shell itself, see 12.2.6 Using
Kernel Shell Help, p.587.
The major differences between the target and host shells are:
■ The host and kernel shells do not provide exactly the same set of commands.
The kernel shell, for example, has commands related to network, shared data,
environment variables, and some other facilities that are not provided by the
host shell. However, the host and kernel shells provide a very similar set of
commands for their command and C interpreters.
■ Each shell has its own distinct configuration parameters, as well as those that
are common to both.
■ Both shells include a command and a C interpreter. The host shell also
provides a Tcl interpreter and a gdb interpreter. The gdb interpreter has about
40 commands and is intended for debugging processes (RTPs); and it
references host file system paths.
■ For the host shell to work, VxWorks must be configured with the WDB target
agent component. For the kernel shell to work, VxWorks must be configured
with the kernel shell component, as well as the target-resident symbol tables
component.
■ The host shell can perform many control and information functions entirely on
the host, without consuming target resources.
■ The kernel shell does not require any Wind River host tool support.
■ The host shell uses host system resources for most functions, so that it remains
segregated from the target. This means that the host shell can operate on the
target from the outside, whereas the kernel shell is part of the VxWorks kernel.
For example, because the kernel shell task is created with the taskSpawn( )
VX_UNBREAKABLE option, it is not possible to set breakpoints on a function
executed within the kernel shell task context. Therefore, the user must create a
new task, with sp( ), to make breakable calls. For example, from the kernel
shell you must do this:
-> b printf
-> sp printf, "Test\n"
Conflicts in task priority may also occur while using the kernel shell.
■
The kernel shell has its own set of terminal-control characters, unlike the host
shell, which inherits its setting from the host window from which it was
invoked. (See 12.2.7 Using Kernel Shell Control Characters, p.588.)
■
The kernel shell correctly interprets the tilde operator in pathnames for UNIX
and Linux host systems (or remote file systems on a UNIX or Linux host
accessed with ftp, rsh, NFS, and so on), whereas the host shell cannot. For
example, the following command executed from the kernel shell (with the C
interpreter) by user panloki would correctly locate the kernel module
/home/panloki/foo.o on the host system and load it into the kernel:
-> ld < ~/foo.o
■ When the kernel shell encounters a string literal (“...”) in an expression, it
allocates space for the string, including the null-byte string terminator, plus
some additional overhead.2 The value of the literal is the address of the string
in the newly allocated storage. For example, the following expression allocates
12-plus bytes from the target memory pool, enters the string in that memory
(including the null terminator), and assigns the address of the string to x:
-> x = "hello there"
The following expression can be used to return the memory to the target
memory pool (see the memLib reference entry for information on memory
management):
-> free (x)
Strings are allocated permanently rather than temporarily because, if a string
literal were passed to a routine being spawned as a task, by the time the task
executed and attempted to access the string, the kernel shell would already
have released (and possibly even reused) the temporary storage where the
string was held.
If the accumulation of memory used for strings has an adverse effect on
performance after extended development sessions with the kernel shell, you
can use the strFree() routine (with the C interpreter) or the equivalent
string free command (with the command interpreter).
The host shell also allocates memory on the target if the string is to be used
there. However, it does not allocate memory on the target for commands that
can be performed at the host level (such as lkup( ), ld( ), and so on).
2. The amount of memory allocated is rounded up to the minimum allocation unit for the
architecture in question, plus the amount for the header for that block of memory.
Required Components
To use the kernel shell, you must configure VxWorks with the INCLUDE_SHELL
component. The configuration parameters for this component are described in
Table 12-1.
You must also configure VxWorks with components for symbol table support,
using either the INCLUDE_STANDALONE_SYM_TBL or INCLUDE_NET_SYM_TBL
component. For information about configuring VxWorks with symbol tables, see
12.4.1 Configuring VxWorks with Symbol Tables, p.620.
Optional Components
INCLUDE_DISK_UTIL
Provides file utilities, such as ls and cd (it is required by
INCLUDE_DISK_UTIL_SHELL_CMD).
INCLUDE_SYM_TBL_SHOW
Provides symbol table show routines, such as lkup.
It can also be useful to include components for the kernel object-module loader
and unloader (see 12.3.1 Configuring VxWorks with the Kernel Object-Module Loader,
p.606). These components are required for the usrLib commands that load
modules into, and unload modules from, the kernel (see 12.2.10 Loading and
Unloading Kernel Object Modules, p.589).
Note that the BUNDLE_STANDALONE_SHELL and BUNDLE_NET_SHELL
component bundles are also available to provide for a standalone kernel shell or a
networked kernel shell.
The kernel shell can be configured statically with various VxWorks component
parameter options (as part of the configuration and build of the operating system),
as well as configured dynamically from the shell terminal for a shell session.
The default configuration is defined for all shell sessions of the system with the
component parameter SHELL_DEFAULT_CONFIG. However, the configuration for
the initial shell session launched at boot time can be set differently with the
SHELL_FIRST_CONFIG parameter, and the configuration for remote sessions
(telnet or rlogin) can be set with SHELL_REMOTE_CONFIG.
Each of these component parameters provides various sets of shell configuration
variables that can be set from the command line. These include INTERPRETER,
LINE_EDIT_MODE, VXE_PATH, AUTOLOGOUT, and so on.
Some of the configuration variables are dependent on the inclusion of other
VxWorks components in the operating system. For example, RTP_CREATE_STOP
is only available if VxWorks is configured with process support and the command
interpreter component (INCLUDE_RTP and INCLUDE_SHELL_INTERP_CMD).
With the C interpreter, shConfig( ) can be used to reconfigure the shell
interactively. Similarly, using the command interpreter, the shell configuration can
be displayed and changed with the set config command.
The kernel shell starts automatically after VxWorks boots, by default. If a console
window is open over a serial connection, the shell prompt appears after the shell
banner.
For information about booting VxWorks, and starting a console window, see the
Wind River Workbench User’s Guide: Setting up Your Hardware.
The shell component parameter SHELL_START_AT_BOOT controls whether an
initial shell session is started (TRUE) or not (FALSE). The default is TRUE. If set
to FALSE, no shell session starts automatically; it is up to the user to start one,
either programmatically (from an application), from the host shell, from a telnet
or rlogin shell session, or from the wtxConsole (a host tool). Use shellInit( ) or
shellGenericInit( ) to start a shell session.
Note that when a user calls a routine from the kernel shell, the routine is executed
in the context of the shell task. Therefore, if the routine hangs, the shell session
hangs as well.
For either the C or the command interpreter, the help command displays the basic
set of interpreter commands.
For information about references with detailed information on interpreter
commands, see Interpreter Commands and References, p.580. Also see the Wind River
Workbench Host Shell User’s Guide for information about interpreter use.
The kernel shell has its own set of terminal-control characters, unlike the host shell,
which inherits its setting from the host window from which it was invoked.
Table 12-4 lists the kernel shell’s terminal-control characters. The first four of these
are defaults that can be mapped to different keys using routines in tyLib (see also
tty Special Characters, p.396).
CTRL+D Logs out when the terminal cursor is at the beginning of a line.
ESC Toggles between input mode and edit mode (vi mode only).
The shell line-editing commands are the same as they are for the host shell. See the
ledLib API references.
The history of kernel shell activity can be recorded with the histSave( ) and
histLoad( ) commands for the C interpreter, and the history save and history load
commands for the command interpreter. The commands allow you to save the
shell history to, and load it from, a file. The commands are provided by the
INCLUDE_SHELL_HISTORY_FILE component and by the
INCLUDE_HISTORY_FILE_SHELL_CMD component.
For more information about these commands, see the usrShellHistLib entry in the
VxWorks Kernel API Reference (for the C interpreter), and the VxWorks Kernel Shell
Command Reference (for the command interpreter).
Aliases can be created for shell commands, as with a UNIX shell. They can be
defined programmatically using the shellCmdAliasAdd( ) and
shellCmdAliasArrayAdd( ) routines (see Sample Custom Commands, p.601 for
examples).
For information about creating command aliases interactively, see the VxWorks
Command-Line Tools User’s Guide.
Kernel object modules can be dynamically loaded into a running VxWorks kernel
with the target-resident loader. For information about configuring VxWorks with
the loader, and about its use, see 12.3 Kernel Object-Module Loader, p.605.
NOTE: For information about working with real-time processes from the shell, see
the Wind River Workbench Host Shell User’s Guide, the VxWorks Application
Programmer’s Guide: Applications and Processes, and the online Wind River Host Shell
API Reference.
The following is a typical load command from the shell, in which the user
downloads appl.o using the C interpreter:
-> ld < /home/panloki/appl.o
The ld( ) command loads an object module from a file, or from standard input into
the kernel. External references in the module are resolved during loading.
Once an application module is loaded into target memory, subroutines in the
module can be invoked directly from the shell, spawned as tasks, connected to an
interrupt, and so on. What can be done with a routine depends on the flags used
to download the object module (visibility of global symbols or visibility of all
symbols).
Modules can be reloaded with reld( ), which unloads the previously loaded
module of the same name before loading the new version. Modules can be
unloaded with unld( ).
For more information about ld( ), see the VxWorks API reference for usrLib. For
more information about reld( ) and unld( ), see the VxWorks API reference for
unldLib. Note that these routines are meant for use from the shell only; they
cannot be used programmatically.
Undefined symbols can be avoided by loading modules in the appropriate order.
Linking independent files before download can be used to avoid unresolved
references if there are circular references between them, or if the number of
modules is unwieldy. The static linker ldarch can be used to link interdependent
files, so that they can only be loaded and unloaded as a unit. (See Statically Linking
Kernel Application Modules, p.63)
Unloading a code module releases all of the resources used when loading the
module, as far as that is possible. This includes removing symbols from the target’s
symbol table, removing the module from the list of modules loaded in the kernel,
removing the text, data, and bss segments from the kernel memory they were
stored in, and freeing that memory. It does not include freeing memory or other
resources (such as semaphores) that were allocated or created by the module itself
while it was loaded.
The kernel shell includes task-level debugging utilities for kernel space if VxWorks
has been configured with the INCLUDE_DEBUG component. For information
about the debugging commands available, see the dbgLib entry in the VxWorks
Kernel API Reference for the C interpreter; and see the VxWorks Kernel Shell
Command Reference for the command interpreter.
The kernel shell can be used for task mode debugging of SMP systems, but it
cannot be used for system mode debugging. On SMP systems, software
breakpoints are always persistent, that is, retained in target memory (on UP
systems they are not). Kernel
shell debug commands are, however, not affected by persistent software
breakpoints. For example, disassembling an address on which a software
breakpoint is installed displays the real instruction.
Also note the following with regard to using the kernel shell to debug SMP
systems:
■
Breakpoint exceptions that occur while holding an ISR-callable spinlock are
ignored.
■
Breakpoint exceptions that occur while holding a task-only spinlock are
ignored.
■ Breakpoint exceptions that occur while interrupts are locked are ignored.
■ The output of different tasks can be intermingled. For example, creating a task
that performs a printf( ) operation, as follows:
-> sp printf,"Hello world."
Task spawned: id = 0x61707478, name = Hello world.t1
value = 1634759800 = 0x61707478 = 'x'
->
For information about the SMP configuration of VxWorks, see 15. VxWorks SMP.
Console login security can be provided for the kernel shell by adding the
INCLUDE_SECURITY component to the VxWorks configuration. In addition, the
shell’s SHELL_SECURE component parameter must be set to TRUE (it is set to
FALSE by default).
With this configuration, the shell task is not launched at startup. Instead, a login
task runs on the console, waiting for the user to enter a valid login ID and
password. After validation of the login, the shell task is launched for the console.
When the user logs out from the console, the shell session is terminated, and a new
login task is launched.
Also see Remote Login Security, p.594.
Users can log into a VxWorks system with telnet and rlogin and use the kernel
shell, provided that VxWorks has been configured with the appropriate
components. VxWorks can also be configured with a remote-login security feature
that imposes user ID and password constraints on access to the system.
Note that VxWorks does not support rlogin access from the VxWorks system to
the host.
When VxWorks is first booted, the shell’s terminal is normally the system console.
You can, however, use telnet to access the kernel shell from a host over the
network if VxWorks is built with the INCLUDE_TELNET_CLIENT component
(which can be configured with the TELNETD_MAX_CLIENTS parameter). This
component creates the tTelnetd task when the system boots. It is possible to start
several shells for different network connections. (Remote login is also available
with the wtxConsole tool.)
To access the kernel shell over the network, use the telnet command with the name
of the target VxWorks system. For example:
% telnet myVxBox
UNIX host systems can also use rlogin to access the kernel shell from the host.
VxWorks must be configured with the INCLUDE_RLOGIN component to create the
tRlogind task.
To end an rlogin connection to the shell, you can do any of the following:
■ Use the CTRL+D key combination.
■ Use the logout( ) command with the shell’s C interpreter, or the logout
command with the command interpreter.
■
Type the tilde and period characters at the shell prompt:
-> ~.
VxWorks can be configured with a remote-login security feature that imposes user
ID and password constraints on access to the system. The INCLUDE_SECURITY
component provides this facility.
Note that loginEncryptInstall( ) allows for use of other encryption routines (such
as SHA512).
A user is then prompted for a login user name and password when accessing the
VxWorks target remotely. The default login user name and password provided
with the supplied system image is target and password.
The default user name and password can be changed with the loginUserAdd( )
routine, which requires an encrypted password. To create an encrypted
password, use the vxencrypt tool on the host system. The tool prompts you to
enter a password, and then displays the encrypted version. The user name and
password can then be changed with the loginUserAdd( ) command with the
shell's C interpreter. For example, mysecret is encrypted as bee9QdRzs, and can
be used with the user name fred as follows to change the default settings:
-> loginUserAdd "fred", "bee9QdRzs"
To define a group of login names, include a list of loginUserAdd( ) calls in a
startup script and run the script after the system has been booted. Or include the
loginUserAdd( ) calls in usrAppInit( ); for information in this regard, see
2.6.10 Configuring VxWorks to Run Applications Automatically, p.66.
NOTE: The values for the user name and password apply only to remote login into
the VxWorks system. They do not affect network access from VxWorks to a remote
system. See the Wind River Network Stack for VxWorks 6 Programmer’s Guide.
The remote-login security feature can be disabled at boot time by specifying the
flag bit 0x20 (SYSFLAG_NO_SECURITY) in the flags boot parameter.
Also see 12.2.13 Console Login Security, p.592.
The do/while loop is necessary for waiting for the shell script to terminate.
Shell data is available with the shellDataLib library. This allows the user to
associate data values with a shell session (uniquely per shell session), and to access
them at any time. This is useful to maintain default values, such as the memory
dump width, the disassemble length, and so on. These data values are not
accessible interactively from the shell, only programmatically.
Shell configuration variables are available using the shellConfigLib library. This
allows the user to define default configurations for commands or for the shell
itself. Such variables already exist for the shell (see the configuration variables
RTP_CREATE_STOP or LINE_EDIT_MODE). They behave similarly to environment
strings in a UNIX shell. These variables can be common to all shell sessions, or local
to a shell session. They can be modified and displayed interactively by the shell
user with the command set config or the shell routine shConfig( ).
The kernel shell’s command interpreter consists of a line parser and of a set of
commands. It can be extended with the addition of custom commands written in
C. (The host shell’s command interpreter can likewise be extended, but with
commands written in Tcl.)
The syntax of a command statement is standard shell command-line syntax,
similar to that used with the UNIX sh shell or the Windows cmd shell. The syntax
is:
command [options] [arguments]
Blank characters (such as a space or tab) are valid word separators within a
statement. A blank character can be used within an argument string if it is escaped
(that is, prefixed with the back-slash character) or if the argument is quoted with
double quote characters. The semicolon character is used as a command separator
for entering multiple commands in a single input line. To be used as part of an
argument string, a semicolon must be escaped or quoted.
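These escaping and quoting rules parallel those of a UNIX sh, so they can be illustrated in an ordinary UNIX shell (the commands below are sh, not kernel shell input):

```shell
# A blank inside an argument must be escaped or quoted.
echo hello\ world
echo "hello world"

# A semicolon separates commands unless escaped or quoted.
echo first; echo second
echo "a;b"
```

Both echo forms print `hello world`; the third line runs two commands, and the quoted semicolon in the last line is ordinary argument text.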
The command parser splits the statement string into a command name string and
a string that consists of the options and the arguments. These options and
arguments are then passed to the command routine.
The command name may be a simple name (one word, such as reboot) or a
composite name (several words, such as task info). Composite names are useful
for creating classes of commands (commands for tasks, commands for processes,
and so on).
Options do not need to follow any strict format, but the standard UNIX option
format is recommended because it is handled automatically by the command
parser. If the options do not follow the standard UNIX format, the command
routine must parse the command strings to extract options and arguments. See
Defining a Command, p.598 for more information.
The special double-dash option (--) is used in the same way as in UNIX. That is, all
elements that follow it are treated as an argument string, and not options. For
example, if the command test accepts the options -a, -b and -f (the latter with an
extra option argument), then the following command sets the three options, and
passes the arg string as the argument of -f:
test -a -b -f arg
However, the next command only sets the -a option. Because they follow the
double-dash, the -b, -f, file, and arg elements of the command are passed to the C
routine of the test command as strings:
test -a -- -b -f file arg
This section describes the conventions used for creating commands and provides
information about examples of commands that can serve as models. Also see the
shellInterpCmdLib API reference.
It may also be useful to review the other shell documentation in order to facilitate
the task of writing new commands. See the shellLib, shellDataLib, and
shellConfigLib API references, as well as the material on the command interpreter
in the VxWorks Command-Line Tools User’s Guide.
Defining a Command
The string cmdFullname is the name of the command. It may be a composite name,
such as foo bar. In this case, foo is the top-level command name, and bar is a
sub-command of foo. The command name must be unique.
The func element is the name of the routine to call for that command name.
The string opt can be used in several different ways to define how option input is
passed to the command routine:
■ If opt is not NULL, it describes the possible options that the command accepts.
Each option is represented as a single character (note that the parser is case
sensitive). If an option takes an argument, a colon character (:) must be added
after the option character. For example, the following means that the
command accepts the -a, -v, and -f arg options:
avf:
■
If opt is not NULL and consists only of a colon, the option input is passed to
the command routine as a single string. It is up to the routine to extract options
and arguments.
■
If opt is NULL, the parser splits the input line into tokens and passes them as
traditional argc/argv parameters to the command routine.
Note that the command routine must be coded in a manner appropriate to how the
opt string is used in the command-definition structure (see Writing the Command
Routine, p.599).
This section describes how to write the C routine for a command, including how
the routine should handle command options.
The command definition structure and the command routine must be coordinated,
most obviously with regard to the command routine name, but also with regard to
the opt element of the structure defining the command:
■ If the opt element is not equal to NULL, the declaration of the routine must
include the options array:
int func
(
SHELL_OPTION options[] /* options array */
...
)
■ If the opt element is NULL, the declaration of the routine must use the
traditional argc/argv parameters:
int func
(
int argc, /* number of arguments */
char ** argv /* pointer to the array of arguments */
...
)
In this declaration, argc is the number of arguments of the command, and argv
is an array that contains the argument strings.
In the first case the parser populates the options[] array.
In the second case it splits and passes the arguments as strings using argc/argv to
the routine.
When the opt element is used to define options, the order in which they are listed
is significant, because that is the order in which they populate the options[] array.
For example, if opt is avf:
■ Option a is described by the first cell of the options array: options[0].
■ Option v is described by the second cell of the options array: options[1].
■ Option f is described by the third cell of the options array: options[2].
■ The argument of option f is options[2].string.
Each cell of the options array passed by the parser to func is composed of a boolean
value (TRUE if the option is set, FALSE if not) and a pointer to a string (pointer to
an argument), if so defined. For example, if -a has been set, the value of
options[0].isSet is TRUE. Otherwise it is FALSE.
A boolean field indicates whether a cell is the last one of the array. If the option
string opt is only a colon, the argument string of the command is passed to func
without any processing, in the string field of the first element of the options array.
The return value of the command routine is an integer. By convention, a return
value of zero means that the command has run without error. Any other value
indicates an error.
See Defining a Command, p.598 for information about the command-definition
structure. See Sample Custom Commands, p.601 for examples of command
structures and routines.
Shell commands have to be registered with the shell interpreter. This can be
done at any time; the shell component does not need to be initialized before
commands can be registered.
A command is registered in a topic section. The topic name and the topic
description must also be registered in the command interpreter database. The
routine used to do so is shellCmdTopicAdd( ). This routine accepts two
parameters: a unique topic name and a topic description. The topic name and
description are displayed by the help command.
Two routines are used to register commands: shellCmdAdd( ) adds a single
command, and shellCmdArrayAdd( ) adds an array of commands.
See Sample Custom Commands, p.601 for information about code that illustrates
command registration.
The code can be used with Wind River Workbench as a downloadable kernel
module project, or included in a kernel project. For information about using
Workbench, see the Wind River Workbench User’s Guide.
It can also be built from the command line with the command make CPU=cpuType.
For example:
make CPU=PENTIUM2
This resulting module can then be loaded into the kernel, using Workbench
(including the host shell) or kernel shell.
The tutorialShellCmdInit( ) routine must be called to register the commands
before they can be executed, regardless of how the code is implemented.
The kernel shell is designed to allow customers to add their own interpreter. Two
interpreters are provided by Wind River: the C interpreter and the command
interpreter. An interpreter receives the user input line from the shell, validates the
input against its syntax and grammar, and performs the action specified by the
input line (such as redirection, calling a VxWorks function, reading or writing
memory, or any other function that an interpreter might perform).
The interpreter prompt may contain format conversion characters that are
dynamically replaced by another string when the prompt is printed. For example,
%/ is replaced by the current shell session path, %n is replaced by the user name,
and so on (see
the shellPromptLib API reference for more information). Moreover, it is possible
to add new format strings with shellPromptFmtStrAdd( ). For example the
command-related process adds the format string %c to display the name of the
current working memory context.
The current interpreter is defined by the shell configuration variable named
INTERPRETER (the C macro SHELL_CFG_INTERP is defined for it in shellLib.h).
A shell user can switch to another interpreter by setting the value of the
configuration variable INTERPRETER to that interpreter's name. This can be done
either interactively or programmatically. For illustrative purposes, the following
commands change the interpreter from C to command and back again at the shell
command line:
-> shConfig "INTERPRETER=Cmd"
[vxWorks]# set config INTERPRETER=C
->
It is also possible to create custom commands to allow for switching to and from
a custom interpreter, similar to those used to switch between the command and C
interpreters (C is used with the command interpreter, and cmd with the C
interpreter, to switch between the two).
The following code fragment sets the command interpreter for the current session:
shellConfigValueSet (CURRENT_SHELL_SESSION, SHELL_CFG_INTERP, "Cmd");
For more information, see the set config command and the
shellConfigValueSet( ) routine in the shellConfigLib API reference.
If you choose to link the module with VxWorks instead of downloading it, you
have to make the shellInterpDemoInit( ) call in the user startup code (see
2.6.8 Linking Kernel Application Object Modules with VxWorks, p.64 and
2.6.10 Configuring VxWorks to Run Applications Automatically, p.66).
The code can be used with Wind River Workbench as a downloadable kernel
module project, or included in a kernel project. For information about using
Workbench, see the Wind River Workbench User’s Guide.
12.3 Kernel Object-Module Loader
The kernel loader also enables you to dynamically extend the operating system,
since once code is loaded, there is no distinction between that code and the code
that was compiled into the image that booted.
Finally, you can configure the kernel loader to optionally handle memory
allocation, on a per-load basis, for modules that are downloaded. This allows
flexible use of the target's memory. The loader can either dynamically allocate
memory for downloaded code, and free that memory when the module is
unloaded; or, the caller can specify the addresses of memory that has already been
allocated. This allows the user more control over the layout of code in memory. For
more information, see 12.3.5 Specifying Memory Locations for Loading Objects, p.612.
The functionality of the kernel loader is provided by two components: the loader
proper, which installs the contents of object modules in the target system's
memory; and the unloader, which uninstalls object modules. In addition, the
loader relies on information provided by the system symbol table.
! CAUTION: Do not unload an object module while its tasks are running. Doing so
may result in unpredictable behavior.
By default, the kernel object-module loader is not included in VxWorks. To use the
loader, you must configure VxWorks with the INCLUDE_LOADER component.
Adding the INCLUDE_LOADER component automatically includes several other
components that together provide complete loader functionality. These
components are:
INCLUDE_UNLOADER
Provides facilities for unloading object modules.
INCLUDE_MODULE_MANAGER
Provides facilities for managing loaded modules and obtaining information
about them. For more information, see the VxWorks API reference for
moduleLib.
INCLUDE_SYM_TBL
Provides facilities for storing and retrieving symbols. For more information,
see 12.4 Kernel Symbol Tables, p.619 and the VxWorks API reference for
symLib.
INCLUDE_SYM_TBL_INIT
Specifies a method for initializing the system symbol table.
! CAUTION: If you want to use the target-resident symbol tables and kernel
object-module loader in addition to the host tools, you must configure VxWorks
with the INCLUDE_WDB_MDL_SYM_SYNC component to provide host-target
symbol table and module synchronization. This component is included by default
when both the kernel loader and WDB agent are included in VxWorks. For more
information, see 12.4.4 Using the VxWorks System Symbol Table, p.624.
The kernel loader and unloader are discussed further in subsequent sections, and
in the VxWorks API references for loadLib and unldLib.
The API routines, shell C interpreter commands, and shell command interpreter
commands available for loading and unloading kernel modules are described in
Table 12-5 and Table 12-6.
Note that the kernel loader routines can be called directly from the C interpreter or
from code. The shell commands, however, should only be called from the shell and
not from within programs.3 In general, shell commands handle auxiliary
operations, such as opening and closing a file; they also print their results and any
error messages to the console.
The use of some of these routines and commands is discussed in the following
sections.
VxWorks
Kernel Programmer's Guide, 6.6
For detailed information, see the loadLib, unldLib, and usrLib API references, the
shell command reference, as well as 12.3.3 Summary List of Kernel Object-Module
Loader Options, p.608.
The kernel loader's behavior can be controlled using load flags passed to loadLib
and unldLib routines. Many of these flags can be combined (using a logical OR
operation); some are mutually exclusive. The tables in this section group these
options by category.
LOAD_NO_SYMBOLS (0x2)
No symbols from the module are registered in the system's symbol table.
Consequently, linkage against the code module is not possible. This option is
useful for deployed systems, when the module is not supposed to be used in
subsequent link operations.
LOAD_LOCAL_SYMBOLS (0x4)
Only local (private) symbols from the module are registered in the system's
symbol table. No linkage is possible against this code module's public
symbols. This option is not very useful by itself, but is one of the base
options for LOAD_ALL_SYMBOLS.
LOAD_GLOBAL_SYMBOLS (0x8)
Only global (public) symbols from the module are registered in the system's
symbol table. No linkage is possible against this code module's private
symbols. This is the kernel loader's default when the loadFlags parameter is
left as NULL.
LOAD_ALL_SYMBOLS (0xC)
Local and global symbols from the module are registered in the system's
symbol table. This option is useful for debugging.
Table 12-9 Kernel Loader Options for Code Module Visibility
HIDDEN_MODULE (0x10)
The code module is not visible from the moduleShow( ) routine or the host
tools. This is useful on deployed systems when an automatically loaded
module should not be detectable by the user. It only affects user visibility,
and does not affect linking with other modules.
Table 12-10 Kernel Unloader Options for Breakpoints and Hooks
UNLD_KEEP_BREAKPOINTS (0x1)
The breakpoints are left in place when the code module is unloaded. This is
useful for debugging, as all breakpoints are otherwise removed from the
system when a module is unloaded.
UNLD_FORCE (0x2)
By default, the kernel unloader does not remove the text sections when they
are used by hooks in the system. This option forces the unloader to remove
the sections anyway, at the risk of unpredictable results.
LOAD_FULLY_LINKED (0x20)
Provides for loading fully linked modules (that is, modules without any
unresolved symbols or relocations).
Note that symbol tables are not required when VxWorks is configured with
support for loading fully-linked object modules (the option is listed in
Table 12-13). For more information, see loadModuleAt( ) in the VxWorks Kernel
API Reference.
For information about loading C++ modules from the shell, see 13.4 Using C++ in
Signal Handlers and ISRs, p.651. Also see 12.3.3 Summary List of Kernel Object-Module
Loader Options, p.608 for C++ kernel loader and unloader options.
For instance, if the data segment contains sections requiring 128- and 264-byte
alignment, in that order, allocate memory aligned on 264 bytes.
The kernel unloader can remove the segments from wherever they were installed,
so no special instructions are required to unload modules that were initially loaded
at specific addresses. However, if the base address was specified in the call to the
loader, then, as part of the unload, the unloader does not free the memory area used
to hold the segment. This allocation was performed by the caller, and the
de-allocation must be as well.
The following sections describe the criteria used to load modules and issues with
loading that may need to be taken into account.
Relocatable object files are used for modules that can be dynamically loaded into
the VxWorks kernel and run. In contrast to an executable file, which is fully linked
and ready to run at a specified address, a relocatable file is an object file for which
text and data sections are in a transitory form, meaning that some addresses are
not yet known. Relocatable object modules are generated by the compiler with .o
extension (similar to the ones produced as an intermediate step between the
application source files—.c, .s, .cpp— and the corresponding executable files that
run in VxWorks processes).
Relocatable files are used for downloadable modules because the layout of the
VxWorks image and downloaded code in memory are not available to a compiler
running on a host machine. Therefore, the code handled by the target-resident
kernel loader must be in relocatable form, rather than an executable. The loader
itself performs some of the same tasks as a traditional linker in that it prepares the
code and data of an object module for the execution environment. This includes the
linkage of the module's code and data to other code and data.
Once installed in the system's memory, the entity composed of the object module's
code, data, and symbols is called a code module. For information about installed
code modules, see the VxWorks API reference for moduleLib.
The kernel object-module loader performs some of the same tasks as a traditional
linker in that it prepares the code and data of an object module for the execution
environment. This includes the linkage of the module's code and data to other code
and data.
The loader is unlike a traditional linker in that it does this work directly in the
target system's memory, rather than producing an output file.
In addition, the loader uses routines and variables that already exist in the
VxWorks system, rather than library files, to relocate the object module that it
loads. The system symbol table (see 12.4.4 Using the VxWorks System Symbol Table,
p.624) is used to store the names and addresses of functions and variables already
installed in the system. This has the side effect that once symbols are installed in the
system symbol table, they are available for future linking by any module that is
loaded. Moreover, when attempting to resolve undefined symbols in a module,
the loader uses all global symbols compiled into the target image, as well as all
global symbols of previously loaded modules. As part of the normal load process,
all of the global symbols provided by a module are registered in the system symbol
table. You can override this behavior by using the LOAD_NO_SYMBOLS load flag
(see Table 12-8).
The system symbol table allows name clashes to occur. For example, suppose a
symbol named func exists in the system. A second symbol named func is added to
the system symbol table as part of a load. From this point on, all links to func are
to the most recently loaded symbol. See also, 12.4.1 Configuring VxWorks with
Symbol Tables, p.620.
The kernel object-module loader loads code modules in a sequential manner. That
is, a separate load is required for each separate code module. The user must,
therefore, consider dependencies between modules and the order in which they
must be loaded to link properly.
Suppose a user has two code modules named A_module and B_module, and
A_module references symbols that are contained in B_module. The user may
either use the host-resident linker to combine A_module and B_module into a
single module, or load B_module first and then load A_module.
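At the shell's C interpreter, loading in dependency order would look like the following transcript (the module names are the hypothetical ones from the text):

```
-> ld < B_module.o
-> ld < A_module.o
```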
When code modules are loaded, they are irreversibly linked to the existing
environment, meaning that once a link from a module to an external symbol is
created, that link cannot be changed without unloading and reloading the module.
Therefore dependencies between modules must be taken into account when
modules are loaded to ensure that references can be resolved for each new module,
using either code compiled into the VxWorks image or modules that have already
been loaded into the system.
Failure to do so results in incompletely resolved code, which retains references to
undefined symbols at the end of the load process. For diagnostic purposes, the
loader prints a list of missing symbols to the console. This code should not be
executed, since the behavior when attempting to execute an improperly relocated
instruction is not predictable.
Normally, if a load fails, the partially installed code is removed. However, if the
only failure is that some symbols are unresolved, the code is not automatically
unloaded (but the API returns NULL to indicate failure programmatically). This
allows the user to examine the result of the failed load, and even to execute
portions of the code that are known to be completely resolved. Therefore, code
modules that have unresolved symbols must be removed by a separate unload
command (unld( ) with the C interpreter, or module unload with the command
interpreter).
Note that the sequential nature of the loader means that unloading a code module
which has been used to resolve another code module may leave references to code
or data which are no longer available. Execution of code holding such dangling
references may have unexpected results.
See Statically Linking Kernel Application Modules, p.63.
Common symbols provide a challenge for the kernel object-module loader that is
not confronted by a traditional linker. Consider the following example:
#include <stdio.h>
int willBeCommon;
■ Common symbols are identified with any matching symbol that was not in the
original boot image (LOAD_COMMON_MATCH_USER).
Note that these options only control the loader’s behavior with regard to the
operation in which they are used—they only affect what happens with the symbols
of the module being loaded. For example, consider the case in which module A has
common symbols, and module B has undefined symbols that are resolved by
module A. If module A is loaded with the LOAD_COMMON_MATCH_NONE
option, this does not prevent module B from being linked against A’s symbols
when B is loaded next. That is, the load flag used with module A does not prevent
the loader from resolving undefined references in module B against module A.
The option to specify matching of common symbols may be set in each call using
the loadLib API. Extreme care should be used when mixing the different possible
common matching behaviors for the loader. It is much safer to pick a single
matching behavior and to use it for all loads. For detailed descriptions of the
matching behavior under each option, see Table 12-12.
NOTE: The shell load command, ld, has a different mechanism for
controlling how common symbols are handled, and different default behavior. For
details, see the reference entry for usrLib.
Some programming languages (such as C++) use the weak binding class in addition
to the global and local classes. The ELF specification mandates that a weak symbol
is ignored if there is an existing global symbol with the same name. This is the
default behavior provided by the VxWorks 6.x object module loader, specified
with the LOAD_WEAK_MATCH_ALL loader option.
Note, however, that the default behavior for VxWorks 5.x was to always register
weak symbols as globals, regardless of any existing definitions. To replicate this
behavior, use the LOAD_WEAK_MATCH_NONE loader option.
Symbols can be removed from object files by stripping them, which is commonly
done with the GNU strip utility. The main purpose of stripping object files is to
reduce their size.
For some architectures, function calls are performed using relative branches by
default. This causes problems if the routine that is called resides farther away in
memory than a relative branch instruction can reach (which may occur if the board has a
large amount of memory).
In this case, a module load fails; the kernel module loader prints an error message
about relocation overflow and sets the
S_loadElfLib_RELOCATION_OFFSET_TOO_LARGE errno (kernel shell).
To deal with this problem, compilers (both GNU and Wind River) have options to
prevent the use of relative branches for function calls. See the VxWorks Architecture
Supplement for more information.
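As one architecture-specific illustration (an assumption to verify against the Architecture Supplement for your target), the GNU compiler for PowerPC accepts -mlongcall, which generates absolute calls instead of relative branches:

```make
# Hypothetical makefile fragment: force absolute function calls
# (GNU compiler, PowerPC) so modules loaded far from the kernel text
# do not overflow relative-branch relocations. Check the VxWorks
# Architecture Supplement for the option that applies to your
# architecture and compiler.
CFLAGS += -mlongcall
```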
The loader cannot perform small data area (SDA) relocation. If a kernel module is
built with SDA, the loader will not load it, but generates the following error:
■ S_loadLib_SDA_NOT_SUPPORTED
The Wind River Compiler (diab) assembler flag -Xwarn-use-greg can be used to
generate the following warning if code accesses the SDA reserved registers:
Xwarn-use-greg=0x2004
The objdump and readelf tools can be used to see if there are any SDA relocations
in a module. The relocation types pertaining to SDA are described in the ELF
architecture ABI supplement.
The SDA_DISABLE makefile variable can be used to disable SDA, as follows:
SDA_DISABLE=TRUE
12.4 Kernel Symbol Tables
Symbol Entries
Each symbol in the table comprises these items:
name
The name is a character string derived from the name in the source code.
value
The value is usually the address of the element that the symbol refers to: either
the address of a routine, or the address of a variable (that is, the address of the
contents of the variable). The value is represented by a pointer.
group
The group number of the module that the symbol comes from.
symRef
The symRef is usually the module ID of the module that the symbol comes
from.
type
The type provides additional information about the symbol. For symbols in
the system symbol table, it is one of the types defined in
installDir/vxworks-6.x/target/h/symbol.h. For example, SYM_UNDF,
SYM_TEXT, and so on. For user symbol tables, this field can be user-defined.
Symbol Updates
The symbol table is updated whenever modules are loaded into, or unloaded from,
the target. You can control the precise information stored in the symbol table by
using the kernel object-module loader options listed in Table 12-8.
You can easily search all symbol tables for specific symbols. To search from the
shell with the C interpreter, use lkup( ). You can also use symShow( ) for general
symbol information. For details, see the API references for these commands.
To search programmatically, use the symbol library APIs, which can search the
symbol table by address, by name, and by type; they also provide a function that
applies a user-supplied function to every symbol in the symbol table. For
details, see the symLib reference entry.
VxWorks can be configured with support for user symbol tables or with support
for both user symbol tables and a system symbol table.
For information about user symbol tables, see 12.4.6 Creating and Using User Symbol
Tables, p.625. For information about the system symbol table, see 12.4.4 Using the
VxWorks System Symbol Table, p.624.
A built-in system symbol table copies information into wrapper code, which is
then compiled and linked into the kernel when the system is built.
Although using a built-in symbol table can produce a larger VxWorks image file
than might otherwise be the case, it has several advantages, particularly for
production systems:
■ It requires less memory than using a loadable symbol table—as long as you are
not otherwise using the kernel object-module loader and associated
components that are required for a loadable symbol table.
■ It does not require that the target have access to a host (unlike the
downloadable symbol table).
■ It is faster to load the single image file than loading separate files for the
VxWorks image and the loadable symbol table .sym file because some remote
operations on a file take longer than the data transfer to memory.
■ It is useful in deployed ROM-based systems that have no network
connectivity, but require the shell as user interface.
A built-in system symbol table relies on the makeSymTbl utility to obtain the
symbol information. This utility uses the GNU utility nmarch to generate
information about the symbols contained in the image. Then it processes this
information into the file symTbl.c that contains an array, standTbl, of type
SYMBOL described in Symbol Entries, p.619. Each entry in the array has the symbol
name and type fields set. The address (value) field is not filled in by makeSymTbl.
The symTbl.c file is treated as a normal .c file, and is compiled and linked with the
rest of the VxWorks image. As part of the normal linking process, the toolchain
linker fills in the correct address for each global symbol in the array. When the
build completes, the symbol information is available in the image as a global array
of VxWorks symbols. After the kernel image is loaded into target memory at
system initialization, the information from the global SYMBOL array is used to
construct the system symbol table.
The definition of the standTbl array can be found in the following files after the
VxWorks image is built:
■ installDir/vxworks-6.x/target/config/bspName/symTbl.c for images built
directly from a BSP directory.
■ installDir/vxworks-6.x/target/proj/projDir/buildDir/symTbl.c for images using
the project facility.
A loadable symbol table is built into a separate object module file (vxWorks.sym
file). This file is downloaded to the system separately from the system image, at
which time the information is copied into the symbol table.
The loadable system symbol table uses an ELF file named vxWorks.sym, rather
than the symTbl.c file. The file is created by using the objcopy utility to strip all
sections, except the symbol information, from the final VxWorks image.
During boot and initialization, the vxWorks.sym file is downloaded using the
kernel object-module loader, which directly calls loadModuleAt( ). To download
the vxWorks.sym file, the loader uses the current default device, which is
described in 7.3.1 Filenames and the Default Device, p.365.
To download the VxWorks image, the loader also uses the default device as it is
set at the time of that download. Therefore, the default device used to download
the vxWorks.sym file may, or may not, be the same device, because the default
device can be set, or reset, by other initialization code that runs after the
VxWorks image is downloaded but before the symbol table is downloaded.
Nevertheless, in standard VxWorks configurations that do not include
customized system initialization code, the default device at the time of the
download of the vxWorks.sym file is usually set to one of the network devices,
using either rsh or ftp as the protocol.
Once it is initialized, the VxWorks system symbol table includes a complete list of
the names and addresses of all global symbols in the compiled image that is
booted. This information is needed on the target to enable the full functionality of
the target tools libraries.
The target tools maintain the system symbol table with up-to-date name and
address information for all of the code statically compiled into the system or
dynamically downloaded. (The LOAD_NO_SYMBOLS option can be used to load
modules so that their symbols do not appear in the system symbol table;
see Table 12-8).
Symbols are dynamically added to, and removed from, the system symbol table
when:
■ modules are loaded and unloaded
■ variables are dynamically created from the shell
■ the wdb agent synchronizes symbol information with the host (see
12.4.5 Synchronizing Host and Kernel Modules List and Symbol Table, p.625)
The exact dependencies between the system symbol table and the other target tools
are as follows:
■ Kernel Object-Module Loader: The kernel loader requires the system symbol
table. The system symbol table does not require the presence of the loader.
■ Debugging Facilities: The target-based symbolic debugging facilities and user
commands such as i and tt, rely on the system symbol table to provide
information about entry points of tasks, symbolic contents of call stacks, and
so on.
■ Kernel Shell: The kernel shell does not strictly require the system symbol table,
but its functionality is greatly limited without it. The kernel shell requires the
system symbol table to provide the ability to run functions using their
symbolic names. The kernel shell uses the system symbol table to execute shell
commands, to call system routines, and to edit global variables. The kernel
shell also includes the library usrLib, which contains the commands i, ti, sp,
period, and bootChange.
■ WDB Target Agent: The WDB target agent adds symbols to the system symbol
table as part of the symbol synchronization with the host.
If the facilities provided by the symbol table library are needed for user
(non-operating system) code, another symbol table should be created and
manipulated using the symbol library. See 12.4.6 Creating and Using User Symbol
Tables, p.625.
NOTE: If you choose to use both the host-resident and target-resident tools at the
same time, use the synchronization method to ensure that both the host and target
resident tools share the same list of symbols.
12.4.5 Synchronizing Host and Kernel Modules List and Symbol Table
If both host tools and target tools are going to be used with a target system, the
modules list and symbol table maintained on the host system must be
synchronized with the modules list and symbol table maintained on the target.
This ensures that the host and target tools share the same list of symbols.
The host tools maintain their own modules list and symbol table—the target server
modules list and symbol table—on the host. In this chapter it is referred to as the
host modules list and symbol table.
Module list and symbol table synchronization is provided automatically when
VxWorks is configured with the WDB target agent and the kernel object-module
loader (INCLUDE_WDB and INCLUDE_LOADER). To remove this feature, you
need only remove the INCLUDE_WDB_MDL_SYM_SYNC component.
Note that the modules and symbols synchronization will only work if the WDB
agent is in task mode. If the WDB agent is in system mode, the modules and
symbols added from both the host and the target will not be synchronized.
For information about WDB, see 12.6 WDB Target Agent, p.628.
Although it is possible for user code in the kernel to manipulate symbols in the
system’s symbol table, this is not a recommended practice. Addition and removal
of symbols to and from the symbol table should only be carried out by operating
system libraries. Any other use of the system symbol table may interfere with the
proper operation of the operating system; and even simply introducing additional
symbols could have an adverse and unpredictable effect on linking any modules
that are subsequently downloaded.
Therefore, user-defined symbols should not be added programmatically to the
system symbol table. Instead, when user code in kernel space requires a symbol
table for its own purposes, a user symbol table should be created. For more
information, see the VxWorks API reference for symLib.
12.5 Show Routines
12.6 WDB Target Agent
WDB can be configured for system mode debugging, task mode debugging, or
both (switching between the two modes under the control of host tools). In task
mode, WDB runs as a kernel task. In system mode, WDB operates independently
of the kernel, and the kernel is under WDB’s control. With system mode, WDB can
be started before VxWorks is running, which can be particularly useful in the early
stages of porting a BSP to a new board. (See Debugging Mode Options, p.634 and
12.6.6 Starting the WDB Target Agent Before the VxWorks Kernel, p.642).
The WDB agent’s interface to communications drivers avoids the run-time I/O
system, so that the WDB agent remains independent of the run-time OS. Drivers
for the WDB agent are low-level drivers that provide both interrupt-driven and
polling-mode operation. Polling mode is required to support system-level control
of the target.
The WDB agent synthesizes the target-control strategies of task-level and
system-wide debugging. The agent can execute in either mode and switch
dynamically between them, provided the appropriate drivers are present in the
Board Support Package (BSP). This permits debugging of any aspect of an
embedded application, whether it is a task, an interrupt service routine, or the
kernel itself.
NOTE: If both host tools and target tools are going to be used with a target system,
the modules list and symbol table maintained on the host system must be
synchronized with the modules list and symbol table maintained on the target.
This ensures that the host and target tools share the same list of symbols. See the
discussion of the INCLUDE_WDB_MDL_SYM_SYNC component in Additional
Options, p.636, and 12.4.5 Synchronizing Host and Kernel Modules List and Symbol
Table, p.625.
The WDB components required for different types of host-target connections are
described in Table 12-16. VxWorks should be configured with only one WDB
communication component.
! WARNING: Both VxWorks and the host target connection must be configured for
the same type of host-target communication facilities. For example, if a serial
connection is going to be used, then VxWorks must be configured with
INCLUDE_WDB_COMM_SERIAL and the host target server must be configured
with the wdbserial back end. For more information about target connection
configuration, see the Wind River Workbench User’s Guide: New Target Server
Connections.
If you want to use a different device, set this parameter to the name of the
device (for example, dc).
WDB_END_DEVICE_UNIT
If WDB_END_DEVICE_NAME is specified, set this parameter to the unit
number of the END device you want to use.
WDB_TTY_CHANNEL
The channel number. Use 0 if you have only one serial port on the target. Use
1 (the default) if you want to keep the VxWorks console on the first serial port.5
If your target has a single serial channel, you can use the target server virtual
console to share the channel between the console and the target agent. You
must configure your system with the CONSOLE_TTY parameter set to NONE
and the WDB_TTY_CHANNEL parameter set to 0.
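That single-channel configuration can be expressed as component parameters, shown here config.h-style (whether you set them in a BSP config.h or through the project facility depends on your workflow):

```c
/* Share the single serial channel between the console and the WDB
 * agent, as described above. Parameter names are from this section. */
#undef  CONSOLE_TTY
#define CONSOLE_TTY      NONE   /* give up the dedicated console     */
#undef  WDB_TTY_CHANNEL
#define WDB_TTY_CHANNEL  0      /* WDB uses serial channel 0         */
```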
When multiplexing the virtual console with WDB communications, excessive
output to the console may lead to target server connection failures. The
following may help resolve this problem:
■ Decrease the amount of data being transmitted to the virtual console from
your application.
■ Increase the time-out period for the target server.
■ Increase the baud rate of the target agent and the target server connection.
INCLUDE_WDB_TTY_TEST
When set to TRUE, this parameter causes the words WDB READY to be displayed
on the WDB serial port on startup. By default, this parameter is set to TRUE.
WDB_TTY_ECHO
When set to TRUE, all characters received by the WDB agent are echoed on the
serial port. As a side effect, echoing stops the boot process until a target server
is attached. By default, this parameter is set to FALSE.
5. VxWorks serial channels are numbered starting at 0. Thus Channel 1 corresponds to the
second serial port if the board’s ports are labeled starting at 1. If your board has only one
serial port, you must change WDB_TTY_CHANNEL to 0 (zero).
WDB provides two debugging mode options by default: system mode and task
mode. With system mode, the entire system is stopped when a breakpoint is hit.
This allows you to set breakpoints anywhere, including ISRs. Note that for SMP
systems, software breakpoints are persistent—that is, they are retained in target
memory (for UP systems they are not).
With task mode, a task or group of tasks is stopped when a breakpoint is hit, but
an exception or an interrupt is not stopped if it hits a breakpoint. When the WDB
agent is configured for task mode, the tWdbTask task is used to handle all WDB
requests from the host.
You can include support for both modes, which allows tools such as the host shell
or the debugger to dynamically switch from one mode to the other.
For information about WDB behavior with SMP systems, see 12.6.2 WDB Target
Agent and VxWorks SMP, p.638. For information about the SMP configuration of
VxWorks, see 15. VxWorks SMP.
WDB_RESTART_TIME
The delay (in seconds) before restarting the WDB agent task when it gets an
error (the default is 10 seconds).
WDB_TASK_OPTIONS
The options parameter of the WDB task.
WDB_TASK_PRIORITY
The priority of the WDB task. The default priority is 3.
WDB_SPAWN_STACK_SIZE
The stack size used by tasks spawned by the WDB target agent.
The INCLUDE_WDB_RTP component provides support for real time process (RTP)
operations (creation, deletion) and notifications (creation, deletion). This
component is automatically included if the system supports real time processes
(INCLUDE_RTP) and task debugging mode (INCLUDE_WDB_TASK).
Initialization Options
connection cannot be used with this mode because the network has not been
initialized when WDB starts. Also see 12.6.6 Starting the WDB Target Agent Before
the VxWorks Kernel, p.642.
When WDB starts after kernel initialization, all WDB features are fully supported.
It is, of course, not possible to debug kernel initialization activity.
Additional Options
The following components provide additional optional functions. You can include
or exclude them based on your requirements.
The INCLUDE_WDB_BANNER component displays the WDB banner on the
console.
The INCLUDE_WDB_BP component provides support for breakpoints in the WDB
agent itself. This component is needed if you want to debug the target from a host
tool. The configuration parameter for this component is WDB_BP_MAX, which
specifies the maximum number of breakpoints allocated on the target at startup.
When this number of breakpoints is reached, it is still possible to allocate space for
new breakpoints in task mode. In system mode, however, it is not possible to set
additional breakpoints once the limit has been reached.
The INCLUDE_WDB_BP_SYNC component provides a breakpoint synchronization
mechanism between host tools and target system. If this component is included in
the VxWorks configuration, host tools are notified of any breakpoint creations and
deletions that are made from the kernel shell. The component is automatically
included when debug is provided for both the kernel shell (with
INCLUDE_DEBUG) and the host tools (with INCLUDE_WDB_TASK_BP).
The INCLUDE_WDB_CTXT component provides support for context operations:
creation, deletion, suspension, resumption. A context can be a task, a real time
process, or the system itself.
The INCLUDE_WDB_DIRECT_CALL component allows you to call functions in the
WDB agent context directly.
The INCLUDE_WDB_EVENTPOINTS component adds support for eventpoints. An
eventpoint can be a breakpoint, an eventpoint on context creation, or an eventpoint
on context deletion. This component is the core component for all eventpoint
types. Each time an eventpoint is hit, the corresponding event is sent to the target
server.
The INCLUDE_WDB_EVENTS component adds support for asynchronous events.
Asynchronous events are sent from the target to the target server, to notify host tools
12 Target Tools
12.6 WDB Target Agent
about event activity on the target; for example, if a breakpoint has been hit, an
exception occurred, or a context (task or process) has started or exited. The
component is required (and is automatically included) when using breakpoints,
exception notification, context start/exit notification, and so on.
The INCLUDE_WDB_EXC_NOTIFY component adds support for exception
notification. When an exception occurs on the target, the appropriate event is sent
to the target server.
The INCLUDE_WDB_EXIT_NOTIFY component adds support for context deletion
notification. To be notified of a context exit, an eventpoint of type WDB_CTX_EXIT
must be set. Tools set this eventpoint when they must be notified. This component
supports notification for task and real time process contexts.
The INCLUDE_WDB_FUNC_CALL component handles function calls by spawning
tasks to run the functions. This service is only available in task mode.
The INCLUDE_WDB_GOPHER component provides support for the Gopher
information gathering language. It is used by many host tools and cannot be
removed from a system that uses other WDB options. The configuration
parameters for this component are:
■ WDB_GOPHER_TAPE_LEN, which defines the length of one gopher tape.
Gopher tapes are used to record and upload data processed by the gopher.
The default tape length is 1400 words, each of which is 32 bits wide.
■ WDB_GOPHER_TAPE_NB, which defines the maximal number of gopher
tapes that can be dynamically allocated. At startup, only one gopher tape
is available. As needed, more tapes can be allocated. Dynamic allocation
of tapes is only available in task mode. The default number of tapes is 10.
The INCLUDE_WDB_MEM component provides support for reading from, and
writing to, target memory.
The INCLUDE_WDB_REG component provides support for reading from, and
writing to, registers. The WDB_REGS_SIZE configuration parameter defines the
size of an internal memory buffer used by the WDB agent to store coprocessor
registers (to allow access to the registers in system mode).
The INCLUDE_WDB_START_NOTIFY component provides support for context
creation notification. To be notified of a context creation, an eventpoint of type
WDB_CTX_START must be set. Tools set this eventpoint when they must be
notified. This component supports task and real time process contexts.
The INCLUDE_WDB_TASK_BP component provides support for breakpoints in
task debugging mode. This component is automatically included when WDB
VxWorks
Kernel Programmer's Guide, 6.6
This section describes the behavior of the WDB target agent when used with an
SMP configuration of VxWorks. For information about the SMP configuration of
VxWorks, see 15. VxWorks SMP.
Wind River host tools can be used to debug VxWorks targets on a TIPC network
that do not have direct access to the host by way of TCP/IP or a serial line. In order
to do so, however, one of the VxWorks targets on the TIPC network must serve as
a gateway system.
A gateway must have access to both the host’s TCP/IP network and the targets’
TIPC network, and it must run a target agent proxy server. The proxy server
supports both networking protocols and provides the link between the host target
server and the WDB agent on the target system, thus allowing for remote
debugging of the other VxWorks targets on the TIPC network. The proxy server
can support multiple connections between the host system and different VxWorks
targets.
Note that WDB system mode debugging is not supported over TIPC
(see Debugging Mode Options, p.634).
For information about TIPC, see the Wind River TIPC for VxWorks 6 Programmer’s
Guide.
The VxWorks gateway target and the other VxWorks targets on the TIPC network
(to which the host tools attach) must each be configured with different WDB
components:
■ The gateway target must be configured with the INCLUDE_WDB_PROXY and
INCLUDE_WDB_PROXY_TIPC components (as well as with both TIPC and
TCP/IP support).
■ Any other targets to which host tools will attach must be configured with the
basic INCLUDE_WDB component and the INCLUDE_WDB_COMM_TIPC
component (as well as with TIPC support).
When the INCLUDE_WDB_COMM_TIPC component is included, WDB system
mode is excluded from the configuration, as it is not supported with TIPC
communication.
For information about the configuration parameters for these components, see
Basic WDB Configuration, p.630 and TIPC Network Connection Configuration, p.632.
To establish a connection between the host and the targets on the TIPC network,
first boot the gateway and other targets.
Wind River Workbench provides options for connecting with a target running a
WDB agent proxy. See the Wind River Workbench User’s Guide for more information.
From the command line, the syntax for starting a target server connection with a
target running the WDB agent proxy is as follows:
tgtsvr -V -B wdbproxy -tipc -tgt targetTipcAddress -tipcpt tipcPortType -tipcpi
tipcPortInstance wdbProxyIpAddress/name
In this command:
■ targetTipcAddress is the TIPC address of the target to which you want to
connect.
■ tipcPortType is the TIPC port type used for the WDB connection (the default is
70).
■ tipcPortInstance is the TIPC port instance used for the WDB connection (the
default is 71).
■ wdbProxyIpAddress/name is the IP address or target name of the gateway target
that is running the WDB proxy agent.
Component                 Description
INCLUDE_WDB_BANNER        Prints a banner to the console after the agent is
                          initialized.
INCLUDE_WDB_VIO           Provides the VxWorks driver for accessing
                          virtual I/O.
INCLUDE_WDB_USER_EVENT    Provides the ability to send user events to the
                          host.
You can also reduce the maximum number of WDB breakpoints with the
WDB_BP_MAX parameter of the INCLUDE_WDB_BP component. If you are using a
serial connection, you can also set the INCLUDE_WDB_TTY_TEST parameter to
FALSE.
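As an illustration, a config.h excerpt that trims optional WDB components might look like the following. Which components can safely be removed depends on the host tools you use; the breakpoint count shown is an example value.

```cpp
/* config.h excerpt -- illustrative scaling-down of WDB */
#undef  INCLUDE_WDB_BANNER    /* no startup banner on the console */
#undef  INCLUDE_WDB_VIO       /* no virtual I/O driver */
#undef  WDB_BP_MAX
#define WDB_BP_MAX 10         /* fewer preallocated breakpoints */
```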
If you are using a communication path that supports both system and task mode
agents, then by default both agents are started. Since each agent consumes target
memory (for example, each agent has a separate execution stack), you may wish to
exclude one of the agents from the target system. You can configure the target to
use only a task-mode or only a system-mode agent with the INCLUDE_WDB_TASK
or INCLUDE_WDB_SYS options.
12.6.6 Starting the WDB Target Agent Before the VxWorks Kernel
By default, the WDB target agent is initialized near the end of the VxWorks
initialization sequence. This is because the default configuration calls for the agent
to run in task mode and to use the network for communication; thus, WDB is
initialized after the kernel and the network.
In some cases—such as during BSP development—you may want to start the agent
before the kernel, and initialize the kernel under the control of the host tools.
VxWorks Configuration
This causes the project code generator to make the usrWdbInit( ) call earlier in
the initialization sequence. It will be called from usrInit( ) just before the
kernel is started.6
Run-time Operation
When the host target server has connected to the system-mode WDB target agent,
you can resume the system to start the kernel under the agent’s control.
After connecting to the target agent, set a breakpoint in usrRoot( ), then continue
the system. The routine kernelInit( ) starts the multi-tasking kernel with
usrRoot( ) as the entry point for the first task. Before kernelInit( ) is called,
interrupts are still locked. By the time usrRoot( ) is called, interrupts are unlocked.
Errors before reaching the breakpoint in usrRoot( ) are most often caused by a
stray interrupt: check that you have initialized the hardware properly in the BSP
sysHwInit( ) routine. Once sysHwInit( ) is working properly, you no longer need
to start the agent before the kernel.
NOTE: If you use a serial connection when you start WDB before the kernel, you
must modify the SIO driver so that it can properly deal with interrupts and the
order of system initialization in this context. See the VxWorks Device Driver
Developer's Guide: Additional Drivers for detailed information.
! CAUTION: When the agent is started before the kernel, there is no way for the host
to get the agent’s attention until a breakpoint occurs. This is because only system
mode is supported and the WDB communication channel is set to work in polled
mode only. On the other hand, the host does not really need to get the agent’s
attention: you can set breakpoints in usrRoot( ) to verify that VxWorks can get
through this routine. Once usrRoot( ) is working, you can start the agent after the
kernel (that is, within usrRoot( )), after which the polling task is spawned
normally.
6. The code generator for prjConfig.c is based on the component descriptor language, which
specifies when components are initialized. The component descriptor files are searched in a
specified order, with the project directory being last, and overriding the default definitions
in the generic descriptor files. For more information, see 2.8.2 CDF Precedence and CDF Instal-
lation, p.75.
I set a breakpoint on a function I called from the kernel shell, but the breakpoint is
not being hit. Why not?
Explanation
The kernel shell task runs with the VX_UNBREAKABLE option. Functions that are
called directly from the kernel shell command prompt are executed within the
context of the kernel shell task. Therefore, breakpoints set within the directly called
function will not be hit.
12 Target Tools
12.7 Common Problems
Solution
Instead of running the function directly, use taskSpawn( ) with the function as the
entry point, or the shell’s C interpreter sp( ) command.
Insufficient Memory
Explanation
The kernel loader calls the device drivers through a transparent VxWorks
mechanism for file management, which makes calls to open( ), close( ), and
ioctl( ). If you use the kernel loader to load a module over the network (as
opposed to loading from a target-system disk), the amount of memory required to
load an object module depends on what kind of access is available to the remote
file system over the network. This is because, depending on the device that is
actually being used for the load, the calls initiate very different operations.
For some devices, the I/O library makes a copy of the file in target memory.
Loading a file that is mounted over a device using such a driver requires enough
memory to hold two copies of the file simultaneously. First, the entire file is copied
to a buffer in local memory when opened. Second, the file resides in memory when
it is linked to VxWorks. This copy is then used to carry out various seek and read
operations. Therefore, using these drivers requires sufficient memory to hold two
copies of the file to be downloaded, as well as a small amount of memory for the
overhead required for the load operation.
Also consider that loading a module sometimes requires additional space, as the
sections have to be aligned in memory (whereas the toolchain may compact them
all in the object file to save space). See 12.3.5 Specifying Memory Locations for Loading
Objects, p.612.
Solution
Download the file using a different device. Loading an object module from a host
file system mounted through NFS only requires enough memory for one copy of
the file (plus a small amount of overhead).
Explanation
Some architectures have instructions that use less than 32 bits to reference a nearby
position in memory. Using these instructions can be more efficient than always
using 32 bits to refer to nearby places in memory.
The problem arises when the compiler has produced such a reference to something
that lies farther away in memory than the range that can be accessed with the
reduced number of bits. For instance, if a call to printf( ) is encoded with one of
these instructions, the load may succeed if the object code is loaded near the kernel
code, but fail if the object code is loaded farther away from the kernel image.
For additional information, see Function Calls, Relative Branches, and Load Failures,
p.618.
Solution
The loader prints the relocation type and offset as part of the error message to
facilitate diagnostics. The offset and types can be retrieved with the readelf -r
command. Relocation types are described in the ELF architecture supplement.
Recompile the object file using -Xcode-absolute-far for the Wind River compilers,
and for GNU compilers, the appropriate long call option, -mlongcall (for PPC
architecture). See the VxWorks Architecture Supplement for the appropriate options.
Missing Symbols
Symbols in code modules downloaded from the host do not appear from the
kernel shell, and vice versa. Similarly, symbols created from the host shell are not
visible from the kernel shell, and symbols created from the kernel shell are not
visible from the host shell. Why is this happening, and how can I get them to
appear?
Explanation
Solution
Check to see if the module and symbol synchronization is enabled for the target
server as well as compiled into the image. For more information, see
12.4.5 Synchronizing Host and Kernel Modules List and Symbol Table, p.625.
Including the kernel loader causes the amount of available memory to be much
smaller. How can I get more memory?
Explanation
Including the kernel loader causes the system symbol table to be included. This
symbol table contains information about every global symbol in the compiled
VxWorks image.
Using the kernel loader takes additional memory away from your application—
most significantly for the target-resident symbol table required by the kernel
loader.
Solution
Use the host tools rather than the target tools and remove all target tools from your
VxWorks image.
The system symbol table failed to download onto my target. How can I use the
kernel shell to debug the problem, since I cannot call functions by name?
Solution
Use addresses of functions and data, rather than using the symbolic names. The
addresses can be obtained from the VxWorks image on the host, using the nmarch
utility.
The following is an example from a UNIX host:
> nmarch vxWorks | grep memShow
0018b1e8 T memShow
0018b1ac T memShowInit
Use this information to call the function by address from the kernel shell. (The
parentheses are mandatory when calling by address.)
-> 0x0018b1e8 ()
[... memShow output elided: current and cumulative allocation statistics ...]
value = 0 = 0x0
For modules that are relocated, use nm on the module to get the function address
(which is the offset within the module's text segment) then add to that value the
starting address of the text segment of the module when it was loaded in memory.
13
C++ Development
13.1 Introduction
This chapter provides information about C++ development for VxWorks using the
Wind River and GNU toolchains.
! WARNING: Wind River Compiler C++ and GNU C++ binary files are not
compatible.
NOTE: This chapter provides information about facilities available in the VxWorks
kernel. For information about facilities available to real-time processes, see the
corresponding chapter in the VxWorks Application Programmer’s Guide.
13 C++ Development
13.3 C++ Code Requirements
! WARNING: Failure to use the VX_FP_TASK option when spawning a task that uses
C++ can result in hard-to-debug, unpredictable floating-point register corruption
at run-time.
If you reference a (non-overloaded, global) C++ symbol from your C code, you
must give it C linkage by prototyping it using extern "C":
#ifdef __cplusplus
extern "C" void myEntryPoint ();
#else
void myEntryPoint ();
#endif
You can also use this syntax to make C symbols accessible to C++ code. VxWorks
C symbols are automatically available to C++ because the VxWorks header files
use this mechanism for declarations.
Each compiler has its own C++ libraries and C++ headers (such as iostream and
new). The C++ headers are located in the compiler installation directory rather
than in installDir/vxworks-6.x/target/h. No special flags are required to enable the
compilers to find these headers.
NOTE: In releases prior to VxWorks 5.5, Wind River recommended the use of the
flag -nostdinc. This flag should not be used with the current release since it prevents
the compilers from finding headers such as stddef.h.
The VxWorks loader only supports C++ modules that are self-contained. A
self-contained C++ module is one that does not use classes from other C++
modules, and whose classes are not used by other C++ modules. In particular, a
module must either contain its own copy of the standard library, or not use the
C++ standard library at all.
To produce self-contained modules, all C++ object files that are to be downloaded
should be linked into a single downloadable object module.
Unloading a C++ module that is not self-contained may result in dangling
references from objects created in other modules to data structures in the unloaded
module. Specifically, this can happen if the iostreams portion of the standard
library is initialized from a module that is later unloaded. In that case, any further
use of iostreams may fail with a kernel exception (accessing an invalid address).
! WARNING: C++ object files must be linked into one downloadable kernel module.
For information about the kernel loader, see 12.3 Kernel Object-Module Loader,
p.605.
Before a C++ module can be downloaded to the VxWorks kernel, it must undergo
an additional host processing step, which for historical reasons, is called munching.
Munching performs the following tasks:
■ Initializes support for static objects.
■ Ensures that the C++ run-time support calls the correct constructors and
destructors in the correct order for all static objects.
■ For the Wind River Compiler, collapses any COMDAT sections automatically;
for the GNU compiler, collapses any linkonce sections automatically.
13 C++ Development
13.5 Downloadable Kernel Modules in C++
Munching Examples
For each toolchain, the following examples compile a C++ application source file,
hello.cpp, run munch on the .o, compile the generated ctdt.c file, and link the
application with ctdt.o to generate a downloadable module, hello.out.
4. Link the original object file with the munched object file to create a
downloadable module:
$ dld -tPPC604FH:vxworks61 -X -r4 -o hello.out hello.o ctdt.o
NOTE: The -r4 option collapses any COMDAT sections contained in the input files.
4. Link the original object file with the munched object file to create a
downloadable module:
ccppc -r -nostdlib -Wl,-X \
-T installDir/vxworks-6.1/target/h/tool/gnu/ldscripts/link.OUT \
-o hello.out hello.o ctdt.o
NOTE: The VxWorks kernel object module loader does not support linkonce
sections directly. Instead, the linkonce sections must be merged and collapsed into
standard text and data sections before loading. The GNU -T option collapses any
linkonce sections contained in the input files.
If you use the VxWorks makefile definitions, you can write a simple munching rule
which (with appropriate definitions of CPU and TOOL) works across all
architectures for both GNU and Wind River Compiler toolchains.
CPU = PPC604
TOOL = gnu
TGT_DIR = $(WIND_BASE)/target
include $(TGT_DIR)/h/make/defs.bsp
default : hello.out
%.o : %.cpp
$(CXX) $(C++FLAGS) -c $<
%.out : %.o
$(NM) $*.o | $(MUNCH) > ctdt.c
$(CC) $(CFLAGS) $(OPTION_DOLLAR_SYMBOLS) -c ctdt.c
$(LD_PARTIAL) $(LD_PARTIAL_LAST_FLAGS) -o $@ $*.o ctdt.o
After munching, downloading, and linking, the static constructors and destructors
are called. This step is described next.
The kernel loader provides both manual and automatic options for calling static
constructors and destructors.
Automatic invocation is the default strategy. Static constructors are executed just
after the module is downloaded to the target and before the module loader returns
to its caller. Static destructors are executed just prior to unloading the module.
13 C++ Development
13.6 C++ Compiler Differences
Manual invocation means that the user must call static constructors explicitly, after
downloading the module, but before running the application. It also requires the
user to call static destructors explicitly, after the task finishes running, but before
unloading the module.
Static constructors are called by invoking cplusCtors( ). Static destructors are
called by invoking cplusDtors( ). These routines take an individual module name
as an argument. However, you can also invoke all of the static constructors or
destructors that are currently loaded into a system by calling these routines
without an argument.
! CAUTION: When using the manual invocation method, constructors for each
module must only be run once.
You can change the strategy for calling static constructors and destructors at
run-time with the cplusXtorSet( ) routine. To report on the current strategy, call
cplusStratShow( ).
For more information on the routines mentioned in this section, see the API entries
in the online reference manuals.
Also see 12.3.3 Summary List of Kernel Object-Module Loader Options, p.608 for
information about the C++ loader and unloader options.
! WARNING: Wind River Compiler C++ and GNU C++ binary files are not
compatible.
The following sections briefly describe the differences in compiler support for
template instantiation and run-time type information.
In C, every function and variable used by a program must be defined in exactly one
place (more precisely one translation unit). However, in C++ there are entities
which have no clear point of definition but for which a definition is nevertheless
required. These include template specializations (specific instances of a generic
template; for example, std::vector&lt;int&gt;), out-of-line bodies for inline functions, and
virtual function tables for classes without a non-inline virtual function. For such
entities a source code definition typically appears in a header file and is included
in multiple translation units.
To handle this situation, both the Wind River Compiler and the GNU compiler
generate a definition in every file that needs it and put each such definition in its
own section. The Wind River compiler uses COMDAT sections for this purpose,
while the GNU compiler uses linkonce sections. In each case the linker removes
duplicate sections, with the effect that the final executable contains exactly one
copy of each needed entity.
NOTE: Only the WRC linker can be used to process files containing COMDAT
sections. Similarly only the GNU linker can be used on files containing linkonce
sections. Furthermore the VxWorks target and host loaders are not able to process
COMDAT and linkonce sections. A fully linked VxWorks image will not contain
any COMDAT or linkonce sections. However intermediate object files compiled
from C++ code may contain such sections. To build a downloadable C++ module,
or a file that can be processed by any linker, you must perform an intermediate link
step using the -r5 option (WRC) or specifying the link.OUT linker script (GCC).
See 13.5.2 Munching a C++ Application Module, p.652 for full details. (Note that while
the -r5 and -r4 options—the latter referred to elsewhere in this chapter—both
collapse COMDAT sections, their overall purpose is different, and their use is
mutually exclusive in a single linker command.)
It is highly recommended that you use the default settings for template
instantiation, since these combine ease-of-use with minimal code size. However it
is possible to change the template instantiation algorithm; see the compiler
documentation for details.
-Xcomdat
This option is the default. When templates are instantiated implicitly, the
generated code or data sections are marked as COMDAT. The linker then
collapses identical instances marked as such into a single instance in memory.
-Xcomdat-off
Generate template instantiations and inline functions as static entities in the
resulting object file. Can result in multiple instances of static member-function
or class variables.
For greater control of template instantiation, the -Ximplicit-templates-off option
tells the compiler to instantiate templates only where explicitly called for in source
code; for example:
template class A<int>; // Instantiate A<int> and all member functions.
template int f1(int); // Instantiate function int f1(int).
GNU Compiler
! CAUTION: The VxWorks dynamic loader does not support linkonce sections
directly. Instead, the linkonce sections must be merged and collapsed into
standard text and data sections before loading. This is done with a special link step
described in 13.5.2 Munching a C++ Application Module, p.652.
-fno-implicit-templates
This is the option for explicit instantiation. Using this strategy explicitly
instantiates any templates that you require.
Both compilers support Run-time Type Information (RTTI), and the feature is
enabled by default. This feature adds a small overhead to any C++ program
containing classes with virtual functions.
For the Wind River Compiler, the RTTI language feature can be disabled with the
-Xrtti-off flag.
For the GNU compiler, the RTTI language feature can be disabled with the
-fno-rtti flag.
13.7 Namespaces
Both the Wind River and GNU C++ compilers support namespaces. You can use
namespaces for your own code, according to the C++ standard.
The C++ standard also defines names from system header files in a namespace
called std. The standard requires that you specify which names in a standard
header file you will be using.
The following code is technically invalid under the latest standard, and will not
work with this release. It compiled with a previous release of the GNU compiler,
but will not compile under the current releases of either the Wind River or GNU
C++ compilers:
#include <iostream.h>
int main()
{
cout << "Hello, world!" << endl;
}
The following examples provide three correct alternatives illustrating how the
C++ standard would now represent this code. The examples compile with either
the Wind River or the GNU C++ compiler:
// Example 1
#include <iostream>
int main()
{
std::cout << "Hello, world!" << std::endl;
}
// Example 2
#include <iostream>
using std::cout;
using std::endl;
int main()
{
cout << "Hello, world!" << endl;
}
// Example 3
#include <iostream>
using namespace std;
int main()
{
cout << "Hello, world!" << endl;
}
Then, to build a bootable image containing the factory example, run make as
shown below:
make ADDED_MODULES=factory.o
Then, from the WindSh, load the factory module, as shown below:
ld < factory.out
Full documentation on what you should expect to see is provided in the source
code comments for the demo program.
PART II
Multiprocessing Technologies
14
Overview of Multiprocessing Technologies
14.1 Introduction
VxWorks provides various multiprocessor technologies, for asymmetric
multiprocessing (AMP) and symmetric multiprocessing (SMP) systems. These
include VxWorks SMP (an optional product), shared memory objects (VxMP),
distributed shared memory (DSHM), TIPC over DSHM, and message channels.
15
VxWorks SMP
Optional Product
15.1 Introduction
VxWorks SMP is a configuration of VxWorks designed for symmetric
multiprocessing (SMP). It provides the same distinguishing RTOS characteristics
of performance and determinism as the uniprocessor (UP) configuration. The
differences between the SMP and UP configurations are limited, and strictly
related to support for multiprocessing.
This chapter describes the features provided by VxWorks to support symmetric
multiprocessing. It discusses the features that are unique to the SMP configuration,
as well as the differences in operating system facilities and programming practices
used for the UP configuration and the SMP configuration. It also provides
guidelines for migrating UP code to SMP code. In this chapter, the terms VxWorks
SMP and VxWorks UP are used to identify the symmetric multiprocessing and
uniprocessor configurations of VxWorks, respectively.
For information about features that are common to both the VxWorks SMP and
VxWorks UP configurations—such as multitasking, I/O, file systems, and so on—
see Part I. Core Technologies, p.1.
15 VxWorks SMP
15.2 Technology Overview
15.2.1 Terminology
The terms CPU and processor are often used interchangeably in computer
documentation. However, it is useful to distinguish between the two for hardware
that supports SMP. In this guide, particularly in the context of VxWorks SMP, the
terms are used as follows:
CPU
A single processing entity capable of executing program instructions and
processing data (also referred to as a core, as in multicore).
processor
A silicon unit that contains one or more CPUs.
multiprocessor
A single hardware system with two or more processors.
uniprocessor
A silicon unit that contains a single CPU.
For example, a dual-core MPC8641D would be described as a processor with two
CPUs. A quad-core Broadcom 1480 would be described as a processor with four
CPUs.
Uniprocessor code may not always execute properly on an SMP system, and code
that has been adapted to execute properly on an SMP system may still not make
optimal use of symmetric multiprocessing. The following terms are therefore used
to clarify the state of code in relation to SMP:
SMP-ready
Runs correctly on an SMP operating system, although it may not make use of
more than one CPU (that is, does not take full advantage of concurrent
execution for better performance).
SMP-optimized
Runs correctly on an SMP operating system, uses more than one CPU, and
takes sufficient advantage of multitasking and concurrent execution to
provide performance gains over a uniprocessor implementation.
With few exceptions, the SMP and uniprocessor (UP) configurations of VxWorks
share the same API—the difference amounts to only a few routines. There is binary
compatibility for both kernel and RTP applications between UP and SMP
configurations of VxWorks (for the same VxWorks release), as long as the
applications are based on the subset of APIs used by VxWorks SMP. A few
uniprocessor APIs are not suitable for an SMP system, and they are therefore not
provided. Similarly, SMP-specific APIs are not relevant to a uniprocessor system—
but default to appropriate uniprocessor behaviors (such as task spinlocks
defaulting to task locking), or have no effect.
VxWorks SMP is designed for symmetric target hardware. That is, each CPU has
equivalent access to all memory and all devices. VxWorks SMP can therefore run
on targets with multiple single-core processors or with multicore processors, as
long as they provide a uniform memory access (UMA) architecture with
hardware-managed cache-coherency.
This section provides a brief overview of areas in which VxWorks offers alternate
or additional features designed for symmetric multiprocessing. The topics are
covered in detail later in this chapter.
Multitasking
Scheduling
Mutual Exclusion
Because SMP systems allow for truly concurrent execution, the uniprocessor
mechanisms for disabling (masking) interrupts and for suspending task
preemption in order to protect critical regions are inappropriate for—and not
available in—an SMP operating system. Enforcing interrupt masking or
suspending task preemption across all CPUs would defeat the advantages of truly
concurrent execution and drag multiprocessing performance down towards the
level of a uniprocessor system.
VxWorks SMP therefore provides specialized mechanisms for mutual exclusion
between tasks and interrupts executing and being received (respectively)
simultaneously on different CPUs. In place of uniprocessor task and interrupt
locking routines—such as taskLock( ) and intLock( )—VxWorks SMP provides
spinlocks, atomic memory operations, and CPU-specific mutual exclusion
facilities.
CPU Affinity
By default, any task can run on any of the CPUs in the system (which generally
provides the best load balancing) and interrupts are routed to CPU 0 (the bootstrap
CPU). There are instances, however, in which it is useful to assign specific tasks or
interrupts to a specific CPU. VxWorks SMP provides this capability, which is
referred to as CPU affinity.
The hardware required for use with VxWorks SMP must consist of symmetric
multiprocessors—either multicore processors or hardware systems with multiple
single-core processors. The processors must be identical, all memory must be shared
between the CPUs (none may be local to a CPU), and all devices must be equally
accessible from all CPUs. That is, targets for VxWorks SMP must adhere to the
uniform memory access (UMA) architecture.
Figure 15-1 illustrates the typical target hardware for a dual CPU SMP system.
Figure 15-1 Typical Hardware for a Dual-CPU VxWorks SMP System
[Diagram: devices deliver interrupts through a programmable interrupt controller
to CPU 0 and CPU 1; each CPU has its own cache, kept coherent by snooping, and
both CPUs access a common shared memory.]
The features of VxWorks SMP may be highlighted by comparison with the way
VxWorks is used in asymmetric multiprocessing (AMP), using the same target
hardware in both cases. VxWorks AMP technologies include VxMP, TIPC (over
shared memory), and distributed shared memory (DSHM). The relationship
between CPUs and basic uses of memory in SMP and AMP systems are illustrated
in Figure 15-2 and Figure 15-3.
Figure 15-2 SMP Memory Layout
[Diagram: a single VxWorks image and its RTPs (RTP A) reside in one memory
space shared by all CPUs.]
Figure 15-3 AMP Memory Layout
[Diagram: each CPU runs its own image (VxWorks A and VxWorks B) in private
memory, with a separate shared-memory region between them.]
In an SMP configuration the entire physical memory space is shared between the
CPUs. This memory space is used to store a single VxWorks SMP image (text, data,
bss, heap). It is also used to store any real-time processes (RTPs) that are created
during the lifetime of the system. Because both CPUs can potentially read from,
write to and execute any memory location, any kernel task or user (RTP) task can
be executed by either CPU.
In an AMP configuration there is one copy of the VxWorks image in memory for
each CPU. Each operating system image can only be accessed by the CPU to which
it belongs. It is therefore impossible for CPU 1 to execute kernel tasks residing in
VxWorks CPU 0's memory, or the reverse. The same situation applies for RTPs. An
RTP can only be accessed and executed by the instance of VxWorks from which it
was started.
In an AMP system some memory is shared, but typically the sharing is restricted
to reading and writing data—for example, for passing messages between two
instances of VxWorks. Hardware resources are mostly divided between instances
of the operating system, so that coordination between CPUs is only required when
accessing shared memory.
With an SMP system, both memory and devices are shared between CPUs, which
requires coordination within the operating system to prevent concurrent access to
shared resources.
! CAUTION: Boot loaders for VxWorks SMP must not be built with the SMP build
option—neither with Workbench nor with vxprj. For more information about boot
loaders for VxWorks SMP, see 15.4 Booting VxWorks SMP, p.678.
Default VxWorks SMP images are provided in project directories parallel to those
for VxWorks UP images. For example, for the hpcNet8641 BSP, the directories are
as follows:
■ installDir/vxworks-6.x/target/proj/hpcNet8641_diab_smp
■ installDir/vxworks-6.x/target/proj/hpcNet8641_gnu_smp
■ installDir/vxworks-6.x/target/proj/hpcNet8641_diab
■ installDir/vxworks-6.x/target/proj/hpcNet8641_gnu
There are several configuration parameters that are specific to VxWorks SMP,
which are provided by the INCLUDE_KERNEL component. These parameters are
as follows:
VX_SMP_NUM_CPUS
Defines the number of CPUs that should be enabled for VxWorks SMP. The
maximum number of CPUs for each architecture is as follows: ARM = 4,
IA32 = 8, MIPS = 16, PowerPC = 8, VxWorks Simulator = 32.
ENABLE_ALL_CPUS
Enables all CPUs that have been configured for the system’s use with
VX_SMP_NUM_CPUS. The default is TRUE, in which case VxWorks boots with
all CPUs enabled and running. The parameter can be set to FALSE for
debugging purposes, in which case only CPU 0 (the bootstrap CPU) will be
enabled by the VxWorks initialization code. The kernelCpuEnable( ) routine
can then be used to enable a specific CPU once the system has booted.
VX_ENABLE_CPU_TIMEOUT
The time-out value (in seconds) for the period during which additional cores
may be enabled. When kernelCpuEnable( ) is called, it waits for the time
defined by VX_ENABLE_CPU_TIMEOUT for the additional core to come up. If
ENABLE_ALL_CPUS is set to TRUE, the value of VX_ENABLE_CPU_TIMEOUT
is used as the time-out period for enabling all CPUs.
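The interplay of these parameters can be sketched as follows. The kernelCpuEnable( ) body below is a hypothetical host-side stand-in so the pattern compiles outside the kernel (the real routine starts the CPU and blocks for up to VX_ENABLE_CPU_TIMEOUT seconds while it comes up); only the calling pattern for bringing up the remaining CPUs after booting with ENABLE_ALL_CPUS set to FALSE is meant to be illustrative.

```c
typedef int STATUS;
#define OK    0
#define ERROR (-1)

#define VX_SMP_NUM_CPUS 4   /* assumed value of the configuration parameter */

static int enabledCpus = 1;  /* CPU 0 (the bootstrap CPU) is always enabled */

/* Hypothetical stand-in: the real kernelCpuEnable() starts the CPU and
 * waits up to VX_ENABLE_CPU_TIMEOUT seconds for it to come up. */
STATUS kernelCpuEnable (unsigned int cpuIndex)
    {
    if (cpuIndex >= VX_SMP_NUM_CPUS)
        return ERROR;
    enabledCpus++;
    return OK;
    }

/* Enable CPUs 1..N-1 one at a time, as one might after per-CPU debugging
 * with ENABLE_ALL_CPUS set to FALSE.  Returns the enabled-CPU count. */
int enableRemainingCpus (void)
    {
    unsigned int i;
    for (i = 1; i < VX_SMP_NUM_CPUS; i++)
        if (kernelCpuEnable (i) != OK)
            return ERROR;
    return enabledCpus;
    }
```

With VX_SMP_NUM_CPUS set to 4, the loop enables CPUs 1 through 3, leaving all four CPUs running.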
! CAUTION: Boot loaders for VxWorks SMP must not be built with the SMP build
option—neither with the SMP selection for a Workbench VxWorks Image Project
(VIP), nor with the -smp option for vxprj. Boot loaders built with the SMP build
option will not function properly.
For detailed information about VxWorks boot loaders, see 3. Boot Loader.
Also note that VxWorks SMP maintains an idle task for
each CPU, and that idle tasks must not be interfered with.
The use of mutual exclusion facilities is one of the critical differences between
uniprocessor and SMP programming. While some facilities are the same for
VxWorks UP and VxWorks SMP, others are necessarily different. In addition,
reliance on implicit synchronization techniques—such as relying on task priority
instead of explicit locking—do not work in an SMP system (for more information
on this topic, see 15.15.4 Implicit Synchronization of Tasks, p.708).
Unlike uniprocessor systems, SMP systems allow for truly concurrent execution,
in which multiple tasks may execute, and multiple interrupts may be received and
serviced, all at the same time. In most cases, the same mechanisms—semaphores,
message queues, and so on—can be used in both uniprocessor and SMP systems
for mutual exclusion and coordination of tasks (see 4.8 Intertask and Interprocess
Communication, p.194).
However, the specialized uniprocessor mechanisms for disabling (masking)
interrupts and for suspending task preemption in order to protect critical regions
are inappropriate for—and not available in—an SMP system. This is because they
would defeat the advantages of truly concurrent execution by enforcing masking
or preemption across all CPUs, and thus drag a multiprocessing system down
towards the performance level of a uniprocessor system.
The most basic differences for SMP programming therefore have to do with the
mechanisms available for mutual exclusion between tasks and interrupts
executing and being received (respectively) on different CPUs. In place of
uniprocessor task and interrupt locking routines—such as taskLock( ) and
intLock( )—VxWorks SMP provides the following facilities:
■ spinlocks for tasks and ISRs
■ CPU-specific mutual exclusion for tasks and ISRs
■ atomic memory operations
■ memory barriers
As with the uniprocessor mechanisms used for protecting critical regions,
spinlocks and CPU-specific mutual exclusion facilities should only be used when
they are guaranteed to be in effect for very short periods of time. The appropriate
use of these facilities is critical to making an application SMP-ready (see
15.2.1 Terminology, p.669).
Note that both spinlocks and semaphores provide full memory barriers (in
addition to the memory barrier macros themselves).
For more information about these topics, see 15.6 Spinlocks for Mutual Exclusion and
Synchronization, p.681, 15.7 CPU-Specific Mutual Exclusion, p.687, 15.8 Memory
Barriers, p.689, and 15.9 Atomic Memory Operations, p.692.
By default, any task can run on any of the CPUs in the system (which generally
provides the best load balancing) and interrupts are routed to CPU 0. There are
cases, however, in which it is useful to assign tasks or interrupts to a specific CPU.
VxWorks SMP provides this capability, which is referred to a as CPU affinity.
For more information about interrupt and CPU affinity, see 15.10 CPU Affinity,
p.693.
VxWorks SMP includes a per-CPU idle task that does not exist in VxWorks UP.
The idle task has the lowest priority in the system, below the range permitted for
application use (for more information, see 4.3.1 Task Priorities, p.166). Idle tasks
make an SMP system more efficient by providing task context when a CPU enters
and exits an idle state.
The existence of idle tasks does not affect the ability of a CPU to go to sleep (when
power management is enabled) if there is no work to perform. Do not perform any
operations that affect the execution of an idle task.
The kernelIsCpuIdle( ) and kernelIsSystemIdle( ) routines provide information
about whether a specific CPU is executing an idle task, or whether all CPUs are
executing idle tasks (respectively).
For information about configuration options for idle tasks, see 15.3 VxWorks SMP
Configuration and Build, p.676.
! WARNING: Do not suspend, stop, change the priority of, trace, or perform any
similar operations on an idle task. Deleting, suspending, or stopping an idle task
causes the system to crash due to an exception in the scheduler. Changing the
priority of an idle task to a higher priority puts the CPU into a low power mode
prematurely. That is, simply do not use the task ID (tid) of an idle task as a
parameter to any VxWorks routine except taskShow( ).
RTP Applications
As in VxWorks UP systems, RTP (user mode) applications have a more limited set
of mutual exclusion and synchronization mechanisms available to them than
kernel code or kernel applications. In VxWorks SMP, they can make use of
semaphores and atomic operations, but not spinlocks, memory barriers, or
CPU-specific mutual exclusion mechanisms. In addition, the semExchange( )
routine provides for an atomic give and exchange of semaphores.
Types of Spinlocks
Unlike the behavior associated with semaphores, a task that attempts to take a
spinlock that is already held by another task does not pend; instead it continues
executing, simply spinning in a tight loop waiting for the spinlock to be freed.
The terms spinning and busy waiting—which are both used to describe this
activity—provide insight into both the advantages and disadvantages of
spinlocks. Because a task (or ISR) continues execution while attempting to take a
spinlock, the overhead of rescheduling and context switching can be avoided
(which is not the case with a semaphore). On the other hand, spinning does no
useful work, and ties up one or more of the CPUs.
Spinlocks should therefore only be used when they are likely to be efficient; that is,
when they are going to be held for very short periods of time (as with taskLock( )
and intLock( ) in a uniprocessor system). If a spinlock is held for a long period of
time, the drawbacks are similar to intLock( ) and taskLock( ) being held for a long
time in VxWorks UP—increased interrupt and task latency.
Acquisition of a spinlock on one CPU does not affect the processing of interrupts
or scheduling of tasks on other CPUs. Tasks cannot be deleted while they hold a
spinlock.
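The spin-versus-pend distinction can be illustrated in portable C11 (this is not VxWorks spinLockLib code): the taker loops on an atomic test-and-set instead of pending.

```c
#include <stdatomic.h>

/* Minimal C11 sketch of busy-waiting: spin_lock() does not pend.  It
 * loops until the current holder releases the flag. */
static atomic_flag lock = ATOMIC_FLAG_INIT;

void spin_lock (void)
    {
    while (atomic_flag_test_and_set_explicit (&lock, memory_order_acquire))
        ;  /* busy-wait: no rescheduling, no context switch */
    }

void spin_unlock (void)
    {
    atomic_flag_clear_explicit (&lock, memory_order_release);
    }

/* Example critical section: an increment protected by the lock. */
static int counter;

int counter_increment (void)
    {
    spin_lock ();
    int v = ++counter;   /* keep the held period very short */
    spin_unlock ();
    return v;
    }
```

Because the taker never pends, the overhead of rescheduling and context switching is avoided, but a CPU is tied up for the entire time it waits.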
For detailed cautionary information about spinlock use, see 15.6.3 Caveats With
Regard to Spinlock Use, p.685 and 15.6.4 Routines Restricted by Spinlock Use, p.685.
Spinlocks that are used to address contention between ISRs—or between a task
and other tasks and ISRs—are referred to as ISR-callable spinlocks.
These spinlocks can be acquired by both tasks and ISRs. They disable (mask)
interrupts on the local CPU, which prevents the caller from being preempted while
it holds the spinlock (which could otherwise lead to a livelock). If a task acquires
an ISR-callable spinlock, task preemption is also suspended on the local CPU while
that task holds the spinlock. This allows the task to execute the critical section that
the spinlock is protecting. Interrupts and tasks on other CPUs are not affected. The
routines used for ISR-callable spinlocks are listed in Table 15-1.
For VxWorks UP, ISR-callable spinlocks are implemented with the same behavior
as the interrupt locking routines intLock( ) and intUnlock( ).
Table 15-1 ISR-Callable Spinlock Routines

Routine                Description
spinLockIsrInit( )     Initializes an ISR-callable spinlock.
spinLockIsrTake( )     Takes an ISR-callable spinlock.
spinLockIsrGive( )     Releases an ISR-callable spinlock.
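Assuming the spinLockIsrInit( )/spinLockIsrTake( )/spinLockIsrGive( ) naming, the calling pattern looks like the following sketch. The routine bodies here are hypothetical single-CPU stand-ins so the example compiles on a host; the real kernel routines additionally mask interrupts on the local CPU and perform a true atomic test-and-set.

```c
/* Hypothetical host-side stand-ins for the spinLockLib type and routines. */
typedef struct { volatile int locked; } spinlockIsr_t;

void spinLockIsrInit (spinlockIsr_t *pLock, int flags)
    { (void) flags; pLock->locked = 0; }

void spinLockIsrTake (spinlockIsr_t *pLock)         /* spins, never pends */
    { while (pLock->locked) ; pLock->locked = 1; }

void spinLockIsrGive (spinlockIsr_t *pLock)
    { pLock->locked = 0; }

/* A counter shared by a task and an ISR, protected by one spinlock. */
static spinlockIsr_t countLock;
static int sharedCount;

int counterBump (void)              /* callable from task or ISR context */
    {
    spinLockIsrTake (&countLock);   /* in the kernel, also masks local interrupts */
    int v = ++sharedCount;          /* keep the critical section short */
    spinLockIsrGive (&countLock);
    return v;
    }
```

The critical section holds only an increment, in keeping with the guidance that spinlocks be held for very short periods.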
Spinlocks that are used to address contention between tasks alone (and not ISRs)
are called task-only spinlocks. These spinlocks disable task preemption on the local
CPU while the caller holds the lock (which could otherwise lead to a livelock
situation). This prevents the caller from being preempted by other tasks and allows
it to execute the critical section that the lock is protecting. Interrupts are not
disabled and task preemption on other CPUs is not affected. The routines used for
task-only spinlocks are listed in Table 15-2.
For VxWorks UP, task-only spinlocks are implemented with the same behavior as
the task locking routines taskLock( ) and taskUnlock( ).
Table 15-2 Task-Only Spinlock Routines

Routine                 Description
spinLockTaskInit( )     Initializes a task-only spinlock.
spinLockTaskTake( )     Takes a task-only spinlock.
spinLockTaskGive( )     Releases a task-only spinlock.
Certain routines should not be called while the calling entity (task or ISR) holds a
spinlock. This restriction serves to prevent a task or ISR from entering a kernel
critical region while it already holds a spinlock—and thereby causing the system to enter a
livelock state (for more information, see 15.6.3 Caveats With Regard to Spinlock Use,
p.685). The routine restrictions also apply to intCpuLock( ) (for more information
about this routine, see 15.7.1 CPU-Specific Mutual Exclusion for Interrupts, p.687).
This restriction applies because the kernel requires interrupts to be enabled to
implement its multi-CPU scheduling algorithm.
It is outside the scope of this document to list all the VxWorks spinlock restricted
routines. However, generally speaking these are routines related to the creation,
destruction and manipulation of kernel objects (semaphores, tasks, message
queues, and so on) as well as any routine that can cause a scheduling event.
While the restriction imposed by spinlock use may seem to be a hindrance, it really
should not be. Spinlocks are meant for very fast synchronization between
processors. Holding a spinlock and attempting to perform notable amounts of
work, including calling into the kernel, results in poor performance on an SMP
system, because either task preemption or interrupts, or both, are disabled when a
CPU owns a spinlock.
Table 15-5 identifies some of the routines restricted by spinlock and CPU lock use.
Library Routines
intLib intDisconnect( )
The routines listed in Table 15-3 are used for disabling and enabling interrupts on
the local CPU.
Note that in a uniprocessor system they default to the behavior of intLock( ) and
intUnlock( ).
Table 15-3 CPU-Specific Interrupt Locking Routines

Routine            Description
intCpuLock( )      Disables interrupts on the local CPU.
intCpuUnlock( )    Re-enables interrupts on the local CPU.
For more information about these routines, see the intLib entry in the VxWorks
API references.
CPU-specific mutual exclusion for tasks allows for suspending task preemption on
the CPU on which the calling task is running. That is, it provides for local CPU task
locking, and effectively prevents any other task from running on the local CPU. For
example, task A running on CPU 0 can perform a local CPU task lock operation so
that no other task can run on CPU 0 until it releases the lock or makes a blocking
call.
The calling task is also prevented from migrating to another CPU until the lock is
released.
Execution on other CPUs in the SMP system is not affected. In order to be an
effective means of mutual exclusion, therefore, all tasks that should participate in
the mutual exclusion scenario should have CPU affinity set for the local CPU (for
information, see 15.10.1 Task CPU Affinity, p.693).
The routines listed in Table 15-4 are used for suspending and resuming task
preemption on the local CPU.
Note that in a uniprocessor system they default to the behavior of taskLock( ) and
taskUnlock( ).
Table 15-4 CPU-Specific Task Locking Routines

Routine             Description
taskCpuLock( )      Suspends task preemption on the local CPU.
taskCpuUnlock( )    Resumes task preemption on the local CPU.
For more information about these routines, see the taskLib entry in the VxWorks
API references.
pWork = &work;          /* CPU 0 writes the data pointer ... */
workAvailable = TRUE;   /* ... and then sets the flag */
while (!workAvailable); /* CPU 1 waits for the flag */
doWork (pWork); /* error - pWork might not be visible to this CPU yet */
It is very likely that the pWork pointer used by CPU 1 will contain incorrect data
because CPU 0 reorders its write operations to system memory, which causes CPU
1 to observe the change to the workAvailable variable before the value of the
pWork variable has been updated. In a case like this, the likely result is a system
crash due to de-referencing an invalid pointer.
To solve the memory ordering problem, VxWorks provides a set of memory
barrier operations. The sole purpose of memory barrier operations is to provide a
way to guarantee the ordering of operations between cooperating CPUs. Memory
barriers fall into three general classes:
■ read memory barrier
■ write memory barrier
■ full (read/write) memory barrier
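As a portable illustration of how barriers repair the workAvailable example, the following sketch uses C11 release/acquire ordering in place of the VX_MEM_BARRIER_W( )/VX_MEM_BARRIER_R( ) pair; this is not VxWorks code, only the same publish-then-flag idea in standard C.

```c
#include <stdatomic.h>
#include <stdbool.h>

static int work;                    /* the shared data */
static int *pWork;                  /* pointer published to the consumer */
static atomic_bool workAvailable;   /* the "work available" flag */

/* Producer (CPU 0): write the data, then set the flag with release
 * ordering so the data writes cannot be reordered after the flag write. */
void publish_work (int value)
    {
    work = value;
    pWork = &work;
    atomic_store_explicit (&workAvailable, true, memory_order_release);
    }

/* Consumer (CPU 1): read the flag with acquire ordering so the data
 * reads cannot be reordered before the flag read, then use the data. */
int consume_work (void)
    {
    while (!atomic_load_explicit (&workAvailable, memory_order_acquire))
        ;   /* busy-wait for the flag */
    return *pWork;
    }
```

The release store and acquire load play the roles of the write barrier on the producer side and the read barrier on the consumer side, used together as the text recommends.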
By inserting a memory barrier between the read operations, you can guarantee
that the reads occur in the appropriate order:
a = *pAvalue; /* will occur before read of *pBvalue */
VX_MEM_BARRIER_R();
b = *pBvalue; /* will occur after read of *pAvalue */
While VX_MEM_BARRIER_R( ) can ensure that the read operations occur in the
correct order, this guarantee is not helpful unless the writer of the shared data also
ensures that the writes of the shared data also occur in the correct order. For this
reason, the VX_MEM_BARRIER_R( ) and VX_MEM_BARRIER_W( ) macros
should always be used together.
VxWorks SMP provides the ability to assign tasks to a specific CPU, after which
the scheduler ensures the tasks are only executed on that CPU. This assignment is
referred to as task CPU affinity.
While the default SMP operation in which any task can run on any CPU often
provides the best overall load balancing, there are cases in which assigning a
specific set of tasks to a specific CPU can be useful. For example, if a CPU is
dedicated to signal processing and does no other work, the cache remains filled
with the code and data required for that activity. This saves the cost of moving to
another CPU—which is incurred even within a single piece of silicon, as the L1 cache
is bound to a single CPU, and the L1 must be refilled with new text and data if the
task migrates to a different CPU.
Another example is a case in which profiling an application reveals that some of its
tasks are frequently contending for the same spinlock, and a fair amount of
execution time is wasted waiting for a spinlock to become available. Overall
performance could be improved by setting task CPU affinity such that all tasks
involved in spinlock contention run on the same CPU. This would free up more
time on other CPUs for other tasks.
Task CPU affinity can be set in the following manner:
■ A task can set its own CPU affinity or the CPU affinity of another task by
calling taskCpuAffinitySet( ).
■ A newly created task inherits the CPU affinity (if any) of the parent task. A task
created or initialized by any of the following routines inherits the CPU affinity
of the calling task: taskSpawn( ), taskCreate( ), taskInit( ), taskOpen( ), and
taskInitExcStk( ).
The creating task's CPU affinity is not inherited, however, when the task that
is created is an RTP's initial task. For example, if a task invokes rtpSpawn( ),
the initialization task of the resulting RTP does not inherit the CPU affinity of
the caller.
The taskLib library provides routines for managing task CPU affinity. They are
described in Table 15-6.
Table 15-6 Task CPU Affinity Routines

Routine                  Description
taskCpuAffinitySet( )    Sets the CPU affinity of a task.
taskCpuAffinityGet( )    Gets the CPU affinity of a task.
The routine taskCpuAffinitySet( ) takes a CPU set variable (of type cpuset_t) to
identify the CPU to which the task should be assigned. Similarly, the
taskCpuAffinityGet( ) routine takes a pointer to a cpuset_t variable for the
purpose of recording the CPU affinity for a given task.
In both cases the CPUSET_ZERO( ) macro must be used to clear the cpuset_t
variable before the call is made. For taskCpuAffinitySet( ), the CPUSET_SET( )
macro must be used after CPUSET_ZERO( ) and before the routine itself is called.
To remove task CPU affinity, use the CPUSET_ZERO( ) macro to clear the
cpuset_t variable, and then make the taskCpuAffinitySet( ) call again.
For more information about using these routines and macros, see Task CPU Affinity
Examples, p.695 and CPU Set Variables and Macros, p.698.
By default, real-time process (RTP) tasks inherit the CPU affinity setting of the task
that created the RTP. If the parent task has no specific CPU affinity (that is, it can
execute on any available CPU and may migrate from one CPU to the other during
its lifetime), then the RTP's tasks have no specific CPU affinity either. If the parent
task has its affinity set to a given CPU, then by default, the RTP tasks inherit this
affinity and execute only on the same CPU as the parent task.
The RTP_CPU_AFFINITY_NONE option for rtpSpawn( ) can be used to create an
RTP in which tasks have no CPU affinity, despite the fact that the RTP's parent task
may itself have had one.
The following sample code illustrates the sequence to set the affinity of a newly
created task to CPU 1.
STATUS affinitySetExample (void)
    {
    cpuset_t affinity;
    int tid;

    /* Create the task but only activate it after setting its affinity */

    tid = taskCreate ("myCpu1Task", 100, 0, 5000, printf,
                      (int) "myCpu1Task executed on CPU 1 !", 0, 0, 0,
                      0, 0, 0, 0, 0, 0);
    if (tid == NULL)
        return (ERROR);

    /* Clear the affinity CPU set and set index for CPU 1 */

    CPUSET_ZERO (affinity);
    CPUSET_SET (affinity, 1);

    if (taskCpuAffinitySet (tid, affinity) == ERROR)
        {
        taskDelete (tid);
        return (ERROR);
        }

    /* The task will now run only on CPU 1 */

    taskActivate (tid);
    return (OK);
    }
The next example shows how a task can remove its affinity to a CPU:
STATUS affinityRemoveExample (void)
    {
    cpuset_t affinity;
    CPUSET_ZERO (affinity);
    return (taskCpuAffinitySet (taskIdSelf (), affinity));
    }
The kernelLib and vxCpuLib libraries provide routines for getting information
about, and for managing, CPUs. They are described in Table 15-7 and Table 15-8.
Table 15-7 CPU Management Routines (kernelLib)

Routine                  Description
kernelCpuEnable( )       Enables the specified CPU.
kernelIsCpuIdle( )       Reports whether the specified CPU is executing its idle task.
kernelIsSystemIdle( )    Reports whether all enabled CPUs are executing idle tasks.
The kernelCpuEnable( ) routine allows you to enable a specific CPU. Once a CPU
is enabled, it starts dispatching tasks as directed by the scheduler. All CPUs are
enabled by default, but the ENABLE_ALL_CPUS component parameter can be used
to boot VxWorks SMP with just CPU 0 enabled (for more information see
ENABLE_ALL_CPUS, p.677). Then, kernelCpuEnable( ) can be used to selectively
enable individual CPUs.
Table 15-8 CPU Information Routines (vxCpuLib)

Routine                  Description
vxCpuConfiguredGet( )    Returns the number of CPUs configured for VxWorks SMP.
vxCpuEnabledGet( )       Returns the set of enabled CPUs.
vxCpuIndexGet( )         Returns the index of the CPU on which the caller is running.
VxWorks SMP provides a CPU set variable type, and CPU set macros for
manipulating variables defined by that type. The variable and macros must be
used in conjunction with various routines—such as taskCpuAffinitySet( )—for
getting information about CPUs and managing their use.
The cpuset_t variable type is used for identifying the CPUs that have been
configured into a VxWorks SMP system with the target BSP, which may be a
subset of the CPUs in the hardware platform.
Each bit in a cpuset_t variable corresponds to a specific CPU, or CPU index, with
the first bit representing CPU 0 (the bootstrap CPU). The first bit corresponds to
index 0, the second to 1, the third to 2, and so on (regardless of the physical location
of the CPUs in the hardware).
As an example, for an eight CPU hardware system, for which the BSP configures
four CPUs for VxWorks SMP, the CPUSET_ZERO( ) macro would clear all the bits
in a cpuset_t variable, and then a call to vxCpuEnabledGet( ) would set the first four.
CPU set macros must be used to set and unset CPU indices (change the bits of
cpuset_t variables). These macros are described in Table 15-9, CPU Set Macros,
p.699. In order to use these macros, include the cpuset.h header file.
! CAUTION: Do not manipulate cpuset_t type variables directly. Use CPU set
macros.
Table 15-9 CPU Set Macros

Macro               Description
CPUSET_ZERO( )      Clears all bits in a cpuset_t variable.
CPUSET_SET( )       Sets the bit for a given CPU index.
CPUSET_CLR( )       Clears the bit for a given CPU index.
CPUSET_ISSET( )     Reports whether the bit for a given CPU index is set.
For an example of how CPU set macros are used, see 15.10.1 Task CPU Affinity,
p.693. For more information about the macros, see the entry for cpuset in the
VxWorks API references.
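The macros can be pictured as simple bit operations on a cpuset_t word. The definitions below are illustrative stand-ins only; in actual code, use the real macros from cpuset.h, never hand-rolled versions.

```c
/* Illustrative stand-ins for the CPU set macros.  Each bit of the set
 * corresponds to one CPU index, bit 0 being CPU 0 (the bootstrap CPU). */
typedef unsigned int cpuset_t;

#define CPUSET_ZERO(set)      ((set) = 0)
#define CPUSET_SET(set, n)    ((set) |= (1u << (n)))
#define CPUSET_CLR(set, n)    ((set) &= ~(1u << (n)))
#define CPUSET_ISSET(set, n)  (((set) & (1u << (n))) != 0)

/* Build a set containing only CPU 1, as in the affinity example above. */
cpuset_t cpu1Only (void)
    {
    cpuset_t affinity;
    CPUSET_ZERO (affinity);     /* always clear the set first */
    CPUSET_SET (affinity, 1);   /* then set the bit for CPU 1 */
    return affinity;
    }
```

The resulting set has only bit 1 set, so a task given this affinity is bound to CPU 1.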
The spy( ) routine reports the number of ticks spent in kernel, interrupt, idle, and
task code for each CPU. The output looks like the following:
-> spy
value = 1634761696 = 0x61707be0
->
NAME ENTRY TID PRI total % (ticks) delta % (ticks)
------------ ------------ ---------- --- --------------- ---------------
tJobTask 0x60056ae0 0x603d2010 0 0% ( 0) 0% ( 0)
tExcTask 0x60055cf0 0x601a3b30 0 0% ( 0) 0% ( 0)
tLogTask logTask 0x603d7c38 0 0% ( 0) 0% ( 0)
tNbioLog 0x600577b0 0x603db110 0 0% ( 0) 0% ( 0)
tShell0 shellTask 0x6051cec8 1 0% ( 0) 0% ( 0)
tWdbTask wdbTask 0x603c7840 3 0% ( 0) 0% ( 0)
tSpyTask spyComTask 0x61707be0 5 0% ( 0) 0% ( 0)
tAioIoTask1 aioIoTask 0x60443888 50 0% ( 0) 0% ( 0)
tAioIoTask0 aioIoTask 0x60443c88 50 0% ( 0) 0% ( 0)
tNet0 ipcomNetTask 0x60485020 50 0% ( 0) 0% ( 0)
ipcom_syslog 0x60109060 0x60485c78 50 0% ( 0) 0% ( 0)
ipnetd 0x6010d340 0x603bf0c8 50 0% ( 0) 0% ( 0)
tAioWait aioWaitTask 0x60443590 51 0% ( 0) 0% ( 0)
tIdleTask0 idleTaskEntr 0x60389a38 287 100% ( 1000) 100% ( 500)
tIdleTask1 idleTaskEntr 0x603a2000 287 100% ( 1000) 100% ( 500)
KERNEL 0% ( 0) 0% ( 0)
INTERRUPT 0% ( 0) 0% ( 0)
TOTAL 200% ( 1000) 200% ( 500)
Note that while timexLib can avoid precision errors by auto-calibrating itself and
making several calls to the functions being monitored, it suffers from the lack of
scheduling management during the calls. The tasks can move between CPUs while
the measurements take place. Depending on how often this occurs, this is likely to
have an impact on the precision of the measurement.
Threading
Using Spinlocks
Using vmBaseLib
The vmBaseLib library is the VxWorks MMU management library that allows
kernel applications and drivers to manage the MMU. An important task of an SMP
operating system is to ensure the coherency of the translation lookaside buffers
(TLBs) of the MMU contained in each CPU. Some CPUs, like the MPC8641D, have
hardware that ensures TLBs are always coherent. Other CPUs, such as the
BCM1480 and Intel Dual Core Xeon LV, do not have this capability. In these cases
the operating system is responsible for propagating MMU events that affect TLB
coherency to all CPUs in the system.
While not all events require propagation—it is generally limited to events that
modify an existing page mapping such as with vmStateSet( )—the propagation
that must be performed has a negative impact on some VxWorks SMP vmBaseLib
routines. To reduce the negative impact on your system’s performance, minimize
the number of calls to vmStateSet( ), and so on. For example, if a region with
special settings is needed from time to time during system operation, it is better to
set it up once during startup, and then reuse it as needed, rather than creating and
destroying a region for each use.
For some applications and systems, assigning specific interrupts or specific tasks
to designated CPUs can provide performance advantages. For more information,
see 15.10.2 Interrupt CPU Affinity, p.696 and 15.10.1 Task CPU Affinity, p.693.
In addition, multiple CPUs introduce complexities with regard to objects that are
global in a uniprocessor system, but must be CPU-specific in an SMP system.
Migrating code from VxWorks UP to VxWorks SMP necessarily involves several
steps in moving from uniprocessor code and hardware to SMP code and hardware.
The migration process also involves using different multitasking facilities,
different BSP support, and so on. Some parts of migration activity involve
replacing a uniprocessor technology with an SMP one—such as replacing
taskLock( ) with spinLockTaskTake( )—while others involve changing the use of
features that have different behaviors in VxWorks UP and VxWorks SMP (for
example, some vmBaseLib routines).
This section provides an overview of the migration process, a summary of the
operating system facilities that need to be taken into account in migration, and
more detailed information about individual migration issues. It does not provide
a completely self-contained discussion of migration to SMP. It is necessarily a
supplement to the preceding material in this chapter, which provides information
about the core features of VxWorks SMP. Incorporation of these features naturally
forms the basis for migrating code from VxWorks UP to VxWorks SMP. The
material following the discussion of general issues—15.15.1 Code Migration Path,
p.705 and 15.15.2 Overview of Migration Issues, p.707—therefore covers some of the
less tidy aspects of migration.
This section describes the migration model and the recommended path for
migrating applications from VxWorks UP to VxWorks SMP.
Wind River recommends that you approach migrating code designed for an earlier
version of VxWorks UP to the current version of VxWorks SMP with the following
steps:
Incompatible uniprocessor feature or practice: tlsLib routines
    SMP feature or practice: __thread storage class
    Reference: Task Local Storage: tlsLib, p.714

Incompatible uniprocessor feature or practice: accessing global variables that are
CPU-specific or inaccessible in SMP
    SMP feature or practice: replace with CPU-specific variable routines and practices
    Reference: 15.15.8 SMP CPU-Specific Variables and Uniprocessor Global Variables, p.714
Incompatible uniprocessor feature or practice: drivers that are not VxBus-compliant
    SMP feature or practice: VxBus-compliant drivers
    Reference: 15.15.10 Drivers and BSPs, p.717
Also note that the custom scheduler framework is not supported for VxWorks
SMP.
For example, the assumption that Task B will not run until Task A releases the
CPU is invalid on an SMP system.
Implicit synchronization based on task priority is not easy to detect. Careful
review of all code that causes a task to become ready to run would be a useful
approach. For example, review code that uses the following types of routines:
■ Routines That Create Tasks
In VxWorks SMP it is, for example, possible for a task to be running at the same
time that an ISR is executing. This is not possible in VxWorks UP, and therefore
requires changes to the way mutual exclusion between a task and an ISR is done.
A common synchronization method between an ISR and a task in VxWorks is the
binary semaphore. This mechanism works equally well in VxWorks SMP, and
therefore code that uses binary semaphores in this manner need not be modified
for VxWorks SMP—provided the ISR is running with interrupts enabled when it
calls semGive( ). This is also true of other messaging and synchronization
routines, such as message queues and VxWorks events. Note, however, that when
an ISR wakes up a task (by giving a binary semaphore, sending a VxWorks event,
sending a message to a message queue, etc.), the awakened task may start running
immediately on another CPU.
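The ISR-to-task pattern described above can be sketched as follows. This is a minimal illustration, not from the guide's example set: the routine names myDevIsr( ) and myDevWorker( ) are hypothetical, and the device-specific interrupt handling is elided; only semBCreate( ), semTake( ), and semGive( ) are standard VxWorks calls.

```c
#include <vxWorks.h>
#include <semLib.h>

LOCAL SEM_ID syncSemId;     /* created before the ISR is connected */

void myDevIsr (void)        /* runs at interrupt level */
    {
    /* ... acknowledge the device interrupt ... */

    semGive (syncSemId);    /* wake the worker task; on an SMP system it
                             * may start immediately on another CPU */
    }

void myDevWorker (void)     /* kernel task */
    {
    syncSemId = semBCreate (SEM_Q_FIFO, SEM_EMPTY);
    FOREVER
        {
        semTake (syncSemId, WAIT_FOREVER);  /* pend until the ISR fires */
        /* ... process the event ... */
        }
    }
```

The same pattern works unchanged in VxWorks UP and VxWorks SMP, provided the ISR runs with interrupts enabled when it calls semGive( ).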
For more information about uniprocessor synchronization mechanisms and the
SMP alternatives, see 15.15.7 Unsupported Uniprocessor Routines and SMP
Alternatives , p.711.
While the routines provided in VxWorks UP and VxWorks SMP are largely the
same, a few have different behaviors in VxWorks SMP due to the requirements of
multiprocessor systems, and their use is restricted.
cacheLib Restrictions
The only way for the hardware cache coherency to be effective is to have the caches
turned on at all times. VxWorks SMP therefore turns on the caches of each CPU as
it is enabled, and never allows them to be disabled. Calling cacheEnable( ) in
VxWorks SMP always returns OK. Calling cacheDisable( ) in VxWorks SMP
always returns ERROR, with errno set to S_cacheLib_FUNCTION_UNSUPPORTED.
These routines are not supported in VxWorks SMP. If they are called, they return
ERROR and set errno to S_cacheLib_FUNCTION_UNSUPPORTED.
vmBaseLib Restrictions
VxWorks SMP does not provide APIs for changing memory page attributes. On an
SMP system it is essential that the RAM regions that are shared between the CPUs
never be allowed to get out of coherency with one another. If a single page in
system RAM were to have its attributes changed so that it no longer correctly
participates in the hardware coherency protocol, any operating system use of that
page (for spinlocks, shared data structures, and so on) would be at risk of
unpredictable behavior. This unpredictable behavior might even occur long after
the offending change in the page attributes has occurred. This type of problem
would be extremely difficult to debug, because of the underlying assumption that
the hardware coherency in SMP simply works.
These routines are called to modify the attributes of a single page of virtual
memory. In an SMP system, the caching attributes of a page cannot be modified.
Attempting to do so causes these routines to return ERROR with errno set to
S_vmLib_BAD_STATE_PARAM.
Some of the routines available in VxWorks UP are not supported in VxWorks SMP
because their functionality is at odds with truly concurrent execution of tasks and
ISRs, or because they would degrade performance to an unacceptable extent. SMP
alternatives provide comparable functionality that is designed for symmetric
multiprocessing.
In VxWorks UP, the intLock( ) routine is used by a task or ISR to prevent VxWorks
from processing interrupts. The typical use of this routine is to guarantee mutually
exclusive access to a critical section of code between tasks, between tasks and ISRs,
or between ISRs (as with nested ISRs—when an ISR can be preempted by an ISR of
higher priority).
This mechanism would be inappropriate for a multiprocessor system, and
VxWorks SMP provides the following alternatives for interrupt locking:
■ If interrupt locking is used to make a simple pseudo-atomic operation on a
piece of memory, atomic operations may be a suitable alternative.
■ If interrupt locking is used as a mutual exclusion mechanism between tasks
only, semaphores or task-only spinlocks are suitable replacements. Spinlock
acquisition and release operations are faster than semaphore operations, so
they would be suitable to protect a short critical section that needs to be fast.
Semaphores are suitable for longer critical sections.
■ If interrupt locking is used as a mutual exclusion mechanism between tasks
and ISRs, or between ISRs, ISR-callable spinlocks are a suitable replacement.
■ If interrupt locking is used as a mutual exclusion mechanism between tasks
only, taskCpuLock( ) can be used instead as long as all tasks taking part in the
mutual exclusion scenario have the same CPU affinity. This alternative should
not be used in custom extensions to the operating system other than as a
temporary measure when migrating code from VxWorks UP to VxWorks
SMP.
■ If interrupt locking is used as a mutual exclusion mechanism between tasks,
between tasks and ISRs, or between ISRs, then intCpuLock( ) can be used as
long as all tasks and ISRs taking part in the mutual exclusion scenario have the
same CPU affinity. This alternative should not be used in custom extensions
to the operating system other than as a temporary measure when migrating
the code from VxWorks UP to VxWorks SMP.
Note that for VxWorks UP, ISR-callable spinlocks are implemented with the
same behavior as the interrupt locking routines intLock( ) and intUnlock( ).
For information about SMP mutual exclusion facilities, see 15.6 Spinlocks for Mutual
Exclusion and Synchronization, p.681, 15.7 CPU-Specific Mutual Exclusion, p.687, and
15.9 Atomic Memory Operations, p.692.
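The conversion from interrupt locking to an ISR-callable spinlock can be sketched as follows. This is an illustrative fragment assuming the spinLockLib API (the spinlockIsr_t type and the spinLockIsrInit( ), spinLockIsrTake( ), and spinLockIsrGive( ) routines); the data being protected and the routine names are hypothetical.

```c
#include <vxWorks.h>
#include <spinLockLib.h>

LOCAL spinlockIsr_t countLock;   /* protects sharedCount on all CPUs */
LOCAL int sharedCount;

void countInit (void)
    {
    spinLockIsrInit (&countLock, 0);
    }

void countBump (void)            /* callable from tasks and from ISRs */
    {
    /* VxWorks UP equivalent: int key = intLock ( ); */
    spinLockIsrTake (&countLock);
    sharedCount++;               /* keep the critical section short */
    spinLockIsrGive (&countLock);
    /* VxWorks UP equivalent: intUnlock (key); */
    }
```

Because a spinlock busy-waits on other CPUs, the critical section it protects should be as short as possible.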
In VxWorks UP, task locking routines are used by a task to prevent the scheduling
of any other task in the system, until it calls the corresponding unlock routine. The
typical use of these routines is to guarantee mutually exclusive access to a critical
section of code.
With VxWorks UP, the kernel routine taskLock( ) is used to lock out all other tasks
in the system by suspending task preemption (also see Task Locking in RTPs:
taskRtpLock( ) and taskRtpUnlock( ), p.713). This mechanism would be
inappropriate for a multiprocessor system, and VxWorks SMP provides the
following alternatives:
■ Semaphores.
■ Atomic operations.
■ Task-only spinlocks. Spinlock acquisition and release operations are faster than
semaphore operations (the other alternative in this case), so they would be
suitable to protect a short critical section that needs to be fast.
■ The taskCpuLock( ) routines for situations where all tasks taking part in the
task-locking scenario have the same CPU affinity. This alternative should not
be used in custom extensions to the operating system other than as a
temporary measure when migrating code from VxWorks UP to VxWorks
SMP.
Note that for VxWorks UP, task-only spinlocks are implemented with the same
behavior as the task locking routines taskLock( ) and taskUnlock( ).
For information about SMP mutual exclusion facilities, see 15.6 Spinlocks for Mutual
Exclusion and Synchronization, p.681, 15.7 CPU-Specific Mutual Exclusion, p.687, and
15.9 Atomic Memory Operations, p.692.
The VxWorks UP task local storage routines provided by tlsLib for user-mode
(RTP) applications are not compatible with an SMP environment, as more than one
task using the same task variable location could be executing concurrently. The
tlsLib routines are as follows:
■ tlsKeyCreate( )
■ tlsValueGet( )
■ tlsValueSet( )
■ tlsValueOfTaskGet( )
■ tlsValueOfTaskSet( )
The __thread storage class should be used instead. For more information, see
4.7.3 Task-Specific Variables, p.191.
Some objects that are global in a uniprocessor system (such as errno) are
CPU-specific entities in VxWorks SMP, and others are inaccessible or non-existent
in VxWorks SMP.
! CAUTION: Wind River recommends that you do not manipulate any CPU-specific
or global variables directly. Using the appropriate API is recommended to prevent
unpredictable behavior and to ensure compatibility with future versions of
VxWorks.
The SMP CPU-specific variables that can be accessed indirectly with the
appropriate routines are as follows:
■ errno
■ taskIdCurrent
■ intCnt
■ isrIdCurrent
errno
! CAUTION: Do not access errno directly from assembly code. Do not access errno
directly from C or C++ code that does not include errno.h.
taskIdCurrent
intCnt
In an SMP system, specific interrupts are dedicated to a specific CPU. The intCnt
variable is used to track the number of nested interrupts that exist on a specific
CPU. Code that references this variable should be changed to use the intCount( )
routine instead.
isrIdCurrent
The isrIdCurrent variable is used to identify the ISR executing on the specific CPU.
This global is only available if the INCLUDE_ISR_OBJECTS component is included
in VxWorks. Code that accesses isrIdCurrent must be changed to use the
isrIdSelf( ) routine instead.
The VxWorks UP variables that do not exist in VxWorks SMP—or that must not be
accessed by user code in any way—are as follows:
■ vxIntStackBase
■ vxIntStackEnd
■ kernelIsIdle
■ windPwrOffCpuState
vxIntStackBase
The vxIntStackBase variable identifies the base of the interrupt stack used for
processing interrupts. In VxWorks SMP, each CPU has its own vxIntStackBase,
since interrupts may be processed by multiple CPUs simultaneously. There is no
routine for accessing this variable, and it must not be accessed by user code.
vxIntStackEnd
The vxIntStackEnd variable identifies the end of the interrupt stack for each CPU.
There is no routine for accessing this variable and it must not be accessed by user
code.
kernelIsIdle
windPwrOffCpuState
In an SMP system memory coherency is required to ensure that each CPU sees the
same memory contents. Depending on the CPU architecture, some memory access
attributes may not be suitable for a system where memory coherency is required.
For information in this regard, see the VxWorks Architecture Supplement.
Both drivers and BSPs developed for VxWorks SMP must adhere to the
programming practices described throughout this chapter. Drivers must also
conform to the VxBus driver model. BSPs, in addition to providing support for
VxBus, must provide facilities different from those of VxWorks UP for reboot
handling, CPU enumeration, interrupt routing and assignment, and so on. For more
information, see VxWorks Device Driver Developer’s Guide and VxWorks BSP
Developer’s Guide.
16 Shared-Memory Objects: VxMP
16.1 Introduction
VxMP is a VxWorks component that provides shared-memory objects dedicated
to high-speed synchronization and communication between tasks running in
separate instances of VxWorks.
Shared-memory objects are a class of system objects that can be accessed by tasks
running on different processors. The object’s data structures reside in memory
accessible by all processors. Shared-memory objects are an extension of local
VxWorks objects. Local objects are only available to tasks on a single processor.
VxMP supplies the following types of shared-memory objects:
■ shared semaphores (binary and counting)
■ shared message queues
■ shared-memory partitions (system- and user-created partitions)
Shared-memory objects provide the following advantages:
■ A transparent interface that allows shared-memory objects to be manipulated
with the same routines that are used for manipulating local objects.
■ High-speed inter-processor communication—no going through an
unnecessary network stack.
■ The shared memory can reside either in dual-ported RAM or on a separate
memory board.
VxMP consists of the following facilities: a name database (smNameLib), task
synchronization and resource tracking with semaphores (semSmLib), messaging
with message queues (msgQSmLib) to build a custom protocol, and a
shared-memory allocator (smMemLib).
NOTE: VxMP can only be used in kernel space. It cannot be used in user space
(real-time processes).
Kernel tasks running on different CPUs can provide and obtain the object ID of
shared memory objects in a variety of ways, including shared message queues and
data structures in shared memory. The most convenient method, however, is by
using the VxMP name database to publish and access the object ID.
After the shared-memory facilities are initialized at run-time, all processors are
treated alike. Kernel tasks on any CPU can create and use shared-memory objects.
No processor has priority over another from a shared-memory object’s point of
view.1
There are few restrictions on shared-memory object use (they cannot, for example,
be used at interrupt level), and they are easily portable between uniprocessor and
multiprocessor systems, which can be advantageous in the development process.
Note that throughout the remainder of this chapter, system objects under
discussion refer to shared objects unless otherwise indicated.
VxMP provides a transparent interface that makes it easy to execute code using
shared-memory objects on both a multiprocessor system and a uniprocessor
system.
Only the object creation routines are different for shared-memory objects. After
creation, the same routines that are used for operations on local objects can be used
for the shared-memory objects. This allows an application to run in either a
uniprocessor or a multiprocessor environment with only minor changes to system
configuration, initialization, and object creation.
Using shared-memory objects on a uniprocessor system is useful for testing an
application before porting it to a multiprocessor configuration. However, for
objects that are used only locally, local objects always provide the best
performance.
1. Do not confuse this type of priority with the CPU priorities associated with VMEbus access.
Note, however, on systems where the processors have different byte ordering, you
must call the ntohl and htonl macros to byte-swap the application shared data that
is passed with message queues and so on. VxMP handles the endianness of all
system data structures and IDs internally. Names are byte-streams (strings) so
they are not subject to endianness issues. The IDs returned by the name database
are converted internally and the ID obtained by the user has the correct
endianness.
Shared-memory objects are only available to kernel tasks. Unlike local semaphores
and message queues, shared-memory objects cannot be used at interrupt level. No
routines that use shared-memory objects can be called from ISRs. An ISR is
dedicated to handling time-critical processing associated with an external event;
therefore, using shared-memory objects at interrupt time is not appropriate. On a
multiprocessor system, run event-related, time-critical processing on the CPU on
which the time-related interrupt occurred.
Note that shared-memory objects are allocated from dedicated shared-memory
pools, and cannot be deleted.
When using shared-memory objects, the maximum number of each object type
must be specified; see 16.5.7 Dual-Port or External Memory, p.750. If applications
create more than the specified maximum number of objects, it is possible to run
out of memory. For more information in this regard, see 16.7 Troubleshooting,
p.755.
The VxMP name database allows the association of any value to any name, such as
a shared-memory object’s ID with a unique name. It can communicate or publish a
shared-memory block’s address and object type. The name database provides
name-to-value and value-to-name translation, allowing objects in the database to
be accessed either by name or by value.
While other methods exist for making an object’s ID known to other nodes (such
as with message queues, by being written to a shared memory block at a
pre-determined offset, and so on), the name database is the most convenient
method for doing so—it is simpler and it allows any node access to the information
at will.
Typically, the kernel task that creates an object also publishes the object’s ID by
means of the name database. By adding the new object to the database, the task
associates the object’s ID with a name. Tasks on other processors can look up the
name in the database to get the object’s ID. After the task has the ID, it can use it to
access the object. For example, task t1 on CPU 1 creates an object. The object ID is
returned by the creation routine and entered in the name database with the name
myObj. For task t2 on CPU 0 to operate on this object, it first finds the ID by
looking up the string myObj in the name database.
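The publish and look-up sequence described above can be sketched as follows, assuming the smNameLib routines smNameAdd( ) and smNameFind( ) and the T_SM_SEM_B type constant; the object name "myObj" and the use of a shared binary semaphore are illustrative.

```c
#include <vxWorks.h>
#include <semLib.h>
#include <semSmLib.h>
#include <smNameLib.h>

/* Task t1 (CPU 1): create the object and publish its ID. */
void publishObj (void)
    {
    SEM_ID semSmId = semSmBCreate (SEM_Q_FIFO, SEM_FULL);

    smNameAdd ("myObj", (void *) semSmId, T_SM_SEM_B);
    }

/* Task t2 (CPU 0): look up the ID before using the object. */
void findObj (void)
    {
    SEM_ID semSmId;
    int    objType;

    smNameFind ("myObj", (void **) &semSmId, &objType, WAIT_FOREVER);

    semTake (semSmId, WAIT_FOREVER);
    /* ... use the shared resource ... */
    semGive (semSmId);
    }
```

After the look-up, t2 operates on the semaphore with the same routines it would use for a local semaphore.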
This same technique can be used to publish a shared-memory address. For
example, task t1 on CPU 0 allocates a portion of memory and adds the address to
the database with the name mySharedMem. Task t2 on CPU 1 can find the address
of this shared memory by looking up the address in the name database using the
string mySharedMem.
Tasks on different processors can use an agreed-upon name to get a newly created
object’s value. See Table 16-1 for a list of name service routines. Note that
retrieving an ID from the name database need occur only one time for each task,
and usually occurs during application initialization. An ID can simply be retrieved
on a per-processor basis, if it is stored in a global variable (for example). However,
it is generally a good practice to retrieve IDs on a per-task basis.
The name database service routines automatically convert to or from network-byte
order; do not call htonl( ) or ntohl( ) C macros explicitly for values provided by the
name database. These C macros must, however, be used on application shared data
that is passed between processors with different byte orders using message queues
and so on. For more information in this regard, see 16.2.2 Multiprocessing and Byte
Ordering, p.721.
The output is sent to the standard output device, and looks like the following:
Name in Database Max : 100 Current : 5 Free : 95
Name Value Type
----------------- ------------- -------------
myMemory 0x3835a0 SM_BLOCK
myMemPart 0x3659f9 SM_PART_ID
myBuff 0x383564 SM_BLOCK
mySmSemaphore 0x36431d SM_SEM_B
myMsgQ 0x365899 SM_MSG_Q
task on any CPU in the system can use the semaphore by first getting the
semaphore ID (for example, from the name database). When it has the ID, it can
then take or give the semaphore.
In the case of employing shared semaphores for mutual exclusion, typically there
is a system resource that is shared between tasks on different CPUs and the
semaphore is used to prevent concurrent access. Any time a task requires exclusive
access to the resource, it takes the semaphore. When the task is finished with the
resource, it gives the semaphore.
For example, there are two tasks, t1 on CPU 0 and t2 on CPU 1. Task t1 creates the
semaphore and publishes the semaphore’s ID by adding it to the database and
assigning the name myMutexSem. Task t2 looks up the string myMutexSem in
the database to get the semaphore’s ID. Whenever a task wants to access the
resource, it first takes the semaphore by using the semaphore ID. When a task is
done using the resource, it gives the semaphore.
In the case of employing shared semaphores for synchronization, assume a task on
one CPU must notify a task on another CPU that some event has occurred. The task
being synchronized pends on the semaphore waiting for the event to occur. When
the event occurs, the task doing the synchronizing gives the semaphore.
For example, there are two tasks, t1 on CPU 0 and t2 on CPU 1. Both t1 and t2 are
monitoring robotic arms. The robotic arm that is controlled by t1 is passing a
physical object to the robotic arm controlled by t2. Task t2 moves the arm into
position but must then wait until t1 indicates that it is ready for t2 to take the object.
Task t1 creates the shared semaphore and publishes the semaphore’s ID by adding
it to the database and assigning the name objReadySem. Task t2 looks up the
string objReadySem in the database to get the semaphore’s ID. It then takes the
semaphore by using the semaphore ID. If the semaphore is unavailable, t2 pends,
waiting for t1 to indicate that the object is ready for t2. When t1 is ready to transfer
control of the object to t2, it gives the semaphore, readying t2 on CPU 1.
There are two types of shared semaphores, binary and counting. Shared
semaphores have their own create routines and return a SEM_ID. Table 16-3 lists
the create routines. All other semaphore routines, except semDelete( ), operate
transparently on the created shared semaphore.
[Figure 16-1: Tasks task1 (on CPU 1) and task2 (on another CPU) both call
semTake( ) on the same shared binary semaphore, whose data structure resides in
shared memory. task1 executes before task2; the semaphore state is EMPTY, so
both tasks are placed on its pend queue, task1 ahead of task2.]
The use of shared semaphores and local semaphores differs in several ways:
■ The shared semaphore queuing order specified when the semaphore is created
must be FIFO. If it is not, an error is generated, and errno is set to
S_semLib_INVALID_QUEUE_TYPE.
Figure 16-1 shows two tasks executing on different CPUs, both trying to take
the same semaphore. Task 1 executes first, and is put at the front of the queue
because the semaphore is unavailable (empty). Task 2 (executing on a different
CPU) tries to take the semaphore after task 1’s attempt and is put on the queue
behind task 1.
■ Shared semaphores cannot be given from interrupt level. If they are, an error is
generated, and errno is set to S_intLib_NOT_ISR_CALLABLE.
■ Shared semaphores cannot be deleted. Attempts to delete a shared semaphore
return ERROR and set errno to S_smObjLib_NO_OBJECT_DESTROY.
Use semInfo( ) to get the shared task control block of tasks pended on a shared
semaphore. Use semShow( ) to display the status of the shared semaphore and a
The output is sent to the standard output device, and looks like the following:
Semaphore Id : 0x36431d
Semaphore Type : SHARED BINARY
Task Queuing : FIFO
Pended Tasks : 2
State : EMPTY
TID CPU Number Shared TCB
------------- ------------- --------------
0xd0618 1 0x364204
0x3be924 0 0x36421c
The following code example depicts two tasks executing on different CPUs and
using shared semaphores. The routine semTask1( ) creates the shared semaphore,
initializing the state to full. It adds the semaphore to the name database (to enable
the task on the other CPU to access it), takes the semaphore, does some processing,
and gives the semaphore. The routine semTask2( ) gets the semaphore ID from the
database, takes the semaphore, does some processing, and gives the semaphore.
/* semExample.h - shared semaphore example header file */
/*
* semTask1 - shared semaphore user
*/
semGive (semSmId);
return (OK);
}
#include <vxWorks.h>
#include <semLib.h>
#include <semSmLib.h>
#include <smNameLib.h>
#include <stdio.h>
#include "semExample.h"
/*
* semTask2 - shared semaphore user
*/
semGive (semSmId);
return (OK);
}
Shared message queues are FIFO queues used by kernel tasks to send and receive
variable-length messages on any of the CPUs that have access to the shared
memory. They can be used either to synchronize tasks or to exchange data between
kernel tasks running on different CPUs. See 4. Multitasking and the API reference
for msgQLib for a complete discussion of message queues.
To use a shared message queue, a task creates the message queue and publishes its
ID. A task that wants to send or receive a message with this message queue first
gets the message queue’s ID. It then uses this ID to access the message queue.
For example, consider a typical server/client scenario where a server task t1 (on
CPU 1) reads requests from one message queue and replies to these requests with
a different message queue. Task t1 creates the request queue and publishes its ID
by adding it to the name database assigning the name requestQue. If task t2 (on
CPU 0) wants to send a request to t1, it first gets the message queue ID by looking
up the string requestQue in the name database. Before sending its first request,
task t2 creates a reply message queue. Instead of adding its ID to the database, it
publishes the ID by sending it as part of the request message. When t1 receives the
request from the client, it finds in the message the ID of the queue to use when
replying to that client. Task t1 then sends the reply to the client by using this ID.
To pass messages between kernel tasks on different CPUs, first create the message
queue by calling msgQSmCreate( ). This routine returns a MSG_Q_ID. This ID is
used for sending and receiving messages on the shared message queue.
Like their local counterparts, shared message queues can send both urgent and
normal priority messages.
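The creation and use of a shared message queue can be sketched as follows; this is an illustrative fragment assuming msgQSmCreate( ) from msgQSmLib and the smNameLib publish routines, with hypothetical queue geometry and the name "requestQue" taken from the scenario above.

```c
#include <vxWorks.h>
#include <msgQLib.h>
#include <msgQSmLib.h>
#include <smNameLib.h>

#define MAX_MSGS     10
#define MAX_MSG_LEN 128

/* Server (CPU 1): create the request queue and publish its ID. */
void serverInit (void)
    {
    /* Task queuing order for shared message queues must be FIFO. */
    MSG_Q_ID smRequestQId = msgQSmCreate (MAX_MSGS, MAX_MSG_LEN, MSG_Q_FIFO);

    smNameAdd ("requestQue", (void *) smRequestQId, T_SM_MSG_Q);
    }

/* Client (CPU 0): look up the queue ID and send a request. */
void clientSend (char *request, int len)
    {
    MSG_Q_ID smRequestQId;
    int      objType;

    smNameFind ("requestQue", (void **) &smRequestQId, &objType, WAIT_FOREVER);
    msgQSend (smRequestQId, request, len, WAIT_FOREVER, MSG_PRI_NORMAL);
    }
```

After creation, msgQSend( ) and msgQReceive( ) operate on the shared queue exactly as they do on a local queue.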
[Figure 16-2: Tasks task1 and task2, on different CPUs, both call msgQReceive( )
on the same shared message queue, whose data structure resides in shared
memory. The queue is EMPTY, so both tasks are placed on its pend queue, task1
ahead of task2.]
The use of shared message queues and local message queues differs in several
ways:
■ The shared message queue task queuing order specified when a message
queue is created must be FIFO. If it is not, an error is generated and errno is set
to S_msgQLib_INVALID_QUEUE_TYPE.
Figure 16-2 shows two tasks executing on different CPUs, both trying to
receive a message from the same shared message queue. Task 1 executes first,
and is put at the front of the queue because there are no messages in the
message queue. Task 2 (executing on a different CPU) tries to receive a
message from the message queue after task 1’s attempt and is put on the queue
behind task 1.
■ Messages cannot be sent on a shared message queue at interrupt level. (This is
true even in NO_WAIT mode.) If they are, an error is generated, and errno is
set to S_intLib_NOT_ISR_CALLABLE.
■ Shared message queues cannot be deleted. Attempts to delete a shared
message queue return ERROR and set errno to
S_smObjLib_NO_OBJECT_DESTROY.
To achieve optimum performance with shared message queues, align send and
receive buffers on 4-byte boundaries.
To display the status of the shared message queue as well as a list of tasks pended
on the queue, call msgQShow( ) (VxWorks must be configured with the
INCLUDE_MSG_Q_SHOW component.) The following example displays detailed
information on the shared message queue 0x7f8c21 as indicated by the second
argument (0 = summary display, 1 = detailed display).
-> msgQShow 0x7f8c21, 1
value = 0 = 0x0
The output is sent to the standard output device, and looks like the following:
Message Queue Id : 0x7f8c21
Task Queuing : FIFO
Message Byte Len : 128
Messages Max : 10
Messages Queued : 0
Receivers Blocked : 1
Send timeouts : 0
Receive timeouts : 0
Receivers blocked :
TID CPU Number Shared TCB
---------- -------------------- --------------
0xd0618 1 0x1364204
In the following code example, two tasks executing on different CPUs use shared
message queues to pass data to each other. The server task creates the request
message queue, adds it to the name database, and reads a message from the queue.
The client task gets the smRequestQId from the name database, creates a reply
message queue, bundles the ID of the reply queue as part of the message, and
sends the message to the server. The server gets the ID of the reply queue and uses
it to send a message back to the client. This technique requires the use of the
network byte-order conversion C macros htonl( ) and ntohl( ), because the
numeric queue ID is passed over the network in a data field.
/* msgExample.h - shared message queue example header file */
/* This file contains the code for the message queue server task. */
#include <vxWorks.h>
#include <msgQLib.h>
#include <msgQSmLib.h>
#include <stdio.h>
#include <smNameLib.h>
#include "msgExample.h"
#include "netinet/in.h"
/*
* serverTask - receive and process a request from a shared message queue
*/
FOREVER
{
if (msgQReceive (smRequestQId, (char *) &request, sizeof (REQUEST_MSG),
WAIT_FOREVER) == ERROR)
return (ERROR);
/* This file contains the code for the message queue client task. */
#include <vxWorks.h>
#include <msgQLib.h>
#include <msgQSmLib.h>
#include <smNameLib.h>
#include <stdio.h>
#include "msgExample.h"
#include "netinet/in.h"
/*
* clientTask - sends request to server and reads reply
*/
STATUS clientTask
(
char * pRequestToServer /* request to send to the server */
/* limited to 100 chars */
)
{
MSG_Q_ID smRequestQId; /* request message queue */
MSG_Q_ID smReplyQId; /* reply message queue */
REQUEST_MSG request; /* request text */
int objType; /* dummy variable for smNameFind */
char serverReply[MAX_MSG_LEN]; /* buffer for server's reply */
return (OK);
}
The shared-memory allocator allows kernel tasks on different CPUs to allocate and
release variable size portions of memory that are accessible from all CPUs with
access to the shared-memory system. Two sets of routines are provided: low-level
routines for manipulating user-created shared-memory partitions, and high-level
routines for manipulating a shared-memory partition dedicated to the
shared-memory system pool. (This organization is similar to that used by the
local-memory manager, memPartLib.)
Shared-memory blocks can be allocated from different partitions. Both a
shared-memory system partition and user-created partitions are available.
User-created partitions can be created and used for allocating data blocks of a
particular size. Memory fragmentation is avoided when fixed-sized blocks are
allocated from user-created partitions dedicated to a particular block size.
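A user-created partition dedicated to one block size can be sketched as follows; this fragment assumes the memPartSmCreate( ) routine from smMemLib, that the supplied pool lies in memory accessible to all CPUs, and illustrative pool and block sizes. After creation, the ordinary memPartLib routines operate transparently on the shared partition.

```c
#include <vxWorks.h>
#include <memLib.h>
#include <smMemLib.h>

#define BLOCK_SIZE 64

void partExample (char *smPoolAdrs, unsigned smPoolSize)
    {
    PART_ID smPartId;               /* shared partition for fixed-size blocks */
    char *  pBlock;

    /* Create a shared partition over a pool in shared memory. */
    smPartId = memPartSmCreate (smPoolAdrs, smPoolSize);

    /* Allocating only BLOCK_SIZE blocks from this partition avoids
     * fragmentation. */
    pBlock = memPartAlloc (smPartId, BLOCK_SIZE);

    /* ... convert pBlock to a global address and share it with other CPUs ... */

    memPartFree (smPartId, pBlock);
    }
```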
semaphore. Whenever a task is finished with the shared data, it must give the
semaphore.
For example, assume two tasks executing on two different CPUs must share data.
Task t1 executing on CPU 1 allocates a memory block from the shared-memory
system partition and converts the local address to a global address. It then adds the
global address of the shared data to the name database with the name
mySharedData. Task t1 also creates a shared semaphore and stores the ID in the
first field of the data structure residing in the shared memory. Task t2 executing on
CPU 2 looks up the string mySharedData in the name database to get the address
of the shared memory. It then converts this address to a local address. Before
accessing the data in the shared memory, t2 gets the shared semaphore ID from the
first field of the data structure residing in the shared-memory block. It then takes
the semaphore before using the data and gives the semaphore when it is done
using the data.
User-Created Partitions
The shared-memory system partition is analogous to the system partition for local
memory. Table 16-4 lists routines for manipulating the shared-memory system
partition.
Routines that return a pointer to allocated memory return a local address (that is,
an address suitable for use from the local CPU). To share this memory across
processors, this address must be converted to a global address before it is
announced to tasks on other CPUs. Before a task on another CPU uses the memory,
it must convert the global address to a local address. Macros and routines are
provided to convert between local addresses and global addresses; see the header
file smObjLib.h and the API reference for smObjLib.
The following code example uses memory from the shared-memory system
partition to share data between kernel tasks on different CPUs. The first member
of the data structure is a shared semaphore that is used for mutual exclusion. The
send task creates and initializes the structure, then the receive task accesses the
data and displays it.
/* buffProtocol.h - simple buffer exchange protocol header file */
#include <vxWorks.h>
#include <semLib.h>
#include <semSmLib.h>
#include <smNameLib.h>
#include <smObjLib.h>
#include <stdio.h>
#include "buffProtocol.h"
/*
* buffSend - write to shared semaphore protected buffer
*/
/*
* Initialize shared buffer structure before adding to database. The
* protection semaphore is initially unavailable and the receiver blocks.
*/
/*
* Convert address of shared buffer to a global address and add to
* database.
*/
return (OK);
}
#include <vxWorks.h>
#include <semLib.h>
#include <semSmLib.h>
#include <smNameLib.h>
#include <smObjLib.h>
#include <stdio.h>
#include "buffProtocol.h"
/*
* buffReceive - receive shared semaphore protected buffer
*/
return (OK);
}
This example is similar to Example 16-3, which uses the shared-memory system
partition. This example creates a user-defined partition and stores the shared data
in this new partition. A shared semaphore is used to protect the data.
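Before the example, it may help to see why a partition dedicated to one block size cannot fragment: every free slot can satisfy every request. The toy allocator below is our illustration, not VxMP code; a real application would create the partition with memPartSmCreate( ) and allocate from it with memPartAlloc( ).

```c
#include <assert.h>
#include <stddef.h>

/* Toy fixed-size-block partition. All blocks are the same size, so any
 * free block satisfies any allocation and no fragmentation can occur. */

#define BLOCK_SIZE 64
#define NUM_BLOCKS 8

static char pool[NUM_BLOCKS][BLOCK_SIZE];
static int  inUse[NUM_BLOCKS];

void *partAlloc (void)
    {
    int i;
    for (i = 0; i < NUM_BLOCKS; i++)
        if (!inUse[i])
            {
            inUse[i] = 1;
            return pool[i];
            }
    return NULL;                        /* partition exhausted */
    }

void partFree (void *block)
    {
    inUse[((char (*)[BLOCK_SIZE]) block) - pool] = 0;
    }
```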
/* memPartExample.h - shared memory partition example header file */
#include <vxWorks.h>
#include <memLib.h>
#include <semLib.h>
#include <semSmLib.h>
#include <smNameLib.h>
#include <smObjLib.h>
#include <smMemLib.h>
#include <stdio.h>
#include "memPartExample.h"
/*
* memPartSend - send shared memory partition buffer
*/
return (OK);
}
#include <vxWorks.h>
#include <memLib.h>
#include <stdio.h>
#include <semLib.h>
#include <semSmLib.h>
#include <stdio.h>
#include "memPartExample.h"
/*
* memPartReceive - receive shared memory partition buffer
*
* execute on CPU 1 - use a shared semaphore to protect shared memory
*/
return (OK);
}
16.3 System Requirements
Test-and-Set Cycle
All CPUs in the system must support indivisible test-and-set operations across the
(VME) bus. The indivisible test-and-set operation is used by the spinlock
mechanism to gain exclusive access to internal shared data structures. Because all
the boards must support hardware test-and-set, the parameter SM_TAS_TYPE is set to SM_TAS_HARD by default.
! CAUTION: Boards that make use of VxMP must support hardware test-and-set
(indivisible read-modify-write cycle). PowerPC is an exception; see the VxWorks
Architecture Supplement.
Inter-CPU Notification
CPUs must be notified of any event that affects them. The preferred method is for
the CPU initiating the event to interrupt the affected CPU. The use of interrupts is
dependent on the capabilities of the hardware. If interrupts cannot be used, a
polling scheme can be employed, although it generally results in a significant
performance penalty. For information about configuration for different
CPU-notification facilities, see 16.5.3 Mailbox Interrupts and Bus Interrupts, p.747.
The maximum number of CPUs that can use shared-memory objects is 20 (CPUs
numbered 0 through 19; the default is 10). For information about configuring
VxMP for the number of CPUs in a system, see 16.5.1 Maximum Number of CPUs,
p.746.
Spinlock Operation
The performance of a system using shared memory objects can be affected by the
operation of spinlocks, which are used internally for cross-processor
synchronization. The spinlocks may need to be tuned for proper operation, and
interrupt latency is increased while spinlocks are held. However, spinlocks are
used only for very short periods of time to protect critical regions (in a manner
similar to the use of interrupt locking on uniprocessor systems).
Internal shared-memory object data structures are protected against concurrent
access by a spinlock mechanism. The spinlock mechanism operates as a loop in
which an attempt is made to gain exclusive access to a resource (in this case an
internal data structure). An indivisible hardware test-and-set operation is used for
this mutual exclusion. If the first attempt to take the lock fails, multiple attempts
are made.
For the duration of the spinlock, interrupts are disabled to avoid the possibility of
a task being preempted while holding the spinlock. As a result, the interrupt
latency of each processor in the system is increased. However, the interrupt
latency added by shared-memory objects is constant for a particular CPU.
For more information about spinlocks and performance tuning, see
16.7 Troubleshooting, p.755.
Shared-Memory Objects and Shared-Memory Network Driver
Shared-memory objects and the shared-memory network use the same shared-memory region, the same shared-memory anchor, the same interrupt mechanism, some of the same data structures, and the same bus. While the two facilities do not interact in software, using them together can result in reduced performance.
If the two facilities are used together, the shared-memory anchor must be
configured in the same way for each. The shared-memory anchor is a location
accessible to all CPUs on the system. It stores a pointer to the shared-memory
header, a pointer to the shared-memory packet header (used by the 16
shared-memory network driver), and a pointer to the shared-memory object
header. For information about using the shared memory anchor with
shared-memory objects, see 16.5.4 Shared-Memory Anchor, p.747.
For information about shared-memory network, see the Wind River Network Stack
for VxWorks 6 Programmer’s Guide.
The maximum number of CPUs that can use shared-memory objects is 20 (CPUs
numbered 0 through 19). This limitation is imposed by the VMEbus hardware
itself. The practical maximum is usually a smaller number that depends on the
CPU, bus bandwidth, and application. The number is set with the SM_CPUS_MAX
configuration parameter of the INCLUDE_SM_COMMON component. By default it
is set to 10. Note that setting the number higher than the number of boards
actually used wastes memory.
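For example, a system with four boards might be configured as follows (a configuration sketch; the same value should be set on every CPU):

```c
/* config.h fragment: size VxMP for a four-CPU system (CPUs 0 through 3). */
/* Setting SM_CPUS_MAX higher than the number of boards wastes memory.    */
#define SM_CPUS_MAX 4
```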
16.5 Configuring VxWorks for Shared Memory Objects
! CAUTION: For the MC68K in general, if the MMU is off, data caching must be
turned off globally; see the API reference for cacheLib.
NOTE: The shared memory anchor is used by both VxMP and the shared-memory
network driver (if both are included in the system). For information about using
VxMP and the shared memory network driver at the same time, see Shared-Memory
Objects and Shared-Memory Network Driver, p.745.
Figure: the shared-memory anchor (at SM_ANCHOR_ADRS, default 0x600) contains a
pointer to the shared-memory objects' shared-memory region.
If the size of the objects created exceeds the shared-memory region, an error
message is displayed on CPU 0 during initialization.
At runtime, VxMP sets aside memory for the configured number of objects, and
then uses what is left over for the shared memory system partition.
The key distinction between configuration for dual-port and for external memory
is as follows:
■ For dual-port memory, the SM_OFF_BOARD parameter is set to FALSE for the
master CPU and to TRUE for all slave CPUs.
■ For external memory, the SM_OFF_BOARD parameter is set to TRUE for all
CPUs (master and slave CPUs).
The following sections describe configurations for each type of memory use.
The configuration illustrated in Figure 16-4 uses the shared memory in the master
CPU’s dual-ported RAM.
Figure 16-4: CPU 0's dual-ported RAM holds the anchor at offset 0x600; CPU 1 is
booted with sm=0x1800600. The local address of the allocated pool is 0x1000000;
the VMEbus address is 0.
In this example, the settings for the master (CPU 0) are as follows: the
SM_OFF_BOARD parameter is FALSE and SM_ANCHOR_ADRS is 0x600 (the value
is specific to the processor architecture), SM_OBJ_MEM_ADRS is set to NONE,
NOTE: When using dual-ported memory, the shared memory can be allocated
from the master’s kernel heap by setting SM_OBJ_MEM_ADRS to NONE. Note,
however, the following caveats in this regard:
■
The entire kernel heap must be mapped onto the shared bus address space
(that is, the VMEbus address space), since the memory can be allocated from
anywhere within it. The memory space of the shared bus mapped on the slaves
must also be large enough to see the whole heap.
■ Mapping the entire kernel heap might mean mapping the entire kernel or the
entire local RAM of the master. In this case, there is the potential risk of a
malfunctioning remote target overwriting critical kernel text and data
structures on the master.
Wind River recommends the alternative of assigning a static address to
SM_OBJ_MEM_ADRS and mapping only SM_OBJ_MEM_SIZE bytes of local RAM
onto the shared bus memory space.
For the slave (CPU 1) in this example, SM_OFF_BOARD is TRUE and the board maps
the base of the VMEbus to the local address 0x1000000. The anchor address is
therefore 0x1800600: the VMEbus address of the master's memory (0x800000), plus
the anchor offset (0x600), plus the local address at which the board maps the
base of the VMEbus (0x1000000). Many boards require further address
translation, depending on where the board maps VME memory.
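The slave's anchor calculation can be written out explicitly. This helper is ours for illustration, not a VxWorks routine; the values are the ones used in this example.

```c
#include <assert.h>

/* Local address at which a slave sees the master's shared-memory anchor:
 * the local address of the VMEbus window, plus the VMEbus address of the
 * master's dual-ported RAM, plus the anchor offset within that RAM. */

unsigned long slaveAnchorAdrs
    (
    unsigned long vmeBaseLocal,  /* local adrs of the VMEbus base (0x1000000) */
    unsigned long masterRamVme,  /* VMEbus adrs of the master's RAM (0x800000) */
    unsigned long anchorOffset   /* anchor offset within that RAM (0x600) */
    )
    {
    return vmeBaseLocal + masterRamVme + anchorOffset;
    }
```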
Figure: the shared memory is on an external RAM board (1 MB) at VMEbus address
0x2000000. CPU 0 sees the anchor at 0x3000000; CPU 1 is booted with
sm=0x2100000.
For the master (CPU 0) in this example, the SM_OFF_BOARD parameter is TRUE,
SM_ANCHOR_ADRS is 0x3000000, SM_OBJ_MEM_ADRS is set to
SM_ANCHOR_ADRS, and SM_OBJ_MEM_SIZE is set to 0x100000.
For the slave (CPU 1), SM_OFF_BOARD is TRUE and the anchor address is
0x2100000: the VMEbus address of the memory board (0x2000000) plus the local
address at which the slave maps the base of the VMEbus (0x100000).
This section describes the configuration settings for a multiprocessor system with
three CPUs and dual-ported memory.
The master is CPU 0, and shared memory is configured from its dual-ported
memory. This application has 20 tasks using shared-memory objects, and uses 12
message queues and 20 semaphores. The maximum size of the name database is
the default value (100), and only one user-defined memory partition is required.
On CPU 0, the shared-memory pool is configured to be on-board. This memory is
allocated from the processor’s system memory. On CPU 1 and CPU 2, the
shared-memory pool is configured to be off-board. Table 16-6 shows the
parameter values set for the INCLUDE_SM_OBJ and INCLUDE_SM_COMMON
components. Note that for the slave CPUs, the value of SM_OBJ_MEM_SIZE is not
actually used.
Master (CPU 0):
SM_OBJ_MAX_SEM      20
SM_OBJ_MAX_NAME     100
SM_OBJ_MAX_MSG_Q    12
SM_OBJ_MAX_MEM_PART 1
SM_OFF_BOARD        FALSE
SM_MEM_ADRS         NONE
SM_MEM_SIZE         0x10000
SM_OBJ_MEM_ADRS     NONE
SM_OBJ_MEM_SIZE     0x10000

Slaves (CPU 1 and CPU 2):
SM_OBJ_MAX_SEM      20
SM_OBJ_MAX_NAME     100
SM_OBJ_MAX_MSG_Q    12
SM_OBJ_MAX_MEM_PART 1
SM_OFF_BOARD        TRUE
SM_MEM_ADRS         SM_ANCHOR_ADRS
SM_MEM_SIZE         0x10000
SM_OBJ_MEM_ADRS     NONE
SM_OBJ_MEM_SIZE     0x10000
16.6 Displaying Information About Shared Memory Objects
! CAUTION: If the master CPU is rebooted, it is necessary to reboot all the slaves. If
a slave CPU is to be rebooted, it must not have tasks pended on a shared-memory
object.
16.7 Troubleshooting
Problems with shared-memory objects can be due to a number of causes. This
section discusses the most common problems and a number of troubleshooting
tools. Often, you can locate the problem by rechecking your hardware and
software configurations.
Use the following list to confirm that your system is properly configured:
■ Be sure to verify that VxWorks is configured with the INCLUDE_SM_OBJ
component for each processor using VxMP.
■ Be sure the anchor address specified is the address seen by the CPU. This can
be defined statically (with the SM_ANCHOR_ADRS configuration parameter),
or at boot time (with the sm boot loader parameter) if the target is booted with
the shared-memory network.
■ If there is heavy bus traffic relating to shared-memory objects, bus errors can
occur. Avoid this problem by changing the bus arbitration mode or by
changing relative CPU priorities on the bus.
■ If memAddToPool( ), memPartSmCreate( ), or smMemAddToPool( ) fail,
check that any address you are passing to these routines is in fact a global
address.
■ If applications create more than the specified maximum number of objects, it
is possible to run out of memory. If this happens, the shared object creation
routine returns an error and errno is set to S_memLib_NOT_ENOUGH_MEM.
To solve this problem, first increase the maximum number of shared-memory
objects of corresponding type; see Table 16-5 for a list of the applicable
configuration parameters. This decreases the size of the shared-memory
system pool because the shared-memory pool uses the remainder of the
shared memory. If this is undesirable, increase both the number of the
corresponding shared-memory objects and the size of the overall
shared-memory region, SM_OBJ_MEM_SIZE. See 16.5 Configuring VxWorks for
Shared Memory Objects, p.746 for a discussion of configuration parameters.
■ Operating time for the spinlock cycle can vary greatly because it is affected by
the processor cache, access time to shared memory, and bus traffic. If the lock
is not obtained after the maximum number of tries specified by the
SM_OBJ_MAX_TRIES parameter, errno is set to
S_smObjLib_LOCK_TIMEOUT. If this error occurs, set the maximum number
of tries to a higher value. Note that any failure to take a spinlock prevents
proper functioning of shared-memory objects. In most cases, this is due to
problems with the shared-memory configuration (see above).
■ The routine smObjShow( ) displays the status of the shared-memory objects
facility on the standard output device. It displays the maximum number of
tries a task took to get a spinlock on a particular CPU. A high value can
indicate that an application might run into problems due to contention for
shared-memory resources. For information about smObjShow( ), see
16.6 Displaying Information About Shared Memory Objects, p.755 and the API
reference for the routine.
■ The shared-memory heartbeat can be checked to verify that the master CPU
has initialized shared-memory objects. The shared-memory heartbeat is in the
first 4-byte word of the shared-memory object header. The offset to the header
is in the sixth 4-byte word in the shared-memory anchor. (See the Wind River
Network Stack for VxWorks 6 Programmer’s Guide.)
Thus, if the shared-memory anchor were located at 0x800000:
[VxWorks Boot]: d 0x800000
800000: 8765 4321 0000 0001 0000 0000 0000 002c *.eC!...........,*
800010: 0000 0000 0000 0170 0000 0000 0000 0000 *...p............*
800020: 0000 0000 0000 0000 0000 0000 0000 0000 *................*
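The check can be coded directly from the layout described above. A sketch: smObjHeartbeat( ) is our name, not a VxWorks routine, and smObjHeartbeatDemo( ) fakes anchor contents modeled on the dump.

```c
#include <assert.h>
#include <stdint.h>

/* Read the shared-memory objects heartbeat: the sixth 4-byte word of the
 * anchor holds the offset (from the anchor) of the shared-memory object
 * header, and the heartbeat is the first 4-byte word of that header. */

uint32_t smObjHeartbeat (const uint32_t *anchor)
    {
    uint32_t hdrOffset = anchor[5];               /* sixth 4-byte word */
    return *(const uint32_t *) ((const char *) anchor + hdrOffset);
    }

/* Fake anchor modeled on the boot-loader dump above: header at offset
 * 0x170, with a heartbeat value of 1. */
uint32_t smObjHeartbeatDemo (void)
    {
    static uint32_t anchor[128];
    anchor[5] = 0x170;
    anchor[0x170 / 4] = 1;
    return smObjHeartbeat (anchor);
    }
```

A heartbeat that keeps incrementing indicates that the master has initialized shared-memory objects and is running.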
17 Distributed Shared Memory: DSHM
17.1 Introduction
The VxWorks distributed shared memory (DSHM) facility is a middleware
subsystem that allows multiple services to communicate over different types of
buses that support shared-memory communication. DSHM provides two main
features for the services that make use of distributed shared memory: messaging
over shared memory, and allocation of shared memory resources to services for
their use in writing custom data. DSHM currently provides optional support for
TIPC for communication over shared memory media.
Custom services can be developed for use with DSHM, and custom DSHM
hardware interfaces can be developed for hardware that is not supported currently
by Wind River. This chapter provides information about how to pursue both of
these development activities.
17.2 Technology Overview
then sends a message signalling the arrival of the data. Then it pends, waiting for
the buffer to be available again. When the buffer is available, Node 2 then sends a
message signalling this event.
17.2.1 Architecture
Figure 17-1 illustrates the architecture of the VxWorks distributed shared memory
facility. This architecture is designed for a system in which memory is distributed
across all nodes in the system.
Figure 17-1: applications run on top of DSHM services, which use the DSHM API;
beneath are the DSHM MUX, the DSHM utilities, and the DSHM adaptation layer,
running over the hardware buses.
Services interface directly with the DSHM facility. They are generally hidden from
the user; for example, the TIPC bearer is accessed by way of the socket API
interacting with the TIPC stack.
The DSHM management service runs on each node in the system and handles the
messages that deal with nodes appearing and disappearing, management of
resources, and so on. One instance of the service runs on each node in the system
as service number zero. Each is registered as a service when a hardware interface
is ready to handle incoming messages.
Custom services can be written based on the DSHM APIs. For information in this
regard, see 17.4 Developing Custom Services, p.770.
Hardware Interface
Figure 17-2 illustrates the flow of messages and data using distributed shared
memory in a two-node system using a TIPC bearer. In this example, the DSHM
service knows that when it receives a message, it is intended for the TIPC bearer.
The DSHM MUX routes the message to that service. Then, if the message is a
DSHM_SVC_NET_SEND message, the TIPC bearer delivers the DSHM buffer to the
TIPC stack (the message could also be a management message for the DSHM
bearer, that does not go to TIPC). This type of message is associated with a buffer
in shared memory, with contents that are identified as a TIPC message. Only the
contents of the buffer are delivered to the TIPC stack. From that point on, DSHM
does not have anything to do with the buffer. TIPC takes care of delivering it to the
application, if it is application data (the contents could be a TIPC management
message).
By way of analogy, you can think of DSHM as replacing the ethernet portion, the
physical layer, of a network. DSHM performs a role similar to an ethernet device
when it receives data. An ethernet device receives data in ethernet frame format.
The software managing the device receives the physical data in a buffer in memory
that it provided to the device. It then takes ownership of that buffer and replaces
it in the device with another one (if available). That buffer may then be formatted
and given to whomever it was sent to. With VxWorks END devices, the
networking MUX routes it to the correct protocol stack, which then routes it to the
correct socket, which delivers it to the correct application.
Broadcasting
There are two ways of accomplishing broadcasting of data packets with DSHM:
true broadcast and replicast.
True broadcast
The true broadcast implementation uses the broadcasting facility of the underlying
hardware interface implementation, coupled with the fact that the local shared
memory can be read by remote nodes as well as written. The idea is to have a
memory pool, written by the broadcasting node, to which the data packet is
copied. The node then sends a broadcast message of a certain type
that specifies that there is a broadcast data packet to be read from the broadcasting
node. This is sent using the broadcast address, in essence making use of the
hardware interface's capability to send broadcast messages efficiently. When a
remote node receives the message, it reads the data packet from the broadcasting
node's shared memory. Some kind of mutual exclusion between the nodes might
have to be used to prevent reusing (rewriting) the packet before all remote nodes
have read it.
This method of broadcasting should be used whenever possible, since it puts less
burden on the broadcasting node, in effect spreading the work across all receiving
nodes. However, it cannot always be used. The TIPC DSHM bearer is an example:
with shared-memory messaging, a send operation can fail because of a full
message queue. True broadcast can be used only on a hardware interface
implementation where the message send operation cannot fail.
Replicast
Replicasting is based on the concept of putting the burden of sending the broadcast
packet on the sender. The broadcasting node in effect has to obtain a buffer for
every node the broadcast is destined to, and copy the data into each. Then, a
unicast message has to be sent to each one of them. It can be the same type as a
regular message signalling the arrival of a data packet, since they are in essence
the exact same thing.
The TIPC bearer uses this type of broadcasting when running over the default
messaging implementation over shared memory, so that it can regulate the
number of messages sent to each node and thus ensure that sending a message, in
a sane system, always succeeds. This allows for better flow control.
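The replicast loop can be sketched as follows. The per-node buffer and notification flag stand in for shared memory and the DSHM message queue; the names are ours, not DSHM APIs.

```c
#include <assert.h>
#include <string.h>

/* Toy replicast: the sender copies the payload once per destination node
 * and then unicasts a "data arrived" notification to each. */

#define NNODES 4
#define BUF_SZ 32

static char nodeBuf[NNODES][BUF_SZ];   /* per-node receive buffer       */
static int  nodeNotified[NNODES];      /* unicast "data arrived" flag   */

int replicast (int sender, const char *payload, int len)
    {
    int node;
    for (node = 0; node < NNODES; node++)
        {
        if (node == sender)
            continue;
        memcpy (nodeBuf[node], payload, len);  /* copy per destination   */
        nodeNotified[node] = 1;                /* unicast notification   */
        }
    return 0;
    }
```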
Figure: a buffer exchange between Node 0 and Node 1: (1) request buffer, (2) lend
buffer, (3) write to buffer.
Send Operation
The DSHM_BUILD call fills the message header with the appropriate values.
The arguments (defined in dshm.h) are as follows: message (an array of type
4. Node 0 sends a message to Node 1 telling it that there is data in the buffer.
DSHM_BUILD(msg, DSHM_VNIC, 0, 1, DSHM_VNIC_SEND);
DSHM_DAT32_SET(1, pBuffer);
Then the remote node reads the message and passes the packet to the stack.
For information about the C macro functions used in this example, see
17.4.2 DSHM Messaging Protocols and Macro Functions, p.772.
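The header-building step can be modeled on the host with a toy message layout. The field packing below is invented for illustration; the real message layout and macros are defined in dshm.h.

```c
#include <assert.h>
#include <stdint.h>

/* Toy model of DSHM_BUILD( ) and the header accessor macros. */

typedef struct
    {
    uint8_t svc;        /* service number           */
    uint8_t dst;        /* destination node address */
    uint8_t src;        /* source node address      */
    uint8_t typ;        /* per-service message type */
    uint8_t dat[12];    /* per-message-type payload */
    } TOY_DSHM_MSG;

#define TOY_DSHM_BUILD(msg, s, d, sr, t) \
    ((msg).svc = (s), (msg).dst = (d), (msg).src = (sr), (msg).typ = (t))

#define TOY_DSHM_SVC_GET(msg) ((msg).svc)
#define TOY_DSHM_DST_GET(msg) ((msg).dst)
#define TOY_DSHM_SRC_GET(msg) ((msg).src)
#define TOY_DSHM_TYP_GET(msg) ((msg).typ)
```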
Broadcast Operation
17.3 Configuring VxWorks for DSHM
! CAUTION: Boot loaders must not be built with the AMP build option—neither
with Workbench nor with vxprj. For more information about boot loaders, see
3. Boot Loader.
Core support
These are needed for correct functionality in a VxWorks system. They should be
pulled in automatically by any other component that depends on DSHM.
INCLUDE_DSHM
BSP support enabling. This component is only present in BSPs that support
DSHM. This is because DSHM needs some specific hardware to run, namely
the possibility of having shared memory across VxWorks instances.
MUX. Allows multiple services to use the messaging system over the same
medium, such as the shared memory between two cores on a multi-core chip.
It also allows the usage of the same DSHM API by multiple concurrent media
(for example, if there were a multi-core AMP system on a VME blade).
The INCLUDE_DSHM parameters are as follows:
■ Maximum instances of hw buses (DSHM_MAX_HW). Unless there are two
concurrent buses in your system, this should always be 1.
■ Maximum number of services per bus (DSHM_MAX_SERVICES). The
default is 2: one for the management service, and one for a user service,
which might be one provided by Wind River, such as the TIPC bearer. If
you intend to run more than one service concurrently, increase this value.
INCLUDE_DSHM_ADAPT
VxWorks adaptation layer. DSHM is meant to be portable to other operating
systems. This component is the adaptation layer for VxWorks.
This is the selection of the messaging type on a particular bus if more than one is
available. Currently, only shared memory messaging
(INCLUDE_DSHM_MSG_SM) is available. It should always be selected.
Peer-to-Peer Drivers
These are the drivers that implement the shared memory messaging and
housekeeping of the shared memory used for data passing. They are different for
each BSP that provides support for DSHM. However, some parts are shared
between implementations.
Note that the driver implementation is a VxBus implementation. VxBus will
always get added to the VxWorks image if it is not already selected.
INCLUDE_DSHM_BUS
DSHM virtual bus. This should always be selected. It provides the framework
for drivers.
INCLUDE_DSHM_BUS_PLB
DSHM virtual bus on PLB. The current implementations are all for a
processor-local-bus (PLB) type bus, such as the one on a multi-core chip. Even
the VxWorks simulator implementation follows that model, as if the VxWorks
simulator instances shared a local bus. This must be selected. It is also
where most of the parameters live. The parameters are as follows:
DSHM_BUS_PLB_NODE
Address of local node. This is the unique address on the shared bus. The
current drivers are able to find their own address at runtime, using the
processor number assigned at boot time with the boot line. Use -1 for this,
or another number to force a specific one for a specific image. The address
must be less than the next parameter.
DSHM_BUS_PLB_MAXNODES
Maximum number of nodes. There can be no more nodes than this value
in the system. Note that all nodes must agree on that value so that it works
as intended.
DSHM_BUS_PLB_NENTRIES
Number of entries in the shared memory. Each message sent over DSHM
is sent asynchronously and takes up one entry in the message queue.
When the queue is full, if another message is sent, an error code is returned
to the sender. For implementation reasons, the real number of concurrent
messages is actually one less than this number.
DSHM_BUS_PLB_NRETRIES
Number of retries. When trying to send a message, the internals retry the
send if the queue is full. If you do not want that to happen, set this to 0;
a higher number of retries can help reduce sending errors. WARNING: this
is a 'busy' retry, in effect hogging the CPU.
DSHM_BUS_PLB_RMW
Read-modify-write routine. This is per-bus type. It should be left alone.
DSHM_BUS_PLB_POOLSIZE
Shared memory pool size. If you decide to share more or less memory on
this node, adjust this number accordingly.
DSHM_BUS_PLB_ENTRY_SIZE
Currently unsupported.
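The retry behavior described for DSHM_BUS_PLB_NRETRIES amounts to a bounded busy-wait, roughly as sketched below. trySend( ) and the demo functions are our stand-ins, not driver APIs.

```c
#include <assert.h>

/* Sketch of the bounded busy-retry described for DSHM_BUS_PLB_NRETRIES.
 * trySend( ) stands in for the driver's enqueue attempt, which fails
 * while the remote message queue is full. */

#define QUEUE_FULL (-1)

int sendWithRetries (int (*trySend) (void), int nRetries)
    {
    int status = trySend ();
    while (status == QUEUE_FULL && nRetries-- > 0)
        status = trySend ();      /* 'busy' retry: hogs the CPU */
    return status;
    }

/* Demo sender that reports QUEUE_FULL twice before succeeding. */
static int fakeCalls;
static int fakeSend (void)
    { return (++fakeCalls < 3) ? QUEUE_FULL : 0; }

int demoSendOk (void)      /* enough retries: send succeeds   */
    { fakeCalls = 0; return sendWithRetries (fakeSend, 5); }

int demoSendFail (void)    /* too few retries: error returned */
    { fakeCalls = 0; return sendWithRetries (fakeSend, 1); }
```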
Virtual bus controller and peer drivers are BSP-specific, but have to be selected
for DSHM to work properly. They cannot be put in as defaults since each
different hardware implementation component has a different name. The
components are as follows:
■ wrSbc8561 and hpcNet8641: INCLUDE_DSHM_BUS_CTLR_8641 and
INCLUDE_DSHM_VX8641
■ Any VxWorks simulator: INCLUDE_DSHM_BUS_CTLR_SIM and
INCLUDE_DSHM_VXSIM
■ Any sibyte board (sb1250/sb1480): INCLUDE_DSHM_BUS_CTLR_SIBYTE
and INCLUDE_DSHM_VXSIBYTE
INCLUDE_DSHM_DEBUG
Provides a debugging aid that allows for multiple levels of debugging output,
selectable at runtime. You can choose an initial level (DSHM_INIT_DBG_LVL).
By default, it is OFF (no message is printed). See dshm/debug/dshmDebug.h
for more information on usage.
Services
INCLUDE_DSHM_SVC_MNG
Services provided by Wind River. The node manager must be present. The
TIPC bearer is a special case and lives under the TIPC component directory.
For information about the TIPC bearer, see the Wind River TIPC Programmer’s
Guide.
Utilities
These are utilities provided for service and hardware interface writers. They are
used internally in the TIPC bearer and the hardware interface implementations
provided by Wind River. They are pulled in when needed.
17.4 Developing Custom Services
■ Control and command messages that do not need any data transfer, for a
specific application. The service could support a small payload, instant
feedback as to whether the message got through or not, and so on.
Services provide functionality over a specific bus type, and make calls to the
DSHM APIs to interface with the bus (to obtain shared memory, to send
messages, and so on).
Services should also register callback routines for events such as a node joining or
departing from a system. When an instance of a DSHM hardware interface
discovers a node, it calls dshmSvcNodeJoin( ), which calls callbacks installed by
all services that need to be notified.
When discovering a node, the hardware interface propagates that information to
all services registered on that bus. This allows services to take actions such as
initialization and allocation of data structures used for that particular node. The
services have their join callback invoked at that point. The callbacks are described
in 17.5.2 Callbacks, p.782.
If a node is declared gone or dead, the hardware interface instance calls
dshmSvcNodeLeave( ), which similarly calls callbacks installed by all services that
need to be notified. If a service on the local node decides to quit, it can call
dshmMuxSvcWithdraw( ), which calls a callback registered by the service to do
cleanup.The service can then call dshmMuxSvcWithdrawComplete( ) when it is
satisfied that the cleanup is completed.
Callbacks can be used to take care of allocating shared memory pools, network
buffers, and so on. The custom service writer provides the desired functionality.
All DSHM services are identified internally by unique service numbers. Wind
River reserves zero for the DSHM management service (for information about the
service, see DSHM Management Service and Custom Services, p.762).
For the greatest efficiency, use the smallest service numbers possible, since they are
used directly as indices into arrays. Service numbers should also be implemented
as configurable parameters, in the event that there is a conflict with another
software provider’s usage. The numbers should also be documented if the
software is provided to a third party.
The maximum number of services is defined with the DSHM_MAX_SERVICES
parameter of the INCLUDE_DSHM component. This parameter should be set to
one more than the number of user services that will be supported, because
service number zero is reserved for the DSHM management service. If the
parameter is 3, for example, each bus can have two user services (plus the
management service), and the services on different buses can be totally different.
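For example, to run two user services per bus (such as the TIPC bearer plus one custom service), the configuration might look like this (a sketch):

```c
/* config fragment: service number 0 is the management service, leaving
 * room for two user services per bus. */
#define DSHM_MAX_SERVICES 3
```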
Each custom service must provide its own message types for its own protocol.
DSHM provides a set of C macro functions to facilitate building messages. The
macros can, for example, be used to build the message header, access each one of
the per-message type parameters, and so on. The messaging macros are as follows:
■ DSHM_DAT8_SET( )
■ DSHM_DAT16_SET( )
■ DSHM_DAT32_SET( )
■ DSHM_DAT_GET( )
■ DSHM_DAT8_GET( )
■ DSHM_DAT16_GET( )
■ DSHM_DAT32_GET( )
■ DSHM_SVC_GET( )
■ DSHM_SRC_GET( )
■ DSHM_DST_GET( )
■ DSHM_TYP_GET( )
The macros are defined in installDir/vxworks-6.x/target/h/dshm/dshm.h.
The following macro builds a correctly formatted message in the msg parameter
using the four other parameters:
#define DSHM_BUILD(msg, svc, dest, src, type)
The next set of macros accesses the data of a specified size at a specified offset:
#define DSHM_DAT[8|16|32]_[GET|SET](msg, offset)
The offset units depend on the width of the data to be set or retrieved. For example,
the following call retrieves the second byte in the message:
DSHM_DAT8_GET(msg, 1)
If a message body consists of one word, one byte, one byte, one half-word,
and one word, the following calls retrieve each field, respectively:
DSHM_DAT32_GET(msg, 0)
DSHM_DAT8_GET(msg, 4)
DSHM_DAT8_GET(msg, 5)
DSHM_DAT16_GET(msg, 3)
DSHM_DAT32_GET(msg, 2)
The following macros are used for obtaining a pointer to a message, and for
casting a pointer to a DSHM message pointer:
DSHM(variable_name);
DSHM_TYPE(ptr_name);
DSHM_CAST(ptr);
For an example of how the macros are used, see 17.4.4 Service Code Example, p.775.
The APIs described in Table 17-1 are provided by dshmMuxLib for use by custom
services.
Once a service is up and running, the bulk of the API calls used are
dshmMuxMsgSend( ) and dshmMuxMsgRecv( ), which are called on a
one-to-one basis with the number of messages directed to the local node, as
well as dshmMuxSvcObjGet( ) and dshmMuxSvcObjRelease( ) for obtaining the
service object when sending or receiving. If data buffers are exchanged and
are dynamic (as with a network driver service), dshmMuxHwAddrToOff( ) and
dshmMuxHwOffToAddr( ) are also used for converting buffer pointers to
shared memory offsets and back. The remaining APIs are used infrequently,
primarily for housekeeping functions.
/*
* Copyright (c) 2007 Wind River Systems, Inc.
*
* The right to copy, distribute, modify or otherwise make use
* of this software may be licensed only pursuant to the terms
* of an applicable Wind River license agreement.
*/
/*
modification history
--------------------
01a,28sep07,bwa written.
*/
#include <taskLib.h>
#include <sysLib.h>
#include <stdio.h>
#include <stdlib.h>
#include <dshm/dshm.h>
#include <dshm/dshmMuxLib.h>
/*
DESCRIPTION
This is an example service, where two nodes each have one task that depends
on the remote node having finished some operation. DSHM messages are used as
synchronization events across nodes.
*/
/* message types */
/* seconds */
/* callback prototypes */
static STATUS rx
(
svc_obj * const pObj, /* the service object */
DSHM(msg) /* message received */
);
static STATUS join
(
svc_obj * const pObj, /* the service object */
const uint_t addr /* address of node joining */
);
static void leave
(
svc_obj * const pObj, /* the service object */
const uint_t addr /* address of node leaving */
);
/* worker task */
static void worker (SEM_ID sync, uint16_t hw, uint16_t svc, uint16_t remote);
/* service init */
/****************************************************************************
**
*
* dshmTestMsgStart - start test service
*
*/
void dshmTestDemoSync
(
const char * const pHwName /* hw interface name */
)
{
svc_obj *pObj; /* the service object */
int hw; /* hw registration number */
DSHM(msg); /* the messages */
/* service callbacks */
/****************************************************************************
**
*
* rx - invoked when receiving a message
*
* This service handles the following types of messages:
* - remote has joined: perform needed initialization.
* - remote sync: signal worker task that it can resume its work.
*/
static STATUS rx
(
svc_obj * const pObj, /* the service object */
DSHM(msg) /* message received */
)
{
uint16_t src;
/****************************************************************************
**
*
* join - initialize service interaction with a remote node
*
* This service expects one remote node to interact with. If no node has
* previously joined the service, this routine will create the synchronization
* semaphore, record the remote node's address and spawn the worker task that
* waits for synchronization events.
*/
if (pObj->remote.addr != DSHM_ADDR_INVALID)
{
return ERROR; /* remote node already registered */
}
if (tid == ERROR)
{
logMsg ("Could not spawn worker task\n",
0,0,0,0,0,0);
pObj->remote.addr = DSHM_ADDR_INVALID;
semDelete (pObj->remote.sync);
return ERROR;
}
return OK;
}
/****************************************************************************
**
*
* leave - invoked when a remote node disappears
*
* This routine cleans up the node-specific service data when a remote node
* disappears.
*/
/****************************************************************************
**
*
* worker - worker task that waits on sync events from remote node
*
* This routine simulates work that needs synchronization from a remote node.
* It pends on a semaphore that is given when the remote peer sends a sync
* event, signalling the local node that a condition it requires has been met.
*/
FOREVER
{
DSHM(msg);
17.5 Developing a Hardware Interface
A peer (remote) node is seen as a device by DSHM. Peer node device drivers
provide the means for accessing the distributed shared memory on peer nodes,
and for signalling peer nodes with inter-processor interrupts.
Most of the functionality required by DSHM for the bus controller device driver
and peer node device driver can be provided in driver code, although some
functionality may require support in the BSP (for example, triggering interrupts on
the sb1250).
DSHM supports multiple concurrent buses that provide shared memory
functionality on the same node. As the DSHM MUX takes care of such
combinations, the BSP does not have to.
The interfaces provided by DSHM conform to the VxBus driver model. These
interfaces see the local node as a virtual controller and peer nodes as virtual
devices sitting on the virtual controller. The code can therefore be reused for
different BSPs that support the same devices. Furthermore, common base drivers
exist for similar implementations, such as multicore devices.
Currently, implementations of DSHM hardware interfaces are provided for the
following multicore devices: MIPS sb1250 and sb1480, and PowerPC hpcNet8641.
Most of the code is shared between them. These BSPs can be used as starting points
to implement support for similar BSPs. They should be a good starting point for
implementing support for different buses as well, not only for the local bus used
by multi-core devices, but particularly for bus controller drivers that are designed
to use shared memory as the messaging interface (including VMEbus).
Note that it is especially important that the local node should also be seen as a
remote node on the bus interface (to be able to participate in broadcasts for
example). If a service needs to treat the local node differently, the service should
be written so as to address this requirement—and not the bus controller driver.
Bus controller device drivers differ primarily in their xxxInit2( ) routine.
The major differences are due to the location of the anchor for shared
memory messaging and the
location of the pool of shared memory. In addition, some parameters (such as
shared memory) can be defined statically for some systems, but not for others; it
depends on the configuration of the physical shared memory.
The peer node device drivers also differ mostly in their xxxInit2( )
routines, and for largely the same reasons as the bus controller device
drivers. An instance of a peer driver is a peer device. There is a peer
device for each node in the DSHM system, on a bus. Peer devices must be able
to find the remote shared memory data structures in order to message the
remote node for which they are responsible.
Also, peer drivers can differ in the means of interrupting the remote nodes, and so
on. For example, the hpcNet8641 uses the EPIC interrupt controller to send an
inter-processor interrupt to remote nodes, while the SB1250 uses a different
mechanism.
VxBus has three initialization phases. During the first phase, the kernel memory
allocator is not initialized yet, so malloc( ) cannot be used to get memory from the
kernel heap. If a device driver needs to be initialized during the first phase—and
it needs to dynamically allocate memory—VxBus has its own memory allocator
that can be used (which is more limited and manages a very small amount of
memory).
17.5.2 Callbacks
Each bus controller device driver must provide a set of callbacks that are invoked
by the DSHM MUX. The required types of callbacks are as follows:
Allocate
Shared memory allocation callback. A pool of shared memory on the local
target is managed by the bus controller driver. The memory can be reserved
for each service by passing an empty (zero-size) pool to the bus controller. If a
non-empty pool is passed, the interface can allocate from it. The dshmMem
library provides allocation and de-allocation routines that can be used by the
bus interface as its allocation and free callbacks. The interface writer is also free
to implement custom callbacks.
Free
Free shared memory callback. The counterpart to the allocation callback.
Transmit
Message transmission callback. This routine sends one message to the
destination provided.
Broadcast
Method for broadcasting a message to all nodes on the bus, including the local
node.
Test-and-Set
Test-and-set primitive provided by the bus controller driver. An atomic
operation that checks whether zero is stored at a memory location and, if
so, replaces it with a non-zero value. It returns TRUE if the value was
set, FALSE otherwise.
Clear
The opposite of test-and-set. Can be set to NULL if not needed (such as for
multi-core devices). Some bus controllers need a special implementation
(some VMEbus bus controller chips need it).
Offset-To-Address
Callback that converts an offset from the start of the address at which the
shared memory is visible into a local pointer to that same location.
Address-To-Offset
Callback that converts a pointer to a local shared memory address into an
offset from the start of the address at which the shared memory is visible. This
allows passing values that can be converted to pointers on the remote nodes,
in case they see the shared memory at a different address.
The offset-to-address and address-to-offset routines should always be used
when passing addresses (of buffers) between nodes, unless the service is only
meant to be used in situations in which the shared memory address is the same
on all nodes.
Local Address
Obtain the address of the local node on the common bus. This is the
address used for messaging.
The messaging system provided with this release of DSHM is based on shared
memory. It does not require any special hardware support except for providing
shared memory between nodes, and a way of sending interrupts to peer (remote)
nodes. The shared memory itself must be fully coherent between nodes.
Coherency is required both for messaging and for the shared memory pool used
by services. It can be achieved by using a non-cached region of memory, by using
a snooping mechanism on a cached region, and so on.
Note that certain instructions used for inter-process synchronization may require
specific cache modes. For example, on PowerPC architectures, the ll/sc primitives
used to implement atomic operations require the cache to be in a certain mode;
otherwise they cause an exception. Consult your hardware architecture
documentation in this regard.
Each node using shared memory messaging must provide a small data structure
at a well-known address, accessible by all nodes in the system, called the anchor,
which is defined in the BSP. This data structure's size is determined as follows:
(12 + n * 4) bytes
where n is the max number of nodes allowed (for information about configuring
the maximum number of nodes, see 17.3 Configuring VxWorks for DSHM, p.767).
Most of the data in the anchor is used for discovery and keep-alive
signaling. It also provides the location of the rest of the shared memory
provided by the node.
As noted earlier, the action of sending a message must be protected by a memory
barrier as well (with the DSHM_MEM_BARRIER( ) macro). With the
shared-memory messaging implementation, this is achieved through a spinlock
mechanism that keeps messaging structures coherent when accessed concurrently
by multiple nodes.
Since this messaging mechanism relies on a portion of shared memory reserved
in advance (of a fixed size), the number of messages that can be sent by the
messaging node but not yet processed by the receiving node is limited. If
the sender fills the message input on the remote node, it retries a limited
(configurable) number of times before giving up and returning an error to
the caller. Note that the sender busy-waits (spins in a loop) while
repeatedly trying to send.
To avoid busy-waiting while sending a message, special consideration should
be given to the implementation of services.
Services that are used over a bus controller device driver that uses shared memory
messaging should try to throttle their messages to prevent sending more than are
allowed. This should be combined with a configurable setting for the number of
concurrent messages to achieve the desired effect.
For example, consider a system with three nodes. Each node provides two
buffers for each of its peers to write to. Each of its peers does the same.
So, the local node has two buffers on each peer to write to, and two buffers
of its own, per peer, from which it reads.
The service keeps a local view of the status of the remote buffers. In this
service, a message is sent every time a buffer is filled, to let the remote
node know that it should read it. A message is also sent when a local buffer
is read, to signal the remote node that it can use it again. In this case,
the concurrent messages that can have been sent by one node to the other are
as follows:
■ Two messages that tell the remote node to read the buffers (the service
should stop sending when the buffers are full, so it will not send another
message until it gets a reply from the remote node indicating that the first
buffer can be reused)
■ Two messages that tell the remote node that it can reuse its two buffers
(the local node only sends these after the remote node has sent it messages
telling it to read the buffers, and those messages are limited to two at
once).
So if this service is the only one running in this system, four message entries should
be sufficient.
The number of required message entries must, however, be set to one more
than the number calculated. This is due to the implementation of the
messaging system, which uses a ring and must have one empty entry. In
addition, for performance reasons, the number of entries in the ring must be
a power of two. If a number that is not a power of two is specified, it is
automatically rounded down to the nearest power of two.
Bus controller device drivers using shared memory messaging must use a
method other than sending a join (JOIN) message to remote peers when a node
comes up. The reason has to do with rebooting nodes and the
re-initialization of shared memory data structures. The method consists of
two types of messages, reset (RST) and acknowledge (ACK). The following
diagram illustrates the state machine of the discovery mechanism, for each
peer relative to the remote node.
The first letter of the state is the state of the current node and the
second is the state of the remote peer, as viewed by the local node. R is
reset, U is unknown, and A is received acknowledgment. The state transitions
are labeled with the event received and the action taken. The AA designation
is the fully-up connection state.
There is a generic management service that exists in the VxWorks adaptation layer
component. It can be registered with a hardware interface by calling
dshmMngSvcInstall( ) (see dshmBusCtlrSim.c under target/src/dshm/drivers for
an example of usage). To be of any use, the driver must provide a
dshmBusCtlrMethodNodeReadyId method. Look in dshmBusCtlrPlb.c for an
example (that file is the base driver on top of which dshmBusCtlrSim.c is built).
dshmBusCtlrPlbNodeReady is registered as a method in the method table.
This service currently handles two types of messages: DSHM_TYP_MNG_JOIN and
DSHM_TYP_MNG_QUIT, although the second is a placeholder in the VxWorks
adaptation layer that simply acknowledges the message and logs a console
message. The implementation is in target/src/dshm/service/dshmSvcMng.c, and
can be used as a starting point for a more involved implementation if
needed. The service is very minimal at this point, since all the current
hardware interfaces use another mechanism for detecting topology changes.
One of the main goals of this service is to propagate the information that a node
has either appeared or disappeared to the services, which are the modules that
provide the bulk of the functionality that is of interest to end users.
Services that register join and leave callbacks receive those events when
the hardware interface sees a change in the node topology.
The routines described in Table 17-1 are for use by a hardware interface.
18
Message Channels
18.1 Introduction
Message channels are a socket-based facility that provides inter-task
communication within a memory boundary, between memory boundaries (the
kernel and processes), between nodes (processors) in a multi-node cluster,
and between multiple clusters. In addition to providing a superior
alternative to TCP for multi-node intercommunication, message channels
provide a useful alternative to message queues for exchanging data between
two tasks on a single node.
18.2 Message Channel Facilities
[Figure: message channel facilities: applications, the Socket Application
Libraries, the Connection-Oriented Message Passing protocol and TIPC, and
the Distributed Systems Infrastructure.]
basic configuration
The basic configuration of VxWorks with support for message channels includes
SAL, SNS, COMP, and DSI—which provides support for single node use. TIPC
must be added to the basic configuration for multi-node use. TIPC can also be
added to provide connection-less socket types on a single node (for use outside of
message channels).
For detailed information about configuration, see 18.7 Configuring VxWorks for
Message Channels, p.807. For information about TIPC, see the Wind River TIPC
Programmer’s Guide.
18.3 Multi-Node Communication with TIPC
18.4 Single-Node Communication with COMP and DSI
The Connection-Oriented Message Passing protocol (COMP) provides services for
multi-node as well as the protocol for single-node communication. The underlying
transport mechanism for single-node message channels is based on the COMP
protocol, which provides a fast method for transferring messages across memory
boundaries on a single node.
COMP, using the AF_LOCAL address family, is designed for use with the
standard socket API. Because it provides connection-oriented messaging, the
socket type associated with message channels is SOCK_SEQPACKET. The protocol
is connection-based, like stream-based protocols such as TCP, but it carries
variable-sized messages, like datagram-based protocols such as UDP.
Express Messaging
Show Routines
Because COMP is based on the standard socket API, traditional network show
routines can be used, such as netstat( ). In addition, information on local sockets
can be retrieved with the unstatShow( ) routine (for more information, see the
VxWorks API reference entry).
Parameter          Default Value
DSI_NUM_SOCKETS    200
DSI_DATA_32        50
DSI_DATA_64        100
DSI_DATA_128       200
DSI_DATA_256       40
DSI_DATA_512       40
DSI_DATA_1K        10
DSI_DATA_2K        10
DSI_DATA_4K        10
DSI_DATA_8K        10
DSI_DATA_16K       4
DSI_DATA_32K       0
DSI_DATA_64K       0
The DSI pool is configured more strictly and more efficiently than the core network
pool since it is more contained, fewer scenarios are possible, and everything is
known in advance (as there is only the one node involved). The
DSI_NUM_SOCKETS parameter controls the size of the system pool. It controls the
number of clusters needed to fit a socket, for each family and each protocol
supported by the back end. Currently, only the AF_LOCAL address family is
supported by COMP.
The clusters allocated in the back end are of these sizes:
■ aligned sizeof (struct socket)
■ aligned sizeof (struct uncompcb)
■ aligned sizeof (struct sockaddr_un)
One cluster of size 328 and one of size 36 are needed for each socket that
is created, since currently the COMP protocol is always linked to a DSI
socket. Only one cluster of sizeof (struct sockaddr_un) is required;
therefore, the size of the system pool is
18.5 Socket Name Service
The [SNS:] prefix is the only prefix accepted, and it can be omitted. The scope can
have the following values: private, node, cluster, or system. These values
designate an access scope for limiting access to the same single memory space (the
kernel or a process), the same node (the kernel and all processes on that node), a
set of nodes, or the entire system, respectively. A server can be accessed by clients
within the scope that is defined when the server is created with the salCreate( )
routine (see 18.6.1 SAL Server Library, p.804).
NOTE: The SNS server creates a COMP socket for its own use for local
communication. It has the socket address of 0x0405. All of the SAL routines send
messages to the SNS server at this socket address.
For a multi-node system, each node in the system must be configured with the
Socket Name Service (SNS). Note that VxWorks SNS components for multi-node
use are different from those used on single node systems (see 18.7 Configuring
VxWorks for Message Channels, p.807).
When a distributed SNS server starts on a node at boot time, it uses a TIPC bind
operation to publish a TIPC port name. This is visible to all other nodes in the zone.
The other existing SNS servers then register the node in their tables of SNS servers.
The snsShow( ) shell command provides information about all sockets that are
accessible from the local node, whether the sockets are local or remote. The
command is provided by the VxWorks INCLUDE_SNS_SHOW component.
The following examples illustrate snsShow( ) output from three different nodes in
a system.
The output of the snsShow( ) command is fairly self-explanatory. The first field is
the name of the socket. If the name is longer than the space allocated in the output,
the entire name is printed and the other information is presented on the next line
with the name field containing several dashes.
The scope values are priv for private, node for node, clust for cluster, and systm
for system.
The family types can be TIPC for AF_TIPC or LOCAL for AF_LOCAL.
The socket type can be SEQPKT for SOCK_SEQPACKET, or RDM for SOCK_RDM.
The protocol field displays a numeric value and a location indicator. The numeric
value is reserved for future use, and currently only zero is displayed. The final
character in the field indicates whether the socket was created on a remote or local
node, with an asterisk (*) designating remote.
The address field indicates the address of the socket. All addresses of the form
/comp/socket belong to the AF_LOCAL family. All addresses of the form
<x.y.z>,refID belong to the AF_TIPC family. The TIPC address gives the TIPC
portID which consists of the nodeID and the unique reference number.
18.6 Socket Application Libraries
without having to know the socket addresses used by the server (see 18.5 Socket
Name Service, p.799).
! CAUTION: SAL applications should not use any of the following as part of a name:
* ? @ : # / < > % | [ ] { } , \\ \ ' & ; = + $
A server can be accessed by clients within the scope that is defined when the server
is created with the salCreate( ) routine.
The scope is identified as part of the first parameter, with one of the
following values: private, node, cluster, or system. These values designate
an access scope for limiting access to the same memory space (the kernel or
a process), the same node (the kernel and all processes on that node), a set
of nodes, or the entire system, respectively. The second parameter
identifies the protocol (with 1 being COMP and 33 being TIPC; 0 is used for
all supported families). The third parameter identifies the socket type.
For example, the following call would create a socket named foo with cluster
scope, with the COMP protocol:
salCreate("foo@cluster",1,5)
! CAUTION: A COMP (single node) socket can be created with cluster or system
scope, but this setting has no effect in a multi-node system. That is, in a multi-node
system, SNS will not transmit this information to other nodes because a COMP
socket is only available on the node on which it was created.
NOTE: It is possible to create both a COMP socket and a TIPC socket with the same
name. Only the TIPC socket information is sent to other nodes in a multi-node
system (assuming the scope is set appropriately).
Once created, a SAL server must be configured with one or more processing
routines before it is activated. These routines can be configured by calling
salServerRtnSet( ).
Once the server is ready, salRun( ) is called to start the server
activities. The salRun( ) routine never returns unless there is an error or
one of the server processing routines requests it. Regardless of whether or
not the routine has returned, you must call salDelete( ) to delete the
server and its sockets. This routine can be called only by tasks in the
process (or the kernel) where the server was created. In order for tasks
outside the process to remove a service name from SNS, salRemove( ) must be
used. The salRemove( ) routine does not close sockets, nor does it delete
the server. It only deletes the SNS entry, thereby removing access by any
potential clients.
For more information, including sample service code, see the VxWorks API
reference for the salServer library.
The SAL client library provides a simple means for implementing a socket-based
client application. The data structures and routines provided by SAL allow the
application to easily communicate with socket-based server applications that are
registered with the Socket Name Service (see 18.5 Socket Name Service, p.799).
Additional routines can be used to communicate with server applications that
are not registered with the SNS. The SAL client library consists of the
following routines:
salOpen( )
Establishes communication with a named socket-based server.
salSocketFind( )
Finds sockets for a named socket-based server.
salNameFind( )
Finds services with the specified name.
salCall( )
Invokes a socket-based server.
A client application typically calls salOpen( ) to create a client socket and connect
it to the named server application. The client application can then communicate
with the server by passing the socket descriptor to standard socket API routines,
such as send( ) and recv( ).
As an alternative, the client application can perform a send( ) and recv( ) as a single
operation using salCall( ). When the client application no longer needs to
communicate with a server it calls the standard socket close( ) routine to close the
socket to the server.
A client socket can be shared between two or more tasks. In this case, however,
special care must be taken to ensure that a reply returned by the server application
is handled by the correct task.
The salNameFind( ) and salSocketFind( ) routines facilitate searching for
the server and provide more flexibility for the client application.
The salNameFind( ) routine provides a lookup mechanism for services based on
pattern matching, which can be used with (multiple) wild cards to locate similar
names. For example, if the names are foo, foo2, and foobar, then a search using
foo* would return them all. The scope of the search can also be specified.
For example, a client might want to find any server up to a given scope, or
only within a given scope. In the former case, the upto_ prefix can be added
to the scope specification. For example, upto_node defines a search that
looks for services in all processes and in the kernel on a node.
Once a service is found, the salSocketFind( ) routine can be used to return the
proper socket ID. This can be useful if the service has multiple sockets, and the
client requires use of a specific one. This routine can also be used with wild cards,
in which case the first matching server socket is returned.
For more information, including sample client code, see the VxWorks API
reference for the salClient library.
18.7 Configuring VxWorks for Message Channels

In addition to the COMP, DSI, and SAL components, one of the four components
listed below is required for SNS support.
Multi-Node Options
■ INCLUDE_SNS_MP to run SNS as a kernel daemon, supporting distributed
named sockets for multi-node communication.
■ INCLUDE_SNS_MP_RTP to start SNS as a process (RTP) automatically at boot
time, supporting distributed named sockets for multi-node communication.
Additional system configuration is required to run SNS as a process; for
information in this regard, see Running SNS as a Process, p.808.
Note that including a distributed SNS server automatically includes TIPC.
In order to run SNS as a process (RTP), the developer must also build the server,
add it to ROMFS, configure VxWorks with ROMFS support, and then rebuild the
entire system:
a. Build installDir/vxworks-6.x/target/usr/apps/dsi/snsd/snsd.c (using the
makefile in the same directory) to create snsServer.vxe.
b. Copy snsServer.vxe to the ROMFS directory (creating the directory first,
if necessary).
The INCLUDE_SNS_RTP and INCLUDE_SNS_MP_RTP components must
know the location of the server in order to start it at boot time. They expect
to find the server in the ROMFS directory. If you wish to store the server
somewhere else (in another file system to reduce the VxWorks image size,
for example) use the SNS_PATHNAME parameter to identify the location.
c. Configure VxWorks with the ROMFS component.
d. Rebuild VxWorks.
These steps can also be performed with Wind River Workbench (see the Wind River
Workbench User’s Guide). For information about ROMFS, see 8.8 Read-Only Memory
File System: ROMFS, p.518.
! CAUTION: It is recommended that you do not change the default values of the
SNS_PRIORITY and SNS_STACK_SIZE parameters. The default for SNS_PRIORITY
is 50 and the default for SNS_STACK_SIZE is 20000.
Show Routines
18.8 Comparison of Message Channels and Message Queues
■ Message channels provide location transparency. An endpoint can be referred
to by a name, that is, by a simple string of characters (although a specific
address can also be used). Message queues provide location transparency for
inter-process communication only when they are created as public objects.
■ Message channels provide a simple interface for implementing a client/server
paradigm. A location-transparent connection can be established by using two
simple calls, one for the client and one for the server. Message queues do not
provide support for client/server applications.
■ Message channels use the standard socket interface and support the select( )
routine; message queues do not.
■ Message channels cannot be used with VxWorks events; message queues can.
■ Message queues can be used within an ISR, albeit only with the msgQSend( )
routine. No message channel routines can be used within an ISR.
■ Message queues are based entirely on a proprietary API, and are therefore
more difficult to port to a different operating system than message channels,
which are based primarily on the standard socket API.
Message channels are better suited to applications that are based on a client/server
paradigm and for which location transparency is important.
Index
A

abort character (kernel shell) (CTRL+C) 592
    changing default 591
abort character (kernel shell) (CTRL+C) 591
access routines (POSIX) 267
ADDED_C++FLAGS 62
ADDED_CFLAGS
    modifying run-time 62
affinity, CPU 693
    interrupt 696
    task 693
aio_cancel( ) 385
AIO_CLUST_MAX 384
aio_error( ) 387
    testing completion 390
aio_fsync( ) 385
AIO_IO_PRIO_DFLT 385
AIO_IO_STACK_DFLT 385
AIO_IO_TASKS_DFLT 385
aio_read( ) 385
aio_return( ) 387
aio_suspend( ) 385
    testing completion 390
AIO_TASK_PRIORITY 385
AIO_TASK_STACK_SIZE 385
aio_write( ) 385
aiocb, see control block (AIO)
aioPxLibInit( ) 385
    see online aioPxLib
aioShow( ) 385
aioSysDrv 384
aioSysInit( ) 384
ANSI C
    function prototypes 53
    header files 54
    stdio package 380
application modules
    linking 63
    make variables 62
    makefiles
        include files, using 62
application modules, see object modules
applications
    building kernel-based 62
    configuring to run automatically 66
    downloading kernel application modules 64
    kernel component requirements 62
    kernel-based 51
    linking with VxWorks 64
    starting automatically 66
    structure for VxWorks-based applications 52
architecture, kernel 9
archive file attribute (dosFs) 497
ARCHIVE property (component object) 83
    dummy component, creating a 74
    using 70
asynchronous I/O (POSIX) 383
    see also control block (AIO)
    working with, in VxWorks 393–404
Dinkum C and C++ libraries 52
direct-access devices
    initializing for rawFs 508
    RAM disks 413
disks
    changing
        dosFs file systems 494
    file systems, and 454–523
    mounting volumes 510
    organization (rawFs) 507
    RAM 413
    reformatting for dosFs 465, 486
    synchronizing
        dosFs file systems 495
displaying information
    disk volume configuration, about 495
    TrueFFS flash file systems, about 551
distributed shared memory
    architecture 761
    callbacks 782
    communication model 763
    configuration 767
    custom services 770
    driver initialization 781
    hardware interface APIs 789
    hardware interface development 780
    macro functions 772
    management service 788
    messaging protocols 772
    messaging support 785
    MUX registration 784
    service APIs 774
    service code example 775
    service numbers 771
    technology overview 760
documentation 4
DOS_ATTR_ARCHIVE 497
DOS_ATTR_DIRECTORY 497
DOS_ATTR_HIDDEN 497
DOS_ATTR_RDONLY 497
DOS_ATTR_SYSTEM 497
DOS_ATTR_VOL_LABEL 497
DOS_O_CONTIG 501
dosFs file systems 481
    see also block devices; CBIO interface; clusters; FAT tables
    see online dosFsLib
    code examples
        block devices, initializing 465, 489
        file attributes, setting 498
        maximum contiguous area on devices, finding the 502
        RAM disk, creating and formatting 494
    configuring 481, 483
    crash recovery 502
    creating 463, 484
    devices, naming 366
    directories, reading 496
    disk space, allocating 499
        methods 499
    disk volume
        configuration data, displaying 495
    disks, changing 494
    FAT tables 486
    file attributes 496
    inconsistencies, data structure 502
    initializing 485
    ioctl( ) requests, supported 479, 503
    MSFT Long Names 487
    open( ), creating files with 372
    partitions, creating and mounting 465, 486
    reformatting disks 465, 486
    short names format (8.3) 487
    starting I/O 496
    subdirectories
        creating 495
        removing 496
    synchronizing volumes 495
    TrueFFS flash file systems 550
    volumes, formatting 465, 486
dosFsCacheCreate( ) 488
dosFsCacheDelete( ) 488
dosFsChkDsk( ) 488
dosFsDrvNum global variable 485
dosFsFmtLib 481
dosFsLib 481
dosFsShow( ) 495
dosFsVolFormat( ) 486
downloading
T

T_SM_BLOCK 724
T_SM_MSG_Q 724
T_SM_PART_ID 724
T_SM_SEM_B 724
T_SM_SEM_C 724
tape devices
    SCSI, supporting 415
tapeFs file systems
    SCSI drivers, and 415
target
    name (tn) (boot parameter) 143
target agent
    task (tWdbTask) 12
target agent, see WDB 628
Target Server File System (TSFS) 520
    boot program for, configuring 155
    configuring 522
    error handling 522
    file access permissions 522
    sockets, working with 521
task control blocks (TCB) 160, 179, 182, 246
task variables
    __thread storage class 191
taskActivate( ) 173
taskCreate( ) 173
taskCreateHookAdd( ) 182
taskCreateHookDelete( ) 183
taskDelay( ) 181
taskDelete( ) 180
taskDeleteHookAdd( ) 183
taskDeleteHookDelete( ) 183
taskIdListGet( ) 179
taskIdSelf( ) 179
taskIdVerify( ) 179
taskInfoGet( ) 179
taskIsPended( ) 179
taskIsReady( ) 179
taskIsSuspended( ) 179
taskLock( ) 167
taskName( ) 179
taskNameToId( ) 179
taskOptionsGet( ) 175
taskOptionsSet( ) 175
taskPriorityGet( ) 179
taskPrioritySet( ) 167
taskRegsGet( ) 179
taskRegsSet( ) 179
taskRestart( ) 181
taskResume( ) 181
taskRotate( ) 167
tasks
    __thread task variables 191
    blocked 167
    contexts 160
    control blocks 160, 179, 182, 246
    creating 172–173
    delayed 163
    delayed-suspended 163
    delaying 161, 163, 181, 240–241
    deleting safely 180–181
        code example 181
        semaphores, using 208
    displaying information about 179
    error status values 184–187
        see also errnoLib(1)
    exception handling 187
        see also signals; sigLib(1); excLib(1)
        tExcTask 12
    executing 181
    hooks
        see also taskHookLib(1)
        extending with 182–184
        troubleshooting 183
    IDs 177
    interrupt level, communicating at 250
        pipes 399
    kernel shell (tShell) 12
    logging (tLogTask) 11
    names 177
        automatic 178
        private 177
        public 177
    network (tNet0) 12
    option parameters 173
    pended 163
    pended-suspended 163
    priority inversion safe (tJobTask) 13
    priority, setting
troubleshooting
    SCSI devices 422
    shared-memory objects (VxMP option) 755
TrueFFS flash file systems 545
    and boot image region 556
    boot image region
        creating 556
        writing to 558
    building
        device formatting 554
        drive mounting 558
        Memory Technology Driver (MTD) 548
        overview 547
        socket driver 549
    displaying information about 551
    drives
        attaching to dosFs 558
        formatting 554
        mounting 558
        numbering 554
    Memory Technology Driver (MTD)
        component selection 548
        JEDEC device ID 548
truncation of files 375
tty devices 394
    see online tyLib
    control characters (CTRL+x) 396
    ioctl( ) functions, and 398
    line mode 395
        selecting 395
    options 394
        all, setting 395
        none, setting 395
    raw mode 395
    X-on/X-off 395
tyAbortSet( ) 591
tyBackspaceSet( ) 397
tyDeleteLineSet( ) 397
tyEOFSet( ) 397
tyMonitorTrapSet( ) 397
TYPE property (parameter object) 87

U

unnamed semaphores (POSIX) 292, 293, 294–296
usrAppInit( ) 66
usrFdiskPartCreate( ) 465, 486
usrScsiConfig( ) 417
usrTffsConfig( ) 558

V

variables
    __thread task variables 191
    global 190
    static data 190
    task 191
virtual memory, see memory management, virtual memory
Virtual Root File System 459
VM, see memory management, virtual memory
VMEbus interrupt handling 243
volume labels (dosFs)
    file attribute 497
VX_ALTIVEC_TASK 174
VX_DSP_TASK 174
VX_FP_TASK 174, 651
VX_FP_TASK option 174
VX_GLOBAL_NO_STACK_FILL 175
VX_NO_STACK_FILL 174
VX_PRIVATE_ENV 174
VX_UNBREAKABLE 174
vxencrypt 594
VxMP shared-memory objects 719
VxMP, see shared-memory objects (VxMP option)
vxsize command 65
VxWorks
    components 5, 17
    components, and application requirements 62
    configuration 14
    configuration and build 5
    configuring applications to run automatically 66
    customizing code 39
    header files 53
    image types 15
W

WAIT_FOREVER 213
watchdog timers 240–241
    code examples
        creating a timer 241
WDB
    target agent proxy 639
WDB target agent 628
    and exceptions 642
    scaling 641
    starting before kernel 642
wdCancel( ) 241
wdCreate( ) 241
wdDelete( ) 241
wdStart( ) 241
workQPanic 249
write( ) 374
    pipes and ISRs 399
writethrough mode, cache 447

X

XBD I/O component 404