Foxboro™ DCS

System Planning and Sizing


User’s Guide


B0700AX, Rev W
August 2019

www.schneider-electric.com
Legal Information

The Schneider Electric brand and any trademarks of Schneider Electric SE and its subsidiaries referred to in this guide
are the property of Schneider Electric SE or its subsidiaries.

All other brands may be trademarks of their respective owners. This guide and its content are protected under
applicable copyright laws and furnished for informational use only. No part of this guide may be reproduced or
transmitted in any form or by any means (electronic, mechanical, photocopying, recording, or otherwise), for any
purpose, without the prior written permission of Schneider Electric.

Schneider Electric does not grant any right or license for commercial use of the guide or its content, except for a non-exclusive and personal license to consult it on an "as is" basis. Schneider Electric products and equipment should be
installed, operated, serviced, and maintained only by qualified personnel.

As standards, specifications, and designs change from time to time, information contained in this guide may be subject
to change without notice.

To the extent permitted by applicable law, no responsibility or liability is assumed by Schneider Electric and its
subsidiaries for any errors or omissions in the informational content of this material or consequences arising out of or
resulting from the use of the information contained herein.
Contents
Figures................................................................................................................................... vii

Tables..................................................................................................................................... ix

Preface................................................................................................................................. xiii
Revision Information ............................................................................................................. xiii
Related Documents ................................................................................................................ xiii
Schneider Electric Products Mentioned in this Document ...................................................... xv
Global Customer Support ....................................................................................................... xv
We Welcome Your Comments ................................................................................................ xv

1. Overview ........................................................................................................................... 1
Before You Begin ...................................................................................................................... 1
Introduction .............................................................................................................................. 2
Accessing Spreadsheets .............................................................................................................. 4
Accessing Spreadsheets from the Electronic Documentation Media ...................................... 4
Object Manager Multicast Optimization (OMMO) ................................................................. 5

2. System Planning................................................................................................................ 7
Workstations ............................................................................................................................. 7
Virus Scanning ..................................................................................................................... 8
Virus Scan Software on Windows Platforms .................................................................... 8
Virus Scan Software on Solaris Platforms ......................................................................... 8
SMONs ............................................................................................................................... 8
SMONs on I/A Series Software v8.4.x and Earlier ........................................................... 8
SMONs on I/A Series Software v8.5-v8.8 and Foxboro DCS
Control Core Services v9.0 or Later ................................................................................. 9
Limitations on Number of Switches Assigned to a Single System Monitor ...................... 9
OS Configurable Parameters ................................................................................................ 9
FoxView ............................................................................................................................. 13
Alarming ............................................................................................................................ 16
Historians ........................................................................................................................... 17
Printers ............................................................................................................................... 19
Application Object Services ................................................................................................ 19
Applications ....................................................................................................................... 19
Number of Self-Hosting or Auto-Checkpointing FDC280s, FCP280s, or
CP270s Supported By a Boot Host .................................................................................... 20
Control ................................................................................................................................... 20
Alarming ............................................................................................................................ 21


Control Distribution .......................................................................................................... 22
Peer-to-Peer Relationships .................................................................................................. 22
OM Scan Load ................................................................................................................... 22
AIM*API Application Examples .................................................................................... 23
Peer-To-Peer Examples .................................................................................................. 24
FoxView Application Examples ..................................................................................... 25
Control Processor Load Analysis ........................................................................................ 26
Block Processing Cycle ....................................................................................................... 26
Running with 100 ms and 50 ms Block Processing Cycle (BPC) ................................... 26
Phasing .......................................................................................................................... 26
CNI Planning ......................................................................................................................... 27
CNI Data Throughput Limits and CPU Usage .................................................................. 27
Limitation When Working With Maximum Connections Through a CNI ........................ 28
Inter-CNI traffic (Bytes Per Second) .................................................................................. 28
Broadcast Considerations ................................................................................................... 29
Remote Compound List Configuration on the Remote CNI ......................................... 29
Retry Sink Connections ..................................................................................................... 30
Connection Time Considerations ....................................................................................... 31
Recovery After a Detected Failure of All Communication Between CNIs .......................... 32
Limitation of Sink Points in a Workstation to help Ensure Reconnection
Following CNI Reboot ....................................................................................................... 33
I/O Points ............................................................................................................................... 34
Network .................................................................................................................................. 34
System Planning Summary Tables .......................................................................................... 36
Data Access to Control Core Services Objects ......................................................................... 38

3. System Sizing .................................................................................................................. 41
Workstations with Multiple CPU Cores Enabled .................................................................... 41
Workstation Summary Worksheet ..................................................................................... 41
Workstations with Single CPU Core Enabled ......................................................................... 44
Workstation Summary Worksheet ..................................................................................... 46
Control Processors .................................................................................................................. 51
Maximum Loading Table ................................................................................................... 51
CNI Sizing .............................................................................................................................. 55
Product Specification and the CPU Idle Time Limits ......................................................... 55
Inter-Network Traffic ............................................................................................................. 55
Example – Gradual Migration ............................................................................................ 56
LI Traffic Rates ....................................................................................................................... 61
Connections Size of System Manager Server and Client: ......................................................... 61

Appendix A. Object Manager Multicast Optimization (OMMO) ....................................... 63
Object Manager Multicast Optimization Configuration Options ....................................... 64
Import Table Behavior with OMMO Feature .................................................................... 65


Compound with Invalid Block or Parameter Behavior with OMMO Feature .................... 66

Appendix B. Upgrading SMONs on I/A Series Software v8.4.x
and Earlier to v8.5-v8.8 or Control Core Services v9.0 or Later .......................................... 67
Overview ................................................................................................................................. 67
System Definition v2.9 ....................................................................................................... 67
Quick Fixes Needed ................................................................................................................ 67
Installation Sequence ............................................................................................................... 68
Known Detected Issues/Workarounds .................................................................................... 69

Appendix C. Site Planning Worksheets ............................................................................... 73

Glossary................................................................................................................................ 77

Index .................................................................................................................................... 81

Figures
2-1. CNI Data BiDirectional Throughput and CPU Usage ............................................... 27
2-2. Inter-CNI traffic (Bytes Per Second) ........................................................................... 28
2-3. Compound Broadcast Transmission Density .............................................................. 29
2-4. Compound Broadcast Density Plot for 10K Compounds ........................................... 30
2-5. Retry Sink Connections Operation Time .................................................................... 31
2-6. Number of Points vs. Time to Connect the First and Last Points ................................ 32
3-1. Original Nodebus Traffic Rates ................................................................................... 58
3-2. Adding an ATS in Extender Mode .............................................................................. 58
3-3. Migrate Nodebus 1 ..................................................................................................... 59
3-4. Migrate Nodebus 4 and Nodebus 5 ............................................................................ 60
3-5. Final Migration ........................................................................................................... 60
B-1. New Nodebus Test - System Monitors ........................................................................ 69
B-2. HOME Display .......................................................................................................... 70
B-3. “Greyed” SMON Selected ........................................................................................... 71
B-4. Selecting NETWORK Button .................................................................................... 72

Tables
2-1. OS Configurable Parameters ....................................................................................... 10
2-2. CPs vs. % CPU Load Per Alarms Per Second, For # of Alarm Destinations ................ 17
2-3. OM Scan Load: AIM*API Application Examples ........................................................ 23
2-4. OM Scan Load: Peer-To-Peer Examples ..................................................................... 24
2-5. OM Scan Load: FoxView Application Examples ......................................................... 25
2-6. Windows Multicore Workstation System Planning Summary ..................................... 36
2-7. Windows Workstation System Planning Summary ..................................................... 36
2-8. Solaris Workstation System Planning Summary .......................................................... 37
2-9. Data Access to CIO Objects on CP270 With OMMO Feature Disabled .................... 39
3-1. Windows Workstation with Multiple CPU Core Specification ................................... 41
3-2. Workstation with Multiple CPU Cores Summary Worksheet Example ...................... 42
3-3. Alarm Manager for Workstation with Multiple CPU Cores Worksheet Example ........ 43
3-4. FoxView Worksheet for Workstation with Multiple Cores Enabled Example ............. 43
3-5. AIM*Historian for Workstation with Multiple Cores Enabled Worksheet Example ... 44
3-6. Windows Workstation With Single CPU Core Specification ...................................... 45
3-7. Solaris Workstation Specification ................................................................................ 45
3-8. Workstation with Single CPU Core Summary Worksheet Example ............................ 46
3-9. Alarm Manager for Workstation with Single CPU Core Worksheet Example ............. 47
3-10. Default AST Table for Number of Alarm Managers on a Windows-Based Workstation
with Single CPU Core (Local and Remote) ................................................. 48
3-11. Default AST Table for Number of Alarm Managers on a Solaris-Based Workstation
(Local and Remote) ..................................................................................................... 48
3-12. FoxView Worksheet for Workstation with Single Core Enabled Example ................... 49
3-13. AIM*Historian for Workstation with Single Core Enabled Worksheet Example ......... 50
3-14. Loading Summary ....................................................................................................... 52
3-15. Station Free Memory (Bytes) ...................................................................................... 53
3-16. Peer-to-Peer Data ........................................................................................................ 53
3-17. Resource Table ............................................................................................................ 54
B-1. Quick Fixes for Upgrading SMON from I/A Series Software v8.4 or Earlier ............... 68
C-1. Workstation Summary Worksheet .............................................................................. 73
C-2. Alarm Manager Worksheet ......................................................................................... 74
C-3. FoxView Worksheet .................................................................................................... 75
C-4. AIM*Historian Worksheet .......................................................................................... 76

Safety Information

Important Information
Read these instructions carefully and look at the equipment to become familiar with the device before trying to install, operate, service, or maintain it. The following special messages may appear throughout this manual or on the equipment to warn of potential hazards or to call attention to information that clarifies or simplifies a procedure.

The addition of either symbol to a "Danger" or "Warning" safety label indicates that an electrical hazard exists which will result in personal injury if the instructions are not followed.

This is the safety alert symbol. It is used to alert you to potential personal injury hazards. Obey all safety messages that follow this symbol to avoid possible injury or death.

DANGER
DANGER indicates a hazardous situation which, if not avoided, will
result in death or serious injury.

WARNING
WARNING indicates a hazardous situation which, if not avoided, could
result in death or serious injury.

CAUTION
CAUTION indicates a hazardous situation which, if not avoided, could
result in minor or moderate injury.

NOTICE
NOTICE is used to address practices not related to physical injury.
Please Note
Electrical equipment should be installed, operated, serviced, and maintained only by qualified personnel. No responsibility is assumed by Schneider Electric for any consequences arising out of the use of this material.

A qualified person is one who has skills and knowledge related to the construction, installation, and operation of electrical equipment and has received safety training to recognize and avoid the hazards involved.
Preface
This document provides system planning and sizing guidelines for Foxboro DCS Control Core
Services systems with the Foxboro DCS Control Network for the platforms residing on the
control network:
♦ Windows® stations (with multiple CPU cores enabled) with Control Core Services
v9.1 or later
♦ Windows® stations (running single CPU core) with I/A Series software v8.4.1-v8.8
or Control Core Services v9.0 or later
♦ Solaris® stations with I/A Series software v8.3

Revision Information
For this revision of this document (B0700AX, Rev. W), these changes were made:
♦ Throughout: Rebranded to EcoStruxure™ Foxboro™ DCS.
♦ Chapter 3 "System Sizing": Added the section "Connections Size of System Manager Server and Client:" on page 61.
♦ Appendix A "Object Manager Multicast Optimization (OMMO)": Added this appendix.

Related Documents
♦ Address Translation Station User’s Guide (B0700BP)
♦ AIM*AT Suite AIM*API User's Guide (B0193YN)
♦ Alarm and Display Manager Configurator (ADMC) (B0700AM)
♦ Application Object Services User’s Guide (B0400BZ)
♦ Control Processor 270 (CP270) and Field Control Processor 280 (FCP280) Integrated
Control Software Concepts (B0700AG)
♦ Control Processor 270 (CP270) On-Line Image Update (B0700BY)
♦ Standard and Compact 200 Series Subsystem User's Guide (B0400FA)
♦ Enclosures and Mounting Structures Site Planning and Installation User's Guide
(B0700AS)
♦ K-Series Enclosures Overview - Site Planning and Installation User's Guide (B0700GN)
♦ Field Device Controller 280 (FDC280) User's Guide (B0700GQ)
♦ Field Control Processor 280 (FCP280) Sizing Guidelines and Excel® Workbook
(B0700FY)
♦ Field Control Processor 280 (FCP280) User’s Guide (B0700FW)

♦ Field Control Processor 270 (FCP270) Sizing Guidelines and Excel Workbook
(B0700AV)
♦ Field Control Processor 270 (FCP270) User’s Guide (B0700AR)
♦ Control Network Interface (CNI) User's Guide (B0700GE)
♦ Field Device System Integrator (FBM230/231/232/233) User’s Guide (B0700AH)
♦ FOUNDATION™ fieldbus User’s Guide for the Redundant FBM228 Interface (B0700BA)
♦ FoxPanels Configurator (B0700BB)
♦ FoxView Software V10.4 (B0700FC)
♦ I/A Series Information Suite AIM*Historian User’s Guide (B0193YL)
♦ Implementing FOUNDATION Fieldbus on the I/A Series System (B0700BA)
♦ Integrated Control Block Descriptions (B0193AX)
♦ Integrated Control Configurator (B0193AV)
♦ Model H90, H91, P90, and P91 System Administration Guide (Windows Server 2003,
R2, with Service Pack 2) (B0700BX)
♦ Model H90 Workstation Server for Windows Server (R) 2016 Operating System
(PSS 31H-4H90-16)
♦ Model H90 Workstation Servers for the Windows Server® 2008 R2 Operating System
(PSS 31H-4H90)
♦ Model H91 Workstation Servers for the Windows Server® 2008 R2 Operating System
(PSS 31H-4H91)
♦ Object Manager Calls (B0193BC)
♦ Power, Earthing (Grounding), EMC and CE Compliance (B0700AU)
♦ Process Operations and Displays (B0700BN)
♦ Software Utilities (B0193JB)
♦ System Definition: A Step-By-Step Procedure (B0193WQ)
♦ System Administration Guide (Solaris 10 Operating System) (B0700CT)
♦ System Definition Release Notes for Windows 7 and Windows Server 2008 (B0700SH)
♦ System Management Displays (B0193JC)
♦ Transient Data Recorder and Analyzer User’s Guide (B0700AL)
♦ The Foxboro DCS Control Network Architecture Guide (B0700AZ)
♦ Control Core Services V9.x System Error Messages (B0700AF)
♦ Virtualization User’s Guide (B0700VM)
♦ Workstation Alarm Management (B0700AT)
♦ Workstation Server for Windows 2008 Software Overview Microsoft Windows Server
2008 Operating System (PSS 21S-1B11 B3)
♦ Workstation Server for Windows 2003 Software Overview Microsoft Windows Server
2003 Operating System (PSS 21S-1B10 B3)
♦ Z-Module Control Processor 270 (ZCP270) User’s Guide (B0700AN)
♦ Z-Module Control Processor 270 (ZCP270) Sizing Guidelines and Excel Workbook
(B0700AW).

Most of these documents are available on the Foxboro DCS Electronic Documentation media
(K0174MA). The latest revision of this document is available through Global Customer Support
at: https://pasupport.schneider-electric.com

Schneider Electric Products Mentioned in this Document

EcoStruxure™ Foxboro™ DCS Control Core Services
EcoStruxure™ Foxboro™ DCS Control Network
EcoStruxure™ Foxboro™ DCS Control Network Interface (CNI)
EcoStruxure™ Foxboro™ System Manager
EcoStruxure™ Foxboro™ DCS FCP280
EcoStruxure™ Foxboro™ DCS FDC280
EcoStruxure™ Foxboro™ DCS FoxPanels
H92 EcoStruxure™ Foxboro™ DCS Standard Workstation
EcoStruxure™ Foxboro™ DCS Annunciator Keyboards
EcoStruxure™ Foxboro™ Historian

Global Customer Support
For support contact:
https://pasupport.schneider-electric.com (registration required)

We Welcome Your Comments
We want to know about any corrections, clarifications, or further information you would find
useful. Send us an email at: [email protected].

1. Overview
This chapter explains the subject of sizing and the sizing spreadsheets and worksheets.

Before You Begin
Additional performance and sizing guidelines for the Windows Server 2016 platform can be
found in these documents:
♦ The Hardware and Software Specific Instructions document is included with your
Windows Server 2016 hardware. For information on the current list of these documents for supported servers, refer to the latest Control Core Services installation guide.
For a Windows Server 2016 platform running on a virtual machine, see the Virtualization User's Guide for Windows Server 2016 (B0700HD).
♦ Model H90 Workstation Server for Windows Server 2016 Operating System
(PSS 31H-4H90-16)
Additional performance and sizing guidelines for the Windows Server 2008 R2 Standard platform
can be found in these documents:
♦ The Hardware and Software Specific Instructions document is included with your
Windows Server 2008 R2 hardware. For information on the current list of these documents for supported servers, refer to the latest Control Core Services installation guide.
For a Windows Server 2008 R2 Standard platform running on a virtual machine, see the Virtualization User's Guide (B0700VM).
♦ Model H90 Workstation Servers for the Windows Server® 2008 R2 Operating System
(PSS 31H-4H90)
♦ Model H91 Workstation Servers for the Windows Server® 2008 R2 Operating System
(PSS 31H-4H91)
♦ Workstation Server for Windows 2008 Software Overview Microsoft Windows Server
2008 Operating System (PSS 21S-1B11 B3)
Additional performance and sizing guidelines for the Windows Server 2003 platform can be
found in these documents:
♦ Model H90, H91, P90, and P91 System Administration Guide (Windows Server 2003,
R2 with Service Pack 2) (B0700BX)
♦ Workstation Server for Windows 2003 Software Overview Microsoft Windows Server
2003 Operating System (PSS 21S-1B10 B3).
This document is for engineers, application managers, plant managers, and other qualified and
authorized personnel involved in the planning and sizing of the implementation of the Foxboro
DCS Control Network in the Foxboro DCS system. It is also suitable for Foxboro personnel.
Prior to using this document, you need to be familiar with Control Core Services process control
principles and strategies. For more information on the various Control Core Services software and
hardware elements, see the Related Documents.

You also need to be familiar with Microsoft® Excel® operating principles and procedures prior to using the spreadsheets.

Introduction
This document is the top level user’s guide for planning and sizing the Foxboro DCS Control
Core Services elements of the Foxboro DCS Control Network for:
♦ (Workstations with multiple CPU cores enabled) Control Core Services v9.1 or later
for the Windows operating system
♦ (Workstations running a single CPU core) I/A Series software v8.4.1-v8.8 or Control
Core Services v9.0 or later for the Windows operating system
♦ I/A Series software v8.3 for the Solaris operating system
Lower level documents are referenced to provide detailed specific descriptions, suggestions, and procedures for major areas such as Control, the control network, and I/O communications. System planning is described with respect to the overall performance and sizing of your Foxboro DCS system, and does not take into consideration factors such as cost, environment, installation, and configuration. These factors are described in sales guidelines, sales tools, and other user documents.
Spreadsheets and worksheets are provided to make a variety of sizing calculations for Control Core Services workstations and control stations. Control Core Services sizing spreadsheets are Microsoft Excel® application software packages that execute on a Windows PC and provide automated calculations based on user input. Worksheets are provided for manual calculations if spreadsheets are not available.
Use the spreadsheets and worksheets before you configure the final system to expedite the configuration process and avoid reconfiguring. They can also be used to assist in developing a process control strategy that allows for optimum usage of all stations while providing for expedient throughput for process control blocks.
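The worksheet arithmetic itself is simple accumulation of per-application load estimates, checked against a loading guideline. As a purely illustrative sketch (the component figures and the guideline threshold below are invented placeholders, not values from this guide or the official spreadsheets):

```python
# Illustrative only: the per-application CPU load contributions and the
# guideline threshold are hypothetical placeholders, not published figures.

def workstation_cpu_load(component_loads):
    """Sum the estimated CPU load contributions (percent) for one workstation."""
    return sum(component_loads.values())

def within_guideline(total_load, guideline_pct=60.0):
    """Check the summed load against a (hypothetical) sizing guideline."""
    return total_load <= guideline_pct

loads = {
    "alarm_manager": 12.0,     # hypothetical percentage
    "foxview_displays": 18.5,  # hypothetical percentage
    "aim_historian": 15.0,     # hypothetical percentage
}
total = workstation_cpu_load(loads)
print(f"Estimated CPU load: {total:.1f}%  within guideline: {within_guideline(total)}")
```

The official Excel spreadsheets perform this accumulation automatically from user input; the point here is only that the manual worksheets in Appendix C follow the same add-and-compare pattern.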
Determine the distribution of equipment and software to plan and size the control network for performance. This enables the system to respond well to user actions, control the process in real time, and meet published performance and sizing specifications for control, alarming, AIM*Historian data collection, and so forth.
Additional planning and sizing is needed if the control network is connected through an Address
Translation Station (ATS) to a Nodebus network. Chapter 3 “System Sizing” describes the sizing
calculations for inter-network traffic between the control network and the Nodebus network. For
more information on planning recommendations regarding the ATS usage, see “Standard
I/A Series Migration Strategies” in V8.3 Software for the Solaris Operating System Release Notes and
Installation Procedures (B0700RR).

This document, along with the lower level reference documents, sizing spreadsheets, and worksheets, will help you plan and size your system. They provide information and data calculations about control station loading, workstation loading, and network traffic. Here are some frequently asked questions.
Control Stations:
♦ How many control stations do I need to support the number and type of I/O points
in my system?
♦ How do I distribute my control process load between control stations?
♦ How many peer-to-peer connections can my system support?
♦ What is the estimated Field Bus Scan Load percentage for each control station?
♦ What is the estimated Control Block Load percentage for each control station?
♦ What is the estimated Sequence Block Load percentage for each control station?
♦ What is the estimated Total Control Cycle Load percentage for each control station?
♦ What is the estimated OM Scan Load percentage for each control station?
♦ What is the estimated CPU Load percentage for each control station to support my
AIM*Historian application?
♦ What is the estimated CPU Load percentage for each control station to support my
FoxView displays?
♦ What is the estimated CPU Load percentage for each control station to support my
workstation applications?
♦ What is the estimated CPU Load percentage for each control station if I choose to use
the default Aprint services for alarm notification?
♦ What is the estimated Idle Time percentage for each control station to support sustained alarm rates, alarm bursts, and alarm destinations?
♦ What is the estimated memory consumption for each control station?
♦ Do the sizing estimates for any control station exceed the recommended control station CPU loading guidelines?
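Several of these estimates roll up into a single headroom figure for each control station. The sketch below illustrates that roll-up only; the additive model and every percentage in it are assumptions for illustration, and the official sizing spreadsheets remain the authoritative method:

```python
# Illustrative only: the additive load model and all percentages are
# assumptions, not the guide's published sizing method or limits.

def total_control_cycle_load(fieldbus_scan, control_block, sequence_block):
    """Combine per-category load estimates (percent) into a total
    control cycle load figure for one control station."""
    return fieldbus_scan + control_block + sequence_block

def idle_time(total_cycle_load, om_scan_load, app_load):
    """Remaining CPU headroom after the control cycle, OM scan,
    and application (historian/display/alarm) loads."""
    return 100.0 - (total_cycle_load + om_scan_load + app_load)

cycle = total_control_cycle_load(22.0, 30.0, 8.0)          # hypothetical values
headroom = idle_time(cycle, om_scan_load=10.0, app_load=5.0)
print(f"Total control cycle load: {cycle:.1f}%, idle time: {headroom:.1f}%")
```

A low idle-time result in such an estimate would suggest redistributing compounds across control stations before final configuration, which is exactly the decision the questions above are meant to inform.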
Workstations:
♦ Do the default OS configurable parameter settings for each workstation satisfy the
number of connections I need between the workstation and control stations?
♦ What is the estimated CPU Load percentage for each workstation to support my
AIM*Historian application?
♦ What is the estimated CPU Load percentage for each workstation to support my FoxView displays?
♦ What is the estimated CPU Load percentage for each workstation if I choose to use
the default Aprint services for alarm notification?

B0700AX – Rev W 1. Overview

♦ Do the sizing estimates for any workstation exceed the recommended workstation
CPU Load percentage loading guidelines?
♦ Does the workstation support the multiple CPU core feature (refer to the Hardware
and Software Specific Instructions document shipped with the workstation), and if so,
should I implement it?
♦ How many OM objects have to be created for the unique objects that need to be
monitored by this workstation?
Network Traffic:
♦ What is my estimated control network traffic flow and can my network configuration
handle the estimated sustained and peak traffic rates?
♦ If connecting to a Nodebus system using ATSs, do I need to do a total replacement of
LAN Interfaces (LIs) or can I do a gradual migration using an ATS running in
Extender mode?

NOTE
All references to workstations apply to both Windows and Solaris workstations,
unless explicitly referred to as either a Windows workstation or a Solaris
workstation.

Accessing Spreadsheets
Spreadsheets can be accessed from Global Customer Support
(https://pasupport.schneider-electric.com). These spreadsheets can be run on any personal
computer that has Microsoft Excel software (Microsoft Office 97 or later).
For hardware and software requirements for your workstation, refer to documentation for the
Excel spreadsheet. Also, refer to the MS-Excel help for general principles of operation.

Accessing Spreadsheets from the Electronic Documentation Media
To access the spreadsheets on the Foxboro DCS Electronic Documentation media (K0174MA):
1. Insert the CD-ROM disk or DVD-ROM disk in the CD drive.
2. Install the software.
3. Access the file for the spreadsheet desired.


Object Manager Multicast Optimization (OMMO)


For Windows-based workstations with I/A Series software v8.6-v8.8 or Control Core Services
v9.0 or later, a new option called the Object Manager Multicast Optimization (OMMO) feature
is available to reduce the network processing overhead on the control network and I/A Series
Nodebus, as well as the OM processing overhead on the Foxboro stations (Application
Workstations, Control Processors, etc.). When enabled, OMMO reduces the number of multicast
network communications initiated by the Object Manager.
OMMO is disabled by default. For technical details on enabling and configuring this feature, see
Appendix A “Object Manager Multicast Optimization (OMMO)”.


2. System Planning
This chapter describes system recommendations and guidelines that you need to follow to help
ensure your Foxboro DCS system does not exceed published Control Core Services performance
and sizing specifications.
The system planning phase results in a Foxboro DCS system that:
♦ Provides fast response to user actions.
♦ Provides real-time control with no overruns.
♦ Handles sustained alarm rates and alarm bursts.
♦ Supports customer applications and data access.
You need to be familiar with the various sizing guidelines related to the configuration of a system
prior to system definition/configuration. For planning control network traffic rates in the
Foxboro DCS Control Network, refer to The Foxboro DCS Control Network Architecture Guide
(B0700AZ). If you are connecting the control network to a Nodebus network using an ATS in
Extender mode, you need to size traffic rates for the LI associated with the ATS in Extender
mode. Refer to the “Standard I/A Series Migration Strategies” section in V8.3 Software for the
Solaris Operating System Release Notes and Installation Procedures (B0700RR) for planning inter-
network communications between the control network and Nodebus network. System planning
also requires that you determine:
♦ Workstation Loading
♦ Control Station Loading
♦ Distribution of I/O
♦ OS Configurable Parameters

Workstations
According to the general workstation CPU loading guideline, you need to keep the sustained
workstation idle time to at least the recommended value of Reserved CPU Load:
♦ Windows (single CPU core)=40%
♦ Windows (multiple CPU core)=25%
♦ Solaris=40%
Reserved CPU Load percentage helps ensure that the workstation has a reserve performance
capacity to support temporary peak loads for virus scanning, alarm bursts, alarm recovery,
historian data reduction, historian archiving, large application startups, end of shift reports, file
printing, file copies, network backup/restore, and so forth.
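To make the guideline concrete, the reserved-load figures above can be expressed as a simple arithmetic check. This is an illustrative sketch only; the function name and data layout below are ours, not part of the product:

```python
# Reserved CPU Load guidelines from this section, keyed by (platform, cores).
RESERVED_CPU_LOAD = {
    ("windows", "single"): 40,
    ("windows", "multi"): 25,
    ("solaris", "single"): 40,
}

def max_sustained_load(platform, cores):
    """Maximum sustained CPU load (%) that still leaves the reserved idle time."""
    return 100 - RESERVED_CPU_LOAD[(platform, cores)]

print(max_sustained_load("windows", "multi"))   # 75
print(max_sustained_load("solaris", "single"))  # 60
```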
Workstation planning requires that you consider:
♦ Virus Scanning
♦ System Monitor configuration
♦ OS Configurable Parameters
♦ FoxView displays


♦ Alarming
♦ AIM*Historian
♦ Application Object Services
♦ Customer applications
The CPU Load percentage varies significantly depending on platform type. For platforms with
the multicore feature enabled, McAfee scan times are reduced by 40% and the CPU utilization is
reduced by 50%, resulting in improved responsiveness during these scans. Platforms with the
multicore feature enabled have their CPU utilization reduced by 25% during BESR backups,
resulting in improved responsiveness when creating backups.

Virus Scanning
Virus scan protection is needed even if there are no external network connections, because it
helps protect against threats introduced by file transfers from local devices.

Virus Scan Software on Windows Platforms


For stations with the Windows 10 or Windows Server 2016 operating system, Schneider Electric
requires that you install qualified virus protection software on your workstation.
For stations with Windows Server 2008 or earlier operating systems, McAfee® Virus Protection
Software V8.5i, 8.7i, 8.8i or later has been qualified and is recommended for use on V8.2 or later
Windows workstations on the control network. These versions are not supported on Windows
workstations running previous versions of I/A Series software. Additionally, V8.2 and later
Windows workstations do not support versions of McAfee Virus Scan software prior to V8.5i.
McAfee virus protection software contains a file exclusion protection option that allows you to
exclude virus checking for selected files. Enabling the file exclusion protection feature can
considerably reduce peak load usage and is recommended for all the workstations with single
CPU cores enabled. (For workstations with multiple CPU cores enabled, McAfee’s CPU impact
needs to be negligible with regard to the peak CPU load.)
Because the sustained CPU Load percentage for virus protection is extremely low and the
workstation has a Reserved CPU Load of 40% (for single CPU core workstations) or 25% (for
multiple CPU core enabled workstations) that can handle temporary peak loads, the workstation
summary sizing worksheet does not need a separate line entry for virus scanning; virus scan
loading is absorbed in the line entry for Reserved CPU Load percentage.

Virus Scan Software on Solaris Platforms


Currently there is no virus scan software available for workstations running I/A Series software
v8.3 for the Solaris operating system.

SMONs
System Monitor (SMON) is used to monitor the status of stations and devices on the control
network.

SMONs on I/A Series Software v8.4.x and Earlier


For I/A Series software v8.4.x and earlier, the System Definition software allows up to 30
SMONs per system.


NOTE
If you want to upgrade SMONs on I/A Series software v8.4.x and earlier, refer to
Appendix B “Upgrading SMONs on I/A Series Software v8.4.x and Earlier to v8.5-
v8.8 or Control Core Services v9.0 or Later”.

SMONs on I/A Series Software v8.5-v8.8 and Foxboro DCS Control Core Services v9.0 or Later
For I/A Series software v8.5-v8.8 or Control Core Services v9.0 or later, SMON support on
Windows workstations is expanded from 30 to 128 System Monitors for stations and switches.
However, pre-V8.5 stations still support only 30 SMONs. In order to interoperate, the System
Monitor name of the host workstation has to appear in the first 30 names.
If you want to use more than 30 SMONs on a Foxboro DCS system, keep these caveats in mind:
♦ The current limit of 64 stations per System Monitor Domain remains the same.
♦ SysDef 2.10 or later is needed to configure an instance of the control network with
more than 30 System Monitors.
♦ SysDef 2.10 or later can be used to configure up to 128 System Monitors in a Foxboro
DCS system that consists of a Windows-based control network connected to a Node-
bus I/A Series system. Legacy I/A Series systems retain their limit of 30 System
Monitors.
♦ Versions of SMDH running on the Nodebus may need Quick Fixes; contact
Global Customer Support at: https://pasupport.schneider-electric.com

Limitations on Number of Switches Assigned to a Single System Monitor


In order to optimize the performance of System Monitors on a system with I/A Series software
v8.x or Control Core Services v9.0 or later, the total number of switches assigned to a single
System Monitor cannot exceed fifteen (15), and the total number of switch ports for the switches
assigned to a single System Monitor cannot exceed five hundred (500).
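A quick way to check a planned System Monitor assignment against both limits (at most 15 switches, and at most 500 total switch ports, per System Monitor) is sketched below; the function name is ours:

```python
def smon_assignment_ok(switch_port_counts):
    """Check one System Monitor's switch assignment against the limits above:
    no more than 15 switches and no more than 500 total switch ports."""
    return len(switch_port_counts) <= 15 and sum(switch_port_counts) <= 500

# 12 switches of 40 ports each: 480 ports total, within both limits.
print(smon_assignment_ok([40] * 12))  # True
# 16 switches exceeds the 15-switch limit even if the port total is acceptable.
print(smon_assignment_ok([24] * 16))  # False
```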

OS Configurable Parameters
Workstations support OS configurable parameters that enable you to fine tune OS extension
resources for a particular application. These OS configurable parameters consist mainly of Object
Manager shared memory resources. They include:
♦ Number of OM lists for change-driven data access
♦ Number of Foxboro DCS objects that can be imported to minimize system multicast
messages
♦ Number of OM objects, which also determines the number of Application Objects
supported.
Default values have been set for a typical workstation that supports the recommended guidelines
for workstation applications such as FoxView, Alarm Manager, AIM*Historian, and so forth. You
need not modify the default settings. The OS configurable parameter usage can be viewed using
the /usr/local/show_params utility. Refer to Application Object Services User’s Guide (B0400BZ)
for information on setting OS configurable parameters.


Table 2-1 contains a list of OS configurable parameters with default, minimum, and maximum
values, followed by a brief description of each parameter and its typical usage in a Foxboro DCS
control network:

Table 2-1. OS Configurable Parameters

                              Default Value     Min Value         Max Value
OS Parameter Name             Windows  Solaris  Windows  Solaris  Windows  Solaris
CMX_NUM_CONNECTIONS           200      200      96       96       255      255
URFS_NUM_CONNECTIONS          200      200      96       96       200      255
OM_NUM_OBJECTS                4000     4000     450      450      6000     6000
OM_NUM_CONNECTIONS            200      200      30       30       255      255
OM_NUM_IMPORT_VARS            1000     1000     1000     1000     10,000   10,000
OM_NUM_LOCAL_OPEN_LISTS       100      100      50       50       300      300
OM_NUM_REMOTE_OPEN_LISTS      100      100      50       50       100      100
IPC_NUM_CONN_PROCS            200      200      96       96       255      255
IPC_NUM_CONNLESS_PROCS        250      250      96       96       255      250
GET_SET_TIMEOUT               4        4        2        2        12       12
OM_MULTICAST_OPTIMIZATION1    0        -        0        -        1        -
IMP_SAVE_PERIOD1              30       -        0        -        1440     -
OMMO_MULTICAST_DELAY1         200      -        50       -        12,000   -
OMMO_UNICAST_DELAY1           100      -        0        -        12,000   -

1. Parameter available only on Windows systems with I/A Series software v8.6-v8.8 or Control
Core Services v9.0 or later.
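As a hedged illustration, the Windows columns of Table 2-1 can be transcribed into a small validator to sanity-check proposed settings before you change them. The dictionary and function below are ours, not part of the product; the CMX_NUM_CONNECTIONS >= OM_NUM_CONNECTIONS rule is described with the parameter descriptions that follow.

```python
# (min, default, max) for the Windows column of selected Table 2-1 parameters.
OS_PARAMS_WINDOWS = {
    "CMX_NUM_CONNECTIONS":      (96, 200, 255),
    "OM_NUM_OBJECTS":           (450, 4000, 6000),
    "OM_NUM_CONNECTIONS":       (30, 200, 255),
    "OM_NUM_IMPORT_VARS":       (1000, 1000, 10000),
    "OM_NUM_LOCAL_OPEN_LISTS":  (50, 100, 300),
    "OM_NUM_REMOTE_OPEN_LISTS": (50, 100, 100),
}

def validate(settings):
    """Return a list of problems with the proposed parameter values."""
    errors = []
    for name, value in settings.items():
        if name not in OS_PARAMS_WINDOWS:
            continue  # parameters not transcribed above are not checked
        lo, _default, hi = OS_PARAMS_WINDOWS[name]
        if not lo <= value <= hi:
            errors.append(f"{name}={value} outside [{lo}, {hi}]")
    # Constraint noted with the parameter descriptions below:
    cmx = settings.get("CMX_NUM_CONNECTIONS", 200)
    om = settings.get("OM_NUM_CONNECTIONS", 200)
    if cmx < om:
        errors.append("CMX_NUM_CONNECTIONS must be >= OM_NUM_CONNECTIONS")
    return errors

print(validate({"OM_NUM_OBJECTS": 5000}))          # []
print(validate({"OM_NUM_LOCAL_OPEN_LISTS": 400}))  # one range error
```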

NOTE
Current workstations size the OM Scanner Database for the maximum number of
local lists (OM_NUM_LOCAL_OPEN_LISTS) and maximum number of remote
lists (OM_NUM_REMOTE_OPEN_LISTS), with the maximum of 255 points
per list. Thus, it is not possible to run out of OM Scanner Database entries.


NOTE
For Solaris workstations with I/A Series software v8.4.2 or later, or Windows work-
stations with I/A Series software v8.6-v8.8 or Control Core Services v9.0 or later,
the values in the “Default Value” column are configured in the
\usr\fox\exten\config\loadable.cfg file, and the values in the “Min Value” and “Max Value”
columns are configured in the \usr\fox\exten\config\user_rules.cfg file.

CMX_NUM_CONNECTIONS
♦ Maximum number of concurrent connections allowed by the workstation.
♦ CMX_NUM_CONNECTIONS ≥ OM_NUM_CONNECTIONS.
URFS_NUM_CONNECTIONS (Solaris Only)
♦ Number of connections used by uRFS.
OM_NUM_OBJECTS
♦ Total number of OM objects that can be created by applications. The number of OM
objects is also used to support the number of Application Objects because they share
OM memory space.
♦ You can use the /usr/local/show_params utility to view the usage of OM objects.
♦ You can use the /opt/fox/bin/tools/som utility (“list” command) to view the names of
OM objects created.
♦ Each FoxView creates ~65 OM objects.
♦ Each Alarm Manager Subsystem creates ~10 OM objects.
OM_NUM_CONNECTIONS
♦ Total number of station connections used by OM Server for local OM change-driven
lists. The number of connections determines how many stations can source data for
workstation displays, AIM*Historian, and user applications.
♦ You can use the /usr/local/show_params utility to view the usage of station
connections.
OM_NUM_IMPORT_VARS
♦ Total number of entries used to save station addresses for Foxboro DCS objects to
minimize message multicasts.
♦ You can use the /usr/local/show_params utility to view the usage of Foxboro DCS
objects imported.
♦ You can use the /opt/fox/bin/tools/som utility (“imp” command) to view the names of
imported Foxboro DCS objects (for example, compounds).
OM_NUM_LOCAL_OPEN_LISTS
♦ Total number of workstation OM lists that can be opened for change-driven data
access.
♦ Each FoxView opens one list per 1-75 display points.
♦ Each AIM*Historian opens (through AIM*API) one list per 1-255 points sampled.
♦ Each user application opens (through AIM*API) one list per 1-255 points requested
for change-driven access.


♦ You can use the /usr/local/show_params utility to view the usage of local OM lists.
♦ You can use the /opt/fox/bin/tools/som utility (“opdb” command) to view local OM
lists.
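Putting the per-consumer list sizes above together gives a rough estimate of local OM list usage to compare against OM_NUM_LOCAL_OPEN_LISTS. This sketch is illustrative (the function is ours), and it assumes each point collection fills lists to capacity; actual usage depends on how points are grouped into lists:

```python
import math

def local_lists_needed(display_point_counts, historian_points=0, app_points=0):
    """Estimate local OM lists consumed: FoxView opens one list per 1-75
    display points, while AIM*Historian and user applications (through
    AIM*API) open one list per 1-255 points."""
    lists = sum(math.ceil(points / 75) for points in display_point_counts)
    lists += math.ceil(historian_points / 255)
    lists += math.ceil(app_points / 255)
    return lists

# Four 200-point FoxView displays plus 5000 historian collection points:
print(local_lists_needed([200, 200, 200, 200], historian_points=5000))  # 32
```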
OM_NUM_REMOTE_OPEN_LISTS
♦ Total number of remote OM lists that source data (for example, remote shared vari-
ables) for corresponding local OM lists (for example, FoxView displays) opened on
other workstations.
♦ You can use the /usr/local/show_params utility to view the usage of remote OM lists.
♦ You can use the /opt/fox/bin/tools/som utility (“opdb” command) to view remote
OM lists on workstations.
♦ You can use the /opt/fox/bin/tools/rsom utility (“opdb” command) to view remote
OM lists on control stations.
IPC_NUM_CONN_PROCS
♦ Maximum number of workstation software processes that register for IPC connected
services.
♦ Control Core Services baseline software running on a Windows workstation con-
sumes approximately 35 Control Core Services processes registered for IPC connected
services.
♦ Control Core Services baseline software running on a Solaris workstation consumes
approximately 35 Control Core Services processes registered for IPC connected
services.
♦ You can use the /usr/local/show_params utility to view the usage of IPC connected
services.
♦ You can use the /opt/fox/bin/tools/sipc utility (“list dt” command) to view the names
of the processes registered for IPC connected services.
IPC_NUM_CONNLESS_PROCS
♦ Maximum number of workstation software processes that register for IPC connection-
less services.
♦ Control Core Services baseline software running on a Windows workstation con-
sumes approximately 65 Control Core Services processes registered for IPC
connectionless services.
♦ Control Core Services baseline software running on a Solaris workstation consumes
approximately 70 Control Core Services processes registered for IPC connectionless
services.
♦ You can use the /usr/local/show_params utility to view the usage of IPC connection-
less services.
♦ You can use the /opt/fox/bin/tools/sipc utility (“list cdt” command) to view the names
of the processes registered for IPC connectionless services.
GET_SET_TIMEOUT
♦ The timeout value (in seconds) that a station waits before timing out GETVAL and
SET_CONFIRM operations.
OM_MULTICAST_OPTIMIZATION


♦ (For Windows-based workstations with I/A Series software v8.6-v8.8 or Control Core
Services v9.0 or later only)
Enables (1) or disables (0) the Object Manager Multicast Optimization (OMMO)
feature during workstation boot up, as discussed in Object Manager Calls
(B0193BC).
♦ Reduces the number of multicast network communications initiated by the Object
Manager to reduce the network processing overhead on the control network and
I/A Series Nodebus control network hardware (such as a switch, Address Translation
Station (ATS), etc.) and the OM processing overhead on the Foxboro stations, to
increase the efficiency and robustness of Control Core Services network
communications.
♦ Available on Windows-based workstations with I/A Series software v8.6-v8.8 or Con-
trol Core Services v9.0 or later only.
IMP_SAVE_PERIOD
♦ (For Windows-based workstations with I/A Series software v8.6-v8.8 or Control Core
Services v9.0 or later only)
For the Object Manager Multicast Optimization (OMMO) feature, this parameter
determines how often (in minutes) a workstation saves its import table and address
table to local files, which the workstation uses to populate its OM database when
rebooting, as discussed in Object Manager Calls (B0193BC).
♦ Helps to reduce network load as the last known locations of OM objects are loaded
from this file instead of having to be requested from stations on the network through
multicast messages.
OMMO_MULTICAST_DELAY
♦ (For Windows-based workstations with I/A Series software v8.6-v8.8 or Control Core
Services v9.0 or later only)
Delay time (in milliseconds) between the opening of OM lists that are sent using mul-
ticast message and those that are sent to stations on the Nodebus. Related to the
Object Manager Multicast Optimization (OMMO) feature discussed in Object
Manager Calls (B0193BC).
OMMO_UNICAST_DELAY
♦ (For Windows-based workstations with I/A Series software v8.6-v8.8 or Control Core
Services v9.0 or later only)
Delay time (in milliseconds) between the opening of OM lists that are sent through
unicast message to stations on the control network. Related to the Object Manager
Multicast Optimization (OMMO) feature discussed in Object Manager Calls
(B0193BC).

FoxView
In general, FoxView displays affect Control Core Services control network systems as follows:
♦ Each FoxView display consumes a workstation CPU Load percentage for updating
display values, bar graphs, trend lines, and so forth.
♦ Each FoxView display consumes one workstation OM Server connection per remote
station that sources display points.


♦ Each FoxView instance and its display consumes these workstation OS configurable
parameters:
♦ OM_NUM_OBJECTS
♦ OM_NUM_LOCAL_OPEN_LISTS
♦ OM_NUM_CONNECTIONS
♦ Each FoxView display causes a control station OM Scan Load percentage based on the
number of display points the control station scans each second.
♦ Each FoxView display causes each control station that sources display points to con-
sume one OM Scanner connection.
FoxView display updates are based on the display scan rate and the fast scan option configured
when building a display using FoxDraw. The display configurable scan rate (which has a default
of 1 s) applies to all stations sourcing display points. It determines how often the source stations
scan the display points and send updated values to the workstation. The fast scan option applies
only to control stations with a BPC of 100 ms or faster that are configured by SysDef to allow the
OM fast scan option. A display configured with the fast scan option, coupled with a control
station configured for OM fast scan, causes a control station sourcing display points to scan the
points every 100 ms and send updated values to the workstation.
The default display scan rate of 1 second coupled with the default no fast scan option
provides:
♦ Display call-up with initial data values within 1 to 2 seconds.
♦ Display updates of data sourced by Foxboro stations within 1 second.

NOTE
The fast scan option increases the OM Scan load on each control station that
sources display data, to approximately ten times the normal rate for the display points.
We recommend that you use the FoxView display fast scan option only if you have
control stations running at 100 ms BPC or faster and need an initial display call-up
time less than 1 second or if your data source is external to I/A and the display
update time needs to be faster.

A workstation can support multiple FoxViews (Windows 1-16, Solaris 1-16) and each
workstation worksheet calculates a CPU Load percentage based on a 200-point display with all
the display points changing value every scan cycle. When building FoxView displays, you need to
consider these system impacts:
♦ Displays consume workstation OM Server connections equal to the number of sta-
tions that source the display points. If the number of stations sourcing display points
exceeds the number of OM Server connections, the display will not connect to all
source stations and update all the points. The number of OM Server connections is an
OS configurable parameter (OM_NUM_CONNECTIONS) and can be increased to
correct this condition. The OM multiplexes station connections for all OM lists on a
single workstation. You need not make any modifications to the default value
(Windows=200, Solaris=200).
♦ Displays use one local OM list on the workstation for each of 1 to 75 unique display
points. The number of OM local lists is an OS configurable parameter
(OM_NUM_LOCAL_OPEN_LISTS). The default value (Windows=100, Solaris=100)
typically does not need modification.
♦ Limit displays to 200 points to help ensure an initial call-up time of less
than 2 seconds.
♦ A 200-point display with the default scan rate of 1 second consumes a workstation
CPU Load of:
♦ 1.6% for Windows workstations with multiple CPU cores enabled
♦ 2.0% for Windows workstations with a single CPU core enabled
♦ 5.2% for Solaris workstations.
♦ A 200-point display configured to use the fast scan option consumes a workstation
CPU Load of:
♦ 2.0% for Windows workstations with multiple CPU cores enabled
♦ 4.0% for Windows workstations with a single CPU core enabled
♦ 10% for Solaris workstations.
♦ Displays configured to use the fast scan option increase the workstation CPU Load
percentage because they cause FoxView software to update the display values at a
faster rate and the workstation to process ten times the number of OM Scanner
update messages sent by source control stations every 100 ms.
♦ Displays cause a FCP270/ZCP270 control station that sources display points to have
an OM Scan Load of 1.8% per 1000 points/second changing every scan cycle.

NOTE
For OM scan loading for the FCP280, refer to the Field Control Processor 280
(FCP280) Sizing Guidelines and Excel® Workbook (B0700FY).
For OM scan loading for the FDC280, refer to the Field Device Controller 280
(FDC280) Sizing Guidelines and Excel Workbook (B0700GS).

♦ A 200-point display with the default scan rate of 1 second that has all the display
points sourced by a single FCP270/ZCP270 control station increases the control sta-
tion’s OM Scan Load by 0.4%.
♦ Displays configured to use the fast scan option increase (by ten times) the OM Scan
Load percentage on each 100 ms control station that sources display points and is
configured for the fast scan option. Each source control station scans display points
every 100 ms rather than every 1 second and sends 10 times the number of update
messages.
♦ A 200-point display with the display fast scan option that has all the display points
sourced by a single OM fast scan control station causes a FCP270/ZCP270 control
station OM Scan Load of 4.0%.
♦ To configure displays with the fast scan option, consider the number of FoxView dis-
plays that can simultaneously access data from the same control station. This factor is
covered in the OM Scan Loading section of the Control Station spreadsheets.
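The FCP270/ZCP270 OM Scan Load figures above can be combined into a small estimator. This sketch is ours: it uses the 1.8% per 1000 changing points/second figure, with roughly a tenfold increase for the fast scan option. For FCP280 and FDC280 figures, refer to the documents listed in the note above.

```python
def cp270_om_scan_load(changing_points_per_sec, fast_scan=False):
    """Estimated FCP270/ZCP270 OM Scan Load (%): 1.8% per 1000 display
    points changing per second, about ten times that with fast scan."""
    load = 1.8 * changing_points_per_sec / 1000.0
    return load * 10 if fast_scan else load

# A 200-point display, all points changing every 1-second scan cycle:
print(cp270_om_scan_load(200))                  # ~0.4% as cited above
# The same display with fast scan (points scanned every 100 ms):
print(cp270_om_scan_load(200, fast_scan=True))  # ~4% as cited above
```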


Alarming
In a control network, configure alarm destinations for control station alarms. APRINT services
on each control station send control process alarm messages to the Alarm Management
Subsystem (AMS) for configured alarm destinations such as workstations, printers, and
AIM*Historian workstations. APRINT sends multiple alarm messages (one per destination) for
each process alarm occurrence.
When planning alarm handling for your system, you need to consider:

NOTE
Be aware that the CP270, FCP280, CNI, and FDC280 sizing workbooks allow the
adjustment of load assumptions related to block load for alarming. For example,
"Average BLNALM Block inputs used" and "Percentage of PID blocks using alarm
options" in the CP_Load_Assumptions sheet in the sizing workbook. However, there is
additional, unestimated load to communicate alarms to their destinations. This is
part of the reason for the 70% limit on Core 1 CPU load (for FDC280) and Overall
Station Load (for CP270, FCP280, and CNI): to allow time for this additional alarm
processing. This section helps you estimate that additional alarm processing load.
For help adjusting the estimate for block load for alarming for the FCP280, refer to
the Field Control Processor 280 (FCP280) Sizing Guidelines and Excel® Workbook
(B0700FY). For more information on help adjusting the estimate for block load for
alarming for the FDC280, see the Field Device Controller 280 (FDC280) Sizing
Guidelines and Excel Workbook (B0700GS) and for more information on the CNI,
see Control Network Interface (CNI) Sizing Guidelines and Excel Workbook
(B0700HL).

♦ How do I create an estimate for the expected sustained alarm rate?
For example, if the controller has 5000 blocks, with on average five blocks per
control loop, giving 5000/5 = 1000 control loops total, and 5% of the control
loops are generating an alarm per second, then the estimated sustained alarm
rate = 1000 * 0.05 = 50 alarms per second.
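The worked example above can be written out directly (a minimal sketch; the function name is ours):

```python
def sustained_alarm_rate(total_blocks, blocks_per_loop, alarming_fraction):
    """Estimated sustained alarm rate (alarms/second)."""
    control_loops = total_blocks / blocks_per_loop
    return control_loops * alarming_fraction

# 5000 blocks, about five blocks per control loop, 5% of loops alarming per second:
print(sustained_alarm_rate(5000, 5, 0.05))  # 50.0
```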
♦ How many alarm destinations do I need for each controller?
♦ How many alarm messages per second do I need for each controller? Alarm messages
per second = sustained alarm rate * alarm destinations
♦ How heavily loaded are my controllers; how much Idle Time is needed to support the
number of alarm messages?
♦ Aprint increases the control station alarming load significantly for each alarm destina-
tion configured. For an estimate of the % CPU Load per alarm message per second,
refer to Table 2-2. Verify that you count one “alarm message” for each process alarm
message for each destination, not the number of APRINT messages on the wire
(which can contain multiple process alarm messages concatenated together). For
example, if a compound generates 100 alarms per second, but the compound has five
alarm destinations, that counts as 500 alarms per second for the purposes of this
estimation.


Table 2-2. CPs vs. % CPU Load Per Alarm Message Per Second, For # of Alarm Destinations1

             % CPU Load Per Alarm Message Per Second, For # of Alarm Destinations
CP Type      6 or more   5         4         3         2         1
CP270        0.035       0.036     0.038     0.042     0.047     0.065
CP280        0.022       0.023     0.025     0.028     0.033     0.049
FDC280       0.026       0.027     0.029     0.033     0.038     0.057

1. If your system has more than 16 alarm destinations, add 10% to the resulting estimate to
account for the possibility of slightly less efficient message throughput. For example, if the
original estimate is 12% CPU load, add 10% of 12 for a final estimate of 12 + 12 * 0.10 =
12 + 1.2 = 13.2% CPU load.

♦ Example: On an FCP280, Compound 1 generates 100 process alarms per second,
with 5 alarm destinations, using 0.023 * (100 * 5) = 11.5 % CPU, and Compound 2
generates 40 process alarms per second, with 2 alarm destinations, using
0.033 * (40 * 2) = 2.64 % CPU, for a total of 14.14 % CPU.
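Table 2-2 and the example above can be combined into a small estimator. The transcription and function below are ours, not part of the product; the per-message factor is applied to alarms per second times destinations, and the footnote's 10% allowance is added when the system has more than 16 alarm destinations:

```python
# Per-alarm-message %CPU factors transcribed from Table 2-2; destination
# counts of 6 or more use the "6 or more" column.
ALARM_CPU_FACTOR = {
    "CP270":  {1: 0.065, 2: 0.047, 3: 0.042, 4: 0.038, 5: 0.036, 6: 0.035},
    "CP280":  {1: 0.049, 2: 0.033, 3: 0.028, 4: 0.025, 5: 0.023, 6: 0.022},
    "FDC280": {1: 0.057, 2: 0.038, 3: 0.033, 4: 0.029, 5: 0.027, 6: 0.026},
}

def alarm_cpu_load(cp_type, alarms_per_sec, destinations, system_destinations=1):
    """Estimated %CPU for delivering one compound's alarms."""
    factor = ALARM_CPU_FACTOR[cp_type][min(destinations, 6)]
    load = factor * alarms_per_sec * destinations
    if system_destinations > 16:  # footnote 1 of Table 2-2
        load *= 1.10
    return load

# Compound 1 from the FCP280 example above: 100 alarms/s to 5 destinations.
print(round(alarm_cpu_load("CP280", 100, 5), 2))  # 11.5
```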
♦ The Alarm Management Services (AMS) on a Windows workstation with a single
CPU core enabled consumes 2.5% CPU Load percentage for every 100 alarm mes-
sages per second processed.
The Alarm Management Services (AMS) on a Windows workstation with multiple
CPU cores enabled consumes 1.6% CPU Load percentage for every 100 alarm mes-
sages per second processed.
♦ The Alarm Management Services (AMS) on a Solaris workstation consumes 5% CPU
Load percentage for every 100 alarm messages per second processed plus an additional
5% base load.
♦ The AMS base load on a workstation also depends on the number of Alarm Managers
and the refresh rate of each Alarm Manager.
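The AMS loading figures above can be sketched as follows (illustrative; the additional base load that depends on the number of Alarm Managers and their refresh rates, noted above, is not modeled):

```python
def ams_cpu_load(alarm_msgs_per_sec, platform):
    """Estimated AMS CPU load (%) from alarm message throughput, using
    the per-100-messages-per-second figures above."""
    per_100 = {"windows_single": 2.5, "windows_multi": 1.6, "solaris": 5.0}
    base = 5.0 if platform == "solaris" else 0.0  # Solaris adds a 5% base load
    return base + per_100[platform] * alarm_msgs_per_sec / 100.0

print(round(ams_cpu_load(500, "windows_multi"), 2))  # 8.0
print(round(ams_cpu_load(100, "solaris"), 2))        # 10.0
```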

Historians
In general, AIM*Historian affects the control network as follows:
♦ AIM*Historian consumes workstation CPU Load percentage based on data collection
rates, data reduction, and data archiving.
♦ AIM*Historian consumes workstation Disk Load Time percentage and physical disk
space based on Real-Time Point (RTP) file sizes.
♦ AIM*Historian increases control station CPU Load percentage for OM scanning of
data collection points sourced by the control station.
♦ AIM*Historian consumes these workstation OS configurable parameters:
♦ OM_NUM_LOCAL_OPEN_LISTS
♦ OM_NUM_CONNECTIONS
♦ AIM*Historian consumes one workstation OM Server connection for every station
that sources collection points.


♦ AIM*Historian causes a control station to consume one OM Scanner connection for
collection points it sources.
The data collection rate determines how much data is collected in a particular time frame and is
controlled by:
♦ Delta – specifies the minimum change of a data collection point relative to the previ-
ous collected value. A delta is assigned to each data collection point and needs to be
the controlling variable for the data collection rate.
♦ Collection Frequency – specifies how often the data collection points are scanned for
changes on the stations (for example, control stations) that source the data. AIM*His-
torian supports a slow and fast frequency option. By default, the fast frequency is in
effect, and in most of the implementations there is little need to change to the slow
frequency. In many cases, making the collection frequency the controlling variable can
be used as a convenient alternative to fine-tuning the individual deltas of data collec-
tion points.
♦ Max Time Between Samples – ensures that a value is placed in the database at least at
the specified interval, regardless of whether the value has changed more than the delta
since the last collection. This parameter can be considered the opposite of the
Collection Frequency.
Data collection points are combined and collected into RTP files. A new RTP file is started
whenever the active one is full or when the specified Real-Time Retention Time parameter
(RTTIME) has expired. The finished files are eventually repacked, which shrinks their sizes to
about one-third without compression, and by an additional 40% to 60% when compression is
on. Data retrieval is less efficient from a large number of small files when compared to a small
number of large files; however, if the RTP files are too large, performance problems can occur
because the files may not comfortably fit into physical memory: disk activity increases
and system performance may decrease.
A workstation can support multiple AIM*Historian instances, each capable of collecting in excess
of 20,000 points. When using AIM*Historian software, you need to consider these system
impacts:
♦ AIM*Historian software consumes an average workstation CPU Load (Windows=
1.05% (multiple CPU cores) or 2% (single CPU core), Solaris=3%) for data collec-
tions that change at a rate of 1000 points/second.
♦ AIM*Historian software consumes a workstation CPU Load (Windows=1%, Solaris=
1.5%) for every 1000 data collection points reduced at a rate ≥ 15 minutes.
♦ AIM*Historian software consumes a workstation CPU Load (Windows=5%, Solaris=
4%) for every 5000 data collection points archived at the default rate.
♦ AIM*Historian software consumes workstation OM Server connections based on the
number of stations that source the data collection points. If the number of stations
sourcing the data collection points exceeds the number of OM Server connections,
not all the data is collected. The number of OM Server connections is an OS configurable
parameter (OM_NUM_CONNECTIONS) and can be increased to correct this
condition; normally, you need not modify the default value. The OM multiplexes station
connections for all OM lists on a single workstation.

18
2. System Planning B0700AX – Rev W

♦ AIM*Historian software uses 1 local OM list for each of 1 to 255 data collection
points. The number of OM local lists is an OS configurable parameter
(OM_NUM_LOCAL_OPEN_LISTS) and can be increased if necessary.
♦ AIM*Historian software causes a control station OM Scan Loading of 1.7% per 1000
collection points/second changing every scan cycle.
♦ AIM*Historian software causes a control station OM Scanner connection to be used
by each control station that sources data collection points.
♦ The ARCHSIZE parameter controls the size of the RTP files and experience shows
that a good compromise is to configure ARCHSIZE to a value that results in about
one RTP file per day, or very few RTP files per day.
♦ The maximum recommended RTP file size is about 1/8 of the physical memory size. For
example, if your Windows computer has 512 MB of RAM, ARCHSIZE need not be
configured greater than 64 MB; if your Solaris computer has 1.0 GB of RAM,
ARCHSIZE need not be configured greater than 128 MB.
♦ A good value for RTTIME is usually 86,400 seconds (1 day).
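As a rough sizing aid, the CPU-load figures above can be combined into a single estimate. This is a minimal sketch based only on the percentages quoted in this section (Windows, multiple CPU cores); the function name and inputs are illustrative, not part of any Foxboro tool.

```python
def historian_cpu_load_pct(changing_pts_per_sec, reduced_pts, archived_pts):
    """Rough AIM*Historian workstation CPU load estimate (Windows, multi-core).

    Uses the guideline figures from this section:
      - 1.05% per 1000 points/second of changing collected data
      - 1%    per 1000 data collection points reduced at a rate >= 15 minutes
      - 5%    per 5000 data collection points archived at the default rate
    """
    collect = 1.05 * (changing_pts_per_sec / 1000.0)
    reduce_ = 1.0 * (reduced_pts / 1000.0)
    archive = 5.0 * (archived_pts / 5000.0)
    return collect + reduce_ + archive

# Example: 2000 changing points/second collected, 3000 reduced, 5000 archived
load = historian_cpu_load_pct(2000, 3000, 5000)
print(round(load, 2))  # 2.1 + 3.0 + 5.0 = 10.1
```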
Refer to I/A Series Information Suite AIM*Historian User’s Guide (B0193YL) for information
regarding the AIM*Historian configuration parameters that you may need to configure based on
your system requirements and constraints. The AIM*Historian Excel spreadsheet HistSize.xls can
be used to estimate the Historian Configuration Parameters.
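The ARCHSIZE and RTTIME guidance above can be sketched as a quick estimate, assuming the 1/8-of-physical-memory cap and the roughly-one-RTP-file-per-day target described in this section. The helper below is illustrative only and is not a substitute for HistSize.xls.

```python
def archsize_mb(physical_ram_mb, daily_data_mb):
    """Suggest an ARCHSIZE (MB): roughly one RTP file per day,
    capped at 1/8 of physical memory per the guideline above."""
    cap = physical_ram_mb // 8
    return min(daily_data_mb, cap)

RTTIME_SECONDS = 86_400  # 1 day, the usual good value per this section

# Example: a 512 MB workstation writing ~100 MB of RTP data per day
print(archsize_mb(512, 100))   # 64 (capped at 512/8)
print(archsize_mb(1024, 100))  # 100 (one file per day fits under the 128 MB cap)
```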

Printers
The Reserved CPU Load percentage in the workstation sizing spreadsheet is set to include
handling printer operations such as system messages and alarm messages. When deciding which
workstations need to host local printers, consider:
♦ Local printers consume approximately 10% of the CPU Load for printing alarms.
♦ All alarm printers need to be operated in the HSD (High Speed Draft) mode. This
allows better system performance when printing alarms and documents.
♦ Printing reports has about the same CPU Load percentage effect as printing alarms
at a rate of 30 alarms/minute (approximately 10% load).

Application Object Services


Application Object Services (AOS) provides hierarchical application objects that are similar to
control objects but are under the control of the user application. When using AOS, consider:
♦ AOS is not a small application. Its size is approximately 25 MB.
♦ AOS uses Object Manager shared memory. Run calcAppSize to determine the correct
size of the OS configurable parameter OM_NUM_OBJECTS.
♦ The workstation CPU Load percentage is low; optimized, buffered AOA accesses
using the AO API can exceed 10,000 per second on a workstation.
For detailed information about AOS, refer to Application Object Services User’s Guide (B0400BZ).

Applications
It is the responsibility of the user to determine the system impact of customer application
packages or third-party applications installed on the control network. Consider these points when
installing application packages on the workstation:


♦ The general workstation CPU Loading guideline is to maintain the Reserved
Overhead percentage (Windows (single CPU core)=40%, Windows (multiple CPU
cores enabled)=25%, Solaris=40%) to help ensure enough reserve capacity to
support peak loads for process upsets, large application startups, end of shift reports,
printing, file copies, network backup/restore, and so forth.
♦ For customer applications that access Control Core Services data, estimate the
workstation CPU Load percentage based on AIM*API performance guidelines.
♦ Third-party applications’ specifications for minimal system requirements (for exam-
ple, RAM size) may affect Control Core Services applications like AIM*Historian.
♦ The number of application software packages.
♦ The size of user-developed applications and programs.
♦ The frequency of application executions.
♦ Simultaneous application executions.
♦ Minimizing system broadcasts and multicasts.
For Windows platforms, you can determine the effect an application has on the workstation by
using the Windows Performance Meter (Programs > Administrator Tools > Performance).
The Windows Performance Meter provides metrics for the system, processor, processes, memory,
physical disk, and so forth.
For Solaris platforms, you can determine the effect an application has on the workstation using
the ps command and the perfmeter utility (click Launch > Applications > Utilities >
Performance Meter).
Depending on the number and types of applications being run at the same time, increasing the
workstation memory may improve system performance. Increased memory usually reduces the
amount of paging and swapping to the physical hard disk.

Number of Self-Hosting or Auto-Checkpointing FDC280s, FCP280s, or CP270s Supported By a Boot Host
If the auto-checkpointing option is enabled, along with the self-hosting option, the loading
requirements are based partially on the CP idle time and the CP database size. The resulting
maximum total time from checkpoint start to the display of an “installed into flash memory”
message is 15 minutes per FDC280, FCP280, or CP270. With an auto-checkpoint rate of two
hours (15 minutes x 8 = 2 hours), this results in a maximum of eight FDC280s, FCP280s, or
CP270s that can be hosted by a single boot host when both functions are enabled.
If you slow the auto-checkpoint rate or run small databases, you can host more FDC280s,
FCP280s, or CP270s on a single boot host. Even if the self-hosting option is disabled, the auto-
checkpoint timing is roughly the same. The recommended maximum number of FDC280s,
FCP280s, or CP270s that can be hosted also remains unchanged.
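The arithmetic above (a 15-minute checkpoint window divided into the auto-checkpoint period) can be expressed as a small helper; the function and parameter names are illustrative only.

```python
def max_hosted_cps(checkpoint_period_min, minutes_per_cp=15):
    """Maximum FDC280/FCP280/CP270 count per boot host when self-hosting
    and auto-checkpointing are both enabled: each controller needs up to
    minutes_per_cp from checkpoint start to 'installed into flash memory'."""
    return checkpoint_period_min // minutes_per_cp

print(max_hosted_cps(120))  # 8 controllers at the two-hour default rate
print(max_hosted_cps(240))  # 16 if the auto-checkpoint rate is slowed to 4 hours
```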

Control
This section provides an overview of the system planning and sizing guidelines needed for you to
adequately plan your control strategy on the control network. For detailed specifications
regarding these control processors, refer to:
♦ Field Device Controller 280 (FDC280) User's Guide (B0700GQ)


♦ Field Control Processor 280 (FCP280) User’s Guide (B0700FW)


♦ Field Control Processor 270 (FCP270) User’s Guide (B0700AR)
♦ Z-Module Control Processor 270 (ZCP270) User’s Guide (B0700AN)
In general, you have to determine the number of control locations desired and properly size each
control station. The key performance measures associated with sizing a control station are:
♦ Verify the control station has enough execution time to read and write the I/O.
♦ Verify the control station has enough execution time to execute the installed function
blocks.
♦ Verify the control station has enough memory to hold the control database and
sequence code.
♦ Verify the control station has enough Idle Time to send alarm messages to all config-
ured alarm destinations.
♦ Verify the control station has enough execution time to scan all data points it sources
for FoxView displays, AIM*Historian, Peer-to-Peer connections, and workstation
application packages.
♦ Verify the control station has enough OM Scanner connections for data it sources for
FoxView displays, AIM*Historian, Peer-to-Peer connections, and workstation appli-
cation packages.
♦ Verify the control station can support the number of scanner update messages it sends
based on the number of OM Scanner connections and the OM scan rate. Each scan-
ner update message takes an OM Scan Load of 0.03% for the FCP270/ZCP270 (see
B0700FY for the FCP280 or B0700GS for the FDC280). The OM scanner update
messages are per list, with up to 100 scan points per message. An application that
opens a list of 225 points may require 3 update messages per scan cycle if the list points
change every scan cycle. A control station scanning at the fast scan option rate of 100
ms sends ten times the number of scanner update messages it would send if it scanned
at 1 second.
♦ Verify the control station has enough OM Server connections for peer-to-peer sink
data.
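The scanner-update bullet above can be illustrated numerically. This is a minimal sketch assuming the stated figures (up to 100 scan points per update message, 0.03% OM Scan Load per message for the FCP270/ZCP270); ceiling division reproduces the 225-point example's 3 messages.

```python
import math

def scanner_update_load_pct(list_sizes, pct_per_msg=0.03):
    """OM Scan Load from scanner update messages in one scan cycle,
    assuming every point in every list changes each cycle.
    Each open list sends ceil(points / 100) messages per cycle."""
    msgs = sum(math.ceil(n / 100) for n in list_sizes)
    return msgs, msgs * pct_per_msg

msgs, load = scanner_update_load_pct([225])
print(msgs, round(load, 2))  # 3 messages, 0.09% per scan cycle

# A station scanning at the 100 ms fast scan option sends ten times the
# messages per second that it would at the default 1 second rate.
```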
For detailed operations on sizing these control processors, refer to these manuals:
♦ FDC280 - Field Device Controller 280 (FDC280) Sizing Guidelines and Excel Work-
book (B0700GS)
♦ FCP280 - Field Control Processor 280 (FCP280) Sizing Guidelines and Excel® Work-
book (B0700FY)
♦ FCP270 - Field Control Processor 270 (FCP270) Sizing Guidelines and Excel Workbook
(B0700AV)
♦ ZCP270 - Z-Module Control Processor 270 (ZCP270) Sizing Guidelines and Excel
Workbook (B0700AW)

Alarming
Alarms and status messages are generated by an Alarm block or by alarm options in selected
blocks. Consider these options:
♦ Number of points with alarm indication


♦ Priority of alarms
♦ Criticality of alarms within each compound
♦ Devices and applications to be notified of process alarms
♦ Use of the compound alarm inhibit parameter
♦ Frequency of alarms
The frequency of spontaneous alarms impacts the devices configured to be notified of alarms,
communication traffic on the network, and operator responsiveness. Alarming strategies include:
♦ Providing a significant delta to stop nuisance alarming caused by the process drifting
in and out of alarm when it is near a high or low limit
♦ Using the compound alarm inhibit function to stop alarms on a priority level basis.

Control Distribution
Distributing the various control schemes among the process control hardware (control
processors and Fieldbus Modules) requires you to consider:
♦ CP storage memory needed
♦ CP compound or block throughput
♦ Interprocess communication (IPC) connections
♦ Peer-to-peer relationships
♦ FBMs supported per CP

Peer-to-Peer Relationships
Peer-to-peer connections between stations are established when a compound:block.parameter in a
source (supplier of data) station is connected to a compound:block.parameter in a sink (receiver
of data) station. An IPC connection is formed in each station. Multiple peer-to-peer connections
between two stations result in only one IPC connection for each station.
For the sink points and connections supported per station type, refer to Table 3-16 “Peer-to-Peer
Data” on page 53.
The change-driven update rate is 0.5 sec even when the BPC and the OM Scanner are running
faster than 0.5 sec.

OM Scan Load
The OM Scan Load percentage is based on:
♦ The number of data points scanned/second.
♦ The number and size of scanner update messages sent each second for OM list
updates.
Table 2-3 through Table 2-5 provide examples of OM Scan Load for a CP270 sourcing data.


NOTE
For OM scan loading for the FCP280, refer to the Field Control Processor 280
(FCP280) Sizing Guidelines and Excel® Workbook (B0700FY).
For OM scan loading for the FDC280, refer to the Field Device Controller 280
(FDC280) Sizing Guidelines and Excel Workbook (B0700GS).

AIM*API Application Examples

Table 2-3. OM Scan Load: AIM*API Application Examples

Description            Points per Second Scanned    % Points Changing per Second    Source CP270 OM Scan Load %
AIM*API Application 10,000 100 16.4
AIM*API Application 10,000 50 11.7
AIM*API Application 10,000 25 9.4
AIM*API Application 9000 100 14.8
AIM*API Application 9000 50 10.5
AIM*API Application 9000 25 8.5
AIM*API Application 8000 100 13.2
AIM*API Application 8000 50 9.4
AIM*API Application 8000 25 7.5
AIM*API Application 7000 100 11.5
AIM*API Application 7000 50 8.3
AIM*API Application 7000 25 6.6
AIM*API Application 6000 100 9.9
AIM*API Application 6000 50 7.1
AIM*API Application 6000 25 5.6
AIM*API Application 5000 100 8.2
AIM*API Application 5000 50 5.9
AIM*API Application 5000 25 4.7
AIM*API Application 4000 100 6.6
AIM*API Application 4000 50 4.7
AIM*API Application 4000 25 3.8
AIM*API Application 3000 100 5.0
AIM*API Application 3000 50 3.5
AIM*API Application 3000 25 2.8
AIM*API Application 2000 100 3.3
AIM*API Application 2000 50 2.4
AIM*API Application 2000 25 1.9
AIM*API Application 1000 100 1.7
AIM*API Application 1000 50 1.2
AIM*API Application 1000 25 0.9


NOTE
AIM*Historian software is an application that uses AIM*API software. The default
list scan rate for AIM*API software is 0.5 seconds. Scanning 5000 points every 0.5
seconds is equivalent to scanning 10,000 points/second.
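The note's equivalence (5000 points every 0.5 s = 10,000 points/second scanned) and the roughly linear shape of Table 2-3 suggest a simple estimate. This sketch derives the points-scanned rate from a list size and scan period; the slopes are read off the table's top rows (for example, 16.4% at 10,000 points/second) and are an approximation, not a published formula.

```python
def points_scanned_per_sec(list_points, scan_period_s=0.5):
    """AIM*API lists scan at 0.5 s by default, so 5000 points
    every 0.5 s is equivalent to 10,000 points/second scanned."""
    return list_points / scan_period_s

def approx_om_scan_load(pts_per_sec, pct_changing):
    """Rough CP270 OM Scan Load interpolated from Table 2-3.
    Slopes (% load per 1000 points/second scanned) are taken from
    the 10,000 points/second rows at 100/50/25% changing."""
    slope = {100: 1.64, 50: 1.17, 25: 0.94}[pct_changing]
    return slope * pts_per_sec / 1000.0

rate = points_scanned_per_sec(5000)              # 10,000 points/second
print(round(approx_om_scan_load(rate, 100), 1))  # 16.4, matching the table
```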

Peer-To-Peer Examples

Table 2-4. OM Scan Load: Peer-To-Peer Examples

Description                  Number of Points    Points per Second Scanned    % Points Changing per Second    Source CP270 OM Scan Load %
Peer-To-Peer Source Station 5000 10,000 100 17.7
Peer-To-Peer Source Station 5000 10,000 50 12.4
Peer-To-Peer Source Station 4500 9000 100 15.9
Peer-To-Peer Source Station 4500 9000 50 11.1
Peer-To-Peer Source Station 4000 8000 100 14.2
Peer-To-Peer Source Station 4000 8000 50 9.9
Peer-To-Peer Source Station 3500 7000 100 12.4
Peer-To-Peer Source Station 3500 7000 50 8.7
Peer-To-Peer Source Station 3000 6000 100 10.6
Peer-To-Peer Source Station 3000 6000 50 7.4
Peer-To-Peer Source Station 2500 5000 100 8.9
Peer-To-Peer Source Station 2500 5000 50 6.2
Peer-To-Peer Source Station 2000 4000 100 7.1
Peer-To-Peer Source Station 2000 4000 50 5.0
Peer-To-Peer Source Station 1500 3000 100 5.3
Peer-To-Peer Source Station 1500 3000 50 3.7
Peer-To-Peer Source Station 1000 2000 100 3.6
Peer-To-Peer Source Station 1000 2000 50 2.5
Peer-To-Peer Source Station 500 1000 100 1.8
Peer-To-Peer Source Station 500 1000 50 1.3

NOTE
The number of Sink stations does not affect the OM Scan Load percentage on the
Source station. The list scan rate for Peer-To-Peer is 0.5 seconds. Scanning 5000
points every 0.5 second is equivalent to scanning 10,000 points/second.
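Table 2-4's points-scanned column follows directly from the 0.5 s peer-to-peer list scan rate in the note above: each sink point is scanned twice per second. A minimal sketch:

```python
def p2p_points_scanned_per_sec(sink_points, scan_period_s=0.5):
    """Peer-to-peer lists scan at 0.5 s, so each connected point
    contributes two scans per second on the source station."""
    return sink_points / scan_period_s

print(p2p_points_scanned_per_sec(5000))  # 10000.0, the top row of Table 2-4
```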


FoxView Application Examples

Table 2-5. OM Scan Load: FoxView Application Examples

Description          Number of WSTA70s/WSVR70s/AWs with FoxView Connections to CP270    Average Active FoxViews per WSTA70, WSVR70, or AW    Average Unique Points per Display    Points per Second Scanned    % Points Changing per Second    Source CP270 OM Scan Load %
FoxView Application 25 2 200 10,000 100 19.0
FoxView Application 25 1 200 5000 100 9.5
FoxView Application 25 2 200 10,000 50 15.0
FoxView Application 25 1 200 5000 50 7.5
FoxView Application 25 2 100 5000 100 11.5
FoxView Application 25 1 100 2500 100 5.8
FoxView Application 25 2 100 5000 50 7.5
FoxView Application 25 1 100 2500 50 3.8
FoxView Application 10 2 200 4000 100 7.6
FoxView Application 10 1 200 2000 100 3.8
FoxView Application 10 2 200 4000 50 6.0
FoxView Application 10 1 200 2000 50 3.0
FoxView Application 10 2 100 2000 100 4.6
FoxView Application 10 1 100 1000 100 2.3
FoxView Application 10 2 100 2000 50 3.0
FoxView Application 10 1 100 1000 50 1.5
FoxView Application 5 2 200 2000 100 3.8
FoxView Application 5 1 200 1000 100 1.9
FoxView Application 5 2 200 2000 50 3.0
FoxView Application 5 1 200 1000 50 1.5
FoxView Application 5 2 100 1000 100 2.3
FoxView Application 5 1 100 500 100 1.2
FoxView Application 5 2 100 1000 50 1.5
FoxView Application 5 1 100 500 50 0.8
FoxView Application 1 2 200 400 100 0.8
FoxView Application 1 1 200 200 100 0.4
FoxView Application 1 2 200 400 50 0.6
FoxView Application 1 1 200 200 50 0.3
FoxView Application 1 2 100 200 100 0.5
FoxView Application 1 2 100 200 50 0.3
FoxView Application 1 1 100 100 100 0.2
FoxView Application 1 1 100 100 50 0.2


NOTE
The OM Scan Load percentage for the CP270 is based on the number of unique
display points, the list scan rate (default 1.0 second), and the percentage of display
points changing every second. Examples above are for the default 1.0 second scan
rate. Displays configured for the fast scan option rate will have an OM Scan Load
percentage ten times the default list rate.
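The per-CP270 scan rate in Table 2-5 follows from multiplying out the first three columns, as this sketch shows. The default 1.0 s display list scan rate is taken from the note above; the function name is illustrative.

```python
def foxview_points_scanned_per_sec(stations, foxviews_per_station,
                                   unique_points_per_display,
                                   scan_period_s=1.0):
    """Unique display points scanned per second on the source CP270.
    Displays use a default 1.0 s list scan rate; the fast scan
    option (100 ms) multiplies the load by ten."""
    points = stations * foxviews_per_station * unique_points_per_display
    return points / scan_period_s

print(foxview_points_scanned_per_sec(25, 2, 200))  # 10000.0, Table 2-5 top row
```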

Control Processor Load Analysis


The estimated number of CPs needed is based on:
♦ Number of FBMs needed (both remote and local)
♦ Number and type of control loops consisting of compounds of related blocks which
perform a single control scheme
♦ Scan rate per block and compound
♦ CP storage memory capacity
♦ CP throughput based on scan rates and block value point counts
♦ Need for fault-tolerant CPs for crucial processes
♦ IPC connections available per CP (supporting various application requests for values)
♦ Peer-to-peer connections (affecting CP storage memory and IPC connections).

Block Processing Cycle


Running with 100 ms and 50 ms Block Processing Cycle (BPC)
Consider the number of blocks to be scanned within a compound and the number of compounds
to be scanned per control station. Control station processing throughput per second is dependent
upon control station type; however, some control blocks count as more than one block and FBM
equipment control blocks (ECBs) have to be factored in as well. An ECB is a software block
created for FBM data storage and is scanned at the fastest scan rate assigned to any control block
connected to any point on the FBM.
Careful planning is necessary to help prevent control blocks or compounds from being unable to
process within a single scan cycle.

Phasing

NOTICE
POTENTIAL LOSS OF DATA

• Be careful when phasing.
• Do not put the blocks at different phases in the same control loop.

Failure to follow these instructions can result in data loss.


Phasing of blocks, which is the staggering of scan periods, needs to be used to help prevent block
processor overload. Refer to Control Processor 270 (CP270) and Field Control Processor 280
(CP280) Integrated Control Software Concepts (B0700AG) prior to attempting to phase a station.

CNI Planning
These subsections discuss sizing for the CNI. For more information on the CNI, see Control
Network Interface (CNI) User's Guide (B0700GE).

CNI Data Throughput Limits and CPU Usage


The CNI supports a bi-directional update rate of 2000 points per second. The CPU usage when
running at this throughput can be seen at point A on the graph in Figure 2-1.
The CNI has to exceed this throughput in some instances, such as during startup and recovery
from connection loss. To enforce the CPU usage ceiling under burst load conditions, such as the
processing overhead caused by the Object Manager opening and closing lists, the CNI limits the
throughput of updates to 2500 updates per second (each way). This is
indicated at point B on the graph in Figure 2-1. Beyond this point, it is possible for updates
to be received by a local CNI from data sources faster than they can be published to its
remote CNI partner. Under these circumstances, previous values are superseded by the latest data,
and as a result of these lost updates, a system message is displayed.

Figure 2-1. CNI Data BiDirectional Throughput and CPU Usage

Up to around 5000 updates per second (2500 in each direction), the CPU usage is
about 5% per 1000 updates. At greater change rates, the graph flattens off due to
throttling; updates are folded, and only the latest value received is sent as soon as possible.


Limitation When Working With Maximum Connections Through a CNI
For a CNI, up to 10,000 connections can be made to remote points in another Foxboro DCS
Control Network. Although it is possible to connect 10,000 points, a typical system may reach a
limit before this count is reached.
The number of connections that can be made depends on how efficiently these can be
represented in the scan table on the CNI. The scan table consists of a number of rows, each
containing twenty items. When Object Manager lists open on the CNI, each list is stored
starting with a new row. If the number of points in a list is not divisible by twenty, some entries in
the scan table are not utilized, and thus the maximum number of connections that can be made is
reduced. A system message is displayed when the maximum capacity of the scan table has been
reached. Typically, across a mixture of peer-to-peer and HMI connections, this message is not
displayed until more than 9000 connections are made.
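The 20-item row packing described above can be sketched: each Object Manager list starts on a new row, so lists whose sizes are not multiples of twenty waste entries and lower the effective 10,000-connection ceiling. The constants and list sizes below are illustrative.

```python
import math

SCAN_TABLE_ITEMS = 10_000  # CNI ceiling for remote-point connections
ROW_SIZE = 20              # items per scan table row

def rows_used(list_sizes):
    """Each OM list begins on a fresh row, so a list of n points
    occupies ceil(n / 20) full rows, padding included."""
    return sum(math.ceil(n / ROW_SIZE) for n in list_sizes)

def entries_remaining(list_sizes):
    return SCAN_TABLE_ITEMS - rows_used(list_sizes) * ROW_SIZE

# 400 lists of 25 points each hold 10,000 points, which would fit exactly
# if packed; padding to row boundaries consumes 800 rows = 16,000 entries.
print(rows_used([25] * 400))         # 800 rows (overflows the table)
print(entries_remaining([25] * 250))  # 0: 250 such lists exhaust the table
```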

Inter-CNI traffic (Bytes Per Second)

Figure 2-2. Inter-CNI traffic (Bytes Per Second)

The measurement of bytes transmitted shows the same flattening at around 2500 items per
second in one direction; each update costs 24 bytes, plus packet headers; the rate does not go
above 2500 items, therefore the steady-state total traffic does not go above approximately
24*2500 = 60,000 bytes per second. The traffic increases if the system is bombarded with
subscription changes, for example an action such as removing thousands of peer-to-peer
connections.
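The steady-state figure above (24 bytes per update, capped at the 2500 updates per second throttle in each direction) can be checked with a one-liner; packet header overhead is excluded, as in the text.

```python
BYTES_PER_UPDATE = 24
MAX_UPDATES_PER_SEC = 2500  # one-direction throttle

def steady_state_bytes_per_sec(updates_per_sec):
    """Payload traffic in one direction, excluding packet headers.
    Updates beyond the throttle are folded, so traffic plateaus."""
    return BYTES_PER_UPDATE * min(updates_per_sec, MAX_UPDATES_PER_SEC)

print(steady_state_bytes_per_sec(2500))  # 60000 bytes/second, as above
print(steady_state_bytes_per_sec(9000))  # still 60000: updates are folded
```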


Broadcast Considerations
If broadcast traffic on the control network is a concern, there are a number of aspects related to
the CNI sizing that impact this.

Remote Compound List Configuration on the Remote CNI


During start-up, the CNI broadcasts each of the compounds that are contained in the remote
Compound List. This broadcast operation is rate limited, so the number of compounds
configured does not increase the level of broadcast traffic, but simply extends the time period
over which it occurs. Figure 2-3 provides an estimate of the broadcast traffic that is experienced.

Figure 2-3. Compound Broadcast Transmission Density

The first three complete broadcasts are at a density of two messages per second, with a 10-second
and then a 20-second gap between them. After the third complete broadcast, there is a four-minute
gap, and the final broadcast of all compound names is at a density of only one message per two
seconds:


Figure 2-4. Compound Broadcast Density Plot for 10K Compounds

Retry Sink Connections


Some disconnections and reconfigurations can result in list connections not being made or not
being repaired between the CNI and the source CP, or between another client and its CNI. In
these cases, initiate the “Retry Sink Connections” operation from the System Manager, as
described in System Manager (B0750AP). This operation is similar in function to the
“Checkpoint” operation on a CP, except that the CNI has no control strategy to save.
The “Retry Sink Connections” operation performs two functions:
♦ The re-broadcast of compounds for which this CNI is proxy for the remote control
network, and
♦ The retry of open lists subscribed by the remote control network, for which this CNI
is the sink.
The compound broadcast in this case is the same as the last broadcast at startup; see Figure 2-3
and Figure 2-4. The broadcast message density is one message per two seconds.
The compound broadcast is executed prior to the open list retry; therefore, the time to execute the
open list retry depends on the number of proxy compound names to be broadcast.


Figure 2-5. Retry Sink Connections Operation Time

Connection Time Considerations


If a significant number of connections need to be established to points on the remote control
network, there is an elapsed time period between the first and last point being connected. As
described, establishing connections requires broadcast operations, and the rate limiting of these
operations has an impact. Figure 2-6 shows the elapsed time that may occur as connection counts
increase.
The CNI does not differentiate connection requests from different consumer stations, so
conditions where concurrent connection requests from multiple stations could occur need to be
considered if connection time is of importance. If better performance is needed, it is an option to
employ more than one CNI to load-share the connections between two control networks.
The graph in Figure 2-6 shows the time to return good updates from connected points; for
example, when the local CNI is rebooted and has to re-open all requested lists with the source
CPs. In Figure 2-6, the software restart consumes most of the first 40 seconds, before the
list-open operations start.


Figure 2-6. Number of Points vs. Time to Connect the First and Last Points

NOTE
This graph is taken from one example measurement.

The actual time until first update received (blue plot) depends on the distribution of connections
across source CPs. If one source CP is very heavily subscribed, and all list-open requests are
directed at that one CP, then it can take longer to send its first update, due to being busy. If
connections are distributed across a number of CPs, the first made lists may start updating while
other lists are still being opened on other CPs. The only certain data in the graph as shown in
Figure 2-6 is that the last point to be connected will send its first update when it is connected, and
not before.

Recovery After a Detected Failure of All Communication Between CNIs
When communications become unavailable between CNIs, all points are reported Out of Service
(OOS). On recovery of communications, the data updates for all points are delivered from the
local CNI to the remote CNI to help ensure that consumers have the latest data. The
throughput of updates is rate limited in the CNI to manage CPU utilization, and the elapsed time
taken to deliver the latest data depends on the number of connections. Another factor that occurs
when the CNIs reconnect relates to synchronization of the state in the local and remote CNIs. If
there have been changes to connection subscriptions since the communication unavailability, it is
necessary to resynchronize the subscription state between the two CNIs. This can take an elapsed
period of time.
On reconnection, if there is no serious difference between the two CNIs' view of subscriptions,
there is no re-synchronization work to do, and updates will flow almost immediately, subject only to the


'throttling' at about 2500 updates per second. 10,000 subscriptions would take about four
seconds to complete the initial update after reconnect.
On reconnect, if there is a serious difference between the two CNIs, and subscriptions have to be
re-synchronized by the remote CNI sending all its requirements to the local CNI again (for
example, if either CNI has been restarted), the time taken is, for all intents and purposes, the same
as the graph shown in Figure 2-6 for a reboot. Since all lists have to be remade, the
resynchronization does not add significantly to the time that is already being taken.
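The four-second figure above follows from dividing the subscription count by the 2500 updates/second throttle. A minimal sketch; this covers only the throttled delivery of the initial updates, not a full resynchronization or reboot.

```python
THROTTLE_UPDATES_PER_SEC = 2500

def initial_update_time_s(subscriptions):
    """Elapsed time to deliver the first update for every subscription
    after reconnect, when no resynchronization is needed."""
    return subscriptions / THROTTLE_UPDATES_PER_SEC

print(initial_update_time_s(10_000))  # 4.0 seconds, as stated above
```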

Limitation of Sink Points in a Workstation to help Ensure Reconnection Following CNI Reboot
The CNI supports up to 10,000 compounds and up to 10,000 sink points,
but the supported combination of compounds and sink points in a workstation depends:
♦ On the load on the workstation, especially the change update load, and,
♦ On other broadcasts that may be coming into the workstation from stations other
than the CNI.
When a CNI reboots, it re-broadcasts the compounds1 for which this CNI is proxy for the remote
Foxboro DCS control network. These received compound names are queued in the sink stations -
for example, the workstation - for reconnection to the CNI that just rebooted, and then, by proxy,
to the remote control network. For large numbers of compounds in the CNI’s remote Compound
List and large numbers of sink points in the workstation, it is possible for the workstation's queue
of compound names for reconnection to fill up and for some compound names to be dropped,
resulting in points for those compounds failing to reconnect.
Depending on other activity in the system, it is possible, though not necessarily true, that a “Retry
Sink Connections” operation (discussed in System Manager (B0750AP)) could successfully
reconnect those points that initially did not reconnect upon a CNI reboot; for example, if a
“Retry Sink Connections” operation is performed during a time period during which there are
fewer broadcasts being sent by other stations.
The workstation queue for holding compound names for reconnect processing can hold up to
200 names at a time.
To help ensure that the points reconnect following a CNI reboot or a “Retry Sink Connections”
operation:
♦ Limit the number of sink points in the workstation or,
♦ Limit the number of compounds for the CNI.
These examples help with this trade-off.
♦ If CNI’s compound namespace is 200 compounds or less, all sink points eventually
reconnect, assuming no compound broadcasts from other stations.
♦ If CNI’s compound namespace is greater than 200 compounds, the reconnect out-
come depends on the number of sink points and other workstation load. These
examples involve change update loads into the workstation:

1. At CNI reboot, the CNI also broadcasts its diagnostic shared variables prior to broadcasting its compound names.


♦ Case 1: If 10,000 compounds, 10,000 sink points, and no change update load,
3200 of the 10,000 sink points have been observed to successfully reconnect using
a “Retry Sink Connections” operation.
♦ Case 2: If 10,000 compounds, 3000 sink points, and no change update load,
2950 of the 3000 sink points have been observed to successfully reconnect using a
“Retry Sink Connections” operation.
♦ Case 3: If 10,000 compounds, and 2500 sink points with 1000 changes per sec-
ond, all the 2500 sink points are observed to successfully reconnect using a “Retry
Sink Connections” operation.
The success of broadcast messages depends on other broadcast activity occurring on the control
network.
If repeated “Retry Sink Connections” attempts prove unsuccessful at reconnecting all
the workstation's sink points, recover by closing and re-opening the workstation's sink lists; for
example, close and re-open displays and restart the Historian collection as needed.

I/O Points
The control station user guides and control station spreadsheets provide recommendations and
sizing guidelines for the I/O:
♦ Legacy Y-module (100 Series) FBMs
♦ DIN Rail Mounted (200 Series) FBMs
♦ FOUNDATION Fieldbus (FF)
♦ PROFIBUS
♦ HART
♦ Modbus
♦ FDSI
♦ DeviceNet

Network
Understand the details of the network traffic flow to plan and implement a control network. A
reasonable qualitative analysis of traffic profiles can be obtained without performing a rigorous
quantitative analysis. To achieve this, a reasonable estimate has to be made. Normally, the designer
needs to know:
♦ What are the traffic characteristics (traffic volume and rates)?
♦ Device throughput
♦ Which devices are talking to each other (the traffic flows across the network)?
♦ The physical and logical location of all these devices
♦ What are the traffic volumes by device type and/or technology?
♦ peak and average sustained load
♦ packet/frame size
♦ What is the network percent capacity used?


♦ How much of the bandwidth is being used?


♦ Is there any evidence of network congestion?
♦ number of packet discards
♦ detected error rates
♦ traffic overhead
♦ If there are nodes connected through ATSs, what is the ambient multicast rate in the
system?
If you do not adequately understand these traffic flows, you can end up with a slow or
non-working network. Answering these basic questions and performing some planning allows for
a load-balanced, redundant system.
These Control Core Services functions contribute to the network traffic rate:
♦ FoxView – display updates
♦ AIM*Historian – data collection
♦ Alarming – messages to Alarm Managers, Printers, and Historians
♦ Peer-to-Peer
♦ Application Packages – change-driven data access and get/set operations.
However, the network traffic rate is not normally a gating issue for the control network, because
typical traffic rate calculations for the Control Core Services functionality yield a network
bandwidth utilization of <2%. The control network traffic rates need to be considered only in the
case of third-party applications or user applications that generate high packet rates.
Workstation to workstation operations on the control network, such as copying extremely large
files, can result in a temporary high bandwidth usage of up to 50% of the switch port utilization.
These types of operations also need to be considered.
The baseline data assumed for the control network bandwidth usage (<2%) is:
♦ Average Control Core Services packet size is 150 bytes.
♦ Average Control Core Services sustained packet rate will not normally exceed 1500
packets/second (based on <2% bandwidth and 150-byte packets).
The following table contains packet size and packet rate data for 100 Mb device port speeds:
Packet Size (bytes) Max Packets/Second
64 148,810
150 73,529 (2% = 1470)
1518 8,127

NOTE
When measuring switch port utilization on the control network, a given measure-
ment applies only to a given link and the conversations on that link.
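The packet-rate figures in the table above can be reproduced from the 100 Mb/s line rate, assuming standard Ethernet per-frame overhead (8-byte preamble plus 12-byte inter-frame gap) in addition to the frame itself:

```python
def max_packets_per_second(frame_bytes, link_bps=100_000_000):
    # Each frame on the wire also carries an 8-byte preamble and a
    # 12-byte inter-frame gap, so add 20 bytes of overhead per frame.
    wire_bits = (frame_bytes + 20) * 8
    return round(link_bps / wire_bits)

print(max_packets_per_second(64))    # 148810 (matches the table)
print(max_packets_per_second(150))   # 73529
print(max_packets_per_second(1518))  # 8127
# 2% utilization at 150-byte packets:
print(int(max_packets_per_second(150) * 0.02))  # 1470
```

This is how the "1500 packets/second" baseline above is derived: roughly 2% of the maximum rate for 150-byte packets.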

Refer to The Foxboro DCS Control Network Architecture Guide (B0700AZ) for planning the
control network, and Foxboro DCS Switch Configurator Application Software for the Control
Network User's Guide (B0700CA) for configuring the control network.


I/A Series software v8.1 introduced the feature for connecting the control network to a Nodebus
network using ATSs. If using an ATS in Extender mode, calculate inter-network traffic rates
through the ATS to help ensure that its corresponding Nodebus LI traffic rate
does not exceed the maximum recommended sustained rate of 220 packets/second. All stations
that migrate to the control network and continue to communicate with stations on the Nodebus
have to maintain their original Nodebus communication limits.
Copying a large data stream from a Nodebus through an ATS to the control network is not
recommended. Refer to the “Standard I/A Series Migration Strategies” section in V8.3 Software
for the Solaris Operating System Release Notes and Installation Procedures (B0700RR) for specific
details regarding data transfers between the Nodebus and the control network.
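A simple way to budget ATS Extender-mode traffic is to sum the sustained inter-network packet rates of the stations involved and compare the total to the 220 packets/second limit. A minimal sketch (the station names and rates are hypothetical):

```python
# Hypothetical sustained packet rates (packets/second) crossing the ATS
# in Extender mode; station names and figures are illustrative only.
inter_network_rates = {
    "AW5101": 60,   # display updates sourced from Nodebus CPs
    "HIST01": 90,   # historian collection
    "CP6001": 45,   # peer-to-peer sinks on the far side
}

ATS_LIMIT_PPS = 220  # maximum recommended sustained rate through the LI

total = sum(inter_network_rates.values())
print(f"{total} pps through ATS; within limit: {total <= ATS_LIMIT_PPS}")
```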

System Planning Summary Tables


The specifications listed in Table 2-6 apply to workstations with the new multiple CPU core
feature enabled.

Table 2-6. Windows Multicore Workstation System Planning Summary

FoxView: 200 point display at default scan rate of 1 second
  Workstation CPU Load %: 1.56%
  OM Server Connections: 1 per station sourcing data
  OM Local Lists: 3 lists; 1 list for each 1-75 points
  OM Objects: 65 per FoxView

FoxView: 200 point display at fast scan rate of 0.1 seconds
  Workstation CPU Load %: 1.89%
  OM Server Connections: 1 per station sourcing data
  OM Local Lists: 3 lists; 1 list for each 1-75 points
  OM Objects: 65 per FoxView

Alarm Manager: 100 messages per second
  Workstation CPU Load %: 1.57%
  OM Server Connections: Refer to AMS User’s Guide (B0700AT)
  OM Local Lists: Refer to AMS User’s Guide (B0700AT)
  OM Objects: 10 per Alarm Manager

AIM*Historian: data collection change rate of 1000 points per second
  Workstation CPU Load %: 1.05%
  OM Server Connections: 1 per station sourcing data
  OM Local Lists: 4 lists; 1 list for each 1-255 points
  OM Objects: N/A

The specifications listed in Table 2-7 and Table 2-8 apply to the workstations with a single CPU
enabled connected to FCP270s and ZCP270s.
For OM scan loading for the FCP280, refer to the Field Control Processor 280 (FCP280) Sizing
Guidelines and Excel® Workbook (B0700FY).
For OM scan loading for the FDC280, refer to the Field Device Controller 280 (FDC280) Sizing
Guidelines and Excel Workbook (B0700GS).

Table 2-7. Windows Workstation System Planning Summary

FoxView: 200 point display at default scan rate of 1 second
  Workstation CPU Load %: 2%
  Workstation OM Server Connections: 1 per station sourcing data
  Workstation OM Local Lists: 3 lists; 1 list for each 1-75 points
  OM Objects: 65 per FoxView
  Control Station OM Scan Load %(1): 0.4%
  Control Station OM Scanner Connections: 1 for each station sourcing data
  Control Station Idle Time %: N/A

FoxView: 200 point display at fast scan rate of 0.1 seconds
  Workstation CPU Load %: 4%
  Workstation OM Server Connections: 1 per station sourcing data
  Workstation OM Local Lists: 3 lists; 1 list for each 1-75 points
  OM Objects: 65 per FoxView
  Control Station OM Scan Load %(1): 4.0%
  Control Station OM Scanner Connections: 1 for each station sourcing data
  Control Station Idle Time %: N/A

Alarm Manager: 100 messages per second
  Workstation CPU Load %: 2.5%
  Workstation OM Server Connections: Refer to AMS User’s Guide (B0700AT)
  Workstation OM Local Lists: Refer to AMS User’s Guide (B0700AT)
  OM Objects: 10 per Alarm Manager
  Control Station OM Scan Load %(1): N/A
  Control Station OM Scanner Connections: Refer to AMS User’s Guide (B0700AT)
  Control Station Idle Time %: Decreases 0.1% per alarm message generated

AIM*Historian: data collection change rate of 1000 points per second
  Workstation CPU Load %: 2%
  Workstation OM Server Connections: 1 per station sourcing data
  Workstation OM Local Lists: 4 lists; 1 list for each 1-255 points
  OM Objects: N/A
  Control Station OM Scan Load %(1): 1.7%
  Control Station OM Scanner Connections: 1 for each station sourcing data
  Control Station Idle Time %: N/A

AIM*Historian: Data Reduction of 1000 points reduced >= 15 minutes
  Workstation CPU Load %: 1%
  All other columns: N/A

AIM*Historian: Data Archiving of 5000 points at default rate
  Workstation CPU Load %: 5%
  All other columns: N/A

1. FCP270/ZCP270 control station OM scan load percentage is 2% for 1000 points changing every
second and 0.03% for every scanner update message.
For OM scan loading for the FCP280, refer to the Field Control Processor 280 (FCP280) Sizing
Guidelines and Excel® Workbook (B0700FY).
For OM scan loading for the FDC280, refer to the Field Device Controller 280 (FDC280) Sizing
Guidelines and Excel Workbook (B0700GS).

Table 2-8. Solaris Workstation System Planning Summary

FoxView: 200 point display at default scan rate of 1 second
  Workstation CPU Load %: 5.2%
  Workstation OM Server Connections: 1 per station sourcing data
  Workstation OM Local Lists: 3 lists; 1 list for each 1-75 points
  OM Objects: 65 per FoxView
  Control Station OM Scan Load %(1): 0.4%
  Control Station OM Scanner Connections: 1 for each station sourcing data
  Control Station Idle Time %: N/A

FoxView: 200 point display at fast scan rate of 0.1 seconds
  Workstation CPU Load %: 10%
  Workstation OM Server Connections: 1 per station sourcing data
  Workstation OM Local Lists: 3 lists; 1 list for each 1-75 points
  OM Objects: 65 per FoxView
  Control Station OM Scan Load %(1): 4.0%
  Control Station OM Scanner Connections: 1 for each station sourcing data
  Control Station Idle Time %: N/A

Alarm Manager: 100 messages per second
  Workstation CPU Load %: 5% plus base load 5%
  Workstation OM Server Connections: Refer to AMS User’s Guide (B0700AT)
  Workstation OM Local Lists: Refer to AMS User’s Guide (B0700AT)
  OM Objects: 10 per Alarm Manager
  Control Station OM Scan Load %(1): N/A
  Control Station OM Scanner Connections: Refer to AMS User’s Guide (B0700AT)
  Control Station Idle Time %: Decreases 0.1% per alarm message generated

AIM*Historian: data collection change rate of 1000 points/second
  Workstation CPU Load %: 3%
  Workstation OM Server Connections: 1 per station sourcing data
  Workstation OM Local Lists: 4 lists; 1 list for each 1-255 points
  OM Objects: N/A
  Control Station OM Scan Load %(1): 1.7%
  Control Station OM Scanner Connections: 1 for each station sourcing data
  Control Station Idle Time %: N/A

AIM*Historian: Data Reduction of 1000 points reduced >= 15 minutes
  Workstation CPU Load %: 1.5%
  All other columns: N/A

AIM*Historian: Data Archiving of 5000 points at default rate
  Workstation CPU Load %: 4%
  All other columns: N/A

1. FCP270/ZCP270 control station OM scan load percentage is 2% for 1000 points changing every
second and 0.03% for every scanner update message.
For OM scan loading for the FCP280, refer to the Field Control Processor 280 (FCP280) Sizing
Guidelines and Excel® Workbook (B0700FY).
For OM scan loading for the FDC280, refer to the Field Device Controller 280 (FDC280) Sizing
Guidelines and Excel Workbook (B0700GS).

Data Access to Control Core Services Objects


Applications can use the Object Manager (OM) API to access Foxboro DCS objects that can be
either OM Objects, Application Objects (AO), or Control and I/O (CIO) Objects. OM API
library functions for getting and setting Foxboro DCS objects are better suited for situations in
which you want a single transfer of data. These functions use IPC connectionless services and
perform a message multicast (by default) or unicast (if the Object Manager Multicast
Optimization feature is enabled) operation if the Foxboro DCS object’s address is not provided by
the application (for example, om_getval) or if the Foxboro DCS object’s address had not been
imported by the OM during a previous data access (for example, getval with import option).
Control Core Services multicasts are limited by the ATS to 40/second to Nodebus stations. Nodebus
systems that have CP10s or other gateways based on the Comm15 architecture (such as FoxGuard or
Allen-Bradley stations) must not exceed system multicast rates of 10/second. Systems with the
control network only must not exceed 5000 (5K) multicast messages per second, and can also be
limited by control station CPU load.

NOTE
If you estimate that the number of multicast messages on your Nodebus and/or the
control network may exceed these specifications, we recommend that you enable the
Object Manager Multicast Optimization (OMMO) feature, which reduces the
number of multicast messages made and enables many of them (global find, get and
set, etc.) to be performed as unicast messages which have a much lower impact on
the networks. OMMO is available on stations with I/A Series software v8.6-v8.8 or
Control Core Services v9.0 or later. Refer to Object Manager Calls (B0193BC) for
more information on this feature.


If a Foxboro DCS object’s address is known, the OM API performs direct connectionless send
messages to the station that sources the data. The maximum data access rates for CIO objects are
governed by the access method (multicast versus direct send) and the control station load.
Table 2-9 lists the maximum data access rates to CIO objects for a system with the control
network on which the stations with I/A Series software v8.6-v8.8 or Control Core Services v9.0 or
later have the OMMO feature disabled.

Table 2-9. Data Access to CIO Objects on CP270 With OMMO Feature Disabled

                       Multicasts (per second)                   Direct Sends (per second)
CP270(1) Idle Time   Set   Set_Confirm   Get   Global Find (GF)   Set   Set_Confirm   Get
75% 50 50 50 50 75 75 75
65% 50 50 50 50 75 75 65
55% 50 50 50 50 75 70 60
45% 50 50 50 50 75 60 50
35% 50 40 45 50 75 45 40
25% 50 35 30 40 75 35 35
15% 50 25 25 25 75 25 25

1. For OM scan loading for the FCP280, refer to the Field Control Processor 280 (FCP280) Sizing
Guidelines and Excel® Workbook (B0700FY).
For OM scan loading for the FDC280, refer to the Field Device Controller 280 (FDC280) Sizing
Guidelines and Excel Workbook (B0700GS).

Notes:
1. When using multicasts, the load on a single control station is the sum of all the get/set
operations performed by all the applications in the entire system because each station
has to process each message.
2. Sequence code generates get/set requests using the OM API. Refer to High Level Batch
Language (HLBL) User’s Guide (B0400DF) for sequence code guidelines.
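Note 1 above can be turned into a quick budget check against the limits in Table 2-9: sum the multicast get/set rates of all applications in the system and compare the total to the limit for the control station's measured idle time. A minimal sketch (the application rates are hypothetical; the limits are the multicast "Get" column of Table 2-9):

```python
# Table 2-9 limits for multicast Get at a given CP270 idle time bucket
MULTICAST_GET_LIMIT = {75: 50, 65: 50, 55: 50, 45: 50, 35: 45, 25: 30, 15: 25}

# With multicasts, every CP processes every message, so the system-wide
# rate is the sum over all applications (hypothetical figures):
app_rates = [10, 15, 8]  # multicast gets/second from three applications
system_rate = sum(app_rates)

idle = 35  # measured CP idle time bucket (%)
print(system_rate, "<=", MULTICAST_GET_LIMIT[idle], ":",
      system_rate <= MULTICAST_GET_LIMIT[idle])
```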

3. System Sizing
These sections present sizing information for workstations, control stations, and I/O points. All
data values presented in tables and worksheets have been rounded to one decimal position.

Workstations with Multiple CPU Cores Enabled


Table 3-1 shows the initial Workstation Specification (Windows platform) for the
Foxboro DCS control network with Control Core Services v9.1 or later, for workstations with
multiple CPU cores enabled. The performance and sizing metrics are based on each workstation’s
specifications and thus each worksheet Workstation CPU Factor is 1.0.

Table 3-1. Windows Workstation with Multiple CPU Core Specification

Description               Value
System                    Microsoft Windows 7 Professional Service Pack 1
Computer                  Pentium® 4 CPU 3.2 GHz (H92)
                          Intel® Xeon® CPU E5-1603 @ 2.80 GHz
                          8 GB of RAM
Hard Disk Drives          Windows (C:) 48.8 GB; IA (D:) 416 GB
Workstation CPU Factor    1.0

The CPU Load percentage varies significantly depending on platform type. For platforms with
the multicore feature enabled, McAfee scan times are reduced by 40% and CPU utilization
is reduced by 50%, resulting in improved responsiveness during these scans. In addition, platforms
with the multicore feature enabled have their CPU utilization reduced by 25% during BESR
backups, resulting in improved responsiveness when creating backups.

Workstation Summary Worksheet


This section describes a workstation summary worksheet that totals the CPU Load percentage for
the workstation based on the detailed worksheets that follow (for example, FoxView,
AIM*Historian). The CPU Load percentage must not exceed 100%; otherwise, you have to stop the
processes of selected applications or transfer them to another workstation.
First, calculate the CPU Load percentage for each of the detailed worksheets that follow and then
enter their corresponding values into the summary worksheet as shown in Table 3-2. The
Workstation Summary Worksheet normalizes the CPU Load percentage based on the Workstation
CPU Factor for Control Core Services v9.1 or later. Entries for Base CPU Load and Reserved
CPU Load are fixed values and need to be entered in the Workstation Summary Worksheet based
on Windows platform guidelines.


NOTE
Values in the summary worksheet are based on the Windows 7 workstation
examples from the worksheets that follow in this section. For example, the values for
the “Alarm Manager” entries in the summary worksheet (Table 3-2) are derived
from the Total CPU Load percentage of “1.57” for the Windows workstations in
Table 3-3.

Table 3-2. Workstation with Multiple CPU Cores Summary Worksheet Example

Description                                                       Value (%), Windows Workstations
1) Base Control Core Services/I/A Series CPU Load (Windows=1.0)   1.0
2) Reserved CPU Load: Windows (multiple CPU core enabled)=25.0    25.0
3) Alarm Manager                                                  1.57
4) FoxView                                                        5.01
5) AIM*Historian                                                  1.05
6) Other Applications(1) (for example, TDR, Application
   Object Services, and so forth)                                 5.0
7) Total CPU Load % =
   Items 1 + 2 + ((Sum of Items 3-6)*CPU Factor)                  38.63
1. Refer to the reference document specific to the application.

Examples:
1. Total CPU Load % for a Windows 7 Workstation:
(1.0+25.0) + ((1)*(1.57+5.01+1.05+5.0)) = 38.63
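The summary formula above can be sketched as a small helper; this is only a sketch of the worksheet arithmetic, not official tooling:

```python
def total_cpu_load(base, reserved, app_loads, cpu_factor=1.0):
    # Items 1 + 2 + ((sum of items 3-6) * CPU Factor)
    return base + reserved + sum(app_loads) * cpu_factor

# Windows 7 multicore example from Table 3-2:
load = total_cpu_load(base=1.0, reserved=25.0,
                      app_loads=[1.57, 5.01, 1.05, 5.0])
print(round(load, 2))  # 38.63
```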


Table 3-3 shows an example calculation for a workstation on the control network with 100
alarms/second and five Alarm Managers.

Table 3-3. Alarm Manager for Workstation with Multiple CPU Cores Worksheet Example

Description                                                  Value (%), Windows Workstations
1) Number of alarm messages per second from Aprint           100
   Services for all CPs
   Example: 100 alarms/second
2) Workstation CPU load % for every 100 alarms/second:       1.57%
   Windows Formula = 1.57% * (number of alarms / 100)
   Windows Example: 1.57% * (100/100) = 1.57%
3) Total CPU Load percentage = item 2 above                  1.57%

NOTE
CPU load for Matching and Filtering is not
considered for sizing calculations because it is
negligible.

NOTE
The Sustained Alarm Rate measures the time to process the alarm message traffic
and is independent of the AST refresh rate.

Table 3-4. FoxView Worksheet for Workstation with Multiple Cores Enabled Example

Description                                                  Value (%), Windows Workstations
1) Number of FoxViews only using displays with default       3.12%
   configuration values at (Windows=1.56%) per FoxView
   Windows Example: 2 FoxViews = 2 * 1.56 = 3.12%
2) Number of FoxViews using any displays with Fast Scan      1.89%
   Option at (Windows=1.89%) per FoxView
   Windows Example: 1 FoxView with Fast Scan =
   1 * 1.89 = 1.89%
3) Total CPU Load % = sum of items 1 and 2 above             5.01%

NOTE
Actual Total CPU Load percentage is the sum of all FoxView loads.


Table 3-5. AIM*Historian for Workstation with Multiple Cores Enabled Worksheet Example

Description                                                  Value (%), Windows Workstations
1) CPU Load for data collection change rate                  1.05%
   (Windows=1.05%) per 1000 points/second
   Refer to “Data Collection Rate Examples”.
2) Total CPU Load % = item number 1.                         1.05%

NOTE
Data archiving and data reduction are not considered for sizing calculations as they
are not sustained loads.

Data Collection Rate Examples


Compute the configured data collection rate:
♦ 1000 points at 0.5 seconds = 2000 points/second
♦ 1000 points at 1.0 seconds = 1000 points/second
♦ 1000 points at 10.0 seconds = 100 points/second
♦ Configured data collection rate = 1000 points/second.
Calculate the data collection change rate (All points changing):
♦ = (configured data collection rate) * (% of points changing)
♦ = (1000 points/second) * (1)
♦ = 1000 points/second.
Calculate the CPU Load for data collection change rate:
♦ CPU Load = (data collection change rate / 1000) * CPU Load for 1000/second
♦ = (1000/1000) * CPU Load % for 1000 pts/second
♦ Windows = 1 * 1.05 = 1.05%

Workstations with Single CPU Core Enabled


Table 3-6 shows the initial Workstation Specification tables (Windows and Solaris platforms) for
the Foxboro DCS control network with I/A Series software v8.3-v8.8 or Control Core Services
v9.0 or later, for workstations with a single CPU core enabled. The performance and sizing
metrics are based on each workstation’s specifications and thus each worksheet Workstation CPU
Factor is 1.0.


Table 3-6. Windows Workstation With Single CPU Core Specification

Description               Value
System                    Microsoft Windows XP Professional Version 2002 Service Pack 2
Computer                  Pentium® 4 CPU 3.2 GHz (PW380, P92)
                          512 MB of RAM
Hard Disk Drives          XP (C:) 15.6 GB; IA (D:) 217 GB
Workstation CPU Factor    1.0

NOTE
The legacy Windows workstations (for example, PW340, PW360, PW370) have a
Workstation CPU Factor of 1.5 based on performance and sizing specifications for
I/A Series software releases prior to v8.2.

Table 3-7. Solaris Workstation Specification

Description Value
System Solaris 10 Operating System (6/06 distribution)
Computer UltraSPARC IIIi® (Ultra 25® workstation, P82)
1.34 GHz
1 GB of RAM
Hard Disk Drives 160 GB SATA
Workstation CPU Factor 1.0

NOTE
The Workstation CPU Factors for each Solaris workstation that can be migrated
from V7.x to V8.3 software for the Solaris operating system are:
- P79 workstation, SunBlade 150 (550 MHz) = 2.5
- P80 workstation, SunBlade 2000 (900 MHz) = 1.5
- P81 workstation (silver model), SunBlade 1500/S (1.5 GHz) = 1.0
- P81 workstation (red model), SunBlade 1500/R (1.03 GHz) = 1.3

NOTE
The CPU Load percentage varies significantly depending on platform type.


Workstation Summary Worksheet


This section describes a workstation summary worksheet that totals the CPU Load percentage for
the workstation based on the detailed worksheets that follow (for example, FoxView,
AIM*Historian). The CPU Load percentage must not exceed 100%; otherwise, you have to stop the
processes of selected applications or transfer them to another workstation.
First, calculate the CPU Load percentage for each of the detailed worksheets that follow and then
enter their corresponding values into the summary worksheet as shown in Table 3-8. The
Workstation Summary Worksheet normalizes the CPU Load percentage based on the Workstation
CPU Factor for pre-v8.3 stations migrated to I/A Series software v8.3-v8.8 or Control Core
Services v9.0 or later. Entries for Base CPU Load and Reserved CPU Load are fixed values and
need to be entered in the Workstation Summary Worksheet based on Windows and Solaris
platform guidelines.

NOTE
Values in the summary worksheet are based on the Windows and Solaris
workstation examples from the worksheets that follow in this chapter. For example,
the values for the “Alarm Manager” entries in the summary worksheet as shown in
Table 3-8 are derived from the Total CPU Load percentage of “4.5%” and “16.5%”
calculated for the Windows and Solaris workstations in Table 3-9, “Alarm Manager
for Workstation with Single CPU Core Worksheet Example” on page 47.

Table 3-8. Workstation with Single CPU Core Summary Worksheet Example

                                                          Value (%),   Value (%),
                                                          Windows      Solaris
Description                                               Workstations Workstations
1) Base Control Core Services/I/A Series CPU Load         1.0          3.0
   (Windows=1.0, Solaris=3.0)
2) Reserved CPU Load:                                     40.0         40.0
   Windows (single CPU core)=40.0, Solaris=40.0
3) Alarm Manager                                          4.5          16.5
4) FoxView                                                8.0          10.4
5) AIM*Historian                                          3.1          4.7
6) Other Applications(1) (for example, TDR, Application   5.0          5.0
   Object Services, and so forth)
7) Total CPU Load % =                                     61.6         79.6
   Items 1 + 2 + ((Sum of Items 3-6)*CPU Factor)
1. Refer to the reference document specific to the application.


Examples:
1. Total CPU Load % for a Sun Blade 1500/R Workstation:
(3.0+40.0) + ((1.3)*(16.5+10.4+4.7+5.0)) = 90.6
2. Total CPU Load % for a Sun Blade 2000 Workstation:
(3.0+40.0) + ((1.5)*(16.5+10.4+4.7+5.0)) = 97.9
3. Total CPU Load % for a Sun Blade 150 Workstation:
(3.0+40.0) + ((2.5)*(16.5+10.4+4.7+5.0))= 134.5
This configuration exceeds CPU 100% capacity.
4. Total CPU Load % for a PW340 Workstation:
(1.0+40.0) + ((1.5)*(4.5+8.0+3.1+5.0)) = 71.9
Table 3-9 shows an example calculation for a workstation on the control network with 100
alarms/second and five Alarm Managers.
Table 3-9. Alarm Manager for Workstation with Single CPU Core Worksheet Example

                                                          Value (%),   Value (%),
                                                          Windows      Solaris
Description                                               Workstations Workstations
1) Number of alarm messages per second from Aprint 100 100
Services for all CPs
Example: 100 alarms/second
2) Workstation CPU load % for every 100 alarms/second: 2.5% 10.0%
Windows Formula = 2.5% * (number of alarms / 100)
Solaris Formula = 5.0% * ((number of alarms / 100) + 1)
Windows Example: 2.5% * (100/100) = 2.5%
Solaris Example: 5.0% * ((100/100)+1) = 10.0%
3) Base load for Number of Alarm Managers 1.0% 4.0%
Windows Formula = Default Windows AST Table 3-10 Lookup *
(default refresh rate / actual refresh rate)
Solaris Formula = Default Solaris AST Table 3-11 Lookup *
(default refresh rate / actual refresh rate)
Windows Example, 5 AMs at default (3.0) AST refresh rate
32K database: 1 * (3.0/3.0) = 1.0%
Solaris Example, 5 AMs at default (3.0) AST refresh rate
32K database: 4 * (3.0/3.0) = 4.0%
4) CPU Load for Matching and Filtering (per Alarm Manager) 1.0% 2.5%
CPU Load % = (Matching and Filtering Load Coefficient
[Windows=0.2%, Solaris=0.5%]) * (Number of Matches/Filters) *
(Number of Alarm Managers)
Windows Example: 5 matches for 1 Alarm Manager
CPU Load = 0.2% * 5 * 1 = 1.0%
Solaris Example: 5 matches for 1 Alarm Manager
CPU Load = 0.5% * 5 * 1 = 2.5%
5) Total CPU Load % = sum of items 2, 3, and 4 above 4.5% 16.5%
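The worksheet arithmetic in Table 3-9 can be sketched as follows (the coefficients are taken from the table's formulas; the AST lookup value comes from Table 3-10 or Table 3-11, and this is only a sketch of the worksheet, not official tooling):

```python
def alarm_manager_load(alarms_per_sec, n_ams, matches, ast_lookup,
                       refresh=3.0, platform="windows"):
    # Item 2: sustained alarm message traffic
    if platform == "windows":
        traffic = 2.5 * (alarms_per_sec / 100)
        coeff = 0.2   # Matching/Filtering load coefficient
    else:  # solaris
        traffic = 5.0 * ((alarms_per_sec / 100) + 1)
        coeff = 0.5
    # Item 3: AST base load from the lookup table, scaled by refresh rate
    base = ast_lookup * (3.0 / refresh)
    # Item 4: matching and filtering (per Alarm Manager with matches)
    match = coeff * matches * n_ams
    return traffic + base + match

# Table 3-9 examples: 100 alarms/s, 5 matches on 1 Alarm Manager,
# AST lookup for 5 AMs / 32K database at the default 3.0 s refresh:
print(alarm_manager_load(100, 1, 5, ast_lookup=1.0))                      # 4.5
print(alarm_manager_load(100, 1, 5, ast_lookup=4.0, platform="solaris"))  # 16.5
```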


NOTE
The Sustained Alarm Rate measures the time to process the alarm message traffic
and is independent of the AST refresh rate.

Table 3-10. Default AST Table for Number of Alarm Managers on a Windows-Based Workstation
with Single CPU Core (Local and Remote)

# Alarm AST Refresh CPU Load % CPU Load % CPU Load % CPU Load %
Managers Rate 1K Database 5K Database 10K Database 32K Database
1 3.0 seconds 0.2 0.2 0.2 0.4
5 3.0 seconds 0.2 0.2 0.4 1.0
10 3.0 seconds 0.2 0.4 0.8 1.7
15 3.0 seconds 0.2 0.7 1.0 2.2
20 3.0 seconds 0.2 0.7 1.0 3.0
25 3.0 seconds 0.2 0.8 1.5 3.5

Table 3-11. Default AST Table for Number of Alarm Managers on a Solaris-Based Workstation
(Local and Remote)

# Alarm AST Refresh CPU Load % CPU Load % CPU Load % CPU Load %
Managers Rate 1K Database 5K Database 10K Database 32K Database
1 3.0 seconds 0.9 0.9 0.9 0.9
5 3.0 seconds 4.0 4.0 4.0 4.0
10 3.0 seconds 8.0 8.0 8.0 8.0
15 3.0 seconds 12.0 12.0 12.0 12.0
20 3.0 seconds 16.0 16.0 16.0 16.0
25 3.0 seconds 20.0 20.0 20.0 20.0

Notes:
1. Each of the Number of Alarm Managers tables measures the time to process alarm
changes and is dependent on the AST refresh rate and independent of the sustained
alarm rate, as long as at least one alarm changes per refresh cycle.
2. CPU load is linear based on AST refresh rate. CPU Load formula is based on refresh
rate in table entry lookup.
Formulas:
Windows CPU Load percentage = (Default Windows AST Table 3-10 Lookup
Value for default AST 3.0 second refresh rate) * (default refresh rate/actual refresh
rate)
Solaris CPU Load percentage = (Default Solaris AST Table 3-11 Lookup Value
for default AST 3.0 second refresh rate) * (default refresh rate/actual refresh rate)


Example 1: 1 AM at default 3.0 second refresh rate for 32K database


Windows Workstation: 0.4 * (3.0/3.0) = 0.4%
Solaris Workstation: 0.9 * (3.0/3.0) = 0.9%
Example 2: 5 AMs at 0.5 second refresh rate for 32K database
Windows Workstation: 1.0 * (3.0/0.5) = 6.0%
Solaris Workstation: 4.0 * (3.0/0.5) = 24.0%
Example 3: 25 AMs at default 3.0 second refresh rate for 32K database
Windows Workstation: 3.5 * (3.0/3.0) = 3.5%
Solaris Workstation: 20.0 * (3.0/3.0) = 20.0%

Table 3-12. FoxView Worksheet for Workstation with Single Core Enabled Example

                                                          Value (%),   Value (%),
                                                          Windows      Solaris
Description                                               Workstations Workstations
1) Number of FoxViews only using displays with default 4.0% 10.4%
configuration values at (Windows=2.0%, Solaris=5.2%)
per FoxView
Windows Example: 2 FoxViews = 2 * 2.0 = 4.0%
Solaris Example: 2 FoxViews = 2 * 5.2 = 10.4%
2) Number of FoxViews using any displays with Fast Scan 4.0% 0%
Option at (Windows=4.0%, Solaris=10.0%) per FoxView
Windows Example: 1 FoxView with Fast Scan =
1 * 4.0 = 4.0%
Solaris Example: 0 FoxViews with Fast Scan =
0 * 10.0 = 0%
3) Total CPU Load % = sum of items 1 and 2 above 8.0% 10.4%

NOTE
Actual Total CPU Load percentage is the sum of all FoxView loads.


Table 3-13. AIM*Historian for Workstation with Single Core Enabled Worksheet Example

                                                          Value (%),   Value (%),
                                                          Windows      Solaris
Description                                               Workstations Workstations
1) CPU Load for data collection change rate               3.1%         4.7%
   (Windows=2.0%, Solaris=3.0%) per 1000 points/second
   Refer to “Data Collection Rate Examples”.
2) CPU Load for data reduction                            3.1%         4.7%
   (Windows=1.0%, Solaris=1.5%) per 1000 points, rate >= 15 minutes
   Windows Example: 3100 pts at a rate >= 15 minutes =
   3100/1000 * 1.0% = 3.1%
   Solaris Example: 3100 pts at a rate >= 15 minutes =
   3100/1000 * 1.5% = 4.7%
3) CPU Load for data archiving                            3.1%         2.5%
   (Windows=5%, Solaris=4%) per 5000 points at a default rate
   Windows Example: 3100 pts at the default rate =
   3100/5000 * 5.0% = 3.1%
   Solaris Example: 3100 pts at the default rate =
   3100/5000 * 4.0% = 2.5%
4) Total CPU Load percentage = item number 1.             3.1%         4.7%

NOTE
Items 2 and 3 are encapsulated in the
workstation reserve CPU load because they
are not sustained loads.

Data Collection Rate Examples


1. Compute configured data collection rate:
♦ 1000 points at 0.5 seconds = 2000 points/second
♦ 1000 points at 1.0 seconds = 1000 points/second
♦ 1000 points at 10.0 seconds = 100 points/second
♦ Configured data collection rate = 3100 points/second.
2. Calculate data collection change rate (½ the points changing):
♦ = (configured data collection rate) * (% of points changing)
♦ = (3100 points/second) * (0.5)
♦ = 1550 points/second.
3. Calculate CPU Load for data collection change rate:


♦ CPU Load = (data collection change rate / 1000) * CPU Load for
1000/second
♦ = (1550/1000) * CPU Load % for 1000 pts/second
♦ Windows = 1.55 * 2 = 3.1%
♦ Solaris = 1.55 * 3 = 4.7%.
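The three steps above can be sketched as follows (a sketch of the worksheet arithmetic only):

```python
def collection_change_cpu(points_and_rates, pct_changing, per_1000_load):
    # Step 1: configured rate = sum of (points / period) over all groups
    configured = sum(points / period for points, period in points_and_rates)
    # Step 2: change rate = configured rate * fraction of points changing
    change_rate = configured * pct_changing
    # Step 3: CPU load scales per 1000 changing points/second
    return (change_rate / 1000) * per_1000_load

groups = [(1000, 0.5), (1000, 1.0), (1000, 10.0)]  # 3100 points/second
print(round(collection_change_cpu(groups, 0.5, 2.0), 1))  # Windows: 3.1
print(round(collection_change_cpu(groups, 0.5, 3.0), 1))  # Solaris: 4.7
```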

Control Processors
This section summarizes important information about resource and loading for the various
control processors. For more information on sizing guidelines, refer to the document and workbook
specific to your desired control processor:
♦ Field Control Processor 280 (FCP280) Sizing Guidelines and Excel Workbook
(B0700FY)
♦ FDC280 Sizing Guidelines and Excel Workbook (B0700GS)
♦ Field Control Processor 270 (FCP270) Sizing Guidelines and Excel Workbook
(B0700AV)
♦ Z-Module Control Processor 270 (ZCP270) Sizing Guidelines and Excel Workbook
(B0700AW)

NOTE
In the event of a conflict between the information provided in this manual and the
sizing guidelines for a CP/FDC280, the sizing guidelines for the CP/FDC280 have
precedence.

Maximum Loading Table


Foxboro recommends that you do not exceed any of these maximums (in any phase where applicable).
The control station is considered to be fully loaded with respect to a parameter when its upper
limit is reached. This helps ensure that adequate time remains for functions that are above and
beyond the routine processing load, such as checkpointing, alarm message handling, display call-
up, and so forth.


Table 3-14. Loading Summary

Station Block Field                                              FDC280    FCP280    FCP270    ZCP270
Maximum % for “Fieldbus Scan” (% of BPC)                         N/A       60.0%     60.0%     60.0%
Maximum % for “Control Blocks” (% of BPC)                        70.0%     70.0%     70.0%     70.0%
Maximum % for “Sequence Blocks” (% of BPC)                       70.0%     70.0%     70.0%     70.0%
Maximum % for “Control Blocks” + “Sequence Blocks” (% of BPC)    70.0%     70.0%     70.0%     70.0%
Maximum % for “Total Control Cycle”(1) (% of BPC)                70.0%     90.0%     90.0%     90.0%
Maximum OM points per second that can be scanned                 18,000    18,000    12,000    12,000
  ♦ Actual OM Scan Load percentage varies based on the number of points per OM list, but is
    expected to be roughly 0 to 20+%.
  ♦ OM Scan Load has to be considered together with other loads so that the Station Idle Time
    threshold in this table is not violated.
Minimum percentage for “Station Idle Time”                       30.0%     10.0%     10.0%     10.0%
(percentage of CPU Load)

NOTE
If Station Idle Time is less than 30%, it significantly
affects the ability to generate bursts of alarm messages,
large-scale OM updates (such as 100% value change),
and non-scheduled activity such as checkpoints,
self-hosting updates, and database uploads.

In the absence of these non-scheduled activities, the Station Idle Time can go as low as 10.0%.
1. The "Total Control Cycle" for the FDC280 consists of the sum of the % of BPC loads for “Control
Blocks”, “Sequence Blocks”, and “Foreign Device Data Load” (FDLOAD).
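Assuming figures read from the Station block display, a quick check against two of the Table 3-14 limits might look like this (a sketch, not official tooling; the measured figures are hypothetical):

```python
# Per-station limits taken from Table 3-14
LIMITS = {
    "FDC280": {"control_cycle_max": 70.0, "idle_min": 30.0},
    "FCP280": {"control_cycle_max": 90.0, "idle_min": 10.0},
    "FCP270": {"control_cycle_max": 90.0, "idle_min": 10.0},
    "ZCP270": {"control_cycle_max": 90.0, "idle_min": 10.0},
}

def check_loading(station, control_cycle_pct, idle_pct):
    # True only if both the Total Control Cycle and Station Idle Time
    # figures are within the Table 3-14 thresholds for this station type
    lim = LIMITS[station]
    return (control_cycle_pct <= lim["control_cycle_max"]
            and idle_pct >= lim["idle_min"])

print(check_loading("FCP280", 85.0, 12.0))  # True: within both limits
print(check_loading("FDC280", 85.0, 12.0))  # False: exceeds both limits
```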

Notes on Station Block display:


1. You can check the Control Loading for the last 10 block processing cycles (BPCs) by
selecting “Control Loading” which also displays the number of control overruns.
2. You can check the OM Scanner Loading for the last 12 BPC cycles by selecting “OM
Scanner Loading” which also displays the number of scanner overruns.


Table 3-15. Station Free Memory (Bytes)

Station Block Field                         FDC280   FCP280   FCP270    ZCP270
Minimum Free Memory Available in Station    (1)      (1)      500,000   500,000
1. OM list and control database restrictions help prevent the use of too much memory.
The control database is limited to 22 MB for the FDC280, and 15.75 MB for the
FCP280.

NOTE
Do not load the CP270 so that the “Total Free” memory available is less than the
number of bytes specified in Table 3-15.

NOTICE
POTENTIAL LOSS OF DATA

Verify that the largest continuous area of memory available to
applications (MAXMEM in the Station block) does not go below 14000
bytes for FDC280s, FCP280s, and CP270s; otherwise, errors can occur in
the Control HMI. If MAXMEM goes below this amount, you can either
reboot the control processor or perform an online upgrade operation on
the non-FDC280 control processor.

Failure to follow these instructions can result in data loss.

Table 3-16. Peer-to-Peer Data

Station Block Field FDC280 FCP280 FCP270 ZCP270


Total sink points 11,250 11,250 7500 7500
OM sink connections 30 30 30 30
OM source connections 200 200 100 100


Table 3-17. Resource Table

Hardware     Blocks Per   User Block   Database   OM Scanner    OM Sink    Initialized Station
Resources    Second       Names        Size       Capacity(1)   Lists(2)   Memory(3)
FDC280       16,000       8262(4)      22 MB      18,000        75         24000 KB (varies with image version)
FCP280       16,000       8000         15.75 MB   18,000        75         18700 KB
FCP270       10,000       4000         5000 KB    12,000        50         5700 KB
ZCP270       10,000       4000         5000 KB    12,000        50         5700 KB

1. All source points requested by sink stations are collected into an OM Scanner Table
in the source CP. This “OM Scanner Capacity” specifies the maximum number of
source points that can be in the table. The table is organized in rows of 20 entries
each, and each row can belong to only one OM sink list. Therefore, the
“OM Scanner Capacity” can be reached only if all OM sink lists in the sink stations
are opened with a multiple of 20 variables per list. For example, if a sink controller
opens a sink list of 75 variables, four (4) scanner rows are used, with variable counts
of 20, 20, 20, and 15, leaving five (5) variables unused in the fourth scanner row.
2. This is the maximum number of OM sink lists a CP can open. Multiplying by the
maximum of 150 points per list gives the total sink points presented in
Table 3-16 “Peer-to-Peer Data” on page 53.
3. This is the approximate maximum memory available. The actual value for an
initialized CP varies slightly depending on the specific station software revision and
configuration.
4. The FDC280 supports up to 256 field devices and up to 8000 I/O points. For examples
of valid block count combinations, refer to Field Device Controller 280 (FDC280)
Sizing Guidelines and Excel Workbook (B0700GS).
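The row arithmetic in note 1 can be sketched as a small calculation. This is an illustration of the 20-entries-per-row rule only, not Foxboro software:

```python
import math

ROW_SIZE = 20  # entries per OM Scanner Table row (note 1 above)

def scanner_rows(list_sizes):
    """Return (rows_used, unused_entries) for a set of OM sink lists.

    Each sink list occupies whole rows of 20 entries; a partially
    filled row cannot be shared with another sink list.
    """
    rows = sum(math.ceil(n / ROW_SIZE) for n in list_sizes)
    unused = rows * ROW_SIZE - sum(list_sizes)
    return rows, unused

# Example from note 1: a single sink list of 75 variables
rows, unused = scanner_rows([75])
print(rows, unused)  # 4 rows used (20 + 20 + 20 + 15), 5 entries unused
```

A sink station that opens its lists in multiples of 20 variables leaves no unused entries, which is why the full "OM Scanner Capacity" is reachable only in that case.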

NOTE
For more information on sizing the Foxboro DCS control network, see The Foxboro
DCS Control Network Architecture Guide (B0700AZ).


CNI Sizing
Product Specification and the CPU Idle Time Limits

NOTICE
POTENTIAL DATA LOSS

Do not exceed the specifications stated in the Control Network Interface
Product Specification Sheet (PSS 31H-1CNI). Additionally, we recommend
that the reported CPU idle time be at least 30% during operation.

Failure to follow these instructions can result in data loss.

The values related to the CNI specification and the CPU idle time can be calculated before
installation using the Control Network Interface (CNI) Sizing Guidelines and Excel Workbook
(B0700HL). After installation, you can check them using the FoxView CNI Station Diagnostic
Display.
For more information on CNI sizing, see Control Network Interface (CNI) Sizing Guidelines
and Excel Workbook (B0700HL). For more information on the FoxView CNI Station
Diagnostic Display, see Control Network Interface (CNI) User's Guide (B0700GE).

Inter-Network Traffic
I/A Series software v8.3-v8.8 and Control Core Services v9.0 and later support inter-network
traffic between the control network and the Nodebus network using ATSs. The preferred
method of migration is to replace all Nodebus LIs with ATSs in LI mode at one time. When
using the preferred method, you only need to help ensure that stations that migrate to the
control network and continue to communicate with stations on the Nodebus maintain their
original Nodebus communications limits.
If you perform a gradual migration using an ATS in Extender mode followed by ATSs in LI
mode, you have to size the LI traffic rates. The LI with the ATS in Extender mode can become a
bottleneck as each Nodebus migrates to the control network using an ATS in LI mode. Below is
a description of the gradual migration process, with the sizing calculations needed to help
ensure acceptable inter-network traffic rates. Figure 3-1 depicts a five-node Foxboro DCS
system showing the traffic rates between the various LI modules. For example, Figure 3-1
shows a 75 packet-per-second traffic rate between Node 4 and Node 5.
1. Determine traffic rates for all Nodebus LIs. Refer to Figure 3-1. Refer to “LI Traffic
Rates” on page 61 for information on computing LI traffic rates on the web.
2. Add a connection to the control network by adding an ATS in Extender mode to an LI
(consider using the LI with the lowest traffic rate). The LI will assume an additional
load based on the ATS traffic rate. Refer to Figure 3-2.
3. Determine the traffic rate for the ATS in Extender mode (traffic between the control
network and Nodebus stations). Compute the new traffic rate for the LI with the ATS


in Extender mode. The new traffic rate for the LI with the ATS in Extender mode =
LI rate + ATS rate to Nodebus stations that are not on the Nodebus that has the ATS
in Extender mode. Refer to Figure 3-2. You can optionally measure traffic rates using
the LIPDUS30 shared variable - see Helpful Hint 960.
4. All remaining LIs can be replaced whenever you wish with ATSs in LI mode, as long
as their traffic rates can be added to the LI with the ATS in Extender mode and the LI
does not exceed the maximum recommended sustained traffic rate. Refer to
Figure 3-3. If two or more nodes have high traffic rates between them, migrate the
nodes at the same time. This will not increase the traffic rate through the LI with the
ATS in Extender mode because the traffic between them is routed through the ATSs
in LI mode on the control network. Refer to Figure 3-4.
5. When migrating a node using an ATS in LI mode causes the LI with the ATS in
Extender mode to exceed the maximum recommended sustained traffic rate, you have
to perform a total replacement using ATSs in LI mode (which includes converting the
ATS in Extender mode to LI mode). Refer to Figure 3-5.

NOTE
IP communications cannot transmit across both an ATS and a LAN Interface
station due to filtering implemented within the LI modules. There is an IP address
limit of 64 stations per node. If full IP communication support is needed, the
network migration plan needs to follow the preferred method of replacing all
LAN Interface modules.

Example – Gradual Migration


1. Determine traffic rates for all Nodebus LIs. Refer to Figure 3-1.
♦ LI1 – 100 packets/sec, LI2 – 50 packets/sec, LI3 – 75 packets/sec, LI4 – 100
packets/sec, LI5 – 75 packets/sec
2. Add connection to the control network by adding an ATS in Extender mode to LI
(consider using LI with lowest traffic rate). Refer to Figure 3-2.
♦ ATS in Extender mode is added to LI2, which has the lowest traffic rate.
3. Determine the traffic rate for the ATS in Extender mode (new traffic between stations
on the control network and Nodebus). Compute the new traffic rate for the LI with
the ATS in Extender mode. Compute LI traffic rates for all LIs that have control
network traffic. Refer to Figure 3-2.
♦ ATS in Extender mode traffic rate = 50 packets/sec (between stations on the
control network and Nodebus 2)
♦ LI2 traffic rate = LI2 (Nodebus traffic) + ATS in Extender mode traffic rate =
50 (N1↔N2) + 50 (M↔N3) = 100 packets/second
♦ LI3 traffic rate = LI3 (Nodebus traffic rate) + LI control network traffic to
Nodebus 3 = 50 (N1↔N3) + 25 (N3↔N4) + 50 (M↔N3) = 125 packets/second
4. Migrate Nodebus 1 to the control network using an ATS in LI mode (ATS LI1). This
decreases the Nodebus traffic rate for the LI with the ATS in Extender mode by
transferring all traffic between the migrated Nodebus (Nodebus 1) and the Nodebus
with the ATS in Extender mode (Nodebus 2) to the ATS in Extender mode. However,


it does increase the traffic rate for the LI with the ATS in Extender mode by the
Nodebus traffic rates between the migrated Nodebus (Nodebus 1) and all LIs with no
ATS in Extender mode (N1↔N3, N1↔N4, N1↔N5). Refer to Figure 3-3.
♦ LI2 traffic rate = LI2 - LI1 (N1↔N2) + LI1 (N1↔N3) + LI1 (N1↔N4) +
LI1 (N1↔N5) = 100 - 50 (N1↔N2) + 50 (N1↔N3) + 0 (N1↔N4) +
0 (N1↔N5) = 100 packets/second
♦ ATS in Extender mode traffic rate = ATS in Extender mode + LI1 (Nodebus
traffic) = 50 + 100 = 150 packets/second
♦ ATS LI1 traffic rate = LI1 (Nodebus traffic) = 100 packets/second
5. Migrate Nodebus 4 and Nodebus 5 to the control network using ATSs in LI mode
(ATS LI4 and ATS LI5 respectively). Refer to Figure 3-4. Both nodes are migrated at
the same time because they have significant traffic between them, and you do not
want to impact LI2 with the ATS in Extender mode.
♦ LI2 traffic rate = LI2 + LI4 + LI5 = 100 + 25 (N3↔N4) + 0 = 125 packets/second.

NOTE
N4↔N5 traffic is routed through the control network with no impact on LI2.

♦ ATS in Extender mode traffic rate = ATS in Extender mode + LI4 (N3↔N4) =
150 + 25 = 175 packets/second
♦ ATS LI4 traffic rate = LI4 (Nodebus traffic) = 100 packets/second
♦ ATS LI5 traffic rate = LI5 (Nodebus traffic) = 75 packets/second
6. Migrate Nodebus 3 to the control network and change the ATS connected to
Nodebus 2 from Extender mode to LI mode (ATS LI2). Refer to Figure 3-5.
♦ ATS LI1 traffic rate = original LI1 Nodebus traffic rate = 100 packets/second
♦ ATS LI2 traffic rate = original LI2 Nodebus traffic rate = 50 packets/second
♦ ATS LI3 traffic rate = original LI3 Nodebus traffic rate + new control network to
Nodebus 3 traffic rate = 75 + 50 = 125 packets/second
♦ ATS LI4 traffic rate = original LI4 Nodebus traffic rate = 100 packets/second
♦ ATS LI5 traffic rate = original LI5 Nodebus traffic rate = 75 packets/second
The migration from all LI modules to all ATS modules is now complete.
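The per-LI totals used in this example follow directly from summing, for each node, the pairwise rates shown in Figure 3-1. A minimal sketch of that bookkeeping (rates transcribed from the figure; LI1-LI5 serve nodes N1-N5):

```python
# Pairwise sustained Nodebus traffic rates (packets/second) from Figure 3-1.
flows = {
    ("N1", "N2"): 50,
    ("N1", "N3"): 50,
    ("N3", "N4"): 25,
    ("N4", "N5"): 75,
}

def li_total(node, flows):
    """Total traffic through a node's LI: the sum of every flow touching that node."""
    return sum(rate for pair, rate in flows.items() if node in pair)

totals = {n: li_total(n, flows) for n in ("N1", "N2", "N3", "N4", "N5")}
print(totals)  # {'N1': 100, 'N2': 50, 'N3': 75, 'N4': 100, 'N5': 75}
```

These totals match the "Total" row of Figure 3-1 (LI1 = 100, LI2 = 50, LI3 = 75, LI4 = 100, LI5 = 75). The same summation, with the `("M", "N3")` control network flow added, reproduces the later steps of the example.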


Carrierband LAN with LI1-LI5 connecting Nodebus 1 through Nodebus 5 (stations N1-N5).
Sustained traffic rates, in packets/second:

LI1: 50 (N1↔N2) + 50 (N1↔N3) = 100 Total
LI2: 50 (N1↔N2) = 50 Total
LI3: 50 (N1↔N3) + 25 (N3↔N4) = 75 Total
LI4: 75 (N4↔N5) + 25 (N3↔N4) = 100 Total
LI5: 75 (N4↔N5) = 75 Total

Figure 3-1. Original Nodebus Traffic Rates

Carrierband LAN with LI1-LI5; an ATS in Extender mode connects Nodebus 2 to the
Foxboro DCS Control Network (M). Traffic rates, in packets/second:

LI1: 50 (N1↔N2) + 50 (N1↔N3) = 100 Total
LI2: 50 (N1↔N2) + 50 (M↔N3) = 100 Total
LI3: 50 (N1↔N3) + 25 (N3↔N4) + 50 (M↔N3) = 125 Total
LI4: 75 (N4↔N5) + 25 (N3↔N4) = 100 Total
LI5: 75 (N4↔N5) = 75 Total
ATS (Extender mode): 50 (M↔N3) = 50 Total

Figure 3-2. Adding an ATS in Extender Mode


Carrierband LAN with LI2-LI5; ATS LI1 (Nodebus 1) and the ATS in Extender mode
(Nodebus 2) connect to the Foxboro DCS Control Network (M). Traffic rates, in
packets/second:

LI2: 50 (M↔N3) + 50 (N1↔N3) = 100 Total
LI3: 50 (N1↔N3) + 25 (N3↔N4) + 50 (M↔N3) = 125 Total
LI4: 75 (N4↔N5) + 25 (N3↔N4) = 100 Total
LI5: 75 (N4↔N5) = 75 Total
ATS LI1: 50 (N1↔N2) + 50 (N1↔N3) = 100 Total
ATS (Extender mode): 50 (N1↔N2) + 50 (M↔N3) + 50 (N1↔N3) = 150 Total

Figure 3-3. Migrate Nodebus 1


Carrierband LAN with LI2 and LI3; ATS LI1, ATS LI4, ATS LI5, and the ATS in
Extender mode (Nodebus 2) connect to the Foxboro DCS Control Network (M).
Traffic rates, in packets/second:

LI2: 50 (M↔N3) + 50 (N1↔N3) + 25 (N3↔N4) = 125 Total
LI3: 50 (M↔N3) + 50 (N1↔N3) + 25 (N3↔N4) = 125 Total
ATS LI1: 50 (N1↔N2) + 50 (N1↔N3) = 100 Total
ATS (Extender mode): 50 (N1↔N2) + 50 (M↔N3) + 50 (N1↔N3) + 25 (N3↔N4) = 175 Total
ATS LI4: 75 (N4↔N5) + 25 (N3↔N4) = 100 Total
ATS LI5: 75 (N4↔N5) = 75 Total

Figure 3-4. Migrate Nodebus 4 and Nodebus 5

All five Nodebuses (N1-N5) connect to the Foxboro DCS Control Network (M)
through ATS LI1-LI5. Traffic rates, in packets/second:

ATS LI1: 50 (N1↔N2) + 50 (N1↔N3) = 100 Total
ATS LI2: 50 (N1↔N2) = 50 Total
ATS LI3: 50 (N1↔N3) + 25 (N3↔N4) + 50 (M↔N3) = 125 Total
ATS LI4: 75 (N4↔N5) + 25 (N3↔N4) = 100 Total
ATS LI5: 75 (N4↔N5) = 75 Total

Figure 3-5. Final Migration


LI Traffic Rates
The procedure for computing LI traffic rates using the web is:
1. Go to Global Customer Support at:
https://pasupport.schneider-electric.com
2. Log in.
3. Select Support > Foxboro > Trouble Shooting Guides.
4. Select Tokenbus/Nodebus Troubleshooting Guide.
5. Select Next until the LAN Traffic Rates screen appears.
6. View Helpful Hint 960.

Connection Sizing for System Manager Server and Client
1. The maximum number of servers that can be connected per node is 2.
2. The maximum number of clients that can be connected per server is 35.
If you try to connect more than 35 System Manager clients to a server, you will receive a system
message.


Appendix A. Object Manager
Multicast Optimization (OMMO)
The Object Manager Multicast Optimization (OMMO) feature, when enabled, reduces the
number of multicast network communications initiated by the Object Manager. This reduces
the network processing overhead on the control network and I/A Series Nodebus control
network hardware (such as a switch, Address Translation Station (ATS), and so forth) and the
OM processing overhead on the I/A Series stations (Application Workstations, Control
Processors, and so forth), which increases the efficiency and robustness of I/A Series network
communications.
This feature is available on Windows-based workstations with I/A Series software v8.6 or later
and Foxboro DCS Control Core Services v9.0 or later, and is disabled by default.
This feature is implemented via the following methods:
♦ Auto-importing of OM objects and their network addresses.
All object request operations initiated by the Object Manager, such as global_find,
get/set object value or import OM list operations, add the address indices and
network addresses for each OM object to the Object Manager’s import table and
object manager network address tables automatically, as they are found. (When
OMMO is not enabled, this import operation is performed by user request only.)
This reduces the need for repeated OM multicast requests for the locations of
unknown OM objects. An Application Workstation typically sends an increased
number of multicast request operations at start-up. Included with the OMMO feature
is the capability of saving the contents of the import table locally, so that when the
station is restarted, the import table is automatically repopulated. This
significantly reduces the number of network messages during start-up.
With the OMMO feature, an unoptimized OM list checks the import table before
broadcasting its points. If any point in the list is in the import table, it is
connected via unicast messages.
♦ Unicast messages used in place of multicast messages for OM list requests.
OM lists are opened via unicast message to each relevant station wherever possible.
Instead of the network address information being provided by the caller station, the
information is retrieved from the import table which enables the variables to be
opened from each station via unicast messages. Points that are not found in the
import table are broadcasted for connection, and added to the import table, if
OMMO is enabled.
Unicast messages also reduce traffic on the networks. Address Translation Stations
(ATSes) are used to throttle multicast traffic from the control network to the
Nodebus to help prevent legacy Control Processors on the Nodebus from being
overwhelmed by increased messaging. During a severe communications spike, ATSes
may drop an excessive number of multicast messages. However, ATSes route
unicast messages to the Nodebus at full speed without limit. By using unicast
messages in place of multicast messages, fewer messages are dropped by the
ATSes, and the communications overhead on the legacy Control Processors is
lowered.

63
B0700AX – Rev W Appendix A. Object Manager Multicast Optimization (OMMO)

While the list of objects in the import table is typically reliable, it is possible for a workstation to
miss address changes for objects in this table (for example, if a workstation is shut down during a
broadcast message informing all stations that an object has moved to a different control station).
When OMMO is enabled, and a global_find/get/set operation is requested for an OM
object, the Object Manager first checks for the object’s address in the workstation’s import table,
and then attempts to perform the operation with a unicast message. If the object is not at its
recorded address, the workstation attempts to find the object’s new location via standard multicast
messaging. If the object is found to have moved, the import table is updated accordingly.
The OMMO feature can be enabled on specific I/A Series workstations within the I/A Series
system, as desired. These workstations interoperate without any issues with workstations
running with this feature disabled.
For maximum efficiency, this feature is designed to be implemented in I/A Series systems which
have proceeded through all significant configuration steps (Control Processor and I/O
deployment and configuration, and so forth) and are considered stable. Thus, this feature
should be enabled only after an I/A Series system has undergone its major cycles of
configuration and deployment, and is ready for operation.

Object Manager Multicast Optimization Configuration Options


The following parameters have been added to the loadable.cfg file for the Object Manager
Multicast Optimization (OMMO) feature. This file is located under /usr/fox/exten/config,
and includes the following OMMO configuration options:
♦ OM_MULTICAST_OPTIMIZATION - This determines whether the OMMO feature is
enabled or disabled when the workstation boots up:
♦ (Default) OMMO disabled: OM_MULTICAST_OPTIMIZATION = 0
♦ OMMO enabled: OM_MULTICAST_OPTIMIZATION = 1
♦ IMP_SAVE_PERIOD - The option is available for a workstation to save its import table
and address table to local files, which the workstation will use to populate its OM
database when rebooting. This helps to reduce network load as the last known
locations of OM objects will be loaded from this file instead of having to be requested
from stations on the network via multicast messages. This parameter determines how
often (in minutes) a workstation saves its import table and address table to local files:
♦ (Default) 30 (in minutes).
♦ Range: 0 to 1440 (1440 minutes = one day).
To disable saving to file, set to: 0
If the value is configured out of range, the default value of 30 (minutes) is applied. No
change is made to the configuration file.
The files created by this option are as follows:
♦ om_impdb.dat, for the import table, located under /usr/fox/exten/.
♦ om_adrdb.dat, for the address table, located under /usr/fox/exten/.
The contents of the import and address tables can be saved to files anytime by
executing the following command from a command window in /usr/fox/exten/:
om_impdb.exe -save now
Executing this command will save the contents of the import table and the address
table to the files, om_impdb.dat and om_adrdb.dat respectively in


/usr/fox/exten/. Then, the next time the workstation is rebooted with OMMO
enabled, the OM import and address tables are automatically restored with these
saved objects and addresses.
These files are saved only if there have been changes to the import table and address
table in the time since the last save. These files are saved in a binary format. Do NOT
attempt to edit these files manually.
If, during start-up, the Object Manager finds that these files’ contents differ from
what it knows about its database (for example, a user changed the size of the database
to be larger or smaller than the database was when these files were created), the Object
Manager deletes these files and repopulates the database and the tables with standard
multicast messages. The Object Manager does not attempt to salvage any part of these
files.
This parameter can be set when the OMMO feature is not enabled, but it will have no
effect on the operation of the Object Manager.
♦ OMMO_MULTICAST_DELAY - When the OMMO feature is enabled, this sets the delay
time (in milliseconds) between the opening of OM lists that are sent via multicast
message and those that are sent to stations on the Nodebus.
♦ (Default) 200 (in milliseconds).
♦ Range: 50 to 12000 (in milliseconds).

NOTE
Consult with Global Customer Support before changing the
OMMO_MULTICAST_DELAY value from the default.

♦ OMMO_UNICAST_DELAY - When the OMMO feature is enabled, this sets the delay
time (in milliseconds) between the opening of OM lists that are sent via unicast
message to stations on the control network.
♦ (Default) 100 (in milliseconds).
♦ Range: 0 to 12000 (in milliseconds).

NOTE
Consult with Global Customer Support before changing the OMMO_UNICAST_DELAY
value from the default.

When the OMMO feature is disabled, the legacy default delay time of 200 milliseconds between
the opening of OM lists is used.
It is not recommended to change loadable.cfg directly, as a Day 1 installation would regenerate
the file and override your changes. To change the value of these parameters, modify the file
/etc/fox/opsys_usr.cfg, and then, in a command window, execute the command:
[DRIVE]:\> \usr\local\verifier -V c
This command should return no detected errors.
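As an illustration, overrides that enable OMMO and save the import table every 60 minutes might look like the following. This is a sketch only: the parameter names are taken from the list above, but the assignment syntax and file layout should be verified against your installation before use.

```
OM_MULTICAST_OPTIMIZATION = 1
IMP_SAVE_PERIOD = 60
OMMO_MULTICAST_DELAY = 200
OMMO_UNICAST_DELAY = 100
```

After editing, run the verifier command described above and confirm that it reports no detected errors.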

Import Table Behavior with OMMO Feature


The size of the import table can be increased or decreased manually by setting the
OM_NUM_IMPORT_VARS variable (maximum of 10000). This should be adjusted based on the

number of OM objects that will have to be created for each of the unique objects required to be
monitored by the workstation keeping the import table. Since the OMMO feature significantly
reduces network load as the number of OM objects increases, take care to size this table
appropriately.
If the number of OM objects for a workstation exceeds the maximum size of the import table
(that is, the import table is filled) and OMMO feature is enabled, the Object Manager defaults to
its former multicast behavior for handling OM objects not in the import table. Objects are
removed from the import table if requested to be unimported, or if the Object Manager cannot
find them on the network.
When OMMO is enabled, the import table can be cleared of all entries by typing the following
command from a command window in /usr/fox/exten/:
om_impdb.exe -reset
This command also automatically removes the associated addresses from the OM address table
and removes the two files, om_impdb.dat and om_adrdb.dat. After the import table has been
cleared, it is repopulated again when an OM object is requested via the OM calls
global_find, get, set, or omopen.
Due to the size restrictions on the import table (a default maximum of 1000 variables, assigned
in the /usr/fox/exten/config/usr_default.cfg configuration file), OM objects are not
imported to the import table, even if the OMMO feature is enabled, under the following
conditions:
♦ A Compound is not imported automatically when a message announcing its creation
is broadcast by the source station. It is not uncommon to have thousands of
compounds in an I/A Series network.
♦ Letterbugs are not imported automatically.

Compound with Invalid Block or Parameter Behavior with OMMO Feature
When Object Manager Multicast Optimization functionality is enabled, accessing invalid
(non-existing or misspelled) blocks or parameters on a compound causes that compound to be
unimported. This causes a temporary loss of multicast optimization functionality (multicast
messages are sent instead of unicast messages) for the next object manager call (get, set,
global find, or omopen) that references that compound. Multicast optimization functionality
resumes for the compound when it is re-imported (automatically upon the first reference to a
valid block or parameter for the compound).
For example:
♦ If a FoxView display (a block detail display or customer application display) is
activated and includes a connection to a non-existing block or parameter that is
contained in a compound listed in the import table, then the compound name will be
removed from the table and that compound becomes unoptimized.
♦ If a get, set, global find, or omopen call is made from an application program or
command line for a non-existing (or misspelled) block or parameter, then the
associated compound name will be removed from the table and that compound
becomes unoptimized.

Appendix B. Upgrading SMONs on
I/A Series Software v8.4.x
and Earlier to v8.5-v8.8 or Control
Core Services v9.0 or Later
This appendix describes the Quick Fixes needed to upgrade System Monitors (SMONs) on
I/A Series software v8.4.x and earlier to I/A Series software v8.5-v8.8 or Control Core Services
v9.0 or later.

Overview
System Monitor (SMON) support on workstations with I/A Series software v8.4.x and earlier,
which supports up to 30 SMONs, can be upgraded to support up to 128 SMONs on Windows
stations with I/A Series software v8.5-v8.8 or Control Core Services v9.0 or later, and up to 508
switches on the Foxboro DCS Control Network. This may be a combination of domains from the
control network and/or the Nodebus. No other quantities or limits for system sizing are changed
as part of this capability. All other system sizing guidelines and constraints still apply.
If adding this capability to a system that is interconnected to the Nodebus, it is necessary to apply
corrections to the SMDH on the Nodebus side.
Configuration of 128 domains is done through the version of SysDef v2.9 distributed as part of
Quick Fix (QF) 1009574 or through the InFusion Engineering Environment (IEE) v1.2 (or
later).

System Definition v2.9


SysDef v2.9 or later provides the support for these SMON upgrades. SysDef v2.9 is provided in
QF 1009574 or through the InFusion Engineering Environment (IEE) v1.2 (or later).

NOTE
If the system is a combination of both the control network and Nodebus, it is neces-
sary to modify the smonlst.cfg files of the Nodebus stations to control which
domains are shown to the operator. QF 1009574 contains instructions and scripts
to help with the modifications.

Quick Fixes Needed


Several applications have to be upgraded to support the new SMON upgrade. The Quick Fixes
listed in Table B-1 are needed to upgrade each of these applications.
Quick Fixes can be downloaded from Global Customer Support:
https://pasupport.schneider-electric.com


Table B-1. Quick Fixes for Upgrading SMON from I/A Series Software v8.4 or Earlier

Application                          Baseline Version             Quick Fix
SMON                                 I/A Series software v8.3     QF 1009879 V1.2
SMDH (the control network)           I/A Series software v8.4     QF 1011794
SMDH (Nodebus)                       I/A Series software v6.5     QF 1011797
                                     I/A Series software v7.1     QF 1011798
System Definition (SysDef)           2.9                          QF 1009574 Bld.3
Alarm Management Subsystem (AMS)     I/A Series software v8.2     QF 1009910
HPSTK (Optional component)           -                            QF 1011666

Installation Sequence
Perform the steps in the given sequence:
1. Read all QF memos which come with the Quick Fixes listed in Table B-1.
2. If necessary, determine the layout of the Nodebus domains.
3. Using System Definition layout, create the Commit media for the system configura-
tion of all System Monitor domains, and SMON/SMDH configuration. For informa-
tion on SysDef, refer to System Definition: A Step-By-Step Procedure (B0193WQ).
4. Install the Commit media on all Windows stations with your preferred control
configurator:
♦ Foxboro DCS Control Editors - see Block Configurator User's Guide (B0750AH)
and Hardware Configuration User's Guide (B0750BB)
♦ IACC - I/A Series System Configuration Component (IACC) User's Guide
(B0400BP) for Windows XP or Windows Server 2003, or earlier Windows oper-
ating systems, or I/A Series Configuration Component (IACC) User's Guide
(B0700FE) for Windows 7 or Windows Server 2008 R2 Standard or later.
♦ ICC - Integrated Control Configurator (B0193AV).
5. If necessary, modify the smonlist.cfg file.
6. If necessary, install AMS QF 1009910.
7. Install SMON QF 1009879 V1.2 on Windows stations on the control network.
8. Install the appropriate SMDH Quick Fixes (see Table B-1) on Windows stations in the
system being modified.


Known Detected Issues/Workarounds


The known detected issues with this upgrade and their workarounds are:
1. Nodebus Domain Display – This display may appear “grey”. Operators can double
click through the grey domain. The underlying station status display is available, and
change actions may be initiated.
Operators may also initiate Equipment Change Actions from the control network to
the Nodebus, assuming the workstations are properly configured to do so.
2. Nodebus NETWORK Display – This display may present one or more LAN stations
as grey. The underlying station status display is not available, and Equipment Change
Actions will not be initiated.
Figure B-1 to Figure B-4 provide examples of these two issues. They were
obtained from a workstation with Windows XP and I/A Series software v7.1; however,
the results are equivalent on a Solaris station with I/A Series software v6.1.x.

Figure B-1. New Nodebus Test - System Monitors


The HOME display shows the first screen in SMDH with some grey SMONs.

Figure B-2. HOME Display

The SMON display shows the result of selecting a “grey” SMON.

NOTE
Verify that all picks are active.


Figure B-3. “Greyed” SMON Selected

The NETWORK display shows the result of selecting the NETWORK button.

NOTE
Some LANs are grey.


Figure B-4. Selecting NETWORK Button

The LAN display shows the results of selecting the grey LAN.

NOTE
No picks are available.

Appendix C. Site Planning
Worksheets
This appendix provides a series of blank worksheets for you to fill in when performing your site
planning of the Foxboro DCS Control Network.
Table C-1 to Table C-4 provide worksheets for workstations on the control network.
An example of this Table C-1 worksheet is provided in Table 3-8 “Workstation with Single CPU
Core Summary Worksheet Example” on page 46.

Table C-1. Workstation Summary Worksheet

Description                              Value (%), Windows Workstations    Value (%), Solaris Workstations
1) Base Control Core Services/I/A Series CPU Load
(Windows=1.0, Solaris=3.0)
2) Reserved CPU Load:
Windows (single CPU core)=40.0
Windows (multiple CPU core enabled)=25.0
Solaris=40.0
3) Alarm Manager

4) FoxView

5) AIM*Historian

6) Other Applications1 (for example, TDR, Application Object Services, and so forth)

7) Total CPU Load % = Items 1 + 2 + ((Sum of Items 3-6) * CPU Factor)
1. Refer to the reference document specific to the application.
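As a sketch of the item 7 arithmetic, the Table C-1 total for a Windows workstation with a single CPU core can be computed as follows. The application load figures below are hypothetical placeholders, and the CPU Factor (the platform scaling factor applied to the application loads) is assumed to be 1.0 here:

```python
def workstation_cpu_load(base, reserved, app_loads, cpu_factor=1.0):
    """Item 7 of Table C-1: Items 1 + 2 + ((sum of Items 3-6) * CPU Factor)."""
    return base + reserved + sum(app_loads) * cpu_factor

# Hypothetical Windows single-core example: Alarm Manager 4.5%,
# FoxView 8.0%, AIM*Historian 8.7%, other applications 2.0%
total = workstation_cpu_load(base=1.0, reserved=40.0,
                             app_loads=[4.5, 8.0, 8.7, 2.0])
print(total)  # 64.2 (% total CPU load)
```

Items 3 through 6 come from the per-application worksheets that follow (Table C-2 to Table C-4).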


An example of this Table C-2 worksheet is provided in Table 3-9 “Alarm Manager for Worksta-
tion with Single CPU Core Worksheet Example” on page 47.

Table C-2. Alarm Manager Worksheet

Description                              Value (%), Windows Workstations    Value (%), Solaris Workstations
1) Number of alarm messages per second from Aprint
Services for all CPs
Example: 100 alarms/second
2) Workstation CPU load % for every 100 alarms/second:
Windows Formula = 2.5% * (number of alarms / 100)
Solaris Formula = 5.0% * ((number of alarms / 100) + 1)
Windows Example: 2.5% * (100/100) = 2.5%
Solaris Example: 5.0% * ((100/100)+1) = 10.0%
3) Base load for Number of Alarm Managers
Windows Formula = Default Windows AST Table 3-10 Lookup *
(default refresh rate / actual refresh rate)
Solaris Formula = Default Solaris AST Table 3-11 Lookup *
(default refresh rate / actual refresh rate)
Windows Example, 5 AMs at default (3.0) AST refresh rate
32K database: 1 * (3.0/3.0) = 1.0%
Solaris Example, 5 AMs at default (3.0) AST refresh rate
32K database: 4 * (3.0/3.0) = 4.0%
4) CPU Load for Matching and Filtering (per Alarm Manager)
CPU Load % = (Matching and Filtering Load Coefficient
[Windows=0.2%, Solaris=0.5%]) * (Number of Matches/Filters) *
(Number of Alarm Managers)
Windows Example: 5 matches for 1 Alarm Manager
CPU Load = 0.2% * 5 * 1 = 1.0%
Solaris Example: 5 matches for 1 Alarm Manager
CPU Load = 0.5% * 5 * 1 = 2.5%
5) Total CPU Load % = sum of items 2, 3, and 4 above
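The Windows examples embedded in the table can be combined into one calculation, sketched below. The base_lookup value of 1.0% corresponds to the Table 3-10 example for 5 Alarm Managers with a 32K database; actual lookup values depend on your configuration:

```python
def alarm_manager_load_windows(alarms_per_sec, base_lookup,
                               default_refresh=3.0, actual_refresh=3.0,
                               matches=0, num_ams=1):
    """Items 2-4 of Table C-2 for a Windows workstation, summed (item 5)."""
    item2 = 2.5 * (alarms_per_sec / 100)                      # alarm message load
    item3 = base_lookup * (default_refresh / actual_refresh)  # AM base load
    item4 = 0.2 * matches * num_ams                           # match/filter load
    return item2 + item3 + item4

# Worked example from the table: 100 alarms/second, base lookup of 1.0%,
# and 5 matches for 1 Alarm Manager
print(alarm_manager_load_windows(100, base_lookup=1.0, matches=5, num_ams=1))
# 4.5 (% CPU load)
```

For a Solaris workstation, substitute the Solaris coefficients given in the table (5.0% per item 2 with its "+1" term, the Table 3-11 lookup, and 0.5% per match/filter).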


An example of this Table C-3 worksheet is provided in Table 3-12 “FoxView Worksheet for Workstation with Single Core Enabled Example” on page 49.

Table C-3. FoxView Worksheet

Description (enter a Value (%) for Windows Workstations and a Value (%) for Solaris Workstations for each item):

1) Number of FoxViews only using displays with default configuration values, at (Windows=2.0%, Solaris=5.2%) per FoxView
   Windows Example: 2 FoxViews = 2 * 2.0 = 4.0%
   Solaris Example: 2 FoxViews = 2 * 5.2 = 10.4%
2) Number of FoxViews using any displays with the Fast Scan option, at (Windows=4.0%, Solaris=10.0%) per FoxView
   Windows Example: 1 FoxView with Fast Scan = 1 * 4.0 = 4.0%
   Solaris Example: 0 FoxViews with Fast Scan = 0 * 10.0 = 0%
3) Total CPU Load % = sum of items 1 and 2 above
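The two FoxView terms combine by simple multiplication and addition. A minimal sketch using the Windows coefficients (the function name is invented for illustration):

```python
# Worksheet C-3 arithmetic (Windows coefficients): 2.0% per FoxView
# using only default-configuration displays, 4.0% per FoxView using
# any display with the Fast Scan option.

def foxview_load(default_foxviews, fast_scan_foxviews):
    return 2.0 * default_foxviews + 4.0 * fast_scan_foxviews

# Worksheet example: 2 default FoxViews plus 1 with Fast Scan.
print(foxview_load(2, 1))  # 8.0
```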


An example of this Table C-4 worksheet is provided in Table 3-13 “AIM*Historian for Workstation with Single Core Enabled Worksheet Example” on page 50.

Table C-4. AIM*Historian Worksheet

Description (enter a Value (%) for Windows Workstations and a Value (%) for Solaris Workstations for each item):

1) CPU Load for data collection change rate (Windows=2.0%, Solaris=3.0%) per 1000 points/second
   Refer to “Data Collection Rate Example” below.
2) CPU Load for data reduction (Windows=1.0%, Solaris=1.5%) per 1000 points, rate >= 15 minutes
   Windows Example: 3100 pts at a rate >= 15 minutes = 3100/1000 * 1.0% = 3.1%
   Solaris Example: 3100 pts at a rate >= 15 minutes = 3100/1000 * 1.5% = 4.7%
3) CPU Load for data archiving (Windows=5%, Solaris=4%) per 5000 points at a default rate
   Windows Example: 3100 pts at a rate >= 15 minutes = 3100/5000 * 5.0% = 3.1%
   Solaris Example: 3100 pts at a rate >= 15 minutes = 3100/5000 * 4.0% = 2.5%
4) Total CPU Load % = item 1.

NOTE
Items 2 and 3 are encapsulated in the workstation reserved CPU load because they are not sustained loads.
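The three rate formulas can be sketched with the Windows coefficients as follows (helper names are invented for illustration). Per the note above, only the collection term counts toward the sustained total; the reduction and archiving terms are covered by the reserved CPU load.

```python
# Worksheet C-4 arithmetic (Windows coefficients).

def collection_load(points_per_second):
    """Item 1: 2.0% per 1000 points/second of collection change rate."""
    return 2.0 * (points_per_second / 1000.0)

def reduction_load(points):
    """Item 2: 1.0% per 1000 points at a rate >= 15 minutes."""
    return 1.0 * (points / 1000.0)

def archive_load(points):
    """Item 3: 5.0% per 5000 points at the default rate."""
    return 5.0 * (points / 5000.0)

# Worksheet examples for 3100 points:
print(round(reduction_load(3100), 1))  # 3.1
print(round(archive_load(3100), 1))    # 3.1
```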

Glossary

AIM*API - AIM* Product Line Application Programming Interface
AIM*Historian - AIM*API application that collects Control Core Services Objects
AMS - Alarm Management System
AO API - The Application Object API that is part of the OM and is used by AOS to access (for example, create, locate) AO objects.
AO Objects - The hierarchically named objects created and managed by either the Application Object Services (AOS) or by the AO API on an OM station. They are similar to CIO compounds, blocks, and parameters but provide increased flexibility. You define the data parameters, attributes, and so forth. AO objects take the form application:object.attribute.
API - Application Programming Interface
AST - Alarm Server Task
ATS - Address Translation Station
CBL - Carrierband LAN
CIO - Control & Input/Output
CIO Objects - The hierarchically named process control and input/output objects created through the control configurator and managed by the control software. CIO objects take the form compound:block.parm or compound.parm.
CNI - Control Network Interface
Control Core Services - See “Foxboro DCS Control Core Services” on page 77.
Control Core Services Objects - The set of AO, CIO, and OM objects on Foxboro DCS systems for which the OM provides applications with OM Services.
CP - Control Processor. The control processor performs any mix of integrated first-level automation functions such as continuous, sequential, or discrete logic functions.
FCP270 - Field Control Processor 270
FCP280 - Field Control Processor 280
FDC280 - Field Device Controller 280
FDSI - Field Device System Integrator
Foxboro DCS Control Core Services - Core software environment, formerly known as “I/A Series (Intelligent Automation Series) software”. A workstation which runs this software is known as a “Foxboro DCS Control Core Services workstation”.
Foxboro DCS Control Editors - Formerly known as “FCS Configuration Tools”, “InFusion Engineering Environment”, or “IEE”, these are the Control software engineering and configuration tools built on the ArchestrA Integrated Development Environment (IDE).
Foxboro DCS Control Network - Formerly known as “The Mesh control network”, a network of Foxboro-qualified switches which enables communications between workstations, control processors, and other similar stations. Subsequent references to this network use “the control network”.
Foxboro DCS Control Software - Formerly known as “Foxboro Control Software (FCS)” and “InFusion”, a suite of software built on the ArchestrA Integrated Development Environment (IDE) to operate with the Foxboro DCS Control Core Services.
Foxboro DCS Process Automation System - An overall term used to refer to a system which may include either, or both, Foxboro DCS Control Software and Foxboro DCS Control Core Services.
I/O - Input/Output
IPC - Inter-Process Communications: a proprietary Foxboro communications layer for applications.
IPC Connection - When two applications in different stations need a stable and consistent connection between them, an IPC connection is formed. The number of IPC connections is fixed based on station type except on workstations, where it is an OS configurable parameter. For change-driven data access through OM open lists, the OM uses one IPC connection on each station (sink and source) regardless of how many applications open lists on the sink station.
LI - LAN Interface
Multicast - An IPC communication mechanism that sends messages to a group of destinations. Since the OM resides in every Foxboro station, multicast sends messages to every station. Therefore, the terms broadcast and multicast are used interchangeably in this document.
Multicore - The multiple CPU core feature, allowing current-generation workstations and servers to run Foxboro software with multiple CPU cores.
Nucleus Plus - An embedded real-time operating system that is used on the FDC280, FCP280, FCP270, ZCP270, and Address Translation Station (ATS) control stations.
OM - Object Manager: a proprietary Foxboro OS extension that supports data access to Foxboro DCS objects.
OM API - The Object Manager API that provides OM Services.
OM List - A set of points for which an application wants to receive change-driven data access. These data points can consist of CIO objects, AO objects, and OM objects that can reside locally or in remote stations. OM lists can be opened by user applications using AIM*API or by Foxboro applications using the OM API. When an operator on a workstation brings up a new display, the connected data points on this display are requested from the station containing these points through an OM list. When the AIM*Historian asks for data collection points, it also uses an OM list. When a CP block has peer-to-peer block connections, it uses an OM list. While an OM list is open, it exists in the station that has requested the data (sink side) and a subset of the list exists in the station that contains the remote data (source side).
OMMO - Object Manager Multicast Optimization: a feature which reduces the network processing overhead on the Foxboro DCS Control Network and I/A Series Nodebus, as well as the OM processing overhead on the Foxboro stations (Application Workstations, Control Processors, etc.). When enabled, OMMO reduces the number of multicast network communications initiated by the Object Manager.
OM Objects - The flat named shared objects created and managed by OM Services, including shared objects of the types: variable, alias, process, device, letterbug, and socket.
OM Scanner - An OM process that monitors the database of a station and sends data on an exception basis (change-driven basis) to other stations that have requested the data. Examples of stations that request change-driven data are CPs (for peer-to-peer connections) and workstations (for displays, historians, and other applications). The OM scanner task always sends the change-driven data to the OM server task of the station that requested the data through an OM list.
OM Server - An OM process that services all OM message requests, including change-driven data updates, get/set requests, object location requests, etc.
OM Services - The Object Manager services used by applications for creating and deleting the OM objects and for locating and accessing the OM objects, the AO objects, and the CIO objects.
Peer-to-Peer Connection - The control block mechanism that uses OM lists to refresh its block inputs with data from a remote station. That data can be from CIO, OM, or AO objects. For many control strategies, peer-to-peer connections are between CIO objects. The block that requests data is referred to as the sink of the block connection, and the block that has the requested remote data is referred to as the source of the block connection. A block connection is normally local to another block that exists in the same CP. However, the full path name defined for a block parameter may be to a CIO object that is in another CP. This remote type of connection is referred to as a peer-to-peer block connection.
SMDH - System Management Display Handler (Graphical User Interface for Systems Management)
The control network - See “Foxboro DCS Control Network” on page 78.
The Control Software - See “Foxboro DCS Control Software” on page 78.
Unicast - An IPC communication mechanism that sends messages to a single destination.
Workstations - Stations that connect to bulk storage devices and optionally to information networks to allow bi-directional information flow. These processors perform computation-intensive functions as well as process file requests from tasks within themselves or from other stations. They also interface to a CRT and the input devices associated with it. These may be alphanumeric keyboards, mice, trackballs, touchscreens, or up to two modular keyboards. Each processor manages the information on its CRT and exchanges data with other processor modules.
ZCP270 - Z-Format Control Processor 270

Index
A
Address Translation Station 63
Address Translation Station, see ATS
AIM*Historian software 17, 77
  CPU load 17, 18
  disk load time 17
  OM scan loading 19
  OM scanner connections 18, 19
  OM server connections 17
  OM_NUM_CONNECTIONS 18
  OM_NUM_LOCAL_OPEN_LISTS 19
  RTP file size (ARCHSIZE) 19
  RTTIME 19
  worksheet 44, 50, 76
  workstation summary worksheet 42, 46, 73
Alarm Manager software 16, 42, 46, 73
  worksheet 43, 47, 74
Alarming in control stations 21
Alarming software 16
AO API 77
AO objects 77
AOS software 19
  CPU load 19
  number of objects 19
  workstation summary worksheet 42, 46, 73
Application Object Services. See also AOS
Applications
  CPU load 20
  customer 42, 46, 73
  performance meter 20
  planning 19
  third-party 35
  third-party and customer 20
ARCHSIZE 19
AST, alarm server task 77
ATS 77

B
Block processing cycle (BPC) 26

C
CMX_NUM_CONNECTIONS 10, 11
Control distribution 22
Control stations 77
  alarming 21
  estimating number of control stations required 26
  execution time 21
  load analysis 26
  maximum loading table 51
  memory 21
  OM scan load 21
  OM scanner connections 21
  OM server connections 21
  planning 20
CP. See also Control stations
CPU load
  AIM*Historian 17, 18
  AOS 19
  applications 20
  displays 15
  FoxView 13
  printers 19
  reserved 7
  worksheet calculations 41, 46
  workstation summary worksheet 42, 43, 46, 47, 73, 74
  workstations 41, 46

D
DeviceNet 34
DIN rail mounted FBMs 34
Disk load time
  AIM*Historian software 17
Displays
  CPU load 15
Distribution of control 22

F
Fast scan option 14
FBMs
  DeviceNet 34
  DIN rail mounted (200 Series) 34
  FDSI 34
  FOUNDATION Fieldbus 34
  HART 34
  legacy 34
  Modbus 34
  PROFIBUS 34
FCP270 77
FCP280 77
FDSI 34, 77
Field device system integrator. See also FDSI
FOUNDATION Fieldbus FBMs 34
FoxView software 13
  CPU load 13
  display guidelines and resource usage 14
  OM scan load 14
  scan rates 14
  worksheet 43, 49, 75
  workstation summary worksheet 42, 46, 73

G
GET_SET_TIMEOUT 10, 12
global_find 63, 64, 66

H
HART FBMs 34
High speed draft mode 19
Historian software 17

I
I/O points 34
IMP_SAVE_PERIOD 10, 13
Inter-network traffic 55
Inter-process communications. See also IPC
IPC 78
  connected services 12
  connectionless services 12
IPC connections 22, 26, 78
IPC_NUM_CONN_PROCS 10, 12
IPC_NUM_CONNLESS_PROCS 10, 12

L
Legacy FBMs 34
LI
  traffic rates 61
LI (LAN interface) 78
Loading
  control stations 20, 26, 51
  CPU 7
  workstation 7
Loading summary (% of BPC) worksheet 52

M
Maximum loading table 51
McAfee virus scanning software 8
Memory 53
Modbus FBMs 34

N
Network
  bandwidth utilization 35
  planning 34
Nodebus
  sizing when communicating to the Foxboro DCS Control Network 55
Number of Alarm Managers worksheet 48

O
Object Manager Multicast Optimization 38
Object Manager. See also OM
OM 78
  API 78
  List 78
  lists 9
  number of connections 11, 14
  number of entries 11
  number of objects 11, 19
  number of open lists 11, 15, 19
  number of processes that register for IPC 12
  number of processes that register for IPC connectionless 12
  number of remote lists 12
  objects 79
  OS configurable parameters 9
  scanner 79
  server 79
  server connections 11
  services 79
OM scan load 3, 22
  AIM*Historian software 19
  control stations 21
  FoxView software 14, 15
OM scanner connections
  AIM*Historian software 18, 19
  control stations 21
  FoxView software 14
OM server connections
  AIM*Historian software 17, 18
  control stations 21
  FoxView software 13, 14
OM_MULTICAST_OPTIMIZATION 10, 12
OM_NUM_CONNECTIONS 10, 11
  AIM*Historian software 17, 18
  FoxView software 14
OM_NUM_IMPORT_VARS 10, 11
OM_NUM_LOCAL_OPEN_LISTS 10, 11
  AIM*Historian software 17, 19
  FoxView software 14, 15
OM_NUM_OBJECTS 10, 11
  AOS 19
  FoxView software 14
OM_NUM_REMOTE_OPEN_LISTS 10, 12
OMMO 5
OMMO_MULTICAST_DELAY 10, 13
OMMO_UNICAST_DELAY 10, 13
omopen 66
OS configurable parameters 9
  default and maximum values 10, 11
Other applications
  workstation summary worksheet 42, 46, 73

P
Peer-to-peer connections 21, 26, 79
Performance
  increasing 20
Phasing 26
Planning 7
  AIM*Historian software 17
  alarming software 16
  AOS 19
  applications 19
  BPC 26
  control distribution 22
  control stations 20
  FoxView software 13
  I/O points 34
  OM scan load 22
  OS configurable parameters 9
  phasing 26
  printers 19
  System Monitor (SMON) 8
  The Foxboro DCS Control Network 34
  workstations 7
Printers
  planning 19
  reserved CPU load 19
PROFIBUS FBMs 34

R
RAM
  increasing 20
Reserved CPU load 7
Resource table worksheet 54
RTTIME 19

S
Scan rates
  default 14
  fast scan option 14
show_params 9, 11, 12
Sink peer-to-peer status worksheet 53
Sizing
  additional sizing for networks connected via ATS
  control stations 51
  workstations 41, 44
SMDH 79
Solaris operating system 45
som 11, 12
Specifications 41, 45
Spreadsheets
  accessing
    from electronic documentation CD-ROM 4
Station block display 52
Station free memory worksheet 53
Summary worksheet 41, 46
System Management Display Handler. See also SMDH
System monitor (SMON) 8
System planning 7
  requirements 7
System sizing 41

T
The Foxboro DCS Control Network 2, 34
  configuration and references 35
  sizing when communicating to Nodebus 55
  workstation specifications 41, 44
Third-party applications 35

V
Virus scanning 8

W
Windows operating system 2, 45
  performance meter 20
Worksheets
  AIM*Historian 44, 50, 76
  Alarm Manager 43, 47, 74
  FoxView 43, 49, 75
  Loading Summary (% of BPC) 52
  Number of Alarm Managers 48
  Resource Table 54
  Sink Peer-to-Peer Status 53
  Station Memory Free 53
  Workstation Summary 41, 46
Workstations 80
  CPU factor 41, 45
  planning 7
  sizing 41, 44
  specifications 41, 45
  summary worksheet 41, 46

Z
ZCP270 80

Schneider Electric Systems USA, Inc.
38 Neponset Avenue
Foxborough, Massachusetts 02035–2037
United States of America

Global Customer Support: https://pasupport.schneider-electric.com

As standards, specifications, and designs change from time to time, please ask for confirmation of the information given in this publication.

© 2014–2019 Schneider Electric. All rights reserved.


B0700AX, Rev W
