J1.4 Source Term Characterization of FFT07
Luna M. Rodriguez, Andrew J. Annunzio, Kerrie J. Long, Sue Ellen Haupt, George S. Young
The Pennsylvania State University, University Park, Pennsylvania
1. INTRODUCTION

The Defense Threat Reduction Agency (DTRA) established that it is important to be able to predict the atmospheric transport and dispersion (AT&D) of chemical, biological, nuclear, or radioactive (CBNR) materials. However, sometimes there is inadequate source information to predict how these materials transport and disperse; it therefore becomes necessary to characterize the source of a CBNR airborne contaminant from remote measurements of the resulting concentration field. To generate a comprehensive meteorological and tracer AT&D dataset suitable for testing current and future CBNR algorithms, the FUsing Sensor Information from Observing Networks (FUSION) Field Trial 2007 (FFT07) was executed. Part of the FFT07 data release plan was to make the data available in phases. In the first of these phases, the actual release location and quantity of the agent were withheld, and the research groups with CBNR algorithms submitted predictions of the locations of the source releases. The first part of this paper discusses our Phase 1 results using our Genetic Algorithm (GA) approach to source characterization; the second part discusses some lessons learned using the Trial data.
2. DATA

The initial FFT07 data release, known as Trial data, contained readings from 100 sensors, the source information (location and amount), and abundant meteorological information. These data were made available to test and train current CBNR algorithms, with the intention that sparser datasets could be constructed by data denial. After 6 months, the Phase 1 data, known as Case data, were made available. These Case data contained 104 different release events with limited meteorological data and concentration data for only four or 16 sensors.
______________________________________
*Corresponding author address: Luna Marie Rodriguez, Department of Meteorology, The Pennsylvania State University, 402 Walker Building, University Park, PA 16802-5013; e-mail: [email protected]

3. GENETIC ALGORITHM

Figure 1 depicts the GA procedure for source characterization, which has also been proven to work with identical twin data in Allen et al. (2006, 2007), Haupt (2005), Haupt et al. (2006, 2007a, 2007b, 2007c), and Long et al. (2010). We begin with a set of trial solutions that are fed into an AT&D model. The AT&D models used in this study were a Gaussian puff model, a Gaussian plume model, and the Second-Order Closure Integrated Puff model (SCIPUFF). The concentration fields resulting from these models are then compared with the sensor data via a cost function, and the best solutions mate and mutate. This process iterates until it converges to a best solution.
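As an illustration only, not the authors' implementation, the GA loop just described (trial solutions, forward model and cost function, selection, mating, and mutation) can be sketched in Python. The toy cost function stands in for running an AT&D model and comparing its field with the sensor data, and all bounds, rates, and population sizes here are hypothetical:

```python
import random

# Hypothetical search bounds for the source parameters being estimated:
# x and y location (m) and emission rate (g/s).
BOUNDS = [(0.0, 2000.0), (0.0, 2000.0), (0.1, 10.0)]

def random_solution():
    return [random.uniform(lo, hi) for lo, hi in BOUNDS]

def mate(a, b):
    # Uniform crossover: each gene comes from one of the two parents.
    return [random.choice(pair) for pair in zip(a, b)]

def mutate(sol, rate=0.2):
    # Gaussian perturbation of each gene with probability `rate`,
    # clipped to the search bounds.
    out = []
    for g, (lo, hi) in zip(sol, BOUNDS):
        if random.random() < rate:
            g = min(hi, max(lo, g + random.gauss(0.0, 0.05 * (hi - lo))))
        out.append(g)
    return out

def genetic_algorithm(cost, pop_size=40, generations=100):
    # Trial solutions -> cost via forward model -> selection -> mate and
    # mutate, iterated for a fixed number of generations.
    population = [random_solution() for _ in range(pop_size)]
    for _ in range(generations):
        population.sort(key=cost)                 # rank by cost function
        parents = population[: pop_size // 2]     # keep the best half
        children = [mutate(mate(random.choice(parents),
                                random.choice(parents)))
                    for _ in range(pop_size - len(parents))]
        population = parents + children
    return min(population, key=cost)

# Toy stand-in for "run the AT&D model and compare with the sensor data":
# the cost is minimized at a hypothetical source at (1000 m, 500 m, 5 g/s).
def toy_cost(sol):
    x, y, q = sol
    return (x - 1000.0) ** 2 + (y - 500.0) ** 2 + (q - 5.0) ** 2

best = genetic_algorithm(toy_cost)
```

In practice the cost function evaluation dominates the run time, since each trial solution requires a full dispersion-model run.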
4. PHASE 1

For Phase 1 of FFT07 we submitted predictions for the cases containing concentration information from 16 sensors. For all of our predictions we manually calculated the atmospheric stability, used 10 s averages of the concentration data as the time interval, determined the start and stop time of each release by visual inspection, and did not filter noise. Some meteorological data were provided for each case; however, we used the GA to determine the prevailing wind direction and speed. When using SCIPUFF as our AT&D model, we visually inspected the concentration data to determine whether the release was a puff or a plume. When using the Gaussian models, every case was run in both puff and plume mode, and we submitted the result with the lowest cost function value as our prediction for the case.

Figure 2 shows an example of our predictions for a puff case and a plume case. As seen in Figure 2, the case data contain concentration values from a selection of sensors downwind of the source release. Our predictions vary somewhat depending on the AT&D model used. We achieved lower cost function values and better source location predictions when using SCIPUFF as our AT&D model.
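The puff-versus-plume selection for the Gaussian models, run both modes and keep the lower cost, can be sketched as follows. This is an illustrative Python fragment: the sum-of-squared-differences cost is a hypothetical stand-in for the study's actual cost function, and all names and values are invented:

```python
import numpy as np

def cost(predicted, observed):
    # Sum of squared differences between predicted and observed sensor
    # concentrations (an illustrative stand-in cost function).
    return float(np.sum((predicted - observed) ** 2))

def best_gaussian_prediction(run_puff, run_plume, observed):
    # Run the case in both puff and plume mode and keep the result
    # with the lowest cost function value.
    candidates = {"puff": run_puff(), "plume": run_plume()}
    mode, field = min(candidates.items(),
                      key=lambda kv: cost(kv[1], observed))
    return mode, field

# Toy example: the observations match the puff-mode field better.
observed = np.array([1.0, 4.0, 2.0, 0.5])
mode, _ = best_gaussian_prediction(
    run_puff=lambda: np.array([1.1, 3.8, 2.1, 0.6]),
    run_plume=lambda: np.array([0.2, 1.0, 0.9, 0.1]),
    observed=observed)
```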
5. SENSITIVITY

After submitting predictions for Phase 1, we performed a more thorough analysis of the time-step average, the use of the meteorological data, and how to threshold the data for noise. Figure 3 shows an example of using different averaging periods for the concentration data. The 10 s average was not as computationally intensive as the 1 s average, yet it still captured most of the maximum concentration peaks.
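The kind of block averaging compared in Figure 3 can be sketched in a few lines of numpy; the synthetic 1 s series below is hypothetical, chosen only to show that a 10 s mean retains a short peak at reduced amplitude:

```python
import numpy as np

def block_average(conc, period):
    # Average a 1 Hz concentration series over non-overlapping windows
    # of `period` seconds (any partial trailing window is truncated).
    n = len(conc) // period
    return conc[: n * period].reshape(n, period).mean(axis=1)

# Synthetic 1 s series with a 10 s wide concentration peak.
conc_1s = np.zeros(60)
conc_1s[25:35] = 8.0

conc_10s = block_average(conc_1s, 10)
# Six 10 s averages instead of sixty 1 s values: the peak survives,
# smeared across two windows at reduced amplitude.
```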
We temporally and spatially averaged the meteorological observations and compared results obtained using the provided data with those obtained using a wind speed and direction computed directly by the GA. Figure 4 indicates the problems that arise with the measured wind. In that case, the wind turns through 180° with height, which implies that a key issue is determining the appropriate steering level for the wind advecting the plume. We compared using a vertically averaged wind, a wind from the mean level of the layer, and a wind speed and direction determined as part of the genetic algorithm optimization. The GA-determined advecting wind produced better concentration predictions, which resulted in better estimates of the source location. In the future we would like the GA to determine the advecting wind speed and direction as a function of time.
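The pitfall of vertically averaging a wind that turns through 180° with height can be illustrated with component averaging (the profile below is hypothetical, not the FFT07 data):

```python
import numpy as np

def vector_mean_wind(speeds, directions_deg):
    # Average the u and v wind components over the profile, then convert
    # back to speed and direction. Directions follow the meteorological
    # convention (direction the wind blows FROM, in degrees).
    d = np.radians(directions_deg)
    u = -speeds * np.sin(d)
    v = -speeds * np.cos(d)
    um, vm = u.mean(), v.mean()
    speed = float(np.hypot(um, vm))
    direction = float(np.degrees(np.arctan2(-um, -vm)) % 360.0)
    return speed, direction

# Hypothetical profile: 5 m/s at every level, but the direction turns
# from easterly (90 deg) near the surface to westerly (270 deg) aloft.
speeds = np.array([5.0, 5.0, 5.0, 5.0])
dirs = np.array([90.0, 90.0, 270.0, 270.0])

mean_speed, _ = vector_mean_wind(speeds, dirs)
# The opposing layers cancel: the vertically averaged wind speed is zero,
# so no single layer-mean wind represents the flow advecting the plume.
```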
To avoid fitting sensor noise, we applied thresholds equally to all sensors. We found that applying the threshold equally may not be the best approach, given that our cost function values did not vary much between our high and low thresholds. However, we expect that applying thresholds individually to each sensor would greatly improve our predictions.
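The difference between a uniform threshold and per-sensor thresholds can be sketched as follows (illustrative numpy; the concentrations and noise floors are hypothetical, not values from the study):

```python
import numpy as np

def apply_uniform_threshold(conc, threshold):
    # Zero out readings below a single threshold shared by all sensors.
    return np.where(conc >= threshold, conc, 0.0)

def apply_per_sensor_threshold(conc, noise_floor):
    # Zero out readings below each sensor's own estimated noise floor
    # (one value per column, i.e., per sensor).
    return np.where(conc >= noise_floor, conc, 0.0)

# Rows are times, columns are three sensors with different noise levels.
conc = np.array([[0.20, 0.15, 1.50],
                 [0.30, 0.80, 0.02]])
noise_floor = np.array([0.25, 0.10, 0.50])

uniform = apply_uniform_threshold(conc, 0.25)
per_sensor = apply_per_sensor_threshold(conc, noise_floor)
```

With the uniform threshold, the 0.15 reading on the quiet middle sensor is discarded; its own noise floor of 0.10 would have kept it, which is why per-sensor thresholds may preserve real low-level signal.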
6. DISCUSSION

We have shown some success at estimating source term variables with a genetic algorithm and several different dispersion models. The Trial cases from FFT07 allowed analysis of our routines with real field data.

ACKNOWLEDGEMENTS

This work was supported by the Defense Threat Reduction Agency under grants W911NF-06-C-0162 and 01-03-D-0010-0012, and by the Bunton-Waller Fellowship.

REFERENCES

Allen, C. T., G. S. Young, and S. E. Haupt, 2006: Improving pollutant source characterization by optimizing meteorological data with a genetic algorithm. Atmos. Environ., 41, 2283-2289.

Allen, C. T., S. E. Haupt, and G. S. Young, 2007: Source characterization with a receptor/dispersion model coupled with a genetic algorithm. J. Appl. Meteor. Climatol., 46, 273-287.

Defense Threat Reduction Agency, cited 2008: Mission statement. [Available online at http://www.dtra.mil/about/mission.cfm]

Haupt, S. E., 2005: A demonstration of coupled receptor/dispersion modeling with a genetic algorithm. Atmos. Environ., 39, 7181-7189.

Haupt, S. E., G. S. Young, and C. T. Allen, 2006: Validation of a receptor/dispersion model with a genetic algorithm using synthetic data. J. Appl. Meteor., 45, 476-490.

Long, K. J., S. E. Haupt, and G. S. Young, 2010: Assessing sensitivity of source term estimation. Atmos. Environ., in press.

Rao, K. S., 2007: Source estimation methods for atmospheric dispersion. Atmos. Environ., 41, 6963-6973.
Figure 1. Schematic of the Genetic Algorithm.
Figure 2. Source location predictions for a puff case and a plume case submitted to Phase 1 of FFT07.
Figure 3. Concentration data for Trial 15 computed with different averaging periods: the Field Average is the spatial mean over all sensors, and the Sensor 75 Average is the mean for sensor 75 alone.
Figure 4. Temporally averaged meteorological observations from the sonic anemometers and the SODAR.