
Hindawi Publishing Corporation

Science and Technology of Nuclear Installations


Volume 2008, Article ID 695153, 6 pages
doi:10.1155/2008/695153

Research Article
Machine Learning of the Reactor Core Loading
Pattern Critical Parameters

Krešimir Trontl,1 Dubravko Pevec,1 and Tomislav Šmuc2


1 Department of Applied Physics, Faculty of Electrical Engineering and Computing, Unska 3, 10000 Zagreb, Croatia
2 Division of Electronics, Ruđer Bošković Institute, Bijenička 54, 10002 Zagreb, Croatia

Correspondence should be addressed to Krešimir Trontl, [email protected]

Received 11 March 2008; Accepted 23 June 2008

Recommended by Igor Jencic

The usual approach to loading pattern optimization involves a high degree of engineering judgment, a set of heuristic rules, an
optimization algorithm, and a computer code used for evaluating proposed loading patterns. The speed of the optimization
process is highly dependent on the computer code used for the evaluation. In this paper, we investigate the applicability of a
machine learning model which could be used for fast loading pattern evaluation. We employ a recently introduced machine
learning technique, support vector regression (SVR), which is a data-driven, kernel-based, nonlinear modeling paradigm in which
model parameters are automatically determined by solving a quadratic optimization problem. The main objective of the work
reported in this paper was to evaluate the possibility of applying the SVR method to reactor core loading pattern modeling. We
illustrate the performance of the solution and discuss its applicability, that is, its complexity, speed, and accuracy.

Copyright © 2008 Krešimir Trontl et al. This is an open access article distributed under the Creative Commons Attribution
License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly
cited.

1. INTRODUCTION

Decreasing fuel cycle costs is an important factor in nuclear power plant management. The economics of the fuel cycle can strongly benefit from the optimization of the reactor core loading pattern, that is, minimization of the amount of enriched uranium and burnable absorbers placed in the core, while maintaining nuclear power plant operational and safety characteristics.

The usual approach to loading pattern optimization involves a high degree of engineering judgment, a set of heuristic rules, an optimization algorithm, and a reactor physics computer code used for evaluating proposed loading patterns. Since the loading pattern optimization problem is of combinatorial nature and involves heuristics requiring large numbers of core modeling calculations (e.g., genetic algorithms or simulated annealing algorithms), the time needed for one full optimization run is essentially determined by the complexity of the code that evaluates the core loading pattern.

The aim of the work reported in this paper was to investigate the applicability of machine learning modeling for fast loading pattern evaluation. We employed a recently introduced machine learning technique, support vector regression (SVR), which has a strong theoretical background in statistical learning theory. SVR is a supervised learning method in which model parameters are automatically determined by solving a quadratic optimization problem.

This paper reports on the possibility of applying the SVR method to reactor core loading pattern modeling. The required size of the learning data set as a function of targeted accuracy, the influence of the SVR free parameters, and the definition of the input vector were studied.

In Section 2, the support vector regression method is discussed. The basics of fuel loading pattern development and optimization, as well as the methodology applied for the investigation of the applicability of the SVR method to fuel loading pattern evaluation, are presented in Section 3. Results and discussion are given in Section 4, while in Section 5 the conclusions based on this work are drawn.

2. SUPPORT VECTOR REGRESSION

Machine learning is, by its definition, the study of computer algorithms that improve automatically through experience. One such technique is the support vector machines (SVM) method, which has a strong theoretical background in statistical learning theory [1]. The method has proved to be a very robust technique for complex classification and regression problems. Although, historically speaking, the first implementations of SVM addressed classification problems [2, 3], in the last decade the application of SVM to nonlinear regression modeling has become noticeable in different fields of science and technology [4–10], the main reasons being the robustness and the good generalization properties of the method.

In the upcoming paragraphs, we give a short introduction to the support vector regression method, stressing only the most important theoretical and practical aspects of the technique. Additional information can be found in the referenced literature.

In general, the starting point of a machine learning problem is a collection of samples, that is, points, used to learn the model (training set) and a separate set used to test the learned model (test set). Since we are interested in developing a regression model, we consider a training data set, as well as a testing data set, comprised of a number of input/output pairs representing the experimental relationship between the input variables (\vec{x}_i) and the corresponding scalar output value (y_i):

    (\vec{x}_1, y_1), (\vec{x}_2, y_2), \ldots, (\vec{x}_n, y_n) \subset \mathbb{R}^d \times \mathbb{R}.   (1)

In our case, the input vector defines the characteristics of the loading pattern, while the output value, also referred to as the target value, denotes the parameter of interest.

The modeling objective is to find a function y = f(\vec{x}) that accurately predicts (within an ε tolerance) the output value y corresponding to a new input vector \vec{x} as yet unseen by the model, that is, one the model has not been trained on [11].

Due to the high complexity of the underlying physical process that we are modeling, the required function can be expected to be highly nonlinear. In the support vector regression approach, the input data vector \vec{x} is mapped into a higher-dimensional feature space F using a nonlinear mapping function Φ, and a linear regression is performed in that space. Therefore, a problem of nonlinear regression in the low-dimensional input space is solved by linear regression in the high-dimensional feature space.

The SVR technique considers the following linear estimation function:

    f(\vec{x}) = \langle \vec{w}, \Phi(\vec{x}) \rangle + b,   (2)

where \vec{w} denotes the weight vector, b is a constant known as the bias, \Phi(\vec{x}) is the mapping function, and \langle \vec{w}, \Phi(\vec{x}) \rangle is the dot product in the feature space F, such that \Phi : \vec{x} \to F and \vec{w} \in F [12]. The unknown parameters \vec{w} and b are estimated from the data points in the training set. To avoid overfitting and to maximize the generalization capability of the model, a regularized form of the functional, following the principles of structural risk minimization (SRM), is minimized:

    R_{reg}[f] = \sum_{i=1}^{M} C\bigl(f(\vec{x}_i) - y_i\bigr) + \lambda \lVert \vec{w} \rVert^2,   (3)

where R_{reg}[f] denotes the regression risk (the possible test set error), based on the empirical risk, which is expressed through the cost function C determined on the points of the training set, and on a term reflecting the complexity of the regression model. The minimization task thus involves simultaneous minimization of the empirical risk and minimization of the structural complexity of the model. The most commonly used cost function (loss functional) related to the empirical risk is the so-called ε-insensitive loss function:

    C\bigl(f(\vec{x}_i) - y_i\bigr) = \begin{cases} \lvert f(\vec{x}_i) - y_i \rvert - \varepsilon, & \text{for } \lvert f(\vec{x}_i) - y_i \rvert \ge \varepsilon, \\ 0, & \text{otherwise,} \end{cases}   (4)

where ε is a parameter representing the radius of the tube around the regression function. The SVR algorithm attempts to position the tube around the data, as depicted in Figure 1 [7], and according to (4) does not penalize data points whose values lie inside this tube. The deviations of points that lie more than ε away from the regression function are penalized in the optimization through their positive and negative deviations ξ and ξ*, called "slack" variables.

[Figure 1: The schematic illustration of the SVR using the ε-insensitive cost function (tube). The plot shows scalar output values against input vectors, with the fitted values f(\vec{x}), the tube boundaries f(\vec{x}) + ε and f(\vec{x}) − ε, the slack variables ξ and ξ*, the support vectors, and the input vectors outside the ε tube.]

It was shown that the following function minimizes the regularized functional given by (3) [1]:

    f(\vec{x}, \vec{w}) = f(\vec{x}, \vec{\alpha}, \vec{\alpha}^*) = \sum_{i=1}^{n} (\alpha_i^* - \alpha_i) K(\vec{x}_i, \vec{x}) + b,   (5)

where \alpha_i^* and \alpha_i are Lagrange multipliers describing \vec{w}, which are estimated, together with the parameter b, using an appropriate quadratic programming algorithm, and K(\vec{x}_i, \vec{x}) is a so-called kernel function describing the dot product \langle \Phi(\vec{x}_i), \Phi(\vec{x}) \rangle in the feature space. A number of kernel functions exist [13].

The kernel functions used in this work are described in more detail in the following section.

Due to the character of the quadratic optimization, only some of the coefficients \alpha_i^* - \alpha_i are nonzero, and the corresponding input vectors \vec{x}_i are called support vectors (SVs). Input vectors with zero \alpha_i^* - \alpha_i coefficients are positioned inside the ε tolerance tube and are therefore not relevant to the process of model generation. The support vectors determined in the training (optimization) phase are the "most informative" points, which compress the information content of the training set. In most SVR formulations, there are two free parameters to be set by the user: C, the cost of the penalty for data-model deviation, and ε, the width of the insensitive zone. These two free parameters, together with the chosen form of the kernel function and its corresponding parameters, control the accuracy and generalization performance of the regression model.
of the work, we decided to simplify the problem by the
3. METHODOLOGY assumption of the 1/8 core symmetry, resulting in 21 fuel
assemblies defining the core. Fuel assembly (position) is
One of the key processes of both, safe and economical defined by initial enrichment, number of IFBAs, and reactor
operations of nuclear reactor, is in-core fuel management, history, or at least burnup accumulated in previous cycles.
or to be more precise, fuel loading pattern determination Therefore, the number of potential parameters defining the
and optimization. Every method and technique used for input space is 63. The high dimensionality of the input
fuel loading pattern determination and optimization tasks, space generally increases the number of training points and
whether based on engineering judgement, heuristic rules, time required for the development of the SVR of certain
genetic algorithms, or a combination of stated approaches, generalization properties. Therefore, we decided to reduce
requires a large number of potential fuel loading patterns the number of parameters by introducing k-inf at the
evaluation. The evaluation is normally performed using a beginning of the cycle as a new parameter and representing
more or less sophisticated reactor physics code. Usage of such fuel assembly only by k-inf and number of IFBAs (0 for
codes is time consuming. Therefore, in this work, we are old fuel, and 32, 64, 92, and 116 for fresh fuel). Thus,
investigating the possibility of SVR method being used as a the final number of parameters defining the input space
fast tool for loading pattern evaluation. was 42.
However, taking into account that the SVR method is The SVR model would eventually be used in an optimiza-
to be used, a number of factors have to be addressed prior tion algorithm as a fast tool for loading pattern evaluation.
to creating a model. The first is the setting of the loading Therefore, the target parameters which we want to model
pattern that is to be investigated, including the method by should be the most important parameters on which such an
which the experimental data points are to be generated, the evaluation is based. In this work, we used the global core
definition of the input space and parameters used as target effective multiplication factors at the beginning and at the
values. The second is the choice of the kernel function and end of the cycle (keffBOC and keffEOC ), as well as power peaking
N
appropriate free parameters used in the SVR model. Finally, factor (FΔH ) as target parameters for which separate SVR
SVR modeling tools have to be addressed. models were built.
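The role of the evaluation step can be made explicit with a short sketch. The code below is our illustration, not part of the original study: a generic heuristic search loop in which the loading pattern evaluator is a pluggable callable, so that an expensive reactor physics code and a fast SVR surrogate are interchangeable; `evaluate` and `perturb` are hypothetical placeholders.

    # Illustrative sketch (not from the paper): a heuristic search loop whose
    # evaluator is pluggable. Swapping the reactor physics code for a trained
    # SVR surrogate changes only the `evaluate` callable, not the search itself.
    def optimize(initial_pattern, evaluate, perturb, n_iter=10000):
        """Minimal random-search skeleton; `evaluate` returns a scalar penalty."""
        best, best_score = initial_pattern, evaluate(initial_pattern)
        for _ in range(n_iter):
            candidate = perturb(best)    # heuristic move, e.g., swap two assemblies
            score = evaluate(candidate)  # physics code or fast surrogate model
            if score < best_score:
                best, best_score = candidate, score
        return best, best_score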

3.1. Computational experiment setup

Taking into account the preliminary and inquiring character of the study, we decided to use a limited fuel assembly inventory for a single loading pattern optimization as the basis for the development of our regression models. The NPP Krško Cycle 22 loading pattern was used as the reference one. The 121 fuel assemblies, grouped in 7 batches, that were used for core loading in Cycle 22 were used to generate a large number of randomly generated fuel loading patterns, which were then divided into training and testing data sets and employed in the SVR model development process. The global core calculations for each of the trial loading patterns were conducted using the MCRAC code of the FUMACS code package, which also includes the LEOPARD code for two-group cross-section preparation [14]. The calculation is based on quarter core symmetry, a fixed cycle length, and a fixed soluble boron concentration curve.

The generation phase, that is, the definition of the loading patterns, was based on a semirandom algorithm. In order to narrow the investigated input space as much as possible, as well as to stay within the limits of the numbers of available fuel assemblies per batch, we introduced a limitation on the position where every fuel assembly can be placed: fuel assemblies originally placed on axis positions could be randomly placed only on axis positions, and vice versa. The central location fuel assembly was fixed for every loading pattern.

The most important issue in the regression model development is the definition of the input space to be used for SVR model development. Since in a quarter core symmetry setup the NPP Krško core is defined by 37 fuel assemblies, and having in mind the inquiring nature of the work, we decided to simplify the problem by assuming 1/8 core symmetry, resulting in 21 fuel assemblies defining the core. A fuel assembly (position) is defined by its initial enrichment, its number of IFBAs, and its reactor history, or at least the burnup accumulated in previous cycles. Therefore, the number of potential parameters defining the input space is 63. The high dimensionality of the input space generally increases the number of training points and the time required for the development of an SVR model of given generalization properties. Therefore, we decided to reduce the number of parameters by introducing k-inf at the beginning of the cycle as a new parameter and representing a fuel assembly only by its k-inf and its number of IFBAs (0 for old fuel, and 32, 64, 92, or 116 for fresh fuel). Thus, the final number of parameters defining the input space was 42.

The SVR model would eventually be used in an optimization algorithm as a fast tool for loading pattern evaluation. Therefore, the target parameters to be modeled should be the most important parameters on which such an evaluation is based. In this work, we used the global core effective multiplication factors at the beginning and at the end of the cycle (keff^BOC and keff^EOC), as well as the power peaking factor (FΔH^N), as the target parameters for which separate SVR models were built.
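A minimal sketch of how such a 42-element input vector might be assembled (our illustration; the paper does not give an implementation): each of the 21 assembly positions of the 1/8-symmetric core contributes its beginning-of-cycle k-inf and its IFBA count.

    from dataclasses import dataclass

    @dataclass
    class Assembly:
        k_inf: float   # infinite multiplication factor at the beginning of the cycle
        n_ifba: int    # 0 for old fuel; 32, 64, 92, or 116 for fresh fuel

    def encode_pattern(assemblies):
        """Flatten 21 assemblies (1/8 core symmetry) into a 42-element vector."""
        assert len(assemblies) == 21
        features = []
        for a in assemblies:
            features.extend([a.k_inf, float(a.n_ifba)])
        return features

    print(len(encode_pattern([Assembly(1.1, 64)] * 21)))   # -> 42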

3.2. Kernel functions

The idea of the kernel function is to enable the mathematical operations to be carried out in the input space, rather than in the high-dimensional feature space [15]. The theory is based upon reproducing kernel Hilbert spaces (RKHSs) [16].

A number of kernel functions have been proposed in the literature. The particular choice of kernel used for mapping the nonlinear input data into a linear feature space is highly dependent on the nature of the data representing the problem, and it is up to the modeller to select the appropriate kernel function. In this paper, the focus is placed on two widely used kernel functions, namely, the radial basis function (RBF), also called the Gaussian kernel, and the polynomial function (PF), which are defined by (6):

    K_{RBF}(\vec{x}_i, \vec{x}_j) = \exp\!\left( -\frac{\lVert \vec{x}_i - \vec{x}_j \rVert^2}{2\sigma^2} \right),
    K_{PF}(\vec{x}_i, \vec{x}_j) = \left( \vec{x}_i^{\,T} \vec{x}_j + 1 \right)^d.   (6)

In the case of the RBF kernel, the parameter σ represents the radius of the Gaussian kernel, while d in the case of the PF kernel represents the degree of the polynomial.

As already mentioned, the behaviour of the SVR technique strongly depends on the selection of the kernel function, its corresponding parameters, and the general SVR "free" parameters (C and ε). All the parameters used in this study were determined by a combination of engineering judgement and an optimization procedure based on the application of genetic algorithms [17].
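The two kernels of (6) translate directly into code; the NumPy transcription below is ours:

    import numpy as np

    def k_rbf(x_i, x_j, sigma):
        """RBF (Gaussian) kernel of eq. (6); sigma is the kernel radius."""
        diff = np.asarray(x_i) - np.asarray(x_j)
        return np.exp(-(diff @ diff) / (2.0 * sigma**2))

    def k_poly(x_i, x_j, d):
        """Polynomial kernel of degree d of eq. (6)."""
        return (np.dot(x_i, x_j) + 1.0) ** d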
3.3. SVR modeling tools

Excellent results of SVR applications to a wide range of classification and regression problems in different fields of science and technology have initiated the creation of a number of implementations of the support vector machines algorithm, some of which are freely available software packages. In this work, we decided to test three often used packages: SVMTorch [18], LIBSVM [19], and WEKA [20].

As stated in the previous subsection, the RBF and PF kernel functions were used. The general form of the kernels is given in (6). However, the practical parameterisation of the functions, that is, their representation, differs somewhat from code to code. For example, the parameter g in the LIBSVM notation for the RBF kernel represents 1/(2σ²). Whenever a direct comparison of codes was performed, the general kernel parameters were set (see (6)) and the code-specific parameters were modified to reflect these values.
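Since LIBSVM writes the RBF kernel as exp(−g‖x_i − x_j‖²), matching the σ parameterisation of (6) amounts to the conversion below (a small helper of ours, not part of any package):

    def libsvm_gamma(sigma):
        """Convert the sigma of eq. (6) into LIBSVM's g, where g = 1/(2*sigma**2)."""
        return 1.0 / (2.0 * sigma**2)

    print(libsvm_gamma(6.4697))   # g corresponding to the sigma found in Section 4.1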
4. RESULTS AND DISCUSSION

4.1. Comparison of code packages

The comparison of the three code packages for SVR modeling, namely, SVMTorch, LIBSVM, and WEKA, was conducted using a maximum training set size of 15 000 data points, while the test set consisted of 5000 data points. The number of data points used for learning models is typically enlarged until satisfactory results regarding the accuracy are achieved. In this subsection, only the results of the final model comparison are presented.

Preliminary analyses revealed that preprocessing of the input data is required in order to allow normal and reasonably fast operation of all the SVR code packages. Mainly due to the fact that the input variables span extremely different ranges, scaling of the input data was performed, including the scaling of the target values (all into the range 0 to 1), using one of the LIBSVM codes: SVMSCALE.

The models for the three target values (keff^BOC, keff^EOC, and FΔH^N) were compared on model accuracy, on learning and implementation times (Pentium 4 Mobile CPU 1.7 GHz, 256 MB RAM, Windows XP SP2), and on the relative number of support vectors as the measure of model generalization characteristics. The implementation time was measured on 5000 data points. The accuracy of the model was determined using the root mean square error (RMSE) and the relative average deviation (RAD), defined as

    \mathrm{RMSE} = \sqrt{\frac{\sum_{i=1}^{n} (y_i - f_i)^2}{n}},
    \mathrm{RAD} = \frac{\sum_{i=1}^{n} \lvert (y_i - f_i)/y_i \rvert \times 100\%}{n},   (7)

where f_i stands for the predicted value corresponding to the target value y_i. A further metric of interest was the percentage of tested data points whose predicted value deviates from the target value by more than 20%:

    \frac{\lvert y_i - f_i \rvert}{y_i} \times 100\% > 20\%.   (8)

In the case of the RBF kernel function, the initial values of the free parameters were estimated using a genetic algorithm (GA) on the LIBSVM code. The ranges for the parameters were set, based on engineering judgement, from 1 to 1000 for C, from 0.001 to 2.0 for ε, and from 1 to 7.07 (√50) for σ. The GA was characterized by 20 populations, each consisting of 100 members. The training set consisted of 4500 data points, while the test set had 500 data points. The best result was obtained for C = 371.725, ε = 0.05154, and σ = 6.4697.

In the case of the PF kernel function, we decided to set the d parameter to the commonly used value of 3, while for simplicity C and ε were kept at 371.725 and 0.05154, respectively. The comparison results for the RBF kernel function are given in Table 1, while the comparison results for the PF kernel function are presented in Table 2.

The results of the preliminary tests suggest that appropriate regression models using the SVM method can be developed for all target values regardless of the applied code package. The only difference is the learning time required for the model to be developed. The implementation, or deployment, time for the execution of the model (a maximum of 30 seconds for 5000 calculations) is not an issue. The accuracy for the keff^BOC and keff^EOC target values is satisfactory, while additional effort has to be put into developing the FΔH^N model by adjusting the SVR parameters and increasing the training set size.

4.2. Training set size influence on SVR model quality

SVR model quality can be interpreted as the time required for model learning, the accuracy of the model, and the generalization characteristics of the model. As shown in the previous subsection, the model implementation/deployment time is not the key issue.

As discussed previously, the size of the training set influences all factors of the model quality, and generally a thorough analysis of that influence is necessary. Here, we present the results of preliminary tests conducted for the keff^BOC model developed using the LIBSVM code package (see Figure 2); the characteristics of the other code packages applied to all target values are qualitatively very similar.
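The accuracy metrics of (7) and (8) translate directly into code; the following NumPy transcription is ours:

    import numpy as np

    def rmse(y, f):
        """Root mean square error, eq. (7)."""
        y, f = np.asarray(y), np.asarray(f)
        return np.sqrt(np.mean((y - f) ** 2))

    def rad_percent(y, f):
        """Relative average deviation in percent, eq. (7)."""
        y, f = np.asarray(y), np.asarray(f)
        return np.mean(np.abs((y - f) / y)) * 100.0

    def percent_above_20(y, f):
        """Share of points whose relative deviation exceeds 20%, eq. (8)."""
        y, f = np.asarray(y), np.asarray(f)
        return np.mean(np.abs((y - f) / y) * 100.0 > 20.0) * 100.0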

Table 1: Comparison of results for the RBF kernel function.

Target value   Code package   RMSE    RAD [%]   >20% [%]   Learn./Impl. time [s]   SV [%]
keff^BOC       SVMTorch       0.029     6.793      3.44           120/14             5.27
keff^BOC       LIBSVM         0.029     7.179      3.96            18/3              3.59
keff^BOC       WEKA           0.028     6.621      3.20          2250/6              3.77
keff^EOC       SVMTorch       0.050     5.048      1.96         10800/30            16.69
keff^EOC       LIBSVM         0.045     4.550      1.76          1260/15            18.37
keff^EOC       WEKA           0.045     4.570      1.98         28160/30            18.22
FΔH^N          SVMTorch       0.040    15.060     20.42         13080/13            16.91
FΔH^N          LIBSVM         0.039    14.810     19.64          1080/14            17.97
FΔH^N          WEKA           0.039    14.801     19.58         33362/14            17.86

Table 2: Comparison of results for the PF kernel function.

Target value   Code package   RMSE    RAD [%]   >20% [%]   Learn./Impl. time [s]   SV [%]
keff^BOC       SVMTorch       0.030     6.418      3.76            50/11             6.88
keff^BOC       LIBSVM         0.030     7.610      5.62             9/3              4.83
keff^BOC       WEKA*          0.030     6.259      3.46          4027/10             2.43
keff^EOC       SVMTorch       0.072     7.147      4.92           840/20            19.33
keff^EOC       LIBSVM         0.058     5.856      3.12          2113/11            30.21
keff^EOC       WEKA*          0.056     6.095      3.34         31120/45            30.02
FΔH^N          SVMTorch       0.044    16.057     22.92           420/18            18.50
FΔH^N          LIBSVM         0.039    14.992     20.50           325/8             18.43
FΔH^N          WEKA*          0.042    15.701     22.40          7000/30            17.17

* PF kernel in the form K_{PF}(\vec{x}_i, \vec{x}_j) = (\vec{x}_i^{\,T} \vec{x}_j)^d.
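The free-parameter search of Section 4.1 can be sketched as follows. The paper used a genetic algorithm [17] with 20 populations of 100 members; as a lightweight stand-in that shares the same scoring logic, the sketch below (ours) performs a random search over the stated ranges using scikit-learn's LIBSVM-based SVR.

    import numpy as np
    from sklearn.svm import SVR

    def search_free_parameters(X_tr, y_tr, X_te, y_te, n_trials=200, seed=0):
        """Random search over the ranges of Section 4.1 (the paper used a GA [17])."""
        rng = np.random.default_rng(seed)
        best_params, best_err = None, np.inf
        for _ in range(n_trials):
            C = rng.uniform(1.0, 1000.0)      # range 1 to 1000
            eps = rng.uniform(0.001, 2.0)     # range 0.001 to 2.0
            sigma = rng.uniform(1.0, 7.07)    # range 1 to sqrt(50)
            model = SVR(kernel="rbf", C=C, epsilon=eps,
                        gamma=1.0 / (2.0 * sigma**2)).fit(X_tr, y_tr)
            err = np.sqrt(np.mean((model.predict(X_te) - y_te) ** 2))
            if err < best_err:
                best_params, best_err = (C, eps, sigma), err
        return best_params, best_err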

[Figure 2: Training set size influence on model quality for the keff^BOC model (preliminary tests). The plot shows the RMSE, the support vector percentage (%), and the learning time (s) as functions of the training set size (0 to 25 000 data points).]

Apart from an anomaly observed in the RMSE curve at a training set size of 5000 data points, which originates in the statistical and random character of the training and testing data sets, the accuracy (RMSE) and the generalization properties (a low SV percentage) of the models improve with increasing training set size. The learning time also increases, exhibiting a nearly linear trend.
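A study of this kind reduces to a loop over training set sizes. In the sketch below (ours, with the free parameters fixed at the values found in Section 4.1), `X` and `y` are assumed to be arrays of encoded loading patterns and their keff^BOC target values, large enough for a 5000-point test split plus the largest training size.

    import numpy as np
    from sklearn.svm import SVR

    def training_size_study(X, y, sizes=(1000, 2500, 5000, 10000, 20000), seed=0):
        """Record test RMSE and SV percentage per training set size (cf. Figure 2)."""
        rng = np.random.default_rng(seed)
        idx = rng.permutation(len(X))
        test_idx, train_idx = idx[:5000], idx[5000:]
        results = []
        for n in sizes:
            sub = train_idx[:n]
            model = SVR(kernel="rbf", C=371.725, epsilon=0.05154,
                        gamma=1.0 / (2.0 * 6.4697**2)).fit(X[sub], y[sub])
            err = np.sqrt(np.mean((model.predict(X[test_idx]) - y[test_idx]) ** 2))
            results.append((n, err, 100.0 * len(model.support_) / n))
        return results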
5. CONCLUSIONS

This work introduces a novel concept for the fast evaluation of reactor core loading patterns, based on a general and robust regression model relying on state-of-the-art research in the field of machine learning.

Preliminary tests were conducted on the NPP Krško reactor core, using the MCRAC code for the calculation of the reference data. Three support vector regression code packages were employed (SVMTorch, LIBSVM, and WEKA) for creating regression models of the effective multiplication factor at the beginning of the cycle (keff^BOC), the effective multiplication factor at the end of the cycle (keff^EOC), and the power peaking factor (FΔH^N).

The preliminary tests revealed a great potential of the SVR method for fast and accurate reactor core loading pattern evaluation. However, prior to a final conclusion and the incorporation of SVR models into optimization codes, additional tests and analyses are required, mainly focused on the parameters defining the input vector (and thus its size), the required size of the training set, and the parameters defining the kernel functions.

In the case of a scenario involving machine learning from the results of a more accurate and time consuming 3D code, we do not anticipate any major changes in the learning stage of SVR model development, nor in its implementation. However, the generation of the training and testing data sets would be more demanding (time consuming and requiring more hardware resources).

These are the issues that are within the scope of our future research.
REFERENCES

[1] V. N. Vapnik, Statistical Learning Theory, John Wiley & Sons, New York, NY, USA, 1998.
[2] E. Osuna, R. Freund, and F. Girosi, "Training support vector machines: an application to face detection," in Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR '97), pp. 130–136, San Juan, Puerto Rico, USA, June 1997.
[3] B. Schölkopf, Support vector learning, Ph.D. thesis, R. Oldenbourg, Munich, Germany, 1997.
[4] S. M. Clarke, J. H. Griebsch, and T. W. Simpson, "Analysis of support vector regression for approximation of complex engineering analyses," in Proceedings of the Design Engineering Technical Conferences and Computers and Information in Engineering Conference (DETC '03), pp. 535–543, Chicago, Ill, USA, September 2003.
[5] T. Gu, W. Lu, X. Bao, and N. Chen, "Using support vector regression for the prediction of the band gap and melting point of binary and ternary compound semiconductors," Solid State Sciences, vol. 8, no. 2, pp. 129–136, 2006.
[6] E. Myasnikova, A. Samsonova, M. Samsonova, and J. Reinitz, "Support vector regression applied to the determination of the developmental age of a Drosophila embryo from its segmentation gene expression patterns," Bioinformatics, vol. 18, supplement 1, pp. S87–S95, 2002.
[7] S. Nandi, Y. Badhe, J. Lonari, et al., "Hybrid process modeling and optimization strategies integrating neural networks/support vector regression and genetic algorithms: study of benzene isopropylation on Hbeta catalyst," Chemical Engineering Journal, vol. 97, no. 2-3, pp. 115–129, 2004.
[8] D. J. Strauß, G. Steidl, and U. Welzel, "Parameter detection of thin films from their X-ray reflectivity by support vector machines," Applied Numerical Mathematics, vol. 48, no. 2, pp. 223–236, 2004.
[9] D. O. Whiteson and N. A. Naumann, "Support vector regression as a signal discriminator in high energy physics," Neurocomputing, vol. 55, no. 1-2, pp. 251–264, 2003.
[10] K. Trontl, T. Šmuc, and D. Pevec, "Support vector regression model for the estimation of γ-ray buildup factors for multilayer shields," Annals of Nuclear Energy, vol. 34, no. 12, pp. 939–952, 2007.
[11] N. Cristianini and J. Shawe-Taylor, An Introduction to Support Vector Machines and Other Kernel-Based Learning Methods, Cambridge University Press, Cambridge, UK, 2005.
[12] A. J. Smola and B. Schölkopf, "A tutorial on support vector regression," Statistics and Computing, vol. 14, no. 3, pp. 199–222, 2004.
[13] A. J. Smola, Learning with kernels, Ph.D. thesis, Technische Universität Berlin, Berlin, Germany, 1998.
[14] B. Petrović, D. Pevec, T. Šmuc, and N. Urli, "FUMACS (FUel MAnagement Code System)," Rudjer Bošković Institute, Zagreb, Croatia, 1991.
[15] S. R. Gunn, "Support vector machines for classification and regression," Tech. Rep., Faculty of Engineering, Science and Mathematics, University of Southampton, Southampton, UK, May 1998.
[16] N. Aronszajn, "Theory of reproducing kernels," Transactions of the American Mathematical Society, vol. 68, no. 3, pp. 337–404, 1950.
[17] B. Üstün, W. J. Melssen, M. Oudenhuijzen, and L. M. C. Buydens, "Determination of optimal support vector regression parameters by genetic algorithms and simplex optimization," Analytica Chimica Acta, vol. 544, no. 1-2, pp. 292–305, 2005.
[18] R. Collobert and S. Bengio, "SVMTorch: support vector machines for large-scale regression problems," The Journal of Machine Learning Research, vol. 1, no. 2, pp. 143–160, 2001.
[19] C.-C. Chang and C.-J. Lin, "LIBSVM: a library for support vector machines," Manual, 2001.
[20] I. H. Witten and E. Frank, Data Mining: Practical Machine Learning Tools and Techniques, Morgan Kaufmann, San Francisco, Calif, USA, 2nd edition, 2005.
