Abstract
One method of assessing the image quality of a mammography unit is to estimate a contrast-detail-curve (CDC) obtained from images of a technical phantom. It has been proposed to estimate this CDC with an end-to-end neural network (NN) that needs only one image to determine the CDC. That approach, however, was developed on the basis of images from one single mammography unit. In this work, we train NNs on synthetic images of contrast-detail phantoms for mammography and test the so-trained NNs on images obtained from real mammography units. The goal of this paper is to demonstrate that such a deep learning approach is capable of generalizing and can predict CDCs for various real mammography units. Our experiments cover various manufacturers, and the proposed approach is shown to work across different NN architectures and preprocessing methods, which highlights its generalizability.

Original content from this work may be used under the terms of the Creative Commons Attribution 4.0 license. Any further distribution of this work must maintain attribution to the author(s) and the title of the work, journal citation and DOI.
1. Introduction
Mammography is an established diagnostic technique to detect early forms of breast cancer [1, 2]. In mammography screening the breast is exposed to low-energy x-rays. Since breast lesions have a different morphology than healthy tissue, potentially cancerous tissue can be detected by the difference in shape and contrast in the recorded x-ray image. To obtain the optimal image quality with the minimal dose of radiation, the image quality of the mammography unit has to be assessed in quality control procedures [3]. According to the European Reference Organisation for Quality Assured Breast Screening and Diagnostic Services (EUREF) guidelines, the image quality for mammography can be assessed by a contrast-detail-curve (CDC) that is obtained from a contrast-detail phantom for mammography (CDMAM phantom) [4, 5]. The CDC can be estimated through an automated readout generated by the CDMAM Analyser software. The EUREF guidelines recommend using at least 16 images of the CDMAM phantom to compute the CDC [4, 5].
In recent years neural networks (NNs) have become a powerful tool in improving many healthcare sectors [6–9]. We have extended this line of work in the field of image quality assessment (IQA) for mammography. To improve upon the current practice of automated readouts, using a NN that requires only one single image as an input to assess the image quality of the mammography unit has been proposed [10]. In [10], a NN was trained on synthetic and real data to predict the CDC for a single mammography unit. The training and test set of the real data were however taken from the same mammography unit. We have extended this work in two ways: first, we trained our NN on synthetic data alone and second, we illustrated our NN’s ability to generalize by testing it on an independent and diverse test set of 31 real mammography units. Our improvements are based on two ideas: changing the objective function to train the NN and using a more diverse synthetic dataset.
On the one hand, to the best of our knowledge, there exists no publicly available large database on mammography IQA which is sufficient to train a NN. On the other hand, using synthetic data [11] to train the NN has the advantage that one can generate arbitrary data to cover the spectrum of many mammography units. Furthermore, a realistic test scenario is provided by evaluating the trained NN subsequently on real mammography units.
This work intends to show that training a NN with synthetic data to assess the image quality of a mammography unit is feasible and data efficient and that the trained NN generalizes well. We used a simple synthetic data generator which generates synthetic data based on the geometry of the CDMAM phantom, the amount of (Gaussian) noise and the image contrast alone [12]. Our goal is to show that our method is robust with respect to different NN architectures and preprocessing methods.
Our paper is structured in the following way. We describe one common method of evaluating IQA, as used in practice, in section 2.1. In section 2.2 we explain our NN approach in detail; in particular, the objective function used for training the NN, details about the synthetic data and our preprocessing methods are described there. The empirical results are presented in section 3. Finally, we conclude the paper in section 4 with a discussion of our procedure's limitations and future directions of research.
2. Method
2.1. Current practice according to EUREF
The EUREF guidelines recommend CDMAM phantoms to assess the image quality of a mammography unit [4, 5]. The CDMAM phantom comprises a 1 mm thick aluminium plate and polymethyl methacrylate (PMMA), with a grid structure integrated in the PMMA. The total thickness of the phantom is 5 mm. One image of a CDMAM phantom (version 3.4) is displayed in figure 1. The phantom consists of 205 cells and each cell contains two gold cylinders. The gold cylinders in different cells vary in their diameters and thicknesses; the thicknesses are spaced on a logarithmic scale. In total, the CDMAM phantom contains gold cylinders of 16 different diameters [4, 5]. In each cell, one of the gold cylinders is placed in the centre and the other one is located randomly in one of the four corners of the cell. The detection limit of the mammography unit is evaluated by comparing the pixel intensities in the four corners of each cell; for details see [4, 5]. To locate the disks in each cell, a template is created; the average pixel value under the template is calculated for each corner, and the corner with the highest average is chosen. For each diameter, the CDC shows the minimum thickness of a gold cylinder such that it can still be detected. The lower the CDC, the better the image quality of the mammography unit is judged, and the image quality is viewed as sufficiently good when the CDC lies below a pre-defined limiting CDC.
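The corner comparison can be sketched as follows. The square-patch template and the scoring rule below are simplifications of the template-based readout described in [4, 5], introduced here only for illustration:

```python
import numpy as np

def detect_corner(cell: np.ndarray, disk_radius: int) -> int:
    """Pick the corner whose template-averaged pixel value is highest.

    cell        -- 2D array of pixel intensities for one phantom cell
    disk_radius -- half-size (in pixels) of the square patch used here as a
                   simplified stand-in for the circular disk template

    Returns the index (0..3) of the chosen corner.
    """
    h, w = cell.shape
    q = disk_radius
    corners = [
        cell[:2 * q, :2 * q],          # 0: top-left
        cell[:2 * q, w - 2 * q:],      # 1: top-right
        cell[h - 2 * q:, :2 * q],      # 2: bottom-left
        cell[h - 2 * q:, w - 2 * q:],  # 3: bottom-right
    ]
    means = [float(patch.mean()) for patch in corners]
    return int(np.argmax(means))
```

Roughly speaking, the readout then compares the chosen corner with the known true position of the disk to decide whether the disk counts as detected at that diameter and thickness.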
Figure 1. The grid structure and the gold cylinders of the simulated CDMAM phantom image are similar to the real one. (The contrast of the image has been changed to enhance visibility. Hence, the image is displayed differently to the physicians while they are performing IQA).
To obtain reliable CDCs, the guidelines recommend using at least 16 images of a CDMAM phantom. These images are processed by an automated readout which computes the threshold for 12 of the 16 available diameters. To make this readout comparable with the human-readout method, these values are rescaled by an experimentally found relation [13, 14]. The automated readout can be carried out by the CDMAM Analyser software. If the automated readout fails, it is always possible to perform a human readout to determine the CDC.
2.2. A NN approach to predict CDCs
We have simplified the determination of a CDC to a regression task solved by a NN that needs only one single image, as had already been suggested in [10]. We extended and generalized the results obtained in [10] by using a scale-independent objective function and a larger database to train the NN. To this end, we have tested our approach on real images of 31 mammography units; table 3 in the appendix summarizes their specifications.
In mammography, synthetic data have been used in the context of IQA before [16, 17]. To train our NN, we used synthetic data generated by the data generator from [12]. This simulator represents the phantom as an accumulation of voxels, where the features associated with each voxel mimic physical properties of the real phantom.
In the following we describe the important steps, such as the generation of the data, our objective function and the preprocessing methods in detail.
2.2.1. Generating synthetic data
To train the NN on a suitable database, simulated phantoms that capture the salient features of the real phantoms are used. The simulation process is described in detail in [12]. We use the software developed in that paper to generate our synthetic data. The simulation process can be described in three steps:
- 1.The generation of a simulated phantom: the simulated CDMAM phantom is composed of voxels. Each voxel represents either gold, grid or background material. The voxels of different materials are arranged such that the geometry of the simulated phantom resembles the geometry of the real phantom as can be seen in figure 1.
- 2.The simulation of the x-ray path: the path for a photon through the material is computed by Siddon’s algorithm [18], and the absorption of an x-ray with respect to different materials is described by Beer–Lambert’s law. Note that the emission of the radiation is simulated by a point source, while the real x-ray tube emits its photons as a small area source. This effect is taken into account in the degradation step.
- 3.The degradation of the simulated image: real phantom images face several degradation effects like detector noise, scattered radiation, limited spatial resolution, etc. Incorporating all of these effects in detail is challenging. We consequently approximate these effects by applying a Gaussian filter.
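The three steps above can be condensed into a toy sketch. The attenuation coefficients, material names and blur width below are illustrative placeholders, not the values used by the simulator of [12]:

```python
import numpy as np

# Illustrative linear attenuation coefficients (1/mm); NOT the simulator's values.
MU = {"background": 0.05, "grid": 0.10, "gold": 5.0}

def attenuate(path_lengths: dict) -> float:
    """Beer-Lambert law: transmitted intensity fraction for one ray.

    path_lengths maps material name -> intersection length (mm) of the ray
    with that material, as Siddon's algorithm would compute in step 2.
    """
    total = sum(MU[m] * t for m, t in path_lengths.items())
    return float(np.exp(-total))

def degrade(image: np.ndarray, sigma: float = 1.0) -> np.ndarray:
    """Step 3: approximate all degradation effects by a Gaussian filter,
    implemented here as a separable convolution along both axes."""
    radius = int(3 * sigma)
    x = np.arange(-radius, radius + 1)
    kernel = np.exp(-x**2 / (2 * sigma**2))
    kernel /= kernel.sum()
    blurred = np.apply_along_axis(
        lambda row: np.convolve(row, kernel, mode="same"), 1, image)
    return np.apply_along_axis(
        lambda col: np.convolve(col, kernel, mode="same"), 0, blurred)
```

A ray through 0.2 mm of gold, for instance, is attenuated to exp(-5.0 · 0.2) of its incident intensity under these toy coefficients.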
In figure 1, it can be seen that the simulated images capture the grid structure and the gold cylinders of the real images. These are the basic geometric structures that are relevant to compute the CDC.
2.2.2. Training the NN
We trained the NN to predict for a single image the same CDC as the CDMAM Analyser software which uses at least 16 images. We have therefore taken the CDC obtained by the CDMAM Analyser software as our ground truth 3 .
In general, the mean squared error (MSE) is a suitable objective for training statistical models for regression problems. It minimizes the squared differences between the prediction and the true value y, but this objective led to a systematic bias for some scenarios in our experiments. This might be related to the scale dependency of the MSE. To account for the different magnitudes of the diameters, we conclude that the mean squared log error (MSLE)

$$\mathrm{MSLE}(\hat{y}, y) = \frac{1}{n}\sum_{i=1}^{n}\sum_{j=1}^{C}\left(\ln\frac{\hat{y}_{ij}}{y_{ij}}\right)^{2}\qquad(1)$$

is a better suited objective for optimization in our case. The MSLE takes the mean over all n samples while accumulating the differences over all C classes. For a CDC we have C = 12, representing the 12 different diameters that are used by the CDMAM Analyser software.
Let us mention some important properties of (1):
- 1. It is well-defined only if the quotient $\hat{y}_{ij}/y_{ij}$ is positive.
- 2. It is a non-negative function and the minimum is attained if and only if the prediction coincides with the true value: $\mathrm{MSLE}(\hat{y}, y) = 0 \iff \hat{y} = y$.
The true values are always positive but the predictions can be close to zero or even negative; especially at the beginning of the training, the NN can produce negative predictions. Therefore, we cannot use (1) directly as an optimization objective. We shall approximate (1) such that it is well-defined for all values 4 . To this end, we propose

$$\widetilde{\mathrm{MSLE}}_{\beta}(\hat{y}, y) = \frac{1}{n}\sum_{i=1}^{n}\sum_{j=1}^{C}\left(\ln\frac{\mathrm{softplus}_{\beta}(\hat{y}_{ij}) + \varepsilon}{y_{ij}}\right)^{2}\qquad(2)$$
as an approximation of (1). The softplus function $\mathrm{softplus}_{\beta}(x) = \frac{1}{\beta}\ln(1 + e^{\beta x})$ is strictly positive and approximates the identity function for large values of x, which can be seen by a Taylor expansion. This implies that (2) is always well-defined. For (2) to be a proper approximation of the objective (1), we have to ensure that the minima of (2) and (1) essentially coincide. Since $\mathrm{softplus}_{\beta}(x) > x$, the softplus slightly shifts the position of the minimum of (2): it is attained at $\hat{y}_{ij} = \mathrm{softplus}_{\beta}^{-1}(y_{ij} - \varepsilon)$, which approaches the true value $y_{ij}$ for large β and small ε.
In addition, we have added a regularizer ε > 0 for numerical stability, because the logarithm might diverge for arguments close to zero. The constant ε guarantees finite results of the objective function during training. For our analysis we fixed β and ε to specific values, but the qualitative form of (2) is independent of these choices, as can be seen in figure 2.
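A minimal NumPy sketch of the approximated objective, assuming the softplus and ε enter the logarithm as described above; β and ε below are illustrative defaults, and a PyTorch version for training would use torch operations instead so that gradients can flow:

```python
import numpy as np

def softplus(x, beta=1.0):
    """Numerically stable softplus: (1/beta) * log(1 + exp(beta * x)).
    Strictly positive, and approximately the identity for large x."""
    return np.logaddexp(0.0, beta * x) / beta

def approx_msle(pred, target, beta=1.0, eps=1e-6):
    """Approximated MSLE, cf. equation (2): well-defined even for the
    negative predictions a NN may produce early in training.
    For simplicity this sketch averages over all entries rather than
    averaging over samples and summing over the C classes."""
    ratio = (softplus(pred, beta) + eps) / target
    return float(np.mean(np.log(ratio) ** 2))
```

Whereas the plain MSLE is undefined for a negative prediction, `approx_msle` stays finite there and is close to zero when the (softplus-transformed) prediction matches the target.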
Figure 2. The curves of a univariate MSLE (1) and its approximation (2) are plotted. This plot illustrates how the approximation extends the domain to all real numbers for different values of β. Larger values of β approximate the objective (1) better but make the training of the NN more difficult.
2.2.3. Preprocessing: reducing the image size
High-resolution images can make the training of a NN very slow or even infeasible. Special methods have thus been developed to tackle this problem [19, 20]. In this work, the very simple, yet efficient method of reducing the number of pixels by shrinking the image size has been chosen. To check the robustness of our method, we used six different resizing methods: random downsampling (DS) [10], area, linear, nearest and cubic as well as lanczos4 [21], and two different image sizes: 250 × 250 and 500 × 500 pixels.
All resizing methods except DS are deterministic. For DS, we averaged the predictions over n = 8 random downsamplings for each image to reduce the standard deviation and enhance the quality of the prediction.
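Random downsampling is the only stochastic method of the six. The block-based sketch below is our reading of the method from [10]; repeated calls yield different downsampled images, whose predictions can then be averaged (n = 8 in our setup):

```python
import numpy as np

def random_downsample(img: np.ndarray, out_size: int, rng=None) -> np.ndarray:
    """Shrink a square image to out_size x out_size by picking one random
    pixel from each block of the input. Each call draws a new sample."""
    rng = np.random.default_rng() if rng is None else rng
    h, w = img.shape
    bh, bw = h // out_size, w // out_size  # block sizes
    # one random offset inside its block for every output pixel
    rows = np.arange(out_size)[:, None] * bh + rng.integers(bh, size=(out_size, out_size))
    cols = np.arange(out_size)[None, :] * bw + rng.integers(bw, size=(out_size, out_size))
    return img[rows, cols]
```

Averaging then amounts to something like `np.mean([model(random_downsample(img, 250)) for _ in range(8)], axis=0)` for a given prediction function `model` (hypothetical name).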
3. Experiments
In this section, we present our experimental study to evaluate the efficiency of the proposed approach. The code that we used to obtain these results is freely available 5 and has been developed within the PyTorch framework [22]. The standard architectures (ResNet, DenseNet, VGG, EfficientNet) have been taken from PyTorch as well.
We generated 46 sets of simulations. Each set differs in its simulation parameters and contains 50 simulations. In these simulations we varied the exposure between 70 and 120 mAs, the tube voltage between 23 and 33 kVp and the signal-to-noise ratio between 5.5 and 26.5. This gives a dataset of 2300 images in total. These simulations were labeled with the CDMAM Analyser software. The labels computed by the CDMAM Analyser software have been taken as the ground truth to train the NN.
For all scenarios, i.e. architectures and preprocessing methods, the NN was trained for 150 epochs with the Adam optimizer [23] using default settings and a learning rate of 10−4. We reduced the learning rate every 50 epochs by a factor of 0.5. In addition, we used weight decay by a factor 10−4.
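This schedule can be written in closed form; in PyTorch it corresponds, for example, to `torch.optim.lr_scheduler.StepLR` with `step_size=50` and `gamma=0.5`:

```python
def learning_rate(epoch: int, base_lr: float = 1e-4) -> float:
    """Learning rate starting at 1e-4 and halved every 50 epochs,
    as used for all scenarios."""
    return base_lr * 0.5 ** (epoch // 50)
```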
3.1. Results for ResNet18 on image size 500 × 500 pixels
In this section we present the prediction of the CDCs on real images obtained for (ResNet18, 500) for all the resizing methods. The ground truth CDCs (black, solid) are obtained by the CDMAM Analyser software from at least 16 images for each unit, whereas the NN predicts each CDC (colored, dashed) from a single image alone. We plotted the CDCs for 18 out of 31 different mammography units. To give a good impression of the performance of the NN, we show the nine best CDCs (in figure 3) as well as the nine worst CDCs (in figure 4). We selected these CDCs based on the visual (dis)agreement of the ground truth and the predictions. Studying the CDCs, it can be observed that the predictions do not look smooth in contrast to the true values. The reason for this is that we have not trained the NN to predict the smooth functional form of the CDCs that the CDMAM Analyser software fits to its measurements. Instead, we trained it to predict each diameter independently [4, 5]. It is thus not unexpected that these predictions do not reproduce this functional form exactly. In fact, some of the CDCs predicted by the trained NNs slightly violate monotonicity. Enforcing monotonicity could be achieved, for example, by training a NN to predict the coefficients of the fitted functional form. However, we chose to predict the CDC for each diameter independently in order not to restrict the approach to a specific functional form.
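For illustration, a simple post-hoc projection (not used in this work) would enforce a monotone CDC, assuming the threshold thickness is non-increasing in the diameter:

```python
import numpy as np

def enforce_monotone(thicknesses: np.ndarray) -> np.ndarray:
    """Make predicted threshold thicknesses non-increasing with diameter.

    thicknesses -- predictions ordered by increasing diameter. A running
    minimum from the smallest diameter onward removes small violations.
    """
    return np.minimum.accumulate(thicknesses)
```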
Figure 3. Best predicted CDCs of real images predicted by ResNet18. The image size has been reduced to 500 × 500 pixels with all six resizing methods. Each subplot refers to a different mammography unit.
Figure 4. Worst predicted CDCs of real images predicted by ResNet18. The image size has been reduced to 500 × 500 pixels with all six resizing methods. Each subplot refers to a different mammography unit.
In figure 4, it can be noted that predictions for some units have a bias: their predictions lie either above or below the true values. Even though there are some mammography units on which the performance of our NN is limited, we see in figure 3 that many units are fitted very well. Furthermore, these plots suggest that an ensemble of NNs with different resizing methods could enhance the quality of the prediction. For a quantitative assessment we refer to tables 1 and 2 below.
Table 1. Median and rMAD of the NARMSLE in percent (i.e. multiplied by 100) for the 31 real mammography units after training various models with different preprocessing methods of resizing images to 250 × 250 or 500 × 500 pixels. The digits in parentheses give the uncertainty of the corresponding significant digits, e.g. 5.3(14) means 5.3 ± 1.4. The scenarios in which the median and rMAD of the NARMSLE are smaller than or equal to τ = 7.5% are shown in bold.
Architecture | Size | DS | Area | Linear | Nearest | Cubic | Lanczos4 |
---|---|---|---|---|---|---|---|
DenseNet121 | 250 | 5.3(14) | 5.0(17) | 5.8(21) | 5.2(21) | 5.1(16) | 4.6(20) |
DenseNet121 | 500 | 5.5(35) | 4.4(15) | 5.2(23) | 6.5(39) | 5.7(32) | 4.2(18) |
ResNet18 | 250 | 5.5(30) | 6.0(22) | 6.6(46) | 6.8(41) | 4.5(23) | 4.0(26) |
ResNet18 | 500 | 4.6(23) | 4.8(32) | 4.2(26) | 4.5(31) | 4.2(30) | 4.3(23) |
VGG13bn | 250 | 5.4(35) | 4.5(27) | 5.3(21) | 5.1(25) | 5.7(38) | 4.8(34) |
VGG13bn | 500 | 4.5(23) | 4.9(22) | 5.7(41) | 5.0(25) | 5.4(43) | 5.6(52) |
EfficientNetS | 250 | 3.5(23) | 4.8(33) | 3.7(25) | 3.4(24) | 3.7(23) | 4.2(23) |
EfficientNetS | 500 | 4.5(23) | 3.8(22) | 3.3(23) | 4.5(26) | 4.6(25) | 4.5(22) |
CNN | 250 | 6.3(32) | 7.0(42) | 6.7(27) | 7.1(38) | 6.0(36) | 6.6(34) |
CNN | 500 | 6.9(41) | 7.5(47) | 4.8(44) | 8.0(37) | 8.4(65) | 9.2(61) |
a The architecture of the custom CNN can be found in the corresponding code.
Table 2. The median variation of the prediction across different input images, in percent, and the corresponding rMAD for all 31 real mammography units after training various models with different preprocessing methods of resizing images to 250 × 250 or 500 × 500 pixels. The digits in parentheses give the uncertainty of the corresponding significant digits, e.g. 1.8(13) means 1.8 ± 1.3. It can be seen that the prediction of the NN depends only mildly on the chosen image.
Architecture | Size | DS | Area | Linear | Nearest | Cubic | Lanczos4 |
---|---|---|---|---|---|---|---|
DenseNet121 | 250 | 1.8(13) | 1.2(4) | 1.5(10) | 1.5(6) | 1.4(6) | 1.3(7) |
DenseNet121 | 500 | 1.4(5) | 0.9(5) | 1.2(4) | 1.0(4) | 1.0(4) | 0.9(5) |
ResNet18 | 250 | 1.7(8) | 0.7(7) | 1.5(5) | 1.2(7) | 1.0(5) | 0.8(5) |
ResNet18 | 500 | 0.9(4) | 0.8(5) | 0.5(3) | 0.6(2) | 0.7(2) | 0.7(3) |
VGG13bn | 250 | 0.4(3) | 0.3(1) | 0.3(1) | 0.2(1) | 0.3(1) | 0.2(1) |
VGG13bn | 500 | 0.2(1) | 0.2(1) | 0.2(1) | 0.1(1) | 0.1(1) | 0.1(1) |
EfficientNetS | 250 | 0.6(1) | 0.4(2) | 0.5(2) | 0.5(2) | 0.6(3) | 0.4(1) |
EfficientNetS | 500 | 0.5(2) | 0.4(2) | 0.4(1) | 0.5(2) | 0.5(2) | 0.5(1) |
CNN | 250 | 0.5(2) | 0.1(1) | 0.2(1) | 0.3(2) | 0.2(2) | 0.3(2) |
CNN | 500 | 0.3(2) | 0.1(1) | 0.2(1) | 0.2(1) | 0.2(1) | 0.2(1) |
To obtain an overview of the image-wise performance, we display in figure 5 the image-wise correlation between the predicted and true minimal thickness of the 12 relevant gold cylinders for cubic resizing. This plot gives complementary information because it shows the different predictions of the NN for differently selected images; each point represents the minimal detected thickness for one diameter value of one image of the CDMAM phantom for a specific mammography unit. All points with the same color belong to the same mammography unit. We see that the deviation from the diagonal appears to be small. This indicates that the NN makes meaningful predictions. In addition, points of the same color look clustered in this scatterplot. This implies that the predictions have small variations with respect to different inputs for the same mammography unit. Hence, the NN is able to predict the CDC based on a single image. For all scenarios, the prediction quality and its variation are made quantitative in tables 1 and 2.
Figure 5. The correlation between predicted and true thickness of the gold cylinders is displayed on a log–log scale for the scenario (ResNet18, Cubic, 500). The results of all the 31 real units are summarized in this plot. Each color represents one mammography unit and each point a single prediction made at one diameter for one selected image. To improve the visualization we jittered all the points along the horizontal axis slightly.
3.2. Comparison of different architectures and preprocessing methods
Our experiments show that the NN makes good predictions under the chosen scenarios for almost all units. In table 1 we compare the different scenarios. First, we computed the average value of the approximated root MSLE (ARMSLE) per unit and diameter. To obtain a sensible dimensionless quantity, we related it to $\log_{10}\bigl(y^{(j)}_{\max}/y^{(j)}_{\min}\bigr)$, where $y^{(j)}_{\max}$ and $y^{(j)}_{\min}$ denote the maximum and minimum of the thicknesses for each diameter j with respect to the CDMAM phantom (version 3.4). Then we averaged this quantity over all C diameters. We call this quantity normalized ARMSLE (NARMSLE):

$$\mathrm{NARMSLE}_{d} = \frac{1}{C}\sum_{j=1}^{C}\frac{\sqrt{\frac{1}{m_{d}}\sum_{i=1}^{m_{d}}\left(\log_{10}\frac{\mathrm{softplus}_{\beta}(\hat{y}_{ij}) + \varepsilon}{y_{ij}}\right)^{2}}}{\log_{10}\bigl(y^{(j)}_{\max}/y^{(j)}_{\min}\bigr)}\qquad(4)$$
where $m_{d}$ is the number of images per unit d. In contrast to (2), we used base 10 instead of base e for this evaluation because the CDCs are plotted on a logarithmic scale of base 10; this change of base is indicated by an additional subscript. The overall performance has been assessed by the median and the rescaled median absolute deviation of the NARMSLE over all units. The median absolute deviation has been rescaled by the inverse of 0.675 (rMAD) 6 . The median and rMAD

$$\mathrm{rMAD} = \frac{1}{0.675}\,\underset{d}{\mathrm{median}}\left|\mathrm{NARMSLE}_{d} - \underset{d'}{\mathrm{median}}\,\mathrm{NARMSLE}_{d'}\right|$$

of the NARMSLE for all of these units are shown in table 1. These results demonstrate that our method depends only slightly on the architecture and the preprocessing method. We highlighted in bold all scenarios whose median and rMAD do not exceed τ to show which of them performed particularly well; we chose τ = 0.075. However, we observed that the NN had difficulties in fitting device 8, device 14 or device 18 in some scenarios. We elaborate on this issue in the appendix.
Furthermore, the predictions of the NN hardly depend on the chosen image. Table 2 displays the variation of the predictions. A small value indicates that the prediction of the NN is insensitive to the particular image recorded by the device. To evaluate this variation for a unit d, the prediction for each of its images was compared to the unit's mean prediction using formula (4), replacing the ground truth y with the mean of the predictions. Table 2 shows the median and rMAD of these quantities. For most scenarios the variations are less than 1%.
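The median and rMAD summary statistics used throughout this section can be computed as follows; the factor 1/0.675 is the rescaling from footnote 6:

```python
import numpy as np

def median_and_rmad(values: np.ndarray) -> tuple:
    """Median and rescaled median absolute deviation (MAD / 0.675).

    For a large sample of a normal distribution the rMAD equals the
    standard deviation, which motivates the rescaling factor."""
    med = float(np.median(values))
    rmad = float(np.median(np.abs(values - med)) / 0.675)
    return med, rmad
```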
These results suggest that the prediction of the NN would not change significantly if more images were used to obtain a prediction. Indeed, we checked for several units that this is the case: the prediction of the NN is stable with respect to the number of images used to predict the CDC.
4. Conclusion and future work
In this paper we have proposed a general method to train a NN for IQA in mammography. In contrast to previous work [10], our training strategy is based neither on incremental learning nor on real data. Moreover, our approach converges much faster, which we attribute to our optimization procedure and the large, diverse synthetic dataset.
In addition, our experimental study in section 3 shows that the synthetic data capture the important features of the real images of the CDMAM phantom. More than 90% of the CDCs are fitted well across different scenarios by our method, as displayed in table 1. We tested our approach in 60 scenarios (five architectures, two image sizes and six resizing methods) and it works across all scenarios for most mammography units without any fine-tuning. Only three mammography units show erratic behaviour in some scenarios.
A limitation of our experimental evaluation is that most mammography units use either a CsI–Si or a selenium detector (cf table 3). Even though our approach also worked on the other two types of image receptors, more mammography units of these types should be studied to further validate our approach.
Our work shows for the first time that deep learning based IQA is able to predict CDCs for different mammography units. Furthermore, these results generalize well across different architectures and preprocessing methods. Hence, this approach has the potential to complement the current method [4, 5] in the future.
Another important potential benefit is that our method is data-driven and offers the opportunity to predict CDCs using alternative phantoms such as anthropomorphic phantoms [24]. Due to the flexibility of NNs and the demonstrated generalizability, it can be expected that NNs are able to learn to make reliable predictions on different phantoms as well.
Acknowledgments
The authors would like to thank Tobias Kretz for answering our questions regarding the synthetic data generator. This project is part of the programme ‘Metrology for Artificial Intelligence in Medicine’ (M4AIM) that is funded by the Federal Ministry for Economic Affairs and Climate Action (BMWK) in the frame of the QI-Digital initiative as well as the programme ‘Machine Learning for Medical Imaging’ (ML4MedIm).
Data availability statement
The data cannot be made publicly available upon publication because they contain commercially sensitive information. The data that support the findings of this study are available upon reasonable request from the authors.
Appendix
A.1. Discussion about problematic units
Table 1 shows that our approach works for almost all mammography units since the median and the rMAD of the chosen quality metric of the results are smaller or equal to τ = 0.075. However, we observed that in some scenarios the NN could not predict a few units well. A detailed analysis of this issue reveals that there are three problematic units: device 8, device 14 and device 18.
In figure 6 we give an example of this phenomenon. In this scatterplot the image-wise predictions of (EfficientNetS, Nearest, 250) are visualized. It can be seen that most units are fitted better by (EfficientNetS, Nearest, 250) than by (ResNet18, Cubic, 500), cf figure 5, but the three units device 8 (light pink), device 14 (cyan) and device 18 (neon green) perform worse. Furthermore, for these problematic units the NN shows a high variance, as the predictions vary more strongly. Thus there appears to be a relationship between prediction quality and the variance of the prediction.
Figure 6. The correlation between predicted and true thickness of the gold cylinders is displayed on a log–log scale for the scenario (EfficientNetS, Nearest, 250). The results of all the 31 real units are summarized in this plot. Each color represents one mammography unit and each point a single prediction made at one diameter for one selected image. To improve the visualization we jittered all the points along the horizontal axis slightly.
In addition, in our experiments we could not trace the occasionally insufficient performance on these devices back to any particular scenario. Hence, we conclude that this phenomenon is related to the training data itself. To improve upon this, different data augmentations covering the idiosyncrasies of these images, a different simulator or the inclusion of real data could be applied.
A.2. Specifications of the mammography units
In this study we tested our approach on 31 real mammography units. In this section we summarize some key aspects of these units to characterize their variety and provide the specifications of the tested mammography units in table 3.
We considered four types of image receptors: 9 selenium detectors, 19 CsI–Si detectors, 2 computed radiography systems and 1 photon counting system. These units cover the following anode and filter materials: tungsten (W), rhodium (Rh), molybdenum (Mo) and aluminium (Al), where the anode/filter configuration Rh/Rh is the most common; it is used by 18 units with a CsI–Si detector. For these 18 units, three different current time products (32 mAs, 63 mAs and 125 mAs) and six different phantoms, characterized by their serial numbers, were used. Overall, the applied tube voltage (TV) ranges from 27 kV to 32 kV and the current time product (CTP) varies between 17 mAs and 140 mAs.
Table 3. Technical specifications of the images of the different mammography units.
Device | Type of image receptor | Anode/Filter | TV (kV) | CTP (mAs) | CDMAM serial no. | Image processed | Antiscatter grid in |
---|---|---|---|---|---|---|---|
1 | Selenium detector | W/Rh | 30 | 90 | 1002 | False | True |
2 | CsI–Si detector | Mo/Mo | 27 | 110 | 1002 | False | True |
3 | Selenium detector | W/Rh | 30 | 121 | 1484 | False | True |
4 | Selenium detector | W/Rh | 30 | 121 | 1002 | False | True |
5 | CsI–Si detector | Rh/Rh | 29 | 32 | 1484 | False | True |
6 | CsI–Si detector | Rh/Rh | 29 | 125 | 1002 | False | True |
7 | CsI–Si detector | Rh/Rh | 29 | 63 | 1488 | False | True |
8 | Selenium detector | W/Rh | 30 | 71 | 1002 | True | True |
9 | CsI–Si detector | Rh/Rh | 29 | 32 | 1002 | False | True |
10 | CsI–Si detector | Rh/Rh | 29 | 125 | 1486 | False | True |
11 | CsI–Si detector | Rh/Rh | 29 | 125 | 1484 | False | True |
12 | CsI–Si detector | Rh/Rh | 29 | 32 | 1485 | False | True |
13 | CsI–Si detector | Rh/Rh | 29 | 32 | 1486 | False | True |
14 | Selenium detector | W/Rh | 30 | 71 | 1002 | False | True |
15 | CsI–Si detector | Rh/Rh | 29 | 125 | 1487 | False | True |
16 | CsI–Si detector | Rh/Rh | 29 | 32 | 1487 | False | True |
17 | Computed radiography system | Mo/Rh | 29 | 71 | 1002 | False | True |
18 | Photon counting system | W/Al | 32 | 17 | 1002 | False | False |
19 | CsI–Si detector | Rh/Rh | 29 | 63 | 1485 | False | True |
20 | CsI–Si detector | Rh/Rh | 29 | 63 | 1487 | False | True |
21 | Selenium detector | W/Rh | 30 | 95 | 1002 | False | False |
22 | Selenium detector | W/Rh | 30 | 110 | 1002 | False | True |
23 | CsI–Si detector | Rh/Rh | 29 | 63 | 1002 | False | True |
24 | CsI–Si detector | Rh/Rh | 29 | 125 | 1488 | False | True |
25 | CsI–Si detector | Rh/Rh | 29 | 63 | 1486 | False | True |
26 | Selenium detector | W/Rh | 31 | 133 | 1002 | False | True |
27 | CsI–Si detector | Rh/Rh | 29 | 125 | 1485 | False | True |
28 | Computed radiography system | W/Rh | 29 | 71 | 1002 | False | True |
29 | CsI–Si detector | Rh/Rh | 29 | 32 | 1488 | False | True |
30 | CsI–Si detector | Rh/Rh | 29 | 63 | 1484 | False | True |
31 | Selenium detector | W/Rh | 31 | 140 | 1002 | False | True |
Footnotes
- 3
We use the CDMAM Analyser software (version 1.5.5) from NCCPM.
- 4
Alternatively, one could tackle this problem by forcing the NN to predict positive values only. This could be achieved by concatenating a softplus (or exponential) layer as the final layer to the NN for example. We observed that our method performs better than methods that force the NN to predict strictly positive values.
- 5
- 6
The rMAD of a large sample of a normal distribution equals the standard deviation of that distribution.