Reduction of detection limit and quantification uncertainty due to interferent by neural classification with abstention
Motivation
Many physical measurements consist of counting experiments (CEs), in which the rate of occurrence of an event provides quantitative information about a physical system. These experiments are performed by discretely counting these events over a designated counting time. The two main goals of such CEs are either to detect the presence of a given phenomenon in a physical system or to measure the prevalence of a phenomenon, tasks whose performance is best indicated by the detection limit and
Relevant literature
A large uncertainty quantification literature, and in fact a sub-field of statistics, exists to determine the proper way to quantify samples in the presence of background. The statistical methods for quantifying uncertainty and determining detection limits given signal and background properties are mature and well-founded, having progressed to the point of technical manuals describing best practices [6]. This, however, is not true of the methods for classifier-based counting experiments (CBCEs). Statistical approaches to the uncertainty
Detection limit
A common use for counting experiments is to determine the presence of a given phenomenon, a use case which pervades many fields, including nuclear forensics, beyond-standard-model physics, and even medical diagnostic applications. For these cases, the detection limit, or minimum detectable amount, is the metric of interest for a given detection methodology. We show below that maximum-accuracy thresholds on CBCEs are in general not coincident with optimal detection-limit thresholds, and
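The divergence between the maximum-accuracy threshold and the optimal detection-limit threshold can be illustrated with a toy sketch. The example below assumes Currie's approximate 95%-confidence detection limit, L_D ≈ 2.71 + 4.65√B, and invented efficiency curves for a hypothetical classifier; the specific functional forms for signal efficiency and background acceptance are illustrative only, not taken from this paper.

```python
import numpy as np

def currie_detection_limit(background_counts):
    # Currie's approximate detection limit (95% confidence) for a
    # Poisson counting experiment with a well-known background.
    return 2.71 + 4.65 * np.sqrt(background_counts)

# Hypothetical operating curve: raising the decision threshold trades
# signal efficiency (eps_s) against background acceptance (eps_b).
# These toy functional forms are assumptions for illustration.
thresholds = np.linspace(0.0, 1.0, 101)
eps_s = 1.0 - thresholds**2        # fraction of signal kept (toy model)
eps_b = (1.0 - thresholds)**3      # fraction of background kept (toy model)

B = 1000.0  # expected raw background counts

# Effective detection limit in source units: the classifier keeps
# eps_b * B background counts, and only eps_s of the signal survives,
# so the count-space limit is divided by the signal efficiency.
L_D = currie_detection_limit(eps_b * B) / np.clip(eps_s, 1e-9, None)

# Classification accuracy for a balanced signal/background mixture.
accuracy = 0.5 * (eps_s + (1.0 - eps_b))

best_ld_threshold = thresholds[np.argmin(L_D)]
best_acc_threshold = thresholds[np.argmax(accuracy)]
print(best_ld_threshold, best_acc_threshold)
```

Even in this crude model, the threshold minimizing the detection limit sits far above the threshold maximizing accuracy: suppressing background aggressively is worth a substantial loss in signal efficiency when detection is the goal.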
Measurement uncertainty
Another large class of counting experiments are those used to quantify a material or phenomenon of interest, beyond simply detecting it. This technique is again used in a broad variety of fields, from nuclear forensics to prediction of political election results.
Such counting experiments, if analyzed without an estimate of interferent prevalence, can yield extremely biased results. For example, a classifier discriminating analyte from interferent which has 90% accuracy, when
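The bias described above, and the standard way to remove it, can be sketched numerically. The example below is a toy simulation with invented event counts (100 analyte events against 10,000 interferent events) and an assumed 90%-accurate classifier; the correction simply inverts the expected-count relation using the known true-positive and false-positive rates.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical ground truth: a few analyte events buried in a large
# interferent population, with a classifier that is "90% accurate"
# on both classes (illustrative numbers, not from the paper).
n_analyte, n_interferent = 100, 10_000
tpr, fpr = 0.90, 0.10  # true-positive and false-positive rates

# Simulate how many events the classifier labels as "analyte".
accepted = (rng.binomial(n_analyte, tpr)
            + rng.binomial(n_interferent, fpr))

# Naive estimate: take the accepted count at face value.
# Here the false positives dominate, so this is badly biased high.
naive_estimate = accepted

# Corrected estimate: invert the expected-count relation
#   E[accepted] = tpr * n_analyte + fpr * (N - n_analyte),
# where N is the known total number of counted events.
N = n_analyte + n_interferent
corrected_estimate = (accepted - fpr * N) / (tpr - fpr)

print(naive_estimate, round(corrected_estimate))
```

The naive count lands near 1,090 (roughly eleven times the true analyte count), while the inverted estimate recovers a value near the true 100, up to Poisson-scale fluctuations.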
Other figures of merit
While we have focused in depth on two specific figures of merit for typical problems in counting statistics, the general method presented is not limited solely to detection limit or quantification uncertainty. Other figures of merit of interest could be false positive rate, risk minimization given varied costs of false positives versus false negatives, or F1 score. The more general conclusion of this paper could be stated as follows: in any CBCE, an appropriate metric for CBCE performance should
Conclusions
Given the utility of classifiers for correcting difficult-to-reduce backgrounds in counting experiments, we claim that, for the cases presented here, a classifier is better judged by its detection limit and measurement uncertainty than by mere accuracy. We demonstrated that these are not the same. We presented a derivation of the detection limit and measurement uncertainty in such classifier-based counting experiments.
We also demonstrated the utility of abstaining from classifying certain events in CBCEs.
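A minimal sketch of the abstention idea, assuming a scalar classifier score in [0, 1] and illustrative acceptance thresholds (the function name and cutoff values here are hypothetical, not the paper's implementation):

```python
import numpy as np

def classify_with_abstention(scores, accept_lo=0.2, accept_hi=0.8):
    """Toy reject-option rule: scores near 0 are labeled interferent,
    scores near 1 are labeled analyte, and ambiguous mid-range scores
    are abstained on rather than forced into either class."""
    labels = np.full(scores.shape, -1)   # -1 marks an abstention
    labels[scores <= accept_lo] = 0      # confident interferent
    labels[scores >= accept_hi] = 1      # confident analyte
    return labels

scores = np.array([0.05, 0.15, 0.50, 0.55, 0.85, 0.95])
labels = classify_with_abstention(scores)
print(labels)  # → [ 0  0 -1 -1  1  1]
```

Discarding the ambiguous middle band shrinks the usable count but raises the purity of the retained events, which is the trade the detection-limit and uncertainty analyses above optimize over.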
Declaration of Competing Interest
The authors declare the following financial interests/personal relationships which may be considered as potential competing interests: Anthony Carado reports financial support was provided by National Nuclear Security Administration Office of Defense Nuclear Nonproliferation. AH serves as a reviewer for NIMA.
Acknowledgment
This research was funded by the National Nuclear Security Administration’s Office of Defense Nuclear Nonproliferation Research and Development, USA.
References (23)
- et al., Decision trees for optimizing the minimum detectable concentration of radioxenon detectors, J. Environ. Radioact. (2021)
- et al., Use of neural networks to analyze pulse shape data in low-background detectors, J. Radioanal. Nucl. Chem. (2018)
- et al., Background rejection in atmospheric Cherenkov telescopes using recurrent convolutional neural networks, Eur. Phys. J. C (2020)
- et al., Jet constituents for deep neural network based top quark tagging (2017)
- et al., Background rejection in NEXT using deep neural networks, J. Instrum. (2017)
- Detection and quantification capabilities
- The nearest neighbor classification rule with a reject option, IEEE Trans. Syst. Sci. Cybern. (1970)
- et al., On mixup training: Improved calibration and predictive uncertainty for deep neural networks (2019)
- et al., Combating label noise in deep learning using abstention (2019)
- On optimum recognition error and reject tradeoff, IEEE Trans. Inform. Theory (1970)