Reduction of detection limit and quantification uncertainty due to interferent by neural classification with abstention

https://doi.org/10.1016/j.nima.2022.167174

Abstract

Many measurements in the physical sciences can be cast as counting experiments, where the number of occurrences of a physical phenomenon informs the prevalence of the phenomenon’s source. Often, the phenomenon of interest (termed signal) is difficult to distinguish from naturally occurring phenomena (termed background). In this case, the discrimination of signal events from background can be performed using classifiers, which may range from simple, threshold-based classifiers to sophisticated neural networks. These classifiers are often trained and validated to obtain optimal accuracy; however, we show that the optimal-accuracy classifier does not generally coincide with the classifier that provides the lowest detection limit, nor the lowest quantification uncertainty. We present a derivation of the detection limit and quantification uncertainty in the classifier-based counting experiment case. We also present a novel abstention mechanism to minimize the detection limit or quantification uncertainty a posteriori. We illustrate the method on two data sets from the physical sciences, discriminating Ar-37 and Ar-39 radioactive decay from non-radioactive events in a gas proportional counter, and discriminating neutrons from photons in an inorganic scintillator, and report results therefrom.

Section snippets

Motivation

Many physical measurements consist of counting experiments (CEs), in which the rate of occurrence of an event conveys quantitative information about a physical system. These experiments are performed by counting discrete events over a designated counting time. The two main goals of such CEs are either to detect the presence of a given phenomenon in a physical system, or to measure the prevalence of a phenomenon; tasks whose performance is best indicated by the detection limit and

Relevant literature

A large uncertainty quantification literature, and in fact a sub-field of statistics, exists to determine the proper way to quantify samples in the presence of background. The statistical methods for quantifying uncertainty and determining detection limits given signal and background properties are mature and well founded, progressing to the point of technical manuals describing best practices [6]. This, however, is not true of the methods for classifier-based counting experiments (CBCEs). Statistical approaches to the uncertainty

Detection limit

A common use for counting experiments is to determine the presence of a given phenomenon; a use case that pervades many fields, including nuclear forensics, beyond-standard-model physics, and even medical diagnostic applications. For these cases, the detection limit, or minimum detectable amount, is the metric of interest for a given detection methodology. We show below that maximum-accuracy thresholds on CBCEs are in general not coincident with optimal detection limit thresholds, and
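For background, the classical Currie detection limit for a simple counting experiment can be sketched as follows. This is the standard textbook formula for a paired background measurement with symmetric error rates, not the CBCE derivation presented in this paper, and the function name is illustrative:

```python
import math

def currie_detection_limit(background_counts, k=1.645):
    """Classical Currie critical level and detection limit (in counts)
    for a paired background measurement, assuming equal type-I and
    type-II error rates (k_alpha = k_beta = k; k = 1.645 gives 5%).

    L_C = k * sqrt(2 * B)   (critical level)
    L_D = k^2 + 2 * L_C     (detection limit)
    """
    l_c = k * math.sqrt(2.0 * background_counts)  # critical level
    l_d = k * k + 2.0 * l_c                       # detection limit
    return l_c, l_d

# With B = 100 background counts, this reproduces the familiar
# approximation L_D ~ 2.71 + 4.65 * sqrt(B).
l_c, l_d = currie_detection_limit(100.0)
print(round(l_c, 2), round(l_d, 2))  # 23.26 49.23
```

In the CBCE setting the effective background depends on the classifier threshold, which is why, as the section argues, the threshold minimizing this quantity need not be the maximum-accuracy threshold.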

Measurement uncertainty

Another large class of counting experiments comprises those used to quantify a material or phenomenon of interest, beyond simply detecting it. This technique is again used in a broad variety of fields, from nuclear forensics to the prediction of political election results.

The results of such counting experiments, if analyzed without an estimate of interferent prevalence, can lead to extremely biased results. For example, a classifier discriminating analyte from interferent which has 90% accuracy, when

Other figures of merit

While we have focused in depth on two specific figures of merit for typical problems in counting statistics, the general method presented is not limited solely to detection limit or quantification uncertainty. Other figures of merit of interest could be false positive rate, risk minimization given varied costs of false positives versus false negatives, or F1-score. The more general conclusion of this paper could be stated thus: in any CBCE, an appropriate metric for CBCE performance should
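Any such figure of merit can be evaluated as a function of the abstention band in the same way the paper tunes detection limit and uncertainty. A minimal sketch with synthetic, hypothetical classifier scores (the score distributions and band edges are illustrative assumptions, not the paper's data):

```python
import numpy as np

rng = np.random.default_rng(0)
# Hypothetical scores: signal clustered near 0.8, background near 0.2
scores = np.concatenate([rng.normal(0.8, 0.15, 500),
                         rng.normal(0.2, 0.15, 500)])
labels = np.concatenate([np.ones(500), np.zeros(500)])

def metrics_with_abstention(scores, labels, lo=0.4, hi=0.6):
    """Abstain on ambiguous scores in [lo, hi); classify the rest
    at a 0.5 threshold. Returns (false positive rate, F1-score,
    fraction of events abstained on)."""
    keep = (scores < lo) | (scores >= hi)
    s, y = scores[keep], labels[keep]
    pred = (s >= 0.5).astype(float)
    tp = np.sum((pred == 1) & (y == 1))
    fp = np.sum((pred == 1) & (y == 0))
    fn = np.sum((pred == 0) & (y == 1))
    tn = np.sum((pred == 0) & (y == 0))
    fpr = fp / (fp + tn)
    f1 = 2 * tp / (2 * tp + fp + fn)
    return fpr, f1, 1.0 - keep.mean()

print(metrics_with_abstention(scores, labels))
```

Sweeping the band edges (lo, hi) and selecting the band that optimizes the chosen metric mirrors the a posteriori abstention tuning the paper applies to detection limit and quantification uncertainty.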

Conclusions

Given the utility of classifiers for correcting difficult-to-reduce backgrounds in counting experiments, we claim that a classifier in the cases presented here is better judged by its detection limit and minimal quantification uncertainty than by mere accuracy, and we demonstrated that these are not the same. We presented a derivation of the detection limit and measurement uncertainty in such classifier-based counting experiments.

We also presented an abstention mechanism and showed the utility of abstaining from classifying certain events in CBCEs.

Declaration of Competing Interest

The authors declare the following financial interests/personal relationships which may be considered as potential competing interests: Anthony Carado reports financial support was provided by National Nuclear Security Administration Office of Defense Nuclear Nonproliferation. AH serves as a reviewer for NIMA.

Acknowledgment

This research was funded by the National Nuclear Security Administration’s Office of Defense Nuclear Nonproliferation Research and Development, USA.

References (23)

  • Hagen, A., et al. Decision trees for optimizing the minimum detectable concentration of radioxenon detectors. J. Environ. Radioact. (2021)
  • Mace, E.K., et al. Use of neural networks to analyze pulse shape data in low-background detectors. J. Radioanal. Nucl. Chem. (2018)
  • Parsons, R.D., et al. Background rejection in atmospheric Cherenkov telescopes using recurrent convolutional neural networks. Eur. Phys. J. C (2020)
  • Pearkes, J., et al. Jet constituents for deep neural network based top quark tagging. (2017)
  • Renner, J., et al. Background rejection in NEXT using deep neural networks. J. Instrum. (2017)
  • Detection and quantification capabilities
  • Hellman, M.E. The nearest neighbor classification rule with a reject option. IEEE Trans. Syst. Sci. Cybern. (1970)
  • Thulasidasan, S., et al. On mixup training: Improved calibration and predictive uncertainty for deep neural networks. (2019)
  • Thulasidasan, S., et al. Combating label noise in deep learning using abstention. (2019)
  • Chow, C.K. On optimum recognition error and reject tradeoff. IEEE Trans. Inform. Theory (1970)
  • De Stefano, C., et al. To reject or not to reject: that is the question - an answer in case of neural classifiers. IEEE Trans. Syst. Man Cybern. C Appl. Rev. (2000)