
Search for transient variations of the fine structure constant and dark matter using fiber-linked optical atomic clocks


Published 4 September 2020 © 2020 The Author(s). Published by IOP Publishing Ltd on behalf of the Institute of Physics and Deutsche Physikalische Gesellschaft
Citation: B M Roberts et al 2020 New J. Phys. 22 093010. DOI: 10.1088/1367-2630/abaace


Abstract

We search for transient variations of the fine structure constant using data from a European network of fiber-linked optical atomic clocks. By searching for coherent variations in the recorded clock frequency comparisons across the network, we significantly improve the constraints on transient variations of the fine structure constant. For example, we constrain the variation to |δα/α| < 5 × 10⁻¹⁷ for transients of duration 10³ s. This analysis also presents a possibility to search for dark matter: the mysterious substance, hypothesised to explain galaxy dynamics and other astrophysical phenomena, that is thought to dominate the matter density of the universe. At the current sensitivity level, we find no evidence for dark matter in the form of topological defects (or, more generally, any macroscopic objects), and we thus place constraints on certain potential couplings between the dark matter and standard model particles, substantially improving upon the existing constraints, particularly for large (≳10⁴ km) objects.


Original content from this work may be used under the terms of the Creative Commons Attribution 4.0 licence. Any further distribution of this work must maintain attribution to the author(s) and the title of the work, journal citation and DOI.

The nature of dark matter is one of the most important outstanding problems in physics today. Despite composing the majority of the matter in the universe, evidence for dark matter particles in direct detection experiments remains elusive [1]. So far, much of the focus has been on weakly-interacting massive particles with masses at the ≳ GeV scale; the continued lack of evidence for their existence, however, has increased interest in a broader range of candidate models [2].

One possibility is that dark matter is composed of ultralight boson fields (masses ≪ 1 eV). Such fields may form classical oscillating fields that can be coherent on certain time scales [3, 4]. If the fields have specific self-interactions, they may also form stable macroscopic objects such as topological defects [5, 6]. If the fields have non-gravitational interactions with standard model fields, encounters between such objects and precision measurement devices may induce observable transient signatures in recorded data as Earth moves through the galactic dark matter halo [7].

Here, we consider topological defect dark matter objects that have quadratic scalar interactions with standard model particles. Such interactions lead to the effective rescaling of certain fundamental constants, which can shift atomic energy levels and transition frequencies (see, e.g., the recent review in reference [8]). Searches for transient frequency variations can then be performed by monitoring atomic clocks, which work by referencing the frequency of an external oscillator (e.g., a laser) to that of an atomic transition.

We note that ultralight dark matter can also cause long-term drifts [9–13] and local oscillations [14–19] of fundamental constants. Dark matter with other couplings can also be sought with atomic clocks [20] and networks of other precision measurement devices, such as magnetometers [21, 22]; such searches are complementary to those considered in this work. While we specifically consider quadratic couplings, the analysis applies equally for linear couplings (for the correspondence see, e.g., reference [17]), though these are more tightly constrained [23].

With only a single measurement device, it is impossible to distinguish a transient frequency variation caused by a variation in fundamental constants from one caused by terrestrial sources. With a distributed network, however, the time-delays between signals appearing across network nodes must be consistent with the passing of a galactic-speed transient. On the time-scales considered in this work (>60 s), this will manifest as a simultaneous signal visible in all data streams. In addition, having a network with multiple different clock types helps to discriminate against false positives. Different clock types will respond differently to effective changes in fundamental constants, with the relative sensitivities being a prediction of the theory [24].

In this work, we use data from a European network of fiber-linked optical atomic clocks to search for evidence of transient variations in clock frequencies. Our analysis has allowed us to substantially improve constraints on transient variations of the fine structure constant, α, particularly for time scales above ∼10² s, where the long-term stability of the atomic clock comparisons in this network offers the largest advantage over existing experiments. For example, at ∼10³ s, we constrain the transient variation to |δα/α| < 5 × 10⁻¹⁷. We consider only the variation of α since we employ optical clocks, which are only sensitive to this parameter [24].

This analysis can be interpreted in terms of a search for dark matter in the form of topological defects, and we find no evidence for such objects at the current sensitivity level. Assuming the defects make up the majority of the dark matter in the galaxy, we then place constraints on their possible couplings with standard model fields. Our results substantially improve upon the existing limits, particularly for large defects (≳10⁴ km).

1. Transient variations of constants

Transient effects, in general, are associated with two distinct time scales. Firstly, there is the duration of each transient effect, which we denote as τint. Secondly, there is the average time between consecutive transients, which we denote as $\mathcal{T}$. Due to better statistics, a more precise measurement (or a more stringent constraint) can be made for effects with longer transient durations. However, this requires good long-term measurement stability (i.e., no drifts) in order to track the signal over time. This is one benefit of laboratory clock–clock comparisons, which have excellent long-term frequency stability. Constraints for the time between transients are limited by the observation time.

Since the observable of an atomic clock is its frequency, for a comparison of two clocks with frequencies νA and νB, we define the ratio yAB ≡ νA/νB. The fractional variation in this ratio caused by a variation in α occurring during the sampling period τ for a pair of clocks both located at position r at time t is

$\frac{\delta {y}_{AB}}{{y}_{AB}}\left(\mathbf{r},t\right)=\frac{{K}_{AB}}{\tau }{\int }_{t}^{t+\tau }\frac{\delta \alpha \left(\mathbf{r},{t}^{\prime }\right)}{\alpha }\,\mathrm{d}{t}^{\prime } \qquad \text{(1)}$

where KAB quantifies the sensitivity of the frequency ratio yAB to the variation in α [25]. This factor depends both on the atomic species and the transition considered.

The clock output, driven by the external oscillator, is referenced to the frequency of the probed atomic transition for time-scales larger than the servo-loop time constant, τservo. In writing equation (1), we have assumed that the effective sampling interval is larger than the servo time: τ > τservo. The experiment will still have sensitivity to variations below this time-scale, and it is possible to extend the analysis by taking into account the clock and laser responses below the servo time. Here, we focus only on the region where τint > τservo, where optical clocks are the most efficient and the main advantage of the optical clock comparisons is realised.

Assuming the transient variation follows a Gaussian profile, i.e., $\delta \alpha \left(t\right)=\delta {\alpha }_{0}\enspace \mathrm{exp}\left(-{\left(t-{t}_{0}\right)}^{2}/{\tau }_{\mathrm{int}}^{2}\right)$, equation (1) can be evaluated simply. The maximum δα-induced perturbation is

$\frac{\delta {y}_{AB}^{0}}{{y}_{AB}}=\sqrt{\pi }\,{K}_{AB}\,\frac{\delta {\alpha }_{0}}{\alpha }\,\frac{{\tau }_{\mathrm{int}}}{\tau } \qquad \text{(2)}$

where δα0 is the maximum amplitude of the transient variation δα(t), which occurs at time t0. The result changes only slightly for other profiles; e.g., for a rectangular (top-hat) profile the $\sqrt{\pi }$ factor is absent.
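As an illustration of equations (1) and (2), the following minimal Python sketch numerically averages a Gaussian transient over the sampling window and compares the result with the closed form; all parameter values are examples chosen for the sketch, not the experimental ones.

```python
import numpy as np

# Illustrative numerical check of equations (1) and (2) for a Gaussian
# transient in alpha. All parameter values are example numbers only.
K_AB = 6.01        # sensitivity coefficient (e.g., an Sr/Yb+ comparison)
tau = 60.0         # effective sampling period (s)
tau_int = 10.0     # transient duration (s); chosen < tau so eq. (2) applies
dalpha0 = 1e-16    # peak fractional variation, delta(alpha_0)/alpha
t0 = 0.0           # time at which the transient peaks (s)

def dy_over_y(t_start, n=20_000):
    """Equation (1): average of K_AB * delta(alpha)(t)/alpha over one window."""
    t = np.linspace(t_start, t_start + tau, n)
    return K_AB * np.mean(dalpha0 * np.exp(-((t - t0) / tau_int) ** 2))

# Scan window start times to locate the maximum perturbation of the ratio.
starts = np.linspace(-3 * tau, 2 * tau, 1500)
dy_max = max(dy_over_y(s) for s in starts)

# Closed form of equation (2); the sqrt(pi) factor comes from the Gaussian.
dy_eq2 = np.sqrt(np.pi) * K_AB * dalpha0 * tau_int / tau
print(f"numerical maximum: {dy_max:.3e}")   # the two agree for tau_int < tau
print(f"equation (2):      {dy_eq2:.3e}")
```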

From this, one can constrain the possible values for δα0 by monitoring ratios of atomic clock frequencies. In the simplest case, the maximum allowed value for δα0 for a given τint is set by the maximum observed δy0 at the same time scale. An experiment with much greater sensitivity can be performed using a network of clocks, provided their instabilities are comparable, by searching for variations in the frequency ratios that are coherent across the entire network, and are consistent with a transient variation of α (given the known K coefficients).

2. Data and analysis

We analyse data from a European network of fiber-linked optical atomic clocks based on Sr, Hg, and Yb+ atoms, located in France, Germany, and the United Kingdom; see figure 1. The data were taken over a period of just over 40 days during May–June 2017. The operation of the clocks is described in references [26–32]. The same fiber links have been used previously for fundamental physics tests, e.g., in reference [33]. Due to the use of fiber links to perform the comparisons, the measurement stability is limited only by the instability of the clocks themselves, with negligible contributions from the fiber-based optical frequency transfer for the timescales >60 s considered here [26].


Figure 1. European fiber-linked optical clock network. The relevant lengths are the linear distances between laboratories, not the length of the actual optical fiber links. The links use forward/backward light reflections to actively cancel signal variations coming from within the link [26]. Therefore, the effect of variation of constants on the links themselves will not affect the results on time-scales longer than that of the round trip time. The typical light reflection time is 10⁻³ s, much shorter than the ∼10² to 10⁴ s transients studied here.


The base sampling interval of the data is 1 s. However, we average the data from each clock stream up to the largest servo-time of the considered clocks, in order to be consistent with the assumption in equation (1). The maximum servo times are those of the Yb+ clocks (${\tau }_{\text{servo}}^{\mathrm{max}}\sim $60 s), so the effective sampling period is taken to be τ = 60 s. For averaging periods larger than this, the noise of all the clock pairs is essentially white frequency noise, with frequency instability scaling as $1/\sqrt{\tau }$. At 10² s averaging, the fractional frequency instability approaches 10⁻¹⁶ for the Sr/Yb+ comparison at PTB, and a few times 10⁻¹⁶ for the other local comparisons; more details are given in the appendix. The relevant K factors are 6.01, −0.75, and 6.76, for the Sr/Yb+, Sr/Hg, and Hg/Yb+ comparisons, respectively [34, 35].
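For intuition, the sketch below (synthetic data, illustrative noise level) performs the averaging step described above and verifies the white-frequency-noise signature, ${\sigma }_{y}\propto 1/\sqrt{\tau }$, with a simple non-overlapping Allan deviation.

```python
import numpy as np

# Sketch (synthetic data): bin a 1 s fractional-frequency stream into 60 s
# averages and verify the white-noise signature sigma_y ~ 1/sqrt(tau) with a
# simple non-overlapping Allan deviation. The noise level is illustrative.
rng = np.random.default_rng(1)
y_1s = 3e-16 * rng.standard_normal(86_400)   # one day of white noise at 1 s

def bin_average(y, n):
    """Average consecutive blocks of n samples (n = 60 gives 60 s bins)."""
    m = len(y) // n
    return y[: m * n].reshape(m, n).mean(axis=1)

def allan_dev(y, m):
    """Non-overlapping Allan deviation at averaging time m * (1 s)."""
    ybar = bin_average(y, m)
    return np.sqrt(0.5 * np.mean(np.diff(ybar) ** 2))

y_60s = bin_average(y_1s, 60)                # the effective 60 s sampling
for m in (60, 240, 960):
    print(f"tau = {m:4d} s   sigma_y = {allan_dev(y_1s, m):.2e}")
# Each factor of 4 in tau halves sigma_y: the 1/sqrt(tau) trend of figure A.1.
```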

If the source of the variation in α is galactic, we can expect it to move relative to Earth with galactic speeds, vg ∼ 300 km s⁻¹ (set, e.g., by the motion of Earth through the galactic rest frame). If we assume that the relative velocity distribution for the transients is described by the standard halo model (as for dark matter, see, e.g., reference [36]), more than 99% of the transients would move relative to Earth with v ≳ 75 km s⁻¹ [37]. Provided that τint ≫ L/vg, where L is the distance between clocks, we can treat all the clocks in the network as co-located, in that they will be affected simultaneously. Since the longest baseline is 750 km (figure 1), a transient moving at 75 km s⁻¹ sweeps the network in at most 10 s, so this condition is easily satisfied for the τint ≳ 60 s time-scales considered here.

We use a maximum-likelihood method similar to the approach developed in reference [38] to search for transient frequency variations across the network. The details of the method are given in the appendix. In short, we define a likelihood function that quantifies how consistent the data covering a given time window is with a possible transient variation in α. We considered only time periods when at least two independent clock pairs (four clocks) were actively taking data, so that each clock appears only once in the combined data streams. This eliminates cross-correlations between clock pairs, which would complicate the analysis.

We also define a detection threshold for the likelihood, above which there can be no false-positives with 99% confidence. Here, a false-positive is defined as any time the likelihood surpasses the threshold due to purely random noise processes. Any time the likelihood is greater than the threshold can be investigated as a potential event. No such instances were found using the considered data set, allowing us to place constraints on the α variation.

For each time window throughout the total observation time, we calculate the best-fit value for δα0 (denoted $\delta {\alpha }_{0}^{\text{bf}}$) that maximises the likelihood for each relevant value of the possible interaction duration τint. The method also provides an estimate of the uncertainty, Δα0, in this best-fit value. Constraints can be placed by finding the largest best-fit $\delta {\alpha }_{0}^{\text{bf}}$ that appears throughout the span of the data as a function of τint, taking the uncertainty into account for the confidence level: $\vert \delta {\alpha }_{0}\vert {< }\vert \delta {\alpha }_{0}^{\text{bf}}{\vert }_{\mathrm{max}}+{\Delta}{\alpha }_{0}$; see the appendix for more details.

To interpret the analysis in terms of the time between transients, $\mathcal{T}$, we assume there was (at most) one event during the observation time Tobs with magnitude $\delta {\alpha }_{0}^{\text{bf}}$, and rule out the possibility of more frequent events with larger magnitudes. In the analysis, we only use sections of the data that are continuous for periods at least equal to τint with no gaps. Therefore, when performing the analysis for larger τint, we are restricted to using less of the data, which reduces the effective observation time. This reduces the applicable maximum $\mathcal{T}$ for the largest values of τint that can be fitted explicitly. The sensitive region can then be extended beyond this maximum directly probed value to larger τint according to equation (2), so long as τint is small compared to both $\mathcal{T}$ and the total observation time. These two conditions ensure that the sought signals would be well-separated transients (otherwise they may manifest as roughly constant additions to the clock frequencies, which would not be observable). Due to the observation time, we do not extend the constraints beyond τint = 10 h ≃ 4 × 104 s; this is described in more detail in the appendix. For the confidence level, we have assumed that the appearance of the transients follows a Poisson distribution. We thus place constraints only in the region with average time between transients $\mathcal{T}{< }{f}_{\text{P}}{T}_{\text{obs}}$, where fP is the Poisson statistics factor (fP = 0.87 for a 1σ confidence level).
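A minimal sketch of the arithmetic behind the quoted factor, under our reading that fP follows from demanding at least one Poisson-distributed event during Tobs at the 1σ confidence level:

```python
import numpy as np

# If events arrive as a Poisson process with mean spacing T, the probability
# of at least one event during T_obs is 1 - exp(-T_obs/T). Setting this equal
# to the 1-sigma confidence level gives the reach factor f_P in T < f_P*T_obs.
cl = 0.6827                       # 1-sigma confidence level
f_P = -1.0 / np.log(1.0 - cl)     # solve 1 - exp(-1/f_P) = cl
print(f"f_P = {f_P:.2f}")         # -> 0.87, as quoted in the text
```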

We first place constraints on transient variations of the fine structure constant, without direct reference to the possible source of the variation. The results are shown as a function of τint in figure 2. Previous constraints come from optical clock to cavity frequency comparisons [39, 40]. There are also complementary constraints from the microwave atomic clocks of the GPS constellation, which apply to a combination of variation in the fine structure constant and the fermion masses [37].


Figure 2. Constraints on the transient variation of the fine-structure constant α as a function of the transient duration, τint. The secondary horizontal axis shows the corresponding length scale, d = vgτint. The shaded curves show the regions of the parameter space that are excluded by various experiments (1σ confidence). Each curve is valid only below the presented maximum value for $\mathcal{T}$, the average time between consecutive transients. The new results of this work are shown in blue. Existing constraints from optical clock/cavity comparisons are shown in green (Wcisło et al [39, 40]). Limits also exist from microwave clocks of the GPS constellation (not shown); though they are substantially less stringent (δα/α ≲ 10⁻¹² for τint ∼ 30 s), they are valid up to $\mathcal{T}\simeq 16\enspace \mathrm{yr}\simeq 10^{5}\enspace \mathrm{h}$ [37].


Our analysis has substantially tightened the constraints on possible transient variations of the fine structure constant, α. The new constraints are particularly strong for time scales above ∼10² s, where the long-term stability of the atomic clock comparisons in this network offers the largest advantage over existing experiments. As discussed in the following section, these results also have important implications for the search for dark matter.

3. Topological defect dark matter

While our analysis is model-agnostic, and the constraints on the variation of the fine structure constant presented in figure 2 apply whatever the source of the variation may be, it is pertinent to interpret our results in terms of relevant cosmological models. To this end, we now introduce one specific model, which has been considered widely in the literature [7, 37–44], that may cause the frequency variations in equation (1). Consider a scalar field, ϕ, that has quadratic interactions with standard model particles of the form

$\mathcal{L}_{\mathrm{int}}=\frac{{\phi }^{2}}{{\Lambda }_{\alpha }^{2}}\,\frac{{F}_{\mu \nu }{F}^{\mu \nu }}{4} \qquad \text{(3)}$

where Fμν is the electromagnetic Faraday tensor, and Λα is the effective energy scale (inverse of the coupling strength). Such an interaction will lead to the effective rescaling of the fine structure constant, with

$\frac{\delta \alpha \left(\mathbf{r},t\right)}{\alpha }=\frac{\phi {\left(\mathbf{r},t\right)}^{2}}{{\Lambda }_{\alpha }^{2}} \qquad \text{(4)}$

see, e.g., reference [9]. If the field ϕ has sufficient self interactions, it may form stable macroscopic objects such as topological defects [5, 6]. The observable variation in α will occur only when the topological defect overlaps with the clock [7].

The spatial extent of topological defects is set by the Compton wavelength of the field, d = ℏ/(mϕc), where mϕ is the field mass. The energy density inside the defects is ${\rho }_{\text{inside}}={\phi }_{0}^{2}/\left(\hslash c{d}^{2}\right)$, with ϕ0 being the maximum value of the field amplitude [7]. In these models, the field amplitude goes to zero outside the defect. Assuming that topological defects make up all dark matter, we can link the energy density inside each defect to the average time between events (i.e. encounters between a defect and a given point in space):

${\rho }_{\text{inside}}={\rho }_{\text{DM}}\,\frac{\mathcal{T}\,{v}_{\text{g}}}{d} \qquad \text{(5)}$

where ρDM = (0.3 ± 0.1)GeV cm−3 [45] is the galactic dark matter energy density in our solar system [46]. Combining this with the expression for ρinside leads to an expression for the field amplitude in terms of the observables and model parameters: ${\phi }_{0}^{2}=\hslash c\enspace {\rho }_{\text{DM}}{v}_{\text{g}}\enspace \mathcal{T}d$. We take d and $\mathcal{T}$ as the free parameters of the model, since they are the direct observables (ϕ0, mϕ, and ρinside are uniquely determined by d, $\mathcal{T}$, and ρDM).

Thereby, the constraints on δα (figure 2) lead directly to constraints on the effective energy scale Λα:

${\Lambda }_{\alpha }^{2} > \frac{\hslash c\,{\rho }_{\text{DM}}\,{v}_{\text{g}}\,\mathcal{T}\,d}{{\vert \delta {\alpha }_{0}/\alpha \vert }_{\mathrm{max}}} \qquad \text{(6)}$

The results are presented in figure 3 as a function of d for a few values of $\mathcal{T}$. The constraints reach the Λα ≳ 10¹⁰ TeV level for d ∼ 10⁷ km. Also shown are the existing constraints from atomic clock experiments [37, 39, 40]. Other constraints coming from astrophysical observations [23, 47, 48] (not shown) are significantly less stringent, and do not exceed the ∼10 TeV level.
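As a rough cross-check of the quoted scale, the sketch below evaluates equation (6) in natural units for d = 10⁷ km and $\mathcal{T}=45\enspace \mathrm{h}$, with |δα0/α| ∼ 10⁻¹⁶ taken as an illustrative round number rather than the exact limit from figure 2.

```python
import numpy as np

# Order-of-magnitude evaluation of equation (6) in natural units
# (hbar = c = 1, energies in GeV). The delta-alpha limit is an assumed
# round number, not the precise bound of figure 2.
hbar_c_GeV_cm = 1.9733e-14             # hbar*c in GeV cm
hbar_GeV_s = 6.5821e-25                # hbar in GeV s

rho_dm = 0.3 * hbar_c_GeV_cm**3        # 0.3 GeV/cm^3       -> GeV^4
v_g = 1e-3                             # ~300 km/s in units of c
T = 45 * 3600 / hbar_GeV_s             # 45 h               -> GeV^-1
d = 1e12 / hbar_c_GeV_cm               # 10^7 km (10^12 cm) -> GeV^-1
dalpha = 1e-16                         # assumed |delta alpha_0 / alpha| limit

phi0_sq = rho_dm * v_g * T * d         # phi_0^2 = rho_DM * v_g * T * d
Lam = np.sqrt(phi0_sq / dalpha)        # equation (6)
m_phi_eV = (hbar_c_GeV_cm / 1e12) * 1e9    # d ~ hbar/(m*c) -> field mass
print(f"Lambda_alpha > {Lam / 1e3:.1e} TeV")   # ~2e10 TeV: the 10^10 TeV level
print(f"m_phi ~ {m_phi_eV:.1e} eV")            # ~2e-17 eV for d ~ 10^7 km
```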


Figure 3. Excluded region (1σ confidence) for the effective energy scale Λα for topological defect dark matter as a function of the defect size, d, for time between events $\mathcal{T}=0.9\enspace \mathrm{h}$ (left), $\mathcal{T}=12\enspace \mathrm{h}$ (middle), and $\mathcal{T}=45\enspace \mathrm{h}$ (right). The new results from this work are shown in blue, and the existing constraints are shown in green (analysis of optical clock/cavity comparisons in Wcisło et al (2016) [39] and (2018) [40]) and orange (analysis of the GPS atomic clock data [37]). These presented $\mathcal{T}$ values correspond to the maximum applicable for references [39, 40], and this work, respectively. The GPS constraints from reference [37] extend to $\mathcal{T}\sim 10^{5}\enspace \mathrm{h}$ (they also apply to a combination of interaction parameters, as explained in the text).


The results presented in figure 3 employ the model-dependent relation between the defect size and field mass, d ∼ 1/mϕ. For a different relation (e.g., in models other than topological defects), the slope of the constraints presented in figure 3 would change in the same simple linear way for all curves. In the general case with no explicit relation between d and mϕ, the model would simply have an extra degree of freedom.

The results from the GPS microwave atomic clocks [37] (shaded orange in the figures) constrain a combination of interaction parameters, including those stemming from a coupling to fermion masses as well as the coupling to ${F}_{\mu \nu }^{2}$ as in equation (3). Therefore, in including those results on the same plot, we are implicitly assuming that the Fμν coupling (leading to effective variation in α) was the dominant coupling for the GPS experiment.

Since we consider long interaction times (i.e. large dark matter objects, d ≫ L), all clocks in the network experience essentially the same value of the ϕ field. Therefore the results presented here apply for topological defects of any geometry (i.e. monopoles, strings, or domain walls). This is in contrast to the results of references [37, 40], which explicitly assume a domain wall geometry (the results of reference [39] also apply for general geometries). Note also that for such objects to leave transient signatures, they need to be well separated (${\tau }_{\mathrm{int}}\ll \mathcal{T}$). This is equivalent to demanding ρinside ≫ ρDM. For example, with d ∼ 10⁴ km, equation (5) implies that it only makes sense to search for transients with $\mathcal{T}\gtrsim 0.1\enspace \mathrm{h}$. We also do not extend the limits beyond ∼10⁷ km (corresponding to τint ∼ 10 h) for the reasons discussed in the previous section.

4. Conclusion

By using data from a European network of fiber-linked optical atomic clocks to search for evidence of transient frequency variations, we have substantially improved the constraints on transient variations of the fine structure constant. With the same analysis we also search for evidence of topological defect dark matter. At the current sensitivity level, no such evidence was found during the analysed time windows. Within the assumptions of our model, we have therefore placed constraints on the possible interactions of such defects with standard model particles, improving upon existing constraints by many orders of magnitude.

We note that it may also be possible to substantially improve the constraints in the region where the event rate is high, $\mathcal{T}\ll {T}_{\text{obs}}$, even if the signal magnitude is well below the noise, by exploiting statistical signatures [49]. For example, in the absence of transients, the distribution of extracted best-fit δα0 values would be expected to be roughly Gaussian. However, if a large number of transients were present in the data, non-Gaussianities, such as a skewness, would be expected in the distribution, as illustrated in the sketch below. Further, due to the orbital motion of Earth around the sun in the galactic frame, a ∼10% annual modulation in this skewness would also be present if it had a dark matter origin [49]. Also, by extending the analysis to lower effective sampling periods, we would have sensitivity to direct measurements of the transient speed and incident direction [38], which could be further used to exclude perturbations that cannot be caused by dark matter, and thereby improve the sensitivity of the search in that region. These avenues will become particularly important as more data and newer experimental techniques become available [42, 50–54].
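A toy illustration of the skewness signature (invented numbers, not derived from the clock data):

```python
import numpy as np

# Toy illustration: symmetric noise gives near-zero skewness in the best-fit
# dalpha_0 distribution; many weak, same-sign "events" skew it. All numbers
# are invented for the illustration.
rng = np.random.default_rng(2)

def skewness(x):
    """Sample skewness (third standardised moment)."""
    x = np.asarray(x)
    return np.mean((x - x.mean()) ** 3) / x.std() ** 3

noise_only = rng.normal(0.0, 1.0, 100_000)
with_events = np.concatenate([noise_only, rng.normal(3.0, 0.5, 2_000)])
print(f"noise only : skew = {skewness(noise_only):+.3f}")   # ~ 0
print(f"with events: skew = {skewness(with_events):+.3f}")  # clearly positive
```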

Acknowledgments

BMR gratefully acknowledges financial support from Labex FIRST-TF. We acknowledge funding support from the Agence Nationale de la Recherche (Labex First-TF ANR-10-LABX-48-01, Equipex REFIMEVE ANR-11-EQPX-0039, Idex PSL ANR-10-IDEX-0001-02). This work received funding from the European Union's Horizon 2020 program: ERC AdOC Grant No. 617553. Funding from the German Research Foundation DFG within the excellence cluster QuantumFrontiers EXC 2123, collaborative research centre CRC 1227 (DQ-mat, project B02), collaborative research center CRC 1128 (geo-Q, project A04) and research training group RTG 1729 is acknowledged. This work has received funding by the European Metrology Programme for Innovation and Research (EMPIR) in project 15SIB03 (OC18) and 15SIB05 (OFTEN). The EMPIR initiative is co-funded by the European Union's Horizon 2020 research and innovation programme, and the EMPIR Participating States within EURAMET and the European Union. NPL authors acknowledge support from the UK Department for Business, Energy and Industrial Strategy as part of the National Measurement System. This work was partially supported by the Max Planck–RIKEN–PTB Center for Time, Constants and Fundamental Symmetries.

Appendix A: Data and analysis method

Before the analysis, we average the data into 60 s bins. This is done to set the effective sampling period to be greater than the largest servo loop time (τ = 60 s), as assumed in equation (1) of the main text. We only average over continuous sections of the data, ensuring we do not inadvertently assume any potential signal remains consistent across gaps in the data. Another effect of this averaging is that above the servo times the data noise can be very well modelled as white frequency noise; see figure A.1. An illustrative sub-set of the data is shown in figure A.2.
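A minimal sketch of such gap-aware averaging (function and array names are ours, purely illustrative):

```python
import numpy as np

# Gap-aware averaging sketch: split a 1 s series into continuous segments,
# then form 60 s bins only within each segment, so no bin straddles a gap.
def continuous_segments(t, dt=1.0):
    """Yield index arrays over which consecutive samples are dt apart."""
    breaks = np.flatnonzero(np.abs(np.diff(t) - dt) > 1e-6) + 1
    yield from np.split(np.arange(len(t)), breaks)

def bin_without_crossing_gaps(t, y, n=60):
    """Return bin-centre times and n-sample averages within each segment."""
    t_out, y_out = [], []
    for seg in continuous_segments(t):
        m = len(seg) // n
        if m == 0:
            continue                  # segment shorter than one full bin
        idx = seg[: m * n].reshape(m, n)
        t_out.append(t[idx].mean(axis=1))
        y_out.append(y[idx].mean(axis=1))
    if not t_out:
        return np.array([]), np.array([])
    return np.concatenate(t_out), np.concatenate(y_out)
```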


Figure A.1. Fractional instabilities for some clock frequency ratios determined by the Allan deviations. The solid line shows the $1/\sqrt{\tau }$ white-noise trend. For averaging times τ larger than ∼ 60 s, the noise is well-modelled as white frequency noise.


Figure A.2. Subset of the clock frequency ratio data (averaged to 60 s) from the European fiber-linked optical clock network. Each time-series is shifted by a constant offset for clarity.


Also, we restrict our analysis to include only independent clock pairs, so that each clock appears only once in the combined data streams. The effect of this is to remove any cross correlations between the different clock data streams. For each separate time window and τint value, we choose which clocks to include in order of their effective sensitivity: ${K}_{AB}/{\sigma }_{{y}_{AB}}^{2}$, considering only those pairs with continuous data over the given time window. Here, ${\sigma }_{{y}_{AB}}$ is the Allan deviation for the yAB frequency ratio, evaluated at the 60 s effective sampling interval.
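One simple way to realise such a selection, sketched below with invented pair records, is to rank the available pairs by their effective sensitivity and greedily keep those whose clocks are not yet in use; this is our illustrative reading, not necessarily the exact procedure used in the analysis.

```python
# Illustrative greedy selection of independent clock pairs, ranked by the
# effective sensitivity K_AB / sigma^2; each clock may be used at most once.
# The pair records (names, K, sigma) below are invented placeholders.
pairs = [
    ("Sr-PTB", "Yb+-PTB", 6.01, 1.0e-16),
    ("Sr-SYRTE", "Hg-SYRTE", -0.75, 3.0e-16),
    ("Sr-PTB", "Sr-SYRTE", 0.0, 2.0e-16),   # like species: K_AB = 0
]
chosen, used = [], set()
for a, b, K, sigma in sorted(pairs, key=lambda p: abs(p[2]) / p[3] ** 2, reverse=True):
    if a not in used and b not in used:
        chosen.append((a, b))
        used.update((a, b))
print(chosen)   # [('Sr-PTB', 'Yb+-PTB'), ('Sr-SYRTE', 'Hg-SYRTE')]
```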

We note that, due to the limited frequency width of the atomic transitions, large steps in the cavity–atom frequency difference that last for a sufficiently long period will lead to a loss of lock. After the source of any such event is identified, the corresponding data are removed. Shorter jumps may not lead to the loss of lock, and would then be indistinguishable from the regular clock noise. Such cases are of little consequence for our analysis, since we confine our search to longer time periods ≳60 s. In practice, all such events are rare, and their contribution to the downtime of the clocks is negligible. At the same time, some very large data outliers are also removed, and are not included in the employed data set. We note, however, that such large frequency variations cannot be due to the interaction of dark matter with the clock atoms: events that large would perturb the atomic transition by so much that the laser would lose lock to the atoms, and thus they would not appear in the clock comparisons.

To perform the analysis, we employ a version of the method developed and tested in references [38, 43]. Let ${d}_{j}^{i}$ denote the time series data from the ith clock pair at sample-point j, and ${\varphi }_{j}^{i}={\varphi }_{j}^{i}\left(\theta \right)$ denote the test signal for a given set of model parameters, θ (e.g., speed, incident direction, coupling strength). Assuming multi-variate Gaussian likelihoods [55], the posterior probability that time window Dt (centred around time t) is consistent with the presence of a (single) transient signal φ is

$P\left(\theta \vert {D}_{t}\right)=C\,p\left(\theta \right)\,\mathrm{exp}\left[-\frac{1}{2}{\sum }_{i,k}^{{N}_{\text{cp}}}{\sum }_{j,l}{\left(d-\varphi \right)}_{j}^{i}\,{H}_{jl}^{ik}\,{\left(d-\varphi \right)}_{l}^{k}\right] \qquad \text{(A.1)}$

where H is the inverse of the covariance matrix ${E}_{jl}^{ik}\equiv \langle {d}_{j}^{i}{d}_{l}^{k}\rangle $, p(θ) is the prior probability for the model parameters, and C is a normalisation constant. In general, the posterior is to be integrated (marginalised) over the model parameters to form the marginal likelihood (evidence). The signal φ can then be calculated according to equation (1) of the main text for each of the Ncp clock pairs in the network, with the time of arrival of the transient (the time the clock experiences the largest δα magnitude) determined by the position of each clock and the incident relative velocity of the source of the α-variation.

The posterior for the case that no signal is present in the data (i.e., the data is just noise) is given by equation (A.1) with φ = 0. Note that this does not depend on the model parameters, so the marginalisation is trivial. The odds ratio is then given by:

$\Lambda \left(t\right)=\frac{P\left(\varphi \vert {D}_{t}\right)}{P\left(0\vert {D}_{t}\right)}=\mathrm{exp}\left(dH\varphi -\frac{1}{2}\varphi H\varphi \right) \qquad \text{(A.2)}$

Here, we have used a short-hand notation (x is d or φ):

$xHy\equiv {\sum }_{i,k}^{{N}_{\text{cp}}}{\sum }_{j,l}\,{x}_{j}^{i}\,{H}_{jl}^{ik}\,{y}_{l}^{k} \qquad \text{(A.3)}$

As noted above, due to the averaging procedure and the inclusion only of independent clock pairs, the data contains essentially no correlations. In light of this simplification, equation (A.3) can be expressed as

$xHy={\sum }_{i}^{{N}_{\text{cp}}}\frac{1}{{\sigma }_{i}^{2}}{\sum }_{j}\,{x}_{j}^{i}\,{y}_{j}^{i} \qquad \text{(A.4)}$

where σi is the standard deviation of the data from the ith clock pair (given by the Allan deviation at the 60 s effective sampling period).

In general, the test signal φ depends on the dark matter coupling strengths and the sensitivity of each clock in the network (K factors), as well as the topological defect size, speed, and incident direction. Then, to calculate the odds ratio, one would integrate over all these parameters taking the Bayesian priors into account, as in reference [38]. In our case, however, we can make a simplification. For the considered effective sampling period, τ = 60 s, all the clocks in the network can be considered to be co-located (see discussion in the main text). Therefore, the signal does not depend on the incident direction, and depends only linearly on the speed and coupling strength. In this case, the odds ratio is maximised simply by maximising the argument of the exponential in equation (A.2).

We treat the transient duration τint as a model parameter, and run the analysis separately for each relevant value. Noting that the dark matter signal is linear in δα0 (the maximum transient variation in α), we express the test signal as ${\varphi }_{j}^{i}\equiv \delta {\alpha }_{0}\enspace {s}_{j}^{i}.$ Then, the argument of the exponential in equation (A.2) becomes:

$\delta {\alpha }_{0}\,dHs-\frac{1}{2}\,\delta {\alpha }_{0}^{2}\,sHs \qquad \text{(A.5)}$

For a given set of parameters, this quantity, and thus the odds ratio (A.2), is maximised for the best-fit value:

$\delta {\alpha }_{0}^{\text{bf}}=\frac{dHs}{sHs} \qquad \text{(A.6)}$

In the absence of a signal, dHs is Gaussian distributed with a mean of zero and a variance equal to sHs, so the (1σ) uncertainty in the extracted best-fit is Δα0 = (sHs)−1/2.
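Under the white-noise simplification, equations (A.4)–(A.6) reduce to a few lines of code; the sketch below uses invented variable names and assumes the per-pair templates already include the K factors.

```python
import numpy as np

# Sketch of equations (A.4)-(A.6) for uncorrelated white noise. Per clock
# pair, d_list holds the windowed data and s_list the signal template with
# phi = dalpha0 * s (K factors absorbed into s); sigmas are the Allan
# deviations at the 60 s sampling. All names are illustrative.
def xHy(x_list, y_list, sigmas):
    """Equation (A.4): sum_i (1/sigma_i^2) sum_j x_j^i * y_j^i."""
    return sum(np.dot(x, y) / sig**2 for x, y, sig in zip(x_list, y_list, sigmas))

def best_fit_dalpha0(d_list, s_list, sigmas):
    """Equation (A.6) and the 1-sigma uncertainty (sHs)^(-1/2)."""
    dHs = xHy(d_list, s_list, sigmas)
    sHs = xHy(s_list, s_list, sigmas)
    return dHs / sHs, sHs ** -0.5
```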

The best-fit δα0 (A.6) is then calculated as a function of time (and τint) over the span of the data. By this we mean that we calculate the best-fit over a given time window, and then step this window along by the smallest available increment, τ0. The windows are assumed to be centred on the (possible) transient incident time, and extend to cover at least a time period equal to τint. We tested several window sizes, and found that the exact extent of each window makes essentially no difference to the results (since the signal template s goes to zero quickly outside this region). For each τint, the largest best-fit value found throughout the entire observation time can be used to place constraints:

$\vert \delta {\alpha }_{0}\vert < {\vert \delta {\alpha }_{0}^{\text{bf}}\vert }_{\mathrm{max}}+{n}_{\text{CL}}\,{\Delta}{\alpha }_{0} \qquad \text{(A.7)}$

where nCL = 1 for 1σ confidence. We consider only time periods when at least two clock pairs (four clocks) were actively taking data. For a given interaction duration, we only include data streams which have continuous data (i.e., sampled every 1 s up to at least the considered τint). This means that the effective observation time, Tobs, decreases with increasing τint. For 10² s, we have Tobs = 47 h, while for 10³ s we have Tobs = 15 h. The best fit values, and the 1σ confidence bound, are shown in figure A.3.


Figure A.3. The purple line shows the observed maximum best-fit value for δα0 as in equation (A.6) as a function of the interaction time, τint. The green line is the 1σ confidence bound used to place constraints (A.7). Note that the effective observation time, Tobs, decreases with increasing τint, since smaller fractions of the data are continuous over the longer time periods, as discussed in the text.


By finding the largest δα0 value that appears in the data, we are assuming there was (at most) one event present in the data with magnitude δα0, and then ruling out the possibility of events with magnitudes larger than this (at the stated confidence level). These constraints then apply to the parameter space region for time between events $\mathcal{T}{< }{f}_{\text{P}}{T}_{\text{obs}}$ (where fP < 1 is the Poisson statistics factor). This is the most conservative approach. It may be possible to set more stringent limits applicable for lower $\mathcal{T}$ values by finding the largest values for δα0 that appear in the data at least $n={f}_{\text{P}}^{\left(n\right)}{T}_{\text{obs}}/\mathcal{T}$ times.

We search through each τint explicitly between a minimum and a maximum value, which are set respectively by the effective sampling period (60 s) and the longest stretch of continuous data in the current data set (τmax ∼ 10⁴ s). Assuming there is no true δα signal in the data, the observed maximum frequency variations that last for duration τint would be expected to scale as $\delta y/y\propto \sigma /\sqrt{{\tau }_{\mathrm{int}}}$. Therefore, between the minimum and maximum directly tested values, the constraints are expected to tighten as $1/\sqrt{{\tau }_{\mathrm{int}}}$, which is seen in the results.

A transient with τint ≫ τmax will leave a signal in the data that is approximately constant over the τmax period [see equation (2) of the main text]. Therefore, by performing a fit to δα0 in this case, we can search for evidence of transients with very large τint. However, in this case, the sensitivity does not increase with increasing τint as in the τint < τmax case, but instead stays constant; see equation (2) of the main text. We note also that this procedure does not extend the sensitivity indefinitely as τint → ∞. In order to measure a transient frequency variation, one must know the 'real', or long-term average, frequency from which it varies. It only makes sense to claim knowledge of the unperturbed ratio yAB if the total measurement time is much greater than τint. We therefore do not extend the constraints past τint ∼ 4 × 10⁴ s ∼ 10 h, which is about 50% of the total for the clock pair with the shortest measurement duration, and about 5% of that for the longest. In reality, the constraints are typically bounded well before this value due to the condition that the transients be well-separated, i.e., ${\tau }_{\mathrm{int}}\ll \mathcal{T}$ (which we take as ${\tau }_{\mathrm{int}}{< }\mathcal{T}/5$).

To search for potential positive events, instead of maximising the best-fit value for δα0, we maximise the likelihood itself. In our case, this is equivalent to maximising the argument in equation (A.5). Substituting $\delta {\alpha }_{0}^{\text{bf}}$ (A.6) back into (A.5) gives the value that maximises the likelihood. For convenience, we take the square root of this quantity, and define

$R\equiv \sqrt{\frac{{\left(dHs\right)}^{2}}{2\,sHs}}=\frac{\vert dHs\vert }{\sqrt{2\,sHs}} \qquad \text{(A.8)}$

Note that R has the form of a signal-to-noise ratio. In fact, it is equivalent to the ratio $\delta {\alpha }_{0}^{\text{bf}}/{\Delta}{\alpha }_{0}$ (up to a constant factor). As before, for a given τint, we find the maximum value of R that occurs throughout the observation time.

To determine the significance of any potentially detected event, we define a threshold, Rthresh, above which it is sufficiently unlikely that there is a false positive due to random noise processes. To determine the threshold, we use a Monte-Carlo procedure, generating random white noise according to the known average noise levels for each clock pair. The data is generated in such a way as to match the characteristics of the networks, i.e., which clock pairs were running at which times, including emulating any gaps in the data time-series. This simulated data is then run through the exact same search method described above, and we record the maximum R values as a function of τint. We repeat this process a large number of times (1000), and determine the level at which there are no statistical false positives at the 99% confidence level (false positive is defined here as any time |R| > Rthresh due to purely random noise processes). At the same time, we also define the expected value, Rexpect, which is calculated as the mean of the maximum R value for each τint extracted from the same simulations assuming white frequency noise.
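A compact sketch of this Monte Carlo procedure (illustrative sizes and noise levels; for simplicity a single shared template is used for all pairs, and the real analysis additionally emulates the network's duty cycles and gaps):

```python
import numpy as np

# Monte Carlo threshold sketch: generate white-noise realisations for each
# clock pair, run the same sliding-template search, and place R_thresh at the
# 99th percentile of the per-realisation maxima. Sizes are illustrative.
rng = np.random.default_rng(0)

def max_R(d_list, s, sigmas):
    """Max of R = |dHs|/sqrt(2 sHs) (equation (A.8)) over window positions."""
    n, m = len(d_list[0]), len(s)
    sHs = sum(np.dot(s, s) / sig**2 for sig in sigmas)
    best = 0.0
    for j in range(n - m + 1):
        dHs = sum(np.dot(d[j:j + m], s) / sig**2 for d, sig in zip(d_list, sigmas))
        best = max(best, abs(dHs) / np.sqrt(2.0 * sHs))
    return best

def r_threshold(sigmas, n_samples, s, n_trials=1000, cl=0.99):
    maxima = [max_R([rng.normal(0.0, sig, n_samples) for sig in sigmas], s, sigmas)
              for _ in range(n_trials)]
    return np.quantile(maxima, cl), np.mean(maxima)   # (R_thresh, R_expect)

# Example: two pairs, a 17-bin Gaussian template (tau_int ~ 10^3 s at 60 s bins).
template = np.exp(-np.linspace(-2.0, 2.0, 17) ** 2)
print(r_threshold([1.0e-16, 3.0e-16], 500, template, n_trials=200))
```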

The existence of correlations in the data may contribute to a larger rate of false positives in the analysis than would otherwise be assumed. At our current level of sensitivity, we did not observe any significant correlations of this sort. If we had, we would have had to apply a more complex statistical method in order to distinguish random correlations from correlations induced by an external transient (such as dark matter). One such method would be to extend our false-positive analysis to also use time-shifted real data. By this we mean that the time series for each clock pair would be randomly shifted in time with respect to each other, to a degree large enough that any true transient-induced correlations (on the considered time scales) would be removed. By repeating this process a large number of times, we would gain information on the prevalence of such random correlations. This may become necessary as more data become available and the precision increases.

The maximum extracted R values are shown as a function of τint in figure A.4, along with the calculated threshold, Rthresh. There are regions (around τint ∼ 10³ s) where the observed R value exceeds that expected for white frequency noise in the absence of a signal; however, the significance is low. Using the considered data set, we find no occurrences where the likelihood exceeds the threshold at the current sensitivity level.


Figure A.4. The purple line shows the maximum value for R calculated from the current data set as in equation (A.8) as a function of the interaction time, τint. The red line is the threshold, above which statistical false-positives do not occur at 99% confidence, and the dashed orange line is the expected value for R in the absence of a signal. Both Rthresh and Rexpect are calculated from simulations mimicking the current data set, assuming white frequency noise.
