
SN Ia Standardization on the Rise: Evidence for the Cosmological Importance of Pre-maximum Measurements


Published 2019 February 1. © 2019. The American Astronomical Society. All rights reserved.
Citation: B. Hayden et al. 2019 ApJ 871 219. DOI: 10.3847/1538-4357/aaf232


Abstract

We present SALT2X, an extension of the SALT2 model for SN Ia light curves. SALT2X separates the light-curve-shape parameter x1 into an ${x}_{1}^{r}$ and ${x}_{1}^{f}$ for the rise and fall portions of the light curve. Using the Joint Lightcurve Analysis SN sample, we assess the importance of the rising and falling portions of the light curve for cosmological standardization using a modified version of the Unified Nonlinear Inference for Type Ia cosmologY (UNITY) framework. We find strong evidence that ${x}_{1}^{r}$ has a stronger correlation with peak magnitude than ${x}_{1}^{f}$. We see evidence that standardizing on the rise affects the color standardization relation, and reduces the size of the host-galaxy standardization and the unexplained ("intrinsic") luminosity dispersion. Since SNe Ia generally rise more quickly than they decline, a faster observing cadence in future surveys will be necessary to maximize the gain from this work and to continue to explore the impacts of decoupling the rising and falling portions of SN Ia light curves.


1. Introduction

Type Ia supernovae (SNe Ia) have played a key role in our understanding of the energy density of the universe, acting as "standardizable candles" for measuring distances and inferring the dynamics of the expansion history. They demonstrated the first strong evidence for the presence of an accelerated expansion rate (Riess et al. 1998; Perlmutter et al. 1999) and continue to provide constraints on the physics driving the acceleration (Scolnic et al. 2018). As the numbers of SNe used in cosmological analyses grow well into the thousands, and other sources of uncertainties (such as photometric calibration) are reduced, an improved understanding of standardization will become increasingly important for reducing the remaining uncertainties.

The nature of SN Ia standardization has been determined empirically, and historically has included three main components. (1) The "color" of each supernova, measured slightly differently by different light-fitting methods, is correlated with peak luminosity, likely due to a combination of dust (Phillips et al. 2013) and an intrinsic color distribution, both requiring that bluer supernovae are brighter (Wang et al. 2006; Rubin et al. 2015; Mandel et al. 2017). The range of color standardizations has an rms around ∼0.3 mag. (2) The width of the light curve is positively correlated with the peak luminosity, likely due to a relationship between total radioactive energy available (the amount of 56Ni produced in the thermonuclear runaway of the white dwarf), and the rate of escape of optical photons in the ejecta (Hoeflich et al. 1996; Kasen & Woosley 2007). The light-curve-width standardization has an rms of ∼0.14 mag. (3) The final piece of the current standardization is a correlation between peak luminosity and the properties of the host galaxy; Kelly et al. (2010) found that supernovae in higher stellar mass host galaxies were brighter than expected after standardization, a phenomenon that has become known as the "host mass step." There is increasing evidence that the host mass step is mostly driven by the age of the progenitor system (Rigault et al. 2013, 2018; Childress et al. 2014; Kelly et al. 2015; Kim et al. 2018). The equivalent rms of the mass step standardization is ∼0.05 mag.

Given the important SN-standardization role played by light-curve width, here we focus on how that width is measured. In a standard approach (Guy et al. 2007; Jha et al. 2007), observations of a single SN Ia are fit to a family of light-curve templates in which a single width parameter controls the variation of both the rising part and the falling part of the light curve (e.g., the "rise time" and "decline rate", suitably defined). Unfortunately for this standard approach, it is now well established that, for any fixed decline rate, the SN Ia rise time varies significantly (Strovink 2007; Hayden et al. 2010; Ganeshalingam et al. 2011).

In Hayden et al. (2010), the "2stretch" model for light-curve fitting was presented. In that analysis, the Sloan Digital Sky Survey (SDSS)-II (Frieman et al. 2008) SN Ia light curves were K-corrected to rest-frame B and V bands, and then fit with an MLCS2k2 (Jha et al. 2007) ${\rm{\Delta }}=0$ template in each filter. The stretch parameter, a multiplicative factor applied to the time axis of the light curve to estimate the width, was separated into a different stretch for the rising and falling portions of the light curve. In this work, we improve on the 2stretch model with "SALT2X". This is an extension of the Spectral Adaptive Lightcurve Template, version 2.4 (SALT2-4; Guy et al. 2007; Mosher et al. 2014). We use the SALT2.4 spectral time-series surfaces but apply a different ${x}_{1}$ to the rising (${x}_{1}^{r}$) and falling (${x}_{1}^{f}$) portions of the light curve. The model is described in more detail in Section 2. The SALT2X model allows us to apply the premise of 2stretch more generally to a larger SN sample, leveraging the power of the SALT2 spectral template, avoiding the need for K-corrections, and better utilizing all photometry for each SN. The SALT2X model will be available as a "source" in future releases of sncosmo (Barbary et al. 2016), and the code to reproduce this analysis is available on GitHub.

Future large cadenced surveys, such as the Large Synoptic Survey Telescope (LSST Science Collaboration et al. 2009) and the Wide Field Infrared Survey Telescope (Spergel et al. 2015), will measure thousands to tens of thousands of SNe Ia for cosmological parameter estimation. Since SNe Ia rise faster than they decline (standard practice is to include observations in the light-curve fit within −15 to 45 rest-frame days of maximum light), accurate constraints on the rising portion of the light curve require a fast observing cadence (≲4–5 rest-frame days). It is therefore critical to understand whether the rising portion of the light curve carries additional standardization information, which may help to reduce systematic uncertainties as the number of cosmologically useful SNe grows by orders of magnitude.

In this analysis, we apply SALT2X to the Joint Lightcurve Analysis (JLA) sample of SNe Ia (Betoule et al. 2014). We perform a basic selection cut on the light curves, using the size and Gaussianity of the SALT2X fit posteriors as a metric for light-curve quality. We then use the Unified Nonlinear Inference for Type Ia cosmologY (UNITY) framework of Rubin et al. (2015) to determine the standardization parameters on the rising and falling portions of the light curve, finding a strong preference for the rising portion in the standardization. We pass a large sample of simulated light curves through the same procedure, and show that our analysis successfully recovers the input parameters.

In Section 2, we present the form of the SALT2X model in terms of the standard SALT2 model. Section 3 describes our light-curve fits to the JLA SNe. Section 4 describes our data selection criteria, and Section 5 describes our simulated data sample for testing the entire framework. In Section 6, we describe the application of the UNITY model to SALT2X, and in Section 7 we present our results, including cross-checks of the analysis. We conclude and discuss the implications of our results in Section 8.

2. The SALT2X Model

In this work, we introduce SALT2X, a version of the SALT2 light-curve model in which the SALT2.4 spectral time-series surfaces are used, but separate ${x}_{1}^{r}$ and ${x}_{1}^{f}$ parameters are fitted, respectively, to the rising and falling portions of the light curve. In Hayden et al. (2010), by contrast, the light curves were K-corrected to the Bessell B and V bands. SALT2X is a more extensible, accurate, and reliable procedure for adding an extra light-curve-width parameter to the light-curve fit.

The original SALT2 (Guy et al. 2007) is based on the following model for the flux as a function of phase (p) and rest-frame wavelength (λ)

Equation (1)

$F(p,\lambda )={x}_{0}\left[{M}_{0}(p,\lambda )+{x}_{1}{M}_{1}(p,\lambda )\right]\exp [c\,{CL}(\lambda )]$

where x0 is the normalization (inversely proportional to luminosity distance squared), M0 is the mean model, x1 is the light-curve shape parameter, M1 is the variation in SED with the light-curve shape parameter, c is the color parameter, and CL (the color law) is the variation (in wavelength only, not phase) with color. For the SALT2X model, we replace the single x1 with a smooth function that joins ${x}_{1}^{r}$ and ${x}_{1}^{f}$, matching to ${x}_{1}^{r}$ at early phases and ${x}_{1}^{f}$ at late phases:

Equation (2)

Again, p is the phase (the estimated rest-frame time of observation relative to time of maximum, p = 0 at time of maximum). The sigmoid transition from ${x}_{1}^{r}$ to ${x}_{1}^{f}$ is necessary to avoid discontinuity in the light curve, since SNe Ia reach peak brightness at different times in different bandpasses. We illustrate synthesized rest-frame U, B, V, and R light curves from our model in Figure 1.
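The phase-dependent blend of ${x}_{1}^{r}$ and ${x}_{1}^{f}$ can be sketched as below. This is an illustration only: the logistic transition width τ (here 3 rest-frame days) is an assumed placeholder, not the value fixed in the trained SALT2X model.

```python
import math

def x1_of_phase(p, x1_rise, x1_fall, tau=3.0):
    """Smoothly join x1^r (early phases) and x1^f (late phases).

    The logistic width tau (rest-frame days) is an illustrative
    assumption; the actual SALT2X model fixes its own transition.
    """
    s = 1.0 / (1.0 + math.exp(-p / tau))  # -> 0 at early phases, -> 1 at late
    return (1.0 - s) * x1_rise + s * x1_fall
```

At very early phases the function returns essentially ${x}_{1}^{r}$, at late phases essentially ${x}_{1}^{f}$, with a smooth handover around maximum light that avoids any discontinuity in the synthesized light curves.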


Figure 1. SALT2X model light curves for rest-frame U, B, V, and R. In the top panels, we vary ${x}_{1}^{r}$; in the bottom, we vary ${x}_{1}^{f}$.


3. Light-curve Fitting

Separating the rising phases of the light curve from the falling phases introduces new challenges to the light-curve fitting procedure. In particular, the JLA sample combines SNe Ia discovered in both rolling and targeted searches, so the phase coverage across surveys is not consistent. Some SNe have few observations before or after peak brightness, meaning ${x}_{1}^{r}$ or ${x}_{1}^{f}$ could be ill-constrained. SNe such as these will have substantially non-Gaussian uncertainties on ${x}_{1}^{r}$ or ${x}_{1}^{f}$, challenging fitters that simply quote a best fit and parameter covariance matrix. We instead infer our light-curve parameters with MCMC, which properly treats non-Gaussian uncertainties. For this work we use the Python package emcee to sample from each SN posterior.

The SALT2X model is implemented in sncosmo v1.5.3, using the standard SALT2.4 training. The model itself inherits from the sncosmo.SALT2Source class, changing only the free parameters of the model and the function for calculating the flux. This allows us to capitalize on the convenience that sncosmo provides for many aspects of light-curve fitting, particularly filter integration, magnitude systems, and file I/O for data in the SALT2 file format.

We use filter-response curves and magnitude systems directly from the JLA data release, with one exception. Since the SNLS filter response is position-dependent, and JLA does not release the filter curve for each individual SN as a unique product, we use the "JLA-Megacam" filters released in SNANA to access the SN-specific filters.

We use the magnitude systems released by JLA by registering the spectral references in sncosmo. We apply zero-point offsets by subtracting the zero-points listed in Table 4 from the zero-points in the JLA light-curve files. The SWOPE V-band filters are MJD-dependent as the filter was replaced in 2006 January. When the filter in the JLA light-curve file is listed as "SWOPE2::V" the filter is set to the appropriate response curve and zero-point via:
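A minimal sketch of this selection logic follows; the cutover MJD and the band-name strings here are hypothetical placeholders for illustration, not the actual registered response-curve names or the true replacement date.

```python
# Hypothetical cutover MJD and band names: the real values come from the
# JLA/sncosmo filter registrations, not from this sketch.
SWOPE_V_CUTOVER_MJD = 53749.0  # filter replaced 2006 January (illustrative MJD)

def swope_v_band(mjd):
    """Map an observation MJD to the appropriate SWOPE V response curve."""
    if mjd < SWOPE_V_CUTOVER_MJD:
        return "swope2::v_pre2006"   # pre-replacement response (placeholder name)
    return "swope2::v_post2006"      # post-replacement response (placeholder name)
```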

Each SN has bandpasses included only if the rest-frame effective wavelength is between 3000 and 7000 Å. We use the Milky Way $E(B-V)$ reported in the JLA light-curve metadata "MWEBV" parameter, with the CCM89 dust model as implemented in sncosmo v1.5.3, applied to the model in the observer frame, and assuming RV = 3.1.

The light-curve fit proceeds as follows. An initial guess for time of maximum and x0 is determined by looping over a grid of dates between the earliest and latest observations of the supernova, and fitting only x0 for the SALT2.4 model with ${x}_{1}=c=0$. The best ${\chi }^{2}$ point in x0 and time of maximum is used to initialize the model. We then perform a full SALT2 fit using sncosmo, which is used to cut the data to include only phases between −15 and 45 rest-frame days. This fit is then repeated once more, and another phase cut is performed at −15 to 45 rest-frame days. With this final version of the standard SALT2.4 fit, we retrieve the covariance of the SALT2X model from the SALT2.4 model covariance surfaces, and add it to the observational covariance reported by JLA in the flux covariance matrices included in the data release. The uncertainties that are used in the SALT2X fit are then fixed, and the SALT2X model covariance is no longer iterated (even though it is technically a function of ${x}_{1}$ and c). This is necessary because some of the light curves with sparse rise or fall data will have ${x}_{1}^{r}$ or ${x}_{1}^{f}$ posteriors that span regions where the SALT2.4 model covariance is undefined. The result of each initial SALT2.4 fit is plotted, and each of these plots has been manually reviewed for reasonable convergence. The pseudo log-likelihood for emcee is then constructed as $-0.5\times {\mathbb{R}}\cdot {{\mathbb{C}}}^{-1}\cdot {\mathbb{R}}$, where ${\mathbb{R}}$ is the residual of the data and the SALT2X model, and ${{\mathbb{C}}}^{-1}$ is the inverse covariance matrix including both the SALT2X model covariance and the JLA observational covariance matrices.
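The pseudo log-likelihood above is a standard Gaussian quadratic form; a minimal numpy sketch, with the combined (model plus observational) covariance assumed fixed as described:

```python
import numpy as np

def pseudo_loglike(flux_obs, flux_model, cov):
    """-0.5 * R . C^{-1} . R, with R the data-model residual and C the
    fixed combined (SALT2X model + JLA observational) flux covariance."""
    r = np.asarray(flux_obs, float) - np.asarray(flux_model, float)
    # Solve C x = r rather than forming C^{-1} explicitly (better conditioned).
    return -0.5 * r @ np.linalg.solve(cov, r)
```

For a diagonal covariance this reduces to the familiar $-{\chi }^{2}/2$ sum over epochs.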

With the data trimmed in phase, the model uncertainties estimated, and a log-likelihood for emcee, we run emcee with 100 "walkers" and 7500 samples, throwing out the first 2500 samples as burn-in. This amounts to 500,000 (100 × 5000) samples from the posterior. For the peak apparent magnitude estimate mB, used to construct the distance modulus estimate, we tried two approaches, which gave us virtually identical results in testing. The first is to make an approximate mB using ${\tilde{m}}_{B}\equiv -2.5{\mathrm{log}}_{10}({x}_{0})$. The second is to calculate mB by constructing the best-fit SALT2X model and calculating the magnitude at peak in the Bessell B filter, using the "vega2" JLA magnitude system. To build a posterior for mB, this must be done for each MCMC chain, and becomes computationally expensive because the actual time of maximum in the B-band must be estimated, requiring the filter integration to be performed on a fine grid of times. We used ${\tilde{m}}_{B}\,\equiv -2.5{\mathrm{log}}_{10}({x}_{0})$ for the results presented in this paper because it was more computationally convenient.
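The approximate peak magnitude is just a log transform applied to every posterior sample of x0; sketch (the additive normalization is arbitrary):

```python
import math

def mb_tilde(x0):
    """m_B_tilde = -2.5 log10(x0), defined up to an additive constant.

    Applied sample-by-sample to the x0 chain, this yields the m_B
    posterior without any filter integration.
    """
    return -2.5 * math.log10(x0)
```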

4. Data Selection

As described in Section 6, we use the UNITY framework (Rubin et al. 2015) to obtain our estimates of the standardization parameters. For nonoutlier SNe, this framework assumes Gaussian light-curve fit uncertainties. However, for SNe with poorly sampled light curves, the uncertainties can be non-Gaussian, particularly for ${x}_{1}^{r}$ or ${x}_{1}^{f}$. We are left with three options. (1) Compute non-Gaussian uncertainties for each SN and supply those uncertainties to UNITY (perhaps approximating these non-Gaussian uncertainties as a sum of Gaussians for computational simplicity). (2) Instead of fitting light curves as a separate, initial step, build SALT2X light-curve fits into UNITY, so that the issue of light-curve-fit parameter summary statistics (and thus of non-Gaussian uncertainties on those parameters) is sidestepped. (3) Apply a selection cut on the light-curve-fit results, selecting only well-measured SNe for the analysis. As we show in Figure 2, the SNe with non-Gaussian light-curve-fit uncertainties tend to be poorly measured (and thus would have much lower weight no matter our choice), so we adopt option (3) and remove these SNe from the analysis. We discuss our tests of this selection and the rest of the analysis chain in Section 5. These tests were performed before we saw the equivalent results for the real data. Thus, this analysis is "blinded," although some of our cross-checks (Section 7.1) occurred to us and were performed after unblinding.


Figure 2. Our percentile-based measure of the asymmetry of the ${x}_{1}^{f}$ − ${x}_{1}^{r}$ uncertainty (Section 4) plotted against the size of the uncertainty. Lower-quality light curves (the right half of the plot) have more variation in the uncertainty asymmetry. For our three sample selections, we select the SNe highlighted in blue, the blue+green (our nominal selection), and blue+green+red.


As shown in Table 1, we perform our strongest data selection on the uncertainty on ${x}_{1}^{f}$ − ${x}_{1}^{r}$. For our light-curve selection criteria, we define S/N as the ability to distinguish SNe within the population distribution of a light-curve parameter. The distribution of ${x}_{1}^{f}$ − ${x}_{1}^{r}$ has an intrinsic width of about 0.7, so S/N > 0.75 requires $\sigma ({x}_{1}^{f}-{x}_{1}^{r})\lt 1$ (see Table 1 for all S/N-based selection cuts and the associated uncertainty cutoffs). We also remove a few SNe with non-Gaussian (but modest) uncertainties, as shown in the remaining lines of Table 1. Our metric for non-Gaussian uncertainties compares the edges of the ∼2σ credible interval: for ${x}_{1}^{r}$, ${x}_{1}^{f}$, and ${x}_{1}^{f}$ − ${x}_{1}^{r}$, we compute the 2.28th, 50th, and 97.72nd percentiles of the posterior samples, and then compute $\mathrm{log}[({P}_{97.72}-{P}_{50})/({P}_{50}-{P}_{2.28})]$. For a symmetric uncertainty distribution, this quantity is zero; for a positively skewed distribution it is (almost certainly) positive, and similarly negative for negative skew. We cut when the absolute value is larger than 0.25, indicating a significantly non-Gaussian uncertainty distribution. After selecting for modest, symmetric uncertainties, we apply a cut to remove any extreme values of ${x}_{1}^{r}$, ${x}_{1}^{f}$, or c, as shown in the last three lines of Table 1. We note that these last cuts remove no SNe. All light-curve fits used in this analysis are available in Table 5.
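The percentile-based asymmetry metric and the associated Gaussianity cut can be sketched as follows (with a small linear-interpolation percentile helper so the sketch is self-contained):

```python
import math

def percentile(samples, q):
    """Linear-interpolation percentile, q in [0, 100]."""
    s = sorted(samples)
    pos = (len(s) - 1) * q / 100.0
    lo = int(math.floor(pos))
    hi = min(lo + 1, len(s) - 1)
    return s[lo] + (s[hi] - s[lo]) * (pos - lo)

def asymmetry(samples):
    """log10 of the ratio of the upper to lower ~2-sigma half-widths."""
    p_lo = percentile(samples, 2.28)
    p_med = percentile(samples, 50.0)
    p_hi = percentile(samples, 97.72)
    return math.log10((p_hi - p_med) / (p_med - p_lo))

def passes_gaussianity_cut(samples, threshold=0.25):
    """True when the posterior is close enough to symmetric to keep."""
    return abs(asymmetry(samples)) < threshold
```

A symmetric posterior gives an asymmetry of zero and passes; a strongly right-skewed posterior gives a large positive value and is cut.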

Table 1.  Selection Cuts Used in Our Analysis

Selection Cut                                             Low-z   SDSS   SNLS   Combined         Combined        Combined
                                                                                (S/N > 0.75)a    (S/N > 0.5)b    (S/N > 1)c
From JLA                                                   118    374    239    731              731             731
$\sigma ({x}_{1}^{f}-{x}_{1}^{r})\lt 1/1.5/0.75$            62    105     72    239              349             177
Percentile cut $({x}_{1}^{f}-{x}_{1}^{r})$                  61    101     61    223              299             171
$\sigma ({x}_{1}^{r})\lt 1.33/2/1$                          61    101     61    223              297             171
Percentile cut $({x}_{1}^{r})$                              61     95     57    213              274             165
$\sigma ({x}_{1}^{f})\lt 1.33/2/1$                          61     95     57    213              274             165
Percentile cut $({x}_{1}^{f})$                              61     95     55    211              269             165
$-4\lt {x}_{1}^{r}\lt 4$                                    61     95     55    211              269             165
$-4\lt {x}_{1}^{f}\lt 4$                                    61     95     55    211              269             165
$-0.3\lt c\lt 2$                                            61     95     55    211              269             165

Notes. We start with the 731 low-z + SDSS + SNLS SNe in JLA (Top Row), then apply sequential selection cuts and show the number of SNe remaining. The left four columns of numbers show the low-z, SDSS, SNLS, and combined SNe for our nominal selection (S/N > 0.75). The right two columns show other S/N cuts for just the combined sample (S/N > 0.5 and 1.0). Most SNe in the sample (and most of the SNe we eliminate) are removed by our ${x}_{1}^{f}$ − ${x}_{1}^{r}$ uncertainty cut (second row from top). The bottom three rows would remove any extreme values of ${x}_{1}^{r}$, ${x}_{1}^{f}$, or c, but we do not see any. The percentile cuts are the cuts on Gaussian posteriors in the light-curve fit described in Section 4.

aS/N $\gt \,0.75$ requires $\sigma ({x}_{1}^{f}-{x}_{1}^{r})\lt 1$, and both $\sigma ({x}_{1}^{r})$ and $\sigma ({x}_{1}^{f})\lt 1.33$. bS/N $\gt \,0.5$ requires $\sigma ({x}_{1}^{f}-{x}_{1}^{r})\lt 1.33$, and both $\sigma ({x}_{1}^{r})$ and $\sigma ({x}_{1}^{f})\lt 2$. cS/N $\gt \,1$ requires $\sigma ({x}_{1}^{f}-{x}_{1}^{r})\lt 0.75$, and both $\sigma ({x}_{1}^{r})$ and $\sigma ({x}_{1}^{f})\lt 1$.


5. Simulated Data Generation

A test sample was constructed to determine how our full framework behaves for data in which ${x}_{1}^{r}$ and ${x}_{1}^{f}$ contain equal standardization information. The goal is for the sample to have the same phase-coverage distribution as the real surveys, with known light-curve parameters and known standardization parameters. This simulated data set provides an end-to-end test of the analysis, and gives confidence that our results are not an artifact of the data selection.

To accomplish this simulation, we used the real JLA epochs and uncertainties to define the observations for each simulated SN. For each JLA supernova, a SALT2X model is constructed with the redshift, time of maximum, and Milky Way $E(B-V)$ of the real supernova, with ${x}_{1}^{r}$, ${x}_{1}^{f}$, and c drawn from the following covariance matrix, similar to that inferred from the real data:

Equation (3)

The absolute magnitude including standardization information is then calculated as

Equation (4)

where ${M}_{B}^{\mathrm{fid}}=-19.1$, $\alpha =\gamma =0.07$, $\beta =3.1$, and ${\sigma }_{\mathrm{unexpl}}\,=0.1$. We then set this as the Bessell B absolute AB magnitude of the supernova, and appropriately rescale the SALT2X x0 parameter. We retrieve fluxes from the spectral time-series SALT2X model at the epochs of the JLA observations using the same bands and zero-points as those described in Section 3. These fluxes are fixed to the SALT2X model, so to achieve the appropriate amount of dispersion in the photometry, we add noise drawn from a multivariate normal of the form:

Equation (5)

where ${{\mathbb{C}}}_{\mathrm{obs}}$ is the covariance matrix of the measured photometric uncertainties from the JLA light curve, and ${{\mathbb{C}}}_{\mathrm{model}}$ is the SALT2X model covariance, drawn from the SALT2.4 surfaces that describe the model uncertainty from training. The simulated supernova fluxes have this noise added, and we use the flux uncertainties directly from the real JLA light curve.
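The two simulation steps above, the standardized magnitude of Equation (4) and the correlated flux noise of Equation (5), can be sketched as follows. The sign conventions of the standardization relation (brighter for larger ${x}_{1}^{r}$/${x}_{1}^{f}$, fainter for redder c, the usual Tripp-style form) and the omission of a host-mass term are assumptions of this illustration.

```python
import random

import numpy as np


def simulated_mb(x1_rise, x1_fall, c, m_fid=-19.1, alpha=0.07, gamma=0.07,
                 beta=3.1, sigma_unexpl=0.1, rng=random):
    """Simulated absolute B magnitude in the spirit of Equation (4).

    Signs follow the usual Tripp convention (an assumption of this
    sketch, as is the absence of a host-mass term).
    """
    return (m_fid - alpha * x1_fall - gamma * x1_rise + beta * c
            + rng.gauss(0.0, sigma_unexpl))


def draw_photometric_noise(cov_obs, cov_model, rng=None):
    """Flux noise ~ N(0, C_obs + C_model) for one light curve (Equation (5))."""
    rng = np.random.default_rng(0) if rng is None else rng
    cov = np.asarray(cov_obs, float) + np.asarray(cov_model, float)
    return rng.multivariate_normal(np.zeros(cov.shape[0]), cov)
```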

A larger sample is produced by simulating 12 realizations of each JLA supernova. This simulated sample has identical phase coverage and flux uncertainties to the real light curves, but with known standardization parameters for the SALT2X model. These simulated supernovae are then run through the entire framework in the same way as the real data, including data selection, thereby testing how sensitive our results are to the cadence and uncertainties of the JLA sample. These results are discussed in Section 7 and Figure 3. In short, we see correct recovery of the simulation inputs.


Figure 3. Credible regions derived from the simulated data. Each contour is drawn based on a KDE of the MCMC samples, and encloses 68.3% of the PDF. Low-z (blue), SDSS (green), SNLS (red), and combined (black) are all shown. We mark the true simulation input coefficients with a black square. We see no evidence of biases in this data set; in particular, α (the ${x}_{1}^{f}$ standardization coefficient) and γ (the ${x}_{1}^{r}$ standardization coefficient) are correctly recovered.


6. UNITY and the Importance of a Bayesian Approach

The initial UNITY framework was presented in Rubin et al. (2015). This framework simultaneously models (nonlinear) SN standardization, cosmology fitting, the (sample-dependent) SN population, a population of outliers, systematic uncertainties, selection effects, and an unexplained dispersion. Importantly, UNITY is a Bayesian hierarchical model, necessary for performing even linear regression with uncertainties in both dependent and independent variables (in this case, all the light-curve fit parameters have uncertainties), as discussed in Gull (1989). For each SN, latent variables describe the "true" values of the measurements:

Equation (6)

We impose the following standardization relation, which also allows us to trivially marginalize (and thus eliminate) ${\tilde{m}}_{B}^{\mathrm{true}}$:

Equation (7)

where, as stated in Section 3, ${\tilde{m}}_{B}$ is virtually identical to the rest-frame B-band magnitude at peak (up to an additive normalization), but is faster to compute. Here α is the ${x}_{1}^{f}$ standardization coefficient, and β is the color standardization coefficient; as in Rubin et al. (2015), we use a broken-linear color standardization, where

Equation (8)

δ is the host-mass-standardization coefficient, and ${P}_{\mathrm{high}}$ is the probability that an SN host galaxy has a stellar mass $\gt {10}^{10}{M}_{\odot }$. (In Section 7.1, we investigate a broken-linear x1 standardization and find it has little effect.) Mi is the estimated absolute magnitude (up to an additive constant), which we allow to be SN-sample-dependent, removing virtually all dependence of our results on the cosmological model (which we fix to flat ΛCDM with ${{\rm{\Omega }}}_{m}=0.3$).
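The broken-linear color relation of Equation (8) can be sketched as a continuous two-slope function; placing the break at c = 0 is an assumption of this illustration, not a statement of the trained value.

```python
def color_term(c, beta, delta_beta, c_break=0.0):
    """Continuous broken-linear color standardization: slope beta below
    the break and beta + delta_beta above it (break at c = 0 assumed)."""
    if c < c_break:
        return beta * c
    return beta * c_break + (beta + delta_beta) * (c - c_break)
```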

As ${x}_{1}^{f}$ is intrinsically strongly correlated with ${x}_{1}^{r}$, the quantity ${x}_{1}^{f}$ − ${x}_{1}^{r}$ is generally measured only at low-to-moderate S/N, even if ${x}_{1}^{r}$ and ${x}_{1}^{f}$ are independently well measured. As discussed in Minka (1999), such low S/N regression (with uncertainties in both dependent and independent variables) must be approached with a Bayesian hierarchical model, as we do here. In such a model, informative priors are taken on ${x}_{1}^{f\ \mathrm{true}}$, ${x}_{1}^{r\ \mathrm{true}}$, and ${c}^{\mathrm{true}}$ (representing a model of the true underlying distribution, without noise and unexplained dispersion), and the parameters in these priors ("hyperparameters") are also included in the model. The original UNITY analysis assumed redshift- and sample-dependent Gaussian distributions for ${x}_{1}^{\mathrm{true}}$, and redshift- and sample-dependent skew-normal distributions for c.
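The need for the hierarchical treatment can be seen in a toy regression (an illustration only, not part of the UNITY model): fitting a line by ordinary least squares when the independent variable is noisy biases the slope toward zero by roughly Var(x)/(Var(x) + σ²), which is exactly the bias that informative population priors on the latent true values correct.

```python
import random


def ols_slope(xs, ys):
    """Ordinary least-squares slope of y on x."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sxx = sum((x - mx) ** 2 for x in xs)
    return sxy / sxx


rng = random.Random(42)
true_x = [rng.gauss(0.0, 1.0) for _ in range(20000)]
y = list(true_x)                                       # true slope is exactly 1
noisy_x = [x + rng.gauss(0.0, 1.0) for x in true_x]    # unit measurement noise on x

# Regressing on the noisy x attenuates the slope by ~Var(x)/(Var(x)+sigma^2) = 0.5;
# a hierarchical model with a population prior on the true x removes this bias.
attenuated = ols_slope(noisy_x, y)
```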

We make the following changes to UNITY in this work; some of these changes are improvements, but others are merely simplifications, removing features not needed for an analysis focused on standardization rather than cosmological parameters.

1. (Improvement) We switch to a multivariate skew-normal distribution (Azzalini & Valle 1996) to describe the ${x}_{1}^{r}$/${x}_{1}^{f}$/c populations. The original UNITY analysis considered only x1 and c, and modeled their distributions as uncorrelated. We find that the ${x}_{1}^{r}$ and ${x}_{1}^{f}$ distributions are intrinsically strongly correlated, so this correlation must be modeled.
2. (Improvement) We add different ${x}_{1}^{r}$/${x}_{1}^{f}$/c population means for high-mass hosted and low-mass hosted SNe. As light-curve parameters (particularly light-curve width) correlate with host-galaxy environment (Hamuy et al. 1996; Sullivan et al. 2006), there will be a (small) bias on the host-mass standardization coefficient (δ) if the difference in population means is not taken into account.
3. (Simplification) We remove calibration uncertainties and selection effects. These sources of systematic uncertainty have only a small covariance with the standardization coefficients (Betoule et al. 2014), so we can safely exclude them, and gain a computational benefit in doing so.
4. (Simplification) We remove off-diagonal unexplained dispersion terms. The Rubin et al. (2015) UNITY model allowed for off-diagonal terms in the unexplained-dispersion covariance matrices. In the limit of Gaussian populations and linear standardization, these terms can describe some of the SN standardization. For example, if ${c}^{\mathrm{true}}$ has a Gaussian distribution of width 0.1 mag, and the color standardization coefficient (β) is 3, then this is effectively the same as an ${\tilde{m}}_{B}$/c covariance of $3\cdot {0.1}^{2}=0.03$. The original UNITY framework thus contained two types of standardization: the structural model (broken-linear relations), and the implicit linear model in the off-diagonal elements of the unexplained-dispersion covariance matrix. For this work, where we want to focus on the values of the standardization coefficients, we force these off-diagonal terms to be zero.

With the data selected, and the updates to UNITY in place, we can investigate the standardization coefficients, which we discuss in the next section.

7. Results

We start with our recovery of the input results in the simulated data, shown in Figure 3. The Low-z, SDSS, SNLS, and combined constraints are shown in blue, green, red, and black, respectively. We mark the input parameters with a black square. Even with 12× the statistics of the real data, there is no evidence of biases. We also performed a simulation with 4× JLA statistics where we added covarying unexplained dispersion in both color and magnitude. We used the following values, similar to those described by Kessler et al. (2013; based on Chotard et al. 2011)—${C}_{{m}_{B}{m}_{B}}=9\times {10}^{-4}$, ${C}_{{cc}}=6\times {10}^{-3}$, ${C}_{{m}_{B}c}=6\times {10}^{-4}$—and again find no evidence of biases that affect the significance of our result. We also check the light-curve fit results against the true simulation values, and find accurate uncertainties and no evidence of biases, demonstrating end-to-end recovery from the light-curve fits to the assumed standardization relation.

We show similar plots for the real data in Figure 4, with the 68.3% credible intervals in Table 2. Unlike the simulated data (which were generated with $\alpha =\gamma $), the α/(α+γ) credible interval (enclosing 68.3% of the posterior) is ${0.21}_{-0.11}^{+0.10}$, showing a statistical preference for the rise over the decline in standardization ($\gamma \gt \alpha $). The normalized median absolute deviations of the magnitude standardizations for the SNe in the ${\rm{S}}/{\rm{N}}\gt 0.75$ selection cut are $\beta \cdot c=0.26$ mag, $\gamma \cdot {x}_{1}^{r}=0.13$ mag, and $\alpha \cdot {x}_{1}^{f}=0.04$ mag. The larger magnitude standardization range for ${x}_{1}^{r}$ indicates that it is not simply a rescaled version of ${x}_{1}^{f}$. In the lower panels of Figure 4, we see that other parameters correlate with decreasing $\alpha /(\alpha +\gamma )$: β increases, δ moves toward zero, and ${\sigma }_{\mathrm{unexpl}}$ decreases. We present a comparison of credible intervals between an ${x}_{1}^{r}$ + ${x}_{1}^{f}$ run and a single-x1 run in Table 2. To make this comparison fair, we use the same SNe selected for the ${x}_{1}^{r}$ + ${x}_{1}^{f}$ run for the single-x1 run. As the credible intervals are derived from the same data, the uncertainties correlate and thus the differences are generally significant. For example, going from low to high host mass moves the mean ${x}_{1}^{f}$ by −0.94 ± 0.16, while the mean of ${x}_{1}^{r}$ moves −0.57 ± 0.21. Only for 1.25% of the MCMC samples does ${x}_{1}^{r}$ move more than ${x}_{1}^{f}$. Thus, the change in δ (which is 1σ ignoring the correlated uncertainties) is ∼2.2σ taking them into account. Similarly, the correlation between ${x}_{1}^{f}$ − ${x}_{1}^{r}$ and c, which drives the correlation between $\alpha /\left(\alpha +\gamma \right)$ and β, is 2.9σ.
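The α/(α+γ) summary is a derived quantity: it is computed per MCMC sample and then summarized by a central credible interval. A sketch (the percentile indexing is approximate for small sample counts):

```python
def alpha_fraction(alpha_samples, gamma_samples):
    """Per-sample alpha / (alpha + gamma) for a derived posterior."""
    return [a / (a + g) for a, g in zip(alpha_samples, gamma_samples)]


def credible_interval(samples, level=0.683):
    """Central credible interval from posterior samples."""
    s = sorted(samples)
    n = len(s)
    lo = int(round((0.5 - level / 2.0) * (n - 1)))
    hi = int(round((0.5 + level / 2.0) * (n - 1)))
    return s[lo], s[hi]
```

Computing the ratio sample-by-sample, rather than from the marginal α and γ intervals, automatically propagates the (strong) correlation between the two coefficients.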

Figure 4.

Figure 4. As in Figure 3, we show contours enclosing 68.3% (shaded) of the posteriors, for Low-z (blue), SDSS (green), SNLS (red), and combined (black). Unlike the simulated data (Figure 3), there is a statistical preference for $\gamma \gt \alpha $, i.e., for the rise containing more luminosity information than the decline. We also see evidence for correlations between smaller α/(α+γ) and larger β, less-negative δ, and smaller ${\sigma }_{\mathrm{unexpl}}$. For the purposes of making the combined constraints, we present the mean of all three ${\sigma }_{\mathrm{unexpl}}$ values (one for each sample), rather than plotting six contours.

Table 2.  Comparison of Parameters Obtained Standardizing on Both ${x}_{1}^{r}$ and ${x}_{1}^{f}$ and the Traditional Single-x1 Analysis

Parameter ${x}_{1}^{r}$ and ${x}_{1}^{f}$ Single x1 (Same SN Selection)
α ${0.030}_{-0.016}^{+0.016}$ ${0.150}_{-0.009}^{+0.009}$
γ ${0.115}_{-0.013}^{+0.013}$
$\alpha /(\alpha +\gamma )$ ${0.21}_{-0.11}^{+0.10}$
β ${3.22}_{-0.13}^{+0.13}$ ${3.07}_{-0.13}^{+0.13}$
${\rm{\Delta }}\beta $ ${0.74}_{-0.40}^{+0.40}$ ${1.10}_{-0.44}^{+0.44}$
δ $-{0.046}_{-0.021}^{+0.021}$ $-{0.067}_{-0.020}^{+0.021}$
Low-z ${\sigma }_{\mathrm{unexpl}}$ ${0.102}_{-0.015}^{+0.017}$ ${0.120}_{-0.016}^{+0.017}$
SDSS ${\sigma }_{\mathrm{unexpl}}$ ${0.086}_{-0.012}^{+0.012}$ ${0.103}_{-0.010}^{+0.011}$
SNLS ${\sigma }_{\mathrm{unexpl}}$ ${0.076}_{-0.014}^{+0.015}$ ${0.081}_{-0.013}^{+0.014}$

Note. In the standardization where ${x}_{1}^{r}$ and ${x}_{1}^{f}$ are separate, we find a significant preference for $\gamma \gt \alpha $, indicating that ${x}_{1}^{r}$ is more strongly correlated than ${x}_{1}^{f}$ with peak magnitude. We see evidence that standardizing predominantly with ${x}_{1}^{r}$ increases β, moves δ toward zero, and decreases ${\sigma }_{\mathrm{unexpl}}$.

We show our main result visually in Figure 5, which plots the single-x1-corrected Hubble residual against ${x}_{1}^{f}$ − ${x}_{1}^{r}$, ${x}_{1}^{r}$, and ${x}_{1}^{f}$ for the real data, simulated data with $\alpha =\gamma $, and simulated data with α and γ as observed.13 It is helpful to understand these panels using a toy model, which ignores the effects of finite scatter and correlations in the uncertainties but enables a simple visual check of the results. Suppose we define ${\bar{x}}_{1}\equiv \tfrac{1}{2}({x}_{1}^{f}+{x}_{1}^{r})$ and ${\rm{\Delta }}{x}_{1}\equiv \tfrac{1}{2}({x}_{1}^{f}-{x}_{1}^{r})$. Then suppose that SN luminosity scales as ${x}_{1}^{r}$, but we standardize the luminosity with ${\bar{x}}_{1}$. In this case, part of the single-x1-corrected Hubble residual should be positively correlated with ${\rm{\Delta }}{x}_{1}$. This is exactly what we see in the top left panel of Figure 5, which shows a positive correlation between single-x1 Hubble residuals and ${x}_{1}^{f}$ − ${x}_{1}^{r}$. As expected, we see much weaker correlations with ${x}_{1}^{r}$ (second-from-top left panel) and ${x}_{1}^{f}$ (second-from-bottom left panel). Also as expected, in the middle-column simulation where $\alpha =\gamma $, there is no residual correlation between the single-x1 Hubble residual and ${x}_{1}^{f}$ − ${x}_{1}^{r}$, ${x}_{1}^{r}$, or ${x}_{1}^{f}$. In the right column, simulated with the same α and γ as measured on the real data, we confirm the source of this residual correlation.
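The toy model can be verified numerically. In this sketch, luminosity depends only on ${x}_{1}^{r}$ but is standardized with ${\bar{x}}_{1}$, and the residuals then correlate positively with ${\rm{\Delta }}{x}_{1}$; all numbers here are illustrative, not fits to the JLA data:

```python
import numpy as np

rng = np.random.default_rng(42)
n = 2000

# Illustrative, intrinsically correlated rise/fall shape parameters.
x1r = rng.normal(0.0, 1.0, n)
x1f = 0.7 * x1r + rng.normal(0.0, 0.7, n)

gamma = 0.115                              # luminosity tracks the rise only
mag = -gamma * x1r + rng.normal(0.0, 0.1, n)

# Standardize with the average stretch, as a single-x1 fit would.
x1bar = 0.5 * (x1f + x1r)
dx1 = 0.5 * (x1f - x1r)
alpha_eff = np.polyfit(x1bar, mag, 1)[0]   # best single-x1 slope
resid = mag - alpha_eff * x1bar

# Residuals correlate positively with x1^f - x1^r.
corr = np.corrcoef(resid, dx1)[0, 1]
```

With these toy numbers the residual/Δx1 correlation comes out clearly positive (∼0.4), mimicking the top left panel of Figure 5.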

Figure 5. Traditional "Hubble residual" view of the preference for ${x}_{1}^{r}$ in the standardization. The left panels show results from the real data, the middle panels show simulated data with α = γ, and the right panels show simulated data with α and γ as observed. Top panels: Hubble diagram residuals from a single-x1 analysis plotted against ${x}_{1}^{f}$ − ${x}_{1}^{r}$. A moderate positive correlation can be seen in the real data and the α/γ-as-observed simulation (we show binned values in magenta), as expected from our primary finding that ${x}_{1}^{r}$ carries most of the luminosity information (Section 7). We also show single-x1 Hubble residuals plotted against ${x}_{1}^{r}$ (second-from-top panels) and ${x}_{1}^{f}$ (second-from-bottom panels). Also as expected, the correlations here are much weaker for the real data and α/γ-as-observed simulation, and no correlations are seen in the α = γ simulation. In the bottom panels, we show the observed ${x}_{1}^{r}$ plotted against ${x}_{1}^{f}$. ${x}_{1}^{r}$ and ${x}_{1}^{f}$ are correlated; this must be an intrinsic correlation, as the uncertainties are almost always anticorrelated (uncertainty in the date of maximum shifts ${x}_{1}^{r}$ and ${x}_{1}^{f}$ in opposite directions).

7.1. Analysis Cross-checks

We also run a series of cross-checks on the analysis, summarized in Table 3. We show the $\alpha /(\alpha +\gamma )$ credible interval, the fraction of the posterior with $\alpha \gt \gamma $ (as a measure of the statistical significance of our result), and the credible intervals for α and γ. In all cases, we have reasonable consistency with the nominal analysis.

Table 3.  Analysis Variants and Cross-checks

Run Variant α/(α + γ) $P(\alpha \gt \gamma )$ α Δα γ
Nominal, S/N > 0.75 ${0.21}_{-0.11}^{+0.10}$ 0.1% ${0.030}_{-0.016}^{+0.016}$ ${0.115}_{-0.013}^{+0.013}$
S/N > 1 ${0.19}_{-0.11}^{+0.10}$ 0.07% ${0.028}_{-0.017}^{+0.017}$ ${0.121}_{-0.014}^{+0.014}$
S/N > 0.5 ${0.24}_{-0.10}^{+0.10}$ 0.3% ${0.034}_{-0.015}^{+0.015}$ ${0.109}_{-0.013}^{+0.013}$
Four-dimensional ${\sigma }_{\mathrm{unexpl}}$ ${0.04}_{-0.21}^{+0.20}$ 1.5% ${0.005}_{-0.030}^{+0.031}$ ${0.142}_{-0.026}^{+0.029}$
Rescale ${x}_{1}^{r}$ Uncertainties ${0.02}_{-0.16}^{+0.14}$ 0.01% ${0.003}_{-0.024}^{+0.022}$ ${0.145}_{-0.021}^{+0.023}$
Gaussian Populations ${0.21}_{-0.11}^{+0.10}$ 0.09% ${0.030}_{-0.016}^{+0.016}$ ${0.115}_{-0.013}^{+0.013}$
Broken-linear α, $\tfrac{1}{2}({x}_{1}^{f}+{x}_{1}^{r})$ ${0.18}_{-0.11}^{+0.10}$ 0.06% ${0.026}_{-0.017}^{+0.016}$ $-{0.020}_{-0.029}^{+0.029}$ ${0.117}_{-0.013}^{+0.014}$
Broken-linear α, ${x}_{1}^{f}$ ${0.20}_{-0.11}^{+0.10}$ 0.08% ${0.028}_{-0.016}^{+0.016}$ $-{0.040}_{-0.032}^{+0.031}$ ${0.113}_{-0.013}^{+0.013}$
Low-z ${0.02}_{-0.26}^{+0.22}$ 0.7% ${0.002}_{-0.029}^{+0.028}$ ${0.118}_{-0.021}^{+0.022}$
SDSS ${0.05}_{-0.18}^{+0.15}$ 0.08% ${0.007}_{-0.026}^{+0.025}$ ${0.145}_{-0.022}^{+0.023}$
SNLS ${0.53}_{-0.22}^{+0.18}$ 57% ${0.081}_{-0.036}^{+0.034}$ ${0.070}_{-0.026}^{+0.029}$

Note. The variants on data and model selection provide a robust demonstration that γ > α, consistently indicating a preference for ${x}_{1}^{r}$ in the standardization. Two out of the three individual data sets also show a strong preference for γ > α, while the third (SNLS) shows consistency with that conclusion.

The top line shows our results for the primary analysis. The next two lines show our results varying the S/N cut. The stability of these results is evidence that UNITY correctly treats the per-SN uncertainties.

The next two lines investigate the impact of our assumptions about the light-curve-fit uncertainties. First, we allow the unexplained dispersion term to have a component in each variable (${\tilde{m}}_{B}$, ${x}_{1}^{r}$, ${x}_{1}^{f}$, c), rather than placing it all in magnitude (${\tilde{m}}_{B}$). We note that SALT2X inherits the SALT2 model uncertainties, so some uncertainty is effectively placed in each light-curve parameter, even in the nominal analysis. Our other uncertainty test investigates whether a pathology in the SALT2 model (e.g., incorrectly adding a large amount of model uncertainty to the rising portion of the light curves) may drive our results. In this test, we rescale all ${x}_{1}^{r}$ uncertainties by a constant (scaling the covariance between ${x}_{1}^{r}$ and the other parameters by the same constant). We take a broad log-normal prior on the scaling factor of $1\pm 0.5$. These two uncertainty tests mirror each other: one adds uncertainty in quadrature, while the other rescales by a multiplicative constant. Neither test changes our main conclusion.
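Rescaling one parameter's uncertainty while keeping the covariance matrix valid amounts to multiplying that parameter's row and column by the scale factor, so its variance scales by the square and each cross term by the factor itself. A minimal sketch, with a hypothetical 4×4 covariance over (${\tilde{m}}_{B}$, ${x}_{1}^{r}$, ${x}_{1}^{f}$, c) whose values are placeholders, not fits from the analysis:

```python
import numpy as np

def rescale_param_cov(cov, idx, scale):
    """Scale parameter `idx`'s uncertainty by `scale`: its variance
    scales by scale**2 and every covariance with another parameter
    scales by `scale` (equivalent to D @ cov @ D with D diagonal)."""
    d = np.ones(cov.shape[0])
    d[idx] = scale
    return cov * np.outer(d, d)

# Hypothetical light-curve-fit covariance, ordered (mB, x1r, x1f, c).
cov = np.array([[0.010,  0.002,  0.001, 0.003],
                [0.002,  0.500, -0.250, 0.010],
                [0.001, -0.250,  0.500, 0.010],
                [0.003,  0.010,  0.010, 0.004]])

cov2 = rescale_param_cov(cov, idx=1, scale=1.5)  # inflate x1r by 50%
```

Symmetry is preserved by construction, and only row/column 1 changes.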

We also consider whether our skew-normal population distribution is driving the results. For the results in the next line, we replace the multivariate skew-normal population distribution with a multivariate Gaussian. Our conclusions are virtually unaffected.

Next, we consider a broken-linear x1 standardization, as we already do for c. This cross-check tests whether a nonlinear ${x}_{1}^{r}$/${x}_{1}^{f}$ relation, combined with a nonlinear x1/luminosity relation, drives our results. For this test, we transform our light-curve fits into the variables ${\bar{x}}_{1}\equiv \tfrac{1}{2}({x}_{1}^{f}+{x}_{1}^{r})$ and ${\rm{\Delta }}{x}_{1}\equiv \tfrac{1}{2}({x}_{1}^{f}-{x}_{1}^{r})$. In analogy with the broken-linear color standardization, we introduce a broken-linear standardization on ${\bar{x}}_{1}$:

Equation (9)

${\alpha }^{{\prime} }({\bar{x}}_{1})\,{\bar{x}}_{1}+{\gamma }^{{\prime} }\,{\rm{\Delta }}{x}_{1},$

where

Equation (10)

${\alpha }^{{\prime} }({\bar{x}}_{1})=\left\{\begin{array}{ll}{\alpha }^{{\prime} }, & {\bar{x}}_{1}\lt 0\\ {\alpha }^{{\prime} }+{\rm{\Delta }}\alpha , & {\bar{x}}_{1}\geqslant 0.\end{array}\right.$

The new x1 standardization coefficients are α' and γ'. We can relate these back to α and γ as $\alpha =\tfrac{1}{2}({\alpha }^{{\prime} }+{\gamma }^{{\prime} })$ and $\gamma =\tfrac{1}{2}({\alpha }^{{\prime} }-{\gamma }^{{\prime} })$. These are the α and γ values quoted in Table 3; we also quote Δα. We see a slightly negative Δα (as did Rubin et al. 2015), but it is not statistically significant, and introducing Δα does not change our conclusion that γ > α.
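The change of variables and the coefficient map quoted above can be written as a short sketch (the numbers below are illustrative only); the two parameterizations produce identical standardization terms:

```python
def to_mean_diff(x1f, x1r):
    """(x1^f, x1^r) -> (x1bar, dx1)."""
    return 0.5 * (x1f + x1r), 0.5 * (x1f - x1r)

def coeffs_from_primed(alpha_p, gamma_p):
    """Map the (x1bar, dx1) coefficients (alpha', gamma')
    back to the rise/fall coefficients (alpha, gamma)."""
    return 0.5 * (alpha_p + gamma_p), 0.5 * (alpha_p - gamma_p)

# Illustrative values; in the linear (Delta-alpha = 0) limit the
# standardization term is the same in either basis.
x1f, x1r = 1.2, 0.4
alpha_p, gamma_p = 0.14, -0.09

x1bar, dx1 = to_mean_diff(x1f, x1r)
alpha, gamma = coeffs_from_primed(alpha_p, gamma_p)

term_primed = alpha_p * x1bar + gamma_p * dx1
term = alpha * x1f + gamma * x1r
```

The equality `term == term_primed` follows directly from expanding ${\alpha }^{{\prime} }{\bar{x}}_{1}+{\gamma }^{{\prime} }{\rm{\Delta }}{x}_{1}$ in terms of ${x}_{1}^{f}$ and ${x}_{1}^{r}$.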

Table 4.  Zero-point Offsets Applied to the JLA Light Curve Files for Use in sncosmo

Filter Zero-point MagSys Spectrum System
STANDARD-U 9.724 bd_17d4708_stisnic_003 Landolt 2007
STANDARD-B 9.907 bd_17d4708_stisnic_003 Landolt 2007
STANDARD-V 9.464 bd_17d4708_stisnic_003 Landolt 2007
STANDARD-R 9.166 bd_17d4708_stisnic_003 Landolt 2007
STANDARD-I 8.846 bd_17d4708_stisnic_003 Landolt 2007
4SHOOTER2-Us 9.724 bd_17d4708_stisnic_003 Landolt 2007
4SHOOTER2-B 9.8744 bd_17d4708_stisnic_003 Landolt 2007
4SHOOTER2-V 9.4789 bd_17d4708_stisnic_003 Landolt 2007
4SHOOTER2-R 9.1554 bd_17d4708_stisnic_003 Landolt 2007
4SHOOTER2-I 8.8506 bd_17d4708_stisnic_003 Landolt 2007
KEPLERCAM-Us 9.6922 bd_17d4708_stisnic_003 Landolt 2007
KEPLERCAM-B 9.8803 bd_17d4708_stisnic_003 Landolt 2007
KEPLERCAM-V 9.4722 bd_17d4708_stisnic_003 Landolt 2007
KEPLERCAM-r 9.3524 bd_17d4708_stisnic_003 Landolt 2007
KEPLERCAM-i 9.2542 bd_17d4708_stisnic_003 Landolt 2007
SWOPE2-u 10.514 bd_17d4708_stisnic_003 Stritzinger 2011
SWOPE2-g 9.64406 bd_17d4708_stisnic_003 Stritzinger 2011
SWOPE2-r 9.3516 bd_17d4708_stisnic_003 Stritzinger 2011
SWOPE2-i 9.25 bd_17d4708_stisnic_003 Stritzinger 2011
SWOPE2-B 9.876433 bd_17d4708_stisnic_003 Stritzinger 2011
swope2-v-lc3009 9.471276 bd_17d4708_stisnic_003 Stritzinger 2011
swope2-v-lc3014 9.476626 bd_17d4708_stisnic_003 Stritzinger 2011
swope2-v-lc9844 9.477482 bd_17d4708_stisnic_003 Stritzinger 2011
SDSS-u 0.06791 ab-spec.dat Betoule 2012
SDSS-g −0.02028 ab-spec.dat Betoule 2012
SDSS-r −0.00493 ab-spec.dat Betoule 2012
SDSS-i −0.0178 ab-spec.dat Betoule 2012
SDSS-z −0.01015 ab-spec.dat Betoule 2012

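The offsets in Table 4 can be attached to photometry following the per-observation "zp" and "zpsys" column convention that sncosmo uses for light-curve data. The sketch below uses plain Python structures and only a few illustrative filters; the magnitude-system labels are assumptions, not values from the JLA files:

```python
# Hypothetical bookkeeping: a few zero-points from Table 4, paired with
# assumed sncosmo magnitude-system names ("bd17" for the BD+17 4708-based
# Landolt system, "ab" for the SDSS AB system).
ZERO_POINTS = {
    "STANDARD-B": (9.907, "bd17"),
    "STANDARD-V": (9.464, "bd17"),
    "SDSS-g": (-0.02028, "ab"),
}

def attach_zeropoints(rows):
    """Add "zp" and "zpsys" entries to each photometry row in place,
    looked up by the row's band name."""
    for row in rows:
        zp, zpsys = ZERO_POINTS[row["band"]]
        row["zp"] = zp
        row["zpsys"] = zpsys
    return rows

rows = attach_zeropoints([
    {"time": 0.0, "band": "STANDARD-B", "flux": 1.0, "fluxerr": 0.1},
    {"time": 5.0, "band": "SDSS-g", "flux": 2.0, "fluxerr": 0.1},
])
```

In practice one would build an astropy Table with these columns before passing the data to a light-curve fitter.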

As an alternative broken-linear x1 standardization, we try a broken-linear ${x}_{1}^{f}$ standardization (keeping a linear standardization relation for ${x}_{1}^{r}$). This cross-check is motivated by the observation that, for ${x}_{1}^{f}\gt 0$, the ${x}_{1}^{r}$/${x}_{1}^{f}$ correlation seems to be weaker (bottom panel of Figure 5). It is thus at least possible that the luminosity changes nonlinearly with ${x}_{1}^{f}$. Again, ${\rm{\Delta }}\alpha $ is negative (but not statistically significant) and our conclusion that $\gamma \gt \alpha $ remains unchanged. Even with this freedom, ${x}_{1}^{r}$ contains more information.

We also divide our results by data set, shown in the last three lines of Table 3. Two out of the three (Low-z and SDSS) independently show strong evidence for $\gamma \gt \alpha $, and all three are consistent with the combined constraint. SNLS is the least consistent, although at least one out of three α/(α + γ) subsample measurements would be expected to fall ≳1.5σ from the combined constraint more than 35% of the time, so this is not unusual. Table 5 includes all of the parameters of the light-curve fits necessary to reproduce the sample selection, UNITY analyses, and cross-checks.
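The quoted rate can be checked with a short calculation; this is a sketch under the simplifying assumption of three independent Gaussian subsample measurements (ignoring their correlation with the combined fit):

```python
from scipy.stats import norm

# Two-sided probability that a single Gaussian measurement
# lands more than 1.5 sigma from the true value.
p_single = 2.0 * norm.sf(1.5)

# Probability that at least one of three independent
# measurements does so.
p_any = 1.0 - (1.0 - p_single) ** 3  # ~0.35
```

This reproduces the ∼35% chance quoted above.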

8. Conclusions

In this paper, we introduce the SALT2X model, which divides the SALT2 light-curve-shape parameter (x1) into a rising (${x}_{1}^{r}$) parameter and a declining (${x}_{1}^{f}$) parameter. We fit the JLA sample of SNe with this model, selecting only SNe with reasonable S/N and Gaussian ${x}_{1}^{r}$ and ${x}_{1}^{f}$ uncertainties. In order to standardize with both parameters simultaneously (despite the correlations between them), we use UNITY, a Bayesian hierarchical model that we demonstrate correctly recovers standardizations in the presence of such correlations. We find strong evidence that ${x}_{1}^{f}$ contains only a fraction (${0.21}_{-0.11}^{+0.10}$) of the x1 luminosity information, justifying our decoupling of the rise and fall behavior. This result is robust to changes in the data selection, changes to the assumed linearity of the standardization, and other analysis choices. End-to-end testing on simulated data demonstrates that our result is not due to a subtle difference between the quality of the rising and falling epochs in JLA, or to our implementation of the UNITY model.

When we shift more of the standardization to ${x}_{1}^{r}$, we see evidence that the host-mass standardization decreases in size, the unexplained luminosity dispersion decreases, and the color standardization shifts moderately in the direction expected for typical Milky Way extinction ($\beta \sim {R}_{V}+1=4.1$). These findings could imply that standardizing with ${x}_{1}^{r}$ reduces some of the astrophysical systematic uncertainties currently present in SN cosmology. Thus, future surveys that seek to make SN cosmological measurements, such as LSST and the Wide Field Infrared Survey Telescope, should consider maintaining, at a minimum, a cadence of one observation per 4–5 days in the rest frame to ensure that the rise and decline are independently constrained.
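The rest-frame cadence requirement translates to the observer frame through cosmological time dilation, which stretches intervals by (1 + z); a minimal illustration:

```python
def restframe_cadence(observer_days, z):
    """Rest-frame spacing (in days) of observations taken every
    `observer_days` days for a supernova at redshift z, since
    observed intervals are dilated by a factor of (1 + z)."""
    return observer_days / (1.0 + z)

# A 5-day observer-frame cadence at z = 0.5 samples the light curve
# every ~3.3 rest-frame days; at z = 0 it gives 5 rest-frame days.
dt_high_z = restframe_cadence(5.0, 0.5)
dt_low_z = restframe_cadence(5.0, 0.0)
```

Low-redshift SNe are therefore the limiting case: an observer-frame cadence adequate at high z can undersample the fast rise nearby.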

In Hayden et al. (2010), it is noted that no significant Hubble residual effect is found by separating the rise and fall stretches. Since the SDSS sample in the SALT2X analysis demonstrates a strong preference for γ > α, with many of the SNe common to both analyses, we investigated the difference in conclusion regarding the importance of the rise. In the Appendix, we demonstrate that not including the off-diagonal covariance terms from the light-curve fitting in Hayden et al. (2010) leads to an effective χ2 prior pushing toward α = γ. We note that Hayden et al. (2010) found $\alpha /\left(\alpha +\gamma \right)=0.42$, albeit without uncertainties; this qualitatively matches a detection of γ > α in the presence of a somewhat strong prior pushing toward α = γ. In this way, our result is not inconsistent with that of Hayden et al. (2010), but reflects a more thorough analysis.

In order to apply a rise-time-based analysis to a present cosmology result, one would need to include SNe with poor ${x}_{1}^{f}$ − ${x}_{1}^{r}$ constraints. This could be handled by moving the light-curve fitting and model training inside UNITY, which would allow the population parameters (which could vary with redshift) to be applied as priors for the SNe whose rise and fall are not independently measured. The unexplained dispersion could also be retrained at the same time. With the light-curve fitting and training marginalized directly during the cosmology fit, uncertainties would be more easily characterized without the need for posterior distribution approximations. Such a model is computationally expensive, but worth exploring: the best light-curve model, including the importance of the rise time, could then be evaluated within a single framework.

We thank Kyle Barbary for helpful comments regarding integration of the SALT2X model with sncosmo. We also thank Greg Aldering and Kyle Boone for useful discussions throughout the analysis. We acknowledge support from NASA through the WFIRST Science Investigation Team program. This research used resources of the National Energy Research Scientific Computing Center, a DOE Office of Science User Facility supported by the Office of Science of the U.S. Department of Energy under Contract No. DE-AC02-05CH11231. This work was also partially supported by the Office of Science, Office of High Energy Physics, of the U.S. Department of Energy, under contract no. DE-AC02-05CH11231.

Appendix: Comparison of Standardization Relation with Hayden et al. 2010

As mentioned in Section 8, in Hayden et al. (2010; H10) the authors found $\alpha /\left(\alpha +\gamma \right)=0.42$, and based on the χ2 of the fit and the rms of the residuals, determined that no significant preference for rise stretch (timescale) was detected. Since there is significant overlap with the JLA SDSS sample, the difference in conclusion in this work bears investigation.

There are many significant differences between the SALT2X analysis presented here and the H10 Hubble residual analysis (e.g., using the full light-curve information, rather than rest-frame B and V, and the UNITY framework for standardization). Here we demonstrate how the lack of off-diagonal covariance terms from the light-curve fits in the H10 Hubble residual analysis (confirmed by B. Hayden, an author of both works) acts as a prior pushing toward α = γ.

We construct a pseudo-χ2 for a representative single SN both with and without the covariance between the rise and fall width measurements as follows,

Equation (11)

${\chi }_{\mathrm{HRS}}^{2}=\displaystyle \frac{{\rm{\Delta }}{m}^{2}}{{\alpha }^{2}\,{C}^{{x}_{1}^{f}}+{\gamma }^{2}\,{C}^{{x}_{1}^{r}}+2\,\alpha \,\gamma \,{C}^{{x}_{1}^{r},{x}_{1}^{f}}+{C}^{\mathrm{other}}}$

Equation (12)

${\chi }_{{\rm{H}}10}^{2}=\displaystyle \frac{{\rm{\Delta }}{m}^{2}}{{\alpha }^{2}\,{C}^{{x}_{1}^{f}}+{\gamma }^{2}\,{C}^{{x}_{1}^{r}}+{C}^{\mathrm{other}}}$

where we use representative values for the covariance of a normal SN: ${C}^{{x}_{1}^{f}}={C}^{{x}_{1}^{r}}=0.5$, ${C}^{{x}_{1}^{r},{x}_{1}^{f}}=-0.25$, and ${C}^{\mathrm{other}}=0.02$, which represents the combined covariance of the other terms like ${\beta }^{2}\ {C}^{c}$ and CmB. In Figure 6, we show ${\chi }_{{\rm{H}}10}^{2}-{\chi }_{\mathrm{HRS}}^{2}$ versus α. Removal of the covariance term has a large effect, reducing the value of ${\chi }_{{\rm{H}}10}^{2}$ most at $\alpha =\gamma $ where $2\,\alpha \,\gamma \,{C}^{{x}_{1}^{r},{x}_{1}^{f}}$ is at an extremum. The measurement of $\alpha /\left(\alpha +\gamma \right)=0.42$ in H10 is thus a combination of the data preferring a low α and this prior-like χ2 difference due to the large (almost always negative) rise and fall covariance from the light-curve fits.
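The Δχ² behavior shown in Figure 6 can be reproduced from the representative values above. In this sketch we hold α + γ fixed (an illustrative normalization, not a constraint from the analysis) and take an arbitrary residual of 0.1 mag:

```python
import numpy as np

# Representative per-SN covariances from the text.
C_x1f = C_x1r = 0.5
C_cross = -0.25
C_other = 0.02

def chi2_terms(alpha, gamma, resid=0.1):
    """Pseudo-chi^2 with (HRS) and without (H10) the rise/fall
    covariance term in the denominator."""
    base = alpha**2 * C_x1f + gamma**2 * C_x1r + C_other
    chi2_hrs = resid**2 / (base + 2.0 * alpha * gamma * C_cross)
    chi2_h10 = resid**2 / base
    return chi2_h10, chi2_hrs

# Sweep alpha with alpha + gamma held fixed (illustrative).
total = 0.15
alphas = np.linspace(0.0, total, 101)
delta = np.array([np.subtract(*chi2_terms(a, total - a)) for a in alphas])

# chi2_H10 - chi2_HRS is most negative at alpha = gamma, i.e.,
# dropping the (negative) covariance term acts like a prior
# pulling the fit toward alpha = gamma.
```

The minimum of `delta` sits exactly at α = γ = 0.075, matching the extremum of $2\,\alpha \,\gamma \,{C}^{{x}_{1}^{r},{x}_{1}^{f}}$ described in the text.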

Figure 6. Effective Δχ2 of a typical supernova due to not including the rise and fall off-diagonal covariance as in Hayden et al. (2010). The lack of the covariance term behaves as a prior pushing toward α = γ, where $2\,\alpha \,\gamma \,{C}^{{x}_{1}^{r},{x}_{1}^{f}}$ is minimized (${C}^{{x}_{1}^{r},{x}_{1}^{f}}$ is almost always negative).

Table 5.  All Light-curve Fits Used in the Analysis

Name Set ${z}_{\mathrm{helio}}$ ${z}_{\mathrm{CMB}}$ Host Stellar Mass Host Mass Unc. ${m}_{B}$ ${x}_{1}^{f}$ c ${x}_{1}^{r}$ Cov(${m}_{B}$, ${m}_{B}$) Skewness ${x}_{1}^{r}$
05D3jr SNLS 0.37 0.370531 8.008 0.833 12.00 −0.94 0.05 0.76 0.00074 0.004
05D3jq SNLS 0.579 0.5796 8.832 0.4195 12.64 1.92 −0.00 0.79 0.00059 −0.213
sn2006an Lowz 0.064 0.065193 7.989 0.91 7.46 0.04 −0.05 −170.39 0.00093 −0.376
16073 SDSS 0.146 0.14468 9.779 0.152 9.60 1.56 −0.00 −0.68 0.00089 0.466
16072 SDSS 0.277 0.27549 10.64 0.0065 10.93 0.48 0.00 0.67 0.00757 0.593

Only a portion of this table is shown here to demonstrate its form and content. A machine-readable version of the full table is available.

Footnotes

  • We call this a SALT2XSource.

  • UNITY uses a mixture model to simultaneously model inliers and outliers. For our analysis, we assumed that the outlier distribution has a fixed spread equal to 0.5 mag in mB (added in quadrature with the other uncertainties). We do not find any SNe in our analysis where the outlier likelihood is greater than the inlier likelihood, as outliers were already rejected in building the JLA sample (their rejection was done with a frequentist analysis).

  • 10 We did not use skew directly to ensure that we considered the symmetry of only the core of the distribution and not any tails with only a small fraction of the samples.

  • 11 The simulated data were generated before the final analysis of the real data was unblinded. We noticed after the analysis was complete that the ${x}_{1}^{f}$/c covariance should be negative. This difference in sign drives the opposite sign of the correlation between β and $\alpha /\left(\alpha +\gamma \right)$ in Figures 3 and 4, so despite the visual difference, we achieve end-to-end recovery of the simulation inputs.

  • 12 The SALT2 model covariance increases significantly at early times, so for the simulation we cap the early-time model covariance at an S/N of 1, i.e., the maximum size of the model variance is (model flux)$^{2}$.

  • 13 To get better statistics for the simulated data in Figure 5, we generate Gaussian random light-curve fit results, rather than performing another computationally expensive end-to-end simulation. We draw from Equations (3) and (4), then convolve with random draws from the light-curve-fit covariance matrices of the real data to obtain the values with noise. To generate a self-consistent set of single-x1 values from these simulated ${x}_{1}^{r}$/${x}_{1}^{f}$ values, we take the covariance-weighted mean of ${x}_{1}^{r}$ and ${x}_{1}^{f}$.
