GERLUMPH DATA RELEASE 2: 2.5 BILLION SIMULATED MICROLENSING LIGHT CURVES


Published 2015 April 10 © 2015. The American Astronomical Society. All rights reserved.
Citation: G. Vernardos et al. 2015 ApJS 217 23, DOI 10.1088/0067-0049/217/2/23


ABSTRACT

In the upcoming synoptic all-sky survey era of astronomy, thousands of new multiply imaged quasars are expected to be discovered and monitored regularly. Light curves from the images of gravitationally lensed quasars are further affected by superimposed variability due to microlensing. In order to disentangle the microlensing from the intrinsic variability of the light curves, the time delays between the multiple images have to be accurately measured. The resulting microlensing light curves can then be analyzed to reveal information about the background source, such as the size of the quasar accretion disk. In this paper we present the most extensive and coherent collection of simulated microlensing light curves; we have generated $\gt 2.5$ billion light curves using the GERLUMPH high resolution microlensing magnification maps. Our simulations can be used to train algorithms to measure lensed quasar time delays, plan future monitoring campaigns, and study light curve properties throughout parameter space. Our data are openly available to the community and are complemented by online eResearch tools, located at http://gerlumph.swin.edu.au.


1. INTRODUCTION

Gravitationally lensed quasars are unique natural laboratories for exploring the physics of extreme environments in the universe. Microlensed quasars in particular represent the only systems known so far where properties such as the temperature profile of the quasar accretion disk can be probed.

Quasar microlensing was first observed as uncorrelated light curve variability attributed to compact stellar mass objects within the galaxy-lens that lie close to the line of sight (Chang & Refsdal 1979, 1984; Kayser et al. 1986; Paczynski 1986; Schneider & Weiss 1987; Irwin et al. 1989). From this it has been possible to derive constraints on the accretion disk size and temperature profile in a number of systems.

Accretion disk size estimates for wavelengths between 0.07 and 2 microns vary in the range $14.5\lesssim {\rm log} \left( {{r}_{1/2}} \right)\lesssim 17.3$, where ${{r}_{1/2}}$ is the half-light radius measured in cm (e.g., see Pooley et al. 2007; Morgan et al. 2010; Blackburne et al. 2011; Muñoz et al. 2011; Jiménez-Vicente et al. 2014, and references therein for results on 25 systems). Temperature profiles are also found to be rather flat, i.e., the measured sizes are nearly independent of wavelength (e.g., see Blackburne et al. 2011; Jiménez-Vicente et al. 2014). These results are in disagreement with thin disk theory (Shakura & Sunyaev 1973), which predicts sizes up to an order of magnitude smaller and a dependence of disk size on wavelength of ${{r}_{1/2}}\propto {{\lambda }^{4/3}}$.

Another experiment that can be performed using gravitationally lensed quasars is measuring the value of Hubble's constant, H0 (Refsdal 1964). This has been done for at least 24 multiply imaged systems (see Tortora et al. 2004; Paraficz & Hjorth 2010; Eulaers & Magain 2011; Rathna Kumar et al. 2014, and references therein), yielding values in the range 50–100 km s−1 Mpc−1. This wide range of values, which is less accurate than results obtained from other methods (e.g., Cepheid variability; Riess et al. 2011; Freedman et al. 2012), is due to uncertainties and degeneracies in the lens potential (Kochanek 2002; Kochanek & Schechter 2004; Oguri 2007; Schneider & Sluse 2013; Suyu et al. 2014).

In order to obtain light curves suitable for H0 and accretion disk studies, lens systems have to be monitored for periods of months, or even years (e.g., see Gott 1981; Kundić et al. 1997; Eigenbrod et al. 2005; Fohlmeister et al. 2008). Moreover, accurate observational constraints, such as the positions and photometry of the multiple images, are required to produce accurate models of the galaxy-lens. Extracting the correct time delays can be further complicated by the onset of microlensing, which introduces additional uncorrelated variability between the multiple images (e.g., see Hojjati et al. 2013; Tewes et al. 2013). In fact, microlensing is expected to be taking place in all gravitationally lensed quasars, although with varying strength.

The effect of microlensing is usually modeled using a magnification map: a pixellated map of the magnification in the source plane induced by the foreground microlenses (see Figure 1). Such maps are produced using different implementations of the inverse ray-shooting technique (e.g., Kayser et al. 1986; Wambsganss 1999; Kochanek 2004; Thompson et al. 2010; Mediavilla et al. 2011; Vernardos & Fluke 2014a).


Figure 1. Central 2000 pixel wide (5 ${{R}_{{\rm Ein}}}$) region of a magnification map with $(\kappa ,\gamma ,s)=(0.52,0.36,0.5)$, produced by 5797 microlenses. The locations of the sampled light curves and their intersection points are shown in gray. The magnification along the thick (green online) curve is shown in Figure 2 for 25 different source profiles.


Due to the relative motion of observer, lens, and source, the source appears to move across the map crossing regions of high and low magnification and producing a variable light curve. Simulated light curves have been used to study high magnification events (Witt 1990; Yonehara 2001; Shalyapin et al. 2002), autocorrelation functions (Seitz et al. 1994; Lewis & Irwin 1996; Wyithe & Loeb 2002), and derivative distributions (Wyithe & Webster 1999). Kochanek (2004) developed a quantitative Bayesian analysis technique which has been used to study the background quasar (e.g., Dai et al. 2010; Morgan et al. 2010) and the foreground galaxy-lens (e.g., Mosquera et al. 2013).

Of the ∼90 known multiply imaged quasars (Mosquera & Kochanek 2011), ∼25 have been studied using microlensing techniques, either as single objects or in small groups. However, this is about to change due to the upcoming synoptic all-sky survey facilities, like the Large Synoptic Survey Telescope (LSST; LSST Science Collaboration 2009), which are expected to discover thousands of new lensed quasars suitable for microlensing studies (Oguri & Marshall 2010). Moreover, the high cadence of observations from these instruments will provide nearly effortless monitoring. These data are expected to increase the accuracy of H0 measurements (e.g., Coe & Moustakas 2009; Dobke et al. 2009), and improve our current techniques for constraining quasar structure (e.g., Sluse & Tewes 2014). It is therefore crucial for microlensing to move from single-object to parameter space studies.

As a theoretical counterpart of future quasar microlensing observational campaigns, the Graphics Processing Unit-Enabled High Resolution cosmological MicroLensing parameter survey (GERLUMPH; Vernardos & Fluke 2014a) has already generated $\gt 70,000$ microlensing magnification maps, the largest and most complete collection yet produced. The parameter space of convergence κ, shear γ, and smooth matter fraction s (see next section) is covered in unprecedented detail, allowing for comprehensive explorations of microlensing properties (e.g., Vernardos & Fluke 2014b).

In this paper we present how we used the GERLUMPH maps to generate $\gt 2.5$ billion simulated microlensing light curves, the largest and most extensive set of light curves available to date. Our approach spreads the modeling effort across the parameter space rather than focusing it on a single object: for example, the method of Kochanek (2004) may extract up to ${{10}^{6}}$ light curves from a single source profile and magnification map, while we extract 2000 light curves per source profile from each of 51,227 maps covering the parameter space (see next section). Hence, our data are not designed for detailed modeling of individual objects, but for studying the robustness of, and degeneracies in, models for many individual objects across the parameter space. Using these simulations to unveil systematic errors introduced in the modeling process will hopefully lead to better measurements of H0 and tighter accretion disk constraints.

We present our approach to extracting simulated microlensing light curves in Section 2. Our data are described in Section 3 and are openly accessible for download, complemented by online analysis tools that we introduce in Section 4. Finally, we conclude our paper and present future prospects in Section 5.

2. APPROACH

We have used 51,227 microlensing magnification maps from the GERLUMPH online resource, generated using the GPU-D (Thompson et al. 2014; Vernardos & Fluke 2014a) direct Graphics Processing Unit (GPU) implementation of the inverse ray-shooting technique (Kayser et al. 1986). The map resolution is 10,000 pixels on a side, the map width is set to 25 Einstein radii (${{R}_{{\rm Ein}}}$, see below), and the mass of the microlenses is fixed at 1 M$_{\odot }$. The maps are drawn from the region of parameter space with $0\lt \kappa \lt 1$ and $0\leqslant \gamma \lt 1.3$, which contains most galaxy-lens models of the currently known systems (see Figure 2 of Vernardos et al. 2014, or the relevant GERLUMPH online tool).

The smooth matter fraction is defined as $s={{\kappa }_{{\rm s}}}/\kappa $, where ${{\kappa }_{{\rm s}}}$ is the contribution to the total convergence by the smooth matter component. For each $\kappa ,\gamma $ combination we use 10 maps with different smooth matter content: $0\leqslant s\leqslant 0.9$ in steps of 0.1. The collection of maps used in this study is contained within the slightly larger set of maps used previously in Vernardos & Fluke (2014b). Properties of the GERLUMPH high resolution maps are examined in Vernardos et al. (2014), while a description of the GERLUMPH data, infrastructure, and tools, can be found in Vernardos & Fluke (2014a).

The characteristic microlensing scale length in the source plane is the Einstein radius:

$R_{\rm Ein}=\sqrt{\dfrac{4\,G\,\langle M\rangle }{c^{2}}\,\dfrac{D_{\rm os}\,D_{\rm ls}}{D_{\rm ol}}}$    (1)

where ${{D}_{{\rm ol}}}$, ${{D}_{{\rm os}}}$, and ${{D}_{{\rm ls}}}$ are the angular diameter distances from observer to lens, observer to source, and lens to source, respectively, $\langle M\rangle $ is the mean mass of the point-mass microlenses, G is the gravitational constant, and c is the speed of light. A typical range of values for ${{R}_{{\rm Ein}}}$ can be obtained from the sample of 87 lensed quasars compiled by Mosquera & Kochanek (2011): $(5.11\pm 1.88)\times {{10}^{16}}$ cm. Although these authors have used estimates of the lens redshift in a number of cases, their result is consistent with the CASTLES sample of 59 systems with both lens and source redshifts measured ($(5.35\pm 2.20)\times {{10}^{16}}$ cm, Falco et al. 2001). In the following, we use the mean from the Mosquera & Kochanek (2011) sample as the typical value for ${{R}_{{\rm Ein}}}$. This leads to a pixel size for the high-resolution GERLUMPH maps of $\sim 1.28\times {{10}^{14}}$ cm.
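As an illustration of Equation (1) and the resulting pixel scale, the following Python sketch computes ${{R}_{{\rm Ein}}}$ and the map pixel size; the angular diameter distances used below are placeholders for illustration only, not values for any particular system.

```python
import numpy as np

G = 6.674e-8       # gravitational constant [cm^3 g^-1 s^-2]
c = 2.998e10       # speed of light [cm s^-1]
M_SUN = 1.989e33   # solar mass [g]
Gpc = 3.086e27     # gigaparsec [cm]

def einstein_radius(D_ol, D_os, D_ls, mean_mass=1.0 * M_SUN):
    """Source-plane Einstein radius of Equation (1), in cm.

    D_ol, D_os, D_ls are angular diameter distances in cm."""
    return np.sqrt(4.0 * G * mean_mass / c**2 * D_os * D_ls / D_ol)

# Placeholder Gpc-scale distances, purely for illustration:
R_Ein = einstein_radius(D_ol=1.0 * Gpc, D_os=1.7 * Gpc, D_ls=1.2 * Gpc)

# GERLUMPH maps span 25 R_Ein across 10,000 pixels; for the typical
# R_Ein = 5.11e16 cm adopted in the text this gives ~1.28e14 cm per pixel.
pixel_size = 25.0 * 5.11e16 / 10000.0
print(R_Ein, pixel_size)
```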

Because the microlensing effect depends only weakly on the shape of the underlying accretion disk brightness profile (Mortonson et al. 2005), the half-light radius, ${{r}_{1/2}}$, is a convenient way to parametrize a range of disk models consistently. We model the quasar source profile as a face-on Gaussian disk, i.e., $I(r)={\rm exp} \left( -{{r}^{2}}/2{{\sigma }^{2}} \right)$, of varying size σ that is related to ${{r}_{1/2}}$ through

$r_{1/2}=\sqrt{2\,{\rm ln}\,2}\;\sigma \approx 1.18\,\sigma$    (2)

We truncate the profile at $r=3\sigma $, which is the radius containing 99.7% of the total brightness. Therefore, the total size, or diameter, of the profile, d, which is used to determine the size of the effective map (see below), is equal to $6\sigma $. We consider 25 profiles of varying size d: from $2\times {{10}^{15}}$ to $2\times {{10}^{16}}$ cm (∼0.8 to ∼8 light days) in steps of $2\times {{10}^{15}}$ cm, and from $2\times {{10}^{16}}$ to $1.7\times {{10}^{17}}$ cm (∼8 to ∼65 light days) in steps of $1\times {{10}^{16}}$ cm. Consequently, we cover the range $14.6\lesssim {\rm log} \left( {{r}_{1/2}} \right)\lesssim 16.5$ which contains most of the current microlensing size estimates of the accretion disk (see Section 1).
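A minimal sketch of how the 25 profile sizes translate into Gaussian parameters and convolution kernel widths in map pixels, assuming the pixel size derived above; the helper function is ours.

```python
import numpy as np

PIXEL_SIZE = 1.28e14   # cm per map pixel (Section 2)

def gaussian_profile_params(d_cm):
    """Return (sigma, r_half, kernel width in pixels) for a Gaussian disk
    of total size d = 6*sigma, truncated at r = 3*sigma."""
    sigma = d_cm / 6.0
    r_half = np.sqrt(2.0 * np.log(2.0)) * sigma   # half-light radius, Equation (2)
    width_pix = int(round(d_cm / PIXEL_SIZE))
    return sigma, r_half, width_pix

# The 25 profile sizes (diameters d) used in this paper:
sizes = np.concatenate([np.arange(2e15, 2e16 + 1e14, 2e15),
                        np.arange(3e16, 1.7e17 + 1e15, 1e16)])

for d in sizes:
    sigma, r_half, width = gaussian_profile_params(d)
    print(f"d = {d:9.2e} cm   log10(r_1/2) = {np.log10(r_half):5.2f}   kernel ~ {width} px")
```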

As will be shown in the next section, including wider profiles produces large amounts of additional data without significant changes in the light curves. Such behavior is expected because the microlensing effect is more prominent for smaller sources (e.g., Wambsganss & Paczynski 1991), while the magnification of large sources tends to the macro-magnification (see Equation (4)).

Light curves extracted from the original GERLUMPH maps hold magnification information for "pixel-size" sources, i.e., sources that are smaller than the pixel size of the maps. To obtain information for a source profile of finite size one has to convolve the profile with the maps. This can be achieved using the convolution theorem and a GPU to accelerate computing the Fourier transforms (see Vernardos & Fluke 2014a, for a detailed description of this technique). Assuming the map is periodic, which is required by the Fourier transform, leads to spurious magnification values around the edges of the convolved map. The size of these regions is equal to half the source profile size, i.e., $d/2$. The largest profile we use is 1332 pixels wide; therefore, by disregarding a 700 pixel wide region around the edges of the map we are not affected by spurious magnification values. The resulting effective convolved maps, from which the light curves are extracted, have a width of 8600 pixels, or $21.5{{R}_{{\rm Ein}}}$.
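A sketch of the convolution and edge-trimming step, using scipy's FFT-based convolution as a CPU stand-in for the GPU implementation of Vernardos & Fluke (2014a); array sizes follow the text, the function names are ours.

```python
import numpy as np
from scipy.signal import fftconvolve

def gaussian_kernel(width_pix):
    """Face-on Gaussian disk truncated at r = 3*sigma (total width d = 6*sigma pixels)."""
    sigma = width_pix / 6.0
    half = width_pix // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    r2 = x**2 + y**2
    kernel = np.exp(-r2 / (2.0 * sigma**2))
    kernel[r2 > (3.0 * sigma)**2] = 0.0
    return kernel / kernel.sum()

def convolve_and_trim(magmap, kernel, margin=700):
    """Convolve a pixel-size-source map with a source kernel and discard the
    border contaminated by the periodicity assumed by the FFT."""
    convolved = fftconvolve(magmap, kernel, mode="same")
    return convolved[margin:-margin, margin:-margin]

# For a full 10,000^2 pixel GERLUMPH map, a 700 pixel margin leaves an
# 8600 x 8600 pixel (21.5 R_Ein) effective map:
# effective_map = convolve_and_trim(magmap, gaussian_kernel(392), margin=700)
```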

We extract 2000 light curves from each effective map. Ideally, this number would be as high as possible; however, increasing it further would produce more data than we can currently store. It turns out that this number is sufficient to adequately sample the underlying magnification probability distribution (MPD) of the magnification maps (see next section).

The light curve locations are randomly selected once and then used for all the maps. The microlens positions used to generate the GERLUMPH maps are set randomly, with a different random seed for each map. Therefore, the locations of the caustic networks are not correlated between maps and the fixed light curve locations do not bias the extracted light curves. This approach reduces the number of parameters that we have to keep track of, and avoids issues with random number generation, e.g., when reproducing our results with different compiler versions. In Figure 1, we show a 2000 pixel wide region from the center of a fiducial magnification map with the locations of the extracted light curves.

The light curve length is set to $1.5{{R}_{{\rm Ein}}}$ and the sampling length to $0.0025{{R}_{{\rm Ein}}}$ (set by the resolution of our maps); using the values for ${{R}_{{\rm Ein}}}$ and the effective source velocity from Kochanek (2004) for Q2237+0305, the light curve would be ∼22 yr long, sampled every ∼13 days. Ideally, longer light curves would be desirable; however, further increasing the light curve length would produce more data than we can currently store. A length of $1.5{{R}_{{\rm Ein}}}$ is adequate to capture the microlensing variations (${{R}_{{\rm Ein}}}$ is the typical scale length for the onset of microlensing) and corresponds to $\sim 3\sigma $ of the Gaussian profile we used to model our largest source.
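To translate lengths in units of ${{R}_{{\rm Ein}}}$ into observable timescales, a minimal sketch of the conversion; the ${{R}_{{\rm Ein}}}$ and effective velocity values below are placeholders rather than the Kochanek (2004) numbers.

```python
SECONDS_PER_DAY = 86400.0

def length_to_days(length_rein, r_ein_cm, v_eff_cm_s):
    """Convert a length in units of R_Ein into a duration in days for a source
    moving with effective transverse velocity v_eff."""
    return length_rein * r_ein_cm / v_eff_cm_s / SECONDS_PER_DAY

# Placeholder numbers: R_Ein = 1e17 cm, v_eff = 500 km/s.
total_days = length_to_days(1.5, 1.0e17, 5.0e7)        # full light curve length
cadence_days = length_to_days(0.0025, 1.0e17, 5.0e7)   # one sampling step
print(total_days, cadence_days)
```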

The actual number of pixels crossed varies with the orientation of the light curve: 600 pixels are crossed in the horizontal or vertical direction, while only ∼425 pixels are crossed at an angle of $45{}^\circ $. Obviously, in the latter case some pixels are sampled more than once, leading to the appearance of short flat parts, or steps, in the light curve. This can also arise from the orientation of the light curve with respect to a caustic, and is a pathological behavior caused by the finite resolution of a pixellated magnification map. An interpolation prescription between neighboring pixels would resolve this and make the light curves smoother; however, we currently avoid making any assumptions about such smoothing and provide the raw sampled data.

Our approach to pixel sampling lies between Bresenham's algorithm (Bresenham 1965) and a supercover algorithm (e.g., Andres et al. 1997), which both belong to the digital differential analyzer class of algorithms used in computer graphics for the rasterization of geometrical shapes. In fact, reducing the sampling length indefinitely would lead exactly to supercover pixel sampling, i.e., sampling of all the pixels a line crosses. The maximum amount of light curve information would be extracted from a magnification map by calculating the portion of the curve that lies in each pixel. However, our choice of sampling avoids the need for complex data structures and increased data size (two values required per pixel), and more closely resembles observational data, which inevitably consist of discrete samples.
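To make the sampling scheme concrete, a sketch of nearest-pixel sampling along a randomly oriented trajectory of 600 steps of one map pixel each (i.e., $0.0025{{R}_{{\rm Ein}}}$ per step), which reproduces the repeated-pixel steps described above; the map here is random noise and the function names are ours.

```python
import numpy as np

SAMPLES = 600   # 1.5 R_Ein at 0.0025 R_Ein (one map pixel) per step

def extract_light_curve(magmap, x0, y0, angle_rad):
    """Sample SAMPLES magnification values along a straight line starting at
    (x0, y0), taking the nearest pixel at each step (no interpolation)."""
    steps = np.arange(SAMPLES)
    xs = np.rint(x0 + steps * np.cos(angle_rad)).astype(int)
    ys = np.rint(y0 + steps * np.sin(angle_rad)).astype(int)
    return magmap[ys, xs]   # at 45 degrees only ~425 distinct pixels are visited

# Toy usage: fixed random start points and orientations, kept at least
# SAMPLES pixels away from the edges so the trajectories stay on the map.
rng = np.random.default_rng(42)
toy_map = rng.poisson(100, size=(2000, 2000)).astype(np.float32)
x0 = rng.uniform(SAMPLES, 2000 - SAMPLES - 1, size=20)
y0 = rng.uniform(SAMPLES, 2000 - SAMPLES - 1, size=20)
angles = rng.uniform(0.0, 2.0 * np.pi, size=20)
curves = np.stack([extract_light_curve(toy_map, x, y, a)
                   for x, y, a in zip(x0, y0, angles)])   # shape (20, 600)
```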

2.1. The Data

We have carried out $\gt 1.28$ million convolutions across 5 days on a GPU supercomputer. Each combination of map, profile, and ${{R}_{{\rm Ein}}}$ is indexed in a database; the index of each entry points to a directory in a flat file system that holds all of the output (more details on the choices we made for the backend database and file storage can be found in Vernardos & Fluke 2014a). Each light curve consists of 600 sampled magnification values, stored as 32-bit floats. The 2000 light curves per convolution are stored in a binary file of ∼4.6 MB. In total, we have generated $\sim 2.5\times {{10}^{9}}$ light curves, corresponding to 5.6 TB of data. All our data are freely accessible to analyze and download using the eTools described in Section 4.
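Given the layout described above (2000 light curves of 600 32-bit float samples each per file), a downloaded lc_data.bin file can presumably be read as follows; the little-endian byte order is our assumption and should be checked against the instructions on the GERLUMPH website.

```python
import numpy as np

N_CURVES, N_SAMPLES = 2000, 600

def read_light_curves(path):
    """Read one GERLUMPH light curve binary file into a (2000, 600) float32
    array of magnification values (assumes little-endian floats stored
    curve after curve)."""
    data = np.fromfile(path, dtype="<f4")
    if data.size != N_CURVES * N_SAMPLES:
        raise ValueError(f"unexpected number of values: {data.size}")
    return data.reshape(N_CURVES, N_SAMPLES)

# curves = read_light_curves("lc_data.bin")
```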

3. RESULTS

As a first test, we consider maps for pixel-size sources and extract the MPD based purely on the magnification values of the 2000 light curves. We then compare this light curve MPD to the full MPD of the same map region using the Kolmogorov–Smirnov test, with the null hypothesis that the two distributions are the same. We find that ∼8% of the 51,227 tests fail with a p-value $\lt 0.05$. We note that the light curves intersect at 5995 pixel locations (the same for every convolved map), which are counted more than once in the calculation of the MPD, but given the low number of failed tests we choose to ignore this.

More than 95% of the failed tests occur for maps produced by $\lt {{10}^{4}}$ microlenses. In these maps the caustic networks are less dense and a higher number of light curves would be needed to properly probe the underlying map MPD. Indeed, increasing the number of light curves to 4000 and 8000 reduces the failure rate to ∼6.5% and ∼4%, respectively (but increases the size of the data products by factors of 2 and 4). Finally, it is expected that when a pixel-size source map is convolved with a source profile, the caustic networks, and the corresponding MPD, are smoothed out. We perform the same MPD comparisons using 2000 light curves from convolved maps, and find that even the smallest source profile, i.e., $d=2\times {{10}^{15}}$ cm or $\sim 0.04{{R}_{{\rm Ein}}}$, leads to $\lt 1$% failed tests.
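A sketch of the consistency test described above, using scipy's two-sample Kolmogorov–Smirnov test to compare the magnification values sampled by the light curves with all pixels of the same effective map region; the function is ours.

```python
import numpy as np
from scipy.stats import ks_2samp

def mpd_consistency_test(effective_map, light_curves, alpha=0.05):
    """Compare the magnification distribution sampled by the light curves with
    the full-map MPD. Returns (passed, p_value): passed is True if the null
    hypothesis of identical distributions is NOT rejected at level alpha."""
    sampled = np.asarray(light_curves).ravel()   # intersections counted more than once
    full = np.asarray(effective_map).ravel()
    statistic, p_value = ks_2samp(sampled, full)
    return p_value >= alpha, p_value

# Example: passed, p = mpd_consistency_test(effective_map, curves)
```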

The Kolmogorov–Smirnov test is only one method of assessing how representative a sub-sample of a population is. The test is more sensitive to differences near the peak of the distribution than at the extremes, where the individual probabilities are low (see Section 4.1 of Vernardos & Fluke 2014b). We reiterate that it is straightforward to generate additional light curves per map, and indeed more independent magnification maps, but it is non-trivial to store them.

In Figure 2 we plot the magnification values along the trajectory shown in Figure 1 for the 25 different profile sizes. We convert the magnification into a magnitude change with respect to the macro-magnification produced by the galaxy-lens alone, i.e., without the additional effect of microlensing:

$\Delta {\rm mag}=-2.5\,{\rm log}\left(\dfrac{L_{\rm total}}{L_{\rm macro}}\right)=-2.5\,{\rm log}\left(\dfrac{\mu \,L}{{{\mu }_{{\rm th}}}\,L}\right)$    (3)

where L is the unlensed source luminosity, ${{L}_{{\rm total}}}$ is the total magnified flux, ${{L}_{{\rm macro}}}$ is the magnified flux due to the galaxy-lens only, and the theoretical macro-magnification is:

${{\mu }_{{\rm th}}}=\dfrac{1}{{{(1-\kappa )}^{2}}-{{\gamma }^{2}}}$    (4)
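As an illustration, a minimal Python sketch of the conversion of Equations (3) and (4); taking the absolute value of the macro-magnification (to cover saddle-point images) is our own safeguard.

```python
import numpy as np

def macro_magnification(kappa, gamma):
    """Theoretical macro-magnification of Equation (4)."""
    return 1.0 / ((1.0 - kappa)**2 - gamma**2)

def delta_mag(mu, kappa, gamma):
    """Magnitude change relative to the macro-magnification (Equation (3)).
    Negative values mean the image is brighter than the macro model alone."""
    mu_th = np.abs(macro_magnification(kappa, gamma))
    return -2.5 * np.log10(mu / mu_th)

# Example for the (kappa, gamma) of the map in Figure 1:
# dmag = delta_mag(curves, kappa=0.52, gamma=0.36)
```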

For small source sizes, the light curve has a characteristic double-peaked shape, with the peaks corresponding to the caustic crossing events shown in Figure 1. As the profile size increases the two peaks are smoothed out, disappearing for the profile with size $2\times {{10}^{16}}$ cm, or $\sim 0.4{{R}_{{\rm Ein}}}$, which corresponds roughly to the length of the light curve that lies within the caustic. For even larger profiles the light curve becomes almost flat and the microlensing fluctuations are much less prominent, as expected.


Figure 2. Magnification values, expressed in units of $\Delta {\rm mag}$ (Equation (3)), along the trajectory shown in Figure 1. The thick black line (magenta online) corresponds to the smallest source size for which the microlensing induced fluctuations are the most prominent. The double peaks disappear for a profile with a size of $\sim 0.4{{R}_{{\rm Ein}}}$, roughly equal to the double peak separation. The pixel-size source light curve is shown in gray.


In the original implementation of GPU-D (Thompson et al. 2010), we used a random distribution of light rays. Compared to the alternative, i.e., a regular grid, this was a compromise that helped amortize the cost of additional GPU computation while allowing us to explore billions of lens configurations with arbitrary numbers of source-plane pixels. To this end, light ray positions were generated on the CPU (host memory) while the light ray deflections were being calculated on the GPU. As a result, there is an uncertainty in the magnification value of each map pixel: counting ${{N}_{i,j}}$ rays in a given pixel is accompanied by an approximately $\sqrt{{{N}_{i,j}}}$, i.e., Poisson-like, error. The code comparisons made by Bate et al. (2010) showed that these per-pixel magnification errors were small, and become less significant as the maps are convolved with realistic source profiles.

Low magnifications correspond to low ray counts and consequently to larger relative errors (i.e., ${{N}^{-1/2}}$). Therefore, we expect larger fluctuations in the low magnification parts of a simulated light curve for sources smaller than the map pixel size. Indeed, such behavior is observed for the pixel-size source light curve shown in Figure 2. As soon as the pixel-size source map is convolved with a profile, even the smallest one, the fluctuations disappear and the light curve becomes smoother.

The error in the magnification ${{\mu }_{i,j}}$ of a given map pixel is

$\delta {{\mu }_{i,j}}=\sqrt{{{N}_{i,j}}}\,\dfrac{\langle \mu \rangle }{\langle N\rangle }$    (5)

where $\langle N\rangle $ is the average number of rays per map pixel, and $\langle \mu \rangle $ is the average magnification per pixel, which can be computed from the total number of rays in a map and the number of rays that would have reached the map area in the absence of lensing (see the Appendix of Vernardos & Fluke 2013, for more details). For convolved maps, this error propagates through the convolution formula:

$\delta \mu _{i,j}^{{\prime} }=\sqrt{\displaystyle\sum_{x=-s}^{s}\sum_{y=-s}^{s}k_{x,y}^{2}\,{{\left( \delta {{\mu }_{i+x,j+y}} \right)}^{2}}}$    (6)

where we have assumed a square, normalized kernel k with a size of $2s$ pixels, centered on the $i,j$th map pixel. Keeping track of the exact error of the convolved magnification values would mean convolving the "error" map with the source profile and storing an additional value for each light curve pixel. Such an approach would double both the number of convolutions performed and the amount of data that has to be stored. To avoid this, we approximate the error given by Equation (6) by assuming that Poisson statistics hold for the convolved ray-count map. In this case, the error of the convolved magnification is:

$\delta \mu _{i,j}^{{\prime} }=\sqrt{N_{i,j}^{{\prime} }}\,\dfrac{\langle \mu \rangle }{\langle N\rangle }$    (7)

where $N_{i,j}^{{\prime} }$ is not necessarily an integer ray-count anymore. Taking into account that $\mu _{i,j}^{{\prime} }=N_{i,j}^{{\prime} }\langle \mu \rangle /\langle N\rangle $, we end up with the final expression for the convolved light curve error:

$\delta \mu _{i,j}^{{\prime} }=\sqrt{\mu _{i,j}^{{\prime} }\,\dfrac{\langle \mu \rangle }{\langle N\rangle }}$    (8)

In other words, we approximate the error propagated by the convolution with an assumed Poisson error of the convolved pixel in question.
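A sketch of the approximate (maximum) error of Equation (8) applied to a convolved light curve, and its propagation into $\Delta {\rm mag}$ units; the $\langle \mu \rangle $ and $\langle N\rangle $ values are placeholders for those stored in the per-map metadata files (see Section 4.1).

```python
import numpy as np

def light_curve_error(mu_convolved, mean_mu, mean_N):
    """Approximate (maximum) error on convolved magnification values,
    assuming Poisson statistics for the convolved ray counts (Equation (8))."""
    return np.sqrt(mu_convolved * mean_mu / mean_N)

def delta_mag_error(mu_convolved, mean_mu, mean_N):
    """Propagate the magnification error into Delta mag units,
    using d(mag) = (2.5 / ln 10) * d(mu) / mu."""
    dmu = light_curve_error(mu_convolved, mean_mu, mean_N)
    return 2.5 / np.log(10.0) * dmu / mu_convolved

# Placeholder <mu> and <N>; the real values are read from mapmeta.dat:
# err = delta_mag_error(curves, mean_mu=1.2, mean_N=350.0)
```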

In Figure 3, we compare the above approximation to the exact error, calculated for randomly sampled pixels from a representative set of maps. We see that our approximation is in fact an upper limit on the magnification error, which originates from the observation that

$\displaystyle\sum_{x=-s}^{s}\sum_{y=-s}^{s}k_{x,y}^{2}\,{{N}_{i+x,j+y}}\leqslant \sum_{x=-s}^{s}\sum_{y=-s}^{s}{{k}_{x,y}}\,{{N}_{i+x,j+y}}=N_{i,j}^{{\prime} }$    (9)

This inequality depends on the actual convolution kernel used, however, and proving it is beyond the scope of this paper. It suffices to say here that the Gaussian profiles we use are peaked at the location of the $i,j$th pixel; if profiles that peak away from the central pixel were used, our approximation would most likely fail.


Figure 3. Magnification errors in units of $\Delta {\rm mag}$ (same as y-axis of Figure 2). The exact error for convolved magnification values (Equation (6)) is computed in 100 locations within a magnification map; 130 maps are used spanning the parameter space examined here. The results for a convolution kernel 16 pixels wide are shown as filled black circles, while open gray circles correspond to a kernel with 158 pixels on a side. Different errors for the same magnification (vertical spread of the points) are produced due to different neighboring pixels within the range of the convolution kernel, e.g., near a caustic. The solid line (red online) is the approximation of Equation (7); values of $\langle \mu \rangle $ and $\langle N\rangle $ from any of the maps produce practically the same curve.


Thus far we have assumed fixed physical sizes for the profiles, and the convolutions have been performed for a fixed value of ${{R}_{{\rm Ein}}}$ (see Section 2). However, what if a different value of ${{R}_{{\rm Ein}}}$ is required, e.g., to study a specific lensed system? In this case, the GERLUMPH light curve data can still be used, but care has to be taken to scale the source sizes correctly. For example, if we apply the GERLUMPH maps to the quasar Q2237+0305, which has ${{R}_{{\rm Ein}}}=1.81\times {{10}^{17}}$ cm for 1 M $_{\odot }$ microlenses (Mosquera & Kochanek 2011), our largest source profile, $1.7\times {{10}^{17}}$ cm, would correspond to a 376 pixel Gaussian kernel. The closest convolution kernel to this size is 392 pixels, which corresponds to a $5\times {{10}^{16}}$ cm profile and ${{R}_{{\rm Ein}}}=5.11\times {{10}^{16}}$ cm. Of course, the closer the size of the source profile in pixels is to one of the convolution kernels, the more similar the light curves will be; equal kernel sizes would lead to identical light curves, meaning that the scaled light curves would be exact even though a different value of ${{R}_{{\rm Ein}}}$ was used.
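The rescaling described above can be sketched as a lookup of the nearest available convolution kernel; the profile sizes are the 25 values of Section 2 and the pixel scale follows from the adopted ${{R}_{{\rm Ein}}}=5.11\times {{10}^{16}}$ cm, while the function name and rounding convention are ours.

```python
import numpy as np

R_EIN_GERLUMPH = 5.11e16                          # cm, value adopted for this data release
PIXEL_SIZE = 25.0 * R_EIN_GERLUMPH / 10000.0      # cm per map pixel

# The 25 physical profile sizes (diameters d) used for the convolutions:
PROFILE_SIZES = np.concatenate([np.arange(2e15, 2e16 + 1e14, 2e15),
                                np.arange(3e16, 1.7e17 + 1e15, 1e16)])
KERNEL_PIX = np.rint(PROFILE_SIZES / PIXEL_SIZE).astype(int)

def nearest_profile(d_cm, r_ein_cm):
    """For a source of physical size d_cm in a system with Einstein radius
    r_ein_cm, return the GERLUMPH profile whose kernel size (in pixels) is
    closest to the requested one."""
    target_pix = d_cm / (25.0 * r_ein_cm / 10000.0)
    idx = int(np.argmin(np.abs(KERNEL_PIX - target_pix)))
    return PROFILE_SIZES[idx], KERNEL_PIX[idx], target_pix

# Q2237+0305 example from the text (R_Ein = 1.81e17 cm): a 1.7e17 cm source
# corresponds to ~376 pixels, and the nearest GERLUMPH profile is the 5e16 cm one.
d_best, pix_best, pix_target = nearest_profile(1.7e17, 1.81e17)
print(pix_target, d_best, pix_best)
```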

Variations in the value of ${{R}_{{\rm Ein}}}$ due to other reasons, e.g., uncertainties in the redshifts of the lens and source or a different microlens mass, lead to a similar scaling of the source profile sizes. We demonstrate this with the example of a variation in the Hubble constant. The value of ${{R}_{{\rm Ein}}}$ used in this paper, i.e., $(5.11\pm 1.88)\times {{10}^{16}}$ cm (Mosquera & Kochanek 2011), was obtained assuming a universe with ${{{\Omega}}_{m}}=0.3$, ${{{\Omega}}_{{\Lambda}}}=0.7$, ${{{\Omega}}_{k}}=0$, and ${{H}_{0}}=72$ km s−1 Mpc−1. The angular diameter distances appearing in Equation (1), DA, depend linearly on the line of sight comoving distance, DC, as

${{D}_{A}}=\dfrac{{{D}_{C}}}{1+z}$    (10)

which in turn is proportional to the Hubble distance, ${{D}_{H}}=c/{{H}_{0}}$. This leads to

$\dfrac{\delta {{R}_{{\rm Ein}}}}{{{R}_{{\rm Ein}}}}=\dfrac{1}{2}\,\dfrac{\delta {{H}_{0}}}{{{H}_{0}}}$    (11)

which relates the error in H0 to the error in the derived ${{R}_{{\rm Ein}}}$ value. Our profile sizes can then be scaled according to the discussion in the previous paragraph. Similar reasoning would lead to a different relation for the dependence of ${{R}_{{\rm Ein}}}$ on H0 for other cosmological parameters.
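For completeness, a one-line sketch of the scaling implied by Equation (11), i.e., ${{R}_{{\rm Ein}}}\propto H_{0}^{-1/2}$ when all distances scale with the Hubble distance; the rescaled value can then be fed into the kernel lookup sketched above.

```python
def scale_einstein_radius(r_ein_ref, h0_ref, h0_new):
    """Rescale an Einstein radius computed with H0 = h0_ref to a new H0,
    assuming all distances scale with the Hubble distance c/H0."""
    return r_ein_ref * (h0_ref / h0_new)**0.5

# e.g., the fiducial 5.11e16 cm (H0 = 72 km/s/Mpc) rescaled to H0 = 67 km/s/Mpc:
r_new = scale_einstein_radius(5.11e16, 72.0, 67.0)
print(r_new)
```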

4. ETOOLS

4.1. Accessing the Data

The GERLUMPH light curve data can be openly accessed online. To this purpose, we have extended the getquery eTool described in detail in Vernardos & Fluke (2014a, Section 4 and Figure 7). Detailed instructions and help tips on how to download the data are provided on the GERLUMPH website throughout the various stages of the process.

The 2000 light curves for each map are stored in a binary file (lc_data.bin), which is compressed using a standard Unix tool (e.g., gzip or bzip2). The downloaded files are grouped in indexed directories for each map and source profile. The values of $\langle \mu \rangle $ and $\langle N\rangle $, required to calculate the errors on the magnification values, are stored in a metadata file (mapmeta.dat) in each map directory. Finally, the root directory contains two reference files for looking up the map and profile indices (mINDEX.txt and pINDEX.txt) and their properties, e.g., κ, γ, profile size, etc.

4.2. The GIMLET Tool

Simulated light curves from the GERLUMPH maps can be inspected using the GERLUMPH Interactive Microlensing Lightcurve Extraction Tool (GIMLET), which is openly available online at http://gerlumph.swin.edu.au/tools/gimlet/. This is an exploration and planning tool, whose main goals are:

  1. To plot high resolution light curves and show their location on the corresponding magnification maps.
  2. To plot low resolution light curves for comparison, which can be extracted interactively in real time.
  3. To demonstrate the effects of light curve sampling.

The interactive light curve is not extracted from the full resolution magnification map, as this would consume considerable computational resources due to the size of each map (381 MB). Instead, a scaled-down version of the map is used that has been precomputed and stored in the GERLUMPH database (the sample.png file, see Vernardos & Fluke 2014a). The spatial resolution of this map icon is 1000 × 1000 pixels, but it can be reduced according to the resolution of the monitor used to view the webpage. The magnification, in units of $\Delta {\rm mag}$ (Equation (3)), is binned into 256 bins in the range $[-4,4]$ (values outside this range, which are extreme and rare, are set equal to the range limits). These approximations allow light curves to be generated in real time. To obtain high quality data, users are advised to download the high resolution light curve data rather than rely on GIMLET.
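The binning used for the map icon can be illustrated as follows, in Python rather than the JavaScript used by the actual tool; the function name and the uint8 output are our own choices.

```python
import numpy as np

def quantize_delta_mag(dmag, lo=-4.0, hi=4.0, levels=256):
    """Clip Delta mag values to [lo, hi] and map them onto integer bins
    0..levels-1, as done for the scaled-down map icon displayed by GIMLET."""
    clipped = np.clip(dmag, lo, hi)
    bins = np.floor((clipped - lo) / (hi - lo) * (levels - 1)).astype(np.uint8)
    return bins

# icon_bins = quantize_delta_mag(dmag_icon)   # dmag_icon: Delta mag values of the map icon
```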

In Figure 4 we show a screenshot of the GIMLET webpage. The panel shown, "Controls," contains the basic features and functionality. High and low resolution light curves can be displayed and compared. The length of the light curves is measured in units of ${{R}_{{\rm Ein}}}$, but it is possible to switch to units of time (days) by specifying values for ${{R}_{{\rm Ein}}}$ and for the effective velocity of the source, ${{\upsilon }_{s}}$. A time interval, ${\Delta}t$, can be set to simulate the cadence of observations. Other functions can be performed in the remaining panels: any of the GERLUMPH maps can be loaded ("Change map"), the source profile for the interactive low resolution light curves can be modified interactively ("Interactive Profile"), and the color scheme used on the displayed map can be changed ("Color").


Figure 4. Screenshot of the GIMLET tool, located at http://gerlumph.swin.edu.au/tools/gimlet. Detailed instructions and help in using the GIMLET tool can be found online. Here we outline its main features: (A) scaled down version of a GERLUMPH map, with the locations of the high and low resolution light curves. The low resolution light curve can be moved and rotated interactively by the user. (B) The corresponding high (black) and low (gray) resolution light curve variations, plotted as a function of length, measured in ${{R}_{{\rm Ein}}}$, or time measured in days. (C) Panels that allow the map, source profile, and color coding of the map to be changed (see Section 4.2 for more details). (D) The control panel for displaying the light curves and enabling sampling.


4.2.1. Implementation

The content of the tool webpage is rendered by a PHP script that handles all communication with the database. JavaScript functions and the jQuery library are used to further manipulate the elements of the webpage: reload the high resolution light curve data, extract the interactive light curve, and handle all the button functions. The Flot library is used to plot the light curves. The map is rendered in an HTML5 canvas element and the KineticJS library is used to plot the light curve location on the map. Color is applied to the map pixels using the WebGL JavaScript application programming interface, which allows for GPU acceleration at the user end.

5. SUMMARY AND DISCUSSION

We have used the GERLUMPH magnification maps to produce $\gt 2.5$ billion simulated microlensing light curves. Our data are publicly available for download from the GERLUMPH server. We also release an online exploration and planning tool for plotting the high resolution light curves presented here and for extracting interactive low resolution light curves in real time. Our goal is to provide an extensive and consistent set of light curves to be used as a benchmark for future parameter space and individual system microlensing studies. The numerical errors of our data and the scaling of the source profiles with the values of ${{R}_{{\rm Ein}}}$ and H0 were described in Section 3.

Such a complete set of light curves is of high relevance to microlensing studies of large numbers of lensed quasars in the upcoming all-sky survey era of astronomy. While our light curve data cannot be used to fit observed light curves of single systems, they are designed to test the robustness of, and degeneracies in, such techniques for many individual objects across the parameter space. In this way, we will be able to uncover potential systematic errors introduced in the modeling of microlensed quasars, which will hopefully lead to better measurements of H0 and tighter accretion disk constraints.

Our results can be used to train machine learning algorithms for measuring time delays (e.g., Cuevas-Tello et al. 2006; Hirv et al. 2011). Our simulations do not try to reproduce any specific observed light curve directly; instead, they cover a large range of possible, yet currently unobserved scenarios. As such, they are highly suitable as input to unsupervised machine learning algorithms as part of the process of automatically identifying and classifying the thousands of new lensed quasars expected from future synoptic surveys. For example, they are ideally suited for the time delay challenge (TDC; Dobler et al. 2013; Liao et al. 2014), a collaborative community effort that uses mock observations of multiply imaged quasars and attempts to measure the time delays using various techniques. The GERLUMPH light curves are available at high resolution and for a wide range of source and lens model parameters that can be matched to those of the TDC mock observations.

Additional machine learning and data mining approaches can be used to explore our simulated light curve dataset (e.g., see Ball & Brunner 2010; Ivezic et al. 2014). This could be done by calculating basic statistical properties, e.g., mean, median, variance, etc., or more advanced properties, like the power spectrum or the number of peaks in the light curves, their prominence, and/or their separation. The entire sample of light curves could then be classified using such metrics as a new way of exploring and understanding lens and source model specific degeneracies. Given the large size of the data, i.e., 2.5 billion light curves corresponding to 5.6 TB, the high dimensionality of the parameter space, and the many metrics and classification techniques that can be used, this task will be the topic of future work.

We have focused on the source size as the most significant source property for microlensing (Mortonson et al. 2005). However, theoretical studies could be envisaged using more complicated source profiles (inclined disks, biconical flows, disks with hot spots, etc.) that could be compared against the dataset presented here. Since our fast GPU convolution implementation ($\gt 1.28$ million convolutions across 5 days using gSTAR) and our data management infrastructure are already in place, this is a relatively straightforward task. Further developments, like machine learning classification techniques, would provide additional tools to investigate the effect of second order source characteristics.

Our choice of 1.5 ${{R}_{{\rm Ein}}}$ long light curves and 2000 light curves per source and map was justified in Sections 2 and 3, respectively. The main restriction on the length and number of light curves is the storage space currently available on the host facility. Whereas much of the microlensing analysis process has, to date, been compute or memory limited, we are now in a regime where we are I/O limited. We are separately investigating the usefulness of data compression techniques (lossless and lossy), which may allow us to add more light curves to the database (Vohl et al. 2015).

Planning of future and ongoing monitoring campaigns of specific systems could be facilitated by using our high resolution data and the GIMLET tool presented in Section 4. Existing GERLUMPH maps for the $\kappa ,\gamma $ values of the targeted systems can be selected from the GERLUMPH database and inspected. By providing a value for ${{R}_{{\rm Ein}}}$ and the effective source velocity, low resolution light curves can be produced interactively by changing their length and orientation on the map. The user can then intuitively decide the best observational strategy, e.g., in terms of the cadence of observations, based on the visual appearance of the light curves.

The GIMLET tool presented in this paper is a building block for a more advanced online modeling tool. GPU acceleration in the browser provided by WebGL presents opportunities to model three-dimensional source profiles interactively in real time, e.g., to add components like an event horizon or jets, and to modify their properties, like the opening angle or Schwarzschild radius. WebGL could also be used to perform the convolutions between the modeled source profiles and the scaled-down version of the map without any overhead, since all the computations would be performed in the browser in real time. Finally, we intend to integrate the browser-based front end more fully with a back-end supercomputer, where user requests for high resolution, full scale modeling would be sent for computation. This working model may in fact not lie far from the data access and analysis approaches that will be followed by future all-sky survey facilities.

In conclusion, we have presented high quality microlensing simulations throughout the parameter space, preparing the theoretical ground for the upcoming all-sky survey era of quasar microlensing. This dataset can be used as a benchmark for existing and future single object and parameter space studies, and can be further explored using machine learning and data mining techniques. Our data and software are made openly available for further use by the community. We complement our data with comprehensive and innovative online analysis tools. We hope that this work will contribute to the advancement of the data intensive quasar microlensing studies of the future.

This research was undertaken with the assistance of resources provided at the GPU-Supercomputer for Theoretical Astrophysics Research (gSTAR) through the Astronomy Supercomputer Time Allocation Committee, supported by the Australian Government. gSTAR is funded by Swinburne University and the Australian Government's Education Investment Fund. D.C. acknowledges receipt of a QEII fellowship from the Australian Research Council.
