Community feedback welcomed on potential timeseries features

DMTN-118 is a new technote that discusses Rubin’s plans for the timeseries features it will compute on alert and Data Release lightcurves. Timeseries features touch a wide variety of scientific and technical issues, so we want to ensure that we hear from the science community before we make any final decisions.

The current draft of DMTN-118 provides a literature review and discusses some open questions. We are maintaining a working draft feature list at https://ls.st/fkr. Feedback on both documents is welcome.

We plan to present this material in a variety of community forums (SCMA VII, the PST/Science Collaboration Chairs talk series, at the Project and Community Workshop), so there will be many opportunities to comment. However, we also welcome asynchronous discussion of the document here on Community, either in reply to this post or as new topics.


Hi Eric, the review is looking great. Can I check what the thinking is regarding periodicity searches accommodating various noise processes/shapes (I couldn’t see it on my quick read, but perhaps I missed it)? Specifically, Vaughan et al. (2016) looked at how red noise tends to produce periodic-looking behaviour for a few cycles on long timescales; phase folding would then likely produce a decent chi^2, which could yield a load of spurious signals.

Indeed, false periodicities due to correlated noise are a major concern. I’m aware of the Delisle et al. 2020 work extending False Alarm Probabilities to consider correlated noise; other pointers are welcome!

This feature document is great! For fitting generic light curve profiles (e.g., Eqs 18, 19, 20), is it feasible to iterate with minimal outlier rejection, so that we avoid discarding an event because of one crazy flux measurement? An example outlier rejection is up to 1 (or 2?) observations with |(F_meas - F_model)/sigma| > 4. In DC2, the 10-sigma outlier fraction for SNIa is a few per thousand, but with 50 observations per light curve, about 1 in 10 events will have such a crazy outlier.
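A minimal sketch of what such an iterative scheme could look like, with a weighted straight-line fit standing in for the generic profiles; the thresholds and the stand-in model are illustrative, not choices from DMTN-118:

```python
import numpy as np

def fit_with_rejection(t, flux, sigma, model_fit, max_reject=2, thresh=4.0):
    """Fit, then iteratively drop up to `max_reject` points with
    |F_meas - F_model| / sigma > thresh, refitting after each rejection."""
    keep = np.ones(t.size, dtype=bool)
    while True:
        model = model_fit(t[keep], flux[keep], sigma[keep])
        resid = np.abs(flux - model(t)) / sigma
        worst = np.argmax(np.where(keep, resid, -np.inf))
        if resid[worst] <= thresh or (~keep).sum() >= max_reject:
            return model, keep
        keep[worst] = False

def linear_fit(t, f, s):
    # weighted straight-line fit as a stand-in for a transient profile fit
    return np.poly1d(np.polyfit(t, f, 1, w=1.0 / s))

t = np.arange(10.0)
flux = 2.0 * t + 1.0
sigma = np.ones_like(t)
flux[4] += 50.0  # one crazy flux measurement
model, keep = fit_with_rejection(t, flux, sigma, linear_fit)
# point 4 is rejected and the refit recovers the underlying slope of 2
```

The key property is that at most `max_reject` points are ever discarded, so a single wild flux cannot sink the event while real structure cannot be clipped away wholesale.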

This comment is from a statistical viewpoint. Astronomers have been using ‘sigma-clipping’ for outlier rejection for decades, but it is not clear this is best for small samples with non-Gaussian (and sometimes asymmetric) distributions. The standard deviation is particularly sensitive to non-Gaussianity. Instead, statisticians recommend robust measures of spread such as the IQR (interquartile range, 25% to 75%) or MAD (median absolute deviation), though again a criterion like 10*IQR does not have a universal meaning for all datasets. One can also consider classical outlier criteria (the Dixon and Grubbs tests from the 1950s), procedures based on fits to the Generalized Extreme Value distribution or (specifically for time series) ARIMA models, or methods from sequential analysis and quality-control charts.
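To make the sensitivity concrete, a small numerical comparison (the sample values are arbitrary; the 1.4826 and 0.7413 factors rescale MAD and IQR to the Gaussian sigma):

```python
import numpy as np

x = np.array([9.8, 10.1, 9.9, 10.2, 10.0, 9.7, 10.3, 25.0])  # one wild point

std = x.std(ddof=1)                                           # blown up by the outlier
mad = 1.4826 * np.median(np.abs(x - np.median(x)))            # robust, ~0.3
iqr = 0.7413 * (np.percentile(x, 75) - np.percentile(x, 25))  # robust, ~0.26
# std is ~5.3, roughly twenty times the robust estimates, so a clip
# keyed to std would never flag the 25.0 point in this small sample
```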

Matthew, the study you mention by Simon Vaughan is only part of the difficulty. The astronomers’ 1/f^alpha ‘red noise’ is a particular nonlinear, long-memory temporal process, but even linear short-memory processes like AR(p) with p > 2 produce temporary quasi-periodicities. So many autocorrelated processes in astrophysics, such as turbulence in accretion disks and avalanche models of stellar flaring, can produce spurious periodicities in periodograms. And the problem is worsened in time series with sparse observations, limited duration, and irregular spacings. This is why it is so hard to develop generally applicable False Alarm Probabilities for any periodogram: Fourier ~ Lomb-Scargle, phase dispersion minimization ~ AOV, and so forth. At the very least, one should examine the periodogram peak heights from permutations of the time series; this is a necessary, though not sufficient, procedure for identifying unreliable spectral peaks.
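A minimal version of that permutation check, using `scipy.signal.lombscargle`; the cadence, frequency grid, and injected signal are illustrative:

```python
import numpy as np
from scipy.signal import lombscargle

rng = np.random.default_rng(0)
t = np.sort(rng.uniform(0.0, 100.0, 60))            # sparse, irregular sampling
y = np.sin(2 * np.pi * t / 7.3) + rng.normal(0.0, 0.5, t.size)
y -= y.mean()
freqs = np.linspace(0.05, 3.0, 2000)                # angular frequencies

obs_peak = lombscargle(t, y, freqs).max()

# permutation null: shuffle fluxes over the fixed observation times, which
# preserves the flux distribution but destroys any temporal coherence
null_peaks = np.array([
    lombscargle(t, rng.permutation(y), freqs).max() for _ in range(200)
])
fap = (1 + np.sum(null_peaks >= obs_peak)) / (200 + 1)
```

Note that shuffling also destroys any correlated-noise structure, which is exactly why this test is necessary but not sufficient: red noise in the real data can still masquerade as a peak that the shuffled null never reproduces.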

I’m interested in the implementation of these features. I’m developing a high-performance light-curve feature extraction library, light-curve; it implements almost all “baseline” features listed in DMTN-118, and it is known to be used by Antares and Ampel. Performance tests show that the Python bindings to this library are 1.5-200 times faster than the feets library (~15x for a typical feature set). Is the Rubin Observatory interested in using an LSST-community-driven feature extraction library like this one?

Hi Eric, I completely agree with you that there are a variety of underlying shapes/processes which can produce false period/QPO candidates. The reason I referred to Simon’s paper specifically is that it has relevance for longer timescales, which have the potential to cause issues specific to searches for certain (compelling) physical processes (for instance, there appears to be a strong reason to initially reject candidates on timescales > 1/3 the observation length); and yes, shorter-timescale processes are certainly also important beyond just nu^-2. With regard to sparse sampling, PSRESP-like approaches allow a FAP via standard routes, although they are computationally taxing (especially as one should always try to account for model uncertainty in the FAP test). I don’t know if any of these are easily implementable on such expansive datasets. @ebellm - is it the case that period/QPO searches will be carried out on the entire data at regular points?

Hi @malanchev, very interesting, thanks for the pointer! (I assume you meant this repo, though?) For a variety of reasons I expect that we’re going to need to do our own feature implementation (in C++ and Python), but independent resources such as yours will be very useful for community validation.

@m.j.middleton

is it the case that period/QPO searches will be carried out on the entire data at regular points?

Yes, timeseries features will be computed on the entire LSST dataset during each Data Release. We will also compute (potentially a subset of) features each time we have a new DIASource detection during Alert Production, using a rolling 12-month history of detections.

Thank you for the answer!

Yes, I meant that repo, sorry. It would be great if you open-source it, because I believe one good implementation is enough for the community. Do you have a release schedule? I would stop working on my implementation if yours will be released soon.

I raise here the question whether non-astronomical methodological studies and packages designed for irregular time series can be useful for Rubin/LSST time domain studies. For software packages, possibilities include:
i) traces, “a Python library for unevenly-spaced time series analysis” written by three data scientists. It provides mostly an infrastructure with no advanced methods.
ii) zoo and xts. zoo is a long-standing R CRAN package providing an infrastructure for irregular time series, and xts extends the zoo class for improved R programming. While these packages might be interesting to LSST programmers, the potential science value is in the dozens of CRAN packages built on top of them. They are aimed towards analysis of data from engineering sensors, hydrology, econometrics, epidemiology, meteorology, pollution, time series graphics, and much more. Would you like me to spend time (10 hours?) examining these packages for possible use in LSST analysis?
iii) SPECTRUM is a package published in the journal ‘Computers & Geosciences’ providing various extensions to the Lomb-Scargle periodogram.

The mathematical literature on irregular time series is badly scattered across several fields: signal processing, econometrics, climatology, experimental physics, and astrophysics. There is no book synthesizing the literature. The most useful review I have found is a German PhD thesis on “spatio-temporal dependences from paleoclimate data” by Kira Rehfeld (2013). It integrates astronomers’ methods like the Edelson-Krolik correlation function and Lomb-Scargle periodogram into a broader framework, and the text is quite readable. References that might interest us include:
D. J. Albers and G. Hripcsak. Estimation of time-delayed mutual information and bias for irregularly and sparsely sampled time-series. Chaos, solitons, and fractals, 45(6):853–860, 2012.
P. Babu and P. Stoica. Spectral analysis of nonuniformly sampled data – a review. Digital Signal Processing, 20(2):359–378, 2010.
P. M. T. Broersen. Spectral Analysis of Irregularly Sampled Data with Time Series Models. The Open Signal Processing Journal, 1:7–14, 2008.
S. Dilmaghani, et al. Harmonic analysis of environmental time series with missing data or irregular sample spacing. Environmental Science & Technology, 41(20):7030–8, 2007.
K. Hocke and N. Kampfer. Gap filling and noise reduction of unevenly sampled data by means of the Lomb-Scargle periodogram. Atmospheric Chemistry and Physics, 9(12):4197–4206, 2009.
S. Lhermitte et al. A comparison of time series similarity measures for classification and change detection of ecosystem dynamics. Remote Sensing of Environment, 115(12):3129–3152, 2011.
M. Mudelsee. Estimating Pearson’s Correlation Coefficient with Bootstrap Confidence Interval from Serially Dependent Time Series. Mathematical Geology, 35(6):651–665, 2003.
M. Mudelsee. Climate time series analysis. Springer, 2010.
R. Quian Quiroga, et al. Event synchronization: A simple and fast method to measure synchronicity and time delay patterns. Physical Review E, 66(4): 041904, 2002.
K. Rehfeld et al. Comparison of correlation analysis techniques for irregularly sampled time series. Nonlinear Processes in Geophysics, 18(3):389–404, 2011.
P. Stoica and N. Sandgren. Spectral analysis of irregularly-sampled data: Paralleling the regularly-sampled data approaches. Digital Signal Processing, 16(6):712–734, 2006.


Eric,
some comments here for you from the UK and Lasair view, with a focus on the questions you raised in some of the sections (Q4.1 below refers to the question raised in Section 4.1, etc.).

Q4.1
Agree with Eric’s assessment: alert packet timeseries features need not be as extensive as the DRP timeseries features. Variable star features are much more useful in the DRP. Agree that disparate feature sets in the AP and DRP are not a problem; users will get used to them.

Would note that the statement “only objects that vary relative to the template image will produce a DIASource” should more accurately read:

“only objects that vary relative to the template image (by at least 5 times the noise level in the input image, assuming the input noise dominates over the template) will produce a DIASource”.

Otherwise there will be no DIASource and no measurement or data point (apart from ForcedPhot). Hence the alert light curve will be sparser than the DRP one (as noted in Section 4.10).

Q4.2 & 4.3

For transients, our thoughts would be to run a simple Bazin (or similar) lightcurve fit and report the parameters in each band. This will allow users to combine the information from multiple bands in whatever way they like.
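For concreteness, a single-band sketch of such a fit using the Bazin et al. (2009) empirical profile; the simulated cadence, noise level, and parameter values are arbitrary:

```python
import numpy as np
from scipy.optimize import curve_fit

def bazin(t, A, B, t0, tau_rise, tau_fall):
    """Bazin et al. (2009) empirical transient profile."""
    return A * np.exp(-(t - t0) / tau_fall) / (1.0 + np.exp(-(t - t0) / tau_rise)) + B

rng = np.random.default_rng(1)
t = np.sort(rng.uniform(0.0, 60.0, 30))   # one band's observation epochs (days)
err = np.full(t.size, 3.0)
flux = bazin(t, 100.0, 5.0, 20.0, 3.0, 15.0) + rng.normal(0.0, 3.0, t.size)

# crude but serviceable starting guesses from the data themselves
p0 = [flux.max(), flux.min(), t[np.argmax(flux)], 3.0, 15.0]
popt, pcov = curve_fit(bazin, t, flux, p0=p0, sigma=err, maxfev=10000)
# popt holds (A, B, t0, tau_rise, tau_fall) for this band; repeating the
# fit per filter gives the per-band parameter set suggested above
```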

Fairly basic statistics, such as the latest dy/dt (where y is either flux or mag), will be very useful. I (Stephen) would be willing to look more carefully at exactly what statistics are provided and how they are computed. Example variations would be:

  • when three epochs exist, then 2 values of dy/dt
  • colors (when multiple taken on same night)
  • latest mag (and/or flux) in each filter (and MJD of that)
  • peak mag (and/or flux) in each filter (and MJD of that)

Careful attention to the errors and the dt intervals is required, since two low-significance fluxes on the same night whose difference is only 2-3 sigma (df large, dt small) can lead to a spuriously large df/dt.
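One way to guard against that, sketched here with illustrative thresholds (`min_dt` and `min_snr` are example values, not from the document):

```python
import numpy as np

def latest_slope(mjd, flux, flux_err, min_dt=0.2, min_snr=3.0):
    """dy/dt from the two most recent points, with propagated uncertainty.

    Returns None when dt is too small or the flux change is not
    significant, to avoid spuriously large slopes from two noisy
    fluxes taken moments apart."""
    order = np.argsort(mjd)
    t = np.asarray(mjd, float)[order][-2:]
    f = np.asarray(flux, float)[order][-2:]
    e = np.asarray(flux_err, float)[order][-2:]
    dt = t[1] - t[0]
    df = f[1] - f[0]
    df_err = np.hypot(e[0], e[1])
    if dt < min_dt or abs(df) < min_snr * df_err:
        return None
    return df / dt, df_err / dt

ok = latest_slope([0.0, 1.0, 2.0], [10.0, 12.0, 20.0], [1.0, 1.0, 1.0])
bad = latest_slope([0.0, 1.0, 1.01], [10.0, 10.0, 11.0], [1.0, 1.0, 1.0])
# ok is (8.0, ~1.41); bad is None because dt is tiny and df insignificant
```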

The basic statistics should be run when there are at least 2 data points. For the basic model fits (e.g. Bazin), we agree with the document that empirical checks on how many points are required should be done. When the minimum is reached, then run the fit.

For Lasair - we recognise Eric’s words in Section 4.3.1. This basically says that lightcurve classification of periodic and aperiodic stellar Alert data in LSST is not particularly useful; it is much more of a DRP problem/solution.

Q4.4
Agree with Eric’s thoughts here. The main question is where responsibility lies (Rubin/Broker/user).

For transient-like objects (meaning no underlying stellar source), a parametrised fit such as the Bazin model (or similar) would suffice from Rubin. This is a way for users to select sources of interest through a broker, and then to extract the lightcurve data for their own model fits. An example would be:

  1. Give me all transients not coincident with stars, with a peak flux constraint (e.g. brighter than flux equivalent of m=23)
  2. Find most likely host galaxy and get host photoz (or existing specz) from Lasair
  3. Make a selection on the Bazin model parameters from the latest fits
  4. Design a query to get all lightcurves of objects that satisfy the above
  5. Run their own SALT2 models on all of these lightcurves

So the Rubin AP content would allow a user to do steps 1-3, a Broker would facilitate steps 1-3 through step 4, and the Broker would also provide the storage and compute for a user to run step 5.
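Steps 1-3 could look something like the plain-Python filter below; every field name here (star_match, peak_flux, host_photoz, bazin_rise) is a hypothetical stand-in, not the real Rubin alert or Lasair schema:

```python
def select_candidates(alerts, peak_flux_limit, bazin_rise_range=(1.0, 20.0)):
    """Filter a list of alert records per steps 1-3 of the workflow above.
    All field names are illustrative stand-ins for a future broker schema."""
    lo, hi = bazin_rise_range
    return [
        a for a in alerts
        if not a["star_match"]                  # 1. not coincident with a star
        and a["peak_flux"] > peak_flux_limit    # 1. peak-flux constraint
        and a["host_photoz"] is not None        # 2. host photo-z available
        and lo < a["bazin_rise"] < hi           # 3. cut on Bazin rise timescale
    ]

alerts = [
    {"star_match": False, "peak_flux": 2500.0, "host_photoz": 0.12, "bazin_rise": 4.2},
    {"star_match": True,  "peak_flux": 9000.0, "host_photoz": 0.05, "bazin_rise": 3.1},
    {"star_match": False, "peak_flux": 120.0,  "host_photoz": 0.40, "bazin_rise": 2.8},
]
good = select_candidates(alerts, peak_flux_limit=1000.0)  # keeps only the first record
```

In practice the same selection would be expressed as a broker query (step 4), with the returned lightcurves then fed into the user’s own SALT2 fits (step 5).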

Does that help you?

Q4.10

This is an important point - there are arguments for using the forced photometry fluxes in the transient parameter fit, as flux measured below 5 sigma can contribute significant information. Where possible, we would always recommend using the forced photometry fluxes. This comes from experience comparing early fitting/classifications from forced and non-forced photometry.

But it does need to be done relatively quickly - at the same time as the forced photometry is triggered.

There is perennial concern regarding the reliability of measures of periodicity, such as peaks in Lomb-Scargle periodograms (LSPs). There is much recent interest in the use of the Generalized Extreme Value distribution for evaluating False Alarm Probabilities, based on the Fisher-Tippett theorem. With the sparse and irregular cadences expected from LSST surveys, it is possible that many LSPs will show spurious peaks, and confusion between harmonics may be common when true periodicities are present.
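The GEV-based approach can be sketched as follows; the "null" peak maxima are drawn synthetically here, whereas in practice they would come from simulated or bootstrapped noise periodograms:

```python
import numpy as np
from scipy.stats import genextreme

rng = np.random.default_rng(2)
# stand-in for the maximum periodogram power over a 300-point frequency
# grid, for 500 independent white-noise realizations (chi^2_2/2 powers)
null_max = rng.chisquare(2, size=(500, 300)).max(axis=1) / 2.0

# Fisher-Tippett: maxima of many i.i.d. draws tend to a GEV distribution,
# so fit a GEV to the null maxima and read off a false-alarm threshold
shape, loc, scale = genextreme.fit(null_max)
threshold_1pct = genextreme.ppf(0.99, shape, loc=loc, scale=scale)
# an observed peak above threshold_1pct has an estimated FAP below 1%
```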

In this context, the recent paper by Toulis & Bean (arXiv:2105.14222, listed on arXiv under ‘stat’ rather than ‘astro-ph’) proposes a different approach in which one obtains hypothesis-test probabilities for specific LSP peaks of interest. Code for the least-squares-based algorithm is provided on GitHub in both R and Python. Panos Toulis is a young faculty member in statistics at the University of Chicago who has only recently addressed astrostatistical questions, and Jacob Bean is a faculty member in astronomy with long experience in exoplanet studies. They presented this method at the Statistical Challenges in Modern Astronomy VII conference a few weeks ago.
