Simulation of effective filters per visit?

I was wondering if it would be possible to extract the effective filter band per visit from a simulation of observations spanning a reasonable distribution of observing conditions? The y-band in particular will have considerable throughput variation over the survey, which I would guess is, e.g., seasonal and position dependent (i.e. how far the pointing is from zenith).

Given some cadence and observing-condition simulations, is there a step where the effective throughput as a function of wavelength is stored? I'm not looking to produce any actual simulated observations, just the throughput curves, so that they could then be convolved with any SED to simulate a more idealized/simplified observation.

The simulations stack does not, as of yet, account for throughput variations as a function of observing conditions.

The baseline/ directory in the throughputs package contains throughput data for all of the physical components of the telescope (filter, lenses, mirrors, and detectors), so, if someone wanted to run MODTRAN for different airmasses, it would be possible to create throughputs as a function of airmass. That is not a standard part of the stack, though.
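
For example, a minimal (untested) sketch of that combination, assuming the two-column wavelength/throughput format and the file names in the baseline/ layout I have at hand (they may differ in your checkout of the throughputs package):

```python
# Sketch only: multiply the baseline hardware components from the throughputs
# package with an atmosphere file (e.g. a MODTRAN run at some airmass) to get
# a total per-visit throughput. File names may differ in your version.
import numpy as np

THROUGHPUTS_DIR = "throughputs/baseline"   # path to a local throughputs checkout
components = ["detector", "lens1", "lens2", "lens3", "m1", "m2", "m3", "filter_y"]

# Common wavelength grid in nm; the baseline files cover roughly 300-1150 nm.
wavelen = np.arange(300.0, 1150.0, 0.1)

def read_throughput(path, wavelen):
    """Read a two-column (wavelength [nm], throughput) file onto a common grid."""
    w, s = np.loadtxt(path, unpack=True)
    return np.interp(wavelen, w, s, left=0.0, right=0.0)

# Hardware-only throughput: product of mirrors, lenses, detector and the y filter.
hardware = np.ones_like(wavelen)
for comp in components:
    hardware *= read_throughput(f"{THROUGHPUTS_DIR}/{comp}.dat", wavelen)

# Multiply in an atmosphere; swap in your own MODTRAN output for other
# airmasses or water vapor contents.
atmosphere = read_throughput(f"{THROUGHPUTS_DIR}/atmos_std.dat", wavelen)
total = hardware * atmosphere
np.savetxt("total_y.dat", np.column_stack([wavelen, total]))
```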

Using MAF, you could pull out the series of observations for a given spot in the sky and then, with the airmasses, dates, etc., generate a realistic atmospheric throughput for each visit.
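
Roughly like this (an untested sketch; the column names such as expMJD/airmass/filter and whether the slicer wants degrees or radians depend on your OpSim and MAF versions):

```python
# Sketch: use MAF's PassMetric + UserPointsSlicer to pull every visit covering
# one sky position, keeping the columns needed to build per-visit atmospheres.
import numpy as np
import lsst.sims.maf.db as db
import lsst.sims.maf.metrics as metrics
import lsst.sims.maf.slicers as slicers
import lsst.sims.maf.metricBundles as metricBundles

opsdb = db.OpsimDatabase("opsim_output.db")        # your OpSim run
metric = metrics.PassMetric(cols=["expMJD", "airmass", "filter"])
# One point on the sky; check whether your MAF version expects degrees or radians.
slicer = slicers.UserPointsSlicer(ra=[np.radians(53.0)], dec=[np.radians(-28.0)])
sqlconstraint = 'filter = "y"'                     # only y-band visits

bundle = metricBundles.MetricBundle(metric, slicer, sqlconstraint)
group = metricBundles.MetricBundleGroup({"yvisits": bundle}, opsdb,
                                        outDir="maf_out", resultsDb=None)
group.runAll()

visits = bundle.metricValues[0]   # record array of the visits at that point
print(visits["expMJD"], visits["airmass"])
```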

Of course, in theory, the calibration telescope should provide the atmospheric throughput, so the LSST reported magnitudes will be corrected for any changes in the atmosphere.

The outrigger telescope will measure conditions, but in order to correct the observation you will need to make an assumption about the underlying SED. If the template set you compare to isn't quite the same as the real underlying SED, then you will make a small error in the correction. One of the things we wanted to examine was how large these effects might be for a realistic range of observing conditions. On the other hand, the variations could also be put to use: we were looking at potential improvements to photo-z estimates by using the color information in the varying y-band filter.

Yeah, it would be great to do the exercise of taking a bunch of observed magnitudes and throughputs and seeing how well one can recover the underlying SED.
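
The forward model is just synthetic photometry through each per-visit throughput, so a minimal sketch of that exercise could look like the following (the `ab_mag`/`best_template` helpers are illustrative, not stack code, and the fit is just a chi-squared match to a grid of template SEDs with a free grey offset):

```python
# Sketch of the "recover the SED" exercise: synthesize AB magnitudes of
# candidate template SEDs through each per-visit throughput and pick the
# template that best matches the observed magnitudes.
import numpy as np

def ab_mag(wavelen_nm, flambda, throughput):
    """Photon-weighted AB magnitude of an SED (f_lambda in erg/s/cm^2/nm)
    through a throughput curve defined on the same wavelength grid (nm)."""
    c_nm_per_s = 2.998e17
    fnu = flambda * wavelen_nm**2 / c_nm_per_s              # erg/s/cm^2/Hz
    num = np.trapz(fnu * throughput / wavelen_nm, wavelen_nm)
    den = np.trapz(3631.0e-23 * throughput / wavelen_nm, wavelen_nm)  # AB zeropoint (3631 Jy)
    return -2.5 * np.log10(num / den)

def best_template(mags_obs, mag_err, throughputs, wavelen_nm, templates):
    """Chi-squared match of observed per-visit magnitudes against template SEDs,
    allowing a free normalisation (grey) offset per template."""
    best_name, best_chi2 = None, np.inf
    for name, flambda in templates.items():
        synth = np.array([ab_mag(wavelen_nm, flambda, s) for s in throughputs])
        offset = np.average(mags_obs - synth, weights=1.0 / mag_err**2)
        chi2 = np.sum(((mags_obs - synth - offset) / mag_err) ** 2)
        if chi2 < best_chi2:
            best_name, best_chi2 = name, chi2
    return best_name, best_chi2
```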

I know @ljones made a library of model atmospheres to see how well we would correct main-sequence stars. I’m not sure if they made it into a git repo though.

Hm, sorry - either I didn't get the email notification from this thread or I just missed it.

The French group made a large set of simulated atmospheres, but Tim Axelrod also made a series of atmospheric models that each vary a single input parameter (such as the amount of water vapor or ozone), which I used for the Level 2 Photometric Calibration document.
The input atmospheres are in a git repo:


in particular, see the files in

You may also find the "atmoComp.py" Python code useful.

and I know Zeljko has a student working on a nicer version of this that we need to put on GitHub pretty soon (it's mostly done, it just needs a little cleaning up).

Thanks!

It looks like the AtmoComp.py program calculates the MODTRAN output parametrisation that is in eqn 29 of the Photometric Calibration document.

So, has anyone ever worked out a very basic relation between the values that the coefficients of this equation might take and the time (i.e. time of year/hour of night) and perhaps conditions of an LSST observation? And if not, is making a crude relationship like this even possible? For example, perhaps I could vary only the water vapor (H2O) in a pseudo-realistic way, just based upon the rough size of its variations on an hourly and seasonal basis (e.g. looking at Fig 7 of the Photometric Calibration document as a guide). This might be enough to build up a picture of the effect of y-band variations on photo-z (using OpSim output to look at the distribution of dates of y-band visits).
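
For concreteness, something as crude as the sketch below is what I had in mind: a toy PWV model (seasonal sinusoid plus a nightly term and random scatter, with placeholder amplitudes to be tuned against Fig 7 or site monitoring data, not measured values), applied by rescaling the optical depth of a reference water-vapor transmission template. Real water bands saturate, so this would only be indicative of the size of the effect, not a substitute for MODTRAN.

```python
# Toy model only: vary precipitable water vapour (PWV) with season and hour,
# then rescale the optical depth of a reference H2O transmission template.
# All amplitudes are placeholders, not measured values for the LSST site.
import numpy as np

def toy_pwv(mjd, pwv_mean=3.0, seasonal_amp=1.5, nightly_amp=0.3, scatter=0.3, seed=42):
    """Crude PWV (mm) vs time: seasonal sinusoid + nightly term + random scatter.
    The phases are taken straight from the MJD, so they are not calendar-accurate."""
    rng = np.random.default_rng(seed)
    mjd = np.atleast_1d(mjd)
    phase_year = 2.0 * np.pi * (mjd % 365.25) / 365.25
    phase_night = 2.0 * np.pi * (mjd % 1.0)
    pwv = (pwv_mean
           + seasonal_amp * np.sin(phase_year)
           + nightly_amp * np.sin(phase_night)
           + rng.normal(0.0, scatter, size=mjd.shape))
    return np.clip(pwv, 0.5, None)

def scale_water_vapor(trans_h2o_ref, pwv, pwv_ref=3.0):
    """Rescale a reference H2O transmission curve to another PWV by scaling its
    optical depth; water bands saturate, so this is only a rough approximation."""
    tau_ref = -np.log(np.clip(trans_h2o_ref, 1.0e-6, 1.0))
    return np.exp(-tau_ref * pwv / pwv_ref)
```

Feeding in the y-band visit MJDs pulled from the OpSim output would then give a per-visit y throughput (hardware times rescaled atmosphere) to push the photo-z templates through.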

There is a ways to go here, but we have a student working towards that.