CCD Nonlinearity Near Saturation

We are attempting to characterize the behavior of LSST photometry around the saturation and nonlinearity limits in the different bands. Two properties we are currently trying to pin down are the expected full well depth of the LSST pixels and the linearity behavior - essentially how the gain changes with star brightness - along with the shape and size of the resulting bleed trails. I realize that information might not be known for sure, but I imagine there are basic specs for it. I have reviewed the existing documentation but cannot find that information. We have also been in touch with the software engineers behind PhoSim, and they do not have it either. Can someone point us to it?

Hi Josh, thanks so much for your inquiry.

I’ve found two relevant specifications for you:

- LCA-128 (“Technical Specification: LSST Back-Illuminated n-channel CCD with Extended Red Response”), requirement CCD-008, specifies that the “pixel charge capacity shall be no greater than 175,000 electrons … the point at which the detector response (volts out divided by mean per pixel integrated photo-generated charge) saturates when illuminated by a flat field”.
- LCA-57, the Science Raft specification, has a linearity specification for the full signal chain (CCD + readout): requirement C-SRFT-064 states “Response to flat illumination shall be linear to within 3% between 10000 and 90000 electrons per pixel.” (Note that this is relaxed from a similar specification in LCA-128.)
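As an illustration of what the C-SRFT-064 requirement constrains, a linearity check of this sort can be sketched from flat-field photon-transfer data: fit a line to mean signal vs. exposure time within the 10,000-90,000 e- band and look at the worst fractional residual. The arrays below are made-up illustrative numbers, not real LSST measurements:

```python
import numpy as np

# Hypothetical photon-transfer data: exposure times (s) and the mean
# per-pixel signal (electrons) measured from flat fields at each time.
exp_times = np.array([1, 2, 4, 8, 12, 16, 20, 24, 28])
mean_signal = np.array([3.5e3, 7.0e3, 1.40e4, 2.81e4, 4.19e4,
                        5.62e4, 6.95e4, 8.40e4, 9.60e4])

# C-SRFT-064 only constrains the response between 10,000 and 90,000 e-.
in_band = (mean_signal >= 1e4) & (mean_signal <= 9e4)

# Linear model: signal = slope * t + offset, fit over the spec band only.
slope, offset = np.polyfit(exp_times[in_band], mean_signal[in_band], 1)
predicted = slope * exp_times[in_band] + offset

# Maximum fractional deviation from the fit; the spec requires < 3%.
max_resid = np.max(np.abs(mean_signal[in_band] - predicted) / predicted)
print(f"max nonlinearity in 10k-90k e- band: {max_resid:.2%}")
```

Real analyses are more careful (pairs of flats to remove shutter/illumination drifts, per-amplifier fits), but the spec-band restriction above is the key point: nothing here constrains behavior between 90,000 e- and full well.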

I realize this only answers part of your question. An internal effort has been initiated to get you the data you need about the nonlinear response function. We expect this to be prepared in the next couple of weeks.

Attention @poconnor, @CStubbs, @sritz, and @cclaver.

Josh & Melissa,

I see you’ve called the attention of Camera folks to this question. Perhaps it's worth saying that the requirements bound the behavior in terms of full well and linearity, but don’t at all specify the linearity close to full well, nor the shape of the bleed trails. However, there is already plenty of data on nonlinearity up to full well, which could be reviewed profitably. As to the shape of the bleed trails, this is not part of our standard CCD or Raft lab testing, although there should be some data with bleed trails - and I think this could be studied if really necessary.

I’m sure Steve Ritz will respond in more detail for the Camera.

Aaron


Just replying so that I can keep an eye on this/help out if needed, as I know a small but non-zero amount about this stuff.


Hi Aaron, thanks for the info. When you say “there is already plenty of data on non-linearity, up to full well, which could be reviewed profitably”, could you tell us where to find that? There are a large number of documents and resources scattered across various web platforms, so it is hard for us to track it down. Can you post any associated links?

Ultimately we are trying to understand the range of magnitudes for which we can get some amount of LSST photometry and describe the relative photometric precision we can expect. We also want to see what saturated images should look like so that we can test approaches to halo photometry or PSF-reconstruction photometry.

I think it’d be helpful to describe what you’re planning. We’ll correct for non-linearity as well as it can be measured, so I think you’re asking about the measurement techniques for non-linearity, or what happens at saturation (buried channel or otherwise), or whether the physical full well is below the A/D (should be, but we’re not quite sure yet). Is that right?
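To illustrate the kind of correction being referred to ("we'll correct for non-linearity as well as it can be measured"): such corrections are commonly applied as a per-amplifier lookup table mapping measured counts to linearized counts. This is a generic sketch with invented numbers, not the actual LSST pipeline implementation:

```python
import numpy as np

# Illustrative per-amplifier linearity lookup table (raw ADU -> corrected
# ADU), as might be derived from flat-field ramps. Purely made-up values.
measured   = np.array([0.0, 2.0e4, 4.00e4, 6.00e4, 8.00e4, 1.00e5])
linearized = np.array([0.0, 2.0e4, 4.02e4, 6.08e4, 8.20e4, 1.05e5])

def correct_nonlinearity(raw_adu):
    """Linearize raw pixel values by interpolating the lookup table."""
    return np.interp(raw_adu, measured, linearized)

# Example: a pixel reading 70,000 ADU is mapped to its linearized value.
print(correct_nonlinearity(7.0e4))
```

The table is only as good as the flat-field measurements behind it, which is why the question of how well nonlinearity can be measured near full well matters for the bright end.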

Hi Robert. What we are ultimately trying to determine is what the photometric performance will be at the bright end. That will be important for cross-survey checking and calibration, as well as for certain science cases. So we want to get an idea of the photometric precision in each band as a function of magnitude, accounting for various saturation and nonlinearity effects. We also want to understand what sort of bleeding will occur, which will matter for crowded field photometry in bright regions.

We realize that the data teams are already working on aspects of this, so we’d like to learn what is known at this stage and what the expectations are going forward, so that we can communicate them to the science collaborations and plan better ourselves. Let me know if that addresses your questions.

I think it’s probably better to ask about the measurements you want to make (and how to test them!) rather than about detector characteristics. It also means that we can give you other related information (e.g. we’re thinking about doing shorter exposures in twilight to raise the bright limit) - something that I wouldn’t have passed along based on the original question. Another thing you need to worry about is the flat fielding and atmospheric absorption strategy once you have more than a few ×10⁴ photons (i.e. once Poisson noise drops below c. 0.5%).

I’m not at all trying to shut down discussion, and don’t want to give that impression. It’s a little hard to point you at precursor data: we don’t have such good measurements for the HSC video chain, we don’t have good monochromatic flats, and we don’t have a monitor telescope to back out the atmosphere. And we don’t have real bright point sources observed with LSST chips in a way that I’d believe – there is some lab data (including projected spots) that might be interesting.

OK, that does help clarify the issues involved. I have heard about possible plans for twilight observations, but I wasn’t sure how seriously that was being considered.

To keep our questions in a scientific perspective, we want to know the best estimates for photometric precision as a function of apparent magnitude in each of the bands. We want to know just how bright we can go in each band, either with the default pipeline or, if necessary, with custom pipelines for bright stars. The science goals behind this are partly to see what fraction of various stellar spectral types and variability classes can achieve a threshold photometric precision in LSST while also being observed by precursor surveys like Pan-STARRS, PTF/ZTF, etc. That will tell us what kinds of targets will have long-term lightcurves from multiple surveys, and what kinds of phenomena can be detected in the different surveys. Even if twilight observations are conducted, they will follow a different cadence and pattern than the universal cadence, and are therefore not currently simulated in the opSims - and we want to be able to use the opSims to model the lightcurves of these stars.

Although LSST is typically discussed as having a bright limit around magnitude 16, we expect that will vary by filter and will not really be a hard limit. So if it will be possible to obtain even 3% relative photometry on objects around magnitude 14 or 15, that is good to know.
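For a back-of-the-envelope sense of where a bright limit like this comes from, one can estimate the magnitude at which a star's peak pixel reaches the LCA-128 full-well spec. Everything except the 175,000 e- figure below (the zeropoint, exposure time, and peak-pixel flux fraction) is a hypothetical placeholder, not an official LSST value:

```python
import math

# Rough saturation-magnitude estimate: the magnitude at which a star's
# brightest pixel fills the full well. Illustrative assumptions only.
FULL_WELL_E = 175_000        # e-, LCA-128 CCD-008 pixel charge capacity
EXPTIME_S = 15.0             # assumed single-exposure time (s)
CENTRAL_PIX_FRAC = 0.2       # assumed flux fraction in the peak pixel
ZP_E_PER_S = 4.0e10          # assumed e-/s from a mag-0 star (hypothetical)

def saturation_mag(sky_e_per_pix=0.0):
    """Magnitude at which the peak pixel reaches full well."""
    # Electrons the star can deposit in the peak pixel before saturating:
    budget = FULL_WELL_E - sky_e_per_pix
    # Solve budget = frac * zp * 10**(-0.4 * m) * t for m.
    rate = budget / (CENTRAL_PIX_FRAC * ZP_E_PER_S * EXPTIME_S)
    return -2.5 * math.log10(rate)

print(f"saturation at ~mag {saturation_mag():.1f} (dark sky)")
```

Note the sensitivity to the assumptions: the limit shifts with seeing (which sets the peak-pixel fraction), sky level, and filter zeropoint, which is why a single "magnitude 16" number is soft rather than a hard cutoff.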

Josh,
The data I’m referring to is Camera bench data - i.e. almost entirely flat fields taken on either individual CCDs or full Rafts. All of this data is collected in an internal camera database - not in documents or on web sites that are publicly available.

One sensor was used on sky last year - @merlin is actually the person to ask about that - and that might be the best place right now to look at what bleed trails look like. We are planning to take some data with a grid of point sources, and could take it such that the points saturate, which would provide some of the information you seek. In addition, Tony Tyson has taken a great deal of ‘point source grid’ data, and that could be helpful to you.

Aaron

I have inquired about whether it’s possible to make this data public. I’ll post back once I know…

Also, it’s worth noting that sensors don’t necessarily behave the same with respect to saturation for flats as for stars. That said, IIRC there are actually some saturated stars on very high backgrounds from observations of the Eagle Nebula in that dataset, which could prove to be a nice tool for getting a handle on these differences.

Hi Josh, do you have access to the DESC Confluence? Here are some relevant MonoCam links:
https://confluence.slac.stanford.edu/display/LSSTDESC/MonoCam
https://confluence.slac.stanford.edu/display/LSSTDESC/Observed+objects

If you don’t have access, it should be easy to arrange.

The data is at NCSA and easily accessible, but note that not all exposures went through DM and were properly calibrated, so we should be careful… Why don’t you have a look at the links above, and we can discuss further.

Hi Andrei,

I do not have DESC Confluence access. Who do I contact to arrange that?