What are the default settings for ExtendedSource initialization in SCARLET

@pmelchior @fred3m I have a few questions about using SCARLET.
What are the default settings for ExtendedSource initialization? The API documentation says symmetric=True, monotonic=True, thresh=1.0.

Does symmetric=True mean DirectSymmetryConstraint with sigma=1?

Does monotonic=True mean DirectMonotonicityConstraint(use_nearest=False, exact=False, thresh=0) with thresh=1?

I also didn’t understand what thresh is.

And is MinimalConstraint always executed?

Yes, those are the correct default settings for ExtendedSource.

When symmetric=True then the initial morphology is perfectly symmetric (so yes, sigma=1).

For monotonicity, thresh sets a minimal gradient. By default thresh=0, which means that the morphology is allowed to be flat. When monotonic=True, thresh=0.1 in order to keep the initial footprint as small as possible, enforcing a slight gradient.
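To make thresh concrete, here is a toy numpy check (my own sketch, not scarlet's actual operator; I'm assuming "minimal gradient" means each pixel can be at most (1 - thresh) times its neighbor toward the peak):

```python
import numpy as np

def is_monotonic_1d(profile, thresh=0.0):
    """Check a 1-D radial profile (peak at index 0) for monotonicity.

    Illustrative only: with thresh > 0, each pixel must be at most
    (1 - thresh) times the previous pixel toward the peak, which rules
    out flat steps and forces a strictly decreasing profile.
    """
    profile = np.asarray(profile, dtype=float)
    return bool(np.all(profile[1:] <= (1.0 - thresh) * profile[:-1]))

flat = [1.0, 1.0, 1.0]        # flat footprint
decaying = [1.0, 0.8, 0.6]    # slight gradient

print(is_monotonic_1d(flat, thresh=0.0))      # True: flat is allowed
print(is_monotonic_1d(flat, thresh=0.1))      # False: a gradient is required
print(is_monotonic_1d(decaying, thresh=0.1))  # True
```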

The default constraints for ExtendedSource are

constraints = (sc.SimpleConstraint() &
               sc.DirectMonotonicityConstraint(use_nearest=False) &
               sc.DirectSymmetryConstraint())

SimpleConstraint is slightly more restrictive than MinimalConstraint, as it also uses scarlet.operators.prox_center_on, which prevents a source from having zero flux in its central pixel (which can cause the location of a source to drift off to infinity).
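For illustration, here is a toy sketch of what such a center-on projection could look like (hypothetical code, not scarlet.operators.prox_center_on itself):

```python
import numpy as np

def prox_center_on_sketch(morph, tiny=1e-10):
    """Toy sketch (not scarlet's actual implementation): force the
    central pixel of a morphology to have nonzero flux so the source
    position stays anchored instead of drifting away."""
    morph = np.array(morph, dtype=float)
    cy, cx = morph.shape[0] // 2, morph.shape[1] // 2
    if morph[cy, cx] <= 0:
        morph[cy, cx] = tiny
    return morph

m_fixed = prox_center_on_sketch(np.zeros((3, 3)))
print(m_fixed[1, 1] > 0)  # True: central pixel is now nonzero
```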

Thanks for clarifying that. I’m testing out scarlet to see how it works on the simulated parametric galaxies I have, so I’m trying to figure out the best ways of using it. The user guide is quite detailed and that’s great 🙂

I had a few more follow-up questions. Do you have a general sense of which situations call for the different constraints? Is there a way to know a priori which constraint gives the optimal solution? The same goes for initializing the source. The API page on sources has the line

See Sources for more on initializing and using sources.

But the link doesn’t seem to be working.

And the PSF Matching hasn’t been integrated in yet right?

I’ll fix the link in sources. It should be http://scarlet.readthedocs.io/en/latest/source.html, and PSF matching has been tested and should be merged to master later today or tomorrow.

As for constraints, we typically use the defaults for ExtendedSource. We’ve found that even extended spiral galaxies are symmetric to better than 2% (with the exception of well resolved spirals with multiple components), and testing on 10k simulated blends (with real galaxy morphologies) showed that perfect symmetry works better than softer symmetry requirements in general. This is most likely because the benefit of using symmetry to constrain flux in blended regions outweighs the errors introduced by any intrinsic asymmetries in the flux of an object.

Because science cases might require different constraints, we are currently testing different constraints for other situations, for example AGNs. If you have some sample blends that you’re trying to fit, you could send them to me and I’d be happy to do some testing to try and find an optimal set of parameters.

How do you get the errors on the deblended output? When I tried err = source.get_morph_error(images) on an image where scarlet was run with partial PSF deconvolution, I got a scipy error

~/.local/lib/python3.6/site-packages/scarlet-0.2.fd9e460-py3.6-linux-x86_64.egg/scarlet/source.py in get_morph_error(self, weights)
    440             Sigma_pix = scipy.sparse.diags(w.flatten(), 0)
    441             PA = [scipy.sparse.bmat([[self.sed[k,b] * self.Gamma[b]] for b in range(self.B)])
--> 442                     for k in range(self.K)]
    443             Sigma_s = [PAk.T.dot(Sigma_pix.dot(PAk)) for PAk in PA]
    444             me = [np.sqrt(np.diag(np.linalg.inv(Sigma_sk.toarray()))) for Sigma_sk in Sigma_s]

~/.local/lib/python3.6/site-packages/scarlet-0.2.fd9e460-py3.6-linux-x86_64.egg/scarlet/source.py in <listcomp>(.0)
    440             Sigma_pix = scipy.sparse.diags(w.flatten(), 0)
    441             PA = [scipy.sparse.bmat([[self.sed[k,b] * self.Gamma[b]] for b in range(self.B)])
--> 442                     for k in range(self.K)]
    443             Sigma_s = [PAk.T.dot(Sigma_pix.dot(PAk)) for PAk in PA]
    444             me = [np.sqrt(np.diag(np.linalg.inv(Sigma_sk.toarray()))) for Sigma_sk in Sigma_s]

~/.local/lib/python3.6/site-packages/scarlet-0.2.fd9e460-py3.6-linux-x86_64.egg/scarlet/source.py in <listcomp>(.0)
    439             import scipy.sparse
    440             Sigma_pix = scipy.sparse.diags(w.flatten(), 0)
--> 441             PA = [scipy.sparse.bmat([[self.sed[k,b] * self.Gamma[b]] for b in range(self.B)])
    442                     for k in range(self.K)]
    443             Sigma_s = [PAk.T.dot(Sigma_pix.dot(PAk)) for PAk in PA]

TypeError: unsupported operand type(s) for *: 'float' and 'LinearFilterChain'

Am I running something wrong, or is this what the documentation refers to in:

CAVEAT: If the source has a PSF, the inversion of the covariance matrix is likely instable.

Sorry @sowmyak, for some reason community didn’t notify me that there was a new post in this chain. Feel free to @fred3m me if you ask a question about SCARLET or its implementation in the LSST stack. I haven’t seen this before, @pmelchior have you? This might be something that slipped through the cracks when we removed sparse matrices from the rest of SCARLET.

Didn’t see that until now either. Is there a way to scan for scarlet-related topics here on community?

Anyway, this is a bug. Fred’s right, the offending code stems from the previous sparse matrix implementation. It is unrelated to the caveat of not working properly.

@sowmyak I would warn that these errors aren’t very good. They come from linear propagation assuming that there’s only one source. The problem with the PSF is that you’d perform a deconvolution of noise, which is not stable. I have an idea of how to get much better error estimates, but it’s not coded up yet. What do you need these errors for?

@pmelchior @fred3m I wanted to run the stack on scarlet model outputs directly to perform measurements, which needs the errors. There is no hurry to get this done immediately. I just wanted to be sure that it wasn’t me using scarlet incorrectly.

Yeah, you’re not doing this wrong. It’s on my todo list, and I think I know how to do it properly, but it’s untested yet. To be fair, there’s no published work (I know of) on getting error estimates from a proximal algorithm, so this requires some creative thinking.

The main question for you is: do you need the right errors or just some noise on the models?

My end goal is to compare the measurements (centroid, color, shape, flux) computed by stack (with the current deblender), to stack with scarlet deblended outputs. So this would require the right errors. I can leave this part on hold for now and come back to it later when you’ve included it.

I’ll also add that while it would clearly be ideal to propagate deblending uncertainties into the uncertainties of measurements on deblended pixels, it’s quite likely that it will be so difficult that LSST will not try to do that, at least in DR1 (beyond the linear propagation of pixel noise that @pmelchior already mentioned). So it’d be very useful to do these experiments with just empirical approaches to estimating the actual impact of deblending on measurement errors.

@jbosch @pmelchior @fred3m So, right now I have a blend image with Poisson noise, which I run through scarlet. For each object, I subtract the remaining scarlet deblended output models from the blend image, run the result through the stack, assuming that the noise is still Poisson. I’m not using scarlet-generated errors anywhere. Am I doing the right thing, or is there a better way of doing this?

Currently the measurement is done on a single-band image. In the future, when the measurement is done on different band images, I’d imagine this wouldn’t work, since the SED and morphology errors in the different bands would be correlated, right?

Are you regenerating the noise via a Poisson assumption after subtracting the Scarlet models, or just using the same noise realization that was created for the blended image?

I think I’d recommend the latter, on the principle that deblending doesn’t actually reduce the noise in the pixels. Most of the noise should be coming from the background rather than the sources themselves anyway, as I assume you’re not primarily concerned with very bright objects.
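As a toy illustration of that principle (made-up arrays standing in for scarlet’s per-child models, with perfect models so the bookkeeping is exact):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical per-child model images, NOT real scarlet outputs.
model_a = np.zeros((5, 5)); model_a[1, 1] = 10.0
model_b = np.zeros((5, 5)); model_b[3, 3] = 6.0
noise = rng.normal(0.0, 0.1, (5, 5))
blend = model_a + model_b + noise

# Measurement image for child A: subtract every *other* child's model,
# keeping the original noise realization (deblending does not actually
# reduce the pixel noise).
image_a = blend - model_b

# With perfect models, what remains after also removing child A's own
# model is exactly the original noise realization.
residual = image_a - model_a
print(np.allclose(residual, noise))  # True
```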

@jbosch scarlet does reduce the noise in the image if the model is subtracted from the given image. This is because it is flexible enough to pick up some of the noise.

@sowmyak I take it that you run cModel on the deblended children (or rather on the image after all but one child has been subtracted). But there is no need to run another measurement algorithm to get centroids, colors, and per-band fluxes: you can read them directly off the scarlet model. The purpose of the error propagation method that is currently coded up is to provide uncertainties for those direct measurements. However, if you really want to use cModel for a direct comparison, then I agree with Jim that you can simply subtract the scarlet models and ignore the reduction in noise. The measurement errors will then appear less noisy, but you wouldn’t inflate the errors by adding artificial noise.
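To sketch what reading measurements directly off a factorized model could look like (toy numpy arrays; sed and morph here are stand-ins for scarlet’s SED × morphology factorization, not its actual attributes):

```python
import numpy as np

# Hypothetical factorized source: model[b] = sed[b] * morph.
sed = np.array([0.2, 0.5, 0.3])               # per-band amplitudes
morph = np.zeros((5, 5))
morph[2, 2] = 4.0
morph[2, 3] = 1.0

# Per-band fluxes come straight from the factorization.
per_band_flux = sed * morph.sum()

# Centroid from first moments of the morphology.
total = morph.sum()
yy, xx = np.mgrid[:5, :5]
centroid = (float(np.sum(yy * morph) / total),
            float(np.sum(xx * morph) / total))

print(per_band_flux)  # [1.  2.5 1.5]
print(centroid)       # (2.0, 2.2)
```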

For shapes, the situation is a bit different because the scarlet models don’t need to be elliptical, so it depends on what convention you adopt for “shape”. Lorena Mezini and Erin Sheldon are looking into WL measurements with scarlet right now, using metacal.