LSST Mask Planes to Euclid Flag Maps

Hello,
I am working for Euclid and am responsible for the LSST data processing within the Euclid infrastructure.
I was asked about the bit flags used in LSST, and I saw that you refer to mask plane numbers, whereas Euclid uses bit mask values in its documentation.
I guessed that the correspondence between the two is as follows:

BIT_PLANE    FLAGMAP_BIT    BIT_VALUE    PLANE_NAME
0            0x00000001     1            BAD
1            0x00000002     2            SAT
2            0x00000004     4            INTRP
3            0x00000008     8            CR
4            0x00000010     16           EDGE
5            0x00000020     32           DETECTED
6            0x00000040     64           DETECTED_NEGATIVE
7            0x00000080     128          SUSPECT

Can someone tell me whether this is correct?
Thank you for your help.

Rémi

The LSST Applications documentation page "How Mask Planes are handled in afw" gives some information. Basically, the mapping from plane name to bit (and hence to bit value) is not fixed; it can vary between image types (and even, in principle, from image to image). But it is always recorded with the image, so code can reliably use the names.
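
For illustration, here is a minimal sketch (assuming the LSST Science Pipelines are set up; the mask dimensions below are arbitrary) of how code can resolve plane names to bits and bit values at runtime rather than hard-coding a numbering:

```python
from lsst.afw.image import Mask

mask = Mask(100, 100)  # a new mask carries the default set of planes

# Name -> bit number, as recorded for this particular mask
for name, bit in mask.getMaskPlaneDict().items():
    print(f"{name:20s} bit {bit:2d}  value 0x{1 << bit:08x}")

# Bit value for one named plane, looked up by name at runtime
sat_value = mask.getPlaneBitMask("SAT")
```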

Dear Rémi,
I am not sure if the following helps, but I have been doing some validation tests of the mask planes from test data processing sets for the Rubin LSST DP0.2 campaign. I “borrowed” some code from Brant Robertson and Alex Drlica-Wagner’s Rubin LSST Stack club jupyter notebook AFW_Display_Demo.ipynb and found the following for Rubin DP0.2-processed calexp image masks:
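The gist, as a rough sketch rather than the notebook's code verbatim (the repo label, collection, and dataId below are illustrative assumptions for the Rubin Science Platform), is simply to read the mask plane dictionary off the calexp:

```python
from lsst.daf.butler import Butler

# Illustrative repo/collection/dataId; substitute whatever you are processing
butler = Butler("dp02", collections="2.2i/runs/DP0.2")
calexp = butler.get(
    "calexp",
    dataId={"instrument": "LSSTCam-imSim", "visit": 192350, "detector": 175},
)

# Name -> bit number recorded with this particular image
for name, bit in calexp.mask.getMaskPlaneDict().items():
    print(f"{name:20s} bit {bit:2d}  value {1 << bit}")
```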

As K-T Lim mentions above, the mapping from name to bit value is not fixed and can vary between image types; e.g., I suspect things may differ for the coadd images.
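
One way to check that suspicion, again only a sketch with assumed dataIds (the tract/patch/band and the deepCoadd_calexp dataset type below are illustrative), is to compare the two plane dictionaries directly:

```python
from lsst.daf.butler import Butler

butler = Butler("dp02", collections="2.2i/runs/DP0.2")

# Assumed dataIds, for illustration only
calexp_planes = butler.get(
    "calexp",
    dataId={"instrument": "LSSTCam-imSim", "visit": 192350, "detector": 175},
).mask.getMaskPlaneDict()
coadd_planes = butler.get(
    "deepCoadd_calexp",
    dataId={"tract": 4431, "patch": 17, "band": "i", "skymap": "DC2"},
).mask.getMaskPlaneDict()

print("calexp planes:", sorted(calexp_planes))
print("coadd planes :", sorted(coadd_planes))
print("only in coadd:", sorted(set(coadd_planes) - set(calexp_planes)))
```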

I hope this helps!