Kernel dies when loading several maps

Hi,

Sorry for posting this; I know a similar issue has been covered, but this one is slightly different. I’m trying to create my own notebook based on the 03c_Survey_Property_Maps tutorial notebook, but when I try to load additional maps, or do some operations with the maps already loaded, I keep getting the error:
The kernel for notebooks/my_notebooks/Untitled.ipynb appears to have died. It will restart automatically. (Untitled.ipynb is just a copy of the original notebook.)
When I log in, I request the maximum amount of RAM. Is there anything else I could do?

Thank you!

Hi Martin!

I am the Community Engagement Team forum watcher this week. If it is OK with you, could you put your notebook into /scratch/mrmonroy on your RSP server instance on https://data.lsst.cloud? That way, I can copy it and see if I can get it to run on my own RSP server instance.

Thanks!

Best regards,
Douglas

Hi Douglas!

Thank you so much! I have copied it there; it is named Untitled.ipynb (sorry for the mess; it is still a prototype and there is a lot to clean up).

Best regards,
Martín

Got it and started looking! Is there any particular code cell causing the problem? Thanks!

Great!
It seems that the line
hsp_bins = hsp.HealSparseMap.make_empty(hspmap.nside_coverage, hspmap.nside_sparse, dtype=np.float64)
almost at the end is the one where it always crashes, but I have had the same problem on other lines before, after logging out and in several times.
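
For context, here is roughly what that part of the notebook does (a simplified sketch, not the exact code; the nside values below are only illustrative, and the maps I actually load are at much higher resolution):

import numpy as np
import healsparse as hsp

# Stand-in for a survey property map loaded earlier in the notebook.
hspmap = hsp.HealSparseMap.make_empty(nside_coverage=32, nside_sparse=4096, dtype=np.float64)

# The line that crashes: create a new empty map at the same resolution.
hsp_bins = hsp.HealSparseMap.make_empty(hspmap.nside_coverage, hspmap.nside_sparse, dtype=np.float64)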

Hmm… looks like it crashed for me (with the same error you reported) at this cell:

ngal_vals, mask, gal_pixels = cat2map(sel_ra, sel_dec, nside_deg)

Yes, that line has given me the same problem too. Actually, the notebook seems to crash more often after calling functions I defined myself.
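
For reference, cat2map is a helper of my own; a simplified stand-in for what it does (binning catalog RA/Dec into a full-sky HEALPix galaxy-count map) would look something like this, though the real version in the notebook differs in detail:

import numpy as np
import healpy as hp

def cat2map(ra, dec, nside):
    # Simplified stand-in: bin catalog RA/Dec (degrees) into a
    # full-sky HEALPix count map (nest ordering).
    npix = hp.nside2npix(nside)
    pix = hp.ang2pix(nside, ra, dec, lonlat=True, nest=True)
    ngal = np.bincount(pix, minlength=npix).astype(np.float64)
    mask = ngal > 0
    return ngal, mask, np.where(mask)[0]

Note the memory cost of the full-sky array: at nside=4096 it already holds 12 * 4096**2 float64 pixels, about 1.6 GB, before anything else is allocated.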

OK, I have run the notebook several times now, and it now seems to crash consistently at this line:

hsp_bins.update_values_pix(valid_pix, sky_bins, operation='replace')

Between runs, I log out entirely. When I log back in, I choose the “Recommended (Weekly 2022_40)” server image and the Large (4.0 CPU, 12288M RAM) container, and I click on the “Reset user environment: relocate .cache, .jupyter, and .local” server option.

I wonder what the issue is with that one line of code.

Let me look a little further…

OK, I think I have found a way to solve the issue. I have just realized that I forgot to degrade the maps, so I was loading and working with them at a really high nside resolution. I think that was causing the memory problems, especially when later creating a map at the same resolution. It appears to be working much better now.
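
In code terms, the fix is just to put the degrade step back in before doing anything else with the map (a minimal sketch; the nside values are illustrative, and degrade averages the child pixels by default):

import numpy as np
import healsparse as hsp

# Stand-in high-resolution map (illustrative nside values).
hspmap = hsp.HealSparseMap.make_empty(32, 16384, np.float64)
hspmap.update_values_pix(np.arange(10000), np.ones(10000))

# The forgotten step: degrade to a manageable resolution first.
hspmap = hspmap.degrade(nside_out=512)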

Thank you so much for your help Douglas!

Ah, OK. I think this matches my thoughts, in that it is some sort of memory issue.

When I try similar code from the healsparse documentation, I have no problem, but I think the number of bins, etc., in the original cell might be too large.
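
Here is the kind of minimal test I mean (a sketch adapted from the healsparse quickstart; the pixel counts and nside values are made up):

import numpy as np
import healsparse as hsp

# Minimal end-to-end test: build an empty map and fill a block of pixels.
test_map = hsp.HealSparseMap.make_empty(nside_coverage=32, nside_sparse=4096, dtype=np.float64)
pixels = np.arange(20000)                    # nest-ordered pixel indices
values = np.random.normal(size=pixels.size)  # float64, matching the map dtype
test_map.update_values_pix(pixels, values, operation='replace')
print(test_map.valid_pixels.size)  # 20000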

(If useful, I have temporarily placed the version of the notebook I had been playing with in the scratch area of data.lsst.cloud, here: /scratch/douglasleetucker/Untitled_DLT.ipynb)

In any case, it sounds like you solved the problem.

Best wishes, Martín, and have a good rest of the day!

Exactly, at some point I forgot to add the degrading line. Yes, it looks like it is solved! I’ll have a look at your notebook; thanks a lot for it and for your help!

Best wishes and have a nice day!