To avoid using WHERE ... IN, I am attempting to use the strategy outlined in the notebook DP03_06_User_Uploaded_Tables. The notebook suggests a methodology in which astropy.table is used to create an ADQL-readable table from an ASCII file (CSV, TXT, etc.). Before building a query, the Table method is invoked and produces a Python variable called ut1 (2, 3, etc.). This variable is then used in the query as a table that is expected to be inside a folder called "TAP_UPLOAD". Since ut1 is a Python variable and not a file, it is not in such a folder, and there does not appear to be any code that creates such a folder or file. I scanned the astropy.table documentation quickly and can't see anything that would suggest this. Because ut1 is a Python object and not a file, the query is not able to locate it when the ADQL query is executed.
Hi Suber-
I think this is simply a misunderstanding. That notebook demonstrates reading an ASCII file into memory as an Astropy Table (the ASCII file is in the data/ directory under notebooks/tutorial-notebooks/). If you execute the cell that reads

```python
ut1 = Table.read(fnm1, format='ascii.basic')
```

and then type `print(ut1)`, you should see a table containing 8 rows of data. If you do not see that, make sure the line

```python
fnm1 = 'data/dp03_06_user_table_1.cat'
```

points to the correct path. For example, if you are working on a copy of the notebook that you have moved to a different location, then the data/ directory may not have moved with it. In that case, you can use the full path:

```python
fnm1 = '~/notebooks/tutorial-notebooks/data/dp03_06_user_table_1.cat'
```
What the notebook is doing in the query section is uploading this in-memory table as "TAP_UPLOAD.ut1" so it can be joined to the DiaSource table (TAP_UPLOAD is a virtual schema on the server side, not a folder on disk). Thus, if the table hasn't been read in correctly in the first place, there is nothing to upload, and you will get an error.
Let me know if that doesn’t get you unstuck!
@jeffcarlin Thanks to you, I am unstuck. I got a match between a Gaia-identified exoplanet candidate and a DC2 source, based on the RA and Dec. I'm happy!
Glad to hear!
Note that you shouldn’t expect to get matches between real objects in the sky (i.e., from Gaia) and DC2/DP0 sources. DC2 is simulated, so does not correspond to the actual stars in the sky.
Yes. I’m just trying to get some ideas together on how to attack the issues once the first real data is released.
Thanks again for your help.
Note that there are a number of features in the works, including a temporary upload feature for Qserv that will work for DP0.2 just like the DP0.3 one does (DP0.3 data is stored in PostgreSQL, not Qserv), and also persistent user table upload. The former will be available for DP1 (when the first real data is released), and the latter will come after that.
In case it helps with your planning.