Issues when trying to start a super transaction

Hi Qserv developers,

Just looking for a bit of advice on an issue I am facing when trying to start a new super transaction.
I have tried deleting and recreating the database and tables a few times now without any issues, but when starting the super transaction I receive a MySQL syntax error:

(lsst-scipipe-ceb6bb6) [qserv@master partitioned]$ curl http://localhost:25080/ingest/trans -X POST -H "Content-Type: application/json" -d'{"database":"UKIDSSDR8_gpsSource","auth_key":""}'
{"error":"operation failed due to: Connection[2]::execute(_inTransaction=1)  mysql_real_query failed, query: 'SELECT DISTINCT `chunk` FROM `replica`  WHERE `database`='UKIDSSDR8_gpsSource'   AND `worker` IN () ORDER BY `chunk`', error: You have an error in your SQL syntax; check the manual that corresponds to your MariaDB server version for the right syntax to use near ') ORDER BY `chunk`' at line 1, errno: 1064","error_ext":{},"success":0}(lsst-scipipe-ceb6bb6)

I have checked this with a few databases but keep getting the same error.
I am not sure if something has become confused after re-trying this a few times, or if there is something I am not doing correctly.
(Creating the database and tables both return "success":1.)

Running qserv-status.sh on all 3 containers shows everything is running successfully.

Any help or advice would be appreciated.

Many thanks,
Craig

Are you sure you did everything in the replication system configuration setup correctly? Or maybe this link is a better reference point.

(Also: Community’s formatting is getting confused, at least for me, by the embedded “`” characters in the SQL query above, but it does look like it is generated correctly with the exception of the empty worker list.)
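
To illustrate the point: an empty IN () list is simply invalid SQL in MariaDB, so an empty worker list in the Replication system's configuration would produce exactly this failure. A minimal reproduction against any MariaDB server:

# The generated query fails for the same reason this one does:
# an empty IN () list is a parse error (errno 1064) in MariaDB.
mysql -e "SELECT 1 FROM DUAL WHERE 1 IN ();"
# ERROR 1064 (42000): You have an error in your SQL syntax ... near ')' at line 1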

Thanks for getting back to me,

This Qserv setup has been running for a while, so everything had been working correctly.
There are already a few existing databases which I am able to query without any issues.

Is there any way of checking what the worker list currently is, or how to re-add them if they are now missing?

Thank you,
Craig

Did some more digging.

According to the qservw_worker database on each worker node, the workers appear to be named correctly (the queries below were run on worker1 and worker2 respectively).

MariaDB [qservw_worker]> select * from Id;
+---------+------+---------------------+
| id      | type | created             |
+---------+------+---------------------+
| worker1 | UUID | 2020-06-20 06:34:31 |
+---------+------+---------------------+
MariaDB [qservw_worker]> select * from Id;
+---------+------+---------------------+
| id      | type | created             |
+---------+------+---------------------+
| worker2 | UUID | 2020-06-20 06:34:31 |
+---------+------+---------------------+

However, I did notice that when querying some other tables I get the following error:

MariaDB [qservMeta]> select * from UKIDSSDR8_dxsDetection__dxsDetection LIMIT 1;
ERROR 4110 (Proxy): Query processing error: QI=?: Failed to instantiate query: NoSuchTable(css):Table 'qservMeta.UKIDSSDR8_dxsDetection__dxsDetection' does not exist. [in function getMatchTableParams at core/modules/css/CssAccess.cc:450]

It looks like the metadata for some, though not all, tables exists, but the actual data within those tables doesn't.
Not sure if this is related but just something I noticed.
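
One other thing I tried, purely as a guess, was to peek at the CSS metadata directly. I am assuming here that the MySQL-backed CSS keeps its key/value pairs in a qservCssData database with a kvData table (both names are my assumption rather than something from the docs):

# Look for CSS entries mentioning the missing table; the
# qservCssData.kvData names are an assumption on my part.
mysql -e "SELECT kvKey FROM qservCssData.kvData WHERE kvKey LIKE '%dxsDetection%';"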

While looking over the install steps, I saw that in the run-multinode-tests.sh script [1] the CREATE NODE worker${i} statements appear to be piped into a qserv-admin.py script.
I have tried doing this manually, but I am unable to locate the qserv-admin.py script anywhere.

for i in $(seq 1 "$NB_WORKERS");
do
    CSS_INFO="${CSS_INFO}CREATE NODE worker${i} type=worker port=5012 host=worker${i}.$DNS_DOMAIN;
"
done
docker exec "$MASTER" bash -c ". /qserv/stack/loadLSST.bash && \
    setup qserv_distrib -t qserv-dev && \
    echo \"$CSS_INFO\" | qserv-admin.py && \
    qserv-test-integration.py"
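
For reference, expanding that loop by hand for two workers (with, say, DNS_DOMAIN=example.com, purely as an illustration) would feed qserv-admin.py statements like these:

CREATE NODE worker1 type=worker port=5012 host=worker1.example.com;
CREATE NODE worker2 type=worker port=5012 host=worker2.example.com;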

Do the worker nodes need to be defined anywhere else apart from the qservw_worker databases and the /etc/hosts file?

[1] - https://lsst-uk.atlassian.net/wiki/spaces/LUSC/pages/1424752661/Setup+a+containerised+multi-node+Qserv+instance+with+a+data+replication+system

Many thanks,
Craig

Dear Craig,

It looks like you’re trying to mix the old implementation of the Qserv Ingest with the new one. The new system has a different API and different tools. The new Ingest System is an integral part of the Qserv Replication System. Hence, the first step would be to set up and configure the Replication system.

Could you, please, have a look at a simple example of installing and testing both for the “mono”-node Qserv setup? I’m sure this example could easily be extended to cover the “multi”-node scenario as well.
https://confluence.lsstcorp.org/pages/viewpage.action?pageId=140281270
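
As a quick sanity check of the worker list, you could also look directly at the Replication system's configuration database. Here is a sketch, assuming a default setup in which the configuration lives in a MySQL database named qservReplica with a config_worker table (both names, and the columns below, may differ in your deployment):

# List the workers known to the Replication system's configuration.
# The qservReplica/config_worker names and the column names are
# assumptions based on a default setup; adjust host and credentials.
mysql -h <config-db-host> -u <user> -p \
    -e "SELECT name, is_enabled, is_read_only FROM qservReplica.config_worker;"

An empty result there would explain the empty IN () list in the failed query.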

By the way, the link to the Wiki page in your message won’t work for me. Apparently, the page requires an account at your organization.

If you need further assistance on this subject, and if you have an account on LSST Slack, then you may contact me directly via the private LSST Slack channel @iagaponenko or by sending a question to the Qserv support channel #dm-qserv.

Regards,
Igor Gaponenko
LSST DM Databases and Data Access Team

Forgot to mention a link to the User documentation for the new Ingest system: https://confluence.lsstcorp.org/pages/viewpage.action?pageId=133333850

@gapon
Thank you.

My process in a nutshell is as follows.

Create the database:

curl http://localhost:25080/ingest/database -X POST -H "Content-Type: application/json" -d@db_dxsSource.json

Create the table:

curl http://localhost:25080/ingest/table -X POST -H "Content-Type: application/json" -d@dxsSource.json
(Both of which return "success":1.)

Then start the super-transaction:

curl http://localhost:25080/ingest/trans -X POST -H "Content-Type: application/json" -d'{"database":"UKIDSSDR8_dxsSource","auth_key":""}'
(This one throws the MySQL syntax error.)

These all appear to match the commands in your links.
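
In case the payload files matter, db_dxsSource.json is along these lines. The field names follow the ingest API documentation as I understand it, and the values here are illustrative rather than my exact ones:

{"database":"UKIDSSDR8_dxsSource",
 "num_stripes":340,
 "num_sub_stripes":3,
 "overlap":0.01667,
 "auth_key":""}

The table file dxsSource.json is similar, with the table name and its schema.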

It would be great to chat with you and others on Slack.
Would you be able to let me know how to gain access to the channel (I only have access to the UK one)? My email is c.manzi@epcc.ed.ac.uk.

Many thanks,
Craig