Export Data: Workers

Hi everyone,

I am trying to export the data on a worker Pioreactor by creating an `export_data.sh` file, following the instructions in Export and import your data > Export your data. This works for leader Pioreactors, but on workers I get errors that lighttpd.service and huey.service are not loaded, along with "No such command 'backup_database'", as shown in the error log below:

Error Log:
+ sudo systemctl stop lighttpd.service
Failed to stop lighttpd.service: Unit lighttpd.service not loaded.
+ true
+ sudo systemctl stop huey.service
Failed to stop huey.service: Unit huey.service not loaded.
+ true
+ sqlite3 /home/pioreactor/.pioreactor/storage/pioreactor.sqlite 'PRAGMA wal_checkpoint;'
0|-1|-1
++ sqlite3 /home/pioreactor/.pioreactor/storage/pioreactor.sqlite 'PRAGMA quick_check;'
+ DB_CHECK=ok
+ [[ ok != \o\k ]]
+ pio run backup_database --force
Usage: pio run [OPTIONS] COMMAND [ARGS]...
Try 'pio run --help' for help.

Error: No such command 'backup_database'.
+ pio run backup_database
Usage: pio run [OPTIONS] COMMAND [ARGS]...
Try 'pio run --help' for help.

Has anyone encountered this issue, or am I missing a step I should have taken?

In advance, thanks for your help!

Hi @sharknaro,

This might be a problem in our docs. The export_data.sh script is normally only used on the leader to export the core Pioreactor database. The workers don’t have a database file.

However, I'm guessing you are interested in exporting the calibrations from the workers? This is possible using a different method - and now that I think about it, it should be doable with a simple script, too. Let me write one and I'll share it here soon.
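In the meantime, here's a rough sketch of what I have in mind, run from your own computer. It simply copies the worker's local storage folder, which is where I'm assuming the calibration data lives (the error log above shows that folder at /home/pioreactor/.pioreactor/storage/); the hostname pio2.local and the pioreactor username are placeholders you'd adjust for your setup:

    #!/bin/bash
    # Sketch: copy a worker's local storage folder (assumed to hold the calibration
    # data) to the machine running this script.
    # Assumptions: the worker is reachable as pio2.local and uses the default
    # "pioreactor" user - change both to match your cluster.
    set -e

    WORKER=pio2.local
    BACKUP_DIR=./worker_storage_backup

    mkdir -p "$BACKUP_DIR"

    # Copy the worker's storage directory over SSH.
    scp -r pioreactor@"$WORKER":/home/pioreactor/.pioreactor/storage/ "$BACKUP_DIR"/

    echo "Copied worker storage to $BACKUP_DIR"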


Hi Cameron!

Yes, I want to move the worker to a different cluster, so I have to re-flash the SD card, and I did not want to lose the calibration since it is going to be the same Pioreactor, just in a different cluster.

In the docs, I can see that the 5th step says to do this for the workers to get the calibration data, hence I tried to implement the provided script.

Looking forward to hearing from you. Thanks for your response and time!

Yes, I want to move the worker to a different cluster, so I have to re-flash the SD card, and I did not want to lose the calibration since it is going to be the same Pioreactor, just in a different cluster.

You don't need to re-flash to change the worker's cluster. The other cluster's leader only needs to add the worker to its cluster (using the leader's UI, or on the leader's command line with pio add-pioreactor <workername>).

Does that solve your problem, or do you still want a worker export script?


EDIT: if you do this, also remove the worker from the original cluster by deleting its associated line in the config.ini:

[cluster.inventory]
workername=1
originalleader=1
...

to

[cluster.inventory]
originalleader=1
...

Considering the limited information I provided, I can understand your logic here, so sorry for not being explicit. In my case, I have two clusters next to each other on different SSIDs (let's call them c1 and c2). Hence, for the worker from c1 to be recognized by the c2 leader, I need to flash the card with the SSID and password of the c2 network, right? However, if I am wrong and the worker can be transferred without flashing, I would appreciate it if you could elaborate on that, Cameron!

Yes, I was already planning to execute this step once I exported the data, but thank you for the advice Cameron!

There’s a way to change the connected network of a Pioreactor - not a problem. Here’s how:

  1. SSH into the worker you wish to change

  2. First, in case things go sour in the next step, copy-paste the output of the following onto your computer.

    pio run pump_calibration display
    
    pio run od_calibration display
    

    This is the calibration data we wish to preserve. If things go wrong, we still have the data!

  3. Note that after you enter the next command, your SSH session will be disconnected from the Pioreactor (since the Pi is switching to a different network than you're on). Enter:

    sudo nmcli dev wifi connect <wifi-ssid-of-c2> password "<network-password-of-c2>"
    
  4. That should be it - once you connect to wifi c2 yourself, you should be able to reach the worker again.
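If you want to double-check the switch before moving on, something like the following should confirm it. The ping is run from a computer on the c2 network (substitute your worker's name), and the nmcli check is run on the worker itself:

    # From a computer on c2, check the worker responds on its new network:
    ping -c 3 <workername>.local

    # Or, back on the worker, list which connection is currently active:
    nmcli -t -f NAME,DEVICE connection show --active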


Thank you very much for the information Cameron, I was not aware of this possibility! I will try it as soon as possible.

Kind regards,
Baris Kara

Hi again Cameron,

I went through the steps you suggested and it worked like a charm!.. until the worker was added to the next cluster. Once in the new cluster, I get an error syncing configs to pio2, etc.

Currently, I am trying to figure out the reason. However, if you know the potential cause and have some free time, your help would be appreciated!

In advance, thanks for the help!

EDIT: I tried to discover the workers on the command line with `pio discover-workers`. However, pio2 did not get discovered by the leader of the new cluster.

Hi @sharknaro, is pio2 the worker you just added to the new cluster? Is there an error that is displayed?

Adding Pioreactors is idempotent (i.e. you can do it over and over again without harm): you could try re-adding the worker and see if that resolves it, too.
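For reference, re-adding is just the same command again on the leader (replace pio2 with your worker's name):

    # Run on the leader; safe to repeat since adding is idempotent.
    pio add-pioreactor pio2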

Yes, that is correct. I added pio2 and pio3

No error was displayed after the addition. However, when I changed the position of pio2 from bottom to top in the config.ini cluster section, I got the error syncing configs.

I am trying to re-add the Pioreactors; however, I assume because they are already added to the cluster, I get the error "not found on network" after 195 secs.

hm hm hm, sounds like pio2 isn’t on the network…

Oh! I know. In step 3 above, when you added the wifi credentials, everything was fine and it connected to c2. However, during “add-pioreactor”, the worker will reboot. I am guessing upon starting up again, it reconnected to its old network, c1. This is because, given two available wifi networks c1 and c2, it will choose the one with higher priority (probably c1).

Can you check to see if pio2 is back on c1 (check with a ping pio2.local after reconnecting to c1)?

If so, SSH into it, and let’s reassign priority:

nmcli connection modify <wifi-ssid-of-c2> connection.autoconnect-priority 100

Then you can reboot pio2 (sudo reboot), and I think that will get pio2 responsive to the new cluster.
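If you want to verify the priorities before rebooting, nmcli can list them (run this on the worker; the higher number wins when both networks are in range):

    # Show each saved connection with its autoconnect priority.
    nmcli -f NAME,AUTOCONNECT-PRIORITY connection show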


This was a shot at the bullseye!

When I SSHed into the leader of the old cluster and typed ping pio2.local, I received a series of lines indicating successful replies, along with information about the round-trip time. So you were correct.

After reassigning the priority and rebooting, the Pioreactor was synchronized with the config.ini of the new cluster.

Thank you very much Cameron!

SMALL DETAIL: Remember to include sudo to have the right privileges, i.e.:

sudo nmcli connection modify <wifi-ssid-of-c2> connection.autoconnect-priority 100