Pioreactors unresponsive to stirring/OD/self-test

I have been having issues lately with two of my Pioreactors: I try to start stirring and OD reading from the web interface, but the Pioreactor doesn’t turn them on. I tried to run the self-test, but it doesn’t start either. Is there something I can do to get these activities to start? I had these two working before, so it is strange that they don’t work now.

Hi @kradical, hm, are there any error messages that come up in the logs? Can you export the logs dataset for the experiment and send it to me (cam@pioreactor.com)?

Sometimes a power-cycle is enough, though.
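
If they’re headless, a power-cycle over SSH works too; something like this (the hostname here is just an example, substitute your unit’s name):

    ssh pioreactor@pr4.local 'sudo reboot'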


EDIT: a few more ideas. If you press their onboard button (on the HAT), does the blue LED light up? And if so, is a message displayed in your UI when the button is held down?
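
(The button press is published over MQTT, which is also how the UI hears about it. If the mosquitto clients happen to be installed on your leader, you can watch for it yourself; the broad topic filter below is an assumption that should catch it:)

    mosquitto_sub -v -t 'pioreactor/#'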

I power-cycled them, but when I try to export the logs I get the error “Server error occurred. Check logs.” for both, and there isn’t anything in the recent log events on the Overview page. I pressed the button on the HAT: one of them showed the name of the Pioreactor in the UI, but the other didn’t.

That’s annoying! Can you try the following:

  1. SSH into your leader, and show here the output of:

     tail /var/log/pioreactorui.log

  2. For the Pioreactor where the button isn’t working, can you SSH into that one and show here the output of (see the note below the list):

     ps x

Here is the one that works:

pioreactor@pr4:~ $ tail /var/log/pioreactorui.log
2024-03-15T15:58:01-0400 [pioreactorui] DEBUG .env={'DB_LOCATION': '/home/pioreactor/.pioreactor/storage/pioreactor.sqlite', 'DOT_PIOREACTOR': '/home/pioreactor/.pioreactor/', 'WWW': '/var/www/pioreactorui/', 'UI_LOG_LOCATION': '/var/log/pioreactorui.log'}
2024-03-15T15:58:01-0400 [pioreactorui] DEBUG Starting MQTT client
2024-03-15T16:04:51-0400 [pioreactorui] ERROR Yaml error in demo_stirring_example.yaml: Invalid enum value 'stirring' - at key in $.common[...]
2024-03-15T16:04:54-0400 [pioreactorui] ERROR Yaml error in demo_stirring_example.yaml: Invalid enum value 'stirring' - at key in $.common[...]
2024-03-15T16:04:58-0400 [pioreactorui] ERROR Yaml error in demo_stirring_example.yaml: Invalid enum value 'stirring' - at key in $.common[...]
2024-03-15T16:05:11-0400 [huey.consumer] INFO Executing pio run export_experiment_data --output /var/www/pioreactorui/static/exports/export_Yeast 0_10ml per hr_20240315160511.zip --tables logs --experiment Yeast 0.10ml per hr
2024-03-15T16:05:13-0400 [pioreactorui] ERROR Usage: pio run [OPTIONS] COMMAND [ARGS]...
Try 'pio run --help' for help.

Error: No such command 'export_experiment_data'.

Here is the one with the nonworking button:

pioreactor@pr3:~ $ ps x
PID TTY STAT TIME COMMAND
651 ? Ssl 0:09 /usr/bin/python3 /usr/local/bin/huey_consumer tasks.huey -n -b 1.0 -w 2 -f -C
658 ? Ssl 0:57 /usr/bin/python3 /usr/local/bin/pio run monitor
725 ? Ss 0:00 /lib/systemd/systemd --user
726 ? S 0:00 (sd-pam)
746 ? S 0:00 sshd: pioreactor@pts/0
747 pts/0 Ss 0:00 -bash
758 pts/0 R+ 0:00 ps x

Hm, let’s clean up pr4 first:

While ssh’d in:

rm .pioreactor/experiment_profiles/demo_stirring_example.yaml

It’s a bit strange that export_experiment_data doesn’t exist. Is this the leader of your Pioreactor cluster? Can you run pio cluster-status and show us the output?
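
In case cluster-status is also missing, it would help to confirm what’s actually installed on pr4; something like this should be safe to run (a sketch; command availability varies by release):

    pio version
    pio run --help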


For pr3, the monitor job is running, so the button should be working… what’s the output of pio logs -n 15?

pioreactor@pr4:~ $ rm .pioreactor/experiment_profiles/demo_stirring_example.yaml
rm: cannot remove '.pioreactor/experiment_profiles/demo_stirring_example.yaml': No such file or directory
pioreactor@pr4:~ $ pio cluster-status
Usage: pio [OPTIONS] COMMAND [ARGS]...
Try 'pio --help' for help.

Error: No such command 'cluster-status'.

pioreactor@pr3:~ $ pio logs -n 15
2024-03-15T15:57:53-0400 [monitor] DEBUG PWM power supply at ~5.20V.
2024-03-15T15:57:53-0400 [monitor] DEBUG Power status okay.
2024-03-15T15:57:53-0400 [monitor] DEBUG Disk space at 2%.
2024-03-15T15:57:53-0400 [monitor] DEBUG CPU usage at 50%.
2024-03-15T15:57:53-0400 [monitor] DEBUG Memory usage at 25%.
2024-03-15T15:57:53-0400 [monitor] DEBUG CPU temperature at 47 ℃.
2024-03-15T15:57:53-0400 [monitor] DEBUG Heating PCB temperature at 25 ℃.
2024-03-15T15:57:56-0400 [monitor] NOTICE pr3 is online and ready.
2024-03-15T15:58:00-0400 [monitor] INFO Ready.
2024-03-15T15:58:00-0400 [monitor] DEBUG monitor is blocking until disconnected.
2024-03-15T15:59:31-0400 [monitor] DEBUG Wrong state lost in broker - fixing by publishing ready
2024-03-15T15:59:36-0400 [monitor] DEBUG Sleeping.
2024-03-15T15:59:36-0400 [monitor] INFO Updated state from ready to sleeping.
2024-03-15T15:59:40-0400 [monitor] NOTICE pr3 is online and ready.
2024-03-15T15:59:44-0400 [monitor] INFO Ready.
2024-03-15T15:59:44-0400 [monitor] INFO Updated state from sleeping to ready.

Originally I had set up pr4 as part of another cluster with the pioreactor_leader_worker image, but it wasn’t recording OD measurements, so I deleted pr4 from that leader’s inventory. I did the same with pr3, so maybe that is what is causing the issue.

That’s likely. It looks like pr4 isn’t your cluster leader, at least according to the configuration on pr4. I think you have multiple Pioreactors running the leader software, and therefore multiple UIs, which is creating some confusion between the Pioreactors and users. We can try to fix it forward, but it may be messy and we’ll need to work together. Otherwise, how comfortable are you with the nuclear option (starting over)?
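
For context: each Pioreactor’s config.ini has a [cluster.topology] section that tells it who the leader is, and with multiple leaders those pointers can disagree. A worker’s config should look something like this (hostnames here are just examples):

    [cluster.topology]
    leader_hostname=pr1
    leader_address=pr1.local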

Aside: better cluster management and networking is our priority this year, so this will all be easier in a future release.


Yes, all of my Pioreactors have the leader software, with multiple UIs. I wanted multiple leaders so that it would be easier for other lab members to use one of the Pioreactors. I would be open to starting over.

So would I only need to reinstall the two having issues, or would I need to reinstall the two working reactors as well?

I think just the Pioreactors that are acting as workers: pr3 and pr4. Then, re-add them from your leader Pioreactor.
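
Roughly, the steps would be (a sketch; double-check against the current install docs): re-flash pr3 and pr4 with the worker image, and then from your leader run:

    pio add-pioreactor pr3
    pio add-pioreactor pr4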

By the way: we are working on supporting multiple experiments in a single cluster. There will be one leader, workers in the cluster can be assigned to different experiments, and all running experiments will show up in the UI. That will mean you can avoid setting up multiple leaders just to run multiple experiments.
