How are people using LLMs/AI with their Pioreactors?

Interested in getting a thread going where people can share what they’ve learned about using LLMs to talk to, and control, their Pioreactors. There is an MCP server for the Pioreactor, but I’ve actually found that the most flexible approach (probably not the safest, though - see The Normalization of Deviance in AI) is just to run Claude Code on my local machine and tell it to “ssh into pioreactor@ip”. It works incredibly well, and I basically use it for most things now, like “can you update my pH control script so that it’s more robust to cases where the pH spikes up for a couple of seconds, as that’s probably sensor error” and “can you write me a quick script to run a pH calibration”.
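For the spike-robustness request above, the kind of change Claude typically writes is a rolling-median filter on the raw readings. A minimal sketch of that idea - the class, window size, and test values here are invented for illustration, not part of the Pioreactor API:

```python
from collections import deque
from statistics import median

class SpikeRobustPH:
    """Smooth transient pH sensor spikes with a rolling median.

    A one- or two-sample spike (e.g. an electrical glitch) cannot move
    the median of a 5-sample window, so control logic that reacts to
    the filtered value simply never sees it.
    """

    def __init__(self, window: int = 5):
        self.readings = deque(maxlen=window)

    def update(self, raw_ph: float) -> float:
        self.readings.append(raw_ph)
        return median(self.readings)

filt = SpikeRobustPH(window=5)
for raw in [7.0, 7.0, 7.1, 9.8, 7.1]:  # 9.8 is a transient spike
    smoothed = filt.update(raw)       # smoothed value stays near 7
```

The nice property is that a genuine, sustained pH shift still passes through after a few samples, while a one-off glitch is discarded entirely.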

Next I’m thinking of turning this into a skill that automates a lot of the boilerplate, and I’m happy to share it so that people can iterate - but if there are lessons to be learned, I’m interested in hearing them!

Bonus: Bruce Li from TJX has lots of posts on LinkedIn about their AI implementation (https://www.linkedin.com/in/multiphaseflow/)

  • I use claude code/local harness (or similar) and ssh into the pi
  • I run the LLM harness (or similar) on the pi
  • I use the MCP server
  • I don’t use LLMs
  • I hate AI, why do we need it for bioreactors?
  • I copy and paste code that an LLM writes somewhere else
  • Other - Comment

Hey Noah, have you ever tried giving control to Claude in turbidostat mode? I’m a bit worried about overflow, but I’d like to see how good it is if I ask it to optimize functions like the growth rate using, for example, two carbon sources.

I tend not to let an LLM have direct control; it’s more that it can set the parameters for jobs, check they’re running OK, and fix them if not. In the case of turbidostat mode, I imagine it’s coded to prevent overflow from happening, so as long as Claude is just modifying the job, things should be OK!
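The “check they’re running OK and fix them if not” loop is simple to express in code. A minimal sketch - the `supervise` function and the `is_running`/`restart` callables are stand-ins I’ve invented, not the Pioreactor job API:

```python
# Sketch of "check the job is running, fix it if not" supervision.
# is_running/restart are placeholder callables an agent (or cron job)
# would wire to real job-status checks.
def supervise(job_name, is_running, restart, max_attempts=3):
    """Restart a job up to max_attempts times; report what happened."""
    for _attempt in range(max_attempts):
        if is_running(job_name):
            return f"{job_name}: ok"
        restart(job_name)
    return f"{job_name}: still down after {max_attempts} restarts"

# Toy demo: the job comes back after one restart.
state = {"up": False}
status = supervise(
    "turbidostat",
    is_running=lambda name: state["up"],
    restart=lambda name: state.__setitem__("up", True),
)
```

The point is that the LLM only ever touches the narrow `restart`/parameter surface, never the actuators directly.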

This is really cool, thanks for sharing. Do you then just push to your GitHub fork from the pio over ssh?

My workflow was always the opposite: I use code LLMs to make modifications to my fork on my local computer, then update the pio after I push.

Also, for data analysis I export the DB onto my local computer and do any analysis there. My analysis is usually Python written by Claude to do statistics, with matplotlib to make the plots. For obvious reasons I wouldn’t run the analysis scripts on the RPi. It is capable, but you’d have to move the resulting plot images to your computer to view them anyway, so you might as well move the DB and do the analysis locally.
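The export-then-analyze step is just sqlite plus the usual stats stack. A runnable sketch of the pattern - note the table name, column names, and sample rows below are invented so the example is self-contained; with a real export you’d open the exported file instead of `:memory:`:

```python
import sqlite3
from statistics import mean, stdev

# Stand-in for the exported database: build a tiny table in memory so
# this sketch runs anywhere. With a real export you'd connect to the
# exported file instead (the schema here is illustrative, not the
# actual Pioreactor export schema).
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE od_readings (timestamp TEXT, od REAL)")
con.executemany(
    "INSERT INTO od_readings VALUES (?, ?)",
    [("2024-01-01T00:00", 0.10),
     ("2024-01-01T01:00", 0.14),
     ("2024-01-01T02:00", 0.19)],
)

ods = [row[0] for row in
       con.execute("SELECT od FROM od_readings ORDER BY timestamp")]
summary = {"n": len(ods),
           "mean": round(mean(ods), 3),
           "sd": round(stdev(ods), 3)}
```

From `summary` (or a pandas DataFrame built the same way), the matplotlib plotting is the easy part, and it all stays on the local machine.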

I wouldn’t touch the MCP server for safety reasons, as you said. Do you think there’s a downside to my workflow that I’m overlooking?

> This is really cool, thanks for sharing. Do you then just push to your GitHub fork from the pio over ssh?
>
> My workflow was always the opposite: I use code LLMs to make modifications to my fork on my local computer, then update the pio after I push.

I tend to get Claude to write the changes to a git repo on my local machine, commit and push to the remote, then clone and pio plugins install on the Pioreactor. Sounds quite similar.

> Also, for data analysis I export the DB onto my local computer and do any analysis there. My analysis is usually Python written by Claude to do statistics, with matplotlib to make the plots. For obvious reasons I wouldn’t run the analysis scripts on the RPi. It is capable, but you’d have to move the resulting plot images to your computer to view them anyway, so you might as well move the DB and do the analysis locally.

Yes, I don’t do any analysis on the Pi. We’re actually thinking about how to auto-export to something like netCDF on cloud storage, and then have automated cloud run functions/notebooks etc. that run the analysis. But yes, Python! I’m thinking about maybe some Julia at some point for more digital-twin/Bayesian stuff.

> I wouldn’t touch the MCP server for safety reasons, as you said. Do you think there’s a downside to my workflow that I’m overlooking?

Oh no, the MCP server is a much “safer” way to work than giving the LLM total access to the filesystem of the Pi. Right now I know my workflow is substandard; the eventual goal is to have everything be a plugin, where plugins can provide MCP tools, plus a skill that tells the agent how to use the MCP tools and how to access a knowledge base on the Pi - a bunch of .md files based on what we’ve learned about bioreactor operation. It’s much better for the LLM to be like “ah, I have a tool here called ‘set pH control level’” rather than have it go “I guess I better add base because the pH is a bit too low” :skull:
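The difference is easy to see in code. Here’s a plain-Python sketch of the bounded-tool idea (not the actual MCP server - the function name and safety limits are invented): an MCP tool wrapping something like this gives the agent “set pH target” and nothing lower-level, so it can never reason its way into dosing base directly.

```python
# Sketch: expose a *bounded* setpoint action instead of raw control.
# The safe window below is invented for illustration; a real plugin
# would pull limits from its config.
PH_MIN, PH_MAX = 5.0, 8.5  # assumed safe operating window

def set_ph_target(target: float) -> float:
    """Validate a requested pH setpoint against the safe window and
    return the value actually applied, so the agent sees the outcome."""
    if not (PH_MIN <= target <= PH_MAX):
        raise ValueError(
            f"pH target {target} outside safe window [{PH_MIN}, {PH_MAX}]"
        )
    # (a real implementation would hand this to the pH control job)
    return target
```

An out-of-range request fails loudly with an explanation the agent can read, rather than being silently acted on - which is exactly the failure mode the “I guess I better add base” approach has.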