I just started a new experiment but I can’t seem to get the calculated growth rate feature working. When I click run, it just says “starting” and hasn’t actually given me any data after ~20 minutes.
This is what it says in the event log:
16:24:45 | pioreactor1 | growth rate calculating | Starting to compute statistics from OD readings. This will take a few minutes.
I’ve restarted the growth rate automation a couple times, but that hasn’t seemed to fix it.
I did make this change to the config: samples_per_second=0.016 (previously was 0.2)
Could this be what is causing it to take longer than normal to start showing growth data?
I only want it to take one sample every minute, because that is 1,440 data points/day, or 10,080 over a week.
I made this change because I ran my last experiment for ~1-2 weeks. When I tried opening the exported Excel log, it had ~200,000 rows and was basically unusable because it had so many rows (0.2 samples/second → 17,280 samples/day → ~121,000 samples/week).
Would it be possible to implement a feature where it samples the readings at one rate (e.g., 0.2 samples/second), but only save one data point each minute? Or maybe save the average of 12 samples each minute?
Edit: It is working now, it just took longer than usual because I had lowered the samples/second setting. I would be interested in whether it is possible to sample at a higher rate, but only save the average value each minute.
it just took longer than usual because I had lowered the samples/second setting
You’re correct. Growth rate calculations require 30 data points to initialize. At 5s / sample, this takes only a few minutes. At 60s / sample, well, it takes excruciatingly long! Overall, it’s not a very good experience for users to wait 30 min.
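The arithmetic above can be sketched quickly. This is a minimal illustration, not Pioreactor code; the 30-point requirement comes from the post above, and the function name is hypothetical:

```python
# Hypothetical sketch: how long the growth-rate calculator waits before
# producing its first estimate, assuming it needs 30 OD readings to
# initialize (per the discussion above).

REQUIRED_POINTS = 30

def startup_wait_minutes(samples_per_second: float) -> float:
    """Minutes until enough OD readings have accumulated to initialize."""
    return REQUIRED_POINTS / samples_per_second / 60

# Default rate: one reading every 5 s -> 2.5 minutes
print(startup_wait_minutes(0.2))     # 2.5
# Lowered rate: one reading every 60 s -> 30 minutes
print(startup_wait_minutes(1 / 60))  # 30.0
```

This is why dropping samples_per_second from 0.2 to ~0.016 stretches the “Starting to compute statistics” phase from a few minutes to about half an hour.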
The bigger root problem is that the exported dataset is too big to manage. In my previous life, we dealt with this by creating “roll-ups” of datasets - basically taking a finely-grained table and aggregating it over a coarser-grained time period. I like this idea, especially for these long-term experiments. I’ll spend some time this week working on a solution here and expose it in the UI. Once that has landed, the sample interval can be reverted back to 5s / sample.
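To make the roll-up idea concrete, here is a minimal sketch using an in-memory SQLite table. The table and column names are illustrative only, not necessarily Pioreactor’s actual schema:

```python
import sqlite3

# Minimal sketch of a "roll-up": aggregate fine-grained OD readings into
# one averaged row per minute. Schema is illustrative, not Pioreactor's.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE od_readings (timestamp REAL, channel TEXT, od REAL)")

# Twelve 5-second readings within the first minute, then one more after it.
rows = [(t, "90", 0.5 + 0.01 * i) for i, t in enumerate(range(0, 60, 5))]
rows.append((65.0, "90", 1.0))
con.executemany("INSERT INTO od_readings VALUES (?, ?, ?)", rows)

# Roll up: group by channel and 60-second bucket, keep the mean and count.
rollup = con.execute(
    """
    SELECT channel,
           CAST(timestamp / 60 AS INTEGER) AS minute,
           AVG(od) AS mean_od,
           COUNT(*) AS n
    FROM od_readings
    GROUP BY channel, minute
    ORDER BY minute
    """
).fetchall()

print(rollup)  # twelve samples collapse into one averaged row per minute
```

The same query run periodically (or at export time) would shrink a ~200,000-row week-long export down to ~10,000 rows while preserving the per-minute average the earlier post asked for.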
The small problem is that if someone does want 60s / sample, collecting 30 data points takes too long. I may make some adjustments here, too.
Thanks! If I have an experiment currently running, will I have to end the experiment before updating?
Also, would it be possible to set up the pioreactor to measure transmission as well as 90 degree scattering? I was thinking this might help because it would tell me whether a drop in scattering is due to increased transmission, or due to increased absorption. Would this give me any greater insight as to population growth?
I would advise waiting until the experiment is over - just to be safe.
You can (but first read the bottom paragraph): the most simple way is to reuse the REF photodiode by removing it from REF pocket and repositioning it into the 180deg pocket. Then, in your config.ini, under section [od_config.photodiode_channel], change REF to 180. Now you have two photodiodes: one at 90deg, and another at 180deg. You can start the experiment as usual. Caveats:
you are losing the REF signal, so you may see more environmental effects in your experiments: e.g., a change in ambient temp, or a change in vial temp, may appear in the signal. (Temperature affects the LED.)
The 180deg signal is highest at the start of the experiment, when there is little-to-no cloudiness (turbidity). On the other hand, the 90deg signal is lowest at the start of the experiment. So you have a trade-off in signal: you may need to reduce the IR LED intensity early on (else you may hit the upper bound of the PD sensors), but reducing the IR LED intensity may cause the 90deg signal to lose sensitivity.
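For reference, the config.ini change described above might look like the fragment below. The channel numbers are assumptions for illustration; check which pocket each channel on your HAT actually maps to:

```ini
; Hypothetical [od_config.photodiode_channel] section after moving the
; REF photodiode into the 180deg pocket. Channel numbers are illustrative.
[od_config.photodiode_channel]
1=90
2=180
; previously: 2=REF
```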
But, the broader question: why are you seeing a drop in scatter signal? The IR light shouldn’t be absorbed as much as you think. There is the phenomenon of saturation, see here. Do you think that is what is happening?
I had thought I had purchased a spare set of photodiodes, so I figured I might as well try using them since I had them on hand. I just checked my order though, and it appears I did not purchase an extra set, so I am just going to leave my photodiodes as they currently are (REF & 90 deg).
In regards to why I saw the drop in scatter signal, I am not really sure. I had let my previous experiment run for ~2 weeks, but it had quite a few issues with contamination, so I wasn’t sure what could be causing the drop. I was thinking that I could add in a diode to measure transmission, and this would tell me whether the issue was saturation or not. If both 90 deg scatter and 180 deg transmission drop, then I would assume the cause is saturation. If 90 deg scatter decreased while 180 deg transmission increased (or stayed roughly constant), then that would be evidence that saturation was not the issue.
The issue is currently moot though, because I just checked my order and realized I didn’t order a second set of diodes, and also because I started up a new experiment.
So you have a trade off in signal: you may need to reduce the IR LED intensity earlier on (else you may hit the upper-bound of the PD sensors), but reducing the IR LED intensity may cause the 90deg signal to lose sensitivity.
Could this issue be circumvented by taking two measurements? E.g., measure the 180 deg and 90 deg signals using a high-intensity IR, then take a second measurement ~5 seconds later with a low-intensity IR? I’m just asking because I’m curious, not because I’m hoping to use something like this anytime soon.
Also, I noticed that the blog page/link to the Kalman filter looks like it is dead.