Hi everyone,
In several of my experiments, the OD measurements gradually decrease until they hit 0 and never recover. Sometimes a reactor even starts well, but when I restart the OD measurements, the values crash to zero. This happens at random times and on random reactors. I have tried changing the ir_led_intensity and replacing the PDs and the LED, but neither solved the problem. Has anyone experienced a similar issue?
The plot below shows measurements of a near-clear medium (similar to water) with a small amount of bacteria. I measured the culture and confirmed that it has OD = 0.05. In an experiment run before mine, the reactors were working fine and did not show this issue.
Sometimes, simply creating a new experiment fixes this issue. This time, however, only some of the reactors recovered. Is it possible for reactors to get locked in a certain state when an experiment is created? That is, why did some reactors recover and measure higher values after a new experiment was created right after this issue?
Hi @sharknaro,
Hm, this plus your random jumps in a previous post has me concerned. Can you email me the Optical Density Readings and the Experiment Logs exports from this experiment to cam@pioreactor.com?
Just off the top of my head, some possible causes:
- bug in the firmware
- bug in a calibration
- REF signal is much too high
- some weird combination of configs?
I see you are using a long duration between OD readings; can you tell me what that duration is? (Actually, maybe just copy-paste your [od_reading.config] section here.)
Hi @CamDavidsonPilon,
I sent the data to the email you provided. I think the problem is random. Today I started the experiment and it began well, although it hit 0 at certain points, as you can see in the attached picture below.
That being said, here is the requested [od_reading.config] section:
[od_reading.config]
# how many optical density measurements should be published per second? Recommended maximum is 1.
samples_per_second=0.02
# default intensity of IR LED. The value `auto` will automatically choose an IR LED intensity that matches an internal value.
# Or set an integer between 0 and 100. Higher is usually better (>= 50), but keep less than 90 for longevity.
ir_led_intensity=70
# lower to remove heating artifacts, but may introduce more noise in the OD. Must be between 0 and 1.
pd_reference_ema=0.9
# populate with your local AC frequency (typically 50 or 60) to get slightly reduced noise. Inferred from data otherwise.
# local_ac_hz=
# apply a smoothing penalizer
smoothing_penalizer=6.0
# turn off the other LED channels during an OD reading snapshot.
turn_off_leds_during_reading=1
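As a side note on the config above, `samples_per_second=0.02` does imply a long duration between readings. A quick sketch of the arithmetic (plain Python, not Pioreactor source code):

```python
# From the [od_reading.config] section above: samples_per_second = 0.02
# means one OD snapshot every 1 / 0.02 = 50 seconds.
samples_per_second = 0.02

interval_s = 1.0 / samples_per_second
print(f"{interval_s:.0f} seconds between OD readings")  # 50 seconds
```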
Regarding the suggestions:
- Firmware bug: possible, though it seems to occur at random.
- Calibration bug: no calibration is set on the workers.
- REF too high: I tried ir_led_intensity values of 50, 70, and auto; all gave the same result.
- Config combinations: maybe, though I had the 25.5.22 setup on both the leader and the worker.
Solution that worked for me:
A) Test which PDs is the problem (suggested by @CamDavidsonPilon)
This will remove the REF signal from the calculation and you’ll see both signals in the chart. This way, we can see how both signals look and figure out which PD is causing problems.
1. Set both channels to 90 in config.ini:
[od_config.photodiode_channel]
1=90
2=90
2. Create a test experiment and start OD readings (you should see plots similar to those on the right side of the picture below).
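If you prefer to script the config change instead of editing config.ini by hand, a minimal sketch with Python's standard configparser could look like the following. The path is a hypothetical local copy; on a real Pioreactor the configuration is normally edited through the UI's Configuration page.

```python
import configparser

# Hypothetical local path; adjust to wherever your config.ini copy lives.
CONFIG_PATH = "config.ini"

config = configparser.ConfigParser()
config.read(CONFIG_PATH)

# Point both photodiode channels at the 90° angle so the REF signal is
# dropped from the OD calculation and both raw signals appear in the chart.
if not config.has_section("od_config.photodiode_channel"):
    config.add_section("od_config.photodiode_channel")
config.set("od_config.photodiode_channel", "1", "90")
config.set("od_config.photodiode_channel", "2", "90")

with open(CONFIG_PATH, "w") as f:
    config.write(f)
```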
B) Replace the malfunctioning PD
1. Replace all the PDs that were giving "0" measurements in the OD plot with new ones.
2. Create a test experiment and start OD readings.
For example, in the picture above, channel 2 (ch2, aka 90) was giving "0", so those PDs were replaced with new ones. After replacing them, check the channel 2 (ch2, aka 90) readings again.
Today, the same thing happened with a worker in another cluster. I replaced the PDs with new ones and changed ir_led_intensity to auto (it was 70), and this fixed the problem.
Personal experience: even for a "near perfectly clear" medium, the sensors always generate some voltage (i.e., the reading is never exactly "0"; it is always ~0.000X). In all my experiments, Pioreactors that showed "0" in the readings never recovered or scaled in OD even when the culture became turbid. However, if the reading was even 0.0001, the OD readings scaled with the medium's turbidity. So my take on this is that:
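That observation can be turned into a simple sanity check. The helper below is hypothetical (not part of the Pioreactor codebase), and it assumes you can pull the raw readings for one PD out of the exported Optical Density Readings; the 0.0001 cutoff is taken from the observation above:

```python
# Hypothetical helper: flag a photodiode as "dead" when its signal is
# indistinguishable from zero. A healthy PD in near-clear media still reads
# a small positive value (~0.0001 or more), while a dead PD reads flat 0
# and never recovers even when the culture becomes turbid.
DEAD_THRESHOLD = 1e-4  # assumed cutoff, in the same units as the exported readings

def pd_looks_dead(readings: list[float], threshold: float = DEAD_THRESHOLD) -> bool:
    """Return True if every reading falls below the threshold."""
    return all(r < threshold for r in readings)

healthy = [0.0004, 0.0003, 0.0005]  # small but nonzero: scales with turbidity
dead = [0.0, 0.0, 0.0]              # flat zero: PD likely worn out or disconnected

print(pd_looks_dead(healthy))  # False
print(pd_looks_dead(dead))     # True
```

A PD flagged this way is a candidate for replacement per step B above.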
- The PDs are getting worn out, most likely due to age.
- There is a loss of connection between the PDs and the HAT port (less likely, as this does not seem to happen with channel 1 and channel A).