I looked at this particular issue soon after reading through the Theremino stuff. They claim to use a pole-zero cancellation network to address it.
The undershoot is a consequence of the coupling capacitor charging up a little during the pulse: when the voltage going into the capacitor declines to zero, the built-up charge on the capacitor forces the output side below the quiescent set point. That, in fact, is the main reason I used a really long time constant for the C-R input network on my signal conditioning board -- if the RC time constant is very long compared to the pulse duration, the undershoot should be close to zero. However, the Pocket Geiger's low-pass filter network takes no such precaution. My simulations show that increasing the value of some of the low-pass elements on the Pocket Geiger board reduces the undershoot, but at the cost of changing the filter cutoff frequency. Options are few here.
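For a rectangular pulse of width T through a C-R high-pass, the undershoot step at the end of the pulse works out to a fraction 1 - exp(-T/RC) of the pulse amplitude, which is why a long RC makes it vanish. A minimal sketch of that relationship (the RC values below are illustrative assumptions, not measurements of either board):

```python
import math

def undershoot_fraction(pulse_width, rc):
    # Fraction of the pulse amplitude that appears as undershoot at the
    # output of a C-R high-pass when a rectangular pulse of width
    # `pulse_width` ends: the capacitor has charged by 1 - exp(-T/RC).
    return 1.0 - math.exp(-pulse_width / rc)

# Assumed numbers for illustration: a 13 us pulse into a short RC gives a
# large undershoot, while a very long RC (as on the signal conditioning
# board) makes it negligible.
for rc in (50e-6, 1e-3, 100e-3):
    pct = undershoot_fraction(13e-6, rc) * 100
    print(f"RC = {rc:8.0e} s  undershoot = {pct:6.3f} % of pulse amplitude")
```

The same expression also shows why tweaking the low-pass element values only trades undershoot against cutoff frequency: both depend on the same RC product.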
Yes - I agree entirely, and I am puzzled why I can't see "undershoot". I have very carefully simulated, down to picoamps, all that a PIN diode can deliver from "pulses" into an entirely artificial load. It does require re-working a whole lot of LTspice default settings.
I think that if the pulse charge is small enough, and the coupling capacitor is orders of magnitude larger than the PIN diode's reverse capacitance, the pulse cannot "charge" the coupling capacitor. The coupling capacitor looks like a short circuit straight into the TIA input. It only takes about 6 fA into that 500 MHz GBP amplifier to provoke it into cancelling the little shot of current.
I'm wondering if the undershoot may be a non-issue if we're integrating the pulse rather than performing a peak-find using interpolation as described by the Theremino folks. One potential "fly in the ointment" is my current pulse-qualification code, which sets the pulse width based on the excursion above the quiescent baseline -- so voltages that go BELOW the baseline aren't integrated. This whole situation probably requires some simulation data plus a standalone program to see how it all works out. Yes, a full analytical approach is possible, but I think it could be pretty messy. If there are any takers, go for it!
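A quick standalone check along those lines, with assumed shapes and values (a rectangular 13 us pulse AC-coupled through a C-R high-pass with an illustrative 200 us time constant): integrate the output two ways, once over everything and once using only the samples above the quiescent baseline, as the pulse-qualification code does.

```python
# Sketch with assumed numbers -- not the real pulse shape or board values.
dt = 0.1e-6          # 100 ns simulation step
rc = 200e-6          # coupling time constant (assumed for illustration)
pulse_width = 13e-6  # nominal pulse duration
vp = 1.0             # pulse amplitude, arbitrary units

vc = 0.0             # voltage across the coupling capacitor
full = above = 0.0
for i in range(int(2e-3 / dt)):      # run long enough for undershoot to decay
    vin = vp if i * dt < pulse_width else 0.0
    vout = vin - vc                  # high-pass output
    vc += (vout / rc) * dt           # capacitor charges toward the input
    full += vout * dt                # integrate everything, undershoot included
    if vout > 0.0:
        above += vout * dt           # integrate only above the baseline

ideal = vp * pulse_width
print(f"ideal area        : {ideal:.3e}")
print(f"above-baseline sum: {above:.3e}")
print(f"full integral     : {full:.3e}")
```

Under these assumptions the outcome is interesting: the full integral, undershoot included, tends toward zero (the AC coupling guarantees it), while the above-baseline-only integral lands close to the true pulse area, low by only the droop during the pulse. So excluding the below-baseline samples may actually be the right thing to do, not a bug.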
I think I may have already been a taker, and gone for it hard, and it was messy!
Preserving the resolution
At the small-signal stage, well before we attempt to measure its energy analogue by any kind of final integration, the aim is to faithfully preserve that wave shape, with its precious information. I don't yet know if it is unrealistic to expect that we could count up (say) 20 places across the waveform, add them up, and hope to see a bucket filling with 5.5 keV as separate from a bucket filling with 5.9 keV for (say) chromium, but it is what I hope for. I do feel we won't have much chance unless that pulse is preserved.
Low Pass Filtered Versions
I accept that low-pass filtered versions might possibly retain the energy analogue, or enough of it to be useful, so long as the poles of the filtering do not get on top of each other. I fully understand that the prime attraction of this is that it provides more time to process the slower pulse through slower ADC sampling, at the expense of a pipeline delay that may let other pulses go by without getting looked at. That is not really a big deal; we can just sample for longer. I simply liked it better that the pulse we have takes a time which depends on the PIN diode capacitance, and so I chose to sample fast enough to measure that.
Arguably, if we allowed a small amount of Butterworth-type filtering, we could let the 13 us pulse become (say) 50 us, and get a finer-resolution measure of the energy under it. That was a passing thought, but I also thought that if I got 20 samples of the original, it left room to play with higher-resolution software sampling later, if slowed-down versions turned out to be a raging success!
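One reassuring point about the stretching idea: any filter with unity DC gain preserves the area under the pulse, so the energy analogue should survive the slow-down. A sketch using a single-pole RC low-pass as a stand-in (not a true Butterworth; time constant and pulse values are assumed for illustration):

```python
# Assumed numbers: a 13 us unit pulse stretched by a single-pole low-pass
# whose time constant is picked so the output rings out over roughly 50 us.
dt = 0.1e-6
tau = 50e-6 / 3.0    # filter time constant (assumption, ~3 tau settling)
pulse_width = 13e-6
vp = 1.0

y = 0.0
area_in = area_out = 0.0
for i in range(int(1e-3 / dt)):    # integrate until the output has settled
    x = vp if i * dt < pulse_width else 0.0
    y += (x - y) * dt / tau        # single-pole low-pass, DC gain = 1
    area_in += x * dt
    area_out += y * dt
print(f"input area : {area_in:.4e}")
print(f"output area: {area_out:.4e}")
```

The two areas agree, even though the output pulse is much slower and lower in amplitude -- which is exactly why the slower ADC sampling could still recover the same energy measure.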
An alternative might be to use LTspice to do it: use a switch to route the pulse voltage into an integrator. The switch would be gated using various threshold voltages, as chosen by your favorite theory.
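The same switch-plus-integrator experiment can be mocked up in software before committing to an LTspice deck. A minimal sketch, with a hypothetical sample train and arbitrary threshold values:

```python
# Software stand-in for the switch-gated integrator idea: samples are
# routed into the integrator only while the signal sits above a chosen
# threshold voltage.
def gated_integral(samples, dt, threshold):
    total = 0.0
    gated = False
    for v in samples:
        if not gated and v > threshold:
            gated = True       # switch closes on the upward crossing
        elif gated and v < threshold:
            gated = False      # switch opens when the pulse falls back
        if gated:
            total += v * dt
    return total

# Hypothetical triangular test pulse with a small undershoot tail.
dt = 0.5e-6
samples = [0, 0.2, 0.5, 1.0, 0.8, 0.4, 0.1, -0.05, -0.02, 0.0]
for thr in (0.05, 0.3):
    print(f"threshold {thr}: integral = {gated_integral(samples, dt, thr):.3e}")
```

Sweeping the threshold this way shows directly how much of the pulse area each choice of gate voltage captures, which is the question the LTspice version would be answering.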
Finally, I'm wondering whether errors due to that pesky undershoot might just become a simple offset that could be compensated for.
When I reduce the coupling capacitor, in simulation, to anything less than about 800 pF, I do start to see the unwanted effects on the waveform. We will never go down to the 100 pF or so seen on the Pocket Geiger. The Theremino "signal conditioning circuit" using two BCW60B transistors, working on a 200 mV pulse, is also, in my view, pretty useless.
Working with electrons
Simply having a pile of electrons that resulted from an X-ray photon arriving in the PIN diode is not enough to get a grip on it. Those electrons are not yet a current. One needs to introduce the time -- the per-second rate -- over which that charge brings a capacitance up to a voltage.
Every equation and physics analysis I found (or watched in Indian university lectures on YT) always involved a rate of arrival, meaning a radiation flux: photons per second. There was nothing on how to figure out how much pulse current would result from a single photon.
The answer was to calculate via the energy known to be in the photon. That is how I came up with the current pulse estimates.
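For reference, the back-of-envelope route from photon energy to pulse current looks like this, using the standard ~3.6 eV per electron-hole pair in silicon (the 5.9 keV photon and 13 us pulse duration are the figures discussed above, used here as assumptions):

```python
# Single-photon charge and average current estimate for a silicon PIN diode.
E_PAIR_EV = 3.6        # eV needed per electron-hole pair in silicon (standard value)
E_CHARGE = 1.602e-19   # electron charge, coulombs

def pulse_current(photon_kev, pulse_seconds):
    pairs = photon_kev * 1e3 / E_PAIR_EV   # number of electron-hole pairs
    charge = pairs * E_CHARGE              # total collected charge, coulombs
    return charge / pulse_seconds          # average current over the pulse

# A 5.9 keV photon collected over a 13 us pulse:
i_avg = pulse_current(5.9, 13e-6)
print(f"~{i_avg * 1e12:.1f} pA average current")
```

A 5.9 keV photon yields only around 1600 electron-hole pairs, a few hundred attocoulombs, so the average current lands in the tens of picoamps -- consistent with the picoamp-level simulation described earlier.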
It should be OK
Regardless of what the actual pulse might or might not do with my low-noise TIA, I expect I will have a way, even if I have to use the DC-coupled TIA and find a way to lose the inevitable offset. For now, I am happy that my most careful simulations do not show a problem, provided that the first TIA's current comes in via a capacitor so big that it cannot start storing significant charge of its own.