Needing more than a spark test?

Inserts foot in mouth. CBLAS and CLAPACK are quite advanced for a Teensy. It could be done, but it's a project all by itself. When I did something similar for an embedded platform, I had vendor support. I don't foresee PJRC doing that, simply because they have their hands full sourcing parts to maintain production; they have had to do multiple redesigns because of parts availability. Their interests seem to be aligned with audio, but not general signal processing.

The aforementioned libraries were designed for PCs with an OS and threads. Not insurmountable for embedded, but not a walk in the park.
 
In the context of a relatively slow ADC I'm stuck with a maximum of about 70 samples per pulse. I don't think there's much advantage in performing an FFT on that, compared to just summing 70 data points.

A 1MSPS ADC will get you close to a 128-point FFT. But is that going to be a significant advantage compared to

float integral = 0;
for (uint16_t i = 0; i < datalen; i++)
    integral += pulsedata[i];

If all you want to do is use "integral" as the index into an MCA array (along the lines of "MCA[integral] += 1") and then plot it? I'm leaving out real-world niceties like scaling the integral value so it doesn't overflow the MCA array; what I wrote is just for pedagogical purposes :)
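To make that concrete, here's a minimal sketch of the integrate-and-histogram idea with the scaling/clamping nicety filled in. The bin count and function name are invented for illustration, and "scale" would have to be tuned to the real ADC resolution and pulse length:

```cpp
#include <cassert>
#include <cstdint>

constexpr uint32_t MCA_BINS = 1024;   // assumed spectrum size
static uint32_t MCA[MCA_BINS];

// Integrate one captured pulse and bump the matching MCA bin.
// 'scale' maps the raw integral into 0..MCA_BINS-1; its value depends
// on the ADC resolution and pulse length, so tune it on real hardware.
void mca_accumulate(const uint16_t* pulsedata, uint16_t datalen, float scale)
{
    float integral = 0;
    for (uint16_t i = 0; i < datalen; i++)
        integral += pulsedata[i];

    uint32_t bin = static_cast<uint32_t>(integral * scale);
    if (bin >= MCA_BINS)          // clamp rather than run off the array
        bin = MCA_BINS - 1;
    MCA[bin] += 1;
}
```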

A 1024-point FFT containing real pulse data of a 100 µs pulse would require a 10 MSPS ADC. I found a 16-bit LTC2386 on Digikey selling for about $44, so it should be possible to do something like that. It has an LVDS serial interface, so the serial channel would have to run at about a 160 MHz clock rate. The interface is NOT SPI, but it should be possible to use the FlexIO peripheral system on the T4.x chip. The converter would require LVDS-to-single-ended level translators for the clock and data lines. The FlexIO system may be fast enough, but I'd want to double-check that before jumping into that kind of project.
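For what it's worth, the sample-rate and bit-clock figures above are just arithmetic; a quick sanity-check sketch (numbers only, not tied to any particular part's timing spec):

```cpp
#include <cassert>

// 1024 samples spanning a 100 us pulse -> required sample rate.
constexpr double pulse_seconds = 100e-6;
constexpr double samples       = 1024;
constexpr double sample_rate   = samples / pulse_seconds;  // ~10.24 MSPS

// Streaming 16 bits per sample at ~10 MSPS needs roughly a
// 160 MHz serial bit clock.
constexpr double bit_clock = 16 * 10e6;                    // 160 MHz
```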

The question is, at the end, would the extra effort pay off or not? I think it's too soon to say.
 
It all depends on what you need to capture and the pulse SNRs. If there are buckets of SNR, then almost anything works. Looking forward to seeing more data. Hopefully that data will make it clearer what is required.

Re: FlexIO, it seems to be pretty fast. It has a minimum speed of 49 MHz. Not sure of the max, but it is designed to shuttle around camera data for pixel processing. I think there's a post by Paul Stoffregen on Teensy FlexIO on his (PJRC) forum, dated around 2018.
 
To continue my previous post, it is also worthwhile to consider the detector. X-ray detectors have an inherent minimum energy resolution that is dictated by the technology they use. The best, for something that is hobbyist-affordable, is probably a NaI(Tl) scintillator coupled to a PMT, and that will get you about 6-7% (usually specified with 662 keV gamma rays). If that relationship held true down to the 10 keV range (and it probably doesn't), the spectrum produced by a 10 keV gamma-ray source would have a spread of 600 eV. That is enough for iron and cobalt peaks to significantly overlap -- regardless of how much fancy SNR-improving stuff we do. So it doesn't make sense to beat ourselves up over improving the pulse SNR past a certain point.
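The 600 eV figure is the stated fraction applied at 10 keV. A quick sketch of that arithmetic, alongside a second model where the fractional resolution worsens as 1/sqrt(E) -- a common statistics-driven assumption for scintillators, not a measured NaI(Tl) curve:

```cpp
#include <cassert>
#include <cmath>

// FWHM (eV) if the fractional resolution were constant with energy:
// the optimistic case above, e.g. 6% of 10 keV = 600 eV.
double fwhm_constant_fraction(double fraction, double energy_ev)
{
    return fraction * energy_ev;
}

// FWHM (eV) if the fractional resolution scales as 1/sqrt(E) from a
// reference energy -- an assumed model for illustration only.
double fwhm_sqrt_model(double frac_at_ref, double ref_ev, double energy_ev)
{
    return frac_at_ref * std::sqrt(ref_ev / energy_ev) * energy_ev;
}
```

Under the constant-fraction assumption a 6% detector gives 600 eV at 10 keV; under the 1/sqrt(E) model the same detector would be several keV wide down there, which is why the "probably doesn't hold" caveat matters.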

But there is one way the FFT can improve things: using it to deconvolve the MCA spectrum to achieve an effectively better energy resolution. That wouldn't be a realtime FFT, so even plain C++ FFT code would be OK.
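As a sketch of what that offline deconvolution could look like: divide the spectrum's transform by the detector response's transform, with a small regularization constant (my assumption, Wiener-style) so noise-dominated frequency bins don't blow up. A naive DFT is plenty since this isn't realtime:

```cpp
#include <cassert>
#include <cmath>
#include <complex>
#include <vector>

using cd = std::complex<double>;
const double PI = std::acos(-1.0);

// Naive O(n^2) DFT -- fine offline on an MCA spectrum of ~1024 bins.
std::vector<cd> dft(const std::vector<cd>& x, bool inverse)
{
    size_t n = x.size();
    double sign = inverse ? 1.0 : -1.0;
    std::vector<cd> out(n);
    for (size_t k = 0; k < n; k++) {
        cd acc = 0;
        for (size_t t = 0; t < n; t++)
            acc += x[t] * std::polar(1.0, sign * 2.0 * PI * k * t / n);
        out[k] = inverse ? acc / static_cast<double>(n) : acc;
    }
    return out;
}

// Regularized deconvolution of 'spectrum' by the detector 'response';
// 'eps' keeps bins where the response transform is tiny from exploding.
std::vector<double> deconvolve(const std::vector<double>& spectrum,
                               const std::vector<double>& response,
                               double eps)
{
    size_t n = spectrum.size();
    std::vector<cd> s(spectrum.begin(), spectrum.end());
    std::vector<cd> r(response.begin(), response.end());
    auto S = dft(s, false), R = dft(r, false);
    std::vector<cd> D(n);
    for (size_t k = 0; k < n; k++)
        D[k] = S[k] * std::conj(R[k]) / (std::norm(R[k]) + eps);
    auto d = dft(D, true);
    std::vector<double> out(n);
    for (size_t k = 0; k < n; k++)
        out[k] = d[k].real();
    return out;
}
```

With a clean response estimate this sharpens smeared peaks back toward lines; with noisy counts, eps becomes the knob that trades sharpening against noise amplification.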

For the latter application, head on over to the Theremino gamma spectrometer information web pages. They have a (machine-translated) paper describing their success with deconvolution.
 
Is their code well documented? I haven't had a lot of luck with deconvolution, personally. Or rather, I have failed to implement it successfully. I suppose I didn't quite understand it sufficiently...

Edit.
I found the paper on Gaussian deconvolution at the Theremino site. The basic premise is that you know all the possible contaminating spectral lines. Suppose we do, since there's a finite number of elements, and even fewer of them have adjacent lines. Based on that, one tries to sort it all out. I need to reread it. It seems you will need quite a big table or database of all the lines. Could principal component analysis be of help here? Or is the Gaussian deconvolution sufficient?
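One way to read that premise: if the line centers and the detector width are known, recovering how much of each line is present is a linear least-squares fit of Gaussian shapes to the spectrum. A toy two-line sketch (all numbers and names invented, not the Theremino algorithm itself):

```cpp
#include <cassert>
#include <cmath>
#include <vector>

// Unit-amplitude Gaussian line shape evaluated at bin x.
double line(double x, double center, double sigma)
{
    double d = (x - center) / sigma;
    return std::exp(-0.5 * d * d);
}

// Fit amplitudes a1, a2 of two known, possibly overlapping lines to a
// measured spectrum by solving the 2x2 normal equations (Cramer's rule).
void fit_two_lines(const std::vector<double>& y,
                   double c1, double c2, double sigma,
                   double& a1, double& a2)
{
    double g11 = 0, g12 = 0, g22 = 0, b1 = 0, b2 = 0;
    for (size_t i = 0; i < y.size(); i++) {
        double g1 = line(i, c1, sigma), g2 = line(i, c2, sigma);
        g11 += g1 * g1;  g12 += g1 * g2;  g22 += g2 * g2;
        b1  += g1 * y[i];  b2 += g2 * y[i];
    }
    double det = g11 * g22 - g12 * g12;
    a1 = (b1 * g22 - b2 * g12) / det;
    a2 = (g11 * b2 - g12 * b1) / det;
}
```

With real, noisy counts the same normal-equations idea extends to N candidate lines, which is where the big table of lines would come in.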

2nd edit. Wish that Theremino wasn't in Visual Basic. Would have thought it would be in C or C++.
 
I'm not very familiar with principal component analysis, so I took a look at the Wikipedia article about it. It looks similar to methods used in linear algebra to reduce a matrix to its equivalent diagonal -- basically finding the simplest basis vectors of the given data space.

I was also immediately reminded of the Karhunen-Loeve (K-L) transform, which does something similar. Guess what: the Wikipedia article on the "Kosambi-Karhunen-Loeve theorem" mentions that it is used to perform principal component analysis!

IIRC the K-L transform is extremely computationally intensive. It's necessary to separate the contributions of different "dimensions" by calculating the covariance of the process that generated the data. I have no idea how XRF data might fit into that. The output of an MCA is, in essence, a two-dimensional dataset -- the number of counts inside an array of contiguous energy windows -- while the stuff generating the data comes from one or more elements. It appears to me that there isn't enough "dimensionality" in the dataset to use principal component analysis. The classical linear algebra equivalent is having "N" unknowns but fewer than "N" equations, so you can't explicitly solve for the unknown values.
 
And this my friends is about as far-removed from machining that we've gotten so far! Gotta do more lathe or mill talk here :D
 
True enough, but if you want to do spectral analysis or XRF, there's going to be some math involved. The things we have to suffer so we can know what we are machining!
 
99 pages of stuff, a math diversion or two won't hurt!
 
I think ya'll are just trying to get to 100 pages before you build a prototype! ;)
 