Needing more than a spark test?

At the risk of preempting Homebrewed: Photon emission is random, although you can adjust the flux by altering the amount of Am you use to excite the subject material (or moving it further away). If the signal photons are too close together in time they will cause anomalous data. A way to reject these is needed, and some ideas have been thrown out. The fluorescence is not in the visible light range; it is low-energy x-ray around 6 keV. I was hoping a simple peak detector would do the trick, but apparently the electronic hardware noise complicates this. That is above my pay grade.

Post 578 is an excellent summary!

Also Graham's simple approach in post 577 seems very appealing.
 
Thanks for the additional explanation. Low-energy x-rays as the fluorescence does complicate things; it makes simple optical approaches hard. Reading some more, I find that one solution for gamma-ray spectrometry is to make a multichannel analyzer, which sorts pulses into bins according to energy. Is this the basic approach? I haven't found a detailed explanation of how to deal with pile-up, i.e. the reception of multiple photons within a short time period (within a time constant of the analog front end). No matter what, that needs to be dealt with, as it is certain to happen. How do similar implementations reject near-coincident photons?
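For what it's worth, the binning step of a multichannel analyzer is conceptually simple. Here is a minimal sketch, assuming pulse heights have already been calibrated to keV; the 1024-channel count and 0-20 keV span are invented for illustration, not taken from this thread.

```python
# Minimal sketch of multichannel-analyzer binning: pulse heights (already
# calibrated to keV here) are sorted into energy buckets. The 1024-channel
# count and 0-20 keV span are invented for illustration.

NUM_CHANNELS = 1024
E_MAX_KEV = 20.0

def energy_to_channel(energy_kev):
    """Map a pulse energy to a channel index, clamping out-of-range values."""
    ch = int(energy_kev / E_MAX_KEV * NUM_CHANNELS)
    return min(max(ch, 0), NUM_CHANNELS - 1)

def build_spectrum(pulse_energies_kev):
    """Accumulate one count per pulse into its energy channel."""
    spectrum = [0] * NUM_CHANNELS
    for e in pulse_energies_kev:
        spectrum[energy_to_channel(e)] += 1
    return spectrum

# A few Fe K-alpha-ish pulses near 6.4 keV land in neighboring channels
spec = build_spectrum([6.38, 6.40, 6.41, 6.43])
```

The pile-up question is separate: this binning only works on pulses that have already passed whatever rejection test the front end applies.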

Further reading reveals that the detector resolution can be a limitation. Is the detector chosen sufficient to resolve the emission spectra for the use case (the somewhat common materials one wishes to identify)? Gee, sorry to ask hundreds of questions. It's hard plowing through a 59-page thread. Seems that there's a lot of interesting processing (or visual interpretation) going on to reject "clutter" or uninteresting features like backscattering, and even effects from the enclosure. Looks like a fun project.
 
You are asking all the right questions! Some of these have been partially hashed out. Welcome aboard.
Seems that Graham wants to get low noise data and throw out peaks that don't make sense. Homebrewed wants to analyze the $hit out of the data as fast as possible. Who is correct?! Such suspense...
 
I apologize for not rereading the thread. I have started plowing through some of it, and a few of my questions were answered in the beginning. It's not clear what has been settled in either of their approaches (detector choices, PMT or not, etc.). Tossing data at the beginning can result in throwing out the very information one needs. (I've seen that happen.) It's hard to know, in advance, what's good and bad, so preserve the ADC samples! However, if one runs several hypotheses simultaneously, that can give good results. Identification is certainly going to be interesting; it seems easy in principle, but is often hard in practice. Has anyone thought about how to calibrate this instrument?

I do think, at least initially, the processing should be done on a higher power platform (PC) until things are sorted out. Get the samples transferred to a PC and play there - it's far more efficient (from a development perspective) than on a micro/arduino platform. Once the algorithm is determined, then port it back to the platform (if it fits). If it doesn't fit, one either needs a different algorithm (at the expense of something) or a more capable computation engine, or a better/different method of data collection.
 
OK - a few random thoughts..

Hi @WobblyHand - a newer fellow curious about what is going on here, welcome!
Although I originated the sentiment in the title, @homebrewed is the relative mainspring. I had not considered that moderators might want to rule our explorations out of order for being somehow non "hobby-machinist", so in our defence, I cite the motivations.

Having a chunk of steel, likely acquired via some route other than purchased from a source with the composition guaranteed, is a problem!
One might want to know if it is potentially hardenable, or maybe free-machining by having some lead in it. We really do want to know if the bit of cast iron is semi-steel, or hardly better than pig-iron weights. We would like to know if the stuff is a heat-hardenable alloy, or a carbon steel, and so on.

Doing something like this has had me thinking about how one might even turn or drill lead, and how to fabricate the enclosure. Everything about trying to make this gadget in a way that lots of HM members might manage to put together is about something we believe they would find useful. Along the way, we become educated in some practical nuclear physics, right down to the numbers. We have weeded out the wishy-washy stuff. The thread is now hardball about the science, and we have appreciated and set out what it takes to get this data.

I do agree the thread is huge, and it would take someone with more than average perseverance and interest to trawl the whole thing, but that journey details our learning curve. At this stage, we freely discuss atomic absorption spectra with all that already under our belts. We should perhaps pause a bit and periodically throw in summaries and potted explanations, so new readers not so immersed in it can take advantage.

I hesitate to try for "Needing more than a spark test (2)". That's hiking the generation number like a trash Hollywood sequel, but it would at least re-start the number of pages.
-------
Thinking about the energies we want to detect, there is the question: can we reliably detect lead? Can we even detect carbon? Nice to know, because that is such an important ingredient; its presence is most of what allows the spark test to work. From where we started, with the pocket-Geiger, we have the excerpts of energies in a couple of PDFs.

Lead is atomic number 82. Check it out on the second PDF, but also display the first, to get at the top-line column titles.
There is no chance an incoming 60 keV gamma can get the K-shell electrons in lead (Pb) to emit anything. 74.9 keV is just too much, and 84.9 keV is worse!
But look: the L-shell electrons need only 10.5 keV and 12.6 keV to shift, and that is right where our detector can work! See those two together, and you suspect lead.
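That reasoning is just a comparison of the source energy against each shell's requirement, so it is easy to put in a toy script. Here SOURCE_KEV is Am-241's main gamma (the thread rounds it to 60 keV), and the edge energies are approximate textbook values, NOT the figures from the PDFs, so treat them as placeholders:

```python
# Toy check: which shells can the Am-241 excitation photon reach?
# SOURCE_KEV is Am-241's main gamma line (rounded to 60 keV in the thread).
# Edge energies below are approximate textbook values, not from the PDFs.

SOURCE_KEV = 59.5

edges_kev = {
    "Pb K": 88.0,    # lead K-edge: above the source, so no K fluorescence
    "Pb L3": 13.0,   # lead L-edge: comfortably below the source
    "Fe K": 7.1,     # iron K-edge
}

def can_excite(shell):
    """True if the source photon exceeds the shell's binding (edge) energy."""
    return SOURCE_KEV > edges_kev[shell]

for shell in edges_kev:
    print(f"{shell}: {'excitable' if can_excite(shell) else 'not excitable'}")
```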

So ask, can one tell if it is contaminated by Rhenium, Osmium, Iridium, Platinum, or Gold? Sure you can. Consider first the likelihoods. Then consider the resolution of our proposed gadget. Much of the design philosophy I went for was not to settle for a smudged pulse. It needed to be good enough, and have a measurement accurate enough, to try and separate these.

Think through all the common metals and stuff in our steels: Cr, Mo, S, Mn, Fe, Zn, and so on. It's clear that not-very-good analysis with poorer resolution might persuade us that what we thought was lead is actually sulphur. This is why we need the bucket statistics to be smart, and the resolution as high as we can get it. A lot of the software around struggles with getting around stuff like baseline shift, phase delay, pulse stretching and the like. My approach is a bit hardball: DC coupled, or clamped, with high enough bandwidth and low enough noise to make such "smart" guesswork unnecessary, if I can make it so.

One thing I had not considered is how little of our steel comes entirely from ore. Except for perhaps a katana blade, or special-purpose steels, recycled scrap in the mix means the alloy now has a proportion from all sorts of steel, and has been becoming steadily more radioactive. So long as it's not noise, this might even be an advantage. If it makes the steels glow X-rays, that's OK :) I don't expect we will ever be bothered by our steels "getting warmer" :)

Consider the important carbon. Only 6 electrons, just two of them in the K-shell. Only a feeble 277 eV will it yield. The sensor we hijack from the pocket-Geiger kit will only have an absorption probability around 3% for that. Even so, it is not zero, and if we have a low enough noise floor, we can scale the measure to account for the sensor. Then again, why bother? So long as the count is characteristic of the element in calibration with that sensor, the element is identified. Its bucket is incremented, and the display will show it there, regardless that the info came from a small signal.
It's true the axis of the display plot may need to be scaled to account for the sensor curve, or maybe a logarithmic expansion to some base to exaggerate the low levels could be useful, but the key thing is: we can perhaps detect carbon. It becomes a thing about very low noise amplifier detection technology, which fortunately, these days, is reasonably affordable.
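If one did want to scale counts for the sensor, the correction is just a division by the absorption probability. A sketch, where the ~3% figure for carbon comes from the post above and the other efficiencies are invented placeholders:

```python
# Correcting a bucket's raw count for detector absorption probability.
# The ~3% figure for carbon is from the post; the other efficiencies are
# invented placeholders, not measured values.

efficiency = {"C": 0.03, "Fe": 0.60, "Pb_L": 0.85}  # fraction of photons absorbed

def corrected_counts(element, raw_counts):
    """Estimate incident photons from detected counts and absorption probability."""
    return raw_counts / efficiency[element]

# 30 raw carbon counts at 3% efficiency imply roughly 1000 incident photons
carbon_estimate = corrected_counts("C", 30)
```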

As for the processing, it takes more than a little Arduino, but a $55 Raspberry Pi can stomp on it! Very high speed DSP and suchlike are not needed if one is happy to capture the pulse(s) with high speed analog stuff, and analyze at leisure. It just means you have to wait more seconds for the counts to build up. Mark has actually experimented with how often the scintillations happen.
 

I'm up to page 4 now... You guys have covered a lot of ground!

Nonetheless, are you trying to identify 1144 from 1018? Or just aluminum (aluminium :) ) from lead? There seems to be a lot of potential backscatter and junk that might interfere with alloy determination. Quantitative analysis is usually harder than we think. The idea is pretty interesting; that's why I have started reading from the beginning. It will take a while to catch up. When I do, I'm not sure I can help that much, but my background is EE and radar (signal processing).
 
My scheme envisages having a lookup database to display the probable metal alloy type from the analysis of the peaks in the bucket-count histogram. At least, that is the ambition. There will be a fun stage where one sets about making a calibration set of results for the pure elements alone, with some novel ideas on how to get a reading. This includes making up solutions, and getting those little cubes of pure stuff used by Periodic Table collectors, etc.

Once the traces of the calibration (pure) are available, then move on to showing it various types of steel, either known from the beginning, or identified because you can see the proportions of elements. That plot can be a "calibration signature" for the particular steel - like 1144, or 4140.
It's a bit of a software smart trick to have the computer do enough correlation, or statistical stuff to be able to suggest what it is, with perhaps a % probability. Possibly display the calibration material plot over the test metal plot, but in a different colour.
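One simple way to do the "correlation, or statistical stuff" is a normalized dot-product (cosine) similarity between the measured histogram and each calibration signature. A toy sketch; the alloy names are from the post, but the three-channel spectra are entirely invented:

```python
# Toy sketch of matching a measured spectrum against calibration signatures
# using cosine similarity. The three-channel "spectra" are invented.

import math

def cosine(a, b):
    """Cosine similarity of two equal-length count vectors (1.0 = identical shape)."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

calibration = {
    "1144": [0.80, 0.15, 0.05],  # e.g. Fe, Mn, S peak areas (invented)
    "4140": [0.85, 0.05, 0.10],  # e.g. Fe, Cr, Mo peak areas (invented)
}

def best_match(measured):
    """Return (alloy, similarity) for the closest calibration signature."""
    scores = {alloy: cosine(measured, sig) for alloy, sig in calibration.items()}
    alloy = max(scores, key=scores.get)
    return alloy, scores[alloy]

match, score = best_match([0.82, 0.13, 0.05])
```

The similarity score doubles as the "% probability" style figure mentioned above, though a real implementation would want proper statistics rather than this shape comparison.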

Initially, I expect I will be using a physics (chemistry??) textbook, along with some metallurgy data, to figure out what the metal is. (Hoping!!)
 
If I understand correctly, the source is a radioactive element emitting low-energy X-rays? And these sources emit photons at random times, if I remember correctly. How does one count two photons that occur so closely that the smeared-out scintillator response blurs them together? Is it possible to work in the spectral (Fourier) domain rather than the time domain? That way you don't need crazy fast circuitry, correct?

Is the fluorescence in the visible spectrum? Are the lines emitted unique to the chemical makeup? How do you determine the power in each frequency? Are you making a sort of spectral analysis tool, like a diffraction grating coupled with a frequency-insensitive detector? Or a Fourier spectrometer -- much more sensitive, but a lot harder to make!
You're right, pulse overlap is a source of inaccuracy. Since radioactive decay is a random event, there's no way to prevent a certain number of pulses occurring too close together, so the best approach is to reject pulses whose shape is wrong -- clearly, pulse overlap (in terms of the bandwidth of the acquisition system) will result in a non-Gaussian-shaped curve. My scheme will hopefully detect this by looking at the goodness of the 2nd-order polynomial fit.
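To illustrate the idea, here is a sketch of that goodness-of-fit test: least-squares fit a 2nd-order polynomial to the samples around a pulse peak, and reject the pulse when the RMS residual is large. The sample values and the threshold are invented for illustration.

```python
# Sketch of pile-up rejection via goodness of a 2nd-order polynomial fit.
# Sample data and the rejection threshold are invented for illustration.
import math

def quad_fit(xs, ys):
    """Solve the 3x3 normal equations for y = a*x^2 + b*x + c (Cramer's rule)."""
    n = len(xs)
    s1 = sum(xs); s2 = sum(x**2 for x in xs)
    s3 = sum(x**3 for x in xs); s4 = sum(x**4 for x in xs)
    t0 = sum(ys)
    t1 = sum(x * y for x, y in zip(xs, ys))
    t2 = sum(x * x * y for x, y in zip(xs, ys))
    M = [[s4, s3, s2], [s3, s2, s1], [s2, s1, n]]
    rhs = [t2, t1, t0]
    def det3(m):
        return (m[0][0] * (m[1][1]*m[2][2] - m[1][2]*m[2][1])
              - m[0][1] * (m[1][0]*m[2][2] - m[1][2]*m[2][0])
              + m[0][2] * (m[1][0]*m[2][1] - m[1][1]*m[2][0]))
    d = det3(M)
    coeffs = []
    for i in range(3):
        Mi = [row[:] for row in M]
        for r in range(3):
            Mi[r][i] = rhs[r]
        coeffs.append(det3(Mi) / d)
    return coeffs  # a, b, c

def rms_residual(samples):
    """Goodness-of-fit: RMS deviation of the samples from the best parabola."""
    xs = list(range(len(samples)))
    a, b, c = quad_fit(xs, samples)
    return math.sqrt(sum((y - (a*x*x + b*x + c))**2
                         for x, y in zip(xs, samples)) / len(samples))

clean = [0.0, 5.0, 8.0, 9.0, 8.0, 5.0, 0.0]   # an exact parabola: good pulse
pileup = [0.0, 5.0, 8.0, 9.0, 8.0, 9.0, 5.0]  # second bump riding on the tail

REJECT_THRESHOLD = 0.5  # invented; would be tuned against real pulses
```

A clean pulse fits the parabola almost perfectly, while the piled-up pulse leaves a large residual, which is the signature this scheme keys on.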

I've read some papers that describe attempts to analyze pulses in the frequency domain but the approach doesn't appear to have gotten much traction in terms of real-world applications. As far as requiring fast circuitry, there is an open-source XRF system (done by the Italian Theremino group) that uses a computer sound card to perform the acquisition. With the right pulse-shaping circuitry and S/W their approach produces some pretty good results, despite the relatively low sample rate.

The fluorescence is not in the visible spectrum. It is in the x-ray spectrum as well, but at a lower energy than the incident 60,000 electron-volt photon, in the vicinity of 6,000 electron-volts. For some perspective, the energy of a violet (visible) photon is between 2.75 and 3.26 electron-volts, a difference of several orders of magnitude.
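That orders-of-magnitude gap is easy to put into wavelengths with E = hc/λ, i.e. E[eV] ≈ 1239.84 / λ[nm]:

```python
# Photon energy vs wavelength: E[eV] = h*c / lambda ~= 1239.84 / lambda[nm].
# Puts the visible-vs-x-ray gap into concrete numbers.

HC_EV_NM = 1239.84  # h*c expressed in eV*nm

def wavelength_nm(energy_ev):
    """Wavelength in nanometers of a photon with the given energy in eV."""
    return HC_EV_NM / energy_ev

violet_nm = wavelength_nm(3.0)    # a violet photon: roughly 413 nm
xray_nm = wavelength_nm(6000.0)   # ~6 keV fluorescence: roughly 0.21 nm
```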

The emitted spectral lines are characteristic to the element. They do not depend on the oxidation state of the element. If you've got a compound all you will see is a spectrum that is a weighted sum of the individual elements in the compound. This is not the same as NMR spectroscopy, where the spectrum IS shifted by the chemical bonds that are present.

The spectrometer we're working on is known as an energy-dispersive detector. It outputs a pulse whose height is proportional to the energy of the x-ray photon that strikes it. The x-ray photon doesn't make it out alive :). Ideally, 100% of its energy is absorbed by the detector. It is possible to make an x-ray spectrometer that uses a type of diffraction grating -- but the x-ray wavelength is too short for conventional grating technology. Instead, a crystal is used: the regular spacing of the atoms in the crystal lattice acts as a diffraction grating. The disadvantage of this type of spectrometer is that it only detects one wavelength at a time, so a very slow scan through the wavelengths is necessary. It is also much less sensitive than an energy-dispersive detector. I am unaware of ANYONE who has made a DIY version of a wavelength-dispersive spectrometer.

I also am unaware of a Fourier-transform spectrometer (similar to an FTIR) that works in this wavelength range. The challenges of making such a thing would be monumental. The wavelengths of interest are less than 1 nanometer, so the mechanical system would have to be better than that!
 
Yes, 59 pages is a bit of a steep slope to climb! But that includes quite a bit of, for lack of a better phrase, thrashing around looking for the best approach. And it's clear that there still is some gentlemanly disagreement on what that might be! It's all fun, if it's the kind of thing that rocks your boat. I like it because it uses quite a wide range of knowledge and technologies. You can see that both Graham and I have some knowledge regarding electronics; and that is a huge leg up in this case. We will try to make things as clear as possible to those who may not be quite so far along in that realm. As I like to say, there should be no differences between us created by hoarding knowledge, only those given to us by Nature and Nurture.

Back to the more mundane issue of calibration, it is absolutely necessary to have some pure elemental samples. We can't generate a calibration curve by using first principles. Or at least I can't :laughing:
 
The spectrometer we're working on is known as an energy-dispersive detector. It outputs a pulse whose height is proportional to the energy of the x-ray photon that strikes it. The x-ray photon doesn't make it out alive :). Ideally, 100% of its energy is absorbed by the detector.
This is a point on which I looked hard for what is known. I know that when the photon arrives, it might miss. What we think of as solid, hard stuff is mostly empty space, so there is a probability of collision involved. If it misses, there is more to hit beyond. If the stuff is thin enough, it can go right through. It's higher-energy X-ray (gamma) stuff, after all!

If it hits, it excites the element's electrons into a higher energy state. They don't stay that way; they drop back into their "normal" state. The amount of energy they took in to get to the excited state is released as a fluorescence photon, with a new wavelength determined by the energy change involved and Planck's constant. It comes out as X-rays.

So - what happens to the excess?
A 60 keV photon hitting iron (Fe) uses up only 6.4 keV and 7.6 keV to excite the K-shell and, presumably at the same time, another 705 eV and 718 eV getting the L-shell electrons into a higher state. That total is 15.42 keV, leaving another 44.58 keV yet to do anything.
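The bookkeeping in that paragraph, as arithmetic (shell energies as quoted in the post, so they are the post's figures, not vetted values; whether a single photon really excites all four transitions at once is exactly the open question):

```python
# Energy bookkeeping for a 60 keV photon exciting iron, using the shell
# energies quoted in the post (not independently vetted values).

incident_kev = 60.0
k_shell_kev = [6.4, 7.6]        # K-shell excitation energies from the post
l_shell_kev = [0.705, 0.718]    # L-shell excitation energies from the post

used_kev = sum(k_shell_kev) + sum(l_shell_kev)
leftover_kev = incident_kev - used_kev

print(f"used: {used_kev:.3f} keV, leftover: {leftover_kev:.3f} keV")
```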

Does the leftover energy simply excite the same atom electrons another couple of times until it can't quite manage a last K-shell event?
Does the remainder go on to keep working the same atom L-shell up and down again and again until even that runs out?
The wavelengths are getting pretty long for the small energies. Would that be into infra-red?
Does it end up shaking the atom about somewhat, as in it "warms the stuff up" a bit?

Does it not happen that way at all? Does the remaining 44.58 keV keep going and strike some other atom instead?
------------------------
All of the above is about the fluorescence scintillation. The next bit is about what happens in the photodiode detector.

Although I have trawled many videos about what happens in photodiodes, particularly the advanced set of lectures from an Indian university, I have not seen a clear explanation of exactly how an arriving photon turns into a current amid the conduction-band carriers in a material. All the equations are about an energy flux of lots of photons. There is an efficiency involved; we do not get a current of energy equal to 100% of the incoming. Some of it, I think, ends up as heat.

In our design, we have a transimpedance amplifier capable of seeing a current started by only one photon, which is then amplified.
A whole bunch of other noise currents will be amplified along with it. Some of it is thermally generated, but I don't propose cryogenic amplifier design. Some is induced from outside fields, but we can shield it from such interference by design. We can even deny magnetic interference, and we can get up to circuit-design tricks that cancel differential noise and avoid common-mode noise. In the end, this will be about signal-to-noise ratio, and the only way to preserve that, locked in, is to start with extremely low-noise amplification with very high gain, so that any noise from later stages is dwarfed by comparison to our (amplified) original noise.

I am pretty sure that we will run into pulse and real-signal situations we did not anticipate, meaning the difference between theory and practice. We just have to give it our best shot. You are doing all the right things to anticipate most everything. The aluminium plates I had at first thought were overkill, but I revise my opinion.
 