I may have discovered a problem in my MCA library. I haven't had an opportunity to see how it affects the end result, but....
In the MCA_begin code I calculate the bin size for the MCA's x axis. It is used to convert a floating-point input (like voltage peak or pulse area) into an integer that indexes into the MCA array. It takes the length of the array, an unsigned integer, and divides it into the specified data range -- a floating-point number. I don't have a typecast from integer to float on the integer... and the compiler raises no complaint! I don't know whether it's just quietly doing the conversion or not (the latter case would definitely cause problems!). This would be atypical behaviour from a C compiler, but the file is designated as "Cpp" code to suit the Arduino IDE. Maybe that's the difference? I'd think a more modern language would implement even stronger type-checking, but mebbe not?
Ooh - yes, I will have a look, but I won't be directly commenting on the coding. I remember the type casting in C, and how carefully one had to manage it, from when I did a C course (eons ago!). Trying to do arithmetic with an integer getting involved with numbers that produce fractional remainders, even with other integers - oh boy! Better still, numbers that had a good floating-point value getting turned into truncated integers. It can happen easily. I will be having a go using Java, with some help, but that too has huge potential for mess-ups.
For now, let's assume you get to all the places where data type conversions in the code might have messed with you, because I know you will, and you are good at this stuff!
----------------------
The way I imagine this works...
So just think about an artificial scenario where we have a pulse, and it has an integrated count-up representing its area, and that is the analogue of the energy. Now imagine we have the same sort of pulse, but with a higher amplitude. Quite reasonably, we can say that the new pulse has a higher energy, even if the duration remained constant. (It might not be so.) Suppose we force that: we take a fixed number of samples during a pulse we believe is valid, which forces the duration of a pulse. We choose a time long enough for a good pulse to have subsided, and reject the others.
Now getting to the size of the array, i.e. the number of buckets. How many distinct, separate areas can we resolve, from little pulses to big pulses? This is about the resolution of the x-axis. We need it to be such that from one bucket to the next, the change in eV is reasonable. If we make too many, the counts we collect have to be spread across all of them, making the actual count in most buckets rather low.
If we make too few, then some range of energy values all increment the same bucket. The buckets can show higher counts, but there are fewer of them.
So how big should the array be? If we think we can tell the difference between (say) 6 keV and 7 keV, we can use 60 buckets. We are not going to see any energy beyond 60 keV; that is the extreme right side of the x-axis. If we can tell the difference between 6 keV and 6.5 keV, we can use 120 buckets. If we are good enough to tell the difference between 6 keV and 6.1 keV, we can have 600 buckets.
Whatever number we choose for the array length, it is, of course, an integer, and stays fixed. The value used to pick a bucket is floating point. Some kinds of array variable can have dynamic length; that kind we definitely do not need. We do need implicit conversion, so that the answer to any floating-point calculation stays floating point even if one of the inputs was an integer.