
Thursday, January 03, 2013

More Sparkfun geiger counter explorations

My last blog entry was about playing with the Sparkfun geiger counter and some initial explorations. Since then I have written some software to acquire data from the geiger counter (GC) that, while still kludgy, is good enough for me to distribute for other people to use. The programs, with an explanation of what they do, will be on the downloads section of my web site RSN.

The data shown below is from 135409 events spanning an interval of 5.9 days with the GC sitting in my Kamloops basement. Most of the events are background radiation, but there are also K40 events, which become more frequent when I sit in front of my laptop since the GC is on my desk downstairs. The background radiation I've been measuring is nonstationary: it started off at about 14.5 cpm and the most recent average is 15.9 cpm. I'm not sure why this is happening, and the simplest thing to do in such cases would be to buy more GCs and run them side by side to see whether they track each other. Unfortunately Sparkfun is sold out of GCs at the moment, so the anomalies in the data presented here are of unknown origin at present.
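(As a quick sanity check on those numbers: 135409 events / (5.9 days x 1440 min/day) = 135409 / 8496 min, or about 15.9 cpm, which agrees with the current running average quoted above.)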

Below is a histogram of all of the event intervals seen thus far. It has been truncated at 25 seconds, although there are events past this mark; the longest stretch without an event was around 45 seconds. The histogram has been normalized by dividing by 135409, and an exponential of the form y = A + B*e^(C*x) was fitted to the data.

One expects an exponential function in this case, and a beautiful exponential fit is obtained using only 8 iterations in DPlot. What interested me more than the fit itself was the oscillation about the fitted curve.
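(If the decays follow a Poisson process at the overall rate of roughly 15.9 cpm, i.e. about 0.265 events/sec, the interval distribution should be r*e^(-r*t), so the fitted C should come out near -0.27 per second, a 1/e time of about 3.8 seconds, with A close to zero.)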

If the data were truly random, then one would expect any such periodicities to average out, giving a closer and closer fit of the curve to the raw data. What we have instead are these curious damped oscillations, spaced roughly 400 msec apart. They are artifacts of the recording system and, in any system involving windoze, one can almost assume that windoze is the problem. To determine whether this is the case, I am currently writing code to take the raw GC intervals and digitize them using an mbed MCU (a rough sketch of the idea is at the end of this post). It's not as straightforward as doing it in VB, but it's far more artifact-free than a windoze system. Let's now take a look at the artifacts we have to deal with:

[Figure: residuals of the exponential fit plotted against event interval]

As the event interval increases, the value of the residuals drops off markedly. As was noted in my previous blog post, there appeared to be far too many short intervals; I attributed that to a problem with the GC, but it is more likely that my sampling platform is the problem. To get a better idea of what is going on with these residuals, let's compute their FFT:

[Figure: FFT of the residuals]

The large peak is at a frequency of 2.52 Hz and the secondary peak is at 5.04 Hz, twice the fundamental. This gives a period of 397 msec, which is very close to the 400 msec spacing eyeballed above. The big question now is: what's responsible for this artifact?

Windoze is not a real-time OS, even though it's possible to sample data at a high rate under windoze. The only way to get undistorted data sampling out of windoze is through a sound card; anything else, unless external buffering is done, is highly distorted. I naively assumed that the distortion present in sampling the GC at 1 msec precision would be on the order of the error I get when sampling the keyboard under windoze, which is +/- 15 msec, or about 1.5% error. However, there are obviously some higher-order systematic errors to deal with.

One of the things I've done in the past to reduce sampling errors is to run the program in real-time priority mode, but that seemed to make absolutely no difference when sampling GC data. The code for the GC's MCU is open source and will be analyzed to see if there is any reason why the 2.52 Hz anomaly should be there, but I'm 99% sure that the source of the problem is windoze.

Windoze uses a byzantine software path to get the data from the GC to my VB program. Interfacing to the GC takes place via the USB_serial driver, which treats the GC as an RS232 serial data source. The GC transmits only one byte for every event, and one would think that getting the timing of this byte right would be trivial on a 1.6 GHz processor. The byte is transmitted almost immediately by the GC to the host windoze machine in the form of a USB packet, which is handled by the windoze USB_serial driver. The USB_serial driver then presumably stashes the byte it received into a buffer. My VB program uses the MSComm control to acquire data from the "serial port" to which it's connected. The MSComm control is configured so that it raises an event as soon as a single byte comes in on the "serial port". I will have to look at how the USB_serial driver communicates with the rest of windoze, but I suspect that a message is sent from windoze to my VB program. This message is then routed to the MSComm control, which raises the event noting that data is present; my program then grabs the data and reads the value of the msec timer using the timeGetTime() call.
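For comparison, here is roughly what that whole path boils down to if the "serial port" is read directly through the Win32 API rather than through MSComm. This is only a sketch, not my VB program; the port name (COM3) and the 9600,8,N,1 settings are guesses:

#include <windows.h>
#include <stdio.h>
#pragma comment(lib, "winmm.lib")   // timeGetTime() lives in winmm.lib

int main(void)
{
    // Open the virtual COM port created by the USB_serial driver (COM3 is a guess)
    HANDLE h = CreateFileA("\\\\.\\COM3", GENERIC_READ, 0, NULL,
                           OPEN_EXISTING, 0, NULL);
    if (h == INVALID_HANDLE_VALUE) return 1;

    DCB dcb = { sizeof(DCB) };
    GetCommState(h, &dcb);
    dcb.BaudRate = CBR_9600;        // assumed GC baud rate
    dcb.ByteSize = 8;
    dcb.Parity   = NOPARITY;
    dcb.StopBits = ONESTOPBIT;
    SetCommState(h, &dcb);

    COMMTIMEOUTS to = { 0 };        // no timeouts: ReadFile blocks until a byte arrives
    SetCommTimeouts(h, &to);

    for (;;) {
        char c;
        DWORD n = 0;
        if (ReadFile(h, &c, 1, &n, NULL) && n == 1) {
            DWORD t = timeGetTime();            // msec timestamp, same timer the VB program uses
            printf("%lu\n", (unsigned long)t);  // intervals are differences of successive timestamps
        }
    }
}

Even done this way the timestamp is only taken when ReadFile returns, so any latency added lower down in the USB_serial driver is still invisible to the program; that is the whole point of moving the timing onto the mbed.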

Being a preemptive multitasking OS, windoze will suspend my process if a higher-priority process has to run. I admit that I run a lot of programs simultaneously on the machine that's also running the GC interface, but at any given time most of those programs are idle. Nevertheless, it appears that very frequently the data from the GC is delayed by 400 msec after being received by the USB_serial driver. This may also explain the excess number of short intervals: if two events are waiting in the buffer when the delayed delivery finally happens, they are timestamped almost together and so appear at a far shorter interval than the one at which they actually occurred. My next step is to do some research into WTF is going on in windoze that could produce this 2.52 Hz frequency, as it seems far slower than one would expect for an OS with a 10 msec time quantum allocated per thread. Given that most idle process threads relinquish their quantum almost immediately, it seems bizarre that such a long delay would exist. I'm interested in other people's opinions on this subject.

Note: the next run of the Geiger counter sampling program yielded peaks that were spaced 250 msec apart. This is clearly a windoze problem, and I may have to run the program under W3.1 to get better accuracy.
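For reference, the mbed capture mentioned above would look roughly like this. It's a minimal sketch against the classic mbed API, not the final code; the pin choice (p5), the baud rate and the output format are placeholders:

#include "mbed.h"

Serial pc(USBTX, USBRX);      // USB serial link back to the PC for logging
InterruptIn gcEvent(p5);      // GC event line; pin choice is a placeholder
Timer t;                      // free-running microsecond timer
                              // (the classic Timer overflows after roughly 30 minutes,
                              //  so a multi-day run needs rollover handling)

volatile unsigned int lastEvent_us = 0;

void onEvent() {
    unsigned int now = t.read_us();
    pc.printf("%u\r\n", now - lastEvent_us);   // interval since previous event, in usec
    lastEvent_us = now;                        // printf in an ISR is crude but fine at ~16 cpm
}

int main() {
    pc.baud(115200);
    t.start();
    gcEvent.rise(&onEvent);   // timestamp each rising edge from the GC
    while (1) {
        wait(1.0);            // everything happens in the interrupt handler
    }
}

The intervals are measured entirely on the mbed, so whatever windoze does to the bytes on their way to the logging program no longer matters.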

Posted by Boris Gimbarzevsky at 1:19 AM
Edited on: Thursday, January 03, 2013 1:35 AM
Categories: Embedded systems