
Wednesday, March 13, 2013

New toy - logic analyzer

Kicking myself that I didn't buy this logic analyzer weeks ago. I got bogged down in debugging my Propeller assembly language (PASM) code, and my first attempt at making debugging easier was to write yet more code which ran in a separate cog and flashed LEDs when execution ran past a certain point. Fine for slow code, but I needed something which could deal with the Propeller running at full speed.

The first thing I bought was a DSO Nano single channel oscilloscope; a very nice little toy and small enough that I can slip it into my pocket, but not fast enough for debugging 20 MIPS Propeller code. Then it struck me that a logic analyzer would be the thing to get. I was a bit hesitant given the $150 price tag of the Saleae 8 channel 24 MHz logic analyzer, but I ordered it from Sparkfun a couple of days ago and picked it up at my office today. Rarely have I been as impressed with a new toy. It's a tiny little thing, the square metal box with all the colored wires coming out of it on the bottom left of the photo.

The Propeller board is connected to a bank of LEDs which was my first debugging tool. What amazes me about microprocessor development nowadays is how tiny all the pieces are; I can easily use my already overcrowded desk for development instead of needing a large bench area for my test equipment.

One of the most frustrating bits of debugging I was doing was trying to get code working to produce two offset 1 msec pulses which will be used to drive the red and IR LEDs of the pulse oximeter portion of my ambulatory physiologic monitor (APM). This is a trivial bit of code but, when coding in PASM, stupid errors are often hard to find unless one is thinking like an assembler; when I start to think that way I can't seem to do medicine, because everything there is so illogical and fuzzy. Five minutes after I had the item out of the box, I had it hooked up to Propeller pins 0 to 7 while the software downloaded from the Saleae site. Once I captured my first signal, I had the problem solved. A quick rewrite of the code and the satisfying picture below appeared:

The clock pulse displayed is a half-speed system clock which still needs some tweaking, as the period should be 2.000 msec, not 2.006 msec. One can clearly see the LED drive waveforms on channels 6 and 7. That's the way they're supposed to look; they came out inverted when I first made the changes, which exposed yet another bug in my code.

Below is a magnified view of the clock pulse and a precise measurement of how long it takes the clock cog to go from generating the debug clock pulse on pin0 to reaching the PulseOx driver code.


All of the in-between code from the primary clock loop, the RTC and Polar HR monitor pulse detection takes 1.8333 microseconds to execute.

Needless to say I'm thrilled with my new toy and would highly recommend it to anyone who needs a logic analyzer for moderate speed digital circuits. People on the Sparkfun site are grumbling that it doesn't do 50 MHz SPI debugging, but I bit-bang all of my SPI protocols on the Propeller and so will easily be able to decode them with this logic analyzer. Conclusion: well worth the money, and I should have invested in some diagnostic tools earlier rather than trying to emulate PASM in my wetware; that works for small bits of code, but this logic analyzer rocks.

Wednesday, February 20, 2013

Propeller based geiger counter

A few months ago I got one of the Sparkfun geiger counters and, despite my plans to make this unit into a portable device, it's ended up as my basement radiation sensor. So, there was only one option available to me - buy another geiger counter (GC). This device arrived a couple of weeks ago and it will be a portable device, but I still haven't figured out a way of protecting the delicate mica window so that I can also sample alpha radiation during my travels. A project for the future.

GC1, which was the first unit, is connected to my laptop via a USB to serial link. It outputs serial data in the form of 1s and 0s, but the only thing that interests me is the timing of each event. This is done via a VB6 program using the MSComm control, which seems to introduce significant timing artifacts -- far more than I would have expected. Thus, GC2 is sampled via a Parallax Propeller chip. Considering that I've entered the Parallax Micromedic contest, I thought this would be good programming practice and, more importantly, practice at putting circuits which spread over my desk into boxes.

After a month of coding for the Propeller chip, I can honestly say that I haven't had this much fun programming since I first discovered computers. The Propeller chip is an amazing device that runs at 20 MIPS/cog (although by changing from a 5.00 MHz to a 6.25 MHz crystal one can overclock it to 25 MIPS/cog) and it's a hardware hacker's dream machine. I highly recommend this chip for any application that needs to connect to hardware. My preference is for Propeller assembly language (PASM), although there is also the option of Spin as well as PropForth, which I'm just playing around with now.

The setup I'm using is shown below:

The GC is powered from the Propeller demo board +5V output (on the left) and the output of the GC (the Out pin) is connected to a Propeller pin via an 18 kohm resistor. One can go higher as the Propeller only requires low input currents, but the 18 kohm resistor happened to be in a box of resistors so I grabbed it, it worked, and that's all I wanted.

Propeller code is written in PASM with a bit of Spin glue code. All the Propeller code does is look for a high-going pulse on one of its inputs and, when it finds one, it grabs the start time of the pulse and starts timing the duration. When the pulse goes low, the clock time and pulse duration are written to hub RAM on the Propeller. The Spin code just executes an endless loop looking for new data and, when it finds some, formats it and sends it over a serial connection at 115,200 baud to my laptop. The resolution of the Propeller "clock" is 2 microseconds, so after some 8600 seconds the 32 bit clock counter overflows. Considering that I'm just interested in the intervals between two GC events, there's no need for me to have an extension to this 500 kHz clock.
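The pulse-finding loop itself only needs a handful of instructions. Here's a minimal sketch of the idea, not the actual code: the pin number and hub mailbox addresses are placeholders of mine, and I'm timestamping with the raw system counter rather than the 2 microsecond software clock described above:

        org  0
find    waitpeq GCMask, GCMask      ' block until the GC output pin goes high
        mov  start, cnt             ' timestamp the rising edge with the system counter
        waitpne GCMask, GCMask      ' block until the pin goes low again
        mov  width, cnt
        sub  width, start           ' pulse duration in system clocks
        wrlong start, HubTime       ' hand the event time to the Spin loop via hub RAM
        wrlong width, HubWidth      ' hand the pulse duration over as well
        jmp  #find                  ' go wait for the next event

GCMask   long |< 8                  ' placeholder: GC Out wired to pin 8
HubTime  long $7000                 ' placeholder hub mailbox addresses
HubWidth long $7004
start    res  1
width    res  1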

A histogram of the distribution of pulse widths is shown below:

So, the pulse is a rather short one and I'm likely getting only the largest pulses, as an examination of the Sparkfun GC schematic shows that the Out connection is just coming from a single transistor capturing the GM tube voltage. The Out connection really should use a larger resistor, as it's taken at the junction of Q2's emitter with R17, a 1 Mohm resistor to gnd. There are likely some interesting GM tube phenomena happening if one were to look at some of the lower voltages, but I'd suggest sticking in a FET op-amp voltage follower to get a high enough input impedance and then connecting to the Propeller. The pulse is quite short (~280 microseconds in width) with a narrow distribution, although there was a single pulse among the 6560 points which was about 470 microseconds long.

When one looks at the distribution of inter-event intervals, they form a classic exponential curve as one would expect for a stochastic radioactive decay process:

While the number of intervals is still small, it appears that the "ringing" that was visible in the early portion of the decay curve is absent, further evidence that the ringing is an M$ windoze artifact. Already quite a nice curve fit, and the value of C is quite similar to that obtained with GC1.
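(For the record, this is just standard Poisson process theory: if events occur independently at a mean rate lambda, the density of inter-event intervals is p(t) = lambda*e^(-lambda*t), so in a y = A + B*e^Cx fit C should come out as -lambda, the negative reciprocal of the mean inter-event interval, and A should be near zero.)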

The next step was to compare GC1 with GC2. GC1 has been running for several months now with periodic restarts of the sampling program to keep the file size manageable. Here's the last week or more's data from GC1:

A couple of things to notice about this graph:

(1) The periodic spiking throughout. This is what I believe is the windoze artifact. In a stochastic variable, any such periodicities should be eliminated with enough sampling, as it's well known that the limit of this distribution (assuming a stationary radioactive source) as N approaches infinity is a pure exponential curve.

(2) The marked excess of short intervals. I commented on this anomaly when I first began my series of blog posts about the Sparkfun GC, then the short intervals went away. However, over the course of the last month or so, the mean cpm has gone from 15 to 16.9. It may be that GC1 is picking up some of the small oscillations that result in a GM tube when an especially energetic cosmic ray passes through, or it may simply have a more sensitive tube. Of course the only way to sort this out is to buy another GC and have two of them running in parallel monitoring my basement background radiation (does this process ever end?)

Also seen above is that the two-exponential fit is marginally better than a single exponential fit (in retrospect, the color of the line for the two-exponential fit is too close to the original red). The two-exponential fit also seems to fit the data better from about 4000-10000 msec, where the single exponential fit is seen to sag in relation to the data.

When one compares GC1 and GC2, one finds a close correspondence between the two units except in the very short interval region:

Given the small number of samples, GC2 data is obviously quite a bit noisier. The difference GC1-GC2 is shown below:

As has been the case with my previous programs, they'll be posted once they're cleaned up enough to be exposed to public gaze. The Propeller pulse finding program will be posted first to the Propeller forums, and I'll try to remember to update this blog entry when I do that. All of my code is open source, but I consider it unwise to post broken code.

Sparkfun and Parallax have been great catalysts at getting me heavily involved in electronics again (I'm not sure my patients appreciate this newfound interest). Sparkfun is a very dangerous site to peruse at 03:00 if one's credit card is within easy reach (even worse, I've memorized my credit card number given the amount of late night ordering of toys that I do). Sparkfun's electronics are totally open source and I get the feeling that whoever designs the circuits gets something working, has circuit boards etched, and puts it on the site for sale. I'm not denigrating the quality of their material, but rather pointing out that their designs are unfinished. OTOH, this is a tremendous impetus to improve on the designs, and that particular aspect of their products has gotten me programming in the solder programming language (to quote Steve Ciarcia).

Parallax is another company that has bought into the open-source concept, and they produce an incredible microcontroller in the form of the Propeller1 chip (and the Propeller2, when it comes out, will be the bit-banger's ultimate wet dream). I've gotten one of the Propeller1 chips running with a 10 nsec clock period and am finding that I have to insert NOPs for my "high speed" TTL circuitry to function correctly! This particular hardware interface to the GC took about 15 minutes of soldering and a couple of hours of programming in PASM. Teraterm is used to store the data to a text file which is then processed by a simple VB6 program and plotted out with DPlot.

Saturday, February 16, 2013

Propeller Programming stream of consciousness

For those of you who think this is a serious discussion of Propeller PASM, it's not. It started off as a comment on PASM when suddenly it seemed to take on a life of its own. So, it's probably best described as a sort of rant combined with stream of consciousness after a week spent programming that wonderful Propeller chip which is my favorite computer architecture at the moment. Chip Gracey, you rock.

While trying to write Propeller subroutines on paper upstairs, a few things suddenly became clear: the reason for the jmpret instruction and the reason for the movs and movd instructions. I made some stupid mistakes in my Prop code when I first came home, as I made the unforgivable error of assuming that #<variable> would give me indirect addressing (No, No, No, No ---- repeat*1000(No)).

There is no indirect addressing instruction in the Prop instruction set. Remember that. To move a variable to a buffer, one does this as follows:

mov  BuffAddr, #Buff        ' BuffAddr = cog address of Buff
movd ins1, BuffAddr         ' write that address into the destination field of ins1
nop                         ' spacer: an instruction can't be modified and then executed on the very next cycle
ins1 mov  0-0, Data         ' the 0-0 destination is patched at runtime, so this stores Data in Buff

So, instead of the nice PDP-11 method of:

MOV Data, @BuffAddr

which takes 2 bytes, one has the 16 byte instruction sequence above. The primary difference is that the PDP-11 instruction would take about 4 microseconds to execute as it involved a memory based indirect address, whereas the Prop sequence takes 16 clocks to execute, or 0.2 microseconds at 80 MHz.

The other thing is subroutine calls. The PDP8 was more advanced than the Prop when it came to subroutine calls. However, with the jmpret instruction, one can return from anywhere as long as one allocates a return address variable for every subroutine, i.e.:

jmpret Sub1Ret, #Sub1       ' write the return address (rs1) into Sub1Ret and jump to Sub1
rs1     <code to execute after routine>
Sub1    <do something>
        jmp Sub1Ret         ' indirect jump back to the address held in Sub1Ret

The jmpret instruction writes the address of rs1 into Sub1Ret and jumps to Sub1; one just has to ensure the routine ends with jmp Sub1Ret (no #, so the jump is to the address contained in Sub1Ret) to get back to the caller. Kind of neat and also finicky, but I have to play the hand I've been dealt.
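Incidentally, this is exactly the trick the assembler's built-in call/ret pair uses; a minimal sketch (label names mine):

        call #Sub1              ' assembles to jmpret Sub1_ret, #Sub1
        <code to execute after routine>
Sub1    <do something>
Sub1_ret ret                    ' assembles to jmp #0-0; call patches its source field

The only difference from the hand-rolled version above is that call stashes the return address in the source field of the ret instruction itself rather than in a separate register, which is why the label on the ret must be the subroutine name with _ret appended.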

There's something about the Prop chip which is intensely appealing, and I think that the fact it's an architecture that DEMANDS self-modifying code is what makes it such an attractive machine for me. Self-modifying code is something that will get one flunked out of a CompSci course. Mention self-modifying code to a CompSci graduate and they will suddenly turn pale, make the sign of the cross and back slowly away from you saying very slowly, "just stay calm and no-one gets hurt". If you want to then induce a vagal freeze response in them, say the words "Go To". That will overload their wetware, creating potentially irreparable cognitive dissonance and leaving them capable, perhaps, of employment at a bottle recycling depot where their ability to count rapidly will be useful.

Self-modifying code seems to be the divide between those who want contact with the bare silicon and those who prefer to live in a world of virtual machines that all talk Java to one another and keep their variables private. Mention a global variable to these people and it has the same effect as if one pulled out one's dick at a Victorian tea party and asked "does anyone here have a longer one?"

Well, I happen to like living dangerously, which is far preferable to living a life of boredom. Having to restrain myself from trying out all the neat meatware hacks I come up with in my other life as a doctor makes me far more experimental when it comes to my hacker alter-ego. Basically, if there's a limit, I want to see if it can be broken or stretched.

When I first entered the Micromedic contest, it was with some trepidation given the 496 longs/cog. This is a hard limit and I don't have access to a silicon foundry, so I don't try to fit 1 Mb of code into a cog. However, when I started writing PASM, I suddenly realized that I was using far fewer instructions than I thought would be needed. That's because data acquisition software is, when one looks at it in detail, really very simple. One either takes a value from one location and stores it in another, or one generates an SPI/I2C clock sequence which is used to either send or receive serial data. This made me realize that most of the programs which I write in a HLL really aren't doing anything useful as far as the fundamental problem is concerned. All of the routines which I'm especially proud of are very terse and, when I was a young hacker wannabe, the head hacker reminded us of the importance of being terse. That was in the tiny 12 Kword memory space of the TSS/8. Verbosity in PDP8 code was met with a trembling, angry, outstretched hand pointing to the evil empire's hulking 360/65 and a command to "go play with your bloatware".

Today I can't think of a single hacker that would consider the IBM 360's OS to be "bloatware" as the usual 360 installation was lucky to have a single megabyte of core storage. The OS was written in assembly language, but was very arcane and if one threw in 2 or 3 instructions that really weren't needed, one didn't notice given that there was a full MEGABYTE of core. That type of profligacy didn't go over very well in the minimalist atmosphere of the PDP8 head hacker's lair. There would be instruction sequences on a blackboard to be simplified and the best one could expect from him if one came up with a particularly novel simplification was "now that's a neat hack".

The Prop has way more RAM than the TSS/8: 4096 longs in the cogs (512 per cog) and 8192 longs in hub RAM. That works out to 49,152 bytes of RAM in comparison to the 18,432 bytes (although the unit of storage in a PDP8 was a 12 bit word) on the TSS/8. The TSS/8 ran 8 slave teletype machines, each one capable of downloading paper tape programs into the TSS/8 at the blinding speed of 110 baud. When I got an account on the TSS/8, I was in Nirvana. Now, to be fair to the TSS/8, it did have a hard disk drive which I believe was 2 Megabytes or so in size. OTOH, my Propeller Board of Education (BOE) has a mini-SD card socket and will handle cards up to 32 Gb in size. The TSS/8 also had its own large room dedicated to it at the U of Calgary and those of us who had proved especially worthy in the head hacker's eyes were allowed to come into the inner sanctum and actually touch the PDP-8 machine! We were also allowed to reboot it, but the only PDP-8 we were allowed direct contact with was a bare machine with, I believe, 1024 or 2048 words of core.

That's where I first learned assembly language programming. I spent much of my classroom time in high school writing PDP8 software in octal and getting angry glares from teachers who assumed that their trivial subject was much more deserving of my time than programming. Once I had the sequence of octal instruction codes perfected, on yellow sheets of paper often with holes from multiple erasures of incorrect code sequences, I'd head over to the UofC Data Center and sign up for an hour's time on the PDP8. Then, trembling with anticipation, I'd key in my program on the switch panel: key in the address, load the address into the address register, and then key in the sequence of 12 bit words, hitting deposit after every word was entered. It was one of the most exhilarating things I'd ever done at that point in my life. If I was right, then the panel lights would flash and the results of the mathematical operation I'd coded would be available in RAM; they were read out by setting the base address and then reading out the contents of all of the variables that were modified. Then I learned to use the PAL assembler, a multiple pass assembler which required one to read both the assembler and one's paper tape program in multiple times before the binary code was punched out on paper tape. This was what one then loaded into the PDP8 <sorry, caught in a reminiscence loop, but when you remember your first computer far more clearly than your first lay after 40+ years, you must be a hacker>

I guess that's why I like the Propeller so much: it's triggered this gush of computer related memories of a much simpler time in the computational age. That's where I'm coming from and why I hate Java. When I finally got into HLLs, FORTRAN was my language of choice as real men used FORTRAN and just suits used COBOL. Well, actually real men used assembly language, but I never did get an IBM 360 assembler program to run, although I did generate thousands of pages of memory dumps attempting to get one to run (memo to self - don't start by trying to write re-entrant subroutines on an architecture you are just starting to learn).

Let's get away from this nostalgic haze now. What surprised me with the Prop was how short my routines were, and that suddenly 496 instructions and variables was starting to look like a large amount of memory. As I'm writing this, I have a Propeller program which is sampling a 3 axis accelerometer and timing each event to msec precision, is reading out a Polar HR monitor that I'm wearing, and is storing the data to an SD card based datalogger. There's an LED that's flashing with my HR and it's going fast now as I'm so excited to finally have an ambulatory physiologic monitor (APM) (which has been a dream of mine for 3 years now) on a Prop project board inside a little Sparkfun components box. It's running off 6 volts of AA batteries and survived a shopping outing earlier today stuffed in my jacket pocket.

The first program that I wrote for the APM was a 1 msec clock (there's something very reassuring about being able to create a precise timebase for a processor architecture, and many of the development boards that I've gotten nowhere with so far do have nice millisecond clocks on them). The nice thing about the Prop is that it has very deterministic instruction timing. The vast majority of instructions execute in 4 clocks which, at an 80 MHz clock speed, works out to 50 nsec/instruction. 20 MIPS/cog is rather powerful when I was used to the 0.5 MIPS PDP-11/23 as my previous favorite machine. Thus, when one combines short programs with high clock speeds, one gets damn fast execution times. Also, way more leftover RAM in each cog than I had expected.

The clock cog generates a 1 msec clock which is output on Pin0, and the longword clock resides at hub RAM location $30 (why no-one has ever considered a Propeller page0 is beyond me, but that's way beyond the scope of this already meandering text). That way other cogs can grab the timer when they need it. Because of all the room I had left over in the clock cog, I figured I might as well add a real time clock (RTC) to it; the RTC needs to be told if a year is a leap year, but it has a better knowledge of how many days are in a particular month than I do. It keeps excellent time, which it should, as the Prop system clock is generated by a 5.00 MHz crystal oscillator. Then, when I got the Polar HR receiver and chest strap, I decided why not throw the HR monitoring chunk of the project into the clock cog as well? A minimal sketch of this kind of clock loop appears below.
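The pin0 output, the hub location $30 and the 80 MHz clock are as described above, but the labels and structure here are mine and not the actual APM code:

        org  0
clock   or   dira, Pin0Mask         ' make pin0 an output for the debug clock
        mov  time, cnt              ' capture the system counter
        add  time, OneMs            ' schedule the first tick
tick    waitcnt time, OneMs         ' sleep until the next 1 msec boundary, then re-arm
        xor  outa, Pin0Mask         ' toggle pin0 (which gives the 2 msec period seen earlier)
        add  msec, #1               ' advance the millisecond count
        wrlong msec, HubClock       ' publish it to hub RAM for the other cogs
        jmp  #tick

Pin0Mask long 1                     ' pin0
OneMs    long 80_000                ' 1 msec worth of clocks at 80 MHz
HubClock long $30                   ' hub address of the clock long
msec     long 0
time     res  1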

Just a few more instructions were needed to locate the 15 msec long high-going Polar HR pulse and also time its duration so I can distinguish noise from actual Polar HR events. The time of the HR event and the duration of the pulse are saved to a hub RAM buffer.

Then I found some code on the Propeller object exchange (OBEX) which had assembler code to read out the 3 axis accelerometer, which is accessed via SPI. I hacked the code till it did what I wanted and put it in another cog. I brutally chopped up a demo program someone had written to put accelerometer data into a spreadsheet, and ran wild with code in the massive 32 Kb memory space which is the realm of Spin and the hub. Most of this code is just endless calls to the serial driver to print out various variables and test out subroutines in cogs, and it is due for a very serious pruning in the near future.

Then, of course, there is the serial I/O cog, which uses some very cool multiprocessing techniques that let it reliably emulate a 115,200 baud modem. When I'm finished interacting with the program, I switch the serial port output to pin4 where it just streams data to the SD datalogger; another really cool Sparkfun device which just lets you throw text at it and stores it to text files on the SD card.

I'm steeling myself for the next step, which is to write the VB6 code needed to analyze the huge amount of accelerometry and HR data I've recorded today. Normally if I was told I could do some VB6 coding rather than some terminally boring activity like dealing with my hospitalist billings, I'd jump at the chance. However, after spending a week in the wilds of Propeller assembly language (PASM), I feel like someone who's just spent the afternoon on Wreck Beach playing in mud and forgets to dress or clean up before they go to a black tie cocktail party. Now I face numerous restrictions on what I can and can't do. Don't get me wrong, VB6 is my favorite HLL; it's just that the unique feeling one has from intimate contact with the hardware is missing when one moves to a windoze platform from a hardcore minimalist development system. M$ is the new evil empire and IBM is like a now senile dictator who's discovered philanthropy at some point during his decline. (IBM's heavily into open source software whereas if an asteroid hit Redmond, I'd be deliriously happy; gotta teach those alien asteroid tossers better English - "Russia or Redmond - what the hell, they're probably the same")

Well, I'm not writing the analysis code in Spin - that's for damn sure. For an architecture that's totally unique and aimed at the true hacker, Spin is a letdown. It's almost as if Spin was thrown in as an afterthought -- "most people can't program in PASM - we need a HLL on this sucker if it's going to go anywhere". Spin is annoying, but I can write Spin programs. The most heinous sin that the authors of Spin committed was omitting a "Go To" from their language. May they burn forever in the fires of hell for such insolence. You DON'T write a HLL for a microprocessor without a Go To!!!!!

It would get way beyond the point of this essay, which has the direction of a drunkard's Brownian walk (Disclaimer - I'm celebrating the first successful outing of the APM with a bottle of "Stump Jump" Australian Shiraz, and all those who find this essay in some way offensive should send their complaints to the Osbourne family located somewhere near McLaren Vale in South Australia - it's all their fault), if I were to start going on a rant about object oriented programming (OOP). To me OOP is a gimmick, and nothing's going to change the fact that there's a 100-fold or greater difference between the average programmer and a great programmer. I've known lots of great programmers and they can code well in any language that suits their purpose. I much prefer the engineering approach to programming, which is get the job done and who gives a shit if the code is ugly. The CompSci approach is: "let's make sure that the indentation is perfect and we've fully followed all of the rules set out by whoever happens to be the current guru of programming etiquette at the moment". Often this results in very nice looking code, every comma where it should be, impeccable indentation but, more often than not, it doesn't work. If it does work, it uses 100x the resources of what a "just get the job done" programmer would have written.

We could go on forever about whether my use of "magic numbers" in code is a valid shortcut or an unpardonable sin that deserves at least a week in the stocks in a public square. For a very slow programmer, portability is a key concern because, when faced with a new architecture, it's nice to be able to just tweak a few variables and have the code running on that architecture. A very fast programmer who's totally grokked the problem they're dealing with and simulates/debugs in wetware can effortlessly bang off another version of a program for a new architecture, magic numbers and goto's included, and have it working before the slow programmer has even come up with a code indentation scheme.

For a while I was seduced by the "we have RAM to burn" paradigm that made me temporarily part ways with my previous perniciously parsimonious programming style. I've now re-realized that bloatware is evil even if one is running code on a 2.9 GHz i7 processor with 16 GB of RAM. That doesn't mean that I'm going to worry about wasting the odd byte or two in a data structure (and I no longer have a fixation about data structure sizes absolutely needing to be the smallest power of 2 one can get away with), but I have a very dim view of XML and other "human readable" POS that seem to be the products of psychosis (and that's being charitable). (Sorry, wandered off again from the increasingly hard to follow train of thought).

I've been told in the past that I get weird when I program and, if I find that patients are suddenly getting up and unexpectedly leaving the exam room, then there may be a grain of truth to this assertion, which was made by a female. One week ago, I gave myself the goal of having a functioning APM before the end of the 7 day period. Considering that I totally crashed and burned when I proposed to do the same project on an ARM based Stellaris system in a Circuit Cellar design contest in 2010 (it came with a fully "modern" ARM IDE trial version which took up hundreds of megabytes on my HDD, and I wasn't even able to get it to generate a "Hello world" program despite days of trying), I reassured myself last Friday that the contest deadline is July 31. Then something funny happened; the code just started to flow. One of the best things that I did was to write an assembly language routine for an individual on the Parallax Forums who needed to time pulses. This seemed trivial, and then I was somewhat surprised when another commenter (apparently one of the forum's uber hackers) poked numerous holes in my code. My initial reaction was irritation - after all, I'd run the code numerous times in my wetware Propeller model and it was ***Perfect***. Then I reread his comments and had one of those WTF moments where I wondered how I could have missed something so blatantly obvious and why it took someone else to point it out. The commenter on the Prop Forums site was quite right - I had buggy code; just because it worked 99.9% of the time was insufficient. It had to be perfect and I had failed in that task.

So, I went back to the Prop documentation, reread everything, updated my wetware model and found a solution to the problem. I'll post the problem and the solution some other time as this isn't designed to be a technical exposition - instead it's just a stream of consciousness rendering of how neat it feels to have finally gotten an APM together - it may be V0.01 of the device, and it would take a very dedicated volunteer to carry the sucker around for a day, but it works!!! So, I feel like celebrating a bit and have decided to post a blog entry, as my cat just went to sleep when I started telling her about the device. I can't post stuff to the Prop forums as, after all, it is a contest that I'm in and, even though my chances of winning are slimmer than finding a 16 year old virgin in Kamloops, it's the deadline that makes me stop puttering around and really apply myself. I don't care about the $5K prize at the end of the contest; I've given up considerably more money than that by choosing to play with my electronic toys rather than practicing medicine 1 week/month. What's important to me is the feeling I get when I come up with an idea and see it realized in physical reality. There's nothing like it. Of course, as expected, all of my code and hardware will be open source as of 31/7/2013. Don't get me started on what I think of software patents.

It's very reassuring to see the flashing red LED and the brightly lit blue SD card LED to the left of me, which let me know that data is being streamed to the SD card. What I can't understand is why more people don't program in PASM??? To me it's a no brainer - there's the instruction set, there's a pad of paper - start writing code. I'd say that 95% of the submissions to the Parallax Forums are in Spin, which is an ugly language (and I believe I've mentioned the no GO TO heresy) but a pseudo-OOP one. Purists will look down on it as it doesn't meet some arcane criteria that a language must possess before it becomes OOP.

To be fair to the adherents of Spin, it's fast to use if you haven't done assembly language programming in the past. Right now, taking one of the coddled progeny of the modern age and asking them to do assembly language programming would be the equivalent of telling them to fix their vehicle with a screwdriver after it's broken down in July at noon in Death Valley. They haven't a clue where to start or what might possibly be wrong, and a very substantial proportion of the population haven't the foggiest notion of how an internal combustion engine works. As far as they're concerned, it's just magic - press one button and the door unlocks, put the key in the ignition and drive away. Most of the current population can be trained to read a gas gauge and become attentive to that most idiotic piece of automotive engineering ever invented, the "check engine" light. Beyond that, all they know is that Henry Ford was visited by Athena in the middle of the night and, the next morning, aside from a few drying wet spots on the sheets, he had the inspiration for the automobile. Automobiles are produced in the same factory that produces unicorns and free cell phones for the masses.

When you're a relic of the dinosaur age, like I am (and I still remember women in fur bikinis distracting tyrannosaurs while the men closed in on them from the sides), you know how to start with a bunch of TTL chips, a soldering iron and a large quantity of wire and build a computer from the ground up. It's simple and very tedious, and a few bits into a memory address register you start wondering do I really need that many bytes of RAM. Of course, when I was younger I was told - you kids have it easy, back when I was your age there was no such thing as an IC, and if you wanted a logic gate, that was one tube, two for a flip flop, and lots of resistors and capacitors. Of course you could use relays if you didn't mind your computer being a bit slower.

So, it's not so much what you start with, but how much you understand the basics. I've built tube logic gates and I've used relay logic. I've also played with an abacus and taken apart mechanical calculators. These are all significant aspects of our civilization's computational history. The key to this is trying to figure out how things work. When I used to watch TV ages ago, my favorite shows were How It's Made and Junkyard Wars. I could watch how things are made forever, as the first thing I wonder about when I pick up a manufactured object is exactly that - how was it put together, where did the pieces come from, who makes the various pieces, what ingredients go into it, etc. Junkyard Wars is an indictment of our society, in which we throw away so many absolutely useful items which could be disassembled to make new things. To me, a junkyard has always been a treasure trove of neat items and I loved spending time in junkyards when I was a kid (my mother wasn't too happy with this).

So, programming in Spin is the easy way out. It's a no brainer. Find that your program is too slow? Launch the Spin interpreter in another cog - after all, the Propeller has 8 independent cogs. And, yes, Spin is fast, but PASM is much, much faster. The minimum delay time in a WaitCnt instruction in Spin is 371 clocks whereas in PASM it's 8 clocks. 371 clocks is a glacial 4.64 microseconds at 80 MHz whereas 8 clocks is a more reasonable 100 nanoseconds. Exchange your 5 MHz crystal for a 6.25 MHz crystal on the Prop BOE and you're at 10 nsec/clock. I happen to like doing things fast, and why would I accept a language that's some 46 times slower than PASM?
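To make the comparison concrete, here's what a minimal timed delay looks like in PASM (80 MHz clock assumed; the variable name is mine):

        mov  t, cnt                 ' capture the system counter
        add  t, #80                 ' aim 80 clocks (1 microsecond) ahead
        waitcnt t, #0               ' stall until the counter hits the target

The 8 clock minimum is just the requirement that the target still be far enough in the future when waitcnt executes; Spin can't get anywhere near that because every bytecode has to go through the interpreter.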

Presumably any reader who's stuck it out so far might wonder what the point of this piece is, and all I can say is it's in the text. I've read it over myself and I like it (but then I'm on glass #5 or so of the Stump Jump wine), so I might have differing opinions tomorrow morning. However, it does encapsulate a lot of the thoughts that have crossed my mind during a week of programming, and that's why it's going on the Electronics Blog. Next week it's back to meatware hacking, which I'm not really looking forward to. Meatware has many more degrees of freedom than hardware and, from a debugging standpoint, is a far far greater intellectual challenge than trying to figure out why a light doesn't come on when, as far as you know, your program should have turned it on. Meatware also responds in unexpected ways to fixed stimulus paradigms, and that's what makes it interesting. Now on to some VB6 APM analysis program construction.

Thursday, January 03, 2013

More Sparkfun geiger counter explorations

My last blog entry was about playing with the Sparkfun geiger counter and some initial explorations. Since then I've written some software to acquire data from the geiger counter (GC) that, while still kludgy, is good enough for me to distribute for other people to use. The programs, with an explanation of what they do, will be on the downloads section of my web site RSN.

The data shown below is from 135409 events spanning an interval of 5.9 days with the GC sitting in my Kamloops basement. Most of the events are background radiation, but there are also the K40 events which are more frequent when I sit in front of my laptop, as the GC is on my desk downstairs. The background radiation I've been measuring is nonstationary as it started off at about 14.5 cpm and the most recent average is 15.9 cpm. Not sure why this is happening; the simplest thing to do in such cases would be to buy more GCs and set them up close together. Unfortunately Sparkfun is sold out of GCs at the moment. So, the anomalies in the data presented are of unknown origin at present.

Below is a histogram of all of the event intervals seen thus far. It's been truncated at 25 seconds although there are events past this mark, with the longest interval without an event being around 45 seconds. The histogram has been normalized by dividing by 135409 and an exponential of the form y = A + B*e^Cx fitted to the data.

One expects an exponential function in this case, and a beautiful exponential fit is obtained using only 8 iterations in DPlot. What interested me more was the oscillation about the curve fit line rather than the actual fit.

If the data were truly random, then one would expect any such periodicities to average out and one would have a closer and closer fit of the curve to the raw data. What we have instead are these curious damped oscillations which are spaced roughly 400 msec apart. They are artifacts of the recording system and, in any system involving windoze, one can almost assume that windoze is the problem. To determine whether this is the case, I am currently writing code to take the raw GC intervals and digitize them using an mbed MCU. Not as straightforward as doing it in VB, but far more artifact free than a windoze system. Let's now take a look at the artifacts we have to deal with:


As the event interval increases, the value of the residuals drops off markedly. As was noted in my previous blog post, there appeared to be far too many short intervals -- I attributed that to a problem with the GC, but it is more likely that my sampling platform is the problem. To get a better idea of what is going on with these residuals, let's compute their FFT:

 

The large peak is at a frequency of 2.52 Hz and the secondary peak is at 5.04 Hz. This gives a period of 397 msec which is very close to what we've eyeballed at 400 msec. The big question now is what's responsible for this artifact?

Windoze is not a real time OS even though it's possible to sample data at high speed from windoze. The only place one can get undistorted data sampling from windoze is through a sound card. Anything else, unless external buffering is done, is highly distorted. I naively assumed that the distortion present in sampling the GC at 1 msec precision would be on the order of the error I get in sampling the keyboard under windoze, which is +/- 15 msec, or 1.5% error. However, there are obviously some higher order systematic errors that one has to deal with.

One of the things I've done in the past to reduce sampling errors is to run a program in real time mode, but this seemed to make absolutely no difference with sampling GC data. The code for the GC's MCU is open source and will be analyzed to see if there is any reason why the 2.52 Hz anomaly should be there, but I'm 99% sure that the source of the problem is windoze.

Windoze uses a byzantine software path to get the data from the GC to my VB program. Interfacing to the GC takes place via the USB_serial driver, which treats the GC as an RS232 serial data source. The GC transmits only one byte for every event, and one would think that getting the timing of this byte right would be trivial on a 1.6 GHz processor. The byte is transmitted almost immediately by the GC to the host windoze machine. This is in the form of a USB packet which will interact with the windoze USB_serial driver. The USB_serial driver then probably stashes the byte it received into a buffer. My VB program uses the MSComm control to acquire data from the "serial port" to which it's connected. The MSComm control is configured so that it will raise an event as soon as a single byte comes in on the "serial port". I'll have to look at how the USB_serial driver communicates with the rest of windoze, but I suspect that what happens is that a message is sent to my VB program from windoze. This message is then routed to the MSComm control, which raises the event noting that data is present; my program then grabs the data and reads the value of the msec timer using the TimeGetTime() call.

Being a preemptive multitasking OS, windoze will suspend my process if a higher priority process has to run. I do admit that I run a lot of simultaneous programs on the machine that's also running the GC interface but, at any time, most of these programs are in idle mode. Nevertheless, it appears that very frequently the data from the GC is delayed by 400 msec after being received by the USB_serial driver. This may also explain the excess number of short intervals as, if there are two events ready to be transmitted after the delay, they will appear at a far shorter interval than they actually occurred. My next step is to do some research into WTF is going on in windoze that could result in this 2.52 Hz frequency in the OS, as this seems far slower than one would expect for an OS with a 10 msec time quantum allocated per thread. Given that most idle process threads would relinquish their quantum almost immediately, it seems bizarre that this long delay would exist. I'm interested in other people's opinions on this subject.

Note: the next run of the Geiger counter sampling program yielded peaks that were spaced 250 msec apart. This is clearly a windoze problem, and I may have to run the program under W3.1 to get better accuracy.

Posted by Boris Gimbarzevsky at 1:19 AM
Edited on: Thursday, January 03, 2013 1:35 AM
Categories: Embedded systems

Saturday, December 01, 2012

Sparkfun geiger counter - initial explorations

One of the things I've got to stop doing is looking at electronics sites late at night. While perusing the sparkfun.com website, I ran across a geiger counter at a reasonable price ($150). This was something I knew I needed, and so I also ordered a bunch of other stuff at the same time; dealing with customs duties is a hassle, so one might as well get a bunch of other things one just can't live without.

The geiger counter itself is trivially easy to set up; just plug it into the USB port of one's computer and, assuming that one has the serial USB drivers installed (where to get them is given on the sparkfun site), one can immediately see background radiation counts appearing on a terminal emulator which receives the output of the sparkfun device. The geiger counter appears as a serial device to the system and transmits data at 9600 baud (a bit sedate if one is counting very radioactive sources).

Geiger counter output consists of the digits '0' and '1', which indicate whether the previous interval between counts is longer or shorter than the current one. This output wasn't really that useful to me, so I wrote a quick VB6 program which timed the occurrence of events to 1 msec precision and stored them in a CSV file.

The first thing to do was to measure background radiation and, in a basement in Kamloops BC, I get an average of 15 counts/minute (cpm) as background radiation. Then I took the first known radioactive source that I could find and put it in front of the geiger counter to see if I could pick up the radiation. (Still looking for my bottle of U2O3). The radioactive source was a bottle of potassium citrate capsules. Gratifyingly, the number of counts per minute increased and I got 20 cpm with the bottle of K-citrate in front of the GM tube. Taking it away made the cpm drop back to 15; a very nice demonstration that I could work with weak radioactive sources given a long enough counting period.

The radioactive isotope in K is K40, which is a beta emitter, and K40 makes up 0.0117% of K. K40 has a half-life of 1.27 billion years, meaning that a gram of KCl should have about 15 disintegrations of K40 every second. The bottle of K-citrate I was using had about 0.76 moles of elemental K contained within it, which means that it would be producing about 668 counts/second or 40,058 cpm. I only picked up 5 of those cpm, or 0.01% of the K40 disintegrations that one expects to be taking place in the K40 known to be in the bottle. A rather low count efficiency, but I was surprised that I could pick up K40 radioactivity at all with a cheap geiger counter, given that all of my biochemistry labs had used far more radioactive tracers than K40.
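As a back-of-the-envelope check on the "about 15 disintegrations" figure, the standard activity formula A = lambda*N = (ln 2/t_half)*N gives, for 1 gram of KCl:

moles K = 1/74.55 = 0.0134
N(K40) = 0.0134 * 6.022e23 * 1.17e-4 = 9.4e17 atoms
lambda = 0.693/(1.27e9 yr * 3.156e7 sec/yr) = 1.7e-17 sec^-1
A = 9.4e17 * 1.7e-17 = 16 disintegrations/second

which is in line with the figure quoted above.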

The next step was to figure out how best to display the results. The raw count data is quite random looking, as expected:

What turned out to be more useful was to integrate the interval data to produce a graph of elapsed time vs counts. This is the red line in the graph above. The slope of the line is 2979 msec/count which is, necessarily, the mean inter-event interval. This is a damn good approximation to a straight line with a correlation coefficient of 0.9999; it doesn't get much better than that.

So, the next step was to plot the integrals of background counts and K40 counts on the same graph to see how long a sampling period would be required to show up a difference between the two cases:

The red line is background radiation with a slope of 4025.9 msec/count and the blue line is the K40 source with an average inter-count interval of 2979 msec. It's quite clear that one can separate these lines easily by eye even with as few as 500 counts. The shorter the inter-count interval, the lower the slope of the line, as each count adds less elapsed time. Not bad for a $150 device.

The next step was to look at the distribution of the intervals in a histogram. The histogram below is for the K40 counts.

The expected shape of the distribution is an exponential (the counts in a fixed time window are Poisson distributed, but the intervals between events should be exponentially distributed), and this one looks a bit funny; there seem to be too many counts in the 0-99 msec bin. One way of checking on this is to look at the same histogram, but now plotting the log of the count data:

For exponentially distributed intervals, p(t) = lambda*e^(-lambda*t), so a log transform of the bin counts should give a straight line with slope -lambda. Eyeballing the line is a good way of showing that there are too many short intervals; the excess shows up as points sitting above the fitted line at the left edge of the plot.

Another way of checking whether one has too many short intervals is to look at the data, and the one thing that immediately struck me was that the number 47 appeared far too often. What I suspect is happening is that the very simple low pass filter that the Sparkfun geiger counter uses is producing several events for a single GM tube electron shower. This is something that can be confirmed in the future by looking at the shape of the GM tube event on an oscilloscope (actually I'll have to use a Phidgets A/D converter to acquire this data, and get motivated enough to write the code). The excess of short intervals that one sees is the analog of an improperly debounced push button. This should be easy to fix by sampling the raw GM tube analog output and counting events in software using a far more powerful CPU than the microprocessor on the Sparkfun board.

The next thing to check, if one is using the Sparkfun geiger counter as a source of random numbers, is that the waveform of Interval(count) has a spectrum equivalent to white noise. For the K40 data:

And, as expected for white noise, when one does a log-log transformation of the FFT, one has a slope of 0. So far so good. Then, out of curiosity, I decided to look at the residuals from the fitting of a line to the integrated intervals as a function of count. First the residuals:

When I first saw this I thought it looked like 1/f noise (I have been spending far too much time in front of DPlot looking for 1/f noise in various things). Sure enough, when one does an FFT and a log-log transformation one gets:

The points fall on a straight line and represent 1/f^0.9 noise (actually count^-0.9). This is odd, as what's supposed to happen when one integrates white noise is that one gets Brownian motion with zero memory, not 1/f noise. Time to get back to some mathematical basics, as it's been over 20 years since I've worked in this area and I am going on faint memories that still exist in parts of my wetware that haven't been hijacked to store medical knowledge.

So, while the Sparkfun geiger counter is more sensitive than I thought in detecting radioactivity, it is likely too biased to use as a source of random numbers. The nice thing about Sparkfun products is that they are completely open source; the schematics are available on the Sparkfun site, as well as the source code for the microprocessor which digitizes the GM tube output and outputs it on a USB serial port emulation. I don't know if there's enough RAM/flash on the embedded processor to perform fancier "debouncing" algorithms on the filtered GM tube output, but such operations are easy to do in software if one instead samples the OUT signal from the Sparkfun geiger counter.

The VB program that I used to capture inter-event intervals will be put online once I've cleaned it up. It still crashes when I let it run for too long a period of time, which is unacceptable, and one other option I want to include involves reading the CPU's TSC whenever a serial event occurs. The low bits of the TSC can be used as random bits, although this has to be subjected to testing, as the delays between windoze receiving a packet on a USB port and the data finally being available to the MSComm control in a VB program may be deterministic enough that even the low bits of the TSC can't be considered random. What I'm using to perform 1 msec timing now is the windoze system call TimeGetTime, which accesses the 1 msec clock.

Posted by Boris Gimbarzevsky at 11:52 PM
Categories: Embedded systems

Tuesday, April 03, 2012

Use of standard deviation as marker for temperature event changes with Phidgets IR temperature sensor

One project that I'm working on is a system to turn on the tap for my cat, Billy, who has unfortunately trained me to turn the tap on for her whenever she jumps onto the counter beside the kitchen sink. For some reason she far prefers water when it is running out of a tap than in her water bowl. This got me thinking of how best to implement a system which would do this automatically.

What is needed is a cat sensor and, when the cat is detected, an electrically controlled valve will be turned on. The water remains on while the cat is by the tap drinking. Because this process causes considerable local spray, the sensor has to detect her from a distance of at least 3'. My initial thought was to utilize a webcam which would run on an SBC2, with the data monitored by one of my desktop machines. A Billy detection algorithm would be quite simple, as all that would be needed is a collection of black pixels larger than a preset threshold to turn on the tap. (It's easy to detect black cats). While this approach would have worked, it seemed a bit excessive in terms of CPU requirements for such a simple task.

Then a novel solution suggested itself. I was playing with my Phidgets 1045 IR temperature sensor a few days ago and found that it responded to warm or cold objects at a greater distance than I had expected. Right now, my current motion sensing application for this sensor picks up my waving my arm at a distance of 12'! Given how warm cats are, the IR sensor will be the primary cat detector.

Given that the temperature in the house changes depending on the time of day and whether the furnace is on or not, some type of high pass filter was needed to filter out the slow temperature changes. The other interesting property of the 1045 unit is that it is very uniform over the short term but subject to significant fluctuations over the long term. A graph of the 1045 IR temperature sensor together with the high precision non-IR temperature sensor which is on the same board (and is used to correct the IR temperature) is shown below:

 

There is a lot of noise in the IR sensor at this timescale. Note that the on-board temperature sensor has very good temperature resolution and easily picks up temperature changes from the furnace vent which is located at least 15' from the 1045 board.

A 10 minute section of a portion of the record is shown below:

 

There are 600 1 second averaged samples in the graph above and, while the IR signal may appear very noisy, in this case appearances are deceptive.

To get an idea of the noise levels and variability in the signal, I averaged 32 samples at a time and computed the SD of each block. The 1045 outputs a pair of readings every 32 msec, and 1.024 seconds/sample is close enough to 1 second for my purposes. What astounded me was how low the SDs were for the IR temperatures, as can be seen in the graph below, which is the same section of data but now plotting IR temp and IR_temp_SD:

The SD was the filter I was looking for! All that was necessary to detect a change in temperature was to look for SD values above a threshold. Essentially no false triggering occurs using an SD threshold of 0.03 or greater. A histogram of about a day's worth of IR temp SD values is shown below.

There is a very rapid falloff of the right-hand tail of the SD distribution, and one can use a threshold of 0.025 for maximal sensitivity as long as one is prepared to accept some spurious events. For warm or cold objects a few feet from the IR sensor, SD values of 5 or more are obtained for my hand or a cold beer held in the field of view of the sensor. That's the other beauty of using the SD as the detection function: it is always a positive number, and its magnitude is a function of how different the IR readings are within the set of 32 sequential samples used to compute the mean and SD. Also, the sample period is short enough that there are no false triggers from slow changes in local environmental temperatures. It's clearly a good enough filter to reject the periodic furnace waveform.

Using SD as the basis of my filtering function was a serendipitous discovery; I wrote a quick test program to compute the means and SDs of 32 sample chunks of IR data and looked at the results in DPlot. The utility of the SD in picking up movement events was immediately obvious:

Note the scale for the SD - the tiny blue bumps hugging the x-axis indicate actual movements and use of SD with a fixed threshold is far simpler than keeping track of the IR temperature baseline and using a floating threshold. The other option one has is to set a high IR temperature threshold - say 26.5 C. This would pick up all of the large temperature changes which are the result of my being a couple of feet from the sensor, but would miss all of the small changes.

The only reason I computed SD was that I was testing an algorithm that computes mean and SD in one step:

mean = sum(x(i))/n and SD = sqrt((sum(x(i)^2) - (sum(x(i))^2)/n)/(n-1))

This works because of the identity sum((x(i) - mean)^2) = sum(x(i)^2) - (sum(x(i))^2)/n, so a single loop accumulating sum(x(i)) and sum(x(i)^2) yields both statistics.

It's just more elegant to have a single loop to compute both the mean and SD. At the time that I started this project I figured I'd have to adapt one of my neuronal action potential finding algorithms for noisy data to determine when movements occurred. Time to look at the use of SD in that application also.

I've written a test program which demonstrates the use of the 1045 sensor and SBC2. Currently, the 1045 sensor is connected to the SBC2 and is accessed through the Phidgets web service. The nice thing about this program is that it works on Phidget serial numbers. Thus, instead of being connected to the SBC2, the 1045 could be connected to another computer on the network, and any 8/8/8 board could be used to flash the LED that indicates a temperature discontinuity has occurred. Once I clean up the code a bit and write some documentation, I'll post it, as it's far less frustrating to use that way.

Friday, March 30, 2012

Phidgets SBC2 - initial impressions

Over the last year or so I've started using Phidgets sensors more and more because they are so convenient. True, they are higher priced than building one's own hardware, but right now convenience is more important than cost to me. One of the applications I have planned is monitoring my shop, which is some ways away from the house, although I did run a data cable to it when I upgraded the shop power. The parameters I'm interested in are the shop temperature, light intensity, status of the door (open or closed), and external temperature, as a reading there is sufficiently far from the house to represent a true outdoor temperature. What quickly became apparent was that I needed more wires in the cable than were available, and a solution presented itself when I got a Phidgets SBC2 to play with.

One of the nice things about Phidgets sensors is that they can be made visible to all computers on the network through the Phidgets web service. Thus, all I'd need to do is connect the SBC2 to my house network (I have a power line modem connection to my shop as well as flaky WiFi given the distance); I'd then have no limitation on sensor count and could also monitor the status of the motion-sensitive lights at the back of the yard with a Phidgets light sensor. The SBC2 allows the connection of 8 analog sensors and has 6 USB ports available as well.

Setting up the SBC2 as a remote 8/8/8 board was extremely simple, as the SBC2 is configured via a web interface and it's just a matter of setting the IP address (if one wants a static IP instead of a DHCP-allocated one) along with a few other parameters. The whole process, the first time I did it, took 15 minutes. The SBC2 showed up immediately in the Phidget control panel, and changing one line of code in one of my USB-based 8/8/8 sample programs allowed me to access the SBC2 remotely. I should note that the SBC2 firmware also comes preflashed with webcam code, which was a nice bonus.

So, if one has low speed sampling applications (my shop environmental sampling is about 1 sample/second/channel), then the Phidgets web service can be used unaltered. Of course, I couldn't stop there, as I had to play around with the SBC2 to find out its capabilities, and I was impressed. The SBC2 is a 400 MHz ARM processor with 60 MB of SDRAM and 512 MB of flash on board (where the other 4 MB of SDRAM go I have no idea). It runs Emdebian and has a few very annoying features, like only allowing shell access via SSH rather than telnet (PuTTY works fine) and only secure FTP; it took me over an hour to find WinSCP, an open source secure FTP client. Considering that this is an embedded system, it seems that encrypting data to and from the board is a tad paranoid.

Once I began playing around with Linux, I realized that this is a far more powerful machine than I had first thought. I should note that my last big embedded system project was in 2005, using Freescale's Zigbee boards, which had an 8 bit 6800 relative on board along with a staggering 32 KB of RAM and a similar amount of flash. That was more than enough space for me to program my ambulatory accelerometry and breath monitoring project (the latter part isn't very ambulatory).

The SBC2 is the first embedded system I've used that is so far divorced from the hardware. Whenever I play with a microcomputer system, the first thing I do is start reading about the instruction set, how to set up timers, the interrupt structure, etc. It's nice in a way not to have to worry about these things, but I have this overpowering urge to dig deeper into any system. Fortunately, as it is a Linux system, all of the source code is available, and I've already found some novel ways to construct web interfaces to the machine (still in the planning stages).

My first project was to look at why the Phidgets web link was slow, and I was shocked by what I found. The SBC2 can serve up 640x480 video at 9.5 fps, a process which takes only 10-20% of CPU time, so the SBC2 CPU is damn fast and isn't the bottleneck. The first step in looking at the Phidgets web link was to run a packet sniffer while the web link was actively sending data to my remote machine. The results were, to put it mildly, somewhat surprising:

report 200-periodic report follows: 
report 200-lid4 is pending, key /PSK/PhidgetInterfaceKit//250532/Sensor/4 latest value "380" (changed)
report 200-lid4 is pending, key /PSK/PhidgetInterfaceKit//250532/RawSensor/4 latest value "1555" (changed)
report 200-that's all for now

report 200-periodic report follows:
report 200-lid4 is pending, key /PSK/PhidgetInterfaceKit//250532/RawSensor/6 latest value "344" (changed)
report 200-lid4 is pending, key /PSK/PhidgetInterfaceKit//250532/Sensor/6 latest value "84" (changed)
report 200-lid4 is pending, key /PSK/PhidgetInterfaceKit//250532/Sensor/5 latest value "220" (changed)
report 200-lid4 is pending, key /PSK/PhidgetInterfaceKit//250532/RawSensor/5 latest value "899" (changed)
report 200-that's all for now

report 200-periodic report follows:
report 200-lid4 is pending, key /PSK/PhidgetInterfaceKit//250532/Sensor/7 latest value "371" (changed)
report 200-lid4 is pending, key /PSK/PhidgetInterfaceKit//250532/RawSensor/7 latest value "1518" (changed)
report 200-that's all for now

So, 3 packets are required for 4 samples! If one coded the network interface efficiently, the total amount of data would be <header>, <data>, with the <data> portion taking up no more than 16 bytes, or more likely 12, as the channel number can be a single byte. It's clear that whoever coded the Phidgets web service portion didn't have much concern for bandwidth. OTOH, the system works very well as soon as one plugs the pieces together, and one can argue that that should be the primary metric, rather than the use of 3 packets to send 4 A/D values. (The reason there are 2 values sent for each A/D channel is that there is a "raw mode" which seems to represent the average of a number of values and hence is a 12 bit quantity, unlike the actual A/D values, which are sampled at 10 bits.)
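
For comparison, a compact binary encoding could be as simple as the following; this layout is my own invention for illustration, not anything Phidgets actually uses:

#include <stdint.h>

/* Hypothetical compact wire format: a small header followed by one
   4-byte record per reading. */
#pragma pack(push, 1)
typedef struct {
    uint32_t timestamp_ms;  /* ms since start of session */
    uint8_t  count;         /* number of records that follow */
} sample_header_t;

typedef struct {
    uint8_t  channel;       /* A/D channel, 0-7 */
    uint8_t  flags;         /* e.g. bit 0 set = raw 12 bit value */
    uint16_t value;         /* 10 or 12 bit reading, right justified */
} sample_record_t;
#pragma pack(pop)

A 4-channel update then fits in one packet: a 5 byte header plus 16 bytes of records, instead of the three text packets shown above.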

If this were a M$ product I'd be outraged, but all Phidgets code is open source, so if one doesn't like the way things are done, the code is there to modify. Right now I don't really feel like rewriting Phidgets21, which runs on the remote machine, so I'll leave the web interface the way it is, but there is the potential to get the full A/D sampling rate from the 8/8/8 board portion of the SBC2 remotely by efficiently coding the network transmission. This is only 1 kHz on 4 channels maximum, but it is more than adequate for most of the physiologic signals I'm interested in.

Ambulatory physiologic monitoring was the primary reason I bought another 5 SBC2's to play with. I'm assuming I'll probably fry a system or two during the development phase, and it's simpler (for me anyway) to have a WiFi connection to a remote set of sensors than to run long wires. Also, with wireless links one can use the SBC2's digital outputs to switch relays controlling 120 VAC devices without risking my laptop or desktop, because WiFi isolation is about as good as isolation gets. I can't replace any of my laptops or desktops for $225.

The ambulatory physiologic monitor (APM) will be described in more detail in a future blog entry and will involve sampling EKG at 1 kHz as well as earlobe pulse waveform at 1 kHz. Whether or not the SBC2 is fast enough to also work as a pulse oximeter remains to be seen. Other inputs to the device would be accelerometry and gyro data from the Spatial 3/3/3, GPS output (which will also serve as a precise timebase whenever the person is somewhere a GPS signal can be picked up), and perhaps another 3 axis accelerometer attached to a leg. I'll also measure ambient light intensity and sample sound at 20 Hz to get an idea of the ambient sound environment as well as when the person is talking. Respiration would be measured by sampling a strain sensor around the chest (still don't have that bit working). Ambient temperature would also be measured, but none of the Phidgets sensors are accurate enough to resolve the small body temperature changes that occur over the course of a day. Also, finding an area of the body to use for this temperature sensor is a bit challenging, as most subjects would likely balk at the idea of having a rectal temperature probe inserted for 24 hours, even though that is the most stable source of core body temperature. All data would be written to a flash memory stick.

Unlike the Stellaris-based APM that I over-ambitiously tried to build in 2010, the SBC2-based project is almost akin to writing code on a "mainframe" rather than an embedded system. The SBC2 comes with gcc and the Phidgets library, and it is quite trivial to create C programs which run on the SBC2. I'm still having a problem wrapping my head around the concept that the vast majority of the APM project will consist of software rather than hardware, and having such a vast amount of system memory is a novel experience on an embedded system. We'll have to see whether the remainder of the project goes as easily as my preliminary investigations of the SBC2.
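
As an aside, building right on the board is a one-liner, something like the following, where shopmon.c is a hypothetical shop-monitoring source file and the Phidgets library is assumed to be installed as libphidget21:

gcc -o shopmon shopmon.c -lphidget21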