Monday, April 01, 2013
Statist moonbat levies on electronics in BC
A while back I bought a 1 GB network drive and was pleasantly surprised by how useful it was for making data available between my computers. Thus, when I was at London Drugs today and saw another 1 GB network drive on sale, I snapped it up. At the checkout, I noticed a $0.90 "levy" on the bill. On closer questioning of the checkout clerk, it appears that the statist moonbats in Victoria have put a "levy" on all electronic equipment. This is a blatant tax grab and I asked if it was refundable. The answer was "no".
The 1 GB drive went for $79.95, so the "levy" represents 1.125% of the price of the drive. The 1/4/2013 date for this kleptocratic action was likely preplanned to make people think it was a joke. It has made me exceedingly pissed off and, when I settle down enough to write a letter that would be considered professional by the BC College of Physicians and Surgeons, I'll send it to the moonbat in chief of the province. The moonbat in chief of BC is Christy Clark and, when the most positive thing anyone can say about her concerns her appearance, you know this province is in deep shit.
The obvious solution is to order all of one's electronics from outside the country. Considering that Kamloops is not the type of city where one can walk into a well-stocked electronics store at 03:00 and pick up a bag of 4.7 K resistors, all of my electronics component purchases are made online. Now it looks like my disk drives and other related hardware will also be ordered online.
I'm not sure what the intent of this "levy" is, but it calls for an obvious solution. After clearing one's defective hard drive of any personal information (I use the Ciarcia method: several rounds from a .45 pistol), dump it on the lawn of one's local MLA. Do the same thing with defective keyboards, computers and any piece of electronic equipment that is subject to this kleptocratic "levy".
Personally, I cannibalize any piece of defective electronic equipment for useful parts, and disk platters make nice mirrors. Only the defective disk drives with patient information are erased via the Ciarcia method. It's high time that these moonbats were chucked out of office, but there seems to be no politician in BC with any brains. That's why I recommend writing NOTA (none of the above) on a sticky label, pasting it to your ballot and voting for NOTA in the next provincial election. Obviously, if there is a local Libertarian or Marijuana party candidate, vote for them, but for those of us who have a very restricted choice of candidates, vote NOTA.
Friday, March 08, 2013
Windoze sucks bigtime
I promise this is my last entry on my Geiger counter (GC) explorations for the indefinite future. The most important point in this post is that windoze sucks bigtime. Now, I don't know if the problems I've unearthed belong to the windoze serial driver or to the MSComm VB6 control, but they raise some very serious issues with using windoze for any form of data acquisition.
M$ doesn't advertise windoze as a real-time OS, and the 10 msec quantum used by WinXP certainly relegates it to the non-realtime category. What I naively assumed was that in a WinXP system with a 10 msec quantum and most threads idle, I'd be capable of getting 20 msec temporal precision. The windoze scheduling algorithms are fairly efficient: they run through the list of runnable threads in order of priority and, if a thread has nothing to do, it immediately relinquishes its quantum. Thus one would expect a process which has been assigned real-time priority to be able to time events with a precision of 1-2 quanta.
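For what it's worth, one can get a crude feel for scheduler granularity from user space without any special tooling. A Python sketch (not the VB6/MSComm path discussed here; the numbers you get depend entirely on the OS, timer resolution and load):

```python
import time

def sleep_jitter(requested_ms=1.0, samples=50):
    """Request a short sleep repeatedly and record how late each one
    actually returns; on a non-realtime OS the oversleep is bounded
    below by the scheduler quantum / timer tick."""
    errors = []
    for _ in range(samples):
        t0 = time.perf_counter()
        time.sleep(requested_ms / 1000.0)
        elapsed_ms = (time.perf_counter() - t0) * 1000.0
        errors.append(elapsed_ms - requested_ms)
    return errors

errors = sleep_jitter()
print(f"max oversleep: {max(errors):.3f} ms")
```

On a stock WinXP box the worst-case oversleep tends to sit near the quantum; on a realtime kernel it collapses toward zero.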
There are near-realtime aspects of windoze, such as when one samples the sound card or video card; if there weren't, one would get distorted sound or video. It may be that this is a driver issue and that M$ assumed that all one needs to deal with serial data is a large enough buffer, and then who cares about the temporal spacing of serial events. I don't know if this is the case, since windoze is a closed-source system and the serial driver is a black box. In such a setting I don't know whether the driver black box is defective or the OS black box is the culprit. It may be that the driver is fine and the problem lies in the MSComm control black box. That's what happens when one deals with a closed OS -- the source of the problems is not at all obvious, and one can't simply peruse the source code to find and fix bugs like this. That's why I'm switching to Linux.
In the Propeller pulse timing routine that I discussed in a previous blog posting, I had deterministic timing of all GC events as well as a measurement of the duration of the GC output pulse. The resolution of this timing was 2 microseconds; if windoze allowed one the freedom to write interface code that I have on the Propeller chip, that temporal resolution would be 100 nsec or less, because my laptop has a 1.6 GHz Pentium processor. As it turns out, my estimate of 20 msec temporal precision for windoze was wildly optimistic.
The Propeller program that I wrote grabbed the 2 microsecond clock time and width of each GC pulse and sent them via a serial line to the Teraterm program, which logged them to a disk file. As expected, when I plotted a histogram of the GC inter-event intervals at a 50 msec bin width, the scatter around the best-fit exponential decreased steadily as 1/sqrt(# of intervals). When one samples the same GC under windoze, however, something totally different happens. The histogram contains regularly spaced peaks which don't decrease as one accumulates more and more intervals. These peaks are quite periodic and have periods ranging from 200 to 400 msec.
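For comparison, here is what the ideal (un-bunched) histogram looks like: a Python sketch (not the VB6 analysis code) simulating Poisson arrivals at an assumed 10 counts/sec and binning the intervals at the same 50 msec width:

```python
import random

def simulate_intervals(rate_hz=10.0, n=10000, seed=1):
    """Inter-event intervals of a Poisson process (an ideal GC near a
    steady source) are exponentially distributed. Rate is assumed."""
    rng = random.Random(seed)
    return [rng.expovariate(rate_hz) for _ in range(n)]

def histogram(intervals, bin_ms=50.0, nbins=8):
    """Bin the intervals at a 50 msec bin width, as in the plot
    described above."""
    counts = [0] * nbins
    for dt in intervals:
        b = int(dt * 1000.0 / bin_ms)
        if b < nbins:
            counts[b] += 1
    return counts

counts = histogram(simulate_intervals())
# each successive 50 msec bin holds roughly exp(-rate * bin_width)
# = exp(-0.5) of the previous one: a smooth decaying exponential
```

The counts fall off smoothly bin to bin, with only sqrt(N)-sized scatter; there is nothing periodic in them.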
The output of the Sparkfun GC is a single character for each event: either a 0 or a 1, depending on whether the current inter-event interval was less than or greater than the previous one. I don't know the exact figure, but there is a very short latency between a sufficiently energetic photon hitting the GM tube and the Sparkfun GC outputting a serial byte. This latency can be easily computed by perusing the GC MCU source code, which Sparkfun freely supplies. My VB6 program to time GC events sets the MSComm buffer size to 1, which means that an event is generated as soon as a single character is received on the serial input. This event is used to grab the time on the windoze msec timer and the event data are then written to a file. The VB6 portion of the code executes in < 1 microsecond.
I don't know how windoze prioritizes drivers, but it seems that the USB serial driver is of very little importance to M$. The only way these 200-400 msec spaced peaks can be produced is if the serial driver is ignored for such long periods of time. Presumably the wizards of Redmond assumed that a 4096 byte buffer meant that ignoring the serial driver for 400 msec while data accumulated in the buffer was a perfectly acceptable thing to do. They likely never envisioned the possibility that one would want to precisely measure the exact time at which a particular byte was received. So, in this particular application, windoze is completely defective. The peaks in the inter-event histogram suggest bunching of bytes in the buffer, which are then released as a group when windoze thinks it's time to deal with obsolete serial connections. I haven't run into anything about the serial driver in Mark Russinovich's very detailed reverse engineering of windoze, but W7 has the same problem as WinXP. Only Win 3.1 would be able to precisely deliver bytes to the serial driver as they arrive.
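The bunching hypothesis is easy to model. A Python sketch, where the 300 msec service period is an assumption for illustration rather than measured driver behavior: if the driver only drains its buffer on a periodic tick, every byte inherits the timestamp of the next tick, and the apparent inter-event intervals collapse onto multiples of the tick period -- exactly the kind of periodic peaks described above:

```python
import math
import random

def bunch(event_times, service_ms=300.0):
    """Model a serial driver drained only every service_ms: each byte
    is timestamped at the next service tick (period is assumed)."""
    period = service_ms / 1000.0
    return [math.ceil(t / period) * period for t in event_times]

# true Poisson arrival times at an assumed 10 counts/sec
rng = random.Random(2)
t, times = 0.0, []
for _ in range(5000):
    t += rng.expovariate(10.0)
    times.append(t)

seen = bunch(times)
apparent = [b - a for a, b in zip(seen, seen[1:])]
# apparent intervals are now 0, 300, 600, ... msec: periodic peaks
# in the histogram instead of a smooth exponential
```

The true intervals are exponential; the timestamped ones are quantized, and no amount of extra data smooths the peaks away.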
So, the only conclusion one can derive is that, if one wants to do precise timing, stay as far away from windoze as possible (there's good reason to do so for ideologic reasons as well). Most of my data acquisition applications involve precise timing of events to sub-millisecond precision. My Commodore 64 was more than capable of doing so, as was the PDP-11. However, a 3 GHz superscalar processor, when infected with windoze, behaves more like an ancient mechanical calculator than a machine capable of timing events with microsecond precision. Thus, my transition from windoze to Linux is in progress. Linux is not a real-time OS either, but it's possible to launch Linux as a process under a real-time kernel which can exploit the full speed of a Pentium processor. Also, Linux is open source, so if one doesn't like the way that something works, it can be changed. If one has to work with windoze, then the only option is to do data acquisition with hard real-time MCUs such as the Propeller chip, which can then send data into the senile windoze OS to be handled in its doddering fashion.
Sorry, no nice graphs in this post, as all the data from GC2 is being acquired on my 64 bit ASUS laptop, which has a 1820x1024 screen resolution, and I'm too lazy to copy the graphs over to my HP tablet PC and render them in a smaller size for posting to this blog entry.
While on the subject of Geiger Counters, the new version of the Sparkfun GC doesn't suffer from the excess of short intervals which plagued the first version of their GC. This redesign was in response to a negative user comment on the flakiness of the initial GC high voltage supply. Clearly, if the short interval excess was a power supply problem, it has been fixed on the current Sparkfun GC. And thus endeth the GC topics (unless of course I detect a supernova in my basement).
Saturday, February 16, 2013
Propeller Programming stream of consciousness
For those of you who think this is a serious discussion of Propeller PASM, it's not. It started off as a comment on PASM when suddenly it seemed to take on a life of its own. So, it's probably best described as a sort of rant combined with stream of consciousness after a week spent programming that wonderful Propeller chip which is my favorite computer architecture at the moment. Chip Gracey, you rock.
While trying to write Propeller subroutines on paper upstairs, a few things suddenly became clear: the reason for the jmpret instruction and the reason for the movs and movd instructions. I made some stupid mistakes in my Prop code when I first came home, as I made the unforgivable error of assuming that #<variable> would give me indirect addressing (No, No, No, No ---- repeat*1000(No)).
There is no indirect addressing mode in the Prop instruction set. Remember that. To move a variable into a buffer, one does it as follows:
mov BuffAddr, #Buff
movd ins1, BuffAddr
ins1 mov 0-0, Data
The movd patches the destination field of the instruction at ins1, so the 0-0 placeholder is replaced with the buffer address at runtime. (One caveat: because of the Prop's pipelined prefetch, a patched instruction must not execute immediately after the movd that patches it; in a loop, keep at least one instruction in between.)
So, instead of the nice PDP-11 method of:
MOV Data, @BuffAddr
which takes 2 bytes, one has the 12 byte instruction sequence above. The primary difference is that the PDP-11 instruction would take about 4 microseconds to execute, as it involved a memory-based indirect address, whereas the Prop sequence takes 12 clocks, or 0.15 microseconds at 80 MHz.
The other thing is subroutine calls. The PDP8 was more advanced than the Prop when it came to subroutine calls. However, with the jmpret instruction, one can return from anywhere as long as one allocates a return slot for every subroutine, i.e.:
jmpret Sub1_ret, #Sub1
rs1 <code to execute after the routine>
Sub1 <do something>
Sub1_ret jmp #0-0
The jmpret instruction writes the address of rs1 into the source field of the jmp at Sub1_ret and then jumps to Sub1. One just has to ensure that the subroutine ends by executing the patched jmp at Sub1_ret, which gets back to the caller. Kind of neat and also finicky, but I have to play the hand I've been dealt.
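For anyone without a Prop on hand, the jmpret bookkeeping can be mimicked in a few lines of Python. This is a toy model only: a named return cell stands in for the patched source field, and it makes the key limitation obvious -- one cell per subroutine means no reentrancy, because there is no stack:

```python
def make_machine():
    """Toy model of jmpret: the call writes the return address into a
    dedicated cell, and the subroutine's final jump reads it back."""
    state = {"Sub1Ret": None, "trace": []}

    def sub1():
        state["trace"].append("in Sub1")
        return state["Sub1Ret"]          # jmp Sub1_ret (patched)

    def jmpret(ret_cell, target, return_to):
        state[ret_cell] = return_to      # patch the return address
        resume = target()
        state["trace"].append(f"back at {resume}")

    jmpret("Sub1Ret", sub1, "rs1")
    return state["trace"]
```

Calling make_machine() traces "in Sub1" followed by "back at rs1", which is all the real hardware is doing with its self-modified source field.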
There's something about the Prop chip which is intensely appealing, and I think the fact that it's an architecture that DEMANDS self-modifying code is what makes it such an attractive machine for me. Self-modifying code is something that will get one flunked out of a CompSci course. Mention self-modifying code to a CompSci graduate and they will suddenly turn pale, make the sign of the cross and back slowly away from you saying, very slowly, "just stay calm and no-one gets hurt". If you want to then induce a vagal freeze response in them, say the words "Go To". That will overload their wetware and create potentially irreparable cognitive dissonance, leaving them maybe capable of employment at a bottle recycling depot, where their ability to count rapidly will be useful.
Self-modifying code seems to be the divide between those who want contact with the bare silicon and those who prefer to live in a world of virtual machines that all talk Java to one another and keep their variables private. Mention a global variable to these people and it has the same effect as if one pulled out one's dick at a Victorian tea party and asked "does anyone here have a longer one?"
Well I happen to like living dangerously which is far preferable to living a life of boredom. Having to restrain myself from trying out all the neat meatware hacks I come up with in my other life as a doctor makes me far more experimental when it comes to my hacker alter-ego. Basically, if there's a limit, I want to see if it can be broken or stretched.
When I first entered the micromedic contest, it was with some trepidation, given the 496 longs/cog. This is a hard limit and I don't have access to a silicon foundry, so I don't try to fit 1 MB of code into a cog. However, when I started writing PASM, I suddenly realized that I was using far fewer instructions than I thought would be needed. That's because data acquisition software is, when one looks at it in detail, really very simple. One either takes a value from one location and stores it in another, or one generates an SPI/I2C clock sequence which is used to either send or receive serial data. This made me realize that most of the programs which I write in a HLL really aren't doing anything useful as far as the fundamental problem is concerned. All of the routines which I'm especially proud of are very terse and, when I was a young hacker wannabe, the head hacker reminded us of the importance of being terse. That was in the tiny 12 Kword memory space of the TSS/8. Verbosity in PDP8 code was met with a trembling, angry, outstretched hand pointing to the evil empire's hulking 360/65 and a command to "go play with your bloatware".
Today I can't think of a single hacker that would consider the IBM 360's OS to be "bloatware" as the usual 360 installation was lucky to have a single megabyte of core storage. The OS was written in assembly language, but was very arcane and if one threw in 2 or 3 instructions that really weren't needed, one didn't notice given that there was a full MEGABYTE of core. That type of profligacy didn't go over very well in the minimalist atmosphere of the PDP8 head hacker's lair. There would be instruction sequences on a blackboard to be simplified and the best one could expect from him if one came up with a particularly novel simplification was "now that's a neat hack".
The Prop has way more RAM than the TSS/8: 512 longs in each of the 8 cogs and 8192 longs in hub RAM. That works out to 49,152 bytes of RAM, in comparison to the 18,432 bytes (although the unit of storage in a PDP8 was a 12 bit word) on the TSS/8. The TSS/8 ran 8 slave teletype machines, each one capable of downloading paper tape programs into the TSS/8 at the blinding speed of 110 baud. When I got an account on the TSS/8, I was in Nirvana. Now, to be fair to the TSS/8, it did have a hard disk drive which I believe was 2 megabytes or so in size. OTOH, my Propeller Board of Education (BOE) has a mini-SD card socket and will handle cards up to 32 GB in size. The TSS/8 also had its own large room dedicated to it at the U of Calgary and those of us who had proved especially worthy in the head hacker's eyes were allowed to come into the inner sanctum and actually touch the PDP-8 machine! We were also allowed to reboot it, but the only PDP-8 we were allowed direct contact with was a bare machine with, I believe, 1024 or 2048 words of core.
That's where I first learned assembly language programming. I spent much of my classroom time in high school writing PDP8 software in octal, getting angry glares from teachers who assumed that their trivial subject was much more deserving of my time than programming. Once I had the sequence of octal instruction codes perfected, on yellow sheets of paper often with holes from multiple erasures of incorrect code sequences, I'd head over to the UofC Data Center and sign up for an hour's time on the PDP8. Then, trembling with anticipation, I'd key in my program on the switch panel: key in the address, load the address into the address register and then key in the sequence of 12 bit words, hitting deposit after every word was entered. It was one of the most exhilarating things I'd ever done at that point in my life. If I was right, the panel lights would flash and the results of the mathematical operation I'd coded would be available in RAM; they were read out by setting the base address and then reading out the contents of all of the variables that were modified. Then I learned to use the PAL assembler, a multiple-pass assembler which required one to read both the assembler and one's paper tape program in multiple times before the binary code was punched out on paper tape. This was what one then loaded into the PDP8. <sorry, caught in a reminiscence loop, but when you remember your first computer far more clearly than your first lay after 40+ years, you must be a hacker>
I guess that's why I like the Propeller so much: it's triggered this gush of computer-related memories of a much simpler time in the computational age. That's where I'm coming from and why I hate Java. When I finally got into HLLs, FORTRAN was my language of choice, as real men used FORTRAN and suits used COBOL. Well, actually, real men used assembly language, but I never did get an IBM 360 assembler program to run, although I did generate thousands of pages of memory dumps attempting to get one to run (memo to self: don't start with trying to write re-entrant subroutines on an architecture you are just starting to learn).
Let's get away from this nostalgic haze now. What surprised me with the Prop was how short my routines were, and that suddenly 496 longs for instructions and variables was starting to look like a large amount of memory. As I'm writing this, I have a Propeller program which is sampling a 3 axis accelerometer and timing each event to msec precision, reading out a Polar HR monitor that I'm wearing, and storing the data to an SD card based datalogger. There's an LED that's flashing with my HR and it's going fast now, as I'm so excited to finally have an ambulatory physiologic monitor (APM), which has been a dream of mine for 3 years now, on a Prop project board inside a little Sparkfun components box. It's running off 6 volts of AA batteries and survived a shopping outing earlier today stuffed in my jacket pocket.
The first program that I wrote for the APM was a 1 msec clock (there's something very reassuring about being able to create a precise timebase for a processor architecture; many of the development boards that I've gotten nowhere with so far do have nice millisecond clocks on them). The nice thing about the Prop is that it has very deterministic instruction timing. The vast majority of instructions execute in 4 clocks which, at an 80 MHz clock speed, works out to 50 nsec/instruction. 20 MIPS/cog is rather powerful to someone used to the 0.5 MIPS PDP-11/23 as his previous favorite machine. Thus, when one combines short programs with high clock speeds, one gets damn fast execution times. Also, there was way more leftover RAM in each cog than I had expected.
The clock cog generates a 1 msec clock which is output on Pin0, and the longword clock resides at hub RAM location $30 (why no-one has ever considered a Propeller page0 is beyond me, but that's way beyond the scope of this already meandering text). That way, other cogs can grab the timer when they need it. Because of all the room I had left over in the clock cog, I figured I might as well add a real-time clock (RTC) to it. The RTC needs to be told whether a year is a leap year, but it has a better knowledge of how many days are in a particular month than I do. It keeps excellent time, which it should, as the Prop system clock is generated by a 5.00 MHz crystal oscillator. Then, when I got the Polar HR receiver and chest strap, I decided: why not throw the HR monitoring chunk of the project into the clock cog as well?
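Since the RTC gets handed the leap-year flag rather than computing it, here's the rule that flag encodes, along with the days-in-month table the cog does know, as a Python sketch (the cog version would be PASM, of course):

```python
def is_leap(year):
    """Gregorian leap-year rule: divisible by 4, except centuries,
    except centuries divisible by 400."""
    return year % 4 == 0 and (year % 100 != 0 or year % 400 == 0)

def days_in_month(year, month):
    """Days per month, with month 1 = January."""
    days = [31, 28, 31, 30, 31, 30, 31, 31, 30, 31, 30, 31]
    if month == 2 and is_leap(year):
        return 29
    return days[month - 1]
```

For a device that only needs to run across a handful of years, passing the flag in from outside is a perfectly reasonable way to save cog longs.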
Just a few more instructions to locate the 15 msec long high-going Polar HR pulse and also time the duration of the pulse, so the code can distinguish noise from actual Polar HR events. The time of the HR event and the duration of the pulse are saved to a hub RAM buffer.
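The width test amounts to a simple filter over edge transitions. A Python sketch of the idea (the 15 msec expected width comes from the Polar pulse described above; the +/-5 msec tolerance and the edge-list representation are my illustrative choices, not the cog's actual code):

```python
def classify_pulses(edges, expect_ms=15.0, tol_ms=5.0):
    """Keep only high-going pulses whose width is near the expected
    width; everything else is treated as noise.
    edges: list of (time_ms, level) transitions, level 0 or 1."""
    events, rise = [], None
    for t, level in edges:
        if level == 1:
            rise = t                      # rising edge: remember time
        elif rise is not None:
            width = t - rise              # falling edge: measure width
            if abs(width - expect_ms) <= tol_ms:
                events.append((rise, width))
            rise = None
    return events

edges = [(0, 1), (15, 0),        # genuine ~15 msec pulse
         (100, 1), (101, 0),     # 1 msec glitch -> rejected
         (200, 1), (216, 0)]     # 16 msec pulse -> accepted
events = classify_pulses(edges)  # -> [(0, 15), (200, 16)]
```

The same accept/reject decision takes only a compare and a conditional jump in PASM.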
Then I found some code on the Propeller Object Exchange (OBEX) with assembler routines to read out the 3 axis accelerometer, which is accessed via SPI. I hacked the code till it did what I wanted and put it in another cog. I brutally chopped up a demo program someone had written to put accelerometer data into a spreadsheet and ran wild with code in the massive 32 KB memory space which is the realm of Spin and the hub. Most of this code is just endless calls to the serial driver to print out various variables and test out subroutines in cogs, and it is due for a very serious pruning in the near future.
Then, of course, there is the serial I/O cog, which uses some very cool multiprocessing techniques that let it reliably emulate a 115,200 baud modem. When I'm finished interacting with the program, I switch the serial port output to pin4, where it just streams data to the SD datalogger: another really cool Sparkfun device which lets you throw text at it and stores it to text files on the SD card.
I'm steeling myself for the next step, which is to write the VB6 code needed to analyze the huge amount of accelerometry and HR data I've recorded today. Normally, if I was told I could do some VB6 coding rather than some terminally boring activity like dealing with my hospitalist billings, I'd jump at the chance. However, after spending a week in the wilds of Propeller assembly language (PASM), I feel like someone who's just spent the afternoon on Wreck Beach playing in mud and forgets to dress or clean up before going to a black tie cocktail party. Now I face numerous restrictions on what I can and can't do. Don't get me wrong, VB6 is my favorite HLL; it's just that the unique feeling one gets from intimate contact with the hardware is what's missing when one moves to a windoze platform from a hardcore minimalist development system. M$ is the new evil empire and IBM is like a now senile dictator who's discovered philanthropy at some point during his decline. (IBM's heavily into open source software, whereas if an asteroid hit Redmond, I'd be deliriously happy; gotta teach those alien asteroid tossers better English - "Russia or Redmond - what the hell, they're probably the same")
Well, I'm not writing the analysis code in Spin - that's for damn sure. For an architecture that's totally unique and aimed at the true hacker, Spin is a letdown. It's almost as if Spin was thrown in as an afterthought -- "most people can't program in PASM - we need a HLL on this sucker if it's going to go anywhere". Spin is annoying, but I can write Spin programs. The most heinous sin that the authors of Spin committed was omitting a "Go To" from their language. May they burn forever in the fires of hell for such insolence. You DON'T write a HLL for a microprocessor without a Go To!!!!!
It would get way beyond the point of this essay, which has the direction of a drunkard's Brownian walk (disclaimer: I'm celebrating the first successful outing of the APM with a bottle of "Stump Jump" Australian Shiraz, and all those who find this essay in some way offensive should send their complaints to the Osbourne family located somewhere near McLaren Vale in South Australia - it's all their fault), if I were to start going on a rant about object-oriented programming (OOP). To me, OOP is a gimmick, and nothing's going to change the fact that there's a 100 fold or greater difference between the average programmer and a great programmer. I've known lots of great programmers and they can code well in any language that suits their purpose. I much prefer the engineering approach to programming, which is: get the job done and who gives a shit if the code is ugly. The CompSci approach is: "let's make sure that the indentation is perfect and we've fully followed all of the rules set out by whoever happens to be the current guru of programming etiquette". Often this results in very nice looking code, every comma where it should be, impeccable indentation, but, more often than not, it doesn't work. If it does work, it uses 100x the resources of what a "just get the job done" programmer would have written.
We could go on forever about whether my use of "magic numbers" in code is a valid shortcut or an unpardonable sin that deserves at least a week in the stocks in a public square. For a very slow programmer, portability is a key concern, because when they're faced with a new architecture, it's nice if they can just tweak a few variables and have the code running on it. A very fast programmer who's totally grokked the problem and simulates/debugs in wetware can effortlessly bang off another version of a program for a new architecture, magic numbers and gotos included, and have it working before the slow programmer has even come up with a code indentation scheme.
For a while I was seduced by the "we have RAM to burn" paradigm, which made me temporarily part ways with my previously perniciously parsimonious programming style. I've now re-realized that bloatware is evil, even if one is running code on a 2.9 GHz i7 processor with 16 GB of RAM. That doesn't mean that I'm going to worry about wasting the odd byte or two in a data structure (and I no longer have a fixation on data structure sizes absolutely needing to be the smallest power of 2 one can get away with), but I have a very dim view of XML and other "human readable" POS that seem to be the products of psychosis (and that's being charitable). (Sorry, wandered off again from the increasingly hard to follow train of thought.)
I've been told in the past that I get weird when I program and, if I find that patients are suddenly getting up and unexpectedly leaving the exam room, then there may be a grain of truth to this assertion, which came from a woman. One week ago, I gave myself the goal of having a functioning APM before the end of the 7 day period. Considering that I totally crashed and burned when I proposed to do the same project on an ARM based Stellaris system in a Circuit Cellar design contest in 2010 (it came with a fully "modern" ARM IDE trial version which took up hundreds of megabytes on my HDD, and I wasn't even able to get it to generate a "Hello world" program despite days of trying), I reassured myself last Friday that the contest deadline is July 31. Then something funny happened: the code just started to flow. One of the best things that I did was to write an assembly language routine for an individual on the Parallax Forums who needed to time pulses. This seemed trivial, and then I was somewhat surprised when another commenter (apparently one of the forum's uber hackers) poked numerous holes in my code. My initial reaction was irritation - after all, I'd run the code numerous times in my wetware Propeller model and it was ***Perfect***. Then I reread his comments and had one of those WTF moments where I wondered how I could have missed something so blatantly obvious and why it took someone else to point it out. The commenter on the Prop Forums site was quite right - I had buggy code; just because it worked 99.9% of the time was insufficient. It had to be perfect and I had failed in that task.
So, I went back to the Prop documentation, reread everything, updated my wetware model and found a solution to the problem. I'll post the problem and the solution some other time, as this isn't meant to be a technical exposition - instead, it's just a stream of consciousness rendering of how neat it feels to have finally gotten an APM together. It may be V0.01 of the device, and it would take a very dedicated volunteer to carry the sucker around for a day, but it works!!! So, I feel like celebrating a bit and have decided to post a blog entry, as my cat just went to sleep when I started telling her about the device. I can't post stuff to the Prop forums since, after all, it is a contest that I'm in and, even though my chances of winning are vanishingly slim, it's the deadline that makes me stop puttering around and really apply myself. I don't care about the $5 K prize at the end of the contest; I've given up considerably more money than that by choosing to play with my electronic toys rather than practicing medicine 1 week/month. What's important to me is the feeling I get when I come up with an idea and see it realized in physical reality. There's nothing like it. Of course, as expected, all of my code and hardware will be open source as of 31/7/2013. Don't get me started on what I think of software patents.
It's very reassuring to see the flashing red LED and the brightly lit blue SD card LED to the left of me, which let me know that data is being streamed to the SD card. What I can't understand is why more people don't program in PASM. To me it's a no brainer - there's the instruction set, there's a pad of paper - start writing code. I'd say that 95% of the submissions to the Parallax Forums are in Spin which, as well as being an ugly language (and I believe I've mentioned the no Go To heresy), is only a pseudo-OOP language. Purists will look down on it as it doesn't meet some arcane criteria that a language must possess before it becomes OOP.
To be fair to the adherents of Spin, it's fast to use if you haven't done assembly language programming in the past. Right now, taking one of the coddled progeny of the modern age and asking them to do assembly language programming would be the equivalent of telling them to fix their vehicle with a screwdriver after it's broken down in July at noon in Death Valley. They haven't a clue where to start or what might possibly be wrong and, a very substantial proportion of the population haven't the foggiest notion of how an internal combustion engine works. As far as they're concerned, it's just magic - press one button and the door unlocks, put the key in the ignition and drive away. Most of the current population can be trained to read a gas gauge and become attentive to that most idiotic piece of automotive engineering ever invented, the "check engine" light. Beyond that, all they know is that Henry Ford was visited by Athena in the middle of the night and, the next morning, aside from a few drying wet spots on the sheets, he had the inspiration for the automobile. Automobiles are produced in the same factory that produces unicorns and free cell phones for the masses.
When you're a relic of the dinosaur age, like I am (and I still remember women in fur bikinis distracting tyrannosaurs while the men closed in on them from the sides), you know how to start with a bunch of TTL chips, a soldering iron and a large quantity of wire and build a computer from the ground up. It's simple and very tedious, and a few bits into wiring up a memory address register you start wondering whether you really need that many bytes of RAM. Of course, when I was younger I was told: you kids have it easy; back when I was your age there was no such thing as an IC, and if you wanted a logic gate, that was one tube, two for a flip flop, and lots of resistors and capacitors. Of course, you could use relays if you didn't mind your computer being a bit slower.
So, it's not so much what you start with, but how much you understand the basics. I've built tube logic gates and I've used relay logic. I've also played with an abacus and taken apart mechanical calculators. These are all significant aspects of our civilization's computational history. The key to all of this is trying to figure out how things work. When I used to watch TV ages ago, my favorite shows were How It's Made and Junkyard Wars. I could watch how things are made forever, as the first thing I wonder about when I pick up a manufactured object is exactly that: how was it put together, where did the pieces come from, who makes the various pieces, what ingredients go into it, etc. Junkyard Wars is an indictment of our society, in which we throw away so many absolutely useful items which could be disassembled to make new things. To me, a junkyard has always been a treasure trove of neat items and I loved spending time in junkyards when I was a kid (my mother wasn't too happy about this).
So, programming in Spin is the easy way out. It's a no brainer. Find that your program is too slow, launch the Spin interpreter in another cog - after all, the Propeller has 8 independent cogs. And, yes, Spin is fast, but PASM is much, much faster. The minimum delay time for a waitcnt in Spin is 371 clocks, whereas in PASM it's 8 clocks. At the usual 80 MHz system clock (5 MHz crystal with the 16x PLL), 371 clocks is a glacial 4.6375 microseconds whereas 8 clocks is a more reasonable 100 nanoseconds. Exchange your 5 MHz crystal for a 6.25 MHz crystal on the Prop BOE and you're running at 100 MHz, i.e. 10 nsec/clock. I happen to like doing things fast, so why would I accept a language that's over 46 times slower than PASM?
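The clock arithmetic above is easy to check for yourself. Here's a quick sketch in Python (the 16x PLL multiplier is the standard Propeller configuration; the function names are just for illustration):

```python
# Sanity check on the waitcnt timing figures quoted above.
# Assumes a 5 MHz crystal with the 16x PLL (80 MHz system clock),
# or a 6.25 MHz crystal for 100 MHz.

def clock_period_ns(crystal_mhz, pll_mult=16):
    """System clock period in nanoseconds."""
    return 1000.0 / (crystal_mhz * pll_mult)

def delay_us(clocks, crystal_mhz, pll_mult=16):
    """Delay in microseconds for a given number of system clocks."""
    return clocks * clock_period_ns(crystal_mhz, pll_mult) / 1000.0

print(f"Spin minimum waitcnt: {delay_us(371, 5):.4f} us")   # 4.6375 us
print(f"PASM minimum waitcnt: {delay_us(8, 5):.4f} us")     # 0.1000 us
print(f"Spin/PASM ratio: {371 / 8:.1f}x")                   # 46.4x
print(f"Clock with 6.25 MHz crystal: {clock_period_ns(6.25):.1f} ns")  # 10.0 ns
```

Run it and you get 4.6375 µs versus 100 ns, a ratio of a bit over 46 to one.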
Presumably any reader who's stuck it out this far might wonder what the point of this piece is, and all I can say is it's in the text. I've read it over and I like it (but then I'm on glass #5 or so of the Stump Jump wine), so I might have differing opinions tomorrow morning. However, it does encapsulate a lot of the thoughts that have crossed my mind during a week of programming, and that's why it's going on the Electronics Blog. Next week is back to meatware hacking, which I'm not really looking forward to. Meatware has many more degrees of freedom than hardware and, from a debugging standpoint, is a far, far greater intellectual challenge than trying to figure out why a light doesn't come on when, as far as you know, your program should have turned it on. Meatware also responds in unexpected ways to fixed stimulus paradigms, and that's what makes it interesting. Now on to some VB6 APM analysis program construction.
Tuesday, July 12, 2011
Russian referral spammers
For the last month or so I've been getting a huge amount of referral spam from what appears to be a Russian-run botnet. These morons have latched on to one of my web pages and send GET requests with faked referring links which point back to Russian sex sites and drug-sale sites. This is clearly from a botnet, as the IP address of the originator constantly changes. The behavior of the botnet program is interesting. The initial link was /Computers/StateMachineHierarchy.html, and I changed that filename to something else. Now, rather than getting just a GET request to the initial link, I get requests for /StateMachineHierarchy.html which, of course, returns a 404.
That particular page was a minor aside to my 2000-era computers pages and I'll restore it when the botnet looks for other targets. Maybe if I get particularly energetic, and find the time, I'll find the locations of all the botnet machines and write a script to send emails to their ISPs regarding the infected machines on their networks. Why these Russian idiots are engaged in this referral spam is unknown. Looking through my weblogs indicates I've been the target of referral spam in the past, and it's most annoying when the target is a large file which wastes my bandwidth. Right now all these idiots get back for their GET request is a 404. I do periodically look at where accesses to my webserver come from and what the most commonly requested pages are. I don't post a page with the most frequent referral sites, which is what the referral spammers are hoping I'll do. In any event, I would filter the list to remove requests for non-existent web pages, which also gets rid of other types of spam.
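That kind of filtering is straightforward to script. Below is a minimal sketch in Python, assuming combined-format access logs; the regex, function name and sample lines are my own illustration, not the actual log-processing setup:

```python
import re

# Tally Referer values from combined-format log lines, skipping any
# request that drew a 404 - that drops the referral spam aimed at
# non-existent pages before the counts are ever looked at.

LOG_RE = re.compile(
    r'\S+ \S+ \S+ \[[^\]]+\] "(?P<method>\S+) (?P<path>\S+) [^"]*" '
    r'(?P<status>\d{3}) \S+ "(?P<referer>[^"]*)"'
)

def tally_referers(lines):
    counts = {}
    for line in lines:
        m = LOG_RE.match(line)
        if not m:
            continue                       # malformed line, ignore it
        if m.group("status") == "404":     # spam aimed at missing pages
            continue
        ref = m.group("referer")
        if ref and ref != "-":
            counts[ref] = counts.get(ref, 0) + 1
    return counts

sample = [
    '1.2.3.4 - - [12/Jul/2011:03:14:07 -0700] '
    '"GET /StateMachineHierarchy.html HTTP/1.1" 404 512 "http://spam.example/" "bot"',
    '5.6.7.8 - - [12/Jul/2011:03:15:00 -0700] '
    '"GET /index.html HTTP/1.1" 200 1024 "http://good.example/" "Mozilla"',
]
print(tally_referers(sample))  # the 404 spam line never makes the tally
```

The spam referrer never appears in the output because its request 404'd, which is exactly the filtering described above.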
What disturbs me about this, and other attacks on my webserver and mailserver, is that there are an increasing number of morons with access to computers. I miss the days when the internet was an anarchic but civilized place, and it's totally different now that the criminals have moved in. Botnets are only possible because clueless users have no idea that their machines have been infected. The solution is not antiviral software but rather a knowledge of what is normal on one's computer. My personal anti-virus software is primarily wetware based, supplemented with Process Explorer, Wireshark, various hex editors, TCPView, Process Monitor and RootkitRevealer. I'll be fighting back against these morons, although it's going to have to be via attacks on their proxies, as they're likely safely isolated, controlling their botnet from afar. The other option is to simply refuse all accesses from Russian and Chinese IP addresses, although the botnet appears to be worldwide.
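Refusing accesses by country boils down to matching the client IP against a list of CIDR blocks. A minimal sketch using Python's standard ipaddress module follows; note that the two ranges below are documentation placeholders, not real Russian or Chinese allocations - the genuine per-country lists come from the regional registries (RIPE, APNIC) and change over time:

```python
import ipaddress

# Placeholder "blocked" ranges - substitute real registry-derived CIDR
# blocks for the countries you want to refuse.
BLOCKED_NETS = [
    ipaddress.ip_network("198.51.100.0/24"),
    ipaddress.ip_network("203.0.113.0/24"),
]

def is_blocked(ip_string):
    """True if the client address falls inside any blocked range."""
    ip = ipaddress.ip_address(ip_string)
    return any(ip in net for net in BLOCKED_NETS)

print(is_blocked("198.51.100.7"))  # inside a blocked range
print(is_blocked("192.0.2.1"))     # outside all blocked ranges
```

In practice you'd hook a check like this into the webserver's access control rather than a script, but the containment test is the same either way.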