You can get in touch with me by .
I’m in the process of setting up a Tribe.net account. It is not a bump-free ride yet — it doesn’t seem to want to verify my registration. But I do like the idea behind it.
Why is that atypical? Well, from where I sit, I see more people transition from being programmers to web developers than the other way around. Your experience, of course, may be different.
But I am passionate about great design, and great pieces of software that happen to be based on good ol’ http. And I’m passionate about these things because I think communication is really important - important enough that any barriers should be broken down as much as possible.
So I’m attracted to ideas like blogging, RSS, the Semantic Web, ways of simplifying the markup of text... in short, ways of sharing information.
But I like to think I’m a pretty pragmatic guy, too. Yeah, the semantic web is cool, and I think that we’ll have something like it someday, but it’s going to take a lot of grunt work by highly-trained people to get the value out of it. And I think that cost is too high right now. But that, as they say, is another blog.
I like to play a lot too. So I’m sure you’ll see quite a bit about the games I like. It’ll cover some PS2, and it’ll also cover the slowly-but-surely emerging world of abstract games. Chess variants need not apply. PC games need not either.
One thing I’m going to try with this blog, which I think would be unique, is to post links to stuff I find interesting fairly quickly. That in itself isn’t very interesting, but then, when I have a few moments later on (for values of “later” ranging from ‘that night’ to weeks from now), I’ll follow up with a longer entry explaining why I think it matters. I haven’t seen anyone else do it that way before, which really isn’t saying much, but I hope it makes for an interesting mix of different kinds of content for you.
By the way: Blosxom Rules. Thought you should know.
Thank you for being motivated enough to comment on something I wrote.
...and that’s it. I get access logs daily, so I will see your link in them. If you feel it urgent that I see your post as soon as possible, you can always send me .
I’m going to be writing up a colophon soon, but for now, please realize that this blog you’re reading is statically generated from a blosxom installation on my machine. There are no moving parts.
This means that if you favour trackback mechanisms, you can’t ping me as you would with other blog installations.
Commenting on this post really won’t be that hard, and anyone can do it. Just blog about it at your site, and link to the permalink on mine. If you want to ‘ping’ me, then after you go live with your post, click on the link to my site.
Now, you’re probably wondering how the heck I’m going to know you linked to me. Well, it’s not a big deal. If you have the time, read this wonderful article by John Gruber on Trackbacks. The short version is: why use trackbacks, the way most people do, when your web logs already note referrers? I have access to my web logs on a daily basis. Ergo, I’ll be parsing them for referrers, checking them out, and posting the interesting ones.
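For the curious, that referrer-parsing can be sketched in a couple of lines of shell. The file name and sample lines below are assumptions for illustration; a real setup would read the live access log. Apache’s “combined” log format puts the referrer in the fourth double-quoted field, which is what the `awk` split relies on.

```shell
# Create a couple of sample lines in Apache "combined" log format;
# a real setup would point at the actual access log instead.
cat > sample_log <<'EOF'
1.2.3.4 - - [01/Aug/2003:10:00:00 -0500] "GET /blog/foo.html HTTP/1.1" 200 1234 "http://example.com/their-post" "Mozilla/4.0"
5.6.7.8 - - [01/Aug/2003:10:01:00 -0500] "GET /blog/foo.html HTTP/1.1" 200 5678 "-" "Mozilla/4.0"
EOF

# The referrer is the 4th double-quoted field; skip "-" and empty entries,
# then count how often each referring URL shows up.
awk -F'"' '$4 != "-" && $4 != "" { print $4 }' sample_log | sort | uniq -c | sort -rn
```

Pipe the result through `head` and you have a daily shortlist of pages worth visiting.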
“But that’s crazy!” Perhaps, but I’m not getting that many hits per day. Yet. And if I were, then I’d want a static site to keep the bandwidth and server load down. This is not my domain, and my presence here is entirely due to the good graces of one of my friends.
But if you want to help me out here, then all I’d ask you to do is, when linking to me, to write in and around the link something that provides good context for your contribution. If I’m getting a lot of referrers, then I’ll probably write a script to visit your page, excerpt an area around that choice phrase, and integrate it somehow into this site.
While this is a very interesting and popular idea, I’m not sure if I like the MT Trackback feature. Here’s a link to the specification. I’ll counterpropose my own idea soon.
UPDATE: Looks like John Gruber beat me to it. And while it’s a long, long read, it’s worth every word, for he lays out his thoughts in careful detail, leaving no stone unturned.
Quote from the article:
The amount of money that is in aggregate routed into marketing, even in a niche like computer technology, is immense. While I, like many, tend to think that the marketing and advertising professions are dysfunctional, they’re not entirely clueless; if a credible human voice inside a big scary company (think Don Box) is a good way to get the message across, they’ll notice. If a smart writer with a long leash turns out to be more useful than a phalanx of conventional journalists (think Jon Udell or Dan Gillmor), they’ll notice that too.
And the currency generated by such discourse — the attention of people who spend — is, in the world of marketing, beyond price; diamonds are dust beside it.
This might be an interesting source when creating a business case for creating a social/human networking software package.
I read it, I liked it, and I’m not smart enough to disagree on any one thing. You’ll find me at the flat end of the curve. :)
People on unix-y machines, who render their blosxom blogs out as static files and host on Apache (not sure if this trick works in IIS), might be interested in this trick. Using it, I’ve been able to shrink my statically rendered files to 25% of their original size, with a corresponding reduction in bandwidth demands. And that means not only do I save on bandwidth costs, but my viewers get their pages faster too.
For many people, this might seem like an incredible trick, guaranteed to have a catch. There is one, but it’s probably not as bad as you think. Here’s the lowdown:
One of the features the HTTP 1.1 specification recommends browsers implement is the capability of decompressing gzip-encoded files and rendering the results in the window - if it’s a filetype the browser can handle, of course. In fact, Internet Explorer 4.0 and newer, Netscape 4.0 and newer, and many of the rest of the modern (version 4 compatible) browsers implement this feature. (OmniWeb 4.1 and Safari 1.0 are two notable exceptions, and I’m sure there are others.) This is significant because over 98% of traffic to websites these days comes from Netscape and IE browsers of at least version 4.
Even more interesting, if you’re serving files from Apache, it’s configured out of the box in such a way that if you have a file called foo.html.gz (the .gz extension implying that it’s gzipped), and a client requests foo.html, it will send the .gz file automatically. I do not know if IIS can do this as well - if your pages are served from IIS, please experiment and let me know.
So what we have here is a situation where .gz files can be served automatically for requests for .html files, and almost all browsers currently in use can decompress them automatically and display the results. Seems to me the time is ripe for some well-earned savings in bandwidth.
I did say there was a catch. It’s up to you to decide how serious it is, and if you’re in control of the server, you can actually work around the problem by installing an Apache module. For those browsers that don’t support in-application gzip decompression, your html files will simply be downloaded to the visitor’s hard drive. If your traffic logs indicate that 1 person in 1000 will see that problem, then maybe it’s ok for you. If your logs indicate that 1 person in 50 has that problem, maybe it’s not. It’s up to you.
What about links? Do they need to be renamed with a .gz extension? No. As I said earlier, if you make a request to foo.html, and there is no foo.html, but there is a foo.html.gz, Apache will give you that instead. No configuration required.
If you’re administering your web server, then you don’t need to do what everyone else might have to do. Just look for the mod_gzip Apache module and install it. That module will detect whether the client can support in-application decompression, and automatically compress and send the requested file. The module goes one further and builds a cache of compressed files to reduce latency in the long term.
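For the server admins, here is a sketch of what the relevant httpd.conf lines might look like. The directive names are from my reading of the mod_gzip 1.3.x documentation, so treat them as an assumption and check them against the docs for your version:

```apache
# Load the module (the path varies by installation)
LoadModule gzip_module modules/mod_gzip.so

# Turn compression on, and let the module negotiate with each client
mod_gzip_on Yes
mod_gzip_can_negotiate Yes

# Only bother compressing HTML
mod_gzip_item_include file \.html$
mod_gzip_item_include mime ^text/html$
```

With something like this in place, browsers that can’t handle gzip just get the plain file, so the download-to-disk catch above disappears.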
For the rest of us who still want to take advantage of this trick, here ’tis.
In the terminal, render your blosxom files statically (if you haven’t already done so), cd into the root of your statically rendered files, then type this:
$ find . -name "*.html" -print|xargs gzip
What this command does is find, starting from the current directory, all files that end in .html and print them to standard output, which gets piped into xargs. xargs then calls gzip on each of the found files, compressing them.
If this is something you like a lot, you could even make a command for it.
If you’re running tcsh, throw this line in your .tcshrc file:
alias webgzip 'find . -name "*.html" -print|xargs gzip'
If you’re running bash, throw this line in your .bash_profile (or equivalent):
alias webgzip='find . -name "*.html" -print|xargs gzip'
For the command to take effect, quit and restart your terminal. If you don’t want to do that, I believe in tcsh you type:
$ source .tcshrc
at a prompt, and for bash:
$ . .bash_profile # or equivalent
and you’ll get it.
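If you want to see what the compression actually bought you, gzip can report the before-and-after sizes itself. A quick demonstration - the file name and contents here are just throwaway examples:

```shell
# Make a throwaway page and compress it, as webgzip would.
printf '%s\n' '<html><body>hello hello hello hello hello</body></html>' > demo.html
gzip demo.html

# -l lists the compressed size, uncompressed size, and the ratio.
gzip -l demo.html.gz
```

Run `gzip -l ./*.html.gz` at the root of your rendered files to total up the savings across the whole blog.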
For more information, you can visit this site: http://www.webcompression.org/
This, sir, is a great game. Please check out the rules here: http://www.abstractgamesmagazine.com/epaminondas.html
The only drawback to it is that it takes so long to set up. I know, I’m such a whiner. :)
...as crafted by Jason Scharlach & Robert Hahn
1. O’s win in 3
. . . . X X O O
. X . . . . . .
. . O . . . . .
. . . . . . . .
. . O . . . . .
. . . . . . . .
. . . . . . . .
. . X . . X . .
2. O’s win in 3
O O X X . . . .
. . . . . . . .
. . . . X O . X
. . X . . O . .
X O . . . . . X
. . . . . . . .
. . . . . . . .
X . . . X . O .
3. O’s win in 4
. X X . . . . .
. . . . . . . .
. . O . X . O X
. X . X . . . X
X . . . . . . .
. . . . . . . .
. . . . . . . .
. . . . . . . .
See here: http://www.abstractgamesmagazine.com/epaminondas.html for the rules. If you like playing abstract games, I’d recommend taking out a subscription to their magazine.
YAML is a simpler, easier-to-read markup language that people sometimes compare to XML. Syck is an allegedly fast YAML parser library with Ruby bindings (bindings also exist for other scripting languages).
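To show what I mean by “easier to read”, here’s a small record as I’d expect it to look in YAML. The field names are made up for illustration; the equivalent XML would bury the same data in angle brackets:

```yaml
# A hypothetical blog entry, expressed as YAML
title: Syck it and see
date: 2003-08-01
tags:
  - yaml
  - ruby
body: >
  YAML trades angle brackets for indentation,
  which is much kinder to human eyes.
```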
I wonder if YAML might not be a bad way to mark up any data that may have malformed HTML in the payload - like RSS. I suspect, however, that advocating this idea, particularly in Sam Ruby’s RSS workalike initiative, would be largely ineffective. Clearly, I’m not the only one with that notion...
Go YAML! You have a place in this world.
Inspired by this work of Paul Klee, Justin Kominar drew this picture of me on my whiteboard:
ESR has an interesting book out online called "The Art of Unix Programming". Must read it sometime.