Sunday 28 December 2008

Satisfaction

Working on projects or working on products? Which is for you? Both provide interesting, stimulating work with difficult problems to solve. Which will you personally derive the most satisfaction from? Well, I have a theory, or at least a way of phrasing the question, that has helped others in the past and might help you.

  • When you are working on projects you will sit down with someone with a problem. You'll get to know that person and their problem intimately and personally. Hopefully you'll then solve their problem and leave them in a happier and better place. All thanks to the expertise you have imparted.

  • When you are working on products you will not have this personal connection with your customers. Instead you will attempt to imagine how all the possible customers in the world could potentially want to use your product. You'll try to place yourself in an enormous range of situations and attempt to make each of those a bit better. Hopefully, if your product is completed and a success in the market then you will have made the world a better place for a huge range of people. None of whom you'll ever know.

So... satisfaction from helping just a few people you know well, or satisfaction from helping a huge range of people you'll never know? Of course, if you do choose to work on projects then you're guaranteed to help people, whereas products succeed far less often.

Neverwhere

Neverwhere
Neil Gaiman

Like Wrath of a Mad God this is fantasy too, but this is a completely different proposition. This is good, very, very good. Good enough that I will recommend this to non-fantasy reading friends. Ahhh... A breakout hit - the dream of fantasy authors the world over. Well, here's a tip: instead of sucking, try writing high quality, original, funny and genuinely moving stories, like, say, this.

The poignancy. It's not cloying, there's no preaching. Not even any condescension, patronisation or pity. This is a tale of those who fall through the cracks. Those you don't notice around you; the other nation outside, in the words of Billy Bragg, sleeping in the street. A tale of the disenfranchised, the dispossessed, told so well. So clearly, so directly, with no pathetic efforts to tug at the heartstrings, that, for me, this became the most moving story since Greene's The Quiet American.

But it's fantasy. How can a fantasy novel seriously be mentioned in the same breath as Graham Greene? Well, I'm going to have to try to justify that. On the surface, and the back cover, Neverwhere is a fantasy adventure set in a strange, fantastical world at once beneath and entwined within everyday London. This world intersects with London through the streets and the homeless. Richard Mayhew is pulled from our world into this other place. Forced onto a quest all he really wants is to be able to return home.

Viewed as a fantasy creation, this other world is a joy. Full of magic, grand quests and the most imaginative etymologies for major London landmarks: I certainly wished I knew London better. To get the right feeling I was able to transplant Sydney in place of London. Enough wandering in the City, Surry Hills, Pyrmont and Balmain and you have the feeling that there is history, and history on history here. And beyond that, it's dark. Frighteningly, unexpectedly dark.

Like Midnight's Children though, I read Neverwhere in two ways. As well as the straightforward fantasy interpretation, you could also see this as a story told by an unreliable narrator. What if the weird, fantastical world beneath London's streets doesn't exist? I mean, not even within the world of the book? What if that entire world is inside Richard Mayhew's mind and he just doesn't know it? And for me, that possibility made this a touching, poignant story. A story genuinely of those who fall through the cracks; into a world that is at once magical, frightening and very dangerous.

Unfortunately, to make you believe I'll have to cite specifics. Without spoiling, I'd point at the third quest for the Blackfriars. When you read that scene think about alternate explanations.

Sunday 7 December 2008

Automatic Deployment for Rails

For the Rails applications we're building at work, as well as all the standard continuous integration features, we also automatically deploy our applications. That is, every time we submit code, a central server is automatically updated with a new release. Before the tests are even run.

We're pretty happy with this setup. It's already found a couple of bugs in some plugins we're using. More on that in an upcoming post. Here's how we made our automatic deployment work. We're using Capistrano for our deployment scripts; we're deploying to Phusion Passenger running under Apache on FreeBSD; and our continuous integration server runs an Ant script.

These instructions describe how to set up an Apache 2.2 web server with Phusion Passenger on FreeBSD, the Ant script that performs the automatic deployment, and how to configure a Rails app to be deployed this way.

This will give you two new environments for your apps: DEVTEST and UAT. UAT is a user acceptance testing environment; our system testers and analysts use and own this environment. We don't automatically deploy there, we release there. DEVTEST is the environment we automatically deploy to.

Setting up Your Server

Installing Phusion Passenger

Installing Phusion Passenger on a FreeBSD server is no different to installing anywhere else:

$ sudo gem install passenger
$ sudo passenger-install-apache2-module

Configuring Apache

At the end of the second step, the installer tells you to add some configuration to the end of your Apache config. On FreeBSD, edit it with:

$ sudoedit /usr/local/etc/apache22/httpd.conf

And then add the following at the end:

LoadModule passenger_module /usr/local/lib/ruby/gems/1.8/gems/passenger-2.0.3/ext/apache2/mod_passenger.so
PassengerRoot /usr/local/lib/ruby/gems/1.8/gems/passenger-2.0.3
PassengerRuby /usr/local/bin/ruby18
NameVirtualHost *:80
<VirtualHost *:80>
    ServerName devtest.example.com
    ServerAlias devtest
    DocumentRoot /usr/local/www/rails/devtest
    <Directory "/usr/local/www/rails/devtest">
        Options FollowSymLinks
        AllowOverride None
        Order allow,deny
        Allow from all
    </Directory>
    RailsEnv "devtest"
</VirtualHost>
<VirtualHost *:80>
    ServerName uat.example.com
    ServerAlias uat
    DocumentRoot /usr/local/www/rails/uat
    <Directory "/usr/local/www/rails/uat">
        Options FollowSymLinks
        AllowOverride None
        Order allow,deny
        Allow from all
    </Directory>
    RailsEnv "uat"
</VirtualHost>

Unless you want to use two different servers for the two environments, you'll need to use named virtual hosts, and ask your friendly administrator to add CNAME records to your DNS server pointing devtest and uat at the same physical server. They'll know what you mean.
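For example, in a BIND-style zone file those records might look something like the following (the hostnames here are placeholders, not the actual setup; your DNS server may use a different format entirely):

```text
devtest  IN  CNAME  realserver.example.com.
uat      IN  CNAME  realserver.example.com.
```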

Create a Local User

You'll need a local user on your server. This is the user that will run the automatic deployments.

$ sudo adduser
Username: deploy-robot
Full name: Deployment Robot
Uid (Leave empty for default):
Login group [deploy-robot]:
Login group is deploy-robot. Invite deploy-robot into other groups? []: www
Login class [default]:
Shell (sh csh tcsh zsh nologin) [sh]: 
Home directory [/home/deploy-robot]:
Use password-based authentication? [yes]:
Use an empty password? (yes/no) [no]:
Use a random password? (yes/no) [no]:
Enter password:
Enter password again:
Lock out the account after creation? [no]:
Username   : deploy-robot
Password   : ****
Full Name  : Deployment Robot
Uid        : 1001
Class      :
Groups     : 
Home       : /home/deploy-robot
Shell      : /usr/local/bin/sh
Locked     : no
OK? (yes/no): yes
adduser: INFO: Successfully added (deploy-robot) to the user database.
Add another user? (yes/no): no
Goodbye!

Deployment Directories

Set up the directories to hold your applications.

$ sudo mkdir -p /usr/local/www/rails/devtest
$ sudo mkdir -p /usr/local/www/rails/uat

These are the web roots for each of the environments, but applications will not be deployed here. Instead, symlinks will be created from here to where the applications are actually deployed.

$ sudo mkdir -p /usr/local/app/rails/devtest
$ sudo mkdir -p /usr/local/app/rails/uat

These last two directories, and everything under them, should be owned by the deployment user you created above.

$ cd /usr/local/app/rails
$ sudo chown -R deploy-robot:www devtest uat

Gems

Finally, there are some gems you'll need installed on the target deployment server. Some of these depend on FreeBSD ports.

$ cd /usr/ports/comms/ruby-termios
$ sudo make install clean

And then just a couple of gems.

$ sudo gem install termios
$ sudo gem install capistrano

And that's it for initial server configuration. There will be some more configuration when first deploying an application.

Preparing Your Application

Capistrano Config

Capify your application:

$ cd app
$ capify .

Edit your Capistrano rules in config/deploy.rb. You'll want them to look something like the following. These rules use no source control system to get the code: our continuous integration server takes care of checking out the code, so it's easier to deploy from the local copy. And this way we can be sure each deployment contains only one changelist.

# Overall config
set :use_sudo, false
# Application config
set :application, "app-name"
set :default_env, "production"
set :rails_env, ENV['RAILS_ENV'] || default_env
# Deployment source and strategy
set :deploy_to, "/usr/local/app/rails/#{rails_env}/#{application}"
set :deploy_via, :copy
set :scm, :none
set :repository,  "."
# Target servers
set :default_server, "localhost"
set :dest_server, ENV['SERVER'] || default_server
role :app, dest_server
role :web, dest_server
role :db,  dest_server, :primary => true
# Phusion Passenger specific restart task
namespace :deploy do
    desc "Restart Application"
    task :restart, :roles => :app do
        run "touch #{current_path}/tmp/restart.txt"
    end
end

Environment Configuration

Set up the two new environments for your application.

$ cp config/environments/production.rb config/environments/devtest.rb
$ cp config/environments/production.rb config/environments/uat.rb

Somewhere inside both those files you'll need to set the RAILS_RELATIVE_URL_ROOT as the application will be running at a sub-URI on your server and Rails needs to know that. Something like:

ENV['RAILS_RELATIVE_URL_ROOT'] = "/app-name"

The two new environments will also need to be described in your database.yml file. This of course depends on your specific database server setup, so I'll leave that bit to you.
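As a very rough sketch only - the adapter, host and credentials below are placeholders, not real settings - the new database.yml entries would look something like this:

```yaml
devtest:
  adapter: mysql
  database: app_name_devtest
  host: your-db-server
  username: app_name
  password: secret

uat:
  adapter: mysql
  database: app_name_uat
  host: your-db-server
  username: app_name
  password: secret
```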

Server-side Application Setup

Apache needs to know about the applications, and there needs to be symlinks from the web root to the application deployment folder. This setup only needs to be done once for each application.

To add the application to Apache, edit /usr/local/etc/apache22/httpd.conf again, and in the VirtualHost section for the devtest environment, add a line like the following:

RailsBaseURI /app-name

Now, set up the symlink:

$ sudo ln -s /usr/local/app/rails/devtest/app-name/current/public /usr/local/www/rails/devtest/app-name

And you're done with the application configuration.

Ant Deployment Scripts

Our company has an in-house continuous integration server. We'd be too embarrassed at cocktail parties if we didn't have our own. Yes, yes, I know this is completely ridiculous. And to make it even worse, it only runs Ant scripts. Sigh. Anyway, here's how you make Ant automatically deploy an application to devtest.

In a file called definitions.xml:

<project name="definitions_rake">
    <macrodef name="rake">
        <attribute name="app" />
        <attribute name="target" />
        <element name="variables" optional="true" />
        <sequential>
            <exec executable="rake" dir="@{app}" failonerror="true">
                <arg value="@{target}" />
                <variables />
            </exec>
        </sequential>
    </macrodef>
    <macrodef name="capistrano">
            <attribute name="app" />
            <attribute name="environment" />
            <attribute name="task" />
            <sequential>
                <exec executable="cap" dir="@{app}" failonerror="true">
                    <env key="RAILS_ENV" value="@{environment}" />
                    <env key="SERVER" value="${project.server}" />
                    <arg value="@{task}" />
                    <arg value="-s" />
                    <arg value="user=${project.user}" />
                    <arg value="-s" />
                    <arg value="password=${project.password}" />
                </exec>
            </sequential>
    </macrodef>
    <macrodef name="deploy">
        <attribute name="app" />
        <attribute name="environment" />
        <sequential>
            <capistrano app="@{app}" environment="@{environment}" task="deploy:setup" />
            <capistrano app="@{app}" environment="@{environment}" task="deploy:migrations" />
        </sequential>
    </macrodef>
    <macrodef name="test">
        <attribute name="app" />
        <sequential>
            <rake app="@{app}" target="db:migrate" />
            <rake app="@{app}" target="test" />
            <rake app="@{app}" target="spec" />
        </sequential>
    </macrodef>
</project>

Ant macros, while quite insane, are generally a better way to define new tasks than the complete insanity of trying to write a whole Ant plugin in Java. These macros define low-level tasks to run Rake and Capistrano, and then use those to build up higher-level tasks like test and deploy. All these tasks assume that Ant has been run from the directory immediately above your Rails app directory.

In a file called project.properties, set your server, user name and password. Having the password here is unfortunate, but it is a local account, with limited privileges on an internal server. Your call.

user=deploy-robot
password=deploy-robot-password
server=deployment-server

In a file called build.xml:

<project name="aegean" default="build">
    <import file="./definitions.xml" />
    <property file="project.properties" prefix="project" />
    <!-- Sample application.
         To add a new application:
         1. Copy the following targets.
         2. Replace 'depot' with your Rails app name.
         3. Add the 'app name' target as a dependency of the target 'build'.
    <target name="depot.deploy.devtest">
           <deploy app="depot" environment="devtest" />
    </target>
    <target name="depot.test">
           <test app="depot" />
    </target>
    <target name="depot" depends="depot.deploy.devtest, depot.test" />
    -->
    <target name="example.deploy.devtest">
           <deploy app="example" environment="devtest" />
    </target>
    <target name="example.test">
           <test app="example" />
    </target>
    <target name="example" depends="example.deploy.devtest, example.test" />
    <target name="build" depends="example" />
</project>

The large comment block is just there to help other developers add another application. From here, to try this out:

$ ant

It should run the deployment, and then run the test suites. If that works as you expect, then just configure your continuous integration server to run Ant over that file on every submit.

Hopefully this is of use to someone. Though this is how our environment is configured, I have written this all from memory, so I might have missed a critical step somewhere. Please let me know if there's anything that needs to be changed.

Tuesday 2 December 2008

Reading News

Previously, I've been a Google Reader fan for my RSS news reading needs. Now that I'm a proper Apple fan boi with an iPhone and a MacBook Pro, I've switched to NetNewsWire. Waaay better. The Google Reader iPhone app was what really drove me away. I'm probably going to have to turn off my blog for this, but desktop applications are frequently better than web applications. Heresy, I know. Google's iPhone Reader app has two specific problems:

  1. It refreshes the page after you close a tab. This is pretty irritating. Particularly if, like me, you only show unread items. Things disappear while I'm still reading them. Aargh!

  2. The big one: they 'mobilize' web pages. That is, instead of linking to the original version of every item Google has decided to link to a rewritten version of the item. Supposedly this version will be more readable on the iPhone. Well, the iPhone actually has a really good browser. But they've actually significantly broken something: the iPhone web browser recognises YouTube URLs and opens them in the built-in YouTube app. Because the iPhone web browser can't play YouTube movies. The rewriting means that this doesn't work. Thank you Google, thank you.
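The breakage above comes down to pattern matching. A rough sketch of the problem in Ruby - the regular expression and the 'mobilized' URL format here are illustrative guesses, not the browser's or Google's actual implementation:

```ruby
require 'cgi'

# Roughly how a browser might recognise a YouTube link.
YOUTUBE = %r{\Ahttps?://(www\.)?youtube\.com/watch}

original  = "http://www.youtube.com/watch?v=abc123"
# A hypothetical 'mobilized' proxy URL wrapping the original.
mobilized = "http://www.google.com/gwt/x?u=" + CGI.escape(original)

puts YOUTUBE.match?(original)  # the original link is recognised
puts YOUTUBE.match?(mobilized) # the rewritten link is not
```

Once every link points at the proxy host instead of youtube.com, the browser's special-case handling never fires.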

Anyway, there is one feature that I miss from Google Reader: sharing items. But there's the whole desktop application thing going on. I'm now posting items I would have shared to my Twitter feed: gga, look for items tagged #feed.

So how do I do this? With a pretty simple piece of AppleScript:

tell application "NetNewsWire"
        set t to title of selectedHeadline
        set u to URL of selectedHeadline
end tell
tell application "Twitterrific"
        post update t & ": " & u & " (#feed)"
end tell

A single click from NetNewsWire and I've posted an item to Twitter. If you think you might be interested in items I've previously shared, follow me on Twitter.

Wednesday 26 November 2008

Dealing with Bot Nets

Currently at work I'm designing a large-scale system that will be susceptible to a certain kind of denial-of-service attack. By way of analogy, imagine that Gmail didn't bother to prevent robots from creating accounts. By the time the first human went to create an account all the reasonable combinations of the top 10,000 human names would have had already been taken, by robots. This would be very irritating to all actual human users.

Our problem is much more serious than simply losing human-preferred free email addresses. But it is, likewise, a case of preventing robots from soaking up a finite resource and depriving real humans of its use.

My approach to large system design is to always get security right first: you can never effectively retrofit it later. And the central question we keep coming back to on security is how to defend ourselves against robots. Our thinking has typically followed certain lines:

  1. To acquire a resource, a user must prove they are human.

  2. All users must have a registered account, so we can identify who is consuming the resource and only have to verify their humanity once: on registration.

  3. The user's account must be protected with a password to avoid a bot misusing a real human's account.

  4. Each account has a threshold of resource acquisition. If the threshold is exceeded then that account is temporarily blocked in some way.
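The threshold in step 4 can be sketched as a simple per-account counter over a sliding time window. This is an illustrative sketch only - the class name, limit and window are made up, not our actual design:

```ruby
# A minimal per-account acquisition threshold: each account may
# acquire at most `limit` resources in any `window`-second period.
class AcquisitionThreshold
  def initialize(limit, window)
    @limit  = limit
    @window = window
    @events = Hash.new { |h, k| h[k] = [] }  # account => timestamps
  end

  # Returns true if the acquisition is allowed, false if the account
  # has hit its threshold and should be temporarily blocked.
  def allow?(account, now = Time.now.to_f)
    timestamps = @events[account]
    timestamps.reject! { |t| t < now - @window }  # drop expired events
    return false if timestamps.size >= @limit
    timestamps << now
    true
  end
end

guard = AcquisitionThreshold.new(3, 60.0)
4.times { |i| puts guard.allow?("alice", 100.0 + i) }
# the first three acquisitions are allowed, the fourth is blocked
```

A real system would persist this state and add some back-off policy, but the shape of the control is the same.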

At this point in our thinking we're pretty confident that we've dealt with the risk of a robot creating an account and using that single account to soak up all our resources. We're also pretty certain we've dealt with the issue of a robot creating many, many accounts, using those accounts to soak up resources while staying under the threshold for each.

But. What about bot nets? And by restricting single accounts like this, haven't we just forced attackers to use a bot net? Attackers would want to distribute a bot across the Internet. Each bot would not use its own account; instead it would use the account of the human owning the computer the bot had infected. Once the bot is on the human's computer it can easily grab the credentials, with a key logger or by sniffing around in the browser cookies. In this situation our threshold control hasn't really stopped the attacker, but it has hurt the human. The effective threshold for the human is now much lower.

And it is on this point that our discussions tend to go around and around. How can we prevent bots (who may have acquired a human's account) without negatively affecting the human's experience and without placing prohibitive barriers to use in place?

Thinking about this issue tonight, I wonder if we're not completely wrong in this argument? If a user's computer has been compromised and is now part of a bot net, should we be trying to give that user a smooth experience at all? They've been compromised, shouldn't we identify that, inform the user and then attempt to lock them out completely? There's a question there about when we can let them back in, but I'll leave that now.

My central question is: should web applications actually, aggressively, make the experience worse for users who have been compromised? In the case of a bank the answer seems obvious. I suspect we're actually similar.

Wrath of a Mad God

Wrath of a Mad God
Raymond E. Feist

Pure crack for fantasy geeks, and about as high quality. I've been reading Feist since a friend recommended Magician to me when I was nine years old; in grade four, back in 1988. My friend's name was Paul Reid and that was 20 years ago now. It's also been a long time since I realised that I'm pretty much only reading Feist because reading Feist is what I do.

As his books get steadily worse that becomes a weaker and weaker reason. He does have some redeeming features: he doesn't forget where he put the plot; his sagas actually finish; he manages to avoid appearing a total right-wing fascist. After the disappointment of Martin and the betrayal of Jordan those are very good things to a recovering fantasy geek. He is still one of the reasons that I haven't completely given up on fantasy. And of course, Gaiman.

Why am I now so disappointed? His first three books (Magician, Silverthorn and A Darkness at Sethanon) were really great fantasy epics. Magician even managed that rarest of fantasy firsts: a self-contained, single, enjoyable novel. What was so enjoyable? A rich, consistent, well-thought-through world, with a deep and fascinating history. The sort of thing that makes Tolkien so popular. Those books sold well, and Feist proceeded to mine that world and his characters in countless sequels. And like the fools we are, we fantasy fans lapped those sequels up.

You may think you want the blank spots in the story filled in, you may think that those tantalising glimpses are only a fraction of the glory that is fully formed, but hidden, in the author's mind. But. You are wrong. The back story you build, the worlds you imagine around the glimpses? Those are the real joy in fantasy. Do not burn those worlds to the ground by demanding and reading endless prequels and sequels. Let the great stories stand alone.

Feist is a great example of this. It turns out that he didn't really have anything to surround those brief histories and as he writes more and more he's starting to change things. Sometimes for the better, but many times the things I've loved have died.

I see two things here: the world is not meant to change, even if it does make things easier for someone; and, you don't want to know your heroes too well. Even if they are only characters in a book.

Sunday 23 November 2008

The Worst Desktop Operating System. Evar.

I complain a lot about FreeBSD here and on Twitter and, thankfully, I am now about to stop using that horror on my desktop. But why horror?

  • In the world of desktop computers, anything that is not Windows, is niche.

  • In that niche, anything that is not Mac OS X is niche.

  • In that niche, anything that is not Ubuntu Linux is niche.

  • In that niche, anything that is not Red Hat or SUSE Linux is niche.

  • In that niche, anything that is not one of the commercial workstation UNIX operating systems, like Solaris, or AIX, or HP/UX is niche.

  • And down there, in that niche, in that fraction of a fraction of a fraction of a fraction of a percent of the world of desktop computers, FreeBSD is niche.

From a technical point of view it actually has quite a lot to recommend it. The kernel is very well tested and reliable. For a UNIX, it has generally made decisions for correctness over performance; something Linux certainly can't match. The userland is a consistent space, harking back through over 20 years of tradition. The ports system is a pretty good way to install and manage software.

But. In the whole world there are perhaps 15 people using it (no, not really). Any time you Google for any problems or issues, you'll find Linux, and just have to hope that you can figure out how to translate the instructions.

And this is to say nothing of the complete dearth of available software. To use FreeBSD is to always be several versions behind in Firefox. To have to compile Emacs from CVS source. To have to tweak the source code to your video driver.

FreeBSD may once have had the One True Filesystem layout, but not anymore. Linux is now nearly the king of that hill. Don't use FreeBSD as your desktop. You really don't care about how good the kernel is. You really do care about not having to compile video drivers. Worst Desktop Operating System Evar.

Thursday 20 November 2008

Still Alive

Yes, this is one of those irritating posts. Where a blog that you thought had quietly retired suddenly reappears with a post. A post that says basically nothing. A very self-indulgent post, just promising that there will actually be real work worth reading here again soon.

Why couldn't the blogger just leave us all in peace? Why this attempt to appear that he hasn't just gotten bored or too lazy to update? Why this empty post tantalising and teasing with a promise; only to disappoint with more deathly silence.

Yep, this is one of those posts.

But! I actually do promise to post something real soon. No! Really!

And, in a desperate attempt to appear trustworthy, here's a short overview of what's been going on.

  • Having switched from the horror of FreeBSD, I now have a brand new MacBook Pro as my primary computer. After nine years I'm finally being paid to use the platform I stayed loyal to throughout the dark years. Hopefully the new computer, well set up, will actually help me write more here. It got me writing this.

  • New project. Can't talk about it. Cool though. Has inspired some general problem solving that I can talk about though. There will be some technical recipes on here for the first time.

  • Briefly had a fish tank on my desk at work. It was very nice. The tank did well, but then I had to move desks. Probably worth doing, but you'd want to be more sure of where you were sitting.

  • Joined a book club. Read quite a few books. And yep, that means reviews. There will be some of those coming soon.

  • Still annoyed at various parts of my industry, enough to rant.

Hopefully, all that and more to be posted.

Monday 1 September 2008

On the Nature of my Damage

Recently I have realised that at a very early age my attitudes towards and interactions with computers were permanently damaged. Like all geeks, I first started programming in primary school. And like many geeks my age, the first computer I had to program was an Apple //e. My Dad had lots of books for the Apple //e, so I had a lot to work through. But once he got a Mac, I wanted to program that as well.

The seduction of more power, I guess.

Well, times were tough in the Northern Territory: the only book I could find even vaguely on programming the Mac was The Apple Human Interface Guidelines. The original edition, by Tog, from the mid-1980s. That was it. And this was, of course, long before general Internet availability.

What was I going to do? That was all I had, so that's what I read. Cover to cover. Twice.

The Apple HIG is a somewhat unusual technical manual. Instead of just documenting all the available possibilities, dispassionately and exhaustively, this book took a very firm position. There was a right way to do things and things must be done the right way. The HIG then set out to list the right ways and the wrong ways, with justifications.

This preaching about the true path was both low-level and high-level: as well as detailed instructions on how to place and label buttons, it was also about how to design whole programs for the smoothest and most consistent interaction with the user.

And that's where my damage was installed. That book didn't just encourage good UIs, it demanded them. And now it seems that I demand a lot from computers. Computers shouldn't be hard to use; in fact we shouldn't even notice that we're using them at all.

Now every time I have to do something just so the computer knows what's going on (like 'Save'), or I have to jump through a hoop because it's easier for me to jump than for the programmer to write their software well, I feel a deep sense of annoyance. It doesn't have to be this way, dammit! Computers are meant to free us from drudgery, to allow us more time to do the things we enjoy. Or, more cynically, the jobs we're more efficient at. Either way, doesn't matter to me. But, most of all, computers don't have to be this way. It isn't that much harder to do the right thing. We could do the right thing in the 1980s; we can do the right thing now.

As a programmer I could be frustrated and demoralised by the state of my industry. Maybe later. For now I choose to rant and rail against this, and fight. Much to the endless delight of my highly fortunate colleagues.

Where have all the photos gone?

I've stopped posting my photos on my blog, they go straight to my Flickr.com feed now. It's just easier to post to and have the photos look reasonable. Sorry Google, but Picasa just isn't there yet!

Anyway, here's my feed: overwatering.

And here some recent sets of photos that I kind of liked. Follow the photos for larger sizes and the rest of the sets.

boatshed (side)

farm cove sunset

dinosaur orchid

I will post other links to some other photos I like in the future, but if you're interested, probably best to subscribe to my Flickr feed.

Sunday 13 July 2008

after the quake

after the quake
Haruki Murakami

I'm in a book group again and this is our first book. Funnily enough when we all brought our picks to the first gathering there were two Murakami suggestions - the other being A Wild Sheep Chase. We chose after the quake as our first book (it was short and a short story collection - a slightly commitment-phobic book group) and A Wild Sheep Chase was pushed to the end of the list with a strong suggestion to find a substitute. And now the suggester has left the group! Oooh - scandal!

For all that, after the quake was fantastic. It's a collection of short stories, each following a single person's life after the Kobe earthquake. None of the characters' lives were directly affected by the quake: they didn't live in Kobe, they apparently didn't lose anyone from their lives - but for each of them the quake was there, this huge background event that shuddered through them all.

The writing is spare, brief, highly evocative and, ultimately, beautiful. Reading this very short collection was an unusual reading experience: it was relaxing, peaceful. There was no urge to understand what was going on, to read deeper - there was just a peaceful journey. Apparently Murakami is to be read very literally and that's how I saw this. It seems to be full of allegory and deeper intent, but I don't think that's what we're supposed to read. It felt like a series of beautifully told stories about ordinary people. People whose lives had been massively disrupted - even though nothing actually happened to them. And thinking on that, there is a strange undercurrent of guilt: as if they should not be feeling pain while there is so much suffering on TV.

I have a theory that there is something that connects together all the stories told in this book. An earthquake is a sudden event following a long build-up of pressure; after the quake the seismic fault lines settle into a new state, one that is hopefully more stable. Unfortunately, for us, it requires this sudden release to jump to the new state. This is reflected in all the stories: the characters' lives were flowing along and suddenly the earthquake kicks them into a new state. With an upheaval of their lives. The book as a whole is tied together by the final story, where the characters end up living the life they had always intended. It may sound corny, but it's hope born of the change. And, as it is told quite subtly, both in message and style, you don't feel the urge to cringe.

Some final comments: I read this immediately after Midnight's Children, and the difference in style was very striking. Throughout the book group this was a hit; even those who were initially skeptical (due to cat torture, or overly trendy covers) were won over. I'd recommend it, but don't expect to be grabbed by the collar and hauled along on a ride. This is a slow, contemplative book. Read it for the enduring feeling of peace.

Thursday 10 July 2008

Blink

Blink
Malcolm Gladwell

This book is just plain cool, and it's actually hard to say precisely why. Humans think and make decisions very quickly without knowing we do it, or even understanding how we can.

There are two immediate ramifications:

  1. If you know a field well, and I mean very well - you've studied it, trained in it, worked and lived in it - then your snap thought process, your 'Blink', is very valuable. You should trust it.

  2. If this isn't your field of expertise though, your brain will find something to react to, some stereotype you aren't even aware of and react to that. Frequently, that stereotype will be "I don't like that because it's different." In these cases your 'Blink' will lead you wildly astray. Don't trust it - it's hard, but dig deeper and take time.

A major flaw may have occurred to you: if you can't understand these instant reactions, how do you know which one you're having? Well, if you're honest with yourself, of course you know. Either you have studied something, or you haven't.

But that doesn't work well for the softer skills like reading people. Everyone thinks they're good at reading people.

And there's the Dunning-Kruger Effect waiting to bite.

So what can you do? Well, to start, read this. It's a truly fascinating study of people and how we think. And being aware of the decisions you make without thinking is actually a pretty powerful antidote for those times it leads you astray.

You've just met someone. He seems like a pretty good guy and you like him. Your powers of rationalisation will tell you that you like him because he seems confident but easy-going. You also liked his mildly self-deprecating introduction. And if this is social, great! Just go with it! But, if this is an interview and you're on either side of the table, stop and ask yourself. Is that all true, or do I just like him because he's tall?

Seriously. Read the book. Gladwell also wrote The Tipping Point, which I will definitely be reading.

Steve Jobs & the JesusPhone Will Save Us

Clearly Steve Jobs and the JesusPhones is the ultimate name for a band.

We've had mobile phones in our lives for quite a while now. First they were enormous, and only tradesmen had them. Then they started to get small, really small. So small you couldn't use them. And then they got bigger again: now swelling with countless features. Torches, cameras, pedometers. Some of the features stayed, but not many. Next was email, and that's been pretty popular. The Internet made its way onto our phones as well but, like video calls, didn't really go anywhere.

This Friday the iPhone will launch in Australia. And predictably people are going crazy. When was the last time you knew the launch date of a mobile phone ahead of time? Sure, most of the hype is because it's Apple and everyone loves Apple and isn't it so gorgeous and stylish and Oh My God I've just got to have one. Deep breath. But is there something else going on here?

The core function of a mobile phone is making phone calls. Well, yeah. But there have been countless other features rammed into them. Haven't some of these taken off as well? Yes. There is one that is on every phone in Australia, most of the phones in the world and used by the overwhelming majority of mobile owners; in some demographics more than phone calls: SMS. But SMS is just a very limited single-person to single-person version of online chat. AOL first released Instant Messenger back in 1997 and it's been huge ever since. IM, with presence, blocking, buddy lists, group chat and location mobility, is a far richer chat experience than SMS. So why don't you, yes, you reading this post, have an IM client pre-installed on your phone? Why hasn't SMS gone the way of SIM card addressbooks (remember those?) and been completely replaced by IM?

Firstly though, why is that an interesting question? As I said, we've been carrying mobile phones for a long time. And in that time phones have progressively become more and more powerful. Sure, they've lagged in the power stakes behind standard computers, but I think you'd be surprised by how little. The original iPhone was equivalent at release to a four year old Mac laptop. Four years! I was writing interesting software (including a chat system) on 18 year old Macs! So clearly phones are powerful enough. How come then, given that we have these mini-computers with us more than our real computers, there aren't interesting applications for them? How come it's still phone calls and SMS? This is especially frustrating as these powerful devices have permanent connections to the Internet, everywhere! Something I could only dream of when I was first writing software 15 years ago!

People have tried. Shrinkwrapped application developers, vertical integrators and shareware developers have all tried to make a living writing software for phones. And one by one they've given up. And after much thinking the industry as a whole has come up with a batch of reasons why there has been no success. A lot of these reasons boil down to: there is no killer app. There isn't one thing that people want to do with their phones other than make calls or send texts. And I bought that line too. Until I thought of SMS and IM.

So why no IM? Well firstly, you are not Nokia's or Ericsson's customer. You are their product. Telstra is their customer and you are being delivered to Telstra so Telstra will buy mobile network gear off Nokia. Interesting. It may not be true any longer, but Nokia used to make more off that gear than their phones. The phones were a loss-leader to drive sales of equipment.

Why the 160 character limit on SMS? Because SMS messages are squeezed into a gap in the control sequences that phones exchange with the towers to remain connected to the network. In other words, SMS messages are sent anyway, all the time, even if you haven't put anything in them. They are just part of the network! So why do the telcos charge 25c per message? Because they can. Oligopolies are cute like that.
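That packing is easy to check: an SMS payload is 140 octets, and the default GSM alphabet squeezes each character into 7 bits, which is exactly where the 160 comes from. A quick sketch of the arithmetic:

```python
# An SMS payload is 140 octets (1120 bits). The default GSM
# alphabet packs each character into 7 bits, hence 160 characters;
# Unicode (UCS-2) messages use 16 bits per character, hence 70.
PAYLOAD_OCTETS = 140

gsm7_chars = PAYLOAD_OCTETS * 8 // 7    # 160
ucs2_chars = PAYLOAD_OCTETS * 8 // 16   # 70

print(gsm7_chars, ucs2_chars)  # 160 70
```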

Imagine how many text messages are sent every day. Think about how much is charged per-text. All of that income is pure profit for Telstra and the other telcos. That is an enormous revenue stream, uncontaminated by overheads. That kind of revenue is addictive. And here is the crux of the problem with mobile phones: the telcos became addicted to their existing revenue streams and then, with the handset manufacturers as their willing accomplices, set to work on completely controlling and stifling mobile phones as a platform.

Writing applications for phones is incredibly difficult. I don't want to go into the problems here, but the two main issues are that there are half a dozen different platforms (with inconsistent implementations of the same platform across devices), and that end-user distribution and installation are essentially impossible. This situation did not happen by accident though. The telcos strongly encouraged it to emerge. Why? Because they are terrified of becoming just a utility that can only charge for data flowing down the pipe. It may be too late, but this was a very short-sighted fear.

Apple and the iPhone are changing this world. Not because Apple are out to save the world, not because they only care about the user experience, not because their phone is pretty. Nope, that's all hype. The iPhone changes things because for the first time, you the phone buyer are actually the customer of the handset manufacturer. Apple is not trying to sell network equipment, Apple is trying to sell phones. And they decided that to sell phones the phone has got to have a great browser. And the ability to install other applications. And somewhere to buy those apps from.

You are buying the iPhone and you're liking it. Or you're not buying it, but those particular features sound pretty good. Why can't my Nokia have those? And pretty soon the telcos' worst fear is realised: they are just a pipe through which we ship packets. And I can guarantee when that happens that 160 characters' worth of IM conversation will cost a lot less than 25 cents. Try 0.03 cents. That's 833 times less! At today's rate, no demand discount applied!
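For the sceptical, the 833 is just the ratio of those two prices (and the 0.03 cent figure is my rough estimate of what 160 characters cost as plain IP data, not a measured price):

```python
# The two prices quoted above, in cents, for shipping 160
# characters: as an SMS vs as plain IP data.
sms_cents = 25.0
data_cents = 0.03   # a rough estimate, not a measured price

print(round(sms_cents / data_cents))  # 833
```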

So, relegated from giants of the economy to the likes of water and sewage for the telcos. But, it didn't have to be this way. As well as providing the network, telcos had something else: a billing relationship with the consumer.

What if, when browsing Amazon on your phone, buying something didn't require entering any credit card details? Instead, the web site communicated directly with your phone, used a rolling key from there to sign the invoice, and then billed it straight to your phone bill? Gee, sounds pretty convenient to me. And a hell of a lot more secure than handing out credit card details. This can only work with phones, and telcos have only a short window remaining to make this happen before something else comes along. They had their chance to replace the credit card companies. But because of their addiction to their immediate (but ultimately doomed) revenues, their willingness to screw their customers and their stifling of an entire world of technology for almost two decades, they appear to have done themselves out of a future.
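To make the rolling-key idea concrete, here's a minimal sketch of how the signing could work, using an HMAC as the signature. Everything here (the key, the invoice format, the function name) is illustrative; no telco actually shipped this:

```python
# Illustrative sketch of the rolling-key billing idea: the phone
# shares a key with the telco and signs each invoice with it, so
# the merchant never handles a credit card number. Key rotation
# and the transport are hand-waved; all names here are made up.
import hashlib
import hmac

def sign_invoice(rolling_key: bytes, invoice: bytes) -> str:
    """Sign an invoice with the phone's current rolling key."""
    return hmac.new(rolling_key, invoice, hashlib.sha256).hexdigest()

key = b"this-weeks-rolling-key"       # rotated by the telco
invoice = b"amazon|order=12345|AUD 29.95"
signature = sign_invoice(key, invoice)

# The telco recomputes the HMAC to verify before billing you.
assert hmac.compare_digest(signature, sign_invoice(key, invoice))
```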

I, for one, shall not mourn their passing. And do not mourn for Nokia either. Brainless henchman is not a noble calling.

Sunday 6 July 2008

Midnight's Children

Midnight's Children
Salman Rushdie

Dense, detailed, loud, intense and, in a way, unrelenting. The world is swirling around you and you've got no idea where to look but you want to look everywhere right now! I've never been there, but this book is what I imagine India is like. I don't think that's unreasonable either as Rushdie seems to be wanting to tell the story of India's birth as a country.

This is another of those literary 'magical realism' novels that I find much easier to describe as fantasy. There is definitely a lot of apparent fantasy in here, but the story has much more to it than those parts.

For me, possibly the most interesting aspect was the realisation that Saleem Sinai is an unreliable narrator. This was just a suspicion at first: he was so desperate to defend everything as true that I started thinking he doth protest too much. And once that seed was planted it became easy to read everything two ways: all the fantasy that Saleem claimed could be explained entirely prosaically.

So I read the book with two interpretations. I don't know which is true, but I do know that for all his transparency Saleem is one of the more intensely realised and interesting characters in fiction.

Ahh, audiophiles

I've always enjoyed audiophiles; it's pretty hard to find a single group with so much rich potential for mockery. I've laughed through all their talk of high quality digital cables (they haven't heard of error correction, perhaps?), and sniggered through all their detailed discussions about bit rates while the Nyquist-Shannon sampling theorem remains a mystery unto them. (What? Perhaps the CD sampling rate of 44.1kHz being slightly more than twice the highest human-audible frequency is a coincidence?)
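For the record, the Nyquist-Shannon arithmetic is this simple (the 20kHz hearing ceiling is the usual textbook figure):

```python
# Nyquist-Shannon: perfect reconstruction needs a sampling rate of
# more than twice the signal's highest frequency. CD audio samples
# at 44.1kHz, so it can represent everything up to 22.05kHz,
# comfortably above the ~20kHz ceiling of human hearing.
cd_sample_rate_hz = 44_100
hearing_limit_hz = 20_000

nyquist_limit_hz = cd_sample_rate_hz / 2   # 22_050.0
print(nyquist_limit_hz > hearing_limit_hz)  # True
```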

Anyway, for all that I've always just thought it was funny: Ahh, aren't they cute? No knowledge of information theory at all, but here they are arguing about transmitting bits. Still cute though. Just a geeky hobby, kind of like theology. Theologians and audiophiles arguing about things that aren't really going to have any effect on their lives, that they don't understand, and in the end are all indistinguishable.

And I've always assumed that on some level audiophiles knew just how ridiculous they were. They'd never admit it, but there was always something in there that would prevent them from doing something really stupid. But, no!

Behold! The $500 Ethernet Cat-5 cable! And it's not even blue, like a proper one! And they're available used! Some idiot actually bought one of these!

Oh, and please, please, please can an audiophile attempt to defend this? I won't respond, but it's always amusing to listen to.

Saturday 24 May 2008

Time for a New Desktop

I'm now in the process of switching my main desktop from Windows XP to FreeBSD. And man, all I can say is that if you're still on Windows or a Mac, now's the time to go Free baby!

My Windows desktop has two DVI LCD displays, and a Microsoft (hiss!) Natural Keyboard. I'll still be using Windows reasonably often but I don't want to be rearranging keyboards, mice and monitors on my desk constantly. An extra monitor on my desk, and a USB-DVI KVM switch, and all that's nicely arranged. Sure, when I switch to FreeBSD sometimes the mouse is all messed up: moves the wrong direction, won't send clicks, buttons confused. But this doesn't happen often, and if I just switch back to Windows a couple of times, it sorts itself out.

After I'd recompiled and installed my editor, and installed some fonts from source, the next thing to get right was having the two screens display different images: they were defaulting to clone mode (the same image on both screens). That was pretty easy to fix up in the end.

Oh, but before I did that there was also the kernel that needed to be recompiled, because I needed to be able to run Java. Sure, I've only got JDK1.5, but that's pretty close to up-to-date, and I really don't know why you'd need anything more recent.

Somewhere in here I had to get some assistance from the sysadmin. While installing those fonts up above, one of the packages had complained that an existing package hadn't been installed correctly; it very helpfully told me how to de-install and re-install it. Nice. That package had a 'delete' dependency on KDM, the login window for KDE, and on re-install KDM didn't come back, so I had to get the sysadmin to reinstall KDE. It hardly took him any time at all. People who design packages really should get their dependencies correct.

So, I've got my desktop working again, my editor running, and some nice TrueType fonts available. Emacs can't see any of those TrueType fonts, but I'm sure with a few more recompiles with different configure options it will all come right. The one remaining problem was getting FreeBSD to use both screens in something other than clone.

Configuring Xinerama wasn't the right way to go: the web recommended that and a colleague already had that working. But it wasn't right for me. I had to download the source code for my video card driver from nVidia (after I found someone with the same machine as me, running Windows, and asked them what video card they had.) I hacked the source to the driver, as it complained that FreeBSD 7.0-CURRENT wasn't supported, although I was running FreeBSD 7.0-STABLE. A few small changes to the source and it compiled straight away. Then I ran the nVidia config tool, and restarted. That very nicely controlled both monitors by turning one off and the other on. But right there on the nVidia site were instructions on how to enable TwinView. Perfect! After I'd rewritten /etc/X11/xorg.conf based on their instructions and what I knew about my video card and monitors, I restarted again.
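For the curious, the TwinView settings live in the Device section of xorg.conf. A minimal sketch of what that section can look like (the identifier, orientation and resolutions here are illustrative, not copied from my actual config):

```
Section "Device"
    Identifier "nvidia0"
    Driver     "nvidia"
    Option     "TwinView" "true"
    Option     "TwinViewOrientation" "RightOf"
    Option     "MetaModes" "1280x1024,1280x1024"
EndSection
```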

And... perfect! Two independent monitors that I can arrange windows on to my heart's content!

So, I really don't know what you're waiting for, you really should dump those Macs and Windows computers and switch straight to FreeBSD! You have no idea how good the kernel is. Sure, it isn't as good as Solaris, but it's miles better than Linux.

Monday 12 May 2008

The Mythical Man-Month

The Mythical Man-Month
Fred Brooks

Everyone knows this book; everyone knows the core points and Brooks' recommendations and laws, even if not everyone has read it. This is one of the few true classics of computing. I'm not going to waste anyone's time repeating those assertions.

The Mythical Man-Month is now, unfortunately, hilariously anachronistic. And the anachronisms are starting to damage the book: the core ideas are getting buried beneath 40 years of development technology advances. Engineers each get their own computer (or even two!) now; we don't need to share debugging time anymore. Surprisingly, I'm a little hesitant to recommend this book now. Beneath the anachronisms there is plenty of good advice: the point he makes about planning your debugging time and keeping track of what happened afterwards still applies, for example. But you have to be prepared to dig, to see through all that to what he's really trying to say. If you do choose to read it, skim the original parts and dwell more on his 20th anniversary additions.

There are two things I will say:

  1. 'Build one to throw away' is wrong. Brooks comes out very clearly against it, even though he originally popularised it. Don't do it; plan to prototype and grow organically. This suits me just fine and leads to:
  2. Brooks is the original agilist. Time and time again the things he values are competent, pro-active people and high-visibility, high-efficiency, fast-turnaround development processes. These are the cornerstones of the agile family of methodologies.

Wednesday 9 April 2008

Dreaming in Code

Dreaming in Code
Scott Rosenberg

Rosenberg is one of the co-founders of the online magazine Salon.com, a magazine I've been reading on and off since 2000. After a bad experience with internal software development, he became interested in how, after more than 50 years' experience, we still find, in the words of Donald Knuth, that 'software is hard.'

In the interests of disclosure, I will point out that I got a copy of this book for free from the author after he offered copies to bloggers if they agreed to mention the book. So, this is me (gratefully, happily) fulfilling my end of the deal.

This book is a story, a story of a collection of great developers attempting to write an amazing piece of software. It is not a retrospective story either. It begins shortly after the project has started, but unfortunately couldn't follow the project to completion: the project still isn't done yet.

The Open Source Applications Foundation (OSAF) was formed by Mitch Kapor (of Lotus 1-2-3 fame) to build a new form of personal information management software: Chandler. Dreaming in Code tells the story of how they tried and (so far) failed. With many informative diversions into the theory of software engineering in an effort to discover why software seems to so consistently be delivered late (if at all) and buggy (if it even gets close to meeting the original vision.)

Rosenberg has done an impressive amount of research into the theory. No appropriately read professional software engineer will find any revelations here: this is all stuff you should already know. But Rosenberg is not claiming to invent or discover anything new. In fact, he goes out of his way to disclaim that there is any original research contained in this book. He is a journalist, and as a journalist he has produced a detailed literature survey. The summary, in two well-written chapters, is useful even for experienced software engineers. I'm sure non-software engineers will find this all very interesting, assuming they are interested in how software gets written.

For software engineers, there is something very interesting here. The internal mechanics of a team building a piece of software are a very secret thing. Companies are secretive, and both companies and open source projects want to protect their reputations. Most software engineers only ever see how a project they are working on proceeds, and then they're too close, plus there's no nice summary of what happened. Dreaming in Code is something very valuable to our field: an accurate story of the human side of a software development project. With both the clarity of distance and the accuracy of events recorded at the time they happened.

It was simply astounding how familiar this story was. OSAF and Chandler get some things spectacularly wrong, but then in other cases they do things very right. It's easy to point at the things they stuffed up and claim that you would never make those mistakes, but it's a little too easy to ignore the things that were done right. The good decisions end up forgotten and never noticed.

So what did I think they got wrong? Firstly, and by far the biggest: Analysis Paralysis. This is a mistake that I've seen projects make over and over again. In fact, I'd go as far as to say that it's more common to see this affliction than not. Projects just can't seem to make a decision, stick to it and then start building. The fear of the future locks them solid: what if we make the wrong decision? What if someone blames me for the wrong decision? In the end, if the decision was so egregiously wrong that it can be traced back to just one person, then everyone else around at the same time is just as culpable for allowing that decision to happen. Everyone, please just get in the habit of making decisions. Rely on those around you to spot a bad direction: that's what they're there for.

Second big mistake for Chandler? People. No surprises there, it's the big and obvious disaster. If you believe that software is hard and you care about your software then clearly you should only work with the best. That's easy to say, but pretty hard to achieve. And in Chandler's case the people problem exhibited itself in a couple of interesting ways. Before I talk about the people problems that I saw Chandler as having I need to say that any attempt to judge is based solely on what is described in this book. I (obviously) didn't work on this project, I don't know the people, there's only so much I can say. Having said that, I'd also like to say that people problems kill most projects and if we hope to advance we need to get over this fear of talking about the problems with people. Maybe then we can find some solutions. Or maybe that's just an overly analytical geek talking. Why can't everyone just be a nice reducible puzzle, dammit?

There was a permanent employee of the OSAF who was hired quite early and ended up in quite a senior technical position, and this employee was unfortunately the very soul of Analysis Paralysis. Any story about some interminable technical discussion has this particular employee at its heart. He was extraordinarily conservative and wary of making decisions, but ultimately many of the technical decisions came down to him. After a few of these stories, I was left shouting 'Do something about him!' at the book. And I've seen precisely this problem face-to-face too many times to ignore. Once again people, make decisions! You probably won't get it too far wrong. In fact, in this case, the inability to make a decision led to one of their few definite cases of over-engineering.

The second people problem: OSAF planned to run themselves as an open source project that encouraged volunteers. Predictably enough, the only people who could volunteer for any extended period tended to be ex-Apple and -Netscape employees who had already made their fortune and no longer needed to work for an income. From the outside these volunteers presented an interesting problem. They were all brilliant and had done amazing work in the past, but they didn't see this project as any kind of meal ticket. There was no drive for them to get this project finished and out the door. In the end, they come across as people partially on the side-lines, commenting endlessly but never really pushing the project forward. Instead, they were offering endless advice on how things could be done better. Software projects are always cursed with people like this; encouraging volunteers just seems to guarantee it.

But back to the book, I think I've already made clear that I really enjoyed it. I also think the opportunity to see inside another project is infinitely valuable for software engineers and software engineering. Aside from all that, I will say that sometimes the brief newspaper style of the book was a little irritating. On occasion I felt a topic or anecdote could have done with some more depth before moving on.

Software engineers will get a lot out of this; non-software engineers who care about how software (upon which our civilisation is built) is written will get even more out of it. Oh, and speaking as a professional programmer and wannabe-amateur writer-slash-blogger: software is definitely written - then organically edited into shape.

To finish on a positive note, something Chandler got right? Python. Choosing to write their software in an expressive high level language was clearly a win. There is no question now that Python is fast enough for desktop software, and that's really the only doubt. Hopefully Chandler can be used as an example of choosing better languages.

If you choose to read this, or not, in the end people! Make a decision!

Monday 7 April 2008

Atonement

Atonement
Ian McEwan

This is simply an excellent book. It doesn't go in for any special literary tricks, there's no special effort to make some obvious point: it's a really good story, told very well. There are some intimations of other layers, but feel free to ignore those. One thing that this book does pull off is an unsympathetic main character who I actually managed to not hate in the end. I didn't want to hurl the book across the room; always a worthwhile achievement.

A couple of points: I always find it vaguely amusing to see novelist characters in books written by professional novelists; even the best write what they know. The characterisation and the imagery are what really grabbed me. I was there on that hot, summer day in 1935. I knew Robbie Turner, and I knew Cecilia Tallis.

There is much that could be said about the effect of fantasy and the blurry line between a clearly seen artificial vision and reality, especially in relation to the powerful imagery that provides this story.

But instead, I'll just say this is a great novel. Read it.

Sunday 6 April 2008

Computers Hate Me

It's true, they do. Possibly something of a disadvantage in my chosen career, but I get by, carefully. Don't believe me? Hear my tales of woe, come cry with poor, poor me.

April, 1997 - Still at University, just started my second year of a computer science degree. I save up the cash and buy the first computer of my very own: a Performa 6360. I get it home; I set it up, including copying all my work off the family computer: there must have been 80 megs of data! I'm about to go downstairs and delete everything off that shared computer, when I stop. Nah, I'll do that tomorrow. I shut my brand new Mac down, I go to sleep. I wake up in the morning and one of the first things I do is turn my new Mac on. To be confronted with the dreaded flashing disk icon. My Mac couldn't find a disk to start from. Uh oh... Even booting from a system CD showed nothing. In the end this wasn't even a disk crash, there was a bug in the disk driver. It completely lost everything meaningful off the disk. Nice. Good thing I hadn't deleted my backup. Words to live by.

July, 1998 - For our third year project we decided to write a TCP peer-to-peer IM system for Apple's upcoming new OS: Rhapsody. The beta didn't run on my 6360, so I sold that and bought a Power Mac G3, one of the original (or so I thought) beige ones. Turns out it wasn't quite 'original' enough: I scored a motherboard rev that wouldn't boot the developer seed of Rhapsody that I had access to. Argh! We still got our IM system working: we wrote it using the cross-platform environment Apple released for Windows NT. Everyone else in the class wrote Access databases.

March, 2004 - Time to finally upgrade the now ancient G3, so I order an iLamp G4. It arrives, complete with a nice line of purple pixels all the way down the screen. Fortunately, it was declared DOA and a complete replacement was sent.

Hmmm... all Macs so far. Why do I keep buying these?

July, 2005 - We've now started a startup. We know .NET, so that's what we're writing it in. I buy my first Windows PC - a Compaq Presario. It came with XP Home, so I buy an upgrade to XP Pro at the same time. At home that night, running the upgrade - and it just stops. No upgrade for me. And even better, it deleted the old XP Home installation, leaving me with an unbootable PC. Sound familiar? I got my computer back with a clean installation of XP Pro. Except that didn't include any hardware drivers at all. Instead of just using VGA 640x480 resolution on my 19in LCD monitor, I spent the evening finding, downloading and installing all the right drivers. That is also my only experience of trying to convince someone I had willingly bought software from that I was not a criminal. Thanks, Microsoft Software Activation. This computer lasted barely two years before a very fatal disk crash ended that incarnation.

So, it's not the computers, it really seems to be me. Those are the only computers that I've bought. Seriously, no other computers were hidden away in there. I've also had zip drives inexplicably and suddenly give the click of death, lamps leave scorch marks on my desk, monitors catch fire (really! there was smoke), printers refuse to power on and USB devices make my machine reboot right now. Maybe I just have a special relationship with hardware? I've definitely got a reputation for it... But, I'm a software guy, and there are uncountable software disasters tucked away in there.

Now, it's that time again: I need to replace my four year old iMac. I'm planning on getting a laptop, hopefully a MacBook Pro. Doesn't sound dangerous to me, what could possibly go wrong?

Friday 4 April 2008

The Player of Games

The Player of Games
Iain M. Banks

That 'M' is important and very distinctive. This is a completely different author from the Iain Banks of Espedair Street, even though both books list all the Iain Banks and Iain M. Banks titles. No? Don't believe me? Well, yeah. His fiction is published as Iain Banks and his sci-fi as Iain M. Banks. Strange, but that's the way he does it.

His sci-fi is some of the best I've read since Philip K. Dick. And as he doesn't produce anywhere near as much as Dick, it averages a lot better. Though without some of the crazed inventiveness. But that sounds like damning Banks with faint praise: his sci-fi really is that good. There are fantastic ideas and a very plausible feel to everything. He doesn't shoot himself in the foot by trying to explain how everything works: the technology is just there and it works.

But his strongest points are actually his characterisations and story. You get involved, you believe, and most importantly, you care. And on top of that, the story is usually about the growth and life of a character - sometimes a descending spiral with no apparent way out; sometimes a broadening and opening of a character you initially dislike.

This book is fascinating for the first real peek inside the Culture, instead of the view of a mercenary looking from the outside, in.

Tuesday 1 April 2008

Orlando

Orlando
Virginia Woolf

Another literary fantasy novel. After the disappointments of Jordan, Martin and, most of all, Feist, I'm happy to be looking to Marquez, Updike and now Woolf for my fantasy fix. As I've said before, any story is by definition a fantasy, so why restrict your scope to only the events that can take place in this prosaic world we are trapped in? Sure, there's a place for the great everyday; but fantasy can be so much fun!

And given how dry Woolf is, it's surprising to see how fun Orlando can be. There are two key elements of fantasy here: Orlando (the character) lives for a very long time, and there's a second question of gender... The age question is handled interestingly. There's never a discussion of this, Orlando just keeps on living, aging at a different rate to everyone else around.

This disconnect from reality creates a dreamy, flowing world: the story reads like a lyric poem, drifting from image to image guided by your narrator, Orlando. And then towards the end it starts to coalesce around but two points. But slowly, like a willow emerging from the mist. Left wondering if those points were always there, you float past.

Monday 31 March 2008

Startups as the Future of Technology

It's very fashionable in geek circles to attack Paul Graham at the moment, particularly after his essay You Weren't Meant to Have a Boss. I've been reading his essays for a few years now and I've wavered between agreement and an undefinable sense of unease. Now I believe I can finally pin this down.

The central point of Graham's Boss essay seems to be that over a certain size organisations become rigidly hierarchical, and once the hierarchy sets in the creativity of programmers is significantly and fatally constrained: over a certain size an organisation will be incapable of producing original software. This continues a theme running through much of Graham's writing: startups do the interesting work, and software development will migrate exclusively to them.

Well, I think that's something I can disagree with. Of course, it's pretty obvious that I have a vested interest. An iconoclast like Paul Graham will always get the most vicious response from those he seeks to help. Allow me to give my personal background, in the interests of disclosure.

My entire career has been in software development. In my first job out of Uni I fell into the 'hero' programmer archetype. In every job after that I've been in some form of senior position: tech lead, architect, team lead. I've also been an independent consultant, co-founded my own startup (we failed; be very careful about selecting your co-founders) and now I'm working for a startup. Yep, as an employee with a boss and all.

I spent some time this morning going through all the startups listed on the Y Combinator website, trying to classify each one according to the current big themes in web sites.

Social Networking: Reddit, Loopt, Flagr, LikeBetter, JamGlue, Scribd, I'm In Like With You, SocialMoth, Anywhere.FM, Disqus, Reble, AddHer, Inkling, Draftmix

Advertising & Sales: ClickFacts, Adpinion, Bountii, Octopart, Auctomatic, TextPayMe, TipJoy

Apps on the Web: Snipshot, Wufoo, YouOS, Thinkature, Weebly, Buxfer, Heysan, Versionate, Fuzzwich, RescueTime, 8AWeek

Other: Virtualmin, Justin.TV, Xobni, Webmynd, Heroku,

Dead: Shoutfit

The 'other' category is the interesting one: into that bucket fall a server admin dashboard, a web-based TV channel and a plugin for searching Outlook email. But there are far fewer of those than the social networking and web-based desktop application startups.

I'm sure many, particularly the founders, will disagree with my classifications. But, these are mainly right, especially if you read 'Social Networking' as 'Social Networking around Common Interest X.' And in the end you'll find the exact classification is not important.

All of these startups share a few things in common. They were all launched quickly and they're all pure software development, often running on someone else's ecosystem. There is a place for development like this, but if this is the future of computer science I believe the field will be significantly poorer for it.

I am working for a company that by pretty much every definition is a startup. By the time we left stealth mode in March last year the company had been around for 13 years and had grown from a core group of computer scientists to a company of over 300, including chemists and physicists. Oh, and we invented a new type of printer. A startup like ours simply doesn't fit into the Y Combinator model. We also don't fit into the small company with no bosses model: building hardware takes time and a lot of people; you simply can't avoid either.

This is my concern. Is all future computer science productisation and development really going to be the latest cool ad-funded mobile social networking site for parrots? Because, excuse me if I'm not excited by that future. I am still excited by the potential of computing and the Internet in particular, but that potential is better served by longer-term thinking and grander plans than by refinements of what everyone else is doing.

This criticism may actually run deeper. A continuing trend in computing is to make programming easier for all. This has had the effect of pushing some tasks out of the realm of the programmer and back to expert users. This has been a good thing: users have more control and programmers are free to work on interesting problems. The web has also been fantastic at improving human-to-human communication. Recently, the potential of the web as a mechanism for computer-to-computer communication has become more apparent. Many of the Y Combinator startups exploit this very effectively: improved experiences and convenience by combining, say, the information on eBay with the blogosphere. It also appears that these startups are surfing a wave: the gap between technology becoming easy to use and technology becoming easy to program. In other words, I suspect this style of startup is not long for this industry. A historical aberration, an automatic arbitrage opportunity for founding companies, as it were.

This is not the end of startups, of course. There will always be startups; however, there will have to be some interesting, risky and difficult technology behind the curtain. Originality will once more be prized. In this world, You Weren't Meant to Have a Boss will make a lot less sense.

Personally, I fully expect to do the startup ride at least once more. And I'm looking forward to doing that in a world that once more demands innovation rather than just another social network. After all, I really do want to add something to the world and I just don't see that happening with late noughties startups.

Oh, and if you also want to work on world changing, original technology, my company is hiring. Love web technologies, think you have what it takes to work for Google, but aren't excited about working for a company of 10,000? Want to work in Sydney, Australia? Beautiful beaches, summer all year long... Send me an email, giles dot alexander at Google's-free-webmail dot com.

Sunday 23 March 2008

Espedair Street

Espedair Street
Iain Banks

A very good writer, his sci-fi (under the name Iain M. Banks) is consistently original, but his non-genre fiction is also very good. Dead Air is worth reading for the head-butting alone and The Wasp Factory is bizarre, unexpected and simply amazing.

The strength in his fiction is the characterisation. Danny Weir, in Espedair Street, is a great example: a washed-up 70s rock star who has managed to annoy and drive off all his friends, now brooding self-pityingly in a stony mansion in Glasgow. But you're introduced to him, you hang out with him, you drink with him and you get to know him, know him well. Though he spends the book going over everything that's gone wrong in his life, though most of that is down to his amazing ability to always make the wrong choice, and though it may be hard to listen to a hyper-rich rock star complain about his past, it doesn't matter: you know him and, ultimately, like him. Enough to hope he finds some way out.

Oh, and the book manages to frequently be damn funny, as well.

Tuesday 18 March 2008

The Timeless Way of Building

The Timeless Way of Building
Christopher Alexander

For the past year or so, this was my bus book. That's a surprisingly long time, and it probably shouldn't have taken me that long to read. Late last year, about 50 pages from the end, I paused in my reading; and then took several months to pick it up again. This seems unfair to the book: it deserved a much more coherent read than that. Though, the ideas are different enough to also benefit from a considered read. I'll pick this up again sometime, and I promise to read much faster that time. Anyway.

One sentence summary: this book will forever change the way you look at and think about buildings, towns and architecture.

Alexander firmly believes that modern planning and building practices are bankrupt and can only result in inhospitable, unwelcoming cities and homes. A belief that seems to be firmly borne out by most urban planning since the Second World War: just look at the damage Harry Seidler has wrought on Sydney for an example close to home. This book is a polemic, a grand rant against the current state of his own industry and art. A work in the tradition of many a genius's (and quite a few loonies') Let's Blow Up the Universe screed. So, genius or loony? I've probably already given away my opinion on that matter...

Ultimately, it would not do this book any justice to attempt to briefly summarise what it has to say.

But what the hell, I'll give it a shot anyway. Alexander's central thesis is that there is a shared quality amongst those towns and buildings where people feel most at home; a quality independent of culture, climate and history. He also believes that this quality can be easily achieved, by any person who chooses to build. It is a matter of recognising the forces within the people who will use the building or site and then balancing those forces with the forces intrinsic to the specific location and society. He even outlines a prescription for achieving this balance: a collection of patterns to duplicate in design, planning and construction, with instructions on how to combine these. A language of patterns to construct our built environment.

Unlike many other polemics, this is highly detailed and descriptive: it describes the quality to achieve and then gives instructions on how to achieve it.

If you live in a large city in Australia, it'll be pretty obvious while reading this book that this is not how building is done. First, Australian building practices place the car as king of all. Any building or neighbourhood must be designed for the maximum convenience of the car: people are a distant second. Second, Australian building practices harken back to some long forgotten European past: everyone wants a little brick English cottage, though nothing could be more generally inappropriate for our climate. The Queenslander is not the standard archetype for Australian residential building unfortunately.

The current popular obsession with being 'green' is driving people to a certain superficial realisation about the car. But that is only a symptom of a far deeper problem. Loudly proclaiming that cars are evil and must be disposed of is never really going to achieve anything. And that sort of unbalanced (in the forces sense) thinking will inevitably lead to other problems. As much as I'm a fan of the specific remedies proposed in Jan Gehl's research paper into Sydney's CBD, I do feel some uneasiness.

The pattern approach that Alexander talks about is intended to completely avoid unbalanced forces. He regularly uses cars in his discussion of patterns. They are real, they are valuable and they're not going to just disappear. A central point of these patterns is that they're not something Alexander has devised as a new architectural '-ism' to imprint his vision on the world. These patterns are things that arise naturally, given the way all humans want to live. Growing organically out of a combination of the people and their surrounds. There is a sequel to The Timeless Way of Building, A Pattern Language, that acts as a catalogue of the most important patterns that Alexander and his colleagues have observed in successful towns and buildings.

In the US there is a growing style of design called 'New Urbanism' that attempts to encourage the buildings and towns that Alexander commends so highly. It is interesting to note that in Europe that name is largely unused, people preferring to use 'The Way Towns are Designed' instead. It is also interesting to note that in Australia, we have neither.

Finally, why did I, a software engineer, read this book? To the surprise of many architects, Christopher Alexander is very well known in the field of computer science. In the late 1980s his work was discovered by software engineers and his concept of patterns was co-opted. No serious software engineer can possibly be unfamiliar with the world of design patterns: named rules for particular structures of code that solve certain problems. It's my opinion that, while initially off to a good start, the modern Design Patterns movement has completely missed the point of Alexander's original teaching.

His intent was not to catalogue an exhaustive set of patterns that may be thrown at a problem until a solution emerges. His intent was to define an interlocking language from which you can select appropriate terms to grow a solution. In his case a building or town, in my case a software system. Modern design patterns seems to ignore the essential organic growth aspect of a pattern language, and instead seems to focus on cataloguing. An unbalanced approach.

Monday 17 March 2008

What is this Property?

I'm not a mathematician, just a computer scientist with an interest in maths, so please excuse the simplifications and inaccuracies in this. I'm going to describe this with some rigour, but I'm bound to get things slightly wrong; please bear with me.

In maths, a binary function can be defined as a relation between the members of two sets, S and R, that produces members of a third set T (that is, f : S × R → T). Looking at it another way, the set T is defined by the function. Some functions, taking two arguments from the same set S, always produce members of that same set S: the set is closed under the function. Addition across the natural numbers is an example: for any two numbers greater than 0, the sum will always be a number greater than 0. There are many functions that behave like this.

Functions have properties. A property describes a rule that a function obeys for given sets of parameters. From a mathematics perspective, these properties are interesting. For example, addition across the natural numbers is commutative and associative. Commutativity means the order of the parameters doesn't matter: 2 + 3 = 3 + 2. Fairly simple and obvious, right? Associativity means the grouping doesn't matter: 2 + (5 + (6 + 11)) = (2 + (5 + 6)) + 11. This is interesting because once we know that a function is associative we can regroup its applications without changing the meaning: this is useful in proofs.
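To make the distinction concrete, here's a quick Python sketch (the numbers are arbitrary, mine rather than anything canonical): commutativity lets us reorder arguments, associativity lets us regroup a chain of them.

```python
# Commutativity: swapping the two arguments changes nothing.
assert 2 + 3 == 3 + 2

# Associativity: regrouping a chain of additions changes nothing.
assert (2 + 5) + (6 + 11) == 2 + ((5 + 6) + 11)  # both are 24

# Combined, they justify freely rearranging a sum, as in a proof.
xs = [2, 5, 6, 11]
assert sum(xs) == sum(reversed(xs)) == 24
```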

There are many, many of these properties, and most of the interesting ones have names: associative, commutative, distributive. For the last year I've been trying to find out if another property I've noticed also has a name.

Take the function minimum across the natural numbers. Given the sets {4, 6, 100, 1, 43} and {1} minimum gives the same answer: 1. The result of the function minimum is determined by only a single member of the set, no matter how large the set.

Take the function and across the booleans. Given the set {true, true, true, false, true} the answer is false. As long as a single false is present, it doesn't matter how many trues are in the set: the answer will always be false.

And I'm sure you can imagine other functions that behave like this. My question is: does this property have a name, and if it does, what?
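To pin the question down a little, here's a small Python sketch of the behaviour, using the built-ins min and all to mirror the examples above. One hedged observation: for and, false is what algebraists call an absorbing element; whether that notion fully captures the min case too is, I think, exactly the open question.

```python
from functools import reduce
from operator import mul

# min: a single member of the set fixes the result, however large the set.
assert min({4, 6, 100, 1, 43}) == min({1}) == 1

# and: one false fixes the answer, no matter how many trues are present.
assert all([True, True, True, False, True]) is False

# A related, named shape: 0 is the absorbing element of multiplication --
# one 0 among the inputs fixes the product at 0.
assert reduce(mul, [7, 3, 0, 12]) == 0
```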

If I was more of a mathematician, I'm sure I could actually describe this property a lot more accurately. In fact, I'm not entirely sure there is a consistent property here, and I have no idea if it's interesting if it does exist. But I notice this often, and it sure feels like it should have a name.

Functions whose result is determined by a single member of the parameter set, irrespective of the size of that set: do these have a common property?

Friday 14 March 2008

Midsummer Night's Dream

We went to see Midsummer Night's Dream at the Sydney Theatre tonight. A friend bought the tickets, we were just told it was Midsummer Night's Dream. I really should have found something more out about the performance.

I like fairly challenging books: I believe that the reader should occasionally be made to work for it. I love Pynchon and I enjoy Woolf. I am a huge fan of Shakespeare and I've enjoyed pretty much every production I've seen, even when I didn't know the play, both traditional and modern interpretations.

We left this play at intermission, along with a pretty significant proportion of the audience. I have never done that before. I won't even walk out of a bad movie.

This was plain awful. Absolutely, completely unwatchable. Why? It's about 60% performed in Hindi, with no sub- or sur-titles. If you don't speak fluent Hindi you won't be able to understand what the characters are saying most of the time. I know that's obvious when I say that it's performed in Hindi, but the Sydney Theatre really didn't make this obvious enough. I was also handicapped here as I didn't know the play. I've seen parts of it before, and remember some scenes but I don't know the overall plot and characters. I certainly couldn't imagine what was happening when I couldn't understand the dialogue.

The opening scene to establish the plot was entirely in Hindi, and from then on I had absolutely no idea what was going on. Mana tried to help out by whispering brief explanations as she has previously studied and performed this play. But she couldn't keep this up, and by this stage it was pretty much too late: I already had no idea who any of the characters were.

What do I know of Midsummer Night's Dream? Well, there's one of my favourite Shakespearian lines:

If we spirits have offended, Think but this and all is mended: You have but slumbered here While these visions have appeared. - Puck

That's from memory, so excuse any mistakes. I also remember the sarcasm, wit and lyricism of Puck. And I missed all that in this performance. Surprisingly enough, the play would have been better if it was entirely performed in Hindi: when they were speaking English I could follow what was happening and start to get involved. Then they would switch back to Hindi, kicking me out of any involvement, and leaving me bored and disconnected in my seat. But, then I would try to get involved again in the dance and acting, only to be booted again when they switched back to English.

Shakespeare is entertainment, especially his comedies. These were great works meant to illuminate the human condition while also being highly engaging and entertaining. Anyone should be able to watch a production and enjoy it. The only people who could enjoy this production were those who spoke fluent Hindi, and those who already knew the play intimately. And while I fully support the production of entertainment for specific languages, this should not be promoted to a larger audience as something for everyone. Because it's not: this is an exclusive production only meant to be enjoyed by those who have already studied the play.

And I don't like this artificial, constructed exclusivity in the arts.

Thursday 6 March 2008

The Three Stigmata of Palmer Eldritch

The Three Stigmata of Palmer Eldritch
Philip K. Dick

Did you end up finding it, Philip? What it means to be human? Religion didn't seem to provide your answer. Did drugs? A Scanner Darkly is famous for your search, but this appears to be some sort of transition between those two searches.

Like Graham Greene, Dick is one of my favourite authors. Over time I'm steadily trying to read all of his novels. I prefer his later ones, so that's generally what I choose. Unlike Greene, not each of Dick's is better than the last: A Scanner Darkly is still my favourite, and one of my favourite sci-fi novels. Sci-fi is typically a pretty pulpy genre: cheap enjoyment, with very little challenge to the reader. Even the best sci-fi with a great idea at its heart will present that idea in a pretty straightforward form.

Not Philip K. Dick. He did not shy from challenging the reader with unusual ideas, often in outright confusing forms. This book felt like some sort of mental trap the reader is drawn into, with only the hope that all will become clear by the end. The confusion is why I read this book now. How hard can you push the reader? How difficult can you make the story to follow? How many tricks can you pull, and still end up with a populist, enjoyable story?

There's a lot to connect Dick and Pynchon. But Dick just didn't have Pynchon's talent. Sometimes you are left wondering whether this was meant to be confusing, or whether he just wrote it a bit too quickly. His later work does show that yes, he was aiming to confuse.

Wednesday 5 March 2008

The Ultimate Development Environment

An enormous claim, I know. But this is not about processes, tools or working conditions. This is about something quite different.

Shrew is progressing, it now sports an s-expr to XML evaluator; I'm reading RESTful Web Services to gain a better idea of how it should expose resources. And I'm also working through The Seasoned Schemer. And therein lies the most interesting aspect to Shrew. I am a reasonably experienced, quite competent polyglot software engineer, but learning Scheme has forever changed how I think about programming. And through example crystallised the ultimate development environment that I have been drifting towards.

When I'm working on Shrew, my editor looks something like this:

[screenshot: Emacs running an interactive Scheme session]

In the top-left is the module I am currently working on: writing, expanding or fixing - as I'll try to show there isn't really any difference between those three. In the top-right is a scratch file that contains a bunch of ad-hoc tests for the module I'm working on: nothing structured, just calls of the functions that I'm writing. Across the bottom is the output from a Scheme process running in the background.

Before I go on there's one detail of Scheme I should explain. To write a new function you use define, giving the name of the new function and its body (strictly speaking, define is a special form rather than an ordinary function). If a function of that name already exists, re-evaluating the define simply replaces its body.

It sounds like a pretty simple development environment: a plain text editor with three windows on screen. Why so special?

I write a function - not a test, sample or prototype, but the real code I'm planning to commit - I jump to the end of the define and run a command in my editor to evaluate it. That function is then inserted into the running Scheme process and available to be used by anything else that is run in that Scheme process. Or, I'm immediately informed of a syntax error in my code.

I switch over to the scratch file and write some code to call the function I just implemented: typically just one expression, but it can be as many as I need. I evaluate that new code. And immediately see the output in the window at the bottom; the window reflecting the running state of the background Scheme process.

And of course, there's a bug in my function. I switch back to the window containing the module, fix the bug and re-evaluate. I switch back to the test code, re-evaluate that, and see that my change has fixed the bug.

Elapsed time from writing the function through debugging and verifying the fix: 45 seconds.

Instead of having to write a complete library, compile it, write a test harness, compile that and link it to the library and only then run the code to see if it works, I have a Scheme process running in the background that I can just keep adding code to. New code, or code to replace existing code. And at any point I can execute any sub-part of that process and immediately see the output. No delays, no pauses, no backtracking to find which line of code is wrong.
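Scheme makes this loop unusually fluid, but the redefine-in-a-live-process idea can be sketched in any language with an interpreter. Here's a hedged Python analogue of the cycle described above (the function and values are mine for illustration, not anything from Shrew):

```python
# A live process holds the current definitions; re-running a definition
# simply rebinds the name, just as re-evaluating a Scheme define
# replaces the function's body.

def area(w, h):          # first attempt, sent to the live process
    return w + h         # oops: a bug -- should multiply

buggy = area(3, 4)       # the "scratch file" call exposes the bug: 7

def area(w, h):          # fix the bug and re-evaluate the definition
    return w * h

fixed = area(3, 4)       # the same call now gives 12 -- no rebuild, no relink

assert buggy == 7 and fixed == 12
```

The point is the absence of any build step between the fix and the verification: the running process is the unit of work, not the compiled artifact.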

Scheme could be regarded as a fairly direct implementation of a theoretical model of computation: the lambda calculus. Most texts that teach Scheme emphasise this; they encourage you to think of your programs in terms of this theory, in quite some detail. That may sound fairly esoteric, but once you've spent some time working in this environment you reach a unique state. You are inside your program, reaching around moving code as fast as you can think. There is essentially nothing between your thought and the code: no boilerplate, no compiling, no creating test harnesses, no waiting for test runs to complete. Your solution simply unfolds before you.

But that's not to say your code is of lower quality. In fact, because you're concentrating more fully, with no distractions and the flexibility to easily push your code in any way you want, the code is of much higher quality. There's no idle thought 'I should test that' which is forgotten in the edit-compile-run-debug cycle: think it, try it. This is flow of a kind Peopleware could only dream of.

And once you break out of this magical flow, you're left with complete code; code you lived and breathed for a few hours, code you understand deeply and will have a hard time forgetting. Plus, a comprehensive set of tests to commit alongside.

Monday 25 February 2008

The Blind Assassin

The Blind Assassin
Margaret Atwood

Continuing my plan to expand the types of books I'm reading: this one came from Annabel several years ago. It's a Booker prize winner (and I've enjoyed some of the other winners I've read), it's Canadian, and it's by a female author. I just haven't been reading enough female authors recently, though Zadie Smith is one of my favourites.

It's an interesting story for a number of reasons. It's structured as a story within a story within a story; it's told backwards and forwards, alternately; it appears to be centred around a mystery, but really isn't; and, perhaps most interestingly, for a large part of the novel the main character is quite unsympathetic. And though unsympathetic, she still manages to hold your interest and carry the story.

And stopping for a pause... when I first wrote this review immediately after reading the book over a month ago, I enjoyed the novel but wasn't taken by it. However, it's a novel I haven't stopped thinking about. It just keeps cropping up in my mind over and over again. At the time, I put it down as one of those typical, slightly over-wrought Booker prize winners, but now my opinion is going to have to change.

Slow, deliberate, and difficult because you don't like the main character, but in the end memorable and worth it. I will be going back to read more Atwood: I've already found myself browsing her shelf in book stores.