The network is the network, the computer is the computer – sorry about the confusion

This post continues my ongoing theme: that networks are great as long as software doesn’t pretend they are perfect. (I can’t take credit for the title – it’s been floating around for a long time.)

Increasingly, software is designed around the idea of treating network resources as if they were local. The practice of generating proxy objects to invoke remote services encourages this way of thinking, as does the success of HTTP-based remote invocation styles like REST and SOAP. (It’s tempting to think of an HTTP service as a function call: send a request with parameters, wait until the result comes back, proceed.)

But this is a bad idea. The concept of a network as a means of instantaneous communication between software components anywhere on the globe is an incorrect abstraction. Software should be designed based on the notion that networks are unreliable and introduce unpredictable amounts of latency into communication.
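
To make this concrete, here’s a minimal TypeScript sketch of a remote call that surfaces latency and failure to the caller instead of hiding them behind a local-looking proxy. The URL, timeout value, and error handling here are illustrative assumptions, not a recommendation.

```typescript
// A remote call that refuses to pretend it's a local function call:
// the caller must choose a timeout and must handle failure explicitly.
async function callRemote<T>(url: string, timeoutMs: number): Promise<T> {
  const controller = new AbortController();
  const timer = setTimeout(() => controller.abort(), timeoutMs);
  try {
    const response = await fetch(url, { signal: controller.signal });
    if (!response.ok) {
      throw new Error(`remote service returned HTTP ${response.status}`);
    }
    return (await response.json()) as T;
  } finally {
    clearTimeout(timer);
  }
}

// The caller is forced to confront both outcomes: a timely result, or a
// timeout/transport error it must deal with (queue, retry, degrade, ...).
```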

For one thing, as networks reach further and further, reliability at the edges gets very poor: mobile devices like laptops and cell phones are constantly joining and dropping off networks. More importantly, there is increasing integration between networked components managed by different organizations. My email client talks to your email server, which connects to the other guy’s RADIUS server, and so on. So even if network connectivity were 100% reliable, applications would still have to be programmed with the defensive assumption that any remote service may occasionally be unavailable. Finally, there’s the speed of light to take into account. Communicating with a component on the other side of the globe is going to take at least about 40 ms one way (the Earth’s diameter divided by the speed of light is roughly 42 ms), and there’s nothing we can ever do about that.
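
One defensive pattern that follows from this assumption: wrap remote operations in a retry loop with backoff, so a transient outage degrades into extra latency rather than an error. This is just one possible sketch; the attempt count and backoff schedule are arbitrary illustrative values.

```typescript
// Retry a remote operation with exponential backoff, on the assumption
// that the remote service is occasionally (but not permanently) down.
async function withRetry<T>(
  operation: () => Promise<T>,
  maxAttempts = 3,
  baseDelayMs = 500,
): Promise<T> {
  let lastError: unknown;
  for (let attempt = 1; attempt <= maxAttempts; attempt++) {
    try {
      return await operation();
    } catch (err) {
      lastError = err;
      if (attempt < maxAttempts) {
        // Wait longer after each failure: 500 ms, 1 s, 2 s, ...
        await new Promise((r) => setTimeout(r, baseDelayMs * 2 ** (attempt - 1)));
      }
    }
  }
  throw lastError;
}
```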

The problem is that we don’t yet have enough good design patterns for connecting network services under the assumptions of latency and unreliability. Where can we look for inspiration? I have two suggestions: Microsoft Outlook and Google Gears.

Microsoft Outlook is interesting because it handles intermittent network connections so gracefully. I run it on several computers, as well as Microsoft Outlook Mobile on my cell phone, which synchronizes with Lab49’s Exchange server wirelessly. In every case, the email application is perfectly responsive and functional whether or not network connectivity is available. If mail can’t be sent immediately, it’s quietly stored in an outgoing folder and sent later when a connection becomes available. On my cell phone it’s even more obvious how much the networking has been separated from the email client code – they are two different applications. The sync application handles all the networking and delivers outgoing and incoming mail, while the Outlook client application performs no network operations at all (or at least that’s how it appears from a user’s perspective). As it happens, my laptop is often off the network and my cell phone’s data connection is terrible, yet my mobile email experience is surprisingly good. I think this is how all network software should be designed.
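
The outgoing-folder behavior described above amounts to a store-and-forward queue. Here’s a rough TypeScript sketch of that pattern – not Outlook’s actual implementation; the Message shape and the isOnline/sendOverNetwork hooks are hypothetical placeholders.

```typescript
interface Message {
  to: string;
  body: string;
}

// Store-and-forward outbox: the UI writes locally and always succeeds;
// a background sync task drains the queue when the network cooperates.
class Outbox {
  private pending: Message[] = [];

  // Called by the email client: never touches the network, never fails.
  enqueue(message: Message): void {
    this.pending.push(message);
  }

  // Called periodically by a separate sync task.
  async flush(
    isOnline: () => boolean,
    sendOverNetwork: (m: Message) => Promise<void>,
  ): Promise<void> {
    while (this.pending.length > 0 && isOnline()) {
      try {
        await sendOverNetwork(this.pending[0]);
        this.pending.shift(); // remove only after confirmed delivery
      } catch {
        break; // connection dropped mid-flush; try again on the next cycle
      }
    }
  }
}
```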

Google Gears is interesting because it extends the same principles to web applications in general. You can learn more from Google’s documentation of the Gears architecture.
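
The core idea, in rough outline: the application reads and writes a local store, and a separate sync step reconciles that store with the server in the background. The sketch below is a heavily simplified assumption of that architecture (Gears itself provided a local SQLite database, a local server cache, and worker threads); the LocalStore and upload interfaces are made up for illustration.

```typescript
interface LocalStore {
  get(key: string): string | undefined;
  set(key: string, value: string): void;
  dirtyKeys(): string[]; // keys changed locally since the last successful sync
  markClean(key: string): void;
}

// Background reconciliation: push locally changed entries to the server.
// If the network fails partway through, the remaining keys stay dirty
// and will be retried on the next pass; the UI never blocks on this.
async function syncToServer(
  store: LocalStore,
  upload: (key: string, value: string) => Promise<void>,
): Promise<void> {
  for (const key of store.dirtyKeys()) {
    const value = store.get(key);
    if (value === undefined) continue;
    try {
      await upload(key, value);
      store.markClean(key);
    } catch {
      return; // offline again; resume on the next sync cycle
    }
  }
}
```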

One comment

  1. This reminds me of the debate in the concurrent programming language community about whether or not synchronous message passing is the right default. Most early formalisms (like Hoare’s CSP and Milner’s provocatively named pi-calculus) chose synchronous behavior. In contrast, the Actor model makes asynchronous message passing primitive — and it’s the basis for some very successful modern languages (like Erlang and Apama).

    I think you’re right that this is going to be an interesting area to watch for library (e.g. Google Gears) and language support in the next few years.
