Web 3.0

Or, why Web 2.0 doesn't cut it for mobile devices

One of the hottest conversations among the Silicon Valley insider crowd is Web 2.0. A number of big companies are pushing Web 2.0-related tools, and there’s a big crop of Web 2.0 startups. You can also find a lot of talk of “Bubble 2.0” among the more cautious observers.

It’s hard to get a clear definition of what Web 2.0 actually is. Much of the discussion has centered on the social aspirations of some of the people promoting it, a topic that I’ll come back to in a future post. But when you look at Web 2.0 architecturally, in terms of what’s different about the technology, a lot of it boils down to a simple idea: thicker clients.

A traditional web application is a very thin client -- the browser just displays pages generated by the server, and every significant user action goes back to the server for processing. The result, even on a high-speed connection, is online applications that suck bigtime when you start to do any significant level of user interaction. Most of us have probably had the experience of using a Java-enabled website to do some content-editing or other task. The experience often reminds me of using GEM in 1987, only GEM was a lot more responsive.

The experience isn’t just unpleasant -- it’s so bad that non-geeks are unlikely to tolerate it for long. It’s a big barrier to use of more sophisticated Web applications.

Enter Web 2.0, whose basic technical idea is to put a user interaction software layer on the client, so the user gets quick response to basic clicks and data entry. The storage and retrieval of data is conducted asynchronously in the background, so the user doesn’t have to wait for the network.

In other words, a thicker client. That makes sense to me -- for a PC.
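
(If you want a concrete picture of what that thicker client does, here's a rough sketch in TypeScript. None of the names -- editNote, /api/notes, render -- come from any real product; they're just there to show the shape of the idea: update the local copy first, push the change to the server in the background.)

```typescript
// A minimal sketch of the "thicker client" idea: the UI updates
// immediately, and the save happens in the background.
// All names here (editNote, /api/notes) are made up for illustration.

type Note = { id: string; text: string };

// What the user sees on screen -- kept entirely on the client.
const localNotes = new Map<string, Note>();

function editNote(id: string, text: string): void {
  // 1. Update the client-side copy right away, so typing feels instant.
  localNotes.set(id, { id, text });
  render(localNotes);

  // 2. Push the change to the server asynchronously; the user never waits.
  fetch(`/api/notes/${id}`, {
    method: "PUT",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ text }),
  }).catch((err) => {
    // A real app would retry or warn the user; here we just log it.
    console.warn("background save failed", err);
  });
}

function render(notes: Map<string, Note>): void {
  // Stand-in for whatever draws the notes on screen.
  console.log([...notes.values()]);
}
```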

Where Web 2.0 doesn’t make sense is for mobile devices, because the network doesn’t work the same way. For a PC, connectivity is an assumed thing. It may be slow sometimes (which is why you need Web 2.0), but it’s always there.

Mobile devices can’t assume that a connection will always be available. People go in and out of coverage unpredictably, and the amount of bandwidth available can surge for a moment and then dry up (try using a public WiFi hotspot in San Francisco if you want to get a feel for that). The same sort of thing can happen on cellular networks (the data throughput quoted for 2.5G and 3G networks almost always depends on standing under a cell tower, and not having anyone else using data on that cell).

The more people start to depend on their web applications, the more unacceptable these outages will be. That’s why I think mobile web applications need a different architecture -- they need both a local client and a local cache of the client data, so the app can be fully functional even when the user is out of coverage. Call it Web 3.0.

That’s the way RIM works -- it keeps a local copy of your e-mail inbox, so you can work on it at any time. When you send a message, it looks to you as if you’ve sent it to the network, but actually it just goes to an internal cache in the device, where the message sits until a network connection is available. Same thing with incoming e-mail -- it sits in a cache on a server somewhere until your device is ready to receive.*

The system looks instantaneous to the user, but actually that’s just the local cache giving the illusion of always-on connectivity.
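
(Here's a rough sketch of that store-and-forward pattern, again in TypeScript with invented names -- outbox, deliver, isOnline -- not anything from RIM's actual software. "Sending" just queues the message locally, and a background loop flushes the queue whenever there's coverage.)

```typescript
// Store-and-forward sketch: "sending" a message drops it into a local
// outbox, and a separate loop flushes the outbox whenever the network
// is actually reachable. All names here are hypothetical.

type Message = { to: string; body: string };

const outbox: Message[] = [];

function send(msg: Message): void {
  // To the user this looks like "sent" -- it's really just queued locally.
  outbox.push(msg);
}

async function flushOutbox(): Promise<void> {
  while (outbox.length > 0 && (await isOnline())) {
    const msg = outbox[0];
    try {
      await deliver(msg);     // hand the message to the network
      outbox.shift();         // only drop it once delivery succeeded
    } catch {
      break;                  // coverage dropped mid-send; try again later
    }
  }
}

// Stubs standing in for the real radio/network layer.
async function isOnline(): Promise<boolean> {
  return navigator.onLine;
}

async function deliver(msg: Message): Promise<void> {
  await fetch("/api/outbound", { method: "POST", body: JSON.stringify(msg) });
}

// Flush periodically; a real device would hook into a
// "coverage regained" event instead of polling.
setInterval(flushOutbox, 10_000);
```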

This is the way all mobile apps should work. For example, a mobile browser should keep a constant cache of all your favorite web pages (for starters, how about all the ones you’ve bookmarked?) so you can look at them anytime. We couldn’t have done this sort of trick on a mobile device five years ago, but with the advent of micro hard drives and higher-speed connections, there’s no excuse for not doing it.
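
(One more sketch of that idea, with made-up names only: keep refreshing the bookmarked pages while you have a connection, and fall back to the cached copy when you don't.)

```typescript
// Bookmark-caching sketch: periodically fetch each bookmarked page
// while a connection is available, and serve the cached copy when it
// isn't. Function and variable names are invented for illustration.

const bookmarks: string[] = [
  "https://example.com/news",
  "https://example.com/weather",
];

// URL -> last fetched HTML, held on the device.
const pageCache = new Map<string, string>();

async function refreshBookmarks(): Promise<void> {
  for (const url of bookmarks) {
    try {
      const res = await fetch(url);
      pageCache.set(url, await res.text());
    } catch {
      // Out of coverage or the site is down -- keep the old copy.
    }
  }
}

async function loadPage(url: string): Promise<string> {
  try {
    const res = await fetch(url);
    const html = await res.text();
    pageCache.set(url, html);   // freshen the cache on a successful load
    return html;
  } catch {
    // No network: fall back to whatever we cached last time.
    return pageCache.get(url) ?? "<p>Page not available offline.</p>";
  }
}
```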

Of course, once we’ve put the application logic on the device, and created a local cache of the data, what we’ve really done is create a completely new operating system for the device. That's another subject I'll come back to in a future post.

_______________

*This is an aside, but at one point I tried to figure out exactly where an incoming message gets parked when it’s waiting to be delivered to your RIM device. Is it on a central RIM server in Canada somewhere, or does it get passed to a carrier server where it waits for delivery? I never was able to figure it out; please post a reply if you have the answer. I wondered because I wanted to compare the RIM architecture to what Microsoft’s doing with mobile Exchange. In Microsoft’s case, the message sits on your company’s Exchange server. If the server knows your device is online and knows the address for it, it forwards the message right away. Otherwise, it waits for the device to check in and announce where it is. So Microsoft’s system is a mix of push and pull. I don’t know if that’s a significant competitive difference from the way RIM works.
