
Oblomovka


Archive for the ‘Living on the Edge’ Category

2008-08-22

powerpointomancy

So I remember when my friend and deputy-nemesis Ian Betteridge and I were arguing about the difference between “proper” journalism and blogging. In the end, I rather liked one of his definitions: “Journalism is when you pick up the phone”. That’s to say, journalism requires some actual original research, rather than just randomly googling or getting emailed something and writing it up as news.

I like it, because it's not platform-specific. There's a great deal of blogging with original research, and a very large quadrant of mainstream media “journalism” that really doesn't fall under that banner: at its best it's analysis, at its worst it's anti-journalism, taking readily available facts or rumour and divesting them of the context that would let you judge their accuracy or provenance.

Anyway, this is a long-winded way of doing some non-journalism of my own. If you're interested in what UK ISPs are telling analysts about the future, Carphone Warehouse's company presentations page, and in particular this Analyst's Day from April 15 of this year, is worth a look. Skip the stuff about phones and go to their discussions of broadband and their plans for their network. It's full of interesting statistics about P2P usage (before and after they introduced traffic shaping), the effect of iPlayer, and the costs of network and customer retention.

My broad summary would be: yeah, bandwidth usage is an issue, but it’s not like we expected it to go down, and the sensible thing to do is to upgrade our network so our costs per bit drop 80%, something we can do with an investment of tens of millions of pounds, not hundreds.

Stuff that's interestingly unspoken: upstream rates and their growth, a characteristic obsession with “streaming” (see last post), and which ass they pulled the estimate that consumer IP traffic will quadruple in 4 years out of (see the graph with “Web 3.0 in 2011” on it).

But it’s late on a Friday, and I suspect some of you are better at pulling out interesting facts from this than I am, so go right ahead…

2008-08-15

wuala

Woah, sorry about missing last night: I returned home from work and slept from 8pm to 9am. I blogged in my dreams though.

Briefly, yesterday's copious free time (ie a few minutes) was spent looking at Wuala (thanks, robwiss!), which is a neat popularisation of some of my pet issues: the infrastructure is decentralised, fault-tolerant file storage, with private/public/group access created with a cryptographic filesystem (see the Cryptree paper for details on that, and this Wuala-made Slideshare for a general overview of the tech.) It's notable for having a user-friendly UI, the capability to run the downloader in a browser via a Java client, and therefore linkability (for instance, in theory you should be able to download the Ogg Vorbis version of the Living on the Edge talk here, once it's uploaded.) It just went public yesterday, and it's fun to play around with.
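
If you're curious what a Cryptree-style layout actually buys you, the core trick is easy enough to sketch. Below is a toy illustration in Python, using the cryptography package's Fernet recipe; it's my own sketch of the general construction, not Wuala's actual code or formats. Every folder gets its own key, each child's key is stored wrapped under its parent's key, and so handing someone a single folder key quietly grants them read access to the whole subtree beneath it, while telling them nothing about the rest.

    # Toy sketch of the Cryptree-style folder-key idea, not Wuala's real code:
    # each folder has its own symmetric key; a child's key is stored encrypted
    # ("wrapped") under its parent's, so one folder key unlocks its whole subtree.
    from cryptography.fernet import Fernet

    class Folder:
        def __init__(self, name, parent=None):
            self.name = name
            self.key = Fernet.generate_key()   # this folder's own read key
            self.children = []
            self.wrapped_key = None            # our key, encrypted under the parent's
            if parent is not None:
                self.wrapped_key = Fernet(parent.key).encrypt(self.key)
                parent.children.append(self)

    def unwrap_subtree(folder, key):
        """Given one folder's key, recover the keys of everything beneath it."""
        keys = {folder.name: key}
        for child in folder.children:
            child_key = Fernet(key).decrypt(child.wrapped_key)
            keys.update(unwrap_subtree(child, child_key))
        return keys

    home = Folder("home")
    photos = Folder("photos", parent=home)
    talks = Folder("talks", parent=photos)

    # Sharing the "photos" key exposes photos and talks, but nothing about home.
    print(sorted(unwrap_subtree(photos, photos.key)))   # ['photos', 'talks']

The storage nodes only ever see wrapped keys and ciphertext, which is roughly how an untrusted, decentralised back-end gets away with holding your private files.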

I have a few questions about it, which may be more down to my ignorance than Wuala itself: the source is closed, and so I don't know yet quite how tied the infrastructure is to Wua.la the company (if Wuala disappeared tomorrow, would the network still exist?), or where the potential weakpoints in overall security might be. On the plus side, Wuala is clearly being used in earnest both for public and private sharing, the user interface does a great job of shielding the crazy cryptopunk shenanigans going on underneath, and it's cross-platform (albeit via Java, which means it's not quite working on my PowerPC Ubuntu server right now).

Tahoe is a lot more transparent, but seems to have a different use case at the moment, which is private nests of stable servers used for distributed backup. But if you wanted to do a free software version of Wuala, that still looks like where you’d start (and Wuala is where you would get your inspiration/learn your lessons from).

2008-08-11

gmail down; p2p dns

More fuel for the decentralisation fire with Gmail's downtime today (Google's apology). Again, as much as these events prompt people to reconsider keeping all their data marooned on Google's tiny island in the wider Net, it's not as if anyone has a more reliable service in place — yet.

It also made me think of another reason why you might want a centralised (or radically decentralised) service that didn't run on your edge of the Network. Central services are terrible for privacy, but can be better in some contexts for anonymity. Creating a throwaway mail account on a central service (or better still, getting somebody else to), and then using Tor or another anonymising service to access it, would provide more temporary anonymity than receiving mail on your own machine (or serving web pages from it). There can also be a big difference between serving and hosting data in an authoritarian regime and holding your information in another, more privacy-friendly or remote, jurisdiction. There's a good reason why a lot of activists use webmail (and why so many were outraged when Yahoo's webmail service handed over Shi Tao's details to the Chinese government).

Tor actually does offer an anonymised service feature, letting you run services from a mystery Tor node, and point to it using a fake domain like http://duskgytldkxiuqc6.onion/. If you were using Tor right now, that would lead you to a webpage, served over Tor from an anonymous endpoint. So you can run anonymous services, in theory, from the edge. Of course, not everyone is using Tor, so that's hardly universal provision.
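
For the record, running one of these hidden services is a couple of lines in your torrc. Something like the following (the directory path and port numbers are my own placeholder choices) tells Tor to mint a .onion hostname and forward incoming requests to a webserver listening locally:

    # torrc: publish a hidden service that proxies to a local webserver
    HiddenServiceDir /var/lib/tor/hidden_service/
    HiddenServicePort 80 127.0.0.1:8080

Tor drops the generated .onion hostname into a file in that directory, and that's the address you hand around.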

This brings me to another issue that I talked about on Sunday: mapping other non-DNS protocols into the current DNS system. I believe I’ve mentioned before John Gilmore’s semi-serious suggestion a few years back that we grandfather in the current DNS by saying that all current domains are actually in a new, even more top level domain, .icann. — so this would be www.oblomovka.com.icann., allowing us to experiment with new alternatives to DNS, like dannyobrienshomeserver.p2p., or somesuch, in the rest of the namespace.

Other name systems frequently do something like this already: there's Tor's .onion fake domain, and Microsoft's P2P DNS alternative, which resolves to whateveryourhomemachineiscalled.pnrp.net. What neither of those does, however, is provide a gateway mapping for legacy DNS users — a DNS server that would respond to standard DNS queries for those addresses, use the P2P protocol to find the IP, then return it to anyone querying using the existing DNS system. That might be a more backward-friendly system than John's idea.
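
To make that gateway idea a little more concrete, here's a minimal sketch in Python. I'm leaning on the dnslib package's BaseResolver/DNSServer interface (written from memory, so treat it as pseudocode with pretensions), and p2p_lookup() is a hypothetical stand-in for whatever PNRP-or-DHT-style resolution you'd actually be doing:

    # Sketch of a legacy-DNS gateway for a P2P namespace: answer ordinary DNS
    # queries for *.p2p by resolving the name over the P2P protocol and handing
    # back a plain A record that any existing resolver can understand.
    from dnslib import RR, QTYPE, A
    from dnslib.server import DNSServer, BaseResolver

    def p2p_lookup(name):
        # Hypothetical stand-in: ask the P2P overlay which IP sits behind this name.
        return {"dannyobrienshomeserver.p2p.": "192.0.2.1"}.get(name)

    class P2PGatewayResolver(BaseResolver):
        def resolve(self, request, handler):
            reply = request.reply()
            qname = request.q.qname                 # e.g. dannyobrienshomeserver.p2p.
            if str(qname).endswith(".p2p."):
                ip = p2p_lookup(str(qname))
                if ip:
                    reply.add_answer(RR(qname, QTYPE.A, rdata=A(ip), ttl=60))
            return reply                            # empty reply if we can't resolve it

    if __name__ == "__main__":
        # Unprivileged port; point a stub resolver (or a Firefox extension) at it.
        DNSServer(P2PGatewayResolver(), address="127.0.0.1", port=5353).start()

Anything that can speak ordinary DNS to that box gets .p2p names resolved without ever knowing a P2P protocol was involved.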

In Microsoft's case, that would be pretty easy, even though apparently they don't do it right now. Resolving .onion in normal DNS space wouldn't be possible currently, although I suppose you could hack something up (maybe over IPv6, like PNRP) if you were willing to carry all the traffic (and had asked ICANN nicely for the .onion TLD).

I’m not the first person to think that this might be something that would make an interesting Firefox plugin in the meantime.

2008-08-09

Which is more stupid, me or the network?

I was working late at the ISP Virgin Net (which would later become Virgin Media), when James came in, looking a bit sheepish.

“Do you know where we could find a long ethernet cable?” How long? “About as wide as, well,” and then he named a major London thoroughfare.

It turned out that one of the main interlinks between a UK (competitor) ISP and the rest of the Net was down. Their headquarters was one side of this main road, and for various reasons, most of the Net was on the other. Getting city permission to run cable underneath the road would have taken forever, so instead they had just hitched up a point-to-point laser and carried their traffic over the heads of Londoners. Now the network was severely degraded, due to unseasonable fog.

The solution was straightforward. They were going to string a gigabit ethernet cable across the road until the fog cleared. No-one would notice, and the worst that could go wrong would be an RJ45 falling on someone's head. Now their problem was simpler: who did they know in the UK internet community who had a really, really long ethernet cable?

I cannot yet work out whether being around when the Internet was first being rolled out is a disadvantage in understanding the terrific complexities and expenses of telco rollout, or a refreshing reality-check. I can’t speak for now, but ten years ago, much of the Net was held together by porridge and string in this way.

(Also, in my experience, most of the national press, and all of the major television networks. All I saw of the parliamentary system suggested the same, and everything anyone has ever read by anyone below the rank of sergeant in the military says exactly the same about their infrastructure. Perhaps something has changed in the intervening ten years. Who knows?)

Anyway, I've been reading the San Francisco city's Draft feasibility study on fiber-to-the-home, which is an engaging, clear read on the potential pitfalls and expenses of not only a municipally-supported fiber project, but any city-wide physical network rollout. I love finding out the details of who owns what under the city's streets (did you know that Muni, the city's bus company, has a huge network of fiber already laid under all the main electrified routes? Or that the organization that coordinates the rental of space on telephone poles and other street furniture is called the Southern California Joint Pole Committee?)

It’s also amusing to find out Comcast and AT&T’s reaction to the city getting involved in fiber roll-out:

Comcast does not believe that there is a need in San Francisco for additional connectivity and believes that the market is adequately meeting existing demand. According to Mr. Giles, the existing Comcast networks in the Bay Area contain fallow fiber capacity that is currently unused and could be used at a later date if the demand arises.

AT&T does not recognize a need for San Francisco to consider either wireless or FTTP infrastructure. The circumstances that would justify a municipal broadband project simply do not exist in San Francisco. Service gaps are perceived, not real, according to Mr. Mintz, because AT&T gives San Francisco residents and businesses access to: DSL, T1, and other copper based services from AT&T and Fiber based services such as OptiMAN that deliver 100Mbps to 1 Gbps connectivity to businesses that will pay for it.

My interest in it is more about the scale of any of these operations. The city will take many years to provide bandwidth, and the telco and cable providers are clearly not interested in major network upgrades.

But does rolling out bandwidth to those who need it really require that level of collective action? I keep thinking of that other triumph of borrowed cables and small intentions, Demon Internet, the first British dialup Internet provider, who funded a transatlantic Internet link by calculating that 1000 people paying a tenner a month would cover the costs.

The cost of providing high-speed Internet to every home in San Francisco is over $200 million, the study estimates. But what is the cost of one person or business making a high-speed point-to-point wireless connection to a nearby Internet POP, and then sharing it among their neighbourhood? Or even tentatively rolling out fiber from a POP, one street at a time? I suspect many people and businesses don't want HDTV channels, don't want local telephony, and don't want to wait ten years for a city-wide fiber network to be rolled out: they just want a fast cable on their end, with the other end of the cable plugged into the same rack as their servers. And if stringing that cable over the city meant sharing the costs with their upstream neighbours, or agreeing to connect downstream users and defray costs that way, well, the more the merrier. At least we wouldn't be stuck at DSL speeds, slaves to an incumbent's timetable, monopolistic pricing, and terms and conditions.
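
The Demon arithmetic is easy enough to redo for a single street, at least on the back of an envelope. In the sketch below only Demon's thousand-subscribers-at-a-tenner figure comes from anywhere real; the monthly cost of the point-to-point link and the number of neighbours chipping in are pure guesses, there to show the shape of the sum rather than its answer:

    # Back-of-the-envelope cost sharing. Demon's figures are from the post above;
    # the street-level numbers are placeholders, not quotes from anybody.
    demon_pot = 1000 * 10   # GBP/month: a thousand people paying a tenner
    print("Demon's war chest: £%d/month" % demon_pot)

    link_cost = 500         # GBP/month for a point-to-point link to a POP (a guess)
    neighbours = 25         # households willing to share it (also a guess)
    print("Per-house share: £%.2f/month" % (link_cost / neighbours))

Plug in your own numbers; the point is only that the sums are street-sized, not city-sized.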

I don't think I would even believe such a higglety-pigglety, demand-driven rollout was doable, if I hadn't seen the Internet burst into popular use in just a matter of months in much the same way. But is the network — and demand — still ‘stupid’ enough to allow that kind of chaotic, ground-up planning? Monopoly telcos won't back a piecemeal plan like that for business reasons; cities won't subsidise it, I fear, because it's beneath their scale of operation, it's too unegalitarian for the public, and it undermines their own control of the planning of the city. But if it is conceivable and it is cost-effective, neither should be allowed to stand in its way.

2008-08-06

Wherever you go, that’s where the edge is.

A few people pointed me to Chris Brogan’s report about Nick Saber, a guy who got locked out of Google Apps. It’s a useful example in favour of keeping data on the edge, rather than locked up in Google’s datacenters.

They’re right of course, but I am nothing if not alive to irony, and the fact that I’m currently locked out of my home server (which has wedged itself after an argument with a USB drive while I’m 50 miles away) stops me crowing too hard.

As I travel back to give it a boot, I was thinking a little about what our modern Internet architecture (and its future) means for where you place your data. I've been assuming up until now that the parlous nature of the edge (sucky latency, sucky upstream connectivity, sucky servers that crash without attractive rack-mounted sysadmins with gleaming skintones to reboot them for you) is one of the reasons why people have tended to store data in the cloud. But as my pal John Kim pointed out, that can easily work the other way. Google can lock you out, but so can your crummy last mile connectivity. There's not much point having five nines of uptime for your data, if you and others have far lower rates of access from your position on the bleeding, bloody, frustrating edge.

Really, what you want on a slow, unreliable network (which for all intents and purposes the Net will be for the foreseeable future, God bless it) is for data to migrate to where it’s being most used. That’s partly what we see as our shared data moves off into the cloud. You want it there because that’s half-way between you and your other accomplices: or you at home (checking your Gmail) and you at work (checking your Gmail).

But we should all be aware of the Wisest Adage of Network Storage ever: “Never underestimate the bandwidth of a station wagon full of tapes hurtling down the highway”. Especially if you're in the station wagon with the tapes. If I can lug my data around with me, and have it always connected, then my data will naturally migrate to me. The latency and throughput on the edge suck, but only for other people. For me, it's zero and ethernet speed respectively.
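
The adage is easy to put numbers on, too. Here's the sum for a 250GB pocket drive (of the sort I'll confess to carrying below); the one-hour journey across town and the 448kbit/s ADSL upstream are my assumptions for illustration, not anyone's measurements:

    # Sneakernet arithmetic: effective bandwidth of carrying a drive across town,
    # versus uploading the same data over a typical ADSL upstream. The journey
    # time and upstream rate are assumptions, not measurements.
    drive_bits = 250e9 * 8   # a 250GB pocket drive
    journey_secs = 60 * 60   # an hour across town

    print("Jacket pocket: %.0f Mbit/s" % (drive_bits / journey_secs / 1e6))    # ~556 Mbit/s

    adsl_up_bps = 448e3      # a common UK ADSL upstream rate, give or take
    print("Uploading instead: %.0f days" % (drive_bits / adsl_up_bps / 86400)) # ~52 days

Which is why the data follows the jacket.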

I say this, realising that most of my data has migrated to my (encrypted, backed up) laptop, not my home server. And that I idly walked around with a 250GB drive in my jacket pocket for a week or so before I even noticed it. And that most of us carry multi-gigabyte, always-on, networked and server-capable smartphones with us most of the day. And that if *that* crashed, I wouldn't be swearing at Google or my USB drive — I'd just reboot my pocket.

2008-08-05

owning the edge

(I’m going to be at LinuxWorld tomorrow at the Moscone, working the EFF booth and playing “spot the Don Marti“. Come and say hi!)

An important reason why the edge is so underexploited is the lack of accurate resource planning by telcos and other broadband providers. Planning the Internet is hard in the face of little history and poor statistics, especially when you're a near-monopoly with no-one else to accurately ape. I still dine out a little on my trip to Martlesham Research Laboratories in 1995, when the man from British Telecom said they were intending to roll out DSL with a 28.8Kb/s upstream bandwidth. That's right: modem-speed uploading! But how were they to know?

It seems to me that the most efficient thing to do in these circumstances is to actually offer as wide a range of possibilities for all comers as possible (ADSL, sure! Fiber? Okay, but it’s going to cost you, etc), and tie the prices close to cost plus a margin. That’s of course hard to do with a telco monopoly. Costs aren’t always obvious, you’re often eating your own lunch in another sector if you do, there’s no compelling reason to provide all possibilities in all markets, and anyway you’re probably under a bunch of government requirements that wouldn’t let you do it even if you wanted to. Plus you’re trying to run a tight, nationwide, ship here. There’s no point stringing fiber to all of Gwent if the real sweet spot is DSL in Hackney.

The end result is that telcos usually end up wearing blindfolds and sticking a pin on a donkey marked “consumer bandwidth provision”. That all goes fine until suddenly everyone is using iPlayer or hitting YouTube, or uploading every day to Flickr, or expecting zero latencies playing real-time games.

That’s one of the reasons why Derek Slater and Tim Wu’s ongoing research into consumer-owned fiber is fascinating to me. If telco companies — who now appear to be the de facto ISPs for most users — aren’t willing to string up high bandwidth to the last mile, then maybe we can start stringing fiber up the other way, adding fiber “tails” to our homes to add to their resale value, or working together as communities to exploit the municipally-owned fiber out there.

As Derek says, it's not something that would work everywhere, but it's worth looking into as an experiment. And that experiment is already starting in some places:

This may all sound rather abstract, but a trial experiment in Ottawa, Canada is trying out the consumer-owned model for a downtown neighborhood of about 400 homes. A specialized construction company is already rolling out fiber to every home, and it will recoup its investment from individual homeowners who will pay to own fiber strands outright, as well as to maintain the fiber over time. The fiber terminates at a service provider neutral facility, meaning that any ISP can pay a fee to put its networking equipment there and offer to provide users with Internet access. Notably, the project is entirely privately funded. (Although some schools and government departments are lined up to buy their own strands of fiber, just like homeowners.)

The first part of it would be to try and gauge how much something like that would cost. Unfortunately, the people best placed to know answers like that are the telcos, and right now either they don't know, or they won't say. Governments and monopolies alike would like to have a well-mannered market for planning purposes; when the market isn't like that, it's probably worth looking into other ways of satisfying demand — or at least probing to see whether it is there.

2008-07-24

video from “living on the edge”, opentech 2008

Here's the Zapruder footage of my talk about the cloud and the edge. And, yes, I do appreciate the rich, rich irony that I'm hosting this on a video-sharing site, and apologies if it makes you a bit travel sick. Be warned that it cuts out at about 24 minutes, just when I manage to get vaguely serious — the points I make after that are covered in just as rambling a way in the original posts.

Not much blogging for the next 24 hours, as I’m about to disappear off to have a (non-scary) medical thing done, for which I will be pleasantly sedated. With a bit of luck, I’ll be deluded enough to blog while on fentanyl, and we can all have a laugh.


petit disclaimer:
My employer has enough opinions of its own, without having to have mine too.