
Oblomovka


Archive for the ‘Technology’ Category

2009-09-22

you know who i blame? the lurkers

All of these conversations I’ve been having online (as opposed to the dramatic monologues here) have had me thinking about the nature of online discussion, and confronting my own behaviour in them.

What are you like when you’re deep into an argument online? I have two sides: the one you can see in my postings, which are long, mostly fiercely polite, quasi-grammatical, and, if I may say so, devastatingly reasoned.

You have to imagine me writing these, though, pacing around madly in my bedroom, muttering little speeches to myself and visualizing the horrible death of my correspondent in a hail of unavoidable saucepans. Also I drool, but only a little bit, and only from the mouth.

Is everyone like this? I don’t know, because people don’t like to talk about it. Recently, I’ve been looking at how people manage their own emotions when discussing online. It’s complicated, because the unwritten rule of much online discussion is that “if you emote, you lose”, while in other places it’s “if you emote, you win”. Either way, bringing emotions into it changes the game. But what the hell do winning and losing mean?

People talk about the disrespect and ferocity of online flame wars. I think it’s about audience. I think the novel nature of online discussions is that you have a passive, silent audience out there. I think that’s far more significant than all that talk of anonymity, or the death of civilized discourse.

The closest equivalent to Internet discussion forums for me when I was young was Paddy, who I lived with. Paddy was a man who could argue for hours without coming up for breath. You’d say your triumphant logicbuster, and magically by the time you’d finished, he’d already have (verbally) posted a five-page reply up in your face. I remember one night when I got so mad with him for his relentless logical verbal one-upping that the only snappy come-back I could devise was to quietly leave the room, go upstairs to the bathroom, spray my entire face with shaving foam so I looked like a giant Michelin head, and then creep up behind him and go “ARRGH!”. I hold that I won that argument fair and square. (You occasionally see this rhetorical device at Prime Minister’s Question Time.)

Anyway, what was annoying about Paddy, as I finally got him to admit one day, was that he wasn’t trying to convince you he was right: he was trying to convince a mysterious third party.

There was no third party in our arguments. When we got started, both of us could empty a room faster than a karaoke-ing opera singer.

But on the public Internets, you’ve always got an eye to the third party. Every conversation you see online has an imaginary crowd around it, imaginarily clapping or stomping. Either way, you can’t communicate these side-line emotions to the person you’re talking to, except by stumbling off into private email. Which is usually about as calming as going outside the bar for the fight. Actually, private email isn’t even private, because there is always this sense it will be magically reforwarded into the public view, exposing your vulnerability to the same audience.

Every discussion is a group monkey dance.

2009-06-05

wishbooks

When do you stop being a reader online, and start being a participant? This would seem to be an important question, especially among those who insist that the exact ratios between consumers and creators should determine how significant the result is. That is, if most “user-generated” content on the Net is made up of a tiny percentage of the overall audience, should we care about it less? Me, I don’t think so, but for arguments that get bogged down in exactly how “democratic” the Internet is, it does seem to be critical.

What I do think is that the very fact that the line is blurred is in itself significant. Let me contrast it with my experience growing up in the Seventies and Eighties. I didn’t go to arty clubs in London; I didn’t make my own teen fanzine. I didn’t even send off for any fanzines. What I did was buy Time Out, and FactSheet Five, and read the reviews. Obsessively. I loved it. I don’t know why I rarely watched the films I read about, or bought any of the thousands of zines that Mike Gunderloy (pboh) obsessively reviewed each issue. It just seemed a step too far, somehow. I was perhaps a little scared that the reality wouldn’t live up to the dream. But I’m sure there were thousands, hundreds of thousands like me. People read books, never knowing there are whole communities of book-readers who create conventions and have conversations about those books, writing fan fiction and holding long correspondences with the author. It’s not that they can’t imagine it; it’s that there’s a natural stopping point. You’d have to be crazy to finish the latest Neil Gaiman book, and then think you could write him a letter.

When I went online for the first time, that distinction blurred. I’d read my heroes posting items, and then I’d reply (really just because the keyboard was there, and the bulletin board prompt gave you that option), and my heroes would write back. I’d be involved. It was barely a transition. It’s the same frisson people get when celebrities call them out on Twitter. Actually, they don’t even have to be acknowledged; just the figment of a conversation is more than you’d expect reading a book or watching a film.

If you’ve grown up with the Net, this may be obvious; it may even be hard to imagine a world where that transition didn’t exist. Talking to Debbie today, I learned that Sears catalogues were called “wishbooks” in the early American West, and we talked about how FactSheet Five was a wishbook, too. A wishbook broadened your mind: but it only occurred to the most ambitious (or deluded) that you could actually pursue those wishes, or that they represented anywhere truly accessible rather than just viewable. I think old media taught us to observe the spectacle, but assume it took place somewhere else, somewhere remote.

It takes a while, even online, to notice this is possible: that such-and-such may have a blog, and might read the comments, and might reply. But it’s not quite the same leap, especially as you quickly find yourself in a community of others making those leaps just like you. It’s not how many create; it’s how easy the jump from watcher to doer is. The two are connected: the easier the transition, the more creators there are. But the transition’s the thing. Not everybody wants to be a creator; but everybody who wants to create should at least know that that is an option.

2009-06-03

tethering the android

So it was being stuck without wifi in the Library of Congress the other week that finally made me decide to overwrite the T-Mobile firmware on my Android G1 with something that gives me root access. I was talking with the US Copyright and Patent offices about how to improve access to copyrighted material for the reading disabled (partly in the hope of encouraging them to support the Treaty for the Visually Impaired at WIPO the following week).

I know some people frown on net access at such affairs, but as Cory once noted, if you think people are distracted when they have net at meetings, you should see how distracted they get when they don’t have net.  A bunch of us were scrabbling to get information in and out of the public meeting in advance of the transcript becoming available. So, for instance, I recorded my comments onto my phone, and then mailed them out to the rest of the EFF international staff to hear as they were already preparing to fly to Geneva.

The same thing happened, only more fervently, at WIPO, with Jamie Love and other attendees frantically twittering out to the wider world about the imminent attempts to kill the treaty, and thus getting the visible external support they needed to put pressure on countries to keep the Treaty alive (thanks to everyone who contacted their governments, by the way).

All of this networked analysis and activism gets much harder when you don’t have laptop connectivity. Because my G1 phone wasn’t rooted (and T-Mobile forbids tethering apps in Google’s Android app Market), I couldn’t link my computer to my phone’s 3G network. And I wasn’t quite ready to multi-task listening to my fellow panellists and attempting to re-flash firmware at the same time.

I’m glad I waited. It turns out that these days, it’s relatively easy to drop in a version of Android that gives you power over your own device. These instructions on how to root your G1 take you through the tortuous (but by now pretty foolproof) procedure.

In the end, I chose to install JesusFreke’s distribution of the Android OS, which now has a great little utility to manage who gets root on your phone (each application’s request is intercepted, and you, as user, get to allow or deny it). This tethering application is incredibly easy to use, and lets you share your 3G connection via wifi or bluetooth (I haven’t tried the bluetooth). You can WEP-encrypt the wifi connection, or allow access to only selected users.

Of course, next time I go to the LoC, I’ll be sure to keep the wifi node open. I wouldn’t want the MPAA guys doing without!

2009-02-21

things which are still here: fishcam, me

So my schedule these days — I have a schedule! Do you know what a change that is in my life? — anyway, my schedule these days generally involves collapsing asleep at 9PM and waking up, actually refreshed, at around 8AM. I have traded away several hours of my life in return for not feeling attached by a very taut piece of elastic to whatever is the closest bed, tugging tugging tugging me back.

I greatly enjoy feeling well-slept, but it does mean that my usual hours of blogging (and doing any other writing or wild-eyed crazy plotting) are now contemptibly small.

Like everyone, I am still working out how to make do with less.

Also like almost everyone, I stayed up very late on New Year’s Eve 1999/2000. I wasn’t wandering the streets, drunk as a skunk. I was inside Netscape Communications’ server management offices, munching on sushi, and watching techies desperately guarding against the chance that the Y2K bug would take down netscape.com and other important pieces of Internet infrastructure.

A few minutes before the clockover, I realised that all the clocks in the ops center were set to slightly different times (all the better to see which ones failed, I guess), and I would have no real idea of when midnight actually happened. I eventually got hold of an accurate time signal (I think I called POPCORN, which is the US’s speaking clock). I was the only person in the cubicles who actually knew what the time really was.

In the seconds around midnight, different engineers would shout out to their colleagues that key services were still operational: “Web3 is okay!” “DNS3 is okay!”

At the exact moment of 00:00, 2000 AD, I can reveal that, at Netscape, the primary concern was the fishcam. “Fishcam is … okay!” (Big cheer.)

You’ll be pleased to know it’s okay again.

2008-10-18

trackbacks, backtrack

Two “productive” wastes of time that writing a blog causes you to commit: re-reading your old entries, which is mostly like re-playing funny youtube videos of people’s skateboarding accidents, and following trackbacks, which is mostly like running into the toilet after giving a talk, and then clamping your ear to the cubicle door to listen for people’s opinions. I don’t get many trackbacks, because I am now delightfully obscure and doing Alexa dead-cat bounces, which means the people who link to me are mostly old friends. It is internally flattering, though it may perhaps be annoying to everyone else, like the pica-celebrity equivalent of “Christmas Book Picks” that are just people mutually puffing up their friends’ new novels. I will only, then, mention Lee, who was always the secret driving force behind NTK, and who asked the question that I’d want to ask Neal Stephenson, had I thought of it or ever met him:

This is arguably the first presidential election of the HDTV age. So is it more important a candidate looks good on high-def… or on YouTube?

It’s an allusion to Interface, Stephenson’s great political techno-thriller, where someone notes the difference between looking good on HD and looking good on ordinary TV, and the effect that had on politicians. Feeding a little into that, it’s definitely true that I’ve heard people say that McCain sounds better on radio, and looks much worse in HDTV — the makeup over his scars is really obvious, in fact evoking a weird awkwardness when he talked about “having the scars to prove it” in his last debate. The other question is: do your micro-expressions look good in Photoshop?

My other thought, flicking through my own backlog, is an idea I’ve had for a few years — a site called Backtracks, where we dig up the posts that bloggers were writing five and ten years ago, and hold them to the acid-soaked cotton bud of enquiry. Easy money! Man, if only I could come up with an idea whose demographic wasn’t “people most likely to be running an ad-blocker.”

2008-10-11

python class Culture:

Every Friday at EFF, we have a Python class, where anyone in the org (and a few friends from outside) can join up to learn a little Python, talk about coding and share what they’ve learnt. There’s a good mix of seasoned Python hackers, coders who don’t know much Python, casual programmers, and people for whom this is their first experience of programming.

The part I enjoy the most (apart from congratulating myself for reaching a level of maturity that means I don’t go I KNOW I KNOW whenever I know the answer) is the material that isn’t about the technicalities of programming, but about the culture. We often discuss, for instance, the most aesthetically pleasing way of writing code. Watching smart coders attempt to verbalise those instincts is fascinating, especially when the instincts begin to spread through the group.

To give an example, we’ve been coding up a Python version of Conway’s Game of Life. We all spent a fair bit of time discussing that niggling problem with counting up how many neighbours a cell has. Do you do it “manually”:
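(The snippets we wrote in class aren’t preserved in this archive, so here is an illustrative sketch of the “manual” style. The function name, and the assumption that the grid is a list of lists of 0s and 1s whose edges wrap around, are mine rather than the original code.)

    def count_neighbours_manual(grid, x, y):
        # Spell out all eight neighbours by hand; the modulo arithmetic makes the
        # grid wrap around at the edges, so there are no awkward boundary checks.
        w, h = len(grid[0]), len(grid)
        return (grid[(y - 1) % h][(x - 1) % w] + grid[(y - 1) % h][x] + grid[(y - 1) % h][(x + 1) % w] +
                grid[y][(x - 1) % w]                                  + grid[y][(x + 1) % w] +
                grid[(y + 1) % h][(x - 1) % w] + grid[(y + 1) % h][x] + grid[(y + 1) % h][(x + 1) % w])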

or iteratively:
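(again, a sketch rather than the original code, under the same assumptions about the grid:)

    def count_neighbours_iterative(grid, x, y):
        # Loop over the eight offsets instead, skipping (0, 0), the cell itself.
        w, h = len(grid[0]), len(grid)
        count = 0
        for dy in (-1, 0, 1):
            for dx in (-1, 0, 1):
                if dx or dy:
                    count += grid[(y + dy) % h][(x + dx) % w]
        return count

Either way, the number you get back is what feeds the birth-and-survival rules for the next generation.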

I think most coders would end up doing the first, but they would feel a bit dirty doing it, just as I always feel a bit dirty when I have x and y as attributes, instead of being able to treat them as different aspects of the same thing. It’s the right instinct to try and generalise, and it was fun seeing starter programmers expressing their mild discomfort.

After we’d got Life to work, Seth rewarded us by showing Golly, which is a great cross-platform Life simulator with many pre-programmed patterns. I really had no idea that they’d managed to code up a Turing machine in Life, let alone patterns that emulate a universal machine, running a program that runs the Game of Life.

2008-10-09

hacker spaces and recessions

It’s awful to say that there are parts of recessions that I rather like. Maybe it’s just familiarity: I came of age in the early eighties, and left college in the 1990-1994 recession. My sense of what’s important gets confused in upturns: everyone is talking all at once about matters that I just can’t get excited about, but I feel somewhat silly for even thinking they might be wrong. Then the recession comes, and all my clever cynicism is (selectively) rewarded. In a recession, the signal to noise ratio seems greater. It’s easier to pick out promising ideas, and it feels better for the soul if you can express optimism when everyone else needs some extra.

I bumped into Jake Appelbaum today, and we talked a little about NoiseBridge, the San Francisco hacker space that he is helping to launch. It’s a little surprising that SF hasn’t had one before, but I think that’s partly because there are lots of informal, ad hoc spaces, and also because during boom times, there’s little need. Every start-up has a tiny piece of what you need to make a hacker space, and won’t give it up.

The timing to me seems perfect, though. It’s a good time to pool both resources and ideas: gather together everyone to work and talk together about their projects, and co-operate on relieving some of the burdens of getting ideas off the ground. I’ve already thought about how, given that I’m probably going to be moving into an even smaller space myself, I could deposit some of my most valuable textbooks at NoiseBridge: saving me space, and increasing their use. A lot of people will be wanting to broaden their skills, or spryly cross over to wherever there is a demand for hackerish minds (I remember well the great Perl hacker bioinformatics migration of 2001), so crossover technology like a chemistry lab and a darkroom is useful.

Something I noticed about the old recessions – the eighties, the nineties, the noughts – was that technology became a route out of poverty and dead-ends: there’s a huge proportion of system administrators and programmers who never made it through college, or high school, and found themselves in Silicon Valley, being airlifted to a sustainable life by one another’s efforts. I imagine this will happen again in this recession too. If we hunker down to build what comes next, it’ll be good to do it in a place where teenagers can help lead the charge.

Now I’m thinking of backspace on the banks of the Thames: an engine that seeded excitement behind a bunch of art and business projects (especially those that could not decide which they were). Is there a new hacker space imminent in London, Edinburgh, Manchester or elsewhere? I think it’s about time. Plenty of city business spaces going spare and empty, soon! Lots of advice available!

2008-10-06

considering an android

I like T-Mobile. I’ve been a subscriber to their mobile service in the US for years, and they’ve been pretty good: their support has always answered my questions, their online interface doesn’t suck, and their signal in the Bay Area has been good enough for me. They’re GSM, so you have a choice of phones, and I’ve never had problems unlocking their phones for when I travel abroad and cavort with foreign SIMs. At the moment I have one of their Nokia 6086 phones, which lets you use your WiFi hotspot to make calls, which means that I have free calls at home and work, and I can use it as an EDGE Bluetooth modem for my laptop when I don’t have Internet access otherwise. It only runs signed applications, which shows that T-Mobile’s love of entirely open hardware is profoundly limited, but, hey, as long as I can just treat it as a pipe to my really open device, I don’t care.

When the Android HTC G1 came out, I was tempted. Temptation was as far as it went, because I really can’t afford another gadget right now. Prodding around, though, I was a bit disappointed by Google’s Android OS. Android applications are Java apps running in their own sandboxed VM (Google’s Dalvik). There are APIs, but they don’t give you complete access to the metal, and everything is running in Java-time. That means that, for now at least, it looks like you have to write in Java, and you can’t try clever tricks outside the API.

That seemed to rule out the two applications I would love to have on a G1: a VoIP app, and a modem tether to connect my laptop to the phone’s 3G network. Of course, those are exactly the kinds of application that T-Mobile would blow a gasket to see on their phone, but that’s not a coincidence. Telcos only fret about software that their users would snatch in a second.

Now, though, I’ve seen a couple of comments in the Android developers’ community that make me more amenable to buying it. The first is this official statement that the G1 lets you run the Android debugging shell and install files and apps via USB. If cross-compiling for the G1 is as easy as it is on the emulated environment that comes with the SDK, that bodes well for writing tethering links — or even a VoIP application.

This comment saying Android will support the Java Native Interface (JNI) in the future, which would mean that native apps could access the Android Java API, and vice-versa, is also comforting. It looks like JNI is already supported, but undocumented.

It’s funny how, even when the entire OS and development environment is open source, there are still concerns that production Android phones could be locked down, and really no indication whether they are or not in tech media coverage of the phones. I don’t even know if the G1 will be upgradeable to later Android versions, whether I can install my own version of the OS (once Google release the source) or what future restrictions may be placed on my usage. These are questions that aren’t just pertinent to hackers — they are what will determine exactly how flexible the G1 and Android platform will be against the more tightly-controlled, but fast-moving iPhone target.

2008-10-05

technological determinism, open exceptionalism, defensive politicisation

Even though I end up being the person at the party who is (almost literally) contractually obliged to defend a fairly radical set of positions with regards to the Net, I’m often far more fascinated by probing other people’s views on how the Net works, and how it should work — even when they appear to agree with me. Of course, there have been alternative points of view since the Net began: not everyone was comfortable with the individualist libertarian free-speech default settings that dominated the early Net. But beyond the surface policies, I think there might be a deeper divide in expectations about the future of the Net, even among believers in a common set of values. Those who believe in the positive values of having an open Internet, with unencumbered free communication, with non-proprietary solutions to most problems, often have diverging ideas about how those positions should best be defended.

The first, and earliest, stance is technological determinism: the assumption that the technology just naturally rolls along to maximize the right degrees and kinds of freedom. The Internet is genetically immune to censorship; privacy is provided by encryption, and those who don’t use it deserve to lose theirs; corrupt empires are always stupid, and always fall. If you feel this way, then you probably don’t feel much of a need to overtly defend anything, apart from in Slashdot comments. If a particular situation occurs, you might even argue that its existence gives it a kind of moral credibility (Huge privacy violation? Inevitable consequence of sharing too much online). A lot of people still hold with this position. If you become disillusioned with it, you often end up far more sceptical of the Net’s benefits than average. I often read critiques of the Net that start with a personal voyage of discovery beginning with this stance, and ending with wholesale cynicism about the corporatist, ad-ridden, society-undermining filth of the interwebs. It’s also the most common position to project onto your opponents if you’re criticising “techno-utopianism”.

A modified version of technological determinism states that while the Net and allied systems don’t always provide positive values, they can certainly protect their best values when assaulted by alternative models. I guess the earliest model for this is the pragmatically-argued The Cathedral and the Bazaar. In this, open systems are presented alongside more closed systems, and it’s posited that, while there’s no inherent technological inevitability about them, their benefits are such that they can hold their own in a free market against other technological futures. There’s still a touch of determinism: Windows’ market share was always going to be eaten away little by little by Linux; but only by virtue of the fact that Linux’s openness provided key advantages against a more closed system. AOL and TCP/IP can do battle, and AOL could win, but TCP/IP would be more likely to, because its values of openness provided for better solutions than AOL’s. Call it open exceptionalism: the open solution will triumph, not because it’s right, nor because it’s built into the nature of technology, but because it has an unassailable market advantage. I think that open exceptionalism is probably the default position of the Google/Linux generation. It implies a greater degree of activity in the world in order to achieve good results than technological determinism, but not by much. It’s sort of the difference between salvation through faith alone and salvation through faith and good works.

And then you have a Lessig-like pessimism about the inevitability of those positive values. Openness is good, but the Net doesn’t always show it, and the preservation of its best attributes requires constant vigilance against vested interests that would undermine it. There’s no exceptionalism here: the Internet was incredibly lucky to reach the place it did quickly enough, before anyone realised it would be a threat. Its existence is a good in itself, but it can always be bent to bad ends, and may already be collapsing without us realising. We must use all our political tools to protect it, or risk losing any benefits it might once have offered us: a defensive politicisation of the Internet’s basic values.

It’s surprising how these frames of mind can put similarly-thinking people on opposite sides of policy decisions; think about net neutrality, ISP filtering, DRM, or open standards for government in any of these contexts and you’ll see what I mean. I personally oscillate between defensive politicisation and open exceptionalism.

And of course like everyone else I spend a lot of time trying to clarify the often incredibly vague ideas of “open” and “free” that muddy any of these stances.

2008-10-02

the XO abides

There’s a lot of people who have written off the OLPC: a pet project of Negroponte that lost momentum the moment the old staff got jettisoned and replaced with a CEO who said “the mission is to get the technology in the hands of as many children as possible” (in stark contrast to Negroponte’s original “It’s an education project, not a laptop project.”). I think the worst criticism has come from those on the Give-One-Get-One programme, who have regularly expressed disappointment with Sugar, the OLPC’s user interface, and the general state of the software.

What I find fascinating — and this isn’t just true of open source projects, though I think it’s more transparently noticeable — is what happens after that bump of enthusiasm fades. I’m beginning to believe that the great advantage of more open software (whether it’s open standards or open source) is the growing importance of slow-cooked software.

Firefox is a great example. The original Mozilla project, in a commercial context, should have been shuttered long before Firefox was developed: it pretty much was shuttered, by AOL, its major sponsor. But still development trundled along, fixing bugs, developing new enthusiasms, attracting young turks, accreting knowledgeable coders. And it slowly got better. Far too slowly for anyone to notice, until the Phoenix/Firefox team turned it all inside out.

I’d say the same is true of Unix in general. People say that those who do not understand Unix are condemned to repeat it badly, but in everyone else’s defence, Unix’s smug position is largely due to Unix folk making all the mistakes, and then veeery slowly backing out of them over a period of decades. When other, proprietary, systems go over a cliff, you frequently never see them again: and certainly the market gives them no time to learn from their mistakes. Who knows the lessons that PenPoint learnt? A lot of OS X’s benefits come from being a slow-cooked product: years of gently baking the Cocoa class library under the faint heat of NeXT’s limited audience.

With everyone’s attention off the OLPC, it nonetheless abides. The platform has shipped something like 400,000 laptops already. They’re getting ready to release a new update of the software, based on the latest version of Fedora, and with a whole bunch of UI and activity updates. Most G1G1 users won’t know all this, sadly, because they’re not a school, and consequently miss out on a lot of the support that the OLPC is designed to benefit from (if you do have an XO sitting on a shelf, you might want to try the latest builds).

It’s still not quite there, in my opinion, but it’s getting somewhere. They’re learning lessons, and the lessons they’re learning are school lessons, taken from educators’ experience in developing countries. The hardware is still gorgeous, especially the screen, and they’re only just beginning to exploit its potential (the only bugbear being the mousepad, which turned out to be a bit of a lemon: there’s a great deal of hacky code in place just to stop it from jumping around, and I believe they’ve abandoned its graphics tablet mode entirely).

It’s true — there was a great deal about the initial rollout of the OLPC that was screwed up, and if it was a strictly commercial concern, I wonder if it wouldn’t have gone to the wall by now. But it wasn’t, and it didn’t, and I’m fascinated to see what happens next.


petit disclaimer:
My employer has enough opinions of its own, without having to have mine too.