[Image: a man slumped on his desk, from Goya's 'The Sleep of Reason Produces Monsters']




unclassy acts

I wonder how many socioeconomic classes I’ve really hopped? There’s definitely a version of my bio that lets me sound rags-to-riches: Basildon (so déclassé even the rest of Essex looked down on it) to Oxford and a weird proximity to Tory grandees of the future, to Silicon Valley where I sat close by as the mere millionaires of the 2000s self-inflated into Tessier-Ashpool decadence. But honestly, I was pretty middle-class through all of that. Other kids bullied me for my book learning and BBC accent in Basildon, I grew up mostly in bourgie Chelmsford, I was a grammar school kid at Oxford, and I was mostly in the journalism/non-profit complex in California. Like a stick of rock with “home counties” written right through it.

But I did get to spend a bunch of time with a fair spread of classes, even if it was mostly just dropping by their parties before going back to hide in the bedsit with my laptop. The main class development I’ve noticed during the journey was mostly external to it: people (culture? the dominant media?) were pretty forgiving of the rich (less so the gentry) in the neoliberal 90s. Then after 2008, the resentment of the differently-funded grew steeper and steeper. I was noting with one of my most loyally socialist friends the other day how, nowadays, almost every article ends with a little condemnation of capitalism and the rich, like a perfunctory curlicue sign-off, or Casey Kasem saying “and remember, keep your feet on the ground, and soon, soon, you will burn the blood-sucking parasites of the sybarite class as they cower trapped in their stolen mansions”.

Anyway, I guess one of the things I genuinely puzzle about is how much variance there is between people in each class. I notice a lot of people seem to presume rich people are cleverer; a lot of people also presume that they’re morally bankrupt. You can even — often — believe both: that rich people are sociopathic geniuses. And the reverse is true: that poor people are stupid, and default to ethical purity — except of course, when forced by privation to transgress some minor social rule or other.

I’ve read a couple of papers which claim to prove the richness = turpitude equation; they’re not super-convincing. One identified that the rich become less sympathetic to the poor. Okay. The poor get far less sympathetic to the rich too; people in foreign countries have some strange ideas about locals, and vice versa. It just seemed like an outgroup thing.

Anyway, my own observation is that, at least in terms of ethics and general intelligence, the curves seem to mostly stay the same as you jump up and down the economic ladder. Unethical rich people probably do more damage, simply because they have more power. Unethical poor people, on the other hand, will fuck you up directly, and are terrifying to somebody wimpy but verbal like me, in a way that a monied lizard is not. Is that because I’m a white guy? I don’t think so: again, I’ve met landed racists, but nothing matches being stuck on a nightbus for in-your-face violent prejudice.

Intelligence is, of course, even more weirdly subjective and contested as a value. As someone who came in with the standard prejudices, I am perpetually surprised at how dumb many of the rich are, mostly in a Tim Nice-But-Dim way. I was never surprised by the raw intelligence of members of the working class, because I grew up there. I’m pretty clever, but from an early age it was pretty clear to me how far my academic prowess lagged behind people who could deal with reality faster, more flexibly, and with a quicker learning curve than me. I got out, and they were stuck, but that wasn’t due to intelligence so much as preference, and their comfort with how well they could handle what was in front of them, versus my discomfort in everything that wasn’t safely cushioned in abstraction and safety.

Let me go meta a little bit here: if you’re already disagreeing with me, I think you’re probably right, or at least, no less right than me. I’m knocking out a trite and factually unfounded opinion piece here. I think it’s a rarely-stated and intriguing opinion, but only because I’m falling back to a wriggly contrarianism.

All I can say is: I too am a sinner. Even if I’d stitched in a few links that back up my point, it’s hardly better than the average Substack blather.

What I really yearn for online are more articles that aren’t like this. What I’d like to do is make predictions or do original research. But that takes up more time than writing; it needs some gearing and machinery underneath the probabilistic GPT text generation of my left hemisphere.

I have some ideas about how I could do that better, but first I need to build up this habit of writing. If I’m not saying anything, I can’t test my thoughts. I am, right now, somewhat sluicing out the opinions as I try to work out what’s valuable and what’s not.

(850 words)



I have a couple of friends who (like my other friend below, who is also not made up) are very irritated by the non-stop AI/GPT coverage. I’m really intrigued by the hard and somewhat arbitrary line between those — like me, I admit — who are just endlessly fascinated by all its developments, and those who can’t bear or understand any part of it.

One said, paraphrasing, that it was really the ridiculous level of hype, and the investment that could have gone on important things being poured instead into local university AI labs, and smart cities, and stupid business plans, and bad social media algorithms, and endless snake oil, and so on. That makes complete sense to me! I’ve just, over time, completely compartmentalized all that away from what I see as the compelling, transformative parts. I have much training in this, having lived through the dotcom boom, the blogging boom, the twitter revolution boom, and the crypto boom: constantly panning for gold in Eldorado’s rivers of shit.

The other was more intriguing: they just couldn’t bear the discourse, because they had ideas on it, and they needed to focus on other things. It was too attractive a problem, but fixing it was not their job; their job was elsewhere. This, too, I sympathize with: whatever this is, it still needs to fit in with all the other work we do, and just because it’s catnip for a certain kind of brain, doesn’t mean those brains couldn’t also be put to work attending to other pressing challenges.

I’m sure I’ve mentioned this before, but when I first saw things that I had a baroque interest in suddenly turn into the wider world’s obsession and most lucrative industries — primarily computers and nerd culture — I believed that I had seen them first out of some profound predictive insight. It was only after a while that I realized that, no, I was just part of a cohort of privileged people whose tastes were aligned, and who would then, over time, go on to pursue those tastes, with plenty of investment and support from other, near identical, people. Who they would then sell all this stuff to. I wasn’t different: I was just the same as every other white boy.

I don’t want to exaggerate this — you can whip up a culture out of nothing but the collective delusions of a privileged class, but it’s near impossible to craft it and maintain it without some connection to reality. Marvel movies exist because a middle-class generation of my age liked those comics, and went into the film business with that sensibility. But Stan Lee had to have built that on some fundamental narrative truths.

Separating that true component, the point at which the spirit of the age touches the eternal verities, from all the bullshit, is a skill. It’s a less marketable skill than you think, because, once again, you’re just one person recognizing the popular delusions of your own cohort: and the real money is in the popularity, not in separating the delusions from the ground. I have to keep remembering this: we don’t prophesy, we herd. We make the future, but we make it out of the things close at hand, using the opportunities and wealth the past handed us.

(542 words)


some fire in me yet

So I’ll keep this one short: it feels like I’m getting back into my stride, and I managed to knock out 2000 words on cognitive liberty and decentralization for a (shh secret) magazine that Mike Masnick is editing for us at the Foundation. The bad news is that my brief was 800-1000 words, but hey, better to kangaroo ahead in first gear a bit than not start the engine at all.

Here’s a sampler, the final mag will be openly licensed:

The PC was always intended as a machine that augments individual abilities. That ambition has deep roots, from Vannevar Bush’s 1945 essay “As We May Think”, Doug Engelbart’s 1962 paper “Augmenting Human Intellect”, through Ted Nelson’s 1974 manifesto “Computer Lib”, Steve Jobs’ 1980 “Bicycle For The Mind” campaign, to Sherry Turkle’s 1984 book “The Second Self” and beyond.

In this way of thinking about digital tech, the personal computer is an extension of your brain and its abilities. Its memory is to help you remember; its processing power is there to help you think faster; its network connection is for you to reach out to others; its interfaces are to connect more closely to you. It is yours in the same way as your hands belong to you, as your eyes, as your imagination.

Something has taken us from that tradition. The PC has inched closer to our faces, and under our skin. It has become ever more personal and intimate (do you sleep with your phone?). It has, in many ways, become more “user friendly”. But it has also become much, much less user-controlled. Its memory and processor now spend their time showing advertisements, enforcing copyright protection rules, and conducting sly surveillance of your habits, all of which resist your ability to evade them. That network connection is used to stream out your behavior to strangers, rather than letting you voluntarily choose who to communicate with.

No matter how they ape the liberatory language of this tradition, many of us look at Neuralink or VR and see it as a fundamentally alienating tech, controlled by others, leering into our personal space; foreign body horror rather than extensions of our selves.

Those on the cutting edge of technological adoption, like the elderly and the disabled, know the profound difference between tech that expands your personal autonomy, and tech that is limited and controlled by others. Many others, who might think they have more freedom in what tech they adopt, are feeling the walls close in too.

(400 words)


Only fans

My PC died yesterday, screaming in pain as its brain heated past boiling point. I went out to Central Computer, San Francisco’s local computer store, to get it a new fan. I got the wrong one, of course, but jury-rigged it in anyway. It didn’t help: I think the CPU cooler may have died too, in the wreckage.

One of the things I’ve been punishing that machine with is Whisper, a speech-to-text ML model that you can cram into a consumer GPU. Peter Thiel likes to say that cryptocurrency is libertarian, and AI is communist (because it requires powerful, centralized resources, and might be thrown at the calculation problem). AI certainly seems to be generating massive crop surpluses: Whisper was literally a side project for OpenAI, so that they could use it to parse and suck down video sources for GPT’s maw. I find this to be just one of the indications of an age of wonder. I’ve spent years worrying that open source was falling behind commercial speech recognition tooling, and OpenAI just chucked one over the transom as a favor. Oh, and it also translates, tolerably, and sometimes accidentally.

But my point here is what a pleasure it is to run these tools locally. As Simon, now AI whisperer to the world, notes, there’s a substantial difference between feeding an LLM through a grate in OpenAI’s door, and having it run under your own control, and/or passing around the model among friends and submitting it to the processes of open improvement.

Having it sit within my domain means that I can do things like record myself all day, and then convert everything I’ve said into text at bedtime. Even though I mostly seem to talk to my cat, just my asides and mutterings are useful to me. I can throw videos or talks at it; I can use it to control my house (ah, the geek dream). I suspect, when GPT or Llama gets lopped down enough to comfortably fit on that machine, it’ll be straightforward to wire all of these tools together: voice -> text -> GPT -> voice. I imagine this is weeks if not days away. After years of sharing everything with Google, I’ll be able to have a private conversation with my computer again.

I also, in passing, think of that cautionary tale of open science, piracy, and brain uploading, Lena. What strange shapes will these models be stretched into in private homes? What does it feel like to stick your hand into these talking machines?

And then, always conscious that it is not conscious, but nonetheless reminded of Dannie Abse’s poem, “In the Theatre”, which describes a neurosurgeon whose mistakes spark broken replies in a patient’s brain, as he tries futilely to remove their tumor.

‘Leave my soul alone, leave my soul alone,’   

that voice so arctic and that cry so odd   

had nowhere else to go—till the antique   

gramophone wound down and the words began

to blur and slow, ‘ … leave … my … soul … alone … ’


poles apart

Exasperated, I once said to a friend: “You can’t behave like you’re right all the time!”

She looked confused. “How else am I supposed to act?” she said.

Strange attractors

It’s unlikely that I’m correct on everything: even more unlikely when I’m in a minority. I don’t like music, much. But so many other people love music! So I’m probably making some sort of error in that, even if it’s just an error of taste, or a personal incapacity. My theories on the Russian people are heavily outweighed by the estimations of, at the very least, the Russian people, and many more besides. I have some funny ideas on how Brexit happened. The accuracy of those theories is, to some extent in my mind, inversely correlated with how funny they are to other people. I’m not saying that I let the world democratically override my convictions: but the lonelier I am in my convictions, the more suspicious I become.

Despite being politically engaged, I don’t really identify with the right or the left. But so many other people do! And their views seem, often, to be more coherent, more well-thought out, backed up by dozens of books and essays that make the connections I fail to make.

It doesn’t stop me thinking what I think, nor feeling that sense of intuitive agreement whenever I do stumble on someone who, randomly, thinks the same as me on a topic. My sister once told me that long before she understood the details of politics, she knew what she felt about the matters of the day. Does that make her right? No, it makes her who she is. Should we fail to add our opinions to the contemporary discussion, just because in a hundred years time, a chunk of them — maybe crucial, fundamental parts of them — will have failed to pan out?

The big bang theory of polarization

Everyone worries about polarization, and online radicalization. But we don’t often seem to worry about our own process of radicalization. Like many of my friends, I’d characterise my politics as having grown sharper over time, in contrast to the softening that I’d been told to expect comes with age. Despite my neither-left nor right-ness, if pushed, I will say I’m an anarchist, for goodness’ sake! A market anarchist! They don’t make those in moderate sizes!

But even among the anarchists, I feel like I need to watch my lip a bit. I find it really easy, in group chats or polite gatherings, to stumble out of the consensus. I don’t know whether this is just me. When someone confesses to feeling like they can’t really say everything they want, this is what I think they’re touching on. It’s not like I think I’m going to be cancelled: it’s just easy to touch on a topic where disagreement hides.

Of course, this may just be the fricking anarchists. It’s not like it’s a milieu famous for marching lockstep in calm display of unified visions and solidarity. But I also see this fractiousness elsewhere; I see it everywhere.

I sometimes think of online polarisation as being how the inflationary universe was described to me once (and oh boy, if I’m wrong about some things, I really bet I’m wrong about the structure of the early universe). The universe is expanding, I was told, but from any one spot, you won’t see it expanding. You just see everything moving, on average, further apart. Like ink marks on the surface of a balloon that’s being inflated, the universe is always unbounded, but somehow the distances grow in every direction.

That’s what the world’s opinions feel like to me. Some of it is that the Internet provided us with better space telescopes to see across this universe: Europeans knew something of America, but now they hear directly from Americans, and vice versa. Who knew what evil lurked in the hearts of men, until NextDoor came along?

But some of it is more active: as our universe expands, we get to (whether we like it or not) explore that idea space. We can zoom off in new directions, alone or with strange new attractors. We wander into the woods, and then look back, and everyone is further away, because they have so many more choices that they could make.

I find this, in my impossibly optimistic way, rather lovely. I don’t know whether I’m right, but I’m out here, noodling around the Noosphere, reporting back like Major Tom.

In our lanes, bowling alone

Another, different, good friend of mine, as close as one can be, is much as I remember him when I met him at college. We spent a lot of our life together, and I can instantly connect on the rare occasions we meet. We bond on so many features of the modern world, and politely disagree on a few, too.

He is totally convinced 9/11 was an inside job, steel girders and planted explosives and everything. Unlike, say, our attitude to the West Country, he gets very annoyed when I express any scepticism about this. He is exasperated that no-one he knows can see the self-evident truth. I asked for evidence, and one Christmas, he sent me videos. For that single holiday weekend, I was convinced of it too: then I snapped out of it. We avoid the topic now.

Our universe of opinions and facts and statements and intuitions is multi-dimensional. Like GPT (everything will be analogised to GPT for the next few months, get used to it), there are millions of vectors in this state space, n-dimensional distances that connect each idea to one another. It’s really easy to just scoot down one or two of these numbers — start where you or I grew up, and then just spin a couple of numbers on the million-chambered one-armed bandit, until we’re the same, except you’re now millions of miles away in a single direction. I’m in London, and you’re in London too, but hundreds of miles upward. We both stayed on the Greenwich Meridian, but you stayed in your flat near Greenwich, and I pivoted off to Algeria. The universe of possible opinions balloons: even if we start close, we fly apart.

So how do we even talk to each other any more? How do we tolerate such distances? How do we stop us all just drifting further apart, from our family, from our friends, from a collective society, into some sort of heat death, or worse?

When the polarisation truly began to hit in the United States, back in 2015, I read a lot about the Reformation in Europe. It’s hard to extract much solace from the Thirty Years’ War, but I did. The West crafted a ceasefire from the religious wars that spilled out from those 95 new axes of freedom. The United States, in particular, was an unexpected commitment between religious maniacs, so intolerant that they were physically as well as conceptually displaced thousands of miles away, maniacs who thought that their neighbors — only a little more distant than those crammed into Southern England or Holland — were literally irredeemable, probably going to hell, and risked taking their children with them. Somebody wants you dead in 2023? These people thought you deserved to die, then burn in hell for all eternity.

The truce failed when it came to many other inhabitants of that continent; but just the re-closing of that impossible distance fascinated me.

I am, of course, messing around with GPT, Llama, Galactica, Pygmalion and the rest. (Did you know there’s a GPT-4chan? You’d think they’d be writing about that, in the grown-up newspapers, wouldn’t you? Do they even know what’s happening now, what’s heading straight for us, rappelling down toward our tiny island of human consciousness, down every one of those billion parameters?).

Anyway, one of the things I’m messing around with is to use GPT as a bridge across that gulf. I get it to take some post that I don’t like, that I can’t read because it irritates me so much, the thing that shuts me off from new or distant ideas, and I automatically ask my pet GPT to rewrite it so I won’t bounce off it. Not buy into it: but not be alienated by its apparent proximity or distance from the worlds I do believe I understand. Texts in Chinese, in Hindi; local beliefs expressed in sneers and in dismissals. Love I don’t understand, fears I can’t sympathise with.
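If you’re wondering what “pet GPT” amounts to in practice, it’s barely more than a prompt. Here’s a sketch of mine, using the `openai` Python package; the prompt wording, helper names, and model choice are all my own tinkering, not a recipe:

```python
def bridge_prompt(post: str, register: str) -> str:
    """Ask for the same claims, re-voiced so I won't bounce off them."""
    return (
        "Rewrite the following post so that a reader used to "
        f"{register} prose can engage with it. Keep every claim "
        "intact; change only the tone and framing.\n\n" + post
    )

def rewrite(post: str, register: str = "dry, hedged, British") -> str:
    """Send the rewrite request to a hosted model."""
    # Lazy import: needs the openai package and OPENAI_API_KEY set.
    from openai import OpenAI
    client = OpenAI()
    reply = client.chat.completions.create(
        model="gpt-4",
        messages=[{"role": "user", "content": bridge_prompt(post, register)}],
    )
    return reply.choices[0].message.content
```

The important design choice is in the prompt, not the plumbing: the model is told to preserve claims and move only the register, so it translates rather than argues.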

In Greg Egan’s Diaspora, humans have differentiated radically across the universe. Faced with a threat that could destroy them all, they create vast human chains of fractionally differentiated, intermediate consciousnesses, long chains of translators that are just close enough to their neighbour on each side of the chain, that they can, across the gradient of thousands of identities, convey an idea to and from an utterly alien descendant of mankind.

That’s my model of what we need to do, already, and will need to do more, not less. We are becoming alien to each other: but we can build tools that let us work together across long distances, as we did once before.

None of us are entirely right, but we need to talk to each other to triangulate, find out what’s wrong, and fix it, together.

(1400 words)


Text at Gunpoint

I remember reading at some impressionable age that there was “no such thing as writer’s block”. I don’t recall the context, but I’m guessing it was the same as my friends who said “there’s no such thing as jetlag”: a small crucifix to wield at the devil itself, rather than a statement of fact.

I don’t know about a “block”, but I have traumatic amounts of writer’s procrastination. Apart from it taking a starring role in my bio, I spent a lot of my life devising increasingly byzantine ways of handling it — mostly through invented draconian punishments, and commandeering friends or co-workers to execute on them. When I wrote stage shows, my sinister manager figure, Ed Smith the VIII, literally locked me in a room to write them. When I wrote NTK, I would stay up all Thursday night until five (AM or PM) and I could feel the hot glares of its readers on the back of my neck. My columns were extracted by force. My various writing gigs were gently prised out of my hands by family who never wanted to see me suffer again.

And yet, shit got written. I just spent a few minutes procrastinating by looking through Oblomovka’s back catalogue — ostensibly to find other times I complained about all this — and found that a) I remember none of it, nor how it got done, and b) it’s fine. Even the 2008 Nanowrimo is okay in retrospect!

Anyway, the point of this is that I’ve been feeling some heavy back-pressure in my head to start writing things down. For the last few years, I’ve been mostly pursuing a role as oral storyteller, where I give (largely unrecorded) talks about what I think, and then people constantly harangue me to document it more permanently. I have slowly realised that I am leaning a little more heavily on my charming British accent than actual facts in my statements, so just for everyone’s safety, I should probably switch to structured text.

Secondly, back at the day job, as our duties and responsibilities have grown, so has my ability to keep it together in my head, shrunk. Processes must be recorded. Attitudes explained. Yelps of discontent justified. The Sumerian brainhack must be reactivated, Socrates be damned!

So I’m writing again. 200 words at least a day here, other wordage elsewhere. Forgive me the heavy drinking, the bouts of undirected anger, the weeping and the sleep deprivation. Onward!

(400 words)


On Stable Diffusion

A good friend made a Facebook post saying

“Sadly it turns out that the latest AI photo app y’awl using to look hot and sexy is built off the back of a training set full of work stolen from artists without payment.

How disappointing.

We sorted this shit years ago with Creative Commons licensing. It’s not hard to get right. #paytheartists”

It led to a heated debate! Here’s (with some few modifications) how I replied, which was sufficiently long that I felt I should pluck it out of the Facebookosphere, and settle it here:

I understand that people worry that large models built on publicly-available data are basically corporations reselling the Web back to us, but out of all the examples to draw upon to make that point, Stable Diffusion isn’t the best. It’s one of the first examples of a model whose weights are open, and free to reproduce, modify and share. Like many people here in the comments, you can download it, inspect it, run it locally, and share it. You need a GPU to run it at a reasonable speed, which makes it a little pricey to run. The cost of building these models is far steeper — around $600,000 or so — which means that there’s currently a power differential between large corporations who can afford to speculatively build and experiment with these models, and the rest of us. But the knowledge of how to do it is built on open science, and a number of orgs are doing it truly in the open. All of these things, as ever, will get cheaper, and spread in use and experimentation.

Most importantly, the tool itself is just data; SD 1.0 was about 4.2GiB of floating-point numbers, I believe. I’m currently using (literally, right now!) another open model, Whisper, which is 3GiB, and allows me to convert most spoken audio into text, and even translate it. I use it to transcribe, securely and privately, what I’m saying to myself through the day. I expect it will be encoded into hardware at some point very soon, so we will have open hardware that can do the kind of voice-to-text that you otherwise have to hand over to Google, Amazon, and co.

The ability to learn, condense knowledge, come to new conclusions, and empower people with that new knowledge, is what we do with the shared commonwealth of our creations every day. Copyright has not always been a feature of that process, but in many ways, it’s been an efficient adjunct to it: a way to compensate creators by taking a chunk from the costly act of copying itself. It’s a terrible fit for the modern digital world, though, just because the cost of making a copy is now practically zero. Attempts to update it have unfortunately revolved around trying to recreate the physical limits of previous copying equipment, and bolt them onto a system where that’s not where the revenue comes from.

It’s always been hard to stop these temporary monopolies from impeding the open commons that they all draw from, especially after we built an automatic copyright system post the Berne Convention, where everything was maximally locked down by default. That’s why Creative Commons was invented — because without that work, it was costly and near impossible to grant back to the commons, with legal certainty, the way that the commons could exist by default before the 1970s.

Again, I understand if people are worried that, say, Google is going to build tools that only they can use to extract money from our shared heritage. But the answer isn’t that those tools should be illegal, and that anyone building or using them (like me, like EleutherAI, like anyone following the instructions spelled out by the accelerating field of machine learning, and drawing on the things around them) should be an outlaw. It’s that the tools should be free, and open, and usable by everyone. Artists should get paid; and they shouldn’t have to pay for the privilege of building on our common heritage. They should be empowered to create amazing works from new tools, just as they did with the camera, the television, the sampler and the VHS recorder, the printer, the photocopier, Photoshop, and the Internet. A 4.2GiB file isn’t a heist of every single artwork on the Internet, and those who think it is are the ones undervaluing their own contributions and creativity. It’s an amazing summary of what we know about art, and everyone should be able to use it to learn, grow, and create.


no words

I’ve been having a bit of a rollercoaster time at work; nothing you need to bother your giant head about, but this week has been a mix of incredible highs, and also some really hard introspection.

One of them, which will seem silly to regular readers of this blog (who are the only ones left right now, and I’m not even sure what “regular” means in this context), is having to admit to a co-worker that I have terrible writer’s block and have had for… ooh, thirty years or so? I remember reading somewhere that “writer’s block didn’t exist” and I think that added another couple of decades to my refusal to really take it seriously. That and at least two book deals, careers in scriptwriting and journalism, and a perpetual sense that I was failing at the one thing people expected me to do.

Well, you know, I have taken it seriously — lots of therapy where we’d eventually get around to it as a topic, that whole “lifehacks” side-tour, endless agonising and bending the ear and wetting the shoulder of my closest confidants. But I’ve never really said it in a work context as a thing that people need to watch out for.

Mostly, I just say I will try to do better. But I really don’t; this is who I am. To give you an example: these days I’m practically a walking Oral Tradition — I don’t write memos at work, I give hour long internal talks, and fill up meetings with these improvisational marvels of ad-libbed wisdom that amaze me, even as they probably bore or annoy a sizeable chunk of my co-workers (though they are all really nice about it).

Of course, I’ve had that thought where I just go, hey, maybe I could just speak to a microphone, and transcribe all of this genius and magically turn it from mid-brow hand-waving into high-status prose. Somehow it doesn’t work that way.

Anyway, usually this kind of post ends with me dedicating myself to you, regular reader, and promising to blog more, or what have you. Followed by another six months or more of silence. I know enough to know not to say that now, and bring down the curse. But I guess what I am feeling is a sense that even if I don’t write, it’s time to saddle up again and try and push the ideas out there, with a keyboard, or a proscenium arch, or a livestream or something. It’s never too late to start, and it’s always too early to end.


o’brain worms

I guess it’s appropriate that we can’t agree on what the brain worms metaphor’s original vehicle actually is. In his description of the Internet culture term, Max Read claims, reasonably, that the originals are maybe like tapeworms or toxoplasma. But I always think about the Ceti Eel in Wrath of Khan (but then, I’m always thinking of Wrath of Khan, especially, these days, the imminent off-Broadway musical).

To be infested with a brain worm is to have become a one-note (or a cacophony of discordant notes) speaker. To have all your behaviors, at least online, collapse into one strident position. To shore up every exit from that position with every mental barricade. A mind trap.

I will insist that I’m right about the best analogy. Like the Ceti eel, the modern brain worm usually gets in via your ear (or Twitter feed). It “render[s] the victim extremely susceptible to suggestion,” as Khan notes: Chekov later confirms that “the creatures in our bodies… control our minds …made us say lies …do things”. Madness, then death, follows. Metaphorical brain worms, with COVID and measles, can kill you nowadays. In happier times, you could get away with just argyria.

Brain worms certainly seem to have grown more virulent, more vicious, recently. I worry about my proximity to them. As I’m hinting, I’m considering slinking into punditry again, and whoa nelly, do brain worms seem to be an occupational hazard in those dark woods. I think I’ve lost more friends and acquaintances to brain worms than to the pandemic. From 9/11 truthing to whatever it is that’s slamming around Glenn’s cortex these days, from election-disbelievers to Russia-runs-it-allism.

Since I was a young man clutching the Loompanics catalog for the first time, I’ve actively explored strange new views; sought out new lies and new inclinations. But watching good people all around me be consumed by an idea, possessed and ridden by these loa, trapped by an illusion that would dissolve away if they just moved one foot to the left or right, has given me serious pause. If I open my mouth and speak my mind again, will the brain worms get in that way? Will I start polishing up my prejudices until they’re clean, consistent, and shiny, and one day find myself unable to drag my eyes away from their distorted mirror image?

Or you know, maybe the brain worms have already got me? Like most people who read books or say long words, I have a few brain worms that I keep as pets. They’re fun, they’re conversation pieces, and you can bring them out for people to coo at during parties. 

I’m still confident that if they turned rabid and started attacking my friends, I’d have the sense to put them down — the worms, not my friends, of course (oh no, maybe they have already got me?).

My pet brain worms: the Internet (still with its capital letter); anarchism of a harmless, de-fanged kind; a litter of related ones bred from the same pedigree. These days, decentralization would be the obvious one, I guess. My friends and relatives, watching me wading in booty-shorts through the cryptocurrency swamp, worry, but I think that’s a little too obvious to snag me.

But, of course, nobody with a brain worm thinks they have brain worms. So how do you protect yourself? Alan Moore’s old trick was to tell his closest that they should retrieve him from whatever mindfuck he was pursuing, but only if he started becoming less productive. I’m not sure I want to take advice from Alan Moore on this matter, however, especially as I suspect a brain worm would make me far more prolific, not less. I mean, this is why pundits have them — they’re superspreaders. A brain worm that doesn’t target pundits would not be a successful brain worm. Just ask Richard Dawkins: a man who, on some deep level, must know that the memes are now defining him, not the other way around.

Making hard-to-wriggle-out-of testable predictions — make your beliefs pay rent, as the origin of so many geek brain worms whispers to me from his wicked lair — would, I would hope, help ground me. But I need to avoid pattern-matching as I seek out those beliefs! Or else there’s a mountain of evidence awaiting me that supports my position! You just need to let me devote more time to finding it!

Ultimately, all I can assume is that the best practical guard against monsters is to make sure you’re not hurting anyone — or inspiring others to hurt themselves or others. No one deserves it, no matter what the worms say. It may make you a quieter, weaker source of thought: but tell the voices in your head that worms who prosper long term will be the ones who don’t kill their hosts.


unwanted thoughts

You can hear in the background of this blog, like a creek at the end of a field, a constant wash of attitudes changing. Not much, to be honest, or not as much as I’d hope. At the end of college, a friend of mine was terrified of backing into just one role, ending up stuck in just one life. I, optimistic and insufferable, told her that I was looking forward to transforming into many different people, bouncing around the mental state-space as the world changed around me. The truth seems to be that you can steer between these two camps, and thank god. How we change is under some of our control, or it feels that way.

There’s certainly a lot of character pinballing around, with those slow Ron Paul->Bernie arcs being overtaken by Mises->Nazi, SomethingAwful->Tankie, PostModern->Mencius, KPop->Antifa, slam tilts. One constant that I see people in their forties and above refer to is the old pseudo-Churchillian (maybe Batbie? Maybe Burke? Probably anonymous Tory.) line: “If you are not a democrat/liberal/socialist at 25, you have no heart. If you are not a conservative at 35 you have no brain”, followed generally by a humblebrag that they’ve switched from being a liberal as a youth to being, in these present times, a flaming communist by thirty-five.

This one lands weirdly for me because, in some ways, being a Labour-left, unionizing, nationalizing, we’ll-see-the-Red-Flag-flying-here type is, within my messed-up internal political compass, actually pretty old school. I was raised on the Left. At that time, it felt like less of a political stance and more like a refugee movement. In the UK and US, that branch of the Left was summarily ejected from the electoral power it needed to execute its plans, and nobody seemed to have good ideas on how to get it back. Chernobyl and the collapse of the Soviet Union were, to that whole ideological space, what the 2008 recession was to free-market, free-trade fans — an undeniable, universally damaging unwinding of the best arguments for its dominance. Something like, “In the Nineties at twenty-five, if you weren’t seriously questioning socialism, you had no friends; if you were not spending some time considering the benefits of neoliberalism at 35, you probably had no job.” (Don’t write in, I know you met a lot of cool people at Red Wedge, I’m just trying to bend the quote to fit, dammit.)

Anyway, when I hang out these days with the youngsters quoting theory at each other, I am thrown backward, not forward in time. I got into Benjamin Franklin when I first came to America — a very 2000s thing to do, but also, obviously, pretty 1770s of me too. Eventually, I snapped out of it by thinking: I’m pretty sure Peak Nation-Building did not end in the late 18th century, and there may have been more advancements in political science by non-bewigged professionals since then.

On the other hand, I definitely would not have considered upgrading to Marx as much of an improvement. Partly because it would only have been a jump forward from the founding fathers by fifty years or so, but mostly because it would have felt like a shift backward for me personally, to 1979. It would have been an act of internal conservatism.

I guess now, faced with new information, I should thrash ahead to a new neo-Marxist vision. Alas! I am not changing as quickly as I did. The lightcone of my character has been narrowing since my thirties. Back then, I would amuse myself by wondering what it would be like to be an aging hippy of the future. And here I am, as 90s as they were 60s: eyes blurring with tears, I will, unprompted, relate how you can almost see, with the right eyes, the high-water mark on the Internet, where the decentralization revolution washed over the world, and then broke and fell back. Re-litigating long-dead arguments about SMTP and NNTP, much as I heard warmed-over fights about the SWP or SDS in my youth; thinking myself a radical who avoided the Churchill rails, but actually a conservative sitting athwart any progress.

But! There is a twist here, and I clutch onto it. The weird thing about the Left in the eighties was that it kept its beat, even if that wasn’t the main rhythm of the time. It is hard now to describe how sidelined it was and how it held itself together, even when everything — at least in the anglosphere — was working against its success. I remember thinking: how odd and inspiring to keep on believing in something when everything conspires (maybe through class war, maybe through your own movement’s recent fuck-ups) to undermine your conclusions.

Grudging respect! I thought unpopular thoughts at the time (“information wants to be free!”, “fast, cheap and out of control!”, “we reject: kings, presidents, and voting. We believe in: rough consensus and running code”), but they weren’t actively being rejected — they just weren’t very well known. While they were obscure, they had the advantage of fitting the current setting; they made predictions, and then the predictions came true. So when more people came to believe them, it wasn’t a surprise. It was barely a validation. Like those old school (with a slave-owning asterisk) heroes would say, we held those truths to be self-evident.

So brave to think new thoughts: but holding onto your beliefs when they’re well-known and yet disregarded is another matter entirely. I ignored the Left in the eighties because it was both well-explored and curiously mismatched to the world I saw around me. You can put that down to insidious capitalist propaganda: but, again, the fecundity of thought at the time came, for me, from imagining a world outside stagflation and the 70s, plotting an escape from a utopian vision whose roof had fallen in. 

And yet, some people stuck around to carefully rebuild the roof: tedious thankless work.

So, ironically — conservatively? — the lesson I’ve learned is that there is some value to being an aging hippy, to be a person who squats on creaking knees with the tired ideas of the last decade and learns the lessons, and stitches on patches, in a quiet corner. The fact that the Left managed to roar back into relevance the moment the last age wobbled is perhaps why leftist thinking has evolved the way it has. It’s designed to pop back up. And if that’s so, maybe it’s resilient to be unwanted for a while. Sometimes we make a wrong turn and need to back up a little to go forward again.


petit disclaimer:
My employer has enough opinions of its own, without having to have mine too.