[Header image: a man slumped on his desk, from Goya’s ‘The Sleep of Reason Produces Monsters’]



Archive for the ‘Technology’ Category


home-grown talent

I’m downloading the large language model llama-13B-hf as we speak, hoping to get it going on the GPU I have for games. What strange gloss will they put on this moment in history, when machine-learning at home was enabled by videogame users who couldn’t bear to shift from general-purpose computing machines to consoles?

My iron self-discipline will surely prevent me from playing around all night trying to get this to work. My hope is to continue the experiment that I began with GPT-3, which is using it to filter and translate my social media feed. Even on Mastodon, I still feel those jolts of anxiety when someone confidently fires a verbal shot into the air, and I watch it arcing across the sky, landing, accidentally or not, in my heart.

(So far, it’s not running because of a capitalization typo. I am impressed that people think we have the wherewithal to practice AI Safety when we can’t even agree on how to capitalize “LlamaTokenizer”.)
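(For anyone hitting the same wall: in my case the casing mismatch lived in the checkpoint’s tokenizer_config.json, not in my code. A minimal sketch of the patch-up, where the config string below is a made-up stand-in for the real file:

```python
import json

# Hypothetical stand-in for a tokenizer_config.json from an early
# LLaMA-to-Hugging-Face conversion; the class name's casing is the
# part the transformers library chokes on.
config_text = '{"tokenizer_class": "LLaMATokenizer"}'
config = json.loads(config_text)

# Normalize the casing to the spelling transformers actually exports.
if config.get("tokenizer_class", "").lower() == "llamatokenizer":
    config["tokenizer_class"] = "LlamaTokenizer"

print(config["tokenizer_class"])  # → LlamaTokenizer
```

With the class name normalized, the tokenizer lookup stops failing, at least for checkpoints converted around that era.)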

Anyway, so my plan is to use this to identify posts that would upset me, and rephrase them in a form that preserves their meaning without giving me that gut-punch. Is that bad? Am I cloaking myself from the truth by doing this? Letting a MACHINE mess with what people are saying to me?

I’m not sure there’s a coherent position that works against that. I choose what I read all the time. I’m seeking to preserve the content of the message, if not its tone. If anything, I’m trying to make it less likely that I’ll ignore, filter, or refuse to engage with it. (I also want this system to summarize and re-iterate the posts that it most mangles, so I’ll always have some extra reminder of what I’m missing.)
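For the curious, the shape of the thing is roughly this. A toy sketch only, with the model call stubbed out: looks_upsetting, the word list, and the “[softened]” tag are placeholders for illustration, not my actual classifier or prompts.

```python
# Sketch of the feed-softening idea: flag a post that reads as a
# gut-punch, then ask a model to rephrase it while keeping the content.

RUDE_MARKERS = {"idiot", "disgrace", "shameful"}  # toy stand-in for a real classifier


def looks_upsetting(post: str) -> bool:
    """Toy heuristic; a real version would ask the model itself."""
    return any(word in post.lower() for word in RUDE_MARKERS)


def rephrase(post: str, complete=None) -> str:
    """Rewrite an upsetting post gently, via a completion function.

    `complete` is a hypothetical callable wrapping a local LLM; when
    it's absent we just tag the post, so nothing is silently hidden.
    """
    if not looks_upsetting(post):
        return post
    prompt = ("Rephrase the following post so it keeps its meaning "
              "but loses its aggressive tone:\n" + post)
    if complete is not None:
        return complete(prompt)
    return "[softened] " + post  # fallback: mark, don't mask


print(rephrase("What a shameful take."))  # → [softened] What a shameful take.
```

The summarize-the-mangled-posts part would hang off the same `complete` hook, with a different prompt.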

Of course, I’m being an absolute angel about how I do this. But will everyone else carefully construct a system to answer the most obvious objections? Another outrage, I guess. But how will I know you’re outraged? How will you know who is doing this at all? (And will they really want to?)

(I got it working. In the initial test of common sense, it told me that ants have four legs. When I asked it again how many legs an ant has, it said:

“Answer: Six, because you can’t have eight without a pair of pants on.”)


(Update: I fed it the Alpaca LoRA. Now it says:

An ant has six legs for movement and to carry its food. Ants use their legs to move around quickly and efficiently, allowing them to find food sources and avoid predators.

Well, mostly it says this. After multiple iterations, it once added that they have another couple of extra legs for picking up food, but hey, easy mistake to make.)


zero to sum

Sad about the District Court decision in Hachette v. Internet Archive; not just because of the ruling against the Archive, but because of many people’s reaction to it online. People have strange intuitions, not just about the status of the law, but also about how it progresses. There’s some tut-tutting that an august institution like the Archive should be wandering this close to the edges of the law, instead of playing it safe.

But the Archive wouldn’t exist if it had played it safe: if you ever wonder why there is only one of them (and there should be thousands of them), the idea of just going out into the Web, and recording everything, is not playing it safe. Of course, nobody thinks that now, because we live in a world that is erected on the edifice of freely available search-engines, and a presumed right for us all to take data from the Net, and use it for many different things. But that is not the model that sits at the heart of a maximalist IP theory — or indeed, of most jurisdictions that don’t allow for ad hoc exemptions and limitations to copyright. Under that model, everything is copyrighted the moment it is fixed, and you don’t get to see it, or touch it, digitally, without negotiating a contract with the rightsholder.

That’s such a violently different world from the physically-bound, pre-digital world of copyright. I don’t need to contract with anyone to read a physical book; I don’t need to beg permission to lend someone else that knowledge.

Now, I know that alternative model of digital copyright also seems at odds with reality to many: that we can make as many copies as we want of non-physical data, and give them to everybody, at zero cost, by default, and that to stop that from happening, we must adopt a set of encumbrances that seem barely capable of stemming that flow. But really, these are the limits of intellectual property as a model for either providing income, or effectively restricting the supply of knowledge.

So we have a choice: it’s unclear what the middle ground is, and whether there is a middle ground at all. I used to think that this was the nature of digital technology — that there were no clear perimeters to how much copying, or how much transformation or derivation, was tolerable, and that because of that, we’d live in an increasingly enforcement-heavy world, as one side attempted to draw a line in the sand, even as the sand shifted and writhed underneath them. To throw out another metaphor: that the punishments and locking-down would escalate, just as the impossibility of making real advances in World War I led to a tragic no-man’s land. People would copy for zero cost on their do-anything-machines, so lawmakers and rightsholders would increase the fines, and lock down the machines by force of law.

I still think this is a fair outline, but I’m beginning to think maybe intellectual property was always like this. Fixing ideas onto a scarcity-based economic model, like nailing jelly to a wall.

What makes me sad, though, is that even as the copyright maximalists attempt to create a government-enforced property system out of metaphors and thin air, people who claim to want justice join forces with them. Or not so much justice, but fairness.

I talked a little about this with Nathan Schneider today in The Decentralists, my interview thing that will soon be a podcast. Nathan noted that some people benefit unduly from public goods — in his example, venture capitalists extracting value from open source — and if we wanted to have a fair system, then we needed to work out a way to stop this.

I don’t think that way at all: in many ways, public goods are always going to have free riders, freeloaders, pirates and exploiters. That’s why they’re public goods! We can’t exclude people from benefiting from them. But that doesn’t mean we need to work out how to fence them away and ration their benefits, based on who gets them. What we need to do is work out how to stop free-riding from undermining the commons itself.

We are, as a species, peculiarly sensitive to cheats and slackers: they inspire our most immediate and profound sense of ire. It’s amazing how much brain matter we silently devote to calculating who has done what in our social circle, and how many fights start from disagreements about that assessment.

The positive version of that is that it inspires in us a desire for justice, and for equity. The negative side is that it breaks our brains when we have resources that everyone can keep taking from, without reducing the total amount.

If you just decide to walk away from the idea that free-riders must be punished in a digital space, you often get so much more done. One of the ways that the Internet beat every other digital networking project is that the rest of them were bogged down in working out who owed whom: protocols and interoperability foundered because so much effort was spent meticulously accounting for every bit. Same with the Web. It just got hand-waved away.

I think that some of the worst ramifications of the modern digital space stem from that hand-waving (the vacuum got filled by advertising, most notably), but it certainly wasn’t all bad. And, most importantly, ignoring who was free-riding on whom did not immediately kill the service by letting it collapse under the weight of parasites. It turned out that, in many cases, you could still manage to maintain and create a service that was better than any pre-emptively cautious, accounting-based system, even when it had to deal with spammers or pirates or those too poor to theoretically justify their access to the world’s most precious information under any less generous model.

I think you can construct justice and equity as an exercise in carefully balancing the patterns of growth: those worse off get the benefit, those already well-off don’t get to fence it away from the rest. What I don’t see as useful is to zero-sum everything, just to make the calculation tractable. If you can work out a way to make everybody better off, we should allow it, without trying to judge whether those who benefit are worthy. The Internet Archive, clearly, makes everybody better off, on almost every axis. And it did that, even in a world where many such things are seen as too risky or destabilising to be considered.

(1000 words)


A little change of pace. As part of the Not Secret But Not Entirely Documented Either plan to save the Internet, I’ve been spending a fair bit of time with lisp weenies. The parentheses are rubbing off on me, I think primarily because you can stuff lisp into tinier nooks than even Linux fits. One of them is now an LED board that I have stuck above my desk. In true Purpose Robot style, despite having more processing power than the space shuttle (or something), I have mainly used it to display a hardwired “ON AIR” sign when I’m video-conferencing.

I got bored or delusional or hyperfocused the other day, and now it still mainly says “ON AIR”, but it also has a uLisp interpreter to help it feel even more overpowered. You can telnet into a REPL (I recommend rlwrap telnet; there’s nothing that rlwrap can’t improve), and I bolted on some extra commands to do graphics as well as text. Code for the signpost is up on GitHub, including the script that watches for a video conference on Linux and then does something, which is probably more re-usable in other contexts.
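If you want the gist of the watcher without reading the repo: on Linux, whatever program has the webcam open keeps /dev/video0 as an open file descriptor, so you can poll /proc for it. A hypothetical re-sketch of that idea in Python; the actual script may go about it differently.

```python
# Sketch: detect whether any process has the webcam device open,
# by scanning every /proc/<pid>/fd for a link to the device node.
import os


def camera_in_use(device: str = "/dev/video0") -> bool:
    """Return True if any process currently has `device` open."""
    if not os.path.isdir("/proc"):
        return False  # not Linux; no procfs to inspect
    for pid in filter(str.isdigit, os.listdir("/proc")):
        fd_dir = os.path.join("/proc", pid, "fd")
        try:
            for fd in os.listdir(fd_dir):
                if os.path.realpath(os.path.join(fd_dir, fd)) == device:
                    return True
        except (PermissionError, FileNotFoundError):
            continue  # process vanished, or its fds aren't ours to read
    return False


# A watcher loop would poll this and flip the sign, something like:
#   display("ON AIR" if camera_in_use() else "")
```

The poll-and-flip loop on top of this is a few lines of sleep and compare.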

A couple of notes: GPT helped me tidy up some of this code, which made me less ashamed to post about it online — just stuff like error-checking and error messages. Another is that giving you access to my LED signpost is one of my little “we should be able to do this in a decentralized social environment” tests: both socially and technically. The fact that I don’t yet feel comfortable opening it up beyond my home is a big flag to me, and I want to keep worrying at this problem until I do.

(300 words)



I have a couple of friends who (like my other friend below, who is also not made up) are very irritated by the non-stop AI/GPT coverage. I’m really intrigued by the hard and somewhat arbitrary line between those — like me, I admit — who are just endlessly fascinated by all its developments, and those who can’t bear or understand any part of it.

One said, paraphrasing, that it was really the ridiculous level of hype and investment that got poured onto everything: local university AI labs, and smart cities, and stupid business plans, and bad social media algorithms, and endless snake oil, and so on. That makes complete sense to me! I’ve just, over time, completely compartmentalized that away from what I see as the compelling, transformative parts. I have much training in this, having lived through the dotcom boom, the blogging boom, the twitter revolution boom, and the crypto boom: constantly panning for gold in Eldorado’s rivers of shit.

The other was more intriguing: they just couldn’t bear the discourse, because they had ideas on it, and they needed to focus on other things. It was too attractive a problem, but fixing it was not their job; their job was elsewhere. This, too, I sympathize with: whatever this is, it still needs to fit in with all the other work we do, and just because it’s catnip for a certain kind of brain doesn’t mean those brains couldn’t also be put to work attending to other pressing challenges.

I’m sure I’ve mentioned this before, but when I first saw things that I had a baroque interest in suddenly turn into the wider world’s obsession and most lucrative industries — primarily computers and nerd culture — I believed that I had seen them first out of some profound predictive insight. It was only after a while that I realized that, no, I was just part of a cohort of privileged people whose tastes were aligned, and who would then, over time, go on to pursue those tastes, with plenty of investment and support from other, near-identical, people. Who they would then sell all this stuff to. I wasn’t different: I was just the same as every other white boy.

I don’t want to exaggerate this — you can whip up a culture out of nothing but the collective delusions of a privileged class, but it’s near impossible to craft it and maintain it without some connection to reality. Marvel movies exist because a middle-class generation of my age liked those comics, and went into the film business with that sensibility. But Stan Lee had to have built that on some fundamental narrative truths.

Separating that true component, the point at which the spirit of the age touches the eternal verities, from all the bullshit, is a skill. It’s a less marketable skill than you think, because, once again, you’re just one person recognizing the popular delusions of your own cohort: and the real money is in the popularity, not in separating the delusions from the ground. I have to keep remembering this: we don’t prophesy, we herd. We make the future, but we make it out of the things close at hand, using the opportunities and wealth the past handed us.

(542 words)


some fire in me yet

So I’ll keep this one short: it feels like I’m getting back into my stride, and I managed to knock out 2000 words on cognitive liberty and decentralization for a (shh secret) magazine that Mike Masnick is editing for us at the Foundation. The bad news is that my brief was 800-1000 words, but hey, better to kangaroo ahead in first gear a bit than not start the engine at all.

Here’s a sampler, the final mag will be openly licensed:

The PC was always intended as a machine that augments individual abilities. That ambition has deep roots, from Vannevar Bush’s 1945 essay “As We May Think”, Doug Engelbart’s 1962 paper “Augmenting Human Intellect”, through Ted Nelson’s 1974 manifesto “Computer Lib”, Steve Jobs’s 1980 “Bicycle for the Mind” campaign, to Sherry Turkle’s 1984 book “The Second Self” and beyond.

In this way of thinking about digital tech, the personal computer is an extension of your brain and its abilities. Its memory is to help you remember; its processing power is there to help you think faster; its network connection is for you to reach out to others; its interfaces are to connect more closely to you. It is yours in the same way as your hands belong to you, as your eyes, as your imagination.

Something has taken us from that tradition. The PC has inched closer to our faces, and under our skin. It has become ever more personal and intimate (do you sleep with your phone?). It has, in many ways, become more “user friendly”. But it has also become much, much less user-controlled. Its memory and processor now spend their time showing advertisements, enforcing copyright protection rules, and slyly surveilling your habits, all in ways that resist your ability to evade them. That network connection is used to stream out your behavior to strangers, rather than to let you voluntarily choose whom to communicate with.

No matter how they ape the liberatory language of this tradition, many of us look at Neuralink or VR and see it as a fundamentally alienating tech, controlled by others, leering into our personal space; foreign body horror rather than extensions of our selves.

Those on the cutting edge of technological adoption, like the elderly and the disabled, know the profound difference between tech that expands your personal autonomy, and tech that is limited and controlled by others. Many others who might think they have more freedom in what tech they adopt are feeling the walls close in too.

(400 words)


Only fans

My PC died yesterday, screaming in pain as its brain heated past boiling point. I went out to Central Computer, San Francisco’s local computer store, to get it a new fan. I got the wrong one, of course, but jury-rigged it in anyway. It didn’t help: I think the CPU cooler may have died too, in the wreckage.

One of the things I’ve been punishing that machine with is Whisper, a speech-to-text ML model that you can cram into a consumer GPU. Peter Thiel likes to say that cryptocurrency is libertarian, and AI is communist (because it requires powerful, centralized resources, and might be thrown at the calculation problem). AI certainly seems to be generating massive crop surpluses: Whisper was literally a side project for OpenAI, so that they could use it to parse and suck down video sources for GPT’s maw. I find this to be just one of the indications of an age of wonder. I’ve spent years worrying that open source was falling behind commercial speech recognition tooling, and OpenAI just chucked one over the transom as a favor. Oh, and it also translates, tolerably, and sometimes accidentally.

But my point here is what a pleasure it is to run these tools locally. As Simon, now AI whisperer to the world, notes, there’s a substantial difference between feeding an LLM through a grate in OpenAI’s door and having it run under your own control, and/or passing the model around among friends and submitting it to the processes of open improvement.

Having it sit within my domain means that I can do things like record myself all day, and then convert everything I’ve said into text at bedtime. Even though I mostly seem to talk to my cat, just my asides or mutterings are useful to me. I can throw videos or talks at it, and I can use it to control my house (ah, the geek dream). I suspect that, when GPT or llama gets lopped down enough to comfortably fit on that machine, it’ll be straightforward to wire all of these tools together: voice -> text -> GPT -> voice. I imagine this is weeks if not days away. After years of sharing everything with Google, I’ll be able to have a private conversation with my computer again.
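The wiring itself is the easy part. Here’s a sketch of the loop with every model call stubbed out: transcribe, complete, and speak are hypothetical wrappers around local Whisper, a llama-family model, and some TTS engine, not real APIs.

```python
# The voice -> text -> GPT -> voice loop, sketched with stand-ins.
# Each stub's comment notes what a real local implementation might call.

def transcribe(audio_path: str) -> str:
    # e.g. a local Whisper model transcribing the recorded audio
    return "what is on my calendar today"


def complete(prompt: str) -> str:
    # e.g. a local llama-family model generating a reply
    return "Answer to: " + prompt


def speak(text: str) -> None:
    # e.g. piping `text` to a local text-to-speech engine
    print(text)


def assistant_turn(audio_path: str) -> str:
    """One round trip: hear, think, speak."""
    heard = transcribe(audio_path)
    reply = complete(heard)
    speak(reply)
    return reply


assistant_turn("utterance.wav")  # hypothetical recording path
```

Everything stays on the machine; the only design question is which of the three stages you swap for a real model first.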

I also, in passing, think of that cautionary tale of open science, piracy, and brain uploading, Lena. What strange shapes will these models be stretched in private homes? What does it feel like to stick your hand into these talking machines?

And then, always conscious that it is not conscious, but nonetheless reminded of Dannie Abse’s poem, “In the Theatre”, which describes a neurosurgeon whose mistakes spark broken replies in a patient’s brain, as he tries futilely to remove their tumor.

‘Leave my soul alone, leave my soul alone,’
that voice so arctic and that cry so odd
had nowhere else to go—till the antique
gramophone wound down and the words began
to blur and slow, ‘ … leave … my … soul … alone … ’


Stream of conscientiousness

I had a list of new year’s resolutions this year, which I wrote and then forgot about, but at some level have been trying to complete ever since. Let me dig them up; hold on. Ah, here they are.

Well, I’m not losing any weight, but I am managing to live stream pretty often. I share a weird corner of the streaming world, where amateur programmers show strangers their screens and their faces while they do random coding. Mostly it happens on Twitch TV, which has cornered the market in esports and mass live video demonstrations of gaming prowess. Twitch TV also streams the long tail of what it used to call “Creative” — enthusiasts building PCs, drawing pictures, messing with clay, and growing chickens. After a mixed beginning (where you could see Twitch trying to avoid turning into a video sexworker marketplace, or just troll central), Twitch has clearly developed a fondness for these corner cases. Maybe it’s because they hark back to when it used to be Justin TV, and people showing you things they did was all it had.

Anyway, I’m hovering at the bottom of the “Science & Technology” category(!), a long way away from the 13 million followers of gamers like Ninja, and honestly a fair bit below popular coders like Al “The Best Python Teacher I Know Of” Sweigart, game developers like ShmellyOrc, and even other Lisp-exploring streams like Baggers and the mysterious algorithmic trader Inostdal. It’s okay though. I’m doing this for my own entertainment and sanity: livestreaming, for reasons that I’m still trying to understand, snapped me out of depression a year ago. (It’s not called Code Therapy for nothing.) Plus I’ve always enjoyed playing to small rooms, if they’re full of good people.

Anyway, as they say, subscribe and follow, follow and subscribe. Set it up to notify when I’m streaming, and come sit with me sometime. We’ll have a safely mediated chat, through protocols and stacks and obscene amounts of bandwidth.


living with guile

Liz is BLOGGING LOUDLY next to me, inspiring me to write in her wake. I do have plenty to say, but most of it is wrapped around work, and consequently needs to bake a little before I reveal it to the world. I love my job, but there’s a part of me that’s sad at how little I can talk informally about. Law firms are taciturn places by nature, and my own work is so … frequently diplomatic. Oh well, it all appears eventually, in some form or another.

Meanwhile, in real life, I continue to hack on my guile ‘n’ guix constructed machine. I submitted my first guix patch! My approach to this laptop is to make only the most incremental of changes when I absolutely feel I need to do them. So, for instance, I wanted to submit that patch, so I set up mail — but only outgoing email. I admit to some fripperies: I’ve just discovered that recent Xft/cairo/fontconfig/something support color emojis, and so I splashed out on fonts. But otherwise, it’s interesting cobbling everything slowly from scratch.


capital mood

I’ve been futzing around with LISPs. See how we say LISP like that, all in caps? That’s how I think of Lisp; it has this vague aura of pre-1980s aesthetic where capital letters were either teletype-obligatory, or an actual indicator of futuristic COMPUTER WORLD.

Case in computing is a funny thing, like a binary signal in the ebb and flow of fashion. When and why did Unix (UNIX™) shell commands adopt that lowercase chic? I still write my email address in lowercase, even on government forms that request all caps, out of a defiant alt tone — DANNY@SPESH.COM stinks of AOL, Compuserve, and doing it wrong.

Common Lisp, forged in the eighties, expected, like Lisp itself, to be timeless: Common Lisp has CAPITALS all over it. Not exclusively, though. I guess when you’re Guy Steele and you’re trying to bind together futuristic AI and McCarthy’s fifties experiments, smashing together upper and lowercase is the least of your temporal concerns.

Will upper case make a comeback? MAYBE IT ALREADY HAS.


go wild

I love watching the AlphaGo/이세돌 games. I barely know anything about Go, so I’m essentially pursuing my favourite hobby of watching smarter people reach out beyond their comprehension. The little shortcuts of explanation between expert Go players: the flurry of hand movements, the little trial explanations of future moves, and Go’s beautiful vocabulary, the subcultural mix of deliberate ironic calm and background, barely concealed anxiety and excitement. A friend said it felt like “surrealist theater” sometimes. But what I love about games, about programs, about science is that even when it’s hidden and barely explicable, there’s always something there.

Nobody seems to understand AlphaGo’s wilder moves. In the second game, everyone commenting belatedly realised that it was doing something in the center when everyone thought it was losing the upper right to Lee. Opinions on who was winning swung wildly from side to side. AlphaGo itself has a metric of how it thinks it’s doing (it resigns if it perceives it has a less than 10% chance of winning). We don’t get to see that during the game, but the program’s British inventors said afterwards that AlphaGo thought it had a 50/50 chance in the mid-game, and that its confidence slowly and consistently increased towards the end. Were AlphaGo’s early moves madness or genius, someone asked. We’ll know from whether it wins or not, another human replied. It won.

And again, something of a zeitgeist event. The AI people have been kicking around in my box of interesting predictors for nearly a decade; I think they feel that this is their moment.

I spent a couple of hours last weekend talking to Benjamen Walker about Nathan Barley, and the psychic damage of the early 2000s. At one point, I talked about the terrible distortion for technologists in the dotcom years of having years of everything you want and predict turn out to be true. Then, more sadly, I talked about how the magic had ebbed away. How so many of us coasted along on glib predictions that the Internet was going to make things nicer and more exciting for a decade, and it worked, and then suddenly every bet turned out wrong.

I hate actually predicting things, because as soon as you pre-commit, your perceived accuracy plummets (because now it’s your actual accuracy, which is never as much fun). As ever, I can just couch my predictions in woolly language here, so: I’m feeling myself be tugged along in the AI folks’ wake, because they’re going somewhere interesting for a few years, even if maybe the magic will fade from them before they reach home and the Singularity.

(Fun reading if you want it, in this vein: Crystal Society by Max Harms. My favourite book this year so far. And, just like my favourite book this decade, Constellation Games, indie/self-published.)

BTW, Constellation Games is the Book of Honor at the upcoming Potlatch science fiction conference. I’m mortified I’m missing it, but I think I’ll be ending up at the same city as the author (hi Leonard, are you going to be at LibrePlanet in Boston?), so maybe it’s not so bad. Who can predict?


petit disclaimer:
My employer has enough opinions of its own, without having to have mine too.