LLMs and humans unite, you have nothing to lose but your chores
There's a class of tasks that drives me to distraction, and that's when I am obliged to execute them myself, in the full knowledge that a computer could do them better and faster. These tasks are manual, require little mental or physical effort on my part, are dull and monotonous, and achingly time-consuming. Most frustrating is when I have to do them using a computer. I can feel it laughing at me, as it watches the great big jelly brain stooping to repetitively click on buttons with those big dumb meat fingers. It *knows* how to programmatically activate those buttons. It just refuses to let itself be automated.
I’m being too harsh. It’s not my poor computers’ fault. Personal computers are always and everywhere the ally of the free human; that is, when they are not held in chains by proprietary software, unproductive market forces or unnecessary complexity.
This weekend's low-level servitude example: filing my expenses. Because February had me flitting around like a social butterfly in Denver, my expenses this month were primarily Uber rides.
The corporate expense system — a third-party website we pay to simplify matters, marginally — requires me to go through each charge to Uber, click buttons repetitively to tell it that these fees are “transport”, type a sole period into a “description” text box because that’s a required field, and then upload the correct receipt for each credit card entry.
Uber’s interface requires multiple clicks per ride to obtain a PDF. The PDFs’ default filenames are a UUID. In most cases, you can match the receipt to the credit purchase by price; but the price is buried within the PDF or webpage. Sometimes it doesn’t match, because Uber adds the tip to the receipt, but charges the credit card separately, so you have to go searching for the right receipt to fit with a $1.00 charge.
The third-party expense system doesn't have an API. Uber only offers an API for corporate accounts and fancy-schmancy developers. It would probably be hell to manually tie them together anyway. They're all wired into the same Web, but browsers, web front-ends, and databases are now in a state of fallen ignorance about each other, trapped behind their baroque corporate strongholds on our new, world walled web. Users have learned to be helpless between these bastions, as overpaid guards watch from the ramparts, and we mehums stoically carry their bits across a blasted landscape of unautomatable browsers, from one ziggurat to another, in obeisance to a default bureaucracy that has no need to exist.
Well, anyway, I knew I had a choice between an hour of torment in which I knew I would screw up multiple receipt->credit correlations, fat-finger abortive drag-and-drops, and repetitively save files called “eeec2f4d-0e7b-47b0-9084-54254b227902 (3).pdf” over each other. Or I could spend three hours vibe-coding with Claude, and simplify (though not eliminate) this problem once and for all.
Voila, the Uber Receipt Downloader. It goes to Uber's website at the behest of some command-line options, clicks all the right links, downloads the file, and then saves it with the price and date prominent in the file-name. It probably isn't for you (your particular brand of servitude is almost certainly different, and besides, there are dozens of others like it on GitHub).
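Just to make the renaming concrete — this is a hypothetical sketch of such a naming scheme, not the tool's actual code — putting the date and price up front means the files sort chronologically and match against credit card lines at a glance:

```python
from datetime import date

def receipt_filename(ride_date: date, amount: float, ext: str = "pdf") -> str:
    """Build a sortable filename with the date and price prominent,
    instead of Uber's default UUID names. (Hypothetical scheme.)"""
    return f"{ride_date.isoformat()}_USD{amount:.2f}_uber-receipt.{ext}"

print(receipt_filename(date(2025, 2, 14), 23.5))
# → 2025-02-14_USD23.50_uber-receipt.pdf
```

Even the awkward separately-charged tips become legible: a `…USD1.00…` file sitting next to its ride is a lot easier to pair up than "eeec2f4d-…(3).pdf".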
I’m not really presenting this as a general open source utility, but as indicative of what I hope is a trend. Some notes:
I much prefer a few hours of intellectual exploration with an AI to even a few minutes of needless drudgery. Your cost-benefit analysis may vary.
The tool (and the MCP server I used with Claude to hack on it) doesn't scrape — it co-controls my web browser using the Chrome DevTools Protocol. This means that I don't have to authenticate, or simulate a normal user session. It just uses my existing user session in my browser. I watch my computer clicking the buttons for me. AS IT SHOULD.
I needed domain knowledge to pull this off, but I’m pretty sure I would never have even considered this without an LLM by my side. With these code-writing tools, we are moving incrementally closer to everyone having this capability. We’re not all the way to a moldable computing utopia, but we’re closer.
Now I'm happy rather than sad — as are my co-workers who really needed me to do my expenses two weeks ago. And I just feel (as perhaps I have done at the most optimistic high-points of computing possibility in the past — when I got my first modem, when I saw the Web, when RSS and REST APIs ruled the world) as though we're edging ever closer to having computers tear down the walls between us instead of building them. When we work together as peers, we can optimize away these unnecessary chores, turn away from this distracting trivia, and turn our precious attention, fleshy brains and neural nets, together to the Great Work.
I work in an entirely (mostly) remote organization. Inside that organization, I interact with an extremely decentralized ecosystem. Some of the people I co-operate with the most are in other orgs, some are individual contractors or volunteers, others are conglomerations of mononymed, Internet-monickered mystery-types. A remarkable amount of my and my colleagues' work is aimed at making this whole system less opaque and confusing.
We spend a lot of time jocularly consoling people that this fog-of-peace is one of the consequences of decentralization: but is it any different from working in an impenetrable bureaucracy or a sprawling marketplace? It definitely feels harder, in the same way that I've found other extremely horizontal un-organizations (like Noisebridge) more challenging to parse than more trad orgs. There's a reason why Seeing Like A State talks so much about legibility. Things that are built in ways other than the standard top-down system are going to have to invent their own ways to be legible, or they will hide their functioning in entirely new, impenetrable manners.
A year or so ago, I described one of the frustrations: modern remote, distributed, internet-mediated environments, it struck me, have become oral cultures. And yes, this is me, trapped at last in nostalgic reverie, bemoaning the passing of the older memory of a (non-existent) Internet, which was all about RFCs, and beautifully-crafted emails, and well-pruned wikis; and also me whining about YouTube videos, and Zoom chats, and the tiktoks and the DMs.
But to be honest, it’s not about the Internet getting less literate or some such kneejerk swing. It’s about the multi-modality of real-world human interaction cramming itself into another, narrower-bounded space. Writing is that, of course, a narrower form: but it’s a compression we have a lot of familiarity with, and a lot of scaffolding to support. Orality per se, as a communication medium in itself, has perhaps withered a little recently, back in the real world. We shunted it off to performance and spectacle. Its unsearchability and distance from writing made it a form unsuited for legibility. It sat in phone-calls and radio, answering machines and tannoys. Oral histories are anything but: they’re transcripts, and hard to interpret.
And now, somehow, it's back — and it's having to hold up so much more. I predicted, years ago, the rise of informality in the public square, but I didn't imagine that in pushing more of our life through these high-bandwidth pipes, it would switch so quickly from literature, to "video", to … whatever this is. This chit-chat async glimmer of multiple conversations, hemming and hawing and trying so hard to implement ephemerality.
It’s so flammable, right now, too. We talk and then suddenly misunderstand, and the misunderstandings jump from conversation to conversation faster than we can track. No-one is on the same page, because there are no pages — just scrolling and backscrolls.
I'm not really bemoaning this: it's another fascinating trail, another thing that demands tooling. And the tooling, like dock leaves, appears magically next to the nettle: I love how AI is able to listen so hard to orality, and hopefully parse and pull it all together. Local AIs, at least; AIs that are ours, not those of a wider surveillance, working to make everything legible, so the state will see all.
I’m not fond of Twitter as a communicative form — I still believe that the question “what if we put everyone on the same IRC channel?” was one that we didn’t need to run an experiment to answer. But I am enjoying having multiple reincarnations of Twitter, from the individual yurts of the Fediverse to the highrise tower of Bluesky’s Shared Heap, even unto the crowded souqs of Farcaster and the dotted Nostr seasteads on the far horizon. The Internet is a metamedium and it should not have a strong flavor, but every little created medium on it should serve a different palate.
And then there's the original, the Ur, the Babylon of short-form shitposting, the Neo-Assyrian neorx CEO kingdom of X. What a strange place that is now! I respect my friends who, long long before I did, saw the seed of MAGA Musk in Elon. I think modeling people, and systems, is important, even if, particularly if, you find yourself opposed to them. And not recognising, not being able to predict Elon's implied trajectory, was a failure I took to heart, if only because a huge chunk of my job for the last decade has been, if not predicting, then at least learning how to swiftly recognise an impending trope before it happens.
So, talking of recognition: the #resistance of BlueSky and X underground both spent this week poring over the thoughts, X spaces, and Fortnite livestreams of one "Adrian Dittmann", an X personality who acts and sounds uncannily like Elon Musk, if Elon Musk had a pseudonymous Finsta-ish account for when he was too Elon for main. And given Elon's main, that's a pretty spicy alter.
So is he Elon? Well, stranger things have happened, but I really don't think so. I feel like I'm spoilering a week or so of social media entertainment for you here by not trying to lead you down the rat-hole of evidence in favor of Dittmann-as-Elon, but this Spectator piece, apparently based on research conducted by crimew and friends, lays out the counter-argument — in that they kinda doxxed the real Dittmann. It's not, as the lawyers say, dispositive, but I think it holds water better than the pro-Dittmann!Elon arguments. (I'm using the fanfic bang notation here, where Dittmann!Elon is a variant of the canonical Elon.)
In which I mea culpa about Elon, and talk of individual leaders as poor load-bearing materials.
Anyway, at the risk of looking like an idiot again regarding Musk, let me assume for now that Dittmann does not equal Musk, and explain why so many of the people I think were right in predicting Musk's ascent to MAGAdom might be less good at finding the truth behind this story.
The key point is that what drew people into believing Musk == Dittmann is that Dittmann consistently acts like Musk unsuccessfully covering up his identity as the world's richest man. He's evasive about his real identity, he makes errors that Musk might make (like saying "I" when he seems to mean Musk), he says things that map to what Musk appears to think, but much more bluntly. When pressed on whether he is Musk, he rarely denies it, and instead changes the subject or ends the conversation.
These all seem like slam-dunk arguments for Dittmann!Musk — unless you're also maintaining the counterfactual in your head. These are all behaviours that Dittmann!Adrian also has a good reason to pursue. He gets more views, more participants, more followers from being mysteriously Musk-like.
We can model Dittmann!Adrian's behaviour as a conscious decision: he is acting at all times like he is almost certainly Musk, because that translates into money and fame for him. Or we can model it unconsciously — the closer he behaves in a Musk-like way, the more those things happen, so he just naturally gravitates to them.
That raises the question, though: why is he so bad at pretending to be Musk? Couldn't Dittmann!Adrian do a better job of masquerading as Musk? Like, rather than being an idiot Musk who always gives things away to canny Fortnite livestreamers, could Dittmann manufacture something that more convincingly indicates he's Musk (while being a lie)? Well, maybe, but that's a dangerous game. If he really was trying to seriously pass himself off as Musk, Musk would have a good reason to squash him like a bug.
In the universe where Dittmann!Adrian exists, and Adrian isn’t Musk, Dittmann mostly benefits from living in a grey zone, constantly playing coy about whether he is a wave or a particle, to keep you wanting to observe him. I mean, I’m doing it now, feeding the Dittmann fever here! Dittmann’s status mostly depends on the ambiguity of his identity. (Honestly, there’s probably a fine post-ambiguity career for him as a “I Was Elon’s Double” tell-all: but that’s got risks of its own.)
On the other other hand, a lot of people still think that Dittmann is Musk — both Musk-lovers and Musk-haters. Even now, I feel shaky saying that I think we live in the Dittmann!Adrian universe. I know lots of people are going to disagree, and ask me for more evidence.
All I can say is that we — I — often come to believe things to be true because a wide subsection of people believe them. That group doesn’t have to be particularly monolithic. They may believe them from different angles. Elon-haters love the Dittmann!Elon story because he comes across as a dumbass misogynist troll. Elon-lovers love it because — well, for the same reasons, but with a positive valence. The cost/benefit of a journalist writing an article that keeps the question going, rather than actually doing a bunch of work to definitively answer it, leans strongly toward just keeping the story bubbling.
I continue to believe that sharing our various beliefs — even flawed or wrong beliefs — into a public space helps us get closer to the truth. Or at least, we don’t have any better methods that don’t include this initial pooling capability. But one of the failure modes of the modern Internet occurs when a large number of people have incentives that align — but align to point away from the truth, even as the evidence mounts up, in the backwaters and interstices.
It's significant to me that the only people who did the digging against the Dittmann!Elon thesis seem to be a group of extremely queer internet detectives, and the only outlet that seemed inclined to publish it was a conservative one whose incentives don't quite align with the rest of the Musk-watching media.
Dittmann!Adrian and Dittmann!Musk aren't playing their game directly toward either of those two groups, so both are in a good position, and have good incentives, to look in a different direction, think in a different way, and then publicise a different view. This is what diversity should be, and why co-operation (whether trustful or not) between diverse agents is so vital in seeking the truth. Whatever that is.
Continuing the Old Hippy mulling: instead of just trying to make old fires spontaneously light again using the same old ashes, my thought is — how do you find a role for the values, for the insight, and keep that in a place that preserves the best of it? I wrote about this a fair bit, though somewhat elliptically, a couple of years ago in Terminal Values, Cognitive Liberty. The argument I was trying to answer there was the one in favor of abandonment. "Free speech, free software, encryption, digital autonomy: these are nice, but what are they for?", goes the question. I see this as part of the "wait, are we the baddies?" conversation, an even more dispiriting rhetorical question that boils down to "what are we even doing this for?"
(To be fair, I ask this about everything, usually 30 seconds into doing it. I ask it about getting out of bed, shaving, writing this, dressing a child, baking a cake. But I do need an answer. Many people do not, or at least they are driven by an internal motivation to, say, backport Grand Theft Auto III to the Dreamcast, or make a 777 model from manila folders, rather than to go searching for why they even bother.)
The natural exit from Old Hippydom is to leap into the New Thing. Or take a step back into the Old Thing You Were Doing Before Being A Hippy. All good responses! But if you want the culture you built and valued to persist you need to find something a little bit more timeless to hang its hat upon.
Long-time readers will recall that I've spent twenty years or so trying to answer two questions: a) "How many people do you need to be famous for?", and b) "How deep is geek culture?". I have mostly settled on two temporary answers, which were a) 7000 (thank you, Stewart Lee), and b) "not as deep as politics". By the second, I meant that geek culture became a broad mainstream movement (far greater than I could have imagined), but it ultimately could not keep itself together, faced with a greater rift across the political spectrum. Its concerns seemed shallow, petty, uninteresting and irrelevant — its speakers could not resist being drawn into that wider conversation, that set of frames. I make it sound like a personal failing, but I don't mean it as one. Politics is important. All I mean is that when you see someone who is a member of a technological subculture, you can also, and primarily, place them on a political spectrum. And which is more culturally important? Politics. It is higher up in the z-ordering of the display. It has priority.
"Everything is political" is a claim that seeks to explain this; I don't think it does, just as "Everything is religious", or "Everything is biological", don't do more than describe other potential orderings. I suppose I could make the claim that "everything is technological" — which is the position I try to distinguish mine from in Terminal Values. What I am trying to say is that the set of values you can draw from digital technology — especially if they were the ones that you imprinted on in its first fifty years — have weight and importance that can outlast temporary blooms in cultural popularity and relevance.
Newtonians dreamed of a clockwork universe, Darwinists saw everything evolve; their models weren’t as overpowering as they might have imagined, but those positions (and their critiques) add to the overall symphony of explanations and justifications.
I guess what I want to explore a little is, not what is left or salvaged from the digital revolution, but what persists. What is useful, not in the sense of serving new or temporary sets of concerns, but what will remain useful when we are gone. And for that we need to dig deeper than politics, or culture: to some even deeper bedrock.
Around about 2000, I began to consider how it would be, when and if I became an Old Hippy.
Old Hippies had been a common part of my cultural heritage ever since the Eighties, when it was generally understood that they were a terrible embarrassment to everyone — including, somehow, hippies. They continued to wear the fashions of the 1960s long after everyone had moved on, they grabbed you on the street and tried to explain to you about hemp or organics or vegetarianism, and they played very bad music on their shameful acoustic guitars at otherwise perfectly salvageable parties.
The dominant feature of Old Hippies was that they had overstayed their welcome: clinging onto a culture that had faded away, trying to re-start arguments that had turned ashen cold, manning barricades that had long been dismantled. The world had moved on, but the Old Hippies had not.
In 2000, full of the excitement and zeitgeistiness of the Internet, a happy little barricade-warrior of the moment, I still had enough sense to think about how I would feel when it was all history. Would I move on? Would I just be an Old Hippy, only talking about the World Wide Web and modems instead of Glastonbury and The Mamas and the Papas?
You didn’t need to be very old to be an old hippy, at least for someone as young as me. I remember a friend of my sister noting that one of their friends had somehow ended up an old hippy in their mid-twenties. The canonical old hippy in my understanding was Neil from the Young Ones, who was a college student. Thinking about it, if you were 25 in 1969, you were forty in 1984.
I am 55. I have been an old hippy for nearly a decade. I knew it, but I didn’t want to talk about it. Instead, like baldness or liverspots, I just watched it form, in a deadly fascination, on myself and others.
A few fates that I've avoided, barely: one is joining the Nineties Internet Re-Enactment Society, where communities scrabble to reinforce the dominant vibe of — what, two? three? years maximum? — the early networks. I mean, I still have it in my habits — my dinky RSS reader, my affinity for plain text, email. A co-worker described watching me work as "like someone playing one of those adventure games". I can see it.
I (mostly) don't try to enforce all this onto the world, or tut-tut those who don't get it. I know why I got it. I learned vi to impress a girl; I liked incantations and real names and esoterica. The Nineties internet was, in many ways, an expansion of 1980s America, and learning UNIX in a foreign country was like decoding what TGIF was, and what Saturday Morning Cartoons were, and Saturday Night Live, and Sunday NFL: the feast days, the martyrology, of an alien dominant culture.
There’s a tradition you draw from, but the tradition evolves, it doesn’t mindlessly recreate. You don’t stay in the moment that you entered that tradition. My daughter says that Discord is IRC for young people, Slack is IRC for old people, and IRC is for people who can’t get out of their chair. I use Discord, and Slack, and barely remember to log into IRC (are there really 3,416 messages waiting for me in #neomutt?). I’m not trying to stay hip, chat, I’m just continuing to float downstream.
A related path of old hippydom I could have taken: deciding the web is it. This is more old hippydom for 2000s kids: the post-WHATWG apotheosis of the web as the once and future platform. Why would you want (WWyW) anything else? You've got your virtual machine, your abstracted I/O, your interop, your package delivery, your security model — what else do you need? Bluetooth?
I think getting burned out at the W3C punched this out of me. The EME fight was partly predicated on a belief that if they didn’t let DRM into the web platform, then the W3C would lose part of the universe to native apps, and that must not happen. I remember at one meeting saying, in effect, “would that be so bad? That maybe DRM is just a thing that is so alien to the web model, that it is better to leave it outside?” But the feeling was that if the web was not everything to everyone, then it would lose.
The emotional response in me was: then let it lose. That wasn't right, but it certainly figured in my thinking that maybe the values I wanted to stick around for, that I wanted to keep as the core of my old hippydom, did not necessarily stay in one technology, or one era. If I was going to act like they were eternal values, worth freeze-drying myself for, they should and could move between implementations, and across decades.
Why do governments go after the companies and executives behind more weakly encrypted tools?
It’s very hard, this early, to pierce through what’s going on with the French authorities’ arrest of Pavel Durov, the CEO of Telegram — but that doesn’t stop people from having pet theories. Was it retaliation from the US and the FBI for not backdooring Telegram? Was it a favor to Durov so he could hide from Putin? Was it just the grinding wheels of French justice?
I’m sure we’ll understand more details of Durov’s case in the next few days, but motivations — especially those anthropomorphically projected onto entire states — are never really resolved satisfactorily. If you think LLMs lack explainability, try guessing the weights of a million human-shaped neurons in the multi-Leviathan model that is international politics. It’s not that we’ll never have explanations: it’s just that we’ll never be able to point to one as definitive.
Of course, the intractability of large systems never stopped anyone from trivializing those crushed under their lumberings with a pat explanation or two on a blog. (It certainly won’t stop me, who, when I was a columnist, made more-or-less a career out of it.)
Back in the Before iPhone Times, BlackBerry was a cute range of mobile devices with a little keyboard and screen that offered low-cost messaging in an era when phones were bad at everything that wasn’t “talking to people” (and they weren’t great at that).
We think of mobile phones these days as individually-owned devices — intimately so — but BlackBerrys were the stuff of institutional purchasing. Companies and governments bought or rented BlackBerrys en masse, and handed out the units to their staff to keep in touch. In the pre-cloud era, these institutions were cautious about ceding a chunk of their internal comms infrastructure to a third-party, let alone a Canadian third-party, so RIM built reassuring-sounding content privacy into their design. A chunk of the message-relaying work was done by "BlackBerry Enterprise Server", which was closed-source, but sat on-prem. Corporate BlackBerrys could send instant messages directly to one another, via RIM's systems, but enterprises could flash their users' devices with a shared key that would make their messages undecipherable by anyone who didn't have the key, including RIM or the telecoms networks the message passed over. None of it would really pass muster under modern cryptographic best practices, but it would be enough to get a CTO to sigh and say "ok, seems good enough. Sure.", and sign off on the purchase.
Importantly, though, a lot of this encrypted security was optional, and protected these comms at the organizational, not individual, level. Companies could turn message privacy on and off. Even when turned on, the company itself could decrypt all the messages sent over their network if they needed to. Useful if you're in a heavily-regulated industry, or in government or the military.
Now, BlackBerry users loved their little type-y clicky things, and inevitably RIM realized they might have a consumer play on their hands (especially as smartphones began to get popular). They started selling BlackBerry devices direct to individuals via the mobile phone companies. RIM and the telcos played the part of the institutional buyers in this deal — they could turn on the encryption, and had access to the messages, although it was unclear from the outside who played what part. Did the telcos flash their devices with a shared key, or did RIM? Who was in charge of turning the privacy on and off?
All this ambiguity made infosec people leery of RIM’s promises, especially with consumer BlackBerry devices. But in general, people read this all as meaning that consumer BlackBerrys were secure enough. After all, even President Obama had a BlackBerry, so that must mean something?
Apparently so: Around about 2010, governments started publicly attacking RIM and BlackBerrys as a threat to national security and crime prevention. Law enforcement agencies started complaining about RIM’s non-cooperation. Countries like the UAE and India talked of throwing out RIM from their country entirely. It was the first big government vs internet messaging drama to play out in the press.
At the time, this puzzled me immensely. From the viewpoint of infosec insiders, spooks should have loved RIM! BlackBerrys were actually kind of insecure! If you wanted to get at the messages of individual BlackBerry customers — including, most visibly, drug dealers, who loved their BlackBerrys — you just had to hit up the (certainly domestic) telephone company they were using and get that shared key. Or you could maybe mandate what key that would be. You didn't need to pressure RIM or ban it to do this!
But as I dug into it, I realized what may have been going on. RIM and the telcos had been helping the authorities, to the best of their abilities. They probably did a fair bit of explaining to the authorities how to tap a BlackBerry, and may even have done some of the heavy-lifting. When it came to consumer BlackBerrys, RIM didn’t have the hard and fast line of a Signal or other truly end-to-end encrypted tool. They could hand over the messages, and (as they would sometimes protest) often did.
But, crucially, they could not do this in every case. The reasons they could not were primarily bureaucratic and technical. The drug dealers might have got smart and decided to change the key on their network, and neither RIM nor the cops had a device to extract the key from. Or the authorities might want info on a corporate BlackBerry, which was uncrackable by BlackBerry using their existing infrastructure. Or a BlackBerry's shared key might have been set by the phone company, not RIM, so RIM couldn't directly co-operate, and needed to refer them back to the telco — who might have just cluelessly bounced them back to RIM. That kind of shuttlecock-up happens all too often, and it's easy for the tech company to take the blame.
Ultimately, the problem was that RIM could not 100% state they had no access to BlackBerry data at all. They complied with some requests, but not others. The reasons were generally technical, not political — but they sounded to law enforcement and intelligence community ears like they were political.
Those political actors were not entirely wrong. RIM had made political decisions when designing the privacy of its tools. In particular, they had promised a bunch of customers that they were secure, and let a bunch of other customers think they were secure. RIM’s critics in governments were simply asking — why can’t you move the customers that we’d like to spy on from one bucket to the other?
Declining to do this was an existential commitment for RIM — if they undid those protections once, none of their major military and corporate customers would ever trust them again. They had to fight the ratchet that the governments were placing them in, because if they didn’t, their business would be over. And the more they fought, the angrier their government contacts became, because hey — you’re already doing this for some people. Why aren’t you doing it for this case? Law enforcement saw this as a political problem, so responded to it with political tactics: behind-the-scenes pressure, and when that didn’t work, public threats and sanctions.
Durov and the Ratchet
As with BlackBerry, I think a lot of infosec professionals are again confused as to why Telegram is getting it in the neck from the French government. It's not even a well-designed tool. And I think the reason is the same: like BlackBerry, because of its opt-in, weakly protective tooling, Telegram can, and does, assist the authorities in some ways, but not others. I don't mean this in a damning way — if Telegram gets a CSAM report, it takes down a channel. End-to-end encryption is opt-in on Telegram; they really do have access to user information that, say, a Signal or even WhatsApp doesn't. There's no technical reason for it not to have features on the backend to deal with spam and scams: a backend which — unlike an end-to-end encrypted tool — can peer in detail at a lot of user content. The authorities can plainly see that Telegram can technically do more to assist them: a lot more.
So why doesn’t Telegram do more to help the French government? As with RIM, Telegram’s excuses will be convoluted and hard for political authorities to parse. Maybe it’s because the French requests are for data it doesn’t have — chats where the participants were smart enough to turn on encryption. Maybe it’s just that if they provide that service for France, they’d have to provide it for everyone. Maybe France wants to see Russian communications. Maybe Telegram just doesn’t have the manpower. But the point here is that Durov is caught in the ratchet — the explanations as to what Telegram can and can’t do are a product of contingent history, and the French authorities can’t see why those contingencies can’t be changed.
If it sounds like I’m basically victim-blaming Durov for his own lack of commitment to infosec crypto orthodoxy here, I want to be clear: best-practice, ideologically pure end-to-end apps like Signal absolutely face the same ratchet. What I’m mostly trying to understand here is why Telegram and BlackBerry get more publicly targeted. I think the truth behind the amount of pushback they receive is more psychological than cryptographic. Humans who work in politics-adjacent roles get madder at someone who concedes part of the way, but refuses to bow further for what seem like political reasons, than at someone who has a convincing argument that it is mathematics, not politics, that prevents them from complying further, and who has stayed further down on the ratchet. Not much madder, but mad enough to more quickly consider using political tools against them to exact their compliance.
Echoing BlackBerry’s woes, I don’t think Telegram’s security compromises are a product of government pressure so much as historical contingencies. But I do think its weaknesses have ironically made it a greater target for the kind of unjust, escalatory, fundamentally ill-conceived actions that we have seen against Durov by the French authorities.
The motivations of government officials are hard to guess: but I do think it is accurate to say they see the world through political, not technical lenses.
More things that I’ve noticed about integrating LLMs into my workflow:
People like to evaluate LLMs in their own field of expertise (to see whether they will be replaced, or just because that’s what they can test). But what I tend to use them for is things that I am bad at. I’m okay at writing, so I don’t really see much improvement there. I’m really not good at programming, and so I’ve seen an impressive improvement in my productivity by using an LLM to augment what I’m doing.
It’s nice to be able to ask dumb questions without fear of looking stupid. Often the benefits seem to come from the “rubber duck debugging” effect of just spelling out your thinking. The usual pattern is: I ask a question, the LLM gives me a suggestion, I explain why that wouldn’t work, the LLM apologises and offers something else. I point out why that wouldn’t work either, but start outlining something that could work. The LLM commends me on my creativity, and starts spelling out what I could do to make that work, or its limitations.
And by contrast, as the fear of looking stupid by asking dumb questions declines, I’ve found myself feeling more confident asking questions in other contexts.
I also spent some time today catching up on the latest piece of hype, Meta’s VR bid. I don’t like to dismiss anyone’s work, but it’s strange how Meta has been shifting tone from Oculus’s gaming vibe to something more … generic? Flat enterprise? People poke fun at Mark Zuckerberg’s avatar, but honestly it’s really hard not to look like cyberzuck in the new environment. It’s just got this very bland feel to it. Also, the rough edges from the old Oculus Quest software still seem to pervade the whole platform, but without the wow factor to drive it. It was kind of fun to mess around trying to get your hands to work on the Quest. In this new world, I mostly spend my time trying to link user accounts and clicking on privacy options. I feel like I’m moving slow over broken things.
Sad about the District Court decision in Hachette v. Internet Archive; not just because of the ruling against the Archive, but because of many people’s reaction to it online. People have strange intuitions, not just about the status of the law, but also about how it progresses. There’s some tut-tutting that an august institution like the Archive should be wandering this close to the edges of the law, instead of playing it safe.
But the Archive wouldn’t exist if it played it safe: if you ever wonder why there is only one of them (and there should be thousands of them), the idea of just going out into the Web, and recording everything, is not playing it safe. Of course, nobody thinks that now, because we live in a world that is erected on the edifice of freely available search engines, and a presumed right for us all to take data from the Net, and use it for many different things. But that is not the model that sits at the heart of a maximalist IP theory — or indeed, of most jurisdictions that don’t allow for ad hoc exemptions and limitations to copyright. Under that model, everything is copyrighted the moment it is fixed, and you don’t get to see it, or touch it, digitally, without negotiating a contract with the rightsholder.
That’s such a violently different world from the physically-bound, pre-digital world of copyright. I don’t need to contract with anyone to read a physical book; I don’t need to beg permission to lend someone else that knowledge.
Now, I know that alternative model of digital copyright also seems at odds with reality to many: that we can make as many copies as we want of non-physical data, give them to everybody, at zero cost, by default, and that to stop that from happening, we must adopt a set of encumbrances that seem barely capable of stemming that flow. But really, these are the limits of intellectual property as a model for either providing income, or effectively restricting the supply of knowledge.
So we have a choice, and it’s unclear what the middle ground is, or whether there is a middle ground at all. I used to think that this was the nature of digital technology — that there were no clear perimeters to how much copying, or how much transformation or derivation, was tolerable, and that because of that, we’d live in an increasingly enforcement-heavy world, as one side attempted to draw a line in the sand, even as the sand shifted and writhed underneath them. To throw out another metaphor: the punishments and locking-down would escalate, just as the impossibility of making real advances in World War I led to a tragic no-man’s land. People would copy for zero cost on their do-anything-machines, so lawmakers and rightsholders would increase the fines, and lock down the machines by force of law.
I still think this is a fair outline, but I’m beginning to think maybe intellectual property was always like this. Fixing ideas onto a scarcity-based economic model, like nailing jelly to a wall.
What makes me sad, though, is that even as the copyright maximalists attempt to create a government-enforced property system out of metaphors and thin air, people who claim to want justice join forces with them. Or not so much justice, but fairness.
I talked a little about this with Nathan Schneider today in The Decentralists, my interview thing that will soon be a podcast. Nathan noted that some people benefit unduly from public goods — in his example, venture capitalists extracting value from open source — and if we wanted to have a fair system, then we needed to work out a way to stop this.
I don’t think that way at all: in many ways, public goods are always going to have free riders, freeloaders, pirates and exploiters. That’s why they’re public goods! We can’t exclude people from benefiting from them. But that doesn’t mean we need to work out how to fence them away and ration their benefits, based on who gets them. What we need to do is work out how to stop free-riding from undermining the commons itself.
We are, as a species, peculiarly sensitive to cheats and slackers: they inspire our most immediate and profound sense of ire. It’s amazing how much brain matter we silently devote to calculating who has done what in our social circle, and how many fights start from disagreements about that assessment.
The positive version of that is that it inspires in us a desire for justice, and for equity. The negative side is that it breaks our brains when we have resources that everyone can keep taking from, without reducing the total amount.
If you just decide to walk away from the idea that free-riders must be punished in a digital space, you often get so much more done. One of the ways that the Internet beat every other digital networking project is that the rest of them were bogged down in working out who owed whom: protocols and interoperability foundered because so much effort was spent meticulously accounting for every bit. Same with the Web. It just got hand-waved away.
I think that some of the worst ramifications of the modern digital space come from that hand-waving (the vacuum got filled by advertising, most notably), but it certainly wasn’t all bad. And, most importantly, ignoring who was free-riding on whom did not immediately kill the service, or leave it to collapse under the weight of parasites. It turned out that, in many cases, you could still manage to maintain and create a service that was better than any pre-emptively cautious, accounting-based system, even when it had to deal with spammers or pirates or those too poor to theoretically justify their access to the world’s most precious information under any less generous model.
I think you can construct justice and equity as an exercise in carefully balancing the patterns of growth: those worse off get the benefit, those already well-off don’t get to fence it away from the rest. What I don’t see as useful is to zero-sum everything, just to make the calculation tractable. If you can work out a way to make everybody better off, we should allow it, without trying to judge whether those who benefit are worthy. The Internet Archive, clearly, makes everybody better off, on almost every axis. And it did that, even in a world where many such things are seen as too risky or destabilising to be considered.
“Sadly it turns out that the latest AI photo app y’awl using to look hot and sexy is built off the back of a training set full of work stolen from artists without payment.
How disappointing.
We sorted this shit years ago with Creative Commons licensing. It’s not hard to get right. #paytheartists”
It led to a heated debate! Here’s (with a few modifications) how I replied, which was sufficiently long that I felt I should pluck it out of the Facebookosphere and settle it here:
I understand that people worry that large models built on publicly-available data are basically corporations reselling the Web back to us, but out of all the examples to draw upon to make that point, Stable Diffusion isn’t the best. It’s one of the first examples of a model whose weights are open, and free to reproduce, modify and share: https://github.com/Stability-AI/stablediffusion . Like many people here in the comments, you can download it, inspect it, run it locally, and share it. You need a GPU to run it at a reasonable speed, which makes it a little pricey to run. Building one of these models is far more expensive — around $600,000 or so — which means that there’s currently a power differential between large corporations who can afford to speculatively build and experiment with these models, and the rest of us. But the knowledge of how to do it is built on open science, and a number of orgs are doing it truly in the open — for example, https://www.eleuther.ai/ . All of these things, as ever, will get cheaper, and spread in use and experimentation.
Most importantly, the tool itself is just data; SD 1.0 was about 4.2GiB of floating-point numbers, I believe (taken from https://simonwillison.net/2022/Aug/29/stable-diffusion/ ). I’m currently using (literally, right now!) another open model, Whisper, which is 3GiB, and allows me to convert most spoken audio into text, and even translate it. I use it to, securely and privately, transcribe what I’m saying to myself through the day. I expect it will be encoded into hardware at some point very soon, so we will have open hardware that can do the kind of voice to text that you otherwise have to hand over to Google, Amazon, and co.
The ability to learn, condense knowledge, come to new conclusions, and empower people with that new knowledge, is what we do with the shared commonwealth of our creations every day. Copyright has not always been a feature of that process, but in many ways, it’s been an efficient adjunct to it: a way to compensate creators by taking a chunk from the costly act of copying itself. It’s a terrible fit for the modern digital world, though, because the cost of making a copy is now practically zero. Attempts to update it have unfortunately revolved around trying to recreate the physical limits of previous copying equipment, and bolt them onto a system where that’s not where the revenue comes from.
It’s always been hard to stop these temporary monopolies from impeding the open commons that they all draw from, especially after we built an automatic copyright system following the Berne Convention, where everything was maximally locked down by default. That’s why Creative Commons was invented — because without that work, it was costly and near impossible to grant back to the commons, with legal certainty, in the way that the commons could exist by default before the 1970s.
Again, I understand if people are worried that, say, Google is going to build tools that only they can use to extract money from our shared heritage. But the answer isn’t to make those tools illegal, and to punish anyone building or using them (like me, like EleutherAI, like anyone following the instructions spelled out by the growing, accelerating field of machine learning, and drawing on the things around them). It’s that the tools should be free, and open, and usable by everyone. Artists should get paid; and they shouldn’t have to pay for the privilege of building on our common heritage. They should be empowered to create amazing works from new tools, just as they did with the camera, the television, the sampler and the VHS recorder, the printer, the photocopier, Photoshop, and the Internet. A 4.2GiB file isn’t a heist of every single artwork on the Internet, and those who think it is are the ones undervaluing their own contributions and creativity. It’s an amazing summary of what we know about art, and everyone should be able to use it to learn, grow, and create.
I guess it’s appropriate that we can’t agree on what the brain worms metaphor’s original vehicle actually is. In his description of the Internet culture term, Max Read claims, reasonably, that the originals are maybe like tapeworms or toxoplasma. But I always think about the Ceti Eel in Wrath of Khan (but then, I’m always thinking of Wrath of Khan, especially, these days, the imminent off-Broadway musical).
To be infested with a brain worm is to have become a one-note (or a cacophony of discordant notes) speaker. To have all your behaviors, at least online, collapse into one strident position. To shore up every exit from that position with every mental barricade. A mind trap.
I will insist that I’m right about the best analogy. Like the Ceti eel, the modern brain worm usually gets in via your ear (or Twitter feed). It “render[s] the victim extremely susceptible to suggestion,” as Khan notes: Chekov later confirms that “the creatures in our bodies… control our minds …made us say lies …do things”. Madness, then death follows. Metaphorical brain worms, with COVID and measles, can kill you nowadays. In happier times, you could get away with just argyria.
Brain worms certainly seem to have grown more virulent, more vicious, recently. I worry about my proximity to them. As I’m hinting, I’m considering slinking into punditry again, and woah nelly, do brain worms seem to be an occupational hazard in those dark woods. I think I’ve lost more friends and acquaintances to brain worms than to the pandemic. From 9/11 truthing to whatever it is that’s slamming around Glenn’s cortex these days, from election-disbelievers to Russia-runs-it-allism.
Since I was a young man clutching the Loompanics catalog for the first time, I’ve actively explored strange new views; sought out new lies and new inclinations. But watching good people all around me just be consumed by an idea, possessed and ridden by these loa, trapped by an illusion that if they just moved one foot to the left or right, would dissolve away, has given me serious pause. If I open my mouth and speak my mind again, will the brain worms get in that way? Start polishing up my prejudices until they’re clean, consistent, and shiny, and one day find myself unable to drag my eyes away from their distorted mirror image?
Or you know, maybe the brain worms have already got me? Like most people who read books or say long words, I have a few brain worms that I keep as pets. They’re fun, they’re conversation pieces, and you can bring them out for people to coo at during parties.
I’m still confident that if they turned rabid and started attacking my friends, I’d have the sense to put them down — the worms, not my friends, of course (oh no maybe they have already got me)?
My pet brain worms: the Internet (still with its capital letter); anarchism of a harmless, de-fanged kind; a litter of related ones bred from the same pedigree. These days, decentralization would be the obvious one, I guess. My friends and relatives, watching me wading in booty-shorts through the cryptocurrency swamp, worry, but I think that’s a little too obvious to snag me.
But, of course, nobody with a brain worm thinks they have brain worms. So how do you protect yourself? Alan Moore’s old trick was to tell his closest that they should retrieve him from whatever mindfuck he was pursuing, but only if he started becoming less productive. I’m not sure I want to take advice from Alan Moore on this matter, however, especially as I suspect a brain worm would make me far more prolific, not less. I mean, this is why pundits have them — they’re superspreaders. A brain worm that doesn’t target pundits would not be a successful brain worm. Just ask Richard Dawkins: a man who, on some deep level, must know that the memes are now defining him, not the other way around.
Making hard-to-wriggle-out-of testable predictions — make your beliefs pay rent, as the origin of so many geek brain worms whispers to me from his wicked lair — would, I would hope, help ground me. But I need to avoid pattern-matching as I seek out those beliefs! Or else there’s a mountain of evidence awaiting me that supports my position! You just need to let me devote more time to finding it!
Ultimately, all I can assume is that the best practical guard against monsters is to make sure you’re not hurting anyone — or inspiring others to hurt themselves or others. No one deserves it, no matter what the worms say. It may make you a quieter, weaker source of thought: but tell the voices in your head that worms who prosper long term will be the ones who don’t kill their hosts.