You’re absolutely right!
I continue to be one of those people using AI, but in a way that I hope isn’t too insufferable; or at least, I try to be sensitive to the suffering it may cause to those around me. I also feel like I am using it in a certain way: an emerging sub-sect of practice for a certain clientele. It may be age-related: I am someone who believes in hypertext, in composable tools, in malleable interfaces, in people writing and sharing their code, in owning the means of computation, and in a free culture of sharing. I am also very Unixy, which is not unrelated. Why yes, I do own a mechanical keyboard, why do you ask?
My etiquette is tied up with all this, I think: I resonated with Alex Martsinovich’s It’s rude to show AI output to people, whose guidelines I find myself following. I don’t, as he warns, just say “Hey I asked ChatGPT this and it said”, or (even worse) paste the slop directly into the chat. I do think, like Fedora may, that one should be transparent about AI usage. And my added twist is that I really want to make my use of LLMs transparent and reproducible, to the extent that LLMs are.
To give an example, a friend of mine pointed me to the writer Samantha Hancox-Li, and I was curious to understand her journal, Liberal Currents, better. I found a YouTube video explaining it, but I didn’t really need to spend the time watching the whole video. So I grabbed the subtitles, fed them into an LLM, and asked it for a detailed summary. That answered my question, but I wanted to check in to make sure the model wasn’t on crack, so I threw a tiny bit of it into my chat with my friend, along with a “does this sound right?”. I sat between my friend and the LLM, I’d signalled that it may be unreliable, and I’d edited it so that it was relevant and interesting.
But also, I wanted to show the process, including my own potential mistakes and biases that might have led to anything wrong. So I stuck this at the end of my message to show my working:
```shell
yt-dlp --write-auto-sub --skip-download -o "/tmp/subs.%(ext)s" "https://www.youtube.com/watch?v=FfZWlFjmmo0"
files-to-prompt /tmp/subs.en.vtt | llm "Can you turn this transcript into a (detailed) description of the conversation"
```
This lets you regenerate the text that I used (of course, it won’t be exactly the same, because LLMs rarely repeat themselves, but as time goes on, it’ll probably be a better summary). It also shows what I am depending on, including my own prompt and the programs I used.
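If you wanted to reuse the recipe, the two commands could be wrapped in a small shell function. This is just a sketch of my own, not part of the original message: the name `summarize_video` is invented, and it assumes `yt-dlp`, `files-to-prompt`, and `llm` are all installed, and that the video has English auto-generated subtitles (hence the `.en.vtt` suffix).

```shell
# Sketch: a reusable wrapper around the same pipeline.
# Assumes yt-dlp, files-to-prompt, and llm are on PATH, and that the
# video has English auto-generated subtitles (producing subs.en.vtt).
summarize_video() {
  local url="$1"
  # Download only the auto-generated subtitles, not the video itself.
  yt-dlp --write-auto-sub --skip-download -o "/tmp/subs.%(ext)s" "$url"
  # Feed the subtitle file to the model with the same prompt as above.
  files-to-prompt /tmp/subs.en.vtt |
    llm "Can you turn this transcript into a (detailed) description of the conversation"
}
```

Then `summarize_video "https://www.youtube.com/watch?v=FfZWlFjmmo0"` would regenerate (approximately) the summary I used.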
This seems to be both polite in the way that Martsinovich would like us to be and true to my own personal set of ethics. It’s also a bit of an affectation, and I’m not sure how long or consistently I’ll do it — but hey, sometimes these nerdish quirks become as fleeting and ephemeral as writing your geekcode or hand-crafting micro-formats, and sometimes they become smileys and markdown. I hope dearly that LLMs’ skill at coding (and explaining coding) will mean we can throw around such executable fragments until we all become a little literate in programs as well as words.

October 7th, 2025 at 11:33 pm
So, to be clear, you weren’t prepared to spend your own time to understand and form your own summary of the video, but you were prepared to ask a friend to spend their time verifying it for you?
You don’t sound like a good friend.
October 8th, 2025 at 4:27 pm
I’m not sure you’ll ever read this, Nik, but, no, that’s not what happened. Rather than burden my friend for an explanation of who this person is, I went out, found a video, pulled *that person’s* description of themselves, and then said, does that sound right?
And my friend said “yes!” because she had enough context to be able to confirm or deny. It took only a second or so of the conversation, it was a pleasant part of it, and we moved on.
I sort of understand where you’re going, but I actually spend a lot of time thinking about where the externalities are in conversations. My number one rule is that the person with the closest access to the context should probably be the one to provide it. So when I bring in my thoughts, I try to link to their sources — including, in this case, code — because it’s much harder for someone else to try and find what I’m talking about than it is for me to provide the resources.
Another thing I try to do is not hurt people with my words. Did you think that posting this was going to make me happier? If you wanted to make me less happy, why was that? Did you want to punish me for what you thought I was doing to my friend?
Now that you have a better idea that what you thought did not happen, do you feel different? What do you feel? Why do you feel that? I still feel hurt, but maybe I’m too sensitive.