[Header image: a man slumped on his desk, from 'The Sleep of Reason Produces Monsters']


2025-08-29

LLMs for the Old and Infirm

A few people have asked me (an old man) how I manage to use LLMs in my life without being driven insane by their horrid new-fangledness, their hallucinations, their wanton sycophancy, the hype, the grift, and the ever-present risk of being lured into psychosis. The simple answer is that, as a command-line fogey, I use Simon Willison’s excellent llm program in the terminal, and trap the poor things in the confines of being just another unix utility in my toolkit, along with sed, pandoc, and the rest.
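In practice that just means llm sits in a pipeline like sed or pandoc would. A minimal sketch (notes.txt is only a stand-in file, and the prompt wording is mine):

    # pipe text in, get text out, like any other filter
    cat notes.txt | llm 'Summarize this file in three bullet points'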

Below is a list of examples of how I use llm, plucked from a random day. I generated the list by running:
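The shape of that command, as a rough sketch rather than the exact invocation, assuming the llm-tools-sqlite plugin and whatever log location llm logs path reports:

    # rough sketch: ask the model to categorize today's usage, giving it
    # read-only SQL access to llm's own log database
    llm -T "SQLite('$(llm logs path)')" \
        'Summarize my llm usage from today as a categorized markdown list' \
        > result.md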

I converted that result.md into this (also pasted below), using pandoc -i result.md -o index.html. I suppose I could have asked the llm to output directly into HTML, but I always peruse and tweak the output of these models, and that’s easier to do in markdown.

To make the linked pages, which I anticipated would contain rough transcripts of the results of those llm commands, I asked llm to write me a program to generate them:
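Again a sketch rather than the exact command, with make_pages.py as a purely hypothetical name for the generated script and the prompt wording only indicative:

    # rough sketch: feed the previous result to the model, hand it read-only
    # access to the logs, and keep only the code it writes back
    files-to-prompt result.md |
        llm -T "SQLite('$(llm logs path)')" -x \
            'Write a Python program that produces an HTML transcript page for each llm response mentioned in this list' \
            > make_pages.py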

files-to-prompt is a simple program that concatenates one or more files’ contents along with their names: a great way to slam a lot of files into a prompt with sufficient context. So I’m throwing this llm the output of the previous llm command.
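Its default output is, roughly, each file’s name followed by its contents between --- separators:

    $ files-to-prompt result.md
    result.md
    ---
    My LLM Usage Log – Categorized Summary
    ...
    ---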

The -T SQLite bit gives my llm model of choice (this is all being run on Anthropic’s Claude by default, but I could switch it to OpenAI, or a local LLM, very easily) a tool that gives it read-only access to a local SQLite file, here giving access to the llm command’s own logs. Very recursive. LLMs know enough SQL to be dangerous, and the tool gives the model enough context to know it’s talking to SQLite, so it can work out the schema and explore the contents by itself.
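The tool itself comes from a plugin (assuming llm-tools-sqlite here), and the --td flag will print each tool call, so you can watch it rummage:

    # install the SQLite tool plugin
    llm install llm-tools-sqlite

    # --td (tools debug) shows every SQL query the model decides to run
    llm -T "SQLite('$(llm logs path)')" --td \
        'Which tables are in this database, and how many rows does each have?'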

The -x restricts llm’s output to just the part of the LLM’s answer that is wrapped in backtick-style markdown code fences, a very effective way to get just the source code, without any of the tedious explanation that might accompany it.
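For instance, asking for a throwaway script and capturing only the code (the prompt and filename here are just for illustration):

    # -x / --extract keeps only the first fenced code block from the reply
    llm -x 'Write a bash script that counts the words in every .md file in this directory' > wordcount.sh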

That produced (with a few very minor tweaks by me) this Python program. And that nice HTMLification of that Python program came via this command:
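The shape of it, sketched under the same assumptions as before (files-to-prompt feeding the file in, -x pulling the HTML back out, and the hypothetical make_pages.py name reused from above):

    # rough sketch: ask for an HTML rendering of the generated script
    files-to-prompt make_pages.py |
        llm -x 'Turn this Python program into a standalone, syntax-highlighted HTML page' \
        > make_pages.html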

As you can see, I’m still fairly heavily stuck in the 1990s, Unix and hand-crafted HTML and all. But now I have a happy Sirius Cybernetics buddy from the future to help me. Share and enjoy!

PS Here’s Simon’s far better guide to using llm.

My LLM Usage Log – Categorized Summary

Shell Scripting & Development

System Administration & DevOps

Git & Version Control

Filecoin and Ethereum Technical Support

Technical Troubleshooting & Debugging

Writing & Language Questions

Translation Services

Terminal & Display Tools

Time Zone & Calculations

Meta-Analysis & Documentation

Research & Analysis