[Image: a man slumped on his desk, from Goya's 'The Sleep of Reason Produces Monsters']



Archive for the ‘AI’ Category


home-grown talent

I’m downloading the large language model llama-13B-hf as we speak, hoping to get it going on the GPU I have for games. What strange gloss will they put on this moment in history, where machine-learning at home was enabled by videogame users who couldn’t bear to shift from general-purpose computing machines to consoles?

My iron self-discipline will surely prevent me from playing around all night trying to get this to work. My hope is to continue the experiment that I began with GPT3, which is using it to filter and translate my social media feed. Even on Mastodon, I still feel those jolts of anxiety when someone confidently fires a verbal shot into the air, and I watch it arcing across the sky, landing, accidentally or not, in my heart.

(So far, it’s not running because of a capitalization typo. I am impressed that people think we have the wherewithal to practice AI Safety when we can’t even agree on how to capitalize “LlamaTokenizer”.)
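(For anyone hitting the same wall: some early HF-format llama conversions shipped a tokenizer_config.json that named the class “LLaMATokenizer”, while the transformers library expects “LlamaTokenizer”. A minimal patch script, assuming the standard HF model-directory layout; purely illustrative:

```python
# Patch a llama model directory whose tokenizer_config.json uses the old
# "LLaMATokenizer" capitalization, which newer transformers releases reject.
import json
import pathlib

def fix_tokenizer_class(model_dir: str) -> bool:
    """Rewrite tokenizer_class to "LlamaTokenizer" if needed.

    Returns True if the file was changed, False if it was already fine.
    """
    cfg_path = pathlib.Path(model_dir) / "tokenizer_config.json"
    cfg = json.loads(cfg_path.read_text())
    if cfg.get("tokenizer_class") == "LLaMATokenizer":
        cfg["tokenizer_class"] = "LlamaTokenizer"
        cfg_path.write_text(json.dumps(cfg, indent=2))
        return True
    return False
```

After running it once, the model directory loads with the stock `LlamaTokenizer` class.)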

Anyway, my plan is to use this to identify posts that would upset me, and rephrase them in a form that preserves their meaning without giving me that gut-punch. Is that bad? Am I cloaking myself from the truth by doing this? Letting a MACHINE mess with what people are saying to me?

I’m not sure there’s a coherent position that works against that. I choose what I read all the time. I’m seeking to preserve the content of the message, if not its tone. If anything, I’m trying to make it less likely that I’ll ignore, filter, or refuse to engage with it. (I also want this system to summarize and reiterate the posts that it most mangles, so I’ll always have some extra reminder of what I’m missing.)
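The shape of the filter is simple enough to sketch. Here, `complete` stands in for whatever local LLM call you have handy (llama.cpp, transformers, anything that maps a prompt to text); the prompts and function names are my own hypothetical illustration, not a real API:

```python
# Sketch of the feed filter described above: ask the model whether a post
# would land as a gut-punch, and if so, rephrase it while keeping its meaning.
# `complete` is any prompt -> text function backed by a local model.

def filter_post(post: str, complete) -> str:
    verdict = complete(
        "Would reading this post feel like a personal attack? "
        "Answer YES or NO.\n\n" + post
    )
    if verdict.strip().upper().startswith("NO"):
        return post  # harmless: pass it through untouched
    return complete(
        "Rephrase the following post so it keeps its meaning "
        "but loses its aggressive tone:\n\n" + post
    )
```

Keeping the model behind a plain callable means the same filter works whether the backend is GPT3 over an API or a llama checkpoint on the games GPU.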

Of course, I’m being an absolute angel about how I do this. But will everyone else carefully construct a system to answer the most obvious objections? Another outrage, I guess. But how will I know you’re outraged? How will you know who is doing this at all? (And will they really want to?)

(I got it working. In the initial test of commonsense, it told me that ants have four legs. When I asked it again how many legs an ant has, it said:

“Answer: Six, because you can’t have eight without a pair of pants on.”)


(Update: I fed it the Alpaca LoRA. Now it says:

An ant has six legs for movement and to carry its food. Ants use their legs to move around quickly and efficiently, allowing them to find food sources and avoid predators.

Well, mostly it says this. After multiple iterations, it once added that they have another couple of extra legs for picking up food, but hey, easy mistake to make.)


Only fans

My PC died yesterday, screaming in pain as its brain heated past boiling point. I went out to Central Computer, San Francisco’s local computer store, to get it a new fan. I got the wrong one, of course, but jury-rigged it in anyway. It didn’t help: I think the CPU cooler may have died too, in the wreckage.

One of the things I’ve been punishing that machine with is Whisper, a speech-to-text ML model that you can cram into a consumer GPU. Peter Thiel likes to say that cryptocurrency is libertarian, and AI is communist (because it requires powerful, locally-connected resources, and might be thrown at the calculation problem). AI certainly seems to be generating massive crop surpluses: Whisper was literally a side project for OpenAI so that they could use it to parse and suck down video sources for GPT’s maw. I find this to be just one of the indications of an age of wonder. I’ve spent years worrying that open source was falling behind commercial speech recognition tooling, and OpenAI just chucked one over the transom as a favor. Oh, and it also translates, tolerably, and sometimes accidentally.

But my point here is what a pleasure it is to run these tools locally. As Simon, now AI whisperer to the world, notes, there’s a substantial difference between feeding an LLM through a grate in OpenAI’s door and having it run under your own control, and/or passing the model around among friends and submitting it to the processes of open improvement.

Having it sit within my domain means that I can do things like record myself all day, and then convert everything I’ve said into text at bedtime. Even though I mostly seem to talk to my cat, just my asides and mutterings are useful to me. I can throw videos or talks at it, I can use it to control my house (ah, the geek dream). I suspect, when GPT or llama gets lopped down enough to comfortably fit on that machine, it’ll be straightforward to wire all of these tools together: voice -> text -> GPT -> voice. I imagine this is weeks if not days away. After years of sharing everything with Google, I’ll be able to have a private conversation with my computer again.
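That wiring is almost trivially thin. A sketch of the loop, with each stage passed in as a plain callable so it stays model-agnostic (none of these names are real APIs — Whisper, a local llama, and any TTS engine would each slot in behind one of the lambdas):

```python
# Hypothetical voice -> text -> LLM -> voice loop, as described above.
# Each stage is an injected callable, so any local model can back it.

def assistant_loop(audio, transcribe, llm, speak):
    text = transcribe(audio)   # voice -> text  (e.g. Whisper)
    reply = llm(text)          # text  -> text  (e.g. a local llama)
    return speak(reply)        # text  -> voice (any TTS engine)
```

The whole private-assistant dream is just these three arrows, run on hardware you own.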

I also, in passing, think of that cautionary tale of open science, piracy, and brain uploading, Lena. What strange shapes will these models be stretched into in private homes? What does it feel like to stick your hand into these talking machines?

And then, always conscious that it is not conscious, but nonetheless reminded of Dannie Abse’s poem, “In the Theatre”, which describes a neurosurgeon whose mistakes spark broken replies in a patient’s brain, as he tries futilely to remove their tumor.

‘Leave my soul alone, leave my soul alone,’   

that voice so arctic and that cry so odd   

had nowhere else to go—till the antique   

gramophone wound down and the words began

to blur and slow, ‘ … leave … my … soul … alone … ’


petit disclaimer:
My employer has enough opinions of its own, without having to have mine too.