2016-03-10»
go wild»I love watching the AlphaGo/이세돌 games. I barely know anything about Go, so I’m essentially pursuing my favourite hobby of watching smarter people reach out beyond their comprehension. The little shortcuts of explanation between expert Go players: the flurry of hand movements, the little trial explanations of future moves, and Go’s beautiful vocabulary, the subcultural mix of deliberately ironic calm and the background of barely concealed anxiety and excitement. A friend said it felt like “surrealist theater” sometimes. But what I love about games, about programs, about science, is that even when it’s hidden and barely explicable, there’s always something there.
Nobody seems to understand AlphaGo’s wilder moves. In the second game, the commentators belatedly realised that it had been doing something in the center while everyone thought it was losing the upper right to Lee. Opinions on who was winning swung wildly from side to side. AlphaGo itself has a metric of how it thinks it’s doing (it resigns if it perceives it has less than a 10% chance of winning). We don’t get to see that figure during the game, but the program’s British inventors said afterwards that AlphaGo thought it had a 50/50 chance in the mid-game, and that its confidence slowly and consistently increased towards the end. Were AlphaGo’s early moves madness or genius, someone asked. We’ll know from whether it wins or not, another human replied. It won.
And again, something of a zeitgeist event. The AI people, who’ve been kicking around in my box of interesting predictors for nearly a decade, feel, I think, that this is their moment.
I spent a couple of hours last weekend talking to Benjamen Walker about Nathan Barley, and the psychic damage of the early 2000s. At one point, I talked about the terrible distortion, for technologists in the dotcom years, of having everything you wanted and predicted turn out to be true, year after year. Then, more sadly, I talked about how the magic had ebbed away. How so many of us coasted along for a decade on glib predictions that the Internet was going to make things nicer and more exciting, and it worked, and then suddenly every bet turned out wrong.
I hate actually predicting things, because as soon as you pre-commit, your perceived accuracy plummets (because now it’s your actual accuracy, which is never as much fun). As ever, I can just couch my predictions in woolly language here, so: I can feel myself being tugged along in the AI folks’ wake, because they’re going somewhere interesting for a few years, even if maybe the magic will fade from them before they reach home and the Singularity.
(Fun reading if you want it, in this vein: Crystal Society by Max Harms. My favourite book this year so far. And, just like my favourite book of this decade, Constellation Games, it’s indie/self-published.)
BTW, Constellation Games is the Book of Honor at the upcoming Potlatch science fiction convention. I’m mortified that I’m missing it, but I think I’ll be ending up in the same city as the author (hi Leonard, are you going to be at LibrePlanet in Boston?), so maybe it’s not so bad. Who can predict?