There have been a lot of alarming but rather brief statements in the past few days about Haystack, the anti-censorship software connected with the Iranian Green Movement. Austin Heap, the co-creator of Haystack and co-founder of its parent non-profit, the Censorship Research Center (CRC), stated that the CRC had “halted ongoing testing of Haystack in Iran”; the EFF made a short announcement urging people to stop using the client software; the Washington Post wrote about unnamed “engineers” who said that “lax security in the Haystack program could hurt users in Iran”.
A few smart people asked the obvious, unanswered question: What exactly happened? Amid all those stern statements, there is little public information about why the public view of Haystack switched from a “step forward for activists working in repressive environments” that provided “completely uncensored access to the internet from Iran while simultaneously protecting the user’s identity” to something that no one should ever consider using.
Obviously, some security flaw in Haystack had become apparent. But why was the flaw not more widely documented? And why now?
As someone who knows a bit of the back story, I’ll give as much information as I can. Firstly, let me say I am frustrated that I cannot provide all the details. After all, I believe the problem with Haystack all along has been due to explanations denied: either because its creators avoided them, or because those who publicized Haystack failed to demand them. I hope I can convey why we still have one more incomplete explanation to attach to Haystack’s name.
(Those who’d like to read the broader context for what follows should look to the discussions on the Liberation Technology mailing list. It’s an open and public mailing list, but with moderated subscriptions and with the archives locked for subscribers only. I’m hoping to get permission to publish the core of the Haystack discussion more publicly.)
First, the question that I get asked most often: why make such a fuss, when the word on the street is that a year on from its original announcement, the Haystack service was almost completely nonexistent, a beta product restricted to only a few test users, all of whom were in continuous contact with its creators?
One of the many new facts about Haystack that the large team of external investigators, led by Jacob Appelbaum and Evgeny Morozov, have learned in the past few days is that there were more users of Haystack software than Haystack’s creators knew. Despite the lack of a “public” executable for examination, versions of the Haystack binary were being passed around, just like “unofficial” copies of Windows (or videos of Iranian political violence) get passed around. Copying: it’s how the Internet works.
But the understood structure of Haystack included a centralized, server-based model for providing the final leg of censorship circumvention. We were assured that Haystack had a high granularity of control over usage. Surely those servers blocked rogue copies, and ensured that bootleg Haystacks were excluded from the service?
Apparently not. Last Friday, Jacob Appelbaum approached me with some preliminary concerns about the security of the Haystack system. I brokered a conversation between him, Austin Heap, Haystack developer Dan Colascione, and CRC’s Director of Development, Babak Siavoshy. Concerned by what Jacob had deduced about the system, Austin announced that he was shutting down Haystack’s central servers, and would keep Haystack down until the problems were resolved.
Shortly after, Jacob obtained a Haystack client binary. On Sunday evening, Jacob was able to conclusively demonstrate to me that he could still use Haystack using this client via Austin’s servers.
When I confronted Austin on the phone with proof of this, he denied it was possible. He repeated his statement that Haystack was shut down. He also said that Jacob’s client had been “permanently disabled”. All this was said while I watched Jacob use Haystack, with his supposedly “disabled” client, via the same Haystack servers Austin claimed were no longer operational.
It appeared that Haystack’s administrator did not, or could not, effectively track his users, and that the methods he believed would lock them out were ineffective. More brutally, it also demonstrated that the CRC did not seem able to adequately monitor or administer its half of the live Haystack service.
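To make concrete what effective lockout would even have meant here, below is a minimal sketch of per-client authorization with a revocation list at a central server. It is entirely hypothetical (Haystack’s actual protocol was never published, so every name and mechanism in it is my own illustration), but it shows the kind of control a centralized circumvention service needs before it can truthfully claim a client has been disabled.

```python
# Hypothetical sketch only: Haystack's real protocol was never published.
# This illustrates per-client control at a central server: credentials are
# minted server-side and checked against a revocation list before any
# circumvention traffic is relayed.
import hashlib
import hmac

SERVER_SECRET = b"kept on the server, never shipped inside the client binary"
REVOKED_CLIENT_IDS = {"client-0042"}  # e.g. a bootleg or compromised copy

def issue_token(client_id: str) -> str:
    """Mint a per-client token; only the server can compute this."""
    return hmac.new(SERVER_SECRET, client_id.encode(), hashlib.sha256).hexdigest()

def authorize(client_id: str, token: str) -> bool:
    """Refuse revoked clients and clients presenting forged tokens."""
    if client_id in REVOKED_CLIENT_IDS:
        return False  # a "permanently disabled" client must fail here, every time
    return hmac.compare_digest(issue_token(client_id), token)
```

A server with a check like this could actually make good on the claim that a client had been disabled; the weekend’s demonstration suggested Haystack’s servers had nothing equivalent.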
Rogue clients; no apparent control. This is why I and others decided to make a big noise on Monday: it was not just a matter of quietly letting CRC’s official Haystack testers know of the problems; we feared there was a wider, vulnerable pool of background Haystack users whom none of us, including CRC, knew how to reach directly.
Which brings us to the next question: why reach out and tell people to stop using Haystack?
As you might imagine from the above description of Haystack’s system management, on close and independent examination the Haystack system as a whole, including these untracked binaries, turns out to have very little protection from a large number of potential attacks — including attacks that do not need Haystack server availability. I can’t tell you the details; you’ll have to take it on my word that everyone who learns about them is shocked by their extent. When I spelled them out to Haystack’s core developer, Dan Colascione, late on Sunday, he was shocked too. (He resigned from Haystack’s parent non-profit, the Censorship Research Center, last night, which I believe effectively kills Haystack as a going concern. CRC’s advisory board have also resigned.)
Deciding whether publishing further details of these flaws would put Haystack users in danger is not just a technical question. Does the Iranian government have sufficient motivation to hurt Haystack users, even if they’re just curious kids who passed a strange and exotic binary around? There’s no evidence the Iranian government has gone after the users of other censorship circumvention systems. The original branding of Haystack as “Green Movement” software may increase the apparent value of constructing an attack against it, but Haystack client owners do not have any connection with the sort of high-value targets a government might take an interest in. The average Haystack client owner is probably some bright, mischievous kid who snagged it to access Facebook.
Lessons? Well, as many have noted, reporters do need to ask more questions about too-good-to-be-true technology stories. Coders and architects need to realize (as most do) that you simply can’t build a safe, secure, reliable system without consulting with other people in the field, especially when your real adversary is a powerful and resourceful state-sized actor, and this is your first major project. The Haystack designers lived in deliberate isolation from a large community that repeatedly reached out to try and help them. That too is a very bad idea. Open and closed systems alike need independent security audits.
These are old lessons, repeatedly taught.
New lessons? Well, I’ve learned that even apparent vaporware can have damaging consequences (I originally got re-involved in investigating Haystack because I was worried the lack of a real Haystack behind the hype might encourage Iranian-government fake Haystack malware — as though such things were even needed!).
Should one be a good cop or a bad cop? I remember sitting in a dark bar in San Francisco back in July of 2009, trying to persuade a blasé Heap to submit Haystack for an independent security audit. I spoke honestly to anyone who contacted me at EFF or CPJ about my concerns, and would prod other human rights activists to share what we knew about Haystack whenever I met them (most of us were skeptical of his operation, but without sufficient evidence to make a public case). I encouraged journalists to investigate the back story to Haystack. I kept a channel open to Austin throughout all of this, which I used to nudge him occasionally toward obtaining an audit of his system and, finally, to get a demonstration that answered some of our questions (and raised many more). Perhaps I should have acted more directly, more publicly, and sooner?
And I think about Austin Heap’s own closing quote from his Newsweek article in August, surely the height of his fame. “A mischievous kid will show you how the Internet works,” he warns. The Internet is mischievous kids; you try and work around them at your peril. And theirs.
September 14th, 2010 at 8:55 am
[…] #1: Danny O’Brien has written a post which includes information about pirated copies of Haystack being used in Iran and the fact that […]
September 14th, 2010 at 10:52 am
Hey Danny,
Thanks for the clarification, and your efforts behind the scenes. What I am sending here may well be 100% redundant information, probably stuff you learned so long ago you can’t remember where or when, but WTF, just in case…
When Haystack first surfaced in the technology press, I went and read their entire website. Even with only a few superficial clues about how Haystack works, basic security problems with the protocol were obvious. I wrote to the site’s contact address with my concerns, links to relevant technical articles, and one very important feature request: Security tools need to be open source.
The Haystack site included a statement that they would not release source code because doing so would enable the Iranian government to defeat it, which is semantically equivalent to saying, “We are green beginners in the data security field, we have not learned any of the fundamentals yet, and you would be a fool to use our tools.” I say this because there is no way to prevent the resources of a State from decompiling and analyzing “closed source” software. If a State security agency wants to examine the internals of a piece of software, they will obtain a copy of the software and do so.
All that Haystack’s promoters actually prevented, by refusing to release the source code, was competent criticism and useful contributions from a community of interested developers. Network security professionals do not use or trust closed source software, especially when encryption is involved – crypto is hard to get right, and if it fails, the user sees no difference and has no reason to suspect a failure.
The consensus among network engineers and security consultants is that any cryptosystem (or any application or protocol dependent on a cryptosystem) is only as secure as the keys used. All security vests in keeping the keys out of hostile hands; no security vests in keeping knowledge of how the application or protocol works out of hostile hands, because, without question, “they” will figure out how the thing works, and if that is all they need to break it, it is broken.
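To make that concrete, here is a tiny Python sketch of the principle. (It uses the third-party “cryptography” package, so treat it as an illustration, not a recommendation.) The cipher, AES-GCM, is completely public; the only secret in the entire system is the key.

```python
# Kerckhoffs's principle in miniature: the algorithm (AES-GCM) is public,
# and this code can be read by any adversary. Security rests in the key.
# Requires: pip install cryptography
import os

from cryptography.hazmat.primitives.ciphers.aead import AESGCM

key = AESGCM.generate_key(bit_length=256)  # the ONLY secret in the system
nonce = os.urandom(12)                     # public, but must never repeat per key

ciphertext = AESGCM(key).encrypt(nonce, b"meet at the usual place", None)

# Knowing the algorithm, the nonce, and the ciphertext gains the attacker
# nothing without the key; conversely, if the key leaks, no amount of
# algorithm secrecy will save you.
assert AESGCM(key).decrypt(nonce, ciphertext, None) == b"meet at the usual place"
```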
September 14th, 2010 at 11:18 am
[…] Haystack, the circumvention tool aimed at helping Iranians evade Internet censorship. A range of posts that have been written, critiquing the project. The actions of the project’s creators have […]
September 14th, 2010 at 12:00 pm
[…] Heap, one of the tool’s developers, has faced sharp criticism from Appelbaum and others for failing to vet the tool with security professionals before distributing it for use. The media […]
September 14th, 2010 at 2:12 pm
[…] Danny O’Brien’s Oblomovka has an authoritative (and compassionate) account. […]
September 15th, 2010 at 6:34 am
Thanks for this nice article explaining how Haystack is – and has been – dangerous from the beginning, a danger that would have been easily preventable had personalities not gotten in the way.
Morgan Sennhauser
Director of Research and Investigation, Sedazad
http://sedazad.org
September 15th, 2010 at 1:05 pm
This reminds me a little of the schism that occurred in the Cult of the Dead Cow/Hacktivismo projects that came in the wake of the Tiananmen massacre. The original project, Peek-A-Booty, was simple, straightforward, easy to use, and deemed insecure by most of the group. In due course, a couple of members split off to complete a fork of it, but it could never be turned into a secure project, due to fundamental design flaws. The others spent several months hammering out the pretty gnarly security problems involved, and those discussions eventually inspired Mixter’s coding of Six/Four, which was more of a bother to use, but which I believe has never been publicly broken. Yet, when people try to reinvent the wheel, they predictably make the same sorts of mistakes that Peek-A-Booty did.
The temptation to use existing privacy mechanisms seems to be too strong for some designers to resist, even though things like SSL and VPN technology were NEVER intended to confer anonymity, and are broken if they do; you either have proof of the identity of who you’re talking to (a valid digital certificate), or you have no idea who you’re talking to (a self-signed certificate), in which case you may well be sending your traffic to a government agent. They were also not designed to deal with traffic analysis, or any other attack which can extend through a large portion of the Internet. They were not designed to deal with governments who have the ability to install thousands of transparent proxies, coerce keys out of CAs, and so on. They’re okay for what they were designed to do, but no more than that, and when designers try to make them do more, the people they’re trying to help may wind up dead.
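A small illustration of that identity point, using nothing but Python’s standard library: the same TLS handshake either proves who is on the other end or proves nothing, depending entirely on whether certificate verification is on. (The hostname is a placeholder, and as noted above, even the verified case fails against an adversary that can coerce keys out of a CA.)

```python
# The identity point above, shown with Python's standard ssl module.
# "example.com" is a placeholder endpoint used purely for illustration.
import socket
import ssl

host = "example.com"

# Verified: the handshake fails unless the server presents a certificate
# chaining to a trusted CA and matching the hostname -- you know who you
# are talking to (modulo a coerced or compromised CA).
verified = ssl.create_default_context()
with socket.create_connection((host, 443)) as sock:
    with verified.wrap_socket(sock, server_hostname=host) as tls:
        print("verified peer:", tls.getpeercert()["subject"])

# Unverified: still encrypted, but the connection could terminate at anyone
# at all -- the self-signed-certificate situation described above.
unverified = ssl.create_default_context()
unverified.check_hostname = False       # must be disabled first...
unverified.verify_mode = ssl.CERT_NONE  # ...before turning verification off
with socket.create_connection((host, 443)) as sock:
    with unverified.wrap_socket(sock, server_hostname=host) as tls:
        print("no proof of identity:", tls.version())
```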
That leaves DIY options which *may* be effective, but which are inconvenient. They may be too technically or logistically difficult for the average computer user, requiring things like the (unpublicized) IP address of a trustworthy party outside of the country. If you don’t advertise server addresses it’s a big hassle for users, but if you do, you can be sure that your adversary will have blacklisted or attacked those IPs minutes after they’re announced. Tor is an example of a fine privacy tool which cannot be used in China or Iran for exactly this reason. Tor was also not designed to protect bloggers from governments — any government willing to buy hundreds of machines and set them up as Tor end points can sniff traffic for logins and passwords. Tor is best used as a read-only system.
I hope this whole fiasco serves as food for thought to any who would try (yet again) to tackle the need for privacy+anonymity in the face of an extremely powerful and well-funded adversary. It’s a very ugly combination of challenges, and any pretty answers that appear are probably dangerous mistakes.
September 15th, 2010 at 2:13 pm
“It’s an open and public mailing list, but with moderated subscriptions and with the archives locked for subscribers only.”
Restricted subscriptions and restricted archives? How is that open and public?
September 16th, 2010 at 9:51 am
Sounds to me like this blogger has some issues with Austin Heap… Austin’s Haystack was effective in the early stages of the Iran election.
September 16th, 2010 at 11:19 pm
“Copying: it’s how the Internet works.”
Pffft. One of the silliest expressions I’ve read recently.
September 17th, 2010 at 12:14 am
The lead attacker of Austin Heap appears to have been Evgeny Morozov, who has close ties with George Soros and the former Soviet bloc, and who writes for Foreign Policy.
Evgeny Morozov claims to be a Belarus-born researcher and blogger who works on the political effects of the internet. Morozov expresses skepticism about the Internet’s ability to provoke change in authoritarian regimes, and instead believes that the internet can strengthen authoritarian regimes.
He is a Yahoo! fellow at Georgetown University’s Walsh School of Foreign Service and a contributing editor of and blogger for Foreign Policy magazine.
Prior to his appointment to Georgetown, he was a fellow at George Soros’s Open Society Institute, where he remains on the board of the Information Program. Before moving to the US, Morozov was based in Berlin and Prague, where he was Director of New Media at Transitions Online, a media development NGO active in 29 countries of the former Soviet bloc.
He was previously a columnist for the Russian newspaper Akzia. His writings have appeared in various newspapers and magazines around the world, including The Economist, Newsweek International, International Herald Tribune, Boston Review, Slate, and the San Francisco Chronicle. He has been chosen as a TED fellow where he spoke about how the Web influences regime stability in authoritarian, closed societies. He is a critic of the impact of the internet and other technologies in bringing about social or political change, sometimes even aiding dictatorships.
Inasmuch as Austin Heap is devoted to bringing about regime change via the internet, and Morozov is devoted to the exact opposite viewpoint, the two are at least intellectual enemies. There are also ties to the former Soviet bloc and to George Soros, so Morozov could also be acting as a puppet.
September 17th, 2010 at 6:21 pm
While this does not deal with all of the issues, http://www.cypherpunks.ca/otr/ is an interesting privacy enhancement.
September 19th, 2010 at 11:55 am
[…] been (link is to Danny O’Brien’s twitter feed. He also wrote an article about Haystack here). Even their main developer […]
September 26th, 2010 at 4:58 am
[…] nobody wants to release the code in case someone tries to use it – but the meta-technology. As this post makes clear, perhaps the biggest problem was that it was half-open, half-closed. The code wasn’t […]