Archive for the ‘Committee to Protect Journalists’ Category
2024-08-25»
Pavel Durov and the BlackBerry Ratchet»
Why do governments go after the companies and executives behind more weakly encrypted tools?
It’s very hard, this early, to pierce through what’s going on with the French authorities’ arrest of Pavel Durov, the CEO of Telegram — but that doesn’t stop people from having pet theories. Was it retaliation from the US and the FBI for not backdooring Telegram? Was it a favor to Durov so he could hide from Putin? Was it just the grinding wheels of French justice?
I’m sure we’ll understand more details of Durov’s case in the next few days, but motivations — especially those anthropomorphically projected onto entire states — are never really resolved satisfactorily. If you think LLMs lack explainability, try guessing the weights of a million human-shaped neurons in the multi-Leviathan model that is international politics. It’s not that we’ll never have explanations: it’s just that we’ll never be able to point to one as definitive.
Of course, the intractability of large systems never stopped anyone from trivializing those crushed under their lumberings with a pat explanation or two on a blog. (It certainly won’t stop me, who, when I was a columnist, made more-or-less a career out of it.)
So let me dig out an old theory, which I think may fit the facts here. I think Durov and Telegram are prisoners of the same ratchet that trapped Research In Motion (RIM)’s BlackBerry in the 2000s.
Back in the Before iPhone Times, BlackBerry was a cute range of mobile devices with a little keyboard and screen that offered low-cost messaging in an era when phones were bad at everything that wasn’t “talking to people” (and they weren’t great at that).
We think of mobile phones these days as individually-owned devices — intimately so — but BlackBerrys were the stuff of institutional purchasing. From the late 90s, companies and governments bought or rented BlackBerrys en masse, and handed out the units to their staff to keep in touch. In the pre-cloud era, these institutions were cautious about ceding a chunk of their internal comms infrastructure to a third party, let alone a Canadian third party, so RIM built reassuring-sounding content privacy into their design. A chunk of the message-relaying work was done by the “BlackBerry Enterprise Server”, which was closed-source, but sat on-prem. Corporate BlackBerrys could send instant messages directly to one another, via RIM’s systems, but enterprises could flash their users’ devices with a shared key that would make their messages indecipherable by anyone who didn’t have the key, including RIM or the telecoms networks the messages passed over. None of it would really pass muster by modern cryptographic best practices, but it was enough to get a CTO to sigh, say “OK, seems good enough. Sure.”, and sign off on the purchase.
Importantly, though, a lot of this encrypted security was optional, and protected these comms at the organizational, not individual, level. Companies could turn message privacy on and off. Even when it was turned on, the company itself could decrypt all the messages sent over its network if it needed to. Useful if you’re in a heavily-regulated industry, or in the government or military.
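To make that design concrete, here’s a minimal sketch, assuming a simple shared-key model rather than RIM’s actual protocol (which predates modern primitives like the Fernet scheme used below). It shows the property that mattered: whoever holds the one shared key, here the enterprise, can read every message, while a relay that never sees the key carries only ciphertext.

```python
# A minimal sketch of organization-level shared-key messaging. This is an
# assumed model for illustration, not RIM's actual protocol: every device
# on the network is flashed with the same symmetric key, the key holder
# can decrypt everything, and a relay without the key sees only ciphertext.
from cryptography.fernet import Fernet  # pip install cryptography

org_key = Fernet.generate_key()  # set by the enterprise, never shown to the relay

def device_send(message: bytes) -> bytes:
    """Encrypt on-device with the organization's shared key."""
    return Fernet(org_key).encrypt(message)

def relay(ciphertext: bytes) -> bytes:
    """RIM / the telco: carries bytes it cannot read (no org_key here)."""
    return ciphertext

def org_decrypt(ciphertext: bytes) -> bytes:
    """Anyone holding org_key, i.e. the company, can read any message."""
    return Fernet(org_key).decrypt(ciphertext)

wire = relay(device_send(b"Q3 numbers attached"))
print(org_decrypt(wire))  # b'Q3 numbers attached'

# Note what's missing: per-user keys. Whoever sets org_key (the enterprise
# for corporate devices; RIM or the telco for consumer ones) holds it all.
```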
Now, BlackBerry users loved their little type-y clicky things, and inevitably RIM realized they might have a consumer play on their hands (especially as smartphones began to get popular). They started selling BlackBerry devices direct to individuals via the mobile phone companies. RIM and the telcos played the part of the institutional buyers in this deal — they could turn on the encryption, and had access to the messages, although it was unclear from the outside who played what part. Did the telcos flash their devices with a shared key, or did RIM? Who was in charge of turning the privacy on and off?
All this ambiguity made infosec people leery of RIM’s promises, especially with consumer BlackBerry devices. But in general, people read this all as meaning that consumer BlackBerrys were secure enough. After all, even President Obama had a BlackBerry, so that must mean something?
Apparently so: around 2010, governments started publicly attacking RIM and BlackBerrys as a threat to national security and crime prevention. Law enforcement agencies started complaining about RIM’s non-cooperation. Countries like the UAE and India talked of throwing RIM out of their countries entirely. It was the first big government vs internet messaging drama to play out in the press.
At the time, this puzzled me immensely. From the viewpoint of infosec insiders, spooks should have loved RIM! BlackBerrys were actually kind of insecure! If you wanted to get at the messages of individual BlackBerry customers — including, most visibly, the drug dealers who loved their BlackBerrys — you just had to hit up the (certainly domestic) telephone company they were using and get that shared key. Or you could maybe mandate what that key would be. You didn’t need to pressure or ban RIM to do this!
But as I dug into it, I realized what may have been going on. RIM and the telcos had been helping the authorities, to the best of their abilities. They probably did a fair bit of explaining to the authorities how to tap a BlackBerry, and may even have done some of the heavy lifting. When it came to consumer BlackBerrys, RIM didn’t have the hard and fast line of a Signal or another truly end-to-end encrypted tool. They could hand over the messages, and (as they would sometimes protest) often did.
But, crucially, they could not do this in every case. The reasons they could not were primarily bureaucratic and technical. The drug dealers might have got smart and decided to change the key on their network, and neither RIM nor the cops had a device to extract the key from. Or the authorities might want info on a corporate BlackBerry, which RIM could not crack using its existing infrastructure. Or a BlackBerry’s shared key might have been set by the phone company, not RIM, so RIM couldn’t directly co-operate, and needed to refer the authorities back to the telco — who might just cluelessly bounce them back to RIM. That kind of shuttlecock-up happens all too often, and it’s easy for the tech company to take the blame.
Ultimately, the problem was that RIM could not 100% state they had no access to BlackBerry data at all. They complied with some requests, but not others. The reasons were generally technical, not political — but they sounded to law enforcement and intelligence community ears like they were political.
Those political actors were not entirely wrong. RIM had made political decisions when designing the privacy of its tools. In particular, they had promised a bunch of customers that they were secure, and let a bunch of other customers think they were secure. RIM’s critics in governments were simply asking — why can’t you move the customers that we’d like to spy on from one bucket to the other?
Declining to do this was an existential commitment for RIM — if they undid those protections once, none of their major military and corporate customers would ever trust them again. They had to fight the ratchet that the governments were placing them in, because if they didn’t, their business would be over. And the more they fought, the angrier their government contacts became, because hey — you’re already doing this for some people. Why aren’t you doing it for this case? Law enforcement saw this as a political problem, so responded to it with political tactics: behind-the-scenes pressure, and when that didn’t work, public threats and sanctions.
Durov and the Ratchet
As with BlackBerry, I think a lot of infosec professionals are again confused as to why Telegram is getting it in the neck from the French government. It’s not even a well-designed tool. And I think the reason is the same: because of its opt-in, weakly protective tooling, Telegram, like BlackBerry, can, and does, assist the authorities in some ways, but not others. I don’t mean this in a damning way — if Telegram gets a CSAM report, it takes down a channel. End-to-end encryption is opt-in on Telegram; they really do have access to user information that, say, a Signal or even WhatsApp doesn’t. There’s no technical reason for it not to have features on the backend to deal with spam and scams: a backend which — unlike an end-to-end encrypted tool’s — can peer in detail at a lot of user content. The authorities can plainly see that Telegram can technically do more to assist them: a lot more.
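To illustrate the two buckets, here’s a minimal sketch, assuming a toy model rather than Telegram’s actual MTProto protocol: with opt-in end-to-end encryption, default chats reach the operator’s backend as content it can read, scan, and hand over, while opted-in chats reach it as ciphertext it can only relay.

```python
# A minimal sketch of opt-in end-to-end encryption: an assumed model for
# illustration, not Telegram's actual protocol. Default "cloud" chats are
# readable on the backend; opt-in "secret" chats arrive as ciphertext only.
from cryptography.fernet import Fernet  # pip install cryptography

class Backend:
    """The operator's servers: stores whatever payload clients send."""
    def __init__(self):
        self.stored = []

    def relay(self, payload: bytes):
        self.stored.append(payload)  # available for spam-scanning, takedowns, subpoenas

backend = Backend()

# Default chat: the backend receives, and can read, the plaintext.
backend.relay(b"cheap watches, great prices")

# Opt-in end-to-end chat: the two clients share a key the backend never sees.
shared_key = Fernet.generate_key()
backend.relay(Fernet(shared_key).encrypt(b"meet at the usual place"))

for payload in backend.stored:
    print(payload)
# The first payload is plaintext the operator can moderate or disclose;
# the second is opaque to it, whatever the authorities ask for.
```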
So why doesn’t Telegram do more to help the French government? As with RIM, Telegram’s excuses will be convoluted and hard for political authorities to parse. Maybe it’s because the French requests are for data it doesn’t have — chats where the participants were smart enough to turn on encryption. Maybe it’s just that if they provide that service for France, they’d have to provide it for everyone. Maybe France wants to see Russian communications. Maybe Telegram just doesn’t have the manpower. But the point here is that Durov is caught in the ratchet — the explanations as to what Telegram can and can’t do are a product of contingent history, and the French authorities can’t see why those contingencies can’t be changed.
If it sounds like I’m basically victim-blaming Durov here for his own lack of commitment to infosec crypto orthodoxy, I want to be clear: best-practice, ideologically-pure end-to-end apps like Signal absolutely face the same ratchet. What I’m mostly trying to understand here is why Telegram and BlackBerry get more publicly targeted. I think the truth behind the amount of pushback they receive is more psychological than cryptographic. Humans who work in politics-adjacent roles get madder at someone who concedes part of the way but refuses to bow further for what seem like political reasons, than at someone who has a convincing argument that it is mathematics, not politics, that prevents them from complying, and who has stayed further down the ratchet all along. Not much madder, but mad enough to more quickly consider using political tools against them to exact their compliance.
Echoing BlackBerry’s woes, I don’t think Telegram’s security compromises are a product of government pressure so much as historical contingencies. But I do think its weaknesses have ironically made it a greater target for the kind of unjust, escalatory, fundamentally ill-conceived actions that we have seen against Durov by the French authorities.
The motivations of government officials are hard to guess, but I do think it is accurate to say they see the world through political, not technical, lenses.
6 Comments »
2023-03-28»
Program Think»
I admit that, post-EFF, when I read about some terrible Internet regulatory proposal, or knotty problem of digital ethics, I often have a burst of “well, thank goodness it’s someone else’s job to deal with this now.” (Except for the narrower domain that is still my problem, I guess).
And then again, sometimes, I just feel the same pain as before. I read this article today, on a Chinese cybersecurity worker, jailed for seven years for a crime the authorities wouldn’t disclose, even to his wife. She is pretty sure she has finally worked out what that crime was: her husband was Program Think, a prolific anonymous blogger whose postings stopped the day before his arrest:
The freewheeling blog offered a mixture of technical cybersecurity advice and scathing political commentary – including tips on how to safely circumvent China’s Great Firewall of internet censorship, develop critical thinking and resist the increasingly totalitarian rule of the Chinese Communist Party.
The blogger took pride in their ability to cover their digital tracks and avoid getting caught – even as a growing number of government critics were ensnared in Chinese leader Xi Jinping’s strident crackdown on dissent.
From my years working on EFF’s international team, and before that at CPJ, Program Think feels familiar: the independent, “arrogant” techy, staying up all night to write because something is not only wrong on the Internet, but wrong in the country, too. We still tend to characterize them as bloggers, but before, during, and after peak blogging, they were also independent journalists, and writers, and cranks, and nobodies, and brilliant alternative voices.
Popular sympathy for this kind of character has faded recently in the West, but they do keep typing. I have a lot of criticism of the U.S., Europe, and much of the rest of the world too, but I’m relieved that I’m somewhere where seven-year sentences for writing what you think are not culturally accepted, aren’t coded into the law, and are recognized as an aberration by the majority of the establishment, and almost certainly the population too.
“Since June 2009, (Ruan) has used his computer to write more than a hundred seditious articles that spread rumors and slander, attack and smear the country’s current political system, incite subversion of state power, and intent to overthrow the socialist system,” the court verdict said.
It added that the articles, published on overseas platforms, attracted “a large number of internet users to read, comment and share, causing pernicious consequences.”
Program Think’s archive is still available, on blogspot.
Comments Off on Program Think
2010-09-14»
Haystack vs How The Internet Works»
There have been a lot of alarming but rather brief statements in the past few days about Haystack, the anti-censorship software connected with the Iranian Green Movement. Austin Heap, the co-creator of Haystack and co-founder of its parent non-profit, the Censorship Research Center, stated that the CRC had “halted ongoing testing of Haystack in Iran”; EFF made a short announcement urging people to stop using the client software; the Washington Post wrote about unnamed “engineers” who said that “lax security in the Haystack program could hurt users in Iran”.
A few smart people asked the obvious, unanswered question: What exactly happened? Between all those stern statements, there is little public information about why the public view of Haystack switched from it being a “step forward for activists working in repressive environments” that provides “completely uncensored access to the internet from Iran while simultaneously protecting the user’s identity” to being something that no-one should ever consider using.
Obviously, some security flaw in Haystack had become apparent. But why was the flaw not more widely documented? And why now?
As someone who knows a bit of the back story, I’ll give as much information as I can. Firstly, let me say I am frustrated that I cannot provide all the details. After all, I believe the problem with Haystack all along has been due to explanations denied: either because its creators avoided them, or because those who publicized Haystack failed to demand them. I hope I can convey why we still have one more incomplete explanation to attach to Haystack’s name.
(Those who’d like to read the broader context for what follows should look to the discussions on the Liberation Technology mailing list. It’s an open and public mailing list, but with moderated subscriptions and with the archives visible to subscribers only. I’m hoping to get permission to publish the core of the Haystack discussion more publicly.)
First, the question that I get asked most often: why make such a fuss, when the word on the street is that a year on from its original announcement, the Haystack service was almost completely nonexistent, a beta product restricted to only a few test users, all of whom were in continuous contact with its creators?
One of the many new facts about Haystack that the large team of external investigators, led by Jacob Appelbaum and Evgeny Morozov, have learned in the past few days is that there were more users of Haystack software than Haystack’s creators knew. Despite the lack of a “public” executable for examination, versions of the Haystack binary were being passed around, just like “unofficial” copies of Windows (or videos of Iranian political violence) get passed around. Copying: it’s how the Internet works.
But the understood structure of Haystack included a centralized, server-based model for providing the final leg of censorship circumvention. We were assured that Haystack had a high granularity of control over usage. Surely those servers blocked rogue copies, and ensured that bootleg Haystacks were excluded from the service?
Apparently not. Last Friday, Jacob Appelbaum approached me with some preliminary concerns about the security of the Haystack system. I brokered a conversation between him, Austin Heap, Haystack developer Dan Colascione, and CRC’s Director of Development, Babak Siavoshy. Concerned by what Jacob had deduced about the system, Austin announced that he was shutting down Haystack’s central servers, and would keep Haystack down until the problems were resolved.
Shortly after, Jacob obtained a Haystack client binary. On Sunday evening, Jacob was able to conclusively demonstrate to me that he could still use Haystack using this client via Austin’s servers.
When I confronted Austin with proof of this act, on the phone, he denied it was possible. He repeated his statement that Haystack was shut down. He also said that Jacob’s client had been “permanently disabled”. This was all said as I watched Jacob using Haystack, with his supposedly “disabled” client, using the same Haystack servers Austin claimed were no longer operational.
It appeared that Haystack’s administrator did not or could not effectively track his users, and that the methods he believed would lock them out were ineffective. More brutally, it also demonstrated that the CRC did not seem able to adequately monitor or administer their half of the live Haystack service.
Rogue clients; no apparent control. This is why I and others decided to make a big noise on Monday: it was not just a matter of quietly letting CRC’s official Haystack testers know of the problems; we feared there was a potentially wider and more vulnerable pool of background Haystack users that none of us, including CRC, knew how to reach directly.
Which brings us to the next question: why reach out and tell people to stop using Haystack?
As you might imagine from the above description of Haystack’s system management, on close and independent examination the Haystack system as a whole, including these untracked binaries, turns out to have very little protection from a large number of potential attacks — including attacks that do not need Haystack server availability. I can’t tell you the details; you’ll have to take it on my word that everyone who learns about them is shocked by their extent. When I spelled them out to Haystack’s core developer, Dan Colascione, late on Sunday, he was shocked too. (He resigned from Haystack’s parent non-profit, the Censorship Research Center, last night, which I believe effectively kills Haystack as a going concern. CRC’s advisory board has also resigned.)
Deciding whether publishing further details of these flaws would put Haystack users in danger is not just a technical question. Does the Iranian government have sufficient motivation to hurt Haystack users, even if they’re just curious kids who passed a strange and exotic binary around? There’s no evidence the Iranian government has gone after the users of other censorship circumvention systems. The original branding of Haystack as “Green Movement” software may increase the apparent value of constructing an attack against Haystack, but Haystack client owners do not have any connection with the sort of high-value targets a government might take an interest in. The average Haystack client owner is probably some bright mischievous kid who snagged it to access Facebook.
Lessons? Well, as many have noted, reporters do need to ask more questions about too-good-to-be-true technology stories. Coders and architects need to realize (as most do) that you simply can’t build a safe, secure, reliable system without consulting with other people in the field, especially when your real adversary is a powerful and resourceful state-sized actor, and this is your first major project. The Haystack designers lived in deliberate isolation from a large community that repeatedly reached out to try and help them. That too is a very bad idea. Open and closed systems alike need independent security audits.
These are old lessons, repeatedly taught.
New lessons? Well, I’ve learned that even apparent vaporware can have damaging consequences (I originally got re-involved in investigating Haystack because I was worried the lack of a real Haystack behind the hype might encourage Iranian-government fake Haystack malware — as though such things were even needed!).
Should one be a good cop or a bad cop? I remember sitting in a dark bar in San Francisco back in July of 2009, trying to persuade a blasé Heap to submit Haystack for an independent security audit. I spoke honestly to anyone who contacted me at EFF or CPJ about my concerns, and would prod other human rights activists to share what we knew about Haystack whenever I met them (most of us were skeptical of his operation, but without sufficient evidence to make a public case). I encouraged journalists to investigate the back story to Haystack. I kept a channel open to Austin throughout all of this, which I used to occasionally nudge him toward obtaining an audit of his system, and, finally, get a demonstration that answered some of our questions (and raised many more). Perhaps I should have acted more directly and publicly and sooner?
And I think about Austin Heap’s own end quote from his Newsweek article in August, surely the height of his fame. “A mischievous kid will show you how the Internet works,” he warns. The Internet is mischievous kids; you try and work around them at your peril. And theirs.
15 Comments »
2010-03-19»
what i did next»
For a moment, climbing out of the too-fresh sunshine and with the taste of a farewell Guinness still on my tongue, slumping into the creaky old couch in the slightly grimy Noisebridge to write something from scratch, San Francisco felt like Edinburgh in August, a day before the Festival.
Edinburgh for me was always the randomizer, the place I hitched to every year, camped out in, and came out in some other country, six weeks later, hungover and overdrawn, with a new skill or passion or someone sadder or more famous or just more fuddled and dumber than ever.
Today was my last day at EFF. Just before our (their? Our.) 20th birthday party in February, where I had the profoundly fannish pleasure to write and barely rehearse a 30-minute sketch starring Adam Savage, Steve Jackson, John Gilmore, me in my underpants, and Barney the Dinosaur, I callously told them I was leaving them all for another non-profit. We commiserated on Thursday, in our dorky way, by playing Settlers of Catan and Set and Hungry Hippos together. They gave me money to buy a new hat. I logged off the intranet, had a drink, and wandered off into a vacation.
In April, after a couple of weeks of … well, catching up on my TV-watching, realistically … I’ll be kickstarting a new position at the Committee to Protect Journalists as Internet Advocacy Coordinator.
I’ve known the CPJ people for a few years now, talking airily to them about the networked world as they grimly recorded the rising numbers of arrested, imprisoned, tortured, threatened and murdered Internet journalists in the world. Bloggers, online editors, uploading videographers. Jailed, dead, chased into exile. As newsgathering has gone digital, it’s led to a boom in unmediated expression. But those changes have also disintermediated away the few institutional protections free speech’s front line ever had.
CPJ has incredible resources for dealing with attacks on the free press on every continent: their team assists individuals, lobbies governments at the highest levels, documents and publicizes, names and shames. They were quick to recognize and reconfigure for a digital environment (you have to admire an NGO that knew enough to snag a three letter domain in ’95). Creating a position for tackling the tech, policy and immediate needs of online journalism was the next obvious step.
The question I had for them in my interview was the same that almost everybody I’ve spoken to about this job has asked me so far. On the Internet, how do you (they? We.) define who a journalist is?
The answer made immediate sense. While “journalism” or “newsgathering” or “reportage” as an abstract idea might seem problematic when cut from its familiar institutions, and pasted into the Internet… nonetheless, you know it when you see it. When someone is arrested or threatened or tortured for what they’ve written, if you can pull up what they said in a mailreader or a browser, it really doesn’t take long to identify whether it’s journalism or not.
What’s harder is untangling the slippery facts of the case — whether the journalist was targeted because of their work, or other reasons; whether it was the government or a criminal enterprise that did the deed; where the leverage points are to seek justice or freedom.
In those fuzzier areas, in the same way as EFF uses its legal staff to map the unclear world of the frontier into clear legal lines, CPJ uses its staff’s investigative journalist expertise to uncover what really happened, and then uses the clout of that reinforced and unassailable truth to lobby and expose.
Honestly, I’m still only beginning to map out how I might help in all this. I spent a week last month in New York where CPJ is based, listening to their regional experts talk about every continent, all the dictators, torturers, censors and thugs, all the bloggers and web publishers and whistleblowers.
I know I am starting on that ignorance rollercoaster you get when striking out into new territory. I can tell these people about proxies, AES encryption and SMS security, but I still can’t pronounce Novaya Gazeta, or remember what countries border Kenya. You surprise yourself with how much old knowledge becomes freshly useful, at the same time as you feel stupid for every dumbly obvious fact you fail to grasp.
I think part of my usefulness will come from writing more, and engaging more with the communities here I know well, to explain and explore the opportunities and threats their incredible creations are creating today. At the same time, I’m already resigned to taking a hit in my reputational IQ as I publicly demonstrate my ignorance (my friends in Africa and Russia are already facepalming, I can tell). Hope you’ll forgive me.
In the mean time, I’ll be setting up my monthly donation to EFF. I’ve said it before and I’ll bore you again: EFF are an incredible organization, made up of some of the smartest and most dedicated people I’ve ever met. I smugly joined in 2005 thinking I understood tech policy, and spent the next few years amazed at what it was like to live as the only person who didn’t have an EFF to help me understand what I was looking at and what to do about it. I guess I finally got the hang of juggling five hundred daily emails, a dozen issues refracted through dozens of cultures across the world. And I guess that’s always the cue to switch tracks and reset to being dumb and ready to learn again.
Incidentally, EFF is looking for an IP attorney right now. I don’t know how many lawyers read this blog, but if you know a smart IP legal person who wants to randomize their life for the opportunity to become even smarter for a good cause, get them to apply. They won’t regret it, not for a minute.
7 Comments »