Fire on the Firewall: A Debate Breaks Out

By Katherine Maher | April 14, 2011

Yesterday I wrote a post summarizing an event I went to at Freedom House for the release of their latest report on the use of censorship circumvention tools. Last night, Tor developer Jacob Appelbaum wrote a post questioning the methodology of the Freedom House report. This morning, a discussion broke out on Twitter among people with varying stakes in the outcome. You can look up Robert Guerra (@netfreedom), Rafik Dammak (@rafik), Katrin Verclas (@katrinskaya) and Tactical Tech (@onorobot) to follow along (Appelbaum is @ioerror).

Having still not read the report [ed.--it's not a great excuse, but we are in the midst of donor deadlines], I can't comment on Appelbaum's criticism in full--in one instance, he finds that the rankings reflected users' perception of performance rather than measured technical performance--but I would recommend his post as worth reading alongside the FH report. It is notable that FH neither defined the parameters of its technical evaluation nor published the number of respondent users.

Appelbaum brings invaluable skills to the community--not least of which is accessible writing that demystifies thorny technical questions. His post raises interesting points, but it's worth noting that the evaluation criteria he advances do not necessarily track with our own experience of activists' considerations. He writes:

"The report should rate tools based on availability, whether the tool requires administrative privileges, validated security claims, anonymity, design and implementation details available for peer review, centralization or de-centralization, and other qualities that clearly show a distinction between tools in a meaningful sense. By understanding these qualities, users will be able to understand how their tool may or may not function in the event of a major Internet outage; users will be better informed about the security claims and about the actual risk that is mitigated by the tools of their choosing."

Ideally, users should understand these qualities. Often they do not. A paragraph that moves from 'the report should rate' to 'users will be able' makes a significant leap from the expectation that a report assess these factors to the assumption that such an assessment will result in activists' understanding of those qualities. And even if an activist fully understands the concepts, that understanding may not be enough to make an informed or optimal decision.

The Freedom House report appears to meld an evaluation of technical merits with an evaluation of the difficult-to-quantify, holistic user experience. The discussion that has arisen over its methodology and findings illustrates the substantial gap between expert knowledge and confident user practice. Unfortunately, it also indirectly prioritizes the tools themselves--and their specific technical merits--over a discussion of practical application and comprehensive security in context.

As the ultimate consideration in these arguments should always be the end user, it would be wise to keep perspective--and to recognize how far our conversations often remain from the actual applied experience.
