Friday, April 16, 2010

The Great Australian Firewall just won't work

The proposed Australian Internet censorship rules will not work. Like most blacklist-based schemes without active human monitoring, the filter can be trivially bypassed by anybody capable of following simple instructions. As such, all the people it's designed to stop (like kiddie porn scum) will simply ignore its supposed blocking effects. Meanwhile, the rest of us will have to live with the performance degradation, reliability problems, and so on.

It can't work because:
  • Blacklists are useless if you don't know what to block. Private websites (whose content is unknown until you have the password), websites that switch content dynamically based on source IP or authentication details, and so on all defeat blacklists unless an insider reports the site as a problem.
  • Blacklists are always out of date. A content provider can hop domains and IP addresses far faster than any blacklist can keep up with, and can easily inform interested parties of the changes over IRC, email, etc.
  • HTTPS prevents content inspection. A filter cannot content-filter encrypted communication such as HTTPS, so all it can go by is the destination IP address and reverse DNS (the first sketch after this list shows quite how little that is). If the blacklist doesn't cover that IP address, the communication must be permitted.
  • Encrypted tunnels and VPNs can completely bypass the filtering. You can't block all encrypted traffic through the filter; you'd prevent secure online payment, remote access to business networks by teleworkers, system administrators logging in to maintain their networks, etc. To bypass the filter, all somebody needs is a host outside Australia that runs a proxy server and accepts SSH connections, terminates PPTP VPNs, accepts IPsec connections, or offers any of a number of other encryption options. The filter cannot see the content of the data stream because it's encrypted, but the user gets free and unimpeded web browsing through the proxy (the second sketch after this list shows quite how simple this is). You can get a web host with SSH access for around $50/month, and groups of people could easily get together to set up proxy hubs. In fact, they're certain to, and few of them will be criminals - most will just want their web browsing to work.
  • Peer-to-peer file transfer clients support encrypted communication. Even if there were effective methods to fingerprint files (such as images) transferred via BitTorrent and the like, those methods are easily foiled. A client may encrypt the data so that only the sender and recipient know what it is, rendering a filter between the two unable to tell the difference between "My Grandmother's 100th Birthday Videos" and kiddie porn (the third sketch after this list shows how little effort that takes). Even if the P2P client didn't support encryption, transferring an encrypted zip file with the password or decryption key in a separate text file would work just as well.
  • Files can be transferred out of order, and cannot be fingerprinted until complete. This means you must either block out-of-order transfers (breaking all sorts of legitimate things that everyday programs like Adobe Reader rely on) or have the filtering system download the whole file before passing any of it on to the recipient. The former isn't acceptable if you claim the filter's impact on legitimate users will be minimal. The latter is a trivial denial of service: request 1kB of every multi-megabyte PDF you can find, forcing the filter to download all of them without costing you any significant local resources (the fourth sketch after this list shows how). Yet this is a perfectly legitimate thing to do, and the filter should not prevent it. A store-and-forward system like this would also massively slow all web browsing, and require mind-boggling amounts of fast storage to operate.
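To illustrate the HTTPS point, here's a minimal sketch (in Python, with a hypothetical hostname) of everything a filter that can't decrypt the traffic gets to see about an HTTPS connection: the destination IP, and whatever reverse DNS happens to say about it.

```python
# First sketch: all a non-decrypting filter learns about an HTTPS connection
# is the destination IP and its (often absent or generic) reverse DNS entry.
# The hostname here is a stand-in, not a real site.
import socket

ip = socket.gethostbyname("somesite.example.com")  # what appears in the packets
try:
    name, _, _ = socket.gethostbyaddr(ip)          # reverse DNS lookup
except socket.herror:
    name = "(no reverse DNS record)"

# The URL path, the page content, and the request itself are all inside the
# encrypted stream; blocking decisions must be made on this alone.
print(ip, name)
```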
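On the tunnelling point, here's how little effort a proxy bypass takes. The sketch below assumes you've started an SSH dynamic forward to a hypothetical offshore host (`ssh -D 1080 fred@offshore.example.com`, which gives you a local SOCKS proxy) and have the Python `requests` library with its SOCKS support installed; none of these names are real services.

```python
# Second sketch: browse through an offshore SSH host acting as a SOCKS proxy.
# Prerequisite (run separately): ssh -D 1080 fred@offshore.example.com
# Requires: pip install requests[socks]
import requests

proxies = {
    "http": "socks5h://127.0.0.1:1080",   # socks5h: DNS is resolved offshore too
    "https": "socks5h://127.0.0.1:1080",
}

# The filter sees only an encrypted SSH session to offshore.example.com.
# The real destination, the request, and the response are all inside the tunnel.
resp = requests.get("https://example.com/", proxies=proxies)
print(resp.status_code)
```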
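And on the P2P point: encrypting a file before it ever touches the network takes a few lines. This sketch uses the Python `cryptography` package's Fernet recipe purely as an example; the filenames are made up, and the key would be shared out of band (email, IRC, a text file, whatever).

```python
# Third sketch: encrypt a file before transfer so a mid-path filter sees
# only opaque ciphertext. Requires: pip install cryptography
from cryptography.fernet import Fernet

key = Fernet.generate_key()           # shared with the recipient out of band
cipher = Fernet(key)

with open("birthday_videos.zip", "rb") as f:      # hypothetical file
    ciphertext = cipher.encrypt(f.read())

with open("birthday_videos.zip.enc", "wb") as f:  # this is what goes on the wire
    f.write(ciphertext)

# Recipient side: Fernet(key).decrypt(ciphertext) recovers the original bytes.
# To the filter, grandma's birthday and anything illegal look identical.
```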
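Finally, the store-and-forward denial of service is just an ordinary HTTP range request. The sketch below asks for the first kilobyte of a (hypothetical) large PDF; a filter that must fingerprint whole files has to fetch all of it, while the client pays almost nothing.

```python
# Fourth sketch: a perfectly legitimate partial download that forces a
# store-and-forward filter to fetch the entire file. URL is a placeholder.
import requests

resp = requests.get(
    "https://example.com/manuals/huge-manual.pdf",
    headers={"Range": "bytes=0-1023"},  # just the first 1 kB, please
)
print(resp.status_code)    # 206 Partial Content if the server supports ranges
print(len(resp.content))   # ~1024 bytes for us; the filter had to take it all
```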
None of the above is difficult. You might not know the technical terms involved, but actually following the instructions is absolutely trivial - along the lines of "visit the website https://10.101.122.144/", "go to https://safeandhappy.com/secretlogin.php", or "click Connect to VPN and enter the address 10.101.122.144 with username fred and password fred".
So, the filtering won't stop any criminals except the very, very stupid. It's been shown time and time again that child porn traffickers - one of the groups brought up repeatedly in arguments in favour of this filter - are quite sophisticated in their use of technology to hide their activities. They use encryption and private, secured web sites heavily, for one thing. As a result, they may not even notice the introduction of filtering. The only ones who will get caught or blocked are the occasional casual sickos searching the open Internet. While disgusting, they are not the dangerous ones, because they're not the ones abusing children to supply their habit.
The same applies to basically any illegal activity. Want to stop people obtaining "terrorist training materials"? You'll stop legitimate researchers too afraid to bypass the filter, the incredibly stupid, and nobody else.
As if its total ineffectiveness weren't bad enough, such filtering schemes introduce a huge, complex system that sits between Australia and the rest of the 'net. There will be an inevitable reliability impact from outages of that system. It will also block content that should not be blocked - even the best systems have false positive rates as high as a couple of percent, or 1 in 50 files/pages. I don't even want to imagine the effect that will have on automatic software updates, XML-RPC, and other non-web-browsing communication that often happens over HTTP. If they try to filter traffic on ports other than 80, it's going to be vastly worse in terms of the amount of software it simply breaks.
The filter is clearly nonsensical from a practical and technical perspective. That's without even getting into the argument about whether it is right.
If there were a system that could effectively catch (blocking is pointless; they'll just use other methods) people trafficking child porn and the like, I'd be all for it - even if it had some significant side effects. There isn't, though, and no technical advance can create one so long as you consider encrypted communication without a government decryption backdoor to be allowable.

Whitelists

Fundamentally, the only sort of filter that actually works is one that disallows all communication not known to be "safe". No encrypted communication may be permitted except to sites known to be entirely legal, so most secure logins and online payments are out. All websites must be checked in depth before being added to the whitelist ... and must be regularly re-checked to make sure they're still compliant. No user-generated-content website can be allowed unless it checks all submissions before posting (a toy sketch of this default-deny logic follows).
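The mechanism itself is trivially simple - it's the consequences that are enormous. Hostnames here are invented for illustration:

```python
# Toy sketch of whitelist filtering: everything is blocked unless a human
# has vetted it. Hostnames are hypothetical.
WHITELIST = {"vetted-bank.example.au", "vetted-news.example.au"}

def allowed(host: str) -> bool:
    return host in WHITELIST   # default deny: unknown means blocked

print(allowed("vetted-bank.example.au"))  # True  - checked and approved
print(allowed("facebook.com"))            # False - user content, never vetted
```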
Even then, the filtering provider needs a huge team of humans checking random samples of requested URLs and viewing random samples of images, etc. An apparently legitimate site can trivially hide a back door into a secure private area where less nice things go on. The site operator might not even know the back door is there, since servers are regularly broken into and defaced, or used to host malware, by malicious third parties.
It's also necessary to fingerprint files and compare them to a list of known prohibited files as another step toward detecting this sort of thing (a naive version is sketched below). This requires store-and-forward proxying (since you can't block a file you've already passed on to the recipient), which as noted above is insanely expensive in storage and processing power.
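A naive fingerprinting check might look like this: hash the file and compare it against a list of known-bad digests. The blacklist entry is obviously a placeholder; the point is the mechanism, and its weakness.

```python
# Naive file fingerprinting: exact-match hashes against a known-bad list.
# Placeholder digest; a real system would load a large database of them.
import hashlib

BANNED_DIGESTS = {"0123456789abcdef" * 4}  # hypothetical SHA-256 hex digests

def fingerprint(path: str) -> str:
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()

# Flipping a single byte in the file yields an entirely different digest,
# so this catches only unmodified copies of already-known files.
print(fingerprint("suspect.bin") in BANNED_DIGESTS)
```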
Despite all that work, malicious parties can still get a lot done between breaking into a server and the break-in being detected. File fingerprinting is of limited use: it's not particularly reliable, and any system without absurdly high false positive rates is easily tricked by small changes to the file.
It's also possible to hide information in images and certain other file types in such a way that it's effectively undetectable but can still be extracted if you know it's there. This is called steganography (a toy example follows). The amount of information that can be hidden in a given image is fairly small, but over time quite a bit can be transferred. This technique can be used to undetectably transfer whatever you want in blog post after blog post of pictures of cats.
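For the curious, here's a deliberately crude least-significant-bit example using the Pillow imaging library. Real steganography tools are far more subtle, but even this toy version produces an image indistinguishable to the eye from the original; the filenames and message are made up.

```python
# Toy LSB steganography: hide message bits in the lowest bit of each pixel's
# red channel. Saved as PNG (lossless) so the hidden bits survive.
from PIL import Image  # pip install Pillow

def hide(in_path: str, out_path: str, message: bytes) -> None:
    img = Image.open(in_path).convert("RGB")
    bits = [(byte >> i) & 1 for byte in message for i in range(8)]
    pixels = list(img.getdata())
    if len(bits) > len(pixels):
        raise ValueError("image too small for this message")
    for i, bit in enumerate(bits):
        r, g, b = pixels[i]
        pixels[i] = ((r & ~1) | bit, g, b)  # overwrite the red channel's low bit
    img.putdata(pixels)
    img.save(out_path, "PNG")

hide("cat.png", "cat_with_secret.png", b"meet at the usual place")
```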
The result of a whitelist scheme (which is not what's proposed, but is the only thing that could work) would be that only a tiny percentage of the Web would be accessible. There would be no email hosted outside Australia. No peer-to-peer file transfers. No access to Facebook, LiveJournal, etc, as untrusted user content can be put up there. No instant messaging until/unless the companies involved worked with the filter providers to enable filtering of messages and file transfers done through their programs. No way to administer servers located outside Australia (no SSH). No way for remote workers to log in to company networks internationally. The list is almost endless ... and even with all these consequences, it'd just make life a bit more difficult for the people you're trying to stop.
Note that you could allow encryption if you forced all encryption users to disclose their decryption keys to the government. However, the computational overhead of testing keys against and decrypting each data stream would be astounding. Furthermore, in some cases you can't tell whether you have the right key until the whole data stream has been transferred, meaning the filter must store the stream without forwarding it to the eventual recipient until it has all been received. The storage performance required would be incredible, and the computational cost of store-and-forward gatewaying all Australian web browsing is barely comprehensible.
