How should the internet be moderated?

Cloudflare recently decided to stop protecting the reprehensible 8chan forum after 8chan was seen to be actively supporting many of the far-right, white-supremacist shootings we’ve seen recently in the US.

Cloudflare doesn’t publish any content itself; it simply provides protection for content published elsewhere by ensuring said content can’t be taken offline by DDoS (Distributed Denial of Service) attacks.

As soon as Cloudflare’s protection was withdrawn, hackers launched a DDoS attack against 8chan that took it offline, and it remains offline as I write.

As far as I can tell, no particular law compelled Cloudflare to make the decision to take 8chan offline. It would not be considered a publisher of content, to which some related laws would apply, but rather a distributor of content, to which fewer (if any) laws apply.

I can see the distinction between a publisher and a distributor. You wouldn’t expect a newsagent to be responsible for the content of the newspapers they distribute.

Cloudflare used its discretionary powers as a private company — able to choose to whom it provides services — to end its arrangement with 8chan. Cloudflare did not make this decision lightly, but I think it was the correct one in the circumstances.

But 8chan only remains offline due to the persistent DDoS attacks it’s suffering from hackers, which is itself an illegal activity. It’s hardly an ideal form of censorship, and I suspect 8chan will eventually find another, suitably protected home anyway.

This all raises the issue of how we effectively go about dealing with censorship on the internet. I can’t imagine anything would work in the global sense because we can’t force laws on Russia or China, for example, but maybe Western democracies can come up with some sort of consensus.

It has to be done carefully, of course, because there’s often a fine line between free speech and hate speech.

Ben Thompson has an interesting and well-thought-out take on this.