Recently I have seen a lot of interest in a petition on WhiteHouse.gov entitled “Require Porn to Be an Opt-Out Feature of Internet Service Providers Rather than a Standard Feature”. The title makes me chuckle; I doubt any ISP advertises the existence of porn on the Internet as one of their “features”.

This petition has now expired, so you can no longer see its content or vote on it. Just trust me when I say the wording of the petition failed to live up to the standard of the title, and that’s saying something. It is now in the hands of the Federal Government, which actually makes me worry less: if the Federal Government has to spend $600 million on a simple website like HealthCare.gov, I don’t think they can borrow enough money to pull off any reasonable implementation of this petition, if there were one, which there isn’t.

I’m not a fan of pornography on the internet. I wish it weren’t there. However, I am a fan of freedom. I’m also a fan of reality. So I’m going to explain why this petition, and other similar petitions, 1) pose technological problems that are, for all practical purposes, impossible to solve, and 2) are likely to achieve exactly the opposite of what the people proposing them hope to achieve.

To begin, let me try to help you understand why content filtering is so difficult.

Suppose you are reading text on a web page. We are talking about the written word: plain text, no pictures. You read a paragraph, determine that it is pornographic, and as a result you close the page.

Exactly what went into that decision?

Well, it may be that the paragraph you read was very graphic, explicit, leaving nothing at all to the imagination. It is likely that anyone who read that would agree that it is offensive.

But what if it isn’t graphic or explicit? You probably wouldn’t require it to be that extreme to consider it pornographic. Your brain would evaluate all sorts of characteristics of the written content: nuance, metaphor, intent, word choice and word patterns, context, and more. Importantly, your brain would also evaluate how reading it made you feel, what psychological and physiological reactions you had to the content, in order to categorize it as inappropriate or not.

Now imagine trying to draft a set of objective rules that could be handed to a third party to evaluate content before you see it. Perhaps you have a friend (this is hypothetical, remember) who would happily look at all your content before it shows up on your computer, evaluate it objectively according to your rules, and then let you see it, or not, based on the application of those rules. How likely is it that you could come up with rules that would always filter out anything you would find offensive and always let through things you would not? How do you quantify the application and interpretation of context, intent, and nuance in a way that can be objectively applied?

To take this a step further, suppose your hypothetical friend offers to do this for everyone who lives on your street. How likely do you think it would be that you could come up with a set of rules that would filter for everyone on your street in a way they all agreed with? Do you think there would be any content living somewhere in the middle that you find offensive but your neighbor is okay with?

Even if you could complete the nearly impossible task of codifying the rules for evaluating text in a way that meets your standards, how would you be able to do that in a way that meets the standards of everyone in your neighborhood, let alone everyone in your community or your country?

When people talk about having the government filter the internet, or having the government require service providers to filter the internet, this is exactly what they are proposing: That the government, somehow, come up with a set of rules that will correctly identify everything that is pornographic as pornography, never identify anything as pornographic that is not pornography, and will work for everyone in the entire country.

Keep in mind that I chose the easiest example. Pictures, music/spoken word, and video are orders of magnitude harder to solve.

Software that can perform this task is active filtering software: software that evaluates the appropriateness of content as it passes through. Usually when we think of active filtering, we picture software running on our home computer rather than at our service provider, but it is the same idea either way. And here is the problem: software that performs this task well is hypothetical. People are paying a lot of money for active content filtering software which is, honestly, not very good at actually filtering. It will either let in stuff you don’t want, or keep out stuff you do want, or probably both.
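To make that failure mode concrete, here is a minimal sketch, in Python, of the kind of naive keyword matching that sits underneath a lot of filtering software. The word list and sample pages are invented for illustration, not taken from any real product:

```python
# A deliberately naive keyword filter: block a page if any listed
# word appears anywhere in its text.
BLOCKED_WORDS = {"sex", "xxx", "porn"}

def is_blocked(text: str) -> bool:
    """Return True if any blocked word appears as a substring."""
    lowered = text.lower()
    return any(word in lowered for word in BLOCKED_WORDS)

# Overblocking: innocent pages trip the filter.
print(is_blocked("Essex County health services"))    # True ("sex" hides in "Essex")
print(is_blocked("Middlesex University admissions")) # True (another false positive)

# Underblocking: explicit prose that avoids the listed words sails through.
print(is_blocked("An explicit scene described entirely in euphemism"))  # False
```

Real filters are smarter than this, of course, but each added heuristic tends to shift where the mistakes happen rather than eliminate them.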

The human brain is very adept at fuzzy logic, at answering imprecise questions like “Is this paragraph offensive?” Computers are not. Training computers to perform this task, even poorly, requires paying lots of people for countless hours of evaluating the software’s output against their own interpretations. Yes, if you are paying for an active content filter, like Net Nanny, you are paying people to look at pornography every day for their job. Sorry if that upsets you.
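To see why the trade-off is unavoidable, consider a toy example (all scores and labels below are invented): suppose a trained filter assigns each page a fuzzy “offensiveness” score and blocks everything above a threshold. Wherever you set the threshold, you trade blocking innocent pages against admitting offensive ones, and only human reviewers can tell you which mistakes you are making:

```python
# Invented human labels and classifier scores for six pages.
labeled_pages = [
    ("fine", 0.10), ("fine", 0.35), ("fine", 0.55),
    ("offensive", 0.45), ("offensive", 0.60), ("offensive", 0.95),
]

for threshold in (0.3, 0.5, 0.7):
    overblocked = sum(1 for label, score in labeled_pages
                      if label == "fine" and score >= threshold)
    let_through = sum(1 for label, score in labeled_pages
                      if label == "offensive" and score < threshold)
    print(f"threshold {threshold}: {overblocked} fine pages blocked, "
          f"{let_through} offensive pages let through")

# threshold 0.3: 2 fine pages blocked, 0 offensive pages let through
# threshold 0.5: 1 fine pages blocked, 1 offensive pages let through
# threshold 0.7: 0 fine pages blocked, 2 offensive pages let through
```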

Reading this petition, you would think the service providers have really advanced algorithms for correctly identifying and filtering this type of content, algorithms far beyond anything found in consumer-grade filtering software, which for some reason they are keeping to themselves, and that we just need to force them to use them. I hate to tell you, but that just isn’t true. And the petition would have them apply these magical filters to content as it passes through their switches at a rate of thousands of requests per second or more. Sorry, but passing laws to require them to do what is essentially impossible doesn’t make it any more possible.

So that’s why the action proposed by this petition is practically impossible to implement. But isn’t it theoretically possible? Isn’t it worth a try, if it means no more offensive content online?

The answer is No, and I’ll tell you why: Censorship is a knife that cuts both ways.

I am a Christian (specifically a Mormon); being Christian is something I happen to hold in common with many of those I associate with in person or online. This is why many people I know are signing this petition: we don’t like pornography; we think it is evil. We’d like to be able to use the Internet without having to deal with that evil.

It seems like a good idea to pass laws that keep offensive content out by default and let me opt in if I want it. But what is meant by “offensive”?

As a Christian, I believe the Bible to be God’s word meant to guide our path here on earth. However, there are many stories in the Bible that, if read objectively through the lens of filtering offensive content, might be flagged as “offensive” to at least some portion of the population. Certainly, some in the population would claim that any religious content is offensive to them, and would also want that filtered. Don’t believe me? It’s already happened. Ask yourself why we don’t allow prayers in school anymore.

Put simply: you can’t have the government filter “offensive” content without running the risk that something you prize and value online will fall prey to the “offensive” content filter. You might find that, because you wanted pornography filtered from your Internet, you can no longer read the Bible online, it having been filtered due to a story about a king who committed adultery with a beautiful neighbor woman.

It’s a slippery slope, this. It seems like such a good idea. But believe me when I tell you this: It is never a good idea to allow the government to pass laws that force people to choose the right. We do not become better people by being forced to do right. We become better people only by choosing right over wrong, when we have the option to do wrong and we choose to do right anyway.

So where do we end up? You have a few options:

  1. You can choose to avoid technology. Just don’t ever use the Internet. It’s like deciding to not drive a car because you might get in an accident. Seems silly, but there’s a pretty low chance you will die in an automobile accident if you never ride in an automobile.
  2. You can give up and embrace our new evil overlords. Learn to not be offended by pornography!
  3. You can recognize that in order for you to have the freedom to speak freely online, you have to allow others the freedom to speak freely online about things that are deeply offensive to you. You don’t have to listen, and just because they have the right to speak doesn’t mean they have the right to force you to listen. But if you want your freedom, you have to afford them theirs, and sometimes you will hear what they are saying by accident. You will need to know how to handle it when that happens.

Given the choices, I go with #3.

Here’s how we do it in our home:

  • The internet is only accessed in public parts of our home.
  • Our kids know a simple plan of action: If something bad tries to get into the house, you “close the door and tell Mom or Dad.” Even my youngest kids know this rule. It applies to strangers at the door, to bad television shows, and to stuff online.
  • We use a basic, free content filter. We don’t use an active filter that requires me to pay money so that someone has the job of looking at pornography all day. We use OpenDNS. It is free, it applies to every device on my Wi-Fi (not just a single PC), and it is maintained by the community of people who use it, like me. It is not perfect, but I’m pretty happy with it. (For the curious, a rough sketch of how DNS-level filtering works follows this list.)
  • We take the approach that filtering is something we do as a family to keep us all safe. It is not something the parents do to the kids because we don’t trust them. It is something we all do, parents and kids, to the Internet, because we don’t trust the Internet. This puts us all on the same side of the issue and makes our approach a lot more constructive.
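Why does a DNS-based filter cover every device at once? Every phone, tablet, and PC on the network asks the same resolver to translate domain names into addresses, and a filtering resolver simply answers blocked domains with the address of a block page instead. Here is a minimal sketch of the idea in Python; the domain list and block-page address are placeholders I invented, not OpenDNS’s actual data:

```python
import socket

# Placeholder blocklist and block-page address, invented for this sketch.
# A real service like OpenDNS maintains category lists curated and
# reported by its community of users.
BLOCKED_DOMAINS = {"adult-site.example", "another-bad.example"}
BLOCK_PAGE_IP = "192.0.2.1"  # reserved documentation address standing in for a block page

def resolve(domain: str) -> str:
    """Answer a DNS lookup, redirecting blocked domains to the block page."""
    if domain in BLOCKED_DOMAINS:
        return BLOCK_PAGE_IP  # the browser lands on a "this site is blocked" page
    return socket.gethostbyname(domain)  # otherwise resolve normally

print(resolve("adult-site.example"))  # 192.0.2.1 (block page)
print(resolve("example.com"))         # the site's real address
```

Because the router hands the filtering resolver’s address to every device that joins the network, nothing needs to be installed on the devices themselves.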