i dream of being possible

the burden of privacy

I feel like I’ve read a lot of articles about privacy and the internet in the past week. Most recently, “Everything is Broken” by Quinn Norton. I should mention that this post isn’t actually about that article, since it largely discusses software and how it is all crappy and insecure. I also read something about libraries and using Firefox (with ghostery/disconnect, https everywhere, etc).

And this is sort of following Facebook’s recent change of heart concerning privacy settings. Namely, instead of your FB posts being public by default, they’ll be private to your friends. Privacy will be the new default. Of course, this is only a certain kind of privacy given that nothing you do will be private from Facebook.

The thing is, though, that pretty much everyone approaches the web as if it is public by default. And that all web spaces are public by default. Importantly, it should be noted that this is a belief that is nurtured and maintained by large tech companies, web advocates, and the like, largely based around the discourse of the ‘open’ web.

That this attitude and perspective has a structural impact on the web is pretty obvious from large companies like FB. This notion that, if you’re on the up and up, nothing you do should need to be ‘private.’ Or Twitter, where unless you mark your account as ‘private’ everything you write is ‘open’ to everyone.

What is aggravating about most (all) articles on privacy and security is that they all squarely place the responsibility for these things on individuals. Every single individual is supposed to (probably by magic) be able to protect their data/selves from large multi-million dollar government agencies, tech companies, etc. If we take what Quinn Norton writes in her article to heart, this is simply impossible because there is no such thing as ‘secure’ software.

One of the most interesting things to come out of the whole Snowden/NSA thing is the deafening silence of all the non-protests happening against the NSA. Yes, a bunch of tech companies were named and shamed for cooperating with them1. Yes, a bunch of other tech companies wagged their fingers at the NSA. And, of course, a bunch of the articles/posts/tweets/etc that came out of this in the tech community had a common thread of “here is what you can do to protect your privacy against state surveillance” instead of “wow, maybe we should really get the government to stop spying on its citizens.”

Or even deeper than that: not understanding that this NSA business is literally nothing new. The fundamental premise for the outrage over this is the idea that the state has begun to fulfill George Orwell’s prophecy of the all seeing eye. The strange thing about all of this is that Orwell’s novel has such a great impact because it is a cautionary tale. Orwell took certain trends and elements of the society that existed during his day (and today) and took them to their extreme (but logical) conclusion.

The state begins collecting data on us from the day we are born (see birth certificates) and this goes on until the day we die. Importantly, the state always gathers as much data as it can get away with, rather than only the data actually needed for its operations2. That the state isn’t necessarily concerned about the overall well being of the population is also revealed in the contentious and troubled history of who, exactly, is a ‘citizen.’3 This very assertion of authority by a settler state necessarily means that all people within its borders must be accounted for and monitored. This is the real threat of undocumented people in a place like the US: they subvert and challenge one of the critical powers of the state and, in a very basic way, cannot be ‘observed’ and ‘known’ by the state. And if they cannot be ‘known’ then they cannot be controlled.

And before anyone gets on my case, I’m not a libertarian arguing for a minimal state that leaves people almost entirely alone. It really depends on the state. Settler states like Canada and the US have always been grounded in violence and oppression. So while it is possible for a benevolent state to exist, it will never be Canada or the US.

In fact, what I’m angling at here is the libertarian/zero regulation approach that most ‘open’ web advocates, tech companies, and privacy/security experts seem to want. And how this defaults into an understanding that everyone is public by default and extra steps must be taken if you want privacy. And that this is entirely up to the individual, which, in a roundabout way, is also victim blaming: “Oh, your FB feed was hacked, LOL, pick a better password!” “Oh, your credit card got spoofed? LOL, use cash loser”. The responsibility for bad behaviour like malicious hacking, NSA surveillance, etc is on the shoulders of the people doing it. This is why I wish there were more articles telling malicious hackers TO STOP DOING IT. More articles focusing on how to dismantle settler states whose very existence is grounded in violence and oppression.

Instead, we see article after article about how to use PGP or encrypt this or that. We get told we need to pay money for a password manager (like lastpass or whatever). We see very privileged white, het cis men worrying about the NSA or Google or whatever reading their emails and chat logs (despite being the least likely targets of negative state surveillance). I’m pretty sure I haven’t seen a single tech person who whined about the NSA or net neutrality say a single solitary word about, say, stop and frisk in NYC and how this constant surveillance of Black and/or Latin@ people by the police is, you know, a bad thing. Or how banks (for corporate surveillance) frequently target Black families for subprime mortgages and other predatory loans.

Don’t get me wrong, though, academics aren’t any better about this sort of thing. Part of the cause of the above is the default assumption that everything is ‘public’ by default. If you are a Black man walking around NYC and you get stopped and frisked? Well, you should have been at home or maybe try not existing. These are obvious, visible, and high-profile ways that this ‘public by default’ attitude manifests.

But it comes out in subtle ways when academics and/or journalists engage with the work of marginalized people on the web. Since obviously, everything on the web is public by default. Combined with the idea that many people have that any marginalized person talking about our lives automatically equals a public, political, social justice-esque expression. And that these can be consumed for research articles, dissertations, newspaper articles, whatever without necessarily gaining consent or, in quite a few cases, even crediting the creator and, the worst, outright stealing ideas and words.

The web is strange like this. Because of the nebulous place that blogs/social media still have with regards to publishing and the like, they are considered both inferior and informal, and thus do not require the same consideration as a traditionally published article/book/etc. And so this ‘public by default’ replicates the situation in the ‘real’ world whereby existing as a marginalized person in a ‘public’ space necessarily means becoming an object for exploitation, surveillance, and oppression. You become an object that lacks any coherent or substantive agency and, thus, your consent to be a research subject for someone’s dissertation simply isn’t necessary.

Because if you wanted privacy (and a sense of personal security), you should have stayed at home. You shouldn’t have logged in and blogged that post. Or tweeted that hashtag. Even more so, if you wanted to log in and still have privacy/security, well, why didn’t you jump through the 5 billion hoops necessary to encrypt all the things?

And then we get to the double bind: if you did avail yourself of one of the ways to gain privacy/security (a common one with marginalized people because it is easy and doesn’t require a lot of technical knowhow) by using a social network like Tumblr or Twitter where you could pick a pseudonym, how can you expect anyone to credit/cite you in their paper? I mean, ‘satifice’ isn’t a real person/name/entity and so cannot have any agency or, heck, copyrights.

You can never win.

Why?

Because it isn’t about what we do (or don’t do) as individuals to protect our privacy/security. If you are attacked or your privacy violated, the fault entirely lies with the person/state/organization who did it. I want to see more analyses on the ethics of online behaviour. Especially from academics who exploit the fact that slow moving regulatory bodies haven’t really adapted to this new environment (or who purposefully choose to exploit this gap themselves4), who have uncritically accepted this notion of ‘public by default’ and simply do not care about the harm they may be perpetuating.

Instead, I know I can look forward to yet another article about how ‘kids’ don’t really understand privacy/security. But no articles about how this ‘adult’ notion of privacy is itself a shitty and oppressive way to conceptualize privacy.

  1. Although why anyone would be surprised by this is beyond me. 

  2. Seriously, recording the ‘sex’ of an infant is a useless data point nowadays and completely unnecessary, in addition to being a mechanism of violence. 

  3. In the US, Indigenous peoples only became ‘official’ citizens in 1924 (Indigenous citizenship in both Canada and the US is really complicated). But even this ‘citizenship’ didn’t necessarily include the right to vote. 

  4. Look, I’m not being hyperbolic or wearing a tinfoil hat here. I know academics who have been told by their ethics boards that they don’t, in actual fact, need informed consent when using digital research subjects. Heck, I know one academic who, because they are themselves a marginalized person, wanted to get this consent but was explicitly told not to bother because it would slow everything down. But of course, no one is listening to the marginalized people who’ve explicitly stated that this violates their consent and that they do not wish to be the subjects of anyone’s research without their explicit permission.