Reading Zeynep Tufekci’s Twitter and Tear Gas has made me paranoid all over again. After reading Algorithms of Oppression, I switched to DuckDuckGo as my primary search engine on my MacBook (I already rely on Firefox, using Safari only when Firefox won’t work, which happens on a few sites).
Now, I’m generally not a paranoid person. I often sit on my sofa with the curtain over the sliding glass door open so I can see the yard or the dog (or dogs, now!), but my husband always closes it. “The neighbors behind us can see in here,” he tells me. Well, they could. But I’m normally confident enough to assume they have better things to do than watch me lie on the sofa reading a book, watching television, or using my MacBook. Or eating. I guess they could be tracking me eating…. That curtain is staying closed. It wouldn’t be difficult to track how many trips I make across the house from that angle, since this is the corner of the house.
But then I read the section on surveillance (there are so many things to talk about in this book!). “Activists are often targeted for surveillance. Movements grow by expanding their networks, and when it comes to surveillance, networks are as weak as their weakest point” (252), Tufekci writes. And on the surface, it’s easy to think, “Well, I’m not mounting a coup against a foreign government, so I’m fine.”
Who decides what constitutes a threat to the government? I frequently post online about my opposition to racism and white supremacy. Ten years ago, this seemed like an obvious position to take. But given that I am directly criticizing the positions of people in power, could that be seen by those people in power as a threat? And then that voice in your head says, “There are a billion people online at least. How would they find me?”
And then you start to realize the data has already been harvested; someone just has to look for it. And I started thinking: who, other than the federal government, would want to have me under surveillance?
And then I saw this Tweet from Tufekci, and now I have even more thoughts:
Tufekci is linking to this article: https://www.washingtonpost.com/technology/2018/11/16/wanted-perfect-babysitter-must-pass-ai-scan-respect-attitude/?utm_term=.a1c5a8b9ccee
And the idea that, say, a school district or a parent could have your data scraped, could take these steps to analyze teachers, really worries me. Because while I use my real name on Twitter (@JenAnsbach) and live my life there as a public space, I’m often having discussions that are arguments in the sense of the word that means, “I’m open to persuasion; let’s talk.” (I tend to save my more strident comments for DMs to neutral third parties to keep myself out of trouble.) So what might they find as part of my profile, and what could they tell about where I was resisting and where I was agreeing?
I’m thinking here of all the education protests I’m involved in: standing against arming teachers; asserting students’ right to protest (because teachers and administrators are dictating the terms of protest more and more, and we have raised students to believe that what the adults say is gospel, so if the adults say you can’t protest, you can’t; we’ve completely subverted students’ power of protest and engage them only when we like what they are standing up for); fighting to diversify teaching staffs and decolonize curricula; advocating against testing culture and in favor of student-driven, choice-based models that allow flexible learning; taking a stand for LGBTQ staff and students and working to make LGBTQ people visible in the curriculum…. These are all issues that many people could take issue with me standing up for. On pages 68–69, Tufekci discusses the logistics and planning involved in the civil rights March on Washington, the hard work of organizing, and the roles people played in working with each other. How do today’s student protests, which are networked even when the students come from the same school, get this work done? How do they connect to other schools, and how can we help students do this in ways that their Facebook and Google dossiers won’t become weapons against them later in life?
I don’t have the answers to these questions.
Last year, I bought a Petzi.
My husband refused to let me install it, insisting that it would allow people to watch us inside our home. We don’t have an Alexa or a Google Home or anything else, although Siri is always listening on multiple devices. We have sticky notes over the cameras on our MacBooks. But I really wanted to check on Neville Dogbottom while I was at work, so I reached out to a friend who is an expert in cybersecurity. His answer: if they are using the encryption software correctly, the way it is intended to be installed, there is no way someone could view the feed. And I thought, “Hooray!” Then his second Tweet appeared, telling me there is no way to confirm that the Petzi people have installed the encryption the “correct” way. And I thought about how my students have no expectation of privacy anymore, nor do they think it is something to be concerned about. Tufekci has me leaving the Petzi in the box a little longer, but I do worry that one day I’ll install it so I can see Neville and our new best friend (she arrived the day before Thanksgiving!), Luna Lovedog.
[Page numbers refer to Zeynep Tufekci, Twitter and Tear Gas, Yale University Press, 2017.]