This talk was delivered at the University of Richmond. The full slide deck can be found here.
Thank you very much for inviting me to speak here at the University of Richmond – particularly to Ryan Brazell for recognizing my work and the urgency of the conversations that hopefully my visit here will stimulate.
Hopefully. Funny word that – “hope.” Funny, those four letters used so iconically to describe a Presidential campaign from a young Illinois Senator, a campaign that seems now lifetimes ago. Hope.
My talks – and I guess I’ll warn you in advance if you aren’t familiar with my work – are not known for being full of hope. Or rather I’ve never believed the hype that we should put all our faith in, rest all our hope on technology. But I’ve never been hopeless. I’ve never believed humans are powerless. I’ve never believed we could not act or we could not do better.
There were a couple of days, following our decision about the title and topic of this keynote – “Ed-Tech in a Time of Trump” – when I wondered if we’d even see a Trump presidency. Would some revelation about his business dealings, his relationship with Russia, his disdain for the Constitution prevent his inauguration? We should have been so lucky, I suppose. Hope.
The thing is, I’d still be giving much the same talk, just with a different title. “A Time of Trump” could be “A Time of Neoliberalism” or “A Time of Libertarianism” or “A Time of Algorithmic Discrimination” or “A Time of Economic Precarity.” All of this – from President Trump to the so-called “new economy” – has been fueled to some extent by digital technologies; and that fuel, despite what I think many who work in and around education technology have long believed – have long hoped – is not necessarily (heck, even remotely) progressive.
I’ve had a sinking feeling in my stomach about the future of education technology since long before Americans – 26% of them, at least – selected Donald Trump as our next President. I am, after all, “ed-tech’s Cassandra.” But President Trump has brought to the forefront many of the concerns I’ve tried to share about the politics and the practices of digital technologies. I want to state here at the outset of this talk: we should be thinking about these things no matter who is in the White House, no matter who runs the Department of Education (no matter whether we have a federal department of education or not). We should be thinking about these things no matter who heads our university. We should be asking – always and again and again: just what sort of future is this technological future of education that we are told we must embrace?
Of course, the future of education is always tied to its past, to the history of education. The future of technology is inexorably tied to its own history as well. This means that despite all the rhetoric about “disruption” and “innovation,” what we find in technology is a layering onto older ideas and practices and models and systems. The networks of canals, for example, were built along rivers. Railroads followed the canals. The telegraph followed the railroad. The telephone, the telegraph. The Internet, the telephone and the television. The Internet is largely built upon a technological infrastructure first mapped and built for freight. It’s no surprise the Internet views us as objects, as products, our personal data as a commodity.
When I use the word “technology,” I draw from the work of physicist Ursula Franklin who spoke of technology as a practice: “Technology is not the sum of the artifacts, of the wheels and gears, of the rails and electronic transmitters,” she wrote. “Technology is a system. It entails far more than its individual material components. Technology involves organization, procedures, symbols, new words, equations, and, most of all, a mindset.” “Technology also needs to be examined as an agent of power and control,” Franklin insisted, and her work highlighted “how much modern technology drew from the prepared soil of the structures of traditional institutions, such as the church and the military.”
I’m going to largely sidestep a discussion of the church today, although I think there’s plenty we could say about faith and ritual and obeisance and technological evangelism. That’s a topic for another keynote perhaps. And I won’t dwell too much on the military either – how military industrial complexes point us towards technological industrial complexes (and to ed-tech industrial complexes in turn). But computing technologies undeniably carry with them the legacy of their military origins. Command. Control. Communication. Intelligence.
As Donna Haraway argues in her famous “Cyborg Manifesto,” “Feminist cyborg stories have the task of recoding communication and intelligence to subvert command and control.” I want those of us working in and with education technologies to ask if that is the task we’ve actually undertaken. Are our technologies or our stories about technologies feminist? If so, when? If so, how? Do our technologies or our stories work in the interest of justice and equity? Or, rather, have we adopted technologies for teaching and learning that are much more aligned with that military mission of command and control? The mission of the military. The mission of the church. The mission of the university.
I do think that some might hear Haraway’s framing – a call to “recode communication and intelligence” – and insist that that’s exactly what education technologies do, and that they do so in ways that progressively reshape traditional education institutions and practices. Education technologies facilitate communication, expanding learning networks beyond the classroom. And they boost intelligence – namely, how knowledge is created and shared.
Perhaps they do.
But do our ed-tech practices ever actually recode or subvert command and control? Do (or how do) our digital communication practices differ from those designed by the military? And most importantly, I’d say, does (or how does) our notion of intelligence?
“Intelligence” – this is the one to watch and listen for. (Yes, it’s ironic that “ed-tech in a time of Trump” will be all about intelligence, but hear me out.)
“Intelligence” means understanding, intellect, mental faculty. Testing intelligence, as Stephen Jay Gould and others have argued, has a long history of ranking and racism. The word “intelligence” is also used, of course, to describe the gathering and assessment of tactical information – information, often confidential information, with political or military value. The history of computing emerges from cryptography, tracking and cracking state secrets. And the word “intelligence” is now used – oh so casually – to describe so-called “thinking machines”: algorithms, robots, AI.
It’s probably obvious – particularly when we think of the latter – that our notions of “intelligence” are deeply intertwined with technologies. “Computers will make us smarter” – you know those assertions. But we’ve long used machines to measure and assess “intelligence” and to monitor and surveil for the sake of “intelligence.” And again, let’s recall that Franklin’s definition of technology includes not just hardware or software, but ideas, practices, models, and systems.
One of the “hot new trends” in education technology is “learning analytics” – this idea that if you collect enough data about students, you can analyze it and in turn algorithmically direct students towards more efficient and productive behaviors, and institutions towards more efficient and productive outcomes. Command. Control. Intelligence.
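To make concrete how thin this sort of “intelligence” often is, here is a minimal sketch in Python of what an “at-risk” scoring routine in a learning analytics system amounts to. Every field name, weight, and threshold below is invented for illustration; this is drawn from no particular product.

```python
# A hypothetical sketch of learning-analytics "risk" scoring.
# All field names, weights, and thresholds here are invented for illustration.
from dataclasses import dataclass


@dataclass
class StudentRecord:
    student_id: str
    logins_last_week: int
    minutes_on_task: int
    current_grade: float       # 0.0 to 100.0
    missed_assignments: int


def risk_score(s: StudentRecord) -> float:
    """Reduce a student to a single number from a handful of behavioral signals."""
    score = 2.0 * s.missed_assignments
    if s.logins_last_week < 3:
        score += 1.0
    if s.minutes_on_task < 60:
        score += 1.0
    if s.current_grade < 70.0:
        score += (70.0 - s.current_grade) / 10.0
    return score


def flag_at_risk(students: list[StudentRecord], threshold: float = 3.0) -> list[str]:
    """Return the IDs of students whose score crosses an arbitrary threshold."""
    return [s.student_id for s in students if risk_score(s) >= threshold]


roster = [
    StudentRecord("s001", logins_last_week=5, minutes_on_task=240,
                  current_grade=88.0, missed_assignments=0),
    StudentRecord("s002", logins_last_week=1, minutes_on_task=30,
                  current_grade=61.0, missed_assignments=2),
]
print(flag_at_risk(roster))    # ['s002']
```

However the weights get tuned, the shape of the system is the same: collect, score, flag, intervene.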
And I confess, it’s that phrase “collect enough data about students” that has me gravely concerned about “ed-tech in a time of Trump.” I’m concerned, in no small part, because students are often unaware of the amount of data that schools and the software companies they contract with know about them. I’m concerned because students are compelled to use software in educational settings. You can’t opt out of the learning management system. You can’t opt out of the student information system. You can’t opt out of required digital textbooks or digital assignments or digital assessments. You can’t opt out of the billing system or the financial aid system. You can’t opt out of having your cafeteria purchases, Internet usage, dorm room access, fitness center habits tracked. Your data as a student is scattered across multiple applications and multiple databases, most of which I’d wager are not owned or managed by the school itself but rather outsourced to a third-party provider.
School software (and I’m including K–12 software here alongside higher ed) knows your name, your birth date, your mailing address, your home address, your race or ethnicity, your gender (I should note here that many education technologies still require “male” or “female” and do not allow for alternate gender expressions). It knows your marital status. It knows your student identification number (it might know your Social Security Number). It has a photo of you, so it knows your face. It knows the town and state in which you were born. Your immigration status. Your first language and whether or not that first language is English. It knows your parents’ language at home. It knows your income status – that is, at the K–12 level, if you qualify for a free or reduced-price lunch, and at the higher ed level, if you qualify for a Pell Grant. It knows if you are a member of a military family. It knows if you have any special education needs. It knows if you were identified as “gifted and talented.” It knows if you graduated high school or passed a high school equivalency exam. It knows your attendance history – how often you miss class as well as which schools you’ve previously attended. It knows your behavioral history. It knows your criminal history. It knows your participation in sports or other extracurricular activities. It knows your grade level. It knows your major. It knows the courses you’ve taken and the grades you’ve earned. It knows your standardized test scores.
Obviously it’s not a new practice to track much of that data, and as such these practices are not dependent entirely on new technologies. There are various legal and policy mandates that have demanded for some time now that schools collect this information. Now we put it in “the cloud” rather than in a manila folder in a locked file cabinet. Now we outsource this to software vendors, many of whom promise that, in this era of “big data,” we should collect even more information about students – all their clicks and their time spent “on task,” perhaps even their biometric data and their location in real time – so as to glean more and better insights. Insights that the vendors will then sell back to the school.
Big data.
Command. Control. Intelligence.
This is the part of the talk, I reckon, when someone who speaks about the dangers and drawbacks of “big data” turns the focus to information security and privacy. No doubt schools are incredibly vulnerable on the former front. Since 2005, US universities have been the victims of almost 550 data breaches involving nearly 13 million known records. We typically think of these hacks as going after Social Security Numbers or credit card information or something that’s of value on the black market.
The risk isn’t only hacking. It’s also the rather thoughtless practices of information collection, information sharing, and information storage. Many software companies claim that the data that’s in their systems is their data. It’s questionable if much of this data – particularly metadata – is covered by FERPA. As such, student data can be sold and shared, particularly when the contracts signed with a school do not prevent a software company from doing so. Moreover, these contracts often do not specify how long student data can be kept.
In this current political climate – ed-tech in a time of Trump – I think universities need to realize that there’s a lot more at stake than just financially motivated cybercrime. Think Wikileaks’ role in the Presidential election, for example. Now think about what would happen if the contents of your email account were released to the public. President Trump has made it a little bit easier, perhaps, to come up with “worst-case scenarios” when it comes to politically targeted hacks, and we might be able to imagine these in light of all the data that higher ed institutions have about students (and faculty).
Again, the risk isn’t only hacking. It’s amassing data in the first place. It’s profiling. It’s tracking. It’s surveilling. It’s identifying “students at risk” and students who are “risks.”
Several years ago – actually, it’s been five or six or seven now – when I was first working as a freelance tech journalist, I interviewed an author about a book he’d written on big data and privacy. He made one of those casual remarks that you hear quite often from people who work in computing technologies: privacy is dead. He’d given up on the idea that privacy was possible or perhaps even desirable; what he wanted instead was transparency – that is, to know who has your data, what data, what they do with it, who they share it with, how long they keep it, and so on. You can’t really protect your data from being “out there,” he argued, but you should be able to keep an eye on where “out there” it exists.
This particular author reminded me that we’ve counted and tracked and profiled people for decades and decades and decades and decades. In some ways, that’s the project of the Census – first conducted in the United States in 1790. It’s certainly the project of much of the data collection that happens at school. And we’ve undertaken these practices since well before there was “big data” or computers to collect and crunch it. Then he made a comment that, even at the time, I found deeply upsetting. “Just as long as we don’t see a return of Nazism,” he joked, “we’ll be okay. Because it’s pretty easy to know if you’re a Jew. You don’t have to tell Facebook. Facebook knows.”
We can substitute other identities there. It’s easy to know if you’re Muslim. It’s easy to know if you’re queer. It’s easy to know if you’re pregnant. It’s easy to know if you’re Black or Latino or if your parents are Syrian or French. It’s easy to know your political affinities. And you needn’t have given over that data, you needn’t have “checked those boxes” in your student information system in order for the software to develop a fairly sophisticated profile about you.
This is a punch card, a paper-based method of proto-programming, one of the earliest ways in which machines could be automated. It’s a relic, a piece of “old tech,” if you will, but it’s also a political symbol. Think draft cards. Think the slogan “Do not fold, spindle or mutilate.” Think Mario Savio on the steps of Sproul Hall at UC Berkeley in 1964, insisting angrily that students not be viewed as raw materials in the university machine.
The first punch cards were developed to control the loom, industrializing – beginning around 1725 – a weaving craft long performed by women. The earliest design – a paper tape with holes punched in it – was improved upon until the turn of the 19th century, when Joseph Marie Jacquard first demonstrated a mechanism to automate loom operation.
Jacquard’s invention inspired Charles Babbage, often credited with originating the idea of a programmable computer. A mathematician, Babbage believed that “number cards,” “pierced with certain holes,” could operate the Analytical Engine, his plans for a computational device. “We may say most aptly that the Analytical Engine weaves algebraical patterns just as the Jacquard-loom weaves flowers and leaves,” Ada Lovelace, Babbage’s translator and the first computer programmer, wrote.
But it was Herman Hollerith who invented the recording of data on this medium so that it could then be read by a machine. Earlier punch cards – like those designed by Jacquard – were used to control the machine. They weren’t used to store data. But Hollerith did just that. The first Hollerith card had 12 rows and 9 columns, and data was recorded by the presence or absence of a hole at a specific location on a card.
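To make that mechanism concrete, here is a toy model – a hypothetical sketch in Python, using the 12-row, 9-column grid just described; the meanings assigned to rows and columns are invented for illustration – of how data lives on such a card and how a stack of cards gets sorted by machine:

```python
# A toy model of a Hollerith-style card: data is nothing more than the
# presence or absence of a hole at a given (row, column) position.
# The meanings assigned to rows and columns below are invented.
ROWS, COLS = 12, 9

def blank_card() -> list[list[bool]]:
    """A card with no holes punched."""
    return [[False] * COLS for _ in range(ROWS)]

def punch(card: list[list[bool]], row: int, col: int) -> None:
    """Record one fact by punching a hole at a specific position."""
    card[row][col] = True

def punched_rows(card: list[list[bool]], col: int) -> list[int]:
    """Reading a field is just checking which rows in a column are punched."""
    return [row for row in range(ROWS) if card[row][col]]

# Suppose, hypothetically, column 0 encodes marital status and column 1
# encodes an occupation category.
card = blank_card()
punch(card, 2, 0)    # say, "married" coded as row 2 of column 0
punch(card, 5, 1)    # say, occupation category 5 in column 1

# Tabulating a stack of cards is then just counting holes in a column:
stack = [card]
married = [c for c in stack if 2 in punched_rows(c, 0)]
print(len(married))  # 1
```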
Hollerith founded The Tabulating Machine Company in 1896, one of four companies consolidated in 1911 to form the Computing-Tabulating-Recording Company, later renamed the International Business Machines Corporation. IBM.
Hollerith’s punch card technology was first used in the US Census in 1890 to record individuals’ traits – their gender, race, nationality, occupation, age, marital status. These cards could then be efficiently sorted to quantify the nation. Census officials were thrilled: it had taken almost a decade to tabulate the results of the 1880 census, and by using the new technology, the agency saved $5 million.
Hollerith’s machines were also used by Nicholas II, the czar of Russia, for the first (and only) census of the Russian Empire in 1897. And they were adopted by Hitler’s regime in Germany. As Edwin Black chronicles in his book IBM and the Holocaust,
When Hitler came to power, a central Nazi goal was to identify and destroy Germany’s 600,000-member Jewish community. To Nazis, Jews were not just those who practiced Judaism, but those of Jewish blood, regardless of their assimilation, intermarriage, religious activity, or even conversion to Christianity. Only after Jews were identified could they be targeted for asset confiscation, ghettoization, deportation, and ultimately extermination. To search generations of communal, church, and governmental records all across Germany – and later throughout Europe – was a cross-indexing task so monumental, it called for a computer. But in 1933, no computer existed.
What did exist at the time was the punch card and the IBM machine, sold to the Nazi government by the company’s German subsidiary, Dehomag.
Hitler’s regime made it clear from the outset that it was not interested in merely identifying those Jews who claimed religious affiliation, who said that they were Jewish. It wanted to be able to find those who had Jewish ancestry, Jewish “blood,” those who were not Aryan.
Hitler called for a census in 1933, and Germans filled out the census forms with pen and paper – one form per household. There was a census again in 1939, and as the Third Reich expanded, so did the Nazi compulsion for data collection. Census forms were coded and punched by hand and then sorted and counted by machine. IBM punch cards and IBM machines. During its relationship with the Nazi regime – one lasting throughout Hitler’s rule, throughout World War II – IBM derived about a third of its profits from selling punch cards.
Column 22 on the punch card was for religion – punched at hole 1 to indicate Protestant, hole 2 for Catholic, hole 3 for Jew. The Jewish cards were processed separately. The cards were sorted and indexed and filtered by profession, national origin, address, and other traits. The information was correlated with other data – community lists, land registers, medical information – in order to create a database, “a profession-by-profession, city-by-city, and indeed a block-by-block revelation of the Jewish presence.”
It was a database of inference, relying heavily on statistics alongside those IBM machines. This wasn’t just about those who’d “ticked the box” that they were Jewish. Nazi “race science” believed it could identify Jews by collecting and analyzing as much data as possible about the population. “The solution is that every interesting feature of a statistical nature … can be summarized … by one basic factor,” the Reich Statistical Office boasted. “This basic factor is the Hollerith punch card.”
Command. Control. Intelligence.
The punch card and the mechanized processing of its data were used to identify Jews, as well as Roma and other “undesirables,” so they could be imprisoned, so their businesses and homes could be confiscated, so their possessions could be inventoried and sold. The punch card and the mechanized processing of its data were used to determine which “undesirables” should be sterilized, to track the shipment of prisoners to the death camps, and to keep tabs on those imprisoned and sentenced to die therein. All of this recorded on IBM punch cards. IBM machines.
The CEO of IBM at this time, by the way: Thomas Watson. Yes, this is the Watson after whom IBM named its “artificial intelligence” product. IBM Watson, which has partnered with Pearson and with Sesame Street to “personalize learning” through data collection and data analytics.
Now a quick aside, since I’ve mentioned Nazis.
Back in 1990, in the early days of the commercialized Internet, those heady days of Usenet newsgroup discussion boards, attorney Mike Godwin “set out on a project in memetic engineering.” Godwin felt as though comparisons to Nazis occurred too frequently in online discussions. He believed that accusations that someone or some idea was “Hitler-like” were thrown about too carelessly. “Godwin’s Law,” as it came to be known, says that “As an online discussion grows longer, the probability of a comparison involving Hitler approaches 1.” Godwin’s Law has since been invoked to decree that once someone mentions Hitler or Nazis, that person has lost the debate altogether. Pointing out Nazism online is off-limits.
Perhaps we can start to see now how dangerous, how damaging to critical discourse this even rather casual edict has been.
Let us remember the words of Supreme Court Justice Robert Jackson in his opening statement for the prosecution at the Nuremberg Trials:
What makes this inquest significant is that these prisoners represent sinister influences that will lurk in the world long after their bodies have returned to dust. We will show them to be living symbols of racial hatreds, of terrorism and violence, and of the arrogance and cruelty of power. … Civilization can afford no compromise with the social forces which would gain renewed strength if we deal ambiguously or indecisively with the men in whom those forces now precariously survive.
We need to identify and we need to confront the ideas and the practices that are the lingering legacies of Nazism and fascism. We need to identify and we need to confront them in our technologies. Yes, in our education technologies. Remember: our technologies are ideas; they are practices. Now is the time for an ed-tech antifa, and I cannot believe I have to say that out loud to you.
And so you hear a lot of folks in recent months say “read Hannah Arendt.” And I don’t disagree. Read Arendt. Read The Origins of Totalitarianism. Read her reporting on the Eichmann trial.
But also read James Baldwin. Also realize that this politics and practice of surveillance and genocide isn’t just something we can pin on Nazi Germany. It’s actually deeply embedded in the American experience. It is part of this country as a technology.
Let’s think about that first US census, back in 1790, when federal marshals asked for the name of each head of household as well as the numbers of household members who were free white males over age 16, free white males under 16, free white females, other free persons, and slaves. In 1820, the categories were free white males, free white females, free colored males and females, and slaves. In 1850, the categories were white, Black, Mulatto, Black slaves, Mulatto slaves. In 1860, white, Black, Mulatto, Black slaves, Mulatto slaves, Indian. In 1870, white, Black, Mulatto, Indian, Chinese. In 1890, white, Black, Mulatto, Quadroon, Octoroon, Indian, Chinese, Japanese. In 1930, white, Negro, Indian, Chinese, Japanese, Filipino, Korean, Hindu, Mexican.
You might see in these changing categories a changing demographic; or you might see this as the construction and institutionalization of categories of race – particularly race set apart from a whiteness of unspecified national origin, particularly race that the governing ideology and governing system wants identified and wants managed. The construction of Blackness. “Census enumeration is a means through which a state manages its residents by way of formalized categories that fix individuals within a certain time and a particular space,” as Simone Browne writes in her book Dark Matters: On the Surveillance of Blackness, “making the census a technology that renders a population legible in racializing as well as gendering ways.” It is “a technology of disciplinary power that classifies, examines, and quantifies populations.”
Command. Control. Intelligence.
Does the data collection and data analysis undertaken by schools work in a similar way? How does the data collection and data analysis undertaken by schools work? What bodies and beliefs are constituted therein? Are whiteness and maleness always there as “the norm” against which all others are compared? Are we then constructing and even naturalizing certain bodies and certain minds as “undesirable” bodies and “undesirable” minds – in the classroom, in our institutions – by our obsession with data, by our obsession with counting, tracking, and profiling?
Who are the “undesirables” of ed-tech software and education institutions? Those students who are identified as “cheats,” perhaps. When we turn the cameras on, for example with proctoring software, those students whose faces and gestures are viewed – visually, biometrically, algorithmically – as “suspicious.” Those students who are identified as “out of place.” Not in the right major. Not in the right class. Not in the right school. Not in the right country. Those students who are identified – through surveillance and through algorithms – as “at risk.” At risk of failure. At risk of dropping out. At risk of not repaying their student loans. At risk of becoming “radicalized.” At risk of radicalizing others. What about those educators at risk of radicalizing others? Let’s be honest with ourselves: ed-tech in a time of Trump will undermine educators as well as students; it will undermine academic freedom. It’s already happening. Witness Trump’s tweets this morning about Berkeley.
What do schools do with the capabilities of ed-tech as surveillance technology now, in a time of Trump? The proctoring software and learning analytics software and “student success” platforms all market themselves to schools claiming that they can truly “see” what students are up to, that they can predict what students will become. (“How will this student affect our averages?”) These technologies claim they can identify a “problem” student, and the implication, I think, is that someone at the institution then “fixes” her or him. Helps the student graduate. Convinces the student to leave.
But these technologies do not see students. And sadly, we do not see students. This is cultural. This is institutional. We do not see who is struggling. And let’s ask why we think, as the New York Times argued today, we need big data to make sure students graduate. Universities have not developed or maintained practices of compassion. Practices are technologies; technologies are practices. We’ve chosen computers instead of care. (When I say “we” here I mean institutions, not individuals within institutions. But I mean some individuals too.) Education has chosen “command, control, intelligence.” Education gathers data about students. It quantifies students. It has adopted a racialized and gendered surveillance system – one committed to disciplining minds and bodies – through our education technologies, through our education practices.
All along the way, or perhaps somewhere along the way, we have confused surveillance for care.
And that’s my takeaway for folks here today: when you work for a company or an institution that collects or trades data, you’re making it easy to surveil people and the stakes are high. They’re always high for the most vulnerable. By collecting so much data, you’re making it easy to discipline people. You’re making it easy to control people. You’re putting people at risk. You’re putting students at risk.
You can delete the data. You can limit its collection. You can restrict who sees it. You can inform students. You can encourage students to resist. Students have always resisted school surveillance.
But I hope that you also think about the culture of school. What sort of institutions will we have in a time of Trump? Ones that value open inquiry and academic freedom? I swear to you this: more data will not protect you. Not in this world of “alternate facts,” to be sure. Our relationships to one another, however, just might. We must rebuild institutions that value humans’ minds and lives and integrity and safety. And that means, in its current incarnation at least, in this current climate, ed-tech has very, very little to offer us.