Tech Companies Need the Humanities, not Humanists

Terence DeToy
Jan 14, 2022 · 6 min read
Photo courtesy of thisisengineering

As a humanities PhD working in tech, I’m frequently asked if I think the humanities are still relevant. Don’t we live in a tech world now? Doesn’t machine learning render philosophy irrelevant? What’s the point of literary studies in the social media age?

STEM fields and the humanities share a long relationship, but you don’t have to go back to C. P. Snow to understand it: look no further than Greg Mottola’s 2007 comedy Superbad.

In the film, three high school friends ponder a typical adolescent question: how do we get beer? Of course, they land upon a timeless solution: fake IDs.

Fogell’s two friends dispatch him to procure a fake license, and he returns with a Hawaiian ID etched with the immortal moniker ‘McLovin.’ Seth, played by Jonah Hill, is outraged. The whole point of a fake ID is for it to look realistic. No one is going to accept an ID bearing a single name, especially if that single name is McLovin. But Fogell, played by Christopher Mintz-Plasse, fires back–“It was either that or Mohammed.”

Seth is incredulous. “You were supposed to pick something common!” he tells him. He can feel his beer slipping through his fingers. Fogell, in turn, offers a data-driven response: “Mohammed is the most common name in the world.” For a moment, Seth is stumped. That may be accurate, but is it right?

Fogell here is a stand-in for Big Tech, STEM bros, data is beautiful (and other associated clichés)–in short, he uses what we call instrumental reason. He represents the mode of thinking that leverages data and information to accomplish a goal. Seth, on the other hand, whose focus is the party and the juvenilia of social rituals it entails, is socially embedded. He is interested in context–he knows the why but not the how. Fogell, who is so focused on procuring the IDs that he doesn’t seem to grasp the reason for them, is all about the how.

Jonah Hill as Seth, Michael Cera as Evan and Christopher Mintz-Plasse as Fogell in Greg Mottola’s 2007 film Superbad

When Fogell mentions ‘Mohammed,’ Seth recognizes the name doesn’t suit their purpose of blending in (they live in a small, mostly-white neighborhood), but Fogell’s response stumps him nonetheless. Common sense is pitted against factuality, and for a moment Seth is caught between the two, like Jean-Claude Van Damme doing the splits between two semi-trucks.

Evan (Michael Cera) solves the dilemma with a single question. He asks Fogell: “Have you ever met anyone named Mohammed?” Fogell falls into a telling silence.

Evan can resolve the dilemma because he is adept in both ways of thinking. He understands the capabilities of data-based thinking (Fogell is, after all, the one who gets the fake IDs), but he also thinks the way Seth thinks: he understands context, the why. This is why he can illuminate what Fogell overlooks when Seth can’t: he understands the type of logic his friend is using, and he grasps context-based thinking well enough to see that facts and data only have utility relative to a given context.

Bottom line: we need more Evans, but in this age of Big Tech and the shrinking of the humanities, we’re polarized in our intellectual outlooks much the way we are in our politics. We have too many Seths and Fogells.

In an excellent piece for Wired, Elena Maris addresses the growing trend of tech companies hiring academic humanists to help them solve a specific set of data problems — sticky problems that pop up around the convergence of technology and the human (for instance, the unsettling but growing realization that algorithms can exhibit the worst aspects of human behavior, like racism).

Maris rightly points out that this approach, however welcome it may seem at first glance to underappreciated (and often, underpaid) humanists, betrays the very problem that got tech companies into this mess in the first place.

Take racist algorithms or AI carrying on secret conversations with each other–how did we get there? The same way Fogell nearly acquired an ID with the name ‘Mohammed’: problem-solving ingenuity applied without regard to social dynamics, context, or cultural nuance (not to mention ethics). Everything is reduced to a problem that can be ironed out with the exertion of yet more data-driven muscle. But driving a souped-up sports car faster in the wrong direction just gets you further into nowhere-land.

Asking an anthropologist or an academic philosopher to untangle an algorithm from its embarrassing habit of discounting people of color from consideration is to ask them to do what Big Tech itself has been trying to do (and failing), as if it were just a matter of throwing more resources at the problem.

But ‘Jim Code’ isn’t a glitch or some unfortunate byproduct of a dispassionate technical oversight. It represents a staggering failure of perspective.

If it is sincere in wishing to construct a society-wide technological apparatus that doesn’t repeat the social failings of its human users (and designers), Big Tech doesn’t so much need humanists as it needs exposure to the humanities.

The reason hiring humanists into Big Tech has shown (I think it’s fair to say) only modest benefits is that these humanists are being brought in to solve problems–they should be brought in to ask questions. As a result, many of them are Seth in the face of Fogell’s rejoinder about ‘Mohammed’ being the most common name in the world. They are effectively being asked to wave the magic wand of cultural understanding and solve what are framed as technical problems. But what if they aren’t technical problems? What if racist algorithms are (gasp!) human problems?

The goal shouldn’t be to bring in humanists to solve the riddle of racist algorithms, but to take a humanistic approach by stepping back and asking, deeply and sincerely, what created the problem in the first place. What set of expectations, deadlines, internal biases, workforce composition, profit concerns, and so on led to the circumstances that enabled AI software to misidentify the faces of women of color 35 times more often than those of white men?

Of course, I’m not saying there aren’t technical aspects to be addressed or that everyone in tech is clueless as to the problem. I myself work for a large tech company and have many conscientious and socially-minded colleagues.

And it is possible some aspects of the problem can be patched, as it were. Perhaps fixing facial recognition software is just a matter of training it on an expanded and more diverse set of reference faces–no philosophers or literary scholars required. (I don’t know that to be the case — that is just an example.)

But in technology, we’re taught to look for the root cause of an error, and too often the general gloss the tech elite gives these issues pulls the brakes on the root-cause express at the level of the technical problem, before we can get to the thornier issues of intent, bias, and so on.

Someone with a solid core of humanistic study under their belt would (I hope) draw a distinction between technical problems and human problems. We don’t need more humanists per se in Silicon Valley; we need more tech-oriented thinkers to adopt the self-awareness and ethical consideration that the humanities can evoke.

In other words, we need more Evans.

It’s worth pointing out that orientations like racism come about through a limited engagement with the world. The APA has described racism as a public health threat, but I’ve always thought of it as a failure of development. If I come to an immediate conclusion about another person and lack the ability (or, more often, the inclination) to engage with them on their terms, I have nothing to compare my outlook to, and so I never venture outside the confines of my own thinking.

Technology is often taken to be a corrective to the contingency of being human. At its best, it can transform the world, but if not balanced out by context, it will exacerbate our problems, not solve them.

Technology won’t fix us — it will amplify us, whatever we are.
