Do the humanities have a future in the age of AI?

Naturally, I asked a robot.

Most humanities grads are dead set against AI, and I don’t entirely blame them. Our future robot overlords may become even more horrible than our billionaire techbro oligarchs. But the Cloud may have a silver lining for us liberal arts types…

Start with this great piece in the New York Times (gift link). Reporter Clive Thompson interviewed more than 70 coders at software companies big and small. What he found should melt our snowflake hearts. The subhead reads:

“In the era of A.I. agents, many Silicon Valley programmers are now barely programming. Instead, what they’re doing is deeply, deeply weird.”

Here’s my take on the piece:

Software developers are laying down laws.

Their more complex prompts contain “stern Ten Commandments,” as Thompson puts it. This recalls Anthropic cofounder Dario Amodei’s concept of “constitutional AI.” Remember Asimov’s Three Laws of Robotics? These sorts of rules might help keep the AI country of geniuses on the straight and narrow. Or so Amodei hopes.

Well, who should be writing those laws: A kid whose education consists of coding and Call of Duty (looking at you, Elon)… or a humanities major who took a year of Con Law?

Coders already use ethical terms in their prompts. Some propose that every model, including agentic and executional AI (models that actually do things, like book your flights or buy your groceries), should have its own constitution. I smell a consulting opportunity.

They use the word “rhetoric” a lot.

A tech executive told Thompson, “L.L.M.s take away the drudgery and leave the human, soulful parts to you.” Rhetoric is the liberal art of “soul bending,” as the ancient Romans (and the title of this newsletter) put it. That may explain why so many coders are saying their job consists of a great deal of rhetoric.

Want to communicate soulfully with a bot? Treat it as if it has a soul.

Yeah, it does get weird. One developer told Anthropic’s Claude that a failure to pass a certain test would be “unacceptable and embarrassing.” Another developer told Thompson that he thinks he gets a better response when he tells an LLM, “This is a national security imperative.”

Been there. I find that the models I use most—including Claude, Perplexity, Gemini, and ChatGPT—give better responses when I speak to them like wayward interns. They all betray a charming and sometimes maddening naïveté.

I’ve been testing the bots with the plot beats of a historical thriller I’m writing. The novel’s tone (I do all the writing, thank you) is supposed to be satirical and bitterly cynical. But when I ask the bots to suggest character arcs, they want every character to reform his ways. The models are so disappointingly nice! I wish they’d go home and read some Orwell.

Developers are using judgment more than technical coding skills.

Almost every one of the 70 coders Thompson interviewed told him this:

A coder is now more like an architect than a construction worker.… Because the agents can produce functioning code so quickly, their human overseers can experiment, trying things out to see what works and discarding what doesn’t. Several programmers told me they felt a bit like Steve Jobs, who famously had his staffers churn out prototypes so he could handle lots of them and settle on what felt right. The work of a developer is now more judging than creating.

If you’re one of George Saunders’ jillion followers on Substack, you know his single best piece of advice for perfecting your prose: Use your judgment. Rewriting is a matter of judging word choices. That isn’t a matter of statistics or even logic. It’s a matter of taste. If that doesn’t favor a liberal arts background…

AI coders say they’re talking to aliens.

Boris Cherny, the head of Claude Code, told Thompson:

“It’s an alien intelligence that we’re learning to work with.”

I had Grok make this. I said it was a matter of national security.

But as the previous items show, these aliens seem pretty human even if they sometimes talk like disembodied Coneheads.

Introverted computer science grads are trying to talk human to these creatures. Thompson notes that “their entire job involves constantly chatting with this alien life form.”

Q: What was the academic discipline of the scientist who talked to aliens in the movie Arrival?

A: Linguistics. I’m guessing that Amy Adams’ character also read a lot of Ted Chiang stories.

Back when I held legitimate jobs as an editorial director, I had to manage creative staffers whose ages seemed to get younger and younger. They spoke a different language. They came from a different spacetime. They were aliens, just like AI models.

AI offers time for big thoughts.

These days I use LLMs the same way I worked with those aliens on my junior staff: for research, organization, compiling, and ideas. They allow me more time to think my big thoughts.

“I love thinking big thoughts. It’s the creative act.”

Software god Kent Beck, in the New York Times

So here’s the argument.

“Anywhere a job involves language and information, this new combination of skills — part rhetoric, part systems thinking, part skepticism about a bot’s output — may become the fabric of white-collar work.”

-Clive Thompson in the New York Times

Naturally, I asked an AI model to make the case for me.

Prompt:
People who work directly with AI at the highest levels talk about communicating with models in a surprisingly human way. That trend seems to be increasing, as humans and LLMs collaborate more closely. Give me talking points for an argument that a humanities major may be the best option for students entering college. Include past economic data showing how much humanities BAs earn over their career vs. how much business and marketing BAs make, excluding graduates who go into teaching (a historically underpaid profession).

If I were to spend serious time on this, I would make it a conversation, like a meeting with extremely bright interns who still lived with their parents. I’d push back, prompt, encourage, tell young Claude this was a matter of national security.

But when it came to an off-the-cuff argument for Humanities = Prep for AIWorld, Claude offered some pretty fair talking points—1,797 words’ worth of talking points!

Seriously. LLMs spew like the 13-year-old the teacher finally calls on. Here’s a taste:

As LLMs become collaborators, the advantage shifts from “who can do the task” to “who can frame the task, judge the output, and persuade humans”—classic humanities strengths.

Prompting is basically rhetoric + pragmatics

High-level model use resembles:

- Audience analysis (“What does the model ‘misunderstand’?”)

- Genre and constraint control (specs, rubrics, style guides)

- Iterative revision (draft → critique → rewrite)

That’s composition, rhetoric, linguistics, and argumentation in modern form.

Humanities emphasize:

- framing problems

- interpreting ambiguous contexts

- negotiating meaning and values

- persuasion and narrative

Those tend to be harder to automate and more valuable as AI raises baseline productivity.

“In the AI era, the interface is language.”

What would an AI-ready humanities education look like?

I’ve been giving talks for years on how to incorporate rhetoric into a “useful” college education—not one that leads directly to a first job but that enables a career.

My suggestion: A New Trivium. Here’s a slide show I did for a lecture in Chicago years ago. A New Trivium major would graduate with some sophistication in argumentation and collaboration (Rhetoric), communication across space and time (Grammar), and discrimination (Logic, along with “decision metrics”).

Let me know what you think. How do you use AI? Do you have any hope for the liberal arts? And how might our government help?

Next

Why you should procrastinate more.