Learning to Code, Part 2
Knowledge workers, including many journalists, hate AI for solid reasons. But we should use it much more in realms where it doesn't violate any editorial values.
You can read Part 1 of this series here.
Why Everyone Hates AI
My decision to create an alternative to TurboTax came at a good time. Fortunately, you don’t need to know how to code anymore to create software. Unfortunately, you have to use AI if you want to build anything decent, and I shared the same view as so many others: it sucks.
AI has a bad reputation for several good reasons.
First, the influential creative classes, especially Hollywood, are panicked about the technology’s potential to replace jobs, especially ones that seemed safe only recently, not just set designers but screenwriters. The backlash is intense enough that the coolest new film festival is Justine Bateman’s Credo23, billed as “a filmmaker-first, no-ai event.” There’s a similarly fierce anti-AI backlash in other creative industries, including music and journalism. When I worked at Politico, the union considered an anti-AI plank one of its biggest victories during contract negotiations with management.
Second, in recent months, AI has been blamed for mass layoffs at a long list of companies, including Block (4,000 layoffs), Amazon (30,000), UPS (20,000), Microsoft (15,000), HP (4,000-6,000), and Pinterest (15%). Meta is reportedly on the verge of cutting 20% of its workforce. (Some cuts began today.)
How many of these cuts are really related to AI is an open question. Some firms are clearly using AI as an excuse for cuts that have nothing to do with the technology, a practice common enough that it has a name: AI washing, in which companies frame overhiring corrections and restructurings as AI-driven efficiency plans because that sounds forward-looking rather than embarrassing. Other companies are cutting jobs in anticipation of AI disruption, not because AI has actually replaced any of their workers.
While the causation may be murky, the overall vibe these announcements create is fear and uncertainty, and every class of white-collar worker has a reason to be as anti-AI as the actors, songwriters, and reporters are.
AI industry leaders have not eased the fears. “I think that we’re going to have a human-level performance on most, if not all, professional tasks,” Microsoft AI chief Mustafa Suleyman told the Financial Times in February. “So white-collar work where you’re sitting down at a computer, either being a lawyer or an accountant or a project manager or a marketing person, most of those tasks will be fully automated by an AI within the next 12 to 18 months. And we can see this in software engineering.”
Ford CEO Jim Farley has said that AI will “replace literally half of all white-collar workers in the U.S.” There are glimmers of these warnings in employment data: recent college graduates are struggling to find work. But there’s no data yet to indicate that the most dire predictions are accurate.
Third, AI has become shorthand for poor quality. “It sounds like ChatGPT made it” is about the worst thing you could say about someone’s writing or music or video. This criticism is starting to feel dated, given the capabilities of the models released in recent months, but it will take a long time before AI sheds its association with slop.
Fourth, the public hates big tech—and for valid reasons. The same tech industry leaders who, 15 years ago, hyped the liberating potential of social media are now exiting that era with reputations akin to tobacco company executives (but with trillion-dollar valuations) while hyping the new liberatory power of AI. One could be excused for not believing the likes of Mark Zuckerberg, who in 2021 told us that “an embodied internet” called “the metaverse” would soon have a billion users and “touch every product we build.” It also doesn’t help that the industry’s most prominent face in the Trump era is the extremely unpopular Elon Musk, who not long ago was warning that AI is akin to “summoning the demon,” and is “far more dangerous than nukes,” and now runs what is considered the most reckless of the big AI companies.
“There has never in the entire history of business communications been any set of people so spectacularly bad at communicating as the contemporary leaders of the AI industry,” Nathaniel Whittemore noted last week on his popular daily podcast about AI. “Really, since the launch of ChatGPT, it has just been a clinic in how not to talk to people and how not to build public support for what you’re building.”
It is no surprise that trust in AI continues to drop. A recent NBC News poll showed that AI is more unpopular than both ICE and Donald Trump.
But the most striking finding in recent polling is that AI usage is the single strongest predictor of opinion about AI, stronger than party, age, gender, or race. Data for Progress reported that AI has a +57 point net favorability among daily users, versus a -42 net favorability rating among those who rarely or never use it—a 99-point gap.
The public is polarizing into power users who love the technology and non-users who despise it.
Don’t Do This
I experienced the anti-AI backlash firsthand last month when I ran a week-long experiment publishing a daily newsletter written completely by AI. The idea was not to turn over content creation to AI, but simply to test how good AI was at an entry-level journalistic task—reading and intelligently summarizing the news.
Before this experiment, most of my AI usage was limited to searching with chat. I used AI the same way I used Google. The newsletter experiment was the first time I spent long hours experimenting with the best new models, their coding tools, and their agentic capabilities. Frankly, I was impressed by the quality of the analysis, especially by Opus 4.6. Its ability to synthesize and connect disparate news stories was superior to that of some reporters with whom I’ve worked.
At the start of my newsletter experiment, it took me longer to get the AI to write a serviceable newsletter than it would have taken me to do it myself. I spent most of my time fiddling with Claude Code, creating databases of high-quality news sources so the agent wouldn’t run loose on the web and summarize stories from, say, The Epoch Times. It took hours to craft strict rules for how the model wrote news and fact-checked itself to avoid hallucinations.
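To give a sense of what that source discipline looks like in practice, here is a minimal sketch, in Python, of the kind of allowlist gate I mean. The domains and function names are illustrative stand-ins, not my actual configuration:

```python
from urllib.parse import urlparse

# Hypothetical allowlist: domains the agent is permitted to summarize.
# A real setup would load this from a maintained database of vetted outlets.
APPROVED_DOMAINS = {
    "reuters.com",
    "apnews.com",
    "ft.com",
    "nytimes.com",
}

def is_approved_source(url: str) -> bool:
    """Return True only if the URL's host is an approved outlet
    or a subdomain of one (e.g. www.reuters.com)."""
    host = urlparse(url).netloc.lower().split(":")[0]
    return any(host == d or host.endswith("." + d) for d in APPROVED_DOMAINS)

def filter_links(links: list[str]) -> list[str]:
    """Drop anything the agent scraped from outside the allowlist."""
    return [u for u in links if is_approved_source(u)]

if __name__ == "__main__":
    scraped = [
        "https://www.reuters.com/world/some-story",
        "https://www.dubious-outlet.example/story",  # rejected
    ]
    print(filter_links(scraped))  # only the Reuters link survives
```

Whether rules like this live in a script or in the agent's written instructions, the principle is the same: the agent never summarizes a link that doesn't clear the gate.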
At one point, when I failed to extend these rules to the AI’s creation of maps and charts, the AI generated a map of the Middle East with the exact locations of US aircraft carriers that it said Trump had moved there. When I asked how it knew those positions, Claude admitted that it made them up.
By the fifth day, though, I had a system that, with the push of a button, created a fully formatted and fact-checked 2,500-word newsletter with multiple data visualizations.
But readers hated it, even though it was framed explicitly as an experiment to test the technology’s capabilities. The reaction was overwhelmingly negative.
The New York Times ran a similar experiment testing whether readers could tell the difference between human and AI writing, and the reporters who designed the quiz faced a similar revolt. “I woke up this morning, and I checked my social media feeds, and I saw messages like the following: ‘You’re garbage, and I hope you lose your job and become homeless. God, what a waste of sperm you are,’” said Kevin Roose, who co-authored the quiz. “When you tell them that they prefer the AI-written passages, they get very mad.”
So for modern newsrooms, two strong forces are holding AI penetration in check: news unions hate it, and readers, listeners, and viewers hate it.
A recent report from economists at Anthropic backs this up. They ranked every occupation in America by its vulnerability to AI disruption, then sifted through a million Claude chats to determine how much AI is used to assist with the specific tasks associated with each occupation.
I went through their data and pulled out 15 occupations across six different categories of white-collar professions—tech, media, finance, entertainment, medicine, and law—and plotted both the theoretical AI exposure and the actual AI usage for each job.
Essentially, the blue area tells you how replaceable you are in theory, and the red area tells you how far along you are in using AI to replace yourself.
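For the curious, here is a rough sketch of how a chart like this can be generated in Python with matplotlib. The occupations and values are placeholder numbers of my own, not Anthropic's actual figures:

```python
import numpy as np
import matplotlib.pyplot as plt

# Placeholder values on a 0-100 scale; not Anthropic's actual data.
occupations = ["Reporters", "Editors", "Lawyers",
               "Financial analysts", "Physicians", "Programmers"]
exposure = [70, 65, 45, 55, 30, 90]   # theoretical AI exposure (blue)
usage    = [10, 12, 20, 60,  5, 75]   # observed AI usage (red)

# One spoke per occupation, closing the loop back to the first point.
angles = np.linspace(0, 2 * np.pi, len(occupations), endpoint=False).tolist()
angles += angles[:1]
exposure += exposure[:1]
usage += usage[:1]

fig, ax = plt.subplots(subplot_kw={"polar": True})
ax.plot(angles, exposure, color="tab:blue", label="Theoretical AI exposure")
ax.fill(angles, exposure, color="tab:blue", alpha=0.25)
ax.plot(angles, usage, color="tab:red", label="Observed AI usage")
ax.fill(angles, usage, color="tab:red", alpha=0.25)
ax.set_xticks(angles[:-1])
ax.set_xticklabels(occupations)
ax.set_ylim(0, 100)
ax.legend(loc="upper right", bbox_to_anchor=(1.35, 1.1))
plt.show()
```

Swap in the real exposure and usage numbers and you get a radar chart much like the one shown here.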

A few things jump out. The text-focused media professions—reporters, editors, authors—have a massive gap between their high theoretical exposure to AI and their very low observed usage. AI just hasn’t penetrated newsrooms and publishing houses the same way it has Silicon Valley and Wall Street. (For financial analysts, the observed usage actually exceeds the theoretical usage, which might suggest some limitations in these methods of measurement.) Lawyers are less exposed than you might imagine, medicine has very low adoption for obvious regulatory and safety reasons, and in Hollywood, so far, only the tech-forward visual effects artists are using AI aggressively.
The big spike you see is for the highly exposed profession of computer programming, the single most threatened field of the 756 occupations the Anthropic researchers measured. For coders, a majority of their core work is already being replaced by AI.
Digging into this data convinced me of a few lessons for journalists:
1. You’re not going to be replaced. Many of the journalistic tasks that are theoretically achievable by an LLM are things that nobody wants from AI. There’s no market for AI-written columns. The fastest way to lose readers and erode trust in a news product is to publish AI-generated editorial content. That’s not going to change for a long time, even as other industries, where there aren’t similar editorial standards around trust and factual accuracy, surrender to the robots.
2. Being human is your competitive advantage. AI is having a corrosive effect on journalism, especially in areas that were already suffering. Local news deserts are seeing AI-generated summaries fill the void, SEO content farms have been supercharged with AI slop, and Google search no longer sends as much traffic to publishers. These are big problems beyond the scope of the narrow point I’m making here. While the Anthropic research looks stark—all that blue surface area in the picture—the data overstates the threat to the media because it doesn’t capture a key cultural shift: journalism is increasingly a field that values personal relationships between the (human) creator and their audience, a trend that started before the AI explosion but has accelerated because of the backlash against AI. Fortunately, human connection is by definition the one thing that AI can’t replace.
3. You should use AI more. AI adoption in journalism lags surprisingly far behind, even on tasks that don’t violate any core editorial values. The AI gurus often talk about the difference between “efficiency AI” and “opportunity AI.” Efficiency AI is about replacing workers. Opportunity AI is about giving workers tools to make them more productive and creative. This sounds like consultant B.S., but there’s something to it.
I’ve been deeply immersed in AI tools for the last two months, and as my knee-jerk anti-AI views have melted into a more nuanced take, I’ve found that the efficiency vs. opportunity distinction serves me well when assessing what aspects of the technology to embrace or reject. For instance, the charts on this page were created in seconds with Claude. It would have taken me hours to download, organize, and sift through the data just to create that radar chart.
But what really convinced me was spending time immersed in the world of coders, who are in the middle of a full-blown occupational revolution. Over the last year, their lives have been upended by AI.
Coders are like the lead climbers on the rope team of knowledge workers. They are navigating the unknown terrain of loose rock, missing footholds, and sheer drops before the rest of us. And since I had decided to become a coder myself—bad timing!—I needed to understand how they were doing it.
Coming in Part 3 tomorrow: AI Comes for the Coders