
AI Makes Me Feel Cheap

A two-bit excuse for why I haven’t been keeping up with my history podcast, “You Are A Weirdo.”

Crudely made ballpoint pen drawing of a robot speedily wheeling away with a bag that reads "YOUR LUNCH" in its claw.
Are robots stealing our lunch?
Image crudely drawn by Doug Sofer (not by a robot, nor by a competent human artist)

It’s been close to ten months since I released my last podcast episode. No, I haven’t given up entirely, and I’m actually working on some new ones. Still, I feel like I owe my halves-of-dozens of listeners some kind of explanation.

This essay is some kind of explanation.

It’s definitely not a complete explanation. Writing one of those in essay form would involve a level of oversharing that neither you nor I want me to engage in.1 But among the multiple flavors of mid-life, late-life, and get-a-life crises I’m experiencing these days, I’m also facing an artificial-life crisis:

I’ve been a little freaked out by AI.

I’ve found it hard to find value in writing blog posts and recording new episodes when AI can do many of those tasks in the time it takes a pair of standard human eyelids to close and then reopen.2 No, large language model (LLM) AI bots don’t make especially good blog posts or podcast episodes, but the fact that they can make either reasonably well has troubled me greatly. It’s simply made some of the kinds of work I do feel cheap.

On this point, I’m ready and willing to share my feelings, the main one being fear. I fear that AI writing and podcasting represent a major break in the history of humans and technology. Up to now, many of our technologies have made our species quicker at doing human tasks. A domesticated ox made it easier for people to plough. A tractor makes many of those processes easier still.3 Better musical instruments allow musicians to express themselves in new ways, with new sounds and techniques. The first digital audio workstations (DAWs) made it easier for humans to record the music they themselves created. Present-day DAWs like Logic Pro, the one I use for my podcast episodes, are cheaper and easier to use than ever, and they have democratized humans’ ability to create our own polished music and other audio programming.4

AI itself has many uses in this realm and can play major roles in the process. To give one example that’s little known outside of Idaho, take the AI-powered potato grading machine. Yes, one of these ChatGPTater5 robo-tools can tell the difference between what the U.S. Government defines as a “fairly well shaped” U.S. Number One grade potato that’s just over the requisite one-and-seven-eighths-inch minimum diameter, and a U.S. Number Two potato that’s not “fairly well shaped” but instead is only “not seriously misshapen” and still over one and three-fifths inches in diameter.6 Sure, humans could do that the traditional way: by poking through the old-fashioned Potato Visual Aid charts to figure out just what kinds of spuds they’ve sprung out of their soil.7 But I totally get why someone might want AI to take all of this tedious work out of Farmer Frankie’s fingers. Ultimately, I get that AI can be useful, whether inside or outside the glamorous world of tuberous vegetable categorization.
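For the curious, the grading rules above boil down to a couple of thresholds. Here is a playful sketch, emphatically not the Ellips machine’s actual logic: the shape judgment is the hard, AI-flavored part, so it’s a simple stand-in flag here, and the diameter cutoffs are the USDA minimums cited above.

```python
# Toy potato grader (illustration only, not the real machine's algorithm).
# Diameter minimums come from the USDA grades cited in the text; the shape
# flags stand in for what the real machine's vision model would estimate.

US_NO1_MIN_DIAMETER = 1.875  # inches (1 7/8), U.S. No. 1 minimum
US_NO2_MIN_DIAMETER = 1.6    # inches (1 3/5), U.S. No. 2 minimum

def grade_potato(diameter_in: float, fairly_well_shaped: bool,
                 seriously_misshapen: bool) -> str:
    """Return a rough USDA-style grade for one potato."""
    if fairly_well_shaped and diameter_in >= US_NO1_MIN_DIAMETER:
        return "U.S. No. 1"
    if not seriously_misshapen and diameter_in >= US_NO2_MIN_DIAMETER:
        return "U.S. No. 2"
    return "cull"  # destined for hash browns

print(grade_potato(2.0, True, False))   # a fine spud
print(grade_potato(1.7, False, False))  # lumpy but passable
print(grade_potato(1.5, False, True))   # hash browns it is
```

The AI part of the real machine is entirely in estimating those shape judgments from camera images; the grading itself is just thresholds like these.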

The point is that AI can make some human work easier. But something feels different when the bots start writing and podcasting, because they can take humans out of the creative process entirely. That’s not progress; it’s devolution.8 Sure, the Tater-Grader 3000 mentioned above9 might similarly poop on the parties of those rare farmers who find satisfaction in measuring their own spuds, but it does not actually detract from farming. Writing and audio production, by contrast, are both creative enterprises that bring richness, joy, knowledge, and satisfaction to the human beings who produce them. At their best, they represent human brains thinking through problems, wrestling with difficult questions, and expressing what it is like to be alive. Even a bot that can superficially appear to do these things is just a lifeless magic trick, only pretending to sound alive. It feels like some sneaky ploy to rob our species of something important.

For the record, I don’t think tech companies created these technologies to steal our humanity; doing so would make for lousy PR. But there is a degree of something akin to theft that accompanies AI, and it is worth taking seriously. LLM AI technology comes from gobbling up entire libraries full of text.10 Natural language processing works like a very sophisticated version of auto-complete. The “very sophisticated” part comes from the model’s ability to handle language in context via a mechanism called “attention,” the core of the transformer architecture that has largely replaced the older recurrent neural networks. Attention allows AI bots to keep track of important words, concepts, and ideas over the course of a conversation; that is, to hold them in memory and return to them again. These are extremely clever bits of technology, but the point is that they learn what matters by training on real humans’ writings.11
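To make the “sophisticated auto-complete” idea concrete, here is a deliberately primitive sketch: a bigram model that predicts the next word purely from counts in its training text. Real LLMs use attention over billions of parameters rather than raw word counts, but the core task, predicting the next token from what came before, is the same.

```python
# A toy next-word predictor: counts which word follows which in the
# training text, then "auto-completes" with the most frequent follower.
# (LLMs do this with transformers, not lookup tables, but the task matches.)
from collections import Counter, defaultdict

def train_bigrams(text: str) -> dict:
    """Count next-word frequencies for each word in the text."""
    words = text.lower().split()
    model = defaultdict(Counter)
    for prev, nxt in zip(words, words[1:]):
        model[prev][nxt] += 1
    return model

def predict_next(model: dict, word: str) -> str:
    """Return the most common follower of `word`, or "" if unseen."""
    counts = model.get(word.lower())
    return counts.most_common(1)[0][0] if counts else ""

model = train_bigrams("the ox pulls the plough and the ox rests")
print(predict_next(model, "the"))  # "ox" (seen twice, vs. "plough" once)
```

Everything this predictor “knows” comes from the text it was fed, which is the cheapening point in miniature: scale the training set up to a library’s worth and the completions start to look like writing.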

In some sense, we’re just like those bots; we also learn by hearing other people talk and, say, by reading books in actual libraries. The difference is that we still, in 2025, seem to be a lot better at generating original ideas than our robo-amigos.12 And, as far as we can tell, when they write or develop audio content, they’re simply predicting the kinds of language products that a typical person would likely generate. For the most part, then, they’re not really capable of writing originally at all, and instead lean heavily on clichés. Something similar takes place with the audio content from Google’s NotebookLM. It sounds amazingly real, and the audio quality is through the roof. But the content it generates is so generic as to be almost cringe, a Gen-Z adjective that seems practically made for this development.

All of which is to say that the AI revolution has indeed cheapened writing and audio production, but mostly the generic flavors of both. What you13 and I can hopefully create by writing originally may in fact be becoming rarer. And as with gemstones, precious metals, and good ideas, rare things are ultimately extremely valuable. That realization is why my current bout of demoralized sulking has not (yet) defeated me. I’m still hanging in there, and I will definitely be updating the podcast with new episodes reasonably soon, simply because doing so still matters, if only just to me.

That’s what I’m telling myself anyway, at least until the bots get smarter.

  • 1
    Trust me: we’ve both really lucked out on that front.
  • 2
    I’m trying to avoid phrasing with clichés since repeating clichés is precisely what AI bots excel at. It’s possible I may have tried too hard with this one, though.
  • 3
    And much easier for the ox.
  • 4
    I love digitally editing music, but I recognize that there are some tradeoffs too. I think we create a false impression when every lick in every guitar solo gets re-timed (quantized) to perfection, and every vocalist’s pitch gets corrected to the precision of a whole drawerful of tuning forks. In short, good music expresses something about the human condition, and humans make mistakes. We therefore risk misrepresenting something fundamental about ourselves when we digitally polish our tunes to an excessive degree.
  • 5
ChatGPTater is ©2025 Doug Sofer. OpenAI, I’m willing to sell you the rights to this name. Please see my hash brown clause in the next footnote.
  • 6
The AI-powered machine in question may be found here: https://ellips.com/grading-machine/potato/, accessed 22 Sept. 2025. Oh, and a note to the Ellips Company execs: if anyone reads my blog and happens to buy any of your awesome equipment, feel free to contact me at doug@findyourselfinhistory.com to arrange my 8% sales commission. I accept payment in hash browns.
  • 7
An unofficial PDF of the U.S. Department of Agriculture potato grading visual aid charts may be found here: https://www.ams.usda.gov/grades-standards/potatoes-grades-and-standards, accessed 22 Sept. 2025.
  • 8
    It’s exactly what Devo, those wacky guys in the 80s who wore plastic seed-starter pots on their heads, warned us about.
  • 9
    Not its actual name. The Tater-Grader 3000 name is ©2025 Doug Sofer. See earlier ChatGPTater footnote for details.
  • 10
    I asked ChatGPT 5 how many New York Public Libraries of information a current model LLM has. It offered many different ways of estimating that number, but the amount of information is, in fact, equivalent to an entire NYPL full of text. The texts are different, of course, and less coherent than the books and other materials at the library, but very, very roughly, that’s the nearly impossible to comprehend scale we’re dealing with right now in 2025.
  • 11
    See, e.g., Christopher Summerfield, These Strange New Minds: How AI Learned to Talk and What It Means (New York: Viking, 2025), Ch. 12, Kindle Edition.
  • 12
    Psychologist and AI researcher Christopher Summerfield argues that it’s not that large language model AI can’t think for itself per se. It’s just that it can’t think in ways that are nearly as sophisticated as human beings can—yet. They also lack the kinds of biological needs, aesthetics, and sensors that humans have. He writes “…to say that LLMs do not think at all requires a new and rather convoluted definition of what it means to ‘think’.” See Summerfield, These Strange New Minds, 178 and Ch. 22, passim, Kindle Edition.
  • 13
    I’m assuming you’re a fellow human. If you’re a robot and are interested in being trained by me in how to think originally, I am available for hire. Please link up with your potato grading colleagues and arrange my payment in hash browns (see above footnotes on this recurrent hashbrown theme).

By Doug Sofer

Doug Sofer, Ph.D., is a Professor of History at Maryville College in Tennessee. He's the creator of You Are A Weirdo, a media project that reaches beyond academia to share how history helps everyone understand the strangeness of now. Sofer hosts a podcast, writes a blog, and has penned a book manuscript on this same theme.

2 replies on “AI Makes Me Feel Cheap”

Doug, you’ve managed to capture, with heart and wit, something so many miss in the objections: the emotional theft part of AI. Apart from all the practical and material problems with ceding production and work to AI is the fundamental philosophical problem: Humans are innately creative creatures. We are driven to work and to create. Why forfeit something so profoundly important to our purpose and meaning? Thanks for this piece, and don’t despair. Originality will stand out more now than ever in this sea of generic slop.

As someone who went back to using BlueBook exams this year, I really appreciate the recognition of how much we lose by using AI. I know my students would prefer to take tests online, and that it takes class time to actually make them write. I don’t expect brilliance in an essay exam. What I do expect is that students might get the satisfaction of recognizing they CAN express themselves without AI. They have a voice–not one that has to be artificially tinkered with–even if the voice is incomplete, messy, or confused. Losing the power to write only contributes to the sense that people lack the ability to act, to make choices, to create the world that they want to see. AI, when used properly, is a powerful tool to help recognize historical patterns, etc. But, when used to “write”, AI is an exercise in cynicism and disempowerment.
