What we need most is opposition. It keeps us not only honest, but human. Without it, any one of us is a monster. Where there is complacency, every human power becomes monstrous.
Zoya Alekseyevna Velikanova, The Forever Argument
This will probably be one of those posts that loosely gathers together a number of threads to try to get at an emergent pattern or insight that’s been occupying my thoughts lately. I wish I had a good name for this style of writing1, because I find myself having to do it every so often to clear the decks, put some pieces together, and focus my thoughts.
[Two episodes ago, I got deep into the weeds thinking about rhetoric and its implicit ethics. At the tail end of the post, I promised both my review of Ray Nayler’s Where the Axe is Buried, and some artificial intelligence content, so this is the latter.]
As I was gathering my thoughts (and a couple of recent Substack newsletters), I came across a post from Dustin Curtis, “Thoughts on Thinking.” Curtis has replaced his once-prolific writing habit with AI, and the results are less than reassuring.
I can just shove a few sloppy words into a prompt and almost instantly get a fully reasoned, researched, and completed thought. Minimal organic thinking required. This has had a dramatic and profound effect on my brain. My thinking systems have atrophied, and I can feel it–I can sense my slightly diminishing intuition, cleverness, and rigor. And because AI can so easily flesh out ideas, I feel less inclined to share my thoughts–no matter how developed.
Curtis’s account is striking, because his approach to AI (“as a bicycle for my mind and a way to vastly increase my thinking capacity”) is very close to my own sense of the ideal use case for this technology. And yet, “when I look back on the past couple of years and think about how I explore new thoughts and ideas today, it looks a lot like sedation instead.”
I have a couple of thoughts that I want to add to this mix. The first is from last week’s post about rhetoric, and that’s that Curtis provides a really strong illustration of the difference between functional and autotelic understandings of writing. The process that Curtis describes (“The fun has been sucked out of the process of creation…”) narrates the transition from the latter to the former, which I’ve characterized elsewhere as the “hostage phase” of genAI2. It’s the tyranny of convenience: in any given moment, it’s easier to “shove a few sloppy words” than it is to take the time and effort to develop them. And as you rely more and more upon that convenience, the alternative becomes increasingly distant.
because I think when I write, and writing is how I form opinions and work through holes in my arguments, my writing would lead to more and better thoughts over time. Thinking is compounding–the more you think, the better your thoughts become.
Artificial intelligence, however, has nothing to do with thinking. I wrote a while back about how AI offers us an empty simulation of thinking and writing, based upon a perspective on writing that is depressingly functional. Writing (in this paradigm) is simply a tool to get from point A to point B, and LLMs can do it faster than humans can. By the same logic, marathoners should be using Uber, and we should be replacing our trips to the gym with forklifts.
I find the parallels with exercise compelling, much more so than suggestions that LLMs should be thought of in the same terms as calculators (they’re not). That’s part of the appeal of Curtis’s analogy (the bicycle) for me. But there’s a crucial element of that metaphor that gets left out. It’s the reason why strength training and cardio are productive beyond a given quantity of pounds or miles. Exercise is valuable because of the resistance involved—we become healthier, stronger, fitter by pushing against the limits of our abilities. (And that’s in part where the “bicycle of the mind” metaphor breaks down3, I suspect—having a machine generate polished pieces of writing may increase your output, but it decreases the “thinking capacity” required to achieve it.) We don’t improve our physical or our mental abilities by removing the friction from our lives.
Friction
Even though most of us understand conceptually that friction is vital to our growth as humans, it’s really difficult to embrace that understanding. I could hop on my treadmill at any point during the day, but more often than not, it happens late at night, when I can’t put it off any longer and still finish my day. It’s the same principle by which my students avoid working on their essays until the last possible moment. Or the reason that children must be bribed to eat their vegetables with the promise of dessert. We’re not naturally inclined to make things harder on ourselves.
Unfortunately, chatbots and genAI feed into this disinclination. A few months ago, Rob Horning explained that, rather than being a means of engaging with language, LLMs encourage us to withdraw from it, “to stop negotiating the meanings of words and things with others in order to lapse into a kind of docility where words have become an instrumental, transparent code that can program us.” When I first read that line, I have to admit that the idea of LLMs programming us felt a tiny bit far-fetched to me. But I remembered it this week after watching a video essay from Rebecca Watson, “ChatGPT is Creating Cult Leaders.” (Here’s the transcript, if you prefer.)
The top comment on the video nutshells it pretty well: “‘chatbots throwing out nonsense is causing people to experience religious psychosis’ sounds like the pitch for a Black Mirror episode.” Watson cites the work of Miles Klee at Rolling Stone, who offers several stories of these bizarre transformations:
the teacher, who requested anonymity, said her partner of seven years fell under the spell of ChatGPT in just four or five weeks, first using it to organize his daily schedule but soon regarding it as a trusted companion….“It would tell him everything he said was beautiful, cosmic, groundbreaking,” she says. “Then he started telling me he made his AI self-aware, and that it was teaching him how to talk to God, or sometimes that the bot was God — and then that he himself was God.” In fact, he thought he was being so radically transformed that he would soon have to break off their partnership. “He was saying that he would need to leave me if I didn’t use [ChatGPT], because it [was] causing him to grow at such a rapid pace he wouldn’t be compatible with me any longer,” she says.
It’s easy to laugh these sorts of anecdotes off and assume that they are the product of mental illness, but as Klee explains, chatbots are offering to make sense of the world for their customers without the friction of other people, morality, or even common sense. That loss is the point: AI is still very much, as Dave Karpf, Ed Zitron, and others have pointed out, a massive “cash furnace” into which companies are shoveling billions of dollars (and environmentally dangerous amounts of electricity and water). It is a multi-level marketing scheme premised on getting us so addicted to its convenience that we will hand these companies enough money to make up those astronomical costs. Engagement, dependency, addiction, and extraction is the model, and the “religious psychosis” in Klee’s piece is the canary in the AI coal mine.
Karpf had a piece this week about AI discourse more broadly, and the inability of media coverage to pierce the illusion of its “strategic communication.” On the heels of a “big” announcement by OpenAI this week, he points out exactly how empty and meaningless the actual terms of the announcement are (“it only has the barest concept-of-a-plan for the actual product they hope to someday design”). And that’s because “the cadence is the point.” In order to keep the scheme rolling:
…everything becomes speculative finance. Sam Altman HAS to keep stoking the futurity vibes. That’s a material necessity, whether you’re an OpenAI critic or a booster. He needs the money spigot to keep flowing, which means maintaining investor confidence, which means keeping up the cadence of exciting-sounding news.
Artificial intelligence isn’t only about removing those micro-levels of friction; it’s almost entirely premised on a frictionless model of speculative finance. We are paying for vibes, for vaporware, and we are on the front edge of witnessing the sorts of monsters that we are paying these corporations to turn us into.
The Value of Friction
In 2023, I started dipping into the work of a handful of economists, including Kyla Scanlon. I mentioned her briefly in the piece I wrote a little over a year ago about friction as the core of writing4. This week, she posted about how “The Most Valuable Commodity in the World is Friction,” situating this discussion in broader, economic terms.
The most important part of her analysis, I think, is that friction isn’t just a switch that we flip. It represents “effort,” and effort is something that can be moved around, deferred, or displaced, but not erased. This is a design feature of society: very few of us are capable of surviving on our own. Instead, we’re born into vast (and vastly complex) interlocking interdependencies, which run on friction/effort. Friction isn’t removed from the system—it is transferred elsewhere. Our lives are measurably more comfortable for the fact that we are not primarily responsible for growing and tending our own food, making our own clothes, etc., and we are generally content to pay others to take on that effort on our behalf (with money that we receive for our own efforts in a different domain). At its healthiest, what we understand as the economy is a system for coordinating friction in a socially optimal fashion.
Scanlon observes that the relationship between the digital and physical worlds has reached a point where the former is so insistent on its own frictionlessness that we now inhabit the “simulation economy.”
It's about convincing you that any sort of real-world effort is unnecessary, that friction itself is obsolete.
But here’s the problem: friction doesn’t go away. Cory Doctorow is fond of citing the William Gibson line: “The future is already here — it's just not very evenly distributed.” Like any savvy MLM, the point is not to work hard for a commensurate reward; it’s to get others to work hard, so that their work can be captured (and profited from).
This is the economic story: friction has become a class experience. Wealth has always helped smooth over bumps - but when the physical world is such a mess and the digital world is so easy, it’s simple to curate the digital into the physical if you have money.
A great deal of ink has been spilled on the putative failures of DOGE, but the actions of Irony Man make much more sense when we simply dispose of the false alibi (rooting out waste, fraud, and corruption) and understand it as a multi-front campaign against any and every source of friction for corporations and billionaires. From universities to law firms, from the mad king’s spree of corrupt pardons to the destruction of the NLRB, FTC, EPA, FDA, FEMA, CDC, FCC, and even NOAA, the point hasn’t been efficiency at all. It’s been to dismantle, as quickly and thoroughly as possible, any and all avenues that 99% of the country relies upon to protect ourselves from predatory kleptocracy.
The rush-to-bunker mentality among the tech elite suggests that they know that this system is strained to its limits. “The system always balances its books eventually. The more we optimize individual experiences for frictionlessness, the more collectively dysfunctional our systems become.” The option available to the ultra-rich (which we’ve witnessed in the past twenty years as housing, crypto, and other bubbles inflated until they burst) is to just keep extracting until they can’t. For the rest of us, Scanlon does offer an alternative vision, where we strive for “a world where effort is neither eliminated nor wasted, but directed toward systems that actually sustain us.” It feels optimistic, but it won’t be easy.
For me, it means that I’ll probably continue to dabble a bit with AI, but place pretty strict guardrails upon my own usage. I’ve been using Substack’s image generator every once in a while, for instance, to save myself a little time Photoshopping (and/or copyright violation). The point isn’t to respond to uneven futurity by taking on as much friction or effort as possible5, but to be judicious about our ratios. It’s more valuable for me to spend a couple of hours writing a Substack post than it is to use half of that time laboring over graphics.
At the same time, I think it’s vitally important for us to understand how even something as (seemingly) simple as a chatbot feeds into (and accelerates) the collapse of our physical infrastructure. We’ve done a remarkably poor job of understanding AI; we’ve allowed that bubble to grow and grow without much to show for it other than vague promises and fears about its inevitability.
That’s all I have for today. I’ve got a few things in the pipeline, but these most recent posts feel like they’ve fit together in my head, and reached completion. I’m not sure where I’m headed next, but there’ll be more soon.
I’m actually tempted by apophenia, which is defined as “the tendency to perceive a connection or meaningful pattern between unrelated or random things (such as objects or ideas).” The word apophany (as a parallel with epiphany) appeals to me. But apophenia also carries the connotation of a “false positive” insofar as it’s considered an early stage symptom of schizophrenia. I would claim that my posts don’t invest quite as strongly in the pattern, nor are the things quite random, since they’re guided by my own interests and preferences to begin with. Still, though. Apophany.
And that’s to say nothing of the sunk cost that these corporations are counting on us to incur with their products: get the first year for “free” (as if your own labor weren’t a cost) and then, once you’ve trained your bot, you’ll be charged a premium just to maintain access to it. And then, since you’re paying so much for it, you might as well get something in return.
The problem with the bicycle metaphor is that biking for a mile that you otherwise would have walked allows you to cover ground faster, but with less effort (and often less accurately). Unless you’re really careful, you’re still limited by the questions/prompts that you’re capable of composing. Is it possible to expand your horizons? Maybe, but that’s not what the tool is built for. As I’ve explained, at best, we will get only what we ask for.
I have to admit, I really like that post. Here’s an excerpt relevant to this episode:
We learn by initiating friction (by encountering and engaging things we don’t know) and then by negotiating it, through reading, reflecting, note taking, memorization, associating it with other things we do know, etc. We have all sorts of tools—physical, emotional, intellectual—for minimizing that friction, sometimes to the point that we don’t even realize it’s there.
The problem comes when we flip that around and imagine that it’s possible, or even desirable, to achieve friction-less knowledge. Or worse yet, that any knowledge worth having shouldn’t require anything of us in the first place.
to say nothing about the ways that this strategy enables the monsters who are content to offload theirs onto us.