What does writing with AI actually feel like?
Spoiler: It can't save you from lazy thinking, it can't find your voice, and it can't write your truth.
If you haven’t read Part 1, I’d suggest starting there. This piece builds on it.
For a writer, there’s nothing more torturous than having something to say and not being able to say it.
Vauhini Vara, whom you met in Part 1, had been living with that for years.
Her sister had died of cancer when she was very young. And despite being a writer, she had no words for it.
She then did something that felt, by her own admission, illicit. She asked an AI to help her find the words.
She and her husband were both writers. They both understood what technological capitalism was doing to their craft. And yet here she was, late at night while he slept, opening a web app called the Playground (powered by GPT-3) and attempting to write the one story she had never been able to tell.
Vara would type a sentence and GPT-3 would complete the story based on the most statistically probable version of what a human might say next. She wrote nine stories this way, each one a little more honest, a little closer to her reality than the one before.
In the first story, Vara told GPT-3 her sister had been diagnosed with cancer. GPT-3 assumed she had survived. “She’s doing great now,” it wrote. Vara corrected it. She had died.
The AI tried again.
In the second story, it assumed that Vara had responded to her loss by running across America for a children’s cancer charity. This is what AI thought dealing with grief looked like. Except it wasn’t Vara’s truth. Not even close.
But Vara kept going. With each new story, she gave GPT-3 more—more detail, more honesty, more of what it actually felt like to lose her sister. She kept pushing her truth onto it. By the ninth and final story, the most personal one she had ever written, only two lines belonged to GPT-3. Everything else was hers.
Did GPT-3 write that essay? Not really. But without it, Vara might never have written it either.
—
The Tyranny of the Blank Page
Writing is cognitively and emotionally brutal for everyone, not just grieving sisters. Most of us aren’t writing about loss that heavy, but we all know that feeling. Of having something to say and not being able to begin. Of the page sitting there, staring at us, blank and judgmental.
Writers have developed all kinds of strange rituals to deal with this.
Hemingway would stop every writing session mid-sentence, deliberately, so he’d always have somewhere to start the next day. Agatha Christie did her best plotting not at a desk, but soaking in the bath eating apples. Victor Hugo, when he couldn’t write, locked away all his own clothes so he had no choice but to stay home and write.
These rituals were about making the conditions for writing possible. Creating the circumstances in which starting felt less impossible.
This raises a question. Can AI do the same thing?
Sharing the Cognitive Load
Cal Newport, a writer and academic, decided to find out. He wrote with ChatGPT and documented the experience in an essay for The New Yorker. What he found was messier and more interesting than either side of the AI debate would have you believe.
At first, it felt frustrating, “as though I were excavating an essay instead of crafting one.” But when he considered the psychological experience of writing, he began to see its value. ChatGPT wasn’t generating polished prose, but it was providing starting points, raw material to work with, and research threads worth pulling on.
As Newport puts it:
For all its inefficiencies, this indirect approach did feel easier than staring at a blank page; “talking” to the chatbot about the article was more fun than toiling in quiet isolation. In the long run, I wasn’t saving time: I still needed to look up facts and write sentences in my own voice. But my exchanges seemed to reduce the maximum mental effort demanded of me.
Alan Knowles, a researcher who studies how writers use AI, has a name for what Newport experienced. He calls it Rhetorical Load Sharing. The idea is simple. AI doesn’t write for you, but it shares enough of the cognitive burden that writing stops feeling quite so brutal.
—
Before I started writing this piece, I gave Claude all my notes and asked where to start. It suggested I open with the confession from Part 1: that I’d had it open the whole time, what that meant, and how I used it. That made sense, but after almost seven days of research, I’d come to my own conclusion.
I wanted to start with Vara’s story. It was the most relevant and poignant example of human-AI collaboration I had found. An algorithm wasn’t going to convince me otherwise.
That said, Claude did help. It gave me starting points for sections I was stuck on. It suggested the writers’ rituals. It helped me see my own arguments from angles I hadn’t considered. When I asked it to push back on something I’d written, it did.
What Claude couldn’t do was think for me. Every idea in this piece came from me, as did every decision about what to include, what to cut, and where to go next.
Nothing in this piece was written solely by AI. But AI did have a role in how it came together, and I won’t pretend otherwise.
—
A few years after her essay “Ghosts” went viral, Vara published her book Searches: Selfhood in the Digital Age. In it, she went back to the final two lines that GPT-3 had written and replaced them with her own words. She took the story back. Which is, I think, what writing with AI should always look like: you asserting yourself on it, not the other way around.
After engaging with this topic for over a month, this is where I land. Your writing with AI will only be as good as you. Vara’s was only as good as Vara. Mine is only as good as me. The tool doesn’t save you from yourself.
I’d like to revisit this a year from now. Things are moving fast and my opinions on this might look very different. Maybe there’s a Part 3 in this. We’ll see.