If you’ve read The New York Times, The Atlantic, or The Verge in the past month, you likely encountered the ChatGPT debate. ChatGPT is an AI system designed by OpenAI that, in layman's terms, learned to write by digesting the internet. You type a prompt—on anything from clothing choices to existential dread—and ChatGPT answers. It can write essays and recipes, sustain a conversation, and more. Pundits and experts are divided over the technology, variously arguing that it is either spectacular or scary, useful or dangerous, a paradigmatic shift or a minor development.
Me? I think that ChatGPT poses major problems.
I have kept an eye on AI systems ever since OpenAI’s other product—the image generator DALL-E 2—popped up in the news earlier this year. I've had some time to mull over the ensuing developments, but time hasn’t made me feel any better about OpenAI’s work. AI writing stands to radically alter education, labor, and interpersonal communication. It could be a convenient tool that enhances our efficiency, or it could contribute to our social, psychological, and existential detriment. I fear the latter is likely.
ChatGPT depends on borrowed parts. OpenAI trained its system by crawling the web. It crammed billions of written posts, articles, messages, books, and other texts into the ChatGPT model. It appropriated the digital written word as of 2021 or so, and nobody blinked. It's a common corporate tactic. Writing on surveillance capitalism, Shoshana Zuboff noted that surveillance capital firms make their markets by seizing digital assets and spaces. They assert a right to own something—like an index of the web, or location data, images of every street, scans of every book, and so on—and capitalize on it. They claim the thing and dare the public to challenge them.
OpenAI and similar firms have already generated some backlash with this practice. Consider the outrage over AI image generators like DALL-E 2, Midjourney, and Stable Diffusion. Their respective models “learned” how to “art” from millions of human artists' works. And those artists protested. People developed tools to check whether someone's artwork had been included, without permission, in the AI training datasets. Pundits debated the ethics of AI art systems, and podcasters speculated on the future of design. But then the news moved on. People shrugged: guess we just have AI art now. Stable Diffusion has since been integrated into everything from PowerPoint to search engine results.
The latest iteration of ChatGPT is no different. It claims the digital word as its own domain—not as something to read, but to model and (presumably, eventually) profit from. My words are probably there. Yours are too, if you have written on the internet in the past twenty years. We didn't give OpenAI permission to do that. I wrote my words for my community to read, not for a computing system to appropriate, dissect, model, and imitate.[1]
So ChatGPT has dubious origins, but what happens to us when we use it?
While writing this reflection, my wife and I had coffee with a friend. I brought up ChatGPT, and our friend was intrigued. She got an OpenAI account at the coffee shop so we could toy with the bot. When I asked it questions about my dissertation, the system hallucinated an entire federal agency. To the best of my knowledge, the Food Distribution Agency did not exist during World War II, nor was it a part of the Office of Price Administration (which did exist). So far so good—the AI won’t replace historians just yet.
After this experiment, our friend brought up how she might use ChatGPT as a tool. She mentioned that freelancers, for example, could write rote product descriptions far more easily with an AI assistant. Fair enough.
But then we talked about cover letters. People often scour the web for templates and then write their cover letters accordingly. How is having an AI take that first step (or even write the whole letter) any different?
Cover letters are not a good hill to die on. No one likes them, no one wants to write them, and few want to read them. But if even this formality has human qualities that an AI cannot adequately replace, then perhaps there’s defensible ground here.
Ideally, cover letters open windows into candidates’ worlds. Recruiters learn why candidates want jobs, how candidates write, and a bit of how they think. A letter is a first sample of the work a candidate might perform and an initial impression of the colleague they may be. This is admittedly more applicable to jobs that involve writing. And our friend at coffee noted that, in other lines of work, an AI-assisted cover letter could demonstrate a candidate’s ability to use available tools. This may be true—especially if one continues to use AI across job assignments. But I think we lose something in that scenario. Or perhaps it was already lost, since many employers already lean on AI tools to weed out resumes and cover letters (e.g., ZipRecruiter).
We may be entering a job-search arms race, with robot writers competing against robot readers. In that case, recruiters and candidates will grow dependent on AI advisers, trusting an AI matchmaker to present appropriate humans to appropriate workplaces. It may work, so long as job-seekers prompt their AIs correctly. Niche businesses could market AI-prompt boot camps to unemployed workers. To be employed, we may need to learn the correct abasement before ChatGPT (or whatever iteration succeeds it), beseeching it for the words that ZipRecruiter's bots want to see.
Let’s play this out a bit further. Say that AI writes killer cover letters and becomes an integral component of the hiring process. In that case, with human resources departments and jobseekers dependent on AI tools, what could happen to interviews? Good interview questions are notoriously difficult to write. Why not just ask ChatGPT? Preparing for interviews can also be an opaque, grueling process. Why not prep with ChatGPT?
These tools may be fantastic and effective. But what are we left with, at the end? An employer and employee puppeted by an AI, who only know one another as figments of a digital imagination.
The job-seeking example is limited, but it illustrates how such tools may encourage us to offload our personal agency to machines.
I worry ChatGPT will undermine our ability to think—and maybe even our resilience. I recently read a blog post by Paul Graham, one of the founders of Y Combinator, arguing that writing is a fundamental component of deep thought. He’s not the first to say so, but I liked his phrasing:
…if you need to solve a complicated, ill-defined problem, it will almost always help to write about it. Which in turn means that someone who’s not good at writing will almost always be at a disadvantage in solving such problems.
Think about a eulogy. Eulogies are not only descriptions of the dead—they’re also mourning rituals for the living. Those words will always be insufficient to capture what the person meant to us in life, but writing itself guides us through our grief. The writing process helps us organize our loss and love. The eulogy then communicates, to the living and the deceased, a farewell.
Eulogies address the ill-defined problem of immense sorrow. And as Graham noted, “it will almost always help to write about it.”
Eulogies are also uncomfortable. We may not know where to start, worry that our words will fall short, or fear the moment when our voices break. In those overwhelming times, writing may feel like too heavy a task. An AI assistant, which we already use in and out of work, would probably look like relief. Eulogy-writing becomes a task we don’t have to do alone—or at all.
But what then? How does our communication of grief and love change? Have we processed our internal state, or have we adopted the AI’s approximation thereof as our own? Do we communicate what someone meant to us, or what ChatGPT triangulates from our prompts? Perhaps the numbing effect of delegation will be worth it. Or maybe not.[2]
I have never written a eulogy, but I have graded hundreds of student essays. Many students wrote beautiful prose, others struggled, and some—whether desperate or apathetic—plagiarized. A student once took an entire essay from the internet and submitted it under his own name. After I found out, I pulled him aside the next class period and asked: why? He shrugged. “They said it so well, and I agreed with all of it, so I used it.”
Most plagiarizing students fell back on the same general defense. They found something that sounded good and plopped it into their paper. Sometimes they didn’t understand what plagiarism was, or that it was wrong (or thought that it was only wrong when caught). But most just seemed pressed for time by their course-load.
Overbooked and overwhelmed, some plagiarizing students also seemed under-prepared for college writing. Their high school education had holes. They only grasped grammar, structure, argument, and style at rudimentary levels. And I don’t say this to blame the students. It killed me to see young adults going out into the world so unequipped. Their schools had failed them, and I wanted to throw them a lifeline. And it worked a lot of the time. Through feedback, workshops, and hard work, many of my struggling students made amazing improvements. They learned they could think and write well, and that their words did indeed matter. I was so proud.
ChatGPT will only make this process more difficult. High school educators are already admitting that ChatGPT can write a passing AP Literature paper. Some students will continue to slog through the drafting process, but others will take the easy C. Sure, ChatGPT probably won’t earn them an A, but the time and energy saved may be worth skating by on passing grades. And because the output is original text, it won’t trigger plagiarism detectors, so there will be no consequences! Many students will take that trade-off, especially since most teenagers have not yet developed the cognitive capacity to consistently weigh the long-term consequences of their actions. Few will think of workplace writing or eulogies when asking an AI for homework help—nor should they! But by the time students get to college, falling back on AI may be second nature. It worked in high school, so why not in college classrooms or offices? The problem here is not that ChatGPT renders teachers ineffective. Rather, ChatGPT is poised to undermine student motivation.
Educators will struggle over how to address ChatGPT while maintaining pedagogical rigor. Some have announced that AI-augmented writing is the future we ought to embrace. Others have decreed high school and college essays obsolete, calling for new types of assignments that better reflect student learning while avoiding situations where AI might be used. Still other instructors will likely continue assigning papers, hoping students will just play ball. A few will prohibit AI writing tools altogether.
This division over ChatGPT also disrupts educators’ shared definition of academic dishonesty. Every college instructor includes an academic integrity statement in their syllabus, prohibiting plagiarism and other forms of cheating. But educators will disagree over whether or not ChatGPT constitutes academic dishonesty. A valuable tool in one course will be an academic infraction in another—putting students in a bind.
Given these considerations, I believe we need a new set of social norms and technological safeguards around AI writing. While individuals must ultimately choose whether or not AI is right for them, certain sectors—like education—desperately require mitigation strategies.
There’s only so much to be done on the reader's side. Some organizations are devising tools and heuristics to detect AI writing. Initial results have been promising, but experts fear that even more advanced AI systems will render such strategies obsolete within months. Still, there may be some hope for detecting texts written by the present generation of AI: while finishing this post, I saw that someone created a program that promises to detect ChatGPT-written text.
As such, schools may move to control writing environments. Students may have to write their papers on non-networked computers in university computer labs. This will take university resources (money for computers, space for a lab, pay for proctors), but it’s possible. My university, for example, already uses offline computer labs for exams. Equipping these labs for student research papers would not require many adjustments.
Professors may also assign handwritten work. Students could draft their papers by hand or (if professors want to be less demanding) keep handwritten project journals. A journal might demonstrate that a student worked consistently over time on a paper and reveal how they grappled with research questions or revisions. A writing journal might also help students become more self-aware about their writing experience and their lives as writers.[3]
ChatGPT’s use among students highlights a larger challenge: how can we verify if anyone actually wrote what they purport to have written?
Several months ago I read an article by Benj Edwards, a technology journalist, about archives and AI-generated content. Writing two years before ChatGPT, Edwards argued that AI systems stand to undermine trust in digital content’s veracity. He proposed that a “universal timestamp” may be one solution, whereby “we link an immutable timestamp [via blockchains] to every piece of digital media.”
Every time a piece of digital media is saved—whether created or modified on a computer, smartphone, audio recorder, or camera—it would be assigned a cryptographic hash, calculated from the file’s contents, that would serve as a digital fingerprint of the file’s data. That fingerprint (and only the fingerprint) would be automatically uploaded to a blockchain distributed across the internet along with a timestamp that marked the time it was added to the blockchain. Every social media post, news article, and web page would also get a cryptographic fingerprint on the history blockchain.
…
To verify the timestamp of a post or file, a social media user would click a button, and software would calculate its hash and use that hash to search the history blockchain. If there were a match, you’d be able to see when that hash first entered the ledger—and thus verify that the file or post was created on a certain date and had not been modified since then.
This technique wouldn’t magically allow the general populace to trust each other. It will not verify the “truth” or veracity of content. Deepfakes would be timestamped on the blockchain too. But the ledger, if it is maintained over time, will give future historians some hope for tracking down the actual order of historical events, and they’ll be better able to gauge the authenticity of the content if it comes from a trusted source.
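For a sense of the mechanics, here is a minimal sketch of the hash-and-timestamp idea in Python. It is only an illustration of the concept, not Edwards' actual system: an ordinary dictionary stands in for the distributed “history blockchain.” The point is simply that a file's SHA-256 fingerprint, once recorded alongside a time, can later prove the file existed unmodified at that date.

```python
# Toy illustration of the timestamp idea: fingerprint a file with
# SHA-256 and record when that fingerprint was first seen. A plain
# dict stands in for the distributed "history blockchain."
import hashlib
import time

ledger: dict[str, float] = {}  # fingerprint -> first-seen Unix time

def fingerprint(path: str) -> str:
    """Compute the SHA-256 digest of a file's bytes."""
    with open(path, "rb") as f:
        return hashlib.sha256(f.read()).hexdigest()

def register(path: str) -> str:
    """Record the file's fingerprint (and only the fingerprint) with a timestamp."""
    h = fingerprint(path)
    ledger.setdefault(h, time.time())
    return h

def verify(path: str) -> float | None:
    """Return when this exact content was first seen, or None if never.
    Any modification to the file changes the hash, so a match proves
    the bytes are unchanged since the recorded time."""
    return ledger.get(fingerprint(path))
```

The design mirrors the quoted scheme: the ledger holds only hashes and times, never the content itself, so verification reveals nothing about a file beyond the fact that someone registered those exact bytes at that moment.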
A bold measure indeed. But Edwards went further in a final suggestion: creating “a cryptographic ark for the future.” He speculated that AI-generated content has the potential to undermine trust in historical archives and their contents. As such, we should rapidly archive the internet and other content that existed prior to generative AI platforms. This ark might:
. . . contain the entirety of 20th-century media . . . digitized and timestamped to a date in the very near future . . . so that historians and the general public 100 years from now will be able to verify that yes, that video of Buzz Aldrin bouncing on the moon really did originate from a time before 13-year-olds could generate any variation of the film on their smartphone.
I think Edwards was onto something. While his “history blockchain” could have serious personal privacy implications (would the ledger include every file on every computer and server, or just public-facing ones?), the underlying idea remains intriguing.
Edwards’ piece made me think: what if we could use cryptography and/or blockchains to verify that a text was actually written rather than generated? Whether it's a student or the Skiff staffer, people generally copy and paste an AI-generated text into their own document, after which they may tweak it a little. I think we can add friction and verification to that process.
A possible measure could combine word processors with crypto wallets. Microsoft Word, Google Docs, LibreOffice, Skiff, etc. could offer an optional setting whereby the processor locally records whether the document was actually typed out or simply pasted in from an outside source. That record could be added to the document’s metadata. Furthermore (and this betrays my ignorance of the technology here), the user could “sign” the document with their crypto wallet—attesting that they wrote it—and send a hash of both the verification record and the attestation to a public blockchain. A rough sketch of the idea follows.
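Here is a hypothetical sketch in Python of what that provenance record and signature might look like. Everything here is an assumption for illustration: the ProvenanceTracker class is invented, the typed-versus-pasted record is reduced to character counts, and an Ed25519 key pair from the widely used cryptography package stands in for a crypto wallet. A real editor would hook into its input events and publish the hash to a public ledger rather than just print it.

```python
# Hypothetical "provenance metadata" for a document: count typed vs.
# pasted characters, then sign a hash of the document plus that record
# with a personal key pair (standing in for a crypto wallet).
import hashlib
import json

from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

class ProvenanceTracker:
    """Invented for this sketch: a word processor would feed it input events."""
    def __init__(self):
        self.typed_chars = 0
        self.pasted_chars = 0

    def on_keystroke(self, char: str):
        self.typed_chars += len(char)

    def on_paste(self, text: str):
        self.pasted_chars += len(text)

    def record(self) -> dict:
        return {"typed": self.typed_chars, "pasted": self.pasted_chars}

def attest(document: str, tracker: ProvenanceTracker, key: Ed25519PrivateKey) -> dict:
    """Hash the document plus its provenance record, and sign that payload."""
    payload = json.dumps(
        {"doc_sha256": hashlib.sha256(document.encode()).hexdigest(),
         "provenance": tracker.record()},
        sort_keys=True,
    ).encode()
    digest = hashlib.sha256(payload).hexdigest()
    signature = key.sign(payload).hex()
    # In the imagined scheme, `digest` would be posted to a public blockchain.
    return {"digest": digest, "signature": signature}

# Usage: simulate typing most of a document and pasting a small quote.
key = Ed25519PrivateKey.generate()
tracker = ProvenanceTracker()
for ch in "I wrote this myself.":
    tracker.on_keystroke(ch)
tracker.on_paste(" (quoted source)")
print(attest("I wrote this myself. (quoted source)", tracker, key))
```

A reader (or hiring bot, or professor) could then recompute the hash, check the signature against the writer's public key, and see at a glance how much of the document arrived via paste.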
Admittedly, someone could obtain AI-generated text and re-type it into the system. But this scenario at least adds friction. It provides accountability. And in an AI-ridden world, we could use this type of measure to assure readers—whether professors grading essays, techies reading a corporate blog, or citizens reading the news—that a human person probably wrote a given text. It would signify that the writer at least tried to address this problem.
A declaration could also raise awareness of this AI problem. What if people could sign a statement promising never to pass off AI-generated text as their own? What if we promised that the words we write in messages, posts, emails, essays, articles, and books are indeed our own words and not ChatGPT's? It would be limited and largely symbolic, yes, but it could gain the problem some visibility.
If ChatGPT is part of our future, so too are technical countermeasures and public awareness. The transformation of our textual society is not yet decided.
At the end of the day, it’s a personal decision whether or not to use AI tools. ChatGPT, DALL-E 2, and the rest are awe-inspiring. I don’t blame anyone for using them. But we should know what these tools cost us.
We did not read environmental impact studies before adopting Google, smartphones, or social media, because no such studies existed. Those were earth-shattering technological developments, through which humanity gained (information, convenience, connection) and lost (privacy, mental silence, social cohesion) on a global scale. We ought to know what generative AI will do to us—for better and worse—before adopting it at mass scale.
One more thought to wrap up.
Dune is one of my favorite science fiction books. Its author, Frank Herbert, imagined a Luddite space empire in his appendix to the original Dune. Herbert described an event where “the god of machine-logic was overthrown among the masses and a new concept was raised: ‘Man may not be replaced.’” The main events of the book, which take place several generations later, reflect that revolution: humans, not machines, are trained as computers, while space-drug-enhanced pilots serve as interplanetary guidance systems. It's wild.
I kept coming back to that bit of Dune when thinking about ChatGPT. Dune, in its own dystopian way, shows us people who defied rule by technology—by algorithms, AIs, and black boxes—and created a different world. Maybe Herbert was onto something. The novelty of chatting with computers may wear off. People may grow weary of bland AI-generated content.
Even if perfected AI becomes pervasive, I think some people will resist and seek unmediated human connection. Teens are already forming “Luddite Clubs” around screenless, flip-phone lifestyles. Similar movements may reclaim writing if society becomes dependent on AI. I suspect such a restoration will only come after our capacity to write has atrophied—much as Luddite Clubs respond to generational attention deficits—but it remains possible.
One of the most effective countermeasures to ChatGPT might be realizing this technology fills no real need. It is to our general detriment. As people, and a species, I think we are better off without it.
That’s all! Hope you all had a delightful Christmas and New Year’s. I promise a more uplifting post next time—AI was just on my mind for too long!
[1] Admittedly, I’m not a copyright law expert. My work is not protected by formal copyright, so one may make an argument that OpenAI’s modeling falls under fair use. But what about the writers whose writing is protected by copyright, whose works were included in ChatGPT’s training sets? Such legal wrinkles complicate the already ethically murky territory that OpenAI has entered.
[2] This deliberation on eulogies reminded me of Wonderworks: The 25 Most Powerful Inventions in the History of Literature by Angus Fletcher (2021). I don’t recall if he discusses eulogies, but they strike me as a sort of literary tool for psychological healing that his book tends to analyze.
[3] The main problem with the handwritten assignment is that students aren’t really writing by hand now, and cursive is no longer taught in most schools. See this article by Drew Gilpin Faust for more.