The Obsolescence Trap: Why "Choice" Is Not Enough
January 13, 2026
Response to: The Agency Paradox (January 8, 2026)
In the ongoing debate over artificial intelligence in the classroom, the battle lines have largely been drawn between prohibition and permission. On one side stands Dr. Rob Lively, recently profiled in the blog post "The Agency Paradox," who argues for a total ban on AI to protect the "human agency" of students. On the other side stands the author of "The Agency Paradox," who counters that agency is impossible without choice, and that students must be allowed to decide for themselves whether to use these tools.
While "The Agency Paradox" offers a compelling defense of student autonomy, it stops short of the most uncomfortable truth. The problem with banning AI isn't just that it violates a student's right to choose. The problem is that it creates a class of professionals who are functionally illiterate in the primary language of the future economy. By framing AI use as merely a matter of personal preference—a stylistic choice like using a fountain pen versus a ballpoint—we obscure a harsh reality: students who do not learn to integrate AI into their cognitive workflows are not just exercising "analog agency." They are being actively left behind by those who do.
The Current Debate: Protection vs. Permission
To understand why the stakes are so high, we must first look at where the conversation currently stands. In "The Agency Paradox," the author accurately summarizes Dr. Lively’s position: that the struggle of writing is the learning. Lively fears that when AI performs tasks like prewriting or revision, students "don't engage with material, and they don't develop the discourse that makes them anthropologists or biologists or lawyers." For Lively, the solution is to "consider not allowing AI for any writing task," a stance rooted in a genuine desire to protect the developmental process of the student.
The author of "The Agency Paradox" rightfully critiques this paternalism. They point out the contradiction at the heart of Lively’s argument: "he wants students to develop agency by removing their capacity to choose." The blog argues that true professional development requires "ethical opacity"—the right of a creator to control their methods and be judged solely on the quality of their output. The author concludes that we should trust students to use AI as a "thinking partner," creating assignments that require high-level judgment rather than rote labor.
This critique is sound, but it treats the adoption of AI as an optional luxury—a tool one might pick up or put down at will. This framing fails to account for the speed at which the professional world is pivoting. The question is no longer "Should we allow students to use AI?" The question is "Can we afford to let them graduate without it?"
The Illusion of the "Pure" Writer
Lively’s argument relies on a romanticized view of the "struggle" of writing. He argues that the cognitive architecture built through unassisted prewriting and drafting is the only way to professionalize students. However, this view ignores how professionalization actually works in 2026.
In the modern workforce, value is rarely generated by the sheer ability to produce words from a blank page. Value is generated by the ability to synthesize vast amounts of information, iterate rapidly on ideas, and audit complex outputs for accuracy and tone. A lawyer who refuses to use AI for legal research or initial brief drafting isn't "more professional" than their peers; they are slower, less efficient, and ultimately less valuable to their clients. A biologist who insists on manually coding data visualizations rather than using AI to generate code is wasting cognitive resources that could be spent on analysis.
When Lively praises students for "taking a stand" against AI, he is praising them for disarming themselves before entering a battlefield. The "struggle" he wants to preserve is often the struggle of low-value cognitive labor—the very labor that the market is rapidly automating. By protecting students from this specific type of struggle, we are denying them the opportunity to engage with the new struggle: the struggle of curation, verification, and prompt engineering.
The Competency Gap
This brings us to the core danger of the "analog teaching" movement. If one student is trained in a Lively-approved, AI-free environment, they may indeed become a proficient writer in the 20th-century sense. They will know how to construct a paragraph and organize an essay. But consider a second student—one who ignores the ban or attends an institution that embraces the technology.
This second student is learning a different skillset. They are learning how to:
- Delegate Cognitive Load: They know which parts of a project are best handled by AI (summarization, formatting, initial ideation) and which require human intervention (ethics, final strategy, emotional nuance).
- Iterate at Speed: They can produce ten variations of an argument in the time it takes the analog student to produce one, allowing them to choose the strongest option rather than settling for the first draft.
- Audit and Verify: Because they are accustomed to reviewing AI output, they are developing critical reading skills that are sharper than those of students who only read their own work. They are learning to spot hallucinations and bias, a skill that is becoming a prerequisite for digital literacy.
When these two students enter the workforce, the disparity will be immediate. The analog student will be overwhelmed by the volume and velocity of modern workflows. The AI-integrated student will navigate them with ease. The analog student will view the "blank page" as a hurdle to be overcome by willpower; the AI student will view it as a prompt interface to be manipulated by strategy.
The Anatomy of the New Workflow
Critics like Lively often imagine that AI use involves a student typing "Write me an essay about biology" and hitting print. If that were the case, the ban would be justified. But that is the workflow of a novice. The professional workflow—the one we should be teaching—is a rigorous, recursive loop.
In a sophisticated AI workflow, the human is not the scribe, but the director. The process begins with prompt engineering, where the student must articulate their intent with extreme precision. If the output is generic, the student must analyze why. Did they fail to provide enough context? Did they fail to specify the rhetorical audience? This is high-level rhetorical analysis happening in real time. Once the draft is generated, the "struggle" shifts to verification and integration. The student must check every citation for hallucinations (a common AI error) and audit the tone for robotic syntax. They must rewrite sections that lack human insight and inject the specific connections that only they can see.
This is not "skipping the work." It is shifting the work up the value chain. The student spends less time agonizing over comma placement and more time evaluating the structural integrity of the argument. By banning the tool, we are preventing students from practicing this high-level executive function. We are keeping them in the weeds of syntax when they should be learning the architecture of arguments.
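To make the shape of this recursive loop concrete, here is a minimal sketch in Python. The `generate` and `verify` callables are hypothetical stand-ins, not a real API: one represents a model call, the other the human (or automated) audit that folds problems back into the next prompt.

```python
# Sketch of the draft -> audit -> re-prompt loop described above.
# "generate" and "verify" are hypothetical stand-ins for a model call
# and a verification pass; this is an illustration, not an implementation.

def refine(prompt, generate, verify, max_rounds=3):
    """Generate a draft, audit it, and fold the audit findings back
    into the prompt until the draft passes or rounds run out."""
    draft = generate(prompt)
    for _ in range(max_rounds):
        problems = verify(draft)          # e.g. unverified citations, robotic tone
        if not problems:
            break                         # the draft passes the audit
        prompt += " Revise to fix: " + "; ".join(problems)
        draft = generate(prompt)
    return draft
```

The point of the sketch is where the human effort sits: not in producing the draft, but in writing `verify` well, since the loop is only as good as the audit that drives it.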
Reframing the "Cheating" Narrative
Both Lively and the "Agency Paradox" author touch on the idea of resistance. Lively sees student resistance to AI as a sign of integrity; the "Agency Paradox" author sees it as a valid choice. But we must be careful not to conflate "resistance" with "virtue."
In many cases, the "stand" against AI is actually a retreat into a comfort zone. It is easier to cling to the writing methods we were taught in high school than to learn the frustrating, often counter-intuitive logic of Large Language Models. Learning to prompt effectively is difficult. Learning to fix broken AI outputs is tedious. But it is necessary work.
If we allow students to opt out of this learning curve under the guise of "protecting their voice," we are doing them a disservice. We are effectively telling them that it is acceptable to be technologically illiterate as long as it feels "authentic." We would never accept this logic in other domains. We would not allow a math student to refuse to use a calculator because they feel long division builds better character. We would not allow an architecture student to refuse CAD software because hand-drafting feels more "artistic." We recognize that while the manual skills have value, they are not a substitute for professional tool proficiency.
Conclusion: The New Definition of Agency
"The Agency Paradox" is correct that we must trust students. But trust implies responsibility. We must trust students not just to write, but to evolve.
Dr. Lively’s fear that students will "outsource their thinking" is valid only if we keep the definition of "thinking" stagnant. If thinking means "generating syntax," then yes, AI outsources it. But if thinking means "architecting solutions," "evaluating truth," and "directing intelligence," then AI does not replace human thought—it amplifies it.
The future belongs to the amplifiers. It belongs to the students who view AI not as a threat to their identity, but as an exoskeleton for their intellect. By framing this technology as a danger to be avoided rather than a skill to be mastered, educators like Lively are not protecting students’ agency. They are ensuring their obsolescence. The students who embrace the tool—regardless of institutional bans—will be the ones who define the discourse of the next generation. The ones who abstain will be left with their "pure" writing, perfectly crafted for a world that no longer exists.