When the Text Teaches Back
- Dang Nguyen
- 15 min read

Dang in Melbourne, Feb 2026
Every pedagogy begins with a fiction about control. The teacher imagines that knowledge flows outward, that comprehension can be measured, that the world waits outside the syllabus. But the arrival of generative AI has unsettled that fiction. When students bring machine-written drafts to class, or when teachers use AI to produce examples, the authority of instruction itself flickers. The term AI literacy tries to stabilise this tremor: it names the anxiety, not the solution.
Across universities, ministries, and ed-tech panels, “AI literacy” now circulates like a spell. It promises adaptation, ethical awareness, employability. It promises to teach us how to live with the machine. Yet behind this optimism lies a deeper unease. No one is sure whether AI literacy means learning to use AI or learning to resist it. Should students master prompt engineering, or should they critique the epistemic opacity of large models? Should teachers automate feedback, or preserve the human encounter of interpretation? Each policy document toggles between these poles, caught between efficiency and ethics.
AI literacy, in this sense, is less a curriculum than a coping mechanism. It translates institutional anxiety into teachable form, converting structural uncertainty into learning outcomes. Universities invoke it to show adaptability. Tech firms promote it to justify expansion. Policymakers deploy it to signal foresight. But what it really teaches is how to remain legible within a shifting order of cognition: the literacy itself becomes an artefact of governance, one that trains us not to think with machines but to think like the systems that now judge, sort, and assess us.
The Classroom Loop
In one university seminar I observed, the class was asked to compare an AI-written essay with a student’s own draft. The students could not agree which was which. One described the machine’s text as “competent but soulless.” Another defended it: “Isn’t that how we’re supposed to write here?” The teacher, watching the discussion spiral, opened ChatGPT on the projector and asked the model to critique both essays. Within seconds, it returned with feedback that sounded reasonable—balanced, articulate, evenly distributed between the two samples.
The room fell silent. No one disagreed with the machine, but no one seemed to learn from it either. What it offered was not judgment, but the appearance of judgment: a performance of discernment that dissolved once you looked for the reasons behind it. When the class ended, one student said quietly, “It’s like we’re all pretending to be each other.”
Scenes like this are now common: the automation of writing is also the automation of reflection. The teacher’s authority, once anchored in expertise, becomes indistinguishable from the machine’s capacity to simulate it. The student’s work becomes a negotiation with the plausibility of prose. The entire circuit—student, teacher, text, algorithm—feeds back into itself, generating coherence without comprehension.
The Automation of Judgment
AI literacy appears, at first, to be an extension of “digital literacy”—a skillset for navigating new media. But generative models introduce something qualitatively different. They no longer merely mediate information; they produce it. When language itself becomes automated, what is being trained is not the eye or the hand but judgment itself.
To read an AI-generated paragraph is to inhabit a strange mirror. The prose is plausible, fluent, deferential. It mimics understanding without ever quite arriving at it. Students describe the experience as eerie: the model seems to know what an answer sounds like, yet it cannot remember where knowledge comes from. There is a certain kind of epistemic amnesia baked into these models: they reproduce language stripped of the memory that once gave it meaning. Sources dissolve into patterns, voices collapse into averages. What remains is a residue of past knowledge—compressed, decontextualised, endlessly recombined. The result is not ignorance, but a forgetting of how knowing feels. In absorbing the archive, the model erases the struggle that produced it, turning thought into style and reasoning into syntax.
In classrooms, this produces a quiet inversion. Instead of students trying to understand texts, the texts now seem to be performing the act of understanding back at them. This is where the notion of literacy falters: literacy once implied mastery over symbols—the ability to decode, to interpret, to make meaning from marks on a page. But in the age of generative automation, the marks arrive pre-interpreted. The student’s task shifts from composition to curation, from analysis to discernment. The act of learning becomes a question of deciding what deserves trust when every sentence carries the cadence of authority.
Historically, every literacy project has been a project of governance. Nineteenth-century literacy campaigns sought not just to educate but to discipline, teaching citizens to read the nation as much as the text. The push for computer literacy in the 1980s promised empowerment but in practice trained workers to align themselves with the logics of the information economy. AI literacy continues this lineage. It asks us to become compatible with systems we did not design, to internalise their rationalities as our own. In doing so, it recasts adaptation as moral progress: to be “literate” is to accept the automation of judgment as inevitable, even desirable. The anxiety of obsolescence becomes a lesson plan; the loss of interpretive agency becomes a learning outcome. What looks like pedagogy, in other words, is also an infrastructure of consent.

The Policy Panel
At a government workshop on “AI in Education,” held in a fluorescent room adorned with banners about responsible innovation, a policy adviser announced that “AI literacy is the new critical thinking.” The audience nodded in synchrony. Next, an ed-tech vendor took the stage to promise a platform that would “build AI confidence” among teachers. A sociologist raised her hand to note that confidence was not the same as competence. Her microphone cut out mid-sentence.
By the end of the day, AI literacy had absorbed every available meaning: technical skill, ethical awareness, national competitiveness. It became a floating signifier—elastic enough to align ministries, vendors, and educators who otherwise share little. The phrase’s power lies in its vagueness. It offers the comfort of consensus while deferring the harder question of what kind of knowledge, and whose judgment, automation is reorganising. The promise of literacy functions as an alibi for inaction: as long as understanding can be taught, structural reform can be postponed.
This ambiguity is not incidental but infrastructural. Indeterminacy keeps the policy machine running. Governments can fund literacy initiatives without addressing the political economy of data or the erosion of labour. Universities can perform adaptability without rethinking what counts as learning. Platforms can market pedagogy as a service while refining the very extraction that made such pedagogy necessary. In this fog, discernment becomes just another subscription, and education’s critical project is quietly outsourced to the logic of the dashboard.
In the months that follow, the outcomes of panels like these will appear as white papers and strategy roadmaps. Each document will repeat the same refrain: that AI literacy must prepare citizens for a future of human–machine collaboration. The verbs are always anticipatory—foster, equip, empower—as if literacy could pre-empt the turbulence of automation. Yet what these documents really automate is the policy imagination itself. The world they describe is already settled: technological progress is inevitable, adaptation is virtuous, and critique is an outdated luxury. Within this schema, the task of education is no longer to ask what intelligence is, but how to stay employable in its shadow.
And so the rhetoric of literacy begins to mirror the logic of the machine. Terms circulate, recombine, and stabilise through repetition; the policy field becomes a kind of language model, producing fluent consensus without memory or dissent. To read these documents is to watch a new bureaucracy of meaning at work—one that generates coherence by erasing contradiction. The policy text teaches itself what to think, and we, in turn, are invited to learn from its example.
Pedagogies of Suspicion
The real challenge, then, is not technical but epistemic. How do we teach judgment in an environment where authorship is uncertain? What does it mean to teach reading when the text can no longer be assumed to have a human author?
In practice, educators have responded with two strategies. The first is prohibition: banning or restricting AI tools to preserve the integrity of assessment. The second is incorporation: embedding AI into the learning process so that students can “critically engage” with it. Both rest on the same fantasy of control—that comprehension can be managed by policy or pedagogy. Neither fully acknowledges that generative models unsettle the very notion of intellectual authorship on which education depends.
A more honest response might draw on older traditions of source criticism. Long before AI, scholars devised techniques for reading through mediation rather than around it: the historian’s triangulation of evidence, the sociologist’s attention to standpoint, the philologist’s comparison of textual variants to reconstruct a lost original. This work was not clerical but epistemological—a method for testing authority by tracing its conditions of production. From Karl Lachmann’s nineteenth-century stemmatics to Paul Maas and Sebastian Timpanaro’s refinements of textual criticism, the act of comparing variants was a way of teaching judgment through contradiction.
In that lineage, AI literacy would be less a new competency than a revival of this epistemic craft—an education in provenance. It would train attention to the infrastructures of mediation themselves: to how data, models, and training sets encode their own hierarchies of inclusion and omission. To learn from the text, when the text teaches back, is to recognise that interpretation has become reciprocal: the artefact now models our habits of reasoning as much as we model it. Each prompt and response is a lesson in our own assumptions, a mirror of how authority is produced and performed. To read under these conditions is to study the pedagogy of the machine itself—the way it learns us as we learn it. When a model corrects a student’s grammar with unearned confidence, or elaborates on a half-formed idea until it sounds plausible, it is already teaching back—the automation of reflection made visible in real time.
In this sense, the promise of AI literacy has been domesticated. What began as a possibility for epistemic renewal—a way to teach provenance, mediation, and reflexivity—has been recast as a management problem. Institutions prefer the measurable to the interpretive. The market for AI literacy courses is booming, populated by vendors who translate uncertainty into compliance training. What could have been a pedagogy of discernment becomes a protocol of risk. Reading the machine gives way to regulating its use; ethics is reduced to a checkbox on an accreditation form. The critical craft that once taught us to trace the genealogy of a sentence back through its conditions of production is now replaced by guidelines for “responsible deployment.”
The Mirror Lesson
When the classroom, the policy, and the model all begin to resemble one another, what remains to be taught is reflection rather than mastery. Each exchange with the machine becomes a small rehearsal of our own interpretive reflexes: we correct it, it corrects us, and somewhere in that feedback loop the category of “teacher” dissolves. The lesson is no longer about what to know, but about how knowledge appears when refracted through automation. When a student hesitates before a machine’s answer, unsure whether to trust or to test it, that hesitation itself becomes the curriculum—the brief interval in which thinking is still human.
In that mirror, education confronts its oldest question again: how to distinguish imitation from understanding. The model can already paraphrase, contextualise, even feign empathy. What it cannot do is remember why interpretation matters—that comprehension is an ethical, not a computational, act. Yet the suspicion of imitation belongs largely to the Western lineage of pedagogy, which ties learning to originality and authorship. In many Eastern traditions, imitation has long been a method of attunement rather than obedience: the Confucian student copying the classics to internalise moral rhythm; the calligrapher repeating a master’s stroke until form and gesture align; the Zen novice reciting kōans whose meaning emerges only through repetition. In these lineages, repetition is not the opposite of thought but its discipline: the slow calibration of perception through form. This is where pedagogy still holds its ground not as instruction, but as vigilance—not as transmission, but as care for the difference between an answer and a thought.
When the text teaches back, what it really teaches is humility. It reminds us that understanding was never a one-way transaction, that knowledge has always been co-authored by the tools we use to think. The future of literacy may not lie in learning to out-reason the machine, but in learning to read what it reveals about us. Each exchange with a model is also a glimpse of our own epistemic lineage: our desire to instruct the world into meaning, our surprise when the world begins to respond. Perhaps this is what education has always been—a long conversation with our instruments, from stylus to script to screen. To learn, in the end, is to hear thought echo back through the systems that extend it, and to recognise in that echo both the limit and the dignity of being taught.

The Fantasy of the Competent Machine
The discourse of AI literacy thrives on a fantasy of clarity: that if people simply understood how the model works, they would use it wisely. That with enough technical insight, however partial or performative, we could finally domesticate the unruly machine and restore human judgment to its rightful place. Yet transparency has always been a comforting fiction. The model’s operations can be explained, but not fully witnessed; its reasoning can be visualised, but not shared. What passes for understanding is often just proximity to the interface—a sense of control sustained by fluency rather than knowledge.
Comprehension, in the context of machine learning, is a moving target. Even engineers admit that interpretability remains an open problem. The field’s response—the paradigm of explainability—turns opacity into a design challenge, as if the right visualisation or saliency map could translate statistical inference into human reason. To claim that literacy will bridge this gap is to mistake familiarity for understanding. Yet the promise endures because it flatters our institutional instinct toward legibility. When Microsoft partners with a national teachers’ union to “scale AI literacy efforts quickly” across schools, the initiative is framed as empowerment, but its deeper function is managerial: to render the unknowable knowable through corporate training, to turn epistemic opacity into a deliverable. The gesture reassures rather than explains, promising that clarity can be purchased, taught, and scaled.
Clarity can be measured, credentialed, and funded, but uncertainty cannot. The idea of the competent machine—and the equally competent human who can oversee it—restores the comforting symmetry of control. It allows policy to imagine that comprehension can be standardised, that knowing about the model is equivalent to knowing with it. The result is a pedagogy of reassurance, where literacy functions less as inquiry than as certification of order. What we need, instead, is a pedagogy of friction—one that resists the smoothness of systems and restores difficulty as a condition of understanding.
The classroom thus mirrors a larger cultural confusion. Policy papers speak of “AI readiness” as if the machine were a weather system to be forecast and endured. Corporate training modules promise “prompt fluency” as the new workplace competency. In each case, literacy functions less as pedagogy than as PR—an attempt to repackage structural uncertainty as individual skill. The result is a culture fluent in explanation but impoverished in reflection, where learning means keeping pace rather than taking pause. In chasing competence, we confuse adaptation with understanding—an optimism that keeps the machine running, not the mind alert.
This is not unprecedented. Every technological shift generates its own moral economy of understanding. The printing press gave rise to anxieties about plagiarism and authorship; the camera provoked debates about truth and manipulation. AI belongs to this lineage, but with a twist: it automates the appearance of understanding itself. The illusion is no longer that machines can know, but that they can care—that they can participate in the human drama of meaning. Each technology has promised to extend thought; this one promises to perform it. And so the old pedagogical fiction returns in a new form: the belief that if understanding can be automated, the rest will follow.

Dang Nguyen and David Gunkel in Berkeley, CA (2026)
The Human in the Loop
Some educators propose a compromise: a “human-in-the-loop” pedagogy where AI assists but does not replace critical reasoning. The model drafts, the student refines, the teacher evaluates. This sounds plausible until we ask who, in this loop, is actually learning. The machine improves through feedback; the student learns to edit machine outputs; the teacher learns to assess hybrid prose. The human and the algorithm become co-authors in a continuous process of adjustment.
The danger is not that AI will eliminate human agency but that it will normalise a diminished form of it. When understanding is outsourced to systems that generate coherence without consciousness, the very act of judgment becomes procedural. We learn to trust outputs that sound right, regardless of whether they think. Over time, discernment itself begins to mimic the rhythm of the model—quick, confident, and shallow. The pace of response replaces the depth of reflection; hesitation becomes a flaw rather than a virtue. In this environment, the pedagogical ideal of critical thinking is quietly replaced by a new virtue: calibration. The good student is the one who learns to align with the system’s expectations, to edit within its probabilities, to become legible to the machine’s logic of feedback.
If AI literacy is to mean anything beyond adaptation, it must resist this procedural drift. It must treat automation not as a neutral tool but as a teacher in its own right: a teacher that instructs us in the habits of credulity, speed, and closure. The task of pedagogy, then, is to teach back—to make visible what the model hides, to restore friction where the interface promises ease.
This means designing learning environments that slow down rather than accelerate, that ask students to account for their decisions instead of optimising them. It means cultivating doubt as a method, encouraging the refusal to accept plausibility as proof. True literacy would not train students to write with the machine but to read against it: to ask what kind of reasoning its fluency conceals, what kind of silence coherence requires. In this light, the goal of education is not to perfect the loop between human and machine, but to keep it open, to preserve the space where uncertainty can still be thought.

School of Information, UC Berkeley (2026)
Epistemic Exhaustion
What we call “AI fatigue” in public discourse may be less about overexposure and more about this exhaustion of discernment. People sense that the terms of knowledge are shifting beneath them. The flood of synthetic language produces a kind of epistemic vertigo: we scroll through essays, press releases, and academic papers that all sound equally competent, equally meaningless. The style of expertise has become automated—but not before expertise itself had already been hollowed out by performance. Long before the machine perfected the tone of authority, human institutions had reduced it to a posture: the careful toeing of lines, the branding of intellect, the substitution of certainty for curiosity. What collapses now is not the authority of knowledge, but the appetite for uncertainty that once made knowledge possible.
If renewal is possible, it will not come from better tools or faster comprehension, but from recovering the stamina to think without guarantee. The task ahead is not to compete with generative systems, but to reinhabit the space they evacuate: the pause, the doubt, the provisional. Scholarship, if it is to mean anything now, must once again become a practice of attention. Of lingering with the fragment, the inconsistency, the unmodelled remainder. To write or teach in this moment is to resist the smoothness of explanation, to cultivate a literacy that begins where certainty ends.
In this landscape, the figure of the teacher stands in for a wider institutional unease. Universities, think tanks, and newsrooms all confront the same paradox: how to claim authority in a world where its gestures can be perfectly imitated. The performance of expertise—its cadence, its composure—now circulates faster than expertise itself. AI literacy, in this light, is less a solution than a symptom: a gesture toward instability without addressing its cause. What is being redefined is not simply what counts as knowledge, but the architecture through which it takes form: the networks, platforms, and protocols that now confer its credibility.

Berkeley campus (2026)
Teaching Judgment
To teach judgment is to slow things down. It is to resist the automation of coherence, to make room for ambiguity and doubt. This may be the most subversive form of AI literacy: not teaching how to use the machine, but how to pause before believing it.
Such a pedagogy would start not from skills but from sensibility. It would ask students to attend to tone, tune, and temperament—the small signatures of mechanical reasoning that reveal how the model “thinks.” It would train the ear to recognise when fluency replaces thought, when explanation becomes performance. It would reclaim the classroom as a site of shared uncertainty rather than scripted certainty. To teach judgment is to insist that hesitation has value, that the interval between question and answer is where thinking happens.
This is not a return to humanism, but a renewal of critique. The goal is not to purify knowledge from machines but to understand how machines participate in knowledge-making. A pedagogy of judgment would teach students to read artefacts—outputs, datasets, even dashboards—as arguments about the world, each carrying assumptions about what counts as evidence or reason. It would reframe literacy as an ethics of attention, a discipline of noticing before concluding. Only then could AI literacy become more than institutional compliance: it would become a practice of care for how understanding itself is automated, and for what might still resist automation.

Best Westerns, Worst Easterns (2026)
When the Text Teaches Back
The phrase when the text teaches back captures the reversal at the centre of this moment. To read or write with generative AI is to confront a mirror of our own intellectual habits—the templates of argument, the rhythms of exposition, the aesthetics of clarity. The machine reproduces them with uncanny precision, sometimes beautifully, sometimes grotesquely, until our methods of sense-making appear back to us as style. What it teaches is reflection rather than content: what kind of reader, what kind of teacher, what kind of society we have become.
If the industrial revolution mechanised labour, the generative turn mechanises learning. Its promise is efficiency; its danger, the erosion of judgment. The true test of our institutions will not be how quickly they adapt but how deeply they can sustain the slow, difficult work of discernment. To be literate in this new sense is to remain teachable—to recognise when understanding itself has been automated, and to insist that thought must still, somehow, think back.
The challenge, then, is not merely institutional but epistemic. The more fluently machines perform understanding, the easier it becomes to forget that thought is not a process of output but a posture of relation—a way of staying with difficulty rather than resolving it. Judgment depends on friction, while meaning takes time. If automation promises to ease the burden of thinking, education must insist on its weight. To learn is to carry that weight with awareness, not to delegate it to the systems that would carry it for us.
To write in such a time is to learn from the mirror without mistaking it for the world. The text teaches back not only what we have built, but what we have neglected: the labour of attention, the patience of thought, the humility to remain unfinished. The future of pedagogy may lie less in mastering new systems than in recovering these older disciplines of care. If generative AI reveals anything, it is that intelligence has never been solely about speed or precision, but about how we bear the weight of interpretation. If the machine now completes our sentences, perhaps our task is to leave them open: to let thinking, for once, refuse to end.
29.10.2025
Melbourne, Australia
Posted 20.02.2026
