In South Korea’s ultra-competitive university admissions landscape, a single paragraph in a student’s School Life Record Book (SLRB) can shape their academic future. Among the most influential sections is the subject-specific teacher evaluation—a narrative written by teachers that describes students’ learning behavior, collaboration, character, and growth in each subject. Unlike test scores, it is meant to reflect human judgment. A teacher’s voice. A student’s lived reality.
But that voice is rapidly becoming artificial.
In 2023, over 70% of teachers in Seoul reported using ChatGPT to assist in writing these evaluations. Education software companies now offer specialized platforms that allow teachers to input keywords or upload spreadsheets of student behavior logs, which AI then transforms into full paragraphs of refined prose. With just a few clicks, vague classroom notes are polished into official records—sometimes without the teacher writing a single sentence.
What began as a tool to ease administrative overload is now quietly reshaping the very foundation of academic trust. University admissions officers have begun deploying AI-detection tools to examine student records, reporting that over 97% of suspicious entries showed signs of machine generation. Some universities are already planning to strengthen interviews, not to test knowledge, but to verify whether the profile they received reflects a real person.
This is no longer a question of convenience or efficiency. It is a question of authorship. Of fairness. Of whether the student record, long considered a bridge between human observation and academic evaluation, is becoming a scripted document shaped by algorithms, rather than insight.
If AI can write what a teacher is supposed to know, what remains of the relationship between student and school? And if universities can no longer trust what they read, how much longer can the system hold?
AI in the Classroom: From Burden to Automation
The use of AI in South Korean classrooms did not begin as a strategy to deceive, but as a response to exhaustion.
Teachers across the country are expected to not only instruct, mentor, and manage, but also to document—in vivid, personalized detail—every student’s academic behavior, learning attitude, and interpersonal growth. These records, required for each subject and for every semester, are uploaded to the National Education Information System (NEIS) and serve as crucial reference points for university admissions. In competitive schools, they can number in the hundreds per teacher, every year.
Faced with this burden, many educators have turned to generative AI tools like ChatGPT, Claude, or locally developed platforms. Some simply paste student keywords into AI prompts—“led discussion,” “asked questions,” “collaborative”—and let the model return a polished entry.
Others rely on enterprise-level services designed specifically for Korean schools. One such company allows teachers to download student logs from NEIS and upload them in batches; the system automatically generates full drafts of subject-specific teacher evaluations ready for review. Another startup allows students to submit assignments via an online portal, which then feeds the content into an AI model that drafts performance reviews for the teacher.
In many schools, this practice is now normalized.
The logic is simple: AI helps teachers save time, avoid vague or repetitive writing, and meet the expectations of parents who increasingly treat the student record as a competitive asset. In a system where a well-written sentence can imply competence, character, or leadership, precision of language becomes its own currency. And AI, above all, delivers fluency.
Yet, as the practice spreads, so does the risk: the line between record and fabrication begins to blur. Teachers remain the final approvers of AI-generated text, but the authorship—and by extension, the authenticity—becomes harder to trace.
What happens when documentation no longer reflects firsthand insight, but algorithmic interpolation?
The University Backlash: When Trust Fails
While AI offers relief to overworked teachers, its expanding presence in student records has triggered growing concern among South Korea’s universities.
Admissions officers, long accustomed to treating subject-specific teacher evaluations as a reliable proxy for a student’s classroom behavior and learning attitude, are beginning to see signs that the text they read may not reflect genuine teacher observation. Some describe subtle but unmistakable patterns: identical sentence structures across applications, polished but oddly impersonal tone, and abstract praise that lacks contextual grounding. In response, several institutions have quietly begun using AI-detection software to flag suspicious records.
According to one university’s internal analysis, over 97% of entries flagged for review showed strong signals of AI authorship.
This is not simply a matter of academic style—it’s a credibility crisis. If teacher evaluations no longer represent a teacher’s voice, their role as evaluative tools collapses. Unlike grades or test scores, which measure performance numerically, these evaluations are valued precisely because they are human, interpretive, and individualized. If AI weakens that human link, universities may have no choice but to develop new mechanisms for verification.
Already, some institutions are moving to increase the weight of interviews in their admissions process. These interviews, once used primarily for supplemental evaluation, are now being considered as a way to assess whether a student’s actual personality and ability align with the polished narrative presented in the record. Some officials even suggest that teacher-written evaluations will soon be read less as evidence of growth and more as documents requiring cross-examination.
Compounding the issue is a policy shift planned for 2028: Korea’s internal academic grading system will be reduced to a five-tier scale, weakening its power to distinguish applicants. With test scores less decisive and personal narratives potentially AI-assisted, universities are entering uncharted territory.
The admissions process is built on trust—between students, teachers, and institutions. As AI becomes harder to detect and easier to use, that trust is no longer assumed. It must be re-earned, or redefined.
Is It Unfair—or Just the New Normal?
As universities scramble to adapt, a deeper and more complicated debate is emerging: is the use of AI in student records inherently unethical—or is it simply a reflection of evolving educational norms?
From one perspective, AI-assisted writing appears to undermine the spirit of personalized evaluation. The teacher’s written voice—once a unique signal of attention, memory, and judgment—risks becoming a generic output, shaped more by prompt engineering than classroom experience. Critics argue that this shift erodes the fundamental premise of holistic admissions: that schools provide insight, not automation.
But others see it differently.
For many teachers, AI is not replacing judgment—it’s helping articulate it. The evaluations are still based on their notes, observations, and intentions. The AI merely transforms fragmented impressions into complete sentences. In this view, using generative models is no different from relying on spellcheckers, grammar assistants, or template banks—tools that have long been accepted in professional and educational writing.
And there’s another dimension: equity. Well-resourced private schools and elite high schools have long had access to institutional support that ensures clean, compelling student records. If public school teachers can now use AI to achieve comparable fluency, does that not level the playing field rather than tilt it?
At the core of this debate lies an unresolved question: What are universities actually evaluating? Is it the authenticity of teacher expression? The raw truth of student behavior? Or simply the final shape of the document? If the factual content remains accurate—and if no information is fabricated—is language polishing still a violation?
As AI tools become more accessible, affordable, and undetectable, attempts to draw bright ethical lines become increasingly fraught. In a system built on performance, clarity, and competition, it may be that the use of AI is not a distortion of merit—but its next logical evolution.
The Policy Vacuum: Navigating Without a Map
Despite the accelerating use of AI in student evaluations—and the mounting concerns it raises—South Korea’s education authorities have yet to offer any formal guidance. As of mid-2025, there is no national policy, ethical framework, or technical guideline addressing how or whether AI should be used in the composition of school records.
In the absence of centralized rules, schools are improvising. Some principals quietly discourage the use of generative tools. Others leave it entirely to individual teachers. Universities, for their part, are developing their own safeguards—from using AI-detection software to expanding interviews. But this piecemeal approach leaves both educators and applicants in a state of uncertainty. What is allowed? What crosses the line?
Calls for government action are growing. Some admissions experts argue that the current format of teacher-written evaluations should be abandoned altogether in favor of objective checklists or structured activity logs—formats that leave less room for stylistic manipulation. Others advocate for transparency rules: if AI is used, it should be declared, just as some international universities now require applicants to disclose AI assistance in essays and statements.
Globally, precedents are already emerging. In the United States, a number of colleges—including Caltech and the University of California system—have begun issuing AI-use disclosure policies for admissions materials. Singapore’s Ministry of Education has introduced digital literacy guidelines addressing the ethical use of AI in schoolwork. South Korea, by contrast, remains largely silent—despite its reputation as a global leader in education technology.
The absence of official regulation creates more than confusion; it creates inequality. Schools with strong leadership or legal caution may restrict AI use entirely. Others may embrace it quietly, giving students an unseen advantage. Without a common standard, the same student could be evaluated through entirely different lenses depending on how their teacher, school, or region interprets AI ethics.
In a system where stakes are this high, neutrality is not safety—it is risk. And in the silence of policy, the gap between principle and practice grows wider every semester.
The Human Cost: The People Inside the Dilemma
Behind the policy gaps and institutional anxieties are real people navigating real dilemmas.
For teachers, AI has become both a lifeline and a source of guilt. Many describe feeling torn: using AI makes their jobs more manageable, especially in large schools where a single teacher may be responsible for over 200 evaluations per semester. Yet some worry that they are compromising the very integrity they were trained to uphold. What began as a tool to smooth out grammar or eliminate redundancy now feels like a crutch that might betray their professional judgment.
Students occupy an equally uncertain space. Few know whether AI was involved in their teacher’s evaluation—and fewer still know whether that matters. For those who believe their record is truly reflective of their behavior, there is growing concern that universities might view it with suspicion simply because of its polished tone. For others, there's quiet resentment: if some teachers use AI to embellish evaluations and others do not, what does that mean for fairness?
Parents, too, are increasingly invested. In competitive districts, some push for more detailed, eloquent evaluations, regardless of how they are generated. Others voice concern that AI may create "false positives"—inflated portrayals of students that crumble under interview scrutiny.
Universities, caught in the middle, are being forced to adapt quickly to an unpredictable new landscape. Admissions officers now must not only assess the credibility of what’s written, but also decipher who—or what—wrote it. The very idea of “knowing the student” is being redefined.
Each of these groups is responding to a shifting terrain in good faith. But good faith is not enough when the rules are unclear, the tools are unregulated, and the outcomes are unevenly distributed. What is emerging is not a debate over technology—but over trust, authorship, and the invisible labor behind every line of text.