
Address: 30, Hasinbeonyeong‑ro 151beon‑gil, Saha‑gu, Busan, Korea  |  Tel: +82 507‑1311‑4503  |  Online newspaper registration No: Busan 아00471

Date of registration: 2022.11.16  |  Publisher·Editor: Maru Kim  |  Juvenile Protection Manager: Maru Kim

© 2026 Breeze in Busan. All Rights Reserved.


AI Is Changing Study Faster Than Schools Can Adapt

Generative AI has entered students’ daily routines, but exams, curricula, and national policy remain anchored in pre-AI assumptions.

Jan 22, 2026
13 min read
Maru Kim

Editor-in-Chief

Maru Kim, Editor-in-Chief and Publisher, is dedicated to providing insightful and captivating stories that resonate with both local and global audiences.


Homework has become a quiet collaboration between teenagers and language models. A middle schooler in Seoul copies a math problem into a chatbot, waits a moment, and receives a neatly rendered solution with an explanation of each step. His teacher will see only the correct answer on paper the next morning. Half a world away, graduate students in Berlin use generative models as writing partners, reformulating paragraphs and refining arguments well past midnight. Their professors grade essays without asking whether the first drafts originated inside the student’s head or inside a transformer model. The common rule governing exams (“AI strictly prohibited”) remains intact, as if the prohibition could extend backward into the everyday practice of learning.

The resulting triangle of students, teachers, and exams has become difficult to ignore. Students now study with AI. Teachers prepare lessons and worksheets with AI. Exams presume a world without AI. The contradiction rarely appears in policy documents or government press releases, yet it defines the actual lived environment of education in 2026.

Evidence collected by the OECD over the past two years confirms the shift in student behavior. Surveys across European secondary schools show adoption rates reaching levels associated with mature consumer technologies: more than seventy percent of students report using generative tools for homework, concept explanation, comprehension checks, or summarization; among university students in Germany, usage reaches ninety-four percent. The OECD’s more revealing detail concerns the purpose of use. Students describe AI as a study companion or partner rather than as a search engine or calculator. The framing signals a displacement of cognitive labor rather than a mere extension of digital convenience.

Teacher usage patterns look different. Roughly one-third of teachers report using AI, primarily for administrative tasks, lesson outlines, and material preparation. Few incorporate the technology into live classroom interaction or formative feedback loops. The mismatch between student and teacher adoption widens with age and with subject matter. In Estonia, where high-school students report ninety percent usage, only half of high-school teachers engage with AI in any capacity. The hierarchy of innovation has inverted; students now experiment more aggressively than the adults tasked with structuring their learning.

AI in Education: A Snapshot of Misalignment

Students already study with AI, teachers use AI mostly behind the scenes, and high-stakes exams still assume a world without AI. The summaries below capture that three-way gap and the performance–mastery split observed in the Türkiye math experiment.

Who Actually Uses AI in the Learning Cycle?

Survey data from multiple countries show a sharp contrast: students rely on AI for everyday study, while teachers adopt it more cautiously and mostly for preparation.

Students – Regular AI Use (share of students who report using AI weekly or more for school work):

  • Lower Secondary (Europe, selected): ≈ 70–80%
  • Upper Secondary (Estonia): ≈ 85–90%
  • University (Germany): ≈ 94%

Teachers – AI Use in Workflows (share of teachers who report using AI in the past year):

  • Global Average (OECD sample): ≈ 35–40%
  • High-adoption systems (e.g. Singapore, UAE): ≈ 70–75%
  • Low-adoption systems (e.g. France, Japan): ≈ 15–20%

The figures show approximate ranges from recent OECD and national surveys. The pattern matters more than the exact percentages: students have normalized AI as a study companion, while teacher use remains uneven and focused on preparation, not live learning.

Three Worlds in One System

Everyday schooling now operates across three different realities: students learn with AI, teachers prepare with AI, and examinations still imagine a pre-AI classroom.

Students – AI as everyday study partner:

  • Homework help
  • Concept explanations
  • Summaries and drafts

Teachers – AI mostly behind the scenes:

  • Lesson planning
  • Material adaptation
  • Administrative work

Exams – AI strictly prohibited:

  • High-stakes tests
  • University entrance
  • Credentialing exams

The learning process stretches across three incompatible rulesets. AI is normal for study, optional for preparation, and banned for evaluation. Policy design rarely acknowledges this fracture.

When AI Inflates Practice but Undermines Mastery

A randomized trial in Türkiye measured how AI-assisted math practice translated into unaided exam performance. Scores below are indexed to the control group (no AI) = 100.

No AI (Control) – baseline group, traditional practice and exam:

  • Practice score (index): 100
  • Exam score (index): 100

AI – Answer-Giving Model – students could request full solutions on demand:

  • Practice score (index): ≈ 148
  • Exam score (index): ≈ 83

AI – Tutor-Style Model – model gave hints and questions, not final answers:

  • Practice score (index): ≈ 227
  • Exam score (index): ≈ 102 (close to control)

In the answer-giving condition, practice performance surged but exam performance fell below the no-AI group. The tutor-style condition produced large gains in practice while keeping exam outcomes roughly aligned with the control. The design of AI support, not AI itself, determined whether practice translated into mastery.
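Purely as illustrative arithmetic (the dictionary layout and function name below are mine, not the study's), the approximate indexed scores quoted above can be re-tabulated to show how far assisted practice overstated unaided performance in each condition:

```python
# Approximate indexed scores from the Türkiye trial, control group = 100.
# Structure and names are illustrative, not taken from the study itself.
conditions = {
    "no_ai_control": {"practice": 100, "exam": 100},
    "answer_giving": {"practice": 148, "exam": 83},
    "tutor_style":   {"practice": 227, "exam": 102},
}

def practice_exam_gap(scores: dict) -> int:
    """How far assisted practice overstates unaided exam performance."""
    return scores["practice"] - scores["exam"]

for name, scores in conditions.items():
    print(f"{name}: practice={scores['practice']}, "
          f"exam={scores['exam']}, gap={practice_exam_gap(scores)}")
```

On these numbers, the answer-giving condition opens a 65-point gap between assisted practice and the unaided exam, while the tutor-style condition, despite a far larger practice boost, leaves exam performance essentially at the control baseline.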

National and municipal governments, meanwhile, maneuver to capture the moment rhetorically. Press conferences and strategic plans announce “AI innovation ecosystems,” “future learning hubs,” and “digital textbook revolutions.” The language carries the momentum of industrial policy rather than the discipline of pedagogy. Absent from most public briefings is a question that education researchers pose with increasing urgency: what portion of learning can safely be outsourced to a model, and what portion must remain inside the student’s own reasoning?

Research over the past eighteen months indicates that the boundary is not cosmetic. Learning hinges on struggle, error, and self-explanation—processes that decline when models supply polished answers without delay. In multiple countries, students achieve near-perfect marks on AI-assisted practice sets, only to falter when asked to replicate the reasoning unaided. The contrast between effortless practice and fragile mastery has emerged as the defining feature of the generative era in education.

The gap between public enthusiasm and cognitive reality sets the stage for a more searching investigation. Countries are not merely deciding whether to admit AI into schools; they are deciding how to integrate a technology that can either sharpen thinking or smother it, depending on design choices that remain invisible to the casual observer. The case studies that follow (Türkiye’s randomized mathematics trial, Europe’s tutor-style pilots, and the rapid policy deployments unfolding in South Korea) expose the stakes with greater clarity than any abstract debate over innovation or risk.


When Practice Soars and Mastery Collapses

The most striking demonstration of generative AI’s double edge surfaced neither in Silicon Valley nor in East Asia, but in a high school in Türkiye. Researchers affiliated with the Wharton School partnered with teachers to run a semester-long mathematics study built around a simple question: what happens when students are allowed to solve practice problems with a language model, but must sit for the actual exam without assistance?

The students were divided into three conditions. One group received no access to AI; another used a stripped-down, answer-giving model that displayed full solutions upon request; a third worked with a more deliberate system designed to emulate a human tutor, nudging students with hints and questions rather than revealing the final answer. Teachers delivered the same instruction to all classes, then stepped aside as the experiment unfolded.

The early results astonished both faculty and parents. In practice sessions, scores surged across the AI-enabled groups. The answer-giving model lifted performance by nearly half. The tutor-inspired system more than doubled it. Worksheets filled with correct answers. Students finished sets in record time. The atmosphere resembled the euphoria that calculators once produced in primary schools: the sense that a previously laborious subject had been tamed by technology.

Euphoria evaporated during the exam. Students who had relied on the answer-giving model performed worse than those who had practiced without AI at all. The gap proved neither subtle nor statistically ambiguous; the decline measured in double digits. The tutor group avoided the collapse, maintaining parity with the control group, but without the spectacular gains observed during practice. The pattern exposed a gap between two different cognitive activities: producing correct answers when the machine scaffolds the path, and reconstructing reasoning under one’s own power.

Researchers dissected the interaction logs to understand the divergence. Students using the answer-giving model rarely articulated intermediate reasoning. They pasted problems, collected solutions, and moved on. Error detection, self-explanation, and the frustrating micro-delays that strengthen long-term retention went missing. Students using the tutor-style system engaged in slower, more effortful exchanges—attempting steps, requesting clarification, revising arguments. Their homework took longer, and their practice scores lacked the dramatic spike of their peers, yet their exam performance did not collapse.

The contrast revealed a principle that cognitive scientists have documented for decades: learning depends less on the correctness of the final answer than on the struggle to produce it. Generative models dissolve struggle with alarming efficiency. A student who sees a perfect solution once can easily mistake recognition for mastery, a phenomenon psychologists describe as the “illusion of competence.” The illusion rarely survives a closed-book test.

It would be misleading to treat the Turkish trial as a universal verdict on AI in education. The study unfolded in a single subject area, within a particular secondary curriculum, under a design that withheld AI during assessments. The external validity remains limited. Yet as a window into emerging failure modes, the experiment carries unusual clarity. Performance soared during assisted practice; mastery crumbled when assistance vanished. The vending-machine version of AI, optimized for responsiveness and efficiency, replaced the very mental work that learning requires.

The episode also illuminated a policy fault line. Schools and governments often assume that homework and exams measure the same underlying capability. The Turkish results challenge that assumption. Homework had become a collaborative task between humans and machines, while exams preserved a pre-AI definition of individual cognition. No educational system built around that contradiction can withstand prolonged exposure to generative tools.


The Tutor That Doesn’t Give Answers

If Türkiye demonstrated the risks of delegating reasoning to a machine, a separate strand of experiments has begun to explore the opposite architecture: AI not as an oracle, but as a patient tutor that withholds the solution long past the point of convenience.

Harvard researchers staged the comparison inside an introductory physics course known for its dense conceptual hurdles. A group of students took the traditional route, working through problem sets and office hours in a classroom that followed the “active learning” model faculty had refined over years. A second group interacted with a generative model tuned to ask clarifying questions, surface misconceptions, and nudge students through intermediate steps. The model rarely delivered direct answers. Instead it asked why a certain assumption seemed plausible, or how a diagram might change if a force vector reversed direction. The sequence resembled the cadence of a seasoned tutor who lets silence linger long enough for frustration to do its work.
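As a hypothetical sketch of that withholding principle (nothing below comes from the Harvard system; the ladder text and function are invented for illustration), the escalation policy such a tutor embodies can be expressed as a hint ladder that reserves concrete steps for repeated failed attempts and never opens with the solution:

```python
# Hypothetical "hint ladder" tutoring policy: support escalates with failed
# attempts, and a concrete step is the last resort, never the first response.
HINT_LADDER = [
    "Restate the problem in your own words. What exactly is being asked?",
    "Which given quantity or assumption have you not used yet?",
    "Try the step you are unsure about and explain why it might work.",
    "Here is one intermediate step -- now complete the rest yourself.",
]

def next_prompt(failed_attempts: int) -> str:
    """Return tutor support scaled to effort, capped at the deepest hint."""
    level = min(failed_attempts, len(HINT_LADDER) - 1)
    return HINT_LADDER[level]
```

The design choice the trials point to lives in that `min` cap: even unlimited failure never yields a finished answer, so the student's own reasoning remains the only route to completion.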

When the two cohorts were evaluated, the AI-tutored group moved faster, reached higher levels of comprehension, and reported stronger motivation. The most surprising detail surfaced not in test scores but in the student interviews. Many described the AI not as a machine but as a partner that insisted they continue thinking. The refusal to provide answers—engineered rather than accidental—sustained cognitive effort in a way familiar to anyone who has struggled under a demanding instructor.

A separate trial at Stanford took the premise further. Instead of helping students directly, the generative model coached human tutors who worked with adolescents from disadvantaged backgrounds. Tutors received real-time prompts suggesting when to delay hints, when to probe, and when to praise. The intervention did not alter the curriculum or the student population; it altered the quality of feedback delivered by relatively inexperienced tutors. The gains proved uneven but significant: novice tutors approached the performance of veterans, and students with weaker prior performance improved disproportionately. The model amplified human pedagogy rather than replacing it.

In the United Kingdom, an experiment with secondary-school science teachers traced a different benefit. Generative tools reduced preparation time without depressing instructional quality, freeing teachers from the bureaucratic weight that had long constrained their ability to pay attention to the classroom. None of the trials claimed that AI could eliminate the need for human educators. Instead, each treated AI as a buffer against scarcity—of time, of expertise, of individual feedback.

Pilot programs in Europe reveal how policy can scaffold such tools instead of unleashing them without structure. Estonia distributed AI accounts to high-school students and teachers while aligning the tools with the national curriculum and language; Greece launched a controlled high-school deployment with teacher training and evaluation procedures; France targeted its investment at the administrative workload of teachers, building a system that answered routine questions about scheduling, leave, and human resources so that educators could allocate emotional and intellectual energy to students.

Despite differences in scale and ideology, the pilots share a single intuition: learning improves not when AI supplies answers, but when AI enforces the discipline of thinking. The principle sounds counterintuitive in a political environment that treats speed, efficiency, and productivity as public virtues. Yet education rarely rewards speed. It rewards friction, error, and the reorganization of mental models under pressure. The most effective AI deployments protect that friction rather than dissolve it.

The divergence between the European pilots and the Turkish classroom points toward a choice that governments have not fully articulated. AI can make learning appear effortless and strip it of its formative struggle, or it can sit behind the instructor and lend strength where schools are weakest—tutoring, coaching, and feedback—areas that demand patience and labor no modern system has ever supplied at scale. One design flatters performance and damages mastery. The other preserves mastery by slowing performance down.


Fast Policy, Slow Pedagogy: Korea’s AI Dilemma

South Korea moved more aggressively than any major education system to formalize generative AI inside its curriculum. By 2025, the Ministry of Education committed to AI-enhanced digital textbooks across core subjects, with adaptive pathways, automated explanations, and performance dashboards designed to track student progress. The plan aligned with the country’s industrial agenda for semiconductors and AI hardware, and with a political appetite for future-forward reforms. Few nations matched that tempo.

The speed carried a cost. Policy documents described gains in efficiency and personalization without addressing how students would reconcile AI-enabled study with AI-prohibited examinations. Officials spoke of “innovation ecosystems” and “AI literacy,” but left unexamined the cognitive mechanics that determine whether learning transfers when support disappears. No guidelines explained what parts of problem-solving should remain internal to the student, or how feedback from digital tutors would interact with the national assessment regime. Korea adopted the technology before defining the learning it sought to protect.

Municipal governments echoed the rhetoric. Busan styled itself as an “AI education hub,” convening conferences to display school projects involving Python visualization, image generation, and web creation. The events celebrated novelty and participation, yet few sessions grappled with the dull but decisive questions of learning design: how assignments would be evaluated, how teachers would intervene when AI overwhelmed student reasoning, or how a city would monitor the difference between performance gains and mastery gains. The showcase format belonged to economic development policy more than to pedagogy.

Teachers entered the new landscape with ambivalence. Surveys conducted by Korean researchers during the digital textbook rollout recorded enthusiasm for reduced preparatory workload but uncertainty about how to incorporate AI into live instruction without erasing the struggle upon which understanding depends. Some teachers quietly outsourced worksheet generation and explanatory passages, a relief in a system notorious for administrative fatigue. Others experimented with model-assisted questioning, hoping to cultivate Socratic dialogue in classrooms where silence often dominates. The efforts remained fragmented and unsupported by structured research.

The national assessment regime remained frozen in pre-AI logic. High-stakes exams prohibited external assistance, scoring individual performance under strict time pressure. Homework and projects drifted into a collaborative space shared with models. The contradiction reproduced, in miniature, the triangle described earlier: a system that encourages AI during practice but punishes dependence on it during evaluation. Students understood the bargain intuitively. The model completes the practice set; the student memorizes the final form; the exam proceeds without scaffolding. Performance rises on paper, while mastery grows brittle.

Other countries confronted the conflict through pilots and staged evaluation. Estonia restricted early deployments to selected high schools, pairing AI access with teacher training and longitudinal monitoring. Greece conditioned its ChatGPT classroom trial on a defined research question. France routed investment through teachers, building administrative support tools before touching student cognition. Korea leapfrogged the pilot stage and entered broad deployment without an accompanying framework for evidence.

The absence of evidence does not imply failure, but it does remove the mechanism by which a reform learns to correct itself. Educational technology rarely succeeds on the first attempt; it succeeds through iteration and the controlled management of error. In South Korea, iteration has been replaced by announcement. Each new municipal initiative, each new curriculum augmentation, and each new vendor partnership advances under the assumption that adoption equals progress. The research record suggests a more ambivalent truth: progress in education depends on designing the right friction, not eliminating it.

The disjunction between national ambition and cognitive design has become more visible as students integrate AI into their private study. Teachers report essays and reports polished to syntactic perfection without the developmental traces that writing used to reveal—false starts, revisions, and the stubborn awkwardness of learning to organize thought. The polished surface conceals uncertainty about what the student now owns intellectually and what the model produced. Without deliberate rules for authorship and struggle, the distinction risks disappearing.

Korea’s education system excels at execution when goals are clear. The challenge posed by AI is not execution but definition. Before asking how to scale AI in classrooms, the country must answer a quieter question: what kind of thinking must remain human?

Designing the Necessary Failures

The appeal of generative AI in education lies in its capacity to erase effort. A model can compress hours of practice into minutes, neutralizing the uncertainty and frustration that once defined the early stages of learning. The compression satisfies a culture that prizes speed, yet understanding rarely flourishes under conditions engineered for haste. The strongest research from the past two years suggests that cognition needs resistance, and that resistance vanishes when answers arrive too quickly or too perfectly.

Education systems confront a choice for which no prior template exists. To ban AI from learning would deny students the tools that will structure their future work; to integrate it without safeguards would hollow out the very faculties that schooling seeks to strengthen. The experiments in Türkiye, the pilots in Europe, and the deployments in Korea describe three versions of the same dilemma. One replaced thought with automation and exposed the brittleness of unearned performance. Another slowed the process down, turning AI into a mechanism for cultivating attention and explanation. The third scaled adoption before defining the form of thinking it hoped to preserve.

Any attempt to resolve the dilemma requires a recognition that learning depends on friction: the moment when a student’s prior model fails and must be rebuilt. Generative AI can either remove friction or redirect it. A well-designed tutor forces the mind to linger on uncertainty; a poorly designed system eliminates uncertainty altogether. The distinction may determine whether a generation enters adulthood fluent in concept or merely fluent in tools.

Policy cannot settle the matter through slogans about innovation or competitiveness. It must decide what kinds of struggle schools are obliged to protect. The decision extends beyond curriculum and into authorship, evaluation, and equity. Students from affluent backgrounds already use AI to compensate for a deficit of time; students from disadvantaged backgrounds may use it to compensate for a deficit of instruction. Both uses deserve acknowledgment, and both conceal risks. Without deliberate rules for how much assistance is allowed, and for which tasks, the notion of academic work loses coherence.

Governments that treat AI as an industrial policy priority will eventually collide with the realities of learning design. Industrial logic favors rapid diffusion, pilot-free scaling, and eligibility for procurement. Cognitive logic favors staged experiments, longitudinal evaluation, and clear boundaries around what must remain internal to the learner. Reconciling the two demands a tolerance for controlled failure—a willingness to conduct small trials, absorb uncomfortable data, and iterate rather than declare victory.

The question that remains—for Korea, for Europe, for every system engaged in this transformation—is not whether AI belongs in schools. The question is what form of thinking schools must continue to cultivate by hand, and how much of the rest may safely be outsourced to machines. The answer will not arrive from the market. It will emerge from the unglamorous work of teachers, researchers, and policymakers who understand that learning’s most valuable moments are not the ones in which the answer appears, but the ones in which it has not yet been found.

Until that boundary is drawn, education will operate in a liminal state—homework done with AI, exams taken without it, and mastery judged on tests that no longer represent the conditions under which students actually learn. Systems that refuse to define the boundary will watch the boundary define them.

