Breeze in Busan

When AI Writes for the Government, Who Takes Responsibility?

Busan’s attempt to lead AI innovation in local government offers a glimpse into the future of intelligent administration. But its newly issued AI ethics guideline, though well-intentioned, exposes a deeper problem.

Apr 7, 2025
4 min read

Maru Kim
Editor-in-Chief

Maru Kim, Editor-in-Chief and Publisher, is dedicated to providing insightful and captivating stories that resonate with both local and global audiences.

Smarter Bureaucracy, Ethical Vacuum: Why AI Governance Needs More Than Guidelines

Busan, South Korea — In early April 2025, the city signed a major partnership agreement with Naver Cloud, one of the country’s leading AI service providers. The objective? To implement “Busan-style smart administration” through generative AI. The collaboration promises to modernize public services, automate bureaucratic routines, and personalize citizen engagement—an ambitious leap into what’s being framed as the future of intelligent governance.

Busan isn’t alone. From Seoul to Singapore, local governments around the world are exploring ways to infuse generative AI into public administration. But Busan stands out in one regard: it is among the first to publish an official AI ethics guideline specifically for public officials using generative models like ChatGPT. The city claims it will ensure fairness, responsibility, and data security as civil servants begin relying on these tools to write documents, analyze data, and even communicate with citizens.

But there’s a deeper, more complicated question that hasn’t been fully addressed:

When AI starts writing for the government, who’s actually speaking?
And when things go wrong, who do we hold responsible?

Busan’s guidelines rest on four principles: fairness, reliability, responsibility, and security. On the surface, this mirrors the best practices emerging globally. But a closer look reveals a common problem: the rules sound good but lack operational clarity.

How does a civil servant know whether a prompt introduces bias?
Who checks whether the AI-generated draft is factually accurate or legally sound?
If an AI-generated policy summary includes misleading content, is the bureaucrat liable—or the vendor?

Most importantly, while the guideline insists that “final responsibility lies with the human official,” it offers little in the way of institutional protection. In practice, it risks creating a system where AI produces the language, and public servants merely sign off—often under time pressure and limited oversight. That’s not responsibility. That’s rubber-stamping.

The more worrying issue isn’t technical—it’s emotional. Governments don’t just process data; they communicate care. They offer condolence in moments of grief, reassurance in crises, and pride in collective success. When these expressions are automated, however well-written they may be, they risk feeling hollow.

Citizens can tell when no one is behind the message.
And that matters—because trust in government is built not just on what is said, but who is saying it.

If an AI writes the mayor’s statement after a flood, will it still feel like the city is grieving with its people? If a chatbot delivers a rejection of public assistance, will the recipient feel heard?

These are not hypothetical concerns. They go to the heart of democratic legitimacy. Simulated emotion without real authorship can undermine the credibility of even well-intentioned governments.

There are also structural concerns. Busan’s partnership with Naver Cloud is exclusive and long-term, raising questions about vendor lock-in and data control. And while the city is rapidly expanding its AI services—including automated response systems, internal knowledge generation, and predictive analytics—there are no clear standards for when AI should be used, and when it shouldn’t.

Should AI write internal memos? Maybe.
Should it draft legislative language? Perhaps not.
Should it respond to citizen complaints about discrimination or hardship? That’s a harder call—and no current guideline adequately addresses it.

Worse, there’s no mechanism for citizen oversight. If your benefits letter was written by an AI, you may never know. If a chatbot gets your address wrong and you miss a deadline, the appeals process may not even understand what went wrong.

A more meaningful approach to AI governance in the public sector begins not with the technology itself, but with the values that surround it. At the core is transparency: citizens deserve to know when they’re interacting with AI. Whether it’s a chatbot handling a complaint or a public statement generated through a language model, the authorship must be disclosed. Knowing who—or what—is speaking is the first condition for trust.

Beyond transparency, there must be clear lines of accountability. AI-generated outputs should not be quietly approved or passed along by overworked public servants. Instead, governments must establish explicit structures that determine who reviews, validates, and—when needed—retracts AI-generated content. This responsibility should be institutionalized, not left to the discretion of individuals operating without proper guidance.

Finally, democratic oversight is essential. Communities must have a voice in deciding which aspects of government communication can be automated and where human presence must remain intact. Not all messages are just information; some are acts of care, recognition, or responsibility. These moments deserve a human author.

Other cities and countries are already taking steps in this direction. New York State now requires agencies to disclose when AI is used. The European Union is developing a risk-tiered regulatory model. In Singapore, internal review boards have been created to monitor and guide AI implementation in government.

The point is not to resist technology, but to embed it in an ethical and civic architecture. Public administration may benefit from increased efficiency, but it cannot lose its integrity in the process. Governance is not just a function of speed or scale—it is a relationship, built slowly and carefully through trust, tone, and human presence.

If governments do choose to automate, they must do so without automating away the very things that make their voice trustworthy. Because when no one takes emotional responsibility for what is said in the public’s name, the message—however eloquently written—will ultimately ring hollow.
