When AI Writes for the Government, Who Takes Responsibility?

Busan’s attempt to lead AI innovation in local government offers a glimpse into the future of intelligent administration. But its newly issued AI ethics guideline, though well-intentioned, exposes a deeper problem.

By Maru Kim
Apr 8, 2025
Breeze in Busan | Smarter Bureaucracy, Ethical Vacuum: Why AI Governance Needs More Than Guidelines

Busan, South Korea — In early April 2025, the city signed a major partnership agreement with Naver Cloud, one of the country’s leading AI service providers. The objective? To implement “Busan-style smart administration” through generative AI. The collaboration promises to modernize public services, automate bureaucratic routines, and personalize citizen engagement—an ambitious leap into what is being framed as the future of intelligent governance.

Busan isn’t alone. From Seoul to Singapore, local governments around the world are exploring ways to infuse generative AI into public administration. But Busan stands out in one regard: it is among the first to publish an official AI ethics guideline specifically for public officials using generative models like ChatGPT. The city says the guideline will ensure fairness, responsibility, and data security as civil servants begin relying on these tools to write documents, analyze data, and even communicate with citizens.

But there’s a deeper, more complicated question that hasn’t been fully addressed:

When AI starts writing for the government, who’s actually speaking?
And when things go wrong, who do we hold responsible?

Busan’s guidelines rest on four principles: fairness, reliability, responsibility, and security. On the surface, this mirrors the best practices emerging globally. But a closer look reveals a common problem: the rules sound good but lack operational clarity.

How does a civil servant know whether a prompt introduces bias?
Who checks whether the AI-generated draft is factually accurate or legally sound?
If an AI-generated policy summary includes misleading content, is the bureaucrat liable—or the vendor?

Most importantly, while the guideline insists that “final responsibility lies with the human official,” it offers little in the way of institutional protection. In practice, it risks creating a system where AI produces the language, and public servants merely sign off—often under time pressure and limited oversight. That’s not responsibility. That’s rubber-stamping.

The more worrying issue isn’t technical—it’s emotional. Governments don’t just process data; they communicate care. They offer condolence in moments of grief, reassurance in crises, and pride in collective success. When these expressions are automated, however well-written they may be, they risk feeling hollow.

Citizens can tell when no one is behind the message.
And that matters—because trust in government is built not just on what is said, but who is saying it.

If an AI writes the mayor’s statement after a flood, will it still feel like the city is grieving with its people? If a chatbot delivers a rejection of public assistance, will the recipient feel heard?

These are not hypothetical concerns. They go to the heart of democratic legitimacy. Simulated emotion without real authorship can undermine the credibility of even well-intentioned governments.

There are also structural concerns. Busan’s partnership with Naver Cloud is exclusive and long-term, raising questions about vendor lock-in and data control. And while the city is rapidly expanding its AI services—including automated response systems, internal knowledge generation, and predictive analytics—there are no clear standards for when AI should be used, and when it shouldn’t.

Should AI write internal memos? Maybe.
Should it draft legislative language? Perhaps not.
Should it respond to citizen complaints about discrimination or hardship? That’s a harder call—and no current guideline adequately addresses it.

Worse, there’s no mechanism for citizen oversight. If your benefits letter was written by an AI, you may never know. If a chatbot gets your address wrong and you miss a deadline, the appeals process may not even understand what went wrong.

A more meaningful approach to AI governance in the public sector begins not with the technology itself, but with the values that surround it. At the core is transparency: citizens deserve to know when they’re interacting with AI. Whether it’s a chatbot handling a complaint or a public statement generated through a language model, the authorship must be disclosed. Knowing who—or what—is speaking is the first condition for trust.

Beyond transparency, there must be clear lines of accountability. AI-generated outputs should not be quietly approved or passed along by overworked public servants. Instead, governments must establish explicit structures that determine who reviews, validates, and—when needed—retracts AI-generated content. This responsibility should be institutionalized, not left to the discretion of individuals operating without proper guidance.

Finally, democratic oversight is essential. Communities must have a voice in deciding which aspects of government communication can be automated and where human presence must remain intact. Not all messages are just information; some are acts of care, recognition, or responsibility. These moments deserve a human author.

Other cities and countries are already taking steps in this direction. New York State now requires agencies to disclose when AI is used. The European Union is developing a risk-tiered regulatory model. In Singapore, internal review boards have been created to monitor and guide AI implementation in government.

The point is not to resist technology, but to embed it in an ethical and civic architecture. Public administration may benefit from increased efficiency, but it cannot lose its integrity in the process. Governance is not just a function of speed or scale—it is a relationship, built slowly and carefully through trust, tone, and human presence.

If governments do choose to automate, they must do so without automating away the very things that make their voice trustworthy. When no one takes emotional responsibility for what is said in the public’s name, the message—however eloquently written—will ultimately ring hollow.
