When AI Writes for the Government, Who Takes Responsibility?
Busan’s attempt to lead AI innovation in local government offers a glimpse into the future of intelligent administration. But its newly issued AI ethics guideline, though well-intentioned, exposes a deeper problem.
Busan, South Korea — In early April 2025, the city signed a major partnership agreement with Naver Cloud, one of the country’s leading AI service providers. The objective? To implement “Busan-style smart administration” through generative AI. The collaboration promises to modernize public services, automate bureaucratic routines, and personalize citizen engagement—an ambitious leap into what’s being framed as the future of intelligent governance.
Busan isn’t alone. From Seoul to Singapore, local governments around the world are exploring ways to infuse generative AI into public administration. But Busan stands out in one regard: it is among the first to publish an official AI ethics guideline specifically for public officials using generative models like ChatGPT. The city claims the guideline will ensure fairness, responsibility, and data security as civil servants begin relying on these tools to write documents, analyze data, and even communicate with citizens.
But there are deeper, more complicated questions that haven’t been fully addressed:
When AI starts writing for the government, who’s actually speaking?
And when things go wrong, who do we hold responsible?
Busan’s guideline rests on four principles: fairness, reliability, responsibility, and security. On the surface, this mirrors the best practices emerging globally. But a closer look reveals a common problem: the rules sound good but lack operational clarity.
How does a civil servant know whether a prompt introduces bias?
Who checks whether the AI-generated draft is factually accurate or legally sound?
If an AI-generated policy summary includes misleading content, is the bureaucrat liable—or the vendor?
Most importantly, while the guideline insists that “final responsibility lies with the human official,” it offers little in the way of institutional protection. In practice, it risks creating a system where AI produces the language, and public servants merely sign off—often under time pressure and limited oversight. That’s not responsibility. That’s rubber-stamping.
The more worrying issue isn’t technical—it’s emotional. Governments don’t just process data; they communicate care. They offer condolence in moments of grief, reassurance in crises, and pride in collective success. When these expressions are automated, however well-written they may be, they risk feeling hollow.
Citizens can tell when no one is behind the message.
And that matters—because trust in government is built not just on what is said, but on who is saying it.
If an AI writes the mayor’s statement after a flood, will it still feel like the city is grieving with its people? If a chatbot delivers a rejection of public assistance, will the recipient feel heard?
These are not hypothetical concerns. They go to the heart of democratic legitimacy. Simulated emotion without real authorship can undermine the credibility of even well-intentioned governments.
There are also structural concerns. Busan’s partnership with Naver Cloud is exclusive and long-term, raising questions about vendor lock-in and data control. And while the city is rapidly expanding its AI services—including automated response systems, internal knowledge generation, and predictive analytics—there are no clear standards for when AI should be used, and when it shouldn’t.
Should AI write internal memos? Maybe.
Should it draft legislative language? Perhaps not.
Should it respond to citizen complaints about discrimination or hardship? That’s a harder call—and no current guideline adequately addresses it.
Worse, there’s no mechanism for citizen oversight. If your benefits letter was written by an AI, you may never know. If a chatbot gets your address wrong and you miss a deadline, the appeals process may not even understand what went wrong.
A more meaningful approach to AI governance in the public sector begins not with the technology itself, but with the values that surround it. At the core is transparency: citizens deserve to know when they’re interacting with AI. Whether it’s a chatbot handling a complaint or a public statement generated through a language model, the authorship must be disclosed. Knowing who—or what—is speaking is the first condition for trust.
Beyond transparency, there must be clear lines of accountability. AI-generated outputs should not be quietly approved or passed along by overworked public servants. Instead, governments must establish explicit structures that determine who reviews, validates, and—when needed—retracts AI-generated content. This responsibility should be institutionalized, not left to the discretion of individuals operating without proper guidance.
Finally, democratic oversight is essential. Communities must have a voice in deciding which aspects of government communication can be automated and where human presence must remain intact. Not all messages are just information; some are acts of care, recognition, or responsibility. These moments deserve a human author.
Other cities and countries are already taking steps in this direction. New York State now requires agencies to disclose when AI is used. The European Union is developing a risk-tiered regulatory model. In Singapore, internal review boards have been created to monitor and guide AI implementation in government.
The point is not to resist technology, but to embed it in an ethical and civic architecture. Public administration may benefit from increased efficiency, but it cannot lose its integrity in the process. Governance is not just a function of speed or scale—it is a relationship, built slowly and carefully through trust, tone, and human presence.
If governments do choose to automate, they must do so without automating away the very things that make their voice trustworthy. Because when no one takes emotional responsibility for what is said in the public’s name, the message—however eloquently written—will ultimately ring hollow.