Busan, South Korea – When Gangseo District Office announced it would deploy 37 paid ChatGPT Team accounts across all departments, the move was widely seen as a pioneering step toward digital transformation in local governance. Costing just ₩12 million (about US$9,000), the initiative is believed to be the first publicly documented case in South Korea of a district-level government integrating generative AI tools into every part of its bureaucracy.
Despite this milestone, the rollout has drawn growing scrutiny over the absence of technical safeguards, data protection measures, and legal oversight. The accounts operate via OpenAI’s overseas servers and are not backed by internal firewalls, prompt logging systems, or auditing mechanisms. As a result, it is currently impossible to verify what officials input or what responses are relied upon in shaping administrative decisions.
Gangseo is not alone. According to figures reportedly submitted to the National Assembly, 17 provincial and metropolitan governments in South Korea collectively spent nearly ₩400 million over the past year on generative AI tools, nearly all sourced from U.S.-based vendors. Eleven of these governments now rely exclusively on ChatGPT. However, none have established sovereign infrastructure, standardized logging protocols, or binding oversight frameworks.
In the absence of enforceable controls, local governments have instead turned to education. In Gangseo, five training sessions were held prior to rollout, instructing civil servants to “prompt responsibly.” An internal four-page memo discouraged the input of sensitive data and advised staff to consult IT teams when in doubt. These measures, while well-intentioned, remain voluntary and unenforceable.
Although ChatGPT Team accounts offer an option to opt out of training data collection, OpenAI’s current policy allows retention of interaction data for up to 30 days. There is no guarantee that queries—potentially including anonymized citizen requests, draft policy memos, or internal documents—are not being cached or processed overseas.
This exposes a dual vulnerability: one of data sovereignty and one of institutional accountability. Under South Korea’s Records Management Act, all public decisions, communications, and deliberations must be preserved and auditable. However, no provision requires the automated recording of prompts or outputs from AI systems. If an AI-generated memo contributes to a policy decision without leaving a trace, the accountability chain is effectively broken.
Such concerns are not hypothetical. In 2023, engineers at Samsung reportedly pasted proprietary source code into ChatGPT, prompting the company to ban the use of external AI tools and develop its own internal model, “Gauss.” While private companies have responded to such risks by tightening safeguards, most public institutions in Korea have expanded generative AI adoption without comparable internal controls.
There are exceptions. Seoul’s Songpa and Seongdong Districts are piloting internal AI systems with localized servers, firewalls, prompt filtering, and restricted API access. Though limited in scope, these projects demonstrate that controlled, sovereign AI deployments are technically feasible. Yet most other jurisdictions, including Gangseo, have bypassed such infrastructure entirely—opting for off-the-shelf SaaS tools with little modification.
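What “prompt filtering” means in such internal deployments can be sketched in a few lines. The patterns below are illustrative assumptions, not the actual rules used by any district’s pilot: a filter might strip resident registration numbers and phone numbers before a prompt ever leaves the agency firewall.

```python
import re

# Illustrative patterns for sensitive Korean identifiers (assumptions,
# not the filter rules of any actual district system).
SENSITIVE_PATTERNS = {
    # Resident registration number: YYMMDD-NNNNNNN (13 digits)
    "RRN": re.compile(r"\b\d{6}-\d{7}\b"),
    # Mobile phone number in the common 010-XXXX-XXXX format
    "PHONE": re.compile(r"\b01[016789]-\d{3,4}-\d{4}\b"),
}

def redact_prompt(prompt: str) -> tuple[str, list[str]]:
    """Replace sensitive identifiers before a prompt leaves the firewall.

    Returns the redacted prompt and the names of the patterns that fired,
    which an audit system could log without storing the raw values.
    """
    hits = []
    for name, pattern in SENSITIVE_PATTERNS.items():
        prompt, count = pattern.subn(f"[{name} REDACTED]", prompt)
        if count:
            hits.append(name)
    return prompt, hits
```

A filter like this is only a first line of defense; it catches well-formed identifiers but not free-text descriptions of sensitive cases, which is one reason the pilot districts also restrict API access rather than relying on filtering alone.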
The consequences could be far-reaching. If a citizen requests a document influenced by an AI model—or if an erroneous AI-generated response leads to denial of services—there is currently no system in place to trace how such content was created, or by whom. Without prompt histories or model provenance, it becomes impossible to distinguish between human and machine-generated policy input.
The central government has issued soft guidance. In 2023, the National Intelligence Service and the Ministry of Science and ICT jointly released a non-binding “Generative AI Security Guideline.” It recommended using APIs over SaaS platforms, enabling prompt logging, and prohibiting uploads of personal or sensitive data. However, with no enforcement mechanism and limited technical staff in most municipalities, compliance remains minimal.
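The guideline’s two core technical recommendations, API access instead of a SaaS chat window and prompt logging, fit in a short sketch. Everything specific here is an assumption for illustration: the log path, the record fields, the stubbed model call, and the simple hash-chaining scheme are not part of the guideline itself.

```python
import datetime
import hashlib
import json
from pathlib import Path

# Illustrative log location; a real deployment would write to
# controlled, tamper-evident storage, not a local file.
AUDIT_LOG = Path("ai_audit.log")

def call_model(prompt: str) -> str:
    """Hypothetical stand-in for an API call to whichever model is used.

    The guideline favors API access precisely because, unlike a SaaS
    chat client, it lets the agency interpose logging like the below.
    """
    return f"(model response to {len(prompt)}-character prompt)"

def logged_query(user_id: str, prompt: str) -> str:
    """Call the model and append an auditable record of the exchange."""
    response = call_model(prompt)
    record = {
        "timestamp": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "user": user_id,
        "prompt": prompt,
        "response": response,
        # Hashing the log's prior contents chains each record to the
        # ones before it, so later edits to the file are detectable.
        "prev_hash": hashlib.sha256(
            AUDIT_LOG.read_bytes() if AUDIT_LOG.exists() else b""
        ).hexdigest(),
    }
    with AUDIT_LOG.open("a", encoding="utf-8") as f:
        f.write(json.dumps(record, ensure_ascii=False) + "\n")
    return response
```

Even a minimal wrapper like this would give an archivist what the Records Management Act presumes exists: who asked what, what the system answered, and when.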
Legislative support is also lacking. A draft AI Governance Act has remained in committee since 2024, and Busan officials confirm that a municipal ordinance on AI administration is under internal review but not yet enacted. As it stands, no legal requirement compels local governments to classify AI systems, record their decisions, or guarantee human oversight.
Internationally, South Korea's approach stands in contrast to jurisdictions where public-sector AI is more tightly governed. The European Union’s AI Act, which entered into force in 2024, mandates risk classification, bias audits, and human oversight for AI deployments deemed high-risk, including those in government. In Singapore, the AI Verify framework offers standardized technical and process evaluations that agencies can require of vendors during procurement.
Even at the municipal level, standards of algorithmic transparency are evolving. In Amsterdam, a live public registry documents all AI systems used by the city government, including their purposes, data sources, and the officials responsible for them. These practices reflect a growing international consensus: when algorithms influence public policy, transparency is not optional—it is foundational.
In Korea, however, cities are embracing generative AI faster than laws or infrastructure can keep pace. Rather than wait for governance frameworks to be established, local authorities are proceeding as if none are needed. This is less a case of regulatory negligence than a structural mismatch—advanced AI systems are being dropped into workflows designed for paper-based administration, without the ability to track, preserve, or challenge how AI shapes decisions.
The result is creeping institutional opacity. If AI-generated content influences official policy but leaves no record, the public archive is not just incomplete—it is compromised. In such an environment, errors can go undetected, and citizens may be left without answers or avenues for redress. The danger is not simply that the AI might be wrong. It is that no one will know how—or why—a decision was made at all.
Unless Korea’s local governments act swiftly to establish internal infrastructure, mandatory logging, and enforceable oversight, their adoption of generative AI will remain structurally fragile. In a climate of rising public skepticism, the failure to govern these tools is more than a technical shortfall—it is a civic risk with lasting institutional consequences.
The Weekly Breeze




