Busan’s AI-Powered PR: A Threat to Public Trust?

Busan’s AI-generated promotional image for its ‘Global Hub City’ initiative has sparked debate over the credibility of AI-driven government communication.

Breeze in Busan | Busan's AI-generated PR image

Busan, South Korea - Busan’s ambitious embrace of AI-generated policy promotion is being positioned as a technological leap forward in public communication. The city’s use of generative AI to create digital content, including social media campaigns, automated policy videos, and AI-generated newspaper visuals, is touted as an innovative approach to enhancing citizen engagement.

However, beneath the shiny veneer of smart city branding, critical questions arise about the effectiveness, ethical implications, and unintended consequences of relying on AI for public relations. Far from being a policy innovation, AI-driven government PR might be diminishing public trust, eroding content credibility, and reducing meaningful civic participation.

Recent research on audience perceptions of AI-generated content suggests that such material, particularly in news and policy messaging, faces substantial public resistance. The psychological and sociological implications of replacing human-driven communication with automated content generation point to deeper issues, ones that Busan’s government appears to have overlooked in its rush to automate its public relations strategy.

As artificial intelligence becomes more prevalent in content creation, psychological and sociological studies indicate that people remain deeply skeptical of AI-generated visuals and narratives. Rather than fostering greater transparency and efficiency, AI-driven communication can weaken public confidence in digital content, fuel perceptions of inauthenticity, and create new ethical dilemmas in policy messaging.

AI-generated image used in Busan’s promotional materials for its ‘Global Hub City’ vision. The overly stylized, fairy-tale aesthetic raises concerns about credibility and realism in government messaging.

Ethical and Governance Concerns: Transparency and Narrative Control

Beyond the issue of public skepticism, the integration of AI in government public relations raises fundamental ethical and governance concerns. While AI offers the advantage of streamlining content creation, its use in policy messaging necessitates transparency, oversight, and accountability to prevent potential misuse. Without a clear framework governing how AI-generated messages are created, reviewed, and disseminated, the risk of biased or selective communication increases.

One of the most pressing concerns surrounding Busan’s AI-driven PR strategy is the lack of transparency regarding the technology behind the initiative. The government has yet to specify which AI platforms and models are being used, making it difficult to assess the level of reliability and neutrality in the content being generated. Additionally, there is no clear information on who is responsible for verifying AI-generated messages, raising questions about whether there is adequate human oversight in place to ensure factual accuracy and avoid potential distortions. Furthermore, the process through which the AI system is trained and optimized for unbiased communication remains undisclosed, leaving doubts about how well it can avoid reinforcing pre-existing narratives or omitting critical counterpoints.

Without a clear disclosure of these key details, it remains uncertain whether AI-generated PR is genuinely improving transparency or simply automating selective storytelling. If AI-driven content is being produced without external review mechanisms, it risks becoming a tool for one-sided messaging rather than a method for fostering informed public discourse.

Governments have historically influenced public perception through carefully curated messaging, but AI introduces a new level of subtlety in narrative control. The ability to mass-produce AI-generated content across multiple platforms streamlines and automates the shaping of public opinion, but it also heightens the risk of selective messaging. If not carefully regulated, AI-generated PR could be strategically deployed to highlight policy successes while downplaying failures, creating an imbalanced portrayal of governance.

The automation of AI-driven content also raises concerns about the lack of independent scrutiny of how government information is presented. If AI-generated narratives are continuously refined to align with political interests, the space for alternative viewpoints and constructive criticism may shrink. Moreover, because AI can produce messaging at an unprecedented scale, large volumes of content can be distributed with minimal human oversight, potentially amplifying government-controlled narratives without adequate checks and balances.

While there is no direct evidence suggesting that Busan’s AI-driven PR is intentionally being used to mislead the public, the absence of transparency in its implementation creates a legitimate reason for scrutiny. The fact that key details—such as the AI models being used, the verification process, and the extent of human oversight—are not publicly available leaves room for concerns about AI’s role in shaping public perception in ways that may prioritize political convenience over balanced discourse.

Busan’s push for AI-driven PR is part of a larger global trend in which governments are experimenting with AI as a communication tool. While AI undeniably brings efficiency and scalability, scientific research and public sentiment indicate that AI-driven messaging still faces significant trust barriers. If public confidence in government communications is to be preserved, AI must be implemented with clear ethical safeguards rather than being used as an opaque mechanism for messaging automation.

For Busan’s AI-generated PR efforts to be viewed as a true advancement rather than a risk to public trust, the government must establish clear policies ensuring responsible AI use: disclosing which AI models are employed, defining how AI-generated messages are verified, and specifying the extent of human oversight.

Without these safeguards, AI-driven PR may not truly serve as a tool for increased public engagement but rather as an instrument for automating information control. As AI technology continues to evolve, governments must ensure that it fosters transparency and trust, rather than becoming a tool for algorithmic manipulation with limited human accountability.

Ultimately, the question remains: Will AI in government communication be used as a means of strengthening democratic transparency, or will it become a mechanism for digitalized messaging with reduced scrutiny? The way Busan and other governments choose to implement AI-driven PR will set a precedent that determines whether AI enhances public trust—or erodes it.