
Address: 30, Hasinbeonyeong‑ro 151beon‑gil, Saha‑gu, Busan, Korea  |  Tel: +82 507‑1311‑4503  |  Online newspaper registration No: Busan 아00471

Date of registration: 2022.11.16  |  Publisher·Editor: Maru Kim  |  Juvenile Protection Manager: Maru Kim

© 2026 Breeze in Busan. All Rights Reserved.


Busan’s MyBusan Portal Raises Questions About AI Imagery

Busan City’s new foreign resident portal uses AI-generated human imagery on official guidance pages without disclosure, highlighting a regulatory grey zone as South Korea’s AI Basic Act takes effect.

Feb 3, 2026
8 min read
Local News Team

We cover regional developments, local politics, community issues, and public policy, highlighting how local actions shape global conversations and affect people's daily lives.

Breeze in Busan | Banner images used on Busan City’s mybusan.kr portal for foreign residents | Source: Busan Metropolitan City (mybusan.kr)

Busan, South Korea — Busan City on Tuesday launched mybusan.kr, a redesigned municipal portal aimed at providing settlement-related information for foreign residents living in the city.

The new platform replaces the long-running “Life in Busan” website and consolidates guidance on visas, healthcare, education, employment, and daily life under a single web address. City officials said in an accompanying press release that the project focused on reorganizing existing information, rather than introducing new administrative services, while expanding language support and improving accessibility.

While the site’s overall structure follows familiar public-sector portal conventions, several visual elements mark a departure from previous versions. On multiple informational subpages—particularly section introductions covering education, employment, and settlement guidance—large banner images are positioned at the top of the page.

These banners depict people. Foreign residents and Koreans appear together, presented as everyday users of the city and its public services. The images are styled as photographic portraits rather than icons or illustrations and occupy the most prominent visual space within each section.

However, the visuals do not fully hold up as photographs. In several images, background text breaks off mid-word, facial features lose coherence at close range, and lighting and composition repeat across different scenes. The site provides no attribution, licensing details, or explanation of how the images were produced.

Neither the press release nor the platform itself makes reference to artificial intelligence, automated image generation, or synthetic media. Visitors are given no indication whether the figures shown represent actual residents, stock photography models, or digitally generated likenesses.

In public-sector communication, the use of human imagery on guidance pages carries specific implications. Such visuals help frame who a service is intended for and how its users are represented. On a platform addressing legal status, healthcare access, and administrative procedures, those representations take on institutional significance.

South Korea’s AI Basic Act does not prohibit public bodies from using generative imagery. Its emphasis lies on transparency and on preventing public misunderstanding when automated systems intersect with civic communication. In that context, the absence of any explanation for human-like synthetic imagery becomes part of the public record rather than a purely aesthetic choice.

Introduced as a tool to help foreign residents navigate life in an unfamiliar city, mybusan.kr reflects broader shifts in how public institutions adopt contemporary design tools. The visual choices embedded in its guidance pages raise questions not addressed in the launch materials—questions about representation, disclosure, and how public authority is communicated in an increasingly synthetic media environment.


Human Images and Public Authority

Public-sector websites do not treat images as neutral decoration. Over time, informal but widely observed conventions have developed around how different types of visuals are used. Icons simplify navigation. Illustrations soften complex procedures. Photographs, particularly those featuring people, serve a different function altogether.

When a public institution places human figures at the top of an informational page, the image operates as a form of representation. It does not merely accompany text; it introduces the subject of the service. In immigration guidance, employment pages, or settlement-related information, the people shown implicitly stand in for the users the institution claims to serve.

For this reason, public agencies have historically relied on either identifiable stock photography or non-representational graphics. Stock images, despite their generic nature, carry traceable licensing frameworks and an understood fiction: the viewer recognizes them as models rather than actual beneficiaries. Illustrations avoid the issue entirely by signaling abstraction.

The images used on selected subpages of mybusan.kr follow neither approach. They present human likenesses with the visual language of photography, yet without the traceability of photographic sources. The figures appear realistic enough to suggest real individuals, while lacking any indication of who—or what—was used to create them.

This ambiguity carries practical consequences. For foreign residents, many of whom rely on official websites to navigate unfamiliar legal and administrative systems, visual cues help establish credibility. A photograph suggests documentation. A face suggests presence. When those cues are synthetic but unlabeled, the boundary between representation and fabrication becomes unclear.

The issue is not aesthetic quality. Nor is it whether the images are convincing. The concern lies in how public authority is visually expressed. A municipal platform does not speak as a private brand. Its imagery inherits the weight of the institution behind it, particularly when used on pages explaining rights, obligations, and access to public services.

In such contexts, the distinction between depicting people and depicting the idea of people is more than stylistic. One implies reference to lived residents. The other signals a conceptual placeholder. Public communication has traditionally treated that boundary with care, precisely because misunderstanding can undermine trust rather than enhance it.

By placing human-like images at the top of guidance pages without explanation, mybusan.kr departs from those conventions. The images function as representations, yet their origin remains opaque. For users encountering the platform for the first time, the site offers no framework for interpreting what they are seeing.

As public institutions begin to incorporate synthetic media into their communications, established distinctions between illustration, photography, and representation are being compressed. The challenge lies not in adopting new tools, but in maintaining clarity about what is being shown—and under what authority.


The Grey Zone of Public Imagery

South Korea’s AI Basic Act was written to regulate systems, not appearances. Its focus lies on risk management, accountability, transparency, and human oversight where algorithmic decision-making affects rights, safety, or access to public services. Visual representation plays no meaningful role in the statute’s architecture.

That omission is structural. Generative images do not determine eligibility, approve applications, or allocate benefits. They produce no administrative outcomes. As a result, synthetic visuals fall outside the law’s most explicit safeguards, even when deployed by public institutions in official contexts.

The absence of regulation, however, does not render the issue neutral. Public-sector communication has long operated under stricter conventions than those applied to private platforms. Before the rise of generative media, official guidelines—formal and informal alike—placed emphasis on source clarity, avoidance of misleading representation, and a clear separation between factual information and promotional design.

These conventions relied on visual distinctions that were widely understood. Photographs suggested documentation or traceable origin. Illustrations signaled abstraction. Each carried an implicit contract with the viewer. Synthetic human imagery collapses that distinction. It adopts the visual language of photography without the accountability that traditionally accompanied it.

The AI Basic Act gestures toward relevant principles. Transparency, explainability, and public trust recur throughout the framework. Yet those principles are articulated with systems in mind—technologies that act. The law presumes harm arises through automated decisions. It does not address harm introduced through representation, where meaning is shaped rather than computed.

This leaves public institutions operating in a grey zone. An AI-generated human figure placed on a guidance page violates no explicit prohibition. At the same time, it bypasses long-standing expectations of disclosure embedded in public communication. When images are unlabeled and uncontextualized, interpretation is left entirely to the user.

International policy debates have begun to confront this gap. In the European Union, discussions around synthetic media increasingly emphasize provenance and disclosure, not because images constitute legal deception, but because they shape public understanding. The concern is less about whether viewers mistake an image for a real person than whether they are given sufficient context to interpret what they see.

South Korea has yet to articulate comparable standards in binding form. Responsibility disperses downward—from legislation to ministries, from ministries to agencies, and from agencies to contractors. Within that chain, visual decisions are often treated as matters of design rather than acts of public representation.

The case of mybusan.kr sits squarely within this silence. The platform does not present itself as technologically experimental. Its launch materials avoid references to artificial intelligence altogether. Yet synthetic human imagery appears on guidance pages traditionally reserved for documented photography or symbolic illustration.

What emerges is not a breach of law, but a lack of governance. Systems are regulated. Accessibility and security are audited. Images—especially those resembling people—remain uncategorized. In that space, standards are undefined and accountability diffuses.

As public platforms continue to modernize, the consequences of that silence grow sharper. Visuals now carry narrative authority, particularly for users navigating unfamiliar languages and institutions. When those visuals are synthetic, the absence of disclosure becomes more than a design choice. It exposes an unresolved question at the heart of public governance: whether representation itself demands regulation.


Trust, Not Technology

Public institutions rarely set out to mislead. More often, they adopt new tools and visual conventions incrementally, without fully examining how those choices are read by the people who rely on them. In the case of mybusan.kr, the central issue is neither technological ambition nor the mere use of artificial intelligence. It lies in how synthetic human imagery has been absorbed into official communication without explanation, at a moment when public trust is itself under regulatory scrutiny.

A platform intended to guide foreign residents through visas, healthcare, employment, and everyday administration operates within a domain where credibility is not abstract. Visual language plays a functional role. Faces suggest presence. Human figures imply lived experience and institutional recognition. When such images are generated rather than documented, the distinction matters—not because it automatically violates the law, but because it alters how authority is visually constructed.

The implementation of South Korea’s AI Basic Act sharpens this context rather than resolving it. The law does not prohibit generative imagery, nor does it single out public websites as a special case. What it does establish, however, is a policy direction: transparency is no longer optional where synthetic content intersects with public-facing systems. The emphasis is not on banning technology, but on preventing manufactured representations from circulating without notice.

Seen against that backdrop, the absence of disclosure on pages featuring AI-generated human figures is not a minor oversight. It signals a disconnect between regulatory intent and institutional practice. The platform adopts contemporary design tools, yet stops short of explaining their use in spaces where clarity has traditionally been treated as a public obligation.

This is where the distinction between public service and promotion becomes critical. Government websites are not branding exercises. Their authority does not derive from visual sophistication, but from reliability. When synthetic imagery is introduced without context, it borrows the credibility of the institution itself—whether or not that borrowing is deliberate.

How mybusan.kr evolves will matter less for its graphics than for the precedent it sets. As artificial intelligence becomes routine in content production, the responsibility of public institutions is not to resist the technology, but to govern its use visibly and carefully. In an environment where synthetic media increasingly resembles the real, transparency is no longer a courtesy extended to users. It is the baseline condition of public trust.

South Korea’s AI Basic Act — What It Covers, and What It Leaves Open

A high-level summary of how the law governs artificial intelligence, and where visual and representational uses of AI remain less clearly addressed.

Effective: 2026  |  Core principles: trust, transparency, safety  |  Approach: risk-based, not prohibitive

Areas clearly addressed by the Act

  • AI systems and decision-making: The law focuses on AI as a system that produces outcomes, especially where automated decisions affect people’s lives, safety, or rights.
  • High-impact AI: Systems with potential significant impact on life, safety, or fundamental rights are subject to stronger expectations for oversight and risk management.
  • Transparency obligations: Operators are expected to reduce public misunderstanding by making clear when AI is used and how outcomes are generated.
  • Safety and risk control: Emphasis is placed on identifying, assessing, and mitigating risks, particularly in sensitive or public-facing sectors.
  • National governance framework: The Act establishes national planning and coordination mechanisms to guide AI policy across government and industry.

Areas left relatively open

  • Synthetic human imagery: The Act does not specifically regulate how AI-generated portraits or human-like images should be labeled or explained when used on public websites.
  • Visual representation and authority: Who is shown in official visuals, and whether those figures represent real people, models, or generated likenesses, is not explicitly addressed.
  • Image provenance and disclosure: While transparency is a core principle, the law does not set detailed standards for disclosing the origin or production method of AI-generated images.
  • Communication without automated decisions: Uses of AI that shape perception rather than make decisions often fall outside the Act’s most explicit safeguards.

The AI Basic Act establishes a national framework for governing how AI systems operate and affect people. Its system-centered design, however, leaves questions of visual representation and unlabeled synthetic imagery in public communication largely to institutional practice rather than explicit statutory rules.
