Breeze in Busan

Independent journalism on the politics, economy, and society shaping Busan.



Civil Society Groups Urge Swift Action to Protect People from Risks Posed by AI Technologies like ChatGPT


Mar 31, 2023

Maru Kim

Editor-in-Chief

Maru Kim, Editor-in-Chief and Publisher, is dedicated to providing insightful and captivating stories that resonate with both local and global audiences.


As artificial intelligence (AI) technologies advance at an unprecedented pace, civil society groups in the U.S. and Europe are pressing authorities to take swift and decisive action to protect people from the potential threats posed by OpenAI's GPT and ChatGPT models. The rapid proliferation of AI technologies, including language models like ChatGPT, has raised significant concerns about their impact on society, prompting coordinated pushback from advocacy organizations.

In the United States, the Center for AI and Digital Policy (CAIDP) has filed a formal complaint with the Federal Trade Commission (FTC), urging the agency to halt further commercial deployment of GPT by OpenAI until appropriate safeguards are established to prevent ChatGPT from deceiving users and perpetuating biases ingrained in the training data. The CAIDP argues that the rapid adoption of AI technologies necessitates prompt action by regulators to ensure that the potential harms associated with these systems are adequately addressed.

Simultaneously, the European Consumer Organisation (BEUC) has called upon European regulators at both the EU and national levels to launch investigations into ChatGPT. Ursula Pachl, Deputy Director-General of BEUC, has stated that while AI technologies offer numerous benefits to society, the current regulatory framework does not provide sufficient protection against the potential harm they can cause.

The CAIDP has also asked the FTC to mandate independent assessments of OpenAI's GPT products before and after launch, and to establish a more accessible mechanism for users to report incidents involving the GPT-4 language model. Marc Rotenberg, President of CAIDP, has asserted that the FTC has a clear responsibility to investigate and prohibit unfair and deceptive trade practices, and that OpenAI's GPT-4 should be no exception.

Concerns surrounding ChatGPT and other AI-powered chat interfaces, such as Microsoft's Bing and Google's Bard, include the systems' tendency to generate false information—a phenomenon known as "hallucination" in the AI industry—and to amplify biases present in their training data. These concerns have led to calls for swift action by governments and regulatory bodies to ensure that AI technologies are developed and deployed responsibly.

European lawmakers have been working on the Artificial Intelligence Act, a proposed regulatory framework for the AI industry, for nearly two years. However, the rapid advancements in AI technology and the competitive rollout of new services have rendered some of the Act's provisions outdated. As a result, EU institutions are now scrambling to modernize the bill to effectively address the challenges posed by AI systems like ChatGPT.

With the AI Act still under negotiation, it remains unclear whether EU-level regulators will take action against OpenAI and ChatGPT. Some critics argue that fears surrounding AI are overblown and that development should not be paused. In contrast, others contend that swift regulation is necessary to tackle the potential harms posed by AI technologies, including misinformation, bias, cybersecurity threats, and the significant environmental costs associated with the computing power and electricity required to train and operate these systems.

The ongoing debate about AI's future trajectory and its impact on society underscores the urgent need for careful management and regulation of these technologies. The development of AI systems with human-competitive intelligence could pose considerable risks to society and humanity if left unchecked, making it crucial for governments and regulatory bodies to act decisively to ensure responsible AI development and deployment.
