Breeze in Busan


By Maru Kim
Mar 31, 2023
Updated: Feb 7, 2025
2 min read
Civil Society Groups Urge Swift Action to Protect People from Risks Posed by AI Technologies like ChatGPT

As artificial intelligence (AI) technologies advance at an unprecedented pace, civil society groups in the U.S. and Europe are pressing authorities to take swift and decisive action to protect people from the potential threats posed by OpenAI's GPT and ChatGPT models. The rapid proliferation of AI technologies, including language models like ChatGPT, has raised significant concerns about their impact on society, prompting coordinated pushback from advocacy organizations.

In the United States, the Center for AI and Digital Policy (CAIDP) has filed a formal complaint with the Federal Trade Commission (FTC), urging the agency to halt further commercial deployment of GPT by OpenAI until appropriate safeguards are established to prevent ChatGPT from deceiving users and perpetuating biases ingrained in the training data. The CAIDP argues that the rapid adoption of AI technologies necessitates prompt action by regulators to ensure that the potential harms associated with these systems are adequately addressed.

Simultaneously, the European Consumer Organisation (BEUC) has called upon European regulators at both the EU and national levels to launch investigations into ChatGPT. Ursula Pachl, Deputy Director-General of BEUC, has stated that while AI technologies offer numerous benefits to society, the current regulatory framework does not provide sufficient protection against the potential harm they can cause.

The CAIDP is also asking the FTC to mandate independent assessments of OpenAI's GPT products before and after launch, and to establish a more accessible mechanism for users to report incidents involving the GPT-4 language model. Marc Rotenberg, President of CAIDP, has asserted that the FTC has a clear responsibility to investigate and prohibit unfair and deceptive trade practices, and that OpenAI's GPT-4 should be no exception.

Concerns surrounding ChatGPT and other AI-powered chat interfaces, such as Microsoft's Bing and Google's Bard, include the systems' tendency to generate false information—a phenomenon known as "hallucination" in the AI industry—and to amplify biases present in their training data. These concerns have led to calls for swift action by governments and regulatory bodies to ensure that AI technologies are developed and deployed responsibly.

European lawmakers have been working on the Artificial Intelligence Act, a proposed regulatory framework for the AI industry, for nearly two years. However, the rapid advancements in AI technology and the competitive rollout of new services have rendered some of the Act's provisions outdated. As a result, EU institutions are now scrambling to modernize the bill to effectively address the challenges posed by AI systems like ChatGPT.

With the AI Act still under negotiation, it remains unclear whether EU-level regulators will take action against OpenAI and ChatGPT. Some critics argue that fears surrounding AI are overblown and that development should not be paused. In contrast, others contend that swift regulation is necessary to tackle the potential harms posed by AI technologies, including misinformation, bias, cybersecurity threats, and the significant environmental costs associated with the computing power and electricity required to train and operate these systems.

The ongoing debate about AI's future trajectory and its impact on society underscores the urgent need for careful management and regulation of these technologies. The development of AI systems with human-competitive intelligence could pose considerable risks to society and humanity if left unchecked, making it crucial for governments and regulatory bodies to act decisively to ensure responsible AI development and deployment.
