Civil Society Groups Urge Swift Action to Protect People from Risks Posed by AI Technologies like ChatGPT

The ongoing debate about AI's future trajectory and its impact on society underscores the urgent need for careful management and regulation of these technologies.

Maru Kim

As artificial intelligence (AI) technologies advance at an unprecedented pace, civil society groups in the U.S. and Europe are pressing authorities to take swift and decisive action to protect people from the potential threats posed by OpenAI’s GPT and ChatGPT models. The rapid proliferation of these systems has raised significant concerns about their impact on society, prompting coordinated pushback from advocacy organizations on both sides of the Atlantic.

In the United States, the Center for AI and Digital Policy (CAIDP) has filed a formal complaint with the Federal Trade Commission (FTC), urging the agency to halt further commercial deployment of GPT by OpenAI until appropriate safeguards are established to prevent ChatGPT from deceiving users and perpetuating biases ingrained in the training data. The CAIDP argues that the rapid adoption of AI technologies necessitates prompt action by regulators to ensure that the potential harms associated with these systems are adequately addressed.

Simultaneously, the European Consumer Organisation (BEUC) has called upon European regulators at both the EU and national levels to launch investigations into ChatGPT. Ursula Pachl, Deputy Director-General of BEUC, has stated that while AI technologies offer numerous benefits to society, the current regulatory framework does not provide sufficient protection against the potential harm they can cause.

The CAIDP is also asking the FTC to mandate independent assessments of OpenAI’s GPT products before and after launch, and to establish a more accessible mechanism for users to report incidents involving the GPT-4 language model. Marc Rotenberg, President of CAIDP, has asserted that the FTC has a clear responsibility to investigate and prohibit unfair and deceptive trade practices, and that OpenAI’s GPT-4 should be no exception.

Concerns surrounding ChatGPT and other AI-powered chat interfaces, such as Microsoft’s Bing and Google’s Bard, include the systems’ tendency to generate false information—a phenomenon known as “hallucination” in the AI industry—and to amplify biases present in their training data. These concerns have led to calls for swift action by governments and regulatory bodies to ensure that AI technologies are developed and deployed responsibly.

European lawmakers have been working on the Artificial Intelligence Act, a proposed regulatory framework for the AI industry, for nearly two years. However, the rapid advancements in AI technology and the competitive rollout of new services have rendered some of the Act’s provisions outdated. As a result, EU institutions are now scrambling to modernize the bill to effectively address the challenges posed by AI systems like ChatGPT.

With the AI Act still under negotiation, it remains unclear whether EU-level regulators will take action against OpenAI and ChatGPT. Some critics argue that fears surrounding AI are overblown and that development should not be paused. Others contend that swift regulation is necessary to tackle the potential harms posed by AI technologies, including misinformation, bias, cybersecurity threats, and the significant environmental costs of the computing power and electricity required to train and operate these systems.

The ongoing debate about AI’s future trajectory and its impact on society underscores the urgent need for careful management and regulation of these technologies. The development of AI systems with human-competitive intelligence could pose considerable risks to society and humanity if left unchecked, making it crucial for governments and regulatory bodies to act decisively to ensure responsible AI development and deployment.
