Government Sets New Standards for Handling Public Data in AI Development

Maru Kim

Seoul, South Korea – The Personal Information Protection Commission (PIPC) has released new guidelines for the safe handling of publicly available data used in artificial intelligence (AI) development and services. These guidelines aim to address legal uncertainties for companies and enhance privacy protection for citizens.

The new guide, titled “Guide to Handling Publicly Available Personal Data for AI Development and Services”, outlines how companies can legally and safely process data that is openly accessible on the internet. This data, essential for training generative AI models like ChatGPT, often includes personal information such as addresses, identification numbers, and credit card details.

The guide clarifies the legal framework for using public data in AI development, emphasizing the necessity of processing such data within the boundaries of current privacy laws. It recommends technical and administrative safeguards to minimize privacy risks while allowing AI companies to innovate. The guidelines also align with international practices, considering policies from the EU, US, and other leading countries in AI regulation.

The PIPC recognized the urgent need for clear guidelines as existing privacy laws did not adequately cover the use of large-scale public data for AI training. This has created significant legal uncertainties for AI companies and potential privacy risks for individuals.

The guide was developed through extensive consultations with academia, industry experts, and civil society organizations. The AI Privacy Public-Private Policy Council, established last August, played a crucial role in drafting the guidelines. The council brings together 30 experts from various fields and focuses on data processing standards, risk assessment, and transparency.

In forming these guidelines, the PIPC also looked at international trends. The UK is currently holding public consultations to recognize the “legitimate interest” in using web-scraped data for AI training. France has established criteria for the “legitimate interest” in processing personal data for AI training. The US has proposed federal privacy legislation that excludes publicly available information from the definition of personal data.

The PIPC will continue to update the guidelines in response to technological advancements and evolving international regulations. The commission also plans to gather further input from academia, industry, and civil society to refine the legal grounds for processing user data in AI development.

Experts emphasize the importance of finding a balance between protecting personal data and promoting AI innovation. The guidelines are seen as a critical step towards reducing legal uncertainty and fostering a trustworthy data processing environment for AI technologies.

The hope is that these guidelines will help establish trustworthy AI and data processing practices, encouraging companies to develop innovative AI solutions within a clear legal framework.

Maru Kim, Editor-in-Chief and Publisher, is dedicated to providing insightful and captivating stories that resonate with both local and global audiences. With a deep passion for journalism and a keen understanding of Busan’s cultural and economic landscape, Maru has positioned 'Breeze in Busan' as a trusted source of news, analysis, and cultural insight.