Breeze in Busan

ChatGPT: A Deep Dive into Conversational AI


By Maru Kim
Feb 26, 2023
Updated: Feb 7, 2025
3 min read

Conversational AI, including large language models (LLMs), has become one of the most talked-about technologies in recent years, with far-reaching implications for science and society. Among the most notable LLMs is ChatGPT, an AI-powered chatbot that can convincingly converse with users on a wide range of topics in multiple languages.

The Rise of ChatGPT and Other Large Language Models

ChatGPT is just one of the latest LLMs released by OpenAI and other firms. It is free, easy to use, and was trained on vast data sets of text. Researchers have already used the technology to write essays, draft and improve papers, and identify research gaps, among other tasks. The potential of LLMs is huge, from designing experiments to conducting peer reviews and supporting editorial decisions.

The Risks and Opportunities of Conversational AI in Research

Conversational AI is likely to revolutionize research practices and publishing, creating both opportunities and concerns. While it might accelerate innovation and shorten time-to-publication, it could also degrade the quality and transparency of research and spread misinformation. Researchers using LLMs risk being misled by false or biased information, and inattentive reviewers might be hoodwinked into accepting an AI-written paper without realizing it.

The Need for Human Verification and Accountability

Using conversational AI for specialized research is likely to introduce inaccuracies, bias, and plagiarism. Expert-driven fact-checking and verification processes will be indispensable to guard against these risks. High-quality journals might decide to include a human verification step or even to ban certain applications of the technology. Emphasizing human accountability will become even more crucial to counter automation bias.

Developing Rules for the Responsible Use of LLMs

Research institutions, publishers, and funders should adopt explicit policies that raise awareness of and demand transparency about the use of conversational AI in the preparation of all materials that might become part of the published record. Author-contribution statements and acknowledgments in research papers should state clearly and specifically whether and to what extent LLMs were used.

Investing in Truly Open LLMs

The lack of transparency in the underlying training sets and LLMs for ChatGPT and its predecessors is a concern. The development and implementation of open-source AI technology should be prioritized to counter this opacity. Non-commercial organizations, universities, NGOs, government research facilities, and organizations such as the United Nations should make considerable investments in independent non-profit projects to develop advanced open-source, transparent, and democratically controlled AI technologies.

Embracing the Benefits of AI in Science

Chatbots provide opportunities to complete tasks quickly, whether for PhD students striving to finalize their dissertations, researchers needing a quick literature review for a grant proposal, or peer reviewers under time pressure to submit their analysis. Conversational AI has enormous potential, provided that the current teething problems related to bias, provenance, and inaccuracies are ironed out.

Wider Debate and International Forum on LLMs

The research community needs to organize an urgent and wide-ranging debate on the development and responsible use of LLMs for research. This discussion should include scientists of different disciplines, technology companies, big research funders, science academies, publishers, NGOs, and privacy and legal specialists. The debate should address the implications of LLMs on diversity and inequalities in research and should involve people from underrepresented groups in research and communities affected by research.

Conversational AI is a game-changer for science, offering tremendous potential for innovation and breakthroughs in various disciplines. However, there are challenges related to bias, provenance, and inaccuracies that need to be addressed, and there is a need for an urgent and wide-ranging debate on the responsible use of LLMs for research. Ultimately, science must find a way to benefit from conversational AI without compromising its core values and standards, such as curiosity, imagination, and discovery. With the right regulations, policies, and guidelines in place, we can harness the power of LLMs to drive scientific progress and create a better future for all.
