Conversational AI, including large language models (LLMs), has become one of the most talked-about technologies in recent years, with far-reaching implications for science and society. Among the most notable LLMs is ChatGPT, an AI-powered chatbot that can convincingly converse with users on a wide range of topics in multiple languages.
The Rise of ChatGPT and Other Large Language Models
ChatGPT is one of the latest LLMs released by OpenAI and other firms. It is free, easy to use, and was trained on vast data sets of text. Researchers have already used this technology to write essays, draft and improve papers, and identify research gaps, among other tasks. The potential of LLMs is huge, from designing experiments to conducting peer reviews and supporting editorial decisions.
The Risks and Opportunities of Conversational AI in Research
Conversational AI is likely to revolutionize research practices and publishing, creating both opportunities and concerns. While it might accelerate innovation and shorten time-to-publication, it could also degrade the quality and transparency of research and spread misinformation. Researchers using LLMs risk being misled by false or biased information, and inattentive reviewers might be hoodwinked into accepting an AI-written paper without realizing it.
The Need for Human Verification and Accountability
Using conversational AI for specialized research is likely to introduce inaccuracies, bias, and plagiarism. Expert-driven fact-checking and verification processes will be indispensable to prevent such risks. High-quality journals might decide to include a human verification step or even ban certain applications that use this technology. Emphasizing accountability will become even more important to counter automation bias, the human tendency to place too much trust in automated output.
Developing Rules for the Responsible Use of LLMs
Research institutions, publishers, and funders should adopt explicit policies that raise awareness of and demand transparency about the use of conversational AI in the preparation of all materials that might become part of the published record. Author-contribution statements and acknowledgments in research papers should state clearly and specifically whether and to what extent LLMs were used.
Investing in Truly Open LLMs
The lack of transparency about the training data and models underlying ChatGPT and its predecessors is a concern. The development and implementation of open-source AI technology should be prioritized to counter this opacity. Non-commercial organizations, universities, NGOs, government research facilities, and organizations such as the United Nations should make considerable investments in independent non-profit projects to develop advanced open-source, transparent, and democratically controlled AI technologies.
Embracing the Benefits of AI in Science
Chatbots can help complete tasks quickly: PhD students striving to finalize their dissertations, researchers needing a quick literature review for a grant proposal, and peer reviewers under time pressure to submit their analyses can all benefit. Conversational AI has enormous potential, provided that the current teething problems related to bias, provenance, and inaccuracies are ironed out.
Wider Debate and International Forum on LLMs
The research community needs to organize an urgent and wide-ranging debate on the development and responsible use of LLMs for research. This discussion should include scientists of different disciplines, technology companies, big research funders, science academies, publishers, NGOs, and privacy and legal specialists. The debate should address the implications of LLMs on diversity and inequalities in research and should involve people from underrepresented groups in research and communities affected by research.
Conversational AI is a game-changer for science, offering tremendous potential for innovation and breakthroughs across disciplines. However, challenges related to bias, provenance, and inaccuracy must be addressed, and the research community urgently needs a wide-ranging debate on the responsible use of LLMs. Ultimately, science must find a way to benefit from conversational AI without compromising its core values and standards, such as curiosity, imagination, and discovery. With the right regulations, policies, and guidelines in place, we can harness the power of LLMs to drive scientific progress and create a better future for all.