In a wide-ranging interview with JAMA Editor-in-Chief Kirsten Bibbins-Domingo, Marcia McNutt, President of the National Academy of Sciences (NAS), explored the transformative potential of generative AI in scientific research and the challenges it poses to established scientific norms. McNutt, a geophysicist and former editor-in-chief of Science, discussed the urgent need for oversight and responsible practices as AI technologies, particularly large language models like ChatGPT, begin to play a larger role in data analysis, research publishing, and scientific discovery.

The National Academy of Sciences, a body that has served as an apolitical resource for scientific advice for more than 160 years, has been proactive in addressing the implications of AI in science. McNutt explained that while AI has been used in scientific applications for decades—such as in marine science for autonomous underwater vehicles—the advent of generative AI introduces new concerns, particularly around accountability and transparency. “Generative AI challenges some of the core values of science, including reproducibility, attribution, and transparency,” McNutt said. “When it comes to generative AI, you can put in the same query and get completely different answers, which violates the reproducibility we value in science.”

One of the key challenges with generative AI, McNutt noted, is its tendency to produce results without clear sources or references. In scientific research, attribution is a cornerstone: authors must cite previous work to acknowledge contributions and build on others' findings. But AI-generated text does not always provide accurate citations, and sometimes it invents them entirely. “Sometimes the references generative AI provides are made up,” McNutt explained, highlighting the lack of transparency in the process. Without clear documentation of how results were produced and what data or research was used, AI-generated content risks undermining the very principles of scientific integrity.

To address these concerns, McNutt and her colleagues at the NAS have called for the creation of a Strategic Council on the Responsible Use of AI in Science, which would focus on developing guidelines and best practices for AI usage in research. The council would work to ensure that AI applications align with the fundamental ethical standards of science, including human responsibility. “Ultimately, the responsibility falls on the researcher,” McNutt emphasized. “If you're using generative AI, you need to double-check the results. If you don’t, that’s on you.” She urged the scientific community to adopt a culture of transparency, where researchers disclose AI’s role in their work and rigorously verify AI-generated results. “Disclose, disclose, disclose,” she stressed, echoing the central message of NAS’s recent editorial on the topic.

Another area of concern McNutt raised was the public's trust in science, particularly as new technologies like generative AI are rapidly integrated into research and publishing. Drawing on the historical example of genetically modified organisms (GMOs), McNutt cautioned that AI could follow a similar path, with the public coming to view it as an industry-driven product built for profit rather than a means of advancing knowledge. “When science is driven primarily by corporate interests, there is a danger of losing public trust,” she warned. McNutt also pointed to the growing risk of academia being “priced out” of AI research by the high cost of developing and maintaining AI technologies, which could limit the influence of independent researchers and widen the gap between corporate and academic interests in the AI space.

The interview also touched on the broader implications of AI for the future of science and public policy. McNutt noted that while AI tools have the potential to revolutionize areas such as climate prediction, mathematical modeling, and life sciences research, they also carry serious risks, especially in fields with significant ethical and social consequences. One example she highlighted was the potential for AI to exacerbate problems in the social sphere, such as dynamic pricing: by analyzing consumer behavior, such systems could manipulate prices according to a customer's perceived economic status. She called for ongoing vigilance as AI applications expand into every corner of society and research.

In addressing these challenges, McNutt echoed Bibbins-Domingo's observation that the scientific community must balance enthusiasm for AI’s capabilities with a healthy skepticism and a commitment to ensuring its responsible use. “As scientists, we are learning how to use AI tools effectively and responsibly,” McNutt said. “We are still figuring out the rules.” She added that while AI has the power to significantly enhance scientific discovery, it also requires careful scrutiny at every stage, from development to application. The scientific community, according to McNutt, must take an active role in defining the ethical boundaries of AI, ensuring that AI development is not left to large corporate interests alone.

For McNutt, transparency and independent verification remain foundational to scientific integrity. Drawing a parallel between AI's current state and the early days of technological breakthroughs like semiconductors, she emphasized that only by rigorously testing and validating AI tools can the scientific community establish trust in their results. “We don’t yet know the full potential of AI, but we do know that it must be used responsibly,” McNutt said. “Trust is built on transparency, disclosure, and independent verification, and these are principles we must uphold as we move forward in this new era.”

The discussion concluded with McNutt’s firm belief that with the right framework, AI can be a powerful force for good in scientific discovery. “AI can help us solve some of the world’s most pressing problems,” she said, “but it’s up to us to ensure that it is used in a way that benefits all of society.” As generative AI continues to reshape the landscape of scientific research, McNutt’s call for oversight, responsibility, and transparency will likely remain at the forefront of the ongoing conversation on AI in science.

Source: Journal of the American Medical Association (JAMA) | November 8, 2024
