AILSA CHANG, HOST:
As the people who study biology and chemistry and physics increasingly adopt artificial intelligence, we're starting to understand how AI is changing the practice of science itself. New research suggests that it is helping individual scientists make advances, but it may not be helping the field as a whole nearly as much. NPR's Katia Riddle reports.
KATIA RIDDLE, BYLINE: A few years ago, researcher James Evans heard something from a colleague - AI was changing the game, helping scientists get ahead.
JAMES EVANS: It's been great for them. It's - you know, they've got citations, and they're more likely to kind of advance into kind of having a research lab, to become a senior scientist, etc. And I was like, well, but what does it do to science as a whole? What's happening?
RIDDLE: Evans decided to embark on a project to answer that question. He studies computational social science at the University of Chicago. He and his colleagues analyzed millions of scientific papers, looking at how the spread of AI tools changed what scientists worked on and how their careers advanced. They published their findings in the journal Nature.
EVANS: Individual scientists have a shared problem, which is survival, right? You know, you need money to do your science.
RIDDLE: The results suggested that AI helps people publish more papers and receive more citations, which helps keep scientists funded and working. But this AI-assisted work also seemed to narrow the range of topics studied by nearly 5%. Increasingly, Evans said, researchers rely on AI to analyze existing data and suggest research directions. If one person does this, he says, fine. But if everyone does it?
EVANS: Then that would clearly be a kind of a public goods problem.
RIDDLE: That means researchers gravitate toward safer, better-known problems that have often already been explored. Both Evans and his colleague, the one who was initially excited about AI, now see this reliance on it as an obstacle. It is keeping scientists, he says, from starting new research on problems such as climate change, global public health, water scarcity, food insecurity and rare diseases.
EVANS: So there's kind of a narrowing of science around what it is that these tools can do and what they enable scientists to do.
RIDDLE: Because this study analyzed existing work rather than running an experiment, the researchers can't say for sure that AI is the cause of this pattern. Regardless, says Evans, it is troubling. He points to a recent example: a team of researchers at Google DeepMind unveiled something called AlphaGenome, an AI tool designed to predict how DNA sequences influence gene regulation. Steven Salzberg is a professor of computational biology at Johns Hopkins University. He's also critical of Google's work.
STEVEN SALZBERG: I don't think that they are sincerely committed to solving the problems. I think they instead want to prove that AI is really valuable and can just keep making progress on everything.
RIDDLE: Salzberg says he tried to use this tool, but making sense of its predictions was more trouble than it was worth. He says it's disappointing because an earlier program from Google DeepMind, called AlphaFold 2, was extremely helpful. It predicts how proteins fold at the molecular level, saving scientists months or years of lab work.
SALZBERG: AlphaFold was great, and it was a real breakthrough.
RIDDLE: In an email statement, Ziga Avsec, a scientist from DeepMind, defended the work, writing that AlphaGenome could, quote, "help scientists make progress across a number of important scientific problems, from helping to pinpoint which variants cause rare genetic diseases to designing new therapeutics," unquote. Steven Salzberg stresses this is not just a Google problem. It's an AI problem. It's affecting researchers everywhere.
SALZBERG: When you have a hammer, you go around looking for nails, and that's what AI is right now.
RIDDLE: Salzberg is trying to teach the next generation of scientists how to think strategically about using AI.
SALZBERG: That's how I tell my students to approach things - let's look at problems that we think are important and try to solve those.
RIDDLE: Maybe someday, says Salzberg, AI will be able to think on an even bigger scale. Right now, it may be good at problem-solving, he says, but we still need humans to ask the right questions. Katia Riddle, NPR News.