A new study finds a steady drop since 1945 in disruptive feats as a share of the world’s booming enterprise in scientific and technological advancement.
Miracle vaccines. Videophones in our pockets. Reusable rockets. Our technological bounty and its related blur of scientific progress seem undeniable and unsurpassed. Yet analysts now report that the overall pace of real breakthroughs has fallen dramatically over nearly three-quarters of a century.
This month in the journal Nature, the report’s researchers told how their study of millions of scientific papers and patents shows that investigators and inventors have made relatively few breakthroughs and innovations compared with the world’s growing mountain of science and technology research. The three analysts found a steady drop from 1945 through 2010 in disruptive finds as a share of the booming venture, suggesting that scientists today are more likely to push ahead incrementally than to make intellectual leaps.
“We should be in a golden age of new discoveries and innovations,” said Michael Park, an author of the paper and a doctoral candidate in entrepreneurship and strategic management at the University of Minnesota.
The new finding of Mr. Park and his colleagues suggests that investments in science are caught in a spiral of diminishing returns and that quantity in some respects is outpacing quality. While the study did not address the question, it also raises doubts about the extent to which science can open new frontiers and sustain the kind of boldness that unlocked the atom and the universe, and about what can be done to reverse the shift away from pioneering discovery. Earlier studies have pointed to slowdowns in scientific progress but typically with less rigor.
Mr. Park, Russell J. Funk, also of the University of Minnesota, and Erin Leahey, a sociologist at the University of Arizona, based their study on an enhanced kind of citation analysis that Dr. Funk helped to devise. In general, citation analysis tracks how researchers cite one another’s published works as a way of separating bright ideas from unexceptional ones in a system flooded with papers. Their improved method widens the analytic scope.
“It’s a very clever metric,” said Pierre Azoulay, a professor of technological innovation, entrepreneurship and strategic management at the Massachusetts Institute of Technology. “I was giddy when I saw it. It’s like a new toy.”
Researchers have long sought objective ways to assess the state of science, which is seen as vital to economic growth, national pride and military strength. It became more difficult to do so as published papers soared in number to more than one million annually. Each day, that’s more than 3,000 papers — by any standard, an indecipherable blur.
Defying the surge, experts have debated the value of incremental strides versus “Eureka!” moments that change everything known about a field.
The new study could deepen the debate. One surprise is that discoveries hailed popularly as groundbreaking are seen by the authors of the new study as often representing little more than routine science, and true leaps as sometimes missing altogether from the conversation.
For instance, the top breakthrough on the study’s list of examples is a gene-splicing advance that is little known to the public. It let foreign DNA be inserted into human and animal cells rather than just bacterial ones. The New York Times referred to it in a 1983 note of four paragraphs. Even so, the feat produced a run of awards for its authors and their institution, Columbia University, as well as almost $1 billion in licensing fees as it lifted biotechnology operations around the world.
In contrast, the analysts would see two of this century’s most celebrated findings as representing triumphs of ordinary science rather than edgy leaps. The mRNA vaccines that successfully battle the coronavirus were rooted in decades of unglamorous toil, they noted.
So too, the 2015 observation of gravitational waves — subtle ripples in the fabric of space-time — was no unforeseen breakthrough but rather the confirmation of a century-old theory that required decades of hard work, testing and sensor development.
“Disruption is good,” said Dashun Wang, a scientist at Northwestern University who used the new analytic technique in a 2019 study. “You want novelty. But you also want everyday science.”
The three analysts uncovered the trend toward incremental advance while using the enhanced form of citation analysis to scrutinize nearly 50 million papers and patents published from 1945 to 2010. They looked across four categories — the life sciences and biomedicine, the physical sciences, technology and the social sciences — and found a steady drop in what they called “disruptive” findings. “Our results,” they wrote, “suggest that slowing rates of disruption may reflect a fundamental shift in the nature of science and technology.”
Their novel method, like citation analysis in general, draws its analytic power from the requirement that scientists cite the studies that helped to shape their published findings. Starting in the 1950s, analysts began to tally those citations as a way to identify research of importance. It was a kind of scientific applause meter.
But the count could be misleading. Some authors cited their own research quite often. And stars of science could receive lots of citations for unremarkable finds. Worst of all, some of the most highly cited papers turned out to involve minuscule improvements in popular techniques used widely by the scientific community.
The new method looks at citations more deeply to better separate everyday work from true breakthroughs. It tallies citations not only to the analyzed piece of research but also to the previous studies that research cites. It turns out that the previous work is cited far more often when the finding is routine rather than groundbreaking. The analytic method turns that difference into a new lens on the scientific enterprise.
The measure is called the CD index after its scale, which goes from consolidating to disrupting the body of existing knowledge.
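The counting scheme described above can be sketched in a few lines of code. This is a simplified illustration of a CD-style score, not the authors' actual implementation; the paper names and citation lists below are invented for the example. A later paper that cites the focal work but none of its predecessors counts as disrupting (+1); one that cites the focal work together with its predecessors counts as consolidating (-1); one that cites only the predecessors counts as neutral (0).

```python
def cd_index(focal, focal_refs, later_papers):
    """Simplified CD-style score on a toy citation graph.

    focal: identifier of the paper being scored.
    focal_refs: the set of papers the focal paper cites (its predecessors).
    later_papers: reference lists (sets) of papers published afterward.

    Returns a value between -1 (consolidating) and +1 (disrupting).
    """
    focal_refs = set(focal_refs)
    scores = []
    for refs in later_papers:
        cites_focal = focal in refs
        cites_predecessors = bool(focal_refs & refs)
        if cites_focal and not cites_predecessors:
            scores.append(1)    # disrupting: the focal paper eclipsed its sources
        elif cites_focal and cites_predecessors:
            scores.append(-1)   # consolidating: focal cited alongside its sources
        elif cites_predecessors:
            scores.append(0)    # later work bypasses the focal paper entirely
        # papers citing neither lie outside the focal paper's neighborhood
    return sum(scores) / len(scores) if scores else 0.0

# Toy example: focal paper "F" builds on predecessors "A" and "B".
later = [{"F"}, {"F", "C"}, {"F", "A"}, {"A"}, {"D"}]
print(cd_index("F", {"A", "B"}, later))  # 0.25: mildly disruptive
```

Two of the four relevant later papers cite "F" without its sources, one cites it alongside a source, and one cites only a source, yielding a score of 0.25.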
Dr. Funk, who helped to devise the CD index, said the new study was so computationally intense that the team at times used supercomputers to crunch the millions of data sets. “It took a month or so,” he said. “This kind of thing wasn’t possible a decade ago. It’s just now coming within reach.”
The novel technique has aided other investigators, such as Dr. Wang. In 2019, he and his colleagues reported that small teams are more innovative than large ones. The finding was timely because science teams over the decades have shifted in makeup to ever-larger groups of collaborators.
In an interview, James A. Evans, a University of Chicago sociologist who was a co-author of that paper with Dr. Wang, called the new method elegant. “It came up with something important,” he said. Its application to science as a whole, he added, suggests not only a drop in the return on investment but a growing need for policy reform.
“We have extremely ordered science,” Dr. Evans said. “We bet with confidence on where we invest our money. But we’re not betting on fundamentally new things that have the potential to be disruptive. This paper suggests we need a little less order and a bit more chaos.”