New research can’t confirm lab results for about half of 50 key cancer studies
Eight years ago, a team of researchers launched a project to carefully repeat early but influential lab experiments in cancer research.
They recreated 50 experiments — the type of preliminary research with mice and test tubes that sets the stage for new cancer drugs.
Now, in newly reported results, they say that about half of the scientific claims didn’t hold up.
“The truth is we fool ourselves,” said Dr. Vinay Prasad, a cancer doctor and researcher at the University of California, San Francisco, who wasn’t involved in the project. “Most of what we claim is novel or significant is no such thing.”
It’s a pillar of science that the strongest findings come from experiments that can be repeated with similar results.
In reality, there’s little incentive for researchers to share methods and data so others can verify the work, said Marcia McNutt, president of the National Academy of Sciences. Researchers lose prestige if their results don’t hold up to scrutiny, she said.
And there are built-in rewards for publishing discoveries.
For cancer patients, this can result in false hopes from reading headlines of a mouse study that seems to promise a cure “just around the corner,” Prasad said. “Progress in cancer is always slower than we hope.”
The new study points to shortcomings early in the scientific process, not problems with established treatments. By the time cancer drugs reach the market, they’ve been tested rigorously in large numbers of people to make sure they are safe and they work.
The researchers tried to repeat experiments from cancer biology papers published from 2010 to 2012 in major journals such as Cell, Science and Nature.
Overall, 54% of the original findings failed to measure up to statistical criteria set ahead of time by the Reproducibility Project, according to the team’s study, published online by the nonprofit eLife.
Among the studies that didn’t hold up was one that found a certain gut bacterium was tied to colon cancer in humans. Another was for a type of drug that shrank breast tumors in mice. A third was a mouse study of a potential prostate cancer drug.
A co-author of the prostate cancer study said the research done at Sanford Burnham Prebys research institute has held up to other scrutiny.
“There’s plenty of reproduction in the literature of our results,” said Erkki Ruoslahti, who started a company that’s now running human trials on the same compound for metastatic pancreatic cancer.
This is the second major analysis by the Reproducibility Project. In 2015, the project found similar problems when it tried to repeat experiments in psychology.
Study co-author Brian Nosek of the Center for Open Science said it can be wasteful to plow ahead without first doing the work to repeat findings.
“We start a clinical trial, or we spin up a startup company, or we trumpet to the world, ‘We have a solution,’ before we’ve done the follow-on work to verify it,” Nosek said.
The researchers tried to minimize differences in how the cancer experiments were conducted. Often, they couldn’t get help from the scientists who did the original work when they had questions about which strain of mice to use or where to find specially engineered tumor cells.
“I wasn’t surprised, but it is concerning that about a third of scientists were not helpful, and, in some cases, were beyond not helpful,” said Michael Lauer, deputy director of extramural research for the National Institutes of Health.
The NIH will try to improve data-sharing among scientists by requiring it of grant-funded institutions starting in 2023, Lauer said.
“Science, when it’s done right, can yield amazing things,” Lauer said.
For now, skepticism is the right approach, said Dr. Glenn Begley, a biotechnology consultant and former head of cancer research for drugmaker Amgen. A decade ago, he and other in-house scientists at Amgen reported even lower rates of confirmation when they tried to repeat published cancer experiments.
Cancer research is difficult, Begley said, and “it is very easy for researchers to be attracted to results that look exciting and provocative, results that appear to further support their favorite idea as to how cancer should work but that are just wrong.”