Overdiagnosed
On circumstantial evidence of overdiagnosis.
Prostates from men who had died in accidents were examined. The men weren’t known to be sick or to have any cancer. And because the researchers studied 525 men of different ages, they were able to estimate the prevalence of prostate cancer in various age groups. To be clear, “prostate cancer” here means the detection of prostate cancer cells in histopathological sections of the subjects’ prostate glands, which is similar to what doctors order for patients, i.e. a biopsy.
The results are striking. None of these men knew they had prostate cancer while they were alive. Even among young men in their twenties, almost 10% were found to have prostate cancer. And the proportion only increased with age. Among men in their seventies, more than 75% were found to have prostate cancer. If more than 50% of older men have prostate cancer, but only 3 percent will ever die of it, the potential for overdiagnosis is enormous. And when does this potential problem become a real problem? When doctors look harder in an effort to find small, early cancers.
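To make the arithmetic behind that “enormous potential” concrete, here is a minimal back-of-the-envelope sketch in Python. The 50 percent prevalence and 3 percent lifetime mortality figures are simply the numbers quoted above; the variable names are my own.

```python
# Back-of-the-envelope sketch of the overdiagnosis potential described above.
# Assumed inputs (taken from the figures quoted in the text, not new data):
prevalence_if_we_look_hard = 0.50  # fraction of older men harboring prostate cancer cells
lifetime_risk_of_death = 0.03      # fraction of men who will ever die of prostate cancer

# If every harbored cancer were detected, at most this fraction of detected
# cancers could ever prove fatal; the remainder would be overdiagnosed.
max_fraction_fatal = lifetime_risk_of_death / prevalence_if_we_look_hard
print(f"At most {max_fraction_fatal:.0%} of detected cancers could prove fatal")
print(f"At least {1 - max_fraction_fatal:.0%} would be overdiagnosed")
# Output: at most 6% fatal, at least 94% overdiagnosed
```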
The way we infer that overdiagnosis has occurred is by comparing the rates of cancer diagnosis and cancer death over time. Here, the rise in cancer diagnosis was not accompanied by a rise in cancer death. This suggests that while there is more diagnosis, there is no change in the underlying amount of cancer that matters. In other words, it suggests overdiagnosis: the detection of very slow-growing or nonprogressive cancers.
An alternative explanation for why this happened.
There is a true increase in the underlying amount of cancer destined to affect patients, but improvements in diagnosis and treatment match the increase in new cases, leaving the total number of cancer deaths unchanged.
While possible, this explanation strains credulity. It is certainly not the most parsimonious explanation: it requires two conditions (a true increase in cancer and improving medical care) instead of one (overdiagnosis). In other words, the authors used Occam’s razor to argue that this alternative explanation is unlikely because it is more complex than the overdiagnosis explanation. To me, this argument is a bit shaky.
It also requires a heroic assumption: that the rate of diagnosis and treatment improvement exactly matches the increase in true disease burden. If treatment improvements outpaced the rise in cancer, mortality would fall. If the rise in cancer outpaced improvements in treatment, mortality would rise. For mortality to remain unchanged, the rise in cancer would have to exactly match the improvements in treatment. This argument, to me, is stronger than the Occam’s razor one, because the chance that improvements in diagnosis and treatment would exactly offset the increase in true disease burden is minuscule.
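A toy calculation, with made-up numbers purely to illustrate the knife-edge this alternative explanation requires, makes the point:

```python
# Toy illustration with invented numbers: deaths ≈ true cases × case fatality.
baseline_cases, baseline_fatality = 1000, 0.30   # baseline deaths = 300

# Suppose the true incidence really rose by 40% (to 1400 cases). Mortality stays
# flat only if treatment improves the case fatality by exactly the offsetting
# amount (0.30 / 1.4 ≈ 0.214); anything less and deaths rise, anything more
# and deaths fall.
for new_fatality in (0.30, 0.25, 0.214, 0.20):
    deaths = 1400 * new_fatality
    print(f"case fatality {new_fatality:.3f} -> {deaths:.0f} deaths (baseline 300)")
```

An unchanged death count requires the two trends to cancel exactly, which is precisely the coincidence this argument objects to.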
On real benefits of prostate cancer screening.
The research efforts spanned almost twenty years, involved over a quarter of a million men, and cost many millions of dollars. Yet there is still some uncertainty as to whether screening saves any lives: the European study concluded that it did, while the U.S. study concluded that it did not. If anything, the U.S. data made one wonder whether less screening saved lives.
The European study found that screening reduced prostate cancer mortality by 20 percent. By statistical conventions, this is not a chance finding, but it is very close to being one.
The U.S. study found that screening increased prostate cancer mortality by 13 percent. By statistical conventions, this is a chance finding.
But there are some reasons to worry that screening could have the opposite effect of that intended. So uncertainty remains, despite two studies involving over a quarter of a million men. Although the results are contradictory and non-definitive, one thing is clear: when the results are still unclear even with a very large sample, statistics tells us that any benefit from screening, if it exists at all, must be small.
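As a rough sketch of why a null result in a very large trial bounds the plausible benefit, here is an illustration using a standard confidence interval for a risk ratio. The event rates and arm sizes below are invented for illustration and are not the actual trial data.

```python
import math

def rr_confidence_interval(events_a, n_a, events_b, n_b, z=1.96):
    """95% CI for the risk ratio (screened vs. control), normal approximation on the log scale."""
    rr = (events_a / n_a) / (events_b / n_b)
    se = math.sqrt(1/events_a - 1/n_a + 1/events_b - 1/n_b)
    return rr, rr * math.exp(-z * se), rr * math.exp(z * se)

# Illustrative (not actual) numbers: the same 0.5% vs 0.55% death rates,
# observed first in a small trial and then in a quarter-million-man trial.
for n in (10_000, 125_000):
    rr, lo, hi = rr_confidence_interval(int(0.005 * n), n, int(0.0055 * n), n)
    print(f"n per arm {n:>7}: RR {rr:.2f}, 95% CI {lo:.2f}-{hi:.2f}")
# Small trial: the interval is wide, compatible with large benefit or real harm.
# Huge trial: the interval collapses around the observed ratio, close to 1.
```

The point is not the particular numbers but the shape: as the trial grows, the interval narrows, so “no clear difference” in a study this large implies that any true benefit, if present, is modest.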
On clinical decisions influenced by medico-legal considerations.
An increasingly destructive (and common) mind-set in our society: something bad has happened, and thus someone must be at fault.
The perceived risk of malpractice suits is much larger than the real risk; all that matters, however, is the perception. There are legal penalties for underdiagnosis (failure to diagnose) but no corresponding penalties for overdiagnosis. Deciding which is the “safer” strategy on that basis is not very challenging.
On what we should really be focusing on, and on over-relying on surrogate endpoints to our detriment.
Many osteoporosis medications demonstrated their effectiveness by documenting increased bone density. But the real question is: do they reduce the number of bone fractures? If they don’t, then increased bone density means nothing.
When MRI first came out, medical communities started to document silent abnormalities, e.g. bulging discs in people who don’t have back pain, or signs of sinusitis in people who feel fine. If these abnormalities don’t do any harm, why bother? We perhaps need to differentiate between diagnostic abnormalities and therapeutic abnormalities: the latter require intervention, while the former don’t. Failure to do so has meant that a lot of people have gone under the knife unnecessarily. I feel that our understanding sometimes can’t quite catch up with the technology.
The issue with extrapolating from severe to mild abnormalities is that, practically speaking, it is often not known whether the important benefits of treating severe abnormalities (e.g. avoiding death or bone fractures) will also appear in people with mild abnormalities. These events are so rare in those with mild abnormalities that it would require enormous studies to learn whether treatment actually has an important benefit for this group. The studies required would be logistically and financially prohibitive. So investigators focus on less important but more measurable outcomes, such as bone density or PSA (prostate-specific antigen) level, as surrogates. Even outcomes that seem important on initial inspection may be more ambiguous in reality: compression fractures of the spine (which patients may or may not feel) or the development of small cancers (which may or may not grow). Benefits in these surrogate and ambiguous outcomes may be demonstrable, but improvements in them do not reliably translate into improvements that matter, namely, whether people feel better or live longer. Instead they require a leap of faith: an inference that proof of measurable benefits portends the existence of important benefits. But real benefits are, at best, small and uncertain, and they can easily be overwhelmed by the associated harms of diagnosis, hassle factors, and adverse effects from intervention (although these harms typically are not even considered, much less measured).
This may have direct implications for asymptomatic apical radiolucencies on endodontically treated teeth that are detected by CBCT but not by conventional PA radiographs. We may see more radiolucencies on CBCT because the imaging technique is more sensitive, but do these newly detected radiolucencies need treatment? Will they do any harm if left untouched? These questions remain to be answered. However, if what is discussed in this book is any indication, the answers would likely be no and unlikely.