When talking about cancer screening, survival rates mislead

If we want to know whether screening saves lives, we need to show a reduction in the cancer death rate rather than an increase in the survival rate. Steven Depolo/Flickr, CC BY-SA

Cancer screening is beneficial when it prevents people from dying of cancer, and it should clearly be adopted where there’s evidence it does. But using cancer survival rates to promote screening, as is often done, is misleading.

For screening to prevent people from dying early, simply finding cancers is not enough; we need to find progressive cancers that would kill if left untreated. What’s more, we need effective treatment for these cancers. And the therapy has to be more likely to cure if administered earlier (when cancer may be detected by screening) than later (when cancer may be detected by the patient or doctor without screening).

Yet many screening advocates – often with the best of intentions – use survival rates to make their case. These statistics may appear to be legitimate evidence for screening, even to medical professionals. But they’re highly misleading, as they always make screening look worthwhile even when it isn’t actually doing any good.

The wrong emphasis

Consider breast cancer screening. The high survival rates for this cancer are used around the world to impress on people the apparent benefits of screening.

Australian research on the topic published last year, for instance, reported improved survival among women whose cancers were found by screening compared to those whose cancers were found because they developed symptoms. An accompanying press release claimed that “the findings show the survival benefits of mammography screening”.

The world’s largest breast cancer charity (and the creator of the “pink ribbon”) is US-based Susan G. Komen. Until recently, its website promoted breast cancer screening by noting that:

early detection saves lives. The five-year survival rate for breast cancer when caught early is 98%. When it’s not? 23%.

By highlighting the higher survival rate of women whose cancer was detected by screening, these examples imply screening prolongs lives. But survival rates actually tell us little about longer lifespans.

In fact, research shows that, between 1950 and 1995, increases in survival rates after cancer diagnosis weren’t associated with decreases in cancer death rates over the same period. What’s more, the cancers that showed the largest increases in five-year survival didn’t have the largest decreases in deaths. And although five-year survival increased for all cancers over the study period, the death rate for some of them actually increased.

How it works

People whose cancer is detected by screening will show a higher survival rate than those who detect their cancer themselves or have it diagnosed by their doctor, even when screening has not prolonged any lives, because of two types of statistical bias.

The first is known as lead time bias. This is where the detection of cancer through screening at an early stage does nothing but advance the date of diagnosis. The figure below shows hypothetical outcomes for a person with cancer for different screening scenarios.

Figure one: improved survival due to lead time bias

Here’s how to read the three scenarios:

  1. No screening – does not participate in screening; cancer is diagnosed because the person developed symptoms and the person dies four years later. The five-year survival rate for this cancer is 0%.

  2. Ineffective screening – participates in screening; cancer is detected but earlier treatment does not alter natural disease progression. The five-year survival rate for this cancer, which is caught early, is 100% – even though screening has made no difference to how long the person actually lives.

  3. Effective screening – participates in screening; cancer is detected and treatment is able to prolong life. The five-year survival rate is 100%, although screening actually only prolonged the person’s life by two years.

Survival time from diagnosis is substantially longer in both screening scenarios, even though only the last delivers a real benefit.
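To make the arithmetic concrete, here’s a minimal sketch in Python. The dates are hypothetical, chosen to match the ineffective-screening timeline above: diagnosis four years before death without screening, earlier with screening, and the same year of death either way.

```python
# Lead time bias in miniature: the person dies in the same year in both
# scenarios; only the year of diagnosis moves.
DIAGNOSED_BY_SYMPTOMS = 2010   # cancer found clinically, without screening
DIAGNOSED_BY_SCREENING = 2006  # same cancer found four years earlier by screening
DEATH = 2014                   # unchanged: screening did not alter the outcome

def survives_five_years(diagnosis_year: int, death_year: int) -> bool:
    """True if the person is still alive five years after diagnosis."""
    return death_year - diagnosis_year >= 5

print(survives_five_years(DIAGNOSED_BY_SYMPTOMS, DEATH))   # False -> 0% five-year survival
print(survives_five_years(DIAGNOSED_BY_SCREENING, DEATH))  # True  -> 100% five-year survival
```

The measured statistic flips from 0% to 100%, yet the person lives to exactly the same age either way.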

The second statistical distortion results from what is known as length time bias. This is where screening disproportionately picks up slow-growing cancers – which spend longer in a detectable, symptom-free state – and these have a better prognosis than cancers presenting clinically (detected by the doctor or the person when they become ill).

The extreme of this is over-diagnosis, where the cancer would never have caused symptoms or death during the person’s lifetime. Unfortunately, progressive and non-progressive cancer cells appear the same under the microscope – all are labelled as cancer.

The figure below shows hypothetical outcomes for a population with different screening scenarios.

Figure two: improved survival due to length time bias

Here’s how to read it:

  1. No screening – all cancers are diagnosed because the person develops symptoms, which means all are progressive. The five-year survival rate without screening is 20%.

  2. Ineffective screening – additional non-progressive cancers are detected by screening; progressive cancers may be detected earlier than with no screening, but earlier treatment makes no difference to natural disease progression. The five-year survival rate is pushed up to 43%, even though screening has made no difference to how many lives are saved.

  3. Effective screening – additional non-progressive cancers are detected by screening; cancers that are progressive are detected earlier than with no screening and treatment is able to prolong life.

Five-year survival from diagnosis is substantially higher in both screening scenarios, even though only the last delivers a real benefit.
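Here’s the same kind of sketch for length time bias. The counts are hypothetical, chosen to reproduce the survival rates quoted above (20% without screening, 43% with ineffective screening):

```python
# Length time bias in miniature: screening adds harmless, non-progressive
# cancers to the pool of "diagnosed" people, inflating survival without
# preventing a single death.
progressive = 500              # cancers that cause symptoms; found with or without screening
progressive_alive_at_5y = 100  # 20% of progressive cancers survive five years

no_screening = progressive_alive_at_5y / progressive
print(f"No screening: {no_screening:.0%}")  # 20%

non_progressive = 200  # over-diagnosed cancers; everyone with one survives

ineffective_screening = (progressive_alive_at_5y + non_progressive) / (progressive + non_progressive)
print(f"Ineffective screening: {ineffective_screening:.0%}")  # 43%
```

Survival climbs from 20% to 43% purely because the denominator gained 200 people who were never going to die of their cancer.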

When is screening worthwhile?

If we want to show that screening saves lives, we need to show a reduction in the cancer death rate, which measures the number of people dying of the cancer within a population over a period of time.

Unlike survival rates, death rates don’t depend on when the cancer was diagnosed. And the denominator is the total number of people in the population, rather than just those diagnosed with cancer.

To work out death rates, researchers might compare the number of people dying over an equivalent period of time in screened and unscreened groups. Or they might compare a population before and after a screening program starts, or compare populations that are otherwise similar except that one has screening and the other does not.

The figures below show that death rates are not affected by lead time or length time biases.

Figure three: death rates not affected by lead time bias
Figure four: death rates not affected by length time bias

Here’s how to read the three scenarios:

  1. No screening – the person from the lead time example dies during the study period, so the death rate is 100% (first panel, Figure three); 400 of the 5,000 unscreened population from the length time example die before five years, so the five-year death rate is 8% (first panel, Figure four).

  2. Ineffective screening – the person dies during the study period, so the death rate is 100% (middle panel, Figure three); 400 of the 5,000 screened population die before five years, so the five-year death rate is 8% (middle panel, Figure four).

  3. Effective screening – the person survives to the end of the study, so the death rate is 0% (bottom panel, Figure three); 350 of the 5,000 screened population die before five years, so the five-year death rate is 7% (bottom panel, Figure four).

Death rates are improved only where screening has led to a real benefit; they are unchanged where screening has no effect on natural disease progression.
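As a final sketch (again in Python, using the population counts from Figure four), note that the death rate calculation never looks at the date of diagnosis, or at who was diagnosed at all – so neither bias can touch it:

```python
# Death rates in miniature: the denominator is the whole population, so
# neither an earlier diagnosis date (lead time bias) nor extra harmless
# diagnoses (length time bias) can change the result.
POPULATION = 5_000

def five_year_death_rate(cancer_deaths: int, population: int = POPULATION) -> float:
    """Cancer deaths within five years, per person in the whole population."""
    return cancer_deaths / population

print(f"No screening:          {five_year_death_rate(400):.0%}")  # 8%
print(f"Ineffective screening: {five_year_death_rate(400):.0%}")  # 8% - unchanged
print(f"Effective screening:   {five_year_death_rate(350):.0%}")  # 7% - a real benefit
```

Only effective screening moves the number, which is exactly the property we want from a measure of whether screening saves lives.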

Survival statistics are misleading, even when used by well-meaning advocates who misinterpret them as a measure of the success of cancer screening. They tell us nothing about lives saved, or about the potential value of screening programs.

Katy Bell receives funding from National Health and Medical Research Council for research into the use of tests, including for screening.

Alexandra Barratt receives funding from ARC and NHMRC.

Andrew Hayen receives funding from NHMRC, ARC, and DFAT.