Judging the evidence

What causes and what protects against cancer? How does WCRF gather, present, assess and judge the evidence that it uses to explain the risk factors and the recommendations associated with cancer prevention?

We work on an ongoing programme to analyse global research on how diet, nutrition and physical activity affect the risk of developing cancer and influence survival after a diagnosis.

The Global Cancer Update Programme provides a comprehensive analysis, using the most meticulous of methods, of the worldwide body of evidence. The aim when judging evidence is to identify, with sufficient confidence to support a recommendation, what causes cancer, what protects against cancer and what is unlikely to have an effect. This work also reveals where evidence is inadequate and further research is needed.

Much of the human evidence on diet, nutrition and physical activity is observational, though it is reinforced by the findings of extensive laboratory investigations. There is no perfect way to establish whether observed associations between these exposures and cancer are definitely causal. However, the Expert Panel believes that its rigorous, integrated and systematic approach enables it to make sound judgements and reliable recommendations.

How we judge the evidence

  • A team at Imperial College London conducts systematic literature reviews – gathering and presenting the best available, current scientific evidence from around the world.
  • The International Agency for Research on Cancer provides expert reviews of the main hypothesised mechanisms that support the epidemiological evidence.
  • The Expert Panel evaluates and interprets the evidence, making judgements on the strength of the evidence and, where possible, the likelihood that the exposures studied increase, decrease or have no effect on the risk of cancer.
  • The Panel makes Recommendations for the public based on its judgements.
  • The WCRF/AICR Secretariat, responsible for day-to-day management of the programme, supports the work of the Panel.

How do we interpret the evidence?

Interpretation of epidemiological evidence is complex. A wide range of general considerations must be taken into account:

  • How relevant are the patterns and ranges of intake examined in the existing studies to populations globally?
  • Do the studies classify food and drink consumption, and physical activity, in ways that correspond to patterns globally?
  • How accurate are measurements of the level of exposure in the study population, such as levels of intake of a food or its dietary constituents?
  • Is terminology consistent between studies? For some exposures, such as ‘processed meat’, there are no generally agreed definitions.
  • How reliable and complete is the data on cancer outcomes – incidence, mortality and subtypes?
  • Is the study design appropriate? The hierarchy of evidence places randomised clinical trials (RCTs) at the top, followed by cohort studies, case-control studies, ecological studies and case reports, but there are merits in considering different study designs.
  • What is the shape of the association between the exposure and the cancer? For example, is it linear, with a uniform increase (or decrease) in risk for rising levels of exposure? Is there a threshold above which an association is found or a plateau where no further increase or decrease in risk is observed? Or does the direction of association (whether risk is increased or decreased) change with the level of exposure?
  • Is there high heterogeneity, a large variation in the results of the studies, which would lead to less confidence in the overall summary estimate? (See the sketch after this list.)
  • Is the overall evidence limited to a particular geographic area, and can the results be extrapolated to a global scale?
  • Do studies take the possibility of confounding, effect modification and reporting bias into account?
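
The heterogeneity and summary-estimate questions above can be illustrated with a short sketch. The relative risks below are hypothetical, and the DerSimonian–Laird random-effects model is just one common approach; this is not the analysis pipeline used by the Global Cancer Update Programme, only a minimal example of how a pooled estimate and the I² statistic summarise how well studies agree.

```python
# Minimal sketch (not the CUP/Imperial College analysis code): pooling
# study-level relative risks with a DerSimonian-Laird random-effects model
# and quantifying heterogeneity with Cochran's Q and the I^2 statistic.
# The relative risks and confidence intervals below are hypothetical.
import math

# Hypothetical studies: (relative risk, lower 95% CI, upper 95% CI)
studies = [(1.18, 1.05, 1.33), (1.25, 1.02, 1.53), (0.98, 0.80, 1.20)]

# Work on the log scale; derive each study's variance from its CI width.
log_rr = [math.log(rr) for rr, lo, hi in studies]
var = [((math.log(hi) - math.log(lo)) / (2 * 1.96)) ** 2 for _, lo, hi in studies]

# Fixed-effect (inverse-variance) pooling, then Cochran's Q.
w = [1 / v for v in var]
pooled_fixed = sum(wi * yi for wi, yi in zip(w, log_rr)) / sum(w)
q = sum(wi * (yi - pooled_fixed) ** 2 for wi, yi in zip(w, log_rr))
df = len(studies) - 1

# I^2: proportion of total variation due to between-study heterogeneity.
i_squared = max(0.0, (q - df) / q) * 100 if q > 0 else 0.0

# DerSimonian-Laird estimate of between-study variance (tau^2), then
# random-effects weights 1 / (variance + tau^2), which spread weight more
# evenly across studies when heterogeneity is present.
c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
tau2 = max(0.0, (q - df) / c)
w_re = [1 / (v + tau2) for v in var]
pooled_re = sum(wi * yi for wi, yi in zip(w_re, log_rr)) / sum(w_re)
se_re = math.sqrt(1 / sum(w_re))

print(f"Pooled RR (random effects): {math.exp(pooled_re):.2f} "
      f"(95% CI {math.exp(pooled_re - 1.96 * se_re):.2f}-"
      f"{math.exp(pooled_re + 1.96 * se_re):.2f})")
print(f"I^2 = {i_squared:.0f}%  (higher values mean less confidence "
      "in a single summary estimate)")
```

A high I² flags that the studies disagree more than chance alone would explain, which is one of the considerations that lowers confidence in a pooled result.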

The creation of and support for the Continuous Update Project during the last ten years marks a remarkable commitment to a reliable process for capturing all relevant new evidence and enabling its up-to-date interrogation in ‘real time’. Because the CUP has embedded the value of a structured and systematic approach, it enables scientists from disparate backgrounds to share knowledge and reach agreed interpretation … The Recommendations made today are very securely based.
– Professor Alan Jackson, Chair of the Expert Panel

Uncertainty in epidemiology

Even though the best available evidence has been used, that evidence does not normally prove, beyond all doubt, whether the risk factors – diet, nutrition and physical activity – cause, or protect against, cancer.

The risk factors themselves are complex and difficult to manipulate in experimental studies. Furthermore, even if a person’s way of living does cause cancer, it may take years or decades for that cancer to develop.

Although randomised clinical trials have the power to test cause and effect rigorously, controlled manipulation of diet and physical activity in RCTs over the long periods required to study these exposures is not possible.

Much of the data on cancer risk therefore comes from epidemiological studies, and there is normally a degree of uncertainty surrounding whether observed associations in these studies are causal. Best judgement is therefore needed when interpreting and assessing results.

Best judgement and grading criteria

The WCRF/AICR criteria require a range of factors to be considered. These include the quality of the studies – for example, whether the possibility of confounding, measurement error and selection bias has been minimised. They also include the number of different study types and cohorts, whether there is any unexplained heterogeneity between results from different studies or populations, whether there is a dose–response relationship, and whether there is evidence of plausible biological mechanisms at typical levels of exposure.

The clearly defined grading criteria provide a systematic way to judge how strong any evidence of causality is. They enable evidence to be categorised as being ‘strong’ (‘convincing’, ‘probable’ or ‘substantial effect on risk unlikely’) or ‘limited’ (‘limited – suggestive’ or ‘limited – no conclusion’). Only evidence judged to be strong is usually used as the basis for Recommendations.
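
Purely as an illustration of the categories named above, the sketch below arranges the grading labels as a simple data structure. The labels come from the text, but the structure and the helper function are assumptions added for illustration; the Expert Panel's grading is a criteria-based judgement, not a mechanical lookup.

```python
# Illustrative only: the WCRF/AICR grading labels from the text, arranged as a
# simple data structure (en dashes in the published labels shown as hyphens).
GRADING = {
    "strong": [
        "convincing",
        "probable",
        "substantial effect on risk unlikely",
    ],
    "limited": [
        "limited - suggestive",
        "limited - no conclusion",
    ],
}

def supports_recommendation(grade: str) -> bool:
    """Only evidence judged 'strong' is normally used as the basis for Recommendations."""
    return grade in GRADING["strong"]

print(supports_recommendation("probable"))              # True
print(supports_recommendation("limited - suggestive"))  # False
```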

> Read the Judging the evidence report