Fresh thinking about the evidence needed for a healthier UK

The Health Foundation is working with Dr Harry Rutter from the London School of Hygiene and Tropical Medicine to develop a new model of evidence that will inform public health research, policy and practice. 


As part of this work Dr Rutter and co-authors from the Health Foundation have published a new Viewpoint paper – The need for a complex systems model of evidence for public health – in The Lancet, which outlines the need for new approaches to designing and evaluating population-level interventions to improve health.

Key points

  • We are faced with many big health challenges in our society. Their complex nature is an ongoing problem for public health research and policy.
  • Such challenges often involve multiple factors operating over many decades in systems that adapt as changes occur. For example, the distribution of obesity in a population might be impacted by changes to food, employment, transport or economic systems.
  • The traditional linear model of research is not suited to tackling these challenges. This is because it focuses largely on changes in individuals, not the population as a whole, and because it tends to look at isolated interventions rather than the contexts in which they take place.
  • There is growing recognition that we need a new evidence model that looks at public health problems, and our potential responses, in terms of a complex systems approach.

Full reference: Rutter, H. et al. The need for a complex systems model of evidence for public health. The Lancet, 13 June 2017

Related: Building a new system for the generation and use of public health evidence


Knowledge transfer partnership programme announced

Knowledge Transfer Partnership announced at CSO Conference ‘Bringing Science and Innovation to the Heart of the NHS’


NHS England is set to launch its first Knowledge Transfer Partnership Programme, a 12-month development programme aimed at clinical leaders in healthcare science. Successful applicants will work with other leading healthcare scientists and build long-term collaborations across the clinical, research and industry sectors, while identifying new approaches to measuring improved outcomes for NHS patients.

Public involvement in health research

The National Institute for Health Research has launched a campaign urging patients and the public to get involved in health and social care research.


The #twosides campaign highlights ways for people who aren’t medical or academic professionals to play a part in shaping research, for example by suggesting research questions, reviewing research applications, joining a study team or being a study participant.

Additional link: NIHR press release

How Britain plans to lead the global science race to treat dementia

It has struck nearly a million people in the UK, yet even its cause is still unclear | The Guardian


Early next year, Professor Bart De Strooper will sit down in an empty office in University College London and start to plan a project that aims to revolutionise our understanding and treatment of dementia. Dozens of leading researchers will be appointed to his £250m project which has been set up to create a national network of dementia research centres – with UCL at its hub.

The establishment of the UK Dementia Research Institute – which was announced last week – follows the pledge, made in 2012 by former prime minister David Cameron, to tackle the disease at a national level and comes as evidence points to its increasing impact on the nation. Earlier this year, it was disclosed that dementia is now the leading cause of death in England and Wales. At the same time, pharmaceutical companies have reported poor results from trials of drugs designed to slow down the progress of Alzheimer’s disease, the most common form of dementia.

Read the full news story here

Development of a critical appraisal tool to assess the quality of cross-sectional studies

Downes, M. J. et al. (2016) BMJ Open. 6: e011458


Objectives: The aim of this study was to develop a critical appraisal (CA) tool that addressed study design and reporting quality as well as the risk of bias in cross-sectional studies (CSSs). In addition, the aim was to produce a help document to guide the non-expert user through the tool.

Conclusions: CA of the literature is a vital step in evidence synthesis, and therefore in evidence-based decision-making, in a number of different disciplines. The AXIS tool is therefore unique and was developed so that it can be used across disciplines to aid the inclusion of CSSs in systematic reviews, guidelines and clinical decision-making.

Read the full abstract here

Results from 42.5% of all trials run by major sponsors are unpublished | via @EBMDataLab

Evidence-Based Medicine Data Lab, University of Oxford


Image source: TrialsTracker

Why it matters: Clinical trials are the best way we have of testing whether a medicine is safe and effective. They can involve thousands of people, patients and healthy volunteers, and take years to complete. But trials with negative results are twice as likely to remain unreported as those with positive results. This means that patients and doctors don’t have the full information about the benefits and risks of treatments. We believe all clinical trials, past and present, should be reported in full. Read more on AllTrials.net and sign the petition.

Our methodology: We regularly download details of all trials registered on ClinicalTrials.gov. We include all interventional trials completed between Jan 2006 and two years ago, except for Phase 0/1 trials and those that have made a formal request to delay results. Next, we look for summary results on ClinicalTrials.gov, or linked results on PubMed. Our table includes only sponsors with more than 30 trials: to see all sponsors, download the full dataset. We understand this method isn’t perfect. However, we feel that researchers have a clear obligation to ensure that their results are published, and discoverable. If they have failed to post summary results, or to ensure the trial ID is in their PubMed entry, then their results will be listed here as missing. See our paper for full details.
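The filtering steps TrialsTracker describes can be sketched in a few lines of Python. This is an illustrative reconstruction, not the project's actual code: the field names (`study_type`, `phase`, `completion_date`, and so on) are hypothetical placeholders, not the real ClinicalTrials.gov schema.

```python
from datetime import date

def is_missing_results(trial, cutoff_years=2, today=None):
    """Sketch of a TrialsTracker-style filter: an interventional trial
    completed between Jan 2006 and two years ago, excluding Phase 0/1
    and trials with a formal results-delay request, counts as 'missing'
    if it has neither summary results nor a linked PubMed record.
    Field names are illustrative, not the actual registry schema."""
    today = today or date.today()
    if trial["study_type"] != "Interventional":
        return False
    if trial["phase"] in ("Phase 0", "Phase 1"):
        return False
    if trial.get("results_delay_requested"):
        return False
    # Completion window: Jan 2006 up to two years before 'today'
    # (ignores the Feb 29 edge case for brevity).
    cutoff = today.replace(year=today.year - cutoff_years)
    if not (date(2006, 1, 1) <= trial["completion_date"] <= cutoff):
        return False
    # Missing = no summary results posted and no linked publication.
    return not (trial.get("has_summary_results") or trial.get("pubmed_id"))
```

A trial outside the window, in an excluded phase, or with posted results simply returns `False`; only eligible trials with no discoverable results are flagged.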

How to improve your score: Hello trial sponsors! Want to improve your score? Simply post summary results on ClinicalTrials.gov, or ask your journal to add the trial’s NCT ID to the PubMed entry for published results. You should see the data update shortly.

View the full webpage here

Publication Bias and Nonreporting Found in Majority of Systematic Reviews and Meta-analyses in Anesthesiology Journals

Hedin, R. et al. (2016) Anesthesia & Analgesia. 123(4) pp.1018–1025

Background: Systematic reviews and meta-analyses are used by clinicians to derive treatment guidelines and make resource allocation decisions in anesthesiology. One cause for concern with such reviews is the possibility that results from unpublished trials are not represented in the review findings or data synthesis. This problem, known as publication bias, results when studies reporting statistically nonsignificant findings are left unpublished and, therefore, not included in meta-analyses when estimating a pooled treatment effect. In turn, publication bias may lead to skewed results with overestimated effect sizes. The primary objective of this study is to determine the extent to which evaluations for publication bias are conducted by systematic reviewers in highly ranked anesthesiology journals and which practices reviewers use to mitigate publication bias. The secondary objective of this study is to conduct publication bias analyses on the meta-analyses that did not perform these assessments and examine the adjusted pooled effect estimates after accounting for publication bias.

Methods: This study considered meta-analyses and systematic reviews from 5 peer-reviewed anesthesia journals from 2007 through 2015. A PubMed search was conducted, and full-text systematic reviews that fit inclusion criteria were downloaded and coded independently by 2 authors. Coding was then validated, and disagreements were settled by consensus. In total, 207 systematic reviews were included for analysis. In addition, publication bias evaluation was performed for 25 systematic reviews that did not do so originally. We used Egger regression, Duval and Tweedie trim and fill, and funnel plots for these analyses.
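Egger regression, one of the asymmetry tests the authors applied, is straightforward to sketch. The function below is an illustrative implementation, not the study's code: it regresses each study's standardized effect size on its precision, and an intercept significantly different from zero suggests funnel-plot asymmetry consistent with publication bias.

```python
import numpy as np
from scipy.stats import t as tdist

def egger_test(effects, std_errors):
    """Egger's regression test for funnel-plot asymmetry.

    Regresses standardized effect sizes (effect / SE) on precision
    (1 / SE) and tests whether the intercept differs from zero.
    Returns (intercept, two-sided p-value).
    """
    se = np.asarray(std_errors, float)
    y = np.asarray(effects, float) / se   # standardized effects
    x = 1.0 / se                          # precision
    n = len(y)
    X = np.column_stack([np.ones(n), x])  # design: intercept + precision
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    dof = n - 2
    sigma2 = resid @ resid / dof          # residual variance
    cov = sigma2 * np.linalg.inv(X.T @ X)
    t_stat = beta[0] / np.sqrt(cov[0, 0]) # t-test on the intercept
    p = 2 * tdist.sf(abs(t_stat), dof)
    return beta[0], p
```

With symmetric data the intercept hovers near zero; when smaller (noisier) studies systematically show larger effects, the intercept is pushed away from zero, which is the small-study pattern unreported negative trials produce.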

Results: Fifty-five percent (n = 114) of the reviews discussed publication bias, and 43% (n = 89) of the reviews evaluated publication bias. Funnel plots and Egger regression were the most common methods for evaluating publication bias. Publication bias was reported in 34 reviews (16%). Thirty-six of the 45 (80.0%) publication bias analyses indicated the presence of publication bias by trim and fill analysis, whereas Egger regression indicated publication bias in 23 of 45 (51.1%) analyses. The mean absolute percent difference between adjusted and observed point estimates was 15.5%, the median was 6.2%, and the range was 0% to 85.5%.

Conclusions: Many of these reviews reported following published guidelines such as PRISMA or MOOSE, yet only half appropriately addressed publication bias in their reviews. Compared with previous research, our study found fewer reviews assessing publication bias and greater likelihood of publication bias among reviews not performing these evaluations.

Read the abstract here