Athens, Greece (UroToday.com) Dr. Laurence Klotz presented on how to reduce scientific error and ensure innovation. The history of science is a history of irresponsible dreams, obstinacy, and error. But science is one of the very few human activities – perhaps the only one – in which errors are systematically criticized and often corrected. In science, we often learn from our mistakes and can speak clearly and sensibly about making progress.1

Scientific error may occur in the execution or analysis of an experiment. Several types exist, including:

  1. Human error or mistakes in data collection
  2. Systematic error, or flaws in experimental design
  3. Random error, caused by environmental conditions or other unpredictable factors

It is important, especially in medical science, to mention the missing-subgroup effect. At times, the effect of a drug or intervention may be seen only when it is applied to a specific group of patients, and it is not always easy to identify the subgroup in which the intervention under study has the greatest effect.

It is also important to be aware of false conclusions. Studies labeled statistically significant and statistically non-significant are not necessarily contradictory; dichotomizing results at such thresholds may cause genuine effects to be dismissed.
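To make this concrete, consider a hypothetical illustration (not from the talk): two trials observing the very same effect size can land on opposite sides of p = 0.05 purely because of sample size, so calling one "positive" and the other "negative" is not a genuine contradiction.

```python
import math

def z_test_p(effect, sd, n):
    """Two-sided p-value for a one-sample z-test of mean == 0."""
    z = effect / (sd / math.sqrt(n))
    # Standard normal CDF via the error function
    return 2 * (1 - 0.5 * (1 + math.erf(z / math.sqrt(2))))

# Identical effect (0.3 units, sd 1.0), different enrollment:
p_small = z_test_p(0.3, 1.0, 30)   # n = 30 -> p ~ 0.10, "non-significant"
p_large = z_test_p(0.3, 1.0, 60)   # n = 60 -> p ~ 0.02, "significant"
print(round(p_small, 3), round(p_large, 3))
```

The effect estimate is identical in both trials; only the precision differs, which is exactly why a significant and a non-significant study can agree perfectly.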

Another important concept is the relationship between the p-value threshold, power, and the false positive rate (Figure 1). At the same power, lowering the p-value threshold markedly lowers the false positive rate.2
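A minimal sketch of this relationship, in the spirit of the Benjamin et al. argument2 (the function name and the 1-in-10 prior are illustrative assumptions, not figures from the talk): among results declared "significant", the fraction that are false positives depends on the threshold, the power, and the prior probability that a tested hypothesis is true.

```python
def false_positive_rate(alpha, power, prior_true=0.1):
    """Fraction of 'significant' results that are false positives,
    given threshold alpha, power, and the prior probability that a
    tested hypothesis is true (prior_true is an assumed value)."""
    true_pos = power * prior_true          # true effects correctly detected
    false_pos = alpha * (1 - prior_true)   # nulls crossing the threshold
    return false_pos / (false_pos + true_pos)

# At 80% power and a 1-in-10 prior of a true effect:
print(round(false_positive_rate(0.05, 0.80), 3))   # -> 0.36
print(round(false_positive_rate(0.005, 0.80), 3))  # -> 0.053
```

Under these assumptions, more than a third of p < 0.05 findings would be false positives, while tightening the threshold to 0.005 brings that down to about 5%.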

Important practices that manipulate data toward a p-value of less than 0.05 include:

  1. P-hacking – running an analysis in multiple ways but reporting only those versions that produce statistically significant findings
  2. HARKing (Hypothesizing After the Results are Known) – data mining without an a priori hypothesis
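The cost of p-hacking can be shown with a small simulation (a hypothetical sketch, not an analysis from the talk): when the null hypothesis is true, testing ten outcomes and reporting only the best p-value produces "significance" far more often than the nominal 5%.

```python
import math
import random
import statistics

random.seed(42)

def t_test_p(sample):
    """Two-sided one-sample test of mean == 0, using a normal
    approximation to the t distribution (adequate for n = 50)."""
    n = len(sample)
    t = statistics.mean(sample) / (statistics.stdev(sample) / math.sqrt(n))
    return 2 * (1 - 0.5 * (1 + math.erf(abs(t) / math.sqrt(2))))

def experiment(n_outcomes):
    """Best (minimum) p-value across n_outcomes independent null outcomes,
    each a sample of 50 draws from N(0, 1) - i.e., no real effect."""
    return min(t_test_p([random.gauss(0, 1) for _ in range(50)])
               for _ in range(n_outcomes))

trials = 2000
honest = sum(experiment(1) < 0.05 for _ in range(trials)) / trials
hacked = sum(experiment(10) < 0.05 for _ in range(trials)) / trials
print(f"1 outcome tested:   {honest:.2f}")   # roughly 0.05, as advertised
print(f"10 outcomes tested: {hacked:.2f}")   # roughly 0.40, badly inflated
```

Nothing in the simulated data contains a real effect; the inflation comes entirely from selective reporting of the best-looking analysis.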

The important steps in the scientific method that should be adhered to include:

  1. Definition of a research question
  2. Gathering of information and resources
  3. Formation of an explanatory hypothesis
  4. Hypothesis testing by performing an experiment and collecting data in a reproducible manner
  5. Data analysis
  6. Data interpretation and drawing conclusions that serve as a starting point for new hypotheses
  7. Publication of results
  8. Reproduction of the results – frequently performed by other scientists

Figure 1 – The relationship between the p-value threshold, power, and the false positive rate


The problem with much scientific research today is that there is no attempt to reproduce the data. In one survey, 55% of researchers had tried and failed to reproduce published results,3 fewer than 30% of them had published their failure, and 44% reported difficulty in publishing their contradictory results. Unfortunately, no mechanism currently exists to confirm reproducibility, and this is a serious problem, driven in part by the low incentive to replicate research.

Dr. Klotz suggested a solution that has been gaining momentum: the Reproducibility Project, whose goal is to identify key studies in the literature and reproduce them. The top 50 cited cancer biology studies from 2010-2012 were chosen to be reproduced in a rapid and cost-effective manner by expert independent labs, with an allocated budget of 1.3 million dollars. Labs from more than 400 research institutions, including 75 of the top 100 US research universities, have been involved in this project. So far, they have reported successful replication for two out of every five cancer papers.4

In the urology field, the Movember Foundation has supported a pilot study to assess the reproducibility of research findings with implications for prostate cancer patients, aiming to replicate four major landmark studies.

Concluding his talk, Dr. Klotz summarized how to reduce scientific error in the future:

  1. Recognize the key role of error in the scientific method
  2. Maintain and foster a scientific tradition of scrupulous adherence to truth and objectivity
  3. Enhance awareness of pitfalls of human investment in established paradigms
  4. Recognize the risk of statistical error, particularly type 2 errors (underpowered studies, missed subgroups) and the hazards of the p < 0.05 threshold
  5. Support reproducibility studies for key research findings
  6. Be aware of the high prevalence of scientific misconduct

Presented by: Laurence Klotz, MD, FRCS(C), Professor of Surgery, University of Toronto, Toronto, Canada, Chairman of the World Uro-Oncology Federation, former Chief of Urology, Sunnybrook Health Sciences Centre, former President of the Urological Research Society and the Canadian Urological Association

Written by: Hanan Goldberg, MD, Urology Department, SUNY Upstate Medical University, Syracuse, New York, USA @GoldbergHanan at the 39th Congress of the Société Internationale d’Urologie, SIU 2019, #SIUWorld #SIU2019, October 17-20, 2019, Athens, Greece

References:

  1. Popper K. Conjectures and Refutations: The growth of scientific knowledge. 1963.
  2. Benjamin DJ, Berger JO, Johannesson M, et al. Redefine statistical significance. Nature Human Behaviour 2018; 2(1): 6-10.
  3. Mobley A, Linder SK, Braeuer R, Ellis LM, Zwelling L. A Survey on Data Reproducibility in Cancer Research Provides Insights into Our Limited Ability to Translate Findings from the Laboratory to the Clinic. PloS one 2013; 8(5): e63221.
  4. Kaiser J. Mixed results from cancer replications unsettle field. Science (New York, NY) 2017; 355(6322): 234-5.