Pre-Congress Workshop - Tuesday 14 July 2015
Critical Appraisal in the Time of Bias: Learning to Read an Article Several Layers Deep
by Panteleimon Ekkekakis, Ph.D., FACSM
Problem and rationale
Imagine the following scenario. While the field of exercise science is engaged in the global “Exercise Is Medicine” initiative, a series of randomized controlled trials (RCTs) and meta-analyses appear in some of the world’s most prestigious medical journals, claiming to have definitively demonstrated that physical activity has no beneficial effect on depression. Since depression is one of the leading causes of disability, the articles attract worldwide media attention. In interviews, medical researchers assert that, once the link between physical activity and depression was examined with the highest scientific standards, no effect could be detected. In invited editorials, experts argue that, since methodologically rigorous research failed to demonstrate any benefit, physical activity should be removed from clinical guidelines and physicians should cease recommending it to their patients. What do you think would happen next? One might imagine that, since (a) depression is such a prevalent and debilitating disorder and (b) the studies and associated publicity undermine efforts to integrate physical activity into clinical practice, the evidence would be closely scrutinized. Given the centrality of this issue to the field of exercise psychology, for example, it would be reasonable to expect that journals would dedicate special issues to the analysis of these studies and that conference organizers would invite expert panels to debate the evidence. In actuality, however, even though the aforementioned scenario is precisely what has been happening in the medical literature over the past few years, there has been a complete lack of response from experts, journals, conferences, and organizations in the fields of exercise psychology and exercise science. This intriguing phenomenon raises questions about the level of preparedness of these fields to function within the model of Evidence-Based Medicine (EBM).
At many universities, training in the exercise sciences continues to reflect an outdated model, in which exposure to advanced concepts of research methods and statistics is restricted to postgraduate education. Even at the level of doctoral training, exposure to core concepts of EBM remains limited. In particular, most exercise-science curricula, from the undergraduate to the doctoral level, contain no modules specifically designed to prepare students to critically appraise RCTs and meta-analyses, the main instruments of EBM. Thus, the purposes of this workshop will be to (a) inform new and experienced researchers about the emerging signs of bias in research on physical activity and mental health and (b) introduce a practical step-by-step method for critically appraising RCTs and meta-analyses, using an example-based approach.
The workshop is designed for postgraduate students (Master’s and Ph.D. level), as well as researchers interested in further developing their critical appraisal skills. The workshop is not intended as an accelerated course in research methods or statistics. Therefore, basic knowledge of research concepts (e.g., internal and external validity, statistical power, errors of inference, measurement error) is assumed.
By the conclusion of the workshop, the participants will achieve the following learning objectives: (1) develop an appreciation for the prevalence of bias in contemporary clinical research, (2) understand the influence of social, economic, and political circumstances on the research process, and (3) sharpen their skills for evaluating research conclusions by learning to focus on aspects of research methodology that typically exhibit high susceptibility to bias.
The number of participants is limited (first-come, first-served basis). Participants can register through the Congress online registration platform.
Registration fee (any participant):
Early Bird (until 15 May 2015): CHF 50.00
Regular Fee (after 15 May 2015): CHF 60.00
Please note that no refunds will be issued.
For any questions regarding this workshop, please contact the workshop organiser directly: Panteleimon Ekkekakis, Ph.D., FACSM (email@example.com).
12:00-13:00  Lunch (not included in the registration fee)
The official opening ceremony of the FEPSAC Congress will start at 18:00.
SECTION 1: The essential necessity of disillusionment
Bias in research: Part prejudice, part incompetence
Lessons from the tobacco industry internal documents
Lessons from pharmaceutical industry trials (MECCs, “Author TBD”, etc.)
Ioannidis (2005): “Most published research findings are false”
If “Exercise is Medicine”, what does this mean for medicines?
SECTION 2: Producing “evidence” for an “evidence-based” world
Evidence-Based Medicine: With the best of intentions
“Levels of evidence” in issuing clinical guidelines
Randomized Controlled Trials (RCTs) and meta-analyses of RCTs
Research is not a chaotic process: Basic mechanics of producing the desired result
Critical appraisal as the weakest link of EBM
Barebones critical appraisal: Allocation concealment, intention to treat, blinding
SECTION 3: The first step: Understand the social, political, and economic context
In research with clinical implications, a finding is never just a finding
What are the ramifications of findings supporting one conclusion versus another?
Example: From Thatcher’s Britain to the “Layard Hypothesis”
Example: From the Kirsch et al. (2008) meta-analysis to “Stepped Care Approaches”
The research-media nexus
SECTION 4: The experimental design
Conclusions that a design can support and conclusions it cannot
Example: The A+B versus B design
Example: Pitfalls of mediational designs
SECTION 5: Participants
The importance of inclusion and exclusion criteria
Gaming statistical power
Oversampling for attrition
Oversampling for unreliability of measurement
SECTION 6: Measurement of Patient Reported Outcomes (PROs)
Relation between random measurement error and unreliability
Relation between unreliability and validity
Relation between unreliability and statistical power
Sensitivity and specificity
Common method bias
Tricks and traps in the process of measurement (inconsistency, unblinding, etc.)
Example: “This measure has established reliability and validity”
Example: “This measure was used because it has been used before”
SECTION 7: Intervention and Control
Was the intervention theoretically and practically appropriate?
Was the intervention “strong enough” but not “too strong”?
Were there possible treatment interactions and/or confounds?
Was the designated “minimum effective dose” reasonable?
Was intervention delivery competent?
Was adherence to the intervention sufficient?
Was the control appropriate?
Was there cross-contamination between study arms?
SECTION 8: Statistical analysis
Beyond Type I and Type II errors
Are there discrepancies between the study protocol and the analyses in the report?
Are there elements of the analysis not specified in the protocol?
Tricks and traps in handling missing observations
Are there substantial deviations from other similar studies?
SECTION 9: Systematic reviews and meta-analyses
Data integrity: Never to be taken for granted
The importance of scrutinizing operational definitions
The importance of scrutinizing inclusion/exclusion criteria
Funnel plots to gauge bias
Heterogeneity: Apples and oranges?
Fixed-effects versus random-effects modeling
Tricks and traps in appraisals of methodological quality
SECTION 10: Conclusion