Jonathan Phillips, University of São Paulo
This module will give students the tools and confidence to understand, deconstruct and critique political science research papers. By encouraging participants to ground critiques of both quantitative and qualitative research in the framework and language of causation, the course hones vital skills for identifying hidden assumptions, weighing the strength of evidence and suggesting alternative explanations. The course also underlines the importance of making these critiques constructive by suggesting alternative research designs and a wide range of robustness checks. By the end of the course, participants will be confident contributing to peer review as colleagues, seminar participants or journal referees, and will also gain new perspectives on how to design and execute their own research.
The course aims to systematize the types of critique we can make so that participants are able to provide multiple reasons why the account offered by an author might not be valid. While the course covers critiques of measurement, theory and modeling, we focus particularly on critiques of causation, including risks of selection, confounding and reverse causation, demystifying terms such as ‘counterfactual’, ‘complier’ and ‘external validity’.
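Although the lab replications use Stata or R, the logic of confounding is easy to see in a short simulation. The sketch below is purely illustrative (all variables and numbers are invented): a lurking confounder drives both the "treatment" and the "outcome", so a naive correlation suggests a causal effect that vanishes once the confounder is adjusted for.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10_000

# Hypothetical confounder (e.g., wealth) drives both variables below
wealth = rng.normal(size=n)
education = 0.8 * wealth + rng.normal(size=n)  # "treatment"
turnout = 0.8 * wealth + rng.normal(size=n)    # "outcome"; education has NO causal effect here

# Naive comparison: education and turnout correlate despite no causal link
naive = np.corrcoef(education, turnout)[0, 1]

def residualize(y, x):
    # Remove the part of y linearly explained by x
    slope = np.cov(y, x)[0, 1] / np.var(x)
    return y - slope * x

# Adjusting for the confounder: the association essentially disappears
adj = np.corrcoef(residualize(education, wealth),
                  residualize(turnout, wealth))[0, 1]
print(round(naive, 2), round(adj, 2))  # naive is sizeable; adjusted is near zero
```

The same structure, read in reverse, is a template for critique: when an author reports an association, ask what plays the role of `wealth` in their setting.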
In turn, we consider how to make critiques constructive – first, in the way they are communicated, and, second, in identifying positive research strategies that can overcome or mitigate common critiques, for example alternative research designs and robustness tests.
We will use the afternoon lab sessions to practice formulating effective and constructive critiques. Building on examples drawn from a wide range of papers and review articles across the fields of political science and international relations, participants will develop and compare critiques. Participants will also have the option (no obligation or expectation) of sharing their own research ideas and papers to receive feedback from others. The lab sessions will include the replication of one or two published analyses to highlight the range of modeling options researchers are faced with and the breadth of potential critiques that this opens up. The replication exercises will be guided and can be completed in Stata or R.
First, we discuss what constitutes a convincing argument and how a paper can contribute to learning in the discipline by reviewing the concept of causation and the framework of causal inference. Then we learn to systematically translate the text of a paper into the core elements of a research argument: the units of analysis, the comparisons, the concepts, the measures, the assumptions, the theory, the models and the conclusions.
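One way to make this translation systematic is to fill in a fixed checklist for every paper. The sketch below is a hypothetical record-keeping aid, not part of the course materials; the field names mirror the core elements listed above, and the example entries are invented.

```python
from dataclasses import dataclass, field

# Illustrative checklist for mapping a paper's research argument
@dataclass
class ArgumentMap:
    units_of_analysis: str = ""
    comparisons: str = ""
    concepts: list = field(default_factory=list)
    measures: list = field(default_factory=list)
    assumptions: list = field(default_factory=list)
    theory: str = ""
    models: list = field(default_factory=list)
    conclusions: str = ""

# Invented example: a stylized "resource curse" paper
example = ArgumentMap(
    units_of_analysis="country-years",
    comparisons="oil-rich vs oil-poor states",
    concepts=["resource dependence", "democracy"],
    measures=["oil rents (% GDP)", "Polity score"],
    assumptions=["no reverse causation from regime type to oil production"],
    theory="resource rents weaken electoral accountability",
    models=["panel regression with country fixed effects"],
    conclusions="oil wealth reduces democratization",
)
```

Gaps in the completed checklist are themselves critiques: a missing assumptions entry usually means the assumptions are implicit, not absent.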
We consider basic critiques of whether the measures reflect the concepts, whether the model captures the theory, and whether the conclusions follow from the premises and evidence.
We review a range of causal research designs, the assumptions on which they are based and their connection to specific statistical models. We repeatedly practice assessing whether these assumptions have been met.
We look beyond each argument’s own claims to critique the generalizability of the findings, the sensitivity of the findings to the research design, the match between theory and evidence, and support for specific causal mechanisms.
We consider various strategies and techniques for overcoming weaknesses in an argument. These include the use of alternative research designs, deriving multiple tests from theory, uncovering ‘hidden’ units, robustness tests, heterogeneity tests and placebo tests.
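A placebo test can be illustrated with a minimal randomization-style check (all data simulated, names illustrative): re-estimate the effect under randomly permuted treatment labels, where the true effect is zero by construction, and compare the real estimate against that placebo distribution.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 5_000

# Simulated study: treatment truly raises the outcome by 1 unit
treat = rng.integers(0, 2, size=n)
outcome = 1.0 * treat + rng.normal(size=n)

def diff_in_means(y, t):
    return y[t == 1].mean() - y[t == 0].mean()

real_effect = diff_in_means(outcome, treat)

# Placebo check: shuffled labels should show ~zero effect, so the real
# estimate should stand out against the placebo distribution
placebo_effects = [diff_in_means(outcome, rng.permutation(treat))
                   for _ in range(1000)]
p_value = np.mean(np.abs(placebo_effects) >= abs(real_effect))
print(round(real_effect, 2), p_value)
```

If many placebo assignments reproduce the "effect", that is evidence the design, not the treatment, is generating the result.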
Participants should have a basic understanding of research design and quantitative methods.