Critical Finance Review
Request for TOPIC Replication Proposals

May 2017

Special Replication Issues

Following the success of its first such issue, the Critical Finance Review is planning to regularly publish issues dedicated to replicating the most influential empirical papers in financial economics. It is explicitly not the goal of these replication issues either to prove or to disprove the papers. The replications are meant to be as objective as possible. The CFR wants to reduce the incentives of authors to slant the results either favorably or unfavorably. The contract between an invited replicating team (often headed by a senior researcher, with some junior researchers or a Ph.D. student) and the CFR is that the journal will publish the replicating paper even (or especially) if all the findings of the original paper hold perfectly.

Papers for such issues are not selected because the editors have a prior on whether they are replicable. They are selected because they are influential flagship papers in the area. It is a professional recognition to have one's paper selected into one of these issues.

Mandatory Paper Outline

The format of replication studies should be in roughly equal parts (and in this order):

  1. Pure replication from the original underlying data. Exact replication is a sine qua non.

    Replication should be an attempt to use the same sample and methods employed in the original paper to obtain the same figures as those in the key tables reported in the original paper. This part of the paper exists not only to confirm that there were no coding errors in the original paper, but to keep the starting point as close to the original paper as possible. The authors of the replicating paper are also required to publish their replication source code and some data (ideally full data sets).

    We do not expect replication problems. However, the replication may not succeed through little or no fault of the original authors. For example:

    • The original data (even CRSP and Compustat) may have been corrected or updated over the years.
    • The original paper may not have spelled out all details. (Doing so is nearly impossible in an academic paper.) The replicating paper should also clarify methods and procedures that are not immediately clear from the original paper.

    If this is the case, we as a profession still want to find out. The point is not to blame original authors—the point is to learn.

    Important: The idea is not to replicate all the values in all the tables—just the key ones. Replicating authors who want to replicate more or all results (i.e., more than what is suitable for print) can do so, but such results will go into an online appendix. Again, the idea of part 1 is to confirm the exact numbers, and distribute the source that makes this possible.

  2. Out-of-sample tests: performance since publication. Because we usually publish replications of papers that are 10-20 years old, we can now learn (a) whether the effects have become weaker; (b) whether the results continued to hold out of sample; and (c) if they did not, whether the out-of-sample results were so contrary that the full-sample inference has by now changed (e.g., from statistically significant to insignificant).
  3. Plain specification robustness tests. This part could add new tests: winsorizing, alternative weighting schemes, alternative timing, common additional controls, different standard error assumptions, and/or a placebo. Ideally, the replicating paper should also show, or at least discuss, whether such tests support or challenge the original paper's conclusions.
  4. Additional higher-level tests and discussions. This could include interpretations of issues such as (corrections of) endogeneity, even if this could arguably be considered an omitted variables issue. It could be about time-series rather than cross-sectional (or vice-versa) association (e.g., fixed effects), which can be different in meaning and interpretation but interesting from the perspective of the hypothesis. It could also contain an interpretation of the findings through a different lens than that proposed in the original paper. This aspect of the paper could be a good venue for the replicating authors to publish original thoughts and discussion points that would otherwise not be easy to communicate to the broader profession.

Journal-Author Contract

Again, the CFR emphasizes that it is committed to publishing replication papers even when they conclude that the original paper was perfect.

Each paper is expected to be replicated by two teams working independently. If replication turns out to be difficult, teams can also help one another in the pure-replication part of the work. The first (replication) part is expected to be identical across both teams. If a team cannot replicate the original paper independently, it is then encouraged to communicate with the other replicating team. If no team can replicate the original paper, then the teams are asked to coordinate with me for communication with the original author(s). We want to minimize the imposition on the original authors. In any case, we hope that replication will not be painful. After all, the original paper should have set out the recipe.

The CFR does reserve the right to ask teams to remove outright incorrect tests and execution, but will give replicating authors extensive latitude in deciding on good tests. For example, the editor and referee may feel that value-weighting removes too many interesting observations, but if the replicating authors insist that it is important and interesting, it will likely survive to the published paper.

Regardless of outcome, the original authors will be invited to provide non-anonymous feedback on the first submission of the papers and to publish their own perspectives on the replicating papers. They get the last word. Disagreements are welcome—insinuations are not.

As to the key incentive that makes participation for replicating teams worth their while, please recall that the replication paper, unlike others, will be published. It is not the usual wild goose chase: the desperate search for astonishing findings. Moreover, we know from the psychology replication issue that previous replication papers were very influential. They are beginning to transform their discipline. We hope we can do this for finance and accounting, too. Be part of it!

THE SPECIFIC TOPIC AND ISSUE

The second issue in this replication series will be dedicated to .... Tom and Jerry have graciously agreed to serve as the editors for this issue.

The empirical papers for this issue were selected based largely (but not exclusively) on objective citation counts. They are:

  1. Donald Duck
  2. Mickey Mouse
  3. Bugs Bunny
  4. ...

The CFR is hereby soliciting proposals for replication for each of these papers.

Timeline

The intended timeline is:

  • Submission of interest: 3-6 months (application selection)
  • Confirmed replication: 6-12 months (first part: source code, data sets)
  • First submission: 12-18 months
  • Review process: 3-4 months
  • Final submission: 24 months
  • Original author responses: 27 months
  • Issue creation: 36 months
  • Publication: 42 months

Team Objectivity

Members of the replication teams should be, and should strive to remain, objective. If a third party could perceive a personal conflict of interest, either positive or negative, please indicate this in the proposal to the editor. The CFR's goal is to select teams that are not only objective, but also viewed as objective. In case of doubt, please ask.

  • It is not a conflict of interest or a lack of objectivity if authors have an opinion or hunch about whether the to-be-replicated paper is likely to hold up. In fact, some submitters may already have worked on a replication earlier.
  • It is a conflict of interest if the replication and original authors have had a history of repeated disagreements.
  • It is a lack of objectivity if the replicating authors are intent on proving an outcome.

Contact

If interested, please contact the assigned issue editor yosemite sam and cc the CFR Editor Ivo Welch with a description of the team members, the paper to be replicated, and any potential conflicts of interest. The lead author on a team must have an existing publication record.

The Generic Professional Appeal

Replication (and not just replicability) is vitally important for the profession. We are not trying to debunk papers. We are trying to bring objectivity and remove politics from the knowledge-building process.

The effort involved is much less than for an ordinary paper. There is a clear road map of what is required and a direct route to publication. It should require less effort than even an invited paper. The work can be done together with coauthors and/or Ph.D. students.

This will be the first time our profession has ever tried to execute objective, systematic replication. We need to lend some prestige to this first-time undertaking.

As for me, I would like everyone to consider helping on this task at least once in their lifetime, and to view it as a necessary service to our academic profession, similar to refereeing, and regardless of whether it makes friends or enemies. I am worried about what our academic enterprise means if even famous people prefer to free-ride and not help build an objective replicated knowledge base. If the famous don't care enough to do it, how can we ask others?

Note that it is not necessary that the replicating team is built around an expert in the subject, here liquidity. After all, this is a replicating outside perspective. It needs good financial empiricists, not subject experts.

This is not my problem. This is our problem. If we cannot get this done as a collection of hundreds of academic researchers, what meaning does our professional endeavor really have?

What does our profession need most? More published papers? More referee reports? More of "everyone knows this is false" insinuations (which, as editor of the CFR, I have heard too many times without empirical support)? Or do we need unbiased replication and confirmation/rejection of our most important base findings? Where do you think you can contribute the most to our science?