January 2016

Authors

Per the RFP below, we have selected the following teams of authors to replicate and reconsider these most influential papers:

Pastor-Stambaugh
[1] Robert Novy-Marx; [2] Jeffrey Pontiff
Acharya-Pedersen
[1] Craig Holden; [2] Eiichiro Kazumori
Amihud
[1] Larry Harris and Andrea Amato; [2] Anna von Reibnitz
All
In addition, Chase DeHan is writing R code for the replication of the three studies.

In special cases, I may allow a third replication. In any case, these will be replications and extensions that will be as unbiased as possible. The CFR will publish the papers regardless of whether the results turn out to be the same or different.

Special Replication Issues

The Critical Finance Review is planning to publish issues dedicated to replicating the most influential empirical papers in financial economics. It is explicitly not the goal of these replication issues either to prove or to disprove the papers. The replications are meant to be as objective as possible. The CFR wants to place no incentive on itself or on the authors to slant the results either favorably or unfavorably. The contract between an invited replicating team (often headed by a senior researcher) and the CFR is that the journal will publish the replicating paper even (or especially) if all the findings of the original paper hold perfectly.

Replication studies should consist of the following parts, in roughly equal measure and in this order:

  1. Pure replication from common data. This is an attempt to use the same sample and methods employed in the original paper to obtain the same figures as those in the key tables reported in the original paper (see footnote 1). This part of the paper exists to show that the starting point is as close to the original paper as possible. The authors of the replicating paper are required to publish their replication source code and some data (ideally full data sets). We do not need to replicate all the values in all the tables, just the key ones. Replications of further results (more than what is suitable for print) can go into an online appendix. Again, the idea is to obtain the exact numbers and to distribute the source code that makes this possible.
  2. Out-of-sample tests—performance since publication.
  3. Plain specification robustness tests. These could include new tests: winsorizing, alternative weighting schemes, alternative timing, common additional controls, different standard-error assumptions, and/or a placebo (see the short R sketch after this list). Ideally, the replicating paper should also show, or at least discuss, whether such tests support or challenge the original paper's conclusions.
  4. Additional higher-level tests and discussion. This could include interpretations of issues such as (corrections for) endogeneity, even if this could arguably be considered an omitted-variables issue. It could be about time-series rather than cross-sectional association (or vice versa), e.g., with fixed effects, which can differ in meaning and interpretation but be interesting from the perspective of the hypothesis. It could also contain an interpretation of the findings through a different lens than that proposed in the original paper. This part of the paper could be a good venue for the replicating authors to publish original thoughts and discussion points that would otherwise not be easy to communicate to the broader profession.
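To make item 3 concrete, the following is a minimal, hypothetical R sketch (R because the CFR is already commissioning R replication code) of two such checks: winsorizing and equal- versus value-weighted portfolio means. The simulated data and all variable names are purely illustrative and are not taken from any of the three papers.

    ## Minimal illustrative sketch; simulated data, not from any of the three papers.
    set.seed(1)
    n   <- 5000
    dat <- data.frame(
      illiq  = rlnorm(n),           # hypothetical illiquidity measure
      ret    = rnorm(n, 0, 0.10),   # hypothetical monthly return
      mktcap = rlnorm(n, 10)        # hypothetical market capitalization
    )

    ## Winsorize a variable at the 1st and 99th percentiles.
    winsorize <- function(x, p = c(0.01, 0.99)) {
      q <- quantile(x, probs = p, na.rm = TRUE)
      pmin(pmax(x, q[1]), q[2])
    }
    dat$illiq_w <- winsorize(dat$illiq)

    ## Equal-weighted vs. value-weighted mean returns within illiquidity quintiles.
    dat$quintile <- cut(dat$illiq_w,
                        breaks = quantile(dat$illiq_w, probs = seq(0, 1, 0.2)),
                        include.lowest = TRUE, labels = 1:5)
    ew <- tapply(dat$ret, dat$quintile, mean)
    vw <- sapply(split(dat, dat$quintile),
                 function(g) weighted.mean(g$ret, w = g$mktcap))
    print(rbind(equal_weighted = ew, value_weighted = vw))

The point is only that each robustness variation should be simple, transparent, and reported alongside the baseline replication.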
Again, the CFR emphasizes that it is committed to publishing replication papers that conclude that the original paper was perfect.

Each paper is expected to be replicated by two teams working independently. If replication turns out to be difficult, teams can also help one another in the pure-replication part of the work. The first (replication) part is expected to be identical across both teams. If a team cannot replicate the original paper independently, it is then encouraged to communicate with the other replicating teams. If no team can replicate the original paper, then the teams are asked to coordinate with me for communication with the original author(s). We want to minimize the imposition on the original authors. In any case, we hope that replication will not be painful. After all, the original paper should have set out the recipe.

The CFR does reserve the right to ask teams to remove outright incorrect tests and execution, but it will give replicating authors extensive latitude in deciding what constitutes a good test. For example, the editor and referee may feel that value-weighting removes too many interesting observations, but if the replicating authors insist that it is important and interesting, it will likely survive into the published paper.

Regardless of outcome, the original authors will be invited to provide non-anonymous feedback on the first submission of the papers and to publish their own perspectives on the replicating papers. Disagreements are welcome—insinuations are not.

CFR RFP: Liquidity

The first issue in this replication series will be dedicated to liquidity. The three most influential empirical papers in this literature as of the mid-2000s, based on objective citation counts, are Amihud (2002), Pástor and Stambaugh (2003), and Acharya and Pedersen (2005).

The CFR is hereby soliciting 2-3 proposals for the replication of each of these three papers. The intended timeline is:
Submission of Interest: November 2015 (possibly to AFA 2016)
Confirmed Replication: April 2016 (first part: email source code and data sets)
First Submission: September 2016
Final Submission: January 2017
Publication: mid-2017

Members of a team must be objective. If any third party could perceive a personal conflict of interest, either positive or negative, please indicate this in the proposal to the editor (Ivo Welch). The CFR's goal is to select teams that are not only objective, but also viewed as objective. In case of doubt, ask. Note that an opinion about whether a paper's findings are likely to hold up is not a conflict of interest. In fact, some submitters may already have worked on replications earlier.

As for an extra incentive that makes it worth one's while, please recall that this paper, unlike others, will be published. It is not the usual wild-goose chase, desperately searching for astonishing findings. Moreover, we know from the replication issue in psychology that its papers were very influential and are beginning to transform that discipline. I hope we can do the same. Be part of it!

An Appeal

Replication (and not just replicability) is vitally important for the profession. We are not trying to debunk papers. We are trying to bring objectivity and remove politics from the knowledge-building process.

The effort involved is much less than it is for an ordinary paper. There is a clear road map of what is required and a direct route to publication. It should require less effort than even an invited paper. The work can be done together with coauthors and/or PhD students.

This will be the first time our profession has ever tried to execute objective, systematic replication. We need to lend some prestige to this first-time undertaking.

As for me, I would like everyone to consider helping on this task at least once in their lifetime, and to view it as a necessary service to our academic profession, similar to refereeing, and regardless of whether it makes friends or enemies. I am worried about what our academic enterprise means if even famous people prefer to free-ride and not help build an objective replicated knowledge base. If the famous don't care enough to do it, how can we ask others?

Note that it is not necessary for the replicating team to be built around an expert in the subject, here liquidity. After all, a replication benefits from an outside perspective. It requires good financial empiricists, not subject experts.

This is not my problem. This is our problem. If we cannot get this done as a collection of hundreds of academic researchers, what meaning does our professional endeavor really have?

What does our profession need most? More published papers? More referee reports? More of "everyone knows this is false" insinuations (which, as editor of the CFR, I have heard too many times without empirical support)? Or do we need unbiased replication and confirmation/rejection of our most important base findings? Where do you think you can contribute the most to our science?

Contact

If interested, please contact the CFR Editor Ivo Welch with a description of the team members, the paper to be replicated, and any potential conflicts of interest. The lead author on a team must have an existing publication record.

 


Footnotes
  1. We do not expect replication problems. However, it is possible that replication may not succeed through no fault of the original authors. The original data may have been corrected or updated over the years. If this is the case, the profession should still learn this.

    The replicating paper should also clarify methods and procedures that are not immediately clear from the original papers. It is almost impossible to describe every step to the last detail in any paper without omitting something.