Bad (or Sometimes Bad) Reasons for Referees to Reject a Paper
The following are not always the automatic-reject reasons that they are at other journals:
I already knew this.
Well, sometimes this is a good reason for rejecting. Often it is not.
If you as a referee deeply involved in the area already knew this (but not everyone else did), then please point to a paper (or a couple of papers) that shows it. If you cannot, then the fact that you knew this is not a reason for rejection. Finance is a written, not an oral discipline. (And if the reference is very obscure, and a large number of papers and authors obviously do not know it, then it may still count as new.)
The authors were careless, as is demonstrated by the fact that on page 15, they misspelled Bikhchandani.
Well, sometimes this is a good reason for rejecting. Often it is not. Careless papers may well be rejected, but it needs to be genuine carelessness, not nitpicks. I can find a misspelling in any paper.
The authors do not know how to write an academic paper, as evidenced by their repeated use of first-person pronouns. The paper is poorly written.
Well, sometimes this is a good reason for rejecting. However, if this is all that is wrong with the paper, and it is a great idea, the English can be fixed. Of course, if the paper is indeed poorly written, then the referee and editor cannot be blamed for having missed what the paper was supposed to be about.
This is obvious.
Well, sometimes this is a good reason for rejecting. Often it is not. What is obvious in retrospect is not always obvious in prospect. Good academic papers should always be obvious after having been read. Moreover, what is obvious to one person is not obvious to another. A prima-facie argument against obviousness is that many people regularly get it wrong.
I don’t find it interesting.
Well, sometimes this is a good reason for rejecting. Sometimes it is not. As an editor, I very much like to hear the view of the referee on this issue, but it is the editor who plays the primary role on this judgment call.
The paper breaks no ground in methodology. Or: An undergraduate student could have written this.
Well, sometimes this is a good reason for rejecting. Often it is not. We are a discipline that writes papers about economic phenomena, not a discipline that writes papers about methods for papers about economic phenomena.
The paper merely shows that findings no longer hold by simply applying the same method to other data. In our discipline, the thresholds for looking at established phenomena are higher.
Well, sometimes this is a good reason for rejecting. Sometimes it is not.
When an influential published paper has concluded that X causes Y in a data set from 1970-2010, and the finding reverses from 2011-2016 using the same methods, then X does not seem to cause Y after all, and this finding is interesting even if the paper does nothing else. If anything, the threshold should be lower: the literature now contains a finding that is demonstrably incorrect, even if at the time it was the best inference.
This is not to be confused with hypothesis testing: if the T-statistic was 1.8 before and is 1.6 now, that is not interesting. If the T-statistic was 1.8 and is now -1.0, it is interesting. Both cases are more interesting if not only the new data (2011-2016) but also the full data (1970-2016) reverse the sign.
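To make the sign-reversal comparison concrete, here is a minimal sketch. All numbers are hypothetical and the data are simulated; nothing below comes from any actual study. It simply computes one-sample T-statistics for an "old" period, a "new" period, and the pooled sample:

```python
import numpy as np

def t_stat(x):
    """One-sample T-statistic for the hypothesis that the mean of x is zero."""
    x = np.asarray(x, dtype=float)
    return x.mean() / (x.std(ddof=1) / np.sqrt(len(x)))

rng = np.random.default_rng(0)

# Hypothetical "old" sample (1970-2010, monthly) with a positive effect...
old = rng.normal(loc=0.10, scale=1.0, size=480)
# ...and a hypothetical "new" sample (2011-2016) where the effect reverses.
new = rng.normal(loc=-0.40, scale=1.0, size=72)

print(t_stat(old))                       # positive in the old data
print(t_stat(new))                       # negative in the new data
print(t_stat(np.concatenate([old, new])))  # pooled 1970-2016 sample
```

The interesting case in the text is the one where the second number flips sign relative to the first, and more so when the pooled T-statistic flips as well.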