The paper was submitted to a medical journal; the editor had sent it out for review to 3 referees, two of whom were, allegedly, statistical experts. I haven't read the paper, nor the reviews, so I can't comment in great detail. But from what I hear, the reviewers' comments were just wrong. In effect, they accused the authors of using the wrong statistical methods, when it looks like it was the reviewers who missed a valid statistical point.

For example, one of the referees had criticised the following: for some reason (I can't remember the details), the authors had fitted a regression model and then regressed the resulting linear predictor on the covariates, which obviously leads to $R^2=1$.

Now, you can certainly debate whether the methods used by the authors were the most appropriate for their purpose, but their point was not wrong $-$ you can easily check this in R with the following commands:

# Simulates some covariates
x1 <- rnorm(100,0,1)
x2 <- rpois(100,2)
x3 <- rbinom(100,1,.6)
# (Arbitrarily) sets the coefficients for each covariate
beta <- c(1.43,-.14,.97,1.1)
# Computes the "true" linear predictor
mu <- cbind(rep(1,100),x1,x2,x3)%*%beta
# (Arbitrarily) sets the population standard deviation
sigma <- 2.198
# Simulates the response
y <- rnorm(100,mu,sigma)
# Fits a linear regression & shows the results
m <- lm(y~x1+x2+x3)
summary(m)

Call:
lm(formula = y ~ x1 + x2 + x3)

Residuals:
    Min      1Q  Median      3Q     Max 
-5.0186 -1.4885 -0.0434  1.4007  5.7971 

Coefficients:
            Estimate Std. Error t value Pr(>|t|)    
(Intercept)  1.86280    0.50344   3.700 0.000359 ***
x1          -0.03908    0.24307  -0.161 0.872618    
x2           1.05753    0.15927   6.640 1.88e-09 ***
x3           0.41025    0.45461   0.902 0.369090    
---
Signif. codes:  0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1

Residual standard error: 2.244 on 96 degrees of freedom
Multiple R-squared: 0.3154,	Adjusted R-squared: 0.294 
F-statistic: 14.74 on 3 and 96 DF,  p-value: 5.692e-08
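As a quick sanity check (a self-contained sketch, not part of the original analysis; I've added a `set.seed()` call for reproducibility, which the original code omits), the reported $R^2$ can be computed by hand as $1-\text{RSS}/\text{TSS}$ and compared against what `summary()` returns:

```r
# Sketch: verify that the R^2 reported by summary() equals 1 - RSS/TSS
set.seed(1)                       # seed added for reproducibility (assumption, not in the original)
x1 <- rnorm(100, 0, 1)
x2 <- rpois(100, 2)
x3 <- rbinom(100, 1, .6)
mu <- cbind(rep(1, 100), x1, x2, x3) %*% c(1.43, -.14, .97, 1.1)
y  <- rnorm(100, mu, 2.198)
m  <- lm(y ~ x1 + x2 + x3)
# Residual sum of squares over total sum of squares (around the mean)
r2_by_hand <- 1 - sum(resid(m)^2) / sum((y - mean(y))^2)
all.equal(r2_by_hand, summary(m)$r.squared)   # TRUE
```

This is just the definition of the coefficient of determination for a model with an intercept, which is what `summary.lm` reports as "Multiple R-squared".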

Of course, because of sampling variability, the coefficients are estimated with error; in addition, the overall model fit (as measured by $R^2$) is not perfect, with only about 32% of the total variability explained by the regression. If, however, we regress the fitted values on the same set of covariates:

m1 <- lm(m$fitted.values~x1+x2+x3)
summary(m1)

Call:
lm(formula = m$fitted.values ~ x1 + x2 + x3)

Residuals:
       Min         1Q     Median         3Q        Max 
-1.560e-15 -2.553e-16 -1.035e-17  2.161e-16  2.699e-15 

Coefficients:
              Estimate Std. Error    t value Pr(>|t|)    
(Intercept)  1.863e+00  1.193e-16  1.562e+16   <2e-16 ***
x1          -3.908e-02  5.758e-17 -6.786e+14   <2e-16 ***
x2           1.058e+00  3.773e-17  2.803e+16   <2e-16 ***
x3           4.103e-01  1.077e-16  3.809e+15   <2e-16 ***
---
Signif. codes:  0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1

Residual standard error: 5.315e-16 on 96 degrees of freedom
Multiple R-squared: 1,	Adjusted R-squared: 1 
F-statistic: 2.627e+32 on 3 and 96 DF,  p-value: < 2.2e-16

this now implies a perfect fit $-$ but that makes perfect sense, as the fitted linear predictor is, by construction, exactly *that* combination of covariates.
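The same point can be checked directly: the fitted values lie exactly in the column space of the design matrix, so refitting them on the same covariates reproduces them with residuals at machine precision and an $R^2$ of 1. A self-contained sketch (again with an added `set.seed()`, which is my assumption, not in the original):

```r
# Sketch: regressing the fitted values on the same covariates gives R^2 = 1
set.seed(1)                       # seed added for reproducibility (assumption)
x1 <- rnorm(100, 0, 1)
x2 <- rpois(100, 2)
x3 <- rbinom(100, 1, .6)
y  <- rnorm(100, cbind(rep(1, 100), x1, x2, x3) %*% c(1.43, -.14, .97, 1.1), 2.198)
m  <- lm(y ~ x1 + x2 + x3)
# Refit the fitted values on the same covariates
m1 <- lm(fitted(m) ~ x1 + x2 + x3)
max(abs(resid(m1)))               # numerically zero (machine precision)
summary(m1)$r.squared             # essentially 1, up to floating-point rounding
```

Nothing mysterious is going on: OLS projects the response onto the span of the covariates, and the fitted values are already in that span.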

I'm not defending my friend's paper for the sake of it $-$ to reiterate, I haven't read it and I don't really know whether it should have been published. And maybe there were other issues that the reviewers rightly picked up. But it is certainly wrong that it was judged as statistically flawed on this point, and in their shoes I would probably write a response letter to the editor to argue the case.

Of course this is a very delicate issue, and people often voice strong opinions about the state of peer review; Larry Wasserman even goes as far as to argue that we should dispense with it altogether.

Thanks for that post; I'm going to look at the details of the stats later.

I've got a very delicate question: is there somewhere on the web a place where we can collect strange reviewer comments concerning statistics?

I've got some stories to tell (backed up with facts, that is =) ), as do a lot of the peers I'm talking to.

I'm really not aware of such a thing $-$ I suppose this is quite common, so many people may have stories to tell. But I just haven't come across such a place.

Hm, thanks anyway. It's rather strange, though, that obviously erroneous remarks by reviewers can decide the fate of your papers: it's difficult to respond if the editor decides to reject right away. We had this happen recently with a cumulative dissertation $-$ it prolonged the whole thing by nearly a year, as it happened four times in a row...
