Let us consider Dalgaard's example given in utils::example(SSD). Clearly, a comparison such as

    anova(mlmfit, mlmfit0, idata = D.idata, X = ~ deg * noise, M = ~ deg * noise)

cannot be performed, because X and M span the same space and, consequently, T is void. Nevertheless, although the call is equivalent, the comparison

    anova(mlmfit, mlmfit0, idata = D.idata, X = ~ deg * noise)

gives the following wrong result:

    Analysis of Variance Table

    Model 1: reacttime ~ 1
    Model 2: reacttime ~ 1 - 1

    Contrasts orthogonal to
    ~deg * noise

      Res.Df Df   Gen.var.  Pillai approx F num Df den Df    Pr(>F)
    1      9    1.2185e-29
    2     10  1 2.4231e-29 0.99141   76.902      6      4 0.0004381 ***
    ---
    Signif. codes:  0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1

In the first case, identical(proj(X), proj(M)) is TRUE, whereas in the second analysis this assertion is FALSE. Consequently, whereas in the first case the rounding done by zapsmall leads to 0, in the second case zapsmall returns some non-null (very small) values, and Thin.row then does not return a NULL matrix.
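The scale-relative behaviour of zapsmall can be seen directly, without any model fit. A minimal sketch (the numeric values are borrowed from the Gen.var. column of the output above; the point is that zapsmall rescales its rounding digits by the largest magnitude in its input):

    x <- c(1.2185e-29, 2.4231e-29)

    # All entries are tiny, so max(abs(x)) is tiny too: the rounding
    # digits are rescaled and the values come back unchanged, not zero.
    zapsmall(x)

    # With an O(1) entry present, the same tiny values ARE zapped to 0.
    zapsmall(c(x, 1))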

Please simplify your example. If this is a problem with zapsmall, you shouldn't need to involve examples, anova, etc. Just put together an input to zapsmall that gives the wrong answer.

(In reply to Duncan Murdoch from comment #1)
> Please simplify your example. If this is a problem with zapsmall, you
> shouldn't need to involve examples, anova, etc. Just put together an input
> to zapsmall that gives the wrong answer.

The purpose of the example was to emphasize the inconsistent call of zapsmall in stats:::Thin.row, using an example for which the maximum of the values to be rounded (i.e., X) is lower than the error tolerance (tol).
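The consequence for the row-thinning step can be sketched without anova at all. The helper thin_rows below is a hypothetical stand-in for the behaviour attributed to stats:::Thin.row, not its actual source: it drops the rows of a matrix that zapsmall rounds entirely to zero.

    # Hypothetical stand-in for the row-thinning step discussed above
    # (an assumption, not the real stats:::Thin.row code): keep only the
    # rows that zapsmall() does not round entirely to zero.
    thin_rows <- function(x) {
      keep <- apply(x, 1L, function(z) any(zapsmall(z) != 0))
      x[keep, , drop = FALSE]
    }

    # A matrix that is numerically zero, but whose entries all share a
    # tiny scale, as in the second anova call above.
    m <- matrix(c(1.2185e-29, 2.4231e-29, -1.1e-29, 3.3e-30), nrow = 2)

    # No row is dropped: within each row, zapsmall() rescales its
    # rounding digits by max(abs(z)), so the tiny values survive.
    nrow(thin_rows(m))

This is exactly the situation where the maximum of the values to be rounded is itself below the tolerance, so the scale-relative rounding of zapsmall never zaps anything.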