I stumbled upon strange results of simulated p-values from chisq.test() for tables with large margins. There have been previous reports about issues with simulate.p.value in chisq.test() (PR#8224) and fisher.test() (PR#10558), but these are unrelated. Here is an illustrative reproducible example:

    crosstab <- matrix(c(2400L, 180000L, 330000L, 25000000L), 2, 2)

    chisq.test(crosstab, correct = FALSE)
    ## X-squared = 0.2375, df = 1, p-value = 0.626

    set.seed(1)
    chisq.test(crosstab, correct = FALSE, simulate.p.value = TRUE, B = 1000)
    ## X-squared = 0.2375, df = NA, p-value = 0.000999

Note that the simulated p-value equals 1/(B+1) for all random seeds I have tried: the B = 1000 underlying test statistics ('tmp <- .Call(C_chisq_sim, sr, sc, B, E)') are all identical and approximately equal to 9.977352e-05, so none of them reaches the observed statistic.

However, dividing all counts by 10 yields consistent asymptotic and simulated p-values:

    chisq.test(crosstab / 10, correct = FALSE)
    ## X-squared = 0.0238, df = 1, p-value = 0.8775

    set.seed(1)
    chisq.test(crosstab / 10, correct = FALSE, simulate.p.value = TRUE, B = 1000)
    ## X-squared = 0.0238, df = NA, p-value = 0.8941

Presumably the system architecture plays a role in these outputs, so here is the relevant part of my sessionInfo():

    R Under development (unstable) (2015-02-02 r67702)
    Platform: x86_64-unknown-linux-gnu (64-bit)
    Running under: Ubuntu 14.04.1 LTS
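As a cross-check that avoids the non-exported C entry point, one can inspect the simulated statistics directly via r2dtable(), which (to my understanding; I have not traced the sources in detail) draws random two-way tables with fixed margins using the same underlying generator as the simulation in chisq.test(). This is a hedged sketch, not part of the original report:

    crosstab <- matrix(c(2400L, 180000L, 330000L, 25000000L), 2, 2)
    sr <- rowSums(crosstab)
    sc <- colSums(crosstab)
    E  <- outer(sr, sc) / sum(crosstab)  # expected counts under independence

    set.seed(1)
    tabs  <- r2dtable(1000, sr, sc)      # 1000 random tables with these margins
    stats <- vapply(tabs, function(tab) sum((tab - E)^2 / E), numeric(1))

    # If the generator is degenerate for these margins, almost all of the
    # 1000 statistics should collapse onto (nearly) a single value:
    length(unique(signif(stats, 6)))
    range(stats)

If the distinct-value count comes out near 1 with a range far below the observed X-squared of 0.2375, that would confirm the degeneracy independently of chisq.test() itself.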