The usual trick with `system(<shell command that gets the free memory>)` is far from perfect.
Spawning the subshell has a heavy memory footprint of its own (the fork momentarily duplicates the R process), and I have seen it fail many times while many MB of memory were still available.
It also does not reflect per-process memory constraints imposed through the kernel API.
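On Linux, the trick in question usually amounts to parsing `/proc/meminfo` from a subshell. A minimal sketch (Linux-specific; `MemAvailable` is assumed to be present, which requires kernel 3.14 or later):

```shell
# What the system() trick typically runs on Linux: parse /proc/meminfo.
# MemAvailable (in kB) is the kernel's estimate of how much memory can
# be allocated without swapping.
awk '/^MemAvailable:/ {print $2}' /proc/meminfo
```

From R this would be wrapped as `system("awk '/^MemAvailable:/ {print $2}' /proc/meminfo", intern = TRUE)`, and that wrapping is precisely where the fork can fail under memory pressure.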
Why is this not a minor inconvenience?
There are often opportunities to parallelize several computations (using `mcparallel()` from the `parallel` package), but these can fail due to insufficient memory. It would be far more efficient to check the amount of available memory beforehand, but this is currently not implemented in R, and the usual `system(<shell command that gets the free memory>)` trick is insufficient for the reasons above.
When one tries to work with larger data on Linux, there is a real danger of locking the workstation up with excessive paging when R happily allocates too much memory. The natural solution is to use the kernel API to impose memory constraints on R, as done by the package https://github.com/krlmlr/ulimit
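For illustration, the same constraint can be imposed from the shell before R starts. A sketch of the shell-level analogue of the `setrlimit(RLIMIT_AS, ...)` call such a package would make; the 4000000 kB cap is an arbitrary example value:

```shell
# Run in a subshell so the limit does not leak into the calling shell.
(
  ulimit -v 4000000   # cap virtual address space at ~4 GB (kB units)
  ulimit -v           # print the limit now in effect for this subshell
  # exec R --vanilla  # R would inherit the cap; an oversized allocation
                      # then fails with an R error instead of thrashing swap
)
```

The advantage of doing this at the kernel level is that every allocation path in R is covered, not just the ones that remember to check.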
Disagree. These things are necessarily OS-dependent and rely on reasonable, responsible system administration. See e.g. http://unix.stackexchange.com/questions/44985/limit-memory-usage-for-a-single-linux-process
The R Installation and Administration manual already addresses this in Section 5:
"You should ensure that the shell has set adequate resource limits: R expects a stack size of at least 8MB and to be able to open at least 256 file descriptors. (Any modern OS should have default limits at least as large as these, but apparently NetBSD may not. Use the shell command ulimit (sh/bash) or limit (csh/tcsh) to check.)"
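The check the manual describes can be scripted directly in sh/bash; a small sketch, with the thresholds taken from the quoted passage:

```shell
# Verify the resource limits the R manual asks for (sh/bash).
stack_kb=$(ulimit -s)   # stack size in kB; R expects at least 8192 (8 MB)
nofile=$(ulimit -n)     # open file descriptors; R expects at least 256
echo "stack: ${stack_kb} kB, file descriptors: ${nofile}"
[ "$stack_kb" = "unlimited" ] || [ "$stack_kb" -ge 8192 ] ||
  echo "warning: stack size below 8 MB"
[ "$nofile" = "unlimited" ] || [ "$nofile" -ge 256 ] ||
  echo "warning: fewer than 256 file descriptors"
```

csh/tcsh users would run `limit` instead, as the manual notes.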