See this post on r-devel for more details. I think the following two examples indicate bugs.
1) When writing a UTF-8 string to a binary connection opened with encoding = "UTF-8" on Windows, the output is generated as Latin-1:
> string <- enc2utf8("Zürich")
> con <- file("test1.txt", open="wb", encoding = "UTF-8")
> writeLines(string, con)
> system("file test1.txt")
test1.txt: ISO-8859 text
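A commonly documented way to get the raw UTF-8 bytes onto disk on Windows is to skip re-encoding entirely: open the binary connection without an encoding argument and pass useBytes = TRUE to writeLines. This is a sketch, not part of the original report; the file name test2.txt is my own choice.

```r
# Write the UTF-8 bytes verbatim: no encoding= on the connection,
# and useBytes = TRUE so writeLines does not translate the string.
string <- enc2utf8("Zürich")
con <- file("test2.txt", open = "wb")
writeLines(string, con, useBytes = TRUE)
close(con)
```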
2) Another, probably related, problem: when writing a UTF-8 string to a UTF-8 text connection with useBytes = TRUE, the string seems to be re-encoded one time too many, resulting in invalid characters:
> con <- file("test3.txt", open="w", encoding = "UTF-8")
> writeLines(string, con, useBytes = TRUE)
> system("file test3.txt")
test3.txt: UTF-8 Unicode text, with CRLF line terminators
> readLines("test3.txt", encoding="UTF-8")
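To see exactly how many times the string was re-encoded, it can help to inspect the raw bytes on disk rather than rely on readLines. A diagnostic sketch (the byte values below are standard encoding facts, not output from the original report): "ü" is c3 bc in UTF-8 and fc in Latin-1, while a double-encoded "ü" (UTF-8 bytes reinterpreted as Latin-1 and encoded again) appears as c3 83 c2 bc.

```r
# Inspect the first bytes of the file byte-by-byte.
# "Zürich" should start 5a c3 bc if written once in UTF-8;
# 5a c3 83 c2 bc would indicate a double re-encoding.
readBin("test3.txt", what = "raw", n = 10)
```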
The first is not a bug: re-encoding is only documented to work on text-mode connections, and on a text-mode connection it does appear to work. (This might not have been documented when the bug was first reported!)
The second is unclear. You are asking the connection to translate the input into UTF-8, while useBytes = TRUE tells writeLines that the string is already in the native encoding and should be passed through untouched. So it might be reasonable to do what it did. It is really a matter of priority, and here the request to translate took higher priority than the request to leave the bytes alone.
It's also arguable that useBytes should have the higher priority, but I'm not going to change this one.
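Under that reading, the safe pattern is to let exactly one layer do the encoding work: either the connection translates (text mode with encoding = "UTF-8", no useBytes), or writeLines passes bytes through (useBytes = TRUE, no encoding on the connection). A sketch of the first option, with an assumed file name; note that the intermediate translation to the native encoding can lose characters the native codepage cannot represent.

```r
# One well-defined translation: the text connection re-encodes,
# and writeLines is left at its default (useBytes = FALSE).
string <- enc2utf8("Zürich")
con <- file("test4.txt", open = "w", encoding = "UTF-8")
writeLines(string, con)
close(con)
readLines("test4.txt", encoding = "UTF-8")
```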