I'd been using xcopy in a backup script, but every so often xcopy would fail with "insufficient memory" when a pathname that exceeded 254 characters snuck into the backup set. Lots of advice on the web said xcopy was deprecated in favor of robocopy and suggested switching.
I switched to robocopy and it works fine, but it runs dramatically slower. These are big backup sets: the xcopy version ran for 6 hours, which was OK for overnight, but the robocopy version runs for 11 hours, which means it's still running the next morning! Is there a way to speed up robocopy, OR is there a way to force xcopy to ignore long file names and keep going? Here's sample code, old and new:
xcopy S:\SharedFiles E:\NAS\SharedFiles /c /f /i /s /e /k /r /h /y /d /j 1>> C:\utilities\alloutput.txt 2>&1
robocopy S:\SharedFiles E:\NAS\SharedFiles /e /j /np /fp /r:1 /w:1 1>> C:\utilities\alloutput.txt 2>&1
N.B. I'm already using the /C option in xcopy, but that doesn't seem to stop xcopy from ending as soon as it encounters a long pathname.
EDIT: I updated my script to use /r:1 /w:1. In my test run there were 78 errors requiring retries. This made a slight improvement, but it's still way slower than the xcopy version. I've also tried it with and without /J, with no discernible difference. I have not tried setting a threading limit, but AFAIK xcopy is single-threaded anyway and robocopy's /MT option defaults to 8 threads.
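(Back-of-the-envelope, assuming each of those errors triggers at most one retry with the 1-second wait from /w:1: 78 errors × 1 retry × 1 s ≈ 78 seconds of extra waiting, so the retries alone don't come close to explaining the extra 5 hours.)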
Try /mt:1 or 2 and see if the speed goes up (or try the opposite at /mt:16). I would remove /j unless your average file size is in GBs. Also remove /np and/or add /eta while you're troubleshooting to get an idea of the speeds compared to xcopy.
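For example, starting from the command in the question, one test variant might look something like this (with /j and /np removed, /eta added, and /mt:2 as just one thread count to try; swap in /mt:16 for the comparison run):

robocopy S:\SharedFiles E:\NAS\SharedFiles /e /fp /eta /r:1 /w:1 /mt:2 1>> C:\utilities\alloutput.txt 2>&1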