I'm using Duplicati 2.0.5.1_beta running as a service on Windows Server 2012 R2, backing up to Google Drive. The server has 16GB of RAM in it, and floats around 40% memory usage and 12% CPU. It's backing up a very large number (150k-ish) of small-ish (<10MB) files.

When doing a backup to Google Drive, I notice that Duplicati's memory consumption seems unusual. Duplicati is triggering a lot of page faults - I've seen values of over 100 hard faults a second - and as a result is thrashing the pagefile horrendously, making it a real choke point on the I/O of the whole server. It's reserving 32MB of working memory, and commit memory of 1.0GB. End users start to complain about poor file serving performance. (Unfortunately, the pagefile and the data volume are on the same physical disk/RAID array.)

I can't understand why Duplicati isn't using more physical RAM, or what would be triggering the page faults. I've got some options set to try and calm it down, e.g. Tempdir=(a directory on a separate USB3.0 connected drive) - a sketch of the kind of option set I mean is at the end of this post. Any thoughts on how I can stop it thrashing the pagefile and blocking up the file system I/O?

Follow-up:

Yes, I'll be happy to do that, if you know there are changes in it likely to help the problem. The backup I started yesterday is still running, and I'd really like one backup to complete first. A relevant point - as I mentioned, initially I was getting periodic 403 Forbidden responses from Google Drive (which I think was probably rate or volume limiting cutting in), which was ending the backup after every 30-40GB of upload. It's got about 5 days to go, so I'll switch to the Canary then (unless it fails before that).

When I restarted the backup, it had to work its way through all the files again, from scratch. The log was reporting a difference between the date/time stamps on the files (correct date/time) and the date/time recorded in the database (00:00:00, epoch). It was during this "catch up" that the worst of the page faulting was happening - up to 200 page faults a second; a sketch of how to sample that counter is below. I'm guessing (knowing nothing about the internals of either Duplicati or SQLite) that the problem was database related(?).

Maybe it's SQLite that is not pulling more of the database into working RAM, which is why the commit RAM grows hard (with massive pagefile thrashing) while the working RAM sits at 32MB? (There's more than 8GB of working RAM free.) With the Duplicati service started but no job processing, I typically see the Duplicati working RAM sitting at 32MB(ish) and around 100MB of commit. In that state, the 16GB of RAM is showing 7-8GB in use (mostly MS SQL at 4GB) and 8-9GB available.
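To make the SQLite guess concrete: SQLite keeps its own page cache, sized by `PRAGMA cache_size`, and the default is only about 2MB - anything beyond that is re-read from the file rather than held in working RAM. I don't know what Duplicati actually sets these to, so this is purely an illustrative sketch with placeholder values (`example.sqlite` is hypothetical):

```python
# Illustrative only - not Duplicati's code. Shows the two SQLite knobs that
# decide how much of a database file lives in RAM.
import sqlite3

con = sqlite3.connect("example.sqlite")  # hypothetical job database

# Negative cache_size means "this many KiB". The default (-2000) is ~2MB,
# which for a multi-GB job database forces constant re-reads from disk.
con.execute("PRAGMA cache_size = -262144")    # ~256MB of page cache

# mmap_size lets SQLite memory-map the file instead of issuing read() calls;
# mapped pages then show up in the process working set.
con.execute("PRAGMA mmap_size = 1073741824")  # allow up to 1GB of mmap

print(con.execute("PRAGMA cache_size").fetchone())
con.close()
```

If the defaults are in play, a tiny page cache would at least be consistent with the 32MB working set, though it wouldn't by itself explain the commit growth.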
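On measuring the faulting: my numbers come from Resource Monitor's "Hard Faults/sec" column, but if anyone wants to log it over time, here's a rough Python sketch using psutil (an assumption - it isn't installed by default; `pip install psutil`). One caveat: the per-process Windows counter lumps soft and hard faults together, so it will read higher than the hard-fault column.

```python
# Sample the Duplicati process once a second and print the page-fault delta
# plus working set and commit - roughly what Task Manager / Resource Monitor
# show. Uses Windows-only psutil fields (num_page_faults, wset, pagefile).
import time
import psutil

def find_duplicati():
    # Process name is an assumption - check what Task Manager actually shows
    # (e.g. Duplicati.Server.exe when running as a service).
    for p in psutil.process_iter(['name']):
        if p.info['name'] and 'duplicati' in p.info['name'].lower():
            return p
    return None

proc = find_duplicati()
if proc is None:
    raise SystemExit("Duplicati process not found")

prev = proc.memory_info().num_page_faults
while True:
    time.sleep(1)
    mi = proc.memory_info()
    print(f"faults/s: {mi.num_page_faults - prev:6d}   "
          f"working set: {mi.wset // 2**20:5d}MB   "
          f"commit: {mi.pagefile // 2**20:5d}MB")
    prev = mi.num_page_faults
```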
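And for completeness, the flavour of options I was referring to above. The names are from the advanced options list on my version; treat the values and paths as placeholders rather than recommendations, and check them against your own version's option list:

```
--tempdir=E:\DuplicatiTemp            # the separate USB3.0 drive
--thread-priority=idle                # keep CPU contention down
--use-background-io-priority=true     # yield disk I/O to file serving
--asynchronous-upload-limit=2         # fewer volumes queued in parallel
--number-of-retries=10                # ride out the periodic 403s
--retry-delay=30s
```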