Cloudberry backup not all chunks were updated
How is it failing, exactly? Are you getting an error? Is the container not stopped? How do you know that it hangs? Does the GUI become unresponsive?

Yes, the GUI becomes completely unresponsive. After the GUI becomes unresponsive, if I try to restart or stop the container, it just prompts me that it failed to do so. No error is listed in the UI. I'll grab the logs next time I attempt to stop it and post those for you. I might try the backup on the Windows client, as I've read there are some additional options that are not available for the Linux client yet.
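Since the container cannot be stopped cleanly from the UI, the logs can also be pulled from the UnRaid command line. A sketch, assuming the container is named `cloudberry-backup` (substitute your actual container name):

```shell
# Show the last 200 log lines from the hung container
docker logs --tail 200 cloudberry-backup

# Check whether Docker still considers the container to be running
docker inspect --format '{{.State.Status}}' cloudberry-backup

# Last resort: force-kill the container instead of a graceful stop
docker kill cloudberry-backup
```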
So the first problem is that when the backup runs, it is putting everything on the Google Drive under the following directory: My Drive/ServerMedia/CBB_Servername/Storage/user/media/. My Rclone backup was saving everything to My Drive/ServerMedia/. Is there no way to avoid it backing up the entire path from /storage to /media? Is there no way to remove the CBB_Servername subfolder? This is not a big issue; I can make it work by simply moving all of the existing files to this new subdirectory. It just seems unnecessary to have the entire path created.

The bigger issue is that for every file it backs up, it creates two additional sub-folders. One is a folder with the filename, and the second under it is a date. Let's say the files I'm backing up are located in. When it backs up, the actual file is then located in. Is there no way around this? Can I not simply back up the files in their directory as they are?

Since it's backup software and not a synchronisation/cloning tool, I'm afraid that it's not possible to only copy files without any metadata. But to be sure, I encourage you to contact CloudBerry Lab's support and ask them.
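To illustrate the layout being described, here is a hypothetical sketch. Only `My Drive/ServerMedia/CBB_Servername/Storage/user/media/` comes from the thread; the file name and date folder are made up for the example:

```text
My Drive/ServerMedia/
└── CBB_Servername/                   # subfolder added by the backup software
    └── Storage/
        └── user/
            └── media/                # full source path recreated
                └── video.mkv/        # extra folder named after the file (hypothetical file)
                    └── 20180101120000/   # extra date/version folder (hypothetical date)
                        └── video.mkv
```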
I need to do it this way because I am backing up to Google Drive, which is mounted to my system under /mnt/disks/Google. I have the docker set to have access to /mnt/. I was hoping to set up something similar to Rclone Sync in CloudBerry.
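A minimal sketch of the kind of mount described above, assuming an rclone remote named `gdrive:` has already been configured with `rclone config` (the remote name is an assumption):

```shell
# Mount the Google Drive remote where the CloudBerry docker can see it
mkdir -p /mnt/disks/Google
rclone mount gdrive: /mnt/disks/Google --daemon --allow-other
```

`--allow-other` matters here because the mount is created by one user but read by a containerized application running as another.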
I read somewhere that Backblaze B2 stores data in 100 MB chunks or blocks, and the recommendation was to try to match the incoming "chunk size" from CloudBerry to that of Backblaze. But you also need to make sure that CloudBerry has enough RAM assigned to support the number of worker threads and chunk size (there is a formula to calculate the minimum RAM allocation). Personally, I am using 3 threads, a 100 MB chunk size, and 700 MB of RAM allocated. So far so good: I have been running a very large backup (87 GB) with many large files for over 3 hours without stalling.

I changed my port mappings for one of my 2 CloudBerry dockers so I could run them at the same time, and about a week or so later, Fix Common Problems complained about this:

Docker Application Backup_Remote, Container Port 43210 not found or changed on installed application

When changing ports on a docker container, you should only ever modify the HOST port, as the application in question will expect the container port to remain the same as what the template author dictated.

But the ports I changed were for webport and vnc:

Docker Application Backup_Remote, Container Port 43211 not found or changed on installed application

What are your port mappings, as displayed under the Docker page? Fix this here:

I'm running into an issue with this docker and have a question about its usage. First, the problem: if I attempt to stop a backup that is in progress from inside the CloudBerry UI, the docker hangs. All attempts to stop or restart the docker from the UnRaid UI fail. Restarting the UnRaid server during this hang ends up hanging the UnRaid UI. Not really sure what is going on with that. Oddly enough, if I delete a running backup instance inside the CloudBerry UI, it does not cause a hang.

As to the usage issues I am having, I just wanted to ask if it's possible to simply back up files alone. I am coming from an Rclone backup instance that went crazy and started creating duplicates all over the place.
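The host-vs-container port rule discussed in this thread can be sketched with plain `docker run` commands. The image and container names here are hypothetical; 43210 is the container port mentioned in the Fix Common Problems warning:

```shell
# Correct: change only the HOST port (left of the colon);
# the CONTAINER port (right of the colon) stays what the template dictates.
docker run -d --name Backup_Main   -p 43210:43210 some/cloudberry-image
docker run -d --name Backup_Remote -p 43211:43210 some/cloudberry-image
```

With this mapping, both instances listen on 43210 inside their containers, while the host reaches them on 43210 and 43211 respectively.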
It seems to happen with very large files over 1 GB (home video archives). I have since been tweaking the options under the advanced settings.
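The thread quotes 3 worker threads, a 100 MB chunk size, and 700 MB of RAM, but the actual minimum-RAM formula did not survive in the text. As a rough sanity check only (this is an assumed model, not CloudBerry's documented formula), one can reason that each worker thread needs at least one chunk in memory plus some fixed overhead:

```python
# Assumed model, NOT CloudBerry's official formula:
# one in-flight chunk per worker thread, plus fixed application overhead.
threads = 3
chunk_mb = 100
overhead_mb = 200  # assumed headroom for the application itself

min_ram_mb = threads * chunk_mb + overhead_mb
print(min_ram_mb)  # 500 MB, comfortably under the 700 MB the poster allocated
```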
I'm looking at B2 for cold storage too; not as crazy pricing as Amazon S3/Glacier. The update crashed the docker, so I had to reinstall the CloudBerry docker, but it works fine now.

Edit: the backup stops after some time. Is that just a bug for me, or does it happen for others too?

Yeah, my backups to B2 seem to randomly stop as well. I was hoping this recent update would fix it, but my backup stalled again this morning.