Is ploop stable/Any progress on quotas for simfs [message #53588]
Tue, 15 October 2019 23:12
seanfulton (Senior Member, Messages: 105, Registered: May 2007)
When ploop first came out we used it and lost many, many containers to corrupt ploop filesystems. We went back to simfs. Now we're looking at upgrading some nodes to OpenVZ 7, and it looks like ploop is the only way to get quotas for containers.
So for those who have upgraded, how stable is it?
Is there any progress on first-level quotas for SIMFS???
Re: Is ploop stable/Any progress on quotas for simfs [message #53593 is a reply to message #53588]
Sun, 17 November 2019 19:07
wsap (Member, Messages: 70, Registered: March 2018, Location: Halifax, NS)
We've also been using OpenVZ 7 with ploop (transferred most containers from OpenVZ 6 SIMFS) for about 2 years now. We were a bit hesitant to use ploop because of those past reports on these forums about recovery and such; however, there hasn't yet been a single data consistency problem that the Virtuozzo subsystem hasn't auto-repaired for us upon boot of the container. Granted, we use RAID 5 and have replaced multiple disks with full parity recovery by the RAID controller, and we haven't had a single problem booting containers yet. Fingers crossed that remains true.
Upsides: because ploop containers are single image files on the node, migrating containers has never been faster (full network bandwidth), which is a huge plus. Quotas work great within the containers as well.
The biggest downsides of ploop over SIMFS are:
1. BACKUPS. (A) If you want quick restore-from-backup capability, your backup script needs to snapshot the container (to get a disk image that isn't currently locked), and you have to sacrifice incremental backups and compression, so backups take up a huge amount of space. (B) If you want file-by-file compressible, dedupable, incremental backups (like with borg backup) that can be used to restore an entire container / node, then your restore script has to handle container (and ploop) creation, mount the disk, and then restore the data; a sketch of that snapshot-and-mount flow follows at the end of this post. (This is preferred by far if you ask me, for backup storage costs alone, but it takes much more work to implement.)
2. There's a fair amount of storage overhead used by ploop, even with its nightly compact system running. On a node with 1.7TB total usable storage, where 90% of it is used by around 20 ploop containers, about 10% of that 90% is overhead -- that makes for around 150-200GB of wasted space. I've analyzed a number of different nodes and they all have similar overhead. I don't expect zero overhead from such a system, but I would definitely like to see that down to less than 5%.
The backup thing isn't *that* big of a deal, but it's definitely a bit more involved than SIMFS where you simply create the container and pop your files in place.
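A minimal sketch of that snapshot-and-mount backup flow (assumptions: CT ID 101, the mount target, and the borg repo path are placeholders, and the node has the vzctl snapshot / snapshot-mount subcommands from vzctl 4.x available):

# take a snapshot so the backup reads from a consistent image that isn't locked by the running CT
SNAP=$(uuidgen)
vzctl snapshot 101 --id "$SNAP" --skip-suspend

# mount the snapshot at a temporary path and back it up file by file (incremental + deduped via borg)
vzctl snapshot-mount 101 --id "$SNAP" --target /mnt/ct101-snap
borg create /backup/borg-repo::ct101-{now} /mnt/ct101-snap

# clean up: unmount and delete the snapshot
vzctl snapshot-umount 101 --id "$SNAP"
vzctl snapshot-delete 101 --id "$SNAP"

Restoring a whole container is then the reverse: create the container (which creates the ploop image), mount it, and extract the borg archive into it.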
Re: Is ploop stable/Any progress on quotas for simfs [message #53594 is a reply to message #53588]
Mon, 18 November 2019 02:30
ccto (Member, Messages: 61, Registered: October 2005)
1. You may mount the ploop snapshot at some mountpoint (e.g. /mnt) and make an incremental file-based backup of that mountpoint. Then you can restore it file by file.
2. Yes, from our experience, ploop has some overhead (around 10%).
There is a pcompact tool, which compacts the ploop image automatically once a certain threshold is crossed (a sketch of triggering compaction by hand is at the end of this post).
Compared to others, like KVM's qcow2, there is some storage overhead too.
From my personal experience, if you have a number of containers, each containing a large number of files, ploop performs faster than simfs over time.
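For what it's worth, a rough sketch of checking and triggering compaction manually (assumptions: CT 101 and the /vz/private path are placeholders, and the build has the vzctl compact subcommand that was added for ploop):

# compare the on-disk size of the ploop image with what the container reports as used
du -sh /vz/private/101/root.hdd
vzctl exec 101 df -h /

# ask ploop to release unused blocks back to the host filesystem, then re-check
vzctl compact 101
du -sh /vz/private/101/root.hdd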
Re: Is ploop stable/Any progress on quotas for simfs [message #53595 is a reply to message #53594]
Mon, 18 November 2019 14:05
wsap (Member, Messages: 70, Registered: March 2018, Location: Halifax, NS)
ccto wrote on Sun, 17 November 2019 22:30: 1. You may mount the ploop snapshot at some mountpoint (e.g. /mnt) and make an incremental file-based backup of that mountpoint. Then you can restore it file by file.
Indeed! That's what we do. Just pointing out that it's more involved than SIMFS.
ccto wrote on Sun, 17 November 2019 22:30: Compared to others, like KVM's qcow2, there is some storage overhead too.
For sure, but this is a comparison with SIMFS -- just making it clear what kinds of differences can be expected. I think the benefits outweigh the downsides, but that might not be the case for everyone.
ccto wrote on Sun, 17 November 2019 22:30: From my personal experience, if you have a number of containers, each containing a large number of files, ploop performs faster than simfs over time.
That may be the case; I haven't done a direct comparison on the same hardware to know for sure. One thing we can be sure of is that migrating ploop containers is definitely faster, as it doesn't need to transfer file by file.
Re: Is ploop stable/Any progress on quotas for simfs [message #53627 is a reply to message #53626]
Thu, 16 January 2020 10:55
HHawk (Member, Messages: 32, Registered: September 2017, Location: Europe)
websavers wrote on Sat, 11 January 2020 21:28:
seanfulton wrote on Sat, 11 January 2020 13:40: What sort of overhead are you seeing with ploop containers? Right now I am migrating a CentOS 6 container and the container's disk usage shows 500M, but df on the drive it is on is showing 1.2G of usage.
Here's a few containers we currently have for comparison. The first number is the total storage of all files in the container.
- WS2-163: 64.68 GB of 150 GB used -- du reports 78GB = 17% overhead
- WS2-230: 65.38 GB of 100 GB used -- du reports 77GB = 15% overhead
- WS2-253: 2.55 GB of 15 GB used -- du reports 4.3GB = 41% overhead
- WS2-301: 45.06 GB of 75 GB used -- du reports 54GB = 17% overhead
Assuming 41% is an outlier, and/or more likely to be a problem on smaller containers (which I believe is accurate from what I've seen), we're talking an average of 16% overhead. If you've got a 2TB drive, that means you've lost over 327GB to ploop overhead, which in the above examples could easily be used for 4-6 more containers. Whereas if it were closer to 5%, you'd only be losing 100GB, which would be an easier loss to stomach given the advantages of ploop.
Similar results here (a rough per-container check is sketched below). One time we even had an overhead of almost 1.5 TB, because of resizing the ploop (backup) container.
I still prefer SimFS, but alas we have no real choice. We never had these issues with SimFS; the space reported was always just the space actually in use.
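A rough way to measure that per container across a node (a sketch; it assumes the default /vz/private layout, numeric CT IDs, and vzlist/vzctl in PATH):

# on-disk size of each ploop image vs. space reported as used inside the container
for ct in $(vzlist -H -o ctid); do
    img=$(du -sm "/vz/private/$ct/root.hdd" | awk '{print $1}')
    used=$(vzctl exec "$ct" df -m / | awk 'NR==2 {print $3}')
    echo "CT $ct: image ${img}MB, used ${used}MB, overhead $((img - used))MB"
done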
Re: Is ploop stable/Any progress on quotas for simfs [message #53631 is a reply to message #53630]
Fri, 31 January 2020 17:50
samiam123 (Junior Member, Messages: 15, Registered: March 2017)
That script is for migrating and converting containers to a new node. I am talking about upgrading an existing physical node from an OVZ6 server to an OVZ7 server. It is definitely possible; perhaps a little tricky, but doable. We have upgraded CE6 servers to CE7 without too much difficulty, so that would get you a node on a CE7/OVZ7-based kernel. Converting the OVZ6 containers to OVZ7 would then just be another layer on top of that, by running the script you pointed out.
Maybe it's easy for some people to just set up a new server and migrate containers, but it's not for us. We would have to change everyone's IP addresses if we were to do that. For what we do, that's a very painful change for customers. Experience has shown that a lot of customers will cancel on us when we try to migrate them and change their IP.
We can just as easily migrate customers to KVM VMs. We just create a KVM VM with the same OS and use a one-liner rsync command between the OVZ server and the KVM VM. Works great.
From the destination (KVM) VM:
rsync --exclude /etc/fstab --exclude /dev --exclude /etc/udev --exclude /etc/sysconfig/network-scripts --exclude /etc/inittab --exclude /etc/init --exclude=/boot --exclude=/proc --exclude=/lib/modules --exclude=/sys -e "ssh -p 22" --numeric-ids -avpogtStlHz root@openvzsourceip:/ /
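If it helps anyone, the same command with -n (--dry-run) added is a cheap way to confirm the exclude list before running it for real; nothing is written on the destination:

# dry run: lists what would be transferred, writes nothing
rsync -n --exclude /etc/fstab --exclude /dev --exclude /etc/udev --exclude /etc/sysconfig/network-scripts --exclude /etc/inittab --exclude /etc/init --exclude=/boot --exclude=/proc --exclude=/lib/modules --exclude=/sys -e "ssh -p 22" --numeric-ids -avpogtStlHz root@openvzsourceip:/ /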
KVM is just easier and simpler all around to install and administer: standard commands with lots of documentation. With OVZ it's all proprietary, and I don't see any need to use the KVM feature of OVZ7 when I can just use KVM to begin with and bypass all the OVZ-proprietary install and configuration complications.
[Updated on: Fri, 31 January 2020 18:08]