OpenVZ Forum


Is ploop stable/Any progress on quotas for simfs [message #53588] Tue, 15 October 2019 23:12
seanfulton
Messages: 105
Registered: May 2007
Senior Member
When ploop first came out, we used it and lost many, many containers to corrupt ploop filesystems. We went back to simfs. Now we're looking at upgrading some nodes to OpenVZ 7, and it looks like ploop is the only way to get quotas for containers.

So for those who have upgraded, how stable is it?

Is there any progress on first-level quotas for simfs?
Re: Is ploop stable/Any progress on quotas for simfs [message #53590 is a reply to message #53588] Wed, 23 October 2019 12:19
ccto
Messages: 61
Registered: October 2005
Member
We have servers running both OpenVZ 6 and OpenVZ 7, and are gradually migrating to the OpenVZ 7 platform.

We use ploop, snapshots (for backup), and compacting. ploop is OK.
We have not received any reports of lost files.

simfs inside OpenVZ 7 does not support second-level quotas.
If the guest is CentOS 5, ploop does not support second-level quotas either on an OpenVZ 7 host.
Re: Is ploop stable/Any progress on quotas for simfs [message #53591 is a reply to message #53590] Wed, 23 October 2019 19:41
seanfulton
Messages: 105
Registered: May 2007
Senior Member
This is very helpful, thank you. I have not been able to get SIMFS containers to have a container quota, which I thought was a first-level quota. It's said to be supported, but when I create a VE on SIMFS and enter it, df shows the whole host filesystem, not just the container's little slice (like it used to in OpenVZ 6). Is that normal?
Re: Is ploop stable/Any progress on quotas for simfs [message #53593 is a reply to message #53588] Sun, 17 November 2019 19:07
wsap
Messages: 60
Registered: March 2018
Location: Halifax, NS
Member
We've also been using OpenVZ 7 with ploop (we transferred most containers from OpenVZ 6 SIMFS) for about 2 years now. We were a bit hesitant to use ploop because of those past reports on these forums about recovery and such; however, there hasn't yet been a single data-consistency problem that the Virtuozzo subsystem hasn't auto-repaired for us upon boot of the container. Granted, we use RAID 5 and have replaced multiple disks with full parity recovery by the RAID controller, and not a single problem with booting containers yet. Fingers crossed that remains true.

Upsides: because ploop containers are single image files on the node, migrating containers has never been faster (full network bandwidth). Quota works great within the containers as well.

The biggest downsides of ploop over SIMFS are:

1. BACKUPS. (A) If you want quick restore-from-backup capability, your backup script needs to take a snapshot (to get a disk that isn't currently locked), and you have to sacrifice incremental backups and compression, so backups take up a huge amount of space. (B) If you want file-by-file compressible, dedupable, incremental backups (like using Borg backup) that can be used to restore an entire container / node, then you need to ensure your restore script accommodates container (and ploop) creation, mounts the disk, and then restores the data. (This is preferred by far if you ask me, for backup storage costs alone, but it takes much more work to implement.)
2. There's a fair amount of storage overhead used by ploop, even with its nightly compact system running. On a node with 1.7TB of total usable storage, where 90% of it is used by around 20 ploop containers, about 10% of that 90% is overhead -- that makes for around 150-200 GB of wasted space. I've analyzed a number of different nodes and they all have similar overhead. I don't expect zero overhead from such a system, but I would definitely like to see it down to less than 5%.

The backup thing isn't *that* big of a deal, but it's definitely a bit more involved than SIMFS, where you simply create the container and pop your files in place.
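For approach (B) above, the snapshot / mount / backup cycle can be sketched roughly as follows. This is only an illustration, assuming OpenVZ 7's `vzctl snapshot` family of subcommands; the container ID, mountpoint, and Borg repo path are placeholders, not values from this thread, and the script defaults to a dry run that just prints the commands it would execute:

```shell
#!/bin/sh
# Dry-run sketch of a file-level ploop backup. CTID, MNT and the
# Borg repo path are hypothetical placeholders.
: "${DRY_RUN:=1}"   # default: only print the commands; set DRY_RUN=0 to execute
run() { if [ "$DRY_RUN" = 1 ]; then echo "$@"; else "$@"; fi; }

CTID=101
SNAP_ID=backup-snap
MNT=/mnt/ct$CTID-backup

run vzctl snapshot "$CTID" --id "$SNAP_ID" --skip-suspend      # consistent, unlocked disk
run mkdir -p "$MNT"
run vzctl snapshot-mount "$CTID" --id "$SNAP_ID" --target "$MNT"
run borg create "/backups/ct$CTID::$(date +%F)" "$MNT"         # incremental, deduped
run vzctl snapshot-umount "$CTID" --id "$SNAP_ID"
run vzctl snapshot-delete "$CTID" --id "$SNAP_ID"
```

Restore would then be the reverse: create the container (and its ploop image), mount it, and copy the files back in.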
Re: Is ploop stable/Any progress on quotas for simfs [message #53594 is a reply to message #53588] Mon, 18 November 2019 02:30
ccto
Messages: 61
Registered: October 2005
Member
1. You may mount the ploop snapshot at some mountpoint (e.g. /mnt) and make an incremental file-based backup of that mountpoint. Then you can restore it file by file.

2. Yes, from our experience, ploop carries overhead (yes, around 10%).
There is a pcompact tool, which compacts the ploop file automatically once a certain threshold is reached.

Compared to others, like KVM's qcow2, there is some storage overhead there too.

From my personal experience, if you have a number of containers, each containing a large number of files, ploop performs faster than simfs over time.
Re: Is ploop stable/Any progress on quotas for simfs [message #53595 is a reply to message #53594] Mon, 18 November 2019 14:05
wsap
Messages: 60
Registered: March 2018
Location: Halifax, NS
Member
ccto wrote on Sun, 17 November 2019 22:30
1. You may mount the ploop snapshot at some mountpoint (e.g. /mnt) and make an incremental file-based backup of that mountpoint. Then you can restore it file by file.


Indeed! That's what we do. Just pointing out that it's more involved than SIMFS.

ccto wrote on Sun, 17 November 2019 22:30
Compared to others, like KVM's qcow2, there is some storage overhead there too.


For sure, but this is a comparison with SIMFS -- just making it clear what kinds of differences can be expected. I think the benefits outweigh the downsides, but that might not be the case for everyone.

ccto wrote on Sun, 17 November 2019 22:30
From my personal experience, if you have a number of containers, each containing a large number of files, ploop performs faster than simfs over time.


That may be the case; I haven't done a direct comparison on the same hardware to know for sure. One thing we can be sure of: it's definitely faster to migrate ploop containers, since there's no need to transfer file by file.
Re: Is ploop stable/Any progress on quotas for simfs [message #53623 is a reply to message #53595] Fri, 10 January 2020 20:31
seanfulton
Messages: 105
Registered: May 2007
Senior Member
I want to reply to my own thread here. I built several OpenVZ servers and migrated my old simfs containers to ploop. Not really trusting it, but not having much choice.

Last week I needed to expand a ploop container. I did so, and it filled up the physical drive it was on. Now, that's not a great situation: given that I allocated 4500G to the container and the drive was 5.4T, it should not have filled up.

But, I spent three days trying to fix it. I resized, compacted, etc. I beat the hell out of it.

Each time, the activity would hang, forcing a reboot. I am happy to say that the ploop drive was still completely mountable to the end. It never got corrupted; I never had an error with that.

So at this point I would say ploop is pretty reliable and stable.

sean
Re: Is ploop stable/Any progress on quotas for simfs [message #53624 is a reply to message #53623] Sat, 11 January 2020 15:57
wsap
Messages: 60
Registered: March 2018
Location: Halifax, NS
Member
Hey Sean, that's some great testing! Thankfully we haven't experienced the resizing issues -- all of our containers are 500GB or less, so that may explain it. We *do* experience the overhead (everyone using ploop must) and I wish that could be improved a bit, even if it could be reduced to 5% rather than ~10%.

For our <500GB containers, resizing and compacting using the following command has *not* resulted in the need to restart the container.

prl_disk_tool compact --hdd /vz/private/$CTID/root.hdd/


That said, our results agree on the lack of corruption in ploop drives... it's been a little over 2 years since we switched to ploop and not a single corruption has occurred. We'll keep our fingers crossed!
Re: Is ploop stable/Any progress on quotas for simfs [message #53625 is a reply to message #53624] Sat, 11 January 2020 17:40
seanfulton
Messages: 105
Registered: May 2007
Senior Member
What sort of overhead are you seeing with ploop containers? Right now I am migrating a CentOS 6 container; the container's disk usage shows 500M, but df on the drive it is on is showing 1.2G of usage.
Re: Is ploop stable/Any progress on quotas for simfs [message #53626 is a reply to message #53625] Sat, 11 January 2020 21:28
wsap
Messages: 60
Registered: March 2018
Location: Halifax, NS
Member
seanfulton wrote on Sat, 11 January 2020 13:40
What sort of overhead are you seeing with ploop containers? Right now I am migrating a CentOS 6 container; the container's disk usage shows 500M, but df on the drive it is on is showing 1.2G of usage.


Here are a few containers we currently have for comparison. The first number is the total storage of all files in the container.

- WS2-163: 64.68 GB of 150 GB used -- du reports 78GB = 17% overhead
- WS2-230: 65.38 GB of 100 GB used -- du reports 77GB = 15% overhead
- WS2-253: 2.55 GB of 15 GB used -- du reports 4.3GB = 41% overhead
- WS2-301: 45.06 GB of 75 GB used -- du reports 54GB = 17% overhead

Assuming 41% is an outlier, and/or more likely to be a problem on smaller containers (which I believe is accurate from what I've seen), we're talking an average of 16% overhead. If you've got a 2TB drive, that means you've lost over 327GB to ploop overhead which in the above examples could easily be used for 4-6 more containers. Whereas if it were closer to 5% you'd only be losing 100GB, which would be an easier loss to stomach, given the advantages of ploop.
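The per-container percentages above can be reproduced with a quick one-liner; the figures below are the WS2-163 example from this post, treating du's report on the ploop image as the on-disk footprint:

```shell
# Overhead = (on-disk size of the ploop image - space used inside the CT)
#            / on-disk size. Figures from the WS2-163 example above.
awk 'BEGIN {
    used_gb = 64.68   # space consumed inside the container
    du_gb   = 78      # du on the ploop image, host side
    printf "overhead: %.0f%%\n", (du_gb - used_gb) / du_gb * 100
}'
```

This prints "overhead: 17%", matching the WS2-163 line; swap in the other containers' numbers to check the rest.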
Re: Is ploop stable/Any progress on quotas for simfs [message #53627 is a reply to message #53626] Thu, 16 January 2020 10:55
HHawk
Messages: 32
Registered: September 2017
Location: Europe
Member
websavers wrote on Sat, 11 January 2020 21:28
[...] we're talking an average of 16% overhead. If you've got a 2TB drive, that means you've lost over 327GB to ploop overhead [...]


Similar results here. One time we even had an overhead of almost 1.5 TB, because of resizing the ploop (backup) container.
I still prefer SimFS, but alas we have no real choice. We never had these issues with SimFS; you always had exactly the space that was in use.

Re: Is ploop stable/Any progress on quotas for simfs [message #53629 is a reply to message #53627] Thu, 30 January 2020 17:39
samiam123
Messages: 15
Registered: March 2017
Junior Member
Well, you do sort of have a choice: use KVM instead.

One of the main advantages of OVZ over KVM was SIMFS; with the switch to ploop, that advantage is gone. Also, I think the project made a big mistake in not creating a migration script to upgrade existing OVZ 6 nodes to OVZ 7. I'm sure it would have been possible, and if they had it, we would have stuck with OVZ. As it is now, switching to KVM is no more painful for us, and KVM is a much more standard and widely used platform. The performance/efficiency advantage of OVZ is pretty much gone now too.
Re: Is ploop stable/Any progress on quotas for simfs [message #53630 is a reply to message #53629] Thu, 30 January 2020 17:51
wsap
Messages: 60
Registered: March 2018
Location: Halifax, NS
Member
They did release a script to migrate from OVZ6 to OVZ7: https://docs.openvz.org/openvz_users_guide.webhelp/_migrating_containers_from_openvz_based_on_kernels_2_6_18_and_2_6_32_to_virtuozzo_7.html

And control panels like SolusVM have actually integrated it into their migration UI.
Re: Is ploop stable/Any progress on quotas for simfs [message #53631 is a reply to message #53630] Fri, 31 January 2020 17:50
samiam123
Messages: 15
Registered: March 2017
Junior Member
That script migrates and converts containers to a new node. I am talking about upgrading an existing physical node from an OVZ6 server to an OVZ7 server. It is definitely possible; perhaps a little tricky, but doable. We have upgraded CE6 servers to CE7 without too much difficulty, which would get you a node on a CE7/OVZ7-based kernel. Converting the OVZ6 containers to OVZ7 would just be another layer on top of that, by running the script you pointed out.

Maybe it's easy for some people to just set up a new server and migrate containers, but it's not for us. We would have to change everyone's IP addresses if we did that, and for what we do that's a very painful change for customers. Experience has shown that a lot of customers will cancel on us when we try to migrate them and change their IP.

We can just as easily migrate customers to KVM VMs. We just create a KVM VM with the same OS and use a one-liner rsync command between the OVZ server and the KVM VM. Works great.

From the destination (KVM) VM:


rsync --exclude /etc/fstab --exclude /dev --exclude /etc/udev --exclude /etc/sysconfig/network-scripts --exclude /etc/inittab --exclude /etc/init --exclude=/boot --exclude=/proc --exclude=/lib/modules --exclude=/sys -e "ssh -p 22" --numeric-ids -avpogtStlHz root@openvzsourceip:/ /


KVM is just easier and simpler all around to install and administer. Standard commands with lots of documentation. With OVZ it's all proprietary and I don't see any need to use the KVM feature of OVZ7 when I can just use KVM to begin with and bypass all the OVZ proprietary install and configure complications.

[Updated on: Fri, 31 January 2020 18:08]


Re: Is ploop stable/Any progress on quotas for simfs [message #53632 is a reply to message #53631] Fri, 31 January 2020 18:06
seanfulton
Messages: 105
Registered: May 2007
Senior Member
I have moved several containers and never had to move the IP addresses. How are your IPs configured?

We use venet0 with dedicated IP addresses and have not had a problem.

sean