OpenVZ Forum


Ploop - anyone using in production yet? [message #45856] Mon, 09 April 2012 21:13
Rene (Member, 40 messages, registered September 2006)


Is anyone running ploop in production yet? It is in the stable release branch, but the wiki page still says it is not safe for production use. Is the wiki page outdated, or what?

Would there be any conflict running ploop on top of LVM?

Re: Ploop - anyone using in production yet? [message #45857 is a reply to message #45856] Mon, 09 April 2012 21:26
Rene (Member, 40 messages, registered September 2006)
Just tried to convert a CT to ploop on a test server - it didn't get very far:

# vzctl convert 100 --layout ploop
Creating image: /vz1/private/100.ploop/root.hdd/root.hdd size=9223372036854775807K
Creating delta /vz1/private/100.ploop/root.hdd/root.hdd bs=2048 size=-2 sectors
Storing /vz1/private/100.ploop/root.hdd/DiskDescriptor.xml
Floating point exception
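
Looking at the numbers: the CT apparently had unlimited diskspace, which shows up as LLONG_MAX kilobytes (9223372036854775807K). My guess, not verified against vzctl's source, is that doubling that value to convert kilobytes into 512-byte sectors wraps 64-bit signed arithmetic around to -2, which is exactly what the "size=-2 sectors" line shows, and the crash then follows from arithmetic on that bogus size. A one-liner in bash reproduces the wraparound:

# 1 KB = two 512-byte sectors; LLONG_MAX KB overflows a signed 64-bit value to -2
$ echo $(( 9223372036854775807 * 2 ))
-2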


Re: Ploop - anyone using in production yet? [message #45865 is a reply to message #45856] Tue, 10 April 2012 15:40
Ales (Senior Member, 330 messages, registered May 2009)
Recent discussions on the list tell me that the wiki page is not outdated: ploop is not recommended for production environments. The technology only recently became available.

As for the error: I'm not running ploop and haven't done any testing with it yet, so I can't help you there...
Re: Ploop - anyone using in production yet? [message #45918 is a reply to message #45865] Sat, 14 April 2012 14:24
Rene (Member, 40 messages, registered September 2006)
Thanks, Ales. What list is that? Something I could subscribe to?
Re: Ploop - anyone using in production yet? [message #45919 is a reply to message #45856] Sun, 15 April 2012 18:06
Ales (Senior Member, 330 messages, registered May 2009)
Sure, it was the OpenVZ users list; the list info is here:

http://wiki.openvz.org/Mailing_list

The ploop info came from this thread:

http://openvz.org/pipermail/users/2012-April/004628.html
Re: Ploop - anyone using in production yet? [message #45951 is a reply to message #45856] Fri, 20 April 2012 01:01
VDSExtreme (Junior Member, 11 messages, registered February 2012, The Netherlands)
Some of our customers, who use their servers for testing purposes to help us bug-track our platforms, are pretty excited.

The one thing that still needs to be fixed is backing up the containers. Please DON'T use VZDump to back up a ploop file system; it can leave your containers unavailable for a long time.

Judging by our test results, a lot of work remains before backups, snapshots and control panels all play well together.
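
Until the backup tooling matures, one workaround we are experimenting with, sketched below, is to snapshot the ploop image and copy it while the snapshot absorbs live writes. This is a rough sketch only: check the ploop snapshot commands against your installed version, the image path is taken from Rene's post above, and <snapshot-uuid> stands in for the UUID that ploop snapshot reports.

$ DD=/vz1/private/100.ploop/root.hdd/DiskDescriptor.xml
$ ploop snapshot $DD                               # freeze a consistent point-in-time state
$ rsync -a /vz1/private/100.ploop/root.hdd/ /backup/100/root.hdd/
$ ploop snapshot-delete -u <snapshot-uuid> $DD     # drop the snapshot once the copy is done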


Kind regards,

VDS Extreme - Technical Department
Re: Ploop - anyone using in production yet? [message #46122 is a reply to message #45857] Thu, 26 April 2012 16:44
kir (Senior Member, 1645 messages, registered August 2005, Moscow, Russia)

Rene,

For simfs, the diskspace parameter is basically a quota limit for the given CT. For ploop, diskspace becomes the size of the ploop device, so it can't be set arbitrarily high.

So, before trying to convert, set diskspace to a sensible value, for example:

vzctl set 5000 --diskspace 10G --save

or something similar. Then convert.

I have modified vzctl to check for an excessively high diskspace value and emit an error in that case.
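
For completeness, the whole sequence for Rene's CT 100 would look roughly like this (assuming the stock /etc/vz/conf config location):

$ grep -i diskspace /etc/vz/conf/100.conf    # check the current limit; "unlimited" is the problem
$ vzctl set 100 --diskspace 10G --save       # cap it at a realistic size first
$ vzctl convert 100 --layout ploop           # the image is now created with a sane size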


Kir Kolyshkin