vzdump snapshot problem and workaround for it [message #11797]
Fri, 06 April 2007 20:09
oleka
Messages: 4 Registered: April 2007
Junior Member
Hi everyone,
I'd just like to share my experience using vzdump with snapshot backups.
My setup:
OS: Fedora Core 5
vzdump version: 0.4-1 on FC5 (vzdump-0.4-1.noarch.rpm)
First of all, I had to update the lvm2 package on FC5 to lvm2-2.02.17-1.fc5 (yum -y update lvm2).
Doing backup with the following command:
# vzdump --compress --dumpdir /backup/101 --snapshot 101
has given me this output:
starting backup for VPS 101 (/vz/private/101) Fri Apr 6 15:23:41 2007
creating lvm snapshot of /dev/mapper/VolGroup00-VZ ('/dev/VolGroup00/vzsnap')
Rounding up size to full physical extent 512.00 MB
Logical volume "vzsnap" created
mounting lvm snapshot
Use of uninitialized value in concatenation (.) or string at /usr/bin/vzdump line 463.
Use of uninitialized value in concatenation (.) or string at /usr/bin/vzdump line 465.
Logical volume "vzsnap" successfully removed
creating backup for VPS 101 failed (0.02 minutes): wrong lvm mount point '' at /usr/bin/vzdump line 465.
Looking into the vzdump Perl script, I could not understand the logic of those lines, 463 and 465 (the variable $lvmpath cannot get any value by the script's design).
Commenting them out fixed the problem. Here is the output I got after the "fix":
starting backup for VPS 101 (/vz/private/101) Fri Apr 6 15:56:47 2007
creating lvm snapshot of /dev/mapper/VolGroup00-VZ ('/dev/VolGroup00/vzsnap')
Rounding up size to full physical extent 512.00 MB
Logical volume "vzsnap" created
mounting lvm snapshot
Creating archive '/backup/101/vzdump-101.tgz' (/vz/private/101)
tar: ./var/run/dbus/system_bus_socket: socket ignored
tar: ./var/lib/mysql/mysql.sock: socket ignored
tar: ./dev/log: socket ignored
Total bytes written: 365987840 (350MiB, 7.5MiB/s)
Logical volume "vzsnap" successfully removed
backup for VPS 101 finished successful (0.78 minutes)
I hope this little trick helps.
Re: vzdump snapshot problem and workaround for it [message #11929 is a reply to message #11926]
Thu, 12 April 2007 06:23
jarcher
Messages: 91 Registered: August 2006 Location: Smithfield, Rhode Island
Member
I did some more checking, and have this to add:
I think I found the trouble, which is in the get_device function. Now, just as a warning, my Perl is really, really rusty. I wrote some many years ago and quickly decided to stick to C/C++. But I'll do my best to provide helpful information to the developer of vzdump.
My VPS private areas live on an LVM2 LV, which is ultimately on an iSCSI SAN.
I run this:
actual:/home/jim# vzdump --dumpdir /var/local/vps-bu --snapshot 108
As soon as get_device is entered, $dir is set to:
/mnt/compvps/vz/private/108
So running df against the private area for the VPS results in:
Filesystem 1K-blocks Used Available Use% Mounted on
/dev/mapper/actual--vg001-vps--storage
100756920 6463816 89174948 7% /mnt/compvps
So of course $out becomes:
/dev/mapper/actual--vg001-vps--storage
Parsing $out with the specified regex results in:
res[0]: /dev/mapper/actual--vg001-vps--storage
res[5]:
At this point, I don't follow this:
($vg, $lv) = @{$devmapper->{$dev}} if defined $devmapper->{$dev};
I see $devmapper is defined as a global, but I can't find where an entry for $dev would be put into it, so I think what is happening is that since $devmapper->{$dev} is undefined, $vg and $lv are never set, hence the errors I see reported.
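One likely culprit for the empty res[5], and this is just my guess: df wraps long device names onto a second line (exactly as in my output above), so a single-line read after the header only ever sees the device, never the mount point. The POSIX -P flag keeps each filesystem on one line. A minimal sketch, using / instead of the VPS path so anyone can try it:

```shell
# df without -P wraps long device names onto a second line; with -P
# (POSIX format) device and mount point stay on one line, so after
# splitting on whitespace, field 1 is the device and field 6 the mount point.
dev=$(df -P / | tail -n 1 | awk '{print $1}')
mp=$(df -P / | tail -n 1 | awk '{print $6}')
echo "device=$dev mountpoint=$mp"
```

If get_device ran `df -P` (or kept reading lines until it had six fields), res[5] would hold the mount point even for long /dev/mapper names.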
Also, I added a few print statements and I see that the get_device function is being called 3 times, at least on my system.
I hope this helps someone.
sub get_device {
my $dir = shift;
open (TMP, "df '$dir'|");
<TMP>; #skip first line
my $out = <TMP>;
close (TMP);
my @res = split (/\s+/, $out);
my $dev = $res[0];
my $mp = $res[5];
my ($vg, $lv);
($vg, $lv) = @{$devmapper->{$dev}} if defined $devmapper->{$dev};
return wantarray ? ($dev, $mp, $vg, $lv) : $dev;
}
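For what it's worth, the VG/LV pair can also be recovered from the device-mapper name itself, since dm joins "VG-LV" with a single dash and escapes real dashes by doubling them. A sketch, not the vzdump code; the sample name is the one from my df output:

```shell
# /dev/mapper names encode VG and LV as "VG-LV" with literal dashes
# doubled: actual-vg001 + vps-storage becomes actual--vg001-vps--storage.
dev="/dev/mapper/actual--vg001-vps--storage"
name="${dev#/dev/mapper/}"

# Split at the single (non-doubled) dash separating VG from LV,
# then collapse the doubled dashes back to single ones.
pair=$(printf '%s\n' "$name" | sed 's/\([^-]\)-\([^-]\)/\1 \2/')
set -- $pair
vg=$(printf '%s\n' "$1" | sed 's/--/-/g')
lv=$(printf '%s\n' "$2" | sed 's/--/-/g')
echo "$vg $lv"   # actual-vg001 vps-storage
```

Presumably that is the kind of mapping $devmapper is supposed to hold for the lookup in get_device; on my system the entry for this device just never gets populated.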
Re: vzdump snapshot problem and workaround for it [message #12016 is a reply to message #11958]
Sat, 14 April 2007 20:14
jarcher
Messages: 91 Registered: August 2006 Location: Smithfield, Rhode Island
Member
Thanks very much!
I’ll set the /dev// issue aside for right now and write about the PE allocation in the volume group. As you thought, my VG has no more capacity:
prod03:~# vgdisplay
--- Volume group ---
VG Name prod03-vg001
System ID
Format lvm2
Metadata Areas 2
Metadata Sequence No 4
VG Access read/write
VG Status resizable
MAX LV 0
Cur LV 1
Open LV 1
Max PV 0
Cur PV 2
Act PV 2
VG Size 195.24 GB
PE Size 4.00 MB
Total PE 49982
Alloc PE / Size 49982 / 195.24 GB
Free PE / Size 0 / 0
VG UUID r9QqPU-rRQr-jIzr-U0Pa-bBle-2DYJ-fHcifB
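As a sanity check on these numbers (plain arithmetic, not an LVM command): the VG size should be Total PE times PE size, and with every extent allocated there is nothing left for a snapshot:

```shell
total_pe=49982   # Total PE from the vgdisplay output above
alloc_pe=49982   # Alloc PE
pe_mb=4          # PE Size in MB

free_pe=$(( total_pe - alloc_pe ))
vg_mb=$(( total_pe * pe_mb ))
echo "VG size: ${vg_mb} MB, free PEs: ${free_pe}"   # 199928 MB (~195.24 GB), 0 free
```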
If I understand correctly from investigating this, I can run:
# pvchange -s 8m
And this will increase the size of each extent, adding capacity, but not actual storage space, to the VG. To get more storage space I would actually have to add an additional PV to the VG, correct?
There does not seem to be a command to increase the number of extents, just the size of each extent. Is that correct?
If I don’t add another PV, will I be able to create another LV in the VG? If so, where would the data live?
Re: vzdump snapshot problem and workaround for it [message #12094 is a reply to message #12016]
Tue, 17 April 2007 15:52
oleka
Messages: 4 Registered: April 2007
Junior Member
Hi there,
Quote:
If I understand correctly from investigating this, I can run:
# pvchange -s 8m
And this will increase the size of each extent, adding capacity, but not actual storage space, to the VG. To get more storage space I would actually have to add an additional PV to the VG, correct?
There does not seem to be a command to increase the number of extents, just the size of each extent. Is that correct?
If I don’t add another PV, will I be able to create another LV in the VG? If so, where would the data live?
Just a note: fiddling with logical volumes (e.g. changing partition sizes) can be very dangerous, and you have to be absolutely sure of what you're doing (there are plenty of manuals and how-tos on the Internet). In your case I highly recommend repartitioning your HN. More specifically, create a separate partition for the VZ VPSs and do not allocate all of your disk space (leave, say, several gigs unoccupied). This will give you the needed free PEs.
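To put a number on it (a sketch, with the sizes taken from the outputs earlier in this thread): vzdump's snapshot was rounded up to 512 MB, and with 4 MB extents that means the VG needs at least this many free PEs:

```shell
snap_mb=512   # snapshot size vzdump created in the first message
pe_mb=4       # PE Size reported by vgdisplay
needed_pe=$(( snap_mb / pe_mb ))
echo "need at least ${needed_pe} free PEs for the snapshot"   # 128
```

So leaving even a few hundred MB of the VG unallocated (or adding a spare PV with vgextend) is enough headroom for vzdump's snapshot.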
Re: vzdump snapshot problem and workaround for it [message #12130 is a reply to message #12094]
Wed, 18 April 2007 06:37
jarcher
Messages: 91 Registered: August 2006 Location: Smithfield, Rhode Island
Member
Thanks for the suggestion. Actually, my OpenVZ install lives on a partition, but the VPSs all live on an iSCSI-attached SAN. I am using OpenFiler, which is itself based on LVM. So I have the extra little confusion that an LV on the SAN looks like a physical device on the initiator (client).
I have learned to expand my VGs by adding PVs to them and how to expand the file system while it is mounted. I also learned how to shrink a file system, which was big fun.
I have not tried it on my OpenVZ machine yet, as I am waiting for a second machine to arrive for redundancy. Once I have the ability to migrate my VPSs around I'll have more flexibility.
Thanks for your help. I'll update the thread if I get a resolution.