I had a look at the filesystems with df:
merc conf # df -iT
Filesystem Type Inodes IUsed IFree IUse% Mounted on
/dev/root reiserfs 0 0 0 - /
udev tmpfs 223476 2764 220712 2% /dev
none tmpfs 223476 1 223475 1% /dev/shm
none tmpfs 223476 84 223392 1% /lib/rc/init.d
/dev/sda6 ext3 8880128 30520 8849608 1% /vz
/dev/sda5 reiserfs 0 0 0 - /tmp
merc conf # df -kT
Filesystem Type 1K-blocks Used Available Use% Mounted on
/dev/root reiserfs 78772312 11702364 67069948 15% /
udev tmpfs 10240 116 10124 2% /dev
none tmpfs 1031592 0 1031592 0% /dev/shm
none tmpfs 1024 88 936 9% /lib/rc/init.d
/dev/sda6 ext3 69804244 683740 65574560 2% /vz
/dev/sda5 reiserfs 4048192 32840 4015352 1% /tmp
and found that the values returned by vzsplit were pretty far off, possibly due to my mix of reiserfs and ext3 (which probably also explains the "0:0" values on my test box, which is all reiserfs — note that df reports 0 inodes for reiserfs, since it allocates inodes dynamically).
So I pushed them up quite a bit: in my case, roughly n = Available / 20 for both blocks and inodes as the hard limit, and a tad lower for the soft limit.
DISKSPACE="2000000:3000000"
DISKINODES="400000:420000"
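The rule of thumb above can be sketched as a small shell snippet. This is just my own illustration, not anything vzsplit does; the function name and the 90% soft/hard ratio are my assumptions ("a tad lower"):

```shell
#!/bin/sh
# limits_for_avail: given the "Available" column from df -k (1K-blocks),
# print a soft:hard limit pair using the Available/20 rule of thumb.
limits_for_avail() {
    avail_kb=$1
    hard=$((avail_kb / 20))
    soft=$((hard * 9 / 10))   # soft limit a tad lower than hard (assumed 90%)
    printf '%s:%s\n' "$soft" "$hard"
}

# Example: Available on /dev/sda6 (/vz) from the df -kT output above
limits_for_avail 65574560
```

Round the results to taste; I just picked convenient round numbers near what the formula gave me.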
VEs create just fine now. I know reiserfs isn't recommended for OpenVZ and I don't use it for my VEs, but I think the reason for not recommending it is a bit outdated. The problem is its way of dealing with inodes, rather than it being immature or unstable. Reiserfs 3.6 has been around for quite some time now, and I have used it for many years without any problems ;-)