Re: [RFC] Virtualization steps [message #2334 is a reply to message #2322] Wed, 29 March 2006 13:47
Herbert Poetzl
On Wed, Mar 29, 2006 at 05:39:00AM +0400, Kirill Korotaev wrote:
> Nick,
>
> >>First of all, what it does which low level virtualization can't:
> >>- it allows to run 100 containers on 1GB RAM
> >> (it is called containers, VE - Virtual Environments,
> >> VPS - Virtual Private Servers).
> >>- it has no much overhead (<1-2%), which is unavoidable with hardware
> >> virtualization. For example, Xen has >20% overhead on disk I/O.
> >
> >Are any future hardware solutions likely to improve these problems?
> Probably you are aware of VT-i/VT-x technologies and planned virtualized
> MMU and I/O MMU from Intel and AMD.
> These features should improve the performance somehow, but there is
> still a limit for decreasing the overhead, since at least disk, network,
> video and such devices should be emulated.
>
> >>OS kernel virtualization
> >>~~~~~~~~~~~~~~~~~~~~~~~~
> >
> >Is this considered secure enough that multiple untrusted VEs are run
> >on production systems?
> it is secure enough. What makes it secure? In general:
> - virtualization, which makes resources private
> - resource control, which limits a VE to its allowed usage
> In more technical detail, virtualization projects make user access (and
> capability) checks stricter. Moreover, OpenVZ uses a "denied by
> default" approach to make sure it is secure and VE users are not allowed
> anything else.
>
> Also, about 2-3 months ago we had a security review of the OpenVZ project
> done by Solar Designer. So, in general, such a virtualization approach
> should be no less secure than a VM-based one. VM core code is bigger and
> there are plenty of chances for bugs there.
>
> >What kind of users want this, who can't use alternatives like real
> >VMs?
> Many companies; we just can't share their names. But in general,
> enterprise and hosting companies don't need to run different OSes on the
> same machine. For them it is quite natural to use N machines for Linux
> and M for Windows. And since VEs are much more lightweight and easier to
> work with, they like them very much.
>
> Just for example, OpenVZ core is running more than 300,000 VEs worldwide.

not bad, how did you get to those numbers?
and, more important, how many of those are actually OpenVZ?
(compared to Virtuozzo(tm))

best,
Herbert

> Thanks,
> Kirill
Re: Re: [RFC] Virtualization steps [message #2336 is a reply to message #2333] Wed, 29 March 2006 14:47
dev

>> I wonder what is the value of it if it doesn't do guarantees or QoS?
>> In our experiments with it we failed to observe any fairness.
>
> probably a misconfiguration on your side ...
maybe you can provide some instructions on which kernel version to use
and how to set up the following scenario:
2CPU box. 3 VPSs which should run with 1:2:3 ratio of CPU usage.

> well, do you have numbers?
just run the above scenario with one busy loop inside each VPS. I was
not able to observe a 1:2:3 CPU distribution. Other scenarios also didn't
show me any fairness. The results varied: sometimes 1:1:2, sometimes
something else.

Thanks,
Kirill
Re: Re: [RFC] Virtualization steps [message #2339 is a reply to message #2336] Wed, 29 March 2006 17:29
Herbert Poetzl
On Wed, Mar 29, 2006 at 06:47:58PM +0400, Kirill Korotaev wrote:
> >>I wonder what is the value of it if it doesn't do guarantees or QoS?
> >>In our experiments with it we failed to observe any fairness.
> >
> >probably a misconfiguration on your side ...
> maybe you can provide some instructions on which kernel version to use
> and how to set up the following scenario: 2CPU box. 3 VPSs which should
> run with 1:2:3 ratio of CPU usage.

that is quite simple: you enable the Hard CPU Scheduler
and select the Idle Time Skip, then you set the following
token bucket values depending on what you mean by
'should run with 1:2:3 ratio of CPU usage':

a) a guaranteed maximum of 16.7%, 33.3% and 50.0%

b) a fair sharing according to 1:2:3

c) a guaranteed minimum of 16.7%, 33.3% and 50.0%
with a fair sharing of 1:2:3 for the rest ...


for all cases you would set:
(adjust according to your reserve/boost preferences)

VPS1,2,3: tokens_min = 50, tokens_max = 500
interval = interval2 = 6

a) VPS1: rate = 1, hard, noidleskip
VPS2: rate = 2, hard, noidleskip
VPS3: rate = 3, hard, noidleskip

b) VPS1: rate2 = 1, hard, idleskip
VPS2: rate2 = 2, hard, idleskip
VPS3: rate2 = 3, hard, idleskip

c) VPS1: rate = rate2 = 1, hard, idleskip
VPS2: rate = rate2 = 2, hard, idleskip
VPS3: rate = rate2 = 3, hard, idleskip

of course, adjusting rate/interval while keeping
the ratio might help you, depending on the guest load
(i.e. more batch-type load or more interactive stuff)

of course, you can make those adjustments per CPU, so if
you want, for example, to assign one CPU to the third
guest, you can do that easily too ...
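
to make case (a) concrete, a rough sketch with the vsched/vattribute
tools (the xids are made up, and the tokens_min/tokens_max options are
left out here since how they are passed depends on the util-vserver
version):

# case (a): hard limits in a 1:2:3 ratio, idle time skip left off
vsched --xid 101 --fill-rate 1 --interval 6    # VPS1: capped at ~16.7% of one CPU
vsched --xid 102 --fill-rate 2 --interval 6    # VPS2: capped at ~33.3%
vsched --xid 103 --fill-rate 3 --interval 6    # VPS3: capped at ~50.0%
vattribute --xid 101 --flag sched_hard         # make the limit a hard one
vattribute --xid 102 --flag sched_hard
vattribute --xid 103 --flag sched_hard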

> >well, do you have numbers?
> just run the above scenario with one busy loop inside each VPS. I was
> not able to observe a 1:2:3 CPU distribution. Other scenarios also didn't
> show me any fairness. The results varied: sometimes 1:1:2, sometimes
> something else.

what was your setup?

best,
Herbert

> Thanks,
> Kirill
Re: [RFC] Virtualization steps [message #2340 is a reply to message #2283] Wed, 29 March 2006 20:30
Dave Hansen
On Tue, 2006-03-28 at 12:51 +0400, Kirill Korotaev wrote:
> Eric, we have a GIT repo on openvz.org already:
> http://git.openvz.org

Git is great for getting patches and lots of updates out, but I'm not
sure it is ideal for what we're trying to do. We'll need things reviewed
at each step, especially because we're going to be touching so much
common code.

I'd guess a set of quilt (or patch-utils) patches is probably best,
especially if we're trying to get stuff into -mm first.

-- Dave
Re: [RFC] Virtualization steps [message #2343 is a reply to message #2340] Wed, 29 March 2006 20:47
ebiederm
Dave Hansen <haveblue@us.ibm.com> writes:

> On Tue, 2006-03-28 at 12:51 +0400, Kirill Korotaev wrote:
>> Eric, we have a GIT repo on openvz.org already:
>> http://git.openvz.org
>
> Git is great for getting patches and lots of updates out, but I'm not
> sure it is ideal for what we're trying to do. We'll need things reviewed
> at each step, especially because we're going to be touching so much
> common code.
>
> I'd guess a set of quilt (or patch-utils) patches is probably best,
> especially if we're trying to get stuff into -mm first.

Git is as good at holding patches as quilt. It isn't quite as
good at working with them as quilt but in the long term that is
fixable.

The important point is that we get a collection of patches that
we can all agree to, and that we publish it.

At this point it sounds like each group will happily publish the
patches, and that might not be a bad double-check on agreement.

Then we have someone send them to Andrew. Or we have a quilt or
a git tree that Andrew knows he can pull from.

But we do need lots of review so distribution to Andrew and the other
kernel developers as plain patches appears to be the healthy choice.
I'm going to go bury my head in the sand and finish my OLS paper now.


Eric
Re: Re: [RFC] Virtualization steps [message #2346 is a reply to message #2336] Wed, 29 March 2006 21:37
Sam Vilain
On Wed, 2006-03-29 at 18:47 +0400, Kirill Korotaev wrote:
> >> I wonder what is the value of it if it doesn't do guarantees or QoS?
> >> In our experiments with it we failed to observe any fairness.
> >
> > probably a misconfiguration on your side ...
> maybe you can provide some instructions on which kernel version to use
> and how to set up the following scenario:
> 2CPU box. 3 VPSs which should run with 1:2:3 ratio of CPU usage.

Ok, I'll call those three VPSes fast, faster and fastest.

"fast" : fill rate 1, interval 3
"faster" : fill rate 2, interval 3
"fastest" : fill rate 3, interval 3

That all adds up to a fill rate of 6 with an interval of 3, but that is
right because with two processors you have 2 tokens to allocate per
jiffie. Also set the bucket size to something of the order of HZ.

You can watch the processes within each vserver's priority jump up and
down with `vtop' during testing. Also you should be able to watch the
vserver's bucket fill and empty in /proc/virtual/XXX/sched (IIRC)

> > well, do you have numbers?
> just run the above scenario with one busy loop inside each VPS. I was
> not able to observe a 1:2:3 CPU distribution. Other scenarios also didn't
> show me any fairness. The results varied: sometimes 1:1:2, sometimes
> something else.

I mentioned this earlier, but for the sake of the archives I'll repeat -
if you are running with any of the buckets on empty, the scheduler is
imbalanced and therefore not going to provide the exact distribution you
asked for.

However with a single busy loop in each vserver I'd expect the above to
yield roughly 100% for fastest, 66% for faster and 33% for fast, within
5 seconds or so of starting those processes (assuming you set a bucket
size of HZ).
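
As a rough cross-check of those numbers (just the arithmetic, not output
from a real run):

#   total fill rate = 1 + 2 + 3 = 6 tokens per 3 jiffies = 2 tokens/jiffie
#   "fast"    : 1/6 of 2 CPUs = ~33% of one CPU
#   "faster"  : 2/6 of 2 CPUs = ~66% of one CPU
#   "fastest" : 3/6 of 2 CPUs = ~100% of one CPU
# watching a guest's bucket fill and drain (path as remembered above;
# it may differ between kernel/patch versions):
watch -n1 cat /proc/virtual/XXX/sched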

Sam.
Re: [RFC] Virtualization steps [message #2348 is a reply to message #2340] Wed, 29 March 2006 22:44
Sam Vilain
Dave Hansen wrote:

>On Tue, 2006-03-28 at 12:51 +0400, Kirill Korotaev wrote:
>
>
>>Eric, we have a GIT repo on openvz.org already:
>>http://git.openvz.org
>>
>>
>
>Git is great for getting patches and lots of updates out, but I'm not
>sure it is ideal for what we're trying to do. We'll need things reviewed
>at each step, especially because we're going to be touching so much
>common code.
>
>I'd guess a set of quilt (or patch-utils) patches is probably best,
>especially if we're trying to get stuff into -mm first.
>
>

The apparent problem is that the git commit history on a branch cannot
be unwound. However, that is fine - just make another branch and put
your new sequence of commits there.

Tools exist that allow you to wind and unwind the commit history
arbitrarily to revise patches before they are published on a branch that
you don't want to just delete. For instance:

stacked git

http://www.procode.org/stgit/

or patchy git

http://www.spearce.org/2006/02/pg-version-0111-released.html

are examples of such tools.

I recommend starting with stacked git, it really is nice.

Sam.
Re: [RFC] Virtualization steps [message #2358 is a reply to message #2340] Thu, 30 March 2006 13:51
dev

ok. This is also easier for us, as it is the usual way of doing things in
OpenVZ. We'll see...

> On Tue, 2006-03-28 at 12:51 +0400, Kirill Korotaev wrote:
>> Eric, we have a GIT repo on openvz.org already:
>> http://git.openvz.org
>
> Git is great for getting patches and lots of updates out, but I'm not
> sure it is ideal for what we're trying to do. We'll need things reviewed
> at each step, especially because we're going to be touching so much
> common code.
>
> I'd guess a set of quilt (or patch-utils) patches is probably best,
> especially if we're trying to get stuff into -mm first.
>
> -- Dave
>
>
Re: [RFC] Virtualization steps [message #2364 is a reply to message #2314] Wed, 29 March 2006 20:56
Bill Davidsen
Sam Vilain wrote:
> On Tue, 2006-03-28 at 09:41 -0500, Bill Davidsen wrote:
>>> It is more than realistic. Hosting companies run more than 100 VPSs in
>>> reality. There are also other useful scenarios. For example, I know of
>>> universities which run a VPS for every faculty web site, for every
>>> department, mail server and so on. Why do you think they want to run
>>> only 5 VMs on one machine? Much more!
>> I made no comment on what "they" might want; I want to make the rack of
>> underutilized Windows, BSD and Solaris servers go away. An approach
>> which doesn't support unmodified guest installs doesn't solve any of my
>> current problems. I didn't say it was in any way not useful, just not of
>> interest to me. What needs I have for Linux environments are answered by
>> jails and/or UML.
>
> We are talking about adding jail technology, also known as containers on
> Solaris and vserver/openvz on Linux, to the mainline kernel.
>
> So, you are obviously interested!
>
> Because of course, you can take an unmodified filesystem of the guest
> and, assuming the kernels are compatible, run it without changes. I
> find this consolidation approach indispensable.
>
The only way to assume kernels are compatible is to run the same distro.
Vendor kernels are certainly not compatible; even running a
kernel.org kernel on Fedora (for instance) reveals that the utilities are
also tweaked to expect the kernel changes, and you wind up with a system
which feels like wearing someone else's hat. It's stable, but little
things just don't work right.

--
-bill davidsen (davidsen@tmr.com)
"The secret to procrastination is to put things off until the
last possible moment - but no longer" -me
Re: [RFC] Virtualization steps [message #2404 is a reply to message #2294] Wed, 29 March 2006 21:37
Bill Davidsen
Herbert Poetzl wrote:

>>> Summary of previous discussions on LKML
>>> ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
>> Have there been any discussions between the groups pushing this
>> virtualization, and ...
>
> yes, the discussions are ongoing ... maybe to clarify the
> situation for the folks not involved (projects in
> alphabetical order):
>
Thank you! Nice to have a scorecard.
--
-bill davidsen (davidsen@tmr.com)
"The secret to procrastination is to put things off until the
last possible moment - but no longer" -me
Re: Re: [RFC] Virtualization steps [message #2626 is a reply to message #2346] Wed, 12 April 2006 08:22
dev

Sam,

> Ok, I'll call those three VPSes fast, faster and fastest.
>
> "fast" : fill rate 1, interval 3
> "faster" : fill rate 2, interval 3
> "fastest" : fill rate 3, interval 3
>
> That all adds up to a fill rate of 6 with an interval of 3, but that is
> right because with two processors you have 2 tokens to allocate per
> jiffie. Also set the bucket size to something of the order of HZ.
>
> You can watch the processes within each vserver's priority jump up and
> down with `vtop' during testing. Also you should be able to watch the
> vserver's bucket fill and empty in /proc/virtual/XXX/sched (IIRC)
>
> I mentioned this earlier, but for the sake of the archives I'll repeat -
> if you are running with any of the buckets on empty, the scheduler is
> imbalanced and therefore not going to provide the exact distribution you
> asked for.
>
> However with a single busy loop in each vserver I'd expect the above to
> yield roughly 100% for fastest, 66% for faster and 33% for fast, within
> 5 seconds or so of starting those processes (assuming you set a bucket
> size of HZ).

Sam, what we observe is the following situation: the Linux CPU scheduler
spreads 2 tasks onto the 1st CPU and 1 task onto the 2nd CPU. The standard
Linux scheduler doesn't do any rebalancing after that, so no playing with
tokens makes the spread 3:2:1, since the lowest-priority process gets the
full 2nd CPU (100% instead of 33% of a CPU).

Where is my mistake? Can you provide a configuration where we could test,
or instructions on how to avoid this?

Thanks,
Kirill
Re: Re: [RFC] Virtualization steps [message #2636 is a reply to message #2626] Thu, 13 April 2006 01:05
Herbert Poetzl
On Wed, Apr 12, 2006 at 12:28:56PM +0400, Kirill Korotaev wrote:
> Sam,
>
> >Ok, I'll call those three VPSes fast, faster and fastest.
> >
> >"fast" : fill rate 1, interval 3
> >"faster" : fill rate 2, interval 3
> >"fastest" : fill rate 3, interval 3
> >
> >That all adds up to a fill rate of 6 with an interval of 3, but that is
> >right because with two processors you have 2 tokens to allocate per
> >jiffie. Also set the bucket size to something of the order of HZ.
> >
> >You can watch the processes within each vserver's priority jump up and
> >down with `vtop' during testing. Also you should be able to watch the
> >vserver's bucket fill and empty in /proc/virtual/XXX/sched (IIRC)
> >
> >I mentioned this earlier, but for the sake of the archives I'll repeat -
> >if you are running with any of the buckets on empty, the scheduler is
> >imbalanced and therefore not going to provide the exact distribution you
> >asked for.
> >
> >However with a single busy loop in each vserver I'd expect the above to
> >yield roughly 100% for fastest, 66% for faster and 33% for fast, within
> >5 seconds or so of starting those processes (assuming you set a bucket
> >size of HZ).
>
> Sam, what we observe is the following situation: the Linux CPU scheduler
> spreads 2 tasks onto the 1st CPU and 1 task onto the 2nd CPU. The standard
> Linux scheduler doesn't do any rebalancing after that, so no playing with
> tokens makes the spread 3:2:1, since the lowest-priority process gets the
> full 2nd CPU (100% instead of 33% of a CPU).
>
> Where is my mistake? Can you provide a configuration where we could test,
> or instructions on how to avoid this?

well, your mistake seems to be that you probably haven't
tested this yet, because with the following (simple)
setups I seem to get what you consider impossible
(of course, not as precise as your scheduler does it)


vcontext --create --xid 100 ./cpuhog -n 1 100 &
vcontext --create --xid 200 ./cpuhog -n 1 200 &
vcontext --create --xid 300 ./cpuhog -n 1 300 &
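
(cpuhog itself is not included here; it is assumed to be nothing more than
a simple busy loop, so if you don't have it, a plain shell loop per context
should work as a stand-in:)

# hypothetical busy-loop stand-in for ./cpuhog (the -- ends vcontext's own options)
vcontext --create --xid 100 -- sh -c 'while :; do :; done' &
vcontext --create --xid 200 -- sh -c 'while :; do :; done' &
vcontext --create --xid 300 -- sh -c 'while :; do :; done' &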

vsched --xid 100 --fill-rate 1 --interval 6
vsched --xid 200 --fill-rate 2 --interval 6
vsched --xid 300 --fill-rate 3 --interval 6

vattribute --xid 100 --flag sched_hard
vattribute --xid 200 --flag sched_hard
vattribute --xid 300 --flag sched_hard


PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
39 root 25 0 1304 248 200 R 74 0.1 0:46.16 ./cpuhog -n 1 300
38 root 25 0 1308 252 200 H 53 0.1 0:34.06 ./cpuhog -n 1 200
37 root 25 0 1308 252 200 H 28 0.1 0:19.53 ./cpuhog -n 1 100
46 root 0 0 1804 912 736 R 1 0.4 0:02.14 top -cid 20

and here the other way round:

vsched --xid 100 --fill-rate 3 --interval 6
vsched --xid 200 --fill-rate 2 --interval 6
vsched --xid 300 --fill-rate 1 --interval 6

PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
36 root 25 0 1304 248 200 R 75 0.1 0:58.41 ./cpuhog -n 1 100
37 root 25 0 1308 252 200 H 54 0.1 0:42.77 ./cpuhog -n 1 200
38 root 25 0 1308 252 200 R 29 0.1 0:25.30 ./cpuhog -n 1 300
45 root 0 0 1804 912 736 R 1 0.4 0:02.26 top -cid 20


note that this was done on a virtual dual cpu
machine (QEMU 8.0) with 2.6.16-vs2.1.1-rc16 and
that there were roughly 25% idle time, which I'm
unable to explain atm ...

feel free to jump on that fact, but I consider
it unimportant for now ...

best,
Herbert

> Thanks,
> Kirill
Re: Re: [RFC] Virtualization steps [message #2642 is a reply to message #2636] Thu, 13 April 2006 06:45
Kirill Korotaev
Herbert,

Thanks a lot for the details, I will give it a try once again. Looks
like fairness in this scenario simply requires sched_hard settings.

Herbert... I don't know why you've decided that my goal is to prove that
your scheduler is bad or not precise. My goal is simply to investigate
different approaches and make some measurements. I suppose you can
benefit from such a volunteer, don't you think so? Anyway, thanks again,
and don't fixate on the idea that the OpenVZ folks are such cruel bad guys :)

Thanks,
Kirill

> well, your mistake seems to be that you probably haven't
> tested this yet, because with the following (simple)
> setups I seem to get what you consider impossible
> (of course, not as precise as your scheduler does it)
>
>
> vcontext --create --xid 100 ./cpuhog -n 1 100 &
> vcontext --create --xid 200 ./cpuhog -n 1 200 &
> vcontext --create --xid 300 ./cpuhog -n 1 300 &
>
> vsched --xid 100 --fill-rate 1 --interval 6
> vsched --xid 200 --fill-rate 2 --interval 6
> vsched --xid 300 --fill-rate 3 --interval 6
>
> vattribute --xid 100 --flag sched_hard
> vattribute --xid 200 --flag sched_hard
> vattribute --xid 300 --flag sched_hard
>
>
> PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
> 39 root 25 0 1304 248 200 R 74 0.1 0:46.16 ./cpuhog -n 1 300
> 38 root 25 0 1308 252 200 H 53 0.1 0:34.06 ./cpuhog -n 1 200
> 37 root 25 0 1308 252 200 H 28 0.1 0:19.53 ./cpuhog -n 1 100
> 46 root 0 0 1804 912 736 R 1 0.4 0:02.14 top -cid 20
>
> and here the other way round:
>
> vsched --xid 100 --fill-rate 3 --interval 6
> vsched --xid 200 --fill-rate 2 --interval 6
> vsched --xid 300 --fill-rate 1 --interval 6
>
> PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
> 36 root 25 0 1304 248 200 R 75 0.1 0:58.41 ./cpuhog -n 1 100
> 37 root 25 0 1308 252 200 H 54 0.1 0:42.77 ./cpuhog -n 1 200
> 38 root 25 0 1308 252 200 R 29 0.1 0:25.30 ./cpuhog -n 1 300
> 45 root 0 0 1804 912 736 R 1 0.4 0:02.26 top -cid 20
>
>
> note that this was done on a virtual dual cpu
> machine (QEMU 8.0) with 2.6.16-vs2.1.1-rc16 and
> that there were roughly 25% idle time, which I'm
> unable to explain atm ...
>
> feel free to jump on that fact, but I consider
> it unimportant for now ...
>
> best,
> Herbert
>
>
>>Thanks,
>>Kirill
>
>
Re: Re: [RFC] Virtualization steps [message #2651 is a reply to message #2642] Thu, 13 April 2006 13:42
Herbert Poetzl
On Thu, Apr 13, 2006 at 10:52:19AM +0400, Kirill Korotaev wrote:
> Herbert,
>
> Thanks a lot for the details, I will give it a try once again. Looks
> like fairness in this scenario simply requires sched_hard settings.

hmm, not precisely; it's a cpu limit you described,
and that is what this configuration does. for fair
scheduling you need to activate the idle time skip and
configure it in a similar way ...

> Herbert... I don't know why you've decided that my goal is to prove
> that your scheduler is bad or not precise. My goal is simply to
> investigate different approaches and make some measurements.

fair enough ...

> I suppose you can benefit from such a volunteer, don't you think so?

well, if the 'results' and 'methods' will be made
public, I can, until now all I got was something
along the lines:

"Linux-VServer is not stable! WE (swsoft?) have
a secret but essential test suite running two
weeks to confirm that OUR kernels ARE stable,
and Linux-VServer will never pass those tests,
but of course, we can't tell you what kind of
tests or what results we got"

which doesn't help me anything and which, to be
honest, does not sound very friendly either ...

> Anyway, thanks again, and don't fixate on the idea that the OpenVZ folks
> are such cruel bad guys :)

but what about the Virtuozzo(tm) guys? :)
I'm really trying not to generalize here ...

best,
Herbert

> Thanks,
> Kirill
>
> >well, your mistake seems to be that you probably haven't
> >tested this yet, because with the following (simple)
> >setups I seem to get what you consider impossible
> >(of course, not as precise as your scheduler does it)
> >
> >
> >vcontext --create --xid 100 ./cpuhog -n 1 100 &
> >vcontext --create --xid 200 ./cpuhog -n 1 200 &
> >vcontext --create --xid 300 ./cpuhog -n 1 300 &
> >
> >vsched --xid 100 --fill-rate 1 --interval 6
> >vsched --xid 200 --fill-rate 2 --interval 6
> >vsched --xid 300 --fill-rate 3 --interval 6
> >
> >vattribute --xid 100 --flag sched_hard
> >vattribute --xid 200 --flag sched_hard
> >vattribute --xid 300 --flag sched_hard
> >
> >
> > PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
> > 39 root 25 0 1304 248 200 R 74 0.1 0:46.16 ./cpuhog -n 1 300
> > 38 root 25 0 1308 252 200 H 53 0.1 0:34.06 ./cpuhog -n 1 200
> > 37 root 25 0 1308 252 200 H 28 0.1 0:19.53 ./cpuhog -n 1 100
> > 46 root 0 0 1804 912 736 R 1 0.4 0:02.14 top -cid 20
> >and here the other way round:
> >
> >vsched --xid 100 --fill-rate 3 --interval 6
> >vsched --xid 200 --fill-rate 2 --interval 6
> >vsched --xid 300 --fill-rate 1 --interval 6
> >
> > PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
> > 36 root 25 0 1304 248 200 R 75 0.1 0:58.41 ./cpuhog -n 1 100
> > 37 root 25 0 1308 252 200 H 54 0.1 0:42.77 ./cpuhog -n 1 200
> > 38 root 25 0 1308 252 200 R 29 0.1 0:25.30 ./cpuhog -n 1 300
> > 45 root 0 0 1804 912 736 R 1 0.4 0:02.26 top -cid 20
> >
> >note that this was done on a virtual dual cpu
> >machine (QEMU 8.0) with 2.6.16-vs2.1.1-rc16 and
> >that there were roughly 25% idle time, which I'm
> >unable to explain atm ...
> >
> >feel free to jump on that fact, but I consider
> >it unimportant for now ...
> >
> >best,
> >Herbert
> >
> >
> >>Thanks,
> >>Kirill
> >
> >
>
Re: Re: [RFC] Virtualization steps [message #2655 is a reply to message #2651] Thu, 13 April 2006 21:33
Cedric Le Goater
Herbert Poetzl wrote:

> well, if the 'results' and 'methods' will be made
> public, I can, until now all I got was something
> along the lines:
>
> "Linux-VServer is not stable! WE (swsoft?) have
> a secret but essential test suite running two
> weeks to confirm that OUR kernels ARE stable,
> and Linux-VServer will never pass those tests,
> but of course, we can't tell you what kind of
> tests or what results we got"
>
> which doesn't help me anything and which, to be
> honest, does not sound very friendly either ...

Recently, we've been running tests and benchmarks in different
virtualization environments: openvz, vserver, vserver in a minimal context
and also Xen as a reference in the virtual machine world.

We ran the usual benchmarks, dbench, tbench, lmbench, kernel build, on the
native kernel, on the patched kernel and in each virtualized environment.
We also did some scalability tests to see how each solution behaved. And
finally, some tests on live migration. We didn't do much on network nor on
resource management behavior.

We'd like to continue in an open way. But first, we want to make sure we
have the right tests, benchmarks, tools, versions, configuration, tuning,
etc, before publishing any results :) We have some materials already but
before proposing we would like to have your comments and advice on what we
should or shouldn't use.

Thanks for doing such a great job on lightweight containers,

C.
Re: Re: [RFC] Virtualization steps [message #2656 is a reply to message #2655] Thu, 13 April 2006 22:45
Herbert Poetzl
On Thu, Apr 13, 2006 at 11:33:13PM +0200, Cedric Le Goater wrote:
> Herbert Poetzl wrote:
>
> > well, if the 'results' and 'methods' will be made
> > public, I can, until now all I got was something
> > along the lines:
> >
> > "Linux-VServer is not stable! WE (swsoft?) have
> > a secret but essential test suite running two
> > weeks to confirm that OUR kernels ARE stable,
> > and Linux-VServer will never pass those tests,
> > but of course, we can't tell you what kind of
> > tests or what results we got"
> >
> > which doesn't help me anything and which, to be
> > honest, does not sound very friendly either ...
>
> Recently, we've been running tests and benchmarks in different
> virtualization environments : openvz, vserver, vserver in a minimal
> context and also Xen as a reference in the virtual machine world.
>
> We ran the usual benchmarks, dbench, tbench, lmbench, kernel build,
> on the native kernel, on the patched kernel and in each virtualized
> environment. We also did some scalability tests to see how each
> solution behaved. And finally, some tests on live migration. We didn't
> do much on network nor on resource management behavior.

I would be really interested in getting comparisons
between vanilla kernels and linux-vserver patched
versions, especially vs2.1.1 and vs2.0.2 on the
same test setup with a minimum difference in config

I doubt that you can really compare across the
existing virtualization technologies, as it really
depends on the setup and hardware

> We'd like to continue in an open way. But first, we want to make sure
> we have the right tests, benchmarks, tools, versions, configuration,
> tuning, etc, before publishing any results :) We have some materials
> already but before proposing we would like to have your comments and
> advice on what we should or shouldn't use.

In my experience it is extremely hard to do 'proper'
comparisons, because the slightest change of the
environment can cause big differences ...

here as example, a kernel build (-j99) on 2.6.16
on a test host, with and without a chroot:

without:

451.03user 26.27system 2:00.38elapsed 396%CPU
449.39user 26.21system 1:59.95elapsed 396%CPU
447.40user 25.86system 1:59.79elapsed 395%CPU

now with:

490.77user 24.45system 2:13.35elapsed 386%CPU
489.69user 24.50system 2:12.60elapsed 387%CPU
490.41user 24.99system 2:12.22elapsed 389%CPU

now is chroot() that imperformant? no, but the change
in /tmp being on a partition vs. tmpfs makes quite
some difference here

even moving from one partition to another will give
measurable difference here, all within a small margin
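
for reference, a minimal sketch of that kind of measurement (the chroot
path and source location are just placeholders, not my actual setup):

# native build
time make -j99 -C /usr/src/linux > /dev/null
# same build inside a chroot; note that where /tmp lives (tmpfs vs. a
# disk partition) can easily dominate the difference
time chroot /srv/guest sh -c 'make -j99 -C /usr/src/linux > /dev/null'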

an interesting aspect is the gain (or loss) you have
when you start several guests basically doing the
same thing (and sharing the same files, etc)

> Thanks for doing such a great job on lightweight containers,

you're welcome!

best,
Herbert

> C.
Re: Re: [RFC] Virtualization steps [message #2657 is a reply to message #2655] Thu, 13 April 2006 22:51
kir

Cedric Le Goater wrote:

>Recently, we've been running tests and benchmarks in different
>virtualization environments : openvz, vserver, vserver in a minimal context
>and also Xen as a reference in the virtual machine world.
>
>We ran the usual benchmarks, dbench, tbench, lmbench, kernel build, on the
>native kernel, on the patched kernel and in each virtualized environment.
>We also did some scalability tests to see how each solution behaved. And
>finally, some tests on live migration. We didn't do much on network nor on
>resource management behavior.
>
>We'd like to continue in an open way. But first, we want to make sure we
>have the right tests, benchmarks, tools, versions, configuration, tuning,
>etc, before publishing any results :) We have some materials already but
>before proposing we would like to have your comments and advice on what we
>should or shouldn't use.
>
>Thanks for doing such a great job on lightweight containers,
>
>C.
>
>
Cedric,

You made my day, I am really happy to hear that! Such testing and
benchmarking should be done by an independent third party, and IBM fits
that requirement just fine. It all makes much sense for everybody who's
involved.

If it is opened up (not just the results, but also the processes and
tools), and all the projects are able to contribute and help, that
would be just great. We do a lot of testing in-house, and will be happy
to contribute to such an independent testing/benchmarking project.

Speaking of live migration, we in OpenVZ plan to release our
implementation as soon as next week.

Regards,
Kir.
Re: Re: [RFC] Virtualization steps [message #2662 is a reply to message #2656] Fri, 14 April 2006 07:35
dev

> I would be really interested in getting comparisons
> between vanilla kernels and linux-vserver patched
> versions, especially vs2.1.1 and vs2.0.2 on the
> same test setup with a minimum difference in config
>
> I doubt that you can really compare across the
> existing virtualization technologies, as it really
> depends on the setup and hardware
and kernel .config's :)
for example, I'm pretty sure the OVZ SMP kernel is not the same as any of
the prebuilt vserver kernels.

> In my experience it is extremely hard to do 'proper'
> comparisons, because the slightest change of the
> environment can cause big differences ...
>
> here as example, a kernel build (-j99) on 2.6.16
> on a test host, with and without a chroot:
>
> without:
>
> 451.03user 26.27system 2:00.38elapsed 396%CPU
> 449.39user 26.21system 1:59.95elapsed 396%CPU
> 447.40user 25.86system 1:59.79elapsed 395%CPU
>
> now with:
>
> 490.77user 24.45system 2:13.35elapsed 386%CPU
> 489.69user 24.50system 2:12.60elapsed 387%CPU
> 490.41user 24.99system 2:12.22elapsed 389%CPU
>
> now is chroot() that imperformant? no, but the change
> in /tmp being on a partition vs. tmpfs makes quite
> some difference here
filesystem performance also very much depends on disk layout.
If you use different partitions of the same disk for Xen, vserver and OVZ,
one of them will be quickest while the others can be progressively
slower :/

Thanks,
Kirill
Re: Re: [RFC] Virtualization steps [message #2665 is a reply to message #2656] Fri, 14 April 2006 09:56
Cedric Le Goater
Bonjour !

Herbert Poetzl wrote:

> I would be really interested in getting comparisons
> between vanilla kernels and linux-vserver patched
> versions, especially vs2.1.1 and vs2.0.2 on the
> same test setup with a minimum difference in config

We did the tests last month and used the stable version : vs2.0.2rc9 on a
2.6.15.4. Using benchmarks like dbench, tbench, lmbench, the vserver patch
has no impact, vserver overhead in a context is hardly measurable (<3%),
same results for a debian sarge running in a vserver.

It is pretty difficult to follow everyone's patches. This makes the
comparisons difficult so we chose to normalize all the results with the
native kernel results. But in a way, this is good because the goal of these
tests isn't to compare technologies but to measure their overhead and
stability. And at the end, we don't care if openvz is faster than vserver,
we want containers in the linux kernel to be fast and stable, one day :)
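
As a sketch of that normalization (the numbers are made up, purely for
illustration), the overhead is just (native - virtualized) / native:

# e.g. dbench throughput in MB/s on the native and the patched kernel
native=245.1
patched=241.8
echo "scale=2; ($native - $patched) * 100 / $native" | bc   # overhead in %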

> I doubt that you can really compare across the
> existing virtualization technologies, as it really
> depends on the setup and hardware

I agree these are very different technologies but from a user point of
view, they provide a similar service. So, it is interesting to see what are
the drawbacks and the benefits of each solution. You want fault containment
and strict isolation, here's the price. You want performance, here's another.

Anyway, there's already enough focus on the virtual machines so we should
focus only on lightweight containers.

>> We'd like to continue in an open way. But first, we want to make sure
>> we have the right tests, benchmarks, tools, versions, configuration,
>> tuning, etc, before publishing any results :) We have some materials
>> already but before proposing we would like to have your comments and
>> advice on what we should or shouldn't use.
>
> In my experience it is extremely hard to do 'proper'
> comparisons, because the slightest change of the
> environment can cause big differences ...
>
> here as example, a kernel build (-j99) on 2.6.16
> on a test host, with and without a chroot:
>
> without:
>
> 451.03user 26.27system 2:00.38elapsed 396%CPU
> 449.39user 26.21system 1:59.95elapsed 396%CPU
> 447.40user 25.86system 1:59.79elapsed 395%CPU
>
> now with:
>
> 490.77user 24.45system 2:13.35elapsed 386%CPU
> 489.69user 24.50system 2:12.60elapsed 387%CPU
> 490.41user 24.99system 2:12.22elapsed 389%CPU
>
> now is chroot() that imperformant? no, but the change
> in /tmp being on a partition vs. tmpfs makes quite
> some difference here
>
> even moving from one partition to another will give
> measurable difference here, all within a small margin

very interesting thanks.

> an interesting aspect is the gain (or loss) you have
> when you start several guests basically doing the
> same thing (and sharing the same files, etc)

we have these in the pipeline also; we called them scalability tests: trying to
run as many containers as possible and see how performance drops (when the
kernel survives the test :)
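
A rough sketch of such a run, reusing the vcontext syntax from earlier in
the thread (the context count and the per-context workload are
placeholders):

# start N contexts all running the same small workload,
# then watch how throughput and load degrade as N grows
for i in $(seq 1 100); do
    vcontext --create --xid $((1000 + i)) -- sh -c 'while :; do :; done' &
done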

ok, now i guess we want to make some kind of test plan.

C.
Re: Re: [RFC] Virtualization steps [message #2666 is a reply to message #2657] Fri, 14 April 2006 10:08
Cedric Le Goater
Bonjour !

Kir Kolyshkin wrote:

> You made my day, I am really happy to hear that! Such testing and
> benchmarking should be done by an independent third party, and IBM fits
> that requirement just fine. It all makes much sense for everybody who's
> involved.
>
> If it is opened up (not just the results, but also the processes and
> tools), and all the projects are able to contribute and help, that
> would be just great. We do a lot of testing in-house, and will be happy
> to contribute to such an independent testing/benchmarking project.

What we have in mind is something like http://test.kernel.org/ for each
patch set. I guess we will start humbly at the beginning :)

Initially, the idea was to test the patch series we've been sending on
lkml. But as we've been running tests on existing solutions, openvz,
vserver, and our own prototype, we thought that extending to all was
interesting and fair.

The goal is to promote lightweight containers in the linux kernel, so this
needs to be open.

> Speaking of live migration, we in OpenVZ plan to release our
> implementation as soon as next week.

We've been working on that topic for a long time; we are very interested in
seeing what you've achieved! Migration tests are also an interesting topic
we could add over time to the container tests.

thanks,

C.
Re: Re: [RFC] Virtualization steps [message #2678 is a reply to message #2665] Sat, 15 April 2006 19:29
Herbert Poetzl
On Fri, Apr 14, 2006 at 11:56:21AM +0200, Cedric Le Goater wrote:
> Bonjour !
>
> Herbert Poetzl wrote:
>
> > I would be really interested in getting comparisons
> > between vanilla kernels and linux-vserver patched
> > versions, especially vs2.1.1 and vs2.0.2 on the
> > same test setup with a minimum difference in config
>
> We did the tests last month and used the stable version : vs2.0.2rc9
> on a 2.6.15.4. Using benchmarks like dbench, tbench, lmbench, the
> vserver patch has no impact, vserver overhead in a context is hardly
> measurable (<3%), same results for a debian sarge running in a
> vserver.

with 2.1.1-rc16 they are not supposed to be measurable
at all, so if you measure any difference here, please
let me know about it, as I consider it an issue :)

> It is pretty difficult to follow everyone's patches. This makes the
> comparisons difficult so we chose to normalize all the results with
> the native kernel results. But in a way, this is good because the goal
> of these tests isn't to compare technologies but to measure their
> overhead and stability. And at the end, we don't care if openvz is
> faster than vserver, we want containers in the linux kernel to be fast
> and stable, one day :)

I'm completely with you here ...

> > I doubt that you can really compare across the
> > existing virtualization technologies, as it really
> > depends on the setup and hardware
>
> I agree these are very different technologies but from a user point
> of view, they provide a similar service. So, it is interesting to see
> what are the drawbacks and the benefits of each solution. You want
> fault containment and strict isolation, here's the price. You want
> performance, here's another.

precisely, that's why there are different projects
and different aims ...

> Anyway, there's already enough focus on the virtual machines so we
> should focus only on lightweight containers.
>
> >> We'd like to continue in an open way. But first, we want to
> >> make sure we have the right tests, benchmarks, tools, versions,
> >> configuration, tuning, etc, before publishing any results :) We
> >> have some materials already but before proposing we would like to
> >> have your comments and advice on what we should or shouldn't use.
> >
> > In my experience it is extremely hard to do 'proper'
> > comparisons, because the slightest change of the
> > environment can cause big differences ...
> >
> > here as example, a kernel build (-j99) on 2.6.16
> > on a test host, with and without a chroot:
> >
> > without:
> >
> > 451.03user 26.27system 2:00.38elapsed 396%CPU
> > 449.39user 26.21system 1:59.95elapsed 396%CPU
> > 447.40user 25.86system 1:59.79elapsed 395%CPU
> >
> > now with:
> >
> > 490.77user 24.45system 2:13.35elapsed 386%CPU
> > 489.69user 24.50system 2:12.60elapsed 387%CPU
> > 490.41user 24.99system 2:12.22elapsed 389%CPU
> >
> > now is chroot() that imperformant? no, but the change
> > in /tmp being on a partition vs. tmpfs makes quite
> > some difference here
> >
> > even moving from one partition to another will give
> > measurable difference here, all within a small margin
>
> very interesting thanks.
>
> > an interesting aspect is the gain (or loss) you have
> > when you start several guests basically doing the
> > same thing (and sharing the same files, etc)
>
> we have these in the pipeline also; we called them scalability tests:
> trying to run as many containers as possible and see how performance
> drops (when the kernel survives the test :)

yes, you might want to check with and without unification
here too, as I think you can reach more than 100% of native
speed in the multi-guest scenario with that :)

> ok, now i guess we want to make some kind of test plan.

sounds good, please keep me posted ...

best,
Herbert

> C.
Re: Re: [RFC] Virtualization steps [message #2679 is a reply to message #2666] Sat, 15 April 2006 19:31
Herbert Poetzl
On Fri, Apr 14, 2006 at 12:08:05PM +0200, Cedric Le Goater wrote:
> Bonjour !
>
> Kir Kolyshkin wrote:
>
> > You made my day, I am really happy to hear that! Such testing and
> > benchmarking should be done by an independent third party, and
> > IBM fits that requirement just fine. It all makes much sense for
> > everybody who's involved.
> >
> > If it is opened up (not just the results, but also the processes and
> > tools), and all the projects are able to contribute and help,
> > that would be just great. We do a lot of testing in-house, and will
> > be happy to contribute to such an independent testing/benchmarking
> > project.
>
> What we have in mind is something like http://test.kernel.org/ for
> each patch set. I guess we will start humbly at the beginning :)
>
> Initially, the idea was to test the patch series we've been sending on
> lkml. But as we've been running tests on existing solutions, openvz,
> vserver, and our own prototype, we thought that extending to all was
> interesting and fair.

would be really great if you could extend that to something
like the PLM where folks (like linux-vserver and openvz) can
test their patches against mainline kernels in a fairly
automated way ...

I guess that would be some initial work, but could improve
many other patches (not only those related to virtualization)

best,
Herbert

> The goal is to promote lightweight containers in the linux kernel, so
> this needs to be open.
>
> > Speaking of live migration, we in OpenVZ plan to release our
> > implementation as soon as next week.
>
> We've been working on that topic for a long time; we are very
> interested in seeing what you've achieved! Migration tests are also an
> interesting topic we could add over time to the container tests.
>
> thanks,
>
> C.