OpenVZ Forum

Container Test Campaign
Re: [Vserver] Re: Container Test Campaign [message #4344 is a reply to message #4276] Thu, 06 July 2006 10:51
Herbert Poetzl
On Tue, Jul 04, 2006 at 03:02:54PM +0200, Clément Calmels wrote:
> Hi,
>
> Sorry, I just forgot one part of your email... (and sorry for the mail
> spamming; my fingers are probably too big or my keyboard too tiny)
>
> > 1.2 Can you tell us how you run the tests? I am particularly interested in:
> > - how many iterations do you do?
> > - what result do you choose from those iterations?
> > - how reproducible are the results?
> > - are you rebooting the box between the iterations?
> > - are you reformatting the partition used for filesystem testing?
> > - what settings are you using (such as kernel vm params)?
> > - did you stop cron daemons before running the test?
> > - are you using the same test binaries across all the participants?
> > - etc. etc...
>
> A basic 'patch' test looks like:
> o build the appropriate kernel (2.6.16-026test014-x86_64-smp for
> example)
> o reboot
> o run dbench on /tmp with 8 processes

sidenote: on a 'typical' Linux-VServer guest, /tmp
will be mounted as tmpfs, so be careful with that.
OVZ might do similar, as might your host distro :)
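
e.g. a quick check before benchmarking (just a sketch; the mount point
and fallback directory are assumptions, adjust to your setup):

# is /tmp really on disk, or is it tmpfs? (run on the host and inside the guest)
df -T /tmp                    # the Type column shows tmpfs vs ext3/reiserfs/...
grep ' /tmp ' /proc/mounts    # second opinion straight from the kernel
# if it is tmpfs, point dbench at a directory on the real test partition instead:
mkdir -p /mnt/bench/dbench && cd /mnt/bench/dbench && dbench 8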

HTH,
Herbert

> o run tbench with 8 processes
> o run lmbench
> o run kernbench
>
> For test inside a 'guest' I just do something like:
> o build the appropriate kernel (2.6.16-026test014-x86_64-smp for
> example)
> o reboot
> o build the utilities (vzctl+vzquota for example)
> o reboot
> o launch a guest
> o run in the guest dbench ...
> o run in the guest tbench ...
> ....
>
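
For reference, the host-side and guest-side sequences above could look
roughly like the following shell sketch. It is only a sketch: the exact
benchmark invocations, guest name/ID, directories and result paths are
assumptions, not the harness actually used for these numbers.

#!/bin/sh
# Host-side 'patch' iteration (sketch)
R=/root/results/$(uname -r); mkdir -p "$R"
cd /tmp && dbench 8 > "$R/dbench.log"              # 8 dbench clients on /tmp
tbench_srv & SRV=$!                                # tbench needs its server running
tbench 8 localhost > "$R/tbench.log"
kill $SRV
# lmbench is driven by its own harness ('make results' in the lmbench tree);
# kernbench is run from inside an unpacked kernel source tree:
( cd /usr/src/linux && kernbench ) > "$R/kernbench.log"

# Same benchmarks inside a guest (sketch)
vzctl start 101                                    # OpenVZ guest, ID assumed
vzctl exec 101 'cd /tmp && dbench 8' > "$R/ovz-dbench.log"
vserver test1 start                                # Linux-VServer guest, name assumed
vserver test1 exec sh -c 'cd /tmp && dbench 8' > "$R/vs-dbench.log"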
> - The results are the average of several iterations of each of these
> kinds of tests. I will try to update the site with the number of
> iterations behind each value.
> - For the filesystem testing, the partition is not reformatted. I can
> change this behaviour...
> - For the settings of the guest I tried to use the default settings (I
> had to change some openvz guest settings), just following the HOWTO on
> the vserver or openvz site.
> For the kernel parameters, did you mean kernel config file tweaking?
> - Cron daemons are stopped during the tests.
> - All binaries are always built on the test node.
>
> Feel free to suggest different scenarios which you think are more
> relevant.
>
> --
> Clément Calmels <clement.calmels@fr.ibm.com>
>
Re: [Vserver] Re: Container Test Campaign [message #4345 is a reply to message #4308] Thu, 06 July 2006 10:54
Herbert Poetzl
On Wed, Jul 05, 2006 at 02:43:17PM +0400, Kirill Korotaev wrote:
> >>>- All binaries are always build in the test node.
> >>>
> >>
> >>I'm assuming you are doing your tests on the same system (i.e. same
> >>compiler/libs/whatever else), and you do not change that system over
> >>time (i.e. you do not upgrade gcc on it in between the tests).
> >
> >
> >I hope! :)
>
> All binaries should be built statically to work the same way inside

I'm against that; IMHO statically built binaries (except
for dietlibc and uClibc) are not really realistic

> host/guest or you need to make sure that you have exactly the same
> versions of glibc and other system libraries. At least glibc can
> affect performance very much :/

yep, indeed, I'd suggest using the very same filesystem
for tests on the host as you use for the guests ...
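
one possible way to do that, as a sketch (the guest locations
/vservers/test1 and /vz/private/101 are just the conventional
defaults, and the glibc check assumes an rpm-based distro):

# run the host-side test on the very same tree the guest uses, via a bind mount
mkdir -p /mnt/benchroot
mount --bind /vservers/test1 /mnt/benchroot     # or /vz/private/101 for OpenVZ
cd /mnt/benchroot/tmp && dbench 8
cd / && umount /mnt/benchroot

# and double-check the library stack really is identical
rpm -q glibc                                    # on the host
vserver test1 exec rpm -q glibc                 # inside the Linux-VServer guest
vzctl exec 101 rpm -q glibc                     # inside the OpenVZ guest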

best,
Herbert

> Kirill
Re: [Vserver] Re: Container Test Campaign [message #4348 is a reply to message #4344] Thu, 06 July 2006 11:30
dev

Herbert Poetzl wrote:
> sidenote: on a 'typical' Linux-VServer guest, tmp
> will be mounted as tmpfs, so be careful with that
> OVZ might do similar as might your host distro :)
>
Good point. Can we document all these issues somewhere?

Kirill
Re: [Vserver] Re: Container Test Campaign [message #4355 is a reply to message #4342] Thu, 06 July 2006 14:12
Gerrit Huizenga
On Thu, 06 Jul 2006 14:44:23 +0400, Kirill Korotaev wrote:
> Gerrit,
>
> >>>>I'm assuming you are doing your tests on the same system (i.e. same
> >>>>compiler/libs/whatever else), and you do not change that system over
> >>>>time (i.e. you do not upgrade gcc on it in between the tests).
> >>>
> >>>I hope! :)
> >>
> >>All binaries should be built statically to work the same way inside host/guest or
> >>you need to make sure that you have exactly the same versions of glibc and other
> >>system libraries. At least glibc can affect performance very much :/
> >
> >
> > Ick - no one builds binaries statically in the real world. And,
> > when you build binaries statically, you lose all ability to fix
> > security problems in base libraries by doing an update of that library.
> > Instead, all applications need to be rebuilt.
> >
> > Performance tests should reflect real end user usage - not contrived
> > situations that make a particular solution look better or worse.
> > If glibc can affect performance, that should be demonstrated in the
> > real performance results - it is part of the impact of the solution and
> > may need an additional solution or discussion.
>
> What I tried to say is that performance results obtained in different
> environments are not comparable, so they don't mean much. I don't want us
> to waste our time digging into why one environment is a bit faster or slower than another.
> I hope you don't want to either.

I *do* want to understand why one patch set or another is significantly
faster or slower than any other. I think by now everyone realizes that
what goes into mainline will not be some slice of vserver, or OpenVZ
or MetaCluster or Eric's work in progress. It will be the convergence
of the patches that enable all solutions, and those patches will be added
as they are validated as beneficial to all participants *and* beneficial
(or not harmful) to mainline Linux. So, testing of large environments
is good to see where the overall impacts are (btw, people should start
reading up on basic oprofile use by about now ;-) but in the end, each
set of patches for each subsystem will be judged on their own merits.
Those merits include code cleanliness, code maintainability, code
functionality, performance, testability, etc.

So, you are right that testing which compares roughly similar environments
is good. But those tests will help us identify areas where one solution
or another may have code which provides the same functionality with
lower impact.

I do not want to have to dig into those results in great detail if the
difference between two approaches is minor. However, if a particular
area has major impacts to performance, we need to understand how the
approaches differ and why one solution has greater impact than another.
Sometimes it is just a coding issue that can be easily addressed. Sometimes
it will be a design issue, indicating that one solution took an approach
which might have been better addressed by another solution.

The fun thing here (well, maybe not for each solution provider) is that
we get to cherry pick the best implementations from each solution, or
create new ones as we go, which ultimately allows us to have application
virtualization, containers, or whatever you want to call them.

> Now, to have the same environment there are at least 2 ways:
> - make static binaries (not that good, but easiest way)

This is a case where "easiest" is just plain wrong. If it doesn't match
how people will use their distros and solutions out of the box, it has
no real relevance to the code that will get checked in.

> - have exactly the same packages in host/VPS for all test cases.
>
> BTW, I also prefer 2nd way, but it is harder.

Herbert's suggestion here is good - if you can use exactly the same
filesystem for performance comparisons you remove one set of variables.

However, I also believe that if there is a significant difference between
any two filesystems or even distro environments on basic performance tests
(e.g. standardized benchmarks), then there is probably some other problem
that we should be aware of. Most of the standardized benchmarks eliminate
the variance of the underlying system to the best of their ability.
For instance, kernbench carries around a full kernel (quite backlevel)
as the kernel that it builds. The goal is to make sure that the kernel
being built hasn't changed from one version to the next. In this case,
it is also important to use the same compiler since there can be
extensive variation between versions of gcc.
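
Recording the toolchain alongside each result set makes such differences
visible later; a minimal sketch (the result path is an assumption, and the
glibc query assumes an rpm- or dpkg-based distro):

# capture the environment that produced a result set
{
  date
  uname -a
  gcc --version | head -n1
  rpm -q glibc 2>/dev/null || dpkg -l libc6 2>/dev/null
  cat /proc/cmdline
} > results/$(uname -r)-env.txt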

gerrit
Re: [Vserver] Re: Container Test Campaign [message #4402 is a reply to message #4355] Mon, 10 July 2006 08:16
dev

Gerrit,

Great! This is what I wanted to hear :) Fully agree.

Thanks,
Kirill

Container Test Campaign [message #4461 is a reply to message #4183] Tue, 11 July 2006 08:45
Clement Calmels
Some updates on
http://lxc.sourceforge.net/bench/

New design, results of the stable version of openvz added, clearer
figures.

--
Clément Calmels <clement.calmels@fr.ibm.com>
Re: Container Test Campaign [message #4468 is a reply to message #4461] Tue, 11 July 2006 09:18
Kirill Korotaev
> Some updates on
> http://lxc.sourceforge.net/bench/
>
> New design, results of the stable version of openvz added, clearer
> figures.
>

1. Are the 2.6.16 OVZ results still for the CFQ disk scheduler?
2. There is definitely something unclean in your testing, as
vserver and MCR make dbench faster than vanilla :))
Have you taken into account my note about partition size,
and that the disk partition on which dbench works should be reformatted
before each test case?
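
i.e. something along these lines before every iteration (a sketch only;
the device and mount point are assumptions):

# give dbench a freshly formatted filesystem for every run
umount /mnt/bench 2>/dev/null
mkfs.ext3 -q /dev/sdb1
mount /dev/sdb1 /mnt/bench
cd /mnt/bench && dbench 8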

Kirill
Re: Container Test Campaign [message #4512 is a reply to message #4468] Wed, 12 July 2006 16:31
Clement Calmels
On Tuesday 11 July 2006 at 13:18 +0400, Kirill Korotaev wrote:
> > Some updates on
> > http://lxc.sourceforge.net/bench/
> >
> > New design, results of the stable version of openvz added, clearer
> > figures.
> >
>
> 1. Are the 2.6.16 OVZ results still for the CFQ disk scheduler?

These tests are currently in progress... for the moment, it seems that
the anticipatory I/O scheduler improves performance a lot.
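
On 2.6 kernels the scheduler can be switched per disk at runtime, which
makes A/B comparisons easy; a sketch (the benchmark disk sda is an assumption):

cat /sys/block/sda/queue/scheduler        # e.g.: noop [anticipatory] deadline cfq
echo cfq > /sys/block/sda/queue/scheduler # switch the benchmark disk to CFQ
# or select it at boot time with elevator=cfq / elevator=as on the kernel command line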

> 2. There is definitely something unclean in your testing, as
> vserver and MCR make dbench faster than vanilla :))

Couldn't some tests be faster inside a container than with a vanilla kernel?
For example, if I want to dump all files in /proc, it will obviously be
faster inside a light container, because /proc visibility is limited to the
container session. Just to be clear:

r3-21:~ # find /proc/ | wc -l
4213
r3-21:~ # mcr-execute -j1 -- find /proc/ | wc -l
729

I'm not sure and I'm still investigating. I'm now adding OProfile to all the
tests to get more information. If you know of technical reasons that would
explain the different results, let me know. Help welcome!
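
A minimal OProfile session wrapped around one run would look something like
this sketch (the vmlinux path and the dbench invocation are assumptions):

opcontrol --vmlinux=/usr/src/linux/vmlinux   # point oprofile at the running kernel image
opcontrol --start
cd /tmp && dbench 8
opcontrol --stop && opcontrol --dump
opreport --symbols | head -n 30              # top kernel/user symbols for this run
opcontrol --shutdown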

--
Clément Calmels <clement.calmels@fr.ibm.com>
Re: [Vserver] Re: Container Test Campaign [message #4517 is a reply to message #4512] Thu, 13 July 2006 02:07
Herbert Poetzl
On Wed, Jul 12, 2006 at 06:31:25PM +0200, Clément Calmels wrote:
> On Tuesday 11 July 2006 at 13:18 +0400, Kirill Korotaev wrote:
> > > Some updates on
> > > http://lxc.sourceforge.net/bench/
> > >
> > > New design, results of the stable version of openvz added, clearer
> > > figures.
> > >
> >
> > 1. Are the 2.6.16 OVZ results still for the CFQ disk scheduler?
>
> These tests are currently in progress... for the moment, it seems that
> the anticipatory I/O scheduler improves performance a lot.
>
> > 2. There is definitely something unclean in your testing, as
> > vserver and MCR make dbench faster than vanilla :))

that's not really unusual ...

> Couldn't some tests be faster inside a container than with a vanilla kernel?

yes, they definitely can, and some very specific ones
are consistently faster regardless of how many tests
and/or setups you have ...

> For example if I want to dump all files in /proc, obviously inside a
> light container it will be faster because /proc visibility is limited
> to the container session. Just to be clear:
>
> r3-21:~ # find /proc/ | wc -l
> 4213
> r3-21:~ # mcr-execute -j1 -- find /proc/ | wc -l
> 729
>
> I'm not sure and I'm still investigating. I'm now adding Oprofile to all
> tests to have more information. If you know technical reasons that imply
> different results, let me know. Help welcome!

yes, the 'isolation' used in Linux-VServer already
showed that 'at first glance' strange behaviour where
some tests are 'faster' inside a guest than on the
real/vanilla system, so for us it is not really new,
but it is probably still confusing. Here are a few
reasons _why_ some tests do better than the 'original':
- structures inside the kernel change, relations
between certain structures change too, some of
those changes cause 'better' behaviour, just
because cache usage or memory placement is different

- many checks walk huge lists to find a socket or
process or whatever, some of them use hashes to
speed up the search, the lightweight guests often
provide faster access to 'related' structures

- scheduler and memory management are tricky beasts;
sometimes it 'just happens' that certain operations
and/or sequences are faster than others, although
they give the same result

HTC,
Herbert

> --
> Clément Calmels <clement.calmels@fr.ibm.com>
>