OpenVZ Forum


Re: [RFC] Virtualization steps [message #2678 is a reply to message #2665] Sat, 15 April 2006 19:29
Herbert Poetzl
On Fri, Apr 14, 2006 at 11:56:21AM +0200, Cedric Le Goater wrote:
> Hello!
>
> Herbert Poetzl wrote:
>
> > I would be really interested in getting comparisons
> > between vanilla kernels and linux-vserver patched
> > versions, especially vs2.1.1 and vs2.0.2 on the
> > same test setup with a minimum difference in config
>
> We ran the tests last month using the stable version, vs2.0.2rc9,
> on a 2.6.15.4 kernel. With benchmarks like dbench, tbench, and
> lmbench, the vserver patch has no impact; the overhead inside a
> context is hardly measurable (<3%). We saw the same results for a
> Debian sarge running in a vserver.

with 2.1.1-rc16 those overheads are not supposed to be measurable
at all, so if you measure any difference there, please
let me know about it, as I consider it an issue :)

> It is pretty difficult to follow everyone's patches, which makes
> comparisons difficult, so we chose to normalize all the results
> against the native kernel results. In a way this is good, because
> the goal of these tests isn't to compare technologies but to
> measure their overhead and stability. In the end, we don't care if
> openvz is faster than vserver; we want containers in the linux
> kernel to be fast and stable, one day :)

I'm completely with you here ...
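
For concreteness: the normalization described above amounts to
expressing each patched-kernel result relative to the native kernel's
result. A minimal Python sketch, with purely illustrative numbers
rather than figures from the tests discussed here:

    def overhead_pct(native_s, patched_s):
        # normalized overhead in percent; inputs are elapsed
        # seconds, so lower is better and positive means slower
        return (patched_s - native_s) / native_s * 100.0

    # illustrative values only
    print("%.1f%%" % overhead_pct(119.9, 122.5))  # ~2.2%, within the <3% above

For throughput benchmarks like dbench, where higher is better, the
ratio is taken the other way around.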

> > I doubt that you can really compare across the
> > existing virtualization technologies, as it really
> > depends on the setup and hardware
>
> I agree these are very different technologies, but from a user's
> point of view they provide a similar service. So it is interesting
> to see what the drawbacks and benefits of each solution are. You
> want fault containment and strict isolation? Here's the price. You
> want performance? Here's another.

precisely, that's why there are different projects
and different aims ...

> Anyway, there's already enough focus on the virtual machines so we
> should focus only on lightweight containers.
>
> >> We'd like to continue in an open way. But first, we want to
> >> make sure we have the right tests, benchmarks, tools, versions,
> >> configuration, tuning, etc., before publishing any results :) We
> >> have some material already, but before proposing it we would like
> >> to have your comments and advice on what we should or shouldn't use.
> >
> > In my experience it is extremely hard to do 'proper'
> > comparisons, because the slightest change of the
> > environment can cause big differences ...
> >
> > here as example, a kernel build (-j99) on 2.6.16
> > on a test host, with and without a chroot:
> >
> > without:
> >
> > 451.03user 26.27system 2:00.38elapsed 396%CPU
> > 449.39user 26.21system 1:59.95elapsed 396%CPU
> > 447.40user 25.86system 1:59.79elapsed 395%CPU
> >
> > now with:
> >
> > 490.77user 24.45system 2:13.35elapsed 386%CPU
> > 489.69user 24.50system 2:12.60elapsed 387%CPU
> > 490.41user 24.99system 2:12.22elapsed 389%CPU
> >
> > now, is chroot() really that slow? no, but /tmp being on a
> > disk partition vs. tmpfs makes quite some difference here
> >
> > even moving from one partition to another will give a
> > measurable difference here, all within a small margin
>
> very interesting thanks.
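
For concreteness, a timing harness in the spirit of the runs quoted
above might look like the following Python sketch; the build command
and chroot path are hypothetical and have to be adapted to the local
setup (the chroot must already be populated):

    import subprocess, time

    def bench(cmd, runs=3):
        # run cmd several times and print the wall-clock elapsed
        # time of each run, mirroring the triple runs quoted above
        for i in range(runs):
            t0 = time.monotonic()
            subprocess.run(cmd, shell=True, check=True,
                           stdout=subprocess.DEVNULL,
                           stderr=subprocess.DEVNULL)
            print("run %d: %.2fs elapsed" % (i + 1, time.monotonic() - t0))

    bench("make -C /usr/src/linux -j99")                        # plain host build
    bench("chroot /srv/buildroot make -C /usr/src/linux -j99")  # same build via chroot

As the numbers above illustrate, such a comparison is only meaningful
if everything else (backing filesystem, /tmp, partition layout) is
held constant between the two runs.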
>
> > an interesting aspect is the gain (or loss) you have
> > when you start several guests basically doing the
> > same thing (and sharing the same files, etc)
>
> we have these in the pipeline too; we call them scalability tests:
> trying to run as many containers as possible and seeing how the
> performance drops (when the kernel survives the test :)

yes, you might want to check with and without unification
here too, as I think you can reach more than 100% of native
speed in the multi-guest scenario with that :)
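
The "unification" mentioned here is Linux-VServer's technique of
hard-linking identical files across guest root filesystems, so that
several guests share one inode and therefore one copy in the page
cache. A much-simplified Python sketch of the idea (the /vservers
paths are hypothetical, and the real vunify tool additionally marks
the shared files immutable):

    import os, filecmp

    def unify(ref_root, guest_root):
        # hard-link files in guest_root to identical counterparts in
        # ref_root, so both guests share inodes and page-cache pages
        for dirpath, dirnames, filenames in os.walk(ref_root):
            rel = os.path.relpath(dirpath, ref_root)
            for name in filenames:
                ref = os.path.join(dirpath, name)
                dup = os.path.join(guest_root, rel, name)
                if (os.path.isfile(dup) and not os.path.islink(dup)
                        and not os.path.samefile(ref, dup)  # skip if already linked
                        and filecmp.cmp(ref, dup, shallow=False)):
                    os.unlink(dup)
                    os.link(ref, dup)

    unify("/vservers/guest1", "/vservers/guest2")

With the duplicates shared, N guests running the same workload touch
far less memory and disk than N independent copies, which is how the
aggregate can exceed 100% of a single native run.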

> OK, now I guess we want to make some kind of test plan.

sounds good, please keep me posted ...

best,
Herbert

> C.
 