OpenVZ Forum


Re: [RFC] Virtualization steps [message #2642 is a reply to message #2636]  Thu, 13 April 2006 06:45
Kirill Korotaev
Herbert,

Thanks a lot for the details; I will give it another try. Looks
like fairness in this scenario simply requires the sched_hard setting.

Herbert... I don't know why you've decided that my goal is to prove that
your scheduler is bad or imprecise. My goal is simply to investigate
different approaches and make some measurements. I suppose you can
benefit from such a volunteer, don't you think? Anyway, thanks again,
and don't get stuck on the idea that the OpenVZ folks are such cruel bad guys :)

Thanks,
Kirill

> well, your mistake seems to be that you probably haven't
> tested this yet, because with the following (simple)
> setup I seem to get what you consider impossible
> (of course, not as precisely as your scheduler does it)
>
>
> vcontext --create --xid 100 ./cpuhog -n 1 100 &
> vcontext --create --xid 200 ./cpuhog -n 1 200 &
> vcontext --create --xid 300 ./cpuhog -n 1 300 &
>
> vsched --xid 100 --fill-rate 1 --interval 6
> vsched --xid 200 --fill-rate 2 --interval 6
> vsched --xid 300 --fill-rate 3 --interval 6
>
> vattribute --xid 100 --flag sched_hard
> vattribute --xid 200 --flag sched_hard
> vattribute --xid 300 --flag sched_hard
>
>
> PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
> 39 root 25 0 1304 248 200 R 74 0.1 0:46.16 ./cpuhog -n 1 300
> 38 root 25 0 1308 252 200 H 53 0.1 0:34.06 ./cpuhog -n 1 200
> 37 root 25 0 1308 252 200 H 28 0.1 0:19.53 ./cpuhog -n 1 100
> 46 root 0 0 1804 912 736 R 1 0.4 0:02.14 top -cid 20
>
> and here the other way round:
>
> vsched --xid 100 --fill-rate 3 --interval 6
> vsched --xid 200 --fill-rate 2 --interval 6
> vsched --xid 300 --fill-rate 1 --interval 6
>
> PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
> 36 root 25 0 1304 248 200 R 75 0.1 0:58.41 ./cpuhog -n 1 100
> 37 root 25 0 1308 252 200 H 54 0.1 0:42.77 ./cpuhog -n 1 200
> 38 root 25 0 1308 252 200 R 29 0.1 0:25.30 ./cpuhog -n 1 300
> 45 root 0 0 1804 912 736 R 1 0.4 0:02.26 top -cid 20
>
>
> note that this was done on a virtual dual-CPU
> machine (QEMU 0.8.0) with 2.6.16-vs2.1.1-rc16 and
> that there was roughly 25% idle time, which I'm
> unable to explain atm ...
>
> feel free to jump on that fact, but I consider
> it unimportant for now ...
>
> best,
> Herbert
>
>
>>Thanks,
>>Kirill
>
>
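The --fill-rate/--interval pair above configures a token bucket. Assuming the
usual Linux-VServer semantics (the bucket gains fill-rate tokens every interval
ticks, each tick of CPU time spent by the context costs one token, and with
sched_hard the context is put on hold once the bucket is empty; this is a
reading of the scheduler documentation, not something stated in the thread),
a context set to --fill-rate 1 --interval 6 can run for roughly 1/6 of the
ticks, so the three contexts should share the CPU roughly 1:2:3, which is the
ratio visible in the top output. A tiny user-space sketch of that accounting,
purely illustrative:

	/*
	 * Illustrative token-bucket accounting (not the kernel code):
	 * every `interval' ticks the bucket gains `fill_rate' tokens,
	 * each tick of runtime costs one token, and an empty bucket
	 * means the context is put on hold (the sched_hard case).
	 */
	#include <stdio.h>

	struct bucket {
		int fill_rate;   /* tokens added per refill */
		int interval;    /* ticks between refills   */
		int tokens;      /* current token count     */
		int max_tokens;  /* bucket capacity         */
	};

	int main(void)
	{
		struct bucket b = { .fill_rate = 1, .interval = 6,
				    .tokens = 0, .max_tokens = 100 };
		int tick, ran = 0;

		for (tick = 1; tick <= 600; tick++) {
			if (tick % b.interval == 0) {   /* periodic refill */
				b.tokens += b.fill_rate;
				if (b.tokens > b.max_tokens)
					b.tokens = b.max_tokens;
			}
			if (b.tokens > 0) {             /* runnable: burn a token */
				b.tokens--;
				ran++;
			}                               /* otherwise: on hold */
		}
		/* fill_rate/interval = 1/6 -> runs ~1/6 of the ticks */
		printf("ran %d of 600 ticks (~%d%%)\n", ran, ran * 100 / 600);
		return 0;
	}

The cpuhog binary itself is never shown in the thread, so the following is only
a guess at its shape: a busy loop spread over -n threads, with the trailing
number (100/200/300) apparently serving as nothing more than a label so the
instances can be told apart in top. A minimal stand-in could look like this:

	/*
	 * Hypothetical stand-in for ./cpuhog (the real source is not part
	 * of this thread): spin `-n' threads that never sleep.
	 * Build with: gcc -O2 -o cpuhog cpuhog.c -lpthread
	 */
	#include <pthread.h>
	#include <stdlib.h>
	#include <string.h>

	static void *burn(void *arg)
	{
		volatile unsigned long x = 0;

		(void)arg;
		for (;;)
			x++;                    /* pure busy loop, never sleeps */
	}

	int main(int argc, char **argv)
	{
		int i, nthreads = 1;

		if (argc > 2 && strcmp(argv[1], "-n") == 0)
			nthreads = atoi(argv[2]);   /* further args (the label) are ignored */

		for (i = 1; i < nthreads; i++) {
			pthread_t t;
			pthread_create(&t, NULL, burn, NULL);
		}
		burn(NULL);                     /* main thread burns CPU as well */
		return 0;
	}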
 