OpenVZ Forum


Mailing lists » Devel » [patch00/05]: Containers(V2)- Introduction
Re: [patch00/05]: Containers(V2)- Introduction [message #6633 is a reply to message #6630] Wed, 20 September 2006 22:59
Christoph Lameter
On Wed, 20 Sep 2006, Paul Jackson wrote:

> > Hmmm... That gets into issues of knowing how many pages are in use by an
> > application and that is fundamentally difficult to do due to pages being
> > shared between processes.
>
> Fundamentally difficult or not, it seems to be required for what Nick
> describes, and for sure cpusets doesn't do it (track memory usage per
> container.)

We have the memory usage on a node. So in that sense we track memory
usage. We also have VM counters for that node or nodes that describe
resource usage.
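
For reference, those per-node counters are readable from userspace; a minimal sketch that dumps them (mainline sysfs paths; the number of node ids probed is an assumption):

```c
/* Sketch: dump the per-node memory counters Christoph refers to.
 * Assumes the mainline sysfs layout; the node count is a guess. */
#include <stdio.h>

int main(void)
{
        char path[64], line[256];

        for (int nid = 0; nid < 4; nid++) {     /* probe a few node ids */
                snprintf(path, sizeof(path),
                         "/sys/devices/system/node/node%d/meminfo", nid);
                FILE *f = fopen(path, "r");
                if (!f)
                        continue;               /* node not present */
                while (fgets(line, sizeof(line), f))
                        fputs(line, stdout);    /* per-node MemTotal/MemFree/... */
                fclose(f);
        }
        return 0;
}
```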
Re: [patch00/05]: Containers(V2)- Introduction [message #6634 is a reply to message #6631] Wed, 20 September 2006 23:01
Christoph Lameter
On Wed, 20 Sep 2006, Paul Jackson wrote:

> All in all, it could avoid anything more than trivial changes to the
> existing memory allocation code hot paths. But the infrastructure
> needed for managing this mechanism needs some non-trivial work.

This is work we have to do anyway for hotplug support. Adding a real
node or a virtual node is fundamentally the same process either way: it
requires a regeneration of the zonelists in the system.
Re: [patch00/05]: Containers(V2)- Introduction [message #6635 is a reply to message #6632] Wed, 20 September 2006 23:02
Christoph Lameter
On Wed, 20 Sep 2006, Paul Jackson wrote:

> the kernel. A useful memory containerization should (IMHO) allow for
> both adding and removing such containers.

How does the containers implementation under discussion behave if a
process is part of a container and the container is removed?
Re: [patch00/05]: Containers(V2)- Introduction [message #6636 is a reply to message #6590] Wed, 20 September 2006 23:18
Paul Jackson
Christoph, responding to Alan:
> > I'm also not clear how you handle shared pages correctly under the fake
> > node system. Can you perhaps explain further how this works for, say,
> > a single apache/php/glibc shared page set across 5000 containers, each a
> > web site?
>
> Cpusets can share nodes. I am not sure what the problem would be? Paul may
> be able to give you more details.

Cpusets share pre-assigned nodes, but not anonymous proportions of the
total system memory.

So sharing an apache/php/glibc page set across 5000 containers using
cpusets would be awkward. Unless I'm missing something, you'd have to
prepage in that page set from some task allowed that many pages in
its own cpuset, then run each of the 5000 web servers in a smaller
cpuset sized for the remainder of whatever that web server was
provisioned, not counting the shared pages. The shared pages wouldn't
count, because cpusets doesn't ding you for using a page that is
already in memory - it just keeps you from allocating fresh pages
on certain nodes. When it came time to do rolling upgrades to new
versions of the software, and to add a marketing-driven list of 57
additional applications that the customers could use to build their
websites, this could become an official nightmare.

Overbooking (selling, say, 10 MB of memory for each server, even though
there is less than 5000 * 10 MB of total RAM in the system) would also be
awkward. One could simulate it with overlapping sets of fake numa nodes,
as I described in an earlier post today (the one that gave each task
some four of the five 20 MB fake cpusets). But there would still be
false resource conflicts, and the (ab)use of the cpuset apparatus for
this seems unintuitive, in my opinion.

I imagine that a site hosting 5000 web servers would be very
interested in overbooking working well. I'm sure the $7.99/month,
cheap-as-dirt virtual web servers of which I am a customer overbook.

--
I won't rest till it's the best ...
Programmer, Linux Scalability
Paul Jackson <pj@sgi.com> 1.925.600.0401
Re: [patch00/05]: Containers(V2)- Introduction [message #6637 is a reply to message #6631] Wed, 20 September 2006 23:22
Rohit Seth
On Wed, 2006-09-20 at 15:51 -0700, Paul Jackson wrote:
> Seth wrote:
> > But I am not sure
> > if this number of nodes can change dynamically on the running machine or
> > a reboot is required to change the number of nodes.
>
> The current numa=fake=N kernel command line option is just boottime,
> and just x86_64.
>

Ah okay.

> I presume we'd have to remove these two constraints for this to be
> generally usable to containerize memory.
>

Right.

> We also, in my current opinion, need to fix up the node_distance
> between such fake numa sibling nodes, to correctly reflect that they
> are on the same real node (LOCAL_DISTANCE).
>
> And some non-trivial, arch-specific, zonelist sorting and reconstruction
> work will be needed.
>
> And an API devised for the above mentioned dynamic changing.
>
> And this will push on the memory hotplug/unplug technology.
>

Yes, if we use the existing notion of nodes for other purposes then you
have captured the right set of changes that will be needed to make that
happen. Such changes are not required for container patches as such.

> All in all, it could avoid anything more than trivial changes to the
> existing memory allocation code hot paths. But the infrastructure
> needed for managing this mechanism needs some non-trivial work.
>
>
> > Though when you want to have in excess of 100 containers then the cpuset
> > function starts popping up on the oprofile chart very aggressively.
>
> As the linux-mm discussion last weekend examined in detail, we can
> eliminate this performance speed bump, probably by caching the
> last zone on which we found some memory. The linear search that was
> implicit in __alloc_pages()'s use of zonelists for many years finally
> became explicit with this new usage pattern.
>

Okay.
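
A minimal sketch of the last-zone caching idea under discussion; every name below is hypothetical, not the actual __alloc_pages() code:

```c
/* Hypothetical sketch: cache the last zone that satisfied an allocation
 * so the linear zonelist walk usually succeeds on its first probe
 * instead of rescanning many empty fake nodes each time. */
struct zone { long free_pages; };

static struct zone *last_zone;          /* cached hint */

struct zone *alloc_from_zonelist(struct zone **zonelist, int nzones)
{
        if (last_zone && last_zone->free_pages > 0)
                return last_zone;       /* fast path: hint still good */

        for (int i = 0; i < nzones; i++) {      /* slow path: linear search */
                if (zonelist[i]->free_pages > 0) {
                        last_zone = zonelist[i];        /* refresh hint */
                        return zonelist[i];
                }
        }
        return 0;                       /* caller falls back to reclaim */
}
```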

>
> > Containers also provide a mechanism to move files to containers. Any
> > further references to this file come from the same container rather than
> > the container which is bringing in a new page.
>
> I haven't read these patches enough to quite make sense of this, but I
> suspect that this is not a distinction between cpusets and these
> containers, for the basic reason that cpusets doesn't need to 'move'
> a file's references because it has no clue what such are.
>

But container support will allow pages for certain files to come from
the same container irrespective of who is using them. Something useful
for shared libs etc.

-rohit

>
> > In the future there will be more handlers, like CPU and disk, that can be
> > easily embedded into this container infrastructure.
>
> This may be a deciding point.
>
Re: [patch00/05]: Containers(V2)- Introduction [message #6638 is a reply to message #6632] Wed, 20 September 2006 23:26
Rohit Seth
On Wed, 2006-09-20 at 15:58 -0700, Paul Jackson wrote:
> Seth wrote:
> > So now we depend on getting memory hot-plug to work for faking up these
> > nodes ...for the memory that is already present in the system. It just
> > does not sound logical.
>
> It's logical to me. Part of memory hotplug is adding physical memory,
> which is not an issue here. Part of it is adding another logical
> memory node (turning on another bit in node_online_map) and fixing up
> any code that thought a system's memory nodes were baked in at boottime.
> Perhaps the hardest part is the memory hot-un-plug, which would become
> more urgently needed with such use of fake numa nodes. The assumption
> that memory doesn't just up and vanish is non-trivial to remove from
> the kernel. A useful memory containerization should (IMHO) allow for
> both adding and removing such containers.
>

Absolutely. Since these containers are not (hard) partitioning the
memory in any way, it is easy to change the limits (effectively
reducing and increasing the memory limits for tasks belonging to
containers). As you said, memory hot-un-plug is important and it is
a non-trivial amount of work.

-rohit
Re: [patch00/05]: Containers(V2)- Introduction [message #6639 is a reply to message #6593] Wed, 20 September 2006 23:29
Paul Jackson
Alan replying to Christoph:
> > Cpusets can share nodes. I am not sure what the problem would be? Paul may
> > be able to give you more details.
>
> If it can do it in a human understandable way, configured at runtime
> with dynamic sharing, overcommit and reconfiguration of sizes then
> great. Lets see what Paul has to say.

Unless I'm missing something (a frequent occurrence), such a use of
cpusets loses on the understandable, is hobbled on the overcommit, and
has to make do with a somewhat oddly limited, not-trivial-to-configure
approximation of the dynamic sharing. And the reconfiguration would
seem to be a great exercise of memory hotunplug (echoes of the
original motivation for fake numa - exercising cpusets ;).

Not exactly passing with flying colors ;).

--
I won't rest till it's the best ...
Programmer, Linux Scalability
Paul Jackson <pj@sgi.com> 1.925.600.0401
Re: [patch00/05]: Containers(V2)- Introduction [message #6640 is a reply to message #6638] Wed, 20 September 2006 23:31
Christoph Lameter
On Wed, 20 Sep 2006, Rohit Seth wrote:

> Absolutely. Since these containers are not (hard) partitioning the
> memory in any way, it is easy to change the limits (effectively
> reducing and increasing the memory limits for tasks belonging to
> containers). As you said, memory hot-un-plug is important and it is
> a non-trivial amount of work.

Maybe the hotplug guys want to contribute to the discussion?
Re: [patch00/05]: Containers(V2)- Introduction [message #6641 is a reply to message #6635] Wed, 20 September 2006 23:33
Rohit Seth
On Wed, 2006-09-20 at 16:02 -0700, Christoph Lameter wrote:
> On Wed, 20 Sep 2006, Paul Jackson wrote:
>
> > the kernel. A useful memory containerization should (IMHO) allow for
> > both adding and removing such containers.
>
> How does the containers implementation under discussion behave if a
> process is part of a container and the container is removed?
>

It first removes all the tasks belonging to this container (which means
resetting the container pointer in each task_struct and the per-page
container pointers of anonymous pages). It then clears the
container pointers in the mapping structures and also in the pages
belonging to these files.

-rohit
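
A rough sketch of the teardown sequence described above; the structures and names are stand-ins, not the actual patch code:

```c
/* Hypothetical sketch of container removal: detach tasks first, then
 * the file mappings, so nothing stays charged to the dying container. */
struct container;
struct ctask    { struct container *ctn; struct ctask *next; };
struct cmapping { struct container *ctn; struct cmapping *next; };

void container_remove(struct ctask *tasks, struct cmapping *maps)
{
        for (struct ctask *t = tasks; t; t = t->next)
                t->ctn = 0;     /* task (and its anon pages) uncharged */

        for (struct cmapping *m = maps; m; m = m->next)
                m->ctn = 0;     /* mapping and its file pages uncharged */
}
```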
Re: [patch00/05]: Containers(V2)- Introduction [message #6642 is a reply to message #6641] Wed, 20 September 2006 23:36
Christoph Lameter
On Wed, 20 Sep 2006, Rohit Seth wrote:

> > How does the containers implementation under discussion behave if a
> > process is part of a container and the container is removed?
> It first removes all the tasks belonging to this container (which means
> resetting the container pointer in each task_struct and the per-page
> container pointers of anonymous pages). It then clears the
> container pointers in the mapping structures and also in the pages
> belonging to these files.

So the application continues to run unharmed?

Could we equip containers with restrictions on processors and nodes for
NUMA?
Re: [patch00/05]: Containers(V2)- Introduction [message #6644 is a reply to message #6642] Wed, 20 September 2006 23:39
Rohit Seth
On Wed, 2006-09-20 at 16:36 -0700, Christoph Lameter wrote:
> On Wed, 20 Sep 2006, Rohit Seth wrote:
>
> > > How does the containers implementation under discussion behave if a
> > > process is part of a container and the container is removed?
> > It first removes all the tasks belonging to this container (which means
> > resetting the container pointer in each task_struct and the per-page
> > container pointers of anonymous pages). It then clears the
> > container pointers in the mapping structures and also in the pages
> > belonging to these files.
>
> So the application continues to run unharmed?
>

It will hit a one-time penalty of getting those pointers reset, but
besides that it will continue to run fine.

> Could we equip containers with restrictions on processors and nodes for
> NUMA?
>

Yes. That is something we will have to do (I think part of CPU
handler-TBD).

-rohit
Re: [patch00/05]: Containers(V2)- Introduction [message #6645 is a reply to message #6637] Wed, 20 September 2006 23:45
Paul Jackson
Seth wrote:
> But container support will allow the certain files pages to come from
> the same container irrespective of who is using them. Something useful
> for shared libs etc.

Yes - that is useful for shared libs, and your container patch
apparently does that cleanly, while cpuset+fakenuma containers can
only provide a compromise kludge.

--
I won't rest till it's the best ...
Programmer, Linux Scalability
Paul Jackson <pj@sgi.com> 1.925.600.0401
Re: [patch00/05]: Containers(V2)- Introduction [message #6646 is a reply to message #6644] Wed, 20 September 2006 23:51
Christoph Lameter
On Wed, 20 Sep 2006, Rohit Seth wrote:

> > Could we equip containers with restrictions on processors and nodes for
> > NUMA?
> Yes. That is something we will have to do (I think part of CPU
> handler-TBD).

Paul: Will we still need cpusets if that is there?
Re: [patch00/05]: Containers(V2)- Introduction [message #6647 is a reply to message #6646] Thu, 21 September 2006 00:05
Paul Jackson
> > > Could we equip containers with restrictions on processors and nodes for
> > > NUMA?
> > Yes. That is something we will have to do (I think part of CPU
> > handler-TBD).
>
> Paul: Will we still need cpusets if that is there?

Yes. There's quite a bit more to cpusets than just some form,
any form, of CPU and Memory restriction. I can't imagine that
Containers, in any form, are going to replicate that API.

--
I won't rest till it's the best ...
Programmer, Linux Scalability
Paul Jackson <pj@sgi.com> 1.925.600.0401
Re: [ckrm-tech] [patch00/05]: Containers(V2)- Introduction [message #6653 is a reply to message #6622] Thu, 21 September 2006 00:31
Chandra Seetharaman
On Wed, 2006-09-20 at 12:52 -0700, Christoph Lameter wrote:
> On Wed, 20 Sep 2006, Chandra Seetharaman wrote:
>
> > cpusets partition resources, and hence the resources that are assigned to
> > a node are not available to other cpusets, which is not good for "resource
> > management".
>
> cpusets can have one node in multiple cpusets.

AFAICS, that doesn't help me in overcommitting resources.

--

----------------------------------------------------------------------
Chandra Seetharaman | Be careful what you choose....
- sekharan@us.ibm.com | .......you may get it.
----------------------------------------------------------------------
Re: [ckrm-tech] [patch00/05]: Containers(V2)- Introduction [message #6655 is a reply to message #6653] Thu, 21 September 2006 00:36
Paul Jackson
Chandra wrote:
> AFAICS, that doesn't help me in overcommitting resources.

I agree - I don't think cpusets plus fake numa ... handles overcommit.
You might be able to hack up a cheap substitute, but it wouldn't do the job.

--
I won't rest till it's the best ...
Programmer, Linux Scalability
Paul Jackson <pj@sgi.com> 1.925.600.0401
Re: [Lhms-devel] [patch00/05]: Containers(V2)- Introduction [message #6659 is a reply to message #6640] Thu, 21 September 2006 00:47
KAMEZAWA Hiroyuki
On Wed, 20 Sep 2006 16:31:22 -0700 (PDT)
Christoph Lameter <clameter@sgi.com> wrote:

> On Wed, 20 Sep 2006, Rohit Seth wrote:
>
> > Absolutely. Since these containers are not (hard) partitioning the
> > memory in any way so it is easy to change the limits (effectively
> > reducing and increasing the memory limits for tasks belonging to
> > containers). As you said, memory hot-un-plug is important and it is
> > non-trivial amount of work.
>
> Maybe the hotplug guys want to contribute to the discussion?
>
Ah, I'm reading this thread with interest.

I think this discussion is about using fake nodes ('struct pgdat')
to divide the system's memory into chunks. Your thought is that the
memory-hotplug code may be useful for resizing/adding/removing fake
pgdats. Correct?

Now, memory hotplug manages all memory by 'section' and allows
adding/(removing) sections to a pgdat.
Does this section-size handling meet the container people's requirements?
And do we need to free pages when a pgdat is removed?

I think at least SPARSEMEM is useful for fake nodes, because 'struct page'
is not tied to the pgdat. (DISCONTIGMEM uses node_start_pfn; SPARSEMEM
does not.)

-Kame
Re: [Lhms-devel] [patch00/05]: Containers(V2)- Introduction [message #6663 is a reply to message #6659] Thu, 21 September 2006 01:29
KAMEZAWA Hiroyuki
Self-response..

On Thu, 21 Sep 2006 09:51:00 +0900
KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com> wrote:

> On Wed, 20 Sep 2006 16:31:22 -0700 (PDT)
> Christoph Lameter <clameter@sgi.com> wrote:
>
> > On Wed, 20 Sep 2006, Rohit Seth wrote:
> >
> > > Absolutely. Since these containers are not (hard) partitioning the
> > > memory in any way so it is easy to change the limits (effectively
> > > reducing and increasing the memory limits for tasks belonging to
> > > containers). As you said, memory hot-un-plug is important and it is
> > > non-trivial amount of work.
> >
> > Maybe the hotplug guys want to contribute to the discussion?
> >
> Ah, I'm reading this thread with interest.

I wonder whether pgdat is a good fit for resource control.

For example, consider the following scenario:
==
(1). add <pid> > /mnt/configfs/containers/my_container/add_task
(2). <pid> does some work.
(3). echo <pid> > /mnt/configfs/containers/my_container/rm_task
(4). echo <pid> > /mnt/configfs/containers/my_container2/add_task
==
(if fake-pgdat/memory-hotplug is used)
The pages used by <pid> in (2) will still be accounted to 'my_container' after (3).
Is this a wrong use of the system by the user?

-Kame
Re: [patch00/05]: Containers(V2)- Introduction [message #6728 is a reply to message #6584] Wed, 20 September 2006 16:44
Nick Piggin
On Wed, Sep 20, 2006 at 09:25:03AM -0700, Christoph Lameter wrote:
> On Tue, 19 Sep 2006, Rohit Seth wrote:
>
> > For example, a user can run a batch job like backup inside a container.
> > This job, if run unconstrained, could step over most of the memory present
> > in the system, thus impacting other workloads running on the system at that
> > time. But when the same job is run inside a container, the backup
> > job runs within the container limits.
>
> I just saw this for the first time since linux-mm was not cc'ed. We have
> discussed a similar mechanism on linux-mm.
>
> We already have such functionality in the kernel: it's called a cpuset. A
> container could be created simply by creating a fake node that then
> allows constraining applications to this node. We already track the
> types of pages per node. The statistics you want already exist.
> See /proc/zoneinfo and /sys/devices/system/node/node*/*.
>
> > We use the term container to indicate a structure against which we track
> > and charge utilization of system resources like memory, tasks, etc. for a
> > workload. Containers will allow system admins to customize the
> > underlying platform for different applications based on their
> > performance and HW resource utilization needs. Containers contain
> > enough infrastructure to allow optimal resource utilization without
> > bogging down the rest of the kernel. A system admin should be able to
> > create, manage and free containers easily.
>
> Right, that's what cpusets do, and it has been working fine for years. Maybe
> Paul can help you if you find anything missing in the existing means to
> control resources.

What I like about Rohit's patches is the page tracking stuff which
seems quite simple but capable.

I suspect cpusets don't quite provide enough features for non-exclusive
use of memory (eg. page tracking for directed reclaim).
Re: [patch00/05]: Containers(V2)- Introduction [message #6729 is a reply to message #6581] Wed, 20 September 2006 17:07
Nick Piggin
On Wed, Sep 20, 2006 at 09:48:13AM -0700, Christoph Lameter wrote:
> On Wed, 20 Sep 2006, Nick Piggin wrote:
>
> > > Right, that's what cpusets do, and it has been working fine for years. Maybe
> > > Paul can help you if you find anything missing in the existing means to
> > > control resources.
> >
> > What I like about Rohit's patches is the page tracking stuff which
> > seems quite simple but capable.
> >
> > I suspect cpusets don't quite provide enough features for non-exclusive
> > use of memory (eg. page tracking for directed reclaim).
>
> Look at the VM statistics please. We have detailed page statistics per
> zone these days. If there is anything missing then this would best be put
> into general functionality. When I looked at it, I saw page statistics
> that were replicating things that we already track per zone. All these
> would become available if a container is realized via a node and we would
> be using proven VM code.

Look at what the patches do. These are not only for hard partitioning
of memory per container but also for containers that share memory (eg.
you might want each to share 100MB of memory, up to a max of 80MB for
an individual container).

The nodes+cpusets stuff doesn't seem to help with that, because you
fundamentally need to track pages on a per-container basis; otherwise
you don't know who's got what.

Now if, in practice, it turns out that nobody really needed these
features then of course I would prefer the cpuset+nodes approach. My
point is that I am not in a position to know who wants what, so I
hope people will come out and discuss some of these issues.
Re: [ckrm-tech] [patch00/05]: Containers(V2)- Introduction [message #6730 is a reply to message #6583] Wed, 20 September 2006 17:23
Paul Menage
On 9/20/06, Nick Piggin <nickpiggin@yahoo.com.au> wrote:

> Yeah, I'm not sure about that. I don't think really complex schemes
> are needed... but again I might need more knowledge of their workloads
> and problems.
>

The basic need for separating files into containers distinct from the
tasks that are using them arises when you have several "jobs" all
working with the same large data set. (Possibly read-only data files,
or possibly one job is updating a dataset that's being used by other
jobs).
For automated job-tracking and scheduling, it's important to be able
to distinguish shared usage from individual usage (e.g. to be able to
answer questions like "if I kill job X, how much memory do I get back?"
and "how do I recover 1G of memory on this machine?").

As an example, assume two jobs each with 100M of anonymous memory both
mapping the same 1G file, for a total usage of 1.2G.

Any setup that doesn't let you distinguish shared and private usage
makes it hard to answer that kind of scheduling question. E.g.:

- first user gets charged for the page -> first job reported as 1.1G,
and the second as 0.1G.

- page charges get shared between all users of the page -> two tasks
using 0.6G each.

- all users get charged for the page -> two tasks using 1.1G each.

But in fact killing either one of these jobs individually would only
free up 100M.

By explicitly letting userspace see that there are two jobs each with
a private usage of 100M, and they're sharing a dataset of 1G, it's
possible to make more informed decisions.
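
Working those numbers through each charging policy (a small illustrative program, not kernel code):

```c
/* Worked numbers: two jobs, 100M anon each, sharing a 1G mapped file. */
#include <stdio.h>

int main(void)
{
        double anon = 0.1, shared = 1.0;                /* GB */

        printf("first-user-pays: job1=%.1fG job2=%.1fG\n",
               anon + shared, anon);                    /* 1.1 / 0.1 */
        printf("split-evenly:    job1=%.1fG job2=%.1fG\n",
               anon + shared / 2, anon + shared / 2);   /* 0.6 / 0.6 */
        printf("everyone-pays:   job1=%.1fG job2=%.1fG\n",
               anon + shared, anon + shared);           /* 1.1 / 1.1 */

        /* Yet killing either job frees only its 0.1G of private memory. */
        return 0;
}
```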

The issue of telling the kernel exactly which files/directories need
to be accounted separately can be handled by userspace.

It could be done by per-page accounting, or by constraining particular
files to particular memory zones, or by just tracking/limiting the
number of pages from each address_space in the pagecache, but I think
that it's important that the kernel at least provide the primitive
support for this.

Paul
Re: [ckrm-tech] [patch00/05]: Containers(V2)- Introduction [message #6731 is a reply to message #6594] Wed, 20 September 2006 17:37
Paul Menage
On 9/20/06, Rohit Seth <rohitseth@google.com> wrote:
>
> cpusets provide CPU and memory NODE binding for tasks. And I think it
> works great for NUMA machines where you have different nodes, each with its
> own set of CPUs and memory. The number of those nodes on commodity HW
> is still 1. And they can have 8-16 CPUs and in excess of 100G of
> memory. You may start using fake nodes (untested territory) to

I've been experimenting with this, and it's looking promising.

>
> Containers also provide a mechanism to move files to containers. Any
> further references to this file come from the same container rather than
> the container which is bringing in a new page.

This could be done with memory nodes too - a vma can specify its
memory policy, so binding individual files to nodes shouldn't be hard.

>
> In the future there will be more handlers, like CPU and disk, that can be
> easily embedded into this container infrastructure.

I think that at least the userspace API for adding more handlers would
need to be defined before actually committing a container patch, even
if the kernel code isn't yet extensible. The CKRM/RG interface is a
good start towards this.

Paul
Re: [ckrm-tech] [patch00/05]: Containers(V2)- Introduction [message #6732 is a reply to message #6599] Wed, 20 September 2006 17:42
Paul Menage
On 9/20/06, Christoph Lameter <clameter@sgi.com> wrote:
>
> I think we should have one container mechanism instead of multiple. Maybe
> merge the two? The cpuset functionality is well established and working
> right.

The basic container abstraction provided by cpusets is very nice -
maybe rename it from "cpuset" to "container", since it already
provides access to memory nodes as well as cpus, and could be extended
to handle other resource types too (network, disk).

Paul
Re: [ckrm-tech] [patch00/05]: Containers(V2)- Introduction [message #6733 is a reply to message #6587] Wed, 20 September 2006 17:30
Paul Menage
On 9/20/06, Alan Cox <alan@lxorguk.ukuu.org.uk> wrote:
>
> I'm also not clear how you handle shared pages correctly under the fake
> node system. Can you perhaps explain further how this works for, say,
> a single apache/php/glibc shared page set across 5000 containers, each a
> web site?

If you can associate files with containers, you can have a "shared
libraries" container that the libraries/binaries for apache/php/glibc
are associated with - all pages from those files are then accounted to
the shared container. So you can see that there are 5000 apaches each
using say 10MB privately, and sharing a container with 100MB of file
data. This can also be O(1) in the number of apache containers.
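
A hypothetical sketch of that accounting rule (none of these names come from the actual patches):

```c
/* Hypothetical: charge a page-cache page to the container its file is
 * bound to, falling back to the faulting task's container. All 5000
 * apaches then share one charge for the libraries. */
struct container { long pages; };
struct file_ctx { struct container *ctn; };     /* set when file is bound */
struct task_ctx { struct container *ctn; };

void charge_page(struct file_ctx *f, struct task_ctx *t)
{
        struct container *c = (f && f->ctn) ? f->ctn : t->ctn;

        c->pages++;     /* shared-lib pages land in the shared container,
                           whichever web site faulted them in */
}
```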

Paul
Re: [patch00/05]: Containers(V2)- Introduction [message #6735 is a reply to message #6601] Wed, 20 September 2006 18:37
Peter Zijlstra
On Wed, 2006-09-20 at 10:50 -0700, Rohit Seth wrote:
> On Thu, 2006-09-21 at 03:00 +1000, Nick Piggin wrote:
> > (this time to the lists as well)
> >
> > Peter Zijlstra wrote:
> >
> > > I'd much rather containerize the whole reclaim code, which should not
> > > be too hard since he already adds a container pointer to struct page.
> >
> >
>
Right now the memory handler in this container subsystem is written in
such a way that when the existing kernel reclaimer kicks in, it will
operate on those (containers with pages over the limit) pages first. But
in general I like the notion of containerizing the whole reclaim code.

Patch 5/5 seems to have a horrid deactivation scheme.

> > > I still have to reread what Rohit does for file backed pages, that gave
> > > my head a spin.
>
> Please let me know if there is any specific part that isn't making much
> sense.

Well, the whole over-the-limit handler is quite painful; having taken a
second reading, it isn't all that complex after all, just odd.

You just start invalidating whole files for file-backed pages. Granted,
this will get you below the threshold, but you might just have destroyed
your working set.

Pretty much the same for your anonymous memory handler: you scan through
the pages in linear fashion and demote the first that you encounter.

Both things pretty thoroughly destroy the existing kernel reclaim.
Re: [patch00/05]: Containers(V2)- Introduction [message #6736 is a reply to message #6607] Wed, 20 September 2006 18:27
Peter Zijlstra
On Wed, 2006-09-20 at 11:14 -0700, Rohit Seth wrote:
> On Wed, 2006-09-20 at 20:06 +0200, Peter Zijlstra wrote:
> > On Wed, 2006-09-20 at 10:52 -0700, Christoph Lameter wrote:
> > > On Wed, 20 Sep 2006, Rohit Seth wrote:
> > >
> > > > Right now the memory handler in this container subsystem is written in
> > > > such a way that when the existing kernel reclaimer kicks in, it will
> > > > operate on those (containers with pages over the limit) pages first. But
> > > > in general I like the notion of containerizing the whole reclaim code.
> > >
> > > Which comes naturally with cpusets.
> >
> > How are shared mappings dealt with? Are pages charged to the set that
> > first faults them in?
> >
>
> For anonymous pages (simpler case), they get charged to the faulting
> task's container.
>
> For filesystem pages (could be shared across tasks running different
> containers): Every time a new file mapping is created, it is bound to the
> container of the process creating that mapping. All subsequent pages
> belonging to this mapping will belong to this container, irrespective of
> different tasks running in different containers accessing these pages.
> Currently, I've not implemented a mechanism to allow a file to be
> specifically moved into or out of a container. But when that gets
> implemented, all pages belonging to a mapping will also move out of the
> container (or into a new container).

Yes, I read that in your patches; I was wondering how the cpuset
approach would handle this.

Neither is really satisfactory for shared mappings.
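
For comparison, a hypothetical sketch of the charging rule Rohit describes (names invented here):

```c
/* Hypothetical: the mapping is stamped with its creator's container
 * once; every later page for that mapping is charged there, no matter
 * which task faults it in. Anon pages charge the faulting task. */
struct container { long anon_pages, file_pages; };
struct mapping_ctx { struct container *ctn; };
struct task_ctx { struct container *ctn; };

void mapping_created(struct mapping_ctx *m, struct task_ctx *creator)
{
        m->ctn = creator->ctn;          /* bound once, at creation */
}

void charge_fault(struct mapping_ctx *m, struct task_ctx *t)
{
        if (!m)
                t->ctn->anon_pages++;   /* anon: faulting task pays */
        else
                m->ctn->file_pages++;   /* file: mapping's owner pays */
}
```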
Re: [patch00/05]: Containers(V2)- Introduction [message #6737 is a reply to message #6602] Wed, 20 September 2006 18:06
Peter Zijlstra
On Wed, 2006-09-20 at 10:52 -0700, Christoph Lameter wrote:
> On Wed, 20 Sep 2006, Rohit Seth wrote:
>
> > Right now the memory handler in this container subsystem is written in
> > such a way that when the existing kernel reclaimer kicks in, it will
> > operate on those (containers with pages over the limit) pages first. But
> > in general I like the notion of containerizing the whole reclaim code.
>
> Which comes naturally with cpusets.

How are shared mappings dealt with? Are pages charged to the set that
first faults them in?
Re: [ckrm-tech] [patch00/05]: Containers(V2)- Introduction [message #6738 is a reply to message #6612] Wed, 20 September 2006 18:43
Paul Menage
On 9/20/06, Chandra Seetharaman <sekharan@us.ibm.com> wrote:
> > We already have such functionality in the kernel: it's called a cpuset. A
>
> Christoph,
>
> There had been multiple discussions in the past (as recent as Aug 18,
> 2006), where we (Paul and CKRM/RG folks) have concluded that cpuset and
> resource management are orthogonal features.
>
> cpuset provides "resource isolation", and what we, the resource
> management guys want is work-conserving resource control.

Cpusets provide two things:

- a generic process container abstraction

- "resource controllers" for CPU masks and memory nodes.

Rather than adding a new process container abstraction, wouldn't it
make more sense to change cpuset to make it more extensible (more
separation between resource controllers), possibly rename it to
"containers", and let the various resource controllers fight it out
(e.g. zone/node-based memory controller vs multiple LRU controller,
CPU masks vs a properly QoS-based CPU scheduler, etc.)?

Or more specifically, what would need to be added to cpusets to make
it possible to bolt the CKRM/RG resource controllers on to it?
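
One way to picture that split, as a sketch only; this is not any existing API:

```c
/* Hypothetical: one generic process-container object with resource
 * controllers attached to it, instead of baked-in cpuset semantics. */
struct controller {
        const char *name;                       /* "cpuset", "mem", "cpu"... */
        void (*attach)(void *state, int pid);   /* task joins the group */
        void *state;                            /* controller-private data */
};

struct container {
        struct controller *ctrl[8];             /* attached controllers */
        int nctrl;
};

void container_add_task(struct container *c, int pid)
{
        for (int i = 0; i < c->nctrl; i++)      /* each controller reacts */
                c->ctrl[i]->attach(c->ctrl[i]->state, pid);
}
```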

Paul
Re: [ckrm-tech] [patch00/05]: Containers(V2)- Introduction [message #6740 is a reply to message #6620] Wed, 20 September 2006 19:51
Paul Menage
On 9/20/06, Christoph Lameter <clameter@sgi.com> wrote:
> On Wed, 20 Sep 2006, Peter Zijlstra wrote:
>
> > > Which comes naturally with cpusets.
> >
> > How are shared mappings dealt with? Are pages charged to the set that
> > first faults them in?
>
> They are charged to the node from which they were allocated. If the
> process is restricted to the node (container) then all pages allocated
> are charged to the container regardless of whether they are shared or not.
>

Or you could use the per-vma mempolicy support to bind a large data
file to a particular node, and track shared file usage that way.
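
A sketch of the per-vma policy route using the real mbind(2) call; the file path and node number are made up, and error handling is omitted:

```c
/* Bind a large shared data file's mapping to one (possibly fake) node,
 * so the node's existing counters account its usage. */
#include <fcntl.h>
#include <numaif.h>             /* mbind, MPOL_BIND (libnuma headers) */
#include <sys/mman.h>

int main(void)
{
        int fd = open("/data/shared.db", O_RDONLY);     /* path made up */
        size_t len = 1UL << 30;                         /* say, a 1G file */
        void *p = mmap(0, len, PROT_READ, MAP_SHARED, fd, 0);

        unsigned long nodemask = 1UL << 3;              /* node 3 only */
        mbind(p, len, MPOL_BIND, &nodemask, 8 * sizeof(nodemask), 0);

        /* Pages faulted through this vma now come from node 3. */
        return 0;
}
```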

Paul
Re: [ckrm-tech] [patch00/05]: Containers(V2)- Introduction [message #6745 is a reply to message #6647] Thu, 21 September 2006 00:09
Paul Menage
On 9/20/06, Paul Jackson <pj@sgi.com> wrote:
>
> Yes. There's quite a bit more to cpusets than just some form,
> any form, of CPU and Memory restriction. I can't imagine that
> Containers, in any form, are going to replicate that API.
>

That would be one of the nice aspects of a generic process container
abstraction linked to different resource controllers - you wouldn't
need to replicate the cpuset support, you could use it in parallel
with other resource controllers. (So e.g. use the cpusets support to
pin a group of processes on to a given set of CPU/memory nodes, and
then use the CKRM/RG CPU and disk/IO controllers to limit resource
usage within those nodes)

Paul
Re: [ckrm-tech] [patch00/05]: Containers(V2)- Introduction [message #6748 is a reply to message #6655] Thu, 21 September 2006 00:42
Paul Menage
On 9/20/06, Paul Jackson <pj@sgi.com> wrote:
> Chandra wrote:
> > AFAICS, that doesn't help me in overcommitting resources.
>
> I agree - I don't think cpusets plus fake numa ... handles overcommit.
> You might be able to hack up a cheap substitute, but it wouldn't do the job.

I have some patches locally that basically let you give out a small
set of nodes initially to a cpuset, and if memory pressure in
try_to_free_pages() passes a specified threshold, automatically
allocate one of the parent cpuset's unused memory nodes to the child
cpuset, up to a specified limit. It's a bit ugly, but lets you trade off
performance vs memory footprint on a per-job basis (when combined with
fake numa to give lots of small nodes).
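
A hypothetical sketch of that mechanism (structures invented for illustration):

```c
/* Hypothetical: when a cpuset's reclaim pressure passes its threshold,
 * move one unused fake node from the parent to the child, up to a cap. */
struct cpuset {
        unsigned long nodes;            /* bitmask of owned fake nodes */
        int nnodes, max_nodes;
        int pressure, threshold;        /* fed from try_to_free_pages() */
        struct cpuset *parent;
};

void maybe_grow(struct cpuset *cs)
{
        if (cs->pressure < cs->threshold || cs->nnodes >= cs->max_nodes)
                return;

        for (int n = 0; n < 64; n++) {
                unsigned long bit = 1UL << n;
                if ((cs->parent->nodes & bit) && !(cs->nodes & bit)) {
                        cs->parent->nodes &= ~bit;      /* take from parent */
                        cs->nodes |= bit;               /* grant to child */
                        cs->nnodes++;
                        return;
                }
        }
}
```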

Paul
Re: [ckrm-tech] [Lhms-devel] [patch00/05]: Containers(V2)- Introduction [message #6750 is a reply to message #6663] Thu, 21 September 2006 01:36
Paul Menage
On 9/20/06, KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com> wrote:
> For example, consider the following scenario:
> ==
> (1). add <pid> > /mnt/configfs/containers/my_container/add_task
> (2). <pid> does some work.
> (3). echo <pid> > /mnt/configfs/containers/my_container/rm_task
> (4). echo <pid> > /mnt/configfs/containers/my_container2/add_task
> ==
> (if fake-pgdat/memory-hotplug is used)
> The pages used by <pid> in (2) will still be accounted to 'my_container' after (3).
> Is this a wrong use of the system by the user?

Yes. You can't use memory node partitioning for file pages in this way
unless you have strict controls over who can access the data sets in
question, and are careful to prevent people from moving between
containers. So it's not suitable for all uses of resource-isolating
containers.

Who is to say that the pages allocated in (2) *should* be reaccounted
to my_container2 after (3)? Some people might want that; others
(including me) might not.

Paul
Re: [ckrm-tech] [patch00/05]: Containers(V2)- Introduction [message #6925 is a reply to message #6523] Wed, 27 September 2006 19:50
Chandra Seetharaman
Rohit,

I finally looked into your memory controller patches. Here are some of
the issues I see:

(All points below are in the context of a container's page limit being
hit and the new code starting to free up pages.)

1. LRU is ignored totally, thereby thrashing the working set (as pointed
out by Peter Zijlstra).
2. Frees up file pages first when hitting the page limit, thereby making
vm_swappiness ineffective.
3. Starts writing back pages when the # of file pages is close to the
limit, thereby breaking the current writeback algorithm/logic.
4. MAPPED files are not counted against the page limit. Why? This
affects reclamation behavior and makes vm_swappiness ineffective.
5. Starts freeing up pages from the first task or the first file in the
linked list. This logic unfairly penalizes the early members of the
list.
6. Both active and inactive pages use physical pages. But, the
controller only counts active pages and not inactive pages. Why?
7. Page limit is checked against the sum of (anon and file pages) in
some places and against active pages at some other places. IMO, it
should always be compared to the same value.

BTW, it will be easier to read/follow the patches if you separate them
out by functionality.

regards,

chandra

On Tue, 2006-09-19 at 19:16 -0700, Rohit Seth wrote:
> Containers:
>
> Commodity HW is becoming more powerful. This is giving opportunity to
> run different workloads on the same platform for better HW resource
> utilization. To run different workloads efficiently on the same
> platform, it is critical that we have a notion of limits for each
> workload in the Linux kernel. The current cpuset feature in the Linux
> kernel provides grouping of CPU and memory support to some extent (for
> NUMA machines).
>
> For example, a user can run a batch job like backup inside a container.
> This job, if run unconstrained, could step over most of the memory present
> in the system, thus impacting other workloads running on the system at that
> time. But when the same job is run inside a container, the backup
> job runs within the container limits.
>
> We use the term container to indicate a structure against which we track
> and charge utilization of system resources like memory, tasks, etc. for a
> workload. Containers will allow system admins to customize the
> underlying platform for different applications based on their
> performance and HW resource utilization needs. Containers contain
> enough infrastructure to allow optimal resource utilization without
> bogging down the rest of the kernel. A system admin should be able to
> create, manage and free containers easily.
>
> At the same time, changes in the kernel are minimized so that this support
> can be easily integrated with the mainline kernel.
>
> The user interface for containers is through configfs. Appropriate file
> system privileges are required to do operations on each container.
> Currently implemented container resources are automatically visible to
> user space through /configfs/container/<container_name> after a
> container is created.
>
> Signed-off-by: Rohit Seth <rohitseth@google.com>
>
> Diffstat for the patch set (against linux-2.6.18-rc6-mm2):
>
> Documentation/containers.txt | 65 ++++
> fs/inode.c | 3
> include/linux/container.h | 167 ++++++++++
> include/linux/fs.h | 5
> include/linux/mm_inline.h | 4
> include/linux/mm_types.h | 4
> include/linux/sched.h | 6
> init/Kconfig | 8
> kernel/Makefile | 1
> kernel/container_configfs.c | 440 ++++++++++++++++++++++++++++
> kernel/exit.c | 2
> kernel/fork.c | 9
> mm/Makefile | 2
> mm/container.c | 658 +++++++++++++++++++++++++++++++++++++++++++
> mm/container_mm.c | 512 +++++++++++++++++++++++++++++++++
> mm/filemap.c | 4
> mm/page_alloc.c | 3
> mm/rmap.c | 8
> mm/swap.c | 1
> mm/vmscan.c | 1
> 20 files changed, 1902 insertions(+), 1 deletion(-)
>
> Changes from version 1:
> Fixed the Documentation error
> Fixed the corruption in container task list
> Added the support for showing all the tasks belonging to a container
> through showtask attribute
> moved the Kconfig changes to init directory (from mm)
> Fixed the bug of unregistering container subsystem if we are not able to
> create workqueue
> Better support for handling limits for file pages. This now includes
> support for flushing and invalidating page cache pages.
> Minor other changes.
>
> *****************************************************************
> This patch set has basic container support that includes:
>
> - Create a container using mkdir command in configfs
>
> - Free a container using rmdir command
>
> - Dynamically adjust memory and task limits for container.
>
> - Add/Remove a task to container (given a pid)
>
> - Files are currently added as part of open from a task that already
> belongs to a container.
>
> - Keep track of active, anonymous, mapped and pagecache usage of
> container memory
>
> - Does not allow more than task_limit number of tasks to be created in
> the container.
>
> - The over-the-limit memory handler is called when the number of pages (anon
> + pagecache) exceeds the limit. It is also called when the number of active
> pages exceeds the page limit. Currently, this memory handler scans the
> mappings and tasks belonging to the container (file and anonymous) and tries
> to deactivate pages. If the number of page cache pages is also high
> then it also invalidates mappings. The thought behind this scheme is that it
> is okay for containers to go over their limit as long as they run in a
> degraded manner when they are over their limit. Also, if there is any memory
> pressure then pages belonging to over-the-limit container(s) become
> prime candidates for the kernel reclaimer. The container mutex is also held
> while this handler is working its way through, to prevent any
> further addition of resources (like tasks or mappings) to this
> container. Though it also blocks removal of the same resources from the
> container for that time. It is possible that the over-the-limit page
> handler takes a lot of time if memory pressure on a container is
> continuously very high. The limits, like how long a task should
> schedule out when it hits the memory limit, are also on the lower side at
> present (particularly when it is a memory hogger). But they should be easy
> to change if need be.
>
> - Indicate the number of times the page limit and task limit is hit
>
> - Indicate the tasks (pids) belonging to container.
>
> Below is a one line description for patches that will follow:
>
> [patch01]: Documentation on how to use containers
> (Documentation/container.txt)
>
> [patch02]: Changes in the generic part of kernel code
>
> [patch03]: Container's interface with configfs
>
> [patch04]: Core container support
>
> [patch05]: Over the limit memory handler.
>
> TODO:
>
> - some code (like container_add_task) in mm/container.c should go
> elsewhere.
> - Support adding/removing a file name to container through configfs
> - /proc/pid/container to show the container id (or name)
> - More testing for memory controller. Currently it is possible that
> limits are exceeded. See if a call to reclaim can be easily integrated.
> - Kernel memory tracking (based on patches from BC)
> - Limit on user locked memory
> - Huge memory support
> - Stress testing with containers
> - One shot view of all containers
> - CKRM folks are interested in seeing all processes belonging to a
> container. Add the attribute show_tasks to container.
> - Add logic so that the sum of limits does not exceed appropriate
> system limits.
> - Extend it with other controllers (CPU and Disk I/O)
> - Add flags bits for supporting different actions (like in some cases
> provide a hard memory limit and in some cases it could be soft).
> - Capability to kill processes for the extreme cases.
> ...
>
> This is based on a lot of discussions over the last month or so. I hope this
> patch set is something that we can agree on, and more support can be added
> on top of this. Please provide feedback and add other extensions that
> are useful in the TODO list.
>
> Thanks,
> -rohit
--

----------------------------------------------------------------------
Chandra Seetharaman | Be careful what you choose....
- sekharan@us.ibm.com | .......you may get it.
----------------------------------------------------------------------
...

Re: [ckrm-tech] [patch00/05]: Containers(V2)- Introduction [message #6928 is a reply to message #6925] Wed, 27 September 2006 21:28
Rohit Seth
On Wed, 2006-09-27 at 12:50 -0700, Chandra Seetharaman wrote:
> Rohit,
>
> I finally looked into your memory controller patches. Here are some of
> the issues I see:
> (All points below are in the context of a container's page limit being
> hit and the new code starting to free up pages.)
>
> 1. LRU is ignored totally, thereby thrashing the working set (as pointed
> out by Peter Zijlstra).

As the container goes over the limit, this algorithm deactivates some of
the pages. I agree that the logic to find out the correct pages to
deactivate needs to be improved. But the idea is that these pages go to
the front of the inactive list so that if there is any memory pressure
system-wide then these pages can easily be reclaimed.
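
A toy sketch of that movement; the list handling is a stand-in, not the kernel's LRU code:

```c
/* Hypothetical: an over-limit container pushes a page to the spot on
 * the inactive list where, per the description, reclaim will find it
 * early if there is system-wide memory pressure. */
struct page { struct page *prev, *next; };
struct lru  { struct page *inactive_head; };

void deactivate_page(struct lru *lru, struct page *pg)
{
        pg->prev = 0;
        pg->next = lru->inactive_head;  /* place at the front */
        if (lru->inactive_head)
                lru->inactive_head->prev = pg;
        lru->inactive_head = pg;
}
```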

> 2. Frees up file pages first when hitting the page limit, thereby making
> vm_swappiness ineffective.

Not sure if I understood this part correctly. But the choice when the
container goes over its limit is between swapping out some of the anonymous
memory first or writing back some of the dirty file pages belonging to this
container.

> 3. Starts writing back pages when the # of file pages is close to the
> limit, thereby breaking the current writeback algorithm/logic.

That is done so as to ensure processes belonging to the container (whose
limit is hit) are the first ones getting penalized. For example, if you
run a tar in a container with a 100MB limit then the dirty file pages will
be written back to disk when the 100MB limit is hit. Though I will be
adding a HARD_LIMIT-on-page-cache flag, and the strict limit will only be
maintained if this container flag is set.

> 4. MAPPED files are not counted against the page limit. Why? This
> affects reclamation behavior and makes vm_swappiness ineffective.

num_mapped_pages only indicates how many page cache pages are mapped in
user page tables. More of an accounting variable.

> 5. Starts freeing up pages from the first task or the first file in the
> linked list. This logic unfairly penalizes the early members of the
> list.

This is the part that I have to fix. Some per-container variables that
remember the last values will help here.

> 6. Both active and inactive pages use physical pages. But, the
> controller only counts active pages and not inactive pages. Why?

The thought is, it is okay for a container to go over its limit as long
as there is enough memory in the system. When there is any memory
pressure then the inactive (+ dereferenced) pages get swapped out, thus
penalizing the container. I'm also thinking of having a hard limit for
anonymous pages beyond which the container will not be able to grow its
anonymous pages.

> 7. Page limit is checked against the sum of (anon and file pages) in
> some places and against active pages at some other places. IMO, it
> should always be compared to the same value.
>
It is checked against the sum of anon+file pages at the time when a new page
is getting allocated. But as the reclaimer activates pages, it is
also important to make sure the number of active pages is not going
above its limit.

Thanks for your comments,
-rohit
Re: [ckrm-tech] [patch00/05]: Containers(V2)- Introduction [message #6929 is a reply to message #6928] Wed, 27 September 2006 22:24
Chandra Seetharaman
On Wed, 2006-09-27 at 14:28 -0700, Rohit Seth wrote:

Rohit,

For 1-4, I understand the rationale. But your implementation deviates
from the current behavior of the VM subsystem, which could affect these
patches' chances of getting into mainline.

IMO, the current behavior in terms of reclamation, LRU, vm_swappiness,
and writeback logic should be maintained.

> On Wed, 2006-09-27 at 12:50 -0700, Chandra Seetharaman wrote:
> > Rohit,
> >
> > I finally looked into your memory controller patches. Here are some of
> > the issues I see:
> > (All points below are in the context of a container's page limit being
> > hit and the new code starting to free up pages.)
> >
> > 1. LRU is ignored totally, thereby thrashing the working set (as pointed
> > out by Peter Zijlstra).
>
> As the container goes over the limit, this algorithm deactivates some of
> the pages. I agree that the logic to find out the correct pages to
> deactivate needs to be improved. But the idea is that these pages go to
> the front of the inactive list so that if there is any memory pressure
> system-wide then these pages can easily be reclaimed.
>
> > 2. Frees up file pages first when hitting the page limit, thereby making
> > vm_swappiness ineffective.
>
> Not sure if I understood this part correctly. But the choice when the
> container goes over its limit is between swapping out some of the anonymous
> memory first or writing back some of the dirty file pages belonging to this
> container.
>
> > 3. Starts writing back pages when the # of file pages is close to the
> > limit, thereby breaking the current writeback algorithm/logic.
>
> That is done so as to ensure processes belonging to the container (whose
> limit is hit) are the first ones getting penalized. For example, if you
> run a tar in a container with a 100MB limit then the dirty file pages will
> be written back to disk when the 100MB limit is hit. Though I will be
> adding a HARD_LIMIT-on-page-cache flag, and the strict limit will only be
> maintained if this container flag is set.
>
> > 4. MAPPED files are not counted against the page limit. Why? This
> > affects reclamation behavior and makes vm_swappiness ineffective.
>
> num_mapped_pages only indicates how many page cache pages are mapped in
> user page tables. More of an accounting variable.

But, # of mapped pages is used in the reclamation path logic. This set
of patches doesn't take them into account.

>
> > 5. Starts freeing up pages from the first task or the first file in the
> > linked list. This logic unfairly penalizes the early members of the
> > list.
>
> This is the part that I have to fix. Some per-container variables that
> remember the last values will help here.

Yes, that will help in fairness between the items in the list.

But, it will still suffer from (1) above, as we would have no idea of
the current working set (LRU) (within an item or among the items).

>
> > 6. Both active and inactive pages use physical pages. But, the
> > controller only counts active pages and not inactive pages. Why?
>
> The thought is, it is okay for a container to go over its limit as long

The real number of "physical pages" used by the container is the sum of
active and inactive pages.

My question is, shouldn't that be used to check against the page limit
instead of active pages alone?

How do we describe the "page limit" to the user?

> as there is enough memory in the system. When there is any memory
> pressure then the inactive (+ dereferenced) pages get swapped out, thus
> penalizing the container. I'm also thinking of having a hard limit for

Reclamation goes through active pages and page cache pages before it
gets into inactive pages. So, this may not work as you are explaining.

> anonymous pages beyond which the container will not be able to grow its
> anonymous pages.

You might break the current behavior (memory pressure must be very high
before these start failing) if you are going to be strict about it.

>
> > 7. Page limit is checked against the sum of (anon and file pages) in
> > some places and against active pages at some other places. IMO, it
> > should always be compared to the same value.
> >
> It is checked against the sum of anon+file pages at the time when a new page

Why can't we check against active pages here?

> is getting allocated. But as the reclaimer activates pages, it is
> also important to make sure the number of active pages is not going
> above its limit.

My point is that they won't be the same (ever) and hence the check is
inconsistent.

Also, if it is this way, how do we describe the purpose of the "page
limit" (for the user)?

--

----------------------------------------------------------------------
Chandra Seetharaman | Be careful what you choose....
- sekharan@us.ibm.com | .......you may get it.
----------------------------------------------------------------------
Re: [ckrm-tech] [patch00/05]: Containers(V2)- Introduction [message #6941 is a reply to message #6929] Thu, 28 September 2006 08:01
Balbir Singh
Chandra Seetharaman wrote:
> On Wed, 2006-09-27 at 14:28 -0700, Rohit Seth wrote:
>
> Rohit,
>
> For 1-4, I understand the rationale. But, your implementation deviates
> from the current behavior of the VM subsystem, which could affect the
> chances of these patches getting into mainline.
>
> IMO, the current behavior in terms of reclamation, LRU, vm_swappiness,
> and writeback logic should be maintained.
>

<snip>

Hi, Rohit,

I have been playing around with the containers patch. I finally got
around to reading the code.


1. Comments on reclaiming

You could try the following options to overcome some of the disadvantages of the
current scheme.

(a) You could consider a reclaim path based on Dave Hansen's Challenged memory
controller (see http://marc.theaimsgroup.com/?l=linux-mm&m=115566982532345&w=2).

(b) The other option is to do what the resource group memory controller does -
build a per group LRU list of pages (active, inactive) and reclaim
them using the existing code (by passing the correct container pointer,
instead of the zone pointer). One disadvantage of this approach is that
the global reclaim is impacted as the global LRU list is broken. At the
expense of another list, we could maintain two lists, global LRU and
container LRU lists. Depending on the context of the reclaim - (container
over limit, memory pressure) we could update/manipulate both lists.
This approach is definitely very expensive.
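
A rough sketch of the data structure option (b) implies (illustrative
only; the field names are assumed, not the actual resource group code):

	struct container {
		spinlock_t lru_lock;		/* guards the lists below */
		struct list_head active_list;	/* per-container active LRU */
		struct list_head inactive_list;	/* per-container inactive LRU */
		unsigned long nr_active;
		unsigned long nr_inactive;
		unsigned long page_limit;
	};

The existing shrink logic would then walk these lists (instead of the
zone's) when a container goes over its limit.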

2. Comments on task migration support

(a) One of the issues I found while using the container code is that one could
    add a task to a container, say "a", and "a" gets charged for the task's
    usage. When the same task moves to a different container, say "b", and
    then exits, the credit goes to "b" while "a" remains indefinitely
    charged. (A rough sketch of a possible fix follows this list.)

(b) For tasks addition and removal, I think it's probably better to move
the entire process (thread group) rather than allow each individual thread
to move across containers. Having threads belonging to the same process
reside in different containers can be complex to handle, since they
share the same VM. Do you have a scenario where the above condition
would be useful?
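
A possible shape of a fix for (a) -- purely illustrative; the fields and
helpers below are made up, not the patch's:

	static void container_move_task(struct task_struct *tsk,
					struct container *from,
					struct container *to)
	{
		long pages = task_anon_pages(tsk);	/* assumed helper */

		spin_lock(&from->lock);
		from->nr_pages -= pages;	/* credit the old container */
		spin_unlock(&from->lock);

		spin_lock(&to->lock);
		to->nr_pages += pages;		/* charge the new one */
		spin_unlock(&to->lock);

		tsk->container = to;		/* assumed task field */
	}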


--

Warm Regards,
Balbir Singh,
Linux Technology Center,
IBM Software Labs

PS: Chandra, I hope the details of the resource group memory controller
are correct.
Re: [ckrm-tech] [patch00/05]: Containers(V2)- Introduction [message #6962 is a reply to message #6929] Thu, 28 September 2006 18:12 Go to previous messageGo to next message
Rohit Seth is currently offline  Rohit Seth
Messages: 101
Registered: August 2006
Senior Member
On Wed, 2006-09-27 at 15:24 -0700, Chandra Seetharaman wrote:
> On Wed, 2006-09-27 at 14:28 -0700, Rohit Seth wrote:
>
> Rohit,
>
> For 1-4, I understand the rationale. But, your implementation deviates
> from the current behavior of the VM subsystem, which could affect the
> chances of these patches getting into mainline.
>

I agree that this implementation differs from the existing VM subsystem.
But the key point here is, it puts up the pages that should be reclaimed.
And this part needs further refining.

> IMO, the current behavior in terms of reclamation, LRU, vm_swappiness,
> and writeback logic should be maintained.
>

How? I don't want to duplicate the whole logic for containers.

> > On Wed, 2006-09-27 at 12:50 -0700, Chandra Seetharaman wrote:
> > > Rohit,
> > >
> > > I finally looked into your memory controller patches. Here are some of
> > > the issues I see:
> > > (All points below are in the context of a container's page limit being
> > > hit and the new code starting to free up pages)
> > >
> > > 1. LRU is ignored totally thereby thrashing the working set (as pointed
> > > by Peter Zijlstra).
> >
> > As the container goes over the limit, this algorithm deactivates some of
> > the pages. I agree that the logic to find out the correct pages to
> > deactivate needs to be improved. But the idea is that these pages go to
> > the front of the inactive list so that if there is any system-wide memory
> > pressure then these pages can easily be reclaimed.
> >
> > > 2. Frees up file pages first when hitting the page limit thereby making
> > > vm_swappiness ineffective.
> >
> > Not sure if I understood this part correctly. But the choice when the
> > container goes over its limit is between swapping out some of the anonymous
> > memory first or writing back some of the dirty file pages belonging to this
> > container.
> >
> > > 3. Starts writing back pages when the # of file pages is close to the
> > > limit, thereby breaking the current writeback algorithm/logic.
> >
> > That is done so as to ensure processes belonging to the container (whose
> > limit is hit) are the first ones getting penalized. For example, if you
> > run a tar in a container with a 100MB limit then the dirty file pages will
> > be written back to disk when the 100MB limit is hit. Though I will be
> > adding a HARD_LIMIT on page cache flag, and the strict limit will only be
> > maintained if this container flag is set.
> >
> > > 4. MAPPED files are not counted against the page limit. why ?. This
> > > affects reclamation behavior and makes vm_swappiness ineffective.
> >
> > num_mapped_pages only indicates how many page cache pages are mapped in
> > user page tables. More of an accounting variable.
>
> But, the # of mapped pages is used in the reclamation path logic. This set
> of patches doesn't take them into account.
>
> >
> > > 5. Starts freeing up pages from the first task or the first file in the
> > > linked list. This logic unfairly penalizes the early members of the
> > > list.
> >
> This is the part that I've to fix. Some per-container variables that
> remember the last values will help here.
>
> Yes, that will help in fairness between the items in the list.
>
> But, it will still suffer from (1) above, as we would have no idea of
> the current working set (LRU) (within an item or among the items).
>

Please let me know how you propose to have another LRU for pages in
containers. Though I can add some heuristics.

> >
> > > 6. Both active and inactive pages use physical pages. But, the
> > > controller only counts active pages and not inactive pages. why ?
> >
> > The thought is, it is okay for a container to go over its limit as long
>
> Real number of "physical pages" used by the container is the sum of
> active and inactive pages.
>

From the user pov, the real sum of pages that are used by a container for
userland is anon + file. Now sometimes it is possible that there are
active pages that are neither in the page cache nor in use as anon.


> My question is, shouldn't that be used to check against page limit
> instead of active pages alone ?
I can use active+inactive as the test. Sure. But I will have to also
still have a check to make sure that the number of active pages itself
is not bigger than page_limit.

> How do we describe "page limit" (to the user)?
>

Amount of memory below which no container throttling will happen. And
if the system is properly configured, it also ensures that this much
memory will always be available to the user. If a container goes over this
limit then it will be throttled and will suffer a performance hit.

> > as there is enough memory in the system. When there is any memory
> > pressure then the inactive (+ dereferenced) pages get swapped out thus
> > penalizing the container. I'm also thinking of having hard limit for
>
> Reclamation goes through active pages and page cache pages before it
> gets into inactive pages. So, this may not work as you are explaining.

That is a good point. I'll have to make a check in reclaim so that when
the system is ready for swap or write back, containers are looked at
first.

>
> > anonymous pages beyond which the container will not be able to grow its
> > anonymous pages.
>
> You might break the current behavior (memory pressure must be very high
> before these start failing) if you are going to be strict about it.
>

That feature, when implemented, will be container specific.

> >
> > > 7. Page limit is checked against the sum of (anon and file pages) in
> > > some places and against active pages at some other places. IMO, it
> > > should be always compared to the same value.
> > >
> > It is checked against sum of anon+file pages at the time when new pages
>
> why can't we check against active pages here ?
>
> > are getting allocated. But as the reclaimer activates pages, it is
> > also important to make sure the number of active pages is not going
> > above its limit.
>
> My point is that they won't be the same (ever) and hence the check is
> inconsistent.
>

The check ensures:
1- when a new page is being added, the total sum of pages is
checked against the limit.
2- the number of active pages doesn't exceed the limit.

These two points combined enforce the decision that once the
container goes over the limit, we scan the pages again to deactivate the
excess.
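
In code terms, the two checks would look roughly like this (illustrative
only; the fields and helpers are assumed, not the actual patch code):

	/* 1. On allocation: total anon + file stays under the limit. */
	if (cont->nr_anon_pages + cont->nr_file_pages >= cont->page_limit)
		container_shrink(cont);		/* write back / deactivate */

	/* 2. After the reclaimer activates pages: active alone is bounded. */
	if (cont->nr_active_pages > cont->page_limit)
		container_deactivate_excess(cont);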

Thanks,
-rohit
Re: [ckrm-tech] [patch00/05]: Containers(V2)- Introduction [message #6965 is a reply to message #6941] Thu, 28 September 2006 18:31 Go to previous messageGo to next message
Rohit Seth is currently offline  Rohit Seth
Messages: 101
Registered: August 2006
Senior Member
On Thu, 2006-09-28 at 13:31 +0530, Balbir Singh wrote:
> Chandra Seetharaman wrote:
> > On Wed, 2006-09-27 at 14:28 -0700, Rohit Seth wrote:
> >
> > Rohit,
> >
> > For 1-4, I understand the rationale. But, your implementation deviates
> > from the current behavior of the VM subsystem, which could affect the
> > chances of these patches getting into mainline.
> >
> > IMO, the current behavior in terms of reclamation, LRU, vm_swappiness,
> > and writeback logic should be maintained.
> >
>
> <snip>
>
> Hi, Rohit,
>
> I have been playing around with the containers patch. I finally got
> around to reading the code.
>
>
> 1. Comments on reclaiming
>
> You could try the following options to overcome some of the disadvantages of the
> current scheme.
>
> (a) You could consider a reclaim path based on Dave Hansen's Challenged memory
> controller (see http://marc.theaimsgroup.com/?l=linux-mm&m=115566982532345&w=2).
>

I will go through that. Did you get a chance to stress the system and
find any shortcomings that should be resolved?

> (b) The other option is to do what the resource group memory controller does -
> build a per group LRU list of pages (active, inactive) and reclaim
> them using the existing code (by passing the correct container pointer,
> instead of the zone pointer). One disadvantage of this approach is that
> the global reclaim is impacted as the global LRU list is broken. At the
> expense of another list, we could maintain two lists, global LRU and
> container LRU lists. Depending on the context of the reclaim - (container
> over limit, memory pressure) we could update/manipulate both lists.
> This approach is definitely very expensive.
>

Two LRUs is a nice idea, though I don't think it will go too far. It
will involve adding another set of list pointers in the page structure. I
agree that the mem handler is not optimal at all, but I don't want to
make it mimic the kernel reclaimer at the same time.
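
Concretely, the cost being pointed at (sketch only, not actual kernel code):

	struct page {
		/* ... existing fields ... */
		struct list_head lru;		/* existing global LRU linkage */
		struct list_head container_lru;	/* two extra pointers for every
						 * page in the system, ~16 bytes
						 * each on 64-bit */
	};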

> 2. Comments on task migration support
>
> (a) One of the issues I found while using the container code is that one could
> add a task to a container, say "a", and "a" gets charged for the task's
> usage. When the same task moves to a different container, say "b", and
> then exits, the credit goes to "b" while "a" remains indefinitely charged.
>
hmm, when the task is removed from "a" then "a" gets the credits for the
amount of anon memory that is used by the task. Or do you mean
something different?

> (b) For tasks addition and removal, I think it's probably better to move
> the entire process (thread group) rather than allow each individual thread
> to move across containers. Having threads belonging to the same process
> reside in different containers can be complex to handle, since they
> share the same VM. Do you have a scenario where the above condition
> would be useful?
>
>
I don't have a scenario where a task actually gets to move out of a
container (except at exit). That asynchronous removal of tasks has already
made the code very complicated with respect to locking, etc. But if you
think moving a thread group is useful then I will add that functionality.

Thanks,
-rohit
Re: [ckrm-tech] [patch00/05]: Containers(V2)- Introduction [message #6970 is a reply to message #6962] Thu, 28 September 2006 20:23 Go to previous messageGo to next message
Chandra Seetharaman is currently offline  Chandra Seetharaman
Messages: 88
Registered: August 2006
Member
On Thu, 2006-09-28 at 11:12 -0700, Rohit Seth wrote:
> On Wed, 2006-09-27 at 15:24 -0700, Chandra Seetharaman wrote:
> > On Wed, 2006-09-27 at 14:28 -0700, Rohit Seth wrote:
> >
> > Rohit,
> >
> > For 1-4, I understand the rationale. But, your implementation deviates
> > from the current behavior of the VM subsystem, which could affect the
> > chances of these patches getting into mainline.
> >
>
> > I agree that this implementation differs from the existing VM subsystem.
> > But the key point here is, it puts up the pages that should be reclaimed.

But, you are putting the pages up for reclamation without any
consideration of the working set or system memory pressure.

> And this part needs further refining.
>
> > IMO, the current behavior in terms of reclamation, LRU, vm_swappiness,
> > and writeback logic should be maintained.
> >
>
> How? I don't want to duplicate the whole logic for containers.

We don't have to duplicate the whole logic. Just make sure that the
existing mechanisms are aware of containers, if they exist.
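
For instance, the existing scan could consult a small predicate before
deciding a page's fate (hypothetical names, not from any posted patch):

	/* Prefer reclaiming pages whose container is over its limit. */
	static int page_in_overlimit_container(struct page *page)
	{
		struct container *cont = page_container(page);	/* assumed */

		return cont && container_over_limit(cont);	/* assumed */
	}

That keeps a single LRU and a single reclaimer, just biased by container
state.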

<snip>

> >
> > But, it will still suffer from (1) above, as we would have no idea of
> > the current working set (LRU) (within an item or among the items).
> >
>
> Please let me know how you propose to have another LRU for pages in
> containers. Though I can add some heuristics.

There are multiple ways, as Balbir pointed out in his email:
- reclamation per container (as in current RG implementation)
( + do a system wide reclaim when the system pressure is high)
- reclaim with the knowledge of containers that are over limit
(Dave Hansen's patches + avoid overhead of combing the list)
- have two lists one for the system and one per container

>
> > >
> > > > 6. Both active and inactive pages use physical pages. But, the
> > > > controller only counts active pages and not inactive pages. why ?
> > >
> > > The thought is, it is okay for a container to go over its limit as long
> >
> > Real number of "physical pages" used by the container is the sum of
> > active and inactive pages.
> >
>
> From the user pov, the real sum of pages that are used by a container for
> userland is anon + file. Now sometimes it is possible that there are
> active pages that are neither in the page cache nor in use as anon.
>
>
> > My question is, shouldn't that be used to check against page limit
> > instead of active pages alone ?
> I can use active+inactive as the test. Sure. But I will have to also
> still have a check to make sure that the number of active pages itself
> is not bigger than page_limit.
>
> > How do we describe "page limit" (to the user)?
> >
>
> Amount of memory below which no container throttling will happen. And

But, from the implementation one cannot clearly derive what we mean by
"memory" here (physical? file + anon? If we say physical, it is not
correct).

> if the system is properly configured, it also ensures that this much
> memory will always be available to the user. If a container goes over this
> limit then it will be throttled and will suffer a performance hit.

But, the user's expectation would be that we would be throwing out pages
based on LRU (within that container). But this implementation doesn't
provide that behavior. It doesn't care about the working set.

The performance impact will be smaller if we consider the working set and
throw out pages based on LRU (within a container).

>
> > > as there is enough memory in the system. When there is any memory
> > > pressure then the inactive (+ dereferenced) pages get swapped out thus
> > > penalizing the container. I'm also thinking of having hard limit for
> >
> > Reclamation goes through active pages and page cache pages before it
> > gets into inactive pages. So, this may not work as you are explaining.
>
> That is a good point. I'll have to make a check in reclaim so that when
> the system is ready for swap or write back, containers are looked at
> first.
>
> >
> > > anonymous pages beyond which the container will not be able to grow its
> > > anonymous pages.
> >
> > You might break the current behavior (memory pressure must be very high
> > before these start failing) if you are going to be strict about it.
> >
>
> That feature, when implemented, will be container specific.

My point is, even though it is container specific, the behavior (inside
a container) should be the same as what a user sees at the system level now.

For example, consider a workload that is run on a 1G system now, and
the user sees only occasional memory allocation failures and just a
handful of OOM kills. When the workload is moved to a container with 1G,
the failures the user sees should be of the same order (with similar
performance characteristics).

Do you agree that it will be the user's expectation ?

>
> > >
> > > > 7. Page limit is checked against the sum of (anon and file pages) in
> > > > some places and against active pages at some other places. IMO, it
> > > > should be always compared to the same value.
> > > >
> > > It is checked against sum of anon+file pages at the time when new pages
> >
> > why can't we check against active pages here ?
> >
> > > are getting allocated. But as the reclaimer activates pages, it is
> > > also important to make sure the number of active pages is not going
> > > above its limit.
> >
> > My point is that they won't be the same (ever) and hence the check is
> > inconsistent.
> >
>
> The check ensures:
> 1- when a new page is being added, the total sum of pages is
> checked against the limit.
> 2- the number of active pages doesn't exceed the limit.
>
> These two points combined enforce the decision that once the
> container goes over the limit, we scan the pages again to deactivate the
> excess.

Again, I understand the rationale. But it is not consistent.
>
> Thanks,
> -rohit
>
--

----------------------------------------------------------------------
Chandra Seetharaman | Be careful what you choose....
- sekharan@us.ibm.com | .......you may get it.
----------------------------------------------------------------------
Re: [ckrm-tech] [patch00/05]: Containers(V2)- Introduction [message #6973 is a reply to message #6970] Thu, 28 September 2006 21:38 Go to previous messageGo to previous message
Rohit Seth is currently offline  Rohit Seth
Messages: 101
Registered: August 2006
Senior Member
On Thu, 2006-09-28 at 13:23 -0700, Chandra Seetharaman wrote:
> On Thu, 2006-09-28 at 11:12 -0700, Rohit Seth wrote:
> > On Wed, 2006-09-27 at 15:24 -0700, Chandra Seetharaman wrote:
> > > On Wed, 2006-09-27 at 14:28 -0700, Rohit Seth wrote:
> > >
> > > Rohit,
> > >
> > > For 1-4, I understand the rationale. But, your implementation deviates
> > > from the current behavior of the VM subsystem, which could affect the
> > > chances of these patches getting into mainline.
> > >
> >
> > I agree that this implementation differs from the existing VM subsystem.
> > But the key point here is, it puts up the pages that should be reclaimed.
>
> But, you are putting the pages up for reclamation without any
> consideration of the working set or system memory pressure.
>

And I agree with you that some heuristics need to be added there to make
that algorithm behave better.

> > And this part needs further refining.
> >
> > > IMO, the current behavior in terms of reclamation, LRU, vm_swappiness,
> > > and writeback logic should be maintained.
> > >
> >
> > How? I don't want to duplicate the whole logic for containers.
>
> We don't have to duplicate the whole logic. Just make sure that the
> existing mechanisms are aware of containers, if they exist.
>

The next version is going to have hooks in the kernel reclaim path for
containers. But that will still not make it close to what the normal
reclaim path does for pages outside containers.

> <snip>
>
> > >
> > > But, it will still suffer from (1) above, as we would have no idea of
> > > the current working set (LRU) (within an item or among the items).
> > >
> >
> > Please let me know how you propose to have another LRU for pages in
> > containers. Though I can add some heuristics.
>
> There are multiple ways, as Balbir pointed out in his email:
> - reclamation per container (as in current RG implementation)
> ( + do a system wide reclaim when the system pressure is high)
> - reclaim with the knowledge of containers that are over limit
> (Dave Hansen's patches + avoid overhead of combing the list)
> - have two lists one for the system and one per container
>

I will look at Dave's patch. Having two different lists is not the right
approach. I will add some reclaim logic in the kernel reclaim path.

Any idea why the current RG implementation is not in mainline? Any effort
in reviving it and getting it into Andrew's tree?

> >
> > > >
> > > > > 6. Both active and inactive pages use physical pages. But, the
> > > > > controller only counts active pages and not inactive pages. why ?
> > > >
> > > > The thought is, it is okay for a container to go over its limit as long
> > >
> > > Real number of "physical pages" used by the container is the sum of
> > > active and inactive pages.
> > >
> >
> > From the user pov, the real sum of pages that are used by a container for
> > userland is anon + file. Now sometimes it is possible that there are
> > active pages that are neither in the page cache nor in use as anon.
> >
> >
> > > My question is, shouldn't that be used to check against page limit
> > > instead of active pages alone ?
> > I can use active+inactive as the test. Sure. But I will have to also
> > still have a check to make sure that the number of active pages itself
> > is not bigger than page_limit.
> >
> > > How do we describe "page limit" (to the user)?
> > >
> >
> > Amount of memory below which no container throttling will happen. And
>
> But, from the implementation one cannot clearly derive what we mean by
> "memory" here (physical? file + anon? If we say physical, it is not
> correct).
>

It is mostly correct, i.e. anon+file == total user physical memory for
the container, except for the corner cases when there could be stale
pagecache pages that are no longer in the page cache but still on the
LRU (not yet reclaimed).

> > if the system is properly configured, it also ensures that this much
> > memory will always be available to the user. If a container goes over this
> > limit then it will be throttled and will suffer a performance hit.
>
> But, the user's expectation would be that we would be throwing out pages
> based on LRU (within that container). But this implementation doesn't
> provide that behavior. It doesn't care about the working set.
>

IMO, the user is expected to live inside the limits when containers are
defined. If the limits are exceeded then some performance impact will
happen. Having said that, I would still like to get some
optimizations into the memory handler so that more appropriate pages are
deactivated.

> The performance impact will be smaller if we consider the working set and
> throw out pages based on LRU (within a container).
>
I don't deny it. But two separate LRUs is not an option.

> >
> > > > as there is enough memory in the system. When there is any memory
> > > > pressure then the inactive (+ dereferenced) pages get swapped out thus
> > > > penalizing the container. I'm also thinking of having hard limit for
> > >
> > > Reclamation goes through active pages and page cache pages before it
> > > gets into inactive pages. So, this may not work as you are explaining.
> >
> > That is a good point. I'll have to make a check in reclaim so that when
> > the system is ready for swap or write back, containers are looked at
> > first.
> >
> > >
> > > > anonymous pages beyond which the container will not be able to grow its
> > > > anonymous pages.
> > >
> > > You might break the current behavior (memory pressure must be very high
> > > before these start failing) if you are going to be strict about it.
> > >
> >
> > That feature, when implemented, will be container specific.
>
> My point is, even though it is container specific, the behavior (inside
> a container) should be the same as what a user sees at the system level now.
>
> For example, consider a workload that is run on a 1G system now, and
> the user sees only occasional memory allocation failures and just a
> handful of OOM kills. When the workload is moved to a container with 1G,
> the failures the user sees should be of the same order (with similar
> performance characteristics).
>
> Do you agree that it will be the user's expectation ?
>

That would be a nice-to-have feature. And for that, any container
implementation will have to be as tightly intertwined with the rest of
the VM as cpusets are.

thanks,
-rohit
>