[PATCH 0/2] dm-band: The I/O bandwidth controller: Overview [message #26434] Wed, 23 January 2008 12:53
Ryo Tsuruta
Hi everyone,

I'm happy to announce that I've implemented a Block I/O bandwidth controller.
The controller is designed to be of use in a cgroup or virtual machine
environment. In the current approach, the controller is implemented as
a device-mapper driver.

What's dm-band all about?
========================
Dm-band is an I/O bandwidth controller implemented as a device-mapper driver.
Several jobs using the same physical device have to share the bandwidth of
the device. Dm-band gives bandwidth to each job according to its weight,
which each job can set to its own value.

At this time, a job is a group of processes with the same pid, pgrp or uid.
There is also a plan to make it support cgroups. A job can also be a virtual
machine such as KVM or Xen.

  +------+ +------+ +------+   +------+ +------+ +------+ 
  |cgroup| |cgroup| | the  |   | pid  | | pid  | | the  |  jobs
  |  A   | |  B   | |others|   |  X   | |  Y   | |others| 
  +--|---+ +--|---+ +--|---+   +--|---+ +--|---+ +--|---+   
  +--V----+---V---+----V---+   +--V----+---V---+----V---+   
  | group | group | default|   | group | group | default|  band groups
  |       |       |  group |   |       |       |  group | 
  +-------+-------+--------+   +-------+-------+--------+
  |         band1          |   |         band2          |  band devices
  +-----------|------------+   +-----------|------------+
  +-----------V--------------+-------------V------------+
  |                          |                          |
  |          sdb1            |           sdb2           |  physical devices
  +--------------------------+--------------------------+


How dm-band works.
========================
Every band device has one band group, which by default is called the default
group.

Band devices can also have extra band groups in them. Each band group
has a job to support and a weight. Proportional to the weight, dm-band gives
tokens to the group.

A group passes on I/O requests that its job issues to the underlying
layer as long as it has tokens left, while requests are blocked
if there aren't any tokens left in the group. One token is consumed each
time the group passes on a request. Dm-band refills all the groups with
tokens once every group with requests on a given physical device has used
up its tokens.

With this approach, a job running on a band group with large weight is
guaranteed to be able to issue a large number of I/O requests.
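
For example (a worked sketch based on the set_weight() arithmetic in the
patch that follows, using the driver's default token base of 2048 and two
hypothetical groups): with weights 40 and 10, each refill gives the groups
2048*40/50+1 = 1639 and 2048*10/50+1 = 410 tokens respectively, so the first
group can pass on roughly four times as many BIOs per epoch.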


Getting started
=============
The following is a brief description of how to control the I/O bandwidth of
disks. In this description, we'll take one disk with two partitions as an
example target.

You can also check the manual at Documentation/device-mapper/band.txt in the
Linux kernel source tree for more information.


Create and map band devices
---------------------------
Create two band devices "band1" and "band2" and map them to "/dev/sda1"
and "/dev/sda2" respectively.

 # echo "0 `blockdev --getsize /dev/sda1` band /dev/sda1 1" | dmsetup create band1
 # echo "0 `blockdev --getsize /dev/sda2` band /dev/sda2 1" | dmsetup create band2

If the commands are successful then the device files "/dev/mapper/band1"
and "/dev/mapper/band2" will have been created.


Bandwidth control
----------------
In this example weights of 40 and 10 will be assigned to "band1" and
"band2" respectively. This is done using the following commands:

 # dmsetup message band1 0 weight 40
 # dmsetup message band2 0 weight 10

After these commands, "band1" can use 80% --- 40/(40+10)*100 --- of the
bandwidth of the physical disk "/dev/sda" while "band2" can use 20%.
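
One rough way to observe the split (a hedged sketch; dm-band doesn't require
this, and the block counts here are arbitrary) is to run identical direct-I/O
readers on both band devices at once and compare their throughput:

 # dd if=/dev/mapper/band1 of=/dev/null bs=4k count=100000 iflag=direct &
 # dd if=/dev/mapper/band2 of=/dev/null bs=4k count=100000 iflag=direct &
 # wait

With both readers keeping the disk busy, the reported transfer rates should
reflect roughly the 80%/20% ratio.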


Additional bandwidth control
---------------------------
In this example two extra band groups are created on "band1".
The first group consists of all the processes with user-id 1000 and the
second group consists of all the processes with user-id 2000. Their
weights are 30 and 20 respectively.

First, the band group type of "band1" is set to "user".
Then, the user-id 1000 and 2000 groups are attached to "band1".
Finally, weights are assigned to the user-id 1000 and 2000 groups.

 # dmsetup message band1 0 type user
 # dmsetup message band1 0 attach 1000
 # dmsetup message band1 0 attach 2000
 # dmsetup message band1 0 weight 1000:30
 # dmsetup message band1 0 weight 2000:20

Now the processes in the user-id 1000 group can use 30% ---
30/(30+20+40+10)*100 --- of the bandwidth of the physical disk.

 Band Device    Band Group                     Weight
  band1         user id 1000                     30
  band1         user id 2000                     20
  band1         default group (the other users)  40
  band2         default group                    10
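
To exercise the per-user groups above, one possible check (a sketch assuming
users "user1" and "user2" with uids 1000 and 2000 exist and are permitted to
read the device) is to issue direct I/O as both users at once and compare
the reported throughput:

 # su user1 -c "dd if=/dev/mapper/band1 of=/dev/null bs=4k count=50000 iflag=direct" &
 # su user2 -c "dd if=/dev/mapper/band1 of=/dev/null bs=4k count=50000 iflag=direct" &
 # wait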


Remove band devices
-------------------
Remove the band devices when no longer used.

  # dmsetup remove band1
  # dmsetup remove band2


TODO
========================
  - Cgroup support. 
  - Control read and write requests separately.
  - Support WRITE_BARRIER.
  - Optimization.
  - More configuration tools. Or is the dmsetup command sufficient?
  - Other policies to schedule BIOs. Or is the weight policy sufficient?

Thanks,
Ryo Tsuruta

[PATCH 1/2] dm-band: The I/O bandwidth controller: Source code patch [message #26440 is a reply to message #26434] Wed, 23 January 2008 12:56
Ryo Tsuruta
Here is the patch of dm-band.

Based on 2.6.23.14
Signed-off-by: Ryo Tsuruta <ryov@valinux.co.jp>
Signed-off-by: Hirokazu Takahashi <taka@valinux.co.jp>

diff -uprN linux-2.6.23.14.orig/drivers/md/Kconfig linux-2.6.23.14/drivers/md/Kconfig
--- linux-2.6.23.14.orig/drivers/md/Kconfig	2008-01-15 05:49:56.000000000 +0900
+++ linux-2.6.23.14/drivers/md/Kconfig	2008-01-21 16:09:41.000000000 +0900
@@ -276,4 +276,13 @@ config DM_DELAY
 
 	If unsure, say N.
 
+config DM_BAND
+	tristate "I/O bandwidth control"
+	depends on BLK_DEV_DM
+	---help---
+	Any processes or cgroups can use the same storage
+	with its bandwidth fairly shared.
+
+	If unsure, say N.
+
 endif # MD
diff -uprN linux-2.6.23.14.orig/drivers/md/Makefile linux-2.6.23.14/drivers/md/Makefile
--- linux-2.6.23.14.orig/drivers/md/Makefile	2008-01-15 05:49:56.000000000 +0900
+++ linux-2.6.23.14/drivers/md/Makefile	2008-01-21 20:45:03.000000000 +0900
@@ -8,6 +8,7 @@ dm-multipath-objs := dm-hw-handler.o dm-
 dm-snapshot-objs := dm-snap.o dm-exception-store.o
 dm-mirror-objs	:= dm-log.o dm-raid1.o
 dm-rdac-objs	:= dm-mpath-rdac.o
+dm-band-objs	:= dm-bandctl.o dm-band-policy.o dm-band-type.o
 md-mod-objs     := md.o bitmap.o
 raid456-objs	:= raid5.o raid6algos.o raid6recov.o raid6tables.o \
 		   raid6int1.o raid6int2.o raid6int4.o \
@@ -39,6 +40,7 @@ obj-$(CONFIG_DM_MULTIPATH_RDAC)	+= dm-rd
 obj-$(CONFIG_DM_SNAPSHOT)	+= dm-snapshot.o
 obj-$(CONFIG_DM_MIRROR)		+= dm-mirror.o
 obj-$(CONFIG_DM_ZERO)		+= dm-zero.o
+obj-$(CONFIG_DM_BAND)		+= dm-band.o
 
 quiet_cmd_unroll = UNROLL  $@
       cmd_unroll = $(PERL) $(srctree)/$(src)/unroll.pl $(UNROLL) \
diff -uprN linux-2.6.23.14.orig/drivers/md/dm-band-policy.c linux-2.6.23.14/drivers/md/dm-band-policy.c
--- linux-2.6.23.14.orig/drivers/md/dm-band-policy.c	1970-01-01 09:00:00.000000000 +0900
+++ linux-2.6.23.14/drivers/md/dm-band-policy.c	2008-01-21 20:31:14.000000000 +0900
@@ -0,0 +1,185 @@
+/*
+ * Copyright (C) 2008 VA Linux Systems Japan K.K.
+ *
+ *  I/O bandwidth control
+ *
+ * This file is released under the GPL.
+ */
+#include <linux/bio.h>
+#include <linux/workqueue.h>
+#include "dm.h"
+#include "dm-bio-list.h"
+#include "dm-band.h"
+
+/*
+ * The following functions determine when and which BIOs should
+ * be submitted to control the I/O flow.
+ * It is possible to add new I/O scheduling policies with them.
+ */
+
+
+/*
+ * Functions for weight balancing policy.
+ */
+#define DEFAULT_WEIGHT	100
+#define DEFAULT_TOKENBASE	2048
+#define BAND_IOPRIO_BASE 100
+
+static int proceed_global_epoch(struct banddevice *bs)
+{
+	bs->g_epoch++;
+#if 0	/* this will also work correctly */
+	if (bs->g_blocked)
+		queue_work(bs->g_band_wq, &bs->g_conductor);
+	return 0;
+#endif
+	dprintk(KERN_ERR "proceed_epoch %d --> %d\n",
+						bs->g_epoch-1, bs->g_epoch);
+	return 1;
+}
+
+static inline int proceed_epoch(struct bandgroup *bw)
+{
+	struct banddevice *bs = bw->c_shared;
+
+	if (bw->c_my_epoch != bs->g_epoch) {
+		bw->c_my_epoch = bs->g_epoch;
+		return 1;
+	}
+	return 0;
+}
+
+static inline int iopriority(struct bandgroup *bw)
+{
+	struct banddevice *bs = bw->c_shared;
+	int iopri;
+
+	iopri = bw->c_token*BAND_IOPRIO_BASE/bw->c_token_init_value + 1;
+	if (bw->c_my_epoch != bs->g_epoch)
+		iopri += BAND_IOPRIO_BASE;
+	if (bw->c_going_down)
+		iopri += BAND_IOPRIO_BASE*2;
+
+	return iopri;
+}
+
+static int is_token_left(struct bandgroup *bw)
+{
+	if (bw->c_token > 0)
+		return iopriority(bw);
+
+	if (proceed_epoch(bw) || bw->c_going_down) {
+		bw->c_token = bw->c_token_init_value;
+		dprintk(KERN_ERR "refill token: bw:%p token:%d\n",
+							bw, bw->c_token);
+		return iopriority(bw);
+	}
+	return 0;
+}
+
+static void prepare_token(struct bandgroup *bw, struct bio *bio)
+{
+	bw->c_token--;
+}
+
+static void set_weight(struct bandgroup *bw, int new)
+{
+	struct banddevice *bs = bw->c_shared;
+	struct bandgroup *p;
+
+	bs->g_weight_total += (new - bw->c_weight);
+	bw->c_weight = new;
+
+	list_for_each_entry(p, &bs->g_brothers, c_list) {
+		/* Fixme: it might overflow */
+		p->c_token = p->c_token_init_value =
+		   bs->g_token_base*p->c_weight/bs->g_weight_total + 1;
+	}
+}
+
+static int policy_weight_ctr(struct bandgroup *bw)
+{
+	struct banddevice *bs = bw->c_shared;
+
+	bw->c_my_epoch = bs->g_epoch;
+	bw->c_weight = 0;
+	set_weight(bw, DEFAULT_WEIGHT);
+	return 0;
+}
+
+static void policy_weight_dtr(struct bandgroup *bw)
+{
+	set_weight(bw, 0);
+}
+
+static int policy_weight_param(struct bandgroup *bw, char *cmd, char *value)
+{
+	struct banddevice *bs = bw->c_shared;
+	int val = simple_strtol(value, NULL, 0);
+	int r = 0;
+
+	if (!strcmp(cmd, "weight")) {
+		if (val > 0)
+			set_weight(bw, val);
+		else
+			r = -EINVAL;
+	} else if (!strcmp(cmd, "token")) {
+		if (val > 0) {
+			bs->g_token_base = val;
+			set_weight(bw, bw->c_weight);
+		} else
+			r = -EINVAL;
+	} else {
+		r = -EINVAL;
+	}
+	return r;
+}
+
+/*
+ *  <Method>      <description>
+ * g_can_submit   : To determine whether a given group has the right to
+ *                  submit BIOs.
+ *                  The larger the return value, the higher the priority
+ *                  to submit. Zero means it has no right.
+ * g_prepare_bio  : Called right before submitting each BIO.
+ * g_restart_bios : Called when there exist some blocked BIOs but none of
+ *                  them can be submitted now.
+ *                  This method has to do something to restart submitting
+ *                  BIOs. Returns 0 if it has become able to submit them now.
+ *                  Otherwise, returns 1 and this policy module has to
+ *                  restart submitting BIOs by itself later on.
+ * g_hold_bio     : To hold a given BIO until it is submitted.
+ *                  The default function is used when this method is
+ *                  undefined.
+ * g_pop_bio      : To select and get the best BIO to submit.
+ * g_group_ctr    : To initialize the policy's own members of struct
+ *                  bandgroup.
+ * g_group_dtr    : Called when struct bandgroup is removed.
+ * g_set_param    : To update the policy's own data.
+ *                  The parameters can be passed through the "dmsetup
+ *                  message" command.
+ */
+static void policy_weight_init(struct banddevice *bs)
+{
+	bs->g_can_submit = is_token_left;
+	bs->g_prepare_bio = prepare_token;
+	bs->g_restart_bios = proceed_global_epoch;
+	bs->g_group_ctr = policy_weight_ctr;
+	bs->g_group_dtr = policy_weight_dtr;
+	bs->g_set_param = policy_weight_param;
+
+	bs->g_token_base = DEFAULT_TOKENBASE;
+	bs->g_epoch = 0;
+	bs->g_weight_total = 0;
+}
+/* weight balancing policy. --- End --- */
+
+
+static void policy_default_init(struct banddevice *bs) /*XXX*/
+{
+	policy_weight_init(bs);	/* temp */
+}
+
+struct policy_type band_policy_type[] = {
+	{"default", policy_default_init},
+	{"weight", policy_weight_init},
+	{NULL,     policy_default_init}
+};
diff -uprN linux-2.6.23.14.orig/drivers/md/dm-band-type.c linux-2.6.23.14/drivers/md/dm-band-type.c
--- linux-2.6.23.14.orig/drivers/md/dm-band-type.c	1970-01-01 09:00:00.000000000 +0900
+++ linux-2.6.23.14/drivers/md/dm-band-type.c	2008-01-21 20:27:15.000000000 +0900
@@ -0,0 +1,69 @@
+/*
+ * Copyright (C) 2008 VA Linux Systems Japan K.K.
+ *
+ *  I/O bandwidth control
+ *
+ * This file is released under the GPL.
+ */
+#include <linux/bio.h>
+#include "dm.h"
+#include "dm-bio-list.h"
+#include "dm-band.h"
+
+/*
+ * The I/O bandwidth of a device can be divided into several bandwidth
+ * groups, each of which has its own unique ID. The following functions are
+ * called to determine which group a given BIO belongs to and return the ID
+ * of the group.
+ */
+
+/* ToDo: unsigned long value would be better for group ID */
+
+static int band_process_id(struct bio *bio)
+{
+	/*
+	 * This function will work for KVM and Xen.
+	 */
+	return (int)current->tgid;
+}
+
+static int band_process_group(struct bio *bio)
+{
+	return (int)process_group(current);
+}
+
+static int band_uid(struct bio *bio)
+{
+	return (int)current->uid;
+}
+
+static int band_cpuset(struct bio *bio)
+{
+	return 0;	/* not implemented yet */
+}
+
+static int band_node(struct bio *bio)
+{
+	return 0;	/* not implemented yet */
+}
+
+static int band_cgroup(struct bio *bio)
+{
+  /*
+   * This function should return the ID of the cgroup which issued "bio".
+   * The ID of the cgroup which the current process belongs to won't be a
+   * suitable ID for this purpose, since some BIOs will be handled by kernel
+   * threads like aio or pdflush on behalf of the process requesting the BIOs.
+   */
+	return 0;	/* not implemented yet */
+}
+
+struct group_type band_group_type[] = {
+	{"none",   NULL},
+	{"pgrp",   band_process_group},
+	{"pid",    band_process_id},
+	{"node",   band_node},
+	{"cpuset", band_cpuset},
+	{"cgroup", band_cgroup},
+	{"user",   band_uid},
+	{NULL,     NULL}
+};
diff -uprN linux-2.6.23.14.orig/drivers/md/dm-band.h linux-2.6.23.14/drivers/md/dm-band.h
--- linux-2.6.23.14.orig/drivers/md/dm-band.h	1970-01-01 09:00:00.000000000 +0900
+++ linux-2.6.23.14/drivers/md/dm-band.h	2008-01-21 20:20:54.000000000 +0900
@@ -0,0 +1,99 @@
+/*
+ * Copyright (C) 2008 VA Linux Systems Japan K.K.
+ *
+ *  I/O bandwidth control
+ *
+ * This file is released under the GPL.
+ */
+
+#define DEFAULT_IO_THROTTLE	4
+#define DEFAULT_IO_LIMIT	128
+#define BAND_NAME_MAX 31
+#define BAND_ID_ANY (-1)
+
+struct bandgroup;
+
+struct banddevice {
+	struct list_head	g_brothers;
+	struct work_struct	g_conductor;
+	struct workqueue_struct	*g_band_wq;
+	int	g_io_throttle;
+	int	g_io_limit;
+	int	g_plug_bio;
+	int	g_issued;
+	int	g_blocked;
+	spinlock_t	g_lock;
+
+	int	g_devgroup;
+	int	g_ref;		/* just for debugging */
+	struct	list_head g_list;
+	int	g_flags;	/*
...

[PATCH 2/2] dm-band: The I/O bandwidth controller: Document [message #26445 is a reply to message #26434] Wed, 23 January 2008 12:58
Ryo Tsuruta
Here is the document of dm-band.

Based on 2.6.23.14
Signed-off-by: Ryo Tsuruta <ryov@valinux.co.jp>
Signed-off-by: Hirokazu Takahashi <taka@valinux.co.jp>

diff -uprN linux-2.6.23.14.orig/Documentation/device-mapper/band.txt linux-2.6.23.14/Documentation/device-mapper/band.txt
--- linux-2.6.23.14.orig/Documentation/device-mapper/band.txt	1970-01-01 09:00:00.000000000 +0900
+++ linux-2.6.23.14/Documentation/device-mapper/band.txt	2008-01-23 21:48:46.000000000 +0900
@@ -0,0 +1,431 @@
+====================
+Document for dm-band
+====================
+
+Contents:
+  What's dm-band all about?
+  How dm-band works
+  Setup and Installation
+  Command Reference
+  TODO
+
+
+What's dm-band all about?
+========================
+Dm-band is an I/O bandwidth controller implemented as a device-mapper driver.
+Several jobs using the same physical device have to share the bandwidth of
+the device. Dm-band gives bandwidth to each job according to its weight,
+which each job can set to its own value.
+
+At this time, a job is a group of processes with the same pid, pgrp or uid.
+There is also a plan to make it support cgroups. A job can also be a virtual
+machine such as KVM or Xen.
+
+  +------+ +------+ +------+   +------+ +------+ +------+ 
+  |cgroup| |cgroup| | the  |   | pid  | | pid  | | the  |  jobs
+  |  A   | |  B   | |others|   |  X   | |  Y   | |others| 
+  +--|---+ +--|---+ +--|---+   +--|---+ +--|---+ +--|---+   
+  +--V----+---V---+----V---+   +--V----+---V---+----V---+   
+  | group | group | default|   | group | group | default|  band groups
+  |       |       |  group |   |       |       |  group | 
+  +-------+-------+--------+   +-------+-------+--------+
+  |         band1          |   |         band2          |  band devices
+  +-----------|------------+   +-----------|------------+
+  +-----------V--------------+-------------V------------+
+  |                          |                          |
+  |          sdb1            |           sdb2           |  physical devices
+  +--------------------------+--------------------------+
+
+
+How dm-band works.
+========================
+Every band device has one band group, which by default is called the default
+group.
+
+Band devices can also have extra band groups in them. Each band group
+has a job to support and a weight. Proportional to the weight, dm-band gives
+tokens to the group.
+
+A group passes on I/O requests that its job issues to the underlying
+layer as long as it has tokens left, while requests are blocked
+if there aren't any tokens left in the group. One token is consumed each
+time the group passes on a request. Dm-band refills all the groups with
+tokens once every group with requests on a given physical device has used
+up its tokens.
+
+With this approach, a job running on a band group with large weight is
+guaranteed to be able to issue a large number of I/O requests.
+
+
+Setup and Installation
+======================
+
+Build a kernel with these options enabled:
+
+  CONFIG_MD
+  CONFIG_BLK_DEV_DM
+  CONFIG_DM_BAND
+
+If compiled as a module, use modprobe to load dm-band.
+
+  # make modules
+  # make modules_install
+  # depmod -a
+  # modprobe dm-band
+
+"dmsetup targets" command shows all available device-mapper targets.
+"band" is displayed if dm-band has loaded.
+
+  # dmsetup targets
+  band             v0.0.2
+
+
+Getting started
+=============
+The following is a brief description of how to control the I/O bandwidth of
+disks. In this description, we'll take one disk with two partitions as an
+example target.
+
+
+Create and map band devices
+---------------------------
+Create two band devices "band1" and "band2" and map them to "/dev/sda1"
+and "/dev/sda2" respectively.
+
+ # echo "0 `blockdev --getsize /dev/sda1` band /dev/sda1 1" | dmsetup create band1
+ # echo "0 `blockdev --getsize /dev/sda2` band /dev/sda2 1" | dmsetup create band2
+
+If the commands are successful then the device files "/dev/mapper/band1"
+and "/dev/mapper/band2" will have been created.
+
+
+Bandwidth control
+----------------
+In this example weights of 40 and 10 will be assigned to "band1" and
+"band2" respectively. This is done using the following commands:
+
+ # dmsetup message band1 0 weight 40
+ # dmsetup message band2 0 weight 10
+
+After these commands, "band1" can use 80% --- 40/(40+10)*100 --- of the
+bandwidth of the physical disk "/dev/sda" while "band2" can use 20%.
+
+
+Additional bandwidth control
+---------------------------
+In this example two extra band groups are created on "band1".
+The first group consists of all the processes with user-id 1000 and the
+second group consists of all the processes with user-id 2000. Their
+weights are 30 and 20 respectively.
+
+Firstly the band group type of "band1" is set to "user".
+Then, the user-id 1000 and 2000 groups are attached to "band1".
+Finally, weights are assigned to the user-id 1000 and 2000 groups.
+
+ # dmsetup message band1 0 type user
+ # dmsetup message band1 0 attach 1000
+ # dmsetup message band1 0 attach 2000
+ # dmsetup message band1 0 weight 1000:30
+ # dmsetup message band1 0 weight 2000:20
+
+Now the processes in the user-id 1000 group can use 30% ---
+30/(30+20+40+10)*100 --- of the bandwidth of the physical disk.
+
+ Band Device    Band Group                     Weight
+  band1         user id 1000                     30
+  band1         user id 2000                     20
+  band1         default group (the other users)  40
+  band2         default group                    10
+
+
+Remove band devices
+-------------------
+Remove the band devices when no longer used.
+
+  # dmsetup remove band1
+  # dmsetup remove band2
+
+
+Command Reference
+=================
+
+
+Create a band device
+--------------------
+SYNOPSIS
+  dmsetup create BAND_DEVICE
+
+DESCRIPTION
+  The following space-delimited arguments, which describe the physical
+  device, are read from standard input. All arguments are required, and
+  they must be provided in the order listed below.
+
+    starting sector of the physical device
+    size in sectors of the physical device
+    string "band" as a target type
+    physical device name
+    device group ID
+
+  You must set the same device group ID for each band device that shares 
+  the same bandwidth.
+
+  A default band group is also created and attached to the band device.
+
+  If the command is successful, the device file
+  "/dev/mapper/BAND_DEVICE" will have been created.
+
+EXAMPLE
+  Create a band device with the following parameters:
+    physical device = "/dev/sda1"
+    band device name = "band1"
+    device group ID = "100"
+
+    # size=`blockdev --getsize /dev/sda1`
+    # echo "0 $size band /dev/sda1 100" | dmsetup create band1
+
+  Create two device groups (ID=1,2). The bandwidth of each device group may be
+  individually controlled.
+
+    # echo "0 11096883 band /dev/sda1 1" | dmsetup create band1
+    # echo "0 11096883 band /dev/sda2 1" | dmsetup create band2
+    # echo "0 11096883 band /dev/sda3 2" | dmsetup create band3
+    # echo "0 11096883 band /dev/sda4 2" | dmsetup create band4
+
+
+Remove the band device
+----------------------
+SYNOPSIS
+  dmsetup remove BAND_DEVICE
+
+DESCRIPTION
+  Remove the band device with the given name. All band groups that are attached
+  to the band device are removed automatically.
+
+EXAMPLE
+  Remove the band device "band1".
+
+  # dmsetup remove band1
+
+
+Set a band group's type
+-----------------------
+SYNOPSIS
+  dmsetup message BAND_DEVICE 0 type TYPE
+
+DESCRIPTION
+  Set a band group's type. TYPE must be one of "user", "pid" or "pgrp".
+
+EXAMPLE
+  Set a band group's type to "user".
+
+  # dmsetup message band1 0 type user
+
+
+Create a band group
+-------------------
+SYNOPSIS
+  dmsetup message BAND_DEVICE 0 attach ID
+
+DESCRIPTION
+  Create a band group and attach it to a band device. The ID number specifies
+  the user-id, pid or pgrp, according to the group type.
+
+EXAMPLE
+  Attach a band group with uid 1000 to the band device "band1".
+
+  # dmsetup message band1 0 type user
+  # dmsetup message band1 0 attach 1000
+
+
+Remove a band group
+-------------------
+SYNOPSIS
+  dmsetup message BAND_DEVICE 0 detach ID
+
+DESCRIPTION
+  Detach a band group specified by ID from a band device.
+
+EXAMPLE
+  Detach the band group with ID "1000" from the band device "band2".
+
+  # dmsetup message band2 0 detach 1000
+
+
+Set the weight of a band group
+------------------------------
+SYNOPSIS
+  dmsetup message BAND_DEVICE 0 weight VAL
+  dmsetup message BAND_DEVICE 0 weight ID:VAL
+
+DESCRIPTION
+  Set the weight of a band group. The weight is evaluated as a ratio against
+  the total weight. The following example means that "band1" can use 80% ---
+  40/(40+10)*100 --- of the bandwidth of the physical disk "/dev/sda" while
+  "band2" can use 20%.
+
+    # dmsetup message band1 0 weight 40
+    # dmsetup message band2 0 weight 10
+
+  The following has the same effect as the above commands:
+
+    # dmsetup message band1 0 weight 4
+    # dmsetup message band2 0 weight 1  
+
+  VAL must be an integer greater than 0. The default is 100.
+
+EXAMPLE
+  Set the weight of the default band group to 40.
+
+  # dmsetup message band1 0 weight 40
+
+  Set the weight of the band group with ID "1000" to 10.
+
+  # dmsetup message band1 0 weight 1000:10
+
+
+Set the number of tokens
+------------------------
+SYNOPSIS
+  dmsetup message BAND_DEVICE 0 token VAL
+
+DESCRIPTION
+  Set the number of tokens. The value is applied to all the band devices
+  that have the same device g
...

Re: [PATCH 2/2] dm-band: The I/O bandwidth controller: Document [message #26448 is a reply to message #26445] Wed, 23 January 2008 19:57
Andi Kleen
Ryo Tsuruta <ryov@valinux.co.jp> writes:

> Here is the document of dm-band.

Could you please address in the document how the intended use
cases/feature set etc. differs from CFQ2 io priorities?

Thanks,

-Andi

Re: [PATCH 0/2] dm-band: The I/O bandwidth controller: Overview [message #26453 is a reply to message #26434] Thu, 24 January 2008 08:11
Hirokazu Takahashi
Hi,

> Hi,
> 
> I believe this work is very important especially in the context of 
> virtual machines.  I think it would be more useful though implemented in 
> the context of the IO scheduler.  Since we already support a notion of 
> IO priority, it seems reasonable to add a notion of an IO cap.

I agree that what you proposed is the most straightforward approach.
Ryo and I have also investigated the CFQ scheduler and found that it would be
possible to enhance it to support bandwidth control with quite a few
modifications. I think both approaches have pros and cons.

At this time, we have chosen the device-mapper approach because:
 - it can work with any I/O scheduler. Some people will want to use the NOOP
   scheduler with high-end storage.
 - only people who need I/O bandwidth control have to use it.
 - it is independent of the I/O schedulers, so it will be easy to maintain.
 - it can keep the CFQ implementation simple.

The current CFQ scheduler has some limitations if you want to control
bandwidth. The scheduler only has seven priority levels, which also means it
has only seven classes. If you assign the same io-priority A to several VMs
--- virtual machines ---, these machines have to share the I/O bandwidth
assigned to the io-priority A class. If another VM has io-priority B, which
is lower than io-priority A, and there is no other VM in the io-priority B
class, the VM in the io-priority B class may be able to use more bandwidth
than the VMs in the io-priority A class.
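
(For reference, CFQ io-priorities are per-process and are set with the
ionice tool. For example, the following --- with hypothetical PIDs 1234 and
5678 --- assigns the highest and lowest best-effort levels:

 # ionice -c2 -n0 -p 1234
 # ionice -c2 -n7 -p 5678

Every job that needs a distinct bandwidth share has to fit into this small
fixed set of levels.)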

I guess two-level scheduling should be introduced into the CFQ scheduler if
needed: one level to choose the best cgroup or job, and the other to choose
the highest io-priority class.

There is another limitation: io-priority is global, so it affects all the
disks a process accesses. A job isn't allowed to use several io-priorities
to access several disks respectively. I think "per-disk io-priority" will
be required.

But the device-mapper approach also has its bad points.
It is hard to get the capabilities and configuration of the underlying
devices, such as information about partitions or LUNs, so some configuration
tools will probably be required.

Thank you,
Hirokazu Takahashi.

Re: [PATCH 2/2] dm-band: The I/O bandwidth controller: Document [message #26458 is a reply to message #26448] Thu, 24 January 2008 10:32
Ryo Tsuruta
Hi,

> > Here is the document of dm-band.
> 
> Could you please address in the document how the intended use
> cases/feature set etc. differs from CFQ2 io priorities?

Thank you for your suggestion; I'll do that step by step.

Thanks,
Ryo Tsuruta

dm-band: The I/O bandwidth controller: Performance Report [message #26498 is a reply to message #26434] Fri, 25 January 2008 07:07
Ryo Tsuruta
Hi,

Here are the results of the dm-band bandwidth control tests I ran yesterday.
I got really good results: dm-band works as I expected. I made several
band-groups on several disk partitions and gave them heavy I/O loads.

Hardware Spec.
==============
  DELL Dimension E521:

  Linux kappa.local.valinux.co.jp 2.6.23.14 #1 SMP
    Thu Jan 24 17:24:59 JST 2008 i686 athlon i386 GNU/Linux
  Detected 2004.217 MHz processor.
  CPU0: AMD Athlon(tm) 64 X2 Dual Core Processor 3800+ stepping 02
  Memory: 966240k/981888k available (2102k kernel code, 14932k reserved,
    890k data, 216k init, 64384k highmem)
  scsi 2:0:0:0: Direct-Access     ATA      ST3250620AS     3.AA PQ: 0 ANSI: 5
  sd 2:0:0:0: [sdb] 488397168 512-byte hardware sectors (250059 MB)
  sd 2:0:0:0: [sdb] Write Protect is off
  sd 2:0:0:0: [sdb] Mode Sense: 00 3a 00 00
  sd 2:0:0:0: [sdb] Write cache: enabled, read cache: enabled,
    doesn't support DPO or FUA
  sdb: sdb1 sdb2 < sdb5 sdb6 sdb7 sdb8 sdb9 sdb10 sdb11 sdb12 sdb13 sdb14
    sdb15 >

The results of bandwidth control test on partitions
===================================================

The configurations of the test #1:
   o Prepare three partitions sdb5, sdb6 and sdb7.
   o Give weights of 40, 20 and 10 to sdb5, sdb6 and sdb7 respectively.
   o Run 128 processes issuing random read/write direct I/O with 4KB data
     on each device at the same time.
   o Count up the number of I/Os and sectors completed in 60 seconds.
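
The load generator itself isn't included in this posting; the following is a
rough, hypothetical sketch of one. Note that bash's $RANDOM only reaches
32767, so a real test would spread the offsets wider and mix in writes as
well.

  #!/bin/bash
  # Spawn 128 readers doing 4KB direct I/O at random offsets on one
  # partition, let them run for 60 seconds, then stop them all.
  for i in $(seq 1 128); do
      while :; do
          dd if=/dev/sdb5 of=/dev/null bs=4k count=1 \
             skip=$RANDOM iflag=direct 2>/dev/null
      done &
  done
  sleep 60
  jobs -p | xargs kill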

                The result of the test #1
 ---------------------------------------------------------------------------
|     device      |       sdb5        |       sdb6        |      sdb7       |
|     weight      |    40 (57.0%)     |     20 (29.0%)    |    10 (14.0%)   |
|-----------------+-------------------+-------------------+-----------------|
|   I/Os (r/w)    |  6640( 3272/ 3368)|  3434( 1719/ 1715)|  1689( 857/ 832)|
|  sectors (r/w)  | 53120(26176/26944)| 27472(13752/13720)| 13512(6856/6656)|
|  ratio to total |       56.4%       |       29.2%       |      14.4%      |
 ---------------------------------------------------------------------------


The configurations of the test #2:
   o The configurations are the same as the test #1 except this test doesn't
     run any processes issuing I/Os on sdb6.

                The result of the test #2
 ---------------------------------------------------------------------------
|     device      |       sdb5        |       sdb6        |      sdb7       |
|     weight      |    40 (57.0%)     |     20 (29.0%)    |    10 (14.0%)   |
|-----------------+-------------------+-------------------+-----------------|
|   I/Os (r/w)    |  9566(4815/  4751)|     0(    0/    0)|  2370(1198/1172)|
|  sectors (r/w)  | 76528(38520/38008)|     0(    0/    0)| 18960(9584/9376)|
|  ratio to total |       76.8%       |        0.0%       |     23.2%       |
 ---------------------------------------------------------------------------

The results of bandwidth control test on band-groups.
=====================================================
The configurations of the test #3:
   o Prepare two partitions, sdb5 and sdb6.
   o Create two extra band-groups on sdb5, the first for user1 and the
     second for user2.
   o Give weights of 40, 20, 10 and 10 to the user1 band-group, the user2
     band-group, the default group of sdb5 and sdb6 respectively.
   o Run 128 processes issuing random read/write direct I/O with 4KB data
     on each device at the same time.
   o Count up the number of I/Os and sectors which have done in 60 seconds.

                The result of the test #3
 ---------------------------------------------------------------------------
|dev|                          sdb5                        |      sdb6      |
|---+------------------------------------------------------+----------------|
|usr|     user1        |      user2       |  other users   |   all users    |
|wgt|   40 (50.0%)     |    20 (25.0%)    |   10 (12.5%)   |   10 (12.5%)   |
|---+------------------+------------------+----------------+----------------|
|I/O| 5951( 2940/ 3011)| 3068( 1574/ 1494)| 1663( 828/ 835)| 1663( 810/ 853)|
|sec|47608(23520/24088)|24544(12592/11952)|13304(6624/6680)|13304(6480/6824)|
| % |     48.2%        |       24.9%      |      13.5%     |      13.5%     |
 ---------------------------------------------------------------------------

The configurations of the test #4:
   o The configurations are the same as the test #3 except this test doesn't
     run any processes issuing I/Os on the user2 band-group.

                The result of the test #4
 ---------------------------------------------------------------------------
|dev|                          sdb5                        |     sdb6       |
|---+------------------------------------------------------+----------------|
|usr|     user1        |      user2       |  other users   |   all users    |
|wgt|   40 (50.0%)     |    20 (25.0%)    |   10 (12.5%)   |   10 (12.5%)   |
|---+------------------+------------------+----------------+----------------|
|I/O| 8002( 3963/ 4039)|    0(    0/    0)| 2056(1021/1035)| 2008( 998/1010)|
|sec|64016(31704/32312)|    0(    0/    0)|16448(8168/8280)|16064(7984/8080)|
| % |     66.3%        |        0.0%      |      17.0%     |      16.6%     |
 ---------------------------------------------------------------------------

Conclusions and future works
============================
Dm-band works well with random I/Os. I plan to run some tests using various
real applications such as databases or file servers. If you have any other
good ideas for testing dm-band, please let me know.

Thank you,
Ryo Tsuruta.

Re: [PATCH 0/2] dm-band: The I/O bandwidth controller: Overview [message #26903 is a reply to message #26434] Wed, 23 January 2008 19:22
Anthony Liguori
Hi,

I believe this work is very important, especially in the context of
virtual machines. I think it would be more useful, though, implemented in
the context of the IO scheduler. Since we already support a notion of
IO priority, it seems reasonable to add a notion of an IO cap.

Regards,

Anthony Liguori

Re: [Xen-devel] dm-band: The I/O bandwidth controller: Performance Report [message #26904 is a reply to message #26498] Tue, 29 January 2008 06:42
INAKOSHI Hiroya
Hi,

Ryo Tsuruta wrote:
> The results of bandwidth control test on band-groups.
> =====================================================
> The configurations of the test #3:
>    o Prepare three partitions sdb5 and sdb6.
>    o Create two extra band-groups on sdb5, the first is of user1 and the
>      second is of user2.
>    o Give weights of 40, 20, 10 and 10 to the user1 band-group, the user2
>      band-group, the default group of sdb5 and sdb6 respectively.
>    o Run 128 processes issuing random read/write direct I/O with 4KB data
>      on each device at the same time.

Do you mean that you run 128 processes on each user-device pair? Namely,
I guess that:

  user1: 128 processes on sdb5,
  user2: 128 processes on sdb5,
  another: 128 processes on sdb5,
  user2: 128 processes on sdb6.

> Conclusions and future works
> ============================
> Dm-band works well with random I/Os. I have a plan on running some tests
> using various real applications such as databases or file servers.
> If you have any other good idea to test dm-band, please let me know.

The next preliminary studies might be:

- What if you use a different I/O size on each device (or device-user pair)?
- What if you use a different number of processes on each device (or
  device-user pair)?


And my impression is that it's natural for dm-band to be in device-mapper,
separated from the I/O scheduler. Because bandwidth control and I/O
scheduling are two different things, it may be simpler to implement them
in different layers.

Regards,

Hiroya.

