OpenVZ Forum


Re: [PATCH 2/3] i/o bandwidth controller infrastructure [message #31241 is a reply to message #31181] Sun, 22 June 2008 13:11
From: Andrea Righi
Carl Henrik Lunde wrote:
> Did you consider using token bucket instead of this (leaky bucket?)?
> 
> I've attached a patch which implements token bucket.  Although not as
> precise as the leaky bucket the performance is better at high bandwidth
> streaming loads.
> 
> The leaky bucket stops at around 53 MB/s while token bucket works for
> up to 64 MB/s.  The baseline (no cgroups) is 66 MB/s.
> 
> benchmark:
> two streaming readers (fio) with block size 128k, bucket size 4 MB
> 90% of the bandwidth was allocated to one process, the other gets 10%
> 
> bw-limit: actual bw  algorithm     bw1  bw2
>  5 MiB/s:  5.0 MiB/s leaky_bucket  0.5  4.5
>  5 MiB/s:  5.2 MiB/s token_bucket  0.6  4.6
> 10 MiB/s: 10.0 MiB/s leaky_bucket  1.0  9.0
> 10 MiB/s: 10.3 MiB/s token_bucket  1.0  9.2
> 15 MiB/s: 15.0 MiB/s leaky_bucket  1.5 13.5
> 15 MiB/s: 15.4 MiB/s token_bucket  1.5 13.8
> 20 MiB/s: 19.9 MiB/s leaky_bucket  2.0 17.9
> 20 MiB/s: 20.5 MiB/s token_bucket  2.1 18.4
> 25 MiB/s: 24.4 MiB/s leaky_bucket  2.5 21.9
> 25 MiB/s: 25.6 MiB/s token_bucket  2.6 23.0
> 30 MiB/s: 29.2 MiB/s leaky_bucket  3.0 26.2
> 30 MiB/s: 30.7 MiB/s token_bucket  3.1 27.7
> 35 MiB/s: 34.3 MiB/s leaky_bucket  3.4 30.9
> 35 MiB/s: 35.9 MiB/s token_bucket  3.6 32.3
> 40 MiB/s: 39.7 MiB/s leaky_bucket  3.9 35.8
> 40 MiB/s: 41.0 MiB/s token_bucket  4.1 36.9
> 45 MiB/s: 44.0 MiB/s leaky_bucket  4.3 39.7
> 45 MiB/s: 46.1 MiB/s token_bucket  4.6 41.5
> 50 MiB/s: 47.9 MiB/s leaky_bucket  4.7 43.2
> 50 MiB/s: 51.0 MiB/s token_bucket  5.1 45.9
> 55 MiB/s: 50.5 MiB/s leaky_bucket  5.0 45.5
> 55 MiB/s: 56.2 MiB/s token_bucket  5.6 50.5
> 60 MiB/s: 52.9 MiB/s leaky_bucket  5.2 47.7
> 60 MiB/s: 61.0 MiB/s token_bucket  6.1 54.9
> 65 MiB/s: 53.0 MiB/s leaky_bucket  5.4 47.6
> 65 MiB/s: 63.7 MiB/s token_bucket  6.6 57.1
> 70 MiB/s: 53.8 MiB/s leaky_bucket  5.5 48.4
> 70 MiB/s: 64.1 MiB/s token_bucket  7.1 57.0

Carl,

based on your token bucket solution I've implemented a run-time leaky
bucket / token bucket switcher:

# leaky bucket #
echo 0 > /cgroups/foo/blockio.throttling_strategy
# token bucket #
echo 1 > /cgroups/foo/blockio.throttling_strategy

The release candidate (-rc) of the new io-throttle patch 2/3 is below; 1/3 and 3/3 are
the same as in patchset version 3, although the documentation still needs
to be updated. It would be great if you could review the patch, in
particular the token_bucket() implementation, and repeat your tests.

The all-in-one patch is available here:
http://download.systemimager.org/~arighi/linux/patches/io-throttle/cgroup-io-throttle-v4-rc1.patch

I also ran some quick tests similar to yours; the benchmark I used is
available here as well:
http://download.systemimager.org/~arighi/linux/patches/io-throttle/benchmark/iobw.c

= Results =
I/O scheduler: cfq
filesystem: ext3
Command: ionice -c 1 -n 0 iobw -direct 2 4m 32m
Bucket size: 4MiB

=== no throttling ===
testing 2 parallel streams, chunk_size 4096KiB, data_size 32768KiB
[task   2] time:  2.929, bw:   10742 KiB/s (WRITE)
[task   2] time:  2.878, bw:   10742 KiB/s (READ )
[task   1] time:  2.377, bw:   13671 KiB/s (WRITE)
[task   1] time:  3.979, bw:    7812 KiB/s (READ )
[parent 0] time:  6.397, bw:   19531 KiB/s (TOTAL)

=== bandwidth limit: 4MiB/s (leaky bucket) ===
[task   2] time: 15.880, bw:    1953 KiB/s (WRITE)
[task   2] time: 14.278, bw:    1953 KiB/s (READ )
[task   1] time: 14.711, bw:    1953 KiB/s (WRITE)
[task   1] time: 16.563, bw:    1953 KiB/s (READ )
[parent 0] time: 31.316, bw:    3906 KiB/s (TOTAL)

=== bandwidth limit: 4MiB/s (token bucket) ===
[task   2] time: 11.864, bw:    1953 KiB/s (WRITE)
[task   2] time: 15.958, bw:    1953 KiB/s (READ )
[task   1] time: 19.233, bw:     976 KiB/s (WRITE)
[task   1] time: 12.643, bw:    1953 KiB/s (READ )
[parent 0] time: 31.917, bw:    3906 KiB/s (TOTAL)

=== bandwidth limit: 8MiB/s (leaky bucket) ===
[task   2] time:  7.198, bw:    3906 KiB/s (WRITE)
[task   2] time:  8.012, bw:    3906 KiB/s (READ )
[task   1] time:  7.891, bw:    3906 KiB/s (WRITE)
[task   1] time:  7.846, bw:    3906 KiB/s (READ )
[parent 0] time: 15.780, bw:    7812 KiB/s (TOTAL)

=== bandwidth limit: 8MiB/s (token bucket) ===
[task   1] time:  6.996, bw:    3906 KiB/s (WRITE)
[task   1] time:  6.529, bw:    4882 KiB/s (READ )
[task   2] time: 10.341, bw:    2929 KiB/s (WRITE)
[task   2] time:  5.681, bw:    4882 KiB/s (READ )
[parent 0] time: 16.079, bw:    7812 KiB/s (TOTAL)

=== bandwidth limit: 12MiB/s (leaky bucket) ===
[task   2] time:  4.992, bw:    5859 KiB/s (WRITE)
[task   2] time:  5.077, bw:    5859 KiB/s (READ )
[task   1] time:  5.500, bw:    5859 KiB/s (WRITE)
[task   1] time:  5.061, bw:    5859 KiB/s (READ )
[parent 0] time: 10.603, bw:   11718 KiB/s (TOTAL)

=== bandwidth limit: 12MiB/s (token bucket) ===
[task   1] time:  5.057, bw:    5859 KiB/s (WRITE)
[task   1] time:  4.329, bw:    6835 KiB/s (READ )
[task   2] time:  5.771, bw:    4882 KiB/s (WRITE)
[task   2] time:  4.961, bw:    5859 KiB/s (READ )
[parent 0] time: 10.786, bw:   11718 KiB/s (TOTAL)

=== bandwidth limit: 16MiB/s (leaky bucket) ===
[task   1] time:  3.737, bw:    7812 KiB/s (WRITE)
[task   1] time:  3.988, bw:    7812 KiB/s (READ )
[task   2] time:  4.043, bw:    7812 KiB/s (WRITE)
[task   2] time:  3.954, bw:    7812 KiB/s (READ )
[parent 0] time:  8.040, bw:   15625 KiB/s (TOTAL)

=== bandwidth limit: 16MiB/s (token bucket) ===
[task   1] time:  3.224, bw:    9765 KiB/s (WRITE)
[task   1] time:  3.550, bw:    8789 KiB/s (READ )
[task   2] time:  5.085, bw:    5859 KiB/s (WRITE)
[task   2] time:  3.033, bw:   10742 KiB/s (READ )
[parent 0] time:  8.160, bw:   15625 KiB/s (TOTAL)

=== bandwidth limit: 20MiB/s (leaky bucket) ===
[task   1] time:  3.265, bw:    9765 KiB/s (WRITE)
[task   1] time:  3.339, bw:    9765 KiB/s (READ )
[task   2] time:  3.001, bw:   10742 KiB/s (WRITE)
[task   2] time:  3.840, bw:    7812 KiB/s (READ )
[parent 0] time:  6.884, bw:   18554 KiB/s (TOTAL)

=== bandwidth limit: 20MiB/s (token bucket) ===
[task   1] time:  2.897, bw:   10742 KiB/s (WRITE)
[task   1] time:  3.071, bw:    9765 KiB/s (READ )
[task   2] time:  3.697, bw:    8789 KiB/s (WRITE)
[task   2] time:  2.925, bw:   10742 KiB/s (READ )
[parent 0] time:  6.657, bw:   19531 KiB/s (TOTAL)

=== bandwidth limit: 24MiB/s (leaky bucket) ===
[task   1] time:  2.283, bw:   13671 KiB/s (WRITE)
[task   1] time:  3.626, bw:    8789 KiB/s (READ )
[task   2] time:  3.892, bw:    7812 KiB/s (WRITE)
[task   2] time:  2.774, bw:   11718 KiB/s (READ )
[parent 0] time:  6.724, bw:   18554 KiB/s (TOTAL)

=== bandwidth limit: 24MiB/s (token bucket) ===
[task   2] time:  3.215, bw:    9765 KiB/s (WRITE)
[task   2] time:  2.767, bw:   11718 KiB/s (READ )
[task   1] time:  2.615, bw:   11718 KiB/s (WRITE)
[task   1] time:  3.958, bw:    7812 KiB/s (READ )
[parent 0] time:  6.610, bw:   19531 KiB/s (TOTAL)

In conclusion, the results seem to confirm that leaky bucket is more
precise (smoother) than token bucket; token bucket, on the other hand, is
more efficient when approaching the disk's physical I/O limit, as the
theory predicts.

It would also be interesting to test how token bucket performance
changes with different bucket sizes. I'll do more accurate tests ASAP.

Signed-off-by: Andrea Righi <righi.andrea@gmail.com>
---
 block/Makefile                  |    2 +
 block/blk-io-throttle.c         |  490 +++++++++++++++++++++++++++++++++++++++
 include/linux/blk-io-throttle.h |   12 +
 include/linux/cgroup_subsys.h   |    6 +
 init/Kconfig                    |   10 +
 5 files changed, 520 insertions(+), 0 deletions(-)

diff --git a/block/Makefile b/block/Makefile
index 5a43c7d..8dec69b 100644
--- a/block/Makefile
+++ b/block/Makefile
@@ -14,3 +14,5 @@ obj-$(CONFIG_IOSCHED_CFQ)	+= cfq-iosched.o
 
 obj-$(CONFIG_BLK_DEV_IO_TRACE)	+= blktrace.o
 obj-$(CONFIG_BLOCK_COMPAT)	+= compat_ioctl.o
+
+obj-$(CONFIG_CGROUP_IO_THROTTLE)	+= blk-io-throttle.o
diff --git a/block/blk-io-throttle.c b/block/blk-io-throttle.c
new file mode 100644
index 0000000..c6af273
--- /dev/null
+++ b/block/blk-io-throttle.c
@@ -0,0 +1,490 @@
+/*
+ * blk-io-throttle.c
+ *
+ * This program is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU General Public
+ * License as published by the Free Software Foundation; either
+ * version 2 of the License, or (at your option) any later version.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
+ * General Public License for more details.
+ *
+ * You should have received a copy of the GNU General Public
+ * License along with this program; if not, write to the
+ * Free Software Foundation, Inc., 59 Temple Place - Suite 330,
+ * Boston, MA 02111-1307, USA.
+ *
+ * Copyright (C) 2008 Andrea Righi <righi.andrea@gmail.com>
+ */
+
+#include <linux/init.h>
+#include <linux/module.h>
+#include <linux/cgroup.h>
+#include <linux/slab.h>
+#include <linux/gfp.h>
+#include <linux/err.h>
+#include <linux/sched.h>
+#include <linux/fs.h>
+#include <linux/jiffies.h>
+#include <linux/hardirq.h>
+#include <linux/list.h>
+#include <linux/spinlock.h>
+#include <linux/uaccess.h>
+#include <linux/vmalloc.h>
+#include <linux/blk-io-throttle.h>
+
+#define ONE_SEC 1000000L /* # of microseconds in a second */
+#define KBS(x) ((x) * ONE_SEC >> 10)
+
+struct iothrottle_node {
+	struct list_head node;
+	dev_t dev;
+	unsigned long iorate;
+	unsigned long timestamp;
+	atomic_long_t stat;
+	long bucket_size;
+	atomic_long_t token;
+};
+
+struct iothrottle {
+	struct cgroup_subsys_state css;
+	/* protects the list below, not the single elemen
...
