Re: DRBD? [message #4538 is a reply to message #4532]
Thu, 13 July 2006 21:05
cdevidal
Messages: 24 Registered: June 2006 Location: Jacksonville, FL
Junior Member
wfischer wrote on Thu, 13 July 2006 10:12 | Just FYI: if you need some more information - we had a talk about clustering at the German LinuxTag, and there was also a part about clustering OpenVZ with DRBD and Heartbeat.
Yeah, I saw that paper but forgot about it. Thanks for reminding me. Perhaps the author has tips, scripts, and config files he could share...
R U good enough?
TenThousandDollarOffer.com
Re: DRBD? [message #4559 is a reply to message #4556]
Fri, 14 July 2006 13:27
cdevidal
Messages: 24 Registered: June 2006 Location: Jacksonville, FL
Junior Member
wfischer wrote on Fri, 14 July 2006 09:10 | Ensure that you use the same version of the DRBD userland tools as the DRBD version included in the OpenVZ kernel (I think the OpenVZ kernel currently has DRBD 0.7.17 - but check to be sure).
That is correct.
Found this:
CentOS 4 DRBD userland tool RPMs
And this:
Other distros
Unfortunately everything is 0.7.20.
Are you certain the versions must precisely match?
Re: DRBD? [message #4562 is a reply to message #4560]
Fri, 14 July 2006 14:30
cdevidal
Messages: 24 Registered: June 2006 Location: Jacksonville, FL
Junior Member
wfischer wrote on Fri, 14 July 2006 09:44 | If you cannot find prebuilt ones, it is very easy to create RPMs from DRBD's source (you can do a "make rpm"; a spec file is already included in the source).
A spec file for the userland tools?
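For reference, the "make rpm" route described above looks roughly like this (a hedged sketch - the version number and download location are assumptions; substitute whatever matches your kernel module):

```shell
# Build DRBD userland RPMs from the source tarball (sketch; the
# version here is an assumption - match it to your kernel module).
VERSION=0.7.17
TARBALL="drbd-${VERSION}.tar.gz"
if [ -f "$TARBALL" ]; then
    tar xzf "$TARBALL"
    cd "drbd-${VERSION}"
    make rpm            # uses the spec file shipped in the source tree
else
    echo "fetch $TARBALL from http://oss.linbit.com/drbd/ first"
fi
```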
Re: DRBD? [message #4792 is a reply to message #4567]
Wed, 26 July 2006 11:45
wfischer
Messages: 38 Registered: November 2005 Location: Austria/Germany
Member
Hi again,
I found postings from Lars (one of the DRBD developers): http://lists.linbit.com/pipermail/drbd-user/2006-May/005027.html and http://lists.linbit.com/pipermail/drbd-user/2006-June/005051.html
There he gives a short explanation of the version numbers for the api and proto versions. As I already thought, the api version of the userland tools must match the api version of the kernel module.
I checked the DRBD version of the last three OpenVZ kernels, see below:
[root@localhost ~]# cat /proc/version
Linux version 2.6.8-022stab077.1 (root@kern268.build.sw.ru) (gcc version 3.3.3 20040412 (Red Hat Linux 3.3.3-7)) #1 Fri Apr 21 16:50:02 MSD 2006
[root@localhost ~]# modprobe drbd
[root@localhost ~]# cat /proc/drbd
version: 0.7.17 (api:77/proto:74)
SVN Revision: 2093 build by phil@mescal, 2006-03-06 15:04:12
0: cs:Unconfigured
1: cs:Unconfigured
[root@localhost ~]#
[root@localhost ~]# cat /proc/version
Linux version 2.6.8-022stab078.10 (root@rhel4-32) (gcc version 3.4.5 20051201 (Red Hat 3.4.5-2)) #1 Wed Jun 21 12:01:20 MSD 2006
[root@localhost ~]# modprobe drbd
[root@localhost ~]# cat /proc/drbd
version: 0.7.17 (api:77/proto:74)
SVN Revision: 2093 build by phil@mescal, 2006-03-06 15:04:12
0: cs:Unconfigured
1: cs:Unconfigured
[root@localhost ~]#
[root@localhost ~]# cat /proc/version
Linux version 2.6.8-022stab078.14 (root@kern268.build.sw.ru) (gcc version 3.3.3 20040412 (Red Hat Linux 3.3.3-7)) #1 Wed Jul 19 16:02:34 MSD 2006
[root@localhost ~]# modprobe drbd
[root@localhost ~]# cat /proc/drbd
version: 0.7.20 (api:79/proto:74)
SVN Revision: 2260 build by phil@mescal, 2006-07-04 15:18:57
0: cs:Unconfigured
1: cs:Unconfigured
[root@localhost ~]#
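The api number in those "version:" lines can be pulled out with a short one-liner - a small sketch; the sample string here is copied from the output above, and on a live box you would read it from /proc/drbd instead:

```shell
# Extract the api version from a /proc/drbd "version:" line.
# On a live system: line=$(grep '^version:' /proc/drbd)
line="version: 0.7.17 (api:77/proto:74)"
api=$(printf '%s\n' "$line" | sed 's/.*api:\([0-9]*\).*/\1/')
echo "$api"    # prints 77
```

Comparing this number between the running kernel module and the userland tools is the check that matters, per Lars' postings.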
Regarding instructions for building a DRBD/Heartbeat cluster with OpenVZ for the OpenVZ wiki: I finally have time to do this today, so in about 12 hours the information should be in the wiki. [Update: unfortunately I won't finish today. I have my two CentOS boxes running for this, but it is taking a little longer than I thought. I will post here once the content is in the wiki.]
best wishes from Austria,
Werner
Werner Fischer, Developer of a Virtuozzo-out-of-the-box-cluster solution at Thomas-Krenn.AG
[Updated on: Wed, 26 July 2006 19:16]
Re: DRBD? [message #4866 is a reply to message #4862]
Mon, 31 July 2006 17:39
cdevidal
Messages: 24 Registered: June 2006 Location: Jacksonville, FL
Junior Member
wfischer wrote on Sun, 30 July 2006 14:21 | I have documented the first part of how to use DRBD with OpenVZ
Woohoo! I've had to pause my work, so I'm glad you've got something up there.
wfischer wrote on Sun, 30 July 2006 14:21 | The info on (...) how to do updates (especially how to do OpenVZ kernel updates that contain a new version of DRBD, which is a little tricky) will follow soon.
Wonderful! That info, plus knowing which OpenVZ files to copy to the DRBD partition, were my unknowns.
Question: why not two DRBD partitions, one on each node, and run a handful of VPSes on the first node and a handful on the second, so that the second node's CPU cycles and RAM are not sitting idle? Or were you just trying to keep things simple?
If you do such a setup, DRBD's group parameter is very helpful when you have two DRBD devices on one hard drive: the first group synchronizes, then the second, rather than both in parallel (which would only make sense if you had two drives). Put one DRBD device in one group and the other in a new group.
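A minimal drbd.conf sketch of that layout for DRBD 0.7 (resource names, rates, and the omitted host/device sections are made up; only the syncer group lines matter here):

```
resource r0 {
  protocol C;
  syncer {
    group 1;   # this device resynchronizes first
    rate 10M;
  }
  # on/device/disk sections omitted ...
}

resource r1 {
  protocol C;
  syncer {
    group 2;   # resynchronizes only after group 1 has finished
    rate 10M;
  }
  # on/device/disk sections omitted ...
}
```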
Re: DRBD? [message #4875 is a reply to message #4866]
Tue, 01 August 2006 08:24
wfischer
Messages: 38 Registered: November 2005 Location: Austria/Germany
Member
cdevidal wrote on Mon, 31 July 2006 19:39 | Question: Why not two DRBD partitions, one on each node, and run a handful of VPSes on the first node and a handful on the second, so that the second node's CPU cycles and RAM are not sitting idle? Or were you just trying to keep things simple?
Yes, the first reason is that the setup is simpler. And the simpler the setup is, the higher the availability will be.
The second reason is that in an active-passive configuration you become aware of performance bottlenecks soon enough. For example, we had a cluster that ran Apache on node1 and MySQL on node2 (without any virtualization). When we started the project, each machine had 1.5 GB RAM; Apache needed about 500 MB and MySQL also needed about 500 MB. After some time we discovered that Apache now needed 1 GB and MySQL also consumed 1 GB of RAM - so if a failover had happened, the remaining cluster node would have started swapping and become very slow (in fact so slow that it would have seemed the cluster was down).
When you run all services on only one node, you discover those performance bottlenecks sooner (before a failover happens) - and can enlarge e.g. RAM, like in this case.
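As a back-of-the-envelope version of that failover headroom check (a sketch using the numbers from the Apache/MySQL example, not a monitoring tool):

```shell
# Would the surviving node swap after a failover?
# Numbers are the ones from the Apache/MySQL example above.
node_ram_mb=1536     # 1.5 GB per node
apache_mb=1024       # Apache grew to ~1 GB
mysql_mb=1024        # MySQL grew to ~1 GB
total=$((apache_mb + mysql_mb))
if [ "$total" -gt "$node_ram_mb" ]; then
    echo "failover risk: services need ${total} MB, node has ${node_ram_mb} MB"
else
    echo "ok: ${total} MB fits in ${node_ram_mb} MB"
fi
```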
Evan Marcus and Hal Stern have a very interesting discussion about why to use active/passive and what to answer when management asks: "How can I use the standby server?" You can find it in their book "Blueprints for High Availability", 2nd edition, pages 417-425 (see http://www.amazon.com/gp/product/0471430269/).
cdevidal wrote on Mon, 31 July 2006 19:39 | If you do such a setup, DRBD's Group parameter is very helpful when you have two DRBD devices on one hard drive. The first group synchronizes, then the second, but not in parallel (as would be the case if you had two drives). Set one DRBD device in one group and the other in a new group.
Yes, you are absolutely right! If someone really wants active/active, the DRBD group parameter is very valuable.
Re: DRBD? [message #4880 is a reply to message #4875]
Tue, 01 August 2006 12:40
cdevidal
Messages: 24 Registered: June 2006 Location: Jacksonville, FL
Junior Member
wfischer wrote on Tue, 01 August 2006 04:24 | The second reason is that in an active-passive configuration you become aware of performance bottlenecks soon enough. For example, we had a cluster that ran Apache on node1 and MySQL on node2 (without any virtualization). When we started the project, each machine had 1.5 GB RAM; Apache needed about 500 MB and MySQL also needed about 500 MB. After some time we discovered that Apache now needed 1 GB and MySQL also consumed 1 GB of RAM - so if a failover had happened, the remaining cluster node would have started swapping and become very slow (in fact so slow that it would have seemed the cluster was down).
When you run all services on only one node, you discover those performance bottlenecks sooner (before a failover happens) - and can enlarge e.g. RAM, like in this case.
Evan Marcus and Hal Stern have a very interesting discussion about why to use active/passive and what to answer when management asks: "How can I use the standby server?"
I'm glad I talked to someone with experience!
Good info, thank you.
Two questions:
1.) Did you actually perform a failover and observe it to be so slow because it was swapping?
2.) So am I to understand that in theory load balancing and high availability aren't in conflict, but in practice they are? For example OpenSSI, which gives you high availability + load balancing - but if you have just two nodes and every service fails over to one node, it gets so slow you might as well not have anything at all.
In other words, are load balancing and high availability mutually exclusive not in theory but in practice, at least for two nodes?
[Updated on: Tue, 01 August 2006 12:56]
Re: DRBD? [message #4941 is a reply to message #4880]
Fri, 04 August 2006 09:55
wfischer
Messages: 38 Registered: November 2005 Location: Austria/Germany
Member
cdevidal wrote on Tue, 01 August 2006 14:40 | 1.) Did you actually perform a failover and observe it to be so slow because it was swapping?
Yes, I had one situation like that. As far as I remember, I also saw a situation where the OOM killer finally became active, as even the sum of physical RAM + swap was not big enough.
cdevidal wrote on Tue, 01 August 2006 14:40 | 2.) So am I to understand that in theory load balancing and high availability aren't in conflict, but in practice they are? For example OpenSSI, which gives you high availability + load balancing - but if you have just two nodes and every service fails over to one node, it gets so slow you might as well not have anything at all.
In other words, are load balancing and high availability mutually exclusive not in theory but in practice, at least for two nodes?
I have no experience with OpenSSI - I only know that it provides a single system image across many machines. When you need load balancing (like a webserver farm), you also need two clustered load-balancer boxes (otherwise the load balancer would be a single point of failure). So with load balancing you need at least four machines (two load balancers and two servers) to also get high availability.
Up until now I have not implemented a load-balancing cluster (I have only taken a deeper look at Linux Virtual Server).
One more thing: I updated http://wiki.openvz.org/HA_cluster_with_DRBD_and_Heartbeat - I think it is complete now. I hope I have not overlooked any errors in the document, as it has become rather long in the meantime.
best wishes,
Werner
Re: DRBD? [message #4943 is a reply to message #4941]
Fri, 04 August 2006 10:25
cdevidal
Messages: 24 Registered: June 2006 Location: Jacksonville, FL
Junior Member
wfischer wrote on Fri, 04 August 2006 05:55 | I have no experience with OpenSSI - I only know that it provides a single system image across many machines.
Oh no, I was just using it as an example.
I was asking whether you think all load balancing + high availability solutions are mutually exclusive.
Re: DRBD? [message #5775 is a reply to message #4964]
Wed, 30 August 2006 13:08
jimcooncat
Messages: 1 Registered: August 2006
Junior Member
I'm trying to figure out how having both OpenVZ and DRBD on a two-node server set would be useful. In my half-baked scenario, I'm picturing a thin-client setup like this:
-----------      -----------      -----------
| client  |      | client  |      | client  |
-----------      -----------      -----------
     *                *                *
     **********network switch***********
          *                       *
   --------------          --------------
   | app server |          | app server |
   --------------          --------------
          *                       *
     **********network switch***********
          *                       *
 ----------------              ---------------
 | drbd primary |*****xover****| drbd backup |
 ----------------              ---------------
My thought was to have OpenVZ on the "app servers" to run FreeNX servers, and to be able to load balance/migrate sessions between the two "app servers".
Would loading OpenVZ right on the DRBD machines let me eliminate the "app server" layer? Or am I barking up the wrong tree here?
----
from Maine, that's why