DRBD Configuration

DRBD 8.0 is the version in Debian Lenny; 8.3 is in Backports. The following covers setting up the 8.0 version. 8.3 has many more options, but they aren't strictly necessary unless your storage is larger than 4 TB.

DRBD is controlled by the "/etc/drbd.conf" file. A template will be installed when you install the "drbd8-utils" package; you'll need to create your own to match your setup. In the following we'll assume that we have two servers, "node1" and "node2". The DRBD cross-over interface is eth2 on both machines, with the IP address "10.10.10.1" on node1 and "10.10.10.2" on node2. "/dev/md2" is a RAID1 array that we'll use both for the data that DRBD will be keeping in sync and for the meta-data.

Here's a complete /etc/drbd.conf file. It's identical on both nodes because it contains all the information about both sides of the cluster.

  resource vs {
    protocol C;
    startup {
      wfc-timeout 30;
      degr-wfc-timeout 15;
    }
    disk {
      fencing resource-only;
    }
    handlers {
      outdate-peer "/usr/lib/heartbeat/drbd-peer-outdater -t 5";
    }
    on node1 {
      address 10.10.10.1:7789;
      device /dev/drbd0;
      disk /dev/md2;
      meta-disk internal;
    }
    on node2 {
      address 10.10.10.2:7789;
      device /dev/drbd0;
      disk /dev/md2;
      meta-disk internal;
    }
  }

Let's look at these lines a little closer.

  resource vs {...}

"resource" is the keyword that starts a specific DRBD device definition. You can then name that resource anything you want; we'll call it "vs" for "vservers". The DRBD docs use "r0". Use whatever you want. If you are setting up more than one DRBD device, you need to create a separate "resource" section for each, with different names, of course.

  protocol C;

There are a few different ways DRBD can operate. With protocol "C", when data is written to one DRBD node it is then written to the other node, and only THEN does the original write return. It's a "synchronous" write, so if a write returns successfully you always know you have a good copy on both sides. There are other protocols as well; check out the DRBD docs for the full explanation.

  startup {
    wfc-timeout 30;
    degr-wfc-timeout 15;
  }

When DRBD starts up (usually at boot time) it will try to connect to the other side. If the other side is dead, we don't want it hanging forever. "wfc-timeout" is how many seconds it will wait for the other side on a "normal" startup. "degr-wfc-timeout" is how long DRBD waits if it was the only working node previously (i.e. the cluster is "degraded"). We wait 30 seconds on a normal startup, and 15 if the cluster was degraded. You should definitely set these once you are in production, as the default is to wait forever.

  disk {
    fencing resource-only;
  }
  handlers {
    outdate-peer "/usr/lib/heartbeat/drbd-peer-outdater -t 5";
  }

This is what "hooks" DRBD and Heartbeat together. "drbd-peer-outdater" is a Heartbeat script that marks the other side's data as outdated if it loses the connection. This way a node that doesn't have an up-to-date copy of the data can never become primary and overwrite data on the other side.
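For the "outdate-peer" handler to actually do anything, the dopd daemon it talks to has to be running under Heartbeat. As a rough sketch (exact paths and syntax depend on your Heartbeat version, so verify against the DRBD and Heartbeat docs), that usually means a couple of lines like these in /etc/ha.d/ha.cf on both nodes:

  # Keep the DRBD outdate-peer daemon running under Heartbeat,
  # and let it talk to the Heartbeat API.
  respawn hacluster /usr/lib/heartbeat/dopd
  apiauth dopd gid=haclient uid=hacluster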
  on node1 {...}
  ...
  on node2 {...}

Here's where we create the DRBD shared device itself. We need two sections, "on node1" and "on node2", each describing the settings on that node of our cluster. In our case they are identical except for the IP address of the node itself, but they don't have to be. We think, however, it's a good idea to keep the two sides as "identical" as possible for ease of administration.

  address 10.10.10.1:7789;

This is the IP address and port that DRBD will use to communicate on this node. It needs to be the IP address of the cross-over interface. The port can be whatever you want, as long as it's open, of course. "7789" is the default port used in the DRBD docs, so we use it as well.

  device /dev/drbd0;

This is the device that will be created by DRBD and which you will use to build your filesystem. Choose any unused "/dev/drbdX". Being the geeks that we are, we count from '0'. :-)

  disk /dev/md2;

This is the physical "backing device" that DRBD will use. In this case it's a RAID1 device, but as we said, it can be anything - a drive, a partition, a logical volume, a RAID array, etc.

  meta-disk internal;

Here we tell DRBD where to store the meta-data. "internal" is the easiest: the meta-data is stored on the same device as the data. You can also store it on a separate partition, and even store the meta-data for several different DRBD devices on the same partition using an index; see the docs for the details if you need that. We take the easy way and store it all locally. (Remember, do NOT put meta-data on a RAID5 array.)

There are a lot more options you can put into your drbd.conf file, but the above is what you need to make this work. Check out the DRBD docs for all the other possibilities. (But make sure you match the syntax to your version of DRBD.)
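Once the file is in place on both nodes, a quick sanity check is worthwhile: "drbdadm dump" parses /etc/drbd.conf and prints the configuration back out, so any syntax errors show up before you try to start anything. A minimal check, assuming the resource name "vs" from above:

  # Parse /etc/drbd.conf and echo the "vs" resource back out.
  # Any complaints here mean a typo in the config file.
  drbdadm dump vs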
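And as a rough sketch of the first-time bring-up (the command syntax shown is for DRBD 8.x), the usual sequence is: create the meta-data and start DRBD on both nodes, then force one node to become primary so the initial sync has a direction:

  # On BOTH nodes: write the internal meta-data onto /dev/md2 and start DRBD.
  drbdadm create-md vs
  /etc/init.d/drbd start

  # On node1 ONLY: declare this side's data good and become primary.
  # This kicks off the initial full sync over to node2.
  drbdadm -- --overwrite-data-of-peer primary vs

  # Watch the sync progress from either node.
  cat /proc/drbd

Once /proc/drbd shows both sides as "UpToDate", you can build a filesystem on /dev/drbd0 on the primary just as you would on any other block device.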