Setting Up Network RAID1 With DRBD

This tutorial shows how to set up RAID1 with the help of DRBD on two Linux boxes. This is useful for high-availability setups (like a HA NFS server) because if one node fails, all data is still available from the other node.

It is very important that both nodes keep the same time, so we install the NTP packages on both:
apt-get install ntp ntpdate

Verify your current partition table and make sure you have one empty partition ready for DRBD on both servers.
fdisk -l

In our case the output looks like this:

Disk /dev/sda: 32.2 GB, 32212254720 bytes
255 heads, 63 sectors/track, 3916 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00029d5c

   Device Boot      Start         End      Blocks   Id  System
/dev/sda1   *           1        3793    30461952   83  Linux
/dev/sda2            3793        3917      992257    5  Extended
/dev/sda5            3793        3917      992256   82  Linux swap / Solaris

Disk /dev/sdb: 32.2 GB, 32212254720 bytes
255 heads, 63 sectors/track, 3916 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00000000

Disk /dev/sdb doesn't contain a valid partition table

As you can see, we haven't partitioned /dev/sdb yet. Use fdisk to create a single Linux partition spanning the disk (on both servers), then run fdisk -l again to verify the result:

   Device Boot      Start         End      Blocks   Id  System
/dev/sdb1               1        3916    31455238+  83  Linux
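The interactive fdisk session can also be scripted. Here is a minimal sketch using sfdisk's script mode (modern util-linux syntax), assuming /dev/sdb is the empty disk on both nodes; the destructive line is commented out so nothing is wiped by accident:

```shell
# One-line sfdisk script: a single Linux (type 83) partition
# filling the whole disk, start and size left at their defaults.
SPEC='type=83'
printf '%s\n' "$SPEC"
# To apply it (DESTRUCTIVE -- rewrites /dev/sdb's partition table), run as root:
# echo "$SPEC" | sfdisk /dev/sdb
```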

Now you can install DRBD on both nodes and activate it using modprobe:
apt-get install drbd8-utils
modprobe drbd

Verify that the module is loaded:
lsmod | grep drbd

If lsmod returns nothing, something is wrong.

Now edit /etc/drbd.conf to set up the partition for replication in DRBD. Our file looks like this:

global { usage-count no; }

common { syncer { rate 100M; } }

resource r0 {
  protocol C;
  startup {
    wfc-timeout 15;
    degr-wfc-timeout 60;
  }
  net {
    cram-hmac-alg sha1;
    shared-secret "secret";
  }
  on s1 {
    device /dev/drbd0;
    disk /dev/sdb1;
    address <ip-of-s1>:7788;
    meta-disk internal;
  }
  on s2 {
    device /dev/drbd0;
    disk /dev/sdb1;
    address <ip-of-s2>:7788;
    meta-disk internal;
  }
}

Make sure the IP addresses in the address lines match those of your first and second node.

Now we can create the actual DRBD metadata for the volume (run this on both nodes):
drbdadm create-md r0

This should result in something like this:
Writing meta data...
initializing activity log
NOT initialized bitmap
New drbd meta data block successfully created.

Now start DRBD like any other service:
/etc/init.d/drbd start

Everything should be up and running now! Let's make server 1 the primary (run this on the first node):
drbdadm -- --overwrite-data-of-peer primary all

Now we can check the status by using the following command:
cat /proc/drbd
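If you want to pick the connection state and disk states out of that output in a script, a few lines of grep/cut are enough. The sketch below runs against a captured sample line so it can be tried anywhere; on a real node you would replace the sample with the output of cat /proc/drbd:

```shell
# Sample status line as it appears in /proc/drbd during a sync.
drbd_status='0: cs:SyncTarget ro:Secondary/Primary ds:Inconsistent/UpToDate C r----'

# cs: = connection state, ds: = local/peer disk states.
cs=$(printf '%s\n' "$drbd_status" | grep -o 'cs:[^ ]*' | cut -d: -f2)
ds=$(printf '%s\n' "$drbd_status" | grep -o 'ds:[^ ]*' | cut -d: -f2-)

echo "connection: $cs"   # → connection: SyncTarget
echo "disks: $ds"        # → disks: Inconsistent/UpToDate
```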

This would result in something like this:
version: 8.3.7 (api:88/proto:86-91)
srcversion: EE47D8BF18AC166BE219757
0: cs:SyncTarget ro:Secondary/Primary ds:Inconsistent/UpToDate C r----
ns:0 nr:15790400 dw:15790144 dr:0 al:0 bm:963 lo:9 pe:29622 ua:8 ap:0 ep:1 wo:b oos:15664096
[=========>..........] sync'ed: 50.3% (15296/30716)M
finish: 0:02:44 speed: 95,212 (85,352) K/sec

When the sync has been completed you will see something like this:
srcversion: EE47D8BF18AC166BE219757
0: cs:Connected ro:Primary/Secondary ds:UpToDate/UpToDate C r----

You can now create an ext3 filesystem on it and start pumping files!
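The filesystem step could look like the sketch below. The mount point /mnt/drbd0 is just an example; the destructive commands are commented out so nothing runs by accident, and they belong on the Primary node only (the Secondary receives the data via DRBD):

```shell
DEVICE=/dev/drbd0
MOUNTPOINT=/mnt/drbd0   # example mount point; pick your own

# On the Primary node only, as root:
# mkfs.ext3 "$DEVICE"
# mkdir -p "$MOUNTPOINT"
# mount "$DEVICE" "$MOUNTPOINT"
echo "ready to format $DEVICE and mount it on $MOUNTPOINT"
```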

