This guide is constantly being updated. It assumes throughout that you are using DRBD version 8; if you are using an earlier release, some instructions may not apply. Please use the drbd-user mailing list to submit comments.
Published (last updated): 19 October 2007
Please use these contents only as a guideline. For a detailed installation and configuration walkthrough, read the official DRBD documentation. At the time of this writing, the latest DRBD version is 8. The same DRBD packages must be installed on both nodes.
If you are using a CentOS 5 platform, the drbd83 package and its kernel-module counterpart kmod-drbd83 are available from the extras repository, which is enabled by default. In the end, the same DRBD configuration must be present on both nodes. Data marked in bold italic must be replaced with the actual values from your specific setup.
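As a hedged illustration, a minimal /etc/drbd.conf resource stanza for the 8.3 series might look like the sketch below. The resource name r0, the hostnames node1 and node2, the backing disk /dev/sda7, and the IP addresses are all placeholders that must match your actual setup; in particular, each hostname must match the output of uname -n on that node.

```
resource r0 {
  protocol C;                 # fully synchronous replication
  syncer { rate 10M; }        # cap resynchronization bandwidth
  on node1 {
    device    /dev/drbd0;     # DRBD device exposed to the file system
    disk      /dev/sda7;      # lower-level backing device
    address   10.0.0.1:7788;  # this node's replication endpoint
    meta-disk internal;       # keep DRBD metadata on the backing device
  }
  on node2 {
    device    /dev/drbd0;
    disk      /dev/sda7;
    address   10.0.0.2:7788;
    meta-disk internal;
  }
}
```

The identical file must be copied to both nodes before the resource is brought up.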
The commands must be issued on both nodes. After this stage, you will need to perform the described operations only on the primary node.

Disk synchronization

The initial full synchronization of the two nodes must be performed only once, during initial resource configuration, and only on the node you have selected as the synchronization source.
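Assuming a resource named r0 (a placeholder; substitute your own resource name), the commands issued on both nodes typically look like this sketch:

```shell
# Create the DRBD metadata for the resource (run on BOTH nodes)
drbdadm create-md r0

# Start the DRBD service so the two nodes connect (run on BOTH nodes)
service drbd start

# Inspect the state: both nodes should report Connected, with both
# sides still Secondary and the data still Inconsistent at this point
cat /proc/drbd
```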
This node is the one you will consider the primary node in the future cluster setup. This step is performed with a single command on the chosen node; after issuing it, the initial full synchronization will commence. It may take some time, depending on the size of the device and overall disk and network performance. The synchronization's start and completion are each recorded with a syslog message. Do not attempt to perform the same synchronization on the secondary node; it must be performed only once, on the primary node.
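The exact command is not reproduced above; on the DRBD 8.3 series the usual invocation to declare this node the synchronization source and start the initial sync is sketched below (the resource name r0 is a placeholder, and you should verify the syntax against the documentation for your installed version):

```shell
# On the chosen primary node ONLY: declare this node's data
# authoritative, promote it to Primary, and trigger the initial
# full synchronization toward the peer
drbdadm -- --overwrite-data-of-peer primary r0

# Watch synchronization progress until it completes
watch cat /proc/drbd
```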
File system creation

At this final point, you create a file system of your choice on the DRBD resource. In our example, an ext3 file system is created. Remember to perform this step only on the primary node.
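For example, creating and mounting an ext3 file system on the DRBD device could look like the following sketch; the device name /dev/drbd0 and the mount point /data are placeholders from our assumed configuration:

```shell
# On the PRIMARY node only: create an ext3 file system on the DRBD device
mkfs.ext3 /dev/drbd0

# Mount it on the primary node. Do not add it to /etc/fstab: the cluster
# resource manager should mount it on whichever node is currently primary.
mkdir -p /data
mount /dev/drbd0 /data
```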
Distributed Replicated Block Device
Writes to the primary node are transferred to the lower-level block device and simultaneously propagated to the secondary node(s). Each secondary node then transfers the data to its corresponding lower-level block device. When a failed ex-primary node returns, the system may or may not raise it to primary level again after device data resynchronization. DRBD is often deployed together with the Pacemaker or Heartbeat cluster resource managers, although it also integrates with other cluster management frameworks. It integrates with virtualization solutions such as Xen, and may be used both below and on top of the Linux LVM stack.

Shared cluster storage comparison

Conventional computer cluster systems typically use some sort of shared storage for the data used by cluster resources; access to such storage traverses the network, which adds I/O overhead. In DRBD that overhead is reduced, as all read operations are carried out locally.
The DRBD User’s Guide