DRBD.CONF(5)                   Configuration Files                   DRBD.CONF(5)
NAME¶
drbd.conf - Configuration file for DRBD's devices

INTRODUCTION¶
The file /etc/drbd.conf is read by drbdadm.

The file format was designed to allow a verbatim copy of the file on both
nodes of the cluster. It is highly recommended to do so in order to keep your
configuration manageable. The file /etc/drbd.conf should be the same on both
nodes of the cluster. Changes to /etc/drbd.conf do not apply immediately.

By convention the main config contains two include statements. The first one
includes the file /etc/drbd.d/global_common.conf, the second one includes all
files with a .res suffix.

In the example below, alice uses /dev/drbd1 as the device for its application,
and /dev/sda7 as low-level storage for the data. The IP addresses are used to
specify the networking interfaces to be used. Any resync process that may be
running should use about 10MByte/second of IO bandwidth. This sync-rate
statement is valid for volume 0, but would also be valid for further volumes;
in this example it assigns a full 10MByte/second to each volume.

There may be multiple resource sections in a single drbd.conf file. For more
examples, please have a look at the DRBD User's Guide[1].
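A resource definition matching the description above might look as follows (a
sketch; the host names alice and bob, the addresses, the port, and the shared
secret are illustrative):

    resource r0 {
          net {
               protocol C;
               cram-hmac-alg sha1;
               shared-secret "FooFunFactory";
          }
          disk {
               resync-rate 10M;
          }
          on alice {
               volume 0 {
                    device    minor 1;
                    disk      /dev/sda7;
                    meta-disk internal;
               }
               address   10.1.1.31:7789;
          }
          on bob {
               volume 0 {
                    device    minor 1;
                    disk      /dev/sda7;
                    meta-disk internal;
               }
               address   10.1.1.32:7789;
          }
    }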
FILE FORMAT¶
The file consists of sections and parameters. A section begins with a keyword,
sometimes an additional name, and an opening brace (“{”). A section ends with
a closing brace (“}”). The braces enclose the parameters.

    section [name] { parameter value; [...] }

A parameter starts with the identifier of the parameter followed by
whitespace. Every subsequent character is considered part of the parameter's
value. A special case are Boolean parameters, which consist only of the
identifier. Parameters are terminated by a semicolon (“;”).

Some parameter values have default units, which may be overruled by the
suffixes K, M or G. These units are defined in the usual way (K = 2^10 = 1024,
M = 1024 K, G = 1024 M).

Comments may be placed into the configuration file and must begin with a hash
sign (“#”). Subsequent characters are ignored until the end of the line.

Sections¶
skip
Comments out chunks of text, even spanning more than one
line. Characters between the keyword skip and the opening brace
(“{”) are ignored. Everything enclosed by the braces is skipped.
This comes in handy if you just want to comment out some 'resource [name]
{...}' section: just precede it with 'skip'.
global
Configures some global parameters. Currently only
minor-count, dialog-refresh, disable-ip-verification and
usage-count are allowed here. You may only have one global section,
preferably as the first section.
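A minimal global section might look like this (a sketch; the values shown are
illustrative):

    global {
          usage-count yes;
          minor-count 64;
    }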
common
All resources inherit the options set in this section.
The common section might have a startup, an options, a
handlers, a net and a disk section.
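For example, a common section that makes all resources default to protocol C
(a sketch; shared net, disk, startup or handlers settings would go here as
needed):

    common {
          net {
               protocol C;
          }
    }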
resource name
Configures a DRBD resource. Each resource section needs
to have two (or more) on host sections and may have a
startup, an options, a handlers, a net and a
disk section. It might contain volume sections.
on host-name
Carries the necessary configuration parameters for a DRBD
device of the enclosing resource. host-name is mandatory and must match
the Linux host name (uname -n) of one of the nodes. You may list more than one
host name here, in case you want to use the same parameters on several hosts
(you'd have to move the IP around usually). Or you may list more than two such
sections.
See also the floating section keyword. Required statements in this
section: address and volume. Note that for backward compatibility and
convenience it is valid to embed the statements of a single volume directly
into the host section.
    resource r1 {
          protocol C;
          device minor 1;
          meta-disk internal;

          on alice bob {
               address 10.2.2.100:7801;
               disk /dev/mapper/some-san;
          }
          on charlie {
               address 10.2.2.101:7801;
               disk /dev/mapper/other-san;
          }
          on daisy {
               address 10.2.2.103:7801;
               disk /dev/mapper/other-san-as-seen-from-daisy;
          }
    }

volume vnr
Defines a volume within a connection. The minor numbers
of a replicated volume might be different on different hosts; the volume
number (vnr) is what groups them together. Required parameters in this
section: device, disk, meta-disk.
stacked-on-top-of resource
For a stacked DRBD setup (3 or 4 nodes), a
stacked-on-top-of section is used instead of an on section. Required
parameters in this section: device and address.
floating AF addr:port
Carries the necessary configuration parameters for a DRBD
device of the enclosing resource. This section is very similar to the
on section. The difference to the on section is that the
matching of the host sections to machines is done by the IP address instead of
the node name. Required parameters in this section: device,
disk, meta-disk, all of which may be inherited from the
resource section, in which case you may shorten this section down to just the
address identifier.
    resource r2 {
          protocol C;
          device minor 2;
          disk /dev/sda7;
          meta-disk internal;

          # short form, device, disk and meta-disk inherited
          floating 10.1.1.31:7802;

          # longer form, only device inherited
          floating 10.1.1.32:7802 {
               disk /dev/sdb;
               meta-disk /dev/sdc8;
          }
    }

disk
This section is used to fine tune DRBD's properties with
respect to the low level storage. Please refer to drbdsetup(8) for a
detailed description of the parameters. Optional parameters:
on-io-error, size, fencing, disk-barrier,
disk-flushes, disk-drain, md-flushes,
max-bio-bvecs, resync-rate, resync-after,
al-extents, al-updates, c-plan-ahead,
c-fill-target, c-delay-target, c-max-rate,
c-min-rate, disk-timeout, discard-zeroes-if-aligned,
rs-discard-granularity, read-balancing.
net
This section is used to fine tune DRBD's properties.
Please refer to drbdsetup(8) for a detailed description of this
section's parameters. Optional parameters: protocol,
sndbuf-size, rcvbuf-size, timeout, connect-int,
ping-int, ping-timeout, max-buffers,
max-epoch-size, ko-count, allow-two-primaries,
cram-hmac-alg, shared-secret, after-sb-0pri,
after-sb-1pri, after-sb-2pri, data-integrity-alg,
no-tcp-cork, on-congestion, congestion-fill,
congestion-extents, verify-alg, use-rle,
csums-alg, socket-check-timeout.
startup
This section is used to fine tune DRBD's properties.
Please refer to drbdsetup(8) for a detailed description of this
section's parameters. Optional parameters: wfc-timeout,
degr-wfc-timeout, outdated-wfc-timeout, wait-after-sb,
stacked-timeouts and become-primary-on.
options
This section is used to fine tune the behaviour of the
resource object. Please refer to drbdsetup(8) for a detailed
description of this section's parameters. Optional parameters:
cpu-mask and on-no-data-accessible.
handlers
In this section you can define handlers (executables)
that are started by the DRBD system in response to certain events. Optional
parameters: pri-on-incon-degr, pri-lost-after-sb,
pri-lost, fence-peer (formerly outdate-peer),
local-io-error, initial-split-brain, split-brain,
before-resync-target, after-resync-target.
The interface is done via environment variables:

•   DRBD_RESOURCE is the name of the resource.

•   DRBD_MINOR is the minor number of the DRBD device, in decimal.

•   DRBD_CONF is the path to the primary configuration file; if you split your
    configuration into multiple files (e.g. in /etc/drbd.conf.d/), this will
    not be helpful.

•   DRBD_PEER_AF, DRBD_PEER_ADDRESS, DRBD_PEERS are the address family (e.g.
    ipv6), the peer's address and hostnames.

DRBD_PEER is deprecated.

Please note that not all of these might be set for all handlers, and that some
values might not be usable for a floating definition.

Parameters¶
minor-count count
may be a number from 1 to 1048575.
Minor-count is a sizing hint for DRBD. It helps to right-size various
memory pools. It should be set in the same order of magnitude as the
actual number of minors you use. By default the module loads with 11 more
resources than you currently have in your config, but at least 32.
dialog-refresh time
may be 0 or a positive number.
The user dialog redraws the second count every time seconds (or does
not redraw if time is 0). The default value is 1.
disable-ip-verification
Use disable-ip-verification if, for some obscure
reason, drbdadm can or might not use ip or ifconfig to do a sanity
check for the IP address. You can disable the IP verification with this
option.
usage-count val
Please participate in DRBD's online usage
counter[2]. The most convenient way to do so is to set this option to
yes. Valid options are: yes, no and ask.
protocol prot-id
On the TCP/IP link the specified protocol is used.
Valid protocol specifiers are A, B, and C.
Protocol A: write IO is reported as completed if it has reached local disk and
the local TCP send buffer.
Protocol B: write IO is reported as completed if it has reached local disk and
the remote buffer cache.
Protocol C: write IO is reported as completed if it has reached both local and
remote disk.
device name minor nr
The name of the block device node of the resource being
described. You must use this device with your application (file system) and
you must not use the low level block device which is specified with the
disk parameter.
One can either omit the name or the minor keyword together with the minor number.
If you omit the name, a default of /dev/drbdminor will be used, e.g. /dev/drbd1
for minor 1.
Udev will create additional symlinks in /dev/drbd/by-res and
/dev/drbd/by-disk.
disk name
DRBD uses this block device to actually store and
retrieve the data. Never access such a device while DRBD is running on top of
it. This also holds true for dumpe2fs(8) and similar commands.
address AF addr:port
A resource needs one IP address per device, which
is used to wait for incoming connections from the partner device and,
respectively, to reach the partner device. AF must be one of ipv4,
ipv6, ssocks or sdp (for compatibility reasons sci
is an alias for ssocks). It may be omitted for IPv4 addresses. The
actual IPv6 address that follows the ipv6 keyword must be placed inside
brackets: ipv6 [fd01:2345:6789:abcd::1]:7800.
Each DRBD resource needs a TCP port which is used to connect to the
node's partner device. Two different DRBD resources may not use the same
addr:port combination on the same node.
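For example (addresses and ports are illustrative; note the omitted AF for
IPv4 and the bracket syntax for IPv6):

    address 10.1.1.31:7789;
    address ipv6 [fd01:2345:6789:abcd::1]:7800;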
meta-disk internal, meta-disk device, meta-disk device [index]
Internal means that the last part of the backing device
is used to store the meta-data. The size of the meta-data is computed based on
the size of the device.
When a device is specified, either with or without an index, DRBD
stores the meta-data on this device. Without index, the size of the
meta-data is determined by the size of the data device. This is usually used
with LVM, which allows to have many variable sized block devices. The
meta-data size is 36kB + Backing-Storage-size / 32k, rounded up to the next
4kB boundary. (Rule of thumb: 32kByte per 1GByte of storage, rounded up to
the next MB.)
When an index is specified, each index number refers to a fixed slot of
meta-data of 128 MB, which allows a maximum data size of 4 TiB. This way,
multiple DRBD devices can share the same meta-data device. For example, if
/dev/sde6[0] and /dev/sde6[1] are used, /dev/sde6 must be at least 256 MB big.
Because of the hard size limit, use of meta-disk indexes is discouraged.
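For illustration, two resources sharing one external meta-data device via
indexes might contain (device names assumed):

    # in resource r3
    meta-disk /dev/sde6[0];

    # in resource r4
    meta-disk /dev/sde6[1];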
on-io-error handler
is taken if the lower level device reports io-errors to
the upper layers.
handler may be pass_on, call-local-io-error or
detach.
pass_on: The node downgrades the disk status to inconsistent, marks the
erroneous block as inconsistent in the bitmap and retries the IO on the remote
node.
call-local-io-error: Call the handler script local-io-error.
detach: The node drops its low level device, and continues in diskless
mode.
fencing fencing_policy
By fencing we understand preventive measures to
avoid situations where both nodes are primary and disconnected (AKA split
brain).
Valid fencing policies are:
dont-care
This is the default policy. No fencing actions are
taken.
resource-only
If a node becomes a disconnected primary, it tries to
fence the peer's disk. This is done by calling the fence-peer handler.
The handler is supposed to reach the other node over alternative communication
paths and call ' drbdadm outdate res' there.
resource-and-stonith
If a node becomes a disconnected primary, it freezes all
its IO operations and calls its fence-peer handler. The fence-peer handler is
supposed to reach the peer over alternative communication paths and call
'drbdadm outdate res' there. In case it cannot reach the peer it should
stonith the peer. IO is resumed as soon as the situation is resolved. In case
your handler fails, you can resume IO with the resume-io command.
disk-barrier, disk-flushes, disk-drain
DRBD has four implementations to express
write-after-write dependencies to its backing storage device. DRBD will use
the first method that is supported by the backing storage device and that is
not disabled. By default the flush method is used.
Since drbd-8.4.2 disk-barrier is disabled by default because since
linux-2.6.36 (or 2.6.32 RHEL6) there is no reliable way to determine if
queuing of IO-barriers works. Dangerous: only enable it if you are told
to do so by someone who knows for sure.
When selecting the method you should not only base your decision on the
measurable performance. In case your backing storage device has a volatile
write cache (plain disks, RAID of plain disks) you should use one of the first
two. In case your backing storage device has battery-backed write cache you
may go with option 3. Option 4 (disable everything, use "none")
is dangerous on most IO stacks, may result in write-reordering, and if
so, can theoretically be the reason for data corruption, or disturb the DRBD
protocol, causing spurious disconnect/reconnect cycles. Do not
use no-disk-drain.
Unfortunately device mapper (LVM) might not support barriers.
The letter after "wo:" in /proc/drbd indicates which method is
currently in use for a device: b, f, d, n. The
implementations are:
barrier
The first requires that the driver of the backing storage
device support barriers (called 'tagged command queuing' in SCSI and 'native
command queuing' in SATA speak). The use of this method can be enabled by
setting the disk-barrier options to yes.
flush
The second requires that the backing device support disk
flushes (called 'force unit access' in the drive vendors speak). The use of
this method can be disabled setting disk-flushes to no.
drain
The third method is simply to let write requests drain
before write requests of a new reordering domain are issued. This was the only
implementation before 8.0.9.
none
The fourth method is to not express write-after-write
dependencies to the backing store at all, by also specifying
no-disk-drain. This is dangerous on most IO stacks, may result
in write-reordering, and if so, can theoretically be the reason for data
corruption, or disturb the DRBD protocol, causing spurious
disconnect/reconnect cycles. Do not use no-disk-drain.
md-flushes
Disables the use of disk flushes and barrier BIOs when
accessing the meta data device. See the notes on disk-flushes.
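As a sketch, a disk section for a backend with battery-backed write cache
(option 3, drain) could disable barriers and flushes explicitly; only do this
if you trust the cache:

    disk {
          disk-barrier no;
          disk-flushes no;
          md-flushes no;
          # disk-drain stays at its default (yes)
    }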
max-bio-bvecs
In some special circumstances the device mapper stack
manages to pass BIOs to DRBD that violate the constraints that are set forth
by DRBD's merge_bvec() function and which have more than one bvec. A known
example is: phys-disk -> DRBD -> LVM -> Xen -> misaligned
partition (63) -> DomU FS. Then you might see "bio would need to, but
cannot, be split:" in the Dom0's kernel log.
The best workaround is to properly align the partition within the VM (e.g. start
it at sector 1024). This costs 480 KiB of storage. Unfortunately the default
of most Linux partitioning tools is to start the first partition at an odd
number (63). Therefore most distribution's install helpers for virtual linux
machines will end up with misaligned partitions. The second best workaround is
to limit DRBD's max bvecs per BIO (= max-bio-bvecs) to 1, but that
might cost performance.
The default value of max-bio-bvecs is 0, which means that there is no
user imposed limitation.
disk-timeout
If the lower-level device on which a DRBD device stores
its data does not finish an I/O request within the defined
disk-timeout, DRBD treats this as a failure. The lower-level device is
detached, and the device's disk state advances to Diskless. If DRBD is
connected to one or more peers, the failed request is passed on to one of
them.
This option is dangerous and may lead to kernel panic!
"Aborting" requests, or force-detaching the disk, is intended for
completely blocked/hung local backing devices which do no longer complete
requests at all, not even do error completions. In this situation, usually a
hard-reset and failover is the only way out.
By "aborting", basically faking a local error-completion, we allow for
a more graceful switchover by cleanly migrating services. Still the affected
node has to be rebooted "soon".
By completing these requests, we allow the upper layers to re-use the associated
data pages.
If later the local backing device "recovers", and now DMAs some data
from disk into the original request pages, in the best case it will just put
random data into unused pages; but typically it will corrupt meanwhile
completely unrelated data, causing all sorts of damage.
Which means delayed successful completion, especially for READ requests, is a
reason to panic(). We assume that a delayed *error* completion is OK, though
we still will complain noisily about it.
The default value of disk-timeout is 0, which stands for an infinite
timeout. Timeouts are specified in units of 0.1 seconds. This option is
available since DRBD 8.3.12.
discard-zeroes-if-aligned {yes | no}
There are several aspects to discard/trim/unmap support
on linux block devices. Even if discard is supported in general, it may fail
silently, or may partially ignore discard requests. Devices also announce
whether reading from unmapped blocks returns defined data (usually zeroes), or
undefined data (possibly old data, possibly garbage).
If on different nodes, DRBD is backed by devices with differing discard
characteristics, discards may lead to data divergence (old data or garbage
left over on one backend, zeroes due to unmapped areas on the other backend).
Online verify would now potentially report tons of spurious differences. While
probably harmless for most use cases (fstrim on a file system), DRBD cannot
have that.
To play safe, we have to disable discard support, if our local backend (on a
Primary) does not support "discard_zeroes_data=true". We also have
to translate discards to explicit zero-out on the receiving side, unless the
receiving side (Secondary) supports "discard_zeroes_data=true",
thereby allocating areas that were supposed to be unmapped.
There are some devices (notably the LVM/DM thin provisioning) that are capable
of discard, but announce discard_zeroes_data=false. In the case of DM-thin,
discards aligned to the chunk size will be unmapped, and reading from unmapped
sectors will return zeroes. However, unaligned partial head or tail areas of
discard requests will be silently ignored.
If we now add a helper to explicitly zero-out these unaligned partial areas,
while passing on the discard of the aligned full chunks, we effectively
achieve discard_zeroes_data=true on such devices.
Setting discard-zeroes-if-aligned to yes will allow DRBD to use
discards, and to announce discard_zeroes_data=true, even on backends that
announce discard_zeroes_data=false.
Setting discard-zeroes-if-aligned to no will cause DRBD to always
fall-back to zero-out on the receiving side, and to not even announce discard
capabilities on the Primary, if the respective backend announces
discard_zeroes_data=false.
We used to ignore the discard_zeroes_data setting completely. To not break
established and expected behaviour, and suddenly cause fstrim on
thin-provisioned LVs to run out-of-space instead of freeing up space, the
default value is yes.
This option is available since 8.4.7.
read-balancing method
The supported methods for load balancing of read
requests are prefer-local, prefer-remote, round-robin,
least-pending, when-congested-remote, 32K-striping,
64K-striping, 128K-striping, 256K-striping,
512K-striping and 1M-striping.
The default value is prefer-local. This option is available since
8.4.1.
rs-discard-granularity byte
When rs-discard-granularity is set to a non-zero,
positive value then DRBD tries to do a resync operation in requests of this
size. In case such a block contains only zero bytes on the sync source node,
the sync target node will issue a discard/trim/unmap command for the area.
The value is constrained by the discard granularity of the backing block device.
In case rs-discard-granularity is not a multiple of the discard
granularity of the backing block device DRBD rounds it up. The feature only
gets active if the backing block device reads back zeroes after a discard
command.
The default value is 0. This option is available since 8.4.7.
sndbuf-size size
is the size of the TCP socket send buffer. The default
value is 0, i.e. autotune. You can specify smaller or larger values. Larger
values are appropriate for reasonable write throughput with protocol A over
high latency networks. Values below 32K do not make sense. Since 8.0.13
and 8.2.7, respectively, setting the size value to 0 means that the kernel should
autotune this.
rcvbuf-size size
is the size of the TCP socket receive buffer. The default
value is 0, i.e. autotune. You can specify smaller or larger values. Usually
this should be left at its default. Setting the size value to 0 means
that the kernel should autotune this.
timeout time
If the partner node fails to send an expected response
packet within time tenths of a second, the partner node is considered
dead and therefore the TCP/IP connection is abandoned. This must be lower than
connect-int and ping-int. The default value is 60 (= 6 seconds); the
unit is 0.1 seconds.
connect-int time
In case it is not possible to connect to the remote DRBD
device immediately, DRBD keeps on trying to connect. With this option you can
set the time between two retries. The default value is 10 seconds, the unit is
1 second.
ping-int time
If the TCP/IP connection linking a DRBD device pair is
idle for more than time seconds, DRBD will generate a keep-alive packet
to check if its partner is still alive. The default is 10 seconds, the unit is
1 second.
ping-timeout time
The time the peer has to answer a keep-alive
packet. In case the peer's reply is not received within this time period, it
is considered dead. The default value is 500ms; the default unit is tenths
of a second.
max-buffers number
Limits the memory usage per DRBD minor device on the
receiving side, or for internal buffers during resync or online-verify. Unit
is PAGE_SIZE, which is 4 KiB on most systems. The minimum possible setting is
hard coded to 32 (=128 KiB). These buffers are used to hold data blocks while
they are written to/read from disk. To avoid possible distributed deadlocks on
congestion, this setting is used as a throttle threshold rather than a hard
limit. Once more than max-buffers pages are in use, further allocation from
this pool is throttled. You want to increase max-buffers if you cannot
saturate the IO backend on the receiving side.
ko-count number
In case the secondary node fails to complete a single
write request for count times the timeout, it is expelled from
the cluster (i.e., the primary node goes into StandAlone mode). To
disable this feature, you should explicitly set it to 0; defaults may change
between versions.
max-epoch-size number
The highest number of data blocks between two write
barriers. If you set this smaller than 10, you might decrease your
performance.
allow-two-primaries
With this option set you may assign the primary role to
both nodes. You should only use this option if you use a shared storage file
system on top of DRBD. At the time of writing the only ones are: OCFS2 and
GFS. If you use this option with any other file system, you are going to crash
your nodes and to corrupt your data!
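A dual-primary setup for a cluster file system might be sketched as follows
(illustrative; see the after-sb options below for the matching recovery
policies):

    net {
          protocol C;
          allow-two-primaries;
    }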
unplug-watermark number
This setting has no effect with recent kernels that use
explicit on-stack plugging (upstream Linux kernel 2.6.39, distributions may
have backported).
When the number of pending write requests on the standby (secondary) node
exceeds the unplug-watermark, we trigger the request processing of our
backing storage device. Some storage controllers deliver better performance
with small values, others deliver best performance when the value is set to
the same value as max-buffers, yet others don't feel much effect at all.
Minimum 16, default 128, maximum 131072.
cram-hmac-alg
You need to specify the HMAC algorithm to enable peer
authentication at all. You are strongly encouraged to use peer authentication.
The HMAC algorithm will be used for the challenge response authentication of
the peer. You may specify any digest algorithm that is named in
/proc/crypto.
shared-secret
The shared secret used in peer authentication. May be up
to 64 characters. Note that peer authentication is disabled as long as no
cram-hmac-alg (see above) is specified.
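Peer authentication therefore always needs both options, for example (the
secret is obviously illustrative):

    net {
          cram-hmac-alg sha1;
          shared-secret "FooFunFactory";
    }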
after-sb-0pri policy
possible policies are:
disconnect
No automatic resynchronization, simply disconnect.
discard-younger-primary
Auto sync from the node that was primary before the
split-brain situation happened.
discard-older-primary
Auto sync from the node that became primary second
during the split-brain situation.
discard-zero-changes
In case one node did not write anything since the split
brain became evident, sync from the node that wrote something to the node that
did not write anything. In case neither wrote anything this policy uses a random
decision to perform a "resync" of 0 blocks. In case both have
written something this policy disconnects the nodes.
discard-least-changes
Auto sync from the node that touched more blocks during
the split brain situation.
discard-node-NODENAME
Auto sync to the named node.
after-sb-1pri policy
possible policies are:
disconnect
No automatic resynchronization, simply disconnect.
consensus
Discard the version of the secondary if the outcome of
the after-sb-0pri algorithm would also destroy the current secondary's
data. Otherwise disconnect.
violently-as0p
Always take the decision of the after-sb-0pri
algorithm, even if that causes an erratic change of the primary's view of the
data. This is only useful if you use a one-node FS (i.e. not OCFS2 or GFS)
with the allow-two-primaries flag, AND if you really know what
you are doing. This is DANGEROUS and MAY CRASH YOUR MACHINE if you have
an FS mounted on the primary node.
discard-secondary
Discard the secondary's version.
call-pri-lost-after-sb
Always honor the outcome of the after-sb-0pri
algorithm. In case it decides the current secondary has the right data, it
calls the "pri-lost-after-sb" handler on the current primary.
after-sb-2pri policy
possible policies are:
disconnect
No automatic resynchronization, simply disconnect.
violently-as0p
Always take the decision of the after-sb-0pri
algorithm, even if that causes an erratic change of the primary's view of the
data. This is only useful if you use a one-node FS (i.e. not OCFS2 or GFS)
with the allow-two-primaries flag, AND if you really know what
you are doing. This is DANGEROUS and MAY CRASH YOUR MACHINE if you have
an FS mounted on the primary node.
call-pri-lost-after-sb
Call the "pri-lost-after-sb" helper program on
one of the machines. This program is expected to reboot the machine, i.e. make
it secondary.
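Taken together, a conservative set of automatic split-brain recovery policies
might be sketched as (illustrative; pick policies according to how much data
loss you can tolerate):

    net {
          after-sb-0pri discard-zero-changes;
          after-sb-1pri consensus;
          after-sb-2pri disconnect;
    }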
always-asbp
Normally the automatic after-split-brain policies are
only used if current states of the UUIDs do not indicate the presence of a
third node.
With this option you request that the automatic after-split-brain policies are
used as long as the data sets of the nodes are somehow related. This might
cause a full sync, if the UUIDs indicate the presence of a third node. (Or
double faults led to strange UUID sets.)
rr-conflict policy
This option helps to solve the cases when the outcome of
the resync decision is incompatible with the current role assignment in the
cluster.
disconnect
No automatic resynchronization, simply disconnect.
violently
Sync to the primary node is allowed, violating the
assumption that data on a block device are stable for one of the nodes.
Dangerous, do not use.
call-pri-lost
Call the "pri-lost" helper program on one of
the machines. This program is expected to reboot the machine, i.e. make it
secondary.
data-integrity-alg alg
DRBD can ensure the data integrity of the user's data on
the network by comparing hash values. Normally this is ensured by the 16 bit
checksums in the headers of TCP/IP packets.
This option can be set to any of the kernel's data digest algorithms. In a
typical kernel configuration you should have at least one of md5,
sha1, and crc32c available. By default this is not enabled.
See also the notes on data integrity.
tcp-cork
DRBD usually uses the TCP socket option TCP_CORK to hint
to the network stack when it can expect more data, and when it should flush
out what it has in its send queue. It turned out that there is at least one
network stack that performs worse when one uses this hinting method. Therefore
we introduced this option. By setting tcp-cork to no you can
disable the setting and clearing of the TCP_CORK socket option by DRBD.
on-congestion congestion_policy, congestion-fill fill_threshold, congestion-extents active_extents_threshold
By default DRBD blocks when the available TCP send queue
becomes full. That means it will slow down the application that generates the
write requests that cause DRBD to send more data down that TCP connection.
When DRBD is deployed with DRBD-proxy it might be more desirable that DRBD goes
into AHEAD/BEHIND mode shortly before the send queue becomes full. In
AHEAD/BEHIND mode DRBD no longer replicates data, but still keeps the
connection open.
The advantage of the AHEAD/BEHIND mode is that the application is not slowed
down, even if DRBD-proxy's buffer is not sufficient to buffer all write
requests. The downside is that the peer node falls behind, and that a resync
will be necessary to bring it back into sync. During that resync the peer node
will have an inconsistent disk.
Available congestion policies are block and pull-ahead. The
default is block. fill_threshold may be in the range of 0 to
10GiBytes; the default is 0, which disables the check.
active_extents_threshold has the same limits as al-extents.
The AHEAD/BEHIND mode and its settings are available since DRBD 8.3.10.
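A DRBD-proxy deployment might hence use something like this sketch (the
thresholds are illustrative):

    net {
          on-congestion pull-ahead;
          congestion-fill 2G;
          congestion-extents 2000;
    }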
wfc-timeout time
Wait for connection timeout. The init script
drbd(8) blocks the boot process until the DRBD resources are connected.
When the cluster manager starts later, it does not see a resource with
internal split-brain. In case you want to limit the wait time, do it here.
Default is 0, which means unlimited. The unit is seconds.
degr-wfc-timeout time
Wait for connection timeout, if this node was a degraded
cluster. In case a degraded cluster (= cluster with only one node left) is
rebooted, this timeout value is used instead of wfc-timeout, because the peer
is less likely to show up in time, if it had been dead before. Value 0 means
unlimited.
outdated-wfc-timeout time
Wait for connection timeout, if the peer was outdated. In
case a degraded cluster (= cluster with only one node left) with an outdated
peer disk is rebooted, this timeout value is used instead of wfc-timeout,
because the peer is not allowed to become primary in the meantime. Value 0
means unlimited.
wait-after-sb
By setting this option you can make the init script
continue to wait even if the device pair had a split brain situation and
therefore refuses to connect.
become-primary-on node-name
Sets on which node the device should be promoted to
primary role by the init script. The node-name might either be a host
name or the keyword both. When this option is not set the devices stay
in secondary role on both nodes. Usually one delegates the role assignment to
a cluster manager (e.g. heartbeat).
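A startup section combining these options might look like this (a sketch; the
host name and timeouts are illustrative):

    startup {
          wfc-timeout 120;
          degr-wfc-timeout 60;
          become-primary-on alice;
    }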
stacked-timeouts
Usually wfc-timeout and degr-wfc-timeout
are ignored for stacked devices, instead twice the amount of
connect-int is used for the connection timeouts. With the
stacked-timeouts keyword you disable this, and force DRBD to mind the
wfc-timeout and degr-wfc-timeout statements. Only do that if the
peer of the stacked resource is usually not available or will usually not
become primary. By using this option incorrectly, you run the risk of causing
unexpected split brain.
resync-rate rate
To ensure a smooth operation of the application on top of
DRBD, it is possible to limit the bandwidth which may be used by background
synchronizations. The default is 250 KB/sec, the default unit is KB/sec.
Optional suffixes K, M, G are allowed.
use-rle
During resync-handshake, the dirty-bitmaps of the nodes
are exchanged and merged (using bit-or), so the nodes will have the same
understanding of which blocks are dirty. On large devices, the fine grained
dirty-bitmap can become large as well, and the bitmap exchange can take quite
some time on low-bandwidth links.
Because the bitmap typically contains compact areas where all bits are unset
(clean) or set (dirty), a simple run-length encoding scheme can considerably
reduce the network traffic necessary for the bitmap exchange.
For backward compatibility reasons, and because on fast links this possibly does
not improve transfer time but consumes cpu cycles, this defaults to off.
socket-check-timeout value
In setups involving a DRBD-proxy and connections that
experience a lot of buffer-bloat it might be necessary to set
ping-timeout to an unusually high value. By default DRBD uses the same
value to wait if a newly established TCP-connection is stable. Since the
DRBD-proxy is usually located in the same data center such a long wait time
may hinder DRBD's connect process.
In such setups socket-check-timeout should be set to at least the
round-trip time between DRBD and DRBD-proxy, i.e. in most cases to 1.
The default unit is tenths of a second, the default value is 0 (which causes
DRBD to use the value of ping-timeout instead). Introduced in
8.4.5.
resync-after res-name
By default, resynchronization of all devices would run in
parallel. By defining a resync-after dependency, the resynchronization of this
resource will start only if the resource res-name is already in
connected state (i.e., has finished its resynchronization).
al-extents extents
DRBD automatically performs hot area detection. With this
parameter you control how big the hot area (= active set) can get. Each extent
marks 4M of the backing storage (= low-level device). In case a primary node
leaves the cluster unexpectedly, the areas covered by the active set must be
resynced upon rejoining of the failed node. The data structure is stored in
the meta-data area, therefore each change of the active set is a write
operation to the meta-data device. A higher number of extents gives longer
resync times but fewer updates to the meta-data. The default number of
extents is 1237. (Minimum: 7, Maximum: 65534)
Note that the effective maximum may be smaller, depending on how you created the
device meta data, see also drbdmeta(8). The effective maximum is 919 *
(available on-disk activity-log ring-buffer area/4kB - 1); the default 32kB
ring-buffer results in a maximum of 6433 (covering more than 25 GiB of data).
We recommend to keep this well within the amount your backend storage and
replication link are able to resync inside of about 5 minutes.
al-updates {yes | no}
DRBD's activity log transaction writing makes it
possible that after the crash of a primary node a partial (bitmap-based)
resync is sufficient to bring the node back up to date. Setting
al-updates to no might increase normal operation performance but
causes DRBD to do a full resync when a crashed primary gets reconnected. The
default value is yes.
verify-alg hash-alg
During online verification (as initiated by the
verify sub-command), rather than doing a bit-wise comparison, DRBD
applies a hash function to the contents of every block being verified, and
compares that hash with the peer. This option defines the hash algorithm being
used for that purpose. It can be set to any of the kernel's data digest
algorithms. In a typical kernel configuration you should have at least one of
md5, sha1, and crc32c available. By default this is not
enabled; you must set this option explicitly in order to be able to use
on-line device verification.
See also the notes on data integrity.
csums-alg hash-alg
A resync process sends all marked data blocks from the
source to the destination node, as long as no csums-alg is given. When
one is specified the resync process exchanges hash values of all marked blocks
first, and sends only those data blocks that have different hash values.
This setting is useful for DRBD setups with low bandwidth links. During the
restart of a crashed primary node, all blocks covered by the activity log are
marked for resync. But a large part of those will actually be still in sync,
therefore using csums-alg will lower the required bandwidth in exchange
for CPU cycles.
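Both algorithms go into the net section, for example (assuming sha1 is
available in your kernel's /proc/crypto):

    net {
          verify-alg sha1;
          csums-alg sha1;
    }

Online verification is then started manually, e.g. with drbdadm verify r0.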
c-plan-ahead plan_time, c-fill-target fill_target, c-delay-target delay_target, c-max-rate max_rate
The dynamic resync speed controller gets enabled with
setting plan_time to a positive value. It aims to fill the buffers
along the data path with either a constant amount of data fill_target,
or aims to have a constant delay time of delay_target along the path.
The controller has an upper bound of max_rate.
plan_time configures the agility of the controller. Higher values
yield slower responses of the controller to deviations from the
target value. It should be at least 5 times RTT. For regular data paths a
fill_target in the area of 4k to 100k is appropriate. For a setup that
contains drbd-proxy it is advisable to use delay_target instead. Only
when fill_target is set to 0 will the controller use
delay_target. 5 times RTT is a reasonable starting value.
Max_rate should be set to the bandwidth available between the
DRBD-hosts and the machines hosting DRBD-proxy, or to the available
disk-bandwidth.
The default value of plan_time is 0; the default unit is 0.1 seconds.
fill_target defaults to 0; its default unit is sectors. delay_target
defaults to 1 (100ms); its default unit is 0.1 seconds. max_rate defaults
to 10240 (100MiB/s); its default unit is KiB/s.
The dynamic resync speed controller and its settings are available since DRBD
8.3.9.
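Enabling the dynamic controller might be sketched as follows (the values are
illustrative starting points, not tuned recommendations):

    disk {
          c-plan-ahead 20;    # 2 seconds; should be at least 5 times RTT
          c-fill-target 50K;  # constant data fill along the path, in sectors
          c-max-rate 100M;
    }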
c-min-rate min_rate
A node that is primary and sync-source has to schedule
application IO requests and resync IO requests. min_rate tells DRBD to
use only up to min_rate for resync IO and to dedicate all other available IO
bandwidth to application requests.
Note: The value 0 has a special meaning. It disables the limitation of resync IO
completely, which might slow down application IO considerably. Set it to a
value of 1, if you prefer that resync IO never slows down application IO.
Note: Although the name might suggest that it is a lower bound for the dynamic
resync speed controller, it is not. If the DRBD-proxy buffer is full, the
dynamic resync speed controller is free to lower the resync speed down to 0,
completely independent of the c-min-rate setting.
min_rate defaults to 4096 (4MiB/s); its default unit is KiB/s.
on-no-data-accessible ond-policy
This setting controls what happens to IO requests on a
degraded, diskless node (i.e., no data store is reachable). The available
policies are io-error and suspend-io.
If ond-policy is set to suspend-io you can either resume IO by
attaching/connecting the last lost data storage, or by the drbdadm
resume-io res command. The latter will result in IO errors
of course.
The default is io-error. This setting is available since DRBD
8.3.9.
cpu-mask cpu-mask
Sets the cpu-affinity-mask for DRBD's kernel threads of
this device. The default value of cpu-mask is 0, which means that
DRBD's kernel threads should be spread over all CPUs of the machine. This
value must be given in hexadecimal notation. If it is too big it will be
truncated.
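An options section using both parameters might be sketched as follows
(cpu-mask 3 binds DRBD's kernel threads to CPUs 0 and 1; illustrative):

    options {
          cpu-mask 3;
          on-no-data-accessible suspend-io;
    }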
pri-on-incon-degr cmd
This handler is called if the node is primary, degraded
and if the local copy of the data is inconsistent.
pri-lost-after-sb cmd
The node is currently primary, but lost the
after-split-brain auto recovery procedure. As a consequence, it should be
abandoned.
pri-lost cmd
The node is currently primary, but DRBD's algorithm
thinks that it should become sync target. As a consequence it should give up
its primary role.
fence-peer cmd
The handler is part of the fencing mechanism. This
handler is called in case the node needs to fence the peer's disk. It should
use other communication paths than DRBD's network link.
local-io-error cmd
DRBD got an IO error from the local IO subsystem.
initial-split-brain cmd
DRBD has connected and detected a split brain situation.
This handler can alert someone in all cases of split brain, not just those
that go unresolved.
split-brain cmd
DRBD detected a split brain situation which remains
unresolved. Manual recovery is necessary. This handler should alert someone on
duty.
before-resync-target cmd
DRBD calls this handler just before a resync begins on
the node that becomes resync target. It might be used to take a snapshot of
the backing block device.
after-resync-target cmd
DRBD calls this handler just after a resync operation
finished on the node whose disk just became consistent after being
inconsistent for the duration of the resync. It might be used to remove a
snapshot of the backing device that was created by the
before-resync-target handler.
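A handlers section could be sketched like this (the scripts shown ship with
many DRBD packages, but treat the paths as illustrative and adjust them to
your installation):

    handlers {
          split-brain "/usr/lib/drbd/notify-split-brain.sh root";
          fence-peer "/usr/lib/drbd/crm-fence-peer.sh";
    }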
Other Keywords¶
include file-pattern
Include all files matching the wildcard pattern
file-pattern. The include statement is only allowed on the top
level, i.e. it is not allowed inside any section.
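For example, the conventional main config described in the introduction
consists of exactly two include statements:

    include "drbd.d/global_common.conf";
    include "drbd.d/*.res";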
NOTES ON DATA INTEGRITY¶
There are two independent methods in DRBD to ensure the integrity of the
mirrored data: the online-verify mechanism and the data-integrity-alg of the
network section.

Both mechanisms might deliver false positives if the user of DRBD modifies the
data which gets written to disk while the transfer goes on. This may happen
for swap, for certain append-while-global-sync workloads, or for
truncate/rewrite workloads, and does not necessarily pose a problem for the
integrity of the data. Usually when the initiator of the data transfer does
this, it already knows that the data block will not be part of an on-disk data
structure, or will be resubmitted with correct data soon enough.

The data-integrity-alg causes the receiving side to log an error about
"Digest integrity check FAILED: Ns +x\n", where N is the sector offset and x
is the size of the request in bytes. It will then disconnect and reconnect,
thus causing a quick resync. If the sending side at the same time detected a
modification, it warns about "Digest mismatch, buffer modified by upper layers
during write: Ns +x\n", which shows that this was a false positive. The
sending side may detect these buffer modifications immediately after the
unmodified data has been copied to the tcp buffers, in which case the
receiving side won't notice it.

The most recent (2007) example of systematic corruption was an issue with the
TCP offloading engine and the driver of a certain type of GBit NIC. The actual
corruption happened on the DMA transfer from core memory to the card. Since
the TCP checksum gets calculated on the card, this type of corruption stays
undetected as long as you do not use either the online verify or the
data-integrity-alg.

We suggest to use the data-integrity-alg only during a pre-production phase
due to its CPU costs. Further we suggest to do online verify runs regularly,
e.g. once a month during a low load period.

VERSION¶
This document was revised for version 8.4.0 of the DRBD distribution.

AUTHOR¶
Written by Philipp Reisner <philipp.reisner@linbit.com> and Lars Ellenberg <lars.ellenberg@linbit.com>.

REPORTING BUGS¶
Report bugs to <drbd-user@lists.linbit.com>.

COPYRIGHT¶
Copyright 2001-2008 LINBIT Information Technologies, Philipp Reisner, Lars Ellenberg. This is free software; see the source for copying conditions. There is NO warranty; not even for MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.

SEE ALSO¶
drbd(8), drbddisk(8), drbdsetup(8), drbdmeta(8), drbdadm(8), DRBD User's Guide[1], DRBD web site[3]

NOTES¶
1. DRBD User's Guide
2. DRBD's online usage counter
3. DRBD web site
6 May 2011                                                           DRBD 8.4.0