CTDB(7) | CTDB - clustered TDB database | CTDB(7)
NAME

ctdb - Clustered TDB

DESCRIPTION
CTDB is a clustered database component in clustered Samba that provides a high-availability load-sharing CIFS server cluster.

The main functions of CTDB are:

• Provide a clustered version of the TDB database with automatic rebuild/recovery of the databases upon node failures.

• Monitor nodes in the cluster and services running on each node.

• Manage a pool of public IP addresses that are used to provide services to clients. Alternatively, CTDB can be used with LVS.
Combined with a cluster filesystem CTDB provides a full high-availability (HA)
environment for services such as clustered Samba, NFS and other services.
ANATOMY OF A CTDB CLUSTER

A CTDB cluster is a collection of nodes with 2 or more network interfaces. All nodes provide network (usually file/NAS) services to clients. Data served by file services is stored on shared storage (usually a cluster filesystem) that is accessible by all nodes.

CTDB provides an "all active" cluster, where services are load balanced across all nodes.

PRIVATE VS PUBLIC ADDRESSES

Each node in a CTDB cluster has multiple IP addresses assigned to it:

• A single private IP address that is used for communication between nodes.

• One or more public IP addresses that are used to provide NAS or other services.
Private address

Each node is configured with a unique, permanently assigned private address. This address is configured by the operating system. This address uniquely identifies a physical node in the cluster and is the address that CTDB daemons will use to communicate with the CTDB daemons on other nodes.

Private addresses are listed in the file specified by the CTDB_NODES configuration variable (see ctdbd.conf(5), default /etc/ctdb/nodes). This file contains the list of private addresses for all nodes in the cluster, one per line. This file must be the same on all nodes in the cluster.

Private addresses should not be used by clients to connect to services provided by the cluster. It is strongly recommended that the private addresses are configured on a private network that is separate from client networks.

Example /etc/ctdb/nodes for a four node cluster:

    192.168.1.1
    192.168.1.2
    192.168.1.3
    192.168.1.4
Public addresses

Public addresses are used to provide services to clients. Public addresses are not configured at the operating system level and are not permanently associated with a particular node. Instead, they are managed by CTDB and are assigned to interfaces on physical nodes at runtime.

The CTDB cluster will assign/reassign these public addresses across the available healthy nodes in the cluster. When one node fails, its public addresses will be taken over by one or more other nodes in the cluster. This ensures that services provided by all public addresses are always available to clients, as long as there are nodes available capable of hosting these addresses.

The public address configuration is stored in a file on each node specified by the CTDB_PUBLIC_ADDRESSES configuration variable (see ctdbd.conf(5), recommended /etc/ctdb/public_addresses). This file contains a list of the public addresses that the node is capable of hosting, one per line. Each entry also contains the netmask and the interface to which the address should be assigned.

Example /etc/ctdb/public_addresses for a node that can host 4 public addresses, on 2 different interfaces:

    10.1.1.1/24 eth1
    10.1.1.2/24 eth1
    10.1.2.1/24 eth2
    10.1.2.2/24 eth2

The public addresses file does not have to be the same on all nodes. For example, in a four node cluster two nodes may host addresses on one network while the other two host addresses on another:

Node 0: /etc/ctdb/public_addresses
    10.1.1.1/24 eth1
    10.1.1.2/24 eth1

Node 1: /etc/ctdb/public_addresses
    10.1.1.1/24 eth1
    10.1.1.2/24 eth1

Node 2: /etc/ctdb/public_addresses
    10.1.2.1/24 eth2
    10.1.2.2/24 eth2

Node 3: /etc/ctdb/public_addresses
    10.1.2.1/24 eth2
    10.1.2.2/24 eth2
NODE STATUS

The current status of each node in the cluster can be viewed by the ctdb status command (an example of its output is shown after the list of states below). A node can be in one of the following states:

OK
This node is healthy and fully functional. It hosts
public addresses to provide services.
DISCONNECTED
This node is not reachable by other nodes via the private
network. It is not currently participating in the cluster. It does not
host public addresses to provide services. It might be shut down.
DISABLED
This node has been administratively disabled. This node
is partially functional and participates in the cluster. However, it does
not host public addresses to provide services.
UNHEALTHY
A service provided by this node has failed a health check
and should be investigated. This node is partially functional and participates
in the cluster. However, it does not host public addresses to provide
services. Unhealthy nodes should be investigated and may require an
administrative action to rectify.
BANNED
CTDB is not behaving as designed on this node. For
example, it may have failed too many recovery attempts. Such nodes are banned
from participating in the cluster for a configurable time period before they
attempt to rejoin the cluster. A banned node does not host public
addresses to provide services. All banned nodes should be investigated and may
require an administrative action to rectify.
STOPPED
This node has been administratively excluded from the
cluster. A stopped node does not participate in the cluster and does not
host public addresses to provide services. This state can be used while
performing maintenance on a node.
PARTIALLYONLINE
A node that is partially online participates in a cluster
like a healthy (OK) node. Some interfaces to serve public addresses are down,
but at least one interface is up. See also ctdb ifaces.
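For example, on the healthy four node cluster from the /etc/ctdb/nodes example above, the output of ctdb status might look roughly like the following sketch (the exact fields shown vary between CTDB versions):

    # ctdb status
    Number of nodes:4
    pnn:0 192.168.1.1      OK (THIS NODE)
    pnn:1 192.168.1.2      OK
    pnn:2 192.168.1.3      OK
    pnn:3 192.168.1.4      OK
    Generation:1362079228
    Size:4
    hash:0 lmaster:0
    hash:1 lmaster:1
    hash:2 lmaster:2
    hash:3 lmaster:3
    Recovery mode:NORMAL (0)
    Recovery master:0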
CAPABILITIES

Cluster nodes can have several different capabilities enabled. These are listed below.

RECMASTER
Indicates that a node can become the CTDB cluster
recovery master. The current recovery master is decided via an election held
by all active nodes with this capability.
Default is YES.
LMASTER
Indicates that a node can be the location master
(LMASTER) for database records. The LMASTER always knows which node has the
latest copy of a record in a volatile database.
Default is YES.
LVS
Indicates that a node is configured in Linux Virtual
Server (LVS) mode. In this mode the entire CTDB cluster uses one single public
address for the entire cluster instead of using multiple public addresses in
failover mode. This is an alternative to using a load-balancing layer-4
switch. See the LVS section for more details.
NATGW
Indicates that this node is configured to become the NAT
gateway master in a NAT gateway group. See the NAT GATEWAY section for more
details.
The RECMASTER and LMASTER capabilities can be disabled when CTDB is used to
create a cluster spanning across WAN links. In this case CTDB acts as a WAN
accelerator.
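The capabilities of a node can be queried at run time with the ctdb getcapabilities command (see ctdb(1)). With the default configuration the output is similar to the following sketch; the exact set of capabilities listed varies between CTDB versions:

    # ctdb getcapabilities
    RECMASTER: YES
    LMASTER: YES
    LVS: NO
    NATGW: NO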
LVS

LVS is a mode where CTDB presents one single IP address for the entire cluster. This is an alternative to using public IP addresses and round-robin DNS to load balance clients across the cluster. This is similar to using a layer-4 load balancing switch but with some restrictions.

In this mode the cluster selects a set of nodes and load balances all client access to the LVS address across this set of nodes. This set consists of all LVS capable nodes that are HEALTHY, or if no HEALTHY nodes exist, all LVS capable nodes regardless of health status. LVS will however never load balance traffic to nodes that are BANNED, STOPPED, DISABLED or DISCONNECTED. The ctdb lvs command is used to show which nodes are currently load-balanced across.

One of these nodes is elected as the LVSMASTER. This node receives all traffic from clients coming in to the LVS address and multiplexes it across the internal network to one of the nodes that LVS is using. When responding to the client, that node will send the data back directly to the client, bypassing the LVSMASTER node. The command ctdb lvsmaster will show which node is the current LVSMASTER.

The path used for a client I/O is:

1. Client sends request packet to LVSMASTER.
2. LVSMASTER passes the request on to one node across the internal network.
3. Selected node processes the request.
4. Node responds back to client.
This means that all incoming traffic to the cluster will pass through one
physical node, which limits scalability. You cannot send more data to the LVS
address than one physical node can multiplex. This means that you should not
use LVS if your I/O pattern is write-intensive, since you will be limited by
the network bandwidth that node can handle. LVS does work very well
for read-intensive workloads where only smallish READ requests are going
through the LVSMASTER bottleneck and the majority of the traffic volume (the
data in the read replies) goes straight from the processing node back to the
clients. For read-intensive I/O patterns you can achieve very high throughput
rates in this mode.
Note: you can use LVS and public addresses at the same time.
If you use LVS, you must have a permanent address configured for the public
interface on each node. This address must be routable and the cluster nodes
must be configured so that all traffic back to client hosts is routed through
this interface. This is also required in order to allow samba/winbind on the
node to talk to the domain controller. This LVS IP address can not be used to
initiate outgoing traffic.
Make sure that the domain controller and the clients are reachable from a node
before you enable LVS. Also ensure that outgoing traffic to these hosts
is routed out through the configured public interface.
Configuration

To activate LVS on a CTDB node you must specify the CTDB_PUBLIC_INTERFACE and CTDB_LVS_PUBLIC_IP configuration variables. Setting the latter variable also enables the LVS capability on the node at startup.

Example:

    CTDB_PUBLIC_INTERFACE=eth1
    CTDB_LVS_PUBLIC_IP=10.1.1.237
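Once LVS is active, the runtime state can be checked with the commands mentioned above, for example:

    # Show the nodes that client traffic is currently balanced across
    ctdb lvs
    # Show which node is the current LVSMASTER
    ctdb lvsmaster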
NAT GATEWAY

NAT gateway (NATGW) is an optional feature that is used to configure fallback routing for nodes. This allows cluster nodes to connect to external services (e.g. DNS, AD, NIS and LDAP) when they do not host any public addresses (e.g. when they are unhealthy). This also applies to node startup because CTDB marks nodes as UNHEALTHY until they have passed a "monitor" event. In this context, NAT gateway helps to avoid a "chicken and egg" situation where a node needs to access an external service to become healthy.

Another way of solving this type of problem is to assign an extra static IP address to a public interface on every node. This is simpler but it uses an extra IP address per node, while NAT gateway generally uses only one extra IP address.

Operation
One extra NATGW public address is assigned on the public network to each NATGW group. Each NATGW group is a set of nodes in the cluster that shares the same NATGW address to talk to the outside world. Normally there would only be one NATGW group spanning an entire cluster, but in situations where one CTDB cluster spans multiple physical sites it might be useful to have one NATGW group for each site. There can be multiple NATGW groups in a cluster but each node can only be a member of one NATGW group.

In each NATGW group, one of the nodes is selected by CTDB to be the NATGW master and the other nodes are considered to be NATGW slaves. NATGW slaves establish a fallback default route to the NATGW master via the private network. When a NATGW slave hosts no public IP addresses then it will use this route for outbound connections. The NATGW master hosts the NATGW public IP address and routes outgoing connections from slave nodes via this IP address. It also establishes a fallback default route.

Configuration
NATGW is usually configured similar to the following example configuration:

    CTDB_NATGW_NODES=/etc/ctdb/natgw_nodes
    CTDB_NATGW_PRIVATE_NETWORK=192.168.1.0/24
    CTDB_NATGW_PUBLIC_IP=10.0.0.227/24
    CTDB_NATGW_PUBLIC_IFACE=eth0
    CTDB_NATGW_DEFAULT_GATEWAY=10.0.0.1
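The file named by CTDB_NATGW_NODES lists the nodes that belong to this NATGW group, one private address per line. As a sketch, assuming the four node cluster used earlier with all nodes in a single NATGW group, /etc/ctdb/natgw_nodes would typically contain:

    192.168.1.1
    192.168.1.2
    192.168.1.3
    192.168.1.4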
Implementation details

When the NATGW functionality is used, one of the nodes is selected to act as a NAT gateway for all the other nodes in the group when they need to communicate with external services. The NATGW master is selected to be a node that is most likely to have usable networks.

The NATGW master hosts the NATGW public IP address CTDB_NATGW_PUBLIC_IP on the configured public interface CTDB_NATGW_PUBLIC_IFACE and acts as a router, masquerading outgoing connections from slave nodes via this IP address. If CTDB_NATGW_DEFAULT_GATEWAY is set then it also establishes a fallback default route to the configured gateway with a metric of 10. A metric 10 route is used so it can co-exist with other default routes that may be available.

A NATGW slave establishes its fallback default route to the NATGW master via the private network CTDB_NATGW_PRIVATE_NETWORK with a metric of 10. This route is used for outbound connections when no other default route is available because the node hosts no public addresses. A metric 10 route is used so that it can co-exist with other default routes that may be available when the node is hosting public addresses.

CTDB_NATGW_STATIC_ROUTES can be used to have NATGW create more specific routes instead of just default routes.

This is implemented in the 11.natgw eventscript. Please see the eventscript file and the NAT GATEWAY section in ctdbd.conf(5) for more details.
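As a sketch of the effect, using the example configuration above and assuming the node with private address 192.168.1.1 is the current NATGW master, the fallback default routes set up by the eventscript are roughly equivalent to:

    # On the NATGW master (CTDB_NATGW_DEFAULT_GATEWAY is set):
    ip route add default via 10.0.0.1 metric 10
    # On a NATGW slave (route towards the master over the private network):
    ip route add default via 192.168.1.1 metric 10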
POLICY ROUTING

Policy routing is an optional CTDB feature to support complex network topologies. Public addresses may be spread across several different networks (or VLANs) and it may not be possible to route packets from these public addresses via the system's default route. Therefore, CTDB has support for policy routing via the 13.per_ip_routing eventscript. This allows routing to be specified for packets sourced from each public address. The routes are added and removed as CTDB moves public addresses between nodes.

Configuration variables
There are 4 configuration variables related to policy routing: CTDB_PER_IP_ROUTING_CONF, CTDB_PER_IP_ROUTING_RULE_PREF, CTDB_PER_IP_ROUTING_TABLE_ID_LOW and CTDB_PER_IP_ROUTING_TABLE_ID_HIGH. See the POLICY ROUTING section in ctdbd.conf(5) for more details.

Configuration
The format of each line of CTDB_PER_IP_ROUTING_CONF is:

    <public_address> <network> [ <gateway> ]

Consider this example configuration line:

    192.168.1.99 192.168.1.1/24

If the corresponding public_addresses line is:

    192.168.1.99/24 eth2,eth3

CTDB_PER_IP_ROUTING_RULE_PREF is 100, and CTDB assigns the routing table ctdb.192.168.1.99 to this public address, then the following routing information is added when the address is hosted on the node:

    ip rule add from 192.168.1.99 pref 100 table ctdb.192.168.1.99
    ip route add 192.168.1.0/24 dev eth2 table ctdb.192.168.1.99

This causes traffic from 192.168.1.99 to 192.168.1.0/24 to be routed via eth2. The output of ip rule will then contain something like:

    0:      from all lookup local
    100:    from 192.168.1.99 lookup ctdb.192.168.1.99
    32766:  from all lookup main
    32767:  from all lookup default

and ip route show table ctdb.192.168.1.99 will show:

    192.168.1.0/24 dev eth2 scope link

A line containing a gateway is usually used to add a default route for a particular source address. Consider this additional configuration line:

    192.168.1.99 0.0.0.0/0 192.168.1.1

In the situation described above this causes one extra routing command to be executed:

    ip route add 0.0.0.0/0 via 192.168.1.1 dev eth2 table ctdb.192.168.1.99

With both configuration lines, ip route show table ctdb.192.168.1.99 will then show:

    192.168.1.0/24 dev eth2 scope link
    default via 192.168.1.1 dev eth2
Sample configuration

Here is a more complete example configuration.

/etc/ctdb/public_addresses:

    192.168.1.98 eth2,eth3
    192.168.1.99 eth2,eth3

/etc/ctdb/policy_routing:

    192.168.1.98 192.168.1.0/24
    192.168.1.98 192.168.200.0/24 192.168.1.254
    192.168.1.98 0.0.0.0/0        192.168.1.1
    192.168.1.99 192.168.1.0/24
    192.168.1.99 192.168.200.0/24 192.168.1.254
    192.168.1.99 0.0.0.0/0        192.168.1.1
NOTIFICATION SCRIPT

When certain state changes occur in CTDB, it can be configured to perform arbitrary actions via a notification script, for example sending SNMP traps or emails when a node becomes unhealthy. This is activated by setting the CTDB_NOTIFY_SCRIPT configuration variable. The specified script must be executable. Use of the provided /etc/ctdb/notify.sh script is recommended. It executes files in /etc/ctdb/notify.d/.

CTDB currently generates notifications after CTDB changes to these states:

init
setup
startup
healthy
unhealthy
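A minimal notification hook, placed in /etc/ctdb/notify.d/ and made executable, could look like the following sketch. The file name is hypothetical; it assumes the stock /etc/ctdb/notify.sh, which passes the new state to each hook as its first argument:

    #!/bin/sh
    # /etc/ctdb/notify.d/50-log-state (hypothetical example)
    # $1 is the state CTDB has changed to: init, setup, startup, healthy or unhealthy
    event="$1"
    logger -t ctdb "node changed state to: ${event}"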
DEBUG LEVELS

Valid values for DEBUGLEVEL are:

EMERG (-3)
ALERT (-2)
CRIT (-1)
ERR (0)
WARNING (1)
NOTICE (2)
INFO (3)
DEBUG (4)
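The debug level can be inspected and changed at run time, for example (see ctdb(1) for details):

    # Show the current debug level
    ctdb getdebug
    # Increase verbosity to INFO while troubleshooting
    ctdb setdebug INFO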
REMOTE CLUSTER NODES

It is possible to have a CTDB cluster that spans across a WAN link. For example, you might have a CTDB cluster in your datacentre but also want one additional CTDB node located at a remote branch site. This is similar to how a WAN accelerator works, with the difference that while a WAN accelerator often acts as a proxy or a MitM, in the CTDB remote cluster node configuration the Samba instance at the remote site IS the genuine server, not a proxy and not a MitM, and thus provides 100% correct CIFS semantics to clients. Think of the cluster as one single multihomed Samba server where one of the NICs (the remote node) is very far away.

NOTE: This does require that the cluster filesystem you use can cope with WAN-link latencies. Not all cluster filesystems can handle WAN-link latencies! Whether this will provide very good WAN-accelerator performance or perform very poorly depends entirely on how well your cluster filesystem handles high latency for data and metadata operations.

To activate a node as a remote cluster node you need to set the following two parameters in /etc/sysconfig/ctdb for the remote node:

    CTDB_CAPABILITY_LMASTER=no
    CTDB_CAPABILITY_RECMASTER=no
SEE ALSO

ctdb(1), ctdbd(1), ctdbd_wrapper(1), ltdbtool(1), onnode(1), ping_pong(1), ctdbd.conf(5), ctdb-statistics(7), ctdb-tunables(7), http://ctdb.samba.org/

AUTHOR

This documentation was written by Ronnie Sahlberg, Amitay Isaacs and Martin Schwenke.

COPYRIGHT

Copyright © 2007 Andrew Tridgell, Ronnie Sahlberg