NAME
DROP NODE - Remove a node from participating in replication
SYNOPSIS
DROP NODE (options);
DESCRIPTION
Drop a node. This command removes the specified node entirely from the
replication system's configuration. If the replication daemon is still running
on that node (and processing events), it will attempt to uninstall the
replication system and terminate itself.
- ID = ival
- Node ID of the node to remove. This may be either a single node ID or a
quoted, comma-separated list of node IDs.
- EVENT NODE = ival
- Node ID of the node on which to generate the event.
This uses the schema function dropNode(p_no_ids integer[]).
When you invoke DROP NODE, one of the steps is to run UNINSTALL NODE.
EXAMPLE
DROP NODE ( ID = 2, EVENT NODE = 1 );
DROP NODE ( ID = '3,4,5', EVENT NODE = 1 );
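A fuller sketch of a complete slonik script for the first example above; the
cluster name and conninfo strings here are placeholder assumptions and must be
replaced with your own:

cluster name = testcluster;
node 1 admin conninfo = 'dbname=mydb host=node1 user=slony';
node 2 admin conninfo = 'dbname=mydb host=node2 user=slony';
DROP NODE ( ID = 2, EVENT NODE = 1 );

Note that admin conninfo entries are needed for both the surviving node (which
receives the event) and the node being dropped.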
LOCKING BEHAVIOUR
Dropping the replication triggers from application tables requires exclusive
access to each replicated table on the node being discarded.
DANGEROUS/UNINTUITIVE BEHAVIOUR
If you are using connections that cache query plans (particularly common in
Java application frameworks with connection pools), those connections may hold
cached plans that reference the pre-DROP NODE state of the schema, and you
will get error messages indicating missing OIDs.
After dropping a node, you may also need to recycle connections in your
application.
You cannot submit this command with an EVENT NODE equal to the number of the
node being dropped; the request must go to a node that will remain in the
cluster.
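For instance, assuming node 2 is the node being dropped, a sketch of what is
and is not accepted:

# Rejected: the event node is the node being dropped.
DROP NODE ( ID = 2, EVENT NODE = 2 );
# Accepted: node 1 remains in the cluster.
DROP NODE ( ID = 2, EVENT NODE = 1 );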
SLONIK EVENT CONFIRMATION BEHAVIOUR
Slonik waits until nodes (other than the one being dropped) are caught up with
non-SYNC events from all other nodes before submitting the DROP NODE command.
This command was introduced in Slony-I 1.0.
In version 2.0, the default value for EVENT NODE was removed, so an event node
must be specified explicitly.
In version 2.2, support for dropping multiple nodes in a single command was
introduced.