Dynamic Configuration Settings

Dynamic configuration is stored in the DCS (Distributed Configuration Store) and applied on all cluster nodes.

To change the dynamic configuration you can use either the patronictl edit-config tool or the Patroni REST API.

Warning

When changing the values of loop_wait, retry_timeout, or ttl you have to follow the rule:

loop_wait + 2 * retry_timeout <= ttl
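
For example, the following values satisfy the rule (10 + 2 * 10 = 30 <= 30) and match Patroni's defaults; they are shown only as a sketch of a valid combination, and could be applied with patronictl edit-config or a PATCH request to the /config REST endpoint:

loop_wait: 10
retry_timeout: 10
ttl: 30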

pg_ident: list of lines that Patroni will use to generate pg_ident.conf. Patroni ignores this parameter if the ident_file PostgreSQL parameter is set to a non-default value (see the sketch after the list below).

  • - mapname1 systemname1 pguser1

  • - mapname1 systemname2 pguser2
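
A minimal sketch of how these lines appear in the dynamic configuration, assuming the pg_ident entry sits under the postgresql section and reusing the placeholder names from the list above:

postgresql:
  pg_ident:
    - mapname1 systemname1 pguser1
    - mapname1 systemname2 pguser2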

standby_cluster: if this section is defined, we want to bootstrap a standby cluster (see the sketch after this list).

  • host: the address of the remote node

  • port: the port of the remote node

  • primary_slot_name: which slot on the remote node to use for replication. This parameter is optional; the default value is derived from the instance name (see the slot_name_from_member_name function).

  • create_replica_methods: an ordered list of methods that can be used to bootstrap the standby leader from the remote primary; it can be different from the list defined in the postgresql configuration section.

  • restore_command: command to restore WAL records from the remote primary to nodes in a standby cluster; it can be different from the one defined in the postgresql configuration section.

  • archive_cleanup_command: cleanup command for the standby leader

  • recovery_min_apply_delay: how long to wait before actually applying WAL records on the standby leader
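
A sketch of a standby cluster definition that uses the parameters above; the host address, slot name, and archive paths are hypothetical, and basebackup is just one possible replica creation method:

standby_cluster:
  host: 10.0.0.10                              # hypothetical address of the remote primary
  port: 5432
  primary_slot_name: standby_cluster_slot      # hypothetical slot on the remote node
  create_replica_methods:
    - basebackup
  restore_command: cp /mnt/wal_archive/%f %p   # hypothetical WAL archive location
  archive_cleanup_command: pg_archivecleanup /mnt/wal_archive %r
  recovery_min_apply_delay: 5min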

slots: define permanent replication slots. These slots will be preserved during switchover/failover. Permanent slots that don't exist will be created by Patroni. With PostgreSQL 11 onwards, permanent physical slots are created on all nodes and their position is advanced every loop_wait seconds. For PostgreSQL versions older than 11, permanent physical replication slots are maintained only on the current primary. The logical slots are copied from the primary to a standby with restart, and after that their position is advanced every loop_wait seconds (if necessary). Copying of logical slot files is performed via a libpq connection, using either rewind or superuser credentials (see the postgresql.authentication section). There is always a chance that the logical slot position on the replica is a bit behind the former primary, therefore the application should be prepared to receive some messages a second time after a failover. The easiest way of doing so is to track confirmed_flush_lsn. Enabling permanent replication slots requires postgresql.use_slots to be set to true. If there are permanent logical replication slots defined, Patroni will automatically enable hot_standby_feedback. Since the failover of logical replication slots is unsafe on PostgreSQL 9.6 and older and PostgreSQL version 10 is missing some important functions, the feature only works with PostgreSQL 11+.

  • my_slot_name: the name of the permanent replication slot. If the permanent slot name matches the name of the current node it will not be created on this node. If you add a permanent physical replication slot whose name matches the name of a Patroni member, Patroni will ensure that the slot is not removed even if the corresponding member becomes unresponsive, a situation that would normally result in the slot's removal by Patroni. Although this can be useful in some situations, such as when you want replication slots used by members to persist during temporary failures or when importing existing members into a new Patroni cluster (see Convert a Standalone to a Patroni Cluster for details), the operator should take care to remove such name clashes from the DCS once the slot is no longer required, because of their effect on the normal functioning of Patroni.

    • type: slot type. Can be physical or logical. If the slot is logical, you have to additionally define database and plugin.

    • database: the database name where logical slots should be created.

    • plugin: the plugin name for the logical slot.

ignore_slots: list of sets of replication slot properties for which Patroni should ignore matching slots. This setting is useful when some replication slots are managed outside of Patroni. Any subset of matching properties will cause a slot to be ignored.

  • name: the name of the replication slot.

  • type: slot type. Can be physical or logical. If the slot is logical, you may additionally define database and/or plugin.

  • database: the database name (when matching a logical slot).

  • plugin: the logical decoding plugin (when matching a logical slot).

Note: slots is a hashmap while ignore_slots is an array. For example:

slots:
  permanent_logical_slot_name:
    type: logical
    database: my_db
    plugin: test_decoding
  permanent_physical_slot_name:
    type: physical
  ...
ignore_slots:
  - name: ignored_logical_slot_name
    type: logical
    database: my_db
    plugin: test_decoding
  - name: ignored_physical_slot_name
    type: physical
  ...

Note: if the cluster topology is static (a fixed number of nodes that never change their names) you can configure permanent physical replication slots with names corresponding to the names of the nodes, to avoid recycling of WAL files while a replica is temporarily down:

slots:
  node_name1:
    type: physical
  node_name2:
    type: physical
  node_name3:
    type: physical
  ...

Warning

Permanent replication slots are synchronized only from the primary/standby_leader to replica nodes. That means applications are supposed to use them only from the leader node; using them on replica nodes will cause indefinite growth of pg_wal on all other nodes in the cluster. Permanent physical slots that match the Patroni member names, if you happen to configure any, are an exception to that rule: those are synchronized among all nodes because they are used for replication between them.

