47.2. Logical Decoding Concepts

47.2.1. Logical Decoding

Logical decoding is the process of extracting all persistent changes to a database's tables into a coherent, easy to understand format which can be interpreted without detailed knowledge of the database's internal state.

In PostgreSQL, logical decoding is implemented by decoding the contents of the write-ahead log, which describe changes on a storage level, into an application-specific form such as a stream of tuples or SQL statements.
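For illustration, the SQL-level interface described in Section 47.1 can be used to watch a change stream directly; the slot and table names below are only examples, and test_decoding is the demonstration output plugin shipped with PostgreSQL:

    -- Create a slot using the test_decoding demonstration output plugin.
    SELECT pg_create_logical_replication_slot('regression_slot', 'test_decoding');

    -- Changes made after the slot was created ...
    INSERT INTO data (id, value) VALUES (1, 'hello');

    -- ... are returned as a readable, per-transaction change stream.
    SELECT lsn, xid, data
      FROM pg_logical_slot_get_changes('regression_slot', NULL, NULL);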

47.2.2. Replication Slots

In the context of logical replication, a slot represents a stream of changes that can be replayed to a client in the order they were made on the origin server. Each slot streams a sequence of changes from a single database.

Note

PostgreSQL also has streaming replication slots (see Section 26.2.5), but they are used somewhat differently there.

A replication slot has an identifier that is unique across all databases in a PostgreSQL cluster. Slots persist independently of the connection using them and are crash-safe.

A logical slot will emit each change just once in normal operation. The current position of each slot is persisted only at checkpoint, so in the case of a crash the slot may return to an earlier LSN, which will then cause recent changes to be sent again when the server restarts. Logical decoding clients are responsible for avoiding ill effects from handling the same message more than once. Clients may wish to record the last LSN they saw when decoding and skip over any repeated data, or (when using the replication protocol) request that decoding start from that LSN rather than letting the server determine the start point. The Replication Progress Tracking feature is designed for this purpose; refer to replication origins.
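A minimal sketch of such progress tracking using replication origins (the origin name and LSN are illustrative):

    -- Register an origin that represents this consumer.
    SELECT pg_replication_origin_create('my_receiver');

    -- After applying a batch of decoded changes, record how far we got.
    SELECT pg_replication_origin_advance('my_receiver', '0/16B3748'::pg_lsn);

    -- On restart, look up the recorded position and resume decoding from
    -- there instead of replaying already-applied changes.
    SELECT pg_replication_origin_progress('my_receiver', true);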

Multiple independent slots may exist for a single database. Each slot has its own state, allowing different consumers to receive changes from different points in the database change stream. For most applications, a separate slot will be required for each consumer.

A logical replication slot knows nothing about the state of the receiver(s). It's even possible to have multiple different receivers using the same slot at different times; they'll just get the changes following on from when the last receiver stopped consuming them. Only one receiver may consume changes from a slot at any given time.
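The per-slot state can be inspected in the pg_replication_slots view, for example (column subset chosen for brevity):

    SELECT slot_name, plugin, active, restart_lsn, confirmed_flush_lsn
      FROM pg_replication_slots
     WHERE slot_type = 'logical';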

A logical replication slot can also be created on a hot standby. To prevent VACUUM from removing required rows from the system catalogs, hot_standby_feedback should be set on the standby. In spite of that, if any required rows get removed, the slot is invalidated. It's highly recommended to use a physical slot between the primary and the standby. Otherwise, hot_standby_feedback will work, but only while the connection is alive (for example, a node restart would break it). Then the primary may delete system catalog rows that could be needed by the logical decoding on the standby (as it does not know about the catalog_xmin on the standby). Existing logical slots on the standby also get invalidated if wal_level on the primary is reduced to less than logical. This is done as soon as the standby detects such a change in the WAL stream. It means that, for walsenders that are lagging (if any), some WAL records up to the wal_level parameter change on the primary won't be decoded.
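A configuration sketch for this setup (slot and host names are illustrative). On the primary, set wal_level = logical and create a physical slot for the standby to stream from:

    SELECT pg_create_physical_replication_slot('standby1_phys');

On the standby, in postgresql.conf:

    hot_standby_feedback = on
    primary_slot_name = 'standby1_phys'
    primary_conninfo = 'host=primary port=5432 user=repluser'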

Creation of a logical slot requires information about all the currently running transactions. On the primary, this information is available directly, but on a standby, it has to be obtained from the primary. Thus, slot creation may need to wait for some activity to happen on the primary. If the primary is idle, creating a logical slot on the standby may take noticeable time. This can be sped up by calling the pg_log_standby_snapshot function on the primary.
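For example, while the slot creation is waiting on the standby, running the following on the primary logs the needed snapshot of running transactions immediately:

    SELECT pg_log_standby_snapshot();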

Caution

Replication slots persist across crashes and know nothing about the state of their consumer(s). They will prevent removal of required resources even when there is no connection using them. This consumes storage because neither required WAL nor required rows from the system catalogs can be removed by VACUUM as long as they are required by a replication slot. In extreme cases this could cause the database to shut down to prevent transaction ID wraparound (see Section 24.1.5). So if a slot is no longer required, it should be dropped.
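For example, a slot that no longer has a consumer can be identified and dropped like this (the slot name is illustrative):

    -- Find slots that currently have no consumer attached.
    SELECT slot_name, active, restart_lsn, wal_status
      FROM pg_replication_slots
     WHERE NOT active;

    -- Drop a slot that is no longer needed.
    SELECT pg_drop_replication_slot('obsolete_slot');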

47.2.3. Replication Slot Synchronization

The logical replication slots on the primary can be synchronized to the hot standby by using the failover parameter of pg_create_logical_replication_slot, or by using the failover option of CREATE SUBSCRIPTION during slot creation, and then calling pg_sync_replication_slots on the standby. By setting sync_replication_slots on the standby, the failover slots can instead be synchronized periodically by the slot sync worker. For the synchronization to work, it is mandatory to have a physical replication slot between the primary and the standby (i.e., primary_slot_name should be configured on the standby), and hot_standby_feedback must be enabled on the standby. It is also necessary to specify a valid dbname in the primary_conninfo. It's highly recommended that the said physical replication slot is named in the synchronized_standby_slots list on the primary, to prevent the subscriber from consuming changes faster than the hot standby. Even when correctly configured, some latency is expected when sending changes to logical subscribers due to the waiting on slots named in synchronized_standby_slots. When synchronized_standby_slots is used, the primary server will not completely shut down until the corresponding standbys, associated with the physical replication slots specified in synchronized_standby_slots, have confirmed receiving the WAL up to the latest flushed position on the primary server.
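A sketch of the required configuration (slot, subscription, and host names are illustrative). On the primary, mark the slot as a failover candidate, either directly or through the subscription:

    SELECT pg_create_logical_replication_slot('sub1_slot', 'pgoutput', failover => true);
    -- or, on the subscriber:
    CREATE SUBSCRIPTION sub1 CONNECTION 'host=primary dbname=appdb'
        PUBLICATION pub1 WITH (failover = true);

On the standby, in postgresql.conf (pointing at the physical slot used between primary and standby):

    sync_replication_slots = on
    hot_standby_feedback = on
    primary_slot_name = 'standby1_phys'
    primary_conninfo = 'host=primary user=repluser dbname=postgres'

On the primary, in postgresql.conf:

    synchronized_standby_slots = 'standby1_phys'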

The ability to resume logical replication after failover depends upon the pg_replication_slots.synced value for the synchronized slots on the standby at the time of failover. Only persistent slots whose synced state became true on the standby before failover can be used for logical replication after failover. Temporary synced slots cannot be used for logical decoding; therefore, logical replication for those slots cannot be resumed. For example, if the synchronized slot could not become persistent on the standby due to a disabled subscription, then the subscription cannot be resumed after failover even when it is enabled.
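The relevant state can be checked on the standby before relying on a slot, for example:

    SELECT slot_name, synced, temporary, failover
      FROM pg_replication_slots;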

To resume logical replication after failover from the synced logical slots, the subscription's 'conninfo' must be altered to point to the new primary server. This is done using ALTER SUBSCRIPTION ... CONNECTION. It is recommended that subscriptions are first disabled before promoting the standby and are re-enabled after altering the connection string.
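A sketch of the recommended sequence on the subscriber (subscription name and connection string are illustrative):

    ALTER SUBSCRIPTION sub1 DISABLE;
    -- ... promote the standby ...
    ALTER SUBSCRIPTION sub1 CONNECTION 'host=new_primary port=5432 dbname=appdb';
    ALTER SUBSCRIPTION sub1 ENABLE;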

Caution

There is a chance that the old primary comes back up during the promotion, and if subscriptions are not disabled, the logical subscribers may continue to receive data from the old primary server even after promotion, until the connection string is altered. This might result in data inconsistency issues, preventing the logical subscribers from being able to continue replication from the new primary server.

47.2.4. Output Plugins

Output plugins transform the data from the write-ahead log's internal representation into the format the consumer of a replication slot desires.
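The plugin is chosen when the slot is created, and plugin-specific options are supplied by the consumer at decoding time. A small sketch using the test_decoding demonstration plugin (slot name and option values are illustrative):

    SELECT pg_create_logical_replication_slot('fmt_slot', 'test_decoding');
    SELECT data
      FROM pg_logical_slot_peek_changes('fmt_slot', NULL, NULL, 'include-xids', '0');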

47.2.5. Exported Snapshots

When a new replication slot is created using the streaming replication interface (see CREATE_REPLICATION_SLOT), a snapshot is exported (see Section 9.28.5), which will show exactly the state of the database after which all changes will be included in the change stream. This can be used to create a new replica by using SET TRANSACTION SNAPSHOT to read the state of the database at the moment the slot was created. This transaction can then be used to dump the database's state at that point in time, which afterwards can be updated using the slot's contents without losing any changes.
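A sketch of this flow (the slot name is illustrative; the snapshot identifier is returned by the CREATE_REPLICATION_SLOT command). In a replication-protocol connection (e.g. psql "dbname=appdb replication=database"):

    CREATE_REPLICATION_SLOT copy_slot LOGICAL test_decoding;

Then, in an ordinary connection, read the database exactly as it was when the slot was created:

    BEGIN TRANSACTION ISOLATION LEVEL REPEATABLE READ;
    SET TRANSACTION SNAPSHOT '00000003-0000001B-1';
    -- ... copy the initial table contents here ...
    COMMIT;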

Creation of a snapshot is not always possible. In particular, it will fail when connected to a hot standby. Applications that do not require snapshot export may suppress it with the NOEXPORT_SNAPSHOT option.

