Commit 05e982c

Reduce memory block size for decoded tuple storage to 8kB.
Commit a4ccc1c introduced the Generation Context and modified the logical decoding process to use a Generation Context with a fixed block size of 8MB for storing tuple data decoded during logical decoding (i.e., rb->tup_context). Several reports have indicated that the logical decoding process can be terminated due to out-of-memory (OOM) situations caused by excessive memory usage in rb->tup_context.

This issue can occur when decoding a workload involving several concurrent transactions, including a long-running transaction that modifies tuples. By design, the Generation Context does not free a memory block until all chunks within that block are released. Consequently, if tuples modified by the long-running transaction are stored across multiple memory blocks, these blocks remain allocated until the long-running transaction completes, leading to substantial memory fragmentation. The memory usage during logical decoding, tracked by rb->size, does not account for memory fragmentation, resulting in potentially much higher memory consumption than the value of the logical_decoding_work_mem parameter.

Various improvement strategies were discussed in the relevant thread. This change reduces the block size of the Generation Context used in rb->tup_context from 8MB to 8kB. This modification significantly decreases the likelihood of substantial memory fragmentation occurring and is relatively straightforward to backport. Performance testing across multiple platforms has confirmed that this change will not introduce any performance degradation that would impact actual operation.

Backport to all supported branches.

Reported-by: Alex Richman, Michael Guissine, Avi Weinberg
Reviewed-by: Amit Kapila, Fujii Masao, David Rowley
Tested-by: Hayato Kuroda, Shlok Kyal
Discussion: https://postgr.es/m/CAD21AoBTY1LATZUmvSXEssvq07qDZufV4AF-OHh9VD2pC0VY2A%40mail.gmail.com
Backpatch-through: 12
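For readers unfamiliar with the allocator behavior described above, here is a minimal, hypothetical sketch (not part of this commit) of how a Generation Context keeps an entire block allocated while any chunk in it is still live, which is what drives the fragmentation with 8MB blocks. It assumes a PostgreSQL backend environment and reuses the GenerationContextCreate() call shape from the diff below; fragmentation_sketch(), long_lived, and short_lived are illustrative names only.

    #include "postgres.h"
    #include "utils/memutils.h"

    /* Illustrative sketch only; not part of the commit. */
    static void
    fragmentation_sketch(MemoryContext parent)
    {
        MemoryContext tup_context;
        MemoryContext oldctx;
        char       *long_lived;
        char       *short_lived;

        /* Same call shape as rb->tup_context after this commit: 8kB blocks. */
        tup_context = GenerationContextCreate(parent,
                                              "Tuples",
                                              SLAB_DEFAULT_BLOCK_SIZE,
                                              SLAB_DEFAULT_BLOCK_SIZE,
                                              SLAB_DEFAULT_BLOCK_SIZE);

        oldctx = MemoryContextSwitchTo(tup_context);

        /* A chunk belonging to a long-running transaction ... */
        long_lived = palloc(256);
        /* ... interleaved in the same block with a chunk from a short transaction. */
        short_lived = palloc(256);

        MemoryContextSwitchTo(oldctx);

        /*
         * Freeing the short transaction's chunk does not release the block: a
         * Generation Context frees a block only once every chunk in it has
         * been freed, so the block stays pinned by long_lived.  With 8MB
         * blocks, each such pinned block could hold up to 8MB of mostly-dead
         * space; with 8kB blocks the waste per pinned block is bounded at 8kB.
         */
        pfree(short_lived);

        /* The block becomes reclaimable only when its last chunk is freed. */
        pfree(long_lived);

        MemoryContextDelete(tup_context);
    }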
1 parent 4a933ee  commit 05e982c

File tree

1 file changed (+11, -6 lines)

src/backend/replication/logical/reorderbuffer.c

Lines changed: 11 additions & 6 deletions

@@ -332,15 +332,20 @@ ReorderBufferAllocate(void)
 						  sizeof(ReorderBufferTXN));
 
 	/*
-	 * XXX the allocation sizes used below pre-date generation context's block
-	 * growing code. These values should likely be benchmarked and set to
-	 * more suitable values.
+	 * To minimize memory fragmentation caused by long-running transactions
+	 * with changes spanning multiple memory blocks, we use a single
+	 * fixed-size memory block for decoded tuple storage. The performance
+	 * testing showed that the default memory block size maintains logical
+	 * decoding performance without causing fragmentation due to concurrent
+	 * transactions. One might think that we can use the max size as
+	 * SLAB_LARGE_BLOCK_SIZE but the test also showed it doesn't help resolve
+	 * the memory fragmentation.
 	 */
 	buffer->tup_context = GenerationContextCreate(new_ctx,
 												   "Tuples",
-												   SLAB_LARGE_BLOCK_SIZE,
-												   SLAB_LARGE_BLOCK_SIZE,
-												   SLAB_LARGE_BLOCK_SIZE);
+												   SLAB_DEFAULT_BLOCK_SIZE,
+												   SLAB_DEFAULT_BLOCK_SIZE,
+												   SLAB_DEFAULT_BLOCK_SIZE);
 
 	hash_ctl.keysize = sizeof(TransactionId);
 	hash_ctl.entrysize = sizeof(ReorderBufferTXNByIdEnt);
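For context (not part of the diff): the two block-size constants swapped here are defined in src/include/utils/memutils.h, and their values match the 8MB-to-8kB change described in the commit message.

    #define SLAB_DEFAULT_BLOCK_SIZE		(8 * 1024)			/* 8kB, new value */
    #define SLAB_LARGE_BLOCK_SIZE		(8 * 1024 * 1024)	/* 8MB, old value */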
