Commit fe8091c

Revert "For inplace update, send nontransactional invalidations."

This reverts commit 95c5acb (v17) and counterparts in each other
non-master branch.  If released, that commit would have caused a
worst-in-years minor release regression, via undetected LWLock
self-deadlock.  This commit and its self-deadlock fix warrant more bake
time in the master branch.

Reported by Alexander Lakhin.

Discussion: https://postgr.es/m/10ec0bc3-5933-1189-6bb8-5dec4114558e@gmail.com
1 parent be74b94 · commit fe8091c

File tree

11 files changed: +117 -289 lines changed


src/backend/access/heap/heapam.c

Lines changed: 7 additions & 36 deletions

@@ -6115,24 +6115,6 @@ heap_inplace_update_and_unlock(Relation relation,
 	if (oldlen != newlen || htup->t_hoff != tuple->t_data->t_hoff)
 		elog(ERROR, "wrong tuple length");
 
-	/*
-	 * Construct shared cache inval if necessary.  Note that because we only
-	 * pass the new version of the tuple, this mustn't be used for any
-	 * operations that could change catcache lookup keys.  But we aren't
-	 * bothering with index updates either, so that's true a fortiori.
-	 */
-	CacheInvalidateHeapTupleInplace(relation, tuple, NULL);
-
-	/*
-	 * Unlink relcache init files as needed.  If unlinking, acquire
-	 * RelCacheInitLock until after associated invalidations.  By doing this
-	 * in advance, if we checkpoint and then crash between inplace
-	 * XLogInsert() and inval, we don't rely on StartupXLOG() ->
-	 * RelationCacheInitFileRemove().  That uses elevel==LOG, so replay would
-	 * neglect to PANIC on EIO.
-	 */
-	PreInplace_Inval();
-
 	/* NO EREPORT(ERROR) from here till changes are logged */
 	START_CRIT_SECTION();
 
@@ -6176,28 +6158,17 @@ heap_inplace_update_and_unlock(Relation relation,
 		PageSetLSN(BufferGetPage(buffer), recptr);
 	}
 
-	LockBuffer(buffer, BUFFER_LOCK_UNLOCK);
-
-	/*
-	 * Send invalidations to shared queue.  SearchSysCacheLocked1() assumes we
-	 * do this before UnlockTuple().
-	 *
-	 * If we're mutating a tuple visible only to this transaction, there's an
-	 * equivalent transactional inval from the action that created the tuple,
-	 * and this inval is superfluous.
-	 */
-	AtInplace_Inval();
-
 	END_CRIT_SECTION();
-	UnlockTuple(relation, &tuple->t_self, InplaceUpdateTupleLock);
 
-	AcceptInvalidationMessages();	/* local processing of just-sent inval */
+	heap_inplace_unlock(relation, oldtup, buffer);
 
 	/*
-	 * Queue a transactional inval.  The immediate invalidation we just sent
-	 * is the only one known to be necessary.  To reduce risk from the
-	 * transition to immediate invalidation, continue sending a transactional
-	 * invalidation like we've long done.  Third-party code might rely on it.
+	 * Send out shared cache inval if necessary.  Note that because we only
+	 * pass the new version of the tuple, this mustn't be used for any
+	 * operations that could change catcache lookup keys.  But we aren't
+	 * bothering with index updates either, so that's true a fortiori.
+	 *
+	 * XXX ROLLBACK discards the invalidation.  See test inplace-inval.spec.
 	 */
 	if (!IsBootstrapProcessingMode())
 		CacheInvalidateHeapTuple(relation, tuple, NULL);
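
As a reading aid, here is a toy, self-contained C sketch of the ordering this hunk restores in heap_inplace_update_and_unlock(): mutate and WAL-log the tuple inside the critical section, release the locks, and only then queue a transactional catcache invalidation, instead of the reverted PreInplace_Inval()/AtInplace_Inval() immediate invalidations. Every function below is an invented printf stub standing in for the real call named in its output; this is not PostgreSQL code.

    #include <stdio.h>

    /* Invented stand-ins for the PostgreSQL calls shown in the diff above. */
    static void start_crit_section(void)        { puts("START_CRIT_SECTION()"); }
    static void overwrite_tuple_in_place(void)  { puts("memcpy new data over old tuple"); }
    static void log_inplace_record(void)        { puts("XLogInsert() of the inplace record"); }
    static void end_crit_section(void)          { puts("END_CRIT_SECTION()"); }
    static void unlock_buffer_and_tuple(void)   { puts("heap_inplace_unlock()"); }

    static void
    queue_transactional_inval(void)
    {
        /* Queued now, delivered at commit; a ROLLBACK discards it, which is
         * the limitation the reverted commit had tried to remove. */
        puts("CacheInvalidateHeapTuple() queued for commit");
    }

    int
    main(void)
    {
        /* Post-revert ordering: no invalidation work before or inside the
         * critical section. */
        start_crit_section();
        overwrite_tuple_in_place();
        log_inplace_record();
        end_crit_section();
        unlock_buffer_and_tuple();
        queue_transactional_inval();
        return 0;
    }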

src/backend/access/transam/xact.c

Lines changed: 8 additions & 18 deletions

@@ -1249,24 +1249,14 @@ RecordTransactionCommit(void)
 
 		/*
 		 * Transactions without an assigned xid can contain invalidation
-		 * messages.  While inplace updates do this, this is not known to be
-		 * necessary; see comment at inplace CacheInvalidateHeapTuple().
-		 * Extensions might still rely on this capability, and standbys may
-		 * need to process those invals.  We can't emit a commit record
-		 * without an xid, and we don't want to force assigning an xid,
-		 * because that'd be problematic for e.g. vacuum.  Hence we emit a
-		 * bespoke record for the invalidations.  We don't want to use that in
-		 * case a commit record is emitted, so they happen synchronously with
-		 * commits (besides not wanting to emit more WAL records).
-		 *
-		 * XXX Every known use of this capability is a defect.  Since an XID
-		 * isn't controlling visibility of the change that prompted invals,
-		 * other sessions need the inval even if this transaction aborts.
-		 *
-		 * ON COMMIT DELETE ROWS does a nontransactional index_build(), which
-		 * queues a relcache inval, including in transactions without an xid
-		 * that had read the (empty) table.  Standbys don't need any ON COMMIT
-		 * DELETE ROWS invals, but we've not done the work to withhold them.
+		 * messages (e.g. explicit relcache invalidations or catcache
+		 * invalidations for inplace updates); standbys need to process those.
+		 * We can't emit a commit record without an xid, and we don't want to
+		 * force assigning an xid, because that'd be problematic for e.g.
+		 * vacuum.  Hence we emit a bespoke record for the invalidations.  We
+		 * don't want to use that in case a commit record is emitted, so they
+		 * happen synchronously with commits (besides not wanting to emit more
+		 * WAL records).
 		 */
 		if (nmsgs != 0)
 		{

src/backend/catalog/index.c

Lines changed: 2 additions & 9 deletions

@@ -2882,19 +2882,12 @@ index_update_stats(Relation rel,
 	if (dirty)
 	{
 		systable_inplace_update_finish(state, tuple);
-		/* the above sends transactional and immediate cache inval messages */
+		/* the above sends a cache inval message */
 	}
 	else
 	{
 		systable_inplace_update_cancel(state);
-
-		/*
-		 * While we didn't change relhasindex, CREATE INDEX needs a
-		 * transactional inval for when the new index's catalog rows become
-		 * visible.  Other CREATE INDEX and REINDEX code happens to also queue
-		 * this inval, but keep this in case rare callers rely on this part of
-		 * our API contract.
-		 */
+		/* no need to change tuple, but force relcache inval anyway */
 		CacheInvalidateRelcacheByTuple(tuple);
 	}
 

src/backend/replication/logical/decode.c

Lines changed: 15 additions & 11 deletions

@@ -460,19 +460,23 @@ DecodeHeapOp(LogicalDecodingContext *ctx, XLogRecordBuffer *buf)
 
 			/*
 			 * Inplace updates are only ever performed on catalog tuples and
-			 * can, per definition, not change tuple visibility.  Inplace
-			 * updates don't affect storage or interpretation of table rows,
-			 * so they don't affect logicalrep_write_tuple() outcomes.  Hence,
-			 * we don't process invalidations from the original operation.  If
-			 * inplace updates did affect those things, invalidations wouldn't
-			 * make it work, since there are no snapshot-specific versions of
-			 * inplace-updated values.  Since we also don't decode catalog
-			 * tuples, we're not interested in the record's contents.
+			 * can, per definition, not change tuple visibility.  Since we
+			 * don't decode catalog tuples, we're not interested in the
+			 * record's contents.
 			 *
-			 * WAL contains likely-unnecessary commit-time invals from the
-			 * CacheInvalidateHeapTuple() call in heap_inplace_update().
-			 * Excess invalidation is safe.
+			 * In-place updates can be used either by XID-bearing
+			 * transactions (e.g. in CREATE INDEX CONCURRENTLY) or by XID-less
+			 * transactions (e.g. VACUUM).  In the former case, the commit
+			 * record will include cache invalidations, so we mark the
+			 * transaction as catalog modifying here.  Currently that's
+			 * redundant because the commit will do that as well, but once we
+			 * support decoding in-progress relations, this will be important.
 			 */
+			if (!TransactionIdIsValid(xid))
+				break;
+
+			SnapBuildProcessChange(builder, xid, buf->origptr);
+			ReorderBufferXidSetCatalogChanges(ctx->reorder, xid, buf->origptr);
 			break;
 
 		case XLOG_HEAP_CONFIRM:

src/backend/utils/cache/catcache.c

Lines changed: 3 additions & 4 deletions

@@ -2129,8 +2129,7 @@ void
 PrepareToInvalidateCacheTuple(Relation relation,
 							  HeapTuple tuple,
 							  HeapTuple newtuple,
-							  void (*function) (int, uint32, Oid, void *),
-							  void *context)
+							  void (*function) (int, uint32, Oid))
 {
 	slist_iter	iter;
 	Oid			reloid;
@@ -2171,7 +2170,7 @@ PrepareToInvalidateCacheTuple(Relation relation,
 		hashvalue = CatalogCacheComputeTupleHashValue(ccp, ccp->cc_nkeys, tuple);
 		dbid = ccp->cc_relisshared ? (Oid) 0 : MyDatabaseId;
 
-		(*function) (ccp->id, hashvalue, dbid, context);
+		(*function) (ccp->id, hashvalue, dbid);
 
 		if (newtuple)
 		{
@@ -2180,7 +2179,7 @@ PrepareToInvalidateCacheTuple(Relation relation,
 			newhashvalue = CatalogCacheComputeTupleHashValue(ccp, ccp->cc_nkeys, newtuple);
 
 			if (newhashvalue != hashvalue)
-				(*function) (ccp->id, newhashvalue, dbid, context);
+				(*function) (ccp->id, newhashvalue, dbid);
 		}
 	}
 }
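
For illustration, a minimal, self-contained sketch of the callback shape that PrepareToInvalidateCacheTuple() accepts again after this revert: three arguments and no caller-supplied void *context. The typedefs, names, and values below are hypothetical stand-ins; only the function-pointer type comes from the diff. A callback that needs extra state now has to reach it some other way (for example through file-scope state), since no context pointer is threaded through.

    #include <stdint.h>
    #include <stdio.h>

    typedef unsigned int Oid;   /* stand-in for PostgreSQL's Oid */
    typedef uint32_t uint32;    /* stand-in for PostgreSQL's uint32 */

    /* Hypothetical callback matching the reverted three-argument shape. */
    static int ncalls = 0;

    static void
    record_catcache_inval(int cacheId, uint32 hashValue, Oid dbId)
    {
        ncalls++;
        printf("inval: cache %d hash %u db %u\n", cacheId, (unsigned) hashValue, dbId);
    }

    int
    main(void)
    {
        void (*function) (int, uint32, Oid) = record_catcache_inval;

        /* Mimic the two call sites in the hunks above: one inval for the old
         * tuple's hash value, one more only if the new tuple hashes differently. */
        uint32 hashvalue = 0x1234abcdU;
        uint32 newhashvalue = 0xfeed5678U;
        Oid dbid = 16384;       /* made-up database OID */

        (*function) (41, hashvalue, dbid);
        if (newhashvalue != hashvalue)
            (*function) (41, newhashvalue, dbid);

        printf("%d invalidation(s) recorded\n", ncalls);
        return 0;
    }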
