Commit 9ee1cf0

Fix TOAST access failure in RETURNING queries.
Discussion of commit 3e2f3c2 exposed a problem that is of longer standing: since we don't detoast data while sticking it into a portal's holdStore for PORTAL_ONE_RETURNING and PORTAL_UTIL_SELECT queries, and we release the query's snapshot as soon as we're done loading the holdStore, later readout of the holdStore can do TOAST fetches against data that can no longer be seen by any of the session's live snapshots. This means that a concurrent VACUUM could remove the TOAST data before we can fetch it. Commit 3e2f3c2 exposed the problem by showing that sometimes we had *no* live snapshots while fetching TOAST data, but we'd be at risk anyway.

I believe this code was all right when written, because our management of a session's exposed xmin was such that the TOAST references were safe until end of transaction. But that's no longer true now that we can advance or clear our PGXACT.xmin intra-transaction.

To fix, copy the query's snapshot during FillPortalStore() and save it in the Portal; release it only when the portal is dropped. This essentially implements a policy that we must hold a relevant snapshot whenever we access potentially-toasted data. We had already come to that conclusion in other places, cf commits 08e261c and ec543db.

I'd have liked to add a regression test case for this, but I didn't see a way to make one that's not unreasonably bloated; it seems to require returning a toasted value to the client, and those will be big.

In passing, improve PortalRunUtility() so that it positively verifies that its ending PopActiveSnapshot() call will pop the expected snapshot, removing a rather shaky assumption about which utility commands might do their own PopActiveSnapshot(). There's no known bug here, but now that we're actively referencing the snapshot it's almost free to make this code a bit more bulletproof.

We might want to consider back-patching something like this into older branches, but it would be prudent to let it prove itself more in HEAD beforehand.

Discussion: <87vazemeda.fsf@credativ.de>
1 parent 07a601e · commit 9ee1cf0
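Before the file-by-file diff, a condensed sketch of the pattern the patch implements may help. This is not the patch itself: the two helper functions are hypothetical names invented for illustration, and the sketch assumes CurrentResourceOwner is the portal's resowner at registration time, which is what ties the registration to the portal's lifetime. RegisterSnapshot() and UnregisterSnapshotFromOwner() are the actual snapmgr.h entry points the patch uses.

#include "postgres.h"
#include "utils/portal.h"
#include "utils/snapmgr.h"

/*
 * hold_snapshot_for_portal -- hypothetical condensation of the
 * FillPortalStore() side of the patch: keep a registered copy of the
 * query's snapshot so later TOAST fetches against holdStore tuples
 * stay safe from a concurrent VACUUM.
 */
static void
hold_snapshot_for_portal(Portal portal, Snapshot query_snapshot)
{
    /* Registration is attached to CurrentResourceOwner (the portal's). */
    portal->holdSnapshot = RegisterSnapshot(query_snapshot);
}

/*
 * release_portal_snapshot -- hypothetical condensation of the
 * PortalDrop() side: if the portal failed, its resowner (and with it
 * the registration) was already released during transaction abort, so
 * only unregister while the resowner still exists.
 */
static void
release_portal_snapshot(Portal portal)
{
    if (portal->holdSnapshot)
    {
        if (portal->resowner)
            UnregisterSnapshotFromOwner(portal->holdSnapshot,
                                        portal->resowner);
        portal->holdSnapshot = NULL;
    }
}

Routing the release through the resource owner, rather than unconditionally unregistering, is exactly the abort case called out in the portalmem.c comment further down.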

4 files changed: +80 −22 lines changed


src/backend/commands/portalcmds.c

Lines changed: 4 additions & 2 deletions

@@ -317,10 +317,11 @@ PersistHoldablePortal(Portal portal)
 	Assert(queryDesc != NULL);
 
 	/*
-	 * Caller must have created the tuplestore already.
+	 * Caller must have created the tuplestore already ... but not a snapshot.
 	 */
 	Assert(portal->holdContext != NULL);
 	Assert(portal->holdStore != NULL);
+	Assert(portal->holdSnapshot == NULL);
 
 	/*
 	 * Before closing down the executor, we must copy the tupdesc into
@@ -362,7 +363,8 @@ PersistHoldablePortal(Portal portal)
 
 	/*
 	 * Change the destination to output to the tuplestore.  Note we tell
-	 * the tuplestore receiver to detoast all data passed through it.
+	 * the tuplestore receiver to detoast all data passed through it; this
+	 * makes it safe to not keep a snapshot associated with the data.
 	 */
 	queryDesc->dest = CreateDestReceiver(DestTuplestore);
 	SetTuplestoreDestReceiverParams(queryDesc->dest,
src/backend/tcop/pquery.c

Lines changed: 50 additions & 20 deletions

@@ -43,9 +43,11 @@ static uint64 RunFromStore(Portal portal, ScanDirection direction, uint64 count,
 				DestReceiver *dest);
 static uint64 PortalRunSelect(Portal portal, bool forward, long count,
 				 DestReceiver *dest);
-static void PortalRunUtility(Portal portal, Node *utilityStmt, bool isTopLevel,
+static void PortalRunUtility(Portal portal, Node *utilityStmt,
+				 bool isTopLevel, bool setHoldSnapshot,
 				 DestReceiver *dest, char *completionTag);
-static void PortalRunMulti(Portal portal, bool isTopLevel,
+static void PortalRunMulti(Portal portal,
+			   bool isTopLevel, bool setHoldSnapshot,
 			   DestReceiver *dest, DestReceiver *altdest,
 			   char *completionTag);
 static uint64 DoPortalRunFetch(Portal portal,
@@ -810,7 +812,7 @@ PortalRun(Portal portal, long count, bool isTopLevel,
 			break;
 
 		case PORTAL_MULTI_QUERY:
-			PortalRunMulti(portal, isTopLevel,
+			PortalRunMulti(portal, isTopLevel, false,
 						   dest, altdest, completionTag);
 
 			/* Prevent portal's commands from being re-executed */
@@ -1039,15 +1041,16 @@ FillPortalStore(Portal portal, bool isTopLevel)
 			/*
 			 * Run the portal to completion just as for the default
 			 * MULTI_QUERY case, but send the primary query's output to the
-			 * tuplestore.  Auxiliary query outputs are discarded.
+			 * tuplestore.  Auxiliary query outputs are discarded.  Set the
+			 * portal's holdSnapshot to the snapshot used (or a copy of it).
 			 */
-			PortalRunMulti(portal, isTopLevel,
+			PortalRunMulti(portal, isTopLevel, true,
 						   treceiver, None_Receiver, completionTag);
 			break;
 
 		case PORTAL_UTIL_SELECT:
 			PortalRunUtility(portal, (Node *) linitial(portal->stmts),
-							 isTopLevel, treceiver, completionTag);
+							 isTopLevel, true, treceiver, completionTag);
 			break;
 
 		default:
@@ -1142,10 +1145,11 @@ RunFromStore(Portal portal, ScanDirection direction, uint64 count,
  *		Execute a utility statement inside a portal.
  */
 static void
-PortalRunUtility(Portal portal, Node *utilityStmt, bool isTopLevel,
+PortalRunUtility(Portal portal, Node *utilityStmt,
+				 bool isTopLevel, bool setHoldSnapshot,
 				 DestReceiver *dest, char *completionTag)
 {
-	bool		active_snapshot_set;
+	Snapshot	snapshot;
 
 	elog(DEBUG3, "ProcessUtility");
 
@@ -1172,11 +1176,19 @@ PortalRunUtility(Portal portal, Node *utilityStmt, bool isTopLevel,
 		  IsA(utilityStmt, UnlistenStmt) ||
 		  IsA(utilityStmt, CheckPointStmt)))
 	{
-		PushActiveSnapshot(GetTransactionSnapshot());
-		active_snapshot_set = true;
+		snapshot = GetTransactionSnapshot();
+		/* If told to, register the snapshot we're using and save in portal */
+		if (setHoldSnapshot)
+		{
+			snapshot = RegisterSnapshot(snapshot);
+			portal->holdSnapshot = snapshot;
+		}
+		PushActiveSnapshot(snapshot);
+		/* PushActiveSnapshot might have copied the snapshot */
+		snapshot = GetActiveSnapshot();
 	}
 	else
-		active_snapshot_set = false;
+		snapshot = NULL;
 
 	ProcessUtility(utilityStmt,
 				   portal->sourceText,
@@ -1190,12 +1202,11 @@ PortalRunUtility(Portal portal, Node *utilityStmt, bool isTopLevel,
 
 	/*
 	 * Some utility commands may pop the ActiveSnapshot stack from under us,
-	 * so we only pop the stack if we actually see a snapshot set.  Note that
-	 * the set of utility commands that do this must be the same set
-	 * disallowed to run inside a transaction; otherwise, we could be popping
-	 * a snapshot that belongs to some other operation.
+	 * so be careful to only pop the stack if our snapshot is still at the
+	 * top.
 	 */
-	if (active_snapshot_set && ActiveSnapshotSet())
+	if (snapshot != NULL && ActiveSnapshotSet() &&
+		snapshot == GetActiveSnapshot())
 		PopActiveSnapshot();
 }
 
@@ -1205,7 +1216,8 @@ PortalRunUtility(Portal portal, Node *utilityStmt, bool isTopLevel,
 *		or non-SELECT-like queries)
 */
 static void
-PortalRunMulti(Portal portal, bool isTopLevel,
+PortalRunMulti(Portal portal,
+			   bool isTopLevel, bool setHoldSnapshot,
 			   DestReceiver *dest, DestReceiver *altdest,
 			   char *completionTag)
 {
@@ -1261,7 +1273,25 @@ PortalRunMulti(Portal portal, bool isTopLevel,
 			 */
 			if (!active_snapshot_set)
 			{
-				PushActiveSnapshot(GetTransactionSnapshot());
+				Snapshot	snapshot = GetTransactionSnapshot();
+
+				/* If told to, register the snapshot and save in portal */
+				if (setHoldSnapshot)
+				{
+					snapshot = RegisterSnapshot(snapshot);
+					portal->holdSnapshot = snapshot;
+				}
+
+				/*
+				 * We can't have the holdSnapshot also be the active one,
+				 * because UpdateActiveSnapshotCommandId would complain.  So
+				 * force an extra snapshot copy.  Plain PushActiveSnapshot
+				 * would have copied the transaction snapshot anyway, so this
+				 * only adds a copy step when setHoldSnapshot is true.  (It's
+				 * okay for the command ID of the active snapshot to diverge
+				 * from what holdSnapshot has.)
+				 */
+				PushCopiedSnapshot(snapshot);
 				active_snapshot_set = true;
 			}
 			else
@@ -1309,14 +1339,14 @@ PortalRunMulti(Portal portal, bool isTopLevel,
 			{
 				Assert(!active_snapshot_set);
 				/* statement can set tag string */
-				PortalRunUtility(portal, stmt, isTopLevel,
+				PortalRunUtility(portal, stmt, isTopLevel, false,
 								 dest, completionTag);
 			}
 			else
 			{
 				Assert(IsA(stmt, NotifyStmt));
 				/* stmt added by rewrite cannot set tag */
-				PortalRunUtility(portal, stmt, isTopLevel,
+				PortalRunUtility(portal, stmt, isTopLevel, false,
 								 altdest, NULL);
 			}
 		}

src/backend/utils/mmgr/portalmem.c

Lines changed: 16 additions & 0 deletions

@@ -24,6 +24,7 @@
 #include "miscadmin.h"
 #include "utils/builtins.h"
 #include "utils/memutils.h"
+#include "utils/snapmgr.h"
 #include "utils/timestamp.h"
 
 /*
@@ -351,6 +352,7 @@ PortalCreateHoldStore(Portal portal)
 
 	Assert(portal->holdContext == NULL);
 	Assert(portal->holdStore == NULL);
+	Assert(portal->holdSnapshot == NULL);
 
 	/*
 	 * Create the memory context that is used for storage of the tuple set.
@@ -526,6 +528,20 @@ PortalDrop(Portal portal, bool isTopCommit)
 	/* drop cached plan reference, if any */
 	PortalReleaseCachedPlan(portal);
 
+	/*
+	 * If portal has a snapshot protecting its data, release that.  This
+	 * needs a little care since the registration will be attached to the
+	 * portal's resowner; if the portal failed, we will already have released
+	 * the resowner (and the snapshot) during transaction abort.
+	 */
+	if (portal->holdSnapshot)
+	{
+		if (portal->resowner)
+			UnregisterSnapshotFromOwner(portal->holdSnapshot,
+										portal->resowner);
+		portal->holdSnapshot = NULL;
+	}
+
 	/*
 	 * Release any resources still attached to the portal.  There are several
 	 * cases being covered here:

src/include/utils/portal.h

Lines changed: 10 additions & 0 deletions

@@ -162,6 +162,16 @@ typedef struct PortalData
 	Tuplestorestate *holdStore; /* store for holdable cursors */
 	MemoryContext holdContext;	/* memory containing holdStore */
 
+	/*
+	 * Snapshot under which tuples in the holdStore were read.  We must keep
+	 * a reference to this snapshot if there is any possibility that the
+	 * tuples contain TOAST references, because releasing the snapshot could
+	 * allow recently-dead rows to be vacuumed away, along with any toast
+	 * data belonging to them.  In the case of a held cursor, we avoid
+	 * needing to keep such a snapshot by forcibly detoasting the data.
+	 */
+	Snapshot	holdSnapshot;	/* registered snapshot, or NULL if none */
+
 	/*
 	 * atStart, atEnd and portalPos indicate the current cursor position.
 	 * portalPos is zero before the first row, N after fetching N'th row of
