Commit c477f3e

Allow repalloc() to give back space when a large chunk is downsized.
Up to now, if you resized a large (>8K) palloc chunk down to a smaller size, aset.c made no attempt to return any space to the malloc pool. That's unpleasant if a really large allocation is resized to a significantly smaller size. I think no such cases existed when this code was designed, and I'm not sure whether they're common even yet, but an upcoming fix to encoding conversion will certainly create such cases. Therefore, fix AllocSetRealloc so that it gives realloc() a chance to do something with the block. This doesn't noticeably increase complexity, we mostly just have to change the order in which the cases are considered.

Back-patch to all supported branches.

Discussion: https://postgr.es/m/20190816181418.GA898@alvherre.pgsql
Discussion: https://postgr.es/m/3614.1569359690@sss.pgh.pa.us
1 parent b7a1c55  commit c477f3e
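For context, the reordering described above can be illustrated in isolation. The following is a minimal standalone C sketch, not the aset.c code itself: CHUNK_LIMIT, resize_before() and resize_after() are made-up names standing in for set->allocChunkLimit and the pre/post-patch case order. The point it mirrors is that once the oversized-block test is checked first, a shrink request on a large chunk still reaches realloc() and can return space to the malloc pool, instead of being short-circuited by the "old area is already big enough" case.

#include <stdio.h>
#include <stdlib.h>

#define CHUNK_LIMIT 8192		/* hypothetical stand-in for set->allocChunkLimit */

/* Pre-patch ordering: a shrink of an oversized chunk never reaches realloc(). */
static void *
resize_before(void *p, size_t oldsize, size_t newsize)
{
	if (oldsize >= newsize)
		return p;					/* keep the whole oversized block */
	if (oldsize > CHUNK_LIMIT)
		return realloc(p, newsize);	/* only reached when growing */
	return realloc(p, newsize);		/* small-chunk path, simplified */
}

/* Post-patch ordering: oversized chunks always go through realloc(). */
static void *
resize_after(void *p, size_t oldsize, size_t newsize)
{
	if (oldsize > CHUNK_LIMIT)
	{
		/* keep the block above CHUNK_LIMIT so later calls still see it as oversized */
		size_t		chksize = (newsize > CHUNK_LIMIT + 1) ? newsize : CHUNK_LIMIT + 1;

		return realloc(p, chksize);	/* may shrink, giving space back */
	}
	else if (oldsize >= newsize)
		return p;					/* small chunk already big enough */
	return realloc(p, newsize);		/* small-chunk path, simplified */
}

int
main(void)
{
	size_t		big = 1024 * 1024;	/* a 1 MB "palloc chunk" */
	void	   *a = resize_before(malloc(big), big, 100);	/* still occupies ~1 MB */
	void	   *b = resize_after(malloc(big), big, 100);	/* shrunk via realloc() */

	printf("before: block kept at full size; after: realloc() allowed to shrink it\n");
	free(a);
	free(b);
	return 0;
}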

File tree

1 file changed: +77 −65 lines changed


src/backend/utils/mmgr/aset.c

Lines changed: 77 additions & 65 deletions
@@ -1110,62 +1110,12 @@ AllocSetRealloc(MemoryContext context, void *pointer, Size size)
 			 set->header.name, chunk);
 #endif
 
-	/*
-	 * Chunk sizes are aligned to power of 2 in AllocSetAlloc(). Maybe the
-	 * allocated area already is >= the new size.  (In particular, we always
-	 * fall out here if the requested size is a decrease.)
-	 */
-	if (oldsize >= size)
-	{
-#ifdef MEMORY_CONTEXT_CHECKING
-		Size		oldrequest = chunk->requested_size;
-
-#ifdef RANDOMIZE_ALLOCATED_MEMORY
-		/* We can only fill the extra space if we know the prior request */
-		if (size > oldrequest)
-			randomize_mem((char *) pointer + oldrequest,
-						  size - oldrequest);
-#endif
-
-		chunk->requested_size = size;
-
-		/*
-		 * If this is an increase, mark any newly-available part UNDEFINED.
-		 * Otherwise, mark the obsolete part NOACCESS.
-		 */
-		if (size > oldrequest)
-			VALGRIND_MAKE_MEM_UNDEFINED((char *) pointer + oldrequest,
-										size - oldrequest);
-		else
-			VALGRIND_MAKE_MEM_NOACCESS((char *) pointer + size,
-									   oldsize - size);
-
-		/* set mark to catch clobber of "unused" space */
-		if (size < oldsize)
-			set_sentinel(pointer, size);
-#else							/* !MEMORY_CONTEXT_CHECKING */
-
-		/*
-		 * We don't have the information to determine whether we're growing
-		 * the old request or shrinking it, so we conservatively mark the
-		 * entire new allocation DEFINED.
-		 */
-		VALGRIND_MAKE_MEM_NOACCESS(pointer, oldsize);
-		VALGRIND_MAKE_MEM_DEFINED(pointer, size);
-#endif
-
-		/* Disallow external access to private part of chunk header. */
-		VALGRIND_MAKE_MEM_NOACCESS(chunk, ALLOCCHUNK_PRIVATE_LEN);
-
-		return pointer;
-	}
-
 	if (oldsize > set->allocChunkLimit)
 	{
 		/*
 		 * The chunk must have been allocated as a single-chunk block.  Use
-		 * realloc() to make the containing block bigger with minimum space
-		 * wastage.
+		 * realloc() to make the containing block bigger, or smaller, with
+		 * minimum space wastage.
 		 */
 		AllocBlock	block = (AllocBlock) (((char *) chunk) - ALLOC_BLOCKHDRSZ);
 		Size		chksize;
@@ -1180,11 +1130,19 @@ AllocSetRealloc(MemoryContext context, void *pointer, Size size)
 		if (block->aset != set ||
 			block->freeptr != block->endptr ||
 			block->freeptr != ((char *) block) +
-			(chunk->size + ALLOC_BLOCKHDRSZ + ALLOC_CHUNKHDRSZ))
+			(oldsize + ALLOC_BLOCKHDRSZ + ALLOC_CHUNKHDRSZ))
 			elog(ERROR, "could not find block containing chunk %p", chunk);
 
+		/*
+		 * Even if the new request is less than set->allocChunkLimit, we stick
+		 * with the single-chunk block approach.  Therefore we need
+		 * chunk->size to be bigger than set->allocChunkLimit, so we don't get
+		 * confused about the chunk's status in future calls.
+		 */
+		chksize = Max(size, set->allocChunkLimit + 1);
+		chksize = MAXALIGN(chksize);
+
 		/* Do the realloc */
-		chksize = MAXALIGN(size);
 		blksize = chksize + ALLOC_BLOCKHDRSZ + ALLOC_CHUNKHDRSZ;
 		oldblksize = block->endptr - ((char *) block);
 
@@ -1214,17 +1172,21 @@ AllocSetRealloc(MemoryContext context, void *pointer, Size size)
 #ifdef MEMORY_CONTEXT_CHECKING
 #ifdef RANDOMIZE_ALLOCATED_MEMORY
 		/* We can only fill the extra space if we know the prior request */
-		randomize_mem((char *) pointer + chunk->requested_size,
-					  size - chunk->requested_size);
+		if (size > chunk->requested_size)
+			randomize_mem((char *) pointer + chunk->requested_size,
+						  size - chunk->requested_size);
 #endif
 
 		/*
-		 * realloc() (or randomize_mem()) will have left the newly-allocated
+		 * realloc() (or randomize_mem()) will have left any newly-allocated
 		 * part UNDEFINED, but we may need to adjust trailing bytes from the
 		 * old allocation.
 		 */
-		VALGRIND_MAKE_MEM_UNDEFINED((char *) pointer + chunk->requested_size,
-									oldsize - chunk->requested_size);
+#ifdef USE_VALGRIND
+		if (oldsize > chunk->requested_size)
+			VALGRIND_MAKE_MEM_UNDEFINED((char *) pointer + chunk->requested_size,
+										oldsize - chunk->requested_size);
+#endif
 
 		chunk->requested_size = size;
 
@@ -1249,15 +1211,65 @@ AllocSetRealloc(MemoryContext context, void *pointer, Size size)
 
 		return pointer;
 	}
+
+	/*
+	 * Chunk sizes are aligned to power of 2 in AllocSetAlloc(). Maybe the
+	 * allocated area already is >= the new size.  (In particular, we will
+	 * fall out here if the requested size is a decrease.)
+	 */
+	else if (oldsize >= size)
+	{
+#ifdef MEMORY_CONTEXT_CHECKING
+		Size		oldrequest = chunk->requested_size;
+
+#ifdef RANDOMIZE_ALLOCATED_MEMORY
+		/* We can only fill the extra space if we know the prior request */
+		if (size > oldrequest)
+			randomize_mem((char *) pointer + oldrequest,
+						  size - oldrequest);
+#endif
+
+		chunk->requested_size = size;
+
+		/*
+		 * If this is an increase, mark any newly-available part UNDEFINED.
+		 * Otherwise, mark the obsolete part NOACCESS.
+		 */
+		if (size > oldrequest)
+			VALGRIND_MAKE_MEM_UNDEFINED((char *) pointer + oldrequest,
+										size - oldrequest);
+		else
+			VALGRIND_MAKE_MEM_NOACCESS((char *) pointer + size,
+									   oldsize - size);
+
+		/* set mark to catch clobber of "unused" space */
+		if (size < oldsize)
+			set_sentinel(pointer, size);
+#else							/* !MEMORY_CONTEXT_CHECKING */
+
+		/*
+		 * We don't have the information to determine whether we're growing
+		 * the old request or shrinking it, so we conservatively mark the
+		 * entire new allocation DEFINED.
+		 */
+		VALGRIND_MAKE_MEM_NOACCESS(pointer, oldsize);
+		VALGRIND_MAKE_MEM_DEFINED(pointer, size);
+#endif
+
+		/* Disallow external access to private part of chunk header. */
+		VALGRIND_MAKE_MEM_NOACCESS(chunk, ALLOCCHUNK_PRIVATE_LEN);
+
+		return pointer;
+	}
 	else
 	{
 		/*
-		 * Small-chunk case.  We just do this by brute force, ie, allocate a
-		 * new chunk and copy the data.  Since we know the existing data isn't
-		 * huge, this won't involve any great memcpy expense, so it's not
-		 * worth being smarter.  (At one time we tried to avoid memcpy when it
-		 * was possible to enlarge the chunk in-place, but that turns out to
-		 * misbehave unpleasantly for repeated cycles of
+		 * Enlarge-a-small-chunk case.  We just do this by brute force, ie,
+		 * allocate a new chunk and copy the data.  Since we know the existing
+		 * data isn't huge, this won't involve any great memcpy expense, so
+		 * it's not worth being smarter.  (At one time we tried to avoid
+		 * memcpy when it was possible to enlarge the chunk in-place, but that
+		 * turns out to misbehave unpleasantly for repeated cycles of
 		 * palloc/repalloc/pfree: the eventually freed chunks go into the
 		 * wrong freelist for the next initial palloc request, and so we leak
 		 * memory indefinitely.  See pgsql-hackers archives for 2007-08-11.)

0 commit comments

