Commit c00fbe8
intarray: Prevent out-of-bound memory reads with gist__int_ops
As gist__int_ops stands in intarray, it is possible to store GiST entries for leaf pages that can cause corruption when decompressed. Leaf nodes are always stored decompressed by the compression method, and the decompression method should match that, retrieving the contents of the page without doing any decompression. However, the code allowed the insertion of leaf page data with a higher number of array items than can be supported (199 for an 8kB page, for reference), generating only a NOTICE message about it. When the decompression method was called, a decompression would be attempted on this leaf node item even though the contents should be retrieved as they are.

The NOTICE message generated when compressing a leaf page whose input array has too many elements for gist__int_ops was introduced by 08ee64e, which removed the marker stored in the array to track whether an entry is actually a leaf node. However, it also missed the fact that the decompression path should do nothing for a leaf page. Hence, as the code stands, a too-large array would be stored uncompressed, but the decompression path would attempt a decompression rather than retrieving the contents as they are.

This leads to various problems. First, even though 08ee64e tried to address this, it is possible to do out-of-bound chunk writes with a large input array, with the backend reporting that with WARNINGs. On decompression, retrieving the stored leaf data leads to incorrect memory reads, causing crashes or worse.

Perhaps somebody will want to expand the number of array items that can be handled in a leaf page for this operator class in the future, which would require revisiting the choice made in 08ee64e, but based on the lack of reports about this problem since 2005 that seems unlikely. For now, this commit prevents the insertion of data for leaf pages that use more array items than the code can handle on decompression, switching the NOTICE message to an ERROR. If one wishes to use more array items, gist__intbig_ops is an alternative choice.

While on it, use ERRCODE_PROGRAM_LIMIT_EXCEEDED as the error code when a limit is reached, because that is what the module is facing in such cases.

Author: Ankit Kumar Pandey, Alexander Lakhin
Reviewed-by: Richard Guo, Michael Paquier
Discussion: https://postgr.es/m/796b65c3-57b7-bddf-b0d5-a8afafb8b627@gmail.com
Discussion: https://postgr.es/m/17888-f72930e6b5ce8c14@postgresql.org
Backpatch-through: 11
1 parent b5c5173 commit c00fbe8
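For illustration, here is a minimal SQL sketch of the behavior this commit changes, using hypothetical table and index names (gist_demo, gist_demo_idx): with gist__int_ops, an input array holding more than 2 * numranges - 1 items (199 with the defaults on an 8kB page) used to be stored anyway after a NOTICE; it is now rejected with an ERROR.

-- Hypothetical reproduction, not part of the commit itself.
CREATE EXTENSION IF NOT EXISTS intarray;
CREATE TABLE gist_demo (a int[]);
CREATE INDEX gist_demo_idx ON gist_demo USING gist (a gist__int_ops);

-- 150 items: within the 199-item ceiling, the insert succeeds.
INSERT INTO gist_demo SELECT array(SELECT x FROM generate_series(1, 150) x);

-- 1001 items: exceeds the ceiling and is now rejected at insert time.
INSERT INTO gist_demo SELECT array(SELECT x FROM generate_series(1, 1001) x);
-- ERROR:  input array is too big (199 maximum allowed, 1001 current),
--         use gist__intbig_ops opclass instead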

File tree

3 files changed: +12 -4 lines changed


contrib/intarray/_int_gist.c

Lines changed: 8 additions & 4 deletions
@@ -181,8 +181,10 @@ g_int_compress(PG_FUNCTION_ARGS)
         PREPAREARR(r);
 
         if (ARRNELEMS(r) >= 2 * num_ranges)
-            elog(NOTICE, "input array is too big (%d maximum allowed, %d current), use gist__intbig_ops opclass instead",
-                 2 * num_ranges - 1, ARRNELEMS(r));
+            ereport(ERROR,
+                    (errcode(ERRCODE_PROGRAM_LIMIT_EXCEEDED),
+                     errmsg("input array is too big (%d maximum allowed, %d current), use gist__intbig_ops opclass instead",
+                            2 * num_ranges - 1, ARRNELEMS(r))));
 
         retval = palloc(sizeof(GISTENTRY));
         gistentryinit(*retval, PointerGetDatum(r),
@@ -270,7 +272,8 @@ g_int_compress(PG_FUNCTION_ARGS)
         lenr = internal_size(dr, len);
         if (lenr < 0 || lenr > MAXNUMELTS)
             ereport(ERROR,
-                    (errmsg("data is too sparse, recreate index using gist__intbig_ops opclass instead")));
+                    (errcode(ERRCODE_PROGRAM_LIMIT_EXCEEDED),
+                     errmsg("data is too sparse, recreate index using gist__intbig_ops opclass instead")));
 
         r = resize_intArrayType(r, len);
         retval = palloc(sizeof(GISTENTRY));
@@ -332,7 +335,8 @@ g_int_decompress(PG_FUNCTION_ARGS)
     lenr = internal_size(din, lenin);
     if (lenr < 0 || lenr > MAXNUMELTS)
         ereport(ERROR,
-                (errmsg("compressed array is too big, recreate index using gist__intbig_ops opclass instead")));
+                (errcode(ERRCODE_PROGRAM_LIMIT_EXCEEDED),
+                 errmsg("compressed array is too big, recreate index using gist__intbig_ops opclass instead")));
 
     r = new_intArrayType(lenr);
     dr = ARRPTR(r);
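Since the module now reports ERRCODE_PROGRAM_LIMIT_EXCEEDED (SQLSTATE 54000) rather than a bare errmsg, callers can trap the failure by condition name instead of matching on the message text. A hedged sketch, reusing the hypothetical gist_demo table from the example above:

-- Illustrative only: catch the new error code in PL/pgSQL.
DO $$
BEGIN
    INSERT INTO gist_demo
        SELECT array(SELECT x FROM generate_series(1, 1001) x);
EXCEPTION
    WHEN program_limit_exceeded THEN
        RAISE NOTICE 'oversized array rejected by gist__int_ops: %', SQLERRM;
END;
$$;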

contrib/intarray/expected/_int.out

Lines changed: 2 additions & 0 deletions
@@ -566,6 +566,8 @@ SELECT count(*) from test__int WHERE a @@ '!20 & !21';
   6343
 (1 row)
 
+INSERT INTO test__int SELECT array(SELECT x FROM generate_series(1, 1001) x); -- should fail
+ERROR:  input array is too big (199 maximum allowed, 1001 current), use gist__intbig_ops opclass instead
 DROP INDEX text_idx;
 CREATE INDEX text_idx on test__int using gist (a gist__int_ops(numranges = 0));
 ERROR:  value 0 out of bounds for option "numranges"

contrib/intarray/sql/_int.sql

Lines changed: 2 additions & 0 deletions
@@ -125,6 +125,8 @@ SELECT count(*) from test__int WHERE a @@ '(20&23)|(50&68)';
 SELECT count(*) from test__int WHERE a @@ '20 | !21';
 SELECT count(*) from test__int WHERE a @@ '!20 & !21';
 
+INSERT INTO test__int SELECT array(SELECT x FROM generate_series(1, 1001) x); -- should fail
+
 DROP INDEX text_idx;
 CREATE INDEX text_idx on test__int using gist (a gist__int_ops(numranges = 0));
 CREATE INDEX text_idx on test__int using gist (a gist__int_ops(numranges = 253));
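As the error message suggests, the gist__intbig_ops opclass remains available for larger arrays; it summarizes an array as a fixed-length signature rather than a list of ranges, so it is not subject to this per-item ceiling. A possible workaround, again sketched with the hypothetical gist_demo objects:

-- Illustrative workaround: switch the index to gist__intbig_ops.
DROP INDEX IF EXISTS gist_demo_idx;
CREATE INDEX gist_demo_idx ON gist_demo USING gist (a gist__intbig_ops);

-- The 1001-item array that gist__int_ops rejects is now accepted.
INSERT INTO gist_demo SELECT array(SELECT x FROM generate_series(1, 1001) x);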

