Commit 019a40d

intarray: Prevent out-of-bound memory reads with gist__int_ops

As gist__int_ops stands in intarray, it is possible to store GiST entries for leaf pages that can cause corruptions when decompressed. Leaf nodes are stored as decompressed all the time by the compression method, and the decompression method should match that, retrieving the contents of the page without doing any decompression. However, the code authorized the insertion of leaf page data with a higher number of array items than what can be supported, generating a NOTICE message to inform about this matter (199 for an 8k page, for reference). When calling the decompression method, a decompression would be attempted on this leaf node item, but the contents should be retrieved as they are.

The NOTICE message generated when compressing a leaf page whose input array has too many elements for gist__int_ops was introduced by 08ee64e, which removed the marker stored in the array to track whether it is actually a leaf node. However, that commit also missed the fact that the decompression path should do nothing for a leaf page. Hence, as the code stands, a too-large array would be stored uncompressed, but the decompression path would attempt a decompression rather than retrieving the contents as they are.

This leads to various problems. First, even though 08ee64e tried to address that, it is possible to do out-of-bound chunk writes with a large input array, with the backend reporting this with WARNINGs. On decompression, retrieving the stored leaf data would lead to incorrect memory reads, resulting in crashes or worse.

Perhaps somebody will be interested in expanding the number of array items that can be handled in a leaf page for this operator class in the future, which would require revisiting the choice made in 08ee64e, but based on the lack of reports about this problem since 2005 that seems unlikely. For now, this commit prevents the insertion of data for leaf pages that use more array items than the code can handle on decompression, switching the NOTICE message to an ERROR. If one wishes to use more array items, gist__intbig_ops is an alternative choice.

While on it, use ERRCODE_PROGRAM_LIMIT_EXCEEDED as the error code when a limit is reached, because that is what the module is facing in such cases.

Author: Ankit Kumar Pandey, Alexander Lakhin
Reviewed-by: Richard Guo, Michael Paquier
Discussion: https://postgr.es/m/796b65c3-57b7-bddf-b0d5-a8afafb8b627@gmail.com
Discussion: https://postgr.es/m/17888-f72930e6b5ce8c14@postgresql.org
Backpatch-through: 11

1 parent d1423c5, commit 019a40d
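
The behavior change is easy to see outside the regression test as well. Below is a minimal sketch, assuming an illustrative table and index named test_gist and test_gist_idx (not part of the commit) indexed with gist__int_ops at its default settings, where at most 199 array items fit on an 8k leaf page:

    CREATE EXTENSION IF NOT EXISTS intarray;

    CREATE TABLE test_gist (a int[]);
    CREATE INDEX test_gist_idx ON test_gist USING gist (a gist__int_ops);

    -- Previously this raised only a NOTICE, stored the oversized leaf entry,
    -- and set up out-of-bound reads at decompression time; it now fails cleanly.
    INSERT INTO test_gist SELECT array(SELECT x FROM generate_series(1, 1001) x);
    -- ERROR:  input array is too big (199 maximum allowed, 1001 current),
    --         use gist__intbig_ops opclass instead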

File tree

3 files changed: 12 additions & 4 deletions

contrib/intarray/_int_gist.c: 8 additions & 4 deletions

@@ -180,8 +180,10 @@ g_int_compress(PG_FUNCTION_ARGS)
 		PREPAREARR(r);
 
 		if (ARRNELEMS(r) >= 2 * num_ranges)
-			elog(NOTICE, "input array is too big (%d maximum allowed, %d current), use gist__intbig_ops opclass instead",
-				 2 * num_ranges - 1, ARRNELEMS(r));
+			ereport(ERROR,
+					(errcode(ERRCODE_PROGRAM_LIMIT_EXCEEDED),
+					 errmsg("input array is too big (%d maximum allowed, %d current), use gist__intbig_ops opclass instead",
+							2 * num_ranges - 1, ARRNELEMS(r))));
 
 		retval = palloc(sizeof(GISTENTRY));
 		gistentryinit(*retval, PointerGetDatum(r),
@@ -269,7 +271,8 @@ g_int_compress(PG_FUNCTION_ARGS)
 		lenr = internal_size(dr, len);
 		if (lenr < 0 || lenr > MAXNUMELTS)
 			ereport(ERROR,
-					(errmsg("data is too sparse, recreate index using gist__intbig_ops opclass instead")));
+					(errcode(ERRCODE_PROGRAM_LIMIT_EXCEEDED),
+					 errmsg("data is too sparse, recreate index using gist__intbig_ops opclass instead")));
 
 		r = resize_intArrayType(r, len);
 		retval = palloc(sizeof(GISTENTRY));
@@ -331,7 +334,8 @@ g_int_decompress(PG_FUNCTION_ARGS)
 	lenr = internal_size(din, lenin);
 	if (lenr < 0 || lenr > MAXNUMELTS)
 		ereport(ERROR,
-				(errmsg("compressed array is too big, recreate index using gist__intbig_ops opclass instead")));
+				(errcode(ERRCODE_PROGRAM_LIMIT_EXCEEDED),
+				 errmsg("compressed array is too big, recreate index using gist__intbig_ops opclass instead")));
 
 	r = new_intArrayType(lenr);
 	dr = ARRPTR(r);

contrib/intarray/expected/_int.out: 2 additions & 0 deletions

@@ -547,6 +547,8 @@ SELECT count(*) from test__int WHERE a @@ '!20 & !21';
  6343
 (1 row)
 
+INSERT INTO test__int SELECT array(SELECT x FROM generate_series(1, 1001) x); -- should fail
+ERROR:  input array is too big (199 maximum allowed, 1001 current), use gist__intbig_ops opclass instead
 DROP INDEX text_idx;
 CREATE INDEX text_idx on test__int using gist (a gist__int_ops(numranges = 0));
 ERROR:  value 0 out of bounds for option "numranges"

contrib/intarray/sql/_int.sql: 2 additions & 0 deletions

@@ -110,6 +110,8 @@ SELECT count(*) from test__int WHERE a @@ '(20&23)|(50&68)';
 SELECT count(*) from test__int WHERE a @@ '20 | !21';
 SELECT count(*) from test__int WHERE a @@ '!20 & !21';
 
+INSERT INTO test__int SELECT array(SELECT x FROM generate_series(1, 1001) x); -- should fail
+
 DROP INDEX text_idx;
 CREATE INDEX text_idx on test__int using gist (a gist__int_ops(numranges = 0));
 CREATE INDEX text_idx on test__int using gist (a gist__int_ops(numranges = 253));
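
As the commit message notes, gist__intbig_ops is the way out when larger arrays must be indexed. A sketch of that switch, again using the illustrative test_gist objects rather than the regression test's test__int table:

    -- Replace the range-based opclass with the signature-based one,
    -- which does not impose the same per-leaf item limit.
    DROP INDEX test_gist_idx;
    CREATE INDEX test_gist_idx ON test_gist USING gist (a gist__intbig_ops);

    -- The same 1001-element array is now accepted.
    INSERT INTO test_gist SELECT array(SELECT x FROM generate_series(1, 1001) x);

The trade-off is that gist__intbig_ops is lossy, so index scans recheck matching heap tuples, in exchange for accepting larger input arrays.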
