Commit fd726b8
test_json_parser: Speed up 002_inline.pl

Some macOS machines are having trouble with 002_inline, which executes
the JSON parser test executables hundreds of times in a nested loop.
Both developer machines and buildfarm critters have shown excessive test
durations, upwards of 20 seconds.

Push the innermost loop of 002_inline, which iterates through differing
chunk sizes, down into the test executable. (I'd eventually like to push
all of the JSON unit tests down into C, but this is an easy win in the
short term.) Testers have reported a speedup between 4-9x.

Reported-by: Robert Haas <robertmhaas@gmail.com>
Suggested-by: Andres Freund <andres@anarazel.de>
Tested-by: Andrew Dunstan <andrew@dunslane.net>
Tested-by: Tom Lane <tgl@sss.pgh.pa.us>
Tested-by: Robert Haas <robertmhaas@gmail.com>
Discussion: https://postgr.es/m/CA%2BTgmobKoG%2BgKzH9qB7uE4MFo-z1hn7UngqAe9b0UqNbn3_XGQ%40mail.gmail.com
Backpatch-through: 17

1 parent 3e908fb, commit fd726b8

File tree

3 files changed: +106 -57 lines changed

src/test/modules/test_json_parser/README

Lines changed: 6 additions & 4 deletions

@@ -6,10 +6,12 @@ This module contains two programs for testing the json parsers.
 - `test_json_parser_incremental` is for testing the incremental parser, It
   reads in a file and passes it in very small chunks (default is 60 bytes at a
   time) to the incremental parser. It's not meant to be a speed test but to
-  test the accuracy of the incremental parser. There are two option arguments,
-  "-c nn" specifies an alternative chunk size, and "-s" specifies using
-  semantic routines. The semantic routines re-output the json, although not in
-  a very pretty form. The required non-option argument is the input file name.
+  test the accuracy of the incremental parser. The option "-c nn" specifies an
+  alternative chunk size, "-r nn" runs a range of chunk sizes down to one byte
+  on the same input (with output separated by null bytes), and "-s" specifies
+  using semantic routines. The semantic routines re-output the json, although
+  not in a very pretty form. The required non-option argument is the input file
+  name.
 - `test_json_parser_perf` is for speed testing both the standard
   recursive descent parser and the non-recursive incremental
   parser. If given the `-i` flag it uses the non-recursive parser,

src/test/modules/test_json_parser/t/002_inline.pl

Lines changed: 20 additions & 6 deletions

@@ -33,23 +33,37 @@ sub test
 	print $fh "$json";
 	close($fh);

+	# The -r mode runs the parser in a loop, with output separated by nulls.
+	# Unpack that as a list of null-terminated ASCII strings (Z*) and check that
+	# each run produces the same result.
+	my ($all_stdout, $all_stderr) =
+	  run_command([ @exe, "-r", $chunk, $fname ]);
+
+	my @stdout = unpack("(Z*)*", $all_stdout);
+	my @stderr = unpack("(Z*)*", $all_stderr);
+
+	is(scalar @stdout, $chunk, "$name: stdout has correct number of entries");
+	is(scalar @stderr, $chunk, "$name: stderr has correct number of entries");
+
+	my $i = 0;
+
 	foreach my $size (reverse(1 .. $chunk))
 	{
-		my ($stdout, $stderr) = run_command([ @exe, "-c", $size, $fname ]);
-
 		if (defined($params{error}))
 		{
-			unlike($stdout, qr/SUCCESS/,
+			unlike($stdout[$i], qr/SUCCESS/,
 				"$name, chunk size $size: test fails");
-			like($stderr, $params{error},
+			like($stderr[$i], $params{error},
 				"$name, chunk size $size: correct error output");
 		}
 		else
 		{
-			like($stdout, qr/SUCCESS/,
+			like($stdout[$i], qr/SUCCESS/,
 				"$name, chunk size $size: test succeeds");
-			is($stderr, "", "$name, chunk size $size: no error output");
+			is($stderr[$i], "", "$name, chunk size $size: no error output");
 		}
+
+		$i++;
 	}
 }
src/test/modules/test_json_parser/test_json_parser_incremental.c

Lines changed: 80 additions & 47 deletions

@@ -12,9 +12,14 @@
  * the parser in very small chunks. In practice you would normally use
  * much larger chunks, but doing this makes it more likely that the
  * full range of increment handling, especially in the lexer, is exercised.
+ *
  * If the "-c SIZE" option is provided, that chunk size is used instead
  * of the default of 60.
  *
+ * If the "-r SIZE" option is provided, a range of chunk sizes from SIZE down to
+ * 1 are run sequentially. A null byte is printed to the streams after each
+ * iteration.
+ *
  * If the -s flag is given, the program does semantic processing. This should
  * just mirror back the json, albeit with white space changes.
  *
@@ -88,8 +93,8 @@ main(int argc, char **argv)
 	StringInfoData json;
 	int			n_read;
 	size_t		chunk_size = DEFAULT_CHUNK_SIZE;
+	bool		run_chunk_ranges = false;
 	struct stat statbuf;
-	off_t		bytes_left;
 	const JsonSemAction *testsem = &nullSemAction;
 	char	   *testfile;
 	int			c;
@@ -102,11 +107,14 @@ main(int argc, char **argv)
 	if (!lex)
 		pg_fatal("out of memory");

-	while ((c = getopt(argc, argv, "c:os")) != -1)
+	while ((c = getopt(argc, argv, "r:c:os")) != -1)
 	{
 		switch (c)
 		{
-			case 'c':			/* chunksize */
+			case 'r':			/* chunk range */
+				run_chunk_ranges = true;
+				/* fall through */
+			case 'c':			/* chunk size */
 				chunk_size = strtou64(optarg, NULL, 10);
 				if (chunk_size > BUFSIZE)
 					pg_fatal("chunk size cannot exceed %d", BUFSIZE);
@@ -135,8 +143,6 @@ main(int argc, char **argv)
 		exit(1);
 	}

-	makeJsonLexContextIncremental(lex, PG_UTF8, need_strings);
-	setJsonLexContextOwnsTokens(lex, lex_owns_tokens);
 	initStringInfo(&json);

 	if ((json_file = fopen(testfile, PG_BINARY_R)) == NULL)
@@ -145,61 +151,88 @@ main(int argc, char **argv)
 	if (fstat(fileno(json_file), &statbuf) != 0)
 		pg_fatal("error statting input: %m");

-	bytes_left = statbuf.st_size;
-
-	for (;;)
+	do
 	{
-		/* We will break when there's nothing left to read */
-
-		if (bytes_left < chunk_size)
-			chunk_size = bytes_left;
+		/*
+		 * This outer loop only repeats in -r mode. Reset the parse state and
+		 * our position in the input file for the inner loop, which performs
+		 * the incremental parsing.
+		 */
+		off_t		bytes_left = statbuf.st_size;
+		size_t		to_read = chunk_size;

-		n_read = fread(buff, 1, chunk_size, json_file);
-		if (n_read < chunk_size)
-			pg_fatal("error reading input file: %d", ferror(json_file));
+		makeJsonLexContextIncremental(lex, PG_UTF8, need_strings);
+		setJsonLexContextOwnsTokens(lex, lex_owns_tokens);

-		appendBinaryStringInfo(&json, buff, n_read);
+		rewind(json_file);
+		resetStringInfo(&json);

-		/*
-		 * Append some trailing junk to the buffer passed to the parser. This
-		 * helps us ensure that the parser does the right thing even if the
-		 * chunk isn't terminated with a '\0'.
-		 */
-		appendStringInfoString(&json, "1+23 trailing junk");
-		bytes_left -= n_read;
-		if (bytes_left > 0)
+		for (;;)
 		{
-			result = pg_parse_json_incremental(lex, testsem,
-											   json.data, n_read,
-											   false);
-			if (result != JSON_INCOMPLETE)
+			/* We will break when there's nothing left to read */
+
+			if (bytes_left < to_read)
+				to_read = bytes_left;
+
+			n_read = fread(buff, 1, to_read, json_file);
+			if (n_read < to_read)
+				pg_fatal("error reading input file: %d", ferror(json_file));
+
+			appendBinaryStringInfo(&json, buff, n_read);
+
+			/*
+			 * Append some trailing junk to the buffer passed to the parser.
+			 * This helps us ensure that the parser does the right thing even
+			 * if the chunk isn't terminated with a '\0'.
+			 */
+			appendStringInfoString(&json, "1+23 trailing junk");
+			bytes_left -= n_read;
+			if (bytes_left > 0)
 			{
-				fprintf(stderr, "%s\n", json_errdetail(result, lex));
-				ret = 1;
-				goto cleanup;
+				result = pg_parse_json_incremental(lex, testsem,
+												   json.data, n_read,
+												   false);
+				if (result != JSON_INCOMPLETE)
+				{
+					fprintf(stderr, "%s\n", json_errdetail(result, lex));
+					ret = 1;
+					goto cleanup;
+				}
+				resetStringInfo(&json);
 			}
-			resetStringInfo(&json);
-		}
-		else
-		{
-			result = pg_parse_json_incremental(lex, testsem,
-											   json.data, n_read,
-											   true);
-			if (result != JSON_SUCCESS)
+			else
 			{
-				fprintf(stderr, "%s\n", json_errdetail(result, lex));
-				ret = 1;
-				goto cleanup;
+				result = pg_parse_json_incremental(lex, testsem,
+												   json.data, n_read,
+												   true);
+				if (result != JSON_SUCCESS)
+				{
+					fprintf(stderr, "%s\n", json_errdetail(result, lex));
+					ret = 1;
+					goto cleanup;
+				}
+				if (!need_strings)
+					printf("SUCCESS!\n");
+				break;
 			}
-			if (!need_strings)
-				printf("SUCCESS!\n");
-			break;
 		}
-	}

 cleanup:
+		freeJsonLexContext(lex);
+
+		/*
+		 * In -r mode, separate output with nulls so that the calling test can
+		 * split it up, decrement the chunk size, and loop back to the top.
+		 * All other modes immediately fall out of the loop and exit.
+		 */
+		if (run_chunk_ranges)
+		{
+			fputc('\0', stdout);
+			fputc('\0', stderr);
+		}
+	} while (run_chunk_ranges && (--chunk_size > 0));
+
 	fclose(json_file);
-	freeJsonLexContext(lex);
 	free(json.data);
 	free(lex);
