
JacksonStreamingApi

tmsktn edited this page Mar 21, 2018 · 2 revisions

Processing model: Streaming API

aka 'Incremental Processing', or 'Token Streams'

Of the three major processing modes that Jackson supports, Streaming Processing (also known as Incremental Processing) is the most efficient way to process JSON content. It has the lowest memory and processing overhead, and can often match the performance of many binary data formats available on the Java platform (see the "Performance Comparison" link below).

This performance comes at a cost: it is not the most convenient way to process JSON content, because:

  • All content to read or write has to be processed in exactly the same order as input comes in (or output is to go out) -- for random access, you need to use Data Binding or Tree Model (both of which actually use the Streaming API for the actual JSON reading/writing).
  • No Java objects are created unless specifically requested; and even then, only very basic types are supported (String, byte[] for base64-encoded binary content).

As a result, the Streaming API is most commonly used by middleware and frameworks (where the performance benefits extend to a wider range of consuming applications, and competition between implementations drives performance as one of the measured features), and less often by applications directly.

Creating Parsers

Parsers are objects used to tokenize JSON content into a stream of tokens and associated data. This is the lowest level of read access to JSON content.

The most common way to create parsers is from external sources (Files, HTTP request streams) or buffered data (Strings, byte arrays/buffers). For this purpose, org.codehaus.jackson.JsonFactory has an extensive set of methods to construct parsers, such as:

```java
JsonFactory jsonFactory = new JsonFactory(); // or, for data binding, org.codehaus.jackson.map.MappingJsonFactory
JsonParser jp = jsonFactory.createJsonParser(file); // or URL, Stream, Reader, String, byte[]
```

Also, if you happen to have an ObjectMapper, you can use ObjectMapper.getJsonFactory() to reuse the factory it has (since (re)using JsonFactory instances is a Performance Best Practice).
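Once a parser is constructed, content is consumed by advancing it token by token. A minimal sketch of that loop, collecting field names in document order (the sample document, class, and method names here are illustrative, not from the original page):

```java
import java.io.IOException;
import java.util.ArrayList;
import java.util.List;

import org.codehaus.jackson.JsonFactory;
import org.codehaus.jackson.JsonParser;
import org.codehaus.jackson.JsonToken;

public class TokenStreamSketch {
    // Iterate the token stream and collect every FIELD_NAME token encountered
    static List<String> parseFieldNames(String json) throws IOException {
        JsonParser jp = new JsonFactory().createJsonParser(json);
        List<String> fields = new ArrayList<String>();
        JsonToken t;
        while ((t = jp.nextToken()) != null) {
            if (t == JsonToken.FIELD_NAME) {
                fields.add(jp.getCurrentName());
            }
        }
        jp.close();
        return fields;
    }

    public static void main(String[] args) throws IOException {
        System.out.println(parseFieldNames("{\"name\":\"Bob\",\"age\":13}"));
    }
}
```

Note that the parser only reports tokens; values are pulled on demand with accessors such as jp.getText() or jp.getIntValue(), which is why no intermediate objects are created unless you ask for them.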

But you can also create parsers from alternate sources:

  • Starting with version 1.3, you can read the contents of a root JsonNode (see Tree Model) with JsonParser jp = node.traverse()
  • Starting with version 1.5, you can buffer underlying JSON tokens into org.codehaus.jackson.util.TokenBuffer, and later create a JsonParser to read the content back (for replaying streams).

Reading JSON tokens from these sources is significantly more efficient than re-parsing JSON content from textual representation.
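As a sketch of the first alternative, node.traverse() replays an already-parsed tree as a token stream without going back through text (the sample array and method name are illustrative):

```java
import java.io.IOException;

import org.codehaus.jackson.JsonNode;
import org.codehaus.jackson.JsonParser;
import org.codehaus.jackson.JsonToken;
import org.codehaus.jackson.map.ObjectMapper;

public class TraverseSketch {
    // Parse JSON into a tree once, then stream tokens from the tree
    static int countIntElements(String json) throws IOException {
        JsonNode node = new ObjectMapper().readTree(json);
        JsonParser jp = node.traverse(); // token stream over the tree; no re-parsing of text
        int count = 0;
        JsonToken t;
        while ((t = jp.nextToken()) != null) {
            if (t == JsonToken.VALUE_NUMBER_INT) {
                count++;
            }
        }
        return count;
    }

    public static void main(String[] args) throws IOException {
        System.out.println(countIntElements("[1,2,3]"));
    }
}
```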

Creating Generators

Generators are objects used to construct JSON content based on a sequence of calls that output JSON tokens. This is the lowest level of write access to JSON content.

The most common way to create generators is to pass in an external destination (File, OutputStream or Writer) into which the resulting JSON content is written. For this purpose, org.codehaus.jackson.JsonFactory has an extensive set of methods to construct generators, such as:

```java
JsonFactory jsonFactory = new JsonFactory(); // or, for data binding, org.codehaus.jackson.map.MappingJsonFactory
JsonGenerator jg = jsonFactory.createJsonGenerator(file, JsonEncoding.UTF8); // or Stream, Writer
```
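Once created, a generator is driven by paired start/end calls plus field writes. A minimal sketch producing one JSON object (the class, method, and field names are illustrative, not from the original page):

```java
import java.io.IOException;
import java.io.StringWriter;

import org.codehaus.jackson.JsonFactory;
import org.codehaus.jackson.JsonGenerator;

public class GeneratorSketch {
    // Write {"name":...,"age":...} token by token into a StringWriter
    static String writePerson(String name, int age) throws IOException {
        StringWriter sw = new StringWriter();
        JsonGenerator jg = new JsonFactory().createJsonGenerator(sw);
        jg.writeStartObject();
        jg.writeStringField("name", name);
        jg.writeNumberField("age", age);
        jg.writeEndObject();
        jg.close(); // flushes buffered output
        return sw.toString();
    }

    public static void main(String[] args) throws IOException {
        System.out.println(writePerson("Bob", 13));
    }
}
```

Each writeXxxField call emits a field-name token followed by a value token, which is why unmatched start/end calls result in an error: the generator checks that output forms well-formed JSON.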

An alternative is to use org.codehaus.jackson.util.TokenBuffer (added in Jackson 1.5): since it extends JsonGenerator, you can efficiently buffer any JSON output in case it needs to be re-processed:

```java
TokenBuffer buffer = new TokenBuffer();
// serialize object as JSON tokens (but don't serialize as JSON text!)
objectMapper.writeValue(buffer, myBean);
// read back as a tree
JsonNode root = objectMapper.readTree(buffer.asParser());
// modify some more, write out
// ...
String jsonText = objectMapper.writeValueAsString(root);
```

(In fact, use of TokenBuffer for re-processing can be considered a Performance Best Practice.)

Configuring

Additional Reading


CategoryJackson
