Cursor

How to retrieve large numbers of documents using batch retrieval. The backend preserves the state of the result set between requests.

Cursor-Based Pagination

Cursor-based pagination is a method for efficiently retrieving large datasets by breaking them into smaller, sequential batches. Unlike traditional offset/limit pagination, it maintains state between requests for better performance with large result sets.

Important Note

If you need to retrieve fewer than 10,000 results, it is strongly recommended to use the standard `skip` and `limit` parameters instead of cursor-based pagination. This approach is simpler and more efficient for smaller result sets. Use cursor-based pagination only when you need to retrieve large amounts of data exceeding 10,000 items.
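For smaller result sets, a plain `skip`/`limit` query is enough. A minimal sketch, assuming the `StandardPage` type and fields shown in the examples later on this page:

```graphql
{
  StandardPage(skip: 200, limit: 100, orderBy: { Name: ASC }) {
    items {
      Name
      Url
    }
    total
  }
}
```

Here `skip: 200` with `limit: 100` returns the third page of 100 results; no cursor is involved, and no server-side state is kept between requests.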

How to Enable Cursor Pagination

By default, the cursor is not enabled. To enable it, select the `cursor` field in the GraphQL query and pass an empty string as the cursor value in the first query, such as `cursor: ""`.

Key Implementation Details

  • The response contains a cursor value; use that value in the next GraphQL query to fetch the next batch.
  • The result set for the first query is preserved server-side (stateful) for 5 minutes per request; after that time, the cursor expires and is no longer valid.
  • Continue until there are no more results. The number of results per batch is set by `limit`. When the cursor is enabled, `skip` is ignored; if you need to skip results, discard the first batches client-side.
  • Sort by `DOC` (`orderBy: {_ranking: DOC}`) for the fastest retrieval.
  • At least one content item field must be projected, or you will get an error. Selecting only `total` is insufficient because it is not a field of an item in `items`.
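To illustrate the projection rule in the last bullet: a cursor query must select at least one field inside `items`, not just `total`. A sketch using the field names from the examples on this page:

```graphql
{
  StandardPage(cursor: "", limit: 100, orderBy: { _ranking: DOC }) {
    items {
      Name  # at least one content item field must be projected
    }
    total   # selecting only total (without an items field) causes an error
    cursor
  }
}
```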

Why Use Cursor Pagination for Large Datasets?

Cursor pagination provides better performance than traditional skip/limit pagination when dealing with large result sets because:

  • It doesn't need to count and skip through all previous records
  • It maintains a stateful position in the result set
  • It provides consistent results even when the data is being modified

Example Request

Here's an example that retrieves batches of one document at a time:

```graphql
{
  StandardPage(
    cursor: "FGluY2x1ZGVfY29udGV4dF91dWlkDXF1ZXJ5QW5kRmV0Y2gBFnh3Qmszbmh0UkxPeWVGVHBLcUtQVWcAAAAAAAAATRZJVkJ4eFZBdVM5dTI4R1UzVUFSOEpn"
    limit: 1
    orderBy: { _ranking: DOC }
  ) {
    items {
      Name
      Url
      RouteSegment
      Changed
    }
    cursor
  }
}
```

Complete Pagination Flow

  1. First request: Use an empty cursor string: `cursor: ""`
  2. Store the cursor: Save the returned cursor value from the response
  3. Subsequent requests: Use the stored cursor value in the next query
  4. Continue iterating: Repeat until the cursor is null or empty (indicating no more results)

Example Flow:

```graphql
# First request
{
  StandardPage(
    cursor: ""
    limit: 100
    orderBy: { _ranking: DOC }
  ) {
    items {
      Name
      Url
    }
    cursor
  }
}

# Subsequent requests (use the cursor from the previous response)
{
  StandardPage(
    cursor: "FGluY2x1ZGVfY29udGV4dF91dWlkDXF1ZXJ5QW5kRmV0Y2gBFnh3..."
    limit: 100
    orderBy: { _ranking: DOC }
  ) {
    items {
      Name
      Url
    }
    cursor
  }
}
```
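The four-step flow above can be sketched in client code. This is a minimal sketch, not an official SDK: `execute_query` stands in for whatever transport you use to POST GraphQL to your endpoint, and here it is stubbed with canned responses so the loop is self-contained and runnable.

```python
# Minimal sketch of the cursor pagination loop. `execute_query` is a
# placeholder for your GraphQL transport (e.g. an HTTP POST to your
# GraphQL endpoint); below it is stubbed with canned responses.

QUERY = """
{{
  StandardPage(cursor: "{cursor}", limit: 100, orderBy: {{ _ranking: DOC }}) {{
    items {{ Name Url }}
    cursor
  }}
}}
"""

def fetch_all_pages(execute_query):
    """Collect all items: start with an empty cursor, then follow the
    returned cursor until it comes back null or empty (no more results)."""
    items, cursor = [], ""
    while True:
        data = execute_query(QUERY.format(cursor=cursor))
        page = data["StandardPage"]
        items.extend(page["items"])
        cursor = page.get("cursor")
        if not cursor:  # null or empty cursor => pagination is done
            return items

# --- stub transport simulating two batches, then an empty cursor ---
_responses = iter([
    {"StandardPage": {"items": [{"Name": "A", "Url": "/a"}], "cursor": "abc"}},
    {"StandardPage": {"items": [{"Name": "B", "Url": "/b"}], "cursor": ""}},
])

def stub_execute(query):
    return next(_responses)

print([item["Name"] for item in fetch_all_pages(stub_execute)])  # prints ['A', 'B']
```

In a real application, each batch would typically be processed or stored as it arrives rather than accumulated in memory, since cursor pagination exists precisely for result sets too large to hold at once.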

Handling Expired Cursors

If a cursor expires after 5 minutes of inactivity, you'll receive an error response. In this case:

  • Restart the pagination process with `cursor: ""`
  • Consider implementing automatic retry logic in your application
  • Store intermediate results if you need to handle potential interruptions
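The retry advice above can be sketched as follows. The error shape is an assumption (the docs do not specify it here): this sketch treats any response carrying a GraphQL `errors` key as a possible cursor expiry and restarts once from `cursor: ""`, discarding partial results.

```python
# Sketch of restart-on-expiry logic. Assumption: an expired cursor
# surfaces as a response containing a GraphQL "errors" key; adapt the
# check to however your client library reports errors.

def fetch_all_with_restart(execute_query, max_restarts=1):
    """Paginate with a cursor; if the cursor expires mid-run, restart
    from cursor: "" and re-collect from the beginning."""
    restarts = 0
    while True:
        items, cursor = [], ""
        while True:
            data = execute_query(cursor)
            if "errors" in data:   # assumed expiry signal: restart
                break
            page = data["StandardPage"]
            items.extend(page["items"])
            cursor = page.get("cursor")
            if not cursor:         # null/empty cursor: done
                return items
        restarts += 1
        if restarts > max_restarts:
            raise RuntimeError("cursor expired; restart limit exceeded")
```

Restarting re-fetches batches already seen, which is why the bullet above suggests storing intermediate results: if you persist each batch as it arrives, a restart can skip work you have already saved.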
