
What Is Pagination, and Why Do We Need It?
Imagine your application has a database with thousands of records. Sending all these records to users in a single API response:
- Slows down your application.
- Consumes excessive bandwidth.
- Overwhelms users with too much data at once.
Pagination solves this problem by splitting the data into smaller pages. Users get only a subset of the data at a time, making APIs faster and applications smoother.
Consider a giant bookshelf with hundreds of books. Instead of searching through the entire shelf, wouldn’t it be easier if the shelf were divided into sections like "Page 1", "Page 2", and so on? That’s exactly what pagination does!
Setting Up the Database
To demonstrate pagination, we’ll use a simple `items` table in a PostgreSQL database. Here's the schema:
```sql
CREATE TABLE items (
    id SERIAL PRIMARY KEY,
    name TEXT NOT NULL,
    created_at TIMESTAMP DEFAULT NOW()
);
```
Now insert some dummy data:
```sql
INSERT INTO items (name) VALUES ('Item 1'), ('Item 2'), ('Item 3'), ..., ('Item 100');
```
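Typing out a hundred literal values is tedious. If you're on PostgreSQL, one way to generate the same seed data is with `generate_series` (a sketch, assuming the `items` schema above):

```sql
-- Insert 100 rows named 'Item 1' through 'Item 100'
INSERT INTO items (name)
SELECT 'Item ' || i
FROM generate_series(1, 100) AS i;
```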
Setting Up a Go API with Pagination
Let’s create an API endpoint `/items` that accepts two query parameters:
- `page`: The page number (default: 1).
- `limit`: The number of records per page (default: 10).
Here’s the full implementation:
```go
package main

import (
	"database/sql"
	"encoding/json"
	"log"
	"net/http"
	"strconv"

	_ "github.com/lib/pq"
)

func main() {
	// Connect to the database
	db, err := sql.Open("postgres", "user=youruser password=yourpass dbname=yourdb sslmode=disable")
	if err != nil {
		log.Fatal(err)
	}
	defer db.Close()

	http.HandleFunc("/items", func(w http.ResponseWriter, r *http.Request) {
		// Extract 'page' and 'limit' query parameters
		page, err := strconv.Atoi(r.URL.Query().Get("page"))
		if err != nil || page < 1 {
			page = 1 // Default to page 1
		}
		limit, err := strconv.Atoi(r.URL.Query().Get("limit"))
		if err != nil || limit < 1 {
			limit = 10 // Default to 10 items per page
		}

		// Calculate the OFFSET
		offset := (page - 1) * limit

		// Query the database; ORDER BY keeps the page order stable
		rows, err := db.Query("SELECT id, name, created_at FROM items ORDER BY id LIMIT $1 OFFSET $2", limit, offset)
		if err != nil {
			http.Error(w, "Failed to fetch items", http.StatusInternalServerError)
			return
		}
		defer rows.Close()

		// Process the rows
		items := []map[string]interface{}{}
		for rows.Next() {
			var id int
			var name string
			var createdAt string
			if err := rows.Scan(&id, &name, &createdAt); err != nil {
				http.Error(w, "Failed to scan items", http.StatusInternalServerError)
				return
			}
			items = append(items, map[string]interface{}{
				"id":         id,
				"name":       name,
				"created_at": createdAt,
			})
		}

		// Respond with JSON
		w.Header().Set("Content-Type", "application/json")
		json.NewEncoder(w).Encode(map[string]interface{}{
			"page":  page,
			"limit": limit,
			"items": items,
		})
	})

	log.Println("Server is running on http://localhost:8080")
	log.Fatal(http.ListenAndServe(":8080", nil))
}
```
Understanding the Logic
Pagination Parameters
- Page: Determines which set of records to fetch.
- Limit: Specifies the number of records per page.
Offset Calculation
The offset determines how many records to skip: `offset = (page - 1) * limit`
For example:
- Page 1 with limit=5 → offset = 0 (skip 0 records).
- Page 2 with limit=5 → offset = 5 (skip the first 5 records).
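The examples above can be sketched as a small helper (the function name `offsetFor` is illustrative, not part of the API):

```go
package main

import "fmt"

// offsetFor computes how many rows to skip for a given page and limit,
// following the formula offset = (page - 1) * limit.
func offsetFor(page, limit int) int {
	if page < 1 {
		page = 1 // clamp invalid pages to the first page
	}
	return (page - 1) * limit
}

func main() {
	fmt.Println(offsetFor(1, 5)) // page 1: skip 0 records
	fmt.Println(offsetFor(2, 5)) // page 2: skip the first 5 records
	fmt.Println(offsetFor(4, 10))
}
```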
SQL Query
We use `LIMIT` and `OFFSET` in SQL to fetch the desired records:

```sql
SELECT id, name, created_at FROM items ORDER BY id LIMIT 5 OFFSET 5;
```
Testing Your API
Test your API using tools like Postman, cURL, or directly in a browser:
- Fetch the first page with 10 items:

```bash
curl "http://localhost:8080/items?page=1&limit=10"
```

- Fetch the second page with 20 items:

```bash
curl "http://localhost:8080/items?page=2&limit=20"
```
API Response
Here’s an example response for `/items?page=2&limit=2`:

```json
{
  "page": 2,
  "limit": 2,
  "items": [
    { "id": 3, "name": "Item 3", "created_at": "2025-01-10T20:38:57.832777Z" },
    { "id": 4, "name": "Item 4", "created_at": "2025-01-10T20:38:57.832777Z" }
  ]
}
```
Common Doubts and Pitfalls
1. Why not fetch all records and slice them in Go?
Because it’s inefficient. Imagine loading a million records into memory—your API will slow down and possibly crash.
2. What happens if the `page` or `limit` parameters are missing?
Always set defaults (e.g., `page=1`, `limit=10`) to ensure your API doesn’t break.
3. Can we optimize this further?
Yes! Use indexes on frequently queried columns (like `id` or `created_at`) for faster lookups.
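For instance, `id` is already indexed because it is the primary key, but `created_at` is not; adding an index helps if you sort or filter by it (the index name below is illustrative):

```sql
-- Speeds up ORDER BY / WHERE clauses on created_at
CREATE INDEX idx_items_created_at ON items (created_at);
```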
Conclusion
With just a few lines of code and smart database querying, you’ve turned an overwhelming API response into something lightweight and user-friendly.
Want to take it up a notch? Try adding total pages, next/previous links, or even cursor-based pagination for large-scale applications.
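To give a feel for the cursor-based (keyset) idea, here is a minimal in-memory sketch: instead of a page number, the client sends back the last `id` it saw, and the server returns rows after that cursor. The equivalent SQL would be `WHERE id > $1 ORDER BY id LIMIT $2`; the types and function below are illustrative, not from the article's code.

```go
package main

import "fmt"

// Item mirrors a row from the items table.
type Item struct {
	ID   int
	Name string
}

// keysetPage returns up to limit items whose ID is greater than afterID,
// mimicking the SQL "WHERE id > $1 ORDER BY id LIMIT $2" keyset query.
// The items slice must already be sorted by ID.
func keysetPage(items []Item, afterID, limit int) []Item {
	page := []Item{}
	for _, it := range items {
		if it.ID > afterID {
			page = append(page, it)
			if len(page) == limit {
				break
			}
		}
	}
	return page
}

func main() {
	data := []Item{{1, "Item 1"}, {2, "Item 2"}, {3, "Item 3"}, {4, "Item 4"}, {5, "Item 5"}}
	first := keysetPage(data, 0, 2) // first page: IDs 1 and 2
	fmt.Println(first)
	// The client sends the last ID it saw as the cursor for the next page.
	next := keysetPage(data, first[len(first)-1].ID, 2) // IDs 3 and 4
	fmt.Println(next)
}
```

Unlike `OFFSET`, this stays fast on deep pages because the database can seek directly into the index rather than scanning and discarding skipped rows.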
To get more information about Golang concepts, projects, etc., and to stay updated on the tutorials, do follow Siddhesh on Twitter and GitHub.
Until then, Keep Learning, Keep Building 🚀🚀