Infinite Scroll Table
This example keeps row fetching and shaping on the server by running the datatable in infinite online mode, so users scan one continuous list instead of jumping between numbered pages. The live table below behaves like a real product surface: each interaction sends an OnlineQueryInput to this site’s demo API, and the table paints whatever rows, totals, and filter metadata come back. When someone opens a multi-select list filter, the response can include facets so each choice shows a count that matches the current filters; the field names and shapes are documented under Online API. Use viewport virtualization so the DOM stays bounded while the server returns scrolling windows.
Before you start, install the datatable source into your app and add the server helpers when you need them. See Installation for the npx @react-datatable/cli install command and how to use your license token.
1. Shared row type
Define the row shape once (for example in shared/customer.ts or contracts/customer.ts) and import it from both your React table and your API handler. That way the columns, getRowId, and OnlineQueryResponse<Customer> stay aligned with the JSON your route actually returns, without duplicating stringly-typed fields.
The browser still does not load the full dataset: the API returns one scrolling window of rows (plus totals metadata and hasMore) at a time, scoped by tenant, permissions, and whatever else your handler enforces.
// shared (import from the same module in your Vite app and in your server process)
type Customer = {
id: string
company: string
name: string
status: string
plan: string
region: string
seats: number
revenue: number
owner: string
renewal: string
}
2. Define columns
This step is client-only: you configure DatatableColumn so the table knows headers, widths, and which built-in filter and sort UIs to show. Give every column a stable id. When you add the route in step 7, reuse those same ids in your server column map so filters and sorts line up with what the table sends.
// client
import type { DatatableColumn } from "./react-datatable"
const statuses = ["Active", "Trial", "Paused"] as const
const columns: DatatableColumn<Customer>[] = [
{
id: "company",
header: "Company",
accessorKey: "company",
width: 240,
enableSorting: true,
enableFiltering: true,
filterType: "text",
},
{
id: "status",
header: "Status",
accessorKey: "status",
width: 150,
enableSorting: true,
enableFiltering: true,
enableGrouping: true,
filterType: "text-list",
filterOptions: {
options: statuses.map((status) => ({ label: status, value: status })),
},
},
]
3. Online infinite mode
Use online instead of data, and set mode: "infinite". The table owns interaction state; the backend owns matching rows, totals, facets, and hasMore so the scroll range stays honest. Set pageSize and prefetchRows to control how large each server window is and how far ahead to load.
// client
import { Datatable } from "./react-datatable"
<Datatable
tableKey="customers"
columns={columns}
getRowId={(row) => row.id}
online={{
mode: "infinite",
queryKey: ["customers", workspaceId],
pageSize: 80,
prefetchRows: 160,
supportedGroupingColumns: ["status"],
query: fetchCustomersPage,
}}
initialState={{
sorting: [{ id: "company", desc: false }],
}}
/>
4. Implement the query function
The smallest useful implementation is a fetch that posts JSON and returns OnlineQueryResponse<TData>. The route sketch in step 7.6 loads every row for the tenant (unfiltered within that scope) and hands the array to runInMemoryDatatableQuery, which applies input in memory. That is easy to validate but does not scale when the scoped slice is huge, so plan to replace it with a database-backed query while keeping the same route contract. See Server helper API and Server query endpoint.
// client
import type { OnlineQueryInput, OnlineQueryResponse } from "react-datatable-server"
async function fetchCustomersPage(input: OnlineQueryInput): Promise<OnlineQueryResponse<Customer>> {
const response = await fetch("/api/customers/table", {
method: "POST",
headers: { "content-type": "application/json" },
body: JSON.stringify(input),
})
if (!response.ok) {
throw new Error("Failed to load customers")
}
return response.json()
}
5. Add the toolbar and selection configuration
Online mode moves row computation to the server; you can still expose search, filters, display options, selection, and bulk actions on the client.
// client
toolbar={{
quickSearch: { placeholder: "Search Customers..." },
filterButton: true,
displayOptions: true,
copyLink: false,
views: false,
}}
selection={{
enabled: true,
mode: "multi",
showCheckboxOnHover: false,
allowSelectAllMatching: true,
}}
6. Viewport virtualization
Infinite loading decides which server windows are fetched as the user scrolls. Viewport virtualization decides how many of the loaded rows the browser actually mounts. They compose cleanly, and the client config below opts into viewport mode.
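As a mental model, the mounted window can be derived from scroll position and row height. The sketch below is a hypothetical illustration of that math; visibleRange and its parameter names are invented for this example and are not the component's internals:

```typescript
// Hypothetical sketch of viewport virtualization math; not the component's real code.
// Given a scroll offset and a fixed row height, compute which row indexes to mount.
function visibleRange(
  scrollTop: number,      // pixels scrolled from the top of the row area
  viewportHeight: number, // visible height of the scroll container in pixels
  rowHeight: number,      // fixed height of one row in pixels
  rowCount: number,       // rows currently loaded on the client
  overscan: number        // extra rows above and below, like rowOverscanCount
): { start: number; end: number } {
  const first = Math.floor(scrollTop / rowHeight)
  const visible = Math.ceil(viewportHeight / rowHeight)
  return {
    start: Math.max(0, first - overscan),
    end: Math.min(rowCount, first + visible + overscan), // exclusive
  }
}

// With 1,000 loaded rows, a 560px viewport, 40px rows, and 12 overscan rows,
// only ~38 rows are mounted at any scroll position instead of 1,000.
const range = visibleRange(4000, 560, 40, 1000, 12)
console.log(range) // { start: 88, end: 126 }
```

However many windows the server has returned, the DOM cost stays proportional to the viewport, not to the loaded row count.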
// client
virtualization={{
mode: "viewport",
rowOverscanCount: 12,
}}
7. Implement the server-side route
This walkthrough uses Hono and Drizzle so you have a concrete, copy-pasteable server file. You can use any web framework and data layer you like; what matters is that your handler accepts OnlineQueryInput as JSON and returns OnlineQueryResponse for the same row type as the table. Shape that handler like a normal HTTP API route: method, path, JSON body, and JSON response are covered in Server query endpoint.
Expose one HTTP route (for example POST /api/customers/table) on your API process. It should read the JSON body as OnlineQueryInput and respond with OnlineQueryResponse for the same row type you share with the client in step 1.
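Before handing the body to your query logic, it is worth rejecting payloads that are clearly not an OnlineQueryInput. A minimal hedged sketch (looksLikeOnlineQueryInput is an illustrative helper, and the mode values checked are the two this guide uses; see Online API for the authoritative field list):

```typescript
// Hypothetical request-body guard; the field checked here comes from this guide,
// not an exhaustive schema. See Online API for the full OnlineQueryInput shape.
function looksLikeOnlineQueryInput(body: unknown): boolean {
  if (typeof body !== "object" || body === null) return false
  const mode = (body as { mode?: unknown }).mode
  // This guide's handler serves both online modes from one route.
  return mode === "infinite" || mode === "pagination"
}

// In the route handler:
//   const body = await c.req.json()
//   if (!looksLikeOnlineQueryInput(body)) return c.json({ error: "bad request" }, 400)
```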
7.1 Install packages
pnpm add hono drizzle-orm pg
pnpm add -D drizzle-kit @types/pg
7.2 Open Postgres and create a Drizzle client
Create one pg.Pool from DATABASE_URL for the whole process, then pass it to drizzle. Reuse that db instance from your route handlers.
// server
import { drizzle } from "drizzle-orm/node-postgres"
import { Pool } from "pg"
const pool = new Pool({ connectionString: process.env.DATABASE_URL })
export const db = drizzle(pool)
7.3 Model the table with Drizzle
Add a pgTable that matches your real migration: primary key, a tenant or workspace_id column for scoping, and every column your DatatableColumn definitions read through accessorKey or accessorFn.
// server/schema/customers.ts (example: grow this to match step 2 and your migrations)
import { integer, pgTable, text } from "drizzle-orm/pg-core"
export const customers = pgTable("customers", {
id: text("id").primaryKey(),
workspaceId: text("workspace_id").notNull(),
company: text("company").notNull(),
contactName: text("contact_name").notNull(),
status: text("status").notNull(),
plan: text("plan").notNull(),
region: text("region").notNull(),
seats: integer("seats").notNull(),
revenue: integer("revenue").notNull(),
owner: text("owner").notNull(),
renewal: text("renewal").notNull(),
})
7.4 Map SQL rows to your shared row type
db.select() returns rows in the shape of your Drizzle schema. Map each row to the same interface the React table uses (Customer here, or whatever row type your product defines); for example, map contact_name in SQL to name in TypeScript so the rest of the pipeline works with one type.
// server/map-customer.ts
import type { Customer } from "../../shared/customer"
import type { customers } from "./schema/customers"
type CustomerRow = typeof customers.$inferSelect
export function mapRow(row: CustomerRow): Customer {
return {
id: row.id,
company: row.company,
name: row.contactName,
status: row.status,
plan: row.plan,
region: row.region,
seats: row.seats,
revenue: row.revenue,
owner: row.owner,
renewal: row.renewal,
}
}
7.5 Import the same column definitions as the client
Import the DatatableColumn<Customer>[] from step 2 (in a real repo, move that array to a small shared module and import it from both Vite and the server bundle). The column id values are what the table sends in filters and sorting; your server logic must recognize those ids.
// server/customers-table-route.ts
import type { Customer } from "../../shared/customer"
import { customerOnlineColumns } from "../../shared/customer-online-columns"
// Export `customerOnlineColumns` once from shared/customer-online-columns.ts (the same
// array you use in step 2) so filters and sorts hit the same column ids on the server.
7.6 Implement POST /api/customers/table
- Resolve the signed-in user and workspaceId (or equivalent tenant scope).
- Read the JSON body: const input = (await c.req.json()) as OnlineQueryInput.
- Load rows for that scope only, for example await db.select().from(customers).where(eq(customers.workspaceId, workspaceId)).
- Map rows with your mapper from 7.4 into Customer[].
- Turn input plus data into an OnlineQueryResponse (see the snippet below and the note that follows it).
- Return c.json(response) with the correct content-type for JSON.
// server/customers-table-route.ts
import { Hono } from "hono"
import { eq } from "drizzle-orm"
import type { OnlineQueryInput } from "react-datatable-server"
import { runInMemoryDatatableQuery } from "react-datatable-server"
import { customerOnlineColumns } from "../../shared/customer-online-columns"
import { customers } from "./schema/customers"
import { db } from "./db"
import { mapRow } from "./map-customer"
export function registerCustomersTableRoute(app: Hono) {
app.post("/api/customers/table", async (c) => {
const workspaceId = c.req.header("x-workspace-id") ?? "demo_workspace"
const input = (await c.req.json()) as OnlineQueryInput
const rows = await db.select().from(customers).where(eq(customers.workspaceId, workspaceId))
const data = rows.map(mapRow)
const response = await runInMemoryDatatableQuery({
data,
columns: customerOnlineColumns,
input,
getRowId: (row) => row.id,
})
return c.json(response)
})
}
runInMemoryDatatableQuery takes the unfiltered row array you pass as data (for example every mapped customer in the workspace) and applies the online query shape from input in process memory: filters, sorts, grouping, pagination or scroll windows, and the other fields the table sends. You do not pre-slice data to the rows the grid is showing; the helper returns the correct slice and metadata (including hasMore in infinite mode) in OnlineQueryResponse. That makes it a fast way to stand up the route and verify online mode and column wiring before you build real SQL.
It is not a good fit for large datasets, because each request loads the full scoped table (or whatever subset your where returns) into server RAM. When behavior looks right in the browser, move the same contract to a database-backed planner or query builder so filtering and paging happen in the engine instead of in Node. Server query endpoint is the place to start for that handoff.
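Conceptually, the helper runs a filter, then a sort, then a window over the array you give it. The simplified sketch below shows the shape of that pipeline; it is an illustration with a made-up input shape, handles only one text filter and one sort, and is not the helper's actual implementation:

```typescript
// Illustrative filter → sort → window pipeline, mimicking what an in-memory
// online query helper has to do. Not the real runInMemoryDatatableQuery.
type Row = { id: string; company: string; revenue: number }

function queryInMemory(
  data: Row[],
  input: { search?: string; sortDesc?: boolean; offset: number; limit: number }
): { rows: Row[]; totalCount: number; hasMore: boolean } {
  let rows = data
  if (input.search) {
    const needle = input.search.toLowerCase()
    rows = rows.filter((r) => r.company.toLowerCase().includes(needle))
  }
  rows = [...rows].sort((a, b) =>
    input.sortDesc ? b.company.localeCompare(a.company) : a.company.localeCompare(b.company)
  )
  const totalCount = rows.length // totals reflect the filtered set, not the window
  const page = rows.slice(input.offset, input.offset + input.limit)
  return { rows: page, totalCount, hasMore: input.offset + page.length < totalCount }
}
```

The key property to preserve when you move this into SQL: totals and hasMore are computed against the filtered set, while only the requested window travels over the wire.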
7.7 Mount Hono and register the route
Create const app = new Hono(), call your register function from step 7.6, then export app for your runtime (Bun, Node with your adapter, etc.).
// server/app.ts
import { Hono } from "hono"
import { registerCustomersTableRoute } from "./customers-table-route"
const app = new Hono()
registerCustomersTableRoute(app)
export default app
The all-in-one server snippet under Full examples inlines the same mapper, columns, and handler in a single file instead of splitting them across modules.
The same handler also powers Paginated Table: the client sets online.mode to "pagination" and uses explicit pages instead of scroll windows; the server still reads input.mode, input.offset, and input.limit from the JSON body.
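The two modes can be pictured as different client-side derivations of the same offset/limit window the server receives. The sketch below is a hypothetical illustration of that derivation; the table computes its windows internally, and the prefetchRows handling shown is an assumption for this example, not the library's documented behavior:

```typescript
// Hypothetical derivation of the offset/limit window each mode sends.
// Illustrative only: the table computes its own windows internally.
function paginationWindow(pageIndex: number, pageSize: number) {
  // explicit pages: page N starts N * pageSize rows in
  return { offset: pageIndex * pageSize, limit: pageSize }
}

function infiniteWindow(loadedRows: number, pageSize: number, prefetchRows: number) {
  // scroll windows: the next request starts where the loaded rows end; a larger
  // prefetch widens the request (an assumption for this sketch, not a spec)
  return { offset: loadedRows, limit: Math.max(pageSize, prefetchRows) }
}
```

Either way, the server sees only mode, offset, and limit, which is why one handler can serve both tables.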
The live table above uses the full showcase columns, row preview, and CSV export wired to this site’s /api/showcase/export. The copyable blocks below are a smaller baseline you can paste into your own repository; follow Server query endpoint to implement POST /api/customers/export (or your chosen path) for the same serverExecutor contract.
Full examples
Expand to copy a minimal infinite online table (React)
// client
import { useMemo } from "react"
import {
Datatable,
type DataTableBulkAction,
type DataTableBulkServerActionRequest,
type DatatableColumn,
} from "./react-datatable"
import type { OnlineQueryInput, OnlineQueryResponse } from "react-datatable-server"
type Customer = {
id: string
company: string
name: string
status: "Active" | "Trial" | "Paused"
plan: string
region: string
seats: number
revenue: number
owner: string
renewal: string
}
const statuses = ["Active", "Trial", "Paused"] as const
async function fetchCustomersPage(input: OnlineQueryInput): Promise<OnlineQueryResponse<Customer>> {
const response = await fetch("/api/customers/table", {
method: "POST",
headers: { "content-type": "application/json" },
body: JSON.stringify(input),
})
if (!response.ok) {
throw new Error("Failed to load customers")
}
return response.json()
}
/** POST /api/customers/export. Same selection payload contract as the bulk-actions guide. */
async function customersBulkServerExecutor(request: DataTableBulkServerActionRequest): Promise<void> {
if (request.actionId !== "export-customers-csv") {
return
}
const response = await fetch("/api/customers/export", {
method: "POST",
headers: { "content-type": "application/json" },
body: JSON.stringify({
selection: request.selection,
payload: request.payload,
}),
})
if (!response.ok) {
throw new Error(`Export failed: ${response.status}`)
}
const blob = await response.blob()
const url = URL.createObjectURL(blob)
const link = document.createElement("a")
link.href = url
link.download = "customers-export.csv"
link.click()
URL.revokeObjectURL(url)
}
export function CustomersInfiniteTable({ workspaceId }: { workspaceId: string }) {
const columns = useMemo<DatatableColumn<Customer>[]>(
() => [
{
id: "company",
header: "Company",
accessorKey: "company",
width: 240,
enableSorting: true,
enableFiltering: true,
filterType: "text",
},
{
id: "status",
header: "Status",
accessorKey: "status",
width: 140,
enableSorting: true,
enableFiltering: true,
enableGrouping: true,
filterType: "text-list",
filterOptions: {
options: statuses.map((status) => ({ label: status, value: status })),
},
},
],
[]
)
const bulkActions = useMemo<DataTableBulkAction<Customer>[]>(
() => [
{
id: "export-selection",
title: "Export selected CSV",
execution: "server",
serverActionId: "export-customers-csv",
buildServerPayload: () => ({ requestedBy: "customers-table" }),
},
],
[]
)
return (
<div className="h-[560px] overflow-hidden rounded-lg border">
<Datatable
tableKey="customers"
columns={columns}
getRowId={(row) => row.id}
online={{
mode: "infinite",
queryKey: ["customers", workspaceId],
pageSize: 80,
prefetchRows: 160,
supportedGroupingColumns: ["status"],
query: fetchCustomersPage,
}}
toolbar={{
quickSearch: { placeholder: "Search Customers..." },
filterButton: true,
displayOptions: true,
copyLink: false,
views: false,
}}
initialState={{
sorting: [{ id: "company", desc: false }],
}}
virtualization={{ mode: "viewport", rowOverscanCount: 12 }}
selection={{
enabled: true,
mode: "multi",
showCheckboxOnHover: false,
allowSelectAllMatching: true,
}}
bulkActions={{
triggerLabel: "Actions",
actions: bulkActions,
serverExecutor: customersBulkServerExecutor,
}}
/>
</div>
)
}
Expand to copy a minimal online table server (Hono + Drizzle)
// server
/**
* customers-table-api.ts: single-file sketch you can split into schema/, routes/, etc.
* Serves POST /api/customers/table for both pagination and infinite online modes.
*
* Teaching default: `runInMemoryDatatableQuery` runs the online planner on the scoped
* row array. Replace that call with SQL-backed planning when the workspace slice is too
* large to load whole; keep the same JSON request/response contract.
*/
import { Hono } from "hono"
import { drizzle } from "drizzle-orm/node-postgres"
import { eq } from "drizzle-orm"
import { Pool } from "pg"
import { integer, pgTable, text } from "drizzle-orm/pg-core"
import type { OnlineQueryInput } from "react-datatable-server"
import { runInMemoryDatatableQuery } from "react-datatable-server"
import type { DatatableColumn } from "./react-datatable"
// --- Drizzle schema (match your migrations) ---
const customers = pgTable("customers", {
id: text("id").primaryKey(),
workspaceId: text("workspace_id").notNull(),
company: text("company").notNull(),
contactName: text("contact_name").notNull(),
status: text("status").notNull(),
plan: text("plan").notNull(),
region: text("region").notNull(),
seats: integer("seats").notNull(),
revenue: integer("revenue").notNull(),
owner: text("owner").notNull(),
renewal: text("renewal").notNull(),
})
type CustomerRow = typeof customers.$inferSelect
type Customer = {
id: string
workspaceId: string
company: string
name: string
status: "Active" | "Trial" | "Paused"
plan: string
region: string
seats: number
revenue: number
owner: string
renewal: string
}
function mapRow(row: CustomerRow): Customer {
return {
id: row.id,
workspaceId: row.workspaceId,
company: row.company,
name: row.contactName,
status: row.status as Customer["status"],
plan: row.plan,
region: row.region,
seats: row.seats,
revenue: row.revenue,
owner: row.owner,
renewal: row.renewal,
}
}
const statuses = ["Active", "Trial", "Paused"] as const
// --- Keep in sync with the React table (extract to shared/ in real apps) ---
const customerOnlineColumns: DatatableColumn<Customer>[] = [
{
id: "company",
header: "Company",
accessorKey: "company",
width: 240,
enableSorting: true,
enableFiltering: true,
filterType: "text",
},
{
id: "name",
header: "Contact",
accessorKey: "name",
width: 190,
enableSorting: true,
enableFiltering: true,
filterType: "text",
},
{
id: "status",
header: "Status",
accessorKey: "status",
width: 140,
enableSorting: true,
enableFiltering: true,
enableGrouping: true,
filterType: "text-list",
filterOptions: {
options: statuses.map((status) => ({ label: status, value: status })),
},
},
{
id: "plan",
header: "Plan",
accessorKey: "plan",
width: 130,
enableSorting: true,
enableFiltering: true,
enableGrouping: true,
filterType: "text-list",
filterOptions: {
options: ["Starter", "Team", "Enterprise"].map((plan) => ({ label: plan, value: plan })),
},
},
{
id: "region",
header: "Region",
accessorKey: "region",
width: 120,
enableSorting: true,
enableFiltering: true,
enableGrouping: true,
filterType: "text-list",
filterOptions: {
options: ["NA", "EU", "APAC"].map((region) => ({ label: region, value: region })),
},
},
{
id: "seats",
header: "Seats",
accessorKey: "seats",
width: 110,
enableSorting: true,
enableFiltering: true,
filterType: "number",
},
{
id: "revenue",
header: "Revenue",
accessorKey: "revenue",
width: 130,
enableSorting: true,
enableFiltering: true,
filterType: "number",
},
{
id: "owner",
header: "Owner",
accessorKey: "owner",
width: 170,
enableSorting: true,
enableFiltering: true,
filterType: "text",
},
{
id: "renewal",
header: "Renewal",
accessorKey: "renewal",
width: 140,
enableSorting: true,
enableFiltering: true,
filterType: "date",
},
]
const pool = new Pool({ connectionString: process.env.DATABASE_URL })
const db = drizzle(pool)
const app = new Hono()
app.post("/api/customers/table", async (c) => {
// Replace with real session / tenant resolution.
const workspaceId = c.req.header("x-workspace-id") ?? "demo_workspace"
const input = (await c.req.json()) as OnlineQueryInput
const rows = await db.select().from(customers).where(eq(customers.workspaceId, workspaceId))
const data = rows.map(mapRow)
const response = await runInMemoryDatatableQuery({
data,
columns: customerOnlineColumns,
input,
getRowId: (row) => row.id,
})
return c.json(response)
})
export default app
For a minimal local run, create the pool from DATABASE_URL, start Hono on a port, and point the React fetch URL at http://127.0.0.1:8787/api/customers/table (or mount this app behind your framework’s dev proxy under /api/customers/table). When you outgrow the in-memory helper, follow Server query planning and Server query execution to push filters, sorts, and pagination into the database.
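When you make that move, the main discipline is translating the table's column ids into SQL only through an explicit whitelist, never by interpolating raw input. A hedged sketch of one piece of the pushdown, the sort translation (buildOrderBy is an illustrative helper, not part of the library; the column names mirror this guide's schema):

```typescript
// Illustrative sort pushdown: map the table's column ids to real SQL columns
// through an explicit whitelist, so arbitrary input can never reach the query.
const sortableColumns: Record<string, string> = {
  company: "company",
  name: "contact_name", // client id → SQL column, mirroring the 7.4 mapper
  revenue: "revenue",
}

function buildOrderBy(sorting: { id: string; desc: boolean }[]): string {
  const terms = sorting
    .filter((s) => s.id in sortableColumns) // drop unknown ids instead of erroring
    .map((s) => `${sortableColumns[s.id]} ${s.desc ? "DESC" : "ASC"}`)
  return terms.length ? `ORDER BY ${terms.join(", ")}` : ""
}
```

Filters follow the same pattern: whitelist the column id, then bind the value as a query parameter so the engine, not Node, does the matching.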
Where to go next
- For the data-boundary decision, read Choose a data mode.
- For request and response shapes, read Online API.
- For explicit pages instead of continuous loading, read Paginated Table.
- For the HTTP contract used by online tables and server bulk payloads, read Server query endpoint.