# Database Compatibility
Turbine is built for PostgreSQL. Because it generates standard SQL (parameterized queries, json_agg, correlated subqueries), it also works with databases that speak the PostgreSQL wire protocol.
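To make that concrete, here is the shape of the single statement a nested read compiles to. The `users`/`posts` schema and the accessor style are illustrative (assuming a `db` client configured as shown later on this page), and the SQL shows the general pattern rather than verbatim Turbine output:

```ts
// Hypothetical schema; the SQL below shows the pattern, not verbatim output.
const users = await db.users.findMany({
  with: { posts: true },
});

// Compiles to roughly one statement:
//
//   SELECT u.*,
//          COALESCE(
//            (SELECT json_agg(json_build_object('id', p.id, 'title', p.title))
//               FROM posts p
//              WHERE p.author_id = u.id),   -- correlated subquery
//            '[]'::json
//          ) AS posts
//     FROM users u
```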
Turbine v0.14 extends the internal Dialect interface across query generation, DML generation, schema DDL, migration-tracking SQL, Postgres-to-TypeScript type mapping, and bulk-insert array type mapping. PostgreSQL remains the only GA target in the turbine-orm package, but SQL primitives (quoting, placeholders, JSON aggregation, DML with RETURNING, UNNEST, and ON CONFLICT, column types, CREATE TABLE, indexes, _turbine_migrations statements, and schema-metadata type names) now route through a dialect contract, so future @turbine-orm/mysql and @turbine-orm/sqlite packages can implement native SQL without compromising the Postgres path.
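Roughly, that contract looks like the interface below. The method names are illustrative, not turbine-orm's actual internal API:

```ts
// Illustrative sketch of the dialect contract described above; method names
// are hypothetical, not turbine-orm's actual internal interface.
interface Dialect {
  quoteIdentifier(name: string): string;            // "users" vs. `users`
  placeholder(index: number): string;               // $1 vs. ?
  jsonAggregate(expression: string): string;        // json_agg(...) vs. JSON_ARRAYAGG(...)
  insertReturning(table: string, columns: string[]): string; // RETURNING vs. follow-up SELECT
  columnType(turbineType: string): string;          // maps schema types to native column types
  migrationsTableDDL(): string;                     // CREATE TABLE _turbine_migrations ...
}
```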
## Compatibility Matrix
| Database | Adapter | Status | Notes |
|---|---|---|---|
| PostgreSQL 13+ | None (default) | Full | Native target |
| AlloyDB | alloydb (no-op) | Full | Google Cloud's PG-compatible service |
| TimescaleDB | timescale (no-op) | Full | PG extension — hypertables introspect fine |
| Neon | None | Full | Use turbine-orm/serverless for HTTP driver |
| Supabase | None | Full | Standard Postgres, connect directly |
| YugabyteDB | yugabytedb | Full | Distributed lock adapter for migrations |
| CockroachDB | cockroachdb | Full | Table-based locks, introspection overrides |
## AlloyDB
AlloyDB is Google Cloud's PostgreSQL-compatible database service. Under the hood it is PostgreSQL with a custom columnar storage engine — wire protocol, system catalogs, and SQL dialect are identical.
No adapter is needed. Every Turbine feature works out of the box:
- `json_agg` / nested `with` relations
- Advisory locks for migrations
- Standard `information_schema` introspection
- Transactions, pipelines, streaming
### Connection
```ts
import { TurbineClient } from 'turbine-orm';
import { SCHEMA } from './generated/turbine/metadata.js';

// AlloyDB uses standard PG connection strings
const db = new TurbineClient({
  connectionString: process.env.ALLOYDB_URL,
  // AlloyDB requires SSL in production
  ssl: { rejectUnauthorized: true },
}, SCHEMA);
```

If you want to be explicit about the target database:
```ts
import { alloydb } from 'turbine-orm/adapters';

// turbine.config.ts
export default {
  url: process.env.ALLOYDB_URL,
  adapter: alloydb, // purely documentation — no behavior changes
};
```

## TimescaleDB
TimescaleDB is a PostgreSQL extension that adds hypertables (time-series-optimized tables), continuous aggregates, and compression policies. Because it's an extension (not a fork), the underlying database is standard PostgreSQL.
No adapter is needed. Hypertables introspect as regular tables via information_schema. All Turbine features work identically.
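You can confirm this against your own instance with a quick probe; `metrics` is a hypothetical hypertable name, and the string-style `$queryRaw` call is an assumption (check the raw-query docs for the exact signature):

```ts
// 'metrics' is a hypothetical hypertable. It shows up in
// information_schema.tables like any ordinary table.
const tables = await db.$queryRaw(
  `SELECT table_name, table_type
     FROM information_schema.tables
    WHERE table_schema = 'public' AND table_name = 'metrics'`
);
```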
### Connection
```ts
import { TurbineClient } from 'turbine-orm';
import { SCHEMA } from './generated/turbine/metadata.js';

// Timescale Cloud uses standard PG connection strings
const db = new TurbineClient({
  connectionString: process.env.TIMESCALE_URL,
  ssl: { rejectUnauthorized: true },
}, SCHEMA);
```

### Hypertable considerations
- `npx turbine generate` introspects hypertables the same as regular tables
- `findMany` with `orderBy: { time: 'desc' }` benefits from Timescale's chunk exclusion
- Continuous aggregates are not introspected (they're materialized views, not tables)
- If you need to query a continuous aggregate, use `db.$queryRaw`, as shown below
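For example, reading from a hypothetical continuous aggregate named `metrics_hourly` (again assuming a string-style `$queryRaw` call):

```ts
// 'metrics_hourly' is a hypothetical continuous aggregate. Turbine does not
// introspect it, so query it directly with raw SQL.
const hourly = await db.$queryRaw(
  `SELECT bucket, avg_value
     FROM metrics_hourly
    WHERE bucket > now() - interval '1 day'
    ORDER BY bucket DESC`
);
```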
## YugabyteDB
YugabyteDB is a distributed SQL database that reuses the PostgreSQL query layer. It supports json_agg, subqueries, transactions, and most of pg_catalog.
All query features work identically to PostgreSQL. The only difference is in migration locking: advisory locks (pg_try_advisory_lock) are scoped per-tserver node, not cluster-wide. In a multi-node deployment, two concurrent turbine migrate commands routed to different nodes could both acquire the "same" lock.
### Use the adapter for safe distributed migrations
```ts
import { yugabytedb } from 'turbine-orm/adapters';

// turbine.config.ts
export default {
  url: process.env.YUGABYTE_URL,
  adapter: yugabytedb,
};
```

The YugabyteDB adapter replaces advisory locks with a `_turbine_lock` table using `SELECT ... FOR UPDATE NOWAIT`. Because YugabyteDB's row locks are distributed (backed by Raft consensus), this provides a true cluster-wide mutex.
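The underlying pattern looks roughly like this simplified sketch (not the adapter's actual source; its schema, retries, and error handling will differ):

```ts
import { Pool, PoolClient } from 'pg';

// Simplified sketch of the table-based migration lock described above.
async function acquireMigrationLock(pool: Pool): Promise<PoolClient | null> {
  const client = await pool.connect();
  try {
    await client.query('BEGIN');
    // NOWAIT errors immediately if another node holds the row lock.
    // YugabyteDB's Raft-backed row locks make this a cluster-wide mutex.
    await client.query('SELECT 1 FROM _turbine_lock WHERE id = 1 FOR UPDATE NOWAIT');
    return client; // keep the transaction open while migrations run
  } catch {
    await client.query('ROLLBACK');
    client.release();
    return null; // another `turbine migrate` holds the lock
  }
}
```

Committing or rolling back the transaction releases the lock, so the caller holds on to the client until migrations finish.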
### Connection
```ts
import { TurbineClient } from 'turbine-orm';
import { SCHEMA } from './generated/turbine/metadata.js';

// YugabyteDB uses standard PG connection strings
const db = new TurbineClient({
  connectionString: 'postgresql://yugabyte:yugabyte@localhost:5433/mydb',
}, SCHEMA);
```

### What works identically
- `json_agg` + `json_build_object` nested relations
- Correlated subqueries
- Parameterized queries and pipeline batching
- Transactions with `SAVEPOINT` (nested transactions)
- `information_schema` for introspection
- `pg_indexes`, `pg_type`, `pg_enum`
- `FOR UPDATE` / `FOR SHARE` row-level locking
- All WHERE operators (LIKE, ILIKE, IN, arrays, JSON)
### Known differences
| Area | Difference | Impact |
|---|---|---|
| Advisory locks | Per-node, not cluster-wide | Use yugabytedb adapter for migrations |
| Sequences | May have gaps under concurrent inserts | Cosmetic only — IDs are still unique |
| `pg_class.reltuples` | May be stale or zero on new tables | Studio row counts may lag |
| Index types | Hash indexes not supported | GIN, GiST, B-tree all work |
## CockroachDB
CockroachDB is a distributed SQL database that speaks the PostgreSQL wire protocol. Turbine's core query features (json_agg, correlated subqueries, parameterized queries) work correctly. However, CockroachDB has several differences that require the cockroachdb adapter.
### Use the adapter
```ts
import { cockroachdb } from 'turbine-orm/adapters';

// turbine.config.ts
export default {
  url: process.env.COCKROACH_URL,
  adapter: cockroachdb,
};
```

### What the adapter does
| Area | PostgreSQL | CockroachDB (with adapter) |
|---|---|---|
| Migration locks | pg_try_advisory_lock() | _turbine_lock table + SELECT FOR UPDATE NOWAIT |
| Statement timeout | SET LOCAL statement_timeout | SET transaction_timeout (v23.1+) |
| Index introspection | pg_indexes view | Same view (compatible since v22.1) |
| Row estimates | pg_class.reltuples | crdb_internal.table_row_statistics |
### Connection
```ts
import { TurbineClient } from 'turbine-orm';
import { SCHEMA } from './generated/turbine/metadata.js';

const db = new TurbineClient({
  connectionString: process.env.COCKROACH_URL,
  ssl: { rejectUnauthorized: true }, // CockroachDB Cloud requires SSL
}, SCHEMA);
```

### What works identically
- `json_agg` + `json_build_object` nested relations
- Correlated subqueries (the core of Turbine's single-query strategy)
- Parameterized queries and pipeline batching (extended query protocol)
- `information_schema` for table, column, and constraint introspection
- `pg_enum` / `pg_type` for enum introspection
- `WITH` (CTEs), `COALESCE`, `LIMIT`, `OFFSET`, `ORDER BY`
- All WHERE operators (LIKE, ILIKE, IN, NOT IN, arrays)
- Transactions (note: CockroachDB uses serializable isolation by default)
### Known limitations
| Area | Difference | Impact |
|---|---|---|
| Advisory locks | Not supported | Adapter provides table-based alternative |
| `SERIAL` columns | Uses `unique_rowid()`, not sequences | IDs are unique but not sequential |
| NULL ordering in `json_agg` | May differ from PostgreSQL | Cosmetic for most use cases |
| `SET LOCAL statement_timeout` | Not supported | Adapter uses `SET transaction_timeout` |
| Default isolation | Serializable (vs. Read Committed) | More retryable errors under contention; see the retry sketch below |
| `pg_class.reltuples` | Not reliable | Studio uses `crdb_internal` for estimates |
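Because of the serializable default, transactions that touch contended rows should be retried on SQLSTATE `40001`. A minimal sketch, assuming a `db.$transaction` API (substitute whatever transaction helper you actually use):

```ts
// Retries CockroachDB serialization failures (SQLSTATE 40001) with
// exponential backoff. Sketch only; tune attempts and delays for your load.
async function withRetry<T>(fn: () => Promise<T>, attempts = 5): Promise<T> {
  for (let i = 0; ; i++) {
    try {
      return await fn();
    } catch (err: any) {
      if (err?.code !== '40001' || i >= attempts - 1) throw err;
      await new Promise((resolve) => setTimeout(resolve, 2 ** i * 50));
    }
  }
}

// Usage: db.$transaction is assumed here, not confirmed Turbine API.
const result = await withRetry(() =>
  db.$transaction(async (tx) => {
    // contended reads/writes
  })
);
```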
### Serverless with CockroachDB
CockroachDB Serverless exposes a standard PostgreSQL connection string. Use it directly with turbineHttp:
```ts
import { turbineHttp } from 'turbine-orm/serverless';
import { Pool } from 'pg';
import { schema } from './generated/turbine/metadata';

const pool = new Pool({
  connectionString: process.env.COCKROACH_URL,
  ssl: { rejectUnauthorized: true },
});

export const db = turbineHttp(pool, schema);
```

## Other PG-Compatible Databases
If your database speaks the PostgreSQL wire protocol and supports:
- `json_agg` and `json_build_object`
- Correlated subqueries
- `$1, $2, ...` parameterized queries
- `information_schema.tables` and `information_schema.columns`
Then Turbine will likely work out of the box. Connect using the standard `pg` driver and run `npx turbine generate` to verify introspection works.
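A quick way to check the first three requirements before wiring up Turbine is a single probe over the `pg` driver:

```ts
import { Client } from 'pg';

// Exercises parameterized queries, json_agg/json_build_object, and a
// correlated subquery in one statement. If this runs, Turbine's core
// SQL most likely will too.
const client = new Client({ connectionString: process.env.DATABASE_URL });
await client.connect();
const { rows } = await client.query(
  `SELECT (SELECT json_agg(json_build_object('n', x.n))
             FROM (SELECT $1::int AS n) x
            WHERE x.n = t.n) AS probe
     FROM (SELECT $1::int AS n) t`,
  [42]
);
console.log(rows[0].probe); // [{"n": 42}]
await client.end();
```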
If you encounter issues with a specific PG-compatible database, open an issue — adapters are straightforward to add.