
Drop SQLite: Zero-Dependency Quick Starts with pg0

TL;DR: Your users can clone your project, run pip install, and have Postgres running automatically. No “install Postgres first” in your README, no Docker, no SQLite fallbacks. pg0-embedded bundles Postgres for zero-friction onboarding.


The Problem

Your Python app probably looks like this:

import os

if os.getenv("ENV") == "production":
    DATABASE_URL = os.getenv("DATABASE_URL")  # Postgres
else:
    DATABASE_URL = "sqlite:///dev.db"  # SQLite for "convenience"

You’re paying a hidden tax for this convenience.

There’s the complexity tax: your codebase accumulates if sqlite: ... else: ... branches. You write abstraction layers to paper over dialect differences. You test against both databases (or worse, you don’t, and production breaks). Every new feature requires asking “does this work in SQLite too?” - and often the answer shapes your architecture in ways that make the Postgres version worse.

I’ve seen codebases where 20% of the database layer was just compatibility glue. That’s code you maintain, debug, and reason about - for a database you don’t even run in production.
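
To make the glue concrete, here’s the flavor of shim that piles up. This is a hypothetical sketch - the table, helper, and dialect plumbing are illustrative, not from any particular codebase:

import json

def insert_event(conn, dialect: str, event: dict) -> None:
    """Hypothetical compatibility shim for two databases."""
    payload = json.dumps(event)
    if dialect == "postgresql":
        # Postgres: native JSONB, psycopg-style %s placeholders
        conn.execute(
            "INSERT INTO events (data) VALUES (%s::jsonb)", (payload,)
        )
    else:
        # SQLite: JSON degraded to TEXT, ?-style placeholders
        conn.execute(
            "INSERT INTO events (data) VALUES (?)", (payload,)
        )

Multiply that by every query that touches JSON, arrays, upserts, or date math, and you get to that 20%.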

The Vector Search Problem

If you’re building anything with embeddings or semantic search, the gap widens. pgvector is battle-tested, well-documented, and ships with managed Postgres everywhere (RDS, Cloud SQL, Supabase, Neon).

SQLite? You have two options: sqlite-vss (the older, Faiss-based extension) or sqlite-vec (its successor). Both require you to compile or load extensions manually. Neither matches pgvector’s ecosystem maturity. And you’ll still need pgvector in production anyway - so why maintain two vector implementations?
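
To make “load extensions manually” concrete, here’s what loading sqlite-vec looks like from Python’s stdlib sqlite3 module. A sketch - the ./vec0 path is illustrative, and some Python builds compile out extension loading entirely:

import sqlite3

conn = sqlite3.connect("dev.db")
conn.enable_load_extension(True)   # fails on Python builds without loadable-extension support
conn.load_extension("./vec0")      # path to sqlite-vec's compiled module
conn.enable_load_extension(False)  # re-disable for safety

With Postgres and pgvector bundled, the equivalent is one line of SQL: CREATE EXTENSION IF NOT EXISTS vector.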

The reason SQLite sticks around? Installing Postgres locally is annoying. Docker works but adds overhead. Homebrew/apt means managing system services.

And if you maintain an OSS project, you’ve seen this: “install Postgres” in your README is a contributor barrier. So you keep SQLite support “for convenience” and pay the complexity tax forever.

pg0-embedded: Zero-Dependency Postgres

pg0-embedded is a Python package that downloads and runs Postgres for you. No Docker. No system packages. Just pip install:

pip install pg0-embedded

from pg0 import Pg0

# Context manager handles start/stop
with Pg0() as pg:
    print(pg.uri)  # postgresql://postgres:postgres@localhost:5432/postgres
    pg.execute("CREATE EXTENSION IF NOT EXISTS vector")

First run downloads Postgres 18 with pgvector included. No extension compilation, no manual loading. Subsequent runs use the cached binaries.

Cross-platform: Works on macOS (Intel + Apple Silicon), Linux (x86_64), and Windows. The binary auto-detects your platform and downloads the right build.

Stateful: Data persists in ~/.pg0/instances/<name>/. Stop and restart your app - your data is still there. No volume mounts, no data loss surprises. Delete the directory if you want a fresh start.
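
A quick sanity check of that persistence, using only the calls shown elsewhere in this post:

from pg0 import Pg0

pg = Pg0(name="demo")
pg.start()
pg.execute("CREATE TABLE IF NOT EXISTS notes (id SERIAL PRIMARY KEY, body TEXT)")
pg.execute("INSERT INTO notes (body) VALUES ('survives restarts')")
pg.stop()

# Later - even after a reboot - the same named instance comes back with its data
pg = Pg0(name="demo")
pg.start()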

Migration: Before and After

Before (SQLite compatibility hell):

import os

from sqlalchemy import create_engine
from sqlalchemy.orm import sessionmaker

DATABASE_URL = os.getenv("DATABASE_URL", "sqlite:///dev.db")
engine = create_engine(DATABASE_URL)

# Stuck with a sync driver, can't use Postgres features

After (pg0-embedded + asyncpg):

import os

import asyncpg
from pg0 import Pg0

# Dev: start local Postgres
if os.getenv("ENV") != "production":
    pg = Pg0()
    pg.start()
    DATABASE_URL = pg.uri
else:
    DATABASE_URL = os.getenv("DATABASE_URL")

# Now use asyncpg directly (must run inside a coroutine - see below)
pool = await asyncpg.create_pool(DATABASE_URL)
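
One caveat: create_pool has to be awaited inside a coroutine. Continuing the snippet, a minimal entrypoint might look like this:

import asyncio

async def main() -> None:
    pool = await asyncpg.create_pool(DATABASE_URL)
    try:
        # Trivial smoke test against the embedded instance
        print(await pool.fetchval("SELECT version()"))
    finally:
        await pool.close()

asyncio.run(main())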

Practical SDK Usage

Auto port assignment

from pg0 import Pg0

pg = Pg0()
pg.start()

# Port is auto-assigned to a free one if not specified
print(pg.uri)
# postgresql://postgres:postgres@localhost:54832/postgres

No port conflicts. Spin up multiple instances without worrying about what’s already running on 5432.

Named instances for isolation

from pg0 import Pg0

# Run dev and test databases simultaneously
pg_dev = Pg0(name="dev")
pg_test = Pg0(name="test")

pg_dev.start()
pg_test.start()

# Each instance gets its own port and data directory
print(pg_dev.uri)
print(pg_test.uri)

Custom configuration

from pg0 import Pg0

pg = Pg0(
    name="myapp",
    port=5433,
    username="myuser",
    password="mypass",
    database="mydb",
    config={"shared_buffers": "512MB"}
)
pg.start()
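
Assuming the same URI format as the defaults shown earlier, those settings surface in the connection string:

print(pg.uri)
# postgresql://myuser:mypass@localhost:5433/mydb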

Pytest fixture

import pytest
import asyncpg
from pg0 import Pg0

# Note: async fixtures need an async plugin (e.g. pytest-asyncio)
@pytest.fixture
async def db():
    pg = Pg0(name="test")
    pg.start()

    pool = await asyncpg.create_pool(pg.uri)
    yield pool

    await pool.close()
    pg.stop()
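
A test consuming the fixture could look like this (the marker assumes pytest-asyncio; the table is illustrative):

import pytest

@pytest.mark.asyncio
async def test_roundtrip(db):
    # Named instances persist data, so make the test idempotent
    await db.execute("CREATE TABLE IF NOT EXISTS t (id SERIAL PRIMARY KEY, name TEXT)")
    await db.execute("TRUNCATE t")
    await db.execute("INSERT INTO t (name) VALUES ($1)", "alice")
    assert await db.fetchval("SELECT count(*) FROM t") == 1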

FastAPI lifespan

from contextlib import asynccontextmanager

import asyncpg
from fastapi import FastAPI
from pg0 import Pg0

@asynccontextmanager
async def lifespan(app: FastAPI):
    # Start Postgres on app startup
    pg = Pg0()
    pg.start()

    app.state.pool = await asyncpg.create_pool(pg.uri)
    yield

    await app.state.pool.close()
    pg.stop()

app = FastAPI(lifespan=lifespan)
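
Routes can then borrow connections from the shared pool. A sketch, with an illustrative users table:

@app.get("/users/{user_id}")
async def get_user(user_id: int):
    # Pool methods acquire and release a connection per call
    row = await app.state.pool.fetchrow(
        "SELECT id, name FROM users WHERE id = $1", user_id
    )
    return dict(row) if row else {"error": "not found"}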

Run SQL directly

from pg0 import Pg0

with Pg0() as pg:
    pg.execute("CREATE TABLE users (id SERIAL PRIMARY KEY, name TEXT)")
    pg.execute("INSERT INTO users (name) VALUES ('test')")

What You Unlock

Once you drop SQLite compatibility:

# conn is an asyncpg connection (e.g. acquired from the pool above)

# JSONB with indexing
await conn.execute("""
    CREATE TABLE events (
        id SERIAL PRIMARY KEY,
        data JSONB NOT NULL
    );
    CREATE INDEX idx_events_type ON events ((data->>'type'));
""")

# Query JSON directly
events = await conn.fetch("""
    SELECT * FROM events WHERE data->>'type' = 'purchase'
""")

# Arrays
await conn.execute("""
    INSERT INTO users (name, tags) VALUES ($1, $2)
""", "John", ["admin", "active"])

# pgvector for embeddings (included with pg0)
await conn.execute("""
    CREATE EXTENSION vector;
    CREATE TABLE items (
        id SERIAL PRIMARY KEY,
        embedding vector(1536)
    );
""")

Zero-Setup Quick Starts for Your Users

If you maintain an OSS project, this is, in my opinion, the killer feature. Your users clone the repo, run pip install, and everything works:

# your_package/db.py
from pg0 import Pg0
import os

_pg = None

def get_database_url():
    global _pg
    if url := os.getenv("DATABASE_URL"):
        return url  # Production: use provided URL

    # Development: start embedded Postgres
    if _pg is None:
        _pg = Pg0(name="myapp")
        _pg.start()
    return _pg.uri
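
A hypothetical __main__.py ties it together, with nothing for the user to set up:

# your_package/__main__.py (illustrative)
import asyncio

import asyncpg

from your_package.db import get_database_url

async def main() -> None:
    # First run downloads and starts embedded Postgres automatically
    pool = await asyncpg.create_pool(get_database_url())
    print(await pool.fetchval("SELECT 'Postgres is up'"))
    await pool.close()

asyncio.run(main())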

Your README goes from:

## Prerequisites
- PostgreSQL 18
- pgvector extension

To:

pip install your-package
python -m your_package
# That's it. Postgres starts automatically.

No Docker Compose files. No “works on my machine.” No contributor friction.

Quick Reference

Task                   Code
Start (auto port)      Pg0().start()
Context manager        with Pg0() as pg:
Fixed port             Pg0(port=5433).start()
Named instance         Pg0(name="test").start()
Get URI                pg.uri
Run SQL                pg.execute("SELECT 1")
Stop                   pg.stop()
Stop and delete data   pg.drop()

Same database in dev, test, and prod. No Docker. No brew. No SQLite compromises.

pg0-embedded on PyPI | pg0 CLI

#database #postgresql #python #asyncpg #developer-experience
