Ryan Schachte's Blog
Exposing a local database with Cloudflare Tunnel and Hyperdrive
December 15th, 2025

Network tunneling seems to be a recurring theme for me, but every time I revisit it, there’s something new worth sharing. Cloudflare’s Hyperdrive is one of those things: it speeds up database queries by pooling connections close to your database and caching read results at the edge, so a single-region Postgres or MySQL instance feels a lot more global.

Hyperdrive and Arbitrary TCP from Cloudflare Workers

Hyperdrive sits between your Workers and an existing database, handling connection pooling and short-lived query caching without changing your database setup. It is built for Postgres- and MySQL-compatible databases, which speak their own binary protocol over TCP, not HTTP.

Workers have an API that lets you open outbound TCP sockets (the connect() API from cloudflare:sockets), which sounds perfect for talking directly to Postgres. The catch is that TCP connections from Workers to Cloudflare IP ranges are blocked, and that includes services exposed via Cloudflare Tunnel and protected by Cloudflare Access, so you cannot just dial into your tunnelled “local” database from a Worker over TCP.
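
To make that concrete, here’s roughly what the direct approach looks like. This is a minimal sketch with a placeholder hostname and port, and it’s expected to fail for exactly the reason above:

import { connect } from "cloudflare:sockets";

export default {
  async fetch(): Promise<Response> {
    // Naive attempt: open a raw TCP socket to a Cloudflare-proxied hostname.
    // This rejects, because the hostname resolves to Cloudflare IP ranges,
    // which Workers refuse to dial directly.
    const socket = connect("local-db.yourdomain.com:26257");
    await socket.opened;
    return new Response("connected");
  },
};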

Fortunately, the two products are built to work together: Cloudflare Tunnel exposes your database through Cloudflare’s network without opening any inbound ports, and Hyperdrive knows how to connect through that tunnel from your Worker.

The Architecture

Before diving into the setup, let’s understand what we’re building:

Worker → Hyperdrive → Cloudflare Access → Cloudflare Tunnel → Local Database

The tunnel is an outbound-only connection from your local machine to Cloudflare’s edge; once established, traffic flows both ways over it, so your database stays reachable without ever being exposed publicly. Your Worker talks to Hyperdrive, which handles the connection pooling and routes queries through the tunnel to your local instance.

Prerequisites

You’ll need a few things before getting started:

  • A local database installation (I’m using CockroachDB, but Postgres works the same way)
  • A Cloudflare account with a zone you can use for the tunnel hostname
  • Docker installed for running the tunnel connector
  • Wrangler CLI v3.65 or above

Setting up Cloudflare Tunnel

Head over to the Cloudflare One dashboard and navigate to Networks > Connectors > Cloudflare Tunnels. Click Create a tunnel and select Cloudflared as the connector type.

Give your tunnel a descriptive name—something like local-db-tunnel—and copy the Tunnel Token that gets generated. You’ll need this to run the connector locally.

Configuring the Public Hostname

With the tunnel created, you need to tell Cloudflare where to route traffic. In your tunnel configuration, go to the Public Hostname tab and add a new entry:

  • Subdomain: Something like local-db
  • Domain: Select your zone
  • Type: TCP
  • URL: host.docker.internal:26257 (or whatever port your database runs on)

The host.docker.internal hostname lets the Docker container reach services running on your host machine. If you’re running the tunnel outside Docker, just use localhost instead.

Your tunnel hostname will look something like local-db.yourdomain.com.

Running the Tunnel with Docker Compose

I prefer using Docker Compose for this since it’s easier to manage and restart. Create a docker-compose.yml:

services:
  cloudflare-tunnel:
    image: cloudflare/cloudflared:latest
    container_name: cloudflare-tunnel
    restart: unless-stopped
    command: tunnel run
    environment:
      - "TUNNEL_TOKEN=${TUNNEL_TOKEN}"
    extra_hosts:
      - "host.docker.internal:host-gateway"
    networks:
      - tunnel-bridge
    healthcheck:
      test: ["CMD", "cloudflared", "--version"]
      interval: 30s
      timeout: 10s
      retries: 3
      start_period: 10s
 
networks:
  tunnel-bridge:
    driver: bridge

Then create a .env file next to it with your tunnel token:

TUNNEL_TOKEN=your-tunnel-token-from-the-dashboard

Spin it up with:

docker compose up -d

The extra_hosts mapping is the key bit: it lets the container resolve host.docker.internal to your host machine, which is how the tunnel reaches your locally running database. The healthcheck gives Docker a way to notice a wedged connector, and restart: unless-stopped means it’ll come back up after a reboot.
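
Once it’s running, confirm the connector actually registered with Cloudflare. The exact log wording can vary by cloudflared version, so treat the comment below as approximate:

docker compose logs -f cloudflare-tunnel
# Healthy output includes lines like "Registered tunnel connection"
# once the connector has established its links to Cloudflare's edge.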

Database Setup with TLS

If you’re connecting through Hyperdrive, your database needs TLS enabled. For CockroachDB, you can generate self-signed certificates:

cockroach cert create-ca \
  --certs-dir=certs \
  --ca-key=certs/ca.key
 
cockroach cert create-node localhost $(hostname) \
  --certs-dir=certs \
  --ca-key=certs/ca.key
 
cockroach cert create-client root \
  --certs-dir=certs \
  --ca-key=certs/ca.key

Then start CockroachDB with TLS:

cockroach start-single-node \
  --certs-dir=certs \
  --listen-addr=localhost:26257 \
  --http-addr=localhost:8080

For Postgres, you’d configure ssl = on in your postgresql.conf and point to your certificate files.
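
As a rough sketch of the Postgres side (the file paths are placeholders; point them at your actual certificates):

# postgresql.conf
ssl = on
ssl_cert_file = 'server.crt'
ssl_key_file = 'server.key'
ssl_ca_file = 'ca.crt'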

Creating an Admin User

Hyperdrive authenticates with a username and password, so create a user for it (the blanket admin grant keeps this walkthrough simple; scope it down for anything real):

cockroach sql \
  --certs-dir=certs \
  --host=localhost \
  -e "CREATE USER hyperdrive_user WITH PASSWORD 'your-secure-password'; \
      GRANT admin TO hyperdrive_user;"
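
Before wiring up Hyperdrive, it’s worth confirming the credentials work over TLS. A quick check, assuming the defaultdb database and the certs directory from earlier:

cockroach sql \
  --url "postgresql://hyperdrive_user:your-secure-password@localhost:26257/defaultdb?sslmode=verify-full&sslrootcert=certs/ca.crt" \
  -e "SELECT current_user;"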

Configuring Hyperdrive

With the tunnel running and database ready, head to the Cloudflare Dashboard and navigate to Storage & Databases > Hyperdrive.

Click Create and select Connect private database. Fill in the connection details:

  • Name: Something descriptive like local-db-hyperdrive
  • Database: Your database name
  • Host: Your tunnel hostname (e.g., local-db.yourdomain.com)
  • User: The user you created
  • Password: The password you set

Hyperdrive will automatically create the Cloudflare Access application and Service Auth token needed to route through the tunnel. Copy the Hyperdrive ID once it’s created.

Wiring Up Your Worker

The last piece is connecting your Worker to Hyperdrive. Add the binding to your wrangler.toml:

[[hyperdrive]]
binding = "HYPERDRIVE_DB"
id = "your-hyperdrive-id"
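
If you also develop against wrangler dev, recent Wrangler versions let you point the binding straight at your database with a local connection string, skipping the tunnel during local development. Something like this, reusing the credentials from earlier:

[[hyperdrive]]
binding = "HYPERDRIVE_DB"
id = "your-hyperdrive-id"
localConnectionString = "postgresql://hyperdrive_user:your-secure-password@localhost:26257/defaultdb"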

In your Worker code, you can now use the binding to get a connection string:

export default {
  async fetch(request: Request, env: Env): Promise<Response> {
    // Hyperdrive exposes a standard Postgres connection string.
    const connectionString = env.HYPERDRIVE_DB.connectionString;
    // Hand it to your preferred Postgres client, then return a Response.
    return new Response(connectionString ? "Hyperdrive bound" : "No binding");
  },
};
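
For something end-to-end, here’s a sketch using node-postgres. It assumes you’ve installed the pg package and enabled the nodejs_compat compatibility flag in wrangler.toml:

import { Client } from "pg";

export interface Env {
  // The Hyperdrive type ships with @cloudflare/workers-types.
  HYPERDRIVE_DB: Hyperdrive;
}

export default {
  async fetch(request: Request, env: Env, ctx: ExecutionContext): Promise<Response> {
    // Hyperdrive hands the Worker a connection string that points at its pooler.
    const client = new Client({ connectionString: env.HYPERDRIVE_DB.connectionString });
    await client.connect();
    try {
      const result = await client.query("SELECT now() AS server_time");
      return Response.json(result.rows);
    } finally {
      // Close the connection without delaying the response.
      ctx.waitUntil(client.end());
    }
  },
};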

Deploy with wrangler deploy and your Worker will route database queries through Hyperdrive to your local instance.
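
A quick smoke test once it’s live (the URL is whatever wrangler prints after deploying):

curl https://your-worker.your-subdomain.workers.dev/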

Wrapping Up

This setup is particularly useful for development workflows where you want to test your Worker against a real database without deploying one to the cloud. The tunnel keeps everything secure, and Hyperdrive handles the connection pooling so you don’t have to worry about overwhelming your local instance.

The key insight here is that while Workers can’t make arbitrary TCP connections to Cloudflare IPs, Hyperdrive acts as the bridge—it knows how to authenticate with Access and route through the tunnel on your behalf. Once you’ve got the pieces in place, it just works.
