Deployment

What this answers: where does each component from Components physically run, and what's the operational shape?

This page is a stub until #19 resolves the M1 production hosting decision. The diagram and notes below capture the leading candidate.

Diagram (planned)

```mermaid
graph TB
    subgraph Internet
      User[Visitor / Builder]
    end

    subgraph Hosting["Hosting (TBD per #19)"]
      direction TB
      WebApp[Next.js web app<br/>Node 22]
      Workers[BullMQ scraper workers<br/>Node 22]
    end

    subgraph DBTier["Database tier"]
      direction TB
      PG[(Postgres 17<br/>Neon serverless<br/>or Hetzner VPS)]
      RD[(Redis<br/>Upstash or self-hosted)]
    end

    subgraph SaaS["External SaaS"]
      direction TB
      Resend[Resend<br/>email]
      Auth[Auth.js<br/>Google + magic link]
    end

    subgraph CFP["Cloudflare Pages"]
      Docs[961tech docs<br/>this site]
    end

    User -->|HTTPS| WebApp
    User -.->|HTTPS| Docs
    WebApp -->|Prisma over TLS| PG
    WebApp -->|BullMQ over TLS| RD
    WebApp -->|HTTPS| Resend
    WebApp -->|OAuth/SMTP| Auth
    Workers -->|Prisma over TLS| PG
    Workers -->|BullMQ| RD
    Workers -->|HTTPS scrape| Internet
```

Hosting candidates (#19)

| Option | Pros | Cons |
| --- | --- | --- |
| bits.lb beta | Lebanese hosting, politically aligned with the product (a Lebanese tool); beta pricing; edge-close to users | Beta: uptime SLO unknown, may need a fallback |
| Hetzner | Cheap, reliable; EU-located but performant from LB | DIY ops: backups, monitoring, OS patching |
| Vercel + Neon + Upstash | Zero ops, gen-1-ready | More expensive at scale; Vercel cold starts may matter |
| Railway / Fly.io | Middle ground: managed but cheap | Less generous free tiers than Vercel |

The decision will be captured as an ADR when made.

Operational characteristics

Web app

  • Sizing: small at M1 (single Lebanese-market user base, low concurrent traffic). One container is enough.
  • Stateless: sessions are JWTs or signed cookies; no in-memory state. Trivial to add a second instance behind a load balancer when needed.
  • Cold-start sensitivity: matters on Vercel-style serverless; not on a long-running container.

Scraper workers

  • Sizing: scales with retailers × categories × refresh frequency. At M1 that is 3 retailers × 8 categories × daily ≈ 24 jobs/day. Trivial.
  • Concurrency: small; one job at a time per retailer is polite. Total active jobs likely < 10.
  • Resilience: failed jobs retry with exponential backoff (sketched after this list). Persistent failures alert via #19 observability hooks (TBD).
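
A minimal sketch of these settings using BullMQ's documented options. The queue name, schedule, and Redis connection are placeholders pending #19, and the processor body stands in for a hypothetical parser entry point:

```ts
import { Queue, Worker } from "bullmq";

// Placeholder connection; the real host is TBD per #19.
const connection = { host: "127.0.0.1", port: 6379 };

export const scrapeQueue = new Queue("scrape", { connection });

// Failed jobs retry with exponential backoff: roughly 1 min, 2 min,
// 4 min, 8 min between attempts, then the job is marked failed.
export async function enqueueScrape(retailer: string, category: string) {
  await scrapeQueue.add(
    "scrape",
    { retailer, category },
    { attempts: 5, backoff: { type: "exponential", delay: 60_000 } },
  );
}

// concurrency: 1 keeps a single in-flight job per worker process; running
// one worker process per retailer is the simplest way to get "one job at
// a time per retailer" without BullMQ Pro's job groups.
const worker = new Worker(
  "scrape",
  async (job) => {
    const { retailer, category } = job.data;
    // A real implementation would call the parser here.
    console.log(`scraping ${retailer}/${category}`);
  },
  { connection, concurrency: 1 },
);

// Persistent failures land here; wire this to alerting once #19 resolves.
worker.on("failed", (job, err) => {
  console.error(`scrape job ${job?.id} failed: ${err.message}`);
});
```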

Postgres

  • Sizing: small; low millions of rows at most for years. The smallest tier on any provider works.
  • Backups: managed providers handle this. If self-hosted: daily logical backups plus WAL shipping, and a restore-from-backup runbook before going live.
  • Connection pooling: Prisma's built-in pool, plus PgBouncer if self-hosting on Hetzner (sketched below).
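
For the self-hosted path, a hedged sketch of the wiring; the host, port, database name, and credentials below are placeholders. `pgbouncer=true` and `connection_limit` are Prisma's documented connection-string parameters:

```ts
// .env (placeholder values; 6432 is PgBouncer's conventional port)
// DATABASE_URL="postgresql://app:secret@10.0.0.5:6432/appdb?pgbouncer=true&connection_limit=5"

import { PrismaClient } from "@prisma/client";

// One client per process. Prisma keeps its own small pool on top of
// PgBouncer; pgbouncer=true makes it skip prepared statements, which
// PgBouncer's transaction-mode pooling cannot support.
export const prisma = new PrismaClient();
```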

Redis

  • Sizing: tiny; only queue state, no application caching. 100 MB is plenty.
  • Resilience: queue state is recoverable from the DB if Redis is wiped (jobs re-register on boot; sketch below). Don't over-invest in HA Redis.
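
To illustrate the recovery property, a sketch of boot-time seeding: repeatable jobs are re-registered from Postgres on every start, so a wiped Redis heals itself. `scrapeTarget` is a hypothetical Prisma model of retailer × category pairs, and the cron pattern is illustrative:

```ts
import { Queue } from "bullmq";
import { prisma } from "./prisma"; // client from the Postgres sketch above

export async function seedScrapeJobs(queue: Queue) {
  const targets = await prisma.scrapeTarget.findMany(); // hypothetical model
  for (const t of targets) {
    // BullMQ dedupes repeatable jobs by name + repeat options + jobId,
    // so running this on every boot is safe whether or not Redis was wiped.
    await queue.add(
      "scrape",
      { retailer: t.retailer, category: t.category },
      {
        repeat: { pattern: "0 3 * * *" }, // daily at 03:00
        jobId: `scrape:${t.retailer}:${t.category}`,
      },
    );
  }
}
```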

Docs (this site)

  • Cloudflare Pages — already live at https://961tech.pages.dev. See Runbooks → Deploy docs site. No backend, fully static; uptime is whatever CF gives.

Networking

  • Public surfaces: the web app (HTTPS), the docs site (HTTPS).
  • Private: Postgres, Redis. Reachable only from web app + workers via VPC or IP allowlist.
  • Outbound from workers: unrestricted HTTPS for scraping. Workers may need a static egress IP if a retailer ever rate-limits or geofences.

Backups & disaster recovery

Pending decision. RTO/RPO targets TBD as part of #19. At M1 scale, "manual restore from yesterday's backup" is acceptable.

Observability

Pending. Likely:

  • Logs: structured JSON, shipped to a log sink (provider-managed initially).
  • Metrics: basic CF Analytics on the docs; for the web app, p95 response time and error rate at minimum.
  • Alerts: Telegram (matches MASTER's existing setup); see the brain repo for the channel.
  • Scraper drift alerts: a parser that unexpectedly returns 0 listings is a parse break, not an empty market; a minimal check is sketched below.
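
A sketch of that drift check, assuming a hypothetical `scrapeRun` model recording per-run listing counts and a hypothetical `notify()` Telegram helper; none of these names are decided yet:

```ts
import { prisma } from "./prisma";
import { notify } from "./alerts"; // hypothetical Telegram helper

// Zero listings after a run that found some is far more likely a broken
// parser than a retailer that emptied its catalogue overnight.
export async function checkParseDrift(retailer: string, listingCount: number) {
  if (listingCount > 0) return;
  const lastGood = await prisma.scrapeRun.findFirst({
    where: { retailer, listingCount: { gt: 0 } },
    orderBy: { finishedAt: "desc" },
  });
  if (lastGood) {
    await notify(
      `Parse drift: ${retailer} returned 0 listings (last good run found ${lastGood.listingCount})`,
    );
  }
}
```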

For local dev, see Guides → Local setup.