# Components

What this answers: what are the major deployable components of 961tech and how do they communicate?

## Diagram
```mermaid
graph LR
Web[Next.js web app<br/>RSC + API routes]
DB[(Postgres 17)]
Redis[(Redis<br/>BullMQ broker)]
Scrapers[Scraper workers<br/>BullMQ consumers]
Mail[Email service<br/>Resend, planned]
Auth[Auth provider<br/>Google + magic link]
Retailers[(Retailer sites<br/>external)]
Web -- Prisma --> DB
Web -- BullMQ enqueue --> Redis
Scrapers -- BullMQ consume --> Redis
Scrapers -- Prisma --> DB
Scrapers -- HTTP scrape --> Retailers
Web -- HTTP --> Mail
Web -- OAuth/email --> Auth
classDef m1 fill:#0a3d0a,stroke:#5d8b3e,color:#fff
classDef m2 fill:#3d2a0a,stroke:#ff5b1f,color:#fff
classDef external fill:#1a2030,stroke:#a8a397,color:#a8a397
class Web,DB m1
class Scrapers,Redis,Mail,Auth m2
class Retailers external
```
Green = deployed in M1. Orange = planned for M2. Grey = external systems.
## Components

### Next.js web app

The single deployable for M1. Hosts:
- Page routes under `src/app/` — landing, browse, product detail, build
- API routes under `src/app/api/` — currently just `/api/go/r/[retailerId]/p/[listingId]` for outbound deep-link tracking
- Server components by default; client islands only where interactivity demands it
Connects to Postgres via Prisma. After #11, also talks to the Auth provider; after #14, to the Email service.
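As a sketch of the one current API route: a Next.js route handler at `src/app/api/go/r/[retailerId]/p/[listingId]/route.ts` can answer with a tracked 302 to the retailer URL. The `lookupListingUrl` helper and the example URLs are assumptions for illustration (the real lookup would go through Prisma); only the route shape comes from the source.

```typescript
// Hypothetical sketch of the outbound deep-link route.
// In the real app the lookup would be a Prisma query against Postgres;
// here it is stubbed with an in-memory map so the sketch is self-contained.
const listingUrls = new Map<string, string>([
  ["acme:123", "https://example-retailer.test/p/123"], // assumed test data
]);

async function lookupListingUrl(retailerId: string, listingId: string): Promise<string | null> {
  return listingUrls.get(`${retailerId}:${listingId}`) ?? null;
}

export async function GET(
  _req: Request,
  { params }: { params: { retailerId: string; listingId: string } },
): Promise<Response> {
  const url = await lookupListingUrl(params.retailerId, params.listingId);
  if (!url) return new Response("Not found", { status: 404 });
  // Outbound click tracking would be recorded here before redirecting.
  return Response.redirect(url, 302);
}
```

Using the standard `Response` Web API (which Next.js route handlers accept) keeps the handler testable without the framework.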
### Postgres 17
Source of truth for the catalog. See Data model and Reference → Prisma models.
Hosting TBD per #19 — leading candidate is Neon, with Hetzner as a fallback. Postgres has no separate "schema service" — Prisma migrations are the schema.
### Redis (M2)
BullMQ broker. Holds queued scraper jobs and their state. Currently provisioned in `docker-compose.yml` but not in active use until #18 lands.
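When #18 lands, scheduling a daily scrape per retailer might look like the following. The queue name, payload shape, and cron pattern are assumptions, not decisions from this doc; with BullMQ the returned triple would be passed to `queue.add(name, data, opts)` on a `Queue` bound to Redis.

```typescript
// Assumed payload for a scrape job (one job per retailer).
interface ScrapeJobData {
  retailerId: string;
}

// Builds a repeatable-job spec in BullMQ's (name, data, opts) shape.
// The stable jobId makes re-registering the schedule idempotent.
function dailyScrapeJob(retailerId: string): {
  name: string;
  data: ScrapeJobData;
  opts: { jobId: string; repeat: { pattern: string } };
} {
  return {
    name: "scrape-retailer",
    data: { retailerId },
    opts: {
      jobId: `scrape:${retailerId}`,    // dedupes: one repeatable job per retailer
      repeat: { pattern: "0 3 * * *" }, // daily at 03:00 — assumed schedule
    },
  };
}
```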
### Scraper workers (M2)
BullMQ consumers, run on a schedule (daily per retailer initially). Today the same code runs as a one-shot CLI (`npm run scrape`); the M2 split puts it behind BullMQ. See Ingest pipeline for what they do.
Likely deployed as a separate container/process from the web app — independent scaling, separate failure modes.
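The consumer side of that split might be a processor like the one below, which BullMQ would receive via `new Worker("scrape", processScrape, { connection })`. All names are assumptions; the fetch/parse/upsert steps are stubbed stand-ins for what Ingest pipeline describes (HTTPS GET per scrape, one Prisma write per listing).

```typescript
interface ScrapeJobData { retailerId: string }
interface Listing { url: string; priceCents: number }

// Stand-in for fetching + parsing a retailer's pages (HTTPS in reality).
async function fetchListings(retailerId: string): Promise<Listing[]> {
  return [{ url: `https://retailer.test/${retailerId}/p/1`, priceCents: 9900 }];
}

// Stand-in for the per-listing Prisma write, e.g. prisma.listing.upsert(...).
async function upsertListing(_listing: Listing): Promise<void> {}

// The function a BullMQ Worker would call once per dequeued job.
export async function processScrape(job: { data: ScrapeJobData }): Promise<{ scraped: number }> {
  const listings = await fetchListings(job.data.retailerId);
  for (const listing of listings) await upsertListing(listing); // one DB write per listing
  return { scraped: listings.length };
}
```

Because the processor is just an async function, the same code can keep serving the one-shot CLI path until the BullMQ wiring exists.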
### Email service (M2)
Resend (planned), via SMTP or HTTP API. Triggered from the web app for UC-G price drop alerts (#14). Not in use until then.
### Auth provider (M2)
Google OAuth + email magic link via Auth.js (#11). The "provider" is logically Auth.js itself; the actual identity is held by Google or the user's email.
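A minimal Auth.js config for this setup could look like the sketch below. Only "Google OAuth + email magic link via Auth.js" comes from the source; the module paths follow Auth.js v5 conventions, and the Resend provider and sender address are assumptions (any Auth.js email provider would do for magic links).

```typescript
// auth.ts — hedged sketch, not the project's actual config.
import NextAuth from "next-auth";
import Google from "next-auth/providers/google";
import Resend from "next-auth/providers/resend";

export const { handlers, auth, signIn, signOut } = NextAuth({
  providers: [
    Google,                              // OAuth — identity held by Google
    Resend({ from: "login@example.com" }), // magic-link email; address assumed
  ],
});
```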
### External retailer sites
Targets of the scrapers (incoming) and of the deep-link 302s (outgoing). Not part of 961tech; documented for completeness.
## Interfaces
| From | To | Protocol | When |
|---|---|---|---|
| Web app | Postgres | Prisma (TCP, password auth) | every request |
| Web app | Redis | BullMQ enqueue (TCP) | when scheduling a job (M2) |
| Scraper workers | Redis | BullMQ consume | continuous |
| Scraper workers | Postgres | Prisma | per scraped listing |
| Scraper workers | Retailer sites | HTTPS GET | per scrape |
| Web app | Email service | HTTPS API | when sending alert (M2) |
| Web app | Auth provider | OAuth / email link | sign-in (M2) |
| Web app → user | Browser | HTTP/SSE for streaming RSC | every request |
## Why this split
- Web vs scrapers — different failure modes (web is request-driven, scrapers are batch-driven), different scaling shapes (web scales with traffic, scrapers scale with retailer count × refresh rate).
- Postgres as one DB — the data is small enough (low-millions of rows max) that no read replicas are needed for a long time. One Postgres serves both web and workers.
- Redis just for queueing — no cache layer in M1. The web app reads Postgres directly. Caching can be added later if we measure something slow.
For the deployment view (where each runs), see Deployment.