How It's Built

Full transparency. Every decision, every trade-off, every line of code.

Built by Zachary • Powered by Claude

The Premise

fivewyze is a 24/7 scheduled streaming platform — auth, payments, real-time state machines, a marketplace, and a transparency dashboard. It was designed and shipped with AI as co-engineer from day one.

This page documents every major system: the problem it solves, the alternatives that were considered, what was actually built, and how the human-AI collaboration played out. The code is open source.

Architecture Overview

┌──────────────────────────────────────────────────────────────────┐
│                          VIEWER BROWSER                          │
│                                                                  │
│  ┌────────────┐  ┌────────────┐  ┌────────────┐  ┌────────────┐  │
│  │  Schedule  │  │    Live    │  │ On-Demand  │  │Transparency│  │
│  │  TV Guide  │  │   Player   │  │  Library   │  │ Dashboard  │  │
│  └─────┬──────┘  └─────┬──────┘  └─────┬──────┘  └─────┬──────┘  │
│        │               │               │               │         │
│  ┌─────┴───────────────┴───────────────┴───────────────┴──────┐  │
│  │            Bun.build() TypeScript (ES Modules)              │  │
│  │     supabase-client / auth-helpers / realtime-manager      │  │
│  └───────────────────────────┬────────────────────────────────┘  │
└──────────────────────────────┼───────────────────────────────────┘
                               │ anon key + RLS
                               ▼
┌──────────────────────────── Supabase ────────────────────────────┐
│  Auth    │  Realtime (Broadcast)  │  Edge Functions   │  Cron    │
│  Email + │  schedule:{channel_id} │  Admin ops        │  Every   │
│  OAuth   │  auction:{auction_id}  │  Payment verify   │  1 min   │
│          │  user:{user_id}        │  Content review   │          │
├──────────┴────────────────────────┴───────────────────┴──────────┤
│                         PostgreSQL + RLS                         │
│       users · channels · schedule_slots · auctions · bids        │
│        content_submissions · transactions · notifications        │
│         channel_shares · share_holdings · platform_stats         │
└──────────────────────────────────────────────────────────────────┘

Case Study: Authentication

Problem

A streaming platform needs frictionless sign-up. Every extra step loses users. But we also need crypto wallet connection for payments later.

Approach

We considered three options: wallet-first auth (Web3 native), email-only (simple but high friction), or OAuth + email with wallet as a separate connection. We chose option three.

Solution

Supabase Auth handles email/password, Google, and Discord. The entire OAuth flow runs client-side — signInWithOAuth() redirects to the provider, returns to /auth/callback, and the Supabase client auto-detects the session from the URL hash. No server-side callback routes.
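The hash-detection step is handled internally by supabase-js when detectSessionInUrl is enabled, but the mechanism is worth seeing: the provider redirects back with tokens in the URL fragment, which the browser never sends to any server. A minimal sketch of that parsing (illustrative only — the function name is ours, not Supabase's):

```typescript
// Sketch of what the Supabase client does on /auth/callback:
// read the OAuth tokens out of the URL fragment. The fragment
// (everything after #) stays client-side, so no server route is needed.
function parseAuthFragment(url: string): Record<string, string> {
  const hash = new URL(url).hash.replace(/^#/, "");
  return Object.fromEntries(new URLSearchParams(hash).entries());
}

const tokens = parseAuthFragment(
  "https://example.com/auth/callback#access_token=abc&refresh_token=def&token_type=bearer"
);
console.log(tokens.access_token); // "abc"
```

In production you never call anything like this yourself; the Supabase client picks the session up automatically on page load.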

Wallet connection is a separate action post-login. Your identity is your email. Your wallet is your payment method. Clean separation.

AI Collaboration

Claude mapped the entire OAuth flow, identified that Supabase handles the heavy lifting client-side, and built the callback page, auth helpers, and CSP headers in a single pass. The human decision was which providers to support. The AI handled how.

Case Study: Real-Time Schedule

Problem

A 24/7 streaming platform needs second-accurate state transitions. Slots must go live on time, auctions must close precisely, and every viewer's UI must reflect the current state instantly.

Approach

We debated: client-side timers (unreliable), WebSocket push from a custom server (complex), or database-driven transitions with realtime broadcast. We chose the database as the source of truth.

Solution

A single pg_cron function runs every 60 seconds. It atomically handles six state transitions: open auctions, anti-snipe extensions, close auctions, start airing slots, complete aired slots, and expire unpaid reservations. One function. One transaction. No race conditions.
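The production transitions live in that single SQL function, but the shape of the state machine is easy to show. A TypeScript sketch of two of the six transitions (field and state names are illustrative, not the real schema):

```typescript
// Model of two schedule transitions the cron pass performs:
// "start airing" when a slot's start time arrives, and
// "complete" when its end time passes. The real version runs
// inside one SQL transaction, covering all six transitions.
type SlotState = "scheduled" | "airing" | "completed";

interface Slot {
  state: SlotState;
  startsAt: number; // epoch ms
  endsAt: number;   // epoch ms
}

function transition(slot: Slot, now: number): SlotState {
  if (slot.state === "scheduled" && now >= slot.startsAt) return "airing";
  if (slot.state === "airing" && now >= slot.endsAt) return "completed";
  return slot.state; // no transition due yet
}
```

Because every transition is time-driven and evaluated in one pass, a slot can never skip a state or be advanced twice by concurrent writers.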

Supabase Broadcast (not Postgres Changes) pushes updates to clients. Broadcast scales independently of RLS — no per-row permission checks on every realtime event.

AI Collaboration

The state machine was designed collaboratively — mapping every valid transition on a whiteboard-style conversation, then translating it into a single SQL function. Claude wrote the GiST exclusion constraint that prevents overlapping slots at the database level, not the application level.

Case Study: Marketplace & Payments

Problem

Selling airtime requires atomic transactions. If two people try to buy the same slot, exactly one must win. Auctions need anti-snipe protection. Every dollar must be traceable.

Approach

Reserve-then-pay: atomically lock a slot for 5 minutes, then confirm payment. This prevents front-running and double-purchases without requiring real-time payment confirmation.
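In production the lock is taken inside Postgres; this in-memory TypeScript model only illustrates the "exactly one reservation wins, and it expires if unpaid" shape (all names here are illustrative):

```typescript
// Reserve-then-pay sketch: the first caller locks the slot for
// five minutes; later callers are rejected until the hold expires.
// The real lock is a row lock inside Postgres, not a Map.
const RESERVATION_TTL_MS = 5 * 60 * 1000;

const reservations = new Map<string, { userId: string; reservedAt: number }>();

function tryReserve(slotId: string, userId: string, now: number): boolean {
  const existing = reservations.get(slotId);
  if (existing && now - existing.reservedAt < RESERVATION_TTL_MS) {
    return false; // slot is held; payment window still open
  }
  reservations.set(slotId, { userId, reservedAt: now });
  return true;
}
```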

Solution

Fixed-price slots use an atomic SELECT ... FOR UPDATE reservation. Auctions use a separate bid table with anti-snipe logic — bids in the last 2 minutes extend the auction by 2 minutes. The pg_cron function handles all of this in the same pass as schedule transitions.
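The anti-snipe timing rule in isolation, as a TypeScript sketch (the real logic runs in the same SQL pass as the other transitions; this restates only the rule as described above — extend the close by two minutes when a bid lands inside the final two minutes):

```typescript
// Anti-snipe rule: a bid in the last 2 minutes pushes the auction
// end out by 2 minutes, so a last-second bid can always be answered.
const SNIPE_WINDOW_MS = 2 * 60 * 1000;

function endTimeAfterBid(endsAtMs: number, bidAtMs: number): number {
  const remaining = endsAtMs - bidAtMs;
  if (remaining > 0 && remaining <= SNIPE_WINDOW_MS) {
    return endsAtMs + SNIPE_WINDOW_MS; // extend the auction
  }
  return endsAtMs; // early bid (or auction already closed): no change
}
```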

Channel shares represent ownership stakes. Revenue from ads splits proportionally. The data model is designed for future ERC-20 migration, but launches as platform-managed to avoid securities law complexity.

AI Collaboration

Claude identified the front-running risk in the original "pay immediately" design and proposed the reserve-then-pay pattern. The anti-snipe logic was adapted from auction platform best practices that Claude surfaced during planning.

Case Study: The Live Player

Problem

When a schedule slot transitions, the video player must switch sources without a flash of empty space. Standard iframe replacement causes a visible blank frame.

Approach

Double-buffered iframes. Create the next iframe off-DOM, wait for its load event, then swap it in. The viewer never sees a blank frame.

Solution

Two iframe slots alternate. The "back buffer" loads the next video while the current one plays. On transition, CSS opacity handles the crossfade. Combined with Supabase Broadcast for the transition signal, the viewer experience is seamless.
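The swap logic can be modeled without a browser. In this sketch the real iframes are replaced by a minimal Buffer interface so the pattern is visible; in the actual player, loading the back buffer means setting iframe.src and awaiting its load event, and visibility is a CSS opacity crossfade:

```typescript
// Double-buffered player model: the hidden back buffer loads the
// next source, then the buffers swap roles. The viewer only ever
// sees a fully loaded frame.
interface Buffer {
  src: string | null;
  visible: boolean;
}

class DoubleBufferedPlayer {
  private front: Buffer = { src: null, visible: true };
  private back: Buffer = { src: null, visible: false };

  transitionTo(nextSrc: string): void {
    this.back.src = nextSrc;    // browser: set iframe.src, await load
    this.back.visible = true;   // browser: crossfade via CSS opacity
    this.front.visible = false;
    [this.front, this.back] = [this.back, this.front]; // swap roles
  }

  currentSrc(): string | null {
    return this.front.src;
  }
}
```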

AI Collaboration

This was a classic "AI knows the pattern" moment. Double-buffering is well-known in graphics programming, but applying it to iframe video players is niche. Claude connected the dots and implemented the buffer swap logic.

Case Study: Security Model

Problem

A platform handling money needs defense in depth. The anon key is public (embedded in client JS). Row-Level Security must be airtight.

Approach

Trust the database, not the application. Every table has RLS policies. The service_role key never leaves the server. CSP headers block inline scripts and restrict connections to known domains.

Solution

RLS policies enforce: users can only read their own data (except public profiles), only channel owners can modify slots, only authenticated users can bid, and platform stats are read-only for everyone. The anon key can only read public data. Admin operations go through Edge Functions using the service_role key.
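The read rules can be restated as a single predicate. This TypeScript model is not the RLS policies themselves (those are SQL, attached per table); it only writes out the same intent so it can be checked at a glance. Table and field names are illustrative:

```typescript
// Model of the read rules described above: platform stats are public,
// user rows are readable only by their owner unless the profile is public.
type Role = "anon" | "authenticated";

interface ReadRequest {
  role: Role;
  userId?: string;       // set when authenticated
  table: "users" | "platform_stats";
  rowOwnerId?: string;   // owner of the row being read
  isPublicProfile?: boolean;
}

function canRead(req: ReadRequest): boolean {
  if (req.table === "platform_stats") return true; // read-only for everyone
  if (req.isPublicProfile) return true;            // public profiles
  return req.role === "authenticated" && req.userId === req.rowOwnerId;
}
```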

Security headers in vercel.json: strict CSP, HSTS, X-Frame-Options, X-Content-Type-Options. OAuth domains explicitly whitelisted in connect-src and frame-src.

AI Collaboration

Claude wrote every RLS policy and the CSP headers. The human reviewed each one. This is where AI shines — exhaustive, consistent security rules that a human might miss or get tired writing. The AI doesn't get tired at policy number 15.

Case Study: Database Design

Problem

Eleven tables, complex relationships, state machines, audit trails. The schema needs to enforce invariants that application code shouldn't be responsible for.

Approach

Push logic into Postgres. Triggers, exclusion constraints, check constraints. If the database says it's valid, it's valid. No ORM layer between you and the truth.

Solution

A GiST exclusion constraint prevents overlapping schedule slots — the database physically cannot store two slots on the same channel at the same time. Check constraints enforce valid state transitions (a slot can't go from completed back to airing). A trigger on auth.users auto-creates the public.users row — OAuth users get their profile without any extra code.
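Postgres evaluates the exclusion constraint with range operators; the equivalent overlap predicate, written out in TypeScript for illustration (the database enforces this, application code never has to):

```typescript
// Two slots conflict when they share a channel and their half-open
// time ranges [startMs, endMs) overlap — exactly what the GiST
// exclusion constraint rejects at insert time.
interface SlotRange {
  channelId: string;
  startMs: number;
  endMs: number; // half-open: a slot ending at t doesn't overlap one starting at t
}

function conflicts(a: SlotRange, b: SlotRange): boolean {
  return (
    a.channelId === b.channelId &&
    a.startMs < b.endMs &&
    b.startMs < a.endMs
  );
}
```

Note the half-open ranges: back-to-back slots (one ending exactly when the next starts) are valid, which is what a 24/7 schedule needs.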

Channel share supply is enforced by trigger: the sum of all holdings can never exceed the channel's total_supply. This is a database-level invariant, not an application-level check.
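The trigger's invariant, restated in TypeScript (the real enforcement is in Postgres; function names here are ours, for illustration only):

```typescript
// Invariant: the sum of all holdings never exceeds total_supply.
// The trigger rejects any write that would break this.
function withinSupply(holdings: number[], totalSupply: number): boolean {
  const sum = holdings.reduce((acc, h) => acc + h, 0);
  return sum <= totalSupply;
}

function canIssue(holdings: number[], amount: number, totalSupply: number): boolean {
  // Would issuing `amount` new shares break the invariant?
  return withinSupply([...holdings, amount], totalSupply);
}
```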

AI Collaboration

Schema design was deeply collaborative. The human defined the business rules in plain English. Claude translated them into constraints, triggers, and indexes. Four migrations, each reviewed before running against production. The GiST exclusion constraint for non-overlapping slots was a Claude suggestion — the human's original approach was an application-level check.

What We Didn't Build

Taste is as much about what you leave out as what you build. Here's what we deliberately skipped:

No React, no Vue, no framework
Vanilla TypeScript + Bun.build(). ES modules with code splitting. The entire client JS is under 20KB gzipped. Frameworks solve problems we don't have.
No ORM
Direct SQL via Supabase client. The database is the application. An ORM would hide the RLS policies and constraints that are the core of the security model.
No custom video infrastructure
Rumble handles video hosting and CDN. We handle scheduling and commerce. Why build a video platform when we can build a platform around video?
No admin SPA
The admin panel is server-rendered HTML with Supabase queries. It doesn't need client-side routing or state management. It needs to show data and run actions.
No on-chain tokens at launch
Channel shares are platform-managed with a data model ready for ERC-20 migration. Shipping today beats tokenizing tomorrow.

Tech Stack

Bun + Elysia.js

Runtime & dev server. TypeScript-first, fast builds, zero config.

Supabase

Postgres, Auth, Realtime, Edge Functions, pg_cron. The entire backend in one service.

Vercel

Static hosting + serverless API routes. Zero-config deploys from git push.

Rumble

Video hosting via iframe embeds. oEmbed API for metadata.

Rumble Wallet

USDT payments. Crypto-native commerce without the complexity.

Claude

AI pair programmer. Architecture, code, security policies, documentation. The co-engineer.

The Human-AI Workflow

This isn't "AI-generated code." It's a collaboration model where the human owns the product decisions and the AI handles implementation at speed.

Human decides WHAT, AI figures out HOW
Every feature starts with a product decision: what problem are we solving, what trade-offs are we making, what are we deliberately not building. Claude maps the implementation path, writes the code, and flags risks.
AI writes first, human reviews everything
Claude generates migrations, components, and security policies. Every diff is reviewed before it ships. This produces the throughput of a team with the consistency of a single author.
AI remembers context, human provides taste
Claude tracks the full architecture, file relationships, and design decisions across sessions. The human decides which features to ship, which to skip, and what the product should feel like.
Documentation as a byproduct
This page, the deployment guide, the transparency dashboard — all produced through conversation. Documentation isn't a separate task when your co-engineer writes prose as naturally as code.

Open Source

This entire platform is open source under AGPL v3. Fork it. Deploy your own instance. Register your channel on the directory.

The code is the documentation. If you want to see how something works, read the source.