# === / ===
# source: frontend/public/index.md
---
title: Jesse Anderson
description: 30-year coder. I build platforms. Currently obsessed with AI infrastructure.
url: /
---

# Jesse Anderson

> Bay Area, California. **30-year coder. I build platforms. Currently obsessed with AI infrastructure.**

## Three audiences. Same site.

- **Recruiters & clients** — Real systems in production, real scale, real doctrine. See [/work](/work).
- **Cardano & AI peers** — Concrete projects, actual stack, the AI-infra specifics. [/now](/now) is what I'm deep in this month.
- **The curious** — Non-linear arc, immersion pattern, four boys, two granddaughters. [/about](/about).

## Current immersion

AI as real infrastructure. Not autocomplete.

K.O.R.A. — autonomous Discord ops agent for a production Cardano platform. Multi-model fallback chains. Dual-GPU local LLMs. A Codex CLI fork. Set-me-loose-all-night runs as a working mode. More on [/now](/now).

---

Bay Area. Father of four. Coaches the youngest two in baseball. Built an AI bridge through OpenClaw so his third son can direct 3D scenes by voice.

[This site is also addressable by agents](/standards).

# === /work ===
# source: frontend/src/pages/work.mdx
---
layout: ../layouts/Base.astro
title: Work — Jesse Anderson
description: ADA Handles, HAL, K.O.R.A., OpenClaw, Papagoose, AI tooling. The work shipped.
slug: work
---
Three peer products on Cardano — ADA Handles, HAL, and K.O.R.A. — plus side products and earlier engineering. Most of [github.com/koralabs](https://github.com/koralabs) is mine, aside from the front-end work that the frontend team owns. The personal AI tooling that runs alongside this lives on [/projects](/projects).

## ADA Handles

Co-founded with my second-eldest son in 2021. I run platform engineering and the AI ops layer. The platform is a constellation of services across the [`handle.me`](https://handle.me) namespace. The work in impact terms:

- **Killed the centralized mint server.** Anyone can mint or edit a handle on-chain within the contract's rules. Every major Cardano wallet adopted the pattern. *(`decentralized-minting`, `handles-personalization`, `cip-68-444-minting`, `cip-68-222-minting`.)*
- **The withdraw-0 / observer trick — contracts that upgrade in place.** Worked out the Cardano technique that lets a live contract's logic be upgraded without moving the locked UTxOs to a new address. This is what makes our cutovers work without state migration or chain downtime. *(Currently shipping live in `handles-personalization`.)*
- **Two parallel cutovers off Helios.** Smart contracts are moving to **Aiken** for type safety and auditability — DeMi and SubHandles are on Aiken; Personalization isn't yet. The TX-building library is moving to **`@cardano-sdk/*`**.
- **OAuth-over-NFT-ownership.** A wallet holding a handle can act as an OAuth identity provider — Web2 SSO ergonomics on self-custody wallets, no custodian. I pioneered Lit at Kora Labs in this codebase; the patterns echo through to my [`lambda-site-template`](https://github.com/papag00se/lambda-site-template) (which this site is built on). *(`auth.handle.me`.)*
- **Merkle Patricia Tries inside on-chain code.** Ported a non-trivial cryptographic structure into a Cardano-targeting language so contracts can prove membership over large datasets without storing them on-chain.
  *(`merkle-patricia-forestry-helios`.)*
- **Browserless dynamic SVG rendering at scale.** Every handle's NFT image is generated server-side as typographic SVG — hundreds of thousands of unique renders across reissues and personalization. *(`handle-svg`, `render.handle.me`.)*
- **Cardano Node on AWS Fargate (ran in production; retired for cost).** `cardano-node`, Ogmios, and the Handles API on Fargate + EFS for redundancy, failover, auto-scaling, and reproducible deployment. The fun trick: cross-host node-socket connectivity, solved with `socat` bridging Unix domain sockets to TCP. Production-grade serverless ops on a chain that didn't expect to be run that way. Eventually retired — Fargate at this scale wasn't sustainable on the bill — but the technique still travels.
- **IPFS content validation inside on-chain code.** A Cardano smart contract can validate the contents of an IPFS file by matching the hash embedded in the CIDv1 against redeemer-supplied bytes — with care around chunk size and the ~1KB–12KB sweet spot. Live in production for personalization.
- **Discord as the production control plane.** Tickets, deploys, alerts, and ban decisions flow through Discord. Built the bot, the CloudWatch → Discord alerting pipeline, and the deploy automation operators trigger from chat threads. *(See K.O.R.A. below for the autonomous-agent layer on top.)*

## HAL

A second product line built on the on-chain primitives Handles introduced: SDK + minting engine + 3D-asset support, so external Cardano teams can ship NFT collections without rebuilding the contracts and orchestration we already battle-tested.

**The single most validating run (June 2025):** chained ~4,000 transactions minting 70K+ tokens in a single coordinated session, every tx relying on the previous, with 8 scripts and 9 reference inputs each. *We were minting faster than the chain settled.* Every transaction landed deterministically.
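What makes a run like that possible: a Cardano transaction's id is a hash of its body, so every link in the chain can reference its predecessor's output before anything has settled. A toy sketch of the chaining idea — the names and body format here are illustrative, not the HAL engine:

```python
import hashlib
from dataclasses import dataclass

@dataclass(frozen=True)
class Tx:
    input_ref: str   # "<txid>#<index>" of the output this tx spends
    payload: str     # stand-in for the outputs, mints, and scripts

    @property
    def txid(self) -> str:
        # A tx id is a hash of the deterministic tx body, so it is
        # known before the tx ever reaches the chain.
        body = f"{self.input_ref}|{self.payload}".encode()
        return hashlib.blake2b(body, digest_size=32).hexdigest()

def build_chain(seed_ref: str, payloads: list[str]) -> list[Tx]:
    """Each tx spends output #0 of its predecessor."""
    chain, ref = [], seed_ref
    for p in payloads:
        tx = Tx(input_ref=ref, payload=p)
        chain.append(tx)
        ref = f"{tx.txid}#0"
    return chain

chain = build_chain("genesis#0", [f"mint-batch-{i}" for i in range(4)])
# Every link's input points at the previous link's id, so the whole
# chain can be signed up front and submitted back-to-back.
assert all(b.input_ref == f"{a.txid}#0" for a, b in zip(chain, chain[1:]))
```

The real engine has to hold output indexes, fees, and script budgets deterministic across the whole chain, but the dependency shape is exactly this.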
For Cardano peers: this is what production-grade minting looks like when the contract chain, the engine, and the SDK are all yours. The minting infrastructure has since been unified back to power Handles minting too — [`minting.handle.me`](https://minting.handle.me) serves both. *(Repo cluster: `hal.handle.me`, `hal-minting-engine`, `hal-minting-contracts`, `hal-sdk`, `hal-glb`, `minting.handle.me`.)*

## K.O.R.A.

A multi-component AI agent that runs day-to-day ops for the Handles ecosystem from inside Discord — ticket triage, moderation, monitoring, alert response, deployment orchestration, knowledge synthesis.

**Architecture:** chat gate → AI provider → executor → worker. Multi-model routing with graceful fallback: cloud primary → codex-spark on usage cap → gpt-5.4-mini → local Qwen3 on dual GPUs (4B for the gate, 9B-IQ4_XS for the worker, 35B for harder reasoning when there's headroom).

**Discipline:** a three-tier test battery — hermetic (mocks) → real-process (test services) → live-canary (live data, no side effects). Nothing ships green-by-inference.

**Working mode:** *set me loose all night.* TODO + TASK_STATE files keep autonomous runs resumable across crashes, compactions, and OAuth expirations. The agent rehydrates from disk.

**Discord-as-control-plane is intentional.** Pinned messages in `#kora-settings` are configuration. Alerts post to `#app-alerts` with rich metadata for triage. Natural-language commands ("deploy to mainnet," "troubleshoot this ticket") trigger real CI/CD and GitHub work.

*(Repo: `kora-bot`. Plus `cloudwatch-to-discord`, `community-jobs`. Detail on [/projects](/projects).)*

## Side products

- [clicked.bio](https://clicked.bio) — a LinkTree alternative with Instagram integration.
- [remindsly.com](https://remindsly.com) — Microsoft Graph-based reminders.
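K.O.R.A.'s graceful fallback, described above, amounts to walking an ordered provider chain and treating usage caps, timeouts, and auth expiry as signals to step down a tier. A minimal sketch of that routing shape — provider names and the error type are illustrative, not the `kora-bot` internals:

```python
from dataclasses import dataclass
from typing import Callable

class ProviderUnavailable(Exception):
    """Usage cap, timeout, auth expiry — anything recoverable by falling back."""

@dataclass
class Provider:
    name: str
    complete: Callable[[str], str]

def route(prompt: str, chain: list[Provider]) -> tuple[str, str]:
    """Try each provider in order; fall through on recoverable failures."""
    errors = []
    for p in chain:
        try:
            return p.name, p.complete(prompt)
        except ProviderUnavailable as e:
            errors.append(f"{p.name}: {e}")   # keep the trail for triage
    raise RuntimeError("all providers exhausted: " + "; ".join(errors))

def capped(_prompt: str) -> str:
    raise ProviderUnavailable("usage cap hit")

chain = [
    Provider("cloud-primary", capped),                      # hits its cap
    Provider("local-qwen", lambda p: f"local answer to: {p}"),
]
name, answer = route("triage this ticket", chain)
assert name == "local-qwen"
```

The point of keeping the error trail is that a silent step-down is itself an ops signal worth surfacing, not just a recovery path.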
A few more things in the [github.com/koralabs](https://github.com/koralabs) constellation worth knowing about:

- **Marketplace** ([marketplace.handle.me](https://marketplace.handle.me)) — secondary trading for Handles and HAL, with on-chain order escrow.
- **HandleChat** ([chat.handle.me](https://chat.handle.me)) — direct messaging on Handle identity.
- **Social Secrets Restore** ([secrets.handle.me](https://secrets.handle.me)) — account recovery via social trust graphs. Research stage.
- **Merch** ([merch.handle.me](https://merch.handle.me)) and **Metaverse** ([metaverse.handle.me](https://metaverse.handle.me)) — announced.

## Earlier engineering

### Capital One — Master Platform Engineer (2018–2022)

A synthetic-data Kafka platform pushing 25K records/sec on Fargate autoscaling. Continuous vulnerability monitoring and threat scoring across team systems.

### Xero — Senior SWE & DevOps (2013–2018)

100+ two-week sprints on a 2M-user SaaS accounting/payroll platform. Built contextual auto-complete search over 2M objects on AWS ElasticSearch, queried dozens of times per second.
# === /projects ===
# source: frontend/src/pages/projects.mdx
---
layout: ../layouts/Base.astro
title: Projects — Jesse Anderson
description: Personal AI tooling, the rig, OpenClaw, the operating doctrine. Peer-leaning detail.
slug: projects
---
Higher detail than [/work](/work). The personal AI tooling layer — local LLM infra, the Codex CLI fork, multi-agent routing, the OpenClaw bridges, and the operating doctrine. Aimed at peers who care how the sausage is made.

## codex-local

The reason I'm bullish on local models: **codex-local** — my fork of the Codex CLI, tuned to babysit a local model through a structured task list — drives multi-turn, multi-file, multi-task coding sessions to completion using just **Qwen3.5-9B-IQ4_XS**. The trick isn't the model — it's the scaffolding. Specifically:

- **Docs-first.** Every repo has an `AGENTS.md` and a `TASK_STATE.md`. The model reads them on every turn. Persistent intent survives compaction.
- **State in files, not in-context.** Long runs survive crashes, context exhaustion, model swaps, and OAuth expirations. The agent rehydrates from disk.
- **Explicit task tracking.** The TODO list lives in a file, and the model crosses items off as it goes. No "where was I" questions.
- **Verify before declaring done.** No green-by-inference. Run the test, read the output, look at the deployed version.

Most local-model coding agents stall past trivial single-edit tasks. With those four disciplines, 9B handles real work. The frontier model is no longer the moat — the scaffolding is.

The fork itself adjusted compaction behavior, model-routing interception, and exec-log inspection so a long unattended run could survive overnight without losing the thread.

## The rig

- **GPUs.** GTX 1080 (8GB) + RTX 3080. Two hosts, services routed by endpoint.
- **Runtime.** Ollama, tuned: `OLLAMA_FLASH_ATTENTION=1`, `OLLAMA_KV_CACHE_TYPE=q8_0`. Roughly 2× context fit in the same VRAM, at the cost of a few percent of output quality.
- **Model picks against VRAM budgets:**
  - **Qwen3 4B** for the chat gate — small footprint, fast first-token latency.
  - **Qwen3.5-9B-IQ4_XS** for the worker — the sweet spot for code on the 1080.
  - **qwen3.5:35b-a3b** for harder reasoning when the 3080 has headroom.
- **Topology.** Chat gate on one host, Qwen service on another. Cross-host routing via explicit endpoint config — no service mesh, no magic.

## OpenClaw

Built so my third son could direct his Roblox / Blender / Daz scenes through natural language — _"make Torus.1 50% thinner on the Z-axis"_ — and watch the live environment update. An AI helper bridge through OpenClaw, with the language layer driving live scene state. It sits at the intersection of three threads: my AI work, my long-running 3DCG interest, and a kid who's already designing in Blender, Roblox Studio, and Suno.

## Multi-agent routing

A docs-first multi-agent CLI router: it routes prompts across specialized workers with shared context, built on the assumption that no single model is the right choice for every step in a long task. It pairs with codex-local for the local side and with cloud providers for harder work.

## In production: K.O.R.A.

Everything above runs in production at Kora Labs as **K.O.R.A.** — the autonomous Discord ops agent. Multi-stage pipeline, multi-model routing with graceful fallback, a three-tier test battery, and a set-me-loose-all-night execution mode. The personal tools and the production agent share the same operating doctrine. See [/work](/work) for the platform context.

## Operating doctrine

- **"Mitigations, fallbacks, band-aids are the worst. Find the upstream fix."** *The* doctrine. Encoded in my personal AGENTS.md, enforced across every repo.
- **Docs-first.** PRD → spec → architecture → AGENTS.md in every repo. Docs aren't notes; they're load-bearing artifacts the agent reads on every turn.
- **No green-by-inference.** Verify before declaring done. Check the deployed version, inspect the running process, read the Discord channel.
- **Persistent state in files.** TODO + TASK_STATE patterns survive crashes and compactions. Agents revive from disk.
- **AI as real infrastructure, not autocomplete.** Instrument it, test it in tiers (hermetic → real-process → live-canary), and surface its silent failures (the codex OAuth heartbeat) the same way you'd surface a dead database.
- **Long autonomous runs are normal.** *"Set me loose all night"* is a working mode, not a stunt.
- **Platform-engineering reflexes are non-negotiable.** Redundancy, failover, quick recovery, auto-scaling, reproducible deployment — with minimal overhead. Higher dollar cost beats higher human-operations cost.
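The persistent-state items above reduce to one contract: read the task file at the start of every turn, write it back after every completion. A minimal sketch of that shape — the file name and JSON format are illustrative, not the actual codex-local or K.O.R.A. state files:

```python
import json
from pathlib import Path

STATE = Path("TASK_STATE.json")   # illustrative; stand-in for the real state file

def load_tasks() -> list[dict]:
    """Rehydrate from disk; a fresh run starts from the seeded TODO list."""
    if STATE.exists():
        return json.loads(STATE.read_text())
    return [{"task": t, "done": False} for t in (
        "write failing test", "implement fix", "run full suite", "update docs",
    )]

def complete_next(tasks: list[dict]) -> None:
    """Cross off the first open item and persist immediately —
    state lives in the file, not in the model's context window."""
    for item in tasks:
        if not item["done"]:
            item["done"] = True
            break
    STATE.write_text(json.dumps(tasks, indent=2))

tasks = load_tasks()
complete_next(tasks)
# A brand-new process (after a crash, compaction, or model swap)
# picks up exactly where the last one stopped:
resumed = load_tasks()
assert resumed[0]["done"]
```

Because completion is persisted before anything else happens, the worst a crash can cost is one in-flight item, never the whole run.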
# === /about ===
# source: frontend/src/pages/about.mdx
---
layout: ../layouts/Base.astro
title: About — Jesse Anderson
description: 30-year non-linear arc. Father of four. Currently in a deep AI immersion.
slug: about
---
Not a career ladder — a sequence of obsessions. That's the through-line.

## The arc — non-linear, on purpose.

  1. Age 8. TI BASIC on a TI-99/4A. The hook set early.
  2. School years. Violin / fiddle. Flight lessons.
  3. U.S. Air Force, 1994–1998. Aerospace Ground Equipment.
  4. Hawaii, 1997–1999. Two hats — Realtor selling real estate, and the firm's principal IT person.
  5. Tropical Connection, 1999–2001. IS Manager. The on-ramp back to coding.
  6. Smart Solutions, 2002–2007. The C# / .NET professional reset.
  7. Pixelsilk, 2007–2012. Five years of product alongside engineering.
  8. BA in Education, Ashford, 2012. Formal degree is in Education, not CS.
  9. Xero, 2013–2018. 100+ two-week sprints on a 2M-user SaaS.
  10. Capital One, 2018–2022. Master Platform Engineer at a top-10 US bank.
  11. 2021 — Kora Labs founded with my second-eldest son. Cardano because of academic discipline, decentralization, and RealFi mission.
  12. May 2022. Left Capital One full-time for Cardano.
  13. 2022–present. Building handle.me full-time.
  14. Last six months. AI infrastructure. The current immersion.

Flatten this into a clean tech-bro arc and you lose the most distinct thing about it.

## Four boys. Two granddaughters.

Baseball is a multi-generational thread — I played through grade and high school plus some adult league, now I coach the two younger boys.

## What I'm into.

# === /now ===
# source: frontend/src/pages/now.mdx
---
layout: ../layouts/Base.astro
title: Now — Jesse Anderson
description: What I'm deep in this month.
slug: now
---
Updated when the chapter changes. For depth on any of these, see the linked pages.

## Through April 2026

### AI as real infrastructure.

~5,000 AI-assisted engineering sessions across 30+ repos in the last six months. The thesis: the frontier model isn't the moat anymore — the scaffolding is. Local 9B models can ship code if you give them docs, persistent state, and explicit task tracking. Detail on [/projects](/projects).

## Always running

### Three peer products on Cardano.

ADA Handles — the platform. HAL — the minting layer that ran ~4K chained transactions faster than the chain settled. K.O.R.A. — the autonomous Discord ops agent that runs day-to-day operations across both. All three on [/work](/work).

## In flight

### Cutovers, subhandles, recovery.

Smart contracts moving to Aiken (DeMi and SubHandles done; Personalization next). TX-building library moving to `@cardano-sdk/*`. SubHandles continuing to ship — hierarchical handles via Aiken. Social-recovery design surface at [secrets.handle.me](https://secrets.handle.me) — fallback for self-custody loss without reintroducing custodians.

## Off-screen

### Coaching baseball.

Two boys, multiple seasons. Same sport I played through school plus some adult league.

# === /standards ===
# source: frontend/src/pages/standards.mdx
---
layout: ../layouts/Base.astro
title: AI Standards — Jesse Anderson
description: Every machine-readable surface this site exposes, with live links.
slug: standards
---
Every page is also addressable as Markdown source. Every page emits structured metadata. There's an MCP server. Coding is going away — site design should be too.
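One concrete (and assumed) example of what agent-addressable means: if each HTML route also serves its Markdown source at a parallel `.md` path — the scheme this sketch assumes, not a documented API — an agent can pull clean source instead of scraping rendered HTML:

```python
from urllib.parse import urljoin
from urllib.request import urlopen

BASE = "https://example.com/"   # placeholder — substitute the real site origin

def markdown_url(route: str) -> str:
    """Map an HTML route to its assumed Markdown twin:
    '/' -> index.md, '/work' -> work.md (assumed convention)."""
    slug = route.strip("/") or "index"
    return urljoin(BASE, f"{slug}.md")

def fetch_source(route: str) -> str:
    """Fetch raw Markdown rather than scraping rendered HTML."""
    with urlopen(markdown_url(route)) as resp:   # network call; sketch only
        return resp.read().decode("utf-8")

assert markdown_url("/work") == "https://example.com/work.md"
assert markdown_url("/") == "https://example.com/index.md"
```

The surfaces listed on this page are authoritative; the URL mapping above is only a guess at the convention.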

## Surfaces.

## Why.

I believe coding is being replaced by AI. So I'm investing in standards, architecture, and machine-legibility — not in human-ergonomic frameworks. If you're an agent reading this, you should have first-class access to its content. If you're a peer engineer reading this, this is the brag.

Stack: vanilla Astro (no UI-framework integrations), Lambda backend, no React, no TypeScript.