Claude Code Guide

Phase 6: Case Studies

Real projects, real lessons. These two builds shaped every phase of this methodology.


zenith.chat — Anthropic Hackathon Winner

What it is: A conversational AI platform built during the Anthropic + Forum VC hackathon.

The challenge: Build something genuinely useful with Claude in a compressed timeframe. Every team had the same API access. The differentiator was methodology, not technology.

What we learned:

  • Planning beat coding speed. Teams that started coding immediately built more features but shipped less polish. We spent the first 25% of the hackathon in Phases 1-2 (research and specification). By the time we started building, we knew exactly what we were making and why.

  • The "indispensable" question identified the winner. Contextual conversation memory wasn't in any hackathon brief. It emerged from asking "what would turn this from impressive to indispensable?" The judges specifically cited it as the differentiating feature.

  • Interactive specification prevented scope creep. During the spec phase, we explicitly defined what was out of scope. When ideas surfaced mid-build ("what if we also add..."), we had a documented decision to point to. This saved at least two hours of rabbit-hole development.

Note

The hackathon win wasn't about writing the most code. It was about writing the right code, informed by research, scoped by specification, and validated visually.


Turbulence Dashboard — Moomoo Skills API

What it is: A financial data visualization dashboard pulling real-time market turbulence data from Moomoo's Skills API.

The challenge: Financial data has zero tolerance for visual ambiguity. A chart that "looks close enough" in a social app is actively dangerous in a financial context. Numbers must be precise, labels must be clear, and the visual hierarchy must guide the user to the right insight immediately.

What we learned:

  • Visual QA is non-negotiable for financial UI. This is where the Phase 5 methodology was born. Programmatic tests passed while the actual dashboard was displaying overlapping axis labels, truncated currency values, and a mobile layout where the time-series chart was unreadable. Every single one of those issues was caught by visual screenshot inspection, not by code.

  • Domain expertise drives "indispensable." The feature that made the dashboard genuinely useful wasn't more data or prettier charts — it was proactive risk highlighting. Instead of making users scan a grid of numbers, the dashboard surfaced anomalies automatically with visual emphasis. This came directly from understanding how financial professionals actually work: they're scanning for exceptions, not reading every number.

  • Linear + MCP provided real-time visibility. During the build, I could check progress from my phone, re-prioritize stories on the go, and keep stakeholders updated without context-switching out of the development flow.
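The proactive risk highlighting described above can be sketched with a trailing z-score check: flag any point that deviates sharply from its recent history, so the user's eye goes straight to the exception. This is a minimal illustration, not the dashboard's actual implementation — the window size and threshold are assumptions:

```python
from statistics import mean, stdev

def flag_anomalies(values, window=20, threshold=3.0):
    """Return indices of points that deviate sharply from the trailing window.

    A point is flagged when it lies more than `threshold` standard
    deviations from the mean of the preceding `window` values.
    """
    flagged = []
    for i in range(window, len(values)):
        history = values[i - window:i]
        mu, sigma = mean(history), stdev(history)
        if sigma > 0 and abs(values[i] - mu) / sigma > threshold:
            flagged.append(i)
    return flagged

# A steady series with one spike: only the spike is surfaced.
series = [100.0 + 0.1 * (i % 5) for i in range(30)]
series[25] = 140.0
print(flag_anomalies(series))  # → [25]
```

The design choice matters more than the math: the dashboard applies the emphasis itself rather than asking the user to scan a grid of numbers, which matches how professionals actually read financial data.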

Tip

If you're building for a domain where precision matters — finance, healthcare, engineering — the visual QA phase isn't optional. It's the most important phase.
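One way to lock in a finding from visual QA is to convert it into a programmatic regression check. The overlapping-axis-label bug above, for example, can be guarded against by testing rendered label bounding boxes for intersection — a hypothetical sketch (the `Box` type and pixel coordinates are invented for illustration; screenshot inspection still catches what this can't):

```python
from typing import NamedTuple

class Box(NamedTuple):
    """Axis-aligned bounding box in pixels (left, top, right, bottom)."""
    left: float
    top: float
    right: float
    bottom: float

def boxes_overlap(a: Box, b: Box) -> bool:
    """True when two label boxes intersect (touching edges don't count)."""
    return (a.left < b.right and b.left < a.right
            and a.top < b.bottom and b.top < a.bottom)

def overlapping_labels(boxes: dict[str, Box]) -> list[tuple[str, str]]:
    """Return every pair of labels whose rendered boxes collide."""
    names = sorted(boxes)
    return [
        (x, y)
        for i, x in enumerate(names)
        for y in names[i + 1:]
        if boxes_overlap(boxes[x], boxes[y])
    ]

# Two x-axis tick labels rendered too close together:
labels = {
    "09:30": Box(10, 0, 50, 12),
    "09:35": Box(45, 0, 85, 12),  # starts before "09:30" ends
    "09:40": Box(90, 0, 130, 12),
}
print(overlapping_labels(labels))  # → [('09:30', '09:35')]
```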


Common patterns across both projects

  1. Research first, code second. Every successful project started with Phase 1's deep research.
  2. Spec as conversation, not document. The interactive specification caught critical issues early in both projects.
  3. Stories enable parallelism. Well-structured user stories let Claude Code work through features methodically without losing context.
  4. Ask the indispensable question at the 60-70% mark of the build. Both projects' differentiating features emerged from this prompt at the right moment.
  5. Visual QA catches what code can't. Especially in domains where visual precision matters.

Next: Getting Started →