Systems Thinking Over Syntax: Building Products in the Age of AI
How specification-driven development and AI agents enabled me to ship 100,000 lines of production code in seven weeks—without writing a single line of syntax.
My career started in software engineering, but it's been decades since writing production code was my primary work. As Zolidar's founder, I typically focus on product, partnerships, and company building.
My plan for the holidays was straightforward: update the investor pitch deck and narrative for our first institutional fundraise. I had already built our pitch deck in Cursor—a framework incorporating Khosla Ventures and Sequoia Capital methodologies—and open-sourced it as a reusable library (GitHub repo, LinkedIn post). The remaining work was reframing our narrative to highlight long-range potential for institutional investors.
That plan lasted about a day. While familiarizing myself with recent platform changes, I started addressing a few bugs. Those fixes led to detailed specifications, then prototypes, then full implementations. Why hand these off when I could just keep building? Seven weeks and ~100,000 lines of code later, I merged 100+ changes to production.
The pitch deck work still sits unfinished, but this validated my longstanding belief: systems-level thinking matters more than syntax. Google has a concept called Life of a Query—following how a request flows through every layer of infrastructure. That architectural understanding, paired with the right tools, changes everything.
What I Built
Visual Design & User Experience (~15,000 lines)
I updated 60% of our public-facing pages. But the most interesting work wasn't typical product features—it was building visual assets through code rather than design software.
I created the new Zolid AI logo entirely through specifications and AI assistance: SVG paths, gradient definitions, glow effects, all specified in detail and iteratively tweaked. The animated grid graphics on The Grid by Zolidar with synchronized beam effects followed the same approach. The static version, file size optimization, asset compression, and responsive behavior—all built through agentic coding rather than Figma or Illustrator.
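To make "visual assets through code" concrete, here is a minimal sketch of what specifying a logo as code can look like: a function that emits an SVG string with a gradient fill and a glow filter, all tweakable as text. The shapes, colors, and names here are illustrative assumptions, not the actual Zolid AI logo spec.

```typescript
// Hypothetical sketch: a logo defined entirely in code, so gradient stops,
// glow strength, and geometry live in version-controlled specs rather than
// a design tool. All values are illustrative.
function renderLogo(width: number, height: number): string {
  return `<svg xmlns="http://www.w3.org/2000/svg" width="${width}" height="${height}" viewBox="0 0 ${width} ${height}">
  <defs>
    <linearGradient id="brand" x1="0" y1="0" x2="1" y2="1">
      <stop offset="0%" stop-color="#6d28d9"/>
      <stop offset="100%" stop-color="#06b6d4"/>
    </linearGradient>
    <filter id="glow">
      <feGaussianBlur stdDeviation="4" result="blur"/>
      <feMerge>
        <feMergeNode in="blur"/>
        <feMergeNode in="SourceGraphic"/>
      </feMerge>
    </filter>
  </defs>
  <circle cx="${width / 2}" cy="${height / 2}" r="${Math.min(width, height) / 3}"
          fill="url(#brand)" filter="url(#glow)"/>
</svg>`;
}
```

Because the asset is a string of markup, iterating on it is a matter of editing the spec and re-rendering, which is exactly what makes it tractable for an AI agent.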
This isn't how most people think of Cursor. It's typically used for conventional feature development, not graphics work.
Platform Insights (~20,000 lines)
Third-party analytics tools create friction: learning new interfaces, switching contexts, checking separate dashboards. I built analytics directly into our platform instead. The system tracks user journeys from first visit through registration and beyond, with referral codes tracing which partners drive signups and automated bot filtering ensuring clean data. Push notifications deliver alerts for important events without anyone needing to check dashboards.
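The core of such a pipeline can be sketched in a few lines: record journey events, attribute registrations to partner referral codes, and drop obvious bot traffic before counting. The event shape, bot heuristic, and function names below are assumptions for illustration, not Zolidar's implementation.

```typescript
// Hypothetical sketch of in-platform analytics: referral attribution
// with automated bot filtering. Field names and heuristics are assumed.
interface JourneyEvent {
  userId: string;
  type: "visit" | "register" | "action";
  referralCode?: string;
  userAgent: string;
  timestamp: number;
}

// Crude user-agent heuristic; real bot filtering would combine signals.
const BOT_PATTERN = /bot|crawler|spider|headless/i;

function isBot(event: JourneyEvent): boolean {
  return BOT_PATTERN.test(event.userAgent);
}

// Count registrations per referral code, excluding bot traffic.
function signupsByPartner(events: JourneyEvent[]): Map<string, number> {
  const counts = new Map<string, number>();
  for (const e of events) {
    if (e.type !== "register" || isBot(e) || !e.referralCode) continue;
    counts.set(e.referralCode, (counts.get(e.referralCode) ?? 0) + 1);
  }
  return counts;
}
```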
This matters because we can now answer business-critical questions immediately: Which partner referrals convert? Which users need follow-up and support with onboarding? What can we improve about the product? The admin interface surfaces exactly these insights, customized to our specific needs rather than generic metrics from external tools.
Integration & Data Systems (~25,000 lines)
We have built The Grid—our knowledge graph of 60,000+ entities in the employee ownership ecosystem—to be the authoritative domain ontology that makes our AI products fundamentally different from generic AI tools. Generic chatbots don't know who the practitioners are, what relationships between entities mean, or how domain vocabulary actually works. The Grid provides that context. (See our introduction to The Grid.)
We had a solid foundation, but rough edges everywhere: stale code, technical debt, inconsistent data structures, duplicative schemas. Multiple hand-offs had left this code in a messy state. Our in-house CRM processed meeting transcripts, but the processing was unstable and mapping attendees to Grid entities was inconsistent and barely usable. LinkedIn authentication for free Grid access worked, but edge cases broke.
Specification-driven development forced architectural precision. I wrote detailed specs with explicit rules: matching logic for CRM-to-Grid entity resolution, conflict resolution strategies, entity deduplication, deprecating redundant object structures. For LinkedIn auth refactoring, the spec mapped every edge case through Clerk's OAuth capabilities before implementation. Architectural decisions happened upfront—problems caught before production, not discovered in it.
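To show the level of precision a spec like this demands, here is a sketch of the kind of matching rule one might pin down for CRM-to-Grid entity resolution: an exact email match is authoritative, with a normalized-name comparison as fallback only when it is unambiguous. The fields, rules, and names are illustrative assumptions, not the actual spec.

```typescript
// Hypothetical sketch of spec-level entity resolution rules.
// Real resolution would cover more fields and conflict strategies.
interface Person {
  id: string;
  name: string;
  email?: string;
}

// Normalize names so "John Smith!" and "john smith" compare equal.
function normalize(name: string): string {
  return name.toLowerCase().replace(/[^a-z ]/g, "").trim();
}

// Resolve a CRM attendee against Grid entities; null means "no safe match".
function resolveEntity(attendee: Person, grid: Person[]): Person | null {
  // Rule 1: exact email match is authoritative.
  if (attendee.email) {
    const byEmail = grid.find((g) => g.email === attendee.email);
    if (byEmail) return byEmail;
  }
  // Rule 2: normalized full-name match, accepted only if unambiguous.
  const byName = grid.filter((g) => normalize(g.name) === normalize(attendee.name));
  return byName.length === 1 ? byName[0] : null;
}
```

Writing rules at this granularity is what "architectural decisions happened upfront" means in practice: every ambiguous case has a defined outcome before an agent touches the code.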
The value compounds: every system integrating with the Grid improves data quality for all products. We're evolving it into a platform cooperative where members collectively own this knowledge infrastructure. But none of this happens without specifications that force those architectural decisions upfront.
Infrastructure & Code Quality (~40,000 lines)
Beyond visible features, substantial work went into the platform's foundation.
Security: Server-side rendering patterns ensure sensitive information (private contact details, internal notes) never reaches the browser. This is a fundamental architectural decision—protecting data at the server layer rather than hoping client-side code hides it properly.
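The pattern can be sketched as a server-side mapper that allow-lists the fields a browser may see, so sensitive data is stripped before serialization rather than hidden by client code. Field and type names below are illustrative assumptions.

```typescript
// Hypothetical sketch: protect data at the server layer by mapping a full
// record to a public DTO. Sensitive fields never enter the client payload.
interface ContactRecord {
  id: string;
  displayName: string;
  privateEmail: string;  // must never reach the browser
  internalNotes: string; // must never reach the browser
}

interface PublicContact {
  id: string;
  displayName: string;
}

// Runs on the server; only allow-listed fields survive the mapping.
function toPublicContact(record: ContactRecord): PublicContact {
  return { id: record.id, displayName: record.displayName };
}
```

The design choice is the allow-list direction: new sensitive fields added to the record are private by default, instead of leaking until someone remembers to hide them.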
Performance: I fixed memory leaks, resolved server component rendering failures, and cleaned up accumulated technical debt. Feature flags from GrowthBook had piled up—flags for features that launched months ago. Removing stale flags reduced page load times. Sentry error notifications revealed silently failing operations; fixing these improved stability. I also cleaned up unused database schema definitions and optimized structured data markup for search engines.
Quality: CodeRabbit provided AI-powered code review, catching patterns that human reviewers might miss late in development cycles.
Documentation
To ensure this work could be maintained and extended by the team, I created reference guides (roughly 2 pages each) covering everything from user roles to data architecture. Each document is self-contained, progressive (simple overview to deep detail), includes architecture diagrams, and uses language accessible to non-engineers.
The Method and What It Means
Each major feature started with 6 to 10 hours of specification writing—not documentation for human developers, but precise, actionable instructions for AI agents. The approach:
- Write detailed specs (data models, API boundaries, commit and PR boundaries, code snippets, architecture diagrams)
- Remove all "AI slop" (vague recommendations, optional paths, future dev suggestions, implementation mapped to conventional sprint plans)
- Implement with tight feedback loops
- Refine specs based on what implementation reveals
- Repeat
My React/Next.js syntax knowledge is years out of date. Didn't matter. The tight feedback loop did: spec → implement → verify → refine spec → continue. No standups, no interpretation gaps, no "that's not quite what I meant" cycles. When implementation revealed something the spec missed, I updated it immediately and kept building.
Seven weeks. 100+ production changes. ~100,000 lines of code. The stats tell the story: 3.86 billion tokens processed, with 3.6 billion served from cache. Estimated cost without my grandfathered plan: $1,343. Actual cost when using advanced thinking models: ~$150. This isn't theoretical—it's viable for anyone.
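Checking the arithmetic behind those figures: the quoted token counts imply a cache hit rate above 93%, and the quoted costs imply roughly an 89% reduction. The numbers are taken directly from the stats above; only the derived ratios are computed here.

```typescript
// Ratios implied by the quoted stats (figures from the text, not measured here).
const totalTokens = 3.86e9;  // tokens processed
const cachedTokens = 3.6e9;  // tokens served from cache
const cacheHitRate = cachedTokens / totalTokens; // ≈ 0.93, i.e. ~93% cached

const estimatedCost = 1343; // USD, without the grandfathered plan
const actualCost = 150;     // USD, approximate
const costReduction = 1 - actualCost / estimatedCost; // ≈ 0.89, i.e. ~89% cheaper
```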
The barrier to building software has shifted. It's no longer about writing syntactically correct code. It's about articulating requirements with architectural precision in a format AI agents can execute. The pitch deck work still awaits, but this approach will likely shape how I build narrative materials too.
