inspired by rebels

code changes shouldn’t kill your AI experiments

Samir, Sahil, and I were debating how to increase experimentation velocity in our AI workflows.

We’re building an AI storyboarding platform, and Sahil wanted to rapidly test different workflows to find the best results. He was frustrated because every new experiment required code changes (different models, system prompts, input text, input images, hyperparameters, etc.). With so many permutations, iterating manually was painfully slow.
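One low-lift pattern for this (a hypothetical sketch, not our actual implementation): pull every experiment knob out of the code and into a config file, so a new experiment is a new config rather than a code change. The field names and the `run_experiment` stub below are illustrative assumptions, not a real API.

```python
import json
from dataclasses import dataclass

@dataclass
class ExperimentConfig:
    # Every knob an experiment might vary lives here, not in code.
    model: str
    system_prompt: str
    input_text: str
    temperature: float = 0.7
    max_tokens: int = 1024

def load_config(path: str) -> ExperimentConfig:
    """Parse a JSON config file into a typed experiment spec."""
    with open(path) as f:
        return ExperimentConfig(**json.load(f))

def run_experiment(cfg: ExperimentConfig) -> dict:
    # Placeholder: a real pipeline would call the model API here
    # and log outputs for side-by-side comparison.
    return {"model": cfg.model, "temperature": cfg.temperature}
```

With this shape, launching ten variants means writing ten config files (or one sweep script that generates them), and the pipeline code itself never changes.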

The key insight: in a modern AI-first company, software engineering teams must actively support AI teams with tooling and processes. The inherent unpredictability of AI, compared to traditional software, makes rapid iteration and validation critical. Proper enablement lets the team experiment at high speed, quickly converging on what works and what doesn’t. That is exactly what will make or break the product.

This isn’t new. Back in 2018, OpenAI made a strategic bet on internal validation and data-labeling infrastructure to tame this same complexity.

#AI #LLMs #company #experimentation #hypothesis #startups #strategy