
Show Me Your Claude Chat


In a world where humans and AI are collaborating, you're not hiring a person anymore. You're hiring a human + AI agent system. Most interviews aren't designed for this.

If you only evaluate the human half, you're reading the conclusion without checking the working. You can't assess the quality of the system you're actually hiring.

This matters most in roles where the signal-to-noise ratio is low — product, marketing, HR, strategy. Take a product decision. A team bets on a feature after deep customer research and first principles reasoning. It flops — a competitor shipped the same thing three weeks earlier. The outcome tells you nothing about the quality of thinking that produced it.

A good poker player understands this. The decision and the outcome are two different things. In low-SNR roles, the quality of the decision-making process is the best available proxy for future outcomes.

So the question is — how do you evaluate the thinking, not just the person? Ask to see their Claude chat. Or whichever AI they use most, which itself tells you something about how they work.

Most people accept the first answer that AI gives them. They don't push back, change their minds, or evolve their thesis. Asking to see a Claude chat they're proud of gives you a glimpse into whether they performed expertise or chased truth. That's the one disposition you can't train into someone after they join.

The gut feeling experienced founders and hiring managers have built over years of hiring will start to fail them. Not because it was wrong — but because the world it was trained on no longer exists. Hiring for human+AI systems requires a new evaluation framework. The Claude chat is where that starts.


What to Watch For

When reviewing someone's AI conversation, here's what separates real thinking from performed expertise:

Signs of deep thinking

| Marker | What it looks like | What it signals |
|---|---|---|
| Reframing the question | The person restates the problem differently after the AI's first answer | They're not outsourcing thinking — they're using the AI to sharpen their own framing |
| Disagreeing with evidence | "I don't think that's right because [specific reason]" | They have a thesis before asking, and test it against the AI's response |
| Changing their mind explicitly | "Actually, your point about X changes how I see this. Let me revise..." | Intellectual honesty — they update beliefs when presented with better reasoning |
| Escalating specificity | Questions get narrower and more precise over the conversation | They're drilling toward truth, not browsing for confirmation |
| Introducing outside context | They bring in data, anecdotes, or domain knowledge the AI doesn't have | They're treating the AI as a collaborator, not an oracle |
| Killing their own darlings | They abandon a direction they were pursuing when the reasoning doesn't hold | They value being right over being consistent |

Red flags


Appendix: When Everyone Starts Optimizing Their Claude Chat

When a measure becomes a target, it ceases to be a good measure. Three ways to stay ahead:

#AI #LLMs #business #collaboration #company #contrarian #entrepreneurship #hypothesis #intelligence #startups #thinking #truth