
Mental models I keep returning to

How I frame a hard problem before I even open Claude.

These are the models I've internalized deeply enough that they shape how I structure a problem before writing a single prompt.

  • 01

    First principles, then analogies.

    Strip the question to what's actually true. Borrow patterns only after you've named the constraints.
  • 02

    Inversion.

    What would guarantee this fails? Now don't do that.
  • 03

    Second-order effects.

    The first order is what the user clicks. The second order is what they do tomorrow. The third order is what the system rewards over time. PMs who optimize for the first order ship things that quietly destroy the second and third.
  • 04

    Bounded uncertainty.

    Set the acceptable range for an AI response, not the exact answer. This is the engineering control most teams haven't internalized yet. (Borrowed from Nate B. Jones.)
  • 05

The cost of a bad decision is set by its irreversibility, not its visibility.

    Loud bad decisions get caught. Quiet bad decisions compound.
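
The "bounded uncertainty" model above lends itself to a small sketch: instead of asserting one exact answer from a model, you define the range of acceptable outputs and check the response against that range. This is an illustrative example only; every name in it is mine, not from any real library or from the app linked below.

```python
def within_bounds(response: str, min_words: int, max_words: int,
                  required_terms: set[str], banned_terms: set[str]) -> bool:
    """Accept any response inside the agreed bounds; reject everything else.

    A hypothetical validator: it bounds the answer (length, must-mention
    terms, must-not-mention terms) without pinning it to a single string.
    """
    words = response.lower().split()
    if not (min_words <= len(words) <= max_words):
        return False
    if not required_terms.issubset(words):
        return False
    return banned_terms.isdisjoint(words)

# Two different phrasings both pass, because the control sets a range,
# not an exact answer.
ok_a = within_bounds("refund issued within 5 days", 3, 12,
                     {"refund"}, {"guarantee"})
ok_b = within_bounds("your refund will arrive shortly", 3, 12,
                     {"refund"}, {"guarantee"})
```

The design choice is the point: the team agrees on the boundary, and any response inside it is acceptable by definition, which is a far more stable contract with a probabilistic system than demanding one exact output.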

Try them live in the First-Principle app →.