These change every single day, especially as I meet new people who shift my perspective.
❧
As of April 5th, 2026.
What Gets Rebuilt from Scratch
❧ Business Models Built on Outcomes
The basic idea: stop selling software, start guaranteeing results. You eat the downside if it doesn't work, and get paid well when it does. Insurance is the clearest example because the misalignment is so obvious: the carrier profits when claims don't get paid. Rethink that as outcomes-as-a-service and the whole architecture changes.
When incentives get this tightly aligned, you can't just bolt the new model onto the old stack. The entire thing gets rebuilt from scratch: how the product is instrumented, how the team is compensated, how risk gets priced.
❧ Reinforcement Learning That Actually Learns Like Humans Do
Current reinforcement learning (RL) is clever but narrow: models improving from binary signals, thumbs up or down, human preference labels. The next leap is richer feedback. Multi-dimensional reward models, synthetic self-critique, agents that generate their own training signal through iterated reasoning.
This is where psychometrics and I-O psychology become the actual bottleneck, not compute. If you want a model to learn better judgment, you need a precise theory of what good judgment looks like and how to measure it. That's a psychometrics problem.
The people building the next generation of RLHF pipelines and model evals need social scientists in the room, not just ML engineers.
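As a toy sketch of the "multi-dimensional reward" idea above: instead of a single thumbs-up/down bit, a judge scores each candidate output along several dimensions and the scores get aggregated. Everything here is hypothetical, the dimension names, weights, and scores are illustrative, not any real RLHF pipeline; picking the dimensions and weights well is exactly the psychometrics problem described above.

```python
from dataclasses import dataclass

# Hypothetical judgment dimensions; a real reward model would learn
# these from data rather than hand-code them.
DIMENSIONS = ("accuracy", "calibration", "reasoning", "tone")

@dataclass
class RewardVector:
    """A multi-dimensional score instead of a single preference bit."""
    scores: dict  # dimension -> float in [0, 1]

    def aggregate(self, weights: dict) -> float:
        # Weighted sum across dimensions. Choosing these weights is
        # the measurement problem, not the engineering problem.
        return sum(weights[d] * self.scores[d] for d in DIMENSIONS)

def rank_candidates(candidates: list[RewardVector], weights: dict) -> list[int]:
    """Return candidate indices ordered from best to worst."""
    return sorted(range(len(candidates)),
                  key=lambda i: candidates[i].aggregate(weights),
                  reverse=True)

# Two hypothetical candidate answers scored by a judge model.
a = RewardVector({"accuracy": 0.9, "calibration": 0.4,
                  "reasoning": 0.7, "tone": 0.8})
b = RewardVector({"accuracy": 0.7, "calibration": 0.9,
                  "reasoning": 0.8, "tone": 0.6})
weights = {"accuracy": 0.4, "calibration": 0.3,
           "reasoning": 0.2, "tone": 0.1}

print(rank_candidates([a, b], weights))  # b wins on weighted total
```

Note how the ranking flips depending on the weights: with these (assumed) weights, the better-calibrated answer outranks the more accurate one, which is the kind of trade-off a binary signal can't express.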
❧ Physical & Robotic Intelligence
The other branch of the RL frontier: agents operating in the physical world, learning through sight, sound, touch, and consequences that don't reset on a server. Every automation wave so far has left physical, embodied labor largely intact. That's starting to change.
Same bottleneck as above, actually. Building robots that can learn from human demonstration, fail gracefully in unstructured environments, and get better with experience. The hard part isn't the hardware or the base model. It's understanding how skill acquisition works in humans well enough to encode it.
❧ AI Meets Biology
The therapeutic framing undersells it. The bigger story is that we're finally closing in on biology's most stubborn open problems, the ones that have sat unsolved not because we lacked intelligence, but because the data was too messy, the interactions too nonlinear, the search space too vast for humans to work through manually. How proteins actually fold and misfold in context. How gene regulatory networks produce wildly different outcomes from nearly identical sequences. How the microbiome talks back to the brain.
Once AI starts filling in that map (think: every gene, every protein, every interaction surface) the downstream is enormous and it's not just therapeutics. I'm bullish on whatever happens when the foundational science finally catches up to the ambition that's been sitting there waiting for it.
What Stays Stubbornly Human
❧ Trust and Judgment as Moats
The more capable technology gets at execution (e.g., drafting, coding, researching, synthesizing), the more the residual human advantage concentrates in two things: judgment you can trust, and relationships built on that trust over time.
Who you know already matters enormously. As more cognitive labor gets handed off to models, it matters even more. The person with deep, maintained relationships in a domain has something that can't be replicated by a better prompt. Reputation, track record, the kind of credibility that accumulates through years of being right in front of the right people, nurturing those relationships, and being human with each other. None of that is in the training data.
I think the most durable human moat in the next decade isn't a skill. It's a trust network. I'm paying close attention to how that gets built, maintained, and verified.
❧ In Real Life (IRL) & Where Humans Choose Humans (Even When AI Could Do It)
Concert venues, dinner parties, book clubs, third places that aren't Starbucks. Digital tools should make physical experiences better, not replace them. I pay attention to people building infrastructure for real-world gathering, not just optimizing for screen time.
Live music. Bartenders who remember your order. Coaching, therapy, spiritual guidance. The consumer economy is splitting into two: things we'll gladly automate, and experiences we'll pay more for because a human is involved. The second category is underinvested and fascinating.
❧ People Operating at the Edge
Founders who've done something weird before building their startup. Extreme athletes turned founders. To me, the through-line isn't pedigree but pattern recognition from unusual vantage points. I love spotting high-potential people before the credentials catch up.
What We Can't Measure Yet
❧ Talent Data for a World Where Judgment > Knowledge
Traditional signals are decaying fast. Years of experience, pedigree, even technical skills: anyone, human or AI, can acquire those now. What matters: judgment under ambiguity, taste, synthesis, how someone thinks when there's no clear answer.
But we have no good way to measure this yet, for humans or AI agents. I'm excited about new primitives for evaluating potential over credentials: data that captures decision-making patterns rather than résumé line items.
❧ Knowing Who You're Actually Talking To
As agents multiply and synthetic "everything" gets cheaper, the most quietly destabilizing question in tech becomes: is this actually a human, and is it really them?
Every layer of the stack has a proposed answer, cryptographic, biometric, behavioral, social, and none of them fully holds. The realistic goal isn't a foolproof system; it's making the cost of deception high enough that it doesn't scale. I don't have a clean answer here, and I'm skeptical of anyone who says they do.
This is a sociotechnical problem, not a pure engineering one.
