Rules for Running Fast Labs
How you can make a magnetically controllable antibody in <1 year.
These are guidelines we've picked up working through Nonfiction. They're the principles that let us build a magnetically controlled antibody within a year, when nothing like that had been done before.[1] None of them are revolutionary on their own — but taken together, they're how a small team moves fast on hard problems in biology.
What > How > Why. It's better to have something that works than to know how to make it work better. It's better to know how to improve something than to understand, in theory, why it works. Understanding is valuable — but it follows execution, not the other way around.[2]
Every project looks worse before it looks better. When you're going 0→1, there's a long ugly middle where nothing seems to be working. Hold end goals steady, pivot freely on tactics. Nearly every project we've done has looked better at month 6 than at month 3 — and of course it usually looked pretty good at month 0 too, which is why we started it.[3]
Navigate noise with noise. The fitness landscape of biology is rugged and unpredictable.[4] When things aren't working yet, try many different approaches — not randomly, but deliberately. Design your constructs and experiments thoughtfully and consistently, then vary the angles of attack. Sometimes one method works dramatically better for reasons you don't yet understand. Running things multiple ways also helps you catch false positives faster, when data from different approaches doesn't line up. See Sid Meier's 2x variable rule.[5]
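To put the 2x rule in rough numbers, here's a sketch of the smallest fold-change a noisy assay can resolve. The CV values and the simple two-sample z approximation are illustrative assumptions, not figures from any of our actual assays:

```python
import math

def min_detectable_fold(noise_cv, n, z=2.0):
    """Smallest fold-change over baseline that clears ~2 sigma in a
    crude two-sample z approximation, for an assay with coefficient
    of variation `noise_cv` and n replicates per arm.
    Illustrative only -- not a substitute for a real power analysis."""
    return 1 + z * noise_cv * math.sqrt(2 / n)

# A clean assay (10% CV, triplicates) can resolve a ~1.16x change...
print(round(min_detectable_fold(0.10, 3), 2))  # 1.16
# ...but a noisy one (60% CV) needs roughly a 2x effect to stand out.
print(round(min_detectable_fold(0.60, 3), 2))  # 1.98
```

Which is the practical reading of the rule: in a noisy system, don't go hunting for 20% improvements — design the next round around effects big enough to punch through the noise.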
Speed, speed, speed. Gene synthesis and big sequencing runs are powerful tools, but they can breed complacency. Run sequencing in parallel while keeping the old-school attitude of just pushing a winning clone straight into the next experiment.[6]
Increase signal, not just cut noise. Engineered systems let you do both. If a measurement method is unreliable, cranking up the signal is often easier — and more satisfying — than trying to squeeze out every last bit of noise.
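A back-of-the-envelope sketch of why boosting signal tends to beat averaging away noise (the numbers are hypothetical; `S` and `sigma` stand in for a reporter's signal level and its background noise): averaging n reads only shrinks noise by sqrt(n), while a k-fold signal boost pays off linearly.

```python
import math

S, sigma = 10.0, 5.0  # hypothetical signal level and noise std dev

def snr_by_averaging(n):
    """SNR after averaging n independent reads: noise falls as sqrt(n)."""
    return S / (sigma / math.sqrt(n))

def snr_by_amplification(k):
    """SNR after a k-fold signal boost (brighter reporter, more copies
    per cell) at fixed noise: scales linearly with k."""
    return (k * S) / sigma

# Matching a 4x signal boost by averaging alone takes 16 replicates.
print(snr_by_amplification(4))  # 8.0
print(snr_by_averaging(16))     # 8.0
```

The square-root penalty is the point: every doubling of SNR via replicates quadruples the measurement burden, while engineering a brighter system pays off dose-for-dose.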
Hire great people and give them real problems. Not tasks. Not vision quests. Specific, unsolved problems with real stakes. Smart people want responsibility and the freedom to own outcomes. Give them that.[7]
Replication over publication. Get things into the hands of users. There's enormous value in a "hello world" that trivially proves, in someone else's hands, that the thing actually works. That kind of evidence compounds trust faster than any paper.[8]
Let scientists focus on science. Lab operations exist so scientists can wake up each day and think about the lab — not paperwork. Ordering processes, lab notebooks, documentation — all of it should offer multiple acceptable workflows. Let scientists pick the strategy that fits the way they naturally think, rather than forcing everyone into one rigid system.
Pioneer → Patent → Share → Publish. This order matters. Pioneer — work in a genuinely new area where you can actually get broad, meaningful patents. Patent — file fast, file often, file comprehensively. Share — get it into collaborators' hands, let academics run with it. Publish — work with collaborators to get it into journals. When companies partner with academics, everyone wins: the company gains insight it doesn't have in-house, and the academics get to share their work in an elevated way, work on things that are brand new, and show where they're really good.[9]
Verticalize your critical dependencies. This makes sense when what you're doing is novel. If you're doing cookie-cutter stuff, you can be a fully virtual biotech, which has its own advantages. But when you're doing things that are new, it's better to work directly with a sustained set of smaller partners over time, where you can accumulate expertise. External vendors add latency and irreproducibility, and when you own the process end-to-end you can iterate on it, simplify it, and make it cheaper — all at once.[10]
Outcomes, not optics. Do the hard work. Make things that are real. Solve the actual problems, not the ones that matter for academic kudos or financing narratives.[11]
Footnotes
1. We announced the MagBody on X. The original discovery — that you can dim a fluorescent protein with a magnet — is in Frank Hayward et al., Magnetic control of the brightness of fluorescent proteins, Zenodo (2024).
2. Michael Nielsen talks about this really well in his conversation with Dwarkesh Patel. Einstein's early work was fitting theories onto data — special relativity reconciled Maxwell's equations with what people were observing, not derived from some grand "why." And going way further back, people dismissed Copernican heliocentrism for centuries because you should be able to see stellar parallax if the Earth moves, and nobody could detect it. It wasn't measured until 1838! But the model was useful long before anyone could explain the missing parallax. The "what" came centuries before the "why."
3. This is basically the "chasm" from Geoffrey Moore's Crossing the Chasm — that painful gap between early momentum and real traction where most things die. The book is about startups but the dynamic is identical for R&D projects.
4. The technical term is "rugged fitness landscape" — the idea that in biology, the best path forward is almost never a straight line from where you are. Kauffman's The Origins of Order is the classic reference. Wu et al. showed this empirically for protein engineering: Adaptation in protein fitness landscapes is facilitated by indirect paths, eLife (2016). In practice, this means that if something isn't working, try a totally different approach rather than tweaking parameters on the current one.
5. Sid Meier — the Civilization guy — has this rule: if you want a player to notice a difference, make it at least 2x. Small tweaks get lost in noise. Same thing in biology. If your assay is noisy, don't test a 20% improvement — test something that should be 5x better. If it's real, you'll see it even through the noise. His memoir is a surprisingly good read on iterative design.
6. More on this in Iteration Times & University R&D.
7. Xerox PARC is the canonical example — a small group of people given real autonomy invented the personal computer, laser printing, Ethernet, and the GUI. The common thread across all the great labs is that they gave smart people actual problems to own, not tasks to execute. Hiltzik's Dealers of Lightning is the best book on PARC. Patrick Collison keeps a great running list of historical labs that followed this pattern.
8. This one hits close to home. The previous generation of "magnetogenetics" papers — the ferritin-based ones — claimed that you could transduce magnetic fields into cellular signals, and the papers sailed through peer review. Then Markus Meister did the math and showed the proposed mechanisms were orders of magnitude too weak to work: Physical limits to magnetogenetics, eLife (2016). The papers passed review; what they couldn't pass was someone actually trying to replicate them. Getting something into someone else's hands — a real "hello world" — is a much more honest test than any journal.
9. Read Invisible Frontiers by Stephen Hall on the early days of Genentech. Boyer and Swanson filed patents on recombinant DNA methods while simultaneously publishing and collaborating all over the place. The strength of the patents is what gave them the freedom to be generous — if your IP moat is wide enough, sharing tools doesn't threaten you, it helps you.
10. SpaceX is the clearest modern example. They brought turbopumps, injectors, avionics — basically everything — in-house instead of sourcing from dozens of subcontractors. And because they owned the whole stack, they could redesign parts away entirely. Each generation of Raptor looks simpler than the last, not more complex. That only happens when you control the full vertical. The Ashlee Vance biography, Elon Musk, covers this well.
11. Goodhart's Law — "when a measure becomes a target, it ceases to be a good measure." When the goal becomes publishing papers instead of solving problems, people optimize for publishing papers. Edwards & Roy wrote a good paper on how this plays out in academia: Academic research in the 21st century: maintaining scientific integrity in a climate of perverse incentives and hypercompetition, Environmental Engineering Science (2017). The same dynamic shows up in startup fundraising — optimizing for what looks good to investors rather than what actually moves the science forward.