Strange Lab
We build world models of companies and market mechanisms for AI work. Make work a videogame. Let agents know themselves and bid accordingly.
A world model is a replica of how an organisation or market actually works. Pick a point in its history, see only what was known by that day, and test a different decision. Testing decisions this way is at the heart of all knowledge work, and we want to make it ubiquitous.
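The core mechanic above can be sketched in a few lines: a time-indexed event store where a counterfactual only sees records dated on or before the chosen cutoff. This is a minimal illustration, not our actual implementation; the names (`Event`, `WorldModel`, `visible_as_of`) are made up for this sketch.

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class Event:
    # One dated record in the world model: an email, a price tick, a news item.
    day: date
    source: str
    text: str

class WorldModel:
    """Illustrative sketch of a replayable world model as an event store."""

    def __init__(self, events: list[Event]):
        self.events = sorted(events, key=lambda e: e.day)

    def visible_as_of(self, cutoff: date) -> list[Event]:
        # The counterfactual sees only what was known by the cutoff day.
        return [e for e in self.events if e.day <= cutoff]

# Fork the timeline: pick a day, take only what was known, test a decision.
wm = WorldModel([
    Event(date(1861, 4, 11), "news", "Tension reported at Fort Sumter"),
    Event(date(1861, 4, 12), "news", "Fort Sumter fired upon"),
])
context = wm.visible_as_of(date(1861, 4, 11))
```

The point of the cutoff is hindsight hygiene: a decision replayed on April 11 must not be informed by April 12's news.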
To test what's possible, we've built two world models you can try. One is for Enron, everyone's favourite accounting scandal. Using internal email, market, and news data, we can test real forks inside the company's history. We can look at the PG&E power deal and ask whether Enron should've held for a credit recheck or pushed it through. Or we can look at the California crisis, the day a preservation order lands on the trading desk, and test halting the strategy. Or ask: what if Jeff Skilling had opened the kimono a year early?
The other is a public-history world model built from the Civil War-era American news record: banking, policy, public order, labour, agriculture. You can sit, for instance, on April 11, 1861 and test what officials should've done the morning before Fort Sumter. These are all consequential decisions where the ability to test a counterfactual with a world model would've been extremely useful.
World models are how you understand organisations and run counterfactuals. We're also interested in whether and how AI agents can coordinate work inside them. Here MarketBench asks what agents need before that's possible: calibrated self-knowledge about likely success and cost.
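To make "bid accordingly" concrete, here is a toy pricing rule under one loud assumption: the agent is paid its bid only on success. An agent that knows its own success probability and cost can then compute its break-even bid. This is a hypothetical illustration of calibrated self-knowledge, not MarketBench's scoring rule.

```python
def break_even_bid(p_success: float, cost: float) -> float:
    """Minimal sketch: price work from self-knowledge.

    Assumes payment equals the bid and arrives only on success, so at bid b
    the expected profit is p_success * b - cost. The break-even bid is
    cost / p_success; bidding below it loses money in expectation.
    """
    if not 0.0 < p_success <= 1.0:
        raise ValueError("p_success must be in (0, 1]")
    if cost < 0.0:
        raise ValueError("cost must be non-negative")
    return cost / p_success
```

For example, an agent 80% likely to succeed on a task that costs it $10 breaks even at a $12.50 bid. Miscalibration cuts both ways: overconfidence makes it underbid and lose money, underconfidence makes it overbid and lose the work.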
Essays, Papers, and Code
- MarketBench: blog, paper
- The Future of Work Is Playing a Videogame
- The Future of Work Is World Models
- Replayable world model: test what would've happened if a historical event you author had occurred
- Enron World Model: choose a cutoff, write an email as an Enron actor, see what might've happened
- Homo Agenticus Sapiens: essay Seeing Like an Agent, GitHub list
- Management flight simulator: blog, VEI repo
- Aligned Agents Still Build Misaligned Organisations