I'm an engineer, researcher and aspiring philosopher.
My work centers on the question of what to align towards, both in AI and more broadly in society (what's worth amplifying, what's "good"). I believe many modern ills stem from the misalignment between our deeper wants (e.g. our values) and our shallow expressions of them, the stated or revealed preferences that underpin the modern societal stack of democracies, markets, and AI. For more, see our paper.
In 2023 I co-founded the Meaning Alignment Institute, with help from OpenAI, where I research these questions and build new mechanisms informed by that research. Before that, I worked on AI-assisted deliberation at AOI and co-founded a startup called Potential.