Jesse Michael Han


Research

I am broadly interested in formal proofs and building artificially intelligent theorem provers.

In joint work with Jason Rute, Yuhuai Wu, Ed Ayers, and Stanislas Polu, I developed PACT (Proof Artifact Co-Training), a methodology for extracting abundant self-supervised training data from kernel-level proof artifacts. Applying it to Lean, we showed that co-training large Transformer language models on these data significantly boosts theorem-proving performance.
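
A rough illustration of the co-training idea, in Python rather than the actual PACT pipeline: a single language model is trained on batches that mix the main tactic-prediction objective with auxiliary tasks mined from proof artifacts. The task names, example formats, and weights below are placeholders, not the paper's.

```python
import random

# Hypothetical task streams: in PACT-style co-training, the main tactic-step
# objective is mixed with auxiliary self-supervised tasks mined from proof
# artifacts. Task names, example formats, and weights here are placeholders.
TASKS = {
    "tactic_step":     [("GOAL <tactic state> PROOFSTEP", "<tactic>")],
    "next_lemma":      [("GOAL <tactic state> NEXTLEMMA", "<lemma name>")],
    "term_prediction": [("GOAL <tactic state> PROOFTERM", "<proof term>")],
}
WEIGHTS = {"tactic_step": 1.0, "next_lemma": 1.0, "term_prediction": 1.0}

def cotraining_batches(tasks, weights, batch_size, rng=random.Random(0)):
    """Yield batches interleaving examples from every task, so one language
    model is trained jointly on all objectives."""
    names = list(tasks)
    probs = [weights[n] for n in names]
    while True:
        batch = []
        for _ in range(batch_size):
            task = rng.choices(names, weights=probs, k=1)[0]
            prompt, completion = rng.choice(tasks[task])
            batch.append(f"{prompt} {completion}")
        yield batch

# Draw one mixed batch of four training sequences.
print(next(cotraining_batches(TASKS, WEIGHTS, batch_size=4)))
```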

With Daniel Selsam, Leonardo de Moura, and Patrice Godefroid, I recently helped develop a new programming paradigm called oracle-guided decision programming, including a Scheme dialect whose interpreter resolves a primitive for non-deterministic choice by querying a neural oracle.
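
The sketch below is a Python toy, not the Scheme dialect from the paper: a `choose` primitive defers a non-deterministic decision to an oracle, with a uniform-random scorer standing in for the neural model. Swapping in a learned oracle turns the same program into a guided search.

```python
import random

def uniform_oracle(context, choices):
    """Stand-in for a neural oracle: scores every choice equally.
    A learned model would return a distribution conditioned on the context."""
    return [1.0 / len(choices)] * len(choices)

def choose(context, choices, oracle=uniform_oracle, rng=random.Random(0)):
    """Non-deterministic choice primitive: the interpreter defers the
    decision to the oracle instead of branching blindly."""
    probs = oracle(context, choices)
    return rng.choices(choices, weights=probs, k=1)[0]

# Toy decision program: extend a path, asking the oracle at each fork.
def walk(n_steps):
    path = []
    for _ in range(n_steps):
        path.append(choose(context=path, choices=["left", "right"]))
    return path

print(walk(5))
```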

I have worked on using graph neural networks to accelerate SAT solvers, with improvements to state-of-the-art CDCL solvers (paper, code) and the cube-and-conquer paradigm (paper, code).
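
As a toy illustration (not the method in either paper), here is a minimal DPLL loop in Python with a pluggable branching heuristic; the occurrence-count scorer marks the spot where a graph neural network's variable scores would be consumed. In a full CDCL solver the same hook would instead periodically re-rank variable activities.

```python
def unit_propagate(clauses, assignment):
    """Repeatedly assign literals forced by unit clauses.
    Returns the simplified clause list, or None on conflict."""
    changed = True
    while changed:
        changed = False
        new_clauses = []
        for clause in clauses:
            lits = [l for l in clause if -l not in assignment]
            if any(l in assignment for l in lits):
                continue                      # clause already satisfied
            if not lits:
                return None                   # clause falsified: conflict
            if len(lits) == 1:
                assignment.add(lits[0])       # forced assignment
                changed = True
            else:
                new_clauses.append(lits)
        clauses = new_clauses
    return clauses

def score_variables(clauses):
    """Stand-in for a learned branching heuristic: count occurrences.
    A GNN would instead score variables from the clause-variable graph."""
    scores = {}
    for clause in clauses:
        for lit in clause:
            scores[abs(lit)] = scores.get(abs(lit), 0) + 1
    return scores

def dpll(clauses, assignment=frozenset()):
    assignment = set(assignment)
    clauses = unit_propagate(clauses, assignment)
    if clauses is None:
        return None                           # UNSAT under this branch
    if not clauses:
        return assignment                     # all clauses satisfied
    var = max(score_variables(clauses).items(), key=lambda kv: kv[1])[0]
    for lit in (var, -var):
        result = dpll(clauses, assignment | {lit})
        if result is not None:
            return result
    return None

# Satisfying assignment for (p or q) and (not p or q) and (not q or r),
# with literals encoded as signed integers.
print(dpll([[1, 2], [-1, 2], [-2, 3]]))
```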

With Floris van Doorn, I completed the Flypitch project, producing a formal proof of the independence of the continuum hypothesis in Lean.
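
Concretely, independence means that, assuming ZFC is consistent, ZFC proves neither the continuum hypothesis nor its negation:

$$\mathrm{ZFC} \nvdash \mathrm{CH} \qquad \text{and} \qquad \mathrm{ZFC} \nvdash \neg\,\mathrm{CH}.$$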

Papers and preprints

Talks and conferences

Notes