Speakers & Panelists

François Charton is a Research Engineer at Meta AI Research. He studies AI for mathematics, with a focus on symbolic mathematics: learning advanced computations (e.g., integration, differential systems, linear algebra) and symbolic regression. Previously, he was a Visiting Entrepreneur at Facebook AI and the CEO and founder of Carthage.

Peter Clark is Senior Research Manager at AI2, the Allen Institute for AI. His work focuses on natural language processing, machine reasoning, large knowledge bases, and the interplay among these areas. He has received several awards, including an AAAI Best Paper Award (1997), a Boeing Associate Technical Fellowship (2004), and AAAI Senior Member status (2014). He leads the Aristo team at AI2, and his recent research spans neuro-symbolic reasoning and universal mathematical reasoners.

Noah D. Goodman
Stanford University
Noah D. Goodman is an Assistant Professor of Psychology, Linguistics (by courtesy), and Computer Science (by courtesy) at Stanford University. He received his Ph.D. in mathematics from the University of Texas at Austin in 2003. In 2005 he entered cognitive science, working as a postdoc and research scientist at MIT. In 2010 he moved to Stanford, where he runs the Computation and Cognition Lab. He studies the computational basis of human thought, merging behavioral experiments with formal methods from statistics and logic. Specific projects range from concept learning and language understanding to inference algorithms for probabilistic programming languages.

Cezary Kaliszyk
University of Innsbruck
Cezary Kaliszyk is an Assistant Professor at the University of Innsbruck, where he leads the Computational Logic Group. He studies machine learning for reasoning and formal methods. He co-organized the 2021 Conference on Artificial Intelligence and Theorem Proving (AITP) and co-chaired the 2021 Conference on Interactive Theorem Proving (ITP). Recent projects include developing uniform learning-reasoning systems that support multiple logical foundations, machine learning for automated theorem proving, and proof automation.

Behnam Neyshabur is a senior staff research scientist at Google. Before that, he was a postdoctoral researcher at New York University and a member of the Theoretical Machine Learning program at the Institute for Advanced Study (IAS) in Princeton. He received his PhD in computer science from TTI-Chicago in summer 2017. His current primary interest is the reasoning and algorithmic capabilities of large language models, including permutation invariance, out-of-distribution generalization, and mathematical algorithms.

Talia Ringer is an Assistant Professor with the PL/FM/SE group at the University of Illinois Urbana-Champaign. Prior to Illinois, Dr. Ringer earned a PhD in 2021 from the University of Washington, working with the PLSE group. Dr. Ringer builds proof engineering technologies to make formal verification a practical reality. In doing so, Dr. Ringer loves to use the whole toolbox: everything from dependent type theory to program transformations to neural proof synthesis, all in service of real humans.

Daniel Selsam
OpenAI
Daniel Selsam is a researcher at OpenAI. He founded and leads the IMO Grand Challenge, whose goal is to build an AI that can win a gold medal at the International Mathematical Olympiad. His research spans machine learning for automated reasoning (e.g., neural SAT solving and olympiad geometry problems) and programming languages (including oracle-guided decision programming and optimizations for the Lean theorem prover).

Jacob Steinhardt
UC Berkeley
Jacob Steinhardt is an Assistant Professor in the Department of Statistics at UC Berkeley. He also consults part-time for Open Philanthropy. In the past he has worked at OpenAI, coached for the USA Computing Olympiad, and taught at SPARC. His research goal is to make the conceptual advances necessary for machine learning systems to be reliable and aligned with human values. For example, he is designing ML systems that conform to interpretable abstractions and constructing mathematical benchmarks that measure the problem-solving ability of machine learning models.