
Beyond Bayes: Paths Towards Universal Reasoning Systems

Abstract

A long-standing objective of AI research has been to discover theories of reasoning that are general: accommodating various forms of knowledge and applicable across a diversity of domains. The last two decades have brought steady advances toward this goal, notably in the form of mature theories of probabilistic and causal inference, and in the explosion of reasoning methods built upon the deep learning revolution. However, these advances have only further exposed gaps in our basic understanding of reasoning, and limitations in the flexibility and composability of automated reasoning technologies.

This workshop aims to reinvigorate work on the grand challenge of developing a computational foundation for reasoning in minds, brains, and machines.

Topics

We welcome work on the full range of specific topics relevant to these larger goals; see the Call for Submissions below.

Invited Speakers and Panelists

Thomas Icard

Thomas Icard is an Associate Professor of Philosophy and (by courtesy) of Computer Science at Stanford University. Thomas works at the intersection of philosophy, cognitive science, and computer science, especially on topics that sit near the boundary between the normative (how we ought to think and act) and the descriptive (how we in fact do think and act). Much of his research concerns the theory and application of logic, probability, and causal modeling and inference. Some current topics of interest include explanation, the quantitative/qualitative interface, and reasoning with limited resources.

Title: Some Themes in Cognitive Science of Reasoning

Nan Rosemary Ke

Rosemary is a research scientist at DeepMind. Previously, she was a PhD student at Mila, advised by Yoshua Bengio and Chris Pal. Her research centers on developing novel machine learning algorithms that generalize well to changing environments, and focuses on two key ingredients: credit assignment and causal learning. These ingredients flow into and reinforce each other: appropriate credit assignment can help a model refine itself only at the relevant causal variables, while a model that understands causality sufficiently well can reason about the connections between causal variables and the effects of intervening on them. She has co-organized a conference, six workshops, and three challenges: the Conference on Causal Learning and Reasoning (CLeaR) 2022; the “Inductive Biases, Invariances and Generalization in Reinforcement Learning” workshop at ICML 2020; the “Causal Learning for Decision Making” workshop at ICLR 2020; the “Efficient Credit Assignment” workshop at ICML 2018; the “Reproducibility in Machine Learning” workshops at ICML 2017, ICML 2018, and ICLR 2019; the “Real Robot Challenge” at NeurIPS 2021; and the ICLR Reproducibility Challenge at ICLR 2018 and ICLR 2019.

Title: From What to Why: Towards Causal Deep Learning

Armando Solar-Lezama

Armando Solar-Lezama is a Professor in the Department of Electrical Engineering and Computer Science and associate director of the Computer Science and Artificial Intelligence Lab at MIT. His background is in programming languages, where he is best known for his seminal work on program synthesis. More recently, he has been working at the intersection of programming languages and machine learning, exploring learning techniques that combine the formal guarantees of program synthesis with the expressiveness of traditional machine learning. He has co-organized a number of workshops, including the Workshop on Computer Assisted Programming (CAP) at NeurIPS 2020 and the workshop on Machine Learning and Programming Languages (MAPL) at PLDI 2019.

Kimberly Stachenfeld

Kimberly Stachenfeld is a Senior Research Scientist at DeepMind. She received her PhD in Computational Neuroscience from Princeton University in 2018. Before that, she received her Bachelor's degree from Tufts University in 2013, where she majored in Chemical & Biological Engineering and Mathematics. Her research is at the interface of Neuroscience and Machine Learning, focusing on graph-based computations for efficient learning and planning in brains and in machines.

Title: Physical Design using Graph-based Learned Simulators

The ability to design physical objects that serve a purpose is a remarkable property of human reasoning and central to solving real-world engineering problems. Though automating design using machine learning has tremendous promise, existing methods are often limited by the task-dependent distributions they were exposed to during training. Here we showcase a task-agnostic approach to inverse design, combining general-purpose learned simulators with gradient-based design optimization. The learned simulators are composed of Graph Neural Networks (GNNs), a model class that combines the expressiveness of deep learning with the relational structure of graphs, enabling us to model complex dynamics in a generalizable, differentiable way. Our approach is simple, fast, and reusable, solving high-dimensional problems with complex physical dynamics, from designing surfaces and tools that manipulate fluid flows to engineering problems like airfoil shape optimization. This framework produces high-quality designs even when propagating gradients through trajectories of hundreds of steps, and even when using models that were pre-trained for single-step predictions on data substantially different from the design tasks. Our results suggest that despite some remaining challenges, learned GNN-based simulators are maturing to the point where they can support general-purpose design optimization across a variety of domains.
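As a rough sketch of this recipe (not the speakers' actual code), the PyTorch fragment below freezes a learned one-step simulator (a small stand-in MLP rather than a pretrained GNN), unrolls it for many steps, and optimizes the design parameters by backpropagating a task loss through the whole trajectory. All dimensions, names, and the objective are illustrative assumptions.

# Minimal sketch of gradient-based inverse design through a learned,
# differentiable simulator. A small MLP stands in for a pretrained GNN
# simulator; all shapes and the objective are invented for illustration.
import torch

torch.manual_seed(0)
state_dim, design_dim = 8, 4

# Stand-in for a learned one-step simulator f(state, design) -> next_state.
simulator = torch.nn.Sequential(
    torch.nn.Linear(state_dim + design_dim, 64),
    torch.nn.Tanh(),
    torch.nn.Linear(64, state_dim),
)
for p in simulator.parameters():
    p.requires_grad_(False)  # simulator is frozen; only the design is optimized

def rollout(design, steps=100):
    # Unroll the one-step model, keeping the autodiff graph so gradients
    # flow back through the entire trajectory to the design parameters.
    state = torch.zeros(state_dim)
    for _ in range(steps):
        state = simulator(torch.cat([state, design]))
    return state

target = torch.ones(state_dim)  # illustrative design objective
design = torch.zeros(design_dim, requires_grad=True)
opt = torch.optim.Adam([design], lr=1e-2)

for _ in range(200):
    opt.zero_grad()
    loss = torch.sum((rollout(design) - target) ** 2)
    loss.backward()  # backpropagation through hundreds of simulator steps
    opt.step()

The key property is that the simulator is differentiable end-to-end, so the design space can be searched with ordinary gradient-based optimizers rather than black-box search.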

Tyler Bonnen

Tyler is a PhD student in the Department of Psychology at Stanford University. After transferring from Miami-Dade Community College, Tyler studied chemistry and comparative literature at Columbia University. He went on to research fellowships at the Max Planck Institute in Leipzig and then in the Department of Brain and Cognitive Sciences at MIT before coming to Stanford. In his current work, co-advised by Anthony Wagner and Daniel Yamins, Tyler uses biologically plausible computational models, neural data, and animal behavior to formalize the relationship between perception and memory.

Ishita Dasgupta

Ishita is a Research Scientist at DeepMind New York City. She was previously a postdoctoral researcher at Princeton University in the Departments of Psychology and Computer Science, working in the Computational Cognitive Science Lab with Prof. Tom Griffiths. She received her PhD from the Department of Physics at Harvard University in 2020, working in the Computational Cognitive Neuroscience Lab with Prof. Sam Gershman. Her research is at the intersection of computational cognitive science and machine learning. Ishita uses advances in machine learning to build new models of human reasoning, applies cognitive science approaches toward understanding black-box AI systems, and combines these insights to build better, more human-like artificial intelligence.

Title: Content Effects on Abstract Reasoning

Abstract or logical reasoning implies an ability to systematically perform algebraic operations over variables, independent of the contents of those variables. ‘X is bigger than Y’ logically implies that ‘Y is smaller than X’, no matter the values of X and Y. However, humans are not perfect abstract reasoners. Several findings point to content-sensitivity in human logical and probabilistic reasoning: humans can be very logical in grounded, familiar domains while struggling in more abstract ones. In this talk, I will examine how to capture these dualities, in particular how the re-use or amortization of past computations using neural networks might provide a solution. Further, failure to perform these kinds of systematic operations has often been highlighted as a failure mode of deep neural network models. I will demonstrate that this failure, both in small trained-from-scratch models and in large pretrained models, is not universal but context-sensitive, as in humans. I will discuss possible points of overlap in the mechanisms and origins of these content effects on reasoning in humans and machines.

Guy Van den Broeck

Guy Van den Broeck is an Associate Professor and Samueli Fellow at UCLA, in the Computer Science Department, where he directs the Statistical and Relational Artificial Intelligence (StarAI) lab. His research interests are in Machine Learning, Knowledge Representation and Reasoning, and Artificial Intelligence in general. His papers have been recognized with awards from key conferences such as AAAI, UAI, KR, OOPSLA, and ILP. Guy is the recipient of an NSF CAREER award, a Sloan Fellowship, and the IJCAI-19 Computers and Thought Award.

Title: Tractable Probabilistic Circuits

Probabilistic circuits represent distributions through the computation graph of probabilistic inference, as a type of neural network. They move beyond probabilistic graphical models and other deep generative models by guaranteeing tractable inference for certain classes of queries: marginal probabilities, entropies, expectations, and related queries of interest. These probabilistic circuit models are now also effectively learned from data, outperforming VAE and flow-based likelihoods on MNIST-family benchmarks. They thus enable new solutions to some key problems in machine learning, including state-of-the-art neural compression results. This talk will overview these recent developments, in terms of learning and probabilistic inference.
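To make the tractability claim concrete, here is a toy probabilistic circuit over two binary variables, sketched in plain Python with invented structure and parameters. Marginalizing a variable amounts to setting its leaves to 1 and evaluating the circuit once, bottom-up, so marginals cost a single pass rather than an exponential sum.

def leaf(theta, value):
    # Bernoulli leaf; value=None means the variable is marginalized out,
    # in which case the leaf contributes sum_x p(x) = 1.
    if value is None:
        return 1.0
    return theta if value == 1 else 1.0 - theta

def circuit(x1, x2):
    # A sum (mixture) node over two product nodes, each a product of leaves.
    c1 = leaf(0.9, x1) * leaf(0.2, x2)
    c2 = leaf(0.1, x1) * leaf(0.7, x2)
    return 0.6 * c1 + 0.4 * c2  # mixture weights sum to 1

p_joint = circuit(1, 0)        # p(x1=1, x2=0) = 0.444
p_marginal = circuit(1, None)  # p(x1=1) = 0.58, in one bottom-up pass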


Jan-Willem van de Meent

Dr. Jan-Willem van de Meent is an Associate Professor (Universitair Hoofddocent) at the University of Amsterdam. He co-directs the AMLab with Max Welling and co-directs the UvA-Bosch Delta Lab with Theo Gevers. He also holds a position as an Assistant Professor at Northeastern University, where he is currently on leave. Prior to becoming faculty at Northeastern, he held a postdoctoral position with Frank Wood at Oxford, as well as a postdoctoral position with Chris Wiggins and Ruben Gonzalez at Columbia University. He carried out his PhD research in biophysics at Leiden and Cambridge with Wim van Saarloos and Ray Goldstein.

Jan-Willem van de Meent’s group develops models for artificial intelligence by combining probabilistic programming and deep learning. A major theme in this work is understanding which inductive biases can enable models to generalize from limited data. Inductive biases can take the form of a simulator that incorporates knowledge of an underlying physical system, causal structure, or symmetries of the underlying domain. At a technical level, his group develops inference methods, along with corresponding language abstractions to make these methods more modular and composable. To guide this technical work, his group collaborates extensively to develop models for robotics, NLP, healthcare, and the physical sciences.

Jan-Willem van de Meent is one of the creators of Anglican, a probabilistic programming language based on Clojure. His group currently develops Probabilistic Torch, a library for deep generative models that extends PyTorch. He is an author of a forthcoming book on probabilistic programming, a draft of which is available on arXiv. He is a co-chair of the International Conference on Probabilistic Programming (PROBPROG), was the recipient of an NWO Rubicon Fellowship, and is a current recipient of an NSF CAREER award.

Title: Thinking Compositionally about Inference

Probabilistic programming draws on ideas from artificial intelligence, statistics, and programming languages. It attempts to combine these ideas in a manner that builds on their respective strengths. In this talk, I will discuss how we can integrate deep learning and importance sampling to perform inference in probabilistic programs. This approach has tremendous potential to make inference scalable, but requires model-specific designs for networks and samplers. I will show how programming language abstractions can make the design of these components more practical and accessible by allowing us to reason compositionally about importance samplers. This opens up opportunities for new model and inference designs, both in the context of simulation-based inference and in the context of deep generative models.
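As a minimal illustration of the ingredients (not the language abstractions themselves), the sketch below pairs a neural proposal with self-normalized importance sampling for a two-line generative program. The model, the untrained proposal network, and all numbers are assumptions for the example; in practice the proposal would be trained, and the point of the talk is the abstractions that make such samplers composable.

import torch

# Tiny probabilistic "program": z ~ N(0, 1); x ~ N(z, 0.5).
prior = torch.distributions.Normal(0.0, 1.0)
likelihood = lambda z: torch.distributions.Normal(z, 0.5)

# Learned proposal: maps an observation x to (mean, log_std) of q(z | x).
# Untrained here; a real system would fit it to approximate the posterior.
proposal_net = torch.nn.Linear(1, 2)

def posterior_mean_estimate(x, num_samples=1000):
    mean, log_std = proposal_net(torch.tensor([x])).unbind(-1)
    q = torch.distributions.Normal(mean, log_std.exp())
    z = q.sample((num_samples,))
    # Importance weights correct for the mismatch between q and the posterior.
    log_w = prior.log_prob(z) + likelihood(z).log_prob(torch.tensor(x)) - q.log_prob(z)
    w = torch.softmax(log_w, dim=0)  # self-normalization
    return torch.sum(w * z)

print(posterior_mean_estimate(1.0))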

Charles Sutton

Charles Sutton is a Research Scientist at Google Brain and a Reader (equivalent to Associate Professor: http://bit.ly/1W9UhqT) in Machine Learning at the University of Edinburgh. He has published over 50 papers in probabilistic machine learning and deep learning, motivated by the demands of a broad range of applications, including natural language processing (NLP), analysis of computer systems, sustainable energy, data analysis, and software engineering. His work in machine learning for software engineering has won two ACM Distinguished Paper Awards.

Title: Program Synthesis, Program Semantics, and Large Language Models

I will describe our experience with two generations of large language models for code at Google. These models show a range of abilities, including generating small programs from natural language descriptions and engaging in dialog about code, incorporating human feedback to improve solutions. However, these models seem not to understand the code that they write, in the sense that they are generally unable to predict the output of a program on a given input. I will discuss our subsequent efforts to improve the “code understanding” abilities of LMs by asking them to emit intermediate computation steps as tokens onto a “scratchpad”. The same models are able to perform complex multi-step computations when asked to work “step by step”, showing the results of intermediate computations, even for operations that the LM could not perform directly.
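As an illustration of the scratchpad idea, the snippet below builds a few-shot prompt in which the answer is preceded by explicit intermediate computation steps; generate is a hypothetical stand-in for an LM completion call, and the example programs are invented.

# "Scratchpad" prompting: rather than asking for the answer directly, the
# prompt demonstrates intermediate computation steps as tokens, and the
# model is asked to continue in the same style.
scratchpad_prompt = """\
Input: execute the program and predict the output.
x = 3
for i in range(2):
    x = x * 2
print(x)

Scratchpad:
x = 3
i = 0 -> x = 3 * 2 = 6
i = 1 -> x = 6 * 2 = 12
print -> 12

Output: 12

Input: execute the program and predict the output.
y = 5
y = y + 7
print(y)

Scratchpad:
"""

# completion = generate(scratchpad_prompt)  # hypothetical LM call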

Schedule and Planned Activities

The workshop will consist of an opening keynote presentation and three sessions, separated by 30-minute ICML plenary breaks. Discussion will be encouraged in all sessions. Invited talks will be 20 minutes, followed by 25-minute panel discussions in which the audience is welcome to participate in the Q&A. Contributed talks will be rapid 5-minute presentations; attendees may use the ICML break times immediately following these presentations to meet the researchers and ask questions.

The workshop will be held at the Baltimore Convention Center in Ballroom 2 (Level 400) on Friday, July 22nd, 2022.

Schedule

8:45 AM - 9:00 AM: Doors Open & Welcome (Opening Remarks)

9:00 AM - 9:45 AM: Opening Keynote: Cognitive Science of Reasoning

9:45 AM - 10:00 AM: Contributed Spotlight Talks: Part 1

10:00 AM - 10:30 AM: ICML Plenary Break

10:30 AM - 11:35 AM: Session 1: New Reasoning Problems and Modes of Reasoning (Talks & Panel Discussion)

11:35 AM - 12:00 PM: Contributed Spotlight Talks: Part 2

12:00 PM - 1:30 PM: ICML Plenary Lunch Break

1:30 PM - 3:00 PM: Session 2: Reasoning in Brains vs Machines (Talks & Panel Discussion)

3:00 PM - 3:30 PM: ICML Plenary Break

3:30 PM - 4:55 PM: Session 3: New Computational Technologies for Reasoning (Talks & Panel Discussion)

4:55 PM - 5:00 PM: Closing Remarks

5:00 PM - 6:00 PM: Poster Session (In Person Only)

Accepted Posters

Google Drive PDFs

P01: Maximum Entropy Function Learning. Authors: Simon Segert, Jonathan Cohen. paper

P02: Designing Perceptual Puzzles by Differentiating Probabilistic Programs. Authors: Kartik Chandra, Tzu-Mao Li, Joshua B. Tenenbaum, Jonathan Ragan-Kelley. paper

P03: Interoception as Modeling, Allostasis as Control. Authors: Eli Zachary Sennesh, Jordan Theriault, Dana Brooks, Jan-Willem van de Meent, Lisa Feldman Barrett, Karen Quigley. paper

P04: People Construct Simplified Mental Representations to Plan. Authors: Mark K Ho, David Abel, Carlos G. Correa, Michael Littman, Jonathan Cohen, Thomas L. Griffiths. paper

P05: Using Language and Programs to Instill Human Inductive Biases in Machines. Authors: Sreejan Kumar, Carlos G. Correa, Ishita Dasgupta, Raja Marjieh, Michael Hu, Robert D. Hawkins, Nathaniel Daw, Jonathan Cohen, Karthik R Narasimhan, Thomas L. Griffiths. paper

P06: Automatic Inference with Pseudo-Marginal Hamiltonian Monte Carlo. Authors: Jinlin Lai, Daniel Sheldon. paper

P07: MoCa: Cognitive Scaffolding for Language Models in Causal and Moral Judgment Tasks. Authors: Allen Nie, Atharva Amdekar, Christopher J Piech, Tatsunori Hashimoto, Tobias Gerstenberg. paper

P08: Map Induction: Compositional Spatial Submap Learning for Efficient Exploration in Novel Environments. Authors: Sugandha Sharma, Aidan Curtis, Marta Kryven, Joshua B. Tenenbaum, Ila R Fiete. paper

P09: Towards a Neuroscience of “Stories”: Metric Space Learning in the Hippocampus. Authors: Zhenrui Liao, Attila Losonczy. paper

P10: Combining Functional and Automata Synthesis to Discover Causal Reactive Programs. Authors: Ria Das, Joshua B. Tenenbaum, Armando Solar-Lezama, Zenna Tavares. paper

P11: MetaCOG: Learning a Meta-cognition to Recover what Objects are Actually There. Authors: Marlene Berke, Zhangir Azerbayev, Mario Belledonne, Zenna Tavares, Julian Jara-Ettinger. paper

P12: Desiderata for Abstraction. Authors: Simon Alford, Zenna Tavares, Kevin Ellis. paper

P13: Estimating Categorical Counterfactuals via Deep Twin Networks. Authors: Athanasios Vlontzos, Bernhard Kainz, Ciarán Mark Gilligan-Lee. paper

P14: Logical Activation Functions: Logit-space Equivalents of Probabilistic Boolean Operators. Authors: Scott C Lowe, Robert Earle, Jason d’Eon, Thomas Trappenberg, Sageev Oore. paper

P15: Bias of Causal Identification using Non-IID Data. Authors: Chi Zhang, Karthika Mohan, Judea Pearl. paper

P16: Bayesian Reasoning with Trained Neural Networks. Authors: Jakob Knollmüller, Torsten Ensslin. paper

P17: Correcting Model Bias with Sparse Implicit Processes. Authors: Simon Rodriguez Santana, Luis A. Ortega, Daniel Hernández-Lobato, Bryan Zaldivar. paper

P18: Abstract Interpretation for Generalized Heuristic Search in Model-Based Planning. Authors: Tan Zhi-Xuan, Joshua B. Tenenbaum, Vikash Mansinghka. paper

P19: Logical Satisfiability of Counterfactuals for Faithful Explanations in NLI. Authors: Suzanna Sia, Anton Belyy, Amjad Almahairi, Madian Khabsa, Luke Zettlemoyer, Lambert Mathias. paper

P20: Learning to Reason about and to Act on Cascading Events. Authors: Yuval Atzmon, Eli Meirom, Shie Mannor, Gal Chechik. paper

P21: Reverse-Mode Automatic Differentiation and Optimization of GPU Kernels via Enzyme. Authors: William S. Moses, Valentin Churavy, Ludger Paehler, Jan Hückelheim, Sri Hari Krishna Narayanan, Michel Schanen, Johannes Doerfert. paper - originally published at SC ’21

P22: Type Theory for Inference and Learning in Minds and Machines. Authors: Felix Anthony Sosa, Tomer D. Ullman. paper

P23: Language Model Cascades. Authors: David Dohan, Aitor Lewkowycz, Jacob Austin, Winnie Xu, Yuhuai Wu, David Bieber, Raphael Gontijo-Lopes, Henryk Michalewski, Rif A. Saurous, Jascha Sohl-Dickstein, Kevin Patrick Murphy, Charles Sutton. paper

P24: Unifying Generative Models with GFlowNets. Authors: Dinghuai Zhang, Ricky T. Q. Chen, Nikolay Malkin, Yoshua Bengio. paper

P25: Proving Theorems using Incremental Learning and Hindsight Experience Replay. Authors: Eser Aygün, Laurent Orseau, Ankit Anand, Xavier Glorot, Stephen Marcus McAleer, Vlad Firoiu, Lei M Zhang, Doina Precup, Shibl Mourad. paper

P26: Biological Mechanisms for Learning Predictive Models of the World and Generating Flexible Predictions. Authors: Ching Fang, Dmitriy Aronov, Larry Abbott, Emily L Mackevicius. paper

P27: Explanatory Paradigms in Neural Networks. Authors: Mohit Prabhushankar, Ghassan AlRegib. paper

P28: On the Generalization and Adaption Performance of Causal Models. Authors: Nino Scherrer, Anirudh Goyal, Stefan Bauer, Yoshua Bengio, Nan Rosemary Ke. paper

P29: Predicting Human Similarity Judgments Using Large Language Models. Authors: Raja Marjieh, Ilia Sucholutsky, Theodore Sumers, Nori Jacoby, Thomas L. Griffiths. paper

P30: Meta-Learning Real-Time Bayesian AutoML For Small Tabular Data. Authors: Noah Hollmann, Samuel Müller, Katharina Eggensperger, Frank Hutter. paper

P31: Can Humans Do Less-Than-One-Shot Learning? Authors: Maya Malaviya, Ilia Sucholutsky, Kerem Oktar, Thomas L. Griffiths. paper

P32: Collapsed Inference for Bayesian Deep Learning. Authors: Zhe Zeng, Guy Van den Broeck. paper

P33: ViRel: Unsupervised Visual Relations Discovery with Graph-level Analogy. Authors: Daniel Zeng, Tailin Wu, Jure Leskovec. paper

P34: ZeroC: A Neuro-Symbolic Model for Zero-shot Concept Recognition and Acquisition at Inference Time. Authors: Tailin Wu, Megan Tjandrasuwita, Zhengxuan Wu, Xuelin Yang, Kevin Liu, Rok Sosic, Jure Leskovec. paper

P35: Hybrid AI Integration Using Implicit Representations With Scruff. Authors: Avi Pfeffer, Michael Harradon, Sanja Cvijic, Joseph Campolongo. paper

P36: Large Language Models are Zero-Shot Reasoners. Authors: Takeshi Kojima, Shixiang Shane Gu, Machel Reid, Yutaka Matsuo, Yusuke Iwasawa. paper

P37: Structured, Flexible, and Robust: Benchmarking and Improving Large Language Models Towards More Human-like Behavior in Out-of-Distribution Reasoning Tasks. Authors: Katherine M. Collins, Catherine Wong, Jiahei Feng, Megan Wei, Joshua B. Tenenbaum. paper

Organizers

Program Chair: Zenna Tavares

Zenna Tavares is the inaugural Innovation Scholar in Columbia University’s Zuckerman Mind Brain Behavior Institute and an Associate Research Scientist in the Data Science Institute. Zenna’s research aims to understand how humans reason, that is, how they come to derive knowledge from observing and interacting with the world. He also constructs computational and statistical tools that help advance his work on causal reasoning, probabilistic programming, and other areas. Prior to Columbia University, he was at MIT, where he received a PhD in Cognitive Science and Statistics and was a Postdoctoral Researcher in the Computer Science and Artificial Intelligence Lab (CSAIL). Zenna has co-organized a number of workshops, including DBAI and OOD Generalization at NeurIPS 2021, and has served on the program committees of UAI, ICML, NeurIPS, ICLR, and LAFI (POPL).

Nada Amin

Nada Amin is an assistant professor of computer science at Harvard SEAS. Previously, she was a University Lecturer in Programming Languages at the University of Cambridge, and a member of the team behind the Scala programming language at EPFL. She is broadly interested in programming languages, and the intersection of programming languages and artificial intelligence. She has co-organized the Scala, Scheme, miniKanren and TyDe (Type-driven Development) workshops, and has served on the program committee of POPL, FLOPS, OOPSLA, UAI among others.

Eli Bingham

Eli is a Machine Learning Fellow at the Broad Institute of MIT and Harvard’s Data Sciences Platform, where he develops machine learning methods and software for biomedical research applications, and was previously a senior research scientist at Uber AI Labs. His research at the intersection of programming languages and AI focuses on developing general methods for approximate Bayesian inference suited for new and previously inaccessible problems, and on democratization of those methods through the Pyro probabilistic programming language, of which he is a co-creator and core developer. He has served as a program committee member of scientific workshops including HOPE and LAFI, and has also organized and led a number of public and private workshops and tutorials for current and prospective Pyro users.

Nan Rosemary Ke

Rosemary is a research scientist at DeepMind. Previously, she was a PhD student at Mila, advised by Yoshua Bengio and Chris Pal. Her research centers on developing novel machine learning algorithms that generalize well to changing environments, and focuses on two key ingredients: credit assignment and causal learning. These ingredients flow into and reinforce each other: appropriate credit assignment can help a model refine itself only at the relevant causal variables, while a model that understands causality sufficiently well can reason about the connections between causal variables and the effects of intervening on them. She has co-organized a conference, six workshops, and three challenges: the Conference on Causal Learning and Reasoning (CLeaR) 2022; the “Inductive Biases, Invariances and Generalization in Reinforcement Learning” workshop at ICML 2020; the “Causal Learning for Decision Making” workshop at ICLR 2020; the “Efficient Credit Assignment” workshop at ICML 2018; the “Reproducibility in Machine Learning” workshops at ICML 2017, ICML 2018, and ICLR 2019; the “Real Robot Challenge” at NeurIPS 2021; and the ICLR Reproducibility Challenge at ICLR 2018 and ICLR 2019.

John Krakauer

Dr. John Krakauer is John C. Malone Professor, Professor of Neurology, Neuroscience, and Physical Medicine and Rehabilitation, Director of the Brain, Learning, Animation, and Movement Lab at The Johns Hopkins University School of Medicine, and External Professor at the Santa Fe Institute. His areas of research interest include experimental and computational studies of motor control and motor learning in humans, motor recovery and rehabilitation after stroke, and philosophy of mind. He has organized numerous workshops and scientific meetings, recently including The Learning Salon.

Emily Mackevicius

Emily Mackevicius is currently a postdoctoral neuroscientist at Columbia University in the Aronov lab. Previously, she completed her Ph.D. in neuroscience at MIT in the Fee lab. Her research investigates how the brain learns new information in the context of prior knowledge. Her work involves both experiments (recording neurons in birds performing naturalistic memory behaviors) and theory/computation (modeling how neural circuits self-organize, and developing a sequence-detection method, seqNMF). She has been involved in organizing a variety of scientific meetings, including founding an ongoing tutorial series on computational topics at MIT’s Brain and Cognitive Sciences Department, and TAing Woods Hole summer courses (“Methods in Computational Neuroscience”, and “Brains, Minds, and Machines”).

Robert Osazuwa Ness

Robert Osazuwa Ness is a Senior Research Scientist at Microsoft Research in Redmond, WA. He is a tech lead on MSR’s Societal Resilience team. Robert’s research aims to automate human reasoning by enabling experts to program domain knowledge into learning algorithms to achieve predictive capabilities not possible from data alone. He leads the development of MSR’s causal machine learning platform and conducts research into probabilistic models for advanced causal reasoning. Before joining MSR, he worked as a machine learning research engineer in various startups. He attended graduate school at both Johns Hopkins SAIS and Purdue University. He received his Ph.D. in Statistics from Purdue, where his dissertation research focused on Bayesian active learning models for causal discovery.

Talia Ringer

Talia Ringer is an assistant professor at the University of Illinois at Urbana-Champaign. Her work focuses on tools that make it easier to develop and maintain systems verified using proof assistants. Toward that end, she loves to use the whole toolbox—everything from dependent type theory to program transformations to neural proof synthesis—all in service of real humans verifying real systems. Prior to Illinois, she earned her PhD in 2021 from the University of Washington. She also has experience in industry. She has served the community in many capacities, including as founder and chair of the SIGPLAN long-term mentoring committee (SIGPLAN-M), co-chair of PLMW at ICFP 2020, hybridization co-chair of SPLASH 2021, co-organizer of the Coq Workshop 2022, and program committee member for PLDI, ITP, TYPES, CAV, CoqPL, HATRA, and AIPLANS.

Armando Solar-Lezama

Armando Solar-Lezama is a Professor in the Department of Electrical Engineering and Computer Science and associate director of the Computer Science and Artificial Intelligence Lab at MIT. His background is in programming languages, where he is best known for his seminal work on program synthesis. More recently, he has been working at the intersection of programming languages and machine learning, exploring learning techniques that combine the formal guarantees of program synthesis with the expressiveness of traditional machine learning. He has co-organized a number of workshops, including the Workshop on Computer Assisted Programming (CAP) at NeurIPS 2020 and the workshop on Machine Learning and Programming Languages (MAPL) at PLDI 2019.

Call for Submissions

We seek submissions related to any of the topics in the overview, especially ongoing or preliminary work that bridges gaps between topics, and work that might be unfamiliar to the broader ICML community. Submissions will be lightly reviewed for relevance and clarity. All accepted submissions will be presented as posters at a poster session, and a subset will also be selected for oral presentation as contributed talks. Talks will be selected to generate interesting discussions – speculative/perspective abstracts welcome.

Submissions are due in OpenReview before midnight AoE on June 7, 2022 (extended deadline) and may take one of two forms:

  1. Extended Abstracts: Authors may submit ongoing or preliminary work in the form of an extended abstract of 2-4 pages (excluding references or appendices, preference for shorter abstracts) for consideration for a poster presentation. Submissions should be anonymized and formatted in the ICML style. Abstracts are non-archival, but will be publicly posted on the workshop website if accepted.
  2. Syndicated Submissions: Authors may also submit recent work that has been accepted for publication at another venue within the 12 months preceding the deadline, for consideration for a poster presentation. To encourage broad participation, preference will be given to work on topics that may be less familiar to the ICML community. Syndicated submissions can be in their original format, have no length requirement, and do not need to be anonymized. They are also non-archival, but will be posted publicly on the workshop website if accepted.

To stimulate discussion and interaction, poster presentations will be entirely in-person absent any further changes from the ICML conference chairs. Some need-based funding for travel and registration for speakers and poster presenters may be available from the workshop’s sponsors.