
Beyond Bayes: Paths Towards Universal Reasoning Systems

Abstract

A long-standing objective of AI research has been to discover theories of reasoning that are general: accommodating various forms of knowledge and applicable across a diversity of domains. The last two decades have brought steady advances toward this goal, notably in the form of mature theories of probabilistic and causal inference, and in the explosion of reasoning methods built upon the deep learning revolution. However, these advances have only further exposed gaps in our basic understanding of reasoning and limitations in the flexibility and composability of automated reasoning technologies.

This workshop aims to reinvigorate work on the grand challenge of developing a computational foundation for reasoning in minds, brains, and machines. Goals include:

Topics

Specific topics relevant to these larger goals include:

Call for Submissions

We seek submissions related to any of the topics in the overview, especially ongoing or preliminary work that bridges gaps between topics, and work that might be unfamiliar to the broader ICML community. Submissions will be lightly reviewed for relevance and clarity. All accepted submissions will be presented as posters at a poster session, and a subset will also be selected for oral presentation as contributed talks. Talks will be selected to generate interesting discussions; speculative and perspective abstracts are welcome.

Submissions are due in OpenReview before midnight AoE on June 7, 2022 (extended deadline) and may take one of two forms:

  1. Extended Abstracts: Authors may submit ongoing or preliminary work in the form of an extended abstract of 2-4 pages (excluding references and appendices; shorter abstracts are preferred) for consideration for a poster presentation. Submissions should be anonymized and formatted in the ICML style. Abstracts are non-archival, but will be publicly posted on the workshop website if accepted.
  2. Syndicated Submissions: Authors may also submit recent work that has been accepted for publication at another venue within 12 months of the deadline for consideration for a poster presentation. To encourage broad participation, preference will be given to work on topics that might be less familiar to the ICML community. Syndicated submissions can be in their original format, have no length requirement, and do not need to be anonymized. They are also non-archival, but will be posted publicly on the workshop website if accepted.

To stimulate discussion and interaction, poster presentations will be entirely in-person absent any further changes from the ICML conference chairs. Some need-based funding for travel and registration for speakers and poster presenters may be available from the workshop’s sponsors.

Invited Speakers and Panelists

This list is still being finalized and may see further additions or removals.

Joshua Tenenbaum (tentatively confirmed)

Joshua Tenenbaum is Professor of Computational Cognitive Science at MIT in the Department of Brain and Cognitive Sciences, the Computer Science and Artificial Intelligence Laboratory (CSAIL) and the Center for Brains, Minds and Machines (CBMM). His long-term goal is to reverse-engineer intelligence in the human mind and brain, and to use these insights to engineer more human-like machine intelligence. In cognitive science, he is best known for developing theories of cognition as probabilistic inference in structured generative models, and applications to concept learning, causal reasoning, language acquisition, visual perception, intuitive physics, and theory of mind. In AI, he and his group have developed widely used models for nonlinear dimensionality reduction, probabilistic programming, and Bayesian unsupervised learning and structure discovery. His current research focuses on common-sense scene understanding and action planning, the development of common sense in infants and young children, and learning through probabilistic program induction and neuro-symbolic program synthesis. His work has been published in many leading journals and recognized with awards at conferences in Cognitive Science, Computer Vision, Neural Information Processing Systems, Reinforcement Learning and Decision Making, and Robotics. He is the recipient of the Distinguished Scientific Award for Early Career Contributions in Psychology from the American Psychological Association (2008), the Troland Research Award from the National Academy of Sciences (2011), the Howard Crosby Warren Medal from the Society of Experimental Psychologists (2016), the R&D Magazine Innovator of the Year award (2018), and a MacArthur Fellowship (2019). He is a fellow of the Cognitive Science Society and the Society of Experimental Psychologists, and a member of the American Academy of Arts and Sciences.

Guy Van den Broeck

Guy Van den Broeck is an Associate Professor and Samueli Fellow at UCLA, in the Computer Science Department, where he directs the Statistical and Relational Artificial Intelligence (StarAI) lab. His research interests are in Machine Learning, Knowledge Representation and Reasoning, and Artificial Intelligence in general. His papers have been recognized with awards from key conferences such as AAAI, UAI, KR, OOPSLA, and ILP. Guy is the recipient of an NSF CAREER award, a Sloan Fellowship, and the IJCAI-19 Computers and Thought Award.

Kimberly Stachenfeld

Kimberly Stachenfeld is a Senior Research Scientist at DeepMind. She received her PhD in Computational Neuroscience from Princeton University in 2018. Before that, she received her Bachelor's degree from Tufts University in 2013, where she majored in Chemical & Biological Engineering and Mathematics. Her research is at the interface of Neuroscience and Machine Learning, focusing on graph-based computations for efficient learning and planning in brains and in machines.

Thomas Icard

Thomas Icard is an Associate Professor of Philosophy and (by courtesy) of Computer Science at Stanford University. Thomas works at the intersection of philosophy, cognitive science, and computer science, especially on topics that sit near the boundary between the normative (how we ought to think and act) and the descriptive (how we in fact do think and act). Much of his research concerns the theory and application of logic, probability, and causal modeling and inference. Some current topics of interest include explanation, the quantitative/qualitative interface, and reasoning with limited resources.

Tyler Bonnen

Tyler is a PhD student in the Department of Psychology at Stanford University. After transferring from Miami-Dade Community College, Tyler studied chemistry and comparative literature at Columbia University. He went on to research fellowships at the Max Planck Institute in Leipzig and then in the Department of Brain and Cognitive Sciences at MIT before coming to Stanford. In his current work, co-advised by Anthony Wagner and Daniel Yamins, Tyler uses biologically plausible computational models, neural data, and animal behavior to formalize the relationship between perception and memory.

Ishita Dasgupta

Ishita is a Research Scientist at DeepMind New York City. She was previously a postdoctoral researcher at Princeton University in the Departments of Psychology and Computer Science, working in the Computational Cognitive Science Lab with Prof. Tom Griffiths. She received her PhD from the Department of Physics at Harvard University in 2020, working in the Computational Cognitive Neuroscience Lab with Prof. Sam Gershman. Her research is at the intersection of computational cognitive science and machine learning. Ishita uses advances in machine learning to build new models of human reasoning, applies cognitive science approaches toward understanding black-box AI systems, and combines these insights to build better, more human-like artificial intelligence.

Jan-Willem van de Meent

Dr. Jan-Willem van de Meent is an Associate Professor (Universitair Hoofddocent) at the University of Amsterdam. He co-directs the AMLab with Max Welling and co-directs the UvA-Bosch Delta Lab with Theo Gevers. He also holds a position as an Assistant Professor at Northeastern University, where he is currently on leave. Prior to becoming faculty at Northeastern, he held a postdoctoral position with Frank Wood at Oxford, as well as a postdoctoral position with Chris Wiggins and Ruben Gonzalez at Columbia University. He carried out his PhD research in biophysics at Leiden and Cambridge with Wim van Saarloos and Ray Goldstein.

Jan-Willem van de Meent’s group develops models for artificial intelligence by combining probabilistic programming and deep learning. A major theme in this work is understanding which inductive biases can enable models to generalize from limited data. Inductive biases can take the form of a simulator that incorporates knowledge of an underlying physical system, causal structure, or symmetries of the underlying domain. At a technical level, his group develops inference methods, along with corresponding language abstractions to make these methods more modular and composable. To guide this technical work, his group collaborates extensively to develop models for robotics, NLP, healthcare, and the physical sciences.

Jan-Willem van de Meent is one of the creators of Anglican, a probabilistic programming language based on Clojure. His group currently develops Probabilistic Torch, a library for deep generative models that extends PyTorch. He is an author of a forthcoming book on probabilistic programming, a draft of which is available on arXiv. He is a co-chair of the international conference on probabilistic programming (PROBPROG). He was the recipient of an NWO Rubicon Fellowship and is a current recipient of the NSF CAREER award.

Schedule and Planned Activities

The workshop will consist of several types of sessions, broken up by 15-minute coffee breaks. Discussion will be encouraged in all sessions, and there will be an option for contributed talks and posters. Invited talks will be 25 minutes, followed by a 10-minute breakout discussion at tables and up to 10 minutes of full-group Q&A with the speaker. Contributed talks will be 15 minutes. Panel discussions will consist of invited panelists each giving a short talk, followed by an extended on-stage discussion moderated by an organizer.

The workshop will be held in room BLRM 1,2 T2000, on July 22nd, 2022.

Schedule (tentative)

9am-9:45am: Invited talk on the cognitive science of reasoning.

9:45am-11:00am: Panel, “Reasoning in brains vs machines”

11:15am-12:00pm: Invited talk on new reasoning problems and modes of reasoning.

12:15pm-12:45pm: Contributed talks

12:45pm-1:45pm: Lunch

1:45pm-2:30pm: Invited talk on the future of automated reasoning

2:30pm-3:45pm: Panel, “New computational technologies for reasoning”

4pm-4:30pm: Invited talk

4:30pm-5pm: Contributed talks

5pm-6pm: In-person poster session and social hour

Organizers

Nada Amin

Nada Amin is an assistant professor of computer science at Harvard SEAS. Previously, she was a University Lecturer in Programming Languages at the University of Cambridge, and a member of the team behind the Scala programming language at EPFL. She is broadly interested in programming languages and the intersection of programming languages and artificial intelligence. She has co-organized the Scala, Scheme, miniKanren, and TyDe (Type-driven Development) workshops, and has served on the program committees of POPL, FLOPS, OOPSLA, and UAI, among others.

Eli Bingham

Eli is a Machine Learning Fellow at the Broad Institute of MIT and Harvard’s Data Sciences Platform, where he develops machine learning methods and software for biomedical research applications; he was previously a senior research scientist at Uber AI Labs. His research at the intersection of programming languages and AI focuses on developing general methods for approximate Bayesian inference suited to new and previously inaccessible problems, and on the democratization of those methods through the Pyro probabilistic programming language, of which he is a co-creator and core developer. He has served as a program committee member of scientific workshops including HOPE and LAFI, and has also organized and led a number of public and private workshops and tutorials for current and prospective Pyro users.

Nan Rosemary Ke

Rosemary is a research scientist at DeepMind. Previously, she was a PhD student at Mila, advised by Yoshua Bengio and Chris Pal. Her research centers on developing novel machine learning algorithms that can generalize well to changing environments. Her research focuses on two key ingredients: credit assignment and causal learning. These two ingredients flow into and reinforce each other: appropriate credit assignment can help a model refine itself only at the relevant causal variables, while a model that comprehends causality sufficiently well can reason about the connections between causal variables and the effect of intervening on them. She has co-organized a conference, six workshops, and three challenges: the Conference on Causal Learning and Reasoning (CLeaR) 2022; the “Inductive Biases, Invariances and Generalization in Reinforcement Learning” workshop at ICML 2020; the “Causal Learning for Decision Making” workshop at ICLR 2020; the “Efficient Credit Assignment” workshop at ICML 2018; the “Reproducibility in Machine Learning” workshop at ICML 2017, ICML 2018, and ICLR 2019; the Real Robot Challenge at NeurIPS 2021; and the ICLR Reproducibility Challenge at ICLR 2018 and ICLR 2019.

John Krakauer

Dr. John Krakauer is John C. Malone Professor, Professor of Neurology, Neuroscience, and Physical Medicine and Rehabilitation, Director of the Brain, Learning, Animation, and Movement Lab at The Johns Hopkins University School of Medicine, and External Professor at the Santa Fe Institute. His areas of research interest include experimental and computational studies of motor control and motor learning in humans, motor recovery and rehabilitation after stroke, and philosophy of mind. He has organized numerous workshops and scientific meetings, recently including The Learning Salon.

Emily Mackevicius

Emily Mackevicius is currently a postdoctoral neuroscientist at Columbia University in the Aronov lab. Previously, she completed her Ph.D. in neuroscience at MIT in the Fee lab. Her research investigates how the brain learns new information in the context of prior knowledge. Her work involves both experiments (recording neurons in birds performing naturalistic memory behaviors) and theory/computation (modeling how neural circuits self-organize, and developing a sequence-detection method, seqNMF). She has been involved in organizing a variety of scientific meetings, including founding an ongoing tutorial series on computational topics at MIT’s Brain and Cognitive Sciences Department, and TAing Woods Hole summer courses (“Methods in Computational Neuroscience”, and “Brains, Minds, and Machines”).

Robert Osazuwa Ness

Robert Osazuwa Ness is a Senior Research Scientist at Microsoft Research in Redmond, WA. He is a tech lead on MSR’s Societal Resilience team. Robert’s research aims to automate human reasoning by enabling experts to program domain knowledge into learning algorithms to achieve predictive capabilities not possible from data alone. He leads the development of MSR’s causal machine learning platform and conducts research into probabilistic models for advanced causal reasoning. Before joining MSR, he worked as a machine learning research engineer in various startups. He attended graduate school at both Johns Hopkins SAIS and Purdue University. He received his Ph.D. in Statistics from Purdue, where his dissertation research focused on Bayesian active learning models for causal discovery.

Talia Ringer

Talia Ringer is an assistant professor at the University of Illinois at Urbana-Champaign. Her work focuses on tools that make it easier to develop and maintain systems verified using proof assistants. Toward that end, she loves to use the whole toolbox—everything from dependent type theory to program transformations to neural proof synthesis—all in service of real humans verifying real systems. Prior to Illinois, she earned her PhD in 2021 from the University of Washington. She also has experience in industry. She has served the community in many capacities, including as founder and chair of the SIGPLAN long-term mentoring committee (SIGPLAN-M), co-chair of PLMW at ICFP 2020, hybridization co-chair of SPLASH 2021, co-organizer of the Coq Workshop 2022, and program committee member for PLDI, ITP, TYPES, CAV, CoqPL, HATRA, and AIPLANS.

Armando Solar-Lezama

Armando Solar-Lezama is a Professor in the Department of Electrical Engineering and Computer Science and associate director of the Computer Science and Artificial Intelligence Laboratory at MIT. His background is in programming languages, where he is best known for his seminal work on program synthesis. More recently, he has been working at the intersection of programming languages and machine learning, exploring learning techniques that combine the formal guarantees of program synthesis with the expressiveness of traditional machine learning. He has co-organized a number of workshops including the Workshop on Computer Assisted Programming (CAP) at NeurIPS 2020 and the workshop on Machine Learning and Programming Languages (MAPL) at PLDI 2019.

Zenna Tavares

Zenna Tavares is the inaugural Innovation Scholar in Columbia University’s Zuckerman Mind Brain Behavior Institute, and an Associate Research Scientist in the Data Science Institute. Zenna’s research aims to understand how humans reason, that is, how they come to derive knowledge from observing and interacting with the world. He also constructs computational and statistical tools that help advance his work on causal reasoning, probabilistic programming, and other areas. Prior to Columbia University, he was at MIT, where he received a PhD in Cognitive Science and Statistics and was a postdoctoral researcher in the Computer Science and Artificial Intelligence Laboratory (CSAIL). Zenna has co-organized a number of workshops, including DBAI and OOD Generalization at NeurIPS 2021, and has served on the program committee for UAI, ICML, NeurIPS, ICLR, and LAFI (POPL).