The Topos Oxford Seminar is a weekly, informal seminar hosting talks that are broadly aligned with the research themes of the Topos Institute. Links to recordings of the talks, or to accompanying slides, are given below when available. You can also view a YouTube playlist of all recordings.
Everettian quantum mechanics and the problem of ontological extravagance
Rachel Pedersen
(Oxford University)
Mar 27, 2025
Since it neither relies on a collapse postulate nor posits a hidden variable, Everettian quantum mechanics (EQM) is an attractive option among realist interpretations of quantum mechanics. In some sense, EQM is a simpler theory than its competitors. However, it also appears to require more structure; unlike its competitor theories, EQM has traditionally been taken to entail a multitude of worlds, as every possible outcome of every quantum process is actualized in at least one world. Since no other realist competitor theory shares this feature, the most prominent objection raised against EQM is that of ontological extravagance.
Everettians have given this objection little serious attention, at least in part because the objection from ontological extravagance has not yet been carefully articulated in the literature. To clarify this objection, I distinguish between two types of simplicity criteria: ones concerned with ontological abundance and others concerned with postulate abundance. Where ontological abundance criteria concern the number of concrete objects that some theory posits, postulate abundance norms instead concern the number and overall complexity of the set of postulates of the theory. Unfortunately, it is unclear how we ought to weigh these simplicity-related metaphysical considerations, and worse, as I argue, there is no independent motivation for either sort. I argue that we ought to instead consider the proximity between the way the theory says the world is, and the way the world appears to be. With the deadlock between Everettians and non-Everettians on what sort of simplicity criterion reigns supreme, this severely overlooked criterion offers a promising route forward in the dialectic. This alternate criterion roughly corresponds to what Emery (2023) calls the minimal divergence norm, under which we ought to prefer theories that deviate least from the manifest image, or the way the world generally appears to be. However, such a norm requires further specification; there is no image of the world that uniquely picks out the manifest image. Observations themselves depend on our theoretical commitments, which vary from agent to agent, and so there are many manifest images. To avoid this multiplicity, I argue that we ought to look to the physical sciences for a widely accepted theory that describes the world as we experience it. Classical mechanics (CM), the predecessor theory to quantum mechanics, does exactly that; though it is certainly not a correct description of reality, it is sufficiently accurate for numerous forms of engineering, as it describes macroscopic reality well enough. I argue that we ought to treat the way that classical mechanics says the world is as the manifest image. The third type of criterion, which I call classical divergence, then concerns the degree to which the way some theory in question says the world is deviates from the way that CM says the world is. I defend this norm on pragmatic grounds, arguing that adhering to it offers a distinct epistemic advantage.
While EQM does not necessarily tell us that our world is vastly different from the way that CM says it is, it is generally taken to entail many more worlds than its competitor theories. With this new set of simplicity criteria in place, Everettians can better understand the available dimensions along which they may endeavor to improve their ontology. This new norm that I advocate offers hope: Everettians can seek a less extravagant ontology by strategically minimizing classical divergence.
Open translations in mathematics
Tim Hosgood
(Topos)
Mar 20, 2025
Translation, in full generality, is a nuanced and complex art form that requires serious expertise and a holistic approach. So how can we as mathematicians hope to “solve the problem of translation” in our domain? Furthermore, can we do so without making access to academia even harder for non-native English speakers or hastening a domain collapse of non-English languages? I believe that the answer to both of these questions can only possibly be “yes” if we approach translation as a community-driven activity. In this talk, I will speak about my experiences in working on large translation projects with an open-source approach — the technology and methodology that were helpful for doing so, as well as some of the difficulties — and describe the sorts of resources that I believe would have been helpful. Hopefully this can form a starting point for community thought on the types of projects that we could focus on in the future. (This talk is a repeat of a talk given recently at the Isaac Newton Institute.)
Neural certificates
Mirco Giacobbe
(University of Birmingham)
Mar 13, 2025
Model checking aims to derive rigorous proofs for the correctness of systems and has traditionally relied on symbolic reasoning methods. In this talk, I will argue that model checking can also be effectively addressed using machine learning. I will present a family of approaches for formal verification that leverage neural networks to represent correctness certificates of systems, known as “neural certificates.” This approach trains certificates from synthetic executions of the system and then validates them using symbolic reasoning techniques. Building upon the observations that checking a correctness certificate is much simpler than finding one, and that neural networks are an appropriate representation for such certificates, this results in a machine learning approach to model checking that is entirely unsupervised, formally sound, and practically effective. I will demonstrate the principles and experimental results of this approach in the safety assurance of software, probabilistic systems, and control.
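To make the learn-then-check loop concrete, here is a minimal, self-contained Python sketch of the idea. It is not the speaker's toolchain: the toy linear system, the random-feature network standing in for the neural certificate, the hinge-style training loop, and the final grid check (a stand-in for the sound symbolic validation that a real pipeline would discharge with, say, an SMT solver or interval arithmetic) are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def step(x):
    """One step of a toy stable linear system; stands in for the system under verification."""
    A = np.array([[0.9, 0.2], [-0.1, 0.8]])
    return A @ x

# Candidate certificate V(x) = w . relu(W x): fixed random ReLU features with a
# trained linear head, the smallest "neural" parameterisation that keeps this short.
W = rng.normal(size=(32, 2))
w = np.zeros(32)

def features(x):
    return np.maximum(W @ x, 0.0)

def V(x):
    return w @ features(x)

# Phase 1 (learn): on sampled states, push V to decrease along transitions and to
# stay positive away from the equilibrium, i.e. a Lyapunov-style stability certificate.
for _ in range(5000):
    x = rng.uniform(-1.0, 1.0, size=2)
    fx, fxn = features(x), features(step(x))
    grad = np.zeros_like(w)
    if w @ fxn - w @ fx + 0.01 > 0:                 # decrease condition violated
        grad += fxn - fx
    if np.linalg.norm(x) > 0.1 and w @ fx < 0.01:   # positivity condition violated
        grad -= fx
    w -= 0.05 * grad

# Phase 2 (check): a dense grid check stands in for the sound symbolic validation
# step that makes the real method formally sound.
grid = np.linspace(-1.0, 1.0, 41)
violations = 0
for a in grid:
    for b in grid:
        x = np.array([a, b])
        if np.linalg.norm(x) <= 0.1:
            continue
        if V(step(x)) >= V(x) or V(x) <= 0:
            violations += 1
print("grid points violating the certificate conditions:", violations)
```

The split is the point the abstract emphasises: the learning phase may be heuristic and unsupervised, because soundness rests entirely on the final check of the candidate certificate.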
Generalized algebraic theories have long been part of our work at Topos, going back to the very beginnings of Catlab.jl. In this talk, I present a new implementation of generalized algebraic theories in Rust, which takes a different, more type-theoretic approach. This approach tightly integrates e-graphs into the type-checking process to enable an approximation to extensional equality, and in addition enables greater compositionality of theories, achieving a long-desired principled approach to “theory pushout”. This new implementation is intended as a prototype for the type-checking algorithms that will go into CatColab and enable “open notebooks” for compositional modeling, and I give an overview of how I envision that working. Beyond this application in scientific modeling, I believe that the lessons about two-level type theory that I have learned from this implementation could apply in a wide variety of domains where one wants abstraction that does not complicate the underlying semantics, such as finite state machines, probabilistic programming, SAT solving, systems programming, constraint programming, databases, and serialization formats.
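As a rough illustration of what “integrating e-graphs into the type-checking process” can mean, here is a toy congruence-closure e-graph in Python. It is not the Rust implementation described above; the class and method names are invented for this sketch. Asserted equations are merged into the e-graph, congruence closure propagates them to larger terms, and the type checker's equality test reduces to asking whether two terms land in the same e-class.

```python
class EGraph:
    """Toy e-graph: a union-find over e-class ids plus a hashcons of term nodes."""

    def __init__(self):
        self.parent = []   # union-find over e-class ids
        self.table = {}    # hashcons: (op, canonical child ids) -> e-class id

    def find(self, a):
        while self.parent[a] != a:
            self.parent[a] = self.parent[self.parent[a]]   # path halving
            a = self.parent[a]
        return a

    def add(self, op, *children):
        key = (op, tuple(self.find(c) for c in children))
        if key in self.table:
            return self.find(self.table[key])
        cid = len(self.parent)
        self.parent.append(cid)
        self.table[key] = cid
        return cid

    def merge(self, a, b):
        a, b = self.find(a), self.find(b)
        if a != b:
            self.parent[a] = b
            self.rebuild()

    def rebuild(self):
        # Congruence closure: re-canonicalise the hashcons and merge any nodes
        # whose children have become equal (so f(a) and f(b) collapse whenever a and b do).
        changed = True
        while changed:
            changed = False
            new_table = {}
            for (op, children), cid in self.table.items():
                key = (op, tuple(self.find(c) for c in children))
                cid = self.find(cid)
                if key in new_table and self.find(new_table[key]) != cid:
                    self.parent[self.find(new_table[key])] = cid
                    changed = True
                new_table[key] = self.find(cid)
            self.table = new_table

    def equal(self, a, b):
        return self.find(a) == self.find(b)

# Usage: after asserting x = y, congruence closure makes f(x) and f(y) equal, so a
# type mentioning f(x) would accept a term whose type mentions f(y) during checking.
eg = EGraph()
x, y = eg.add("x"), eg.add("y")
fx, fy = eg.add("f", x), eg.add("f", y)
eg.merge(x, y)
print(eg.equal(fx, fy))   # True
```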
A next-generation modular and compositional framework for System Dynamics modeling and beyond
System Dynamics (SD) is a powerful methodology for understanding and managing complex systems over time, with applications in fields such as public health, business, and environmental management. Despite its strengths, traditional System Dynamics modeling methods face several critical limitations, including a lack of modularity, inadequate representation of complex relationships, stratification that obscures model transparency, rigid coupling of model syntax and semantics, and limited support for model composition and reuse. These challenges restrict the scalability, adaptability, and accuracy of models, particularly when applied to large-scale or interdisciplinary systems. To address these limitations, this presentation introduces a novel framework grounded in Applied Category Theory (ACT), which provides a rigorous mathematical foundation for representing, relating, composing, and stratifying complex systems. By leveraging ACT, this research establishes a next-generation SD modeling approach that integrates mathematical rigor with practical utility, significantly enhancing the flexibility, expressiveness, and applicability of System Dynamics for researchers and practitioners across various domains.
To contextualize System Dynamics modeling in the realm of infectious disease simulation, the presentation will begin with an overview of a COVID-19 theory-based machine learning model, developed by incorporating the Bayesian methodology of particle filtering into a COVID-19 System Dynamics model. This model played a pivotal role in supporting decision-making across 17 Canadian jurisdictions during the COVID-19 pandemic by providing daily transmission monitoring, short-term projections, and counterfactual intervention analysis over the course of a year.
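For readers unfamiliar with the combination, here is a minimal Python sketch of a bootstrap particle filter wrapped around a stock-and-flow (SIR) model. It is purely illustrative and is not the jurisdiction-level COVID-19 model from the talk: the population size, noise model, parameter ranges, and the use of synthetic data are all assumptions made for the sake of a runnable example.

```python
import numpy as np

rng = np.random.default_rng(1)
N = 10_000                                  # total population (sum of the three stocks)

def sir_step(state, beta, gamma=0.1):
    """One day of the stock-and-flow model: flows move people between S, I, R stocks."""
    S, I, R = state
    new_inf = beta * S * I / N              # infection flow
    new_rec = gamma * I                     # recovery flow
    return np.array([S - new_inf, I + new_inf - new_rec, R + new_rec]), new_inf

# Synthetic "observed" daily case counts generated from a ground-truth beta.
true_beta, state, observed = 0.3, np.array([N - 10.0, 10.0, 0.0]), []
for _ in range(60):
    state, cases = sir_step(state, true_beta)
    observed.append(rng.poisson(cases))

# Bootstrap particle filter over (state, beta): predict, weight against the data, resample.
P = 500
states = np.tile([N - 10.0, 10.0, 0.0], (P, 1))
betas = rng.uniform(0.1, 0.6, size=P)
for y in observed:
    betas = np.clip(betas + rng.normal(scale=0.01, size=P), 0.0, 1.0)  # random walk on beta
    cases = np.empty(P)
    for i in range(P):
        states[i], cases[i] = sir_step(states[i], betas[i])
    weights = np.exp(-0.5 * ((y - cases) / (1.0 + 0.1 * cases)) ** 2)  # crude reporting noise
    weights = weights + 1e-12
    weights /= weights.sum()
    idx = rng.choice(P, size=P, p=weights)                             # resample
    states, betas = states[idx], betas[idx]

print(f"filtered transmission rate ~ {betas.mean():.3f}  (ground truth {true_beta})")
```

The same recipe underlies daily transmission monitoring: each new observation re-weights and resamples the particle cloud, so the filtered transmission rate tracks changes in the data over time.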
The American philosopher Charles Sanders Peirce (1839-1914) had much to say about the nature of intellectual enquiry. In the realm of deductive logic, category theorists have made important use of his string-diagrammatic logical calculus. But Peirce’s interests in inference extended beyond deduction to induction and abduction. In this talk I shall be exploring the thesis that we can understand this triple in terms of the category-theoretic notions of composition, extension and lift. We will also touch on his broader semiotics and his account of concept formation.
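One way to gloss the proposed correspondence, using Peirce’s well-known bean-bag example (a reading in the spirit of the abstract rather than the speaker’s precise formulation): write the rule “all beans from this bag are white” as an arrow g : B → C, the case “these beans are from this bag” as f : A → B, and the result “these beans are white” as h : A → C. Then:

```latex
\begin{align*}
  \text{deduction:}  &\quad \text{given } f \text{ and } g, \text{ conclude } h = g \circ f
      && \text{(composition)}\\
  \text{induction:}  &\quad \text{given } f \text{ and } h, \text{ seek } g \text{ such that } g \circ f \approx h
      && \text{(extension of } h \text{ along } f\text{)}\\
  \text{abduction:}  &\quad \text{given } g \text{ and } h, \text{ seek } f \text{ such that } g \circ f \approx h
      && \text{(lift of } h \text{ through } g\text{)}
\end{align*}
```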
This talk covers the basics of Markov semigroups from a slightly categorical point of view, with the aim of conveying a basic intuition about the setup for, and the meaning of, the Hille-Yosida theorem. At the end, a synthetic viewpoint is conjectured.
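For orientation, these are the standard definitions (not specific to the talk) of the objects the Hille-Yosida theorem is about: a Markov semigroup, its infinitesimal generator, and the resolvent through which the theorem characterises which operators arise as generators.

```latex
\begin{align*}
  &\text{semigroup law:}   && P_0 = \mathrm{id}, \qquad P_{t+s} = P_t\,P_s \quad (t, s \ge 0),\\
  &\text{Markov property:} && P_t \mathbf{1} = \mathbf{1}, \qquad P_t f \ge 0 \ \text{whenever } f \ge 0,\\
  &\text{generator:}       && A f = \lim_{t \downarrow 0} \frac{P_t f - f}{t},\\
  &\text{resolvent:}       && (\lambda - A)^{-1} f = \int_0^{\infty} e^{-\lambda t}\, P_t f \,\mathrm{d}t \quad (\lambda > 0).
\end{align*}
```

In one standard formulation, the theorem says that a densely defined operator A generates a strongly continuous contraction semigroup precisely when every λ > 0 lies in its resolvent set and the resolvent satisfies ‖λ(λ - A)⁻¹‖ ≤ 1.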
What 2-algebraic structure do cartesian pseudo-functors preserve?
If universal algebra is the study of sets equipped with the structure of some operations satisfying universally quantified equations, then 2-algebra is the study of categories equipped with the structure of some operations, some natural transformations between these operations, and some equations between these. Examples of 2-algebras include symmetric monoidal categories and categories with finite products.
Just as the functorial semantics (due to Lawvere) for algebra lets us interpret algebraic theories in categories other than sets (so long as they have finite products), the functorial semantics for 2-algebra (due to Power, Lack, and others) lets us interpret 2-algebraic structure in 2-categories other than Cat. For example, instead of symmetric monoidal categories, we can consider symmetric monoidal double categories.
Any strict 2-functor that preserves finite products will push forward 2-algebraic structure. But in practice, 2-functors are often only functorial up to coherent isomorphism; when such a pseudo-functor preserves finite products, we call it a cartesian pseudo-functor. What 2-algebraic structure do cartesian pseudo-functors preserve? In this talk, we’ll ask the question and put forward an answer: so long as the 2-algebraic theory is flexible, in the sense that it asks only for equations between transformations and not between operations, it will be preserved by cartesian pseudo-functors.
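To illustrate the dividing line in informal notation (an illustration in the spirit of the abstract, not a statement of its results): a cartesian pseudo-functor F comes with product-comparison isomorphisms φ_{A,B} : F(A × B) ≅ FA × FB, along which an operation of the theory is transported.

```latex
% transporting a binary operation of the theory along a cartesian pseudo-functor F,
% using its product-comparison isomorphisms \varphi_{A,B} : F(A \times B) \cong FA \times FB
\[
  \otimes \colon X \times X \longrightarrow X
  \qquad\rightsquigarrow\qquad
  \otimes' \;:=\; F(\otimes) \circ \varphi_{X,X}^{-1} \colon FX \times FX \longrightarrow FX .
\]
```

A strict equation between operations, such as associativity ⊗ ∘ (⊗ × 1) = ⊗ ∘ (1 × ⊗), is then recovered only up to an isomorphism built from the coherence cells of F, whereas an associator 2-cell, together with the equations between 2-cells that govern it (such as the pentagon), is carried across intact. This is the intuition behind restricting to flexible theories.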