We are a group of University of London philosophers. We organize talks in all areas of philosophy, with an emphasis on formal methods.
Due to COVID-19, our talks will be held via Zoom for the foreseeable future. If you want to attend a Zoom session, please join our mailing list; we will send out links via email.
May 27 2021, 16.00–18.00 (UK time)
Sometimes we acquire evidence of others’ evidence, and based on this new evidence we often change our epistemic states. An assumption underlying such practice is that the following EEE slogan is correct: ‘evidence of evidence is evidence’ (Feldman 2007, p. 208). We suggest that evidence of evidence is best understood as higher-order evidence about the epistemic state of agents. To model higher-order evidence we introduce a powerful framework for modeling epistemic states, Dyadic Bayesianism. Based on this framework, we then discuss characterizations of evidence of evidence and argue for one of them. Finally, we show that hardly any specification of the EEE slogan applies.
June 24 2021
Postponed due to COVID-19; date TBC
co-hosted with UCL’s Language and Meaning Centre
Information is sometimes public among a group. Whether information is public matters for accounts of rational choice, when the consequences of the actions an individual takes depend on how others act too. Standard models of communication presuppose public information—assertions make new information public, for example.
A standard analysis of public information identifies it with (some variant of) common belief. The latter notion is stipulatively defined as an infinite conjunction: for p to be commonly believed is for it to be believed by all members of a group, for all members to believe that all members believe it, and so forth. This analysis is often presupposed without much argument in philosophy. It has recently come under fire, in particular, in work by Harvey Lederman. That raises the question: is there any systematic reason to accept this identification?
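The infinite conjunction just described can be written out explicitly. In the usual epistemic-logic notation (a standard presentation, not necessarily the author's own formalism):

```latex
% E_G p : everyone in group G believes that p.
% Common belief C_G p is the infinite conjunction of iterated group belief:
C_G\,p \;\coloneqq\; E_G\,p \,\wedge\, E_G E_G\,p \,\wedge\, E_G E_G E_G\,p \,\wedge\, \cdots
       \;=\; \bigwedge_{n \ge 1} E_G^{\,n}\,p
```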
This paper justifies the identification of publicity with (a variant of) common belief without appeal to either theoretical entrenchment or intuitions about cases. The strategy is to characterize a practical-normative role for information being public, and show that the only things that play that role are (variants of) common belief as stipulatively characterized.
In more detail: I start by characterizing the role of “taking a proposition for granted” in non-isolated decision making. Minimal conditions under which such an attitude is correctly held are presented. I add the thesis that to take a proposition for granted in the relevant sense is to believe that it is public. This thesis may be justified either as theoretical speculation (to avoid positing new sui generis attitudes) or as an expressivist analysis of publicity (my personal preference). I also need some further, less contentious principles, together with the assumption that the principles are true a priori.
Some theorists of belief (contentiously!) assume that belief is, a priori, closed under a priori consequence. Granted that, the result that publicity entails common belief can be derived from the premises I identify. Without the contentious assumption, it follows only that for a proposition to be public among a group, group members must be commonly committed to believe the proposition—a result resembling Lewis and Gilbert’s variants on the common belief analysis of publicity. I also take up the question of whether common belief is sufficient for publicity, and consider objections that target the required a priori character of the premises used.
March 25 2021
Al Fārābī believed it was impossible for there to be a perfect vacuum. According to relationism about propositional attitudes, Fārābī was related, by the relation of believing, to something, namely, its being impossible for there to be a perfect vacuum. Relationism has typically been discussed as a thesis which says that attitudes like belief are relations between two things: people and propositions. Attention has then focused on what kinds of things these propositions are. But higher-order metaphysics makes salient a different possibility: that attitudes like belief are relations between people and the sui generis denotations of sentences, which are not things. In this talk I consider relationism from this higher-order perspective. I start by showing how higher-order metaphysics sidesteps or resolves a number of questions that have been central to debates over propositions in the first-order setting. I then focus attention on two arguments based on opacity which threaten the higher-order version of relationism. My main goal is to explore the constraints on higher-order relationism imposed by these arguments. But at the end I tentatively argue for a position which I call “deep opacity”, according to which identity at higher types is a source of opacity.
March 18 2021
Hyperintensionality has been a hot topic in philosophical semantics in recent years (it’s been a topic much longer, of course). Let’s call a semantic framework hyperintensional iff the framework allows us to distinguish necessarily equivalent propositions. There are different hyperintensional frameworks on the market: structured propositions, transparent intensional logic, HYPE, impossible worlds semantics, and (exact) truthmaker semantics are prominent examples. Which one should we endorse? If we want to pick rationally, we need a way of comparing frameworks systematically. In this talk, I’ll explore the idea of doing this by means of algebraic abstraction. The core proposal is a definition of comparative granularity for hyperintensional frameworks, an answer to the question: When is one framework “more hyperintensional” than another?
December 10 2020
Mathematical explanations of physical phenomena typically explain such phenomena by showing them to follow from mathematics together with physical background conditions. Marc Lange (2016) calls such explanations ‘explanations by constraint’. I have argued elsewhere (Leng 2012) that we should think of these explanations as a form of structural explanation. As both Lange and Leng understand these explanations, they look very close in form to Hempel’s ‘covering law’ explanations. As such, mathematical explanations look vulnerable to the problem of explanatory symmetries: if the height of the flagpole mathematically explains the length of its shadow, it looks like the length of the shadow equally well explains the height of the flagpole. Lange has tried to show that his account of mathematical explanation has the means to block these explanatory reversals. This paper argues, by contrast, that defenders of mathematical explanation should stop worrying about preserving explanatory asymmetries, and accept that the shadow’s length does indeed mathematically explain the flagpole’s height.
November 26 2020
Hamkins uses an analogy between set theory and geometry to explain and (perhaps) additionally motivate his famous Multiverse Program. I’ll highlight a sense in which the change in attitudes to set theory Hamkins advocates seems much more radical than the change in attitude to geometry he invokes for comparison. And I’ll consider whether he faces an explanatory indispensability worry about how to eliminate or replace combinatorial explanations in science (explanations which seemingly appeal to a unique, favored notion of ‘all possible ways of choosing’).
November 5 2020
Deflationists about truth hold that the function of the truth predicate is to enable us to make certain assertions we could not otherwise make. Pragmatists claim that the utility of negation lies in its role in registering incompatibility. The pragmatist insight about negation has been successfully incorporated into bilateral theories of content, which take the meaning of negation to be inferentially explained in terms of the speech act of rejection. In this talk, I will implement the deflationist insight in a bilateral theory by taking the meaning of the truth predicate to be explained by its inferential relation to assertion. This account of the meaning of the truth predicate is combined with a new diagnosis of the Liar Paradox: its derivation requires the truth rules to preserve evidence, but these rules only preserve commitment. The result is a novel inferential deflationist theory of truth. The theory solves the Liar Paradox in a principled manner, and, unlike extant approaches to the paradoxes, deals with the revenge paradoxes of classical recapture in exactly the same way. If time permits, I will show how the theory and simple extensions thereof have the resources to axiomatise the internal logic of several supervaluational hierarchies, thereby solving open problems of Halbach (2011) and Horsten (2011). This is joint work with Julian Schlöder.
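For orientation, the classical Liar derivation that the diagnosis targets runs roughly as follows (a standard textbook reconstruction, not the talk's own bilateral formulation):

```latex
% Liar sentence: \lambda is provably equivalent to \neg T\ulcorner\lambda\urcorner.
% Truth rules: from \varphi infer T\ulcorner\varphi\urcorner (T-Intro), and conversely (T-Elim).
\begin{align*}
1.\;& T\ulcorner\lambda\urcorner \vdash \lambda          && \text{T-Elim}\\
2.\;& \lambda \vdash \neg T\ulcorner\lambda\urcorner     && \text{definition of } \lambda\\
3.\;& \vdash \neg T\ulcorner\lambda\urcorner             && \text{1, 2, reductio}\\
4.\;& \vdash \lambda                                     && \text{3, definition of } \lambda\\
5.\;& \vdash T\ulcorner\lambda\urcorner                  && \text{4, T-Intro}\\
6.\;& \vdash \bot                                        && \text{3, 5}
\end{align*}
```

On the diagnosis sketched in the abstract, running this derivation demands that the truth rules preserve evidence, whereas they only preserve commitment; read bilaterally, the derivation therefore fails to go through.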
September 24 2020
The aim of this talk is to explore ways in which logic may be considered exceptional among the sciences against the backdrop of a naturalistic outlook. The conception of logic I focus on emphasises the traditional role of logic as a methodology for the sciences. On the proposed conception, the methodological aims of logic drive its definitions and principles, rather than the description of scientific phenomena. Logic serves as a methodological discipline with respect to all sciences, and this generality, as well as its unique reflexive nature, distinguishes it from other methodological disciplines. In the talk I shall delineate the proposed conception of logic, and I’ll explain the notion of a methodological discipline. Finally, if time permits, I’ll take the evolution of model theory as a case study and observe its methodological role. Following recent work by John Baldwin and Juliette Kennedy, I look at model theory from its inception as a foundational endeavour in the mid-twentieth century through to developments at the end of the century, when the classification of theories took centre stage.
July 23 2020
In recent years classical logicians have opposed the idea that classical recapture can aid non-classical approaches to the semantic paradoxes. As I see it, there are various related objections scattered in the literature. First, non-classical theorists tend to adopt a lazy attitude in their explanations of how classical logic should be retained. In particular, they don’t provide an account of the conditions under which it is safe to reason classically. Second, it has been suggested that by appealing to classical recapture the non-classical theorist seems to be committed to a piecemeal approach, since she is employing specific auxiliary hypotheses on a case-by-case basis. On account of this, classical recapture is said to be an ad hoc move. Third, one could worry that it is not always easy to disentangle the circumstances under which classical logic can be retained from the circumstances under which it can’t. If that is so, then one may argue that there is an explanatory deficiency affecting non-classical accounts. Fourth, in establishing that her favored logic admits of a recapture result the non-classical theorist employs classical logic. But then it seems that she is guilty of using classical principles that are not available to her in the meta-language. The main goal of this talk will be to ascertain whether these objections succeed. I will claim that there are at least some cases where they do not.
June 25 2020
We explore the practice of guessing: how subjects make predictions amongst several options when they can’t pick one with certainty. Guesses show intricate and surprising patterns that the most obvious theories do not predict. We give a model which does predict these features based on the Jamesian idea that subjects aim to optimize a trade-off between accuracy and informativity in their answers. We then use this model to argue that guessing has a central role to play in our cognitive and conversational lives. In particular, we argue that the accuracy-informativity tradeoff that governs guessing offers (1) a theory of belief, (2) a generalization of the standard pragmatics of assertion, and (3) an account of the conjunction fallacy—the puzzling psychological finding that people sometimes rate conjunctions as more probable than their conjuncts.
June 11 2020
Lewis’s (1976) triviality argument against The Equation (a.k.a. Adams’s Thesis) rests on an implausibly strong presupposition about the nature of (epistemic) rational requirements. Interestingly, Lewis (1980) later rejected this presupposition. In his discussion of the Principal Principle, Lewis assumes something weaker, and more reasonable, about the nature of rational requirements. In this paper, I explain how to apply the insights of Lewis (1980) to repair Lewis’s earlier discussion. This leads to a more reasonable rendition of The Equation — one which is (a) immune from triviality, and (b) a better candidate for a (bona fide) rational requirement.
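The Equation, and the shape of Lewis's 1976 triviality result, can be stated briefly (a standard reconstruction of the published argument, not this paper's repaired version):

```latex
% The Equation (Adams's Thesis): for \Pr(A) > 0,
\Pr(A \rightarrow C) \;=\; \Pr(C \mid A).

% Lewis's derivation: if The Equation survives conditionalization on
% arbitrary propositions, then by the law of total probability
\Pr(A \rightarrow C)
  &= \Pr(A \rightarrow C \mid C)\Pr(C) + \Pr(A \rightarrow C \mid \neg C)\Pr(\neg C)\\
  &= \Pr(C \mid A \wedge C)\Pr(C) + \Pr(C \mid A \wedge \neg C)\Pr(\neg C)\\
  &= 1 \cdot \Pr(C) + 0 \cdot \Pr(\neg C) \;=\; \Pr(C).
% Hence \Pr(C \mid A) = \Pr(C) whenever the conditional probabilities are
% defined: every pair of propositions is independent, which trivializes Pr.
```

The strong presupposition the abstract mentions is, roughly, that The Equation holds not just for one's current credences but for every credence function reachable from them by conditionalization.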
February 27 2020
This talk explores the fineness of grain of propositions in a formal framework with propositional quantifiers and plural propositional quantifiers. In this setting, an argument going back to Russell and Myhill shows that propositions cannot be structured. It will be shown how to use this argument to impose two further logical limitations on the fineness of grain of propositions: First, we cannot recover conjuncts from conjunctions and instances from universal quantificational claims; indeed, this impossibility holds for all binary operators and propositional quantifiers. Second, standard principles of immediate logical ground for conjunctions, disjunctions and universal quantifiers are inconsistent. In certain cases, the relevant arguments can even be carried out using only predicative instances of plural comprehension.
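The Russell–Myhill argument mentioned above can be sketched as a Cantorian diagonalization in the plural setting (a textbook-style reconstruction, not necessarily the paper's exact formulation):

```latex
% For any propositions XX (plural), let f(XX) be the proposition that
% every proposition among XX is true:
f(XX) \;\coloneqq\; \forall p\,(p \prec XX \rightarrow p)

% If propositions were structured, f would be injective: distinct
% pluralities would yield structurally distinct universal propositions.
% Diagonalize: let RR collect the propositions that are f-images of
% pluralities they are not among:
q \prec RR \;\leftrightarrow\; \exists YY\,(q = f(YY) \wedge q \nprec YY)

% Then r := f(RR) is contradictory: if r \prec RR, some YY with
% r = f(YY) and r \nprec YY exists, and injectivity forces YY = RR,
% so r \nprec RR; if r \nprec RR, then RR itself witnesses r \prec RR.
```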
If you want to be kept informed about LGFP events, join our mailing list by completing this form (an automatically generated password will be sent to you once you’ve confirmed your subscription):