This special issue of Synthese is in honor of Gerhard Schurz, our good friend and colleague, who has contributed to philosophy in a multitude of novel and interesting ways. The idea for such a special issue was born at a symposium on the occasion of Gerhard’s 60th birthday back in 2016. As a fun fact, the final publication of the special issue in print might be quite close to his 65th birthday in 2021, so in some sense this special issue can be expected to kill two birds with one stone. Let us start this introduction with a few words on Gerhard’s life and work and on the guest editors’ own relationship to him. We then zoom in on the special issue itself, followed by a brief description of its content.

Gerhard obtained an MA in chemistry in 1980 followed by a PhD in philosophy on scientific explanation in 1983. From 1983 to 2000 he was a research assistant, and later assistant and associate professor at the Department of Philosophy at the University of Salzburg, Austria, where Paul Weingartner was one of his most important promoters. In 2000 Gerhard became Professor of Philosophy of Science at the University of Erfurt, Germany, and in 2002 chair of Theoretical Philosophy at the University of Düsseldorf. Over his career Gerhard was a visiting professor at many leading universities such as the University of California at Irvine and Yale University, and since 2016 he has been president of the German Society for Philosophy of Science.

Gerhard’s empirically and scientifically minded view of philosophy and his way of approaching philosophical problems and tasks have inspired the guest editors of this special issue as well as his students (among them Hannes Leitgeb, Franz Huber, and Helmut Prendiger) in many ways. Three of us—Markus Werning, Alexander Gebharter, and Christian Feldbacher-Escamilla—did our PhDs with Gerhard. All of us are very grateful for our time together and for what we have learned over the years from Gerhard, about philosophy, science, and life.

Gerhard has worked on a multitude of topics within philosophy, ranging from the is-ought problem through logic (especially relevance logic), probability theory, truthlikeness, explanation and understanding, induction and meta-induction, abduction, and causation, to the generalized theory of evolution. It is interesting to note that David Hume formulated highly influential skeptical concerns about most of the topics in this range that intersect with the contributions to this issue. To mention the most famous areas of Hume’s skepticism: he stressed the gaps in reasoning from is to ought, from knowledge about the past and present to the future (induction), and from observation or experience to causation. The contributions in this special issue are arranged along these skeptical foci. Since Gerhard has provided significant contributions, approaches, and solutions in all these areas, one might consider him—at least regarding the range of topics he has worked on—a Humean anti-skeptic.

1 The is-ought problem

I am surpriz’d to find, that instead of the usual copulations of propositions, is, and is not, I meet with no proposition that is not connected with an ought or an ought not.

(Hume 1738/1960: Treatise, book III, part I, section I)

The first two articles in this special issue are connected to the “is-ought problem”, which basically consists in the question of how one can move from descriptive to normative statements, a topic Gerhard investigated particularly in his earlier career. In his “The is-ought problem: A study in philosophical logic” (Schurz 1997), Gerhard presented a logical investigation of this problem in the framework of alethic-deontic logics that grew out of his habilitation and went far beyond the logical treatments of this problem in the existing literature (cf. Schurz 2010 for an overview). The most innovative points of his investigation are the following: (a) He was able to prove that the so-called special Hume thesis—which asserts that no consistent set of descriptive premises logically entails a purely normative conclusion—holds for all alethic-deontic logics that are Halldén-complete and axiomatizable without is-ought bridge principles, thereby generalizing the results of Kutschera (1977) and Stuhlmann-Laeisz (1983). (b) He developed a solution to Prior’s paradox (Prior 1960) by proving that the general Hume thesis—which asserts that in all valid inferences with descriptive premises and mixed conclusions, all normative parts of the conclusion are completely ought-irrelevant—holds in all alethic-deontic logics that are axiomatizable without is-ought bridge principles, thereby generalizing a result of Pigden (1989). (c) He proved that functional is-ought bridge principles (such as the ought-can or the means-end principle) do not allow for the derivation of non-trivial is-ought inferences (thereby generalizing a result of Galvan 1988); only substantial is-ought bridge principles (such as those of utilitarian ethics) do.
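For readers who prefer a compact formulation, the two Hume theses can be rendered schematically as follows; the notation is ours and simplifies Gerhard’s exact formulations.

```latex
% Schematic rendering in our own, simplified notation (cf. Schurz 1997).
% \Delta: a consistent set of purely descriptive premises;
% N: a purely normative, non-tautological sentence;
% C: a mixed (descriptive-normative) sentence.
\[
\textbf{(SH)}\quad \Delta \nvdash N
\]
\[
\textbf{(GH)}\quad \text{If } \Delta \vdash C, \text{ then every normative part of } C
\text{ is ought-irrelevant, i.e.\ replaceable \emph{salva validitate}.}
\]
```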

The first of the two articles on the “is-ought problem” in this special issue, Wolfgang Spohn’s “Defeasible Normative Reasoning” (Spohn 2018), briefly discusses three bridge principles between is and ought statements studied in Gerhard’s (1997). The article then focuses on the means-end principle and suggests replacing Gerhard’s version of that principle with a subjunctive reading, which Wolfgang takes as the starting point for his paper. After introducing the basics of ranking theory (for details, see Spohn 2012) needed for subsequent sections, Wolfgang first provides an analysis of the modified means-end principle’s antecedent and then an analysis of its consequent against the background of that framework. This is followed by an investigation of how antecedent and consequent might be related and by a justification of the developed approach.

As outlined above, Gerhard has argued in his (1997) for the claim that Hume’s insight stands firm in the light of modern modal and deontic logic: no relevant deontic conclusion is licensed on purely descriptive grounds. Rather, in order to validate such inferences one needs bridge principles. As psychologists, Jonathan Evans and Shira Elqayam are particularly interested in the more empirical claim that people do in fact frequently reason from is to ought, and also in the question of which bridge principles might be used. For this reason they discuss in their “How and Why we Reason from Is to Ought” (Evans and Elqayam 2018) whether humans actually make is-ought inferences, and if so, how they do so and what evolutionary function such inferences have. Regarding the first question (whether?), they stress that psychological investigations of is-ought inferences arose more or less as a by-product of psychological research on indicative conditionals (the so-called Wason selection tasks) and that the attempts to make the experimental results intelligible led to a huge amount of data on and investigations of deontic rather than indicative conditionals. How exactly people reason in this way is described via a reasoning chain they call deontic introduction. In earlier work they have experimentally identified the following characteristics: deontic introduction consists of a causal inference (A causes B), a goal inference (B is good), a value transference (A is good), and deontic bridging (from A is good to A ought to be done). This form of reasoning is enthymematic and implicit, since the intermediary steps are not made explicit; it is also contextualised and defeasible in the sense that additional information might render such an inference invalid. Regarding the third question (what function?), one can in general distinguish instrumental (achieving one’s goals) from epistemic (acquiring true beliefs) normativity. In evolutionary terms, the instrumental function seems to supervene on the epistemic one: typically, we need accurate knowledge in order to achieve our goals. The authors argue that deontic introduction provides a crucial linchpin between the instrumental and the epistemic function: if rationality were just about true beliefs (i.e. epistemic), then we would lack a link to action; and without action, there would be no instrumental function. Deontic introduction directly links the epistemic function (of acquiring truth) to the instrumental one (of achieving one’s goals). They also link this to previous work of Jonathan Evans, namely his two minds theory. According to their interpretation, the old mind concerns intuitive, associative, conditioned, and instinctive forms of learning (this is mainly the foundation of instrumental rationality), whereas the new mind concerns the simulation of consequences in unobserved or new contexts; the latter is the foundation of deontic introduction, which serves as a generator of bridge principles. In this sense they complement Gerhard’s logical investigation of is-ought reasoning and the underlying bridge principles with an experimentally tested and characterised scheme of such principles.

2 The problem of perception

Nothing is ever really present with the mind but its perceptions or impressions and ideas.

(Hume 1738/1960: Treatise, book I, part II, section VI)

It is clear that for empiricists like Hume the objectivity and veridicality of observation plays a key role in scientific theory assessment. However, one traditional argument against this cornerstone of empiricism is that of the theory-ladenness of observation: if observational concepts and the acceptance of observational statements vary with our background theories, then they are not inter-subjective; and if they are not inter-subjective, then they are not veridical. Ioannis Votsis addresses this problem in his “Theory-Ladenness: Testing the ‘Untestable’” (Votsis 2018). He proposes an experimental design for testing observational judgements with regard to theory-ladenness with the help of a stimulus exchange procedure—a particular classification task in which experts and laymen have to categorise “observations”, in the form of instrument-produced images as well as drawings of these made by the other experimental participants, according to similarity (discriminability) considerations. Ioannis compares his experimental design with the ostensive learnability criterion proposed by Gerhard (Schurz 2015a), according to which a concept is the less theory-laden the higher the success rate and the faster the increase of the concept’s learning curve. He highlights advantages and disadvantages of both designs and expresses the overt stance that both are promising and complementary tests for theory-neutrality. Ioannis concludes his investigation with an abductive argument for the claim that, although intersubjectivity provides no guarantee of veridicality, the objectivity of observational judgements established in this way also speaks in favour of their veridicality. This holds, according to Ioannis, simply because such an assumption provides the best explanation of the convergence of observational judgements in comparison to, e.g., different forms of constructivism, which can even be shown to be self-defeating.

3 The problem of causation

Not only our reason fails us in the discovery of the ultimate connexion of causes and effects, but even after experience has inform’d us of their constant conjunction, ‘tis impossible for us to satisfy ourselves by our reason, why we shou’d extend that experience beyond those particular instances, which have fallen under our observation.

(Hume 1738/1960: Treatise, book I, part III, section VI)

The next two articles are on another topic Gerhard has worked on over the last decade: causation. The first of these articles is “A new proposal how to handle counterexamples to Markov causation à la Cartwright, or: fixing the chemical factory” (Gebharter and Retzlaff 2018). In this article, Alexander Gebharter and Nina Retzlaff discuss common causes that do not screen off their effects, such as Cartwright’s (1999a) chemical factory and the like (see, e.g., Cartwright 1999b; Retzlaff 2017; Wood and Spekkens 2015). It is to some extent still controversial whether such scenarios actually exist, but if they do, it is clear that they would pose a serious threat to the core principle of modern causal modeling approaches (Pearl 2000; Spirtes et al. 1993) and of a general theory of causation based on such approaches (Gebharter 2017; Schurz and Gebharter 2016): the causal Markov condition. In his (Schurz 2017) Gerhard proposed to revise the causal Markov condition in such a way that it also allows for common causes that do not screen off their effects. In their article in this special issue Alexander and Nina discuss Gerhard’s and other proposals for saving the causal Markov condition. They also come up with their own solution: instead of revising the causal Markov condition, they propose to introduce a certain kind of non-causal element into the models describing the purported counterexamples, one that can account for the additional dependence between the problematic common causes’ effects.
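For readers less familiar with the causal modeling framework, the principle at stake and the screening-off behaviour it implies can be stated schematically as follows (standard textbook notation, not specific to the papers discussed here).

```latex
% Causal Markov condition (standard formulation, cf. Spirtes et al. 1993):
% every variable X in a causal model is probabilistically independent of
% its non-effects (non-descendants) conditional on its direct causes (parents).
\[
X \perp\!\!\!\perp \mathrm{NonDesc}(X) \mid \mathrm{Par}(X)
\]
% For a common cause C of two effects E_1 and E_2 (with no further causal
% connection between them), this implies that C screens off its effects:
\[
P(E_1 \wedge E_2 \mid C) \;=\; P(E_1 \mid C)\cdot P(E_2 \mid C).
\]
% The purported counterexamples are cases in which this equality fails.
```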

The second causation paper in this special issue is “Processes, pre-emption and further problems” (Hüttemann 2018), written by Andreas Hüttemann. Contrary to Gerhard, whose work on causation focuses on type-level causal relations, Andreas is mainly interested in token-level causation in this article. In particular, he proposes a process theory of causation that can, contrary to classical process theories, handle problems that arise in contexts involving pre-emption, negative causation, misconnection, and disconnection (cf. Dowe 2009). To this end, Andreas’ theory does not define causation in terms of causal processes and interactions, but rather in terms of interferences with so-called quasi-inertial processes, where the latter can be analyzed in scientific terms. Andreas also briefly discusses ways in which his account could be used to support analyses of actual causation within type-level causal modeling approaches of the kind preferred by Gerhard.

4 The problem of induction (and truth-tracking)

There can be no demonstrative arguments to prove, that those instances, of which we have had no experience, resemble those, of which we have had experience.

(Hume 1738/1960: Treatise, book I, part III, section V)

The next set of articles is connected to Gerhard’s most recent research on meta-induction. Meta-induction is an approach to dealing with Hume’s problem of justifying induction and with the dilemma that a deductive justification of induction fails for being too weak, whereas an inductive justification fails for being circular. In the tradition of Hans Reichenbach’s vindication of induction (cf. Reichenbach 1940), Gerhard has shown that if one shifts the epistemic goal from providing a guarantee of the (expected) success of inductive methods to proving an (accessible) optimality of such methods, one gains a solution, namely the optimality of meta-induction (cf. Schurz 2008, 2019a). Meta-induction is a social strategy which makes predictions with the help of a success-based or so-called attractivity-based weighting of its competitors’ predictions. It turns out that, by this application of induction on the meta-level of success rates, the strategy is provably optimal in the long run when compared with its competitors.
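To convey the basic mechanics of success-based weighting, the following minimal sketch in Python illustrates the idea in a simple real-valued prediction setting; it is our own simplification for illustration only, and the function and variable names are ours rather than Gerhard’s exact definitions.

```python
# Minimal illustrative sketch of success-/attractivity-based meta-induction
# (our own simplification; not Schurz's exact definitions).

def meta_inductive_predictions(expert_predictions, outcomes):
    """expert_predictions: list of lists, expert_predictions[i][t] in [0, 1];
    outcomes: list of observed events, outcomes[t] in [0, 1].
    Returns the meta-inductivist's predictions for each round."""
    n_experts = len(expert_predictions)
    n_rounds = len(outcomes)
    expert_success = [0.0] * n_experts   # cumulative scores of the experts
    mi_success = 0.0                     # cumulative score of the meta-inductivist
    mi_predictions = []

    for t in range(n_rounds):
        # Attractivity of an expert: its success advantage over the
        # meta-inductivist so far (negative values are set to zero).
        attractivities = [max(expert_success[i] - mi_success, 0.0)
                          for i in range(n_experts)]
        total = sum(attractivities)
        if total > 0:
            weights = [a / total for a in attractivities]
        else:
            weights = [1.0 / n_experts] * n_experts  # fall back to equal weights

        # Weighted average of the experts' current predictions.
        prediction = sum(w * expert_predictions[i][t]
                         for i, w in enumerate(weights))
        mi_predictions.append(prediction)

        # Update scores with the natural score 1 - |prediction - outcome|.
        for i in range(n_experts):
            expert_success[i] += 1.0 - abs(expert_predictions[i][t] - outcomes[t])
        mi_success += 1.0 - abs(prediction - outcomes[t])

    return mi_predictions
```

In runs of this kind the meta-inductivist’s cumulative success approaches that of the best-performing competitor, which is the intuition behind the long-run optimality result.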

As a purely social strategy, meta-induction partly depends on the condition that the success rates or track records of all agents are accessible. However, “the problem is that this condition is arguably rarely satisfied in practice” (Hahn, Hansen & Olsson—for short H2O—Hahn et al. 2018, Sect. 1). The question is what one should do when success rates are not accessible. One interesting approach is to jump back to the level of object-predictors who interact with each other. Although these agents cannot rely on success rates, they can try to use a measure of trustworthiness, which they infer on the basis of prediction content: competitors who predict something expected get a boost in trustworthiness; conversely, the degree of trustworthiness of those who predict something unexpected is decreased. In their “Truth Tracking Performance of Social Networks”, H2O discuss such an alternative model. They particularly focus on the impact of network structure on deviations between the average degree of belief in such a social network and the true value, the so-called veritistic value or V-value. They do so by performing simulations on the basis of the Bayesian agent-based model Laputa (cf. Olsson 2011; Vallinder and Olsson 2014). They analyze the simulations with respect to a range of common properties for classifying networks (network metrics). Their analysis shows that two negative correlations are of particular interest: the V-value is negatively correlated with connectivity, i.e., the minimum number of nodes that need to be removed to separate the remaining nodes into independent subnetworks; and it is negatively correlated with clustering as measured by common clustering coefficients, which capture the extent to which a network decomposes into tightly knit subnetworks. Their discussion of this result also comes with an illustrative interpretation: “A cluster which is initially on the wrong track can reinforce itself through internal communication, locking into a false belief. Internal trust turns the cluster into a group of ‘conspiracy theorists’.” The general conclusion one may draw is that topological structure matters a lot for the performance of (and within) a social network and that the “commonsense or internet age” claim that “the more connected a community of agents is, the better it will be at tracking truth” does not stand up to scrutiny.
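The two network metrics in question are standard graph-theoretic quantities. The following toy snippet (using the networkx library, purely for illustration and unrelated to the Laputa implementation) shows how they can be computed for a small communication network.

```python
# Illustration of the two network metrics discussed above (not part of Laputa).
import networkx as nx

# A toy communication network: two tightly knit clusters joined by one bridge.
G = nx.Graph()
G.add_edges_from([(1, 2), (2, 3), (1, 3),      # cluster A
                  (4, 5), (5, 6), (4, 6),      # cluster B
                  (3, 4)])                     # bridge between the clusters

# Connectivity: minimum number of nodes whose removal disconnects the network.
print("node connectivity:", nx.node_connectivity(G))   # 1 (remove node 3 or 4)

# Clustering: average clustering coefficient of the nodes.
print("average clustering:", nx.average_clustering(G))
```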

The contribution of Christian J. Feldbacher-Escamilla, “An Optimality-Argument for Equal Weighting” (Feldbacher-Escamilla 2018), is also about social strategies, but takes a particular network structure for granted, namely a fully connected set of agents. Christian argues that Gerhard’s account of meta-induction can be employed not only for justifying individual sources of knowledge, such as inductive learning, but also for rationalizing social sources of knowledge, such as learning from peer disagreement. In the debate on how to deal with epistemic peer disagreement, three classical positions have emerged: the equal weight view, the remain steadfast view, and the total evidence view. Whereas the latter two views ask one to take higher-order evidence about peer disagreement into account only partially, or not at all, the equal weight view demands that one rely completely on such higher-order evidence. The main argument put forward for this view stems from indifference considerations. The view seems similar to the purely social strategy of meta-induction in the sense that it relies on higher-order evidence alone. However, Christian argues that the similarity is not merely superficial: the equal weight view, when explicated in detail, turns out to be a particular case of applying the theory of meta-induction. By embedding the former into the latter, he is able to transfer the optimality argument for meta-induction to the case of peer disagreement and to strengthen the case for the equal weight view by moving from reasoning via epistemic indifference to reasoning from optimality.
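Schematically, and in our own notation rather than Christian’s own formalization, the relation between the two weighting schemes can be displayed as follows.

```latex
% Our own schematic contrast (not Christian's formalization).
% c_1, ..., c_n: the credences of n epistemic peers in some proposition;
% w_1, ..., w_n: success- or attractivity-based weights (summing to 1).
\[
\text{meta-inductive weighting:}\quad c^{\mathrm{new}} = \sum_{i=1}^{n} w_i \, c_i ,
\qquad
\text{equal weight view:}\quad c^{\mathrm{new}} = \frac{1}{n}\sum_{i=1}^{n} c_i .
\]
% When no track records are accessible and the peers are treated as
% epistemically indifferent, the weights collapse to w_i = 1/n, so that
% equal weighting appears as a special case of meta-inductive weighting.
```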

Measuring the success or track record of predictors presupposes some way of scoring predictions. In the case of probability distributions, the question is how to score probabilistic forecasts. Igor Douven takes up this question in his “Scoring in Context” (Douven 2018) and argues that approaches to scoring depend heavily on context. This is particularly because there exists a bewildering variety of scoring rules for which objectivity, in the form of satisfying certain standards of goodness of scoring, is claimed, but not all of these standards can be met by one and the same scoring rule. Igor argues that for different purposes different standards might be adequate and that one important standard, namely getting intuitions about truthlikeness right, has been rather neglected so far. For this purpose he introduces the notion of a verisimilitude-sensitive scoring rule: since how far away from the truth a false hypothesis is depends on which hypothesis is true, he suggests relativizing scoring rules to the true hypothesis (of course, what counts as the true hypothesis varies when one considers expected values in scoring). He then shows that all such verisimilitude-sensitive scoring rules are improper. However, as he argues with the help of examples, propriety seems to be particularly relevant when one wants to elicit probabilities ex ante, but not ex post.
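As a concrete point of reference, the Brier score is a standard example of a proper scoring rule; the verisimilitude-sensitive variant below is merely our own toy illustration of the general idea, not Igor’s definition.

```latex
% Standard example of a (strictly) proper scoring rule: the Brier score.
% p = (p_1, ..., p_n): a probabilistic forecast over n mutually exclusive
% hypotheses; i: the index of the true hypothesis.
\[
\mathrm{Brier}(p, i) \;=\; \sum_{j=1}^{n} \bigl(p_j - \delta_{ij}\bigr)^2 ,
\qquad \delta_{ij} = 1 \text{ if } i = j, \text{ and } 0 \text{ otherwise.}
\]
% Propriety: the expected score is minimized by reporting one's true
% probabilities. A verisimilitude-sensitive rule additionally weights the
% probability assigned to a false hypothesis j by its distance d(j, i) from
% the true hypothesis i, for instance (toy illustration only, and improper):
\[
S_{\mathrm{vs}}(p, i) \;=\; \sum_{j=1}^{n} d(j, i)\, p_j ,
\qquad d(i, i) = 0 .
\]
```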

By connecting the debate on scoring rules to that on verisimilitude and truthlikeness, Igor's paper also provides a natural bridge between the set of papers on truth-tracking and the following set of papers on truthlikeness.

5 The problem of verisimilitude and truthlikeness

I disagree with Hume’s opinion […] that induction is a fact and in any case needed. […] What we do use is a method of trial and of the elimination of error; however misleadingly this method may look like induction, its logical structure, if we examine it closely, totally differs from that of induction. [… Rather] we are led to the idea of the growth of informative content, and especially of truth content.

(Popper 1974, pp. 1015, 1022)

The next three articles focus on another topic Gerhard has worked on extensively over his career: truthlikeness. The first of these three articles is Ilkka Niiniluoto’s “Truthlikeness: Old and new debates” (Niiniluoto 2018). Ilkka starts his article with a bit of personal history. In particular, he talks about where his and Gerhard’s paths crossed and gives a rough summary of their different views of truthlikeness and of the discussions of these views they had in the past. The main parts of the article provide an analysis of old and new debates about truthlikeness, with a special focus on how the competing approaches on the market handle false disjunctive theories. The main protagonists are Gerhard’s, Theo Kuipers’, Graham Oddie’s, and the author’s own works (alongside the contributions of many other authors). As Ilkka walks the reader through the different sections, he links the discussion again and again back to Gerhard’s contributions and to the two philosophers’ interactions. In the end, Ilkka’s article not only provides an excellent overview and analysis of old and new debates on truthlikeness, but also clearly shows that the question of how to define truthlikeness is far from settled and that there is a lot more to be expected in the future.

The next article in this special issue is Theo Kuipers’ “Refined nomic truth approximation by revising models and postulates” (Kuipers 2018). The article fleshes out the basic version of generalized nomic truth approximation that Theo developed in (Kuipers 2016). In particular, Theo identifies three plausible concretizations of his basic account—a quantitative version, a refined version, and a stratified version—and goes for the refined version. Based on the concept of structurelikeness, a ternary similarity relation, Theo provides several refined definitions of the core concepts of his approach and a refined success theorem that holds unconditionally. He finishes his article by zooming out, embedding his refined approach into a broader context, and discussing its connection to some general principles and possible objections that have been discussed in the literature.

Gustavo Cevolani and Roberto Festa’s contribution to this special issue is entitled “A partial consequence account of truthlikeness” (Cevolani and Festa 2018). In their paper Gustavo and Roberto propose a new account of truthlikeness for propositional theories that builds on an old intuition shared by many philosophers, among them Popper (1963) and Schurz and Weingartner (1987, 2010): that truthlikeness is intimately connected to the true and false propositions that follow from different theories or hypotheses. But contrary to Popper, who suggests that a proposition is the more truthlike the more true and the fewer false propositions it entails, they analyze the truthlikeness of h in terms of “the amount of true and false information provided by h on the basic features of the world” (Cevolani and Festa 2018, Sect. 5). In doing so, their consequence-based approach avoids several classical problems that Popper’s original approach has to face. Gustavo and Roberto finally compare their measure to other approaches on the market and, among many other interesting observations, find a close connection to Oddie’s similarity-based account.

Finally, the contribution of Elke Brendel, “Truthmaker Maximalism and the Truthmaker Paradox” (Brendel 2018), adds a logical and metaphysical perspective to the above-mentioned epistemological and philosophy-of-science discussions surrounding truth, namely truth-tracking and truthlikeness. In her contribution, Elke argues against Milne’s (2013) view that truthmaker maximalism, the position that each truth has a truthmaker, can be refuted on purely logical grounds. Milne argued that the sentence “This sentence has no truthmaker.” is true without having a truthmaker and hence refutes truthmaker maximalism—similarly to how a Gödel sentence refutes a theory’s completeness. However, as Elke’s detailed reconstruction of the assumptions in this argument shows, the truthmaker sentence plays, contrary to Milne’s claims, structurally the same role as the Liar sentence and hence gives rise to a truthmaker paradox: a self-referential application of the truthmaker predicate leads to inconsistency. This shows that sentences like the one put forward by Milne pose a problem not specifically for truthmaker maximalism, but more generally for all truthmaker accounts. For non-classical remedies, such as going paracomplete and allowing for truth-value gaps for sentences like “This sentence has no truthmaker.”, a revenge problem shows up. As Elke demonstrates, going dialetheist and assigning such sentences the truth-value true-and-false also results in triviality. Finally, the classical remedy of Tarski-style typing of the truthmaker predicate clearly avoids the paradox, but at the cost of giving up the idea of a single truthmaker predicate that is in principle applicable to all sentences (cf. Schurz 2015b).

A contribution of Gerhard (Schurz 2019b), Jack of all trades and also master of all, closes this special issue with detailed comments on and replies to all the papers.