
Reflections (1): on the practical equivalence between semantic and syntactic conceptions of theories

Having reviewed the work of the most influential authors on the syntactic and semantic views of theories (and having read a bunch of articles that I won’t review here), I guess it’s about time to give my own reflections on this debate. So, what is it all about? Are theories statements about the world, or are they families of models? Or something else entirely?

As my draft was becoming very long, I decided to cut these reflections into several parts.

  • Today, I will argue that there are no differences of grain in the way the two views can individuate theories, so that the difference must lie somewhere else: in the way theories are interpreted.
  • Tomorrow, I will examine and debunk one idea: that the semantic view is about there being various more or less abstract levels of representation in science, with models playing an intermediary role between statements and phenomena. This cannot be the claim.
  • The day after, I will examine a better idea: that it's all about representation in science being non-linguistic. This might indeed be what the semantic conception is really after, but I think it's incorrect.
  • Finally, I will argue that the semantic view's focus on models is still relevant, but that we need a pragmatic conception of theories, not a semantic one, to do it justice.
[Image: Bernard Trebacz, "Argument of the scholars"]

On the equivalence between syntactic and semantic views

Let’s start from the observation that presenting statements (as per the syntactic view) or presenting the family of structures satisfying these same statements (as per the semantic view) makes no big difference in practice. In both cases, we need a language, that is, statements, to do so; and by assumption these statements, whether they are directly presented (syntactic view) or used to present some structures (semantic view), will in effect be exactly the same. If this is so, then the difference between the syntactic and semantic views must lie somewhere else, presumably at the interpretive level.

I’m implying that the statements are exactly the same in both cases, so that theories are individuated at the same level of grain by both views. But it’s important to remember that this is precisely what defenders of both the semantic and the syntactic conception typically denied, and that most of the debates have focused on this very question. This is what I will review today. In particular:

  1. Semanticists such as Suppe (1989) claimed that the syntactic view would count different formulations of the same theory (different descriptions of the same models) as different theories. This means that according to them, the syntactic view is too fine-grained in some respects because of its focus on language.
  2. Semanticists such as Bas van Fraassen (2024/2012) typically claim that the syntactic view is limited to first-order logic, and is therefore incapable of uniquely identifying the intended models. So the syntactic view isn’t fine-grained enough in other respects.
  3. Many authors, such as Chakravartty (2001), have interpreted the semantic view as entirely doing away with language to focus on abstract structures, which makes it not fine-grained enough, since it identifies everything up to isomorphism and fails to differentiate theories that have the same structure but are about different things.
  4. Finally, Hans Halvorson (2012) has argued that the semantic view is too fine-grained in other respects, because it doesn’t have the means to formulate criteria of theoretical equivalence; think for example of the way some “surplus structure” (such as gauge) appears in the models of some formulations, but not others, of the same theory.

I think it’s safe (today at least) to reject all these claims, and to conclude that the statements that can be used in one or the other view to present the theory are exactly the same, and that the difference lies somewhere else. Let’s review all these arguments in turn.

The syntactic view is too fine-grained?

Regarding (1), Suppe (1989, pp. 3-4) says:

To say that something is a linguistic entity is to imply that changes in its linguistic features, including the formulation of its axiom system, produce a new entity. Thus on the Received View, change in the formulation of a theory is a change in theory. However, scientific theories have different individuation properties, and a given theory admits of a variety of different full and partial formulations.

The idea is that syntacticists would distinguish theories that are mere linguistic variants of one another, with no scientifically relevant difference between them (because they have the same models), as in the case of the various axiomatisations of arithmetic. However, I’m not sure that any syntacticist ever subscribed to this (as if expressing a theory in French rather than Spanish would make a difference according to them).

Of course, logical empiricists wanted to analyse theories as sentences couched in a particular language for the sake of rigour, but that doesn’t entail that they viewed any two different axiomatisations as expressing two different theories. The logical empiricists were empiricists in a very strong, semantic sense after all, subscribing to empirical criteria of meaning: they assumed in particular that meaning is ultimately rooted in a connection to experience. We can presume that they implicitly assumed that two axiomatic systems that are logically equivalent, or even merely empirically equivalent, express the same theory, even if, of course, one needs to select a particular formulation of it for analytic purposes. And empirical equivalence is typically not fine-grained at all when compared to other criteria of theoretical equivalence!
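To make these notions a bit more concrete (this is my own gloss, using the standard textbook definitions, and only one way of spelling them out): if $\mathrm{Cn}(T)$ is the set of consequences of an axiom set $T$ and $L_O$ is the observational sublanguage, then

\[
T_1 \equiv_{\mathrm{log}} T_2 \iff \mathrm{Cn}(T_1) = \mathrm{Cn}(T_2),
\qquad
T_1 \equiv_{\mathrm{emp}} T_2 \iff \mathrm{Cn}(T_1) \cap L_O = \mathrm{Cn}(T_2) \cap L_O.
\]

Logical equivalence implies empirical equivalence but not conversely, which is why the latter is the coarser of the two criteria.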

Furthermore, such criteria of equivalence can be analysed either syntactically or semantically, for instance in terms of having the same models or being compatible with the same possible worlds, which is how Carnap analysed intensions and meaning: by no means did logical empiricists refrain from carrying out semantic analyses of this kind. Nor can we blame them for being insensitive to questions of theoretical equivalence, since such questions were discussed by Reichenbach and others (actually, it is the semanticists who haven’t thought that much about theoretical equivalence, as if it wasn’t required for them at all: see point 4). So, I think it’s fair to assume that point (1) is something of a caricature and shouldn’t be taken seriously.

Now you might object that identity and equivalence are different, but I think we would be quibbling here. The question is whether different theoretical presentations amount to the same thing: that's what is at stake with criteria of equivalence; the rest is verbal dispute. Or if you wish, just formulate a syntactic view on which theories are equivalence classes of axiomatic systems, and there you go.

The syntactic view is too coarse-grained?

OK, the syntactic view isn’t necessarily too fine-grained assuming syntactic criteria of equivalence. But what about point (2): can this view make all the differences that are scientifically significant, or is it limited by first-order logic?

The problem highlighted here is that first-order logic is less expressive than set theory, type theory or ordinary mathematical language, in that a set of first-order axioms cannot always pin down its intended model up to isomorphism. For example, there are non-standard models of first-order Peano arithmetic, which contain additional “numbers” beyond the standard sequence (arranged in extra copies of the integers), and we have no means of excluding these models without quantifying over predicates or sets. This remark was an important part of the semanticist criticism of the syntactic view, which, the story goes, would pursue an unattainable ideal of logical rigour, whereas flexibility is much needed in science (see my review of van Fraassen's semantic conception).
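As an aside, here is the standard compactness argument behind this fact (my addition, the usual textbook sketch, not something the semanticists dwell on). Add a fresh constant $c$ to the language of arithmetic and consider

\[
T \;=\; \mathrm{PA} \,\cup\, \{\, c > \underline{n} \;:\; n \in \mathbb{N} \,\},
\]

where $\underline{n}$ is the numeral for $n$. Every finite subset of $T$ is satisfied in the standard model $\mathbb{N}$ (just interpret $c$ as a large enough number), so by compactness $T$ has a model $M$; in $M$, the element denoted by $c$ is greater than every standard numeral, so $M$ is not isomorphic to $\mathbb{N}$. The second-order induction axiom rules such models out, which is exactly why quantifying over predicates or sets makes a difference.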

This looks like a fair criticism, even though we might wonder whether the differences between standard and non-standard models are really significant (maybe not for an empiricist?). But in any case, Sebastian Lutz (2012) has argued, convincingly in my opinion, that this is a straw man: syntacticists never committed themselves to first-order logic. This is an invention of the semanticists. Actually, these distinctions between more or less expressive languages, and the reasons why they matter, only became clear with Gödel and others after Carnap initially formulated his views. In any case, Carnap actually relied on type theory, not first-order logic, and he later suggested expressing theories in the form of Ramsey sentences, which are second-order sentences, as illustrated below. So, there are no particular expressive limitations for the syntactic view.
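To illustrate the Ramsey-sentence construction (a standard textbook presentation, not a quotation from Carnap): write a finitely axiomatised theory as a single sentence $T(O_1,\dots,O_m;\, t_1,\dots,t_n)$, where the $O_i$ are observational terms and the $t_j$ theoretical terms. The Ramsey sentence replaces each theoretical term with a variable and quantifies over these existentially:

\[
T^{R} \;=\; \exists X_1 \dots \exists X_n \; T(O_1,\dots,O_m;\, X_1,\dots,X_n).
\]

Since the $X_j$ range over properties or relations, $T^{R}$ is a second-order sentence; it has the same observational consequences as $T$ while dropping the theoretical vocabulary itself.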

The semantic view is wrong because we need language?

In the other direction now, let’s address syntacticist criticisms of the semantic view.

Regarding point (3), the reproach comes from the idea that the semanticist would want us to characterise theories using set theory or mathematical language only, without using any empirically interpreted theoretical term, in principle at least, since theories are pure structures and not worldly entities. There are various ways of fleshing out this idea (see Halvorson’s article), but in any case, this has been the main source of criticism, with many authors arguing that theoretical language is needed for referential purposes, to endow theories with empirical content and with a proper domain of application (e.g. Frigg 2006, Psillos and Hendry 2007, Chakravartty 2001). Somewhat relatedly, Halvorson also argued in this context that theoretical terms are needed to relate the various models of a theory to one another, since otherwise a theory would be just a disparate collection of unrelated structures (but an electron in one model is also an electron in another model, and this is important): according to him, it would be more apt to view a theory as a topology of models.

Fine, but according to van Fraassen (ibid.), this criticism is also a straw man. The semantic conception never wanted to do away with theoretical language. It only objected to a wholesale linguistic regimentation of theories, and wished to restrict the role of language to some “fragments” (as with Ramsey sentences, you mean? one might ask ironically), in particular to state attribution in experimental practice (locating an actual system in a state space after a measurement).

I have some doubts here: there are clearly stated ideas of language independence in the writings of semanticists, including (not to say particularly) in van Fraassen’s. There is clearly the structuralist idea that in the end, in principle at least, mathematical comparisons (isomorphism etc.) are sufficient for establishing representation, meaning and reference. He mentions fragments of language in “Laws and Symmetry”, but the claim there is apparently that these are the parts of language that should be modelled and ultimately dispensed with in principle ("This gives us I think the required leeway for a programme in the theory of meaning. [...] First, certain expressions are assigned values in the family of models and their logical relations derive from relations among these values. Next, reference or denotation is gained indirectly because certain parts of the model may correspond to elements of reality."). I think that van Fraassen is backtracking a bit here; it's not crystal clear at any rate (see also Hendry and Psillos’s quotes in sections 3 and 5 for more evidence). But it’s true that other semanticists such as Suppe and Giere entertained less radical structuralist views.

Anyway, let’s grant that this idea of language independence was a straw-man all along as well. It might also offer a solution to point (4). Equipped with this minimal vocabulary used in state attributions, it is certainly possible to formulate criteria of theoretical equivalence and to identify the surplus structure of some formulations for example. So, the semantic view isn’t necessarily too fine-grained either.

Good. Where are we now?

Well, it seems we're now back to square one: both views agree that the same statements/propositions are used to present a theory, namely statements that

  1. are couched in an expressive enough language such as set theory or second-order logic,

  2. make use of empirically interpreted terms, such as a theoretical vocabulary, and

  3. come with implicit or explicit criteria of theoretical equivalence.

And so, there’s no reason why further analyses of theoretical reduction, confirmation or whatever couldn’t take the same form as well. As Steven French says (see previous review), all that one conception of theory can do, the other can do as well. There seems to be no difference between the syntactic and the semantic views when it comes to presenting theories.

No difference in presentation, except for an important one of course: the syntactic view characterises the theory with the statements that are presented, and the semantic view characterises it with what these statements supposedly describe. What shall we make of this?

That's what I will try to figure out in the next article, by examining (and debunking) one option: that it's all about there being two stages in scientific representation.
