
Review of An Architectonic for Science: The Structuralist Program (Balzer, Moulines and Sneed 1987)

[Image: Hertzberger, Centraal Beheer]

Summary of the programme

Along with the infamous semantic conception of theories (of which it can be considered a part), the structuralist programme aims to analyse scientific theories in set-theoretic rather than logico-linguistic terms. In other words, theories are not axiomatic systems, but set-theoretical structures.

As stated in the introduction of the book, the programme is not against the idea that theoretical knowledge is propositional in nature. Their understanding of structure is, rather, a matter of how different pieces of propositional knowledge hang together. So, we should have this idea, I guess, that theories somehow organise propositional knowledge in a structural form, and that their aim is to elucidate this structure in full generality.

Models

Like more mainstream presentations of the semantic conception (by Suppe, Giere, van Fraassen), they take theories to be families of models. But we will see that they want theories to have a bit more internal structure than this.

What is a model? A helpful aspect, too often absent from other accounts, is that they quickly differentiate the logician's view of a model as a target of representation from the scientist's view of a model as a source of representation: quite the opposite! For the record, according to the logician, a model is (just like the model of a painter) some part of the world to be represented with statements. It is a domain of objects with a structure (any groupings of these objects into sets that are mapped to linguistic terms). Tarskian model theory is a meta-semantic tool that can be used to represent the relation between statements and models. In the end, model-theoretical descriptions, expressed using set theory, also lie on the representational side, of course; yet the "real" model is supposed to be some part of the world. For scientists, on the other hand, models are used to represent the world. They are not themselves parts of the external world. Set theory is the language that is used in mathematical modelling, alongside other pieces of language, to build these representations.

The authors adopt the logician's construal. A model is supposed to be some part of the world (a domain of objects with a structure of groupings into sets), not a mental entity of any sort.

So, theories are roughly sets of models. But we can distinguish within a theory its main framework, which I find helpful to think of in terms of its analytic, a priori part, and its laws (for me, the synthetic part). Think of all descriptions of particle trajectories in classical spacetime, versus the subset of these trajectories that satisfy the laws, i.e. kinematic versus dynamic possibilities. I'm not sure that this corresponds exactly to what they have in mind (and I'm aware, having published on this, that talking of analyticity about kinematical possibilities is improper), but I guess it's close enough. The former defines what are called the potential models, and the latter the models simpliciter. They want to make this distinction formally, but I'm not really interested in technicalities, so I will not dwell on this.
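
In rough symbols (the labels $M_p$ and $M$ are the book's; the kinematic/dynamic gloss is mine):

$$ M_p = \{\, x : x \text{ satisfies the conceptual framework of the theory} \,\}, $$
$$ M = \{\, x \in M_p : x \text{ also satisfies the laws} \,\} \subseteq M_p. $$

For classical particle mechanics, $M_p$ would contain any assignment of trajectories, masses and forces to a set of particles, and $M$ the subset in which Newton's second law holds.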

Remember that models are parts of the world. Applying a theory is postulating that a given domain of objects, conceptualised as a potential model (endowed with a mapping between groupings, however arbitrary, and linguistic terms) is actually a model of the theory satisfying its laws. This is, roughly, the empirical content of the theory for a specific application.
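
In the simplest case, ignoring links and constraints, the claim made for a single application is just a membership statement (my shorthand, not the book's):

$$ x \in M_p \ \text{(the domain as conceptualised)}, \qquad \text{claim: } x \in M. $$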

Additional structure

As already said, a theoretical core is not a disparate set of models, but has more structure. It also includes a set of constraints that relate models together, as well as a set of intertheoretical links. I find it useful to think of constraints in terms of how models combine or divide into bigger and smaller models (a constraint would say that if two structures are models, then their combination is as well). They give the example of the fact that the same object must have the same mass in all models, but I think this is ultimately amenable to the idea of model combination. As for intertheoretical links, they are mappings between potential models of two different theories.
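
Formally, and hedging on the exact definitions, a constraint can be rendered as a set of admissible combinations of potential models, i.e. some $C \subseteq \mathcal{P}(M_p)$. The mass example then looks roughly like this:

$$ C^{m}_{=} = \{\, X \subseteq M_p : \forall x, y \in X,\ \forall p \in D_x \cap D_y,\ m_x(p) = m_y(p) \,\}, $$

where $D_x$ is the domain of objects of the potential model $x$ and $m_x$ its mass function. This fits the combination reading: if two structures assign different masses to the same particle, no admissible set of models contains both.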

The notion of intertheoretical link is what allows us to distinguish between theoretical and non-theoretical concepts, relative to a given theory. The main idea is that theoretical concepts, such as mass and force in classical mechanics, can only be determined (measured, etc.) by supposing an actual model of the theory, that is, by using the theoretical laws. Non-theoretical concepts can be determined by other theories without using the theory considered. So, the notion is theory-relative.

From this, we can take the potential models (all structures satisfying the analytic framework of the theory, not necessarily its laws) and remove from them all the theoretical terms. This gives us what is called a partial potential model. This is, so to speak, the most extensive domain of application for the theory that we could think of: any conceivable way of interpreting it in terms of actual objects and their groupings. But among these interpretations, only some are intended. Perhaps a set of farm animals satisfies Newtonian laws when we interpret their masses as the numbers of hairs they have and their positions as their colours, or whatever, but this is not intended. However, the way the intended models are identified is informal, by means of similarity relations (they only claim later that by lumping all our theories together, including ordinary representations, we could structurally restrict this set of intended applications using the intertheoretical links).
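
The removal of theoretical terms is a restriction function, something like the following (schematically; I write the components of a potential model with the non-theoretical terms $n_i$ first and the theoretical terms $t_j$ last):

$$ r : M_p \to M_{pp}, \qquad r(\langle D, n_1, \dots, n_k, t_1, \dots, t_j \rangle) = \langle D, n_1, \dots, n_k \rangle, $$

so that the partial potential models are $M_{pp} = r(M_p)$. In classical mechanics, $r$ would keep particles and their trajectories but cut mass and force.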

In any case, on this view, a theory is a set of models, potential models, partial models, constraints, links, and intended applications. It is, ideally, true, valid, adequate or what have you, roughly, if all intended applications (the partial models that are informally recognised as intended) can be completed into models of the theory (satisfying its laws) that also satisfy the links to other theories, and if the set of such models respects the constraints of the theory. This notion of completion is not far from the idea of the Ramseyfication of theories.
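
Putting the pieces together, a theory-element in the book has roughly this shape (I simplify their notation; $GC$ and $GL$ are the global constraint and global link):

$$ T = \langle K, I \rangle, \qquad K = \langle M_p, M, M_{pp}, GC, GL \rangle, \qquad I \subseteq M_{pp}, $$

and its empirical claim is, roughly, that there is a set of actual models, respecting links and constraints, whose theoretical cuts recover the intended applications:

$$ \exists X \subseteq M \cap GL \ \text{such that}\ X \in GC \ \text{and}\ I \subseteq r(X). $$

The existential quantification over the theoretical components is what makes the comparison with Ramseyfication apt.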

After this come diachronic and synchronic analyses of relations between theories. Theory nets are sets of theories partially ordered by a specialisation relation. Specialised theories have exactly the same potential/partial models, but equally or more restricted domains of application, laws, linked models and constraints. Theory evolutions are sequences of theory nets such that all new theory elements are specialisations of, or identical to, old ones. They claim that we can recover a broadly Kuhnian picture of the evolution of science from this. Finally, the reduction of a theory by another theory is understood in terms of the non-theoretical concepts of the fundamental theory being theoretical concepts of the reduced theory. Accounts of theory equivalence and approximation are also proposed to complete the view.
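
As they describe it, specialisation comes down to componentwise inclusion with the conceptual frames held fixed (my paraphrase of the notation):

$$ T' \sigma T \iff M_p' = M_p,\ M_{pp}' = M_{pp},\ M' \subseteq M,\ GC' \subseteq GC,\ GL' \subseteq GL,\ I' \subseteq I. $$

A theory-net is then a set of theory-elements partially ordered by $\sigma$; the stock illustration in this literature is classical mechanics, with Newton's second law in the basic element and special force laws (Hooke's law, gravitation) as specialisations.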

Comments on the notion of a model

A common point between the structuralist programme and some other versions of the semantic view is their model-theoretic understanding of models. Personally, I've always thought that the scientist's way of thinking about models, as representing rather than being represented, was more useful for philosophers of science. But this undermines in large part the semantic conception of theories if (as Suppe puts it; a review of his book is soon to come) its main idea is to concentrate on the objects described by theoretical statements instead of the theoretical statements themselves. For according to the scientific way of thinking, theoretical statements and models are not very different. They are both, in some sense, representations of the external world. And I think this is perfectly right.

Now the semantic conception theorist wants to bypass issues that have to do with linguistic interpretation: it was wrong, they say, to identify a theory with a particular expression (a set of statements), since there are different formulations of the same theory. Sure, just as "snow is white" and "la neige est blanche" are different formulations of the same proposition. This is certainly not something that the philosophy of language has ever failed to accommodate. So why divorce philosophy of language and philosophy of science on this ground? We can identify a theory with the set of propositions expressed by an axiomatic system. Insisting that this must be done using model-theoretic tools is just insisting on using the sparse language of set theory instead of the rich language of science. Why would we want to do that? And apart from this, I don't see anything else to the semanticist slogan that "theories are not linguistic entities, but structures" (in substance, this just means considering many potential word-world mappings, as if theoretical terms were mere uninterpreted placeholders, or only "internally" interpreted from within the language, which I think is a mistake).

These remarks concern some versions of the mainstream semantic view, but I think there lies a problem for the project of the structuralist programme too. They adopt the logician's way of thinking about models, which is fine if we want to analyse theory-world relations, but not if we identify theories with collections of models. They are not against a propositional conception of knowledge, as already said, and yet they somehow want to do away with language by considering theories to be described by statements, as parts of the world can be, rather than expressed by them, as propositions and thoughts can be. In my opinion, this is wrong. It makes more sense, despite what they claim, to put models on the representational side of things, the same side as propositions: not really "in the world", but about the world.

I find the idea that models are part of reality, and not of our representations of it, hard to grasp. On this view, for example, we should think that many systems in the actual world satisfy the laws of classical mechanics if we provide sufficiently gerrymandered interpretations of its vocabulary (not the intended ones, but it does not matter), and that many other systems are merely describable using the intended interpretation of its vocabulary (without necessarily satisfying its laws). The theory of classical mechanics is, according to the structuralist programme, constituted of these two sets of real systems, among other things. It is true if they coincide or overlap or something along these lines. This is a very unnatural way of thinking about theories, in my view. I think it comes from getting lost in Tarskian technicalities instead of relying on the more commonsensical view that models are mental representations of the external world (or norms concerning such representations), expressed in a particular scientific language, on a par with linguistic statements.

Comments on intended applications

A first difference with other semantic views is that the programme puts more structure into its construal, which is a good thing, since some criticisms of the semantic conception rely on the idea that, according to it, a theory could be just a disorganised collection of models, which doesn't seem right. This is certainly not the case for the structuralist programme. But among these additional structures are aspects that merely compensate for the neglect of linguistic formulations, and in this case, I think we should simply accept that a theory comes with a particular language and is not purely structural.

This is the case, in particular, of the way they handle intended applications: contrary to some other versions of the semantic conception, intended applications are a full part of the theory. The theory is not purely abstract. On the other hand, this intended domain is still defined entirely structurally, as a set of partial models (since models are supposed to be parts of the world, this is allowed). But what this means is only that they want to do away with language, or, more precisely, that they think we only need the language of set theory to talk about everything relevant to science and its applications. I think this is the heart of the problem. It is a very austere and abstract conception of knowledge, very far away from pragmatist approaches. And I would say that the same problem affects more mainstream semantic conceptions. In the end, they lump everything that the logical positivists wanted to analyse by means of linguistic postulates under the "informal" label. This is not very helpful. They throw out the baby of philosophy of language with the bathwater of semantic reductionism, so to speak.

My main point here is that we should accept that there is a meaningful scientific vocabulary, however its meaning is analysed; that theories are defined in this vocabulary; and that it is this vocabulary that fixes the intended domain of application. So, a theory is not entirely describable by means of set theory. And if we accept this, then we are back to the old view of theories as axiomatic systems; or at any rate, presenting them in this way, or as the set of set-theoretical structures that satisfy the axiomatic system, is not very different. As for models, they are just specific theories about a kind of object.
