When you define satisfaction for quantified formulas, e.g., \(\forall x\, A\), you have to have a way to make \(x\) range over all elements of the domain. Here are the common options:

A. Tarski-style: use variable assignments \(s\colon V \to D\) (where \(V\) is the set of variables and \(D\) the domain), then define \[\mathfrak{M}, s \models \forall x \, A \iff \mathfrak{M}, s' \models A \text{ for all $x$-variants $s'$ of $s$}.\] This requires a definition of “\(x\)-variant” but is otherwise very clean. Its drawback is that it obscures how we let \(x\) range over all elements of \(D\), and my students have a hard time understanding it and an even harder time working with it.
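(For reference, the definition the clause relies on is the standard one: \(s'\) is an \(x\)-variant of \(s\) iff \(s'(y) = s(y)\) for every variable \(y\) other than \(x\); the value \(s'(x)\) is unconstrained.)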

B. Alternative Tarski-style: we use variable assignments as above, but avoid talking about \(x\)-variants. Instead, we define the notation \(s[m/x]\), the variable assignment just like \(s\) except it assigns \(m \in D\) to \(x\). Then we have \[\mathfrak{M}, s \models \forall x \, A \iff \mathfrak{M}, s[m/x] \models A \text{ for all } m \in D.\]
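To make clause B concrete, here is a minimal Python sketch of the \(s[m/x]\) definition over a finite domain; the formula representation (nested tuples) and all the names are mine for illustration, not anything from the OLP.

```python
# Sketch of clause B over a finite domain. Formulas are nested tuples:
# ("atom", P, v1, ..., vk), ("and", A, B), ("forall", x, A).

def update(s, x, m):
    """s[m/x]: the assignment just like s except that it maps x to m."""
    s2 = dict(s)
    s2[x] = m
    return s2

def satisfies(D, interp, s, A):
    """M, s |= A, where interp maps predicate symbols to Python predicates."""
    op = A[0]
    if op == "atom":
        pred = interp[A[1]]
        return pred(*(s[v] for v in A[2:]))
    if op == "and":
        return satisfies(D, interp, s, A[1]) and satisfies(D, interp, s, A[2])
    if op == "forall":
        x, body = A[1], A[2]
        # The clause: M, s |= forall x A  iff  M, s[m/x] |= A for all m in D.
        return all(satisfies(D, interp, update(s, x, m), body) for m in D)
    raise ValueError(f"unknown connective {op!r}")
```

For instance, over \(D = \{0, 1, 2\}\) with “eq” interpreted as equality, \(\forall x\, x = x\) comes out satisfied by the empty assignment, while \(\forall x \forall y\, x = y\) does not.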

C. Model theory style: instead of introducing variable assignments that provide the interpretation for variables, we define directly when a formula is satisfied by a sequence of objects: if the variables of \(A\) are among \(y_1, \dots, y_k\) then \(\mathfrak{M} \models A[n_1,\dots, n_k]\) means what \(\mathfrak{M}, s \models A\) means Tarski-style for the assignment \(s\) that maps each \(y_i\) to \(n_i\). Then the clause for the universal quantifier becomes \[\mathfrak{M} \models \forall x \, A[n_1, \dots, n_k] \iff \mathfrak{M} \models A[m, n_1, \dots, n_k] \text{ for all } m \in D.\] This is simple in that it avoids an intermediary function, but can easily be confusing for beginning students because it is neither clean nor precise. We have to understand that \(A\) is a formula with the free variables \(x, y_1, \dots, y_k\). But what determines the order? Or, put another way, which object interprets which variable?

D. In logic textbooks for philosophers you often see semantics developed for sentences only (i.e., formulas with free variables are avoided). Given a structure \(\mathfrak{M}\) we can define \(\mathfrak{M}[m/a]\) as the structure that’s just like \(\mathfrak{M}\) except the constant \(a\) is interpreted by \(m\in D\). Then we can define truth (not satisfaction) using \[\mathfrak{M} \models \forall x \, A \iff \mathfrak{M}[m/a] \models A[a/x] \text{ for all } m \in D,\] where \(A[a/x]\) is the substitution operation and \(a\) is a constant not already occurring in \(A\).
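By way of a concrete (made-up) example of how D works: over \(D = \{1, 2\}\), the clause unwinds \(\mathfrak{M} \models \forall x \, P(x)\) into \(\mathfrak{M}[1/a] \models P(a)\) and \(\mathfrak{M}[2/a] \models P(a)\), with \(a\) fresh; each conjunct is again a claim about the truth of a sentence, now in a modified structure, so satisfaction of open formulas never enters the picture.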

E. Finally, there’s Robinson-style: we treat every \(m\in D\) as a constant symbol that names itself. Then substituting \(m\) for \(x\) in \(A\) is possible, since \(m\) belongs to both the domain of the structure and to the language, and we can write \[\mathfrak{M} \models \forall x \, A \iff \mathfrak{M} \models A[m/x] \text{ for all } m \in D.\] Naturally, this is not something philosophers like to do because it just seems confused to allow domain elements to function as linguistic symbols naming themselves.
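Here is the same kind of toy Python sketch for the Robinson-style clause: since domain elements name themselves, the quantifier clause works by literal substitution. Again, the representation and all names are mine for illustration.

```python
# Sketch of clause E: domain elements occur in formulas as their own names,
# so A[m/x] is ordinary substitution. Sentences only; formulas are tuples
# ("atom", P, t1, ..., tk) and ("forall", x, A), with variables as strings.

def subst(A, x, m):
    """A[m/x]: replace free occurrences of the variable x by the object m."""
    op = A[0]
    if op == "atom":
        return (op, A[1]) + tuple(m if t == x else t for t in A[2:])
    if op == "forall":
        if A[1] == x:           # x is bound here, so leave the body alone
            return A
        return (op, A[1], subst(A[2], x, m))
    raise ValueError(f"unknown connective {op!r}")

def true_in(D, interp, A):
    """M |= A for sentences: by the time we reach an atom, every term is an object."""
    op = A[0]
    if op == "atom":
        pred = interp[A[1]]
        return pred(*A[2:])
    if op == "forall":
        x, body = A[1], A[2]
        # The clause: M |= forall x A  iff  M |= A[m/x] for all m in D.
        return all(true_in(D, interp, subst(body, x, m)) for m in D)
    raise ValueError(f"unknown connective {op!r}")
```

Note that no assignment function appears anywhere: truth is defined for sentences only, exactly as the clause above requires.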

Maybe I’ll find the time to write a paper tracing the origins of all of these at some point. But for now, I wonder: which way is best, pedagogically? Specifically, the Open Logic Project uses Tarski-style, but I’d like to replace it with a definition that is easier to understand and avoids the use of \(x\)-variants. Which would you prefer for the OLP? Which do you prefer in your own teaching or research and why?

As preparatory work, before using the Robinson-style approach, the underlying first-order signature must in principle be extended to include at least one constant symbol for each object in the domain. That is not unheard of in model theory, but it is also not entirely without practical consequences: just consider a domain whose cardinality is larger than that of the original, non-extended language. Maybe this is also an issue that philosophers (those reading van Dalen’s book, for instance) would prefer to be careful about?

It seems that there are only two broad choices: (i) you can appeal to \(x\)-variant interpretations/models (A, B, C, D, among others), or (ii) you can assume everything has a name and appeal to all substitutions for \(x\) (E, and other substitutional approaches). It seems to me that presentations such as A or B are the most natural. It is not too difficult to get the idea that a formula “x+2=5” is true relative to some objects but false relative to others. From there one can see that “∀x(x+2=5)” says that “x+2=5” is true relative to every object. To go beyond the monadic case we notice that likewise “x+y=5” is true relative to some /pairs/ of objects and false relative to others. So we get the general idea of formulae being true or false (or satisfied/unsatisfied) only relative to sequences of objects. So really “∀x(x+2=5)” says that “x+2=5” is true relative to every sequence that differs at most in the first position (or in the x-position, which I’m assuming is associated with the first position; really we have x1, x2, x3, …). Of course, you know all this. I’m just sketching how I work up to the full Tarski definition in A. And to me it is actually the most intuitive and natural. Otherwise, probably a simple substitutional presentation (not the Robinson version) is easiest to understand. But I’d prefer to avoid that.

I too favor the first approach (“West coast semantics”). Do you then officially use A, B, or C? Since you talk about sequences, maybe C? But I don’t know how to do C properly, i.e., correctly but also in such a way that the notation doesn’t become overwhelming or require lots of explanation. For instance, with sequences you can’t correctly say that x+y=z is satisfied by (2,3,5), since that is only true if x is really x1 and y is x2 and z is x3. What happens if z is x4? Is x1 + x2 = x4 still satisfied by (2,3,5), or do you need a slot for the unused x3? If the latter, it becomes confusing: x1 + x2 = x4 has 3 variables but is satisfied by no 3-element sequence. If the former, then you no longer get that whenever A and B are both satisfied by the same sequence, their conjunction is. Etc. All problems you get to avoid nicely if you talk about assignment functions. (If you just use “sequence” for “variable assignment function,” ignore what I said, or take it as an elaboration of what I find difficult to do with C.)

Right. There are some tricky choices, and the general technical implementation always seems to get more complicated than seems necessary for the basic idea. I kind of toggle back and forth between talking about assignments and talking about sequences. When doing so I’m assuming that the variables are ordered (or “enumerated”). Note: Tarski originally relativized to a sequence of objects under the stipulation that the variables were ordered. Here there are two things going on: a function from the variables to numbers and a function from numbers to objects. Alternatively, one could just cut out the middle man and talk about the assignment function. But I don’t really see these as importantly different approaches. Sorry, I haven’t really answered your question about pedagogy.