My former colleague Charlie Martin died last Thursday. He was a major figure in metaphysics, one of the first to talk about truthmakers.
The venerable Swedish philosophy journal Theoria has been published by Blackwell since this past March, which means it is now online, including the back issues. I’m not sure of the exact dates, but in the 70s, when Krister Segerberg was the editor-in-chief, Theoria was the place to publish modal logic and formal philosophy. Just go and browse the back issues; there’s lots of interesting stuff there!
(The latest issue has a new paper by Kripke.)
One very common informal statement of Gödel’s theorem is that it shows that for any (sufficiently strong consistent blah blah) formal system, there are truths that it can’t prove. And if you don’t formulate Gödel’s incompleteness theorem that way, at least you state this as a corollary: Gödel’s theorem shows that truth and provability (in any one formal system) come apart. But if you read Gödel’s original paper(s) on incompleteness, you are probably struck by the fact that Gödel doesn’t say this. In fact, the word “wahr” doesn’t even appear in Gödel’s 1931 paper. I think Gödel doesn’t note this consequence until the 1934 Princeton lectures (p. 363 of Collected Works I). A number of reasons have been offered for this: a) he was reacting against Hilbert, so he wanted to keep everything purely finitary, b) he was working in the context of the Vienna Circle, and they considered “truth” to be a suspect, metaphysical notion. This is an aspect that people interested in Gödel’s platonism have written extensively about, since Gödel claimed that even at the time of writing the 1931 paper he was a platonist.
So who was the first to note this consequence in print? My money is on the guy who made truth respectable among the logical empiricists. Here’s the end of Tarski’s “Einige Betrachtungen über die Begriffe der ω-Widerspruchsfreiheit und der ω-Vollständigkeit”, Monatshefte für Mathematik und Physik 40 (1933) 97-112 (the paper was received in July 1932), translated as “Some observations on the concepts of ω-completeness and ω-consistency”, Logic, Semantics, Metamathematics, 2nd ed., pp. 279-295:
[T]he following is worthy of emphasis: the profound analysis of Gödel’s investigations shows that whenever we have undertaken a sharpening of the rules of inference, the facts, for the sake of which this sharpening was felt to be necessary, still persist, although in a more complicated logical structure. The formalized concept of consequence will, in extension, never coincide with the ordinary one, the consistency of that system will not prevent the possibility of ‘structural falsehood’. However liberally we interpret the concept of the deductive method, its essential feature has always been (at least hitherto) that in the construction of the system and in particular in the formulation of its rules of inference, use is made exclusively of general logical and structurally descriptive concepts. If now we wish to regard as the ideal of deductive science the construction of a system in which all the true statements (of the given language) and only such are contained, then this ideal unfortunately cannot be combined with the above view of the deductive method.
Now here’s something puzzling. The translation has a footnote to this passage which is not in the original:
The validity of the remarks stated in the last paragraph essentially depends on the decision not to use in metamathematical discussion any devices which cannot be formalized within the framework of the theory of types of Principia Mathematica. But as soon as we abandon this decision and allow ourselves to use stronger devices in metamathematical discussion, most of the remarks as originally formulated lose their validity and the whole paragraph requires a thorough revision.
What in the passage quoted depends for its validity on metamathematics being formalizable in PM? Maybe that if the metatheory is stronger than PM, you can define a “formalized notion of consequence” which characterizes the true statements of PM, e.g., using an ω rule. But Tarski just did that immediately before the quoted passage and says that the resulting formalized notion of consequence is not “in harmony with the current view of the deductive method”. So what can he have in mind here?
Ryan North, the author of Dinosaur Comics, has used philosophical/logical themes in his comic as well, e.g., the Twin Earth comic, or the Post Hoc Ergo Propter Hoc comic, or the one on pragmatism. This one is pretty funny, too.
Of course, you’ve all seen the xkcd comic on Gödel and Russell’s Paradox.
Update on my old post on modal logic textbooks: Two new modal logic books I have recently come across:
- Nino B. Cocchiarella and Max A. Freund, Modal Logic: An Introduction to its Syntax and Semantics (Oxford, 2008)
- Walter Carnielli and Claudio Pizzi, Modalities and Multimodalities (Springer, 2008)
Anyone already read these and have an opinion? The Carnielli/Pizzi text looks especially interesting–and the full text is available online! (Not free, of course. It’s Springer, after all.)
I also noticed that a few classic books/collections of classical papers on modality are now online at Oxford Scholarship Online, including:
Since I’m hanging out with a bunch of category theorists every Wednesday, web finds with “category theory” in them keep attracting my attention. A couple of weeks ago, I came across this book draft posted on arXiv:
At first I thought, cool!, a new book on logic of diagrams or something. But then I read the abstract and intro, and it’s of course not about that at all. A sketch is a certain kind of graph that lets you define mathematical structures categorically. They were introduced by Charles Ehresmann. I’d link to an introduction on the web, but the only introductory thing I could find is a short paper by Wells. (The MacTutor biography of Ehresmann doesn’t even mention sketches…) So: still cool, but a lot harder for me to understand.
The other item was
which ‘summarizes and compares a number of set-theoretic foundations for category theory, and describes their implications for the everyday use of category theory.’ More elementary, and I look forward to reading it. Alas, I don’t know when I’ll find the time.
Are you sitting in front of the computer, hitting the “reload” button every two seconds to see if the October Jobs for Philosophers is posted on the APA website yet? Why not check out the job that we have right here: Assistant Professor with AOS in Logic?
UPDATE: Hm, looks like there aren’t many logic jobs this year. Irvine has a TT logic job (223), and UMD has a linguistics/logic job (99) and a formal epistemology/logic/decision theory job (100). That’s it. BU (14), Yale (15), and Stanford (232) list logic as one among several areas in the AOS.
Today I taught Ch. 13 of Peter Smith’s book. We showed that every Σ1 function can be written as a composition of two Δ0 functions (p. 108). In his proof of this, Peter is following Boolos, Burgess & Jeffrey (Lemma 16.12 on p. 206 of the 4th and 5th eds.; it’s not in the 3rd, so I’m guessing it’s due to John Burgess). Until today I really didn’t understand why (as opposed to that) this works, or how you’d come up with it. Peter calls it “clever trickery.” But standing up there at the blackboard I realized that there’s a way to think of it which makes it completely straightforward—and which results (maybe?) in a simpler solution.
Suppose f is Σ1, i.e., there is a formula ∃z F(x, y, z) so that f(x) = y iff ∃z F(x, y, z) (in N). How close to Δ0 can you get? [NB: I’m using F instead of Peter’s R, and different variables, but I think it’s clearer this way.]
Note first of all that if f is total (as all functions are in Peter’s book), then for any x, there will be exactly one y so that ∃z F(x, y, z).
The ∃z is the problem: you want to put a bound on it. How far do you have to go to find a z so that F(x, y, z), when y is the value of f(x), i.e., when there is a z so that F(x, y, z) at all? Trivially, as far as
the least u so that ∃z≤u F(x, y, z).
But that function still depends on y; obviously you can’t really bound z by it. You’d like the bound to be a function of x only. Well, we can put in a quantifier to get rid of the free y. Naturally you want a bounded quantifier and the obvious bound is u:
g(x) = the least u so that ∃y≤u∃z≤u F(x, y, z)
If f is total, then for any x, there is such a u, so g is total.
g(x) might be further than you have to go to find a witness for the ∃z, but that’s ok. And it’s a Δ0 function since
g(x) = u iff ∃y≤u∃z≤u F(x, y, z) ∧ ∀v≤u (∃y≤v∃z≤v F(x, y, z) → u = v)
So f(x) = y iff ∃z≤g(x) F(x, y, z). Let’s see if the formula we get by removing the g(x) defines a function:
h(x, u) = y iff ∃z≤u F(x, y, z)
For any pair x, u, there’s at most one y with ∃z≤u F(x, y, z), since for any x there’s at most one y with ∃z F(x, y, z). Hence, h is functional. It might not be total, since for some pairs x, u (in particular u less than g(x)) there might not be a z≤u with F(x, y, z). So let’s fix that:
h(x, u) = y iff (y ≤ u ∧ ∃z≤u F(x, y, z)) ∨ (y = 0 ∧ ¬∃y′≤u ∃z≤u F(x, y′, z))

(The bound y ≤ u in the first disjunct matters: without it, the “else y = 0” clause could hold at the same time as the first disjunct, e.g. when f(x) ≠ 0 but F(x, 0, z) fails for all z ≤ u, and then h wouldn’t be well-defined.)

Now we have that f(x) = h(x, g(x)), since the only cases where h(x, u) might differ from f(x) are those where h(x, u) = 0 because u is less than g(x).

My definition of h is still a bit simpler than John and Peter’s: they have “the least y ≤ u” where I just have “the y ≤ u”; since at most one y satisfies ∃z F(x, y, z), the “least” does no work. Did I overlook some subtlety?
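For what it’s worth, the construction is easy to check concretely. Here’s a minimal Python sketch; the particular f and witnessing relation F are my own toy choices for illustration, not from Peter’s or John’s books:

```python
# A toy instance of the construction: f(x) = x^2, with witnessing
# relation F(x, y, z) := (y = x^2 and z = x). Both are assumptions
# made for illustration; any Sigma_1 graph with a unique y would do.

def F(x, y, z):
    return y == x * x and z == x

def f(x):
    return x * x

def g(x):
    """Least u such that F(x, y, z) holds for some y <= u and z <= u."""
    u = 0
    while not any(F(x, y, z)
                  for y in range(u + 1) for z in range(u + 1)):
        u += 1
    return u

def h(x, u):
    """The y <= u with F(x, y, z) for some z <= u, and 0 if there is
    none. Bounding y by u keeps the search finite; since at most one
    y satisfies the unbounded ∃z F(x, y, z), no 'least' is needed."""
    for y in range(u + 1):
        if any(F(x, y, z) for z in range(u + 1)):
            return y
    return 0

# The composition of the two bounded-search functions recovers f:
assert all(f(x) == h(x, g(x)) for x in range(10))
```

Note that h only searches for y up to u; for u = g(x) that bound is harmless, since g(x) was chosen to bound the y as well as the z.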