Bill was born on January 22, 1929, in Freeport, NY, and received a BA from Lehigh University in 1952 where he was taught by Adolph Grünbaum. He undertook graduate studies in philosophy (1952–54) and mathematics (1955–58) at Yale University, and spent the 1954/55 academic year at the University of Amsterdam as a Fulbright Scholar. After receiving his PhD from Yale (under the supervision of Frederic Fitch with a thesis on “The theory of partial recursive operators”) in 1958, Bill held appointments at Stanford University (1958–64) and the University of Illinois at Chicago Circle (1965–71) before joining the University of Chicago Department of Philosophy as Professor in 1972. He was a member of the Institute for Advanced Study in 1961/62, a Guggenheim Fellow 1968/69, and a Visiting Professor of Mathematics at the University of Aarhus in 1971/72. He retired from teaching in 1996 but remained active scientifically. He was elected a Fellow of the American Academy of Arts and Sciences in 2002 and delivered the Tarski Lectures in 2016.

Bill’s work up until 1980 concentrated on proof theory, specifically cut-elimination, functional interpretations, the ε-substitution method, type theory and lambda calculus. In Tait (1966), he proved the consistency of second order logic, settling the Takeuti conjecture positively. (Prawitz and Takahashi each independently settled Takeuti’s conjecture for the full simple theory of types a year or two later.) His two most well-known contributions are perhaps the development of the Schütte-Tait method of proving cut elimination (Tait 1968) and the method of proving normalization for lambda calculus using Tait computability predicates (Tait 1967). He is also sometimes credited with the observation that the Curry–Howard correspondence between formulas and types and proofs and lambda terms extends to normalization of natural deduction and reduction of the corresponding lambda term.

After 1981, Bill published extensively in the philosophy of mathematics and its history. His philosophical essays are collected in Tait (2005), and more recent essays can be found on his website. His most influential contribution in the philosophy of mathematics is what’s come to be known as “Tait’s thesis”: the identification of Hilbert’s “finitary standpoint” with what’s primitive recursively computable and provable in primitive recursive arithmetic PRA (Tait 1981, 2002, 2019).

You may be interested in reading about Bill’s way into logic and philosophy in his own words.

Tait’s thesis was a target of my dissertation, and Bill did me the honor of engaging deeply with my criticisms. I’m persuaded by his arguments in favor of Tait’s thesis—according to Ken Taylor, who I believe was a student of Bill, this makes me a “Taiter Tot”. He was very supportive of my early career (I owe invitations to present my work at the University of Chicago as well as my first invited talk at an ASL conference to him). We’ve both been involved in a project to edit and translate Paul Bernays’s philosophical writings, and corresponded regularly over the last couple of decades. He visited Calgary in 2004 (I got to take him hiking in Banff National Park), and he attended a workshop I organized at BIRS in 2007. He was a wonderful and inspiring person and I will always remember him fondly.

Tait, W. W. (1966). ‘A nonconstructive proof of Gentzen’s Hauptsatz for second order predicate logic’, *Bulletin of the American Mathematical Society* 72/6: 980–3. DOI: 10.1090/S0002-9904-1966-11611-7

Tait, W. W. (1967). ‘Intensional interpretations of functionals of finite type I’, *The Journal of Symbolic Logic* 32/2: 198–212. DOI: 10.2307/2271658

Tait, W. W. (1968). ‘Normal derivability in classical logic’. Barwise J. (ed.) *The Syntax and Semantics of Infinitary Languages*, Lecture Notes in Mathematics, pp. 204–36. Springer: Berlin, Heidelberg. DOI: 10.1007/BFb0079691

Tait, W. W. (1981). ‘Finitism’, *The Journal of Philosophy* 78/9: 524–46. DOI: 10.2307/2026089

Tait, W. W. (2002). ‘Remarks on finitism’. Sieg W., Sommer R., and Talcott C. (eds) *Reflections on the Foundations of Mathematics. Essays in Honor of Solomon Feferman*, Lecture Notes in Logic 15, pp. 410–9. Association for Symbolic Logic and A K Peters.

Tait, W. W. (2005). *The Provenance of Pure Reason: Essays in the Philosophy of Mathematics and Its History*. New York: Oxford University Press.

Tait, W. W. (2019). ‘What Hilbert and Bernays meant by “finitism”’. Mras G. M., Weingartner P., and Ritter B. (eds) *Philosophy of Logic and Mathematics. Proceedings of the 41st International Ludwig Wittgenstein Symposium*, pp. 249–62. De Gruyter. DOI: 10.1515/9783110657883-015

(Photo credit: Jean-Yves Girard, courtesy of Bill Tait)

First, LaTeX to HTML conversion has long been tricky. No solution is perfect. There are basically three workable approaches:

- Assume that your LaTeX code is basically just LaTeX-flavored markup, and use a converter that reads such “vanilla” LaTeX, like pandoc. If your input is simple (or can easily be made simple), that works very well. I use it a lot, e.g., to produce my CV. Pandoc is amazing, but it is intended to be an all-purpose converter between dozens of markup formats, so complex LaTeX projects are beyond its scope. If you are starting from scratch and if you don’t need to rely on special LaTeX packages, then Pandoc would be my tool of choice. (See e.g. Jonathan Weisberg’s *Odds & Ends* for a wonderful example–the source is even in Markdown with just the formulas in plain TeX code.)
- Use a package that compiles your LaTeX using LaTeX itself but provides added info in the resulting file, then turn that file into HTML. That’s the approach that TeX4HT and lwarp use. It has the advantage that more LaTeX commands and packages are supported, but (afaict) neither produces good mathematical formulas in the final result: either you get images, or you get the source LaTeX code and rely on MathJax to render it. (Caveat: I have not actually tried these!)
- The solution I used is LaTeXML. It is basically a reimplementation of the LaTeX kernel, but it outputs XML instead of DVI. Because it simulates what LaTeX is actually doing with your code, it can (to a large extent) deal with packages and LaTeX programming directly. It natively supports a large number of popular packages and classes, but packages it does not support can be loaded and “compiled” using the `--includestyles` flag. And because it directly outputs XML, it can compile formulas directly to MathML. (LaTeXML is what ar5iv uses: a project to compile everything on the arXiv to HTML.)
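For concreteness, converting a project along these lines might look like this (the file names are placeholders; `latexmlpost` is the companion tool that turns LaTeXML’s XML into HTML):

```sh
# Compile the LaTeX source to LaTeXML XML; --includestyles makes
# LaTeXML read and "compile" .sty files it has no native binding for:
latexml --includestyles --destination=book.xml book.tex

# Turn the XML into HTML5, with formulas as MathML:
latexmlpost --format=html5 --destination=book.html book.xml
```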

I ran LaTeXML on the *forall x* source code and it **almost** worked. I only had to comment out a few lines: it wasn’t quite happy with some of the layout commands of the `memoir` class, and there were a `sidewaystable` and a rotated `\iota` that made it stumble. But after maybe an hour of trial and error it produced something without a major error, and the result was passable. I was especially impressed by the fact that natural deduction proofs produced using Peter Selinger’s `fitch` package (which is 25 years old and not even on CTAN) were turned into functioning MathML code and came out looking **almost** the way they were supposed to.

The standard output of LaTeXML doesn’t have any fancy styling. Without further work it looks like a webpage from the early 1990s WWW. But fortunately, someone else has built a wonderful wrapper around LaTeXML that applies a modified GitBook style to the result: BookML.

Ok, now I was so close that I didn’t really want to stop at “almost good”. There were a few things that I still wanted to fix (and did):

P.D.’s original LaTeX code for *forall x* included a few custom environments to typeset lists, arguments, and symbolization keys, and some of these did multiple duty in the text (e.g., both numbered lists and unnumbered arguments were coded as the `earg` environment). I decided to redo these definitions using the `enumitem` package, which is supported by LaTeXML. That required a bit of search-and-replace of `\begin{earg}` and `\end{earg}` throughout the entire source code. The original `example` environments needed some extra care: they are numbered lists where we refer to the numbers in the text, and the numbering is consecutive within a chapter. The labels were generated using the original `forallx.sty` command like so: `\item[\ex{label}]`. Those didn’t quite work: the numbers were incremented by 2. First I wanted to just use `enumitem` to make a new `enumerate` list with the option `resume`, but LaTeXML’s implementation of the `enumitem` package is a bit buggy (`newlist` doesn’t quite work right). So I decided to just replace all `example` environments with `enumerate`, and use two new environments `compactlist` and `numberedlist` for enumerated lists that **should** restart the numbering (i.e., are not `example`s). Labelled items now use the more standard `\item\label{label}`.
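A sketch of what such definitions could look like (the specific list options and the commented usage are my assumptions for illustration, not the actual code from the book):

```latex
\usepackage{enumitem}

% Environments for lists that restart their numbering each time.
% Defined by wrapping enumerate rather than via enumitem's \newlist,
% since LaTeXML's \newlist support is buggy:
\newenvironment{compactlist}
  {\begin{enumerate}[noitemsep,topsep=0pt]}
  {\end{enumerate}}
\newenvironment{numberedlist}
  {\begin{enumerate}[label=\arabic*.]}
  {\end{enumerate}}

% Numbered examples, by contrast, use plain enumerate with the resume
% option, so numbering continues within a chapter:
% \begin{enumerate}[resume]
%   \item\label{ex:disj} Either it is raining or it is snowing.
% \end{enumerate}
```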

References (with hyperlinks) were working already, but it’s nice to make links to, say, sections have all of “section V.2” be an active link, not just the “V.2”: easier to click/tap, and clearer for screen readers where the link goes. So I replaced all the `\ref`s with `\cref`s and used the `cleveref` package, which is also supported by LaTeXML.
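In the source, the change amounts to this (the label name is hypothetical):

```latex
\usepackage{cleveref}

% Before: section \ref{sec:quantifiers} -- only "V.2" is a hyperlink.
% After: all of "section V.2" is one hyperlink:
See \cref{sec:quantifiers} for the semantics of the quantifiers.
```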

I wanted the proofs to display better and, most importantly, to work for blind users relying on screen readers. (The specific impetus for trying this was a request from the accessibility office of the University of Cincinnati to provide an HTML version for a blind student.) The solution I implemented is described in the accessibility notes. Technically, it required some work, though: the `fitch` package actually produces proofs as a LaTeX `array` environment, which LaTeXML turned into a MathML array (as it should). But MathML arrays are hard to navigate: it would be better to produce an HTML table instead, where the MathML (i.e., the formulas) just appears in the table cells, so you can navigate from line to line (and, e.g., let your screen reader read out line numbers, formulas, and justifications to you).

So I redefined the relevant bits of `fitch` to use not LaTeX’s `\begin{array}`, `&`, and `\\` to indicate array starts, cells, and line breaks, but to directly produce HTML `<table>`, `<tr>`, and `<td>` tags. A bit of CSS then provides the styling of the result (i.e., the scope lines and bars under assumptions). It wasn’t too much more work to also produce the invisible extra info for screen readers (count scope lines, insert markers for “begin subproof” and “end subproof”). These are done by adding some cells (the subproof level counters) and some empty `<div>`s with the `aria-label` attribute: `aria-label="some text"` tells a screen reader to say “some text” when it comes across that element on the page, but “some text” doesn’t appear anywhere visually. Warning: the ability to insert raw HTML into the LaTeX source code is actually a feature of BookML, so it doesn’t work if you’re using just LaTeXML. [Update: since the subproof level text is visually hidden anyway, I might just include the subproof start/end announcements directly rather than as `aria-label`s.]
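A sketch of the approach, with `\RawHTML` as a hypothetical stand-in for BookML’s raw-HTML feature (in HTML output, its argument would pass through verbatim):

```latex
% Hypothetical raw-HTML pass-through (the real mechanism is BookML's):
% \RawHTML{...} emits its argument unchanged into the HTML output.

% A proof becomes an HTML table rather than a MathML array:
\newenvironment{fitchtable}
  {\RawHTML{<table class="fitch">}}
  {\RawHTML{</table>}}

% One proof line: line number, formula (still MathML), justification:
\newcommand{\fitchline}[3]{%
  \RawHTML{<tr><td>}#1\RawHTML{</td><td>}$#2$\RawHTML{</td><td>}#3%
  \RawHTML{</td></tr>}}

% Invisible screen-reader announcement for the start of a subproof:
\newcommand{\subproofstart}{%
  \RawHTML{<div aria-label="begin subproof"></div>}}
```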

The insert-raw-HTML-with-`aria-label` strategy also helped with another challenge: the “iff” that’s pronounced by screen readers the same way as “if”. I defined a command that produces `<span aria-label="if-f">iff</span>`: it looks just like “iff” on the page, but a screen reader should pronounce it “if-eff”. [Update: it might be easier/better to just put in `if⁠f`, which renders as “iff” with an invisible non-breaking space between the two ‘f’s but should also be pronounced “if-eff”. But you could still use the `aria-label` solution if you wanted it to be read out always as “if and only if”. Update^2: I’ll probably remove this in favor of instructions on how to customize the screen reader; see this post on overriding screen reader pronunciation.]

ARIA labels are also useful for figures generated by `tikzpicture`. LaTeXML turns those into inline SVGs, which don’t have a standard way of providing a text description. (`<img>` tags take an `alt` attribute, which LaTeXML generates using the `alt={Some text}` option of `\includegraphics` of core LaTeX, but nothing like this exists for `tikzpicture`.) So I defined an `arialabel` environment that surrounds a `tikzpicture` with a `<div>` tag that does take an `aria-label` attribute. [Update: this is untested in any screen reader except ChromeVox, and not supported according to the spec: `aria-label` is only allowed on interactive elements.]
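The wrapper amounts to something like the following, with `\RawHTML` as a hypothetical stand-in for BookML’s raw-HTML feature:

```latex
% Wrap a tikzpicture in a <div> carrying a text description:
\newenvironment{arialabel}[1]
  {\RawHTML{<div aria-label="#1">}}
  {\RawHTML{</div>}}

% Usage:
% \begin{arialabel}{A parse tree for the sentence (A and B) or C}
%   \begin{tikzpicture} ... \end{tikzpicture}
% \end{arialabel}
```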

What turned out to be a fair bit of work was actually debugging the CSS: there were a few formatting issues on the HTML side where I couldn’t tell if they were bugs in my code, bugs in LaTeXML, or bugs in BookML. Often it turned out that some bit of CSS provided by BookML conflicted with some other bit, so I had to provide some extra CSS to override some things (e.g., BookML made `paragraph` headings too large).

The result is good (if I may say so myself) but not perfect. In particular, the MathML output by LaTeXML isn’t quite optimal: many of the symbols we logicians use either aren’t of the same type as when they’re used in regular math (e.g., for us an arrow is an operator, while for mathematicians it’s a relation), or we would like them pronounced differently (e.g., no one says “left tack” for the single turnstile ⊢ of provability). This requires another layer: LaTeXML outputs MathML, but then we use MathJax to display that MathML and produce output that a screen reader can read out. So it’s not clear where to fix things. But that’s a project for future work.

This book offers an accessible introduction to proof theory: it gives the details of the proofs and includes numerous examples and exercises to aid the reader’s understanding. It is also designed to serve as an aid to reading Gerhard Gentzen’s foundational papers.

The book also introduces the three main formalisms in use: the axiomatic approach to proofs, natural deduction, and the sequent calculus. It gives a clear and detailed presentation of the fundamental results of the field: the translation of classical arithmetic into intuitionistic arithmetic, cut elimination, and the normalization theorem, and then leads the reader step by step to an exposition of Gentzen’s celebrated consistency proof for first-order Peano arithmetic. It thus fills an important editorial gap by presenting both structural and ordinal proof theory.

Translated by Yacine Aggoune, David Appadourai, and Agathe Rolland, revised by David Waszek.

Buy it: Librairie Vrin | fnac | Amazon

Angell’s logic of analytic containment AC has been shown to be characterized by a 9-valued matrix NC by Ferguson, and by a 16-valued matrix by Fine. It is shown that the former is the image of a surjective homomorphism from the latter, i.e., an epimorphic image. Some candidate 7-valued matrices are ruled out as characteristic of AC. Whether matrices with fewer than 9 values exist remains an open question. The results were obtained with the help of the MUltlog system for investigating finite-valued logics; the results serve as an example of the usefulness of techniques from computational algebra in logic. A tableau proof system for NC is also provided.

Any intermediate propositional logic (i.e., a logic including intuitionistic logic and contained in classical logic) can be extended to a calculus with ε- and τ-operators and critical formulas. For classical logic, this results in Hilbert’s ε-calculus. The first and second ε-theorems for classical logic establish conservativity of the ε-calculus over its classical base logic. It is well known that the second ε-theorem fails for the intuitionistic ε-calculus, as prenexation is impossible. The paper investigates the effect of adding critical ε- and τ-formulas and using the translation of quantifiers into ε- and τ-terms to intermediate logics. It is shown that conservativity over the propositional base logic also holds for such intermediate ετ-calculi. The “extended” first ε-theorem holds if the base logic is a finite-valued Gödel–Dummett logic, fails otherwise, but holds for certain provable formulas in infinite-valued Gödel logic. The second ε-theorem also holds for finite-valued first-order Gödel logics. The methods used to prove the extended first ε-theorem for infinite-valued Gödel logic suggest applications to theories of arithmetic.

The use of the symbol ∨ for disjunction in formal logic is ubiquitous. Where did it come from? The paper details the evolution of the symbol ∨ in its historical and logical context. Some sources say that disjunction in its use as connecting propositions or formulas was introduced by Peano; others suggest that it originated as an abbreviation of the Latin word for “or”, *vel*. We show that the origin of the symbol ∨ for disjunction can be traced to Whitehead and Russell’s pre-*Principia* work in formal logic. Because of *Principia*’s influence, its notation was widely adopted by philosophers working in logic (the logical empiricists in the 1920s and 1930s, especially Carnap and early Quine). Hilbert’s adoption of ∨ in his *Grundzüge der theoretischen Logik* guaranteed its widespread use by mathematical logicians. The origins of other logical symbols are also discussed.

We investigate a recent proposal for modal hypersequent calculi. The interpretation of relational hypersequents incorporates an accessibility relation along the hypersequent. These systems give the same interpretation of hypersequents as Lellmann’s linear nested sequents, but were developed independently by Restall for S5 and extended to other normal modal logics by Parisi. The resulting systems obey Došen’s principle: the modal rules are the same across different modal logics. Different modal systems only differ in the presence or absence of external structural rules. With the exception of S5, the systems are modular in the sense that different structural rules capture different properties of the accessibility relation. We provide the first direct semantical cut-free completeness proofs for K, T, and D, and show how this method fails in the case of B and S4.
