# Grading for Mastery in Introductory Logic

I’ve been thinking for a long time about how to do assignments, exams, and grading differently in my intro logic course. Provincial budget cuts mean my enrolment will double to 200 students in the Fall term, and the fact that it will have to be fully online raises additional challenges. So maybe now is as good a time as any to rethink things!

Mastery Grading, a.k.a. Standards-Based Grading, is an approach that’s become increasingly popular in university math courses. In points-based grading, you assign points on all your assignments and exams, and then assign grades on the basis of these points. The system relies heavily on partial credit: students will revolt if you don’t give it, because so much can hang on fractions of a percentage point. In mastery grading, you define the learning outcomes you want students to achieve, and grade based on how many they have achieved (and perhaps at what level they have achieved them). Big perk for the instructor: you don’t have to worry about partial credit.

In intro logic, of course, a great many problems are of the kind that we ordinarily think students must be able to do (and so get high point values on tests) but that are terribly hard to award partial credit for. If a student doesn’t get a formal proof right, do you dock points for incorrect steps? Grade “holistically” according to how far along they are? If they are asked to show that A doesn’t entail B, is an interpretation that makes A and B both true worth 50%? In mastery grading, it instead makes sense to count only correct solutions. Of course you’ll want to help students get to the point of being able to solve the problems correctly: with a series of problems of increasing difficulty on problem sets before they are tested on a timed, proctored exam, for instance, and with opportunities to “try again” if they don’t get it right the first time.

Now for an online logic course, especially one with high enrollment like mine, academic honesty is going to be a bigger issue than if I had the ability to do proctored in-class tests. Evaluation that discourages cheating will be extra important, and one of the best ways to do that is to lower the stakes on exams. If I can have many short exams instead of a midterm and a final, I’ll have to worry less about cheating. That works out nicely if I want each exam to test for a specific learning objective. More exams also means more grading, and I have limited resources to do this by hand. Luckily, most of the objectives in a formal logic course can be computer graded. I’ve already made heavy use of the Carnap system in my logic courses. One drawback is that Carnap can only tell whether a solution is completely correct or not; partial-credit functionality has been added since COVID hit, but not having to manually go through a hundred half-done proofs every week will be crucial in the Fall. So mastery grading is a win-win on this front.

Assigning letter grades and incentivizing various behaviors (such as helping other students in online discussion boards) is, however, a lot harder than in a points-based approach. For this, I’m planning to use specification grading: you decide at the outset what should count as performance worthy of a specific letter grade (e.g., completing all problem sets and passing 90% of quizzes and exams for an A) and then use these specifications to convert many individual all-or-nothing data points into a letter grade. To encourage a “growth mindset” (practice makes perfect), I’ll allow students to revise or repeat assignments and tests (within limits). This would be a nightmare with 200 students and 10 tests, but if they are computer graded, I just need two versions of each (short!) test, about the same effort as having makeup versions of two or three longer exams.

I’ve already used specifications grading in Logic II (our metatheory course), where I just copied what Nicole Wyatt had pioneered. That, I think, has worked pretty well. The challenge is to implement it in the much larger Logic I.

I have a preliminary plan (learning outcomes, activities, grade scheme, token system). It’s a Google Doc with commenting turned on. Please let me know what you think!

If you want more info on mastery & specs grading, especially for math-y courses, check out the website for the just-concluded Mastery Grading Conference, especially the pre-conference assignments and the resource page. Recordings of sessions are to come soon, I hear.

# Satisfaction and assignments

When you define satisfaction for quantified formulas, e.g., \(\forall x\, A\), you have to have a way to make \(x\) range over all elements of the domain. Here are the common options:

A. Tarski-style: use variable assignments \(s\colon V \to D\) (where \(V\) is the set of variables and \(D\) the domain), then define \[\mathfrak{M}, s \models \forall x \, A \iff \mathfrak{M}, s' \models A \text{ for all $x$-variants $s'$ of $s$}.\] This requires a definition of “\(x\)-variant” but is otherwise very clean. Its drawback is that it obscures how we let \(x\) range over all elements of \(D\), and my students have a hard time understanding it and an even harder time working with it.

B. Alternative Tarski-style: we use variable assignments as above, but avoid talking about \(x\)-variants. Instead, we define the notation \(s[m/x]\), the variable assignment just like \(s\) except it assigns \(m \in D\) to \(x\). Then we have \[\mathfrak{M}, s \models \forall x \, A \iff \mathfrak{M}, s[m/x] \models A \text{ for all } m \in D.\]
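To see how clause B works concretely, here is a minimal Python sketch (my own illustrative code with a made-up formula encoding, not from any textbook): formulas are nested tuples, a structure is a finite domain plus an interpretation of predicate symbols, and \(s[m/x]\) is just a copied dictionary with one entry changed.

```python
# Hypothetical mini-evaluator for the s[m/x] clause over a finite domain.
# Formulas: ('pred', name, v1, ...), ('not', A), ('and', A, B), ('forall', x, A).

def updated(s, x, m):
    """The assignment s[m/x]: just like s, except x is mapped to m."""
    s2 = dict(s)
    s2[x] = m
    return s2

def satisfies(domain, interp, s, formula):
    """Check whether M, s |= formula, where M = (domain, interp)."""
    op = formula[0]
    if op == 'pred':
        _, name, *variables = formula
        return tuple(s[v] for v in variables) in interp[name]
    if op == 'not':
        return not satisfies(domain, interp, s, formula[1])
    if op == 'and':
        return (satisfies(domain, interp, s, formula[1])
                and satisfies(domain, interp, s, formula[2]))
    if op == 'forall':
        # The clause above: M, s |= forall x A  iff  M, s[m/x] |= A for all m in D
        _, x, body = formula
        return all(satisfies(domain, interp, updated(s, x, m), body)
                   for m in domain)
    raise ValueError(f"unknown connective: {op}")

domain = {1, 2, 3}
interp = {'P': {(1,), (2,), (3,)}, 'Q': {(1,)}}
print(satisfies(domain, interp, {}, ('forall', 'x', ('pred', 'P', 'x'))))  # True
print(satisfies(domain, interp, {}, ('forall', 'x', ('pred', 'Q', 'x'))))  # False
```

Note that no notion of \(x\)-variant is needed: the quantifier clause simply loops over the domain, building \(s[m/x]\) for each \(m\).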

C. Model theory style: instead of introducing variable assignments that provide the interpretation for variables, we define directly when a formula is satisfied by a sequence of objects: if the free variables of \(A\) are among \(y_1, \dots, y_k\), then \(\mathfrak{M} \models A[n_1,\dots, n_k]\) means what \(\mathfrak{M}, s \models A\) means Tarski-style for the assignment \(s\) that maps each \(y_i\) to \(n_i\). Then the clause for the universal quantifier becomes \[\mathfrak{M} \models \forall x \, A[n_1, \dots, n_k] \iff \mathfrak{M} \models A[m, n_1, \dots, n_k] \text{ for all } m \in D.\] This is simple in that it avoids an intermediary function, but it can easily confuse beginning students because it is less clean and less precise than the Tarski-style definitions. We have to understand that \(A\) is a formula with the free variables \(x, y_1, \dots, y_k\). But what determines the order? Or, put another way, which object interprets which variable?

D. In logic textbooks for philosophers you often see semantics developed for sentences only (i.e., formulas with free variables are avoided). Given a structure \(\mathfrak{M}\) we can define \(\mathfrak{M}[m/a]\) as the structure that’s just like \(\mathfrak{M}\) except the constant \(a\) is interpreted by \(m\in D\). Then we can define truth (not satisfaction) using \[\mathfrak{M} \models \forall x \, A \iff \mathfrak{M}[m/a] \models A[a/x] \text{ for all } m \in D,\] where \(A[a/x]\) is the substitution operation and \(a\) is a constant not already occurring in \(A\).

E. Finally, there’s Robinson-style: we treat every \(m\in D\) as a constant symbol that names itself. Then substituting \(m\) for \(x\) in \(A\) is possible, since \(m\) belongs to both the domain of the structure and to the language, and we can write \[\mathfrak{M} \models \forall x \, A \iff \mathfrak{M} \models A[m/x] \text{ for all } m \in D.\] Naturally, this is not something philosophers like to do because it just seems confused to allow domain elements to function as linguistic symbols naming themselves.

Maybe I’ll find the time to write a paper tracing the origins of all of these at some point. But for now, I wonder: which way is best, pedagogically? Specifically, the Open Logic Project uses Tarski-style, but I’d like to replace it with a definition that is easier to understand and avoids the use of \(x\)-variants. Which would you prefer for the OLP? Which do you prefer in your own teaching or research and why?

# Letter grades in Brightspace/D2L (or other LMS)

So, we’re all moving to online courses, and for some of us that means figuring out how to switch from scribbling feedback and letter grades on papers, handing them back to students, and turning those letter grades into a course grade at the end. Most of us are already using learning management systems (LMSs, such as Brightspace/D2L, Blackboard, Canvas, Moodle), and now it’s just a matter of keeping track of papers entirely in the LMS, rather than just entering grades. You probably already have a system for converting between letter grades and percentages, which the LMS uses to calculate overall grades based on the weights of various course components. If so, it’s probably easiest to stick with that. This post records how to do two things (mainly in Brightspace/D2L, since that’s what we use): (a) get the LMS to offer letter grades (or other scales) as grade choices when grading a paper, and (b) set up a grade scheme that keeps track of (averages and weights) letter grades without a percentage scheme.

For (a), you will need a Grade Scheme (see the “Schemes” tab on the “Grades” screen) that contains all the letter grades you want to assign. You need a separate scheme for each grade scale. You might have one scheme for all letter grades (including + and – grades), one for letter grades and slash grades (A, A/B, B, etc.), or even one with just ✓, ✓+, and ✓-. Once you have such a scheme, make the grade item for the assignment you want to receive, assess, and return in the D2L dropbox of type “Selectbox”. For a selectbox grade item, you have to pick a grade scheme. When you then associate this grade item with a dropbox folder, the assessment pane will give you a drop-down menu rather than a box for entering a numerical score.

To set up the grade scheme in the first place, and to solve problem (b), we have to do some math. D2L does grade schemes in percentages, but we think more naturally in grade point values: an A+ is 4.3, an A is 4, an A- is 3.7, etc. To convert these into percentages, just divide by the maximum score (i.e., 4.3): A+ is 100%, A is 4/4.3 = 93.02%, A- is 3.7/4.3 = 86.05%, etc. Do the same for any other grade scheme you want to use. E.g., if you want slash grades, you’d assign 3.5 to A/B, and 3.5/4.3 = 81.40%.
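The arithmetic is easy to script. Here is a quick Python sketch of the conversion just described (the grade-point values are the usual 4.3 scale; the function name is mine):

```python
# Convert grade points to D2L "assigned value %" by dividing by the maximum
# grade point (4.3) and rounding to two decimals.

POINTS = {'A+': 4.3, 'A': 4.0, 'A-': 3.7, 'A/B': 3.5, 'B+': 3.3, 'B': 3.0}
MAX_POINTS = 4.3

def assigned_percent(grade):
    """Return the D2L 'assigned value %' for a letter or slash grade."""
    return round(POINTS[grade] / MAX_POINTS * 100, 2)

print(assigned_percent('A'))    # 93.02
print(assigned_percent('A-'))   # 86.05
print(assigned_percent('A/B'))  # 81.4
```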

Note that whatever the maximum score is in your grade scheme should also be the “maximum score” in any individual grade item that uses this scheme.

The grade scheme in D2L asks not just for an “assigned value %” for each letter grade (or other scale item), but also for a “start %”. If you average the grades (or rather, their “assigned %” values), you may get a value that does not correspond precisely to a letter grade, so you have to decide where to start to “round up”. Say you have three papers. If you want a student to get an A overall only if they get three As, then the “start %” should be the same as the “assigned value %” for an A (93.02%). But maybe you want to give them an A if they turn in two As and an A- (or meet some other threshold, e.g., 3 As and 2 A-s on 5 papers). If so, compute the average of the grade point values of the threshold pattern: e.g., if A/A/A- earns an A, (4.0 + 4.0 + 3.7)/3 = 3.9. Then convert that to a percentage: 3.9/4.3 = 90.70%. You probably want to be careful with the start % value of a D: if you have three papers and want to pass a student with a D if they have turned in 2 of 3 papers with Ds, the start % is (1 + 1 + 0)/3 divided by 4.3, i.e., 15.50%. But if you have many assignments, a low percentage like that makes it possible to earn a D with an A on a single assignment (if you have 6 papers, and a student gets an A on the first and then never submits another paper, they will earn 15.5% overall, but you probably don’t want to pass that student). For this reason, I like to make the start value of a D the percentage equivalent of 0.9/4.3, or 20.93%.
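The “start %” computation can be sketched the same way (again my own hypothetical helper, not a D2L feature): average the grade points of the weakest pattern that should still earn the grade, then divide by the 4.3 maximum.

```python
# "Start %" for a letter grade: average the grade-point values of the
# borderline pattern, then express it as a percentage of the 4.3 maximum.

def start_percent(pattern, max_points=4.3):
    """pattern: grade-point values of the weakest mix that should still earn
    the grade, e.g. (4.0, 4.0, 3.7) for an A via A/A/A- on three papers."""
    avg = sum(pattern) / len(pattern)
    return round(avg / max_points * 100, 2)

print(start_percent((4.0, 4.0, 3.7)))  # 90.7  (A via A/A/A-)
print(start_percent((1.0, 1.0, 0.0)))  # 15.5  (D via two Ds out of three papers)
```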

If you’re using some other LMS, you will have to figure out how to do all this there. E.g., Moodle has grading scales (corresponding to D2L’s “assigned %” scale) and also a course-wide system of converting percentages to letter grades, which corresponds to D2L’s “start %” scale.

You can use this Letter Grade Scheme together with other schemes. For instance, you may use it only as the grade scheme for the final grade, but assess other assignments on a more coarse-grained basis, such as slash grades. Some items may even just get a pass/fail or turned-in/not-turned-in assessment. A popular scale for such assignments is ✓, ✓+, and ✓-. You’ll need to decide what letter-grade or grade-point equivalents to assign to such grades for the purpose of calculation (perhaps A, A+, B+) and use these to compute the “assigned %” for your ✓+/- grade scheme. (Unless you want to display a category average using these alternative grade schemes, you can set the “start %” equal to the “assigned %”.) For a scale like ✓+/-, remember to add a grade value corresponding to “not turned in” (with assigned value 0%), or else you won’t be able to distinguish between an assignment that received a ✓- and one that’s missing.

A final tip: if you’re using grade schemes, having D2L show you the grade symbol and also the percentages will clutter your grade sheet view. So when you create your selectbox grade items, check “override display options” and only leave “scheme symbol” (and maybe “scheme color”) checked. This will also keep students from being confused.

Here is my own letter grade scheme:

| Letter grade | Grade points | Start % | Assigned value % |
|--------------|--------------|---------|------------------|
| F  | 0   | 0.00%  | 0.00%   |
| D  | 1   | 20.93% | 23.26%  |
| D+ | 1.3 | 27.91% | 30.23%  |
| C- | 1.7 | 36.43% | 39.53%  |
| C  | 2   | 44.19% | 46.51%  |
| C+ | 2.3 | 51.16% | 53.49%  |
| B- | 2.7 | 59.69% | 62.79%  |
| B  | 3   | 67.44% | 69.77%  |
| B+ | 3.3 | 74.42% | 76.74%  |
| A- | 3.7 | 82.95% | 86.05%  |
| A  | 4   | 90.70% | 93.02%  |
| A+ | 4.3 | 97.67% | 100.00% |

# Need a logic course, fast?

I wasn’t going to put this online until it was done and cleaned up, but given the situation, maybe this can be of help.

I just developed and tried out a course on formal logic for a 13-week semester. It has:

- a free online textbook: *forall x: Calgary*
- beamer slides for lectures (or screencasts)
- problem sets, which are mostly completed online on Carnap and graded automatically (see here if you want to use Carnap with a different textbook)
- practice problems for Carnap (accessible on carnap.io as well)
- 3 tests (only one converted to online/carnap so far)

Here are the beamer slides. If you’re an instructor and want the sources, drop me an email at rzach@ucalgary.ca. (Of course you’ll also get the sources to the problem sets etc.)

If you can bear it, here are screencasts of me talking through the stuff on identity in these lecture slides, and doing some proofs on Carnap.

https://ucalgary.yuja.com/V/PlayList?node=261149&a=1258679219&autoplay=1

# Chalk-and-talk online: whiteboard screencasting (on Linux)

Well, all my logic lectures moved online as of last week. It’s been a bit of a scramble, as I’m sure it has been for you as well. I needed to rapidly produce videos of lectures (on logic, in my case) for students to watch. I thought I’d quickly share what I’m doing in case you’re in a similar situation.

My laptop runs Linux (Ubuntu 19.10, to be specific), so there are only a few options. If you’re on a Mac or Windows machine, there are lots, and you probably don’t need any help. Maybe your university even has a preferred solution for screencasting that integrates with your LMS.

For screencast *recording* on Linux I find Kazam works fine. It’s super-simple: all it does is record the microphone (or computer speaker output) together with whatever happens on your screen (or in a window). So if you want to show your students how to work Overleaf or Carnap or whatever, or if you want to show them a beamer presentation and talk over it, that’s all you need. (Well, you might want to invest in a decent microphone.)

But what if your lecture is chalk-and-talk? You need a way to “write on the board” while you talk through your proof or whatever. For that you need a handwriting/sketching app and a way to write comfortably (a touchscreen/tablet and stylus). I did get a stylus and an Android tablet and tried out a few handwriting apps, but I couldn’t get the palm rejection to work on any of them. (If you rest your palm on the screen, the tablet gets confused about what your stylus is doing, so you need an app or a stylus that can tell the stylus apart from your hand. I’m told iPads are better at that, and/or there are active styluses with palm rejection built in. Not going to buy an iPad just to try that out, though.)

I also have a Wacom Intuos writing tablet I got last week in panicked anticipation ($70 US/CAD 90). It works with Linux (plug-and-play USB) and just takes a little getting used to. For a handwriting app, I discovered Stylus Labs Write, and it works really nicely. I just fire it up, hit record in Kazam, and start writing. You can easily add a new page/whiteboard area, scroll back to a previous one, and in the end save the whiteboard as a PDF. Here’s an example of me talking through the truth lemma in the completeness proof. {Update: See comment below for a vote for Xournal++.} {Update 2: OpenBoard now runs on Ubuntu 20.04 — full-feature whiteboard with PDF import functionality and built-in screencast support!}

What is your solution? I made a Google spreadsheet where you can record your solution; maybe it’ll help other instructors who are struggling right now to adapt in the great COVID-19 rush online.

I would prefer to use my reMarkable for all of this: it has a desktop app for Mac & Windows that shows what you’re drawing on it. So if you have one, try that out! I was hoping to make it work on Linux using srvfb, but I have to wait until reMarkable fixes a bug that turned off ssh access to the tablet. I’ll let you know what I find out.

# Adding online exercises with automated grading to any logic course with Carnap

A couple of years ago I posted a roundup of interactive logic courseware with an automatic grading component. The favorite commercial solution is Barwise & Etchemendy’s *Language, Proof, and Logic*, a textbook that comes with software for doing truth tables, natural deduction proofs, and semantics for propositional and first-order logic, and that also automatically grades students’ solutions. The problems are that (a) it costs money and (b) it will only grade problems from that textbook. I already pointed to the open-source, free-to-use alternative Carnap by Graham Leach-Krouse and Jake Ehrlich. Graham wrote a guest post on Daily Nous about it a while ago. I’ve now used it myself with great success and thought I’d write up my experience.

Carnap is an online tool that allows you to do the following. You can upload webpages (written in a variant of Markdown) which may include logic problems of various sorts. These are, right now:

- Translation exercises (i.e., you provide a sentence in English and the student’s task is to symbolize it in propositional or first-order logic).
- Truth tables (you give sentence(s) of propositional logic, the student must fill in a truth table, and use it to determine, say, if an argument is valid, a sentence is a tautology, or if two sentences are equivalent, etc.).
- Derivations (you provide a sentence or argument and the student has to construct a formal proof for it).
- Model construction (you provide a sentence or argument, the student has to give a domain and extensions of predicates to make the sentence true, false, or show that the argument is invalid, etc.).
- Basic multiple choice and short-answer questions.
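The validity check behind truth-table exercises is just brute force over all rows. Here is a rough Python sketch of the idea (my own code, not Carnap’s implementation), representing formulas as functions of a row of truth values:

```python
from itertools import product

# An argument is valid iff no row of the truth table makes all premises
# true and the conclusion false.

def valid(premises, conclusion, atoms):
    """Brute-force truth-table test for propositional validity."""
    for values in product([True, False], repeat=len(atoms)):
        row = dict(zip(atoms, values))
        if all(p(row) for p in premises) and not conclusion(row):
            return False  # found a counterexample row
    return True

# Modus ponens: P, P -> Q, therefore Q: valid.
print(valid([lambda r: r['P'], lambda r: (not r['P']) or r['Q']],
            lambda r: r['Q'], ['P', 'Q']))   # True
# Affirming the consequent: P -> Q, Q, therefore P: invalid.
print(valid([lambda r: (not r['P']) or r['Q'], lambda r: r['Q']],
            lambda r: r['P'], ['P', 'Q']))   # False
```

The same loop, with a different final condition, decides tautologyhood and equivalence.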

Carnap comes with its own textbook and a collection of pre-made problem sets. But you can make up your own problem sets. That’s of course a bit of work, but you have complete control over the problems you want to assign. Here are some sample exercises that go with the Calgary version of *forall x*:

- Propositional symbolizations
- Truth tables
- Fitch-style natural deduction proofs for propositional logic
- Symbolization in first-order logic
- Model construction
- Natural deduction proofs for first-order logic

These are pages I give to my students to get them to become familiar with Carnap before they have to actually do problems for credit. The main difference is that for a real problem set, each exercise has a “submit” button that the student can click once they’ve found a correct solution.

To get an idea of how these problem sets are written, have a look at the documentation.

As you see, the problems are interactive: the student enters the solution, and Carnap will tell them if the solution is correct. In the case of derivations, it will also provide some feedback, e.g., tell the student why a particular application of a rule is incorrect.

You can assign a point value to each problem. Carnap also allows you to set up a course, let students sign up for the course, and lets you assign the pages you’ve created as problem sets. It will allow students to submit problems they have correctly solved, and Carnap will tally the point score earned. You can then download a spreadsheet of student scores per problem set and assign marks on the basis of that.

As you see, Carnap is incredibly flexible. Moreover, it supports the syntax and proof rules of a number of popular textbooks. I’ll highlight the free/open ones:

- Graham Leach-Krouse, *Carnap: The Book*
- Gary Hardegree, *Introduction to Modal Logic*
- P. D. Magnus, *forall x: An Introduction to Formal Logic*, and also its derivatives:
  - P. D. Magnus, Jonathan Ichikawa-Jenkins, *forall x: UBC*
  - P. D. Magnus, Tim Button, et al., *forall x: Calgary*
(Of course, the last is my favorite.)

Commercial texts supported by Carnap, which you would be evil to make your students buy of course, are:

- Bergman, Moore, Nelson, *The Logic Book* (McGraw-Hill, $130)
- Goldfarb, *Deductive Logic* (Hackett, $39)
- Hausman, Kahane, Tidman, *Logic and Philosophy* (Wadsworth, $120)
- Howard-Snyder, Howard-Snyder, Wasserman, *The Power of Logic* (McGraw-Hill, $130)
- Kalish and Montague, *Logic: Techniques of Formal Reasoning* (Oxford, $90)
- Tomassi, *Logic* (Routledge, $54)

All of these textbooks use a linear version of natural deduction (Fitch, Lemmon, or Montague), but Carnap now also has proof editors for Gentzen-style sequent calculus and natural deduction proofs and checks them for correctness.

How does it support different textbooks? Basically, the document you upload just tells Carnap, say, what sentence you want the student to produce as a translation, or what sentence you want them to give a proof of. You can change the “system” in which they do that, and based on that Carnap will show them the symbols differently (e.g., will ask them to do a truth table for \((\lnot P \land Q) \to R\) or for \((\mathord\sim P \mathbin\& Q) \supset R\)), and will accept and display proofs in different formats and allow different rules. Even if your favorite text doesn’t show up above, it’s likely already partially supported. Graham is also incredibly helpful and responsive; last term he introduced new proof systems and other features based on my requests, often within days. (Bug reports and feature requests are handled via GitHub.)

Carnap is already pretty smart. It will accept not only solutions to translation questions that are syntactically identical to the model solution, but any equivalent solution (the equivalence check is not perfect for the first-order case, but will generally accept any reasonable equivalent translation). Graham has recently introduced a few new features, e.g., you can randomize problems for different students, or require that some conditions are met for translation problems (e.g., that the translation only uses certain connectives or is in a normal form).

To get set up, just email Graham. Once you have an instructor account and are logged in, you’ll be able to see the actual problem sets I assign in my class. You’re welcome to copy and use them of course! (If you happen to use a different textbook, you’ll just have to adjust the terminology and change the “system” Carnap is supposed to use in each problem.) Check here for more of the course, like lecture slides.

# BibTeX-friendly PDF management with Zotero

For years I’ve been jealous of colleagues with Macs who apparently all use BibDesk for managing their article PDF collections and BibTeX citations in one nice program. I think I’ve finally figured out how to do both things on Linux: Zotero, with the Better BibTeX and ZotFile add-ons.

Zotero is first of all a citation management system. It’s multi-platform, open-source, not tied to a commercial publisher, widely used and well-supported. Your article database lives on your computer, but is synced with a central server. So any changes you make to the citation database gets automatically mirrored to your other computers (even if they run different OSs), and you can access it online as well. The browser extension Zotero Connector lets you import & download references and PDFs from publishers’ websites, JSTOR, etc., with a single click. It does everything a reference manager does, e.g., give you bibliographies and citations in Word or LibreOffice.

Zotero manages PDFs in one of two ways: you can store a PDF in Zotero, or you can add links to PDFs on your local drive. The former option manages the PDFs for you, syncs them across computers, etc. But you only get 300MB of online storage for free, and that’s gone quickly. But if you keep your PDF directory synced across computers (e.g., if it lives in your Dropbox), linking the PDFs is just as good. If you add a PDF, Zotero will look up the metadata for you and add a reference to your database. It keeps an index of the content of PDFs, so search will pick up hits in the PDFs and not just in the metadata. If you have a reference already, Zotero can look it up online and help you find the PDF (or library call number). The ZotFile add-on makes this even easier. For instance, with one click you can add the most recently downloaded file to a reference item, move it into your PDF directory, and rename it according to some standard schema, say “Author – Year – Title.pdf”.

All of this has worked to some extent for a while, and also works with other reference managers. What has kept me from using them is that I want my reference manager to play nice with BibTeX. That means it should export BibTeX files with (a) proper capitalization, (b) LaTeX code where necessary (e.g., mathematical formulas in titles), (c) BibTeX fields like `crossref` and `note`, which may contain LaTeX code themselves (e.g., “Reprinted in `\cite{...}`”), and (d) cite keys that don’t change on you. On the other hand, the database itself should look as normal as possible and avoid LaTeX code whenever possible (e.g., I want Gödel’s papers to be indexed under “Gödel”, not under “G{\”o}del”). When I tried Zotero the last time (and other reference managers as well), it dealt with (a) by enclosing the entire title field in braces. That meant BibTeX would not lowercase anything; and sometimes the style does require lowercasing things. I don’t remember if (b) or (c) ever worked.

Anyway, Zotero’s Better BibTeX extension does an excellent job. You can put in “Gödel” as the author name, and it will export it as “G{\”o}del”. It assigns cite keys according to a configurable pattern, but it keeps cite keys the same when importing BibTeX files. You can change them manually, and it will remember them. Additional BibTeX fields not already supported by Zotero are saved on import and included on export. It will convert HTML tags (which Zotero understands as well) to LaTeX code on export (e.g., `<i>...</i>` to `\emph{...}`). If you need LaTeX code, just enclose it in `<pre>...</pre>` tags. It does a very good job with capitalization and is even smart enough to apply its transformations only to English-language titles. Especially nice: Better BibTeX will keep your exported BibTeX files up to date. So, e.g., you can put all the references for a paper you’re working on in a Zotero “collection”, tell Better BibTeX to export it, and the .bib file will stay up to date if you change or add something to the collection in Zotero.

Want to try it?

- Install Zotero Standalone and Connector
- Install ZotFile
- Install Better BibTeX
- Check your preferences, e.g., whether you want Zotero to rename PDFs or look up metadata when saving them.
- If you want your PDFs to be linked and collected in, say, your Dropbox, set the attachment base directory in Preferences: Advanced: Files and Folders.
- Tell ZotFile where your downloaded files and your PDF directories are, so it knows where to look for the most recent PDF to attach (in Tools > ZotFile Preferences) and where to move them to. Make sure you set the ZotFile PDF naming pattern there to something you like.
- Set up an account on zotero.org and link your library to it in Preferences: Sync (but uncheck “Sync attachment files” if you don’t want your PDFs on zotero.org)
- Tweak your Better BibTeX settings, esp. the cite key pattern to make sure imported cite keys are the way you want them.

# The Emergence of First-Order Logic

The SEP entry on “The Emergence of First-Order Logic” by William Ewald is out today.

# Indian Conference on Logic and its Applications 2019

The Association for Logic in India (ALI) announces the eighth edition of its biennial *International Conference on Logic and its Applications* (ICLA), to be held at the Indian Institute of Technology Delhi from March 3 to 5, 2019.

ICLA is a forum for bringing together researchers from a wide variety of fields in which formal logic plays a significant role, along with mathematicians, computer scientists, philosophers and logicians studying foundations of formal logic in itself. A special feature of this conference is the inclusion of studies in systems of logic in the Indian tradition and historical research on logic.

As in the earlier events in this series, we shall have eminent scholars as invited speakers. Details of the last ICLA 2017 may be found at https://icla.cse.iitk.ac.in. See http://ali.cmi.ac.in for information on past events as well as updates on this conference.

The call for papers is here: https://easychair.org/cfp/icla2019

# forall x is going CC BY

*forall x* by P. D. Magnus, as well as Tim Button’s *forall x: Cambridge* and the *forall x: Calgary* remix, are now released under a Creative Commons Attribution license (rather than the more restrictive Attribution-ShareAlike license). The Fall 2018 version also incorporates some of Tim’s revisions for the latest version of *forall x: Cambridge*. You can find all three on GitHub: forall x, forall x: Cambridge, and forall x: YYC.

# PhD, Postdoc with Rosalie Iemhoff

## Postdoc position in Logic at Utrecht University, the Netherlands.

The postdoc is embedded in the research project “Optimal Proofs” funded by the Netherlands Organization for Scientific Research led by Dr. Rosalie Iemhoff, Department of Philosophy and Religious Studies, Utrecht University. The project in mathematical and philosophical logic is concerned with formalization in general and proof systems as a form of formalization in particular. Its mathematical aim is to develop methods to describe the possible proof systems of a given logic and establish, given various criteria of optimality, what the optimal proof systems of the logic are. Its philosophical aim is to develop general criteria for faithful formalization in logic and to thereby distinguish good formalizations from bad ones. The mathematical part of the project focusses on, but is not necessarily restricted to, the (non)classical logics that occur in computer science, mathematics, and philosophy, while the philosophical part of the project also takes into account domains where formalization in logic is less common. The postdoc is expected to contribute primarily to the mathematical part of the project. Whether the research of the postdoc also extends to the philosophical part of the project depends on his or her interests.

# Proof by legerdemain

Peli Grietzer shared a blog post by David Auerbach on Twitter yesterday containing the following lovely quote about Smullyan and Carnap:

I particularly delighted in playing tricks on the philosopher Rudolf Carnap; he was the perfect audience! (Most scientists and mathematicians are; they are so honest themselves that they have great difficulty in seeing through the deceptions of others.) After one particular trick, Carnap said, “Nohhhh! I didn’t think that could happen in *any* possible world, let alone *this* one!”

In item #249 of my book of logic puzzles titled *What Is the Name of This Book?*, I describe an infallible method of proving anything whatsoever. Only a magician is capable of employing the method, however. I once used it on Rudolf Carnap to prove the existence of God.

“Here you see a red card,” I said to Professor Carnap as I removed a card from the deck. “I place it face down in your palm. Now, you know that a false proposition implies any proposition. Therefore, if this card were black, then God would exist. Do you agree?”

“Oh, certainly,” replied Carnap, “*if* the card were black, then God would exist.”

“Very good,” I said as I turned over the card. “As you see, the card is black. Therefore, God exists!”

“Ah, yes!” replied Carnap in a philosophical tone. “Proof by legerdemain! Same as the theologians use!”

Raymond Smullyan, *5000 BC and Other Philosophical Fantasies*. New York: St. Martin’s Press, 1983, p. 24.

See Auerbach’s post for more Carnap and Smullyan anecdotes.

# Why φ?

Yesterday, @gravbeast asked on Twitter,

Does anyone know why we traditionally use Greek phi and psi for metasyntactic variables representing arbitrary logic formulas? Is it just because ‘formula’ begins with an ‘f’ sound? And chi was being used for other things?

Although Whitehead and Russell already used φ and ψ for propositional functions, the convention of using them specifically as meta-variables for formulas seems to go back to Quine’s 1940 *Mathematical Logic*. Quine used μ, ν as metavariables for arbitrary expressions, and reserved α, β, γ for variables, ξ, η, σ for terms, and φ, χ, ψ for statements. (ε, ι, λ had special roles.) Why φ for statements? Who knows. Perhaps simply because Whitehead and Russell used it for propositional functions in *Principia*? Or because “p” for “proposition” was entrenched, and in classical Greek, φ was an aspirated p sound, not an f?

The most common alternative in use at the time was the use of Fraktur letters, e.g., \(\mathfrak{A}\) as a metavariable for formulas, and *A* as a formula variable; *x* as a bound variable and \(\mathfrak{x}\) as a metavariable for bound variables. This was the convention in the Hilbert school, also followed by Carnap. Kleene later used script letters for metavariables and upright roman type for the corresponding symbols of the object language. But indicating the difference by different fonts is perhaps not ideal, and Fraktur may not have been the most appealing choice anyway, both because it was the 1940s and because the type was probably not available in American print shops.

# Postdoc in Formalism, Formalization, Intuition and Understanding in Mathematics

Archives Poincaré (Nancy) and IHPST Paris are advertising for a 20-month postdoc fellowship.

# Logic Colloquium, Udine

The European Summer Meeting of the Association for Symbolic Logic will be in Udine, just north of Venice, July 23-28. Abstracts for contributed talks are due on April 27. Student members of the ASL are eligible for travel grants!

# The Significance of Philosophy to Mathematics

If you wanted to explain how philosophy has been important to mathematics, and why it can and should continue to be, it would be hard to do it better than Jeremy Avigad. In this beautiful plea for a mathematically relevant philosophy of mathematics disguised as a book review he writes:

Throughout the centuries, there has been considerable interaction between philosophy and mathematics, with no sharp line dividing the two. René Descartes encouraged a fundamental mathematization of the sciences and laid the philosophical groundwork to support it, thereby launching modern science and modern philosophy in one fell swoop. In his time, Leibniz was best known for metaphysical views that he derived from his unpublished work in logic. Seventeenth-century scientists were known as natural philosophers; Newton’s theory of gravitation, positing action at a distance, upended Boyle’s mechanical philosophy; and early modern philosophy, and philosophy ever since, has had to deal with the problem of how, and to what extent, mathematical models can explain physical phenomena. Statistics emerged as a response to skeptical concerns raised by the philosopher David Hume as to how we draw reliable conclusions from regularities that we observe. Laplace’s *Essai philosophique sur les probabilités*, a philosophical exploration of the nature of probability, served as an introduction to his monumental mathematical work, *Théorie analytique des probabilités*.

In these examples, the influence runs in both directions, with mathematical and scientific advances informing philosophical work, and the converse. Riemann’s revolutionary *Habilitation* lecture of 1854, *Über die Hypothesen welche der Geometrie zu Grunde liegen* (“On the hypotheses that lie at the foundations of geometry”), was influenced by his reading of the neo-Kantian philosopher Herbart. Gottlob Frege, the founder of analytic philosophy, was a professor of mathematics in Jena who wrote his doctoral dissertation on the representation of ideal elements in projective geometry. Late nineteenth-century mathematical developments, which came to a head in the early twentieth-century crisis of foundations, provoked strong reactions from all the leading figures in mathematics: Dedekind, Kronecker, Cantor, Hilbert, Poincaré, Hadamard, Borel, Lebesgue, Brouwer, Weyl, and von Neumann all weighed in on the sweeping changes that were taking place, drawing on fundamentally philosophical positions to support their views. Bertrand Russell and G. H. Hardy exchanged letters on logic, set theory, and the foundations of mathematics. F. P. Ramsey’s contributions to combinatorics, probability, and economics played a part in his philosophical theories of knowledge, rationality, and the foundations of mathematics. Alan Turing was an active participant in Wittgenstein’s 1939 lectures on the foundations of mathematics and brought his theory of computability to bear on problems in the philosophy of mind and the foundations of mathematics.

Go and read the whole thing, please. And feel free to suggest other examples!

The book reviewed is *Proof and Other Dilemmas: Mathematics and Philosophy*, Bonnie Gold and Roger A. Simons, eds., Mathematical Association of America, 2008

[Photo: Bertrand Russell and G. H. Hardy as portrayed by Jeremy Northam and Jeremy Irons in *The Man Who Knew Infinity*, via MovieStillsDB]

# Ptolemaic Astronomy

Working on the chapters on counterfactual conditionals for the Open Logic Project, I needed some illustrations for David Lewis’s sphere models, which he jokingly called “Ptolemaic astronomy.” Since Franz Berto joked that this should just require `\usepackage{ptolemaicastronomy}`, I wrote some LaTeX macros to make this easier using Ti*k*Z. You can download `ptolemaicastronomy.sty` (it should work independently of OLP); examples are in the OLP chapter on minimal change semantics (PDF, source).

(This will probably interest a total of two people other than me so I didn’t spend much time documenting it, but if you want to use it and need help just comment here.)

Update: it’s now in its own github repository and properly documented.

# A New University of Calgary LaTeX Thesis Class based on Memoir

The University of Calgary provides a LaTeX thesis class on its website. That class is based on the original thesis class, modified over the years to keep up with changes to the thesis guidelines of the Faculty of Graduate Studies. It produces atrocious results. Chapter headings are not aligned properly. Margins are set to 1 inch on all sides, which results in unreadably long lines of text. The template provided sets the typeface to Times New Roman. Urgh. A better class (by Mark Girard) is already available, but it, too, sets the margins to 1 inch. FGS no longer requires that the margins be exactly 1 inch, just that they are at least 1 inch. So we are no longer forced to produce that atrocious page layout.

I made a new thesis class. It’s based on memoir, which provides some nice functionality to compute an attractive page layout. By default, the class sets the thesis halfspaced, in 11 point type, with about 65 characters per line. This produces a page approximating a nicely laid out book page. The `manuscript` class option sets it up for 12 point, double spaced, with 72 characters per line and 25 lines per page. That’s still readable, but gives you extra space between the lines for annotations and editing marks, and wider margins. There are also class options to load some decent typefaces (`palatino`, `utopia`, `garamond`, `libertine`, and, ok, `times`).
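As a sketch, a document using the class might begin as follows. The class file name `ucalgary-thesis` here is just an illustrative placeholder; only the `manuscript` and typeface options are the ones described above.

```latex
% Illustrative preamble only: the class name below is a placeholder,
% not the actual file name of the class.
% The manuscript option gives 12 pt, double spacing, 72 characters per
% line, 25 lines per page; palatino selects the Palatino typeface.
\documentclass[manuscript,palatino]{ucalgary-thesis}
\begin{document}
\chapter{Introduction}
\end{document}
```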

Once upon a time, theses were typed on a typewriter and submitted to the examination committee in hardcopy. Typewriter fonts are “monospaced,” i.e., every character takes the same amount of space. “Elite” typewriters would print 12 characters per inch, or 72 characters per 6 inch line, and “Pica” typewriters 10 cpi, or 60 characters per line. Typewriters fit 6 lines into a vertical inch, or 25 lines per double-spaced page. A word is on average 5 characters long, hence we get about 250 words per manuscript page.

No one uses typewriters anymore to write theses, but thesis style guidelines are still a holdover from the time we did. The guidelines still require that theses be halfspaced or double spaced. But of course they allow the use of word processing software. Word processors don’t use monospaced typewriter fonts, and the recommended typefaces such as Times Roman are proportionally spaced and much narrower. That means even with 12 point type, a 6” line now contains 89 characters on average, rather than 60. (Chris Pearson has estimated “character constants” for various typefaces which you can use to estimate the average number of characters per inch at various type sizes. For Times New Roman, the factor is 2.48. At a line length of 6”, i.e., 432 pt, and 12 pt type, that gives 432 × (2.48/12) = 89.28 characters per line. With minimal margins of 1” you get 96 characters per line.)
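Written out as a formula: with character constant \(c\) and type size \(s\) (in points), a line of length \(\ell\) (in points) holds roughly \(\ell \cdot c / s\) characters.

```latex
% Estimated characters per line for a proportionally spaced typeface:
%   chars per line ~ line length (pt) x character constant / type size (pt)
% Times New Roman (c = 2.48), 12 pt type, 6 in (432 pt) line:
\[
  \frac{432 \times 2.48}{12} = 89.28 \text{ characters per line.}
\]
% With 1 in margins on letter paper, the line is 6.5 in (468 pt):
\[
  \frac{468 \times 2.48}{12} = 96.72 \approx 96 \text{ characters per line.}
\]
```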

Applying typewriter rules to electronically typeset manuscripts results in lines that are very long—and that means they are hard to read. Ideally, there should be anywhere between 50 and 75 characters per line, and 66 characters is widely considered ideal. *Readability* is a virtue you want your thesis to have. And the thesis guidelines, thankfully, no longer *set* the margins, but only require *minimum* margins of 1” on all sides.

# Modal Logic! Propositional Logic! Tableaux!

*Boxes and Diamonds*, and you can check out what’s there so far on the builds site. This project of course required new material on modal logic. So far this consists of revised and expanded notes by our dear late colleague Aldo Antonelli. These now live in `content/normal-modal-logic` and cover relational models for normal modal logics, frame correspondence, derivations, canonical models, and filtrations. So that’s one big exciting addition.
Since the OLP didn’t cover propositional logic separately, I just now added that part as well so I can include it as review chapters. There’s a short chapter on truth-value semantics in `propositional-logic/syntax-and-semantics`. The proof systems, and completeness for them, are covered as well. I didn’t write anything new for those, but rather made the respective sections for first-order logic flexible. OLP now has an `FOL` “tag”: if `FOL` is set to true and you compile the chapter on the sequent calculus, say, you get the full first-order version, with soundness proved relative to first-order structures. If `FOL` is set to false, the rules for the quantifiers and identity are omitted, and soundness is proved relative to propositional valuations. The same goes for the completeness theorem: with `FOL` set to false, it leaves out the Henkin construction and constructs a valuation from a complete consistent set rather than a term model from a saturated complete consistent set. This works fine if you need only one or the other; if you want both, you’ll currently get a lot of repetition. I hope to add code so that you can first compile without `FOL` and then with it, and the second pass will refer to the text produced by the first pass rather than do everything from scratch. You can compare the two versions in the complete PDF.
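A minimal sketch of how such a tag could work, using a plain LaTeX conditional. This is just an illustration of the idea, not OLP’s actual tagging mechanism.

```latex
% Define a boolean "tag" and pick a value before compiling.
\newif\ifFOL
\FOLtrue % first-order version; use \FOLfalse for the propositional one

\ifFOL
  Soundness is proved relative to first-order structures.
\else
  Soundness is proved relative to propositional valuations.
\fi
```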
Proof systems for modal logics are tricky; many systems don’t have nice natural deduction systems, say. The tableau method, however, works very nicely and uniformly. The OLP didn’t have a chapter on tableaux, so this motivated me to add that as well. Tableaux are also often covered in intro logic courses (often called “truth trees”), so including them as a proof system has the added advantage of tying in better with introductory logic material. I opted for prefixed tableaux (true and false are explicitly labelled, rather than implicit in negated and unnegated formulas), since that lends itself more easily to a comparison with the sequent calculus, but also because it extends directly to many-valued logics. The material on tableaux lives in `first-order-logic/tableaux`.
Thanks to Clea Rees for the `prooftrees` package, which made it much easier to typeset the tableaux, and to Alex Kocurek for his tips on doing modal diagrams in Ti*k*Z.