Cut elimination and normalization for generalized single and multi-conclusion sequent and natural deduction calculi

Forthcoming in The Review of Symbolic Logic.

Any set of truth-functional connectives has sequent calculus rules that can be generated systematically from the truth tables of the connectives. Such a sequent calculus gives rise to a multi-conclusion natural deduction system and to a version of Parigot’s free deduction. The elimination rules are “general,” but can be systematically simplified. Cut-elimination and normalization hold. Restriction to a single formula in the succedent yields intuitionistic versions of these systems. The rules also yield generalized lambda calculi providing proof terms for natural deduction proofs as in the Curry-Howard isomorphism. Addition of an indirect proof rule yields classical single-conclusion versions of these systems. Gentzen’s standard systems arise as special cases.
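To give a flavor of the systematic generation of rules, here is a sketch of the standard recipe for reading sequent rules off a truth table; the paper’s own formulation may differ in detail. Each line of a connective’s truth table yields one rule: a right rule if the connective takes the value T on that line, a left rule if it takes the value F. In the premises, an argument formula goes to the right of the sequent arrow if it is true on that line and to the left if it is false.

```latex
% Sketch of the standard truth-table-to-rules recipe (not necessarily the
% paper's exact rules), illustrated for conjunction.
% The line where A is T and B is T, on which A \wedge B is T, gives the right rule:
\[
  \frac{\Gamma \Rightarrow \Delta, A \qquad \Gamma \Rightarrow \Delta, B}
       {\Gamma \Rightarrow \Delta, A \wedge B}
\]
% The line where A is T and B is F, on which A \wedge B is F, gives one of three left rules:
\[
  \frac{\Gamma \Rightarrow \Delta, A \qquad \Gamma, B \Rightarrow \Delta}
       {\Gamma, A \wedge B \Rightarrow \Delta}
\]
% The first premise of this left rule is redundant, so it simplifies to
% Gentzen's familiar rule: from \Gamma, B \Rightarrow \Delta infer
% \Gamma, A \wedge B \Rightarrow \Delta.
```

Applied to negation, for instance, the same recipe yields exactly Gentzen’s two negation rules, which gives one sense in which the standard systems arise as special cases.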

DOI: 10.1017/S1755020320000015

Download preprint


Grading for Mastery in Introductory Logic

I’ve been thinking for a long time about how to do assignments, exams, and grading differently in my intro logic course. Provincial budget cuts mean my enrolment will double to 200 students in the Fall term, and the fact that it will have to be fully online raises additional challenges. So maybe now is as good a time as any to rethink things!

Mastery grading, aka standards-based grading, is an approach that’s become increasingly popular in university math courses. In points-based grading, you assign points on all your assignments and exams, and then assign grades on the basis of these points. The system relies heavily on partial credit. Students will revolt if you don’t give it, because so much can hang on fractions of a percentage point. In mastery grading, you instead define the learning outcomes you want students to achieve and grade based on how many they have achieved (and perhaps at what level they have achieved them). Big perk for the instructor: you don’t have to worry about partial credit.

In intro logic, of course, a great many problems are of the kind that we ordinarily think students must be able to do (and so carry high point values on tests) but that are terribly hard to award partial credit for. If a student doesn’t get a formal proof right, do you dock points for incorrect steps? Grade “holistically” according to how far along they are? If they are asked to show that A doesn’t entail B, is an interpretation that makes A and B both true worth 50%? In mastery grading, it instead makes sense to count only correct solutions. Of course you’ll want to help students get to the point where they can solve the problems correctly, with a series of problems of increasing difficulty on problem sets before they are tested on a timed, proctored exam, for instance, and with opportunities to “try again” if they don’t get it right the first time.

Now for an online logic course, especially one with high enrolment like mine, academic honesty is going to be a bigger issue than if I had the ability to do proctored in-class tests. Evaluation that discourages cheating will be extra important, and one of the best ways to do that is to lower the stakes on exams. If I can have many short exams instead of a midterm and a final, I’ll have to worry less about cheating. That works out nicely if I want each exam to test for a specific learning objective. More exams also means more grading, and I have limited resources to do this by hand. Luckily, most of the objectives in a formal logic course can be computer graded. I’ve already made heavy use of the Carnap system in my logic courses. One drawback is that Carnap can only tell whether a solution is completely correct or not. Although partial-credit functionality has been added since COVID hit, not having to manually go through a hundred half-done proofs every week will be crucial in the Fall. So mastery grading is a win-win on this front.

Assigning letter grades and incentivizing various behaviors (such as helping other students in online discussion boards) is, however, a lot harder than in a points-based approach. For this, I’m planning to use specifications grading: you decide at the outset what should count as performance worthy of a specific letter grade (e.g., completing all problem sets and passing 90% of quizzes and exams for an A) and then use these specifications to convert many individual all-or-nothing data points to a letter grade, as in the sketch below. To encourage a “growth mindset” (practice makes perfect), I’ll allow students to revise or repeat assignments and tests (within limits). This would be a nightmare with 200 students and 10 tests, but if they are computer graded, I just need to have two versions of each (short!) test, about the same effort as having makeup versions of two or three longer exams.
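To make the conversion concrete, here is a minimal sketch in Python of how specifications like these collapse a pile of all-or-nothing results into a letter grade. The objectives, thresholds, and grade bundles below are hypothetical placeholders, not the actual scheme in my planning doc.

```python
# Hypothetical specifications-grading scheme: every recorded outcome is
# all-or-nothing, and each letter grade is defined by a bundle of requirements.
# The thresholds are illustrative placeholders, not the actual course scheme.

def letter_grade(problem_sets_done: int, quizzes_passed: int, quizzes_total: int) -> str:
    """Convert all-or-nothing outcomes into a letter grade."""
    quiz_rate = quizzes_passed / quizzes_total
    if problem_sets_done >= 10 and quiz_rate >= 0.9:
        return "A"
    if problem_sets_done >= 8 and quiz_rate >= 0.8:
        return "B"
    if problem_sets_done >= 6 and quiz_rate >= 0.7:
        return "C"
    if problem_sets_done >= 4 and quiz_rate >= 0.5:
        return "D"
    return "F"

# Example: 9 problem sets completed, 8 of 10 quizzes passed
print(letter_grade(9, 8, 10))  # -> B
```

The point of the sketch is just that no partial credit enters anywhere: each quiz or problem set either counts or it doesn’t, and the letter grade falls out of which bundle of specifications a student has met.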

I’ve already used specifications grading in Logic II (our metatheory course), where I just copied what Nicole Wyatt had pioneered. That, I think, has worked pretty well. The challenge is to implement it in the much larger Logic I.

I have a preliminary plan (learning outcomes, activities, grade scheme, token system); it’s a Google Doc with commenting turned on. Please let me know what you think!

If you want more info on mastery and specs grading for math-y courses in particular, check out the website for the just-completed Mastery Grading Conference, especially the pre-conference assignments and the resource page. Recordings of the sessions are coming soon, I hear.