Why is Every Σ1 Function a Composition of Two Δ0 Functions?

Today I taught Ch. 13 of Peter Smith’s book. We showed that every Σ1 function can be written as a composition of two Δ0 functions (p. 108). In his proof of this, Peter is following Boolos, Burgess & Jeffrey (Lemma 16.12 on p. 206 of the 4th & 5th ed.; it’s not in the 3rd, so I’m guessing it’s due to John Burgess). Until today I really didn’t understand why (as opposed to that) this works, or how you’d come up with it. Peter calls it “clever trickery.” But standing up there at the blackboard I realized that there’s a way to think of it which makes it completely straightforward—and which results (maybe?) in a simpler solution.

Suppose f is Σ1, i.e., there is a Δ0 formula F(x, y, z) so that f(x) = y iff ∃z F(x, y, z) (in N). How close to Δ0 can you get? [NB: I’m using F instead of Peter’s R, and different variables, but I think it’s clearer this way.]

Note first of all that if f is total (as all functions are in Peter’s book), then for any x, there will be exactly one y so that ∃z F(x, y, z).
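To have something concrete to play with, here is a small Python sketch (a toy example of my own, not from Peter or from Boolos, Burgess & Jeffrey): F is a stand-in for a Δ0 relation, chosen so that f(x) = x² and any z ≥ x counts as a witness for the ∃z.

    # Toy stand-in for a Delta_0 relation F(x, y, z): decidable by direct checking.
    # It holds iff y = x^2 and z >= x, so f(x) = x^2 and any z >= x is a witness
    # for the existential quantifier.
    def F(x, y, z):
        return y == x * x and z >= x

    def f(x):
        """The unique y such that some z satisfies F(x, y, z), found by
        unbounded search over larger and larger pairs (y, z)."""
        n = 0
        while True:
            for y in range(n + 1):
                for z in range(n + 1):
                    if F(x, y, z):
                        return y
            n += 1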

The ∃z is the problem: you want to put a bound on it. How far do you have to go to find a z such that F(x, y, z), when y is the value of f(x), i.e., when there is some z with F(x, y, z)? Trivially, as far as

the least u so that ∃z≤u F(x, y, z).

But that function still depends on y; obviously you can’t really bound z by it. You’d like the bound to be a function of x only. Well, we can put in a quantifier to get rid of the free y. Naturally you want a bounded quantifier and the obvious bound is u:

g(x) = the least u so that ∃y≤u ∃z≤u F(x, y, z)

If f is total, then for any x there are y and z with F(x, y, z); taking u = max(y, z) gives such a u, so g is total.

g(x) might be further than you have to go to find a witness for the ∃z, but that’s ok. And it’s a Δ0 function since

g(x) = u iff ∃y≤u ∃z≤u F(x, y, z) ∧ ∀v≤u (∃y≤v ∃z≤v F(x, y, z) → u = v)
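Continuing the toy example, here is a sketch of g by unbounded search for the least u, together with a direct check of the Δ0-style characterization of g(x) = u just given (the helper names bounded_witness and is_g_value are mine):

    # Same toy F as above (stand-in for a Delta_0 relation).
    def F(x, y, z):
        return y == x * x and z >= x

    def bounded_witness(x, u):
        """Is there a y <= u and a z <= u with F(x, y, z)?"""
        return any(F(x, y, z) for y in range(u + 1) for z in range(u + 1))

    def g(x):
        """The least u such that some y <= u and z <= u satisfy F(x, y, z)."""
        u = 0
        while not bounded_witness(x, u):
            u += 1
        return u

    def is_g_value(x, u):
        """The Delta_0-style characterization: the bounded search succeeds
        at u and at no smaller v."""
        return bounded_witness(x, u) and all(not bounded_witness(x, v) for v in range(u))

    assert all(is_g_value(x, g(x)) for x in range(10))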

So f(x) = y iff ∃z≤g(x) F(x, y, z). Let’s see if the formula we get by replacing g(x) with a free variable u defines a function:

h(x, u) = y iff ∃z≤u F(x, y, z)

For any pair x, u, there’s at most one y with ∃z≤u F(x, y, z), since for any x there’s at most one y with ∃z F(x, y, z). Hence, h is functional. It might not be total, since for some pairs x, u (in particular, for u less than g(x)) there might be no y at all with a z≤u such that F(x, y, z). So let’s fix that:

h(x, u) = y iff ∃z≤u F(x, y, z) ∨ (¬∃z≤u F(x, y, z) ∧ y = 0)
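Here is a corresponding sketch of h for the toy F. To make the search terminate, the sketch bounds the search for y by u as well (as in the Boolos, Burgess & Jeffrey version rather than the definition just given); in the toy example this makes no difference for u ≥ g(x), which is the only case the composition below uses.

    # Same toy F as above (stand-in for a Delta_0 relation).
    def F(x, y, z):
        return y == x * x and z >= x

    def h(x, u):
        """The y (searched for up to u) that has a witness z <= u, if any;
        there is at most one such y because f is functional. Default 0 otherwise."""
        for y in range(u + 1):
            if any(F(x, y, z) for z in range(u + 1)):
                return y
        return 0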

Now we have that f(x) = h(x, g(x)): at u = g(x) there is a witness z ≤ u for the value y = f(x), and the only cases where h(x, u) might differ from f(x) are those where h(x, u) = 0 by default, which can only happen when u is less than g(x).
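With the three sketches above pasted into one file, the composition claim can be checked directly:

    # Uses F, f, g, h from the sketches above.
    for x in range(20):
        assert f(x) == h(x, g(x))
    print("f(x) == h(x, g(x)) for x = 0, ..., 19")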

My definition of h is a bit simpler than John and Peter’s: they have “the least y≤u” where I just have “the y”. Did I overlook some subtlety?
