# Abstract

P vs NP is resolved by refuting Cantor’s Uncountability Theorems, but allowing them to live on in a finite model of ZFC which interprets the least infinite cardinal as the greatest finite number ever referred to, and which turns out to make ZFC consistent and complete under that finite interpretation. This result relies on findings from my earlier piece on Cantor’s Fallacy, particularly that paper’s key insight that Cantor committed a category mistake in positing some higher form of infinite extension that makes a set uncountable, when in fact what makes a set appear uncountable is its infinite intension, which renders it not uncountable but, rather, countable but unspecifiable.

In the context of my refutation of Cantor’s Uncountability Theorems, I am able to prove P = NP almost trivially, but this proof comes with the caveat that it ends up implying the opposite of what conventional wisdom has thought such a proof would mean for practical computation.

For rather than implying that NP-Complete problems can be efficiently solved, a corollary of my proof of P = NP turns out to be that there exists a distinct sub-category of problems in P that cannot be efficiently computed despite being solvable in polynomial time, and which I denote as P>.

I also prove that the subset of P consisting of the problems that can be efficiently computed, which I denote as P< and define in relation to the greatest finite number ever referred to, does not equal NP.

My proof that P< != NP forms a bookend with my proof that P = NP, with implications for the foundations of mathematics that very satisfyingly reflect the overall empirical experience of working mathematicians, computer scientists and engineers with computation in general.

The paradigm shift underlying these proofs also resolves all known philosophical paradoxes in the foundations of mathematics and formal logic, and provides a new foundation for mathematics that leaves ample room for the continuing development of useful theory with ZFC and other popular foundational projects, and that brings into workable harmony the views and insights of intuitionists, formalists, positivists and empiricists into the nature of mathematics in such a way that opens up wide vistas for new research based on the work of all of them.

Finally, the new foundation for mathematics presented in this paper gives us foundational insight into the true meaning of Gödel’s Second Incompleteness Theorem and the Continuum Hypothesis, and it also strongly suggests that Heisenberg’s Uncertainty Principle is simply an applied instance of the representation within ZFC of unspecifiably large finite numbers as being “transfinite.”

It also suggests that Perelman’s proof of the Poincaré Conjecture can be best understood as an instance of awakening to the true finiteness of a Cantorian infinity, and that further work stemming from this paper could investigate the relationship between its De-Cantorized Foundations and both Abraham Robinson’s non-standard analysis and Mandelbrot’s monsters.

# The Essence of P vs NP

An NP-Complete problem is one that involves an exponential number of candidates to be filtered to determine whether a polynomial-time verifiable criterion is met by any of them.

Brute force solutions all require listing out and testing a number of candidates that is exponentially large in comparison to the size of the initial parameters, which usually include a finite set whose number of subsets stands in polynomial relation to the number of candidates.
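As an illustrative sketch of this brute-force shape, here is a toy SUBSET-SUM solver in Python (all names are mine, chosen for illustration, not from any library): the candidate set is the full power set of the initial set, yet each individual candidate is verified in polynomial time.

```python
from itertools import combinations

def subset_sum_brute_force(weights, target):
    """Brute-force SUBSET-SUM: the candidate set is all 2^n subsets of the
    initial set, but verifying any single candidate takes only polynomial
    time (a sum and a comparison)."""
    n = len(weights)
    for r in range(n + 1):
        for combo in combinations(range(n), r):           # exponentially many candidates
            if sum(weights[i] for i in combo) == target:  # polynomial-time verification
                return list(combo)                        # indices of a satisfying subset
    return None

# e.g. subset_sum_brute_force([3, 5, 7, 11], 12) -> [1, 2], since 5 + 7 = 12
```

The verification step is cheap; the cost lies entirely in how many candidates must be examined.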

If no solution can do more than polynomially better than brute force, then P != NP. If some solution can, then P = NP.

But since verifying any candidate takes only polynomial time, the exponential time component of the solution must occur in the candidate identification process.

Thus for any NP problem, if for each set of initial conditions there is a polynomial time method of identifying a polynomial-size subset of the candidates that intersects the set of all criterion-satisfying candidates, then the problem is solved in P time, and hence P = NP.

Thus proving P != NP requires proving, for some NP-Complete problem, that all methods of listing a candidate subset that intersects the set of all criterion-satisfying candidates produces a candidate set that is super-polynomial in size in relation to the initial conditions.

All NP-Complete problems concern finite initial conditions and finite candidate sets, but proving P != NP or P = NP requires proving something about all possible finite initial conditions, thus about arbitrarily large finite initial conditions.

# Diagonalization Revisited

Now, suppose we have a proof that P = NP. It must produce, for any initial conditions no matter how large, a candidate subset that is polynomial in size in relation to those initial conditions. Then the list of all the candidates for all the initial conditions is itself a recursively enumerable set (we know the set of all possible initial conditions is recursively enumerable since it is for at least one NP-Complete problem, SAT). To enumerate it, we simply enumerate all the possible initial conditions and, for each, produce the finite polynomial-size candidate list in polynomial time.

It might seem, then, that all we need to do to prove P != NP is prove that the infinite union of the finite candidate sets of each set of possible initial conditions is not recursively enumerable. And we should be able to come up with a diagonalization argument to prove this.

Unfortunately, however, there is no requirement that the method for producing polynomial-size candidate sets in polynomial time be uniform across all initial conditions. What if there is a unique method of doing this for each slate of initial conditions?

Then for each slate of initial conditions we must include the time required to determine the unique method in the computation time of the candidate set.

It is fair, however, to require that any proof that P = NP also include computation of initial-condition-dependent method variations within the computation time claimed for producing the polynomial-size candidate set. Thus our strategy to use diagonalization to prove P != NP appears salvageable in this way.

Suppose, then, that we succeed in diagonalizing the candidate set for a given proof that P = NP, thus refuting the proof.

Have we thereby proven that P != NP? No, for we have only proven that one particular method of generating the polynomial-size candidate sets in polynomial time for all initial conditions fails to list all actual solutions.

All we can do by diagonalization is to attack each attempt to recursively enumerate the solutions for all initial conditions and bat them down one by one. Diagonalization gives us no general method for pre-empting all such attempts at once. We can perhaps manage to pre-empt certain subclasses of such attempted methods, but we cannot pre-empt them all.

So am I saying that P != NP is unprovable? Not exactly.

I am saying that P != NP is not provable by diagonalization, and of course this is not a new claim. The general consensus gave up decades ago on proving P != NP by diagonalization for the very reasons I have stated. One could even say that it was this failure of diagonalization for proving P != NP that has made the P vs NP question both interesting and important.

Or did they give up? The theorem that showed that relativizable proofs cannot resolve P vs NP would seem to have put an end to diagonalizing methods of proving P != NP, but the continued pursuit of super-polynomial circuit lower bounds that tacitly rely on diagonalization shows that some people are as tenacious in pursuing a dead end route to P != NP as some are in pursuing dead end routes to P = NP.

But I have more to say than just that.

# Attempts to prove P = NP diagonalize attempts to prove P != NP

The main point I want to make here is that diagonalization fails for proving P != NP not because it is irrelevant to the core issue in P vs NP, but because diagonalization actually works against proving P != NP.

Now, this does not mean diagonalization can be used to prove P = NP. It simply means diagonalization is at the heart of the difficulty in proving P != NP, because it is in fact the attempts to prove P != NP that keep getting diagonalized by fresh approaches trying to prove P = NP.

So I want to elucidate exactly how this is the case. I am not making a mere heuristic observation. I am making a mathematical claim.

Any proof that P != NP must prove that all possible attempts to recursively enumerate a set intersecting that of all the solutions for all initial conditions of an NP-Complete problem must fail. But to succeed, such a proof must recursively enumerate all possible ways of recursively enumerating such purported intersecting sets.

I claim the following.

**Theorem 1: The set of all possible ways to recursively enumerate a candidate subset intersecting that of all the criterion-satisfying candidates for all initial conditions of an NP-Complete problem is not recursively enumerable.**

**Proof:** It will suffice to consider the 3SAT problem. The initial condition for it is simply a CNF formula. The set of all initial conditions is the set of all CNF formulae. A set intersecting that of all the criterion-satisfying candidates for all initial conditions includes every pairing of a CNF formula (the formula is the slate of initial conditions) with some polynomial-size set of valuations that includes at least one that satisfies it if there is one.

Construct a recursive enumeration of the 3SAT CNF formulae by running through all formulae with 3 literals, then all those with 4 literals, and so on.
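A sketch of such a staged enumeration, using my own (assumed) encoding of a formula as a tuple of clauses, each clause being three signed variable indices:

```python
from itertools import combinations

def enumerate_3cnf(max_vars):
    """Enumerate 3-CNF formulae in stages: all formulae over n variables
    for n = 3, 4, ..., max_vars. A clause is a 3-tuple of (variable, sign)
    literals over three distinct variables; a formula is a tuple of clauses.
    Each stage is finite, so this is a recursive enumeration in principle,
    though the stages grow astronomically fast."""
    for n in range(3, max_vars + 1):
        literals = [(v, s) for v in range(n) for s in (True, False)]
        clauses = [c for c in combinations(literals, 3)
                   if len({v for v, _ in c}) == 3]  # three distinct variables
        # every non-empty set of clauses is a formula over n variables
        for r in range(1, len(clauses) + 1):
            for formula in combinations(clauses, r):
                yield n, formula
```

For n = 3 there are 8 possible clauses and hence 2^8 - 1 = 255 formulae, so even the first stage is already combinatorially busy.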

Suppose we have an attempted proof that P=NP. It must include some recursive enumeration that pairs each 3SAT CNF formula with at least one polynomial-size set of valuations that intersects the set of all its satisfying valuations, if that set is non-empty.

When we say “polynomial-size” we mean of course that there is a finite maximum exponent t such that, for all CNF formulae of any number of literals n, the number of valuations in the polynomial-size candidate subset chosen by the proof’s exponential-bustingly clever search algorithm is less than n^t.

Thus for any one particular such proof, we can try to construct a refutation by producing a formula that has a non-empty set of satisfying valuations but for which the proof’s search algorithm cannot produce a polynomial-size intersect set with it in polynomial time.

We can be certain, however, that there will come along yet another purported proof that P=NP which takes all known methods of refuting all known exponential-bustingly clever search algorithms, and evades all those known methods with an incrementally more clever exponential-busting search algorithm.

To prove that any such proof’s enumeration fails, no matter how incrementally more wise to our refutation methods its search algorithm is built, we must show that all such proofs, regardless of how they repartition the 3SAT formulae to arrive at some new maximum exponent t for the maximum size of any of the proof’s intersecting sets of the formula-satisfying valuation sets, can be trumped by some formula which forces the proof’s algorithm to generate more than n^t satisfying valuations in its intersect set.

But alas, to deal such a final blow to the P=NP enterprise, we must recursively enumerate the set of all possible recursive enumerations of valuations that intersect the set of satisfying valuations of CNF formulae.

To give halt to our pretensions in this regard, consider this lemma:

**Lemma 1: There are infinitely many 3SAT CNF formulae for which the set of polynomial-size intersecting sets of their satisfying valuations has exponential size in relation to the number of literals in the CNF formula.**

**Proof of Lemma 1:** It will suffice to show that there are 3SAT CNF formulae with any number of literals that have only one satisfying valuation, since the number of possible valuations of any 3SAT CNF formula is exponential in size in relation to the number of literals in it, and so is that number minus one. Given any n literals x1, x2, … xn, construct the 3SAT formula that includes the following conjuncts for each xi, 1 <= i <= n:

(xi V x[i+1 mod n] V x[i+2 mod n])

(xi V ~x[i+1 mod n] V x[i+2 mod n])

(xi V x[i+1 mod n] V ~x[i+2 mod n])

(xi V ~x[i+1 mod n] V ~x[i+2 mod n])

Since we can construct as many such 3SAT formulae as we please, their number is infinite. **QED.**
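The construction can be checked by brute force for small n. The sketch below uses my own encoding of literals as (index, sign) pairs and confirms that the all-true valuation is the unique satisfying one: for each xi, the four clauses cover every sign pattern on the next two variables (cyclically), which forces xi to be true.

```python
from itertools import product

def unique_solution_formula(n):
    """Build the 3-CNF formula from the Lemma 1 construction: for each
    variable i, four clauses whose second and third literals run through
    all sign combinations of x_{(i+1) mod n} and x_{(i+2) mod n}."""
    clauses = []
    for i in range(n):
        j, k = (i + 1) % n, (i + 2) % n
        for sj, sk in product([True, False], repeat=2):
            clauses.append(((i, True), (j, sj), (k, sk)))
    return clauses

def satisfying_valuations(n, clauses):
    """Brute force: every valuation (tuple of bools) satisfying all clauses."""
    return [val for val in product([False, True], repeat=n)
            if all(any(val[idx] == sign for idx, sign in cl) for cl in clauses)]
```

For n = 5 the formula has 4n = 20 clauses, 2^5 = 32 possible valuations, and exactly one satisfier, the all-true valuation.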

Now suppose we have a proof that P != NP. It must include some recursive enumeration of all possible ways to generate the set of all possible pairings of each CNF formula with some polynomial-size set intersecting the set of its satisfying valuations.

Suppose we have such a recursive enumeration. It recursively enumerates an infinite list of all the algorithms that recursively enumerate pairings from each CNF formula to at least one polynomial-size set intersecting that formula’s set of satisfying valuations, which is to say, all the algorithms that, for some finite positive integer t, recursively enumerate pairings from each 3SAT CNF formula to at least one intersect set containing at least one, and fewer than n^t, of that formula’s set of satisfying valuations, n being the number of literals in the formula. Can this enumeration possibly be complete?

Well, we know that there are infinitely many 3SAT formulae, hence 3SAT formulae containing any number n of literals, that have 2^n - 1 non-satisfying valuations, and each of which thus has on the order of 2^n intersecting sets of its set of satisfying valuations.

This means that there are 3SAT formulae containing any number of literals, thus infinitely many 3SAT formulae, for which any proof of P=NP fails to enumerate all the intersecting sets of its set of satisfying valuations, which means that each of these formulae causes a branching of at least two distinct possible proofs that P=NP (and in fact exponentially many in an infinite number of cases).

Thus any proof that P != NP must, for any n, enumerate at least 2^n possible algorithms that each map each 3SAT formula to a polynomial-size set intersecting its set of satisfying valuations.

Alas, this is impossible. We cannot recursively enumerate 2^n algorithms for all values of n. We will be diagonalized. The P=NP minions will keep trumping us forever, and we will have to keep shoveling their trumps into new classifications and folding them into our enumeration to keep them at bay.

# Constructive Proof of P = NP and its Case-by-Case Debunking constitute an NP-Complete Problem

Given any specific purported proof that P=NP, we know we can easily verify if it is valid or not. Our method of verification, in fact, is simply a method of refutation. We take the upper bound of the purported proof’s satisfier-including polynomial-size candidate subset, and we diagonalize it. We keep replaying the Yannakakis winning tic tac toe move against the daring gambit by Swart. But we cannot seem to show that for every possible initial condition, every possible proof that P=NP and its algorithm that purportedly polynomially assays every formula’s set of satisfying valuations, there exists some refuting diagonal formula that defies the assay.

Does this situation seem eerily familiar to us yet?

Indeed it should. And we can formalize this eerie feeling in a theorem:

**Theorem 2: For any NP-Complete Problem C, the problem C’ of determining whether C is in P, is itself an NP-Complete Problem.**

**Proof:** Consider any NP-Complete problem C, which we define as follows:

**Definitions:** C is an “**NP-Complete Problem**” iff C is the triplet <S, C(S), V> such that:

- S is a finite set, which we call the “**initial condition**” of C.
- C(S) is a non-polynomial-time enumerable subset of the power set of S, which we call the “**candidate set**” of C.
- V is a predicate that is decidable in polynomial time on any member of C(S), and is called the “**filter**” of C.

**Definition:** For any NP-Complete problem C, F is a “**V-negative assay**” of C(S) iff F is the couplet [ R, <c ] such that R is a partition of the candidate set C(S), and <c is an ordering on C(S) such that if V(x) is false for every member of Rmin = {x | (Exists r in R)(x in r & (All q in r)(x <c q))}, meaning V is false of the least element of every element of partition R, then (All c)((c in C(S)) → ~V(c)), meaning V is false of all members of C(S).

**Definition:** For any NP-Complete problem C, any V-negative assay F of C(S) is a “**polynomial-time V-negative assay**“ of C(S) iff the partition of C(S) R in F is polynomial-time enumerable.
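These two definitions can be made concrete on a finite toy, with the ordering <c represented as a sort key and V as an arbitrary predicate (all of this is my illustrative encoding, not part of the definitions themselves):

```python
def r_min(partition, key):
    """Rmin: the least element, under the ordering <c (here a sort key),
    of each block of the partition R."""
    return [min(block, key=key) for block in partition]

def is_v_negative_assay(partition, key, V, candidates):
    """F = [R, <c] is a V-negative assay iff: V false on every member of
    Rmin implies V false on every candidate. The conditional holds
    vacuously when V is true somewhere in Rmin."""
    if any(V(x) for x in r_min(partition, key)):
        return True  # antecedent fails, so the conditional holds vacuously
    return not any(V(c) for c in candidates)

# Toy candidate set {0..7}, partitioned into evens and odds, natural order.
cands = list(range(8))
partition = [[0, 2, 4, 6], [1, 3, 5, 7]]
ident = lambda x: x
```

Under V(x) = “x >= 6”, V is false on both least elements (0 and 1) yet true on 6, so the partition fails as an assay for that V; under V(x) = “x < 0”, V is false everywhere and the assay property holds.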

**Lemma 1: For any NP-Complete Problem C, any method M of finding an algorithm A that proves C is in P (and hence P=NP) must ultimately rely on proving the existence of some polynomial-time V-negative assay of the candidate set C(S) of C for any initial condition S.**

**Proof of Lemma 1:** Suppose the lemma were false. Then some method M would give us some algorithm A that proves P=NP on C despite the fact that for every partition R of the candidate set C(S) and every ordering <c of the candidates in C(S) that give us a polynomial-time enumerable set Rmin, V is false on all elements of Rmin but V is true on some element of C(S). But this is self-contradictory, for it means that every polynomial-time enumerable V-negative sampling of C(S) fails as an assay to show that V is false on all candidates in C(S), and hence only a super-polynomial sampling of C(S) can show that V is false on all candidates in C(S), hence the problem C is not solvable in polynomial-time, and is thus not in P, which means no method M and algorithm A could possibly prove P=NP on C. By contradiction, then, Lemma 1 is true.

From Lemma 1, we know that proving P != NP requires that we prove, for any NP-Complete problem C, that there exists no polynomial-time V-negative assay of C(S).

But any polynomial-time V-negative assay of C(S) is a subset of C(S) and:

**Lemma 2:** **The set of all subsets (the power set) of C(S) is not itself polynomial-time enumerable.**

**Proof of Lemma 2:** Suppose it were. Then we would have an enumeration of the power set of C(S) that is polynomial in size compared to S. But the power set of C(S), by definition of power set, is exponential in size compared to C(S), and by definition C(S) is super-polynomial in size compared to S. This means the power set of C(S) would be both exponential in size compared to something super-polynomial in size compared to S, and yet also be polynomial in size compared to S, which is impossible by definition of super-polynomial and exponential size.

Lemma 2 means that any polynomial-time V-negative assay of C(S) must be fished out from a candidate set that is exponential in size compared to C(S).

**Definition:** For any NP-Complete Problem C, C’ is the “**jump**” of C iff C’ is the triplet <S’, C'(S’), V’> where:

- S’ = C(S).
- C'(S’) = the set of all polynomial-size (compared to S) partitions of C(S).
- V'(c’) (where c’ in C'(S’)) = “c’ is a polynomial-time V-negative assay of C(S).”

**Lemma 3:** **S’ would be the initial condition of C’ if C’ were an NP-Complete Problem.**

**Proof of Lemma 3:** Trivially, S’, being a subset of the power set of the finite set S, is itself a finite set.

**Lemma 4:** **C'(S’) would be the candidate set of C’ if C’ were an NP-Complete Problem**

**Proof of Lemma 4:** Trivially C'(S’), being the power set of C(S), is exponential in size, thus super-polynomial in size, compared to C(S).

**Lemma 5:** **V’ would be the filter of C’ if C’ were an NP-Complete Problem.**

**Proof of Lemma 5:** It suffices to show that for any c’ in C'(S’), we can determine in polynomial time (compared to S’ = C(S)) whether V’ is true of c’, where V'(c’) means that c’ is a polynomial-time V-negative assay of C(S).

Let F = <R, <c> where R is a partition of C(S) and <c is an ordering on C(S). Then the question is whether we can show in polynomial time (compared to C(S)) whether:

- R is polynomial-size (compared to S).
- If R is polynomial-size (compared to S) then R is a V-negative assay on C(S), meaning if V(x) is false for every member of Rmin = {x | (Exists r in R)(x in r & (All q in r)(x <c q))}, meaning V is false of the least element of every element of partition R, then (All c)((c in C(S)) → ~V(c)), meaning V is false of all members of C(S).

R is a subset of the power set of C(S). If R is polynomial-size compared to S, then C(S) is super-polynomial-size compared to R, because it is to S. Thus at least one element of R is super-polynomial size compared to S, for otherwise the union of the elements of R would have too few elements to be C(S), which would mean R is not a partition of C(S), contrary to premise.

Consider the following algorithm that attempts to determine whether there is a superpolynomial size (compared to S) element of R:

1. Let S, an initial condition of C, be any finite set.
2. Order the elements of R in any linear ordering as column headings of a table.
3. Enumerate the elements of C(S).
4. Check each element for membership in each element of R in order, and record it in a new row under the column of the table headed by the element of R to which it belongs.
5. When the table is complete, count the rows in each column and write at the bottom of each column the ratio of its number of rows to the cardinality of S.
6. Let S2 be an initial condition of C with 2|S| elements and repeat steps 1 through 5 with S2, then let S3 be an initial condition of C with 3|S| elements and repeat steps 1 through 5 with S3, and continue for Sn of size n|S| for all n <= |S|.
7. Look at the column with the largest total number of rows in all the tables combined, and plot the ratios to |S| from that column in each table on a graph to see if it looks like an exponential growth curve.

Clearly this algorithm cannot determine whether R contains a set of superpolynomial size compared to S. This is because superpolynomial size is a characteristic of the intension of a set, not of its extension. To determine whether any element of R is superpolynomial in size compared to S, and hence whether R is polynomial in size compared to S, we must find the answer in the intension of R, its definition in terms of S.
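To see why, the tabulation procedure can be mocked up on a toy in which the full power set of S stands in for C(S) and R is a two-block partition by subset parity (my own illustrative choices throughout). The finitely many tabulated ratios can suggest exponential growth, but no finite table of them decides it.

```python
from itertools import chain, combinations

def ratio_table(n, blocks):
    """One round of the tallying procedure: enumerate all subsets of an
    n-element S (standing in for C(S)), tally each subset into its
    partition block, and return each block's row count divided by |S|."""
    counts = [0] * len(blocks)
    elements = range(n)
    for subset in chain.from_iterable(combinations(elements, r)
                                      for r in range(n + 1)):
        for col, member in enumerate(blocks):
            if member(subset):        # record under the first matching column
                counts[col] += 1
                break
    return [c / n for c in counts]

# A toy partition of the subsets: even-sized vs odd-sized.
blocks = [lambda s: len(s) % 2 == 0, lambda s: len(s) % 2 == 1]
rows = [ratio_table(n, blocks) for n in (4, 8, 12)]  # ratios grow like 2^(n-1)/n
```

The plotted ratios look exponential, but the extension of the table can never certify superpolynomial growth; only the intension of R, its definition in terms of S, could.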

This may present a problem, for what if R has no definable relation with S other than that |R(S)| is less than p(|S|) for some polynomial p and all finite sets S, or some other suitable definition of polynomial-size in relation to S?

To dispense with this problem we need:

**Lemma 5a:** **For any NP-Complete Problem C, any proof that C is in P must contain more information about R(S) than that it is polynomial-size in comparison to S.**

**Proof of Lemma 5a:** Any proof that C is in P that includes and relies upon a lemma that there exists an R(S) that is polynomial-size in relation to S for some NP-Complete Problem C must contain in its proof of that lemma some additional information about R(S) beyond what the lemma itself asserts. Otherwise the proof simply restates, and thus does not prove, the lemma. Since by our premise the proof as a whole relies on the lemma, the proof itself thus fails.

By Lemma 5a, any proof that C is in P includes some premise that Q(R(S)) where Q is some predicate that implies something more about the relationship between |R| and |S| than that R(S) is polynomial-size in relation to S.

Since Q(R(S)) is the same regardless of the size of S, there exists some S such that |S| is greater than the cardinality of the finite set of all valid logical inferences from the theory that includes all the statements that define C, plus Q(R(S)).

Thus to determine in polynomial time in relation to C(S) whether Q(R(S)) implies that R(S) is polynomial-size in relation to S, we simply run through the list of all those valid logical inferences to determine whether any of them state that R(S) is polynomial in size in relation to S.

Alternatively, we could suppose R(S) is not polynomial in size in relation to S, assert that supposition in statement H, and determine whether H conjoined with any subset of the set of statements that together define C imply contradiction.

Either way, we have a method of determining whether R(S) is polynomial in size in relation to S, which takes only polynomial time in relation to C(S).

If R is not polynomial size in relation to S, then by definition we know R is not a polynomial-time V-negative assay of C(S).

So assuming R(S) is polynomial size in relation to S, we need to determine, in polynomial-time in relation to C(S), whether <R(S), <c> is a V-negative assay of C(S), which is to say, whether V being false of the least element of every element of R implies V is false of all members of C(S).

Well, this is trivial, since we only have to determine this in polynomial time in relation to C(S), not in relation to S. We simply determine one by one whether V is false of every least element of the elements of R, and if so, then we determine one by one whether V is false of all the remaining elements of C(S). Since, by definition of C, V is decidable on any element of C(S) in polynomial time, then our brute force method of determining whether <R(S), <c> is a V-negative assay of C(S) takes less than p(|C(S)|) times |C(S)| steps for some polynomial p, hence still in polynomial time in relation to C(S).

Thus we have proven Lemma 5, that we can determine in polynomial-time in relation to C(S), which is also S’, whether R(S) is a polynomial-time V-negative assay of C(S).

**QED to Lemma 5.**

Now we can easily prove our theorem.

**Proof of Theorem 2:** By Lemmas 3, 4 and 5, C’ satisfies the definition of an NP-Complete Problem. **QED.**

# Why Partitioning Doesn’t Help

It is easy to verify the failure of any specific purported proof that P = NP, but it is hard to prove that all such purported proofs will fail.

Now, what if we enumerate not the set of algorithms behind each and every possible proof that P=NP, but the set of algorithm classes behind some set of classes of proofs that P=NP? Could we not perhaps then find some way of classifying these proofs and their algorithms so that the set of these classes is recursively enumerable?

Suppose we did. Then for all but finitely many of the formulae with an exponential number of intersecting sets of their sets of satisfying valuations, we would have to show that the choice of which poly-size intersecting set or sets the algorithm chooses for the formula makes no difference to the diagonal argument refuting it.

But we then run into the problem that, for infinitely many formulae, this choice, which we must prove irrelevant, is from among an exponential number of possibilities. Thus any attribute we assert about all of these choices cannot be constructed as a recursively enumerable set. We cannot even assert the irrelevance of all these choices. Our assertion will be diagonalized as being an assertion about only a proper subset of these choices. In the end, then, there is simply no way we can avoid having our purported proof of P != NP getting diagonalized.

## Cantor-Dependent Unsolvability Results

From Theorem 2 we can now show

**Theorem 3: The problem of proving P != NP is Cantor-Unsolvable.**

Solving C’ for any NP-Complete problem C consists of either:

a) proving that C is not just in NP but also in P, which proves P=NP, or

b) proving that C is not in P, which refutes any attempted proof of P=NP that relies on the definition of C as being in NP.

Now it would be natural to suppose that if we accomplish b), we could simply go on to show that our proof is independent of the definition of this particular C as being in NP. Then we could generalize our proof to mean that no C is in P, which is to say, that P != NP.

But ironically enough, if we do manage to generalize our proof in this way, we would be solving C’ in such a way that our method of proof would constitute a proof procedure, for all NP-Complete problems C, that none of them are in P. If only the set of all NP-Complete problems were recursively enumerable, then we really would have proven that P != NP.

We have already shown, however, that the set of all NP-Complete problems is Cantor-uncountable, by which I mean it can be diagonalized according to the method used in Cantor’s uncountability theorems.

Thus, as long as we accept the Cantorian proof that there exist uncountable sets, we have to accept that the set of all NP-Complete problems is among them, and that it is thus not recursively enumerable. Since it is not r.e., there is no method of proving for all NP-Complete problems C that C is not in P. The problem of identifying a method of proving that P != NP is in this sense what I call a “Cantor-Unsolvable” problem.

Now, it may seem that if P != NP is unsolvable, then so must P = NP be. After all, proving P = NP by proving for some C that it is in both NP and P would prove P != NP false, right? That’s just the law of excluded middle. But no, my claim is not that P != NP is undecidable. My claim is that any attempt to decide that P != NP by taking a proof that a particular C in NP is not in P, and generalizing that proof to apply it to all C in NP, must fail in a mathematics founded upon Cantor’s uncountability theorems. There will always be a diagonal C not covered by the proof.

If we prove that P=NP by constructing some C that is in both NP and P, then yes, we will have rendered P != NP decidable, for we will have solved the problem of generalizing a proof that some C1 is in NP but not in P to show that all problems in NP are not in P, by showing that no such method can be generalized to apply to C. We would be able to show constructively how, given any method of proving that any other problem in NP is not in P, that method specifically fails to prove that C is not in P.

But if we prove non-constructively that P=NP, by proving the assertion that “for some C in NP, C is also in P,” then excluded middle would give us only a non-constructive proof that any method of proving some C1 in NP is not in P cannot be generalized to apply to all problems in NP. The question would remain whether such a method could be generalized to apply to all problems in NP except some C or species of C-like problems, none of which can be constructed. Such a generalized method would constitute a constructive proof that P != NP, and the uneasy truce between constructivists and non-constructivists would become a war, with the paraconsistent logicians tut-tutting both sides as loud as crows flocking around two wounded bulls in battle.

So it is true that since P != NP is Cantor-Unsolvable, P = NP is not constructively provable. That suggests that perhaps Cantor-Unsolvability and constructive provability are in fact one and the same attribute of problems in general. In fact we can now actually prove this is true:

**Theorem 4: Every Cantor-Unsolvable problem is constructively unprovable, and vice versa.**

**Proof of Theorem 4:** A Cantor-Unsolvable problem is one whose affirmative solution can only be achieved by generalizing a formula proven over a finite range to extend it over a non-r.e. range. Suppose a constructive proof of such a problem exists. Then we would have recursively enumerated a non-r.e. set. By contradiction, no such constructive proof exists.

Suppose now we have a constructively unprovable problem. If it were Cantor-Solvable, it could be proven by generalizing a formula over an r.e. set. Such a generalization of such a formula would constitute an effective construction, hence render the proof constructive, contradicting the premise that it is constructively unprovable. By contradiction, if a problem is constructively unprovable, it must also be Cantor-Unsolvable.

We have proven both directions of the equivalence. **QED.**

So the situation we are facing thus far appears to be that P != NP, and hence also P = NP, are both constructively unprovable, so P vs NP is constructively unsolvable. And P != NP is also non-constructively unsolvable, so we know P vs NP cannot be resolved in favor of P != NP either constructively or non-constructively.

But P = NP may yet be non-constructively provable, so the jury is still out on whether P vs NP is non-constructively solvable in favor of P=NP.

On Cantorian foundations, the P != NP camp are thwarted and on the defensive. They cannot prove P != NP constructively or non-constructively. The P = NP camp are half-thwarted but they still have the upper hand. They cannot prove P=NP constructively, but they can still hope to prove P=NP non-constructively. P != NP believers, and constructivists on both sides of the P vs NP debate, are out of luck in Cantorland. Only the non-constructivist P=NP believers are still in the game in Cantorland, and how many of those are there? Has anyone ever met one?

# Why we cannot even effectively assert P != NP

To prove that every possible proof of P = NP, speaking in the generalizable terms of the 3SAT problem, fails to account for all 3-disjunct CNF formulae, we must assert something provable about all possible proofs of P=NP.

To do that, we must recursively enumerate the special exponentiation-busting search algorithms that distinguish the proofs of P=NP from one another.

To do that, we must show that the infinitely many formulae which have an exponentially large set of possible polynomial-size sets of valuations containing some satisfying valuation all conform uniformly to some polynomial-size partitioning of that set in each of them. That is, for all i and j (i the index of the formulae, j the index of the partition sets), the exponential number of valuations in the <i,j>-th partition must all share some characteristic that renders the choice among them irrelevant to any distinction among the search algorithms of purported P=NP proofs that matters to their diagonalizability.

But any assertion that an infinite sequence of exponentially growing sets shares some characteristic must rely on a non-recursively-enumerable set to define that characteristic. Thus no proof of P != NP can even be computably asserted, never mind proven.

What’s more, we have just proven that every possible proof of P != NP is diagonalizable. And that proves that we cannot prove that every possible proof of P = NP is diagonalizable.

# Beat the diagonalization of P != NP by rejecting Cantor’s Fallacy

So is that the end of the story? P != NP is not provable, but P = NP is, if only non-constructively?

We have to accept this situation only if we stubbornly insist upon maintaining the validity of Cantor’s uncountability theorems.

## Cantor’s Fallacy and Bijection between Countable and Uncountable Sets

In my article on Cantor’s Fallacy, I show that Cantor’s proof that no one-to-one mapping of the positive integers with the real numbers exists, and his later proof that there is no one-to-one mapping between any infinite set and its power set, both rely on a category mistake: the confounding of intensional size with extensional size.

I showed that the true distinction between an infinite set and its power set is that the power set’s definition implicitly incorporates a finite reference to its infinite base set an infinite number of times, and is hence an infinitely long definition in relation to the finite size of the expression in it used to refer to its base set.

The base set itself may also turn out to have an infinitely long definition if it is itself the power set of its own base set. That set, in turn, could be a power set too. Ultimately, however, only a finite number of such iterations is possible, and there must be some base set at the core of it that is not itself a power set, and which has a finite definition with no expressions denoting any infinite sets. That base set is recursively enumerable.

I also showed that diagonalization arguments presume the recursive enumerability of the set being diagonalized, which is to say the finite definability of the set, and hence no diagonal argument applies to any set with an infinitely long minimal definition.
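The dependence of diagonalization on an explicit enumeration can be seen in miniature: the diagonal construction only gets off the ground once the family of sequences is handed to it as an effective enumeration. A minimal Python sketch (the particular enumeration `enum` below is an arbitrary illustrative choice, not anything from the argument above):

```python
def diagonal(enumeration, n):
    """Given an effective enumeration of binary sequences, where
    enumeration(i, j) is the j-th bit of the i-th sequence, return
    the first n bits of a sequence that differs from each of the
    first n enumerated sequences at the diagonal position."""
    return [1 - enumeration(i, i) for i in range(n)]

# An illustrative effective enumeration of binary sequences.
def enum(i, j):
    return (i * j + i + j) % 2

d = diagonal(enum, 8)
# d differs from the i-th enumerated sequence at position i.
assert all(d[i] != enum(i, i) for i in range(8))
```

The point of the sketch is only that `diagonal` consumes an enumeration as its input; with no effective enumeration to hand it, the diagonal bits are simply not defined, which is the situation the paragraph above describes for sets with infinitely long minimal definitions.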

And I showed that this does not bar us from constructing meaningful subsets of finitely undefinable sets, because we can construct them by piggy-backing on the generating algorithm of the infinitely long definition itself, which consists of an infinite sequence of increasingly long finite meta-definitions, each incorporating fully the previous finite meta-definition. We call each successive meta-definition a “revision” of the definition and we take the infinitely long limit of that infinite sequence of revisions to be the definition of the non-recursively-enumerable set we are constructing.

Thus are we able to construct a one-to-one relation between the positive integers and the real numbers, but with the caveat that our method of construction prohibits us from referring to any pairing of a specific integer with a specific real number. This is because an infinitely long definition is, to us mere mortals, an ever-changeable creature.

## On the Meaning of Constructivity

Thus our construction is, in a very meaningful sense, “non-constructive.” It is non-constructive in the sense in which that term is commonly understood, because “constructive” is commonly understood to mean that a proof lends itself to instantiation with constants and without variables, and ours does not. But that is not the term’s logical definition.

Constructive proof, technically speaking, simply means proof susceptible to the inference rule of existential instantiation in classical first-order logic. That kind of instantiation still uses variables, but it treats them as having the characteristics of a typical instance, and hence facilitates proofs that rely on operations upon specific instances. Our construction of the one-to-one mapping of the positive integers with the real numbers, and of any infinite set with its power set, is certainly constructive in that sense, because it allows us to reason consistently and productively on the basis of the general behavior of specific instances.

## The Intuitiveness of Intuitionism and All Logical Paradoxes Resolved

Moreover, there is nothing more non-intuitive about such a one-to-one mapping than there is about the well-ordering of the real numbers. In fact, our construction relieves us of the need for the Axiom of Choice, and for the Well-Ordering Theorem that is its logical equivalent and its sole purpose in life.

And once one gets accustomed to our new way of incorporating Brouwer’s notion of choice sequences into set theory, one realizes that our construction of the one-to-one mapping of finitely definable infinity with finitely undefinable infinity is very intuitive indeed. For it clearly accounts for our intuition that there is a quantitative difference between the integers and the reals, and between any infinite set and its power set: we make it clear that this difference is the difference between the finite and the infinite, but in the intensions, not the extensions, of the sets thus distinguished.

And by relieving infinite extensions of the false burden of carrying our true intuitions about these truly intensional quantitative distinctions, we greatly simplify and clarify the mathematics of infinite extension, so that it too matches our intuitions perfectly.

For how, after all, can there be a number greater than any number which is not as great as another number greater than any number? There cannot be, and we can finally dispense with the conundra that result from assuming that there can.

We have resolved Russell’s Paradox. We have resolved Richard’s Paradox. We have resolved, in fact, every single logical paradox known. How much better can we do by our intuitions than to resolve all known inconsistencies in the foundations of logic and mathematics?

## A New Logical Positivism That Embraces Intuitive Reason

In doing so, have we reopened the insanity of logical positivist reductionism? Certainly not. Logical positivism thrived on its project of disciplining reason against the errors that lead it into paradox. Godel’s theorem finally convinced Russell that reason simply could not be saved from paradox. Logical positivism then splintered into various more humble projects, but none of them gave up its fundamental premise that reasoning as an intuitive act was not to be trusted.

I reject that premise. Reasoning as an intuitive act must be trusted, but not because it is infallible. It is to be trusted because trusting it is simply how it works.

So the art of reasoning is not in whether we trust our inferential intuitions, fallacies and all, but in how.

Or to put it another way, I have proven here that First-Order Logic is consistent, but Godel’s result still stands showing that First-Order Logic is incomplete. First-Order Classical Logic is sound and it does lead reliably from certainty to certainty, but it does not supplant intuition in the construction of knowledge.

## The Ross-Littlewood Paradox Resolved By Rejecting Cantor’s Fallacy

### Priority Removal Scheduling

Consider, for example, the Ross-Littlewood Paradox. I will state it in terms of set construction. Suppose we construct a set of non-negative integers by inserting 0, then removing 0 and inserting 1 through 10, then removing 1 and inserting 11 through 20, and in general, for every n >= 0, removing n and inserting 10n+1 through 10(n+1). How many integers are in the resulting set? Infinitely many or zero?

Clearly the answer depends on the order in which the additions and removals are performed. The construction calls for adding every non-negative integer and removing every non-negative integer at some point or another. Any integer we remove first then later add, will end up in the set. Any integer we add first, then later remove, will end up not in the set.

The way the construction is defined, it is clear that at any given step n, n has already been added at some previous step (or in the case of the first step, the same step), so the answer to the question is clearly that n is added first, then removed. Thus we can answer without reservation that the resulting set is empty.
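Since the construction is effective at every finite stage, we can watch it run. A small Python simulation (a finite-stage sketch only; what happens in the limit is, of course, the very point at issue):

```python
def ross_littlewood(steps):
    """Run finitely many steps of the construction: start with {0};
    step n removes n and inserts 10n+1 through 10(n+1)."""
    s = {0}
    for n in range(steps):
        s.discard(n)
        s.update(range(10 * n + 1, 10 * (n + 1) + 1))
    return s

s = ross_littlewood(1000)
print(len(s))                                # 9001: the set keeps growing
assert all(n not in s for n in range(1000))  # yet n is gone by step n+1
```

At every finite stage the set grows by nine elements, yet any specific integer you name has been removed by the stage bearing its number; the discussion above is about what to make of that tension at the limit.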

But the paradox is not thereby resolved. The paradox lies precisely in the fact that the contents of the set depend upon the way the constitutive acts of construction, of adding and removing members of the set, are bundled together within the ordered infinite sequence of revisions of the set’s membership. The strict linear ordering of revision allows later revisions to trump earlier revisions.

The contradiction between the scheduled removal of all the non-negative integers and their scheduled insertion is resolved by a temporal metaphor, which is really not about time but boils down instead to logical priority. Later revisions get logical priority over earlier revisions as to the final membership of the set being constructed. Because of this infinite deferral of the final say, the membership of the set is never settled at any finite step of the construction. But since an ever-longer finite segment of the set’s characteristic function is settled at each higher step in the construction, it is clear that the set is scheduled to be emptied out completely.

### Emptying Out a Cantorian Ross-Littlewood Set

If we revise the construction to insert not just 10 times the number of integers we remove, but insert 2^n higher integers for each integer n, then we make this paradox relevant to the question of Cantor’s fallacy.

Even though at each step we are adding an exponentially larger number of integers than the size of the integer we are inserting, the priority order in which the insertion and removal are performed compels us to conclude that the set must still ultimately be empty. Even infinitely accelerated accumulation cannot cause one infinite pile to be larger than another infinite pile.

If the defining construction calls for the removal of every element to take logical priority over its insertion, then the set is empty, regardless of how many other elements are also inserted prior to any given element’s removal.

### The Crux of the Ross-Littlewood Paradox

The truly paradoxical core of this paradox is the fact that a strictly ascending infinite sequence of non-negative integers can approach infinity but ultimately converge to zero in the limit, for that is precisely what happens to the cardinality of the nth revision of this set in its construction as n diverges to infinity.

And the only way to resolve this paradox is to embrace it, by identifying the mistaken assumption it throws into question, and proving that assumption to be unfounded. The unfounded assumption here is that all strictly increasing infinite sequences of non-negative integers diverge to infinity. It may seem non-intuitive to reject this assumption, but only because we too easily fall back on fallacious finitary assumptions about infinity. Cantor’s fallacy contributes to this error by confabulating infinities that are greater than other infinities, when cardinality is clearly a finitary concept that has no applicability to infinite quantities in relation to each other.

In fact Cantor’s uncountability theorem can be used to prove the absurd result that a certain variation of the Ross-Littlewood set is uncountable.

The proof is simple. Construct a set as follows. Insert 0 into it. Remove 0 and insert every integer from 1 to 2^1, inclusive. Remove 1 and insert every integer from (2^1 + 1) to 2^2, inclusive. In general, for every n >= 1, remove n and insert every integer from (2^n + 1) to 2^(n+1), inclusive. The binary representations of these integers define all the non-singleton subsets of the set containing the first n non-negative integers, if they are read as the characteristic functions of them.
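Here too the finite stages are effective and can be simulated. A Python sketch, reading the general step as: remove n and insert every integer from 2^n + 1 through 2^(n+1), with a seed step that removes 0 and inserts 1 and 2 (this reading is an interpretive choice, made to match the removal-trumps-insertion schedule the argument relies on):

```python
def cantorian_rl(steps):
    """Finite stages of the exponential variant: start with {0};
    remove 0 and insert {1, 2}; then step n removes n and inserts
    every integer from 2**n + 1 through 2**(n + 1)."""
    s = {0}
    s.discard(0)
    s.update({1, 2})
    for n in range(1, steps + 1):
        s.discard(n)
        s.update(range(2**n + 1, 2**(n + 1) + 1))
    return s

s = cantorian_rl(10)
print(len(s))                              # 2038 = 2**11 - 10
assert all(n not in s for n in range(11))  # every n <= 10 already removed
```

The cardinality explodes exponentially with the step count, while, exactly as in the base version, element n is gone by step n.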

As n approaches infinity, the set we are constructing contains a larger and larger finite segment of every possible non-singleton subset of the non-negative integers. What will be the ultimate result of this construction?

It is certainly wrong to say it will be the union of all the partially constructed versions of the set at each step in its construction, for every step removes as well as inserts something in the set.

One could still try to argue that the set contains only integers with finitely long binary representations, hence that it contains the characteristic functions of only finite sets. This would certainly be the natural conclusion if each step were merely inserting more elements in the set. But since each step both removes and inserts, we know that the set does not necessarily ultimately contain anything that it contains at any finite step in its construction.

If we believe in an infinite ordinal, then the final, w-th step (I use w instead of omega for typographical convenience) in the construction of this set could plausibly be construed as inserting all the completed infinite characteristic functions whose increasingly large finite segments are inserted at each successive step in its construction.

One could argue, however, that it could also plausibly be construed as inserting merely the finite initial segments of all the infinite characteristic functions of the infinite subsets.

Conventional wisdom in a Cantor-believing worldview would hold the former construal unreasonable and the latter obvious. But I think the reverse is true, and I can explain why in a way that is compelling to Cantorians and non-Cantorians alike.

### Infinitizing the ever-increasing in the leap to infinity

I believe in an infinite ordinal, but only one, w. And I believe that, at the w-th step in our construction, there is a leap to infinity, not merely a hand-waving generalization over all the finite steps that precede it. Thus the w-th construction step, it makes sense to me to presume, takes each and every strictly increasing finite quantity in the finite-ordinal steps and infinitizes it as it culminates in the limit at infinity.

Thus if we insert all the characteristic functions of length n at every step n, then at step w, we do not merely acknowledge the insertion of all characteristic functions of any finite length n at prior steps; we insert all characteristic functions of infinite length w. Thus we insert at step w enough elements to map one-to-one with the power set of the infinite set of non-negative integers.

Does this mean that at step w, the insertions finally get to trump the removals? If we believe there are more subsets of the non-negative integers than there are non-negative integers, then it is hard to escape this conclusion. But if we acknowledge the simple fact that all infinite quantities have the same cardinality, then the insertion of enough integers to map one-to-one with the entire power set of the non-negative integers does no harm to our ultimate conclusion: according to the clear definition of the construction, each number we insert in the set gets removed at a later, higher-priority step, and hence the ultimate content of the set is nothing.

If we believe there are more subsets of the integers than integers, however, we can easily convince ourselves that the w-th step in our construction inserts so many elements in our set that the removal it calls for cannot possibly remove them all.

## Cantorian objections laid to rest

Of course, the typical Cantorian will simply reject the notion that the w-th step requires insertion of any infinite numbers in the set. For the infinite characteristic functions I speak of in my argument are actually 2-adic binary numbers with infinitely many digits of increasing place-value, so the Cantorian can argue that by adding them in at infinity, I am adding numbers that simply do not meet the membership definition of the set.

My answer to this, if we must keep doing tit for tat, is simply to say that the recursion defines the set, and thus for Cantorians to insist that we define the limit behavior of the recursion one plausible way rather than another is simply to choose one of two possible definitions of the set, not to prove that one definition is correct or consistent and the other not.

In any case, it doesn’t matter whether we interpret the construction rule to mean that at the nth step, 2^n – n finite numbers will be inserted, or if we interpret it to mean that all characteristic functions of non-singleton sets of length n will be inserted. Either way, the w-th step inserts what Cantorians would consider an uncountable number of numbers, most of them 2-adic numbers.

In this way, then, Cantorians believe that this particular kind of Ross-Littlewood set is uncountable. But I say, all Ross-Littlewood sets, including this one, are empty, because they all are constructed by removing each and every element added to the set, with the removal taking priority over, trumping, the insertion. This straightforward resolution of the Ross-Littlewood paradox becomes problematic only if we insist on taking the Cantorian view that some infinities are larger than others, and that hence some Ross-Littlewood sets are constructed by inserting more into them than we remove, even though by the set’s definition we clearly remove each tentative member we insert.

## Cantorians Are Finitarians

Cantorians, when it comes down to it, are actually finitarians. They do not believe in a true infinite ordinal. The behavior of the Cantorian w ordinal, in fact, is not infinitary at all. It is the behavior of a **terminal finite ordinal**.

Suppose for some arbitrarily large positive integer w, we define the sub-transfinite numbers as those less than w, and the transfinite numbers as those equal to or greater than w. The result of this will be all of the mathematics of transfinite sets, without the infinite sets.

How large is w? Let it be greater than the absolute value of any integer ever constructively specified, and greater than the round-up of the absolute value of any real number ever referred to. Thus whenever any integer or real number is mentioned, w is greater in absolute value.

I’m not saying there can’t be a set of infinitely many finite sets. I’m saying if we construct an infinite set in an infinite sequence of stages, and at each stage we increment the size of its members, then that set will have infinite-sized members. To claim that such a construction results in only an infinite number of finite sets is to ignore the fact that the size of its successive members gets incremented infinitely many times.

In an infinite-step construction whose steps map one-to-one with the positive integers, we can certainly construct a set that consists only of elements of every finite size n, and no sets of infinite size, but we can also construct a set that has 2^w sets of size w.

Cantorians treat the latter kind of construction as if it were the construction of a finite set with 2^w elements, where w is defined as above, as the number that is greater in absolute value than any number ever mentioned. I treat it as what it is: a construction of an infinite set of infinite sets.

But I have nothing against the Cantorian construction of w. It is very interesting, even as a foundation for mathematics, though I think I can show that, when pressed into that service, it runs into difficulties that are seriously less than ideal.

Cantorians believe such a set is “uncountable,” incapable of one-to-one mapping with the positive integers, because they define “finite” as being less than w, and “countable” as mapping one-to-one with w. But 2^w is obviously greater than w, so it must be some transfinite number greater than the first transfinite number, w.

Yet as we have seen, w is just an arbitrarily large finite number deemed to be greater than all finite numbers, and defined as the set of all “finite” numbers.

The fact is, 2^w = w, if we are talking about the real w, the truly infinite set of all finite sets.

## Lowenheim-Skolem Paradox Resolved

This sheds new light on the meaning of the Lowenheim-Skolem Paradox, which is the existence of countable models of first-order logic despite the existence of uncountable ranges of first-order predicates definable within first-order logic.

The conventional wisdom is that the Lowenheim-Skolem Paradox is not a true paradox, because it does not break the consistency or completeness of first-order logic with any formal antinomy the way Russell’s Paradox did to Frege’s naïve set theory. But that response to the paradox is mere indirection. In fact, it is the very fact that it does not break consistency or completeness that makes it a paradox. For how can we consistently or completely model uncountable-set-range predicates using only countable sets, if countable sets cannot map one-to-one with uncountable sets? This is a true logical paradox, not just a nothing-to-see-here non-intuitive aspect of the theory that only appears paradoxical to the untutored naif.

The reason first-order logic has a countable model, despite being capable of proving the existence of an uncountable set, is because all uncountable sets are actually finite. Thus the paradox disappears. The meaning of “uncountable” in the statements proven in the theory is modeled accurately in the countable model for what it truly is, the attribute of being exponentially greater than the finite number w, which is defined in the model as the finite set of all “finite” sets.

I am not arguing that there actually are only a finite number of finite-sized objects. I am merely showing that a model of the counting numbers that corresponds in the realm of numbers with that physical notion makes perfect sense out of first-order logic and every relatively well-studied axiomatization of set theory.

The finite reality underlying Cantor’s transfinite ordinals and cardinals also sheds light on the Continuum Hypothesis (CH). The reason CH is independent of ZFC is that it is merely a choice between counting transfinite ordinals as transfinite cardinals and treating the limit ordinals as the cardinality of all the ordinals obtainable from them by finite recursion of the successor operation. CH is clearly definitional in nature when understood in that light.

If it seems nonsensical to model Cantorian talk of infinite ordinals and cardinals using finite numbers that just happen to be greater than the finite number w, which in our model is the finite number larger than all finite numbers ever constructively specified, consider that this is no more absurd than to model Cantorian talk of uncountable sets using countable ones, and that is something Cantorians are comfortable doing day in and day out.

And so am I. And for the same reasons – because doing so is useful and interesting. I am not willing to give up Cantorian thinking, but I am unwilling to remain limited to it.

Pick your poison. My view is not more pristine than the Cantorian view. It’s just at least as pragmatically good in most ways, and pragmatically far better in a few other ways.

## Making Room for True Infinity

But let’s go back to the Ross-Littlewood paradox, particularly my version with the exponentiating insertions.

I have claimed it is obvious that the set is empty, because every number inserted is eventually removed. But how does that square with the intuition that the ratio of quantity of numbers removed to the quantity of numbers inserted approaches zero, and fast, as n goes to infinity?

It squares with it because that vanishing ratio is the ratio between two finite variables as their assigned values increase with each revision of the set. But the nature of infinity is that it collapses measure.

Compared to infinity, every finite quantity is the same, infinitesimal. It truly does not matter how large the finite increments are that a variable receives as compared to another, because as long as the difference between the sets remains finite, that difference will never get large enough to be more than infinitesimal to the ultimate value of both variables, infinity.

The counterargument would be that another variable, set equal to the increasing difference between the two variables, would be strictly increasing, and hence would also have to leap to infinity as we leap to the infinite step. But this does not mean the two variables that increasingly differ end up differing at infinity. Sure, their difference at infinity is infinity, but infinity plus infinity is just infinity, so an infinite difference between two infinite values is no difference at all. It does not cause there to be a greater and a lesser infinity.

It is only the Cantorian’s refusal to accept the inherent nature of infinity that compels the Cantorian to suppose there is something incorrect in this result.

Thus it becomes even more clear that when Cantorians insist that vanishing ratios and divergent differences matter at infinity, they are ignoring the fact that finite quantities, no matter how large, do not matter at infinity. Consequently, they are treating infinity as if it were really just a very, very large finite number, “as large as you please.”

Now, it turns out this Cantorian finitism, most commonly pursued today in the form of ZFC, is capable of modeling quite a lot of mathematics. But believing that ZFC is truly interpreted by its infinite model, believing that it actually is a mathematics of infinity, rather than what it really is, which is a mathematics of arbitrarily large finite quantities, prevents its practitioners from recognizing their enterprise for what it is best suited for, and from realizing that beyond it lies a neglected true mathematics of infinity that also has its own place in mathematics, science and philosophy.

# A General Method For Refuting Attempted Proofs of P=NP

I have shown how Cantor’s diagonalization method of producing counterexamples to one-to-one mappings actually presumes the crypto-finiteness of the sets being compared. I have also shown how this flaw in Cantor’s proof of the uncountability of the real numbers and of the power set of the integers actually saves our attempts to prove P != NP from diagonal doom.

I am now in a position to explain exactly how one goes about refuting a specific proof that P = NP, the specifics of which I deliberately glossed over earlier in this exegesis.

Above I wrote this:

“All NP-Complete problems concern finite initial conditions and finite candidate sets, but proving P != NP or P = NP requires proving something about all possible finite initial conditions, thus about arbitrarily large finite initial conditions.”

Now that I have also explained that the Cantorian least transfinite ordinal w is actually an arbitrarily large finite number, the above statement takes on new meaning. It can be restated, with substitution:

“All NP-Complete problems concern finite initial conditions and finite candidate sets, but proving P != NP or P = NP requires proving something about all possible finite initial conditions, thus about Cantorian transfinite initial conditions.”

So a proof that P = NP is hard because it requires proving that for every actually finite initial condition, including the elusively defined arbitrarily large Cantorian transfinite w-th finite initial condition, there exists a recursively enumerable sequence of sets Q such that, for some finite integer t, for all initial condition sizes n, Q(n) intersects the satisfier set (if non-empty) for the n-sized initial condition, and |Q(n)| < n^t.

The reason this is hard to prove is that it is unclear what “finite” means.

If by every instance of the term “finite” we mean Cantorian sub-transfinite, then we can neither prove nor disprove P=NP in the case of any specific polynomial time algorithm that generates the sequence Q(n), without essentially assuming what we are trying to prove.

We can prove this. Suppose we have such an algorithm. Then for any Q(n) we can construct, in a number of steps polynomial in n, a minimal subset Q'(n) that contains only one element, which of course is also in S(n). Thus we have a polynomial time algorithm for generating the sequence Q'(n) of singleton sets of a member of S(n), from which in polynomial time we can also construct a sequence Q''(n) that maps each n to a singleton containing a member of S(n) if the latter is non-empty, and to the empty set if it is empty.
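The pruning step just described can be sketched concretely for 3SAT. The helper names below are illustrative, not notation from the argument: given a candidate set playing the role of Q(n), a polynomial scan extracts the singleton playing the role of Q''(n):

```python
def satisfies(cnf, valuation):
    """True iff the 0/1 valuation satisfies the CNF formula, given
    as a list of clauses of signed, 1-based literals."""
    return all(
        any(valuation[abs(lit) - 1] == (1 if lit > 0 else 0) for lit in clause)
        for clause in cnf
    )

def q_double_prime(cnf, candidates):
    """Scan the candidate set (the role of Q(n)) and return a
    singleton holding one satisfying valuation, or the empty set
    if no candidate satisfies the formula."""
    for v in candidates:
        if satisfies(cnf, v):
            return {tuple(v)}
    return set()

# (x1 or x2 or x3) and (not x1 or not x2 or x3)
cnf = [[1, 2, 3], [-1, -2, 3]]
candidates = [(0, 0, 0), (1, 1, 0), (1, 0, 0)]
assert q_double_prime(cnf, candidates) == {(1, 0, 0)}
```

The scan is polynomial in the size of the candidate set, which is the whole point: the hard part, on any reading, is guaranteeing that a polynomial-size candidate set intersecting the satisfier set exists for every n.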

It is not hard to see that Q''(n) is a choice function, and that any assertion of its existence for all transfinite n implies the Axiom of Choice. It is also straightforward to see that the Axiom of Choice implies it as well. Now, of course, in ZF the Axiom of Choice is not needed for countable sets, never mind finite sets, but in our model we are representing the entire arithmetic hierarchy with finite integers. So for the integers greater than or equal to w, whether they be the transfinite ordinals (ZFC + ~CH) or the transfinite cardinals (ZFC + CH), we need the Axiom of Choice to prove P = NP, and its negation to prove P != NP.

Thus ZFC |= P=NP and ZF+~AC |= P!=NP.

To see this, however, it is necessary to model ZF in such a way that proves it is consistent, and the only way to do that is to represent w as “the least positive integer greater than any positive integer ever specified.”

The Axiom of Choice and its negation are each sufficient and necessary for deciding P vs NP because they each supply the “something about all possible finite initial conditions, thus about Cantorian transfinite initial conditions” required to do it.

So the mystery of P vs NP is resolved to the extent it has been puzzled over up to now.

However, like all great mysteries, its resolution opens a new vista upon an even greater mystery.

# The Almost-Trivial Proof that P=NP

Suppose now we define “finite” as the opposite of true infinity, not of Cantorian pseudo-infinity. This is the Brouwer infinity of infinite choice, of patternless randomness. It is the infinity that is the answer to the paradox that random sequences, in order to avoid conforming to any pattern, must have no definable property and must hence be so bland as to seem perfectly homogeneous, but how can something perfectly homogeneous be random?

The answer we can now give to this paradox is that truly random sequences have finitely undefinable properties and hence, far from being homogeneous, are infinitely heterogeneous.

So here is the new mystery. Suppose we mean the opposite of true infinity, not the opposite of Cantorian pseudo-infinity, when we say:

**Theorem 3:** P=NP.

**Definition:** “**poly-n**” is any polynomial expression in the variable n.

**Lemma 7:** P=NP iff there exists a recursively enumerable sequence of sets Q such that, for some finite integer t, for all initial conditions n of size poly-n, Q(n) intersects the satisfier set (if non-empty) for the poly-n-sized initial condition, and |Q(n)| < (poly-n)^t.

**Proof of Lemma 7:** This follows directly from Lemma 1 by applying it to the case where |S| = w, the Cantorian pseudo-infinity.

**QED to Lemma 7.**

Then by Lemma 7, t could be w or any higher Cantorian pseudo-infinite number, if we allow that such numbers exist. If they do, then it is trivial to prove that P=NP, because the term “recursively enumerable sequence” allows us to enumerate the entire arithmetic hierarchy: we are running the recursion in our countable model, not inside the narrative world of the (also countable) theory of Cantorian sentences, in which the Cantorian pseudo-infinite numbers are regarded not merely as actual infinities, but as all the infinities there are.

We simply set t = w and we are done, because “for all initial condition sizes n” refers only to specified values of n, for initial conditions are by definition specified conditions, and thus by the definition of w all initial condition sizes n are less than w. Trivially, then, even if our clever search algorithm is not so clever after all, and is even equivalent to brute-force search, say with |Q(n)| = 2^n or even n^n, then |Q(n)| < n^t for all n, because we have set t = w, and by the definitions of w and of an initial condition n, m^poly-n < n^w for all m, n < w.

**QED to Theorem 3.**
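The arithmetic behind the trivial bound can be illustrated with a toy computation: fix any finite cap m on the input sizes that will ever be specified (a stand-in for w), and a single finite exponent t bounds even brute-force 2^n by n^t for every specified n. (The sketch restricts to n >= 2, since n^t cannot dominate 2^n at n = 1; the cap m = 1000 is an arbitrary illustrative choice.)

```python
import math

# Once specified input sizes are capped at a finite m, one fixed finite
# exponent t satisfies 2**n <= n**t for every n in 2..m, so "brute force"
# is formally polynomial over the specified range.

def exponent_for_cap(m):
    """Least integer t with 2**n <= n**t for every n in 2..m."""
    return math.ceil(max(n / math.log2(n) for n in range(2, m + 1)))

m = 1000                      # stand-in for w, the cap on specified sizes
t = exponent_for_cap(m)
assert all(2**n <= n**t for n in range(2, m + 1))
print(t)  # a fixed exponent on the order of m / log2(m)
```

The point of the sketch is only that t grows with the cap m, which is the sense in which the proof "tilts in the opposite direction": the polynomial exponent is finite but enormous relative to any specified input.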

So in the real world, the world where there exists such a thing as true infinity, P = NP, but trivially so since the equation tilts in the opposite direction we thought. And therein lies its kernel of non-triviality.

## The Non-Trivial Real-World Implications of Trivial P=NP

Typically we have taken P = NP to mean that NP problems become tractable for computation, that NP problems all reduce to easy P problems. But it turns out that P = NP means the opposite: even polynomial-time computable problems can require, and for all the interesting and hard NP cases we want to solve do require, nearly exponential time in reality. That is to say, proving a problem is in P does not necessarily prove it can be computed significantly faster than any problem in NP.

Well, the good news is that we already knew that. So my result is not shocking; rather, it confirms in theory what experience in practice has already shown: that in the real world P is not meaningfully different from NP for computation. The classification of a problem in terms of the exponential or polynomial expanse of its possible solutions relative to the size of its initial conditions is not a meaningful practical measure of its finite computational complexity. We have proven here merely that this is the case even in theory. And I believe that a theory that matches practical experience is in some important sense superior to one that does not.

The key thing to understand here is what we are doing when we set t=w. We are saying that we will still count a problem as being in P if the computation time is the initial condition size n, raised to the power of the least finite positive integer greater than any positive integer ever specified, with the understanding that setting any initial condition of size n is specifying n.

Yes, this is cheating. Yes, we have simply in some sense assumed what we are proving. But no, this does not mean we have not proven P = NP. We have indeed, and the proof is indeed trivial.

What is not trivial is that we have shown that by modeling ZFC in the only way it can be modeled as both consistent and complete, then looking at the P vs NP question in that canonical model, we find that P = NP becomes an almost trivial result, but not quite trivial.

The almost trivial proof that P = NP is actually very significant because it unmasks the real question behind P vs NP, which is whether there is a way to achieve specific-exponent polynomial-time computation of an NP problem. In other words, if we consider only t < w, does P = NP?

This reframing of the real question actually opens the way to a decidedly non-trivial answer.

**Definition: Let us call “P<” the class of problems solvable in n^t time where t<w.**

Then the real question is whether P< = NP.

The answer is that P< != NP.

But don’t take my word for it. Let’s prove it.

# Proof that P< != NP

**Theorem 4: P< != NP**

**Proof:** Since we reject Cantor’s fallacy, we accept the existence of a one-to-one mapping between any infinite set and its power set, with the caveat that we can only refer to a generic, not any specific, member of that mapping. Thus we can at least assert P< != NP, because our list of all possible choice functions that shortcut a search into an exponential candidate pool under some finite exponent t < w cannot be diagonalized.

Since we can generalize over all possible polynomializing choice functions as a non-diagonalizably listed set, we can partition that set into equivalence classes, grouping together the functions that map each initial condition to the same least satisfying element. Each class’s members reduce to that class’s singular choice function, mapping each initial condition n < w to some satisfying element, or to the empty set if there is no satisfying element for that n.

This puts us in position to diagonalize. In the case of 3SAT, given any such choice function, we can construct a 3-disjunct CNF formula for which no singular choice function is able to specify a satisfying valuation, if one exists.
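A brute-force sketch of one such singular choice function for 3SAT: it maps a formula either to one specific satisfying valuation (here the lexicographically least) or to the empty-set marker. The clause encoding and helper names are illustrative assumptions, and the sketch only operates in the finite range where the function can be specified at all.

```python
from itertools import product

# A "singular choice function" for SAT: formulas are lists of clauses,
# each clause a list of literals (variable_index, polarity). The function
# returns the lexicographically least satisfying assignment, or None
# (standing for the empty set) when no assignment satisfies the formula.

def choice(formula, num_vars):
    for assignment in product([False, True], repeat=num_vars):
        if all(any(assignment[v] == pol for v, pol in clause)
               for clause in formula):
            return assignment
    return None

# (x1 or x2 or not x3) and (not x1 or x2 or x3)
f = [[(0, True), (1, True), (2, False)],
     [(0, False), (1, True), (2, True)]]
print(choice(f, 3))          # (False, False, False)

unsat = [[(0, True)], [(0, False)]]   # x1 and (not x1)
print(choice(unsat, 1))      # None
```

Note that this function is only defined by exhaustive specification over a finite range of inputs, which is exactly the limitation the diagonalization argument below exploits.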

There are two cases.

First, if w exists, then there are only finitely many specific CNF formulae, because there are only finitely many CNF formulae of size < w. Thus we simply assert the non-constructive existence of some CNF formula of finite size > w, and no choice function can map it to either a satisfying valuation or to the empty set, because doing so would require specifying it as a member of an ordered pair which is, in turn, a member of the choice function.

Second, if w does not exist, then there is no unspecified finite number, and hence there are truly an infinite number of specific CNF formulae, because there are CNF formulae of any finite size. But what becomes of our choice functions when they are constructed in the infinite limit to range over the true infinity of all CNF formulae? There are two sub-cases, call them Case 2a and Case 2b.

In Case 2a, each such choice function infinitizes the formulae along with itself. Thus each choice function includes in its domain every possible infinite conjunction of triple-disjunctions. This, however, is impossible for any choice function, because a choice function by definition specifies all its members, uniquely pairing each formula with either a specific satisfying valuation or the empty set if no such valuation exists, for otherwise it is not serving its defined purpose of separating satisfiable formulae from non-satisfiable ones. Since all choice functions that infinitize their domains along with the members of those domains are finitely undefinable, the mappings they represent are unspecifiable; hence in being infinitized they cease to be choice functions, and thus cease to prove that P< = NP.

In Case 2b, each such choice function infinitizes none of the formulae in its domain along with itself. Thus each choice function includes in its domain every possible finite conjunction of triple-disjunctions, but no infinitely long formulae. We have already shown, however, that any set constructed as the infinite limit of a sequence of recursively defined increasingly inclusive sets cannot be truly infinite without infinitizing its members. For if we define such a set so as to avoid infinitizing its members, for example as the union of all members of some recursively defined set of increasingly inclusive sets, then we are defining a set with infinite extension but finite intension. Such a set, as it turns out, and as we have shown above, is in fact truly finite in cardinality, as it has true cardinality w in the only consistent model of ZFC there is. Thus Case 2b simply reduces to Case 1.

In conclusion, we have shown how P< != NP by showing that any proof that P< = NP relies on the existence of a choice function that ranges over all possible initial conditions of the problem and that picks out a unique satisfier, if one exists, for those initial conditions from among exponentially many candidates. And we have shown that this choice function must fall into one of the following three categories:

1) Cantorian sub-transfinite, meaning truly and specifiably finite in extension, thus failing to account for initial conditions of size greater than its own size, which is the same as saying in Cantorian terms that the set is recursive and thus diagonalizable, or

2) Cantorian pseudo-infinite (recursively “infinite” in extension but finite in intension, thus not infinitizing its elements), thus in fact truly (though unspecifiably) finite in extension, thus, like in category 1), failing to account for initial conditions of size greater than its own size, which is the same as saying in Cantorian terms that the set is recursive and thus diagonalizable, or

3) Truly infinite (infinite in both extension and intension, thus infinitizing its elements), thus incapable of uniquely specifying any satisfying valuations at all.

Since in all three cases, the choice function fails to tie more than a proper subset of all possible initial conditions to satisfying valuations, no choice function exists that can do so, thus an essential ingredient of any proof that P< = NP is missing, and hence P< != NP.

**QED.**

# Godel’s Second Incompleteness Theorem DeCantorized

Godel’s Second Incompleteness Theorem, in light of our finitary canonical model of ZFC and the Brouwerian true infinity that lies outside that model’s scope, also becomes a proof that P< != NP. It states that any finite set of axioms of ZFC can be proven consistent in ZFC (so there is a poly-time verification algorithm), but that no proof exists in ZFC that all finite sets of its axioms are consistent (so there is no poly-time search algorithm to recursively enumerate the models proving each set of axioms consistent). Thus it amounts to saying that the consistency of the finite sub-theories of ZFC is an NP problem but not a P< problem, because it is equivalent to saying you can construct a choice function that specifies a model for each and every finite sub-theory, but only as a set that falls into one of the three cases described above:

1) Cantorian sub-transfinite, meaning truly and specifiably finite in extension, thus failing to account for initial conditions of size greater than its own size, which is the same as saying in Cantorian terms that the set is recursive and thus diagonalizable, or

2) Cantorian pseudo-infinite (recursively “infinite” in extension but finite in intension, thus not infinitizing its elements), thus in fact truly (though unspecifiably) finite in extension, thus, like in category 1), failing to account for initial conditions of size greater than its own size, which is the same as saying in Cantorian terms that the set is recursive and thus diagonalizable, or

3) Truly infinite (infinite in both extension and intension, thus infinitizing its elements), thus incapable of uniquely specifying any models at all.

But Godel’s Second Incompleteness Theorem can also be understood with the assumption that the upper-bounding exponent t is allowed to be w, the least unspecified finite number. Under that assumption, like in our general treatment above, it becomes an almost trivial proof that P = NP.

# Trivial Proof of the Consistency and Completeness of ZFC

But what does this almost-trivial equation P = NP mean for the consistency of ZFC? It means the finitary model of ZFC that proves its consistency is also complete. We can actually formulate and prove within ZFC the theorem that ZFC is consistent under our finitary model, because in our finitary model the statement enumerating the set of all sets of axioms of ZFC, which is forebodingly non-recursively-enumerable in infinitary models of ZFC (models that attempt to interpret Cantorian pseudo-infinity as true infinity), becomes a trivially finite statement of size 2^w.

# Pseudo-Infinity in ZFC and Heisenberg’s Uncertainty Principle

Now, of course, the odd thing about this is that, within the pseudo-infinite world of ZFC theory, this finite statement does not stand out as anything special, for it is just another finite formula, and ZFC thinks of its own axiom schemas as infinite, so although ZFC does in fact prove its own consistency, in some basic sense it still does not know it.

Fortunately this blissful ignorance is merely a subjective matter, for we can show with full rigor that ZF does in fact prove a statement that our model dutifully interprets as asserting the consistency of ZF. Thus our model is complete, and under our finitary interpretation of ZF, ZF is both consistent and complete.

Since the sense in which ZF does not “know” it proves its own consistency is entirely subjective, is there a way of “doing” ZF so as to realize, as we are doing it, that this otherwise nondescript finite theorem is in fact CON(ZF)? Well, the answer to that is yes and no. Yes, we can do ZF as if we knew it proved its own consistency, for that is actually what we do all the time. After all, if we ever actually did math or even set theory in ZFC without assuming it was consistent, we would surely have an easy time proving all our conjectures, since they would all be true, including the ones that imply each other’s negations.

But the answer is “no” in the sense that we cannot, in ZFC, ever, as it were, become lucid in its dreamworld and realize that the infinities it flies around in are actually just unspecified finite numbers. And the finite formula that encodes its consistency can never be recognized as such within ZF because to recognize it we must first specify it, and we cannot specify it either within ZF or even outside it, without in effect chasing the spirit of its definition to reside in some larger formula of unspecified size. So it behaves according to something akin to Heisenberg’s Uncertainty Principle. We cannot both specify it and recognize its existence at once. If we recognize its existence, we also recognize that we cannot specify it. If we specify it, it cannot exist as what we have specified. Its ontological position and momentum do not coexist, if we understand its ontological position to mean its unique reference, its specifiability, its extension, and understand its ontological momentum to mean its meaning, its definition, its intension.

We can conjecture, in fact, that the Heisenberg Uncertainty Principle is just a special case of the quixotic nature of the least unspecifiable infinity, for the finite speed of light and Planck’s constant are just applications of that number to theoretical physics, and their function in establishing a virtual infinity (a finite maximum) and a virtual zero (a finite minimum) for energy lies at the heart of quantum mechanics.

# Reliable Measure Linking Math to Physics with the Banach-Tarski Paradox Resolved

The Banach-Tarski Paradox relies on the ultrafilter lemma, a kludge that imparts some of the finite attributes which the fatter kludge called the Axiom of Choice shoehorns onto supposedly uncountable infinities. The lemma is used to prove that a sphere may be partitioned into “unmeasurable” subsets of points that can be rotated and reassembled into two spheres of the same size as the original.

This so-called construction relies in two different places on the premise that any finite sequence of steps of irrational length along a grid of two polar dimensions can be transformed into the subsequence containing all but the last step, by adding a backward step that cancels the last step. Or to put it more generally, the premise is that given any sequence of words of a group, each word in that sequence consisting of the prior word plus a generator that is not the inverse of the last generator in its predecessor, the last word in that sequence may be transformed into the prior word in the sequence by adding to the last word the inverse of its final generator.

There is nothing wrong with that premise until the word reaches a length equal to the largest specified natural number, which we shall designate as ‘m.’ It is not possible to construct a longer specified sequence, so it is not possible to add the inverse of the mth generator in the sequence as the (m+1)th in order to cancel out the mth and arrive back at the (m-1)th word in the sequence.
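The cancellation move, and its failure at the cap, can be sketched by modeling a word as an unreduced sequence of generator steps of length at most m. The helper names and the cap m = 5 are illustrative assumptions.

```python
# Free-group words over a, b and their inverses ('A' = a^-1, 'B' = b^-1).
# Below the cap m, appending the inverse of a word's final generator
# cancels it; at length m, no (m+1)th step can be specified at all.

INV = {'a': 'A', 'A': 'a', 'b': 'B', 'B': 'b'}

def step(path, gen, m):
    """Append one generator step, refusing to exceed the cap m."""
    if len(path) + 1 > m:
        raise ValueError("cannot specify a step beyond the mth")
    return path + gen

def reduce_word(path):
    """Freely reduce a step sequence by cancelling adjacent inverses."""
    out = []
    for g in path:
        if out and INV[out[-1]] == g:
            out.pop()
        else:
            out.append(g)
    return "".join(out)

m = 5
p = step("abab", "B", m)        # a fifth step cancelling the fourth
print(reduce_word(p))           # "aba": cancellation works below the cap

try:
    step("ababa", "B", m)       # no (m+1)th step can be specified
except ValueError as e:
    print(e)
```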

Thus two moves in the sphere-doubling construction fail. Let’s examine how they fail in relation to the free group of rotations inside SO(3) with the four generators a, its inverse a^-1, b and its inverse b^-1, and the non-commutative operation denoted by concatenation.

First, the move that strips the “a” off the end of each a-ended sequence to produce all the a-, b- and b^-1-ended sequences fails, because no sequence of length m can have an a^-1 added to it to cancel its final a. In fact it cannot have any specific element added to it at all. Although there can exist sequences longer than m, those sequences cannot have specific elements beyond the mth, only generic ones. Only variables, free or bound, can refer to elements of a sequence beyond the mth, not constants.

And since there are only finitely many sequences up to length m, eliminating the final a from all those that end in a does not yield all the sequences up to length m that end in a, nor all that end in b, nor all that end in b^-1. It just leaves us with a little under a third of each. No paradox, no doubling, no rabbits out of hats, no Hilbert roach motels.
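A toy enumeration makes the "little under a third" claim concrete. It follows the text's convention of classifying words by their final generator, with 'A' and 'B' standing for a^-1 and b^-1; the cap m = 7 is an arbitrary illustrative choice.

```python
from itertools import product

# Enumerate freely reduced words up to length m, strip the final a from
# the a-ended words, and compare against all a-, b- and b^-1-ended words.

INV = {'a': 'A', 'A': 'a', 'b': 'B', 'B': 'b'}

def reduced_words(max_len):
    """All freely reduced words (no generator adjacent to its inverse)
    of length 0..max_len, as a set of strings."""
    words = {""}
    for k in range(1, max_len + 1):
        for w in product("aAbB", repeat=k):
            if all(INV[w[i]] != w[i + 1] for i in range(k - 1)):
                words.add("".join(w))
    return words

m = 7
words = reduced_words(m)
a_ended = {w for w in words if w.endswith("a")}
stripped = {w[:-1] for w in a_ended}                 # drop the final a
target = {w for w in words if w and w[-1] in "abB"}  # a-, b-, b^-1-ended

print(len(a_ended), len(target))              # 1093 3279
print(len(stripped & target) / len(target))   # just under one third
```

In the unbounded free group the stripping move is the engine of the doubling; at any finite cap it recovers only the shorter words, which is the fraction the text describes.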

The second move that relies on Cantorian sleight-of-mind is the one where we supposedly can fill any hole in a circle, without adding points to the circle, just by rotating onto it the first point of an infinite sequence of points that begins a rational arc-length along the circle from the hole and increments by that same rational arc-length to reach each next point. This wrongly presumes that an infinity of points in that sequence are specified. In fact the mth point in the sequence would leave a hole behind when rotated to replace the (m-1)th point, and the (m+1)th point, being unspecified, would not be a measurable distance away from the hole vacated by the mth point, and thus could not be rotated into it.

Point translation requires distance measure. Measure requires quantitative specification. We do not have that for the (m+1)th point, so we cannot translate it to replace the mth, so the hole is merely shifted to the mth point’s position on the circle, not miraculously filled by fixed point-set rotation.
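The shifted-hole picture can be sketched by indexing the sequence’s points: the rotation sends point k to position k-1, so the hole travels to the end of the specified sequence rather than disappearing. Indices stand in for the angles k·theta along the circle, and m = 10 is an illustrative cap.

```python
# Toy model of the truncated hole-filling rotation: points occupy
# positions k = 1..m (the hole is at index 0). Rotating every point
# back one step fills the hole at 0 but opens a new hole at m, since
# no (m+1)th point was specified.

m = 10
occupied = set(range(1, m + 1))       # hole at index 0
after = {k - 1 for k in occupied}     # rotate every point back one step

print(0 in after)    # True: the original hole is filled
print(m in after)    # False: the hole has moved to index m
```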

And even if we did allow rotation of unspecified points into specified positions, the infinite regress of rotation would look more like the unwinding of a tape measure spool than a “fixed rotation,” so the question would still naturally arise as to the meaning of “fixed rotation” when it’s applied to an infinitely long coil of tape. In a physical tape coil, each coil wind is smaller in diameter than the one outside it, so the inner winds spin through a greater angle than the outer winds for the same length of tape unwound. That is why it looks different from a fixed rotation. In the radius length modulo pi circumference coil, though, each wind intersperses with the next so they are all exactly the same diameter, making the unwind appear like a fixed rotation of every wind at once through the same angle.

This really is a fixed rotation if the coil is finite in length, and like any other fixed rotation of points, if it fills an empty spot on one end, then it vacates some other empty spot on the other end. But to call it a fixed rotation when the tail is endless is nonsense, because it is really not a rotation at all but an extension.

If we did the rotation sequentially one point at a time, we would see the hole get deferred indefinitely around the coil. It would never disappear, just travel endlessly around the circle. To say that making all those deferrals simultaneously makes the hole disappear altogether would be the same as saying that moving a point round in fixed intervals around a circle an infinite number of times at once is also to make that point disappear.

In either case, supposing that the hole or the point actually disappears is nonsense. Clearly the right answer to the question of where an infinitely displaced hole or point ends up is that it ends up somewhere at the limit of all its motions. What would be the limit location of a radius-modulo-circumference point sequence? It seems to me it would just be some other point on the circle, or perhaps the circle’s center. Either way, it does not disappear from the volume that contains the circle.

But the True Infinitism answer gives the most rigorously sound and intuitive result: the hole or point simply ends up at the highest finite location in the sequence that it can occupy, which is the location with the highest specified natural number as its index.

A physical consequence of this resolution of the Banach-Tarski Paradox is that it amounts to proving the Conservation of Space as a mathematical theorem. Points in space can be neither created nor destroyed.

Only finite information is accessible to any finite physical system, and in True Infinitism only sequences up to a finite maximum length are accessible to specific computations. Beyond that finite maximum, which is observer-dependent, only generic computations, which is to say computations on free or bound variables rather than on constants, can be executed.

Thus True Infinitism math naturally models useful physical notions like the Planck Length as a minimum accessible length and the speed of light as a maximum accessible speed. However, True Infinitism leaves open the possibility that both the Planck Length and the speed of light are observer-dependent.

# True and Pseudo Infinities at the Foundations of Mathematics

True infinity also has some interesting cosmological consequences. For example, if I were to sit in this room forever, with my limited view of the universe, would I eventually see every event that has ever happened anywhere, repeated before my eyes? The existence of true infinity implies that yes, I would. But if the only infinity is Cantorian pseudo-infinity, I could prove that no, I would not.

What I have shown here is that, despite what may be proven about Cantorian pseudo-infinity, it is simply not true of true infinity, and that there is nothing inconsistent in this fact. We can make use of Cantorian pseudo-infinity for all it is worth to us, and it is worth a great deal, yet also recognize that it is not about true infinity, and that we can also theorize about true infinity for all that is worth to us, and it is also worth a great deal.

And I think this position on the foundations of mathematics and logic reflects the general consensus today among mathematicians and logicians, that however we may like or dislike this or that foundational program or project, we are all, as a community, better off on the whole for having all these programs and projects to ponder, than we would be if some of them were not open to us for inquiry. I have proven here that they may coexist in a single complete and consistent model for the foundations of mathematics.

The discovery here of the finiteness of Cantorian pseudo-infinity is especially important to modern mathematics precisely because so much of mathematics is currently modeled in ZFC. Thus a lot of confusion has also arisen in mathematics over the last century and a half because of Cantor’s fallacy. By correcting that fallacy, we are able to provide simple and elegant solutions to many problems involving Cantorian pseudo-infinities being mistaken for true infinity.

One of these, for example, is the Poincare Conjecture, which was solved by Perelman by demonstrating that a Cantorian pseudo-infinity actually reduces to a finite number. Perelman showed no understanding in his proof, however, that this was in fact what he had done. He understood his proof as something anomalous within the Cantorian worldview.

What I have shown in this paper is that Perelman’s result comes into focus, makes sense, and in a way becomes a trivial result, from the standpoint of a Kantified Brouwerian and Kroneckerian worldview that embraces the Cantorian worldview as a useful fiction within its imaginary ambit.

It would be interesting to explore, as well, how Abraham Robinson’s “Non-Standard” analysis may also be understood as an instance of seeing through to the true finiteness of a Cantorian infinity, such that it ought to be called “True Infinity Analysis” rather than “Non-Standard.”

An exploration of the relationship between seeing through to the finiteness of Cantorian pseudo-infinities and seeing through to the regularity of Mandelbrot’s monsters also promises an interesting journey.