May 17

Ten New Poems – The Monomoth Decameron

My submission of ten poems to the 2014 Campbell Corner Poetry Contest was not awarded a prize, so according to my submit-once policy I am publishing them here now. Since I wrote them as a set, I have posted them on a single page and titled them The Monomoth Decameron.

Apr 21

Five New Poems – Wrinkles, barge, subways, girl, ours

These five poems were entered in the New Ohio Review’s 2014 Poetry Contest earlier this month, and more suitable poems were selected for the prizes and for publication. So in following my policy of submit once, then publish openly, I am sharing these poems now with the general public:

Wrinkles

barge

subways (Mature content, rated R)

girl

ours (Mature content, rated R)

Apr 19

Three New Poems – out of water, Regret, Verisimilarattitude

These three poems were entered in the Dash Literary Journal’s 2014 Poetry Contest last month, and more suitable poems were selected for the prizes and for publication. So in following my policy of submit once, then publish openly, I am sharing these poems now with the general public:

out of water

Regret

Verisimilarattitude


Apr 17

Bleu: A Creation Myth in the Making is the #1 Amazon Literary Satire Fiction Free Download Today

Okay, so there are only four books in that category currently in the Top 100 Free list on Amazon:

http://www.amazon.com/Best-Sellers-Kindle-Store-Literary-Satire-Fiction/zgbs/digital-text/7588855011/ref=zg_bs_fvp_p_f_7588855011?_encoding=UTF8&tf=1

but it’s nice to see that my novel is finding an audience, even if it is within a small niche of the reading public.

I couldn’t be prouder of my curious little book. I feel it is already a success now, because eighty-one people have looked it over and decided to give it a free try! I welcome you to try it as well. It will be on free promotion until the end of April 20, 2014.

Cheers!

Apr 06

P v NP Resolved In a DeCantorized Foundation for Mathematics

Abstract

P v NP is resolved by refuting Cantor’s Uncountability Theorems, but allowing them to live on in a finite model of ZFC which interprets the least infinite cardinal as the greatest finite number ever specified (unambiguously referred to) and which turns out to make ZFC consistent and complete under that finite interpretation. This result relies on findings from my earlier piece on Cantor’s Fallacy, particularly that paper’s key insight that Cantor committed a category mistake in positing some higher form of infinite extension that makes a set uncountable, when in fact what makes a set uncountable is its infinite intension.

In the context of my refutation of Cantor’s Uncountability Theorems, I am able to prove P = NP almost trivially, but this proof comes with the caveat that it ends up implying the opposite of what conventional wisdom has thought such a proof would mean for practical computation.

For rather than implying that NP-Complete problems can be efficiently solved, a corollary of my proof of P = NP turns out to be that there exists a distinct category of problems in P that cannot be efficiently computed despite being solvable in polynomial time.

I also prove that the subset of P consisting of the problems that can be efficiently computed, which I denote as P< and define in relation to the greatest finite number ever referred to, does not equal NP.

My proof that P< != NP forms a bookend with my proof that P = NP, with implications for the foundations of mathematics that very satisfyingly reflect the overall empirical experience of working mathematicians and computer scientists and engineers with computation in general.

The paradigm shift underlying these proofs also resolves all known philosophical paradoxes in the foundations of mathematics and formal logic, and provides a new foundation for mathematics that leaves ample room for the continuing development of useful theory with ZFC and other popular foundational projects, and that brings into workable harmony the views and insights of intuitionists, formalists, positivists and empiricists into the nature of mathematics in such a way that opens up wide vistas for new research based on the work of all of them.

Finally, the new foundation for mathematics presented in this paper gives us foundational insight into the true meaning of Godel’s Second Incompleteness Theorem and the Continuum Hypothesis, and it also strongly suggests that Heisenberg’s Uncertainty Principle is simply an applied instance of the representation within ZFC of unspecifiably large finite numbers as being “transfinite.”

The author also suggests that Perelman’s proof of the Poincare Conjecture can be best understood as an instance of awakening to the true finiteness of a Cantorian infinity, and that further work stemming from this paper could investigate the relationship between its De-Cantorized Foundations and both Abraham Robinson’s non-standard analysis and Mandelbrot’s monsters.

The Essence of P vs NP

An NP-Complete problem is one that involves an exponential number of candidates to be filtered to determine whether a polynomial-time verifiable criterion is met by any of them.

Brute force solutions all require listing out and testing a number of candidates that is exponentially large in comparison to the size of the initial parameters, which usually include a finite set whose number of subsets stands in polynomial relation to the number of candidates.

If no solution can do more than polynomially better than brute force, then P != NP. If some solution can, then P = NP.

But since verifying any candidate takes only polynomial time, the exponential time component of the solution must occur in the candidate identification process.

Thus for any NP problem, if for all sets of initial conditions there is a polynomial-time method of identifying a polynomial-size subset of the candidates that intersects the set of all criterion-satisfying candidates, then the problem is solved in polynomial time and P = NP.
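
To make the shape of the problem concrete, here is a minimal Python sketch of this structure for 3SAT, using hypothetical names of my own (verify, brute_force_sat): the verifier runs in polynomial time per candidate, while the search ranges over all 2^n valuations. A proof that P = NP would, in effect, replace that exhaustive range with a polynomial-size candidate list.

    from itertools import product

    def verify(formula, valuation):
        # Polynomial-time filter: every clause must contain a true literal.
        # A literal is a nonzero int: +i means variable i, -i its negation.
        return all(any((lit > 0) == valuation[abs(lit)] for lit in clause)
                   for clause in formula)

    def brute_force_sat(formula, n):
        # Exponential-time candidate identification: all 2^n valuations.
        for bits in product([False, True], repeat=n):
            valuation = dict(enumerate(bits, start=1))
            if verify(formula, valuation):
                return valuation
        return None

    demo = [(1, 2, 3), (-1, -2, 3)]   # (x1 V x2 V x3) & (~x1 V ~x2 V x3)
    print(brute_force_sat(demo, 3))   # {1: False, 2: False, 3: True}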

Thus proving P != NP requires proving, for some NP-Complete problem, that all methods of listing a candidate subset that intersects the set of all criterion-satisfying candidates produce a candidate set that is super-polynomial in size in relation to the initial conditions.

All NP-Complete problems concern finite initial conditions and finite candidate sets, but proving P != NP or P = NP requires proving something about all possible finite initial conditions, thus about arbitrarily large finite initial conditions.

Diagonalization Revisited

Now, suppose we have a proof that P = NP. It must produce, for any initial conditions no matter how large, a candidate subset that is polynomial in size in relation to those initial conditions. Then the list of all the candidates for all the initial conditions is itself a recursively enumerable set (we know the set of all possible initial conditions is recursively enumerable since it is for at least one NP-Complete problem, SAT). To enumerate it, we simply enumerate all the possible initial conditions and, for each, produce the finite polynomial-size candidate list in polynomial time.

It might seem, then, that all we need to do to prove P != NP is prove that the infinite union of the finite candidate sets of each set of possible initial conditions is not recursively enumerable. And we should be able to come up with a diagonalization argument to prove this.
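
For readers who want the diagonalization template spelled out, here is a minimal sketch (a toy example of my own, not a proof about candidate sets): given any purported enumeration of infinite 0/1 sequences, the diagonal sequence differs from the i-th listed sequence at position i, so it cannot appear anywhere in the list.

    def diagonal(enumeration):
        # enumeration(i) is the i-th listed sequence, itself a function
        # from positions to bits; flip the i-th bit of the i-th sequence.
        return lambda i: 1 - enumeration(i)(i)

    # Toy enumeration: sequence i is constantly i mod 2.
    e = lambda i: (lambda j: i % 2)
    d = diagonal(e)
    print([d(i) for i in range(6)])   # [1, 0, 1, 0, 1, 0]: disagrees with
                                      # row i at column i, for every i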

Unfortunately, however, there is no requirement that the method for producing polynomial-size candidate sets in polynomial time be uniform across all initial conditions. What if there is a unique method of doing this for each slate of initial conditions?

Suppose, for example, that we succeed in diagonalizing the candidate set for a given proof that P = NP, thus refuting the proof. Have we thereby proven that P != NP? No, for we have only proven that one particular method of generating the polynomial-size candidate sets in polynomial time for all initial conditions fails to list all actual solutions.

All we can do by diagonalization is to attack each attempt to recursively enumerate the solutions for all initial conditions and bat them down one by one. Diagonalization gives us no general method for pre-empting all such attempts at once. We can perhaps manage to pre-empt certain subclasses of such attempted methods, but we cannot pre-empt them all.

Now we might try to say that for each slate of initial conditions we must include the time required to determine the unique method in the computation time of the candidate set. So suppose we require that any proof that P = NP also include computation of initial-condition-dependent method variations within the computation time claimed for producing the polynomial-size candidate set. Our strategy to use diagonalization to prove P != NP might appear salvageable in this way.

But alas, this is an illusion, because we still cannot generalize over the manner in which each potential proof of P=NP incorporates computation of its initial-condition-dependent method variations into its computation measures. Each potential proof could do so in a unique way, so any attempt we make to generalize over them in a recursively enumerable way could miss some of them. We can diagonalize over any proper subset of potential proofs of P=NP based on the commonalities in their methods of incorporating initial-condition-dependent computation into their computation time calculations, but we cannot diagonalize over them all.

So am I saying that P != NP is unprovable? Not exactly.

I am saying that P != NP is not provable by diagonalization, and of course this is not a new claim. The general consensus gave up decades ago on proving P != NP by diagonalization for the very reasons I have stated. One could even say that it was this failure of diagonalization for proving P != NP that has made the P vs NP question both interesting and important.

Or did they give up? The Baker-Gill-Solovay theorem, which showed that relativizable proofs cannot resolve P vs NP, would seem to have put an end to diagonalizing methods of proving P != NP, but the continued pursuit of super-polynomial circuit lower bounds that tacitly rely on diagonalization shows that some people are as tenacious in pursuing a dead end route to P != NP as some are in pursuing dead end routes to P = NP.

But I have more to say than just that.

Attempts to prove P = NP diagonalize attempts to prove P != NP

The main point I want to make here is that diagonalization fails for proving P != NP not because it is irrelevant to the core issue in P vs NP, but because diagonalization actually works against proving P != NP.

Now, this does not mean diagonalization can be used to prove P = NP. It simply means diagonalization is at the heart of the difficulty in proving P != NP, because it is in fact the attempts to prove P != NP that keep getting diagonalized by fresh approaches trying to prove P = NP.

So I want to elucidate exactly how this is the case. I am not making a mere heuristic observation. I am making a mathematical claim.

Any proof that P != NP must prove that all possible attempts to recursively enumerate a set intersecting that of all the solutions for all initial conditions of an NP-Complete problem must fail. But to succeed, such a proof must recursively enumerate all possible ways of recursively enumerating such purported intersecting sets.

I claim the following.

Theorem 1: The set of all possible ways to recursively enumerate a candidate subset intersecting that of all the criterion-satisfying candidates for all initial conditions of an NP-Complete problem is not recursively enumerable.

Proof: It will suffice to consider the 3SAT problem. The initial condition for it is simply a CNF formula. The set of all initial conditions is the set of all CNF formulae. A set intersecting that of all the criterion-satisfying candidates for all initial conditions includes every pairing of a CNF formula (the formula is the slate of initial conditions) with some polynomial-size set of valuations that includes at least one that satisfies it if there is one.

Construct a recursive enumeration of the 3SAT CNF formulae by running through all formulae with 2 literals, all those with 3 literals, etc.
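
One way to realize such an enumeration in code (a sketch under a staging convention of my own, grouping formulae by the number of variables; the exact staging is immaterial to recursive enumerability):

    from itertools import combinations, count, islice, product

    def clauses_over(n):
        # Every 3-literal clause on distinct variables drawn from 1..n.
        for i, j, k in combinations(range(1, n + 1), 3):
            for si, sj, sk in product([1, -1], repeat=3):
                yield (si * i, sj * j, sk * k)

    def enumerate_3cnf():
        # Stage n lists every clause set over variables 1..n. Each stage
        # is finite, so every formula appears at some finite position;
        # duplicates across stages do not affect enumerability.
        for n in count(3):
            pool = list(clauses_over(n))
            for size in range(1, len(pool) + 1):
                for formula in combinations(pool, size):
                    yield formula

    print(list(islice(enumerate_3cnf(), 2)))   # the first two formulae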

Suppose we have an attempted proof that P=NP. It must include some recursive enumeration that pairs each 3SAT CNF formula with at least one polynomial-size set of valuations that intersects the set of all its satisfying valuations, if that set is non-empty.

When we say “polynomial-size” we mean of course that there is a finite maximum exponent t such that, for all CNF formulae of any number of literals n, the number of valuations in the polynomial-size candidate subset chosen by the proof’s exponential-bustingly clever search algorithm is less than n^t.

Thus for any one particular such proof, we can try to construct a refutation by producing a formula that has a non-empty set of satisfying valuations but for which the proof’s search algorithm cannot produce a polynomial-size intersect set with it in polynomial time.

We can be certain, however, that there will come along yet another purported proof that P=NP which takes all known methods of refuting all known exponential-bustingly clever search algorithms, and evades all those known methods with an incrementally more clever exponential-busting search algorithm.

To prove that any such proof’s enumeration fails, no matter how incrementally more wise to our refutation methods its search algorithm is built, we must show that all such proofs, regardless of how they repartition the 3SAT formulae to arrive at some new maximum exponent t for the maximum size of any of the proof’s intersecting sets of the formula-satisfying valuation sets, can be trumped by some formula which forces the proof’s algorithm to generate more than n^t satisfying valuations in its intersect set.

But alas, to deal such a final blow to the P=NP enterprise, we must recursively enumerate the set of all possible recursive enumerations of valuations that intersect the set of satisfying valuations of CNF formulae.

To call a halt to our pretensions in this regard, consider this lemma:

Lemma 1: The set of all 3SAT CNF formulae for which the set of polynomial-size intersecting sets of its satisfying valuations has exponential size in relation to the number of literals in the CNF formula is infinite in size.

Proof of Lemma 1: It will suffice to show that there are 3SAT CNF formulae with any number of literals that have only one satisfying valuation, since the number of possible valuations of any 3SAT CNF formula is exponential in size in relation to the number of literals in it, and so is that number minus one. Given any n literals x1, x2, … xn, construct the 3SAT formula that includes the following conjuncts for each xi, 1 <= i <= n:

(xi V x[i+1 mod n] V x[i+2 mod n])

(xi V ~x[i+1 mod n] V x[i+2 mod n])

(xi V x[i+1 mod n] V ~x[i+2 mod n])

(xi V ~x[i+1 mod n] V ~x[i+2 mod n])

Since we can construct as many such 3SAT formulae as we please, their number is infinite. QED.
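
The construction is easy to check mechanically. A sketch (with helper names of my own) that builds the formula above for n variables and confirms by brute force that exactly one valuation satisfies it:

    from itertools import product

    def unique_solution_formula(n):
        # For each i, the four clauses pair x_i (positive) with every sign
        # pattern on the next two variables, indices taken mod n as above;
        # together they force x_i true. Requires n >= 3.
        f = []
        for i in range(n):
            a, b = (i + 1) % n, (i + 2) % n
            for sa, sb in product([1, -1], repeat=2):
                f.append((i + 1, sa * (a + 1), sb * (b + 1)))
        return f

    def count_satisfying(formula, n):
        # Brute-force count over all 2^n valuations (exponential in n).
        total = 0
        for bits in product([False, True], repeat=n):
            val = dict(enumerate(bits, start=1))
            if all(any((lit > 0) == val[abs(lit)] for lit in clause)
                   for clause in formula):
                total += 1
        return total

    print(count_satisfying(unique_solution_formula(5), 5))   # 1: only the
                                                             # all-true valuation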

Now suppose we have a proof that P != NP. It must include some recursive enumeration of all possible ways to generate the set of all possible pairings of each CNF formula with some polynomial-size set intersecting the set of its satisfying valuations.

Suppose we have such a recursive enumeration. It recursively enumerates an infinite list of all the algorithms that recursively enumerate pairings from each CNF formula to at least one polynomial-size set intersecting that formula’s set of satisfying valuations, which is to say, all the algorithms that, for some finite positive integer t, recursively enumerate pairings from each 3SAT CNF formula to at least one intersect set containing at least one, and fewer than n^t, of that formula’s set of satisfying valuations, n being the number of literals in the formula. Can this enumeration possibly be complete?

Well, we know that there are infinitely many 3SAT formulae, hence 3SAT formulae containing any number n of literals, that have O(2^n) non-satisfying valuations, each of which thus has O(2^n) intersecting sets of its set of satisfying valuations.

This means that there are 3SAT formulae containing any number of literals, thus infinitely many 3SAT formulae, for which any proof of P=NP fails to enumerate all the intersecting sets of its set of satisfying valuations, which means that each of these formulae causes a branching of at least two distinct possible proofs that P=NP (and in fact exponentially many in an infinite number of cases).

Thus any proof that P != NP must, for any n, enumerate at least 2^n possible algorithms that each map each 3SAT formula to a polynomial-size set intersecting its set of satisfying valuations.

Alas, this is impossible. We cannot recursively enumerate 2^n algorithms for all values of n. We will be diagonalized. The P=NP minions will keep trumping us forever, and we will have to keep shoveling their trumps into new classifications and folding them into our enumeration to keep them at bay.

Constructive Proof of P = NP and its Case By Case Debunking form an NP-Complete problem

Given any specific purported proof that P=NP, we know we can easily verify if it is valid or not. Our method of verification, in fact, is simply a method of refutation. We take the upper bound of the purported proof’s satisfier-including polynomial-size candidate subset, and we diagonalize it. We keep replaying the Yannakakis winning tic tac toe move against the daring gambit by Swart. But we cannot seem to show that for every possible initial condition, every possible proof that P=NP and its algorithm that purportedly polynomially assays every formula’s set of satisfying valuations, there exists some refuting diagonal formula that defies the assay.

Does this situation seem eerily familiar to us yet?

Indeed it should. And we can formalize this eerie feeling in a theorem:

Theorem 2: For any NP-Complete Problem C, the problem C’ of determining whether C is in P is itself an NP-Complete Problem.

Proof: Consider any NP-Complete problem C, which we define as follows:

Definitions: C is an “NP-Complete Problem” iff C is the triplet <S, C(S), V> such that:

  1. S is a finite set, which we call the “initial condition” of C
  2. C(S) is a non-polynomial-time enumerable subset of the power set of S, which we call the “candidate set” of C.
  3. V is a predicate that is decidable in polynomial time on any member of C(S), and is called the “filter” of C.

Definition: For any NP-Complete problem C, F is a “V-negative assay” of C(S) iff F is the couplet [R, <c] such that R is a partition of the candidate set C(S), and <c is an ordering on C(S) such that if V(x) is false for every member of Rmin = {x | (Exists r)(All q)[(r in R & x in r & q in r & q != x) → x <c q]}, meaning V is false of the least element of every element of partition R, then (All c)((c in C(S)) → ~V(c)), meaning V is false of all members of C(S).

Definition: For any NP-Complete problem C, any V-negative assay F of C(S) is a “polynomial-time V-negative assay“ of C(S) iff the partition of C(S) R in F is polynomial-time enumerable.

Lemma 1: For any NP-Complete Problem C, any method M of finding an algorithm A that proves C is in P (and hence P=NP) must ultimately rely on proving the existence of some polynomial-time V-negative assay of the candidate set C(S) of C for any initial condition S.

Proof of Lemma 1: Suppose the lemma were false. Then some method M would give us some algorithm A that proves P=NP on C despite the fact that for every partition R of the candidate set C(S) and every ordering <c of the candidates in C(S) that give us a polynomial-time enumerable set Rmin, V is false on all elements of Rmin but V is true on some element of C(S). But this is self-contradictory, for it means that every polynomial-time enumerable V-negative sampling of C(S) fails as an assay to show that V is false on all candidates in C(S). Hence only a super-polynomial sampling of C(S) can show that V is false on all candidates in C(S), hence the problem C is not solvable in polynomial time, and is thus not in P, which means no method M and algorithm A could possibly prove P=NP on C. By contradiction, then, Lemma 1 is true.

From Lemma 1, we know that proving P != NP requires that we prove, for any NP-Complete problem C, that there exists no polynomial-time V-negative assay of C(S).

But any polynomial-time V-negative assay of C(S) is a subset of C(S) and:

Lemma 2: The set of all subsets (the power set) of C(S) is not itself polynomial-time enumerable.

Proof of Lemma 2: Suppose it were. Then we would have an enumeration of the power set of C(S) that is polynomial in size compared to S. But the power set of C(S), by definition of power set, is exponential in size compared to C(S), and by definition C(S) is super-polynomial in size compared to S. This means the power set of C(S) would be exponential in size compared to something super-polynomial in size compared to S, and yet also polynomial in size compared to S, which is impossible by definition of super-polynomial and exponential size.

Lemma 2 means that any polynomial-time V-negative assay of C(S) must be fished out from a candidate set that is exponential in size compared to C(S).

Definition: For any NP-Complete Problem C, C’ is the “jump” of C iff C’ is the triplet <S’, C'(S’), V’> where:

  1. S’ = C(S).
  2. C'(S’) = the set of all polynomial-size (compared to S) partitions of C(S).
  3. V'(c’) (where c’ in C'(S’)) = “c’ is a polynomial-time V-negative assay of C(S).”

Lemma 3: S’ would be the initial condition of C’ if C’ were an NP-Complete Problem.

Proof of Lemma 3: Trivially, S’, being a subset of the power set of the finite set S, is itself a finite set.

Lemma 4: C'(S’) would be the candidate set of C’ if C’ were an NP-Complete Problem.

Proof of Lemma 4: Trivially C'(S’), being the power set of C(S), is exponential in size, thus super-polynomial in size, compared to C(S).

Lemma 5: V’ would be the filter of C’ if C’ were an NP-Complete Problem.

Proof of Lemma 5: It suffices to show that for any c’ in C'(S’), we can determine in polynomial time (compared to S’ = C(S)) whether V’ is true of c’, where V'(c’) means that c’ is a polynomial-time V-negative assay of C(S).

Let F = [R, <c] where R is a partition of C(S) and <c is an ordering on C(S). Then the question is whether we can show in polynomial time (compared to C(S)) whether:

  1. R is polynomial-size (compared to S).
  2. If R is polynomial-size (compared to S) then R is a V-negative assay on C(S), meaning if V(x) is false for every member of Rmin = {x | (Exists r)(All q)[(r in R & x in r & q in r & q != x) → x <c q]}, meaning V is false of the least element of every element of partition R, then (All c)((c in C(S)) → ~V(c)), meaning V is false of all members of C(S).

R is a subset of the power set of C(S). If R is polynomial-size compared to S, then C(S) is super-polynomial-size compared to R, because it is so compared to S. Thus at least one element of R is super-polynomial in size compared to S, for otherwise the union of the elements of R would have too few elements to be C(S), which would mean R is not a partition of C(S), contrary to premise.

Consider the following algorithm, sketched in code after the list, that attempts to determine whether there is a superpolynomial-size (compared to S) element of R:

  1. Let S, an initial condition of C, be any finite set.
  2. Order the elements of R in any linear ordering as column headings of a table
  3. Enumerate the elements of C(S)
  4. Check each element for membership in each element of R in order, and record it in a new row under the column of the table headed by the element of R to which it belongs.
  5. When the table is complete, count the rows in each column and write at the bottom of each column the ratio of its number of rows to the cardinality of S.
  6. Let S2 be an initial condition of C with 2|S| elements and repeat steps 1 through 5 with S2, then let S3 be an initial condition of C with 3|S| elements and repeat steps 1 through 5 with S3, and continue for Sn of size n|S| for all n <= |S|.
  7. Look at the column with the largest total number of rows in all the tables combined, and plot the ratios to |S| from that column in each table on a graph to see if it looks like an exponential growth curve.
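
Here is a runnable rendering of that procedure (a sketch with toy stand-ins of my own: power_set plays the role of C(S), cell_of plays the role of the partition R, and printed ratios replace the plotted graph):

    from itertools import chain, combinations

    def power_set(S):
        # Toy stand-in for C(S): all subsets of S, exponential in |S|.
        return chain.from_iterable(combinations(S, r) for r in range(len(S) + 1))

    def cell_of(candidate):
        # Toy stand-in for the partition R: cells indexed by |candidate| mod 2.
        return len(candidate) % 2

    def tally(S):
        # Steps 2 through 5: count candidates per cell, take ratios to |S|.
        counts = {}
        for c in power_set(S):
            counts[cell_of(c)] = counts.get(cell_of(c), 0) + 1
        return {cell: rows / len(S) for cell, rows in counts.items()}

    # Steps 6 and 7: repeat on initial conditions of size k*|S| and inspect.
    for k in range(1, 5):
        S = range(2 * k)
        print(len(S), tally(S))   # each cell's ratio grows like 2^|S| / |S|,
                                  # yet no finite table settles the asymptotics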

Clearly this algorithm cannot determine whether R contains a set of superpolynomial size compared to S. This is because superpolynomial size is a characteristic of the intension of a set, not of its extension. To determine whether any element of R is superpolynomial in size compared to S, and hence whether R is polynomial in size compared to S, we must find the answer in the intension of R, its definition in terms of S.

This may present a problem, for what if R has no definable relation with S other than that |R(S)| is less than x^(p(|S|)) for all real numbers x, all polynomials p and all finite sets S, or some other suitable definition of polynomial-size in relation to S?

To dispense with this problem we need:

Lemma 5a: For any NP-Complete Problem C, any proof that C is in P must contain more information about R(S) than that it is polynomial-size in comparison to S.

Proof of Lemma 5a: Any proof that C is in P that includes and relies upon a lemma that there exists an R(S) that is polynomial-size in relation to S for some NP-Complete Problem C must contain in its proof of that lemma some additional information about R(S) beyond what the lemma itself asserts. Otherwise the proof simply restates, and thus does not prove, the lemma. Since by our premise the proof as a whole relies on the lemma, the proof itself thus fails.

By Lemma 5a, any proof that C is in P includes some premise that Q(R(S)) where Q is some predicate that implies something more about the relationship between |R| and |S| than that R(S) is polynomial-size in relation to S.

Since Q(R(S)) is the same regardless of the size of S, there exists some S such that |S| is larger than the size of the finite set of all valid logical inferences from the theory that includes all the statements that define C, plus Q(R(S)).

Thus to determine in polynomial time in relation to C(S) whether Q(R(S)) implies that R(S) is polynomial-size in relation to S, we simply run through the list of all those valid logical inferences to determine whether any of them state that R(S) is polynomial in size in relation to S.

Alternatively, we could suppose R(S) is not polynomial in size in relation to S, assert that supposition in statement H, and determine whether H conjoined with any subset of the set of statements that together define C imply contradiction.

Either way, we have a method of determining whether R(S) is polynomial in size in relation to S, which takes only polynomial time in relation to C(S).

If R is not polynomial size in relation to S, then by definition we know R is not a polynomial-time V-negative assay of C(S).

So assuming R(S) is polynomial size in relation to S, we need to determine, in polynomial time in relation to C(S), whether [R(S), <c] is a V-negative assay of C(S), which is to say, whether V being false of the least element of every element of R implies V is false of all members of C(S).

Well, this is trivial, since we only have to determine this in polynomial time in relation to C(S), not in relation to S. We simply determine one by one whether V is false of every least element of the elements of R, and if so, then we determine one by one whether V is false of all the remaining elements of C(S). Since, by definition of C, V is decidable on any element of C(S) in polynomial time, our brute force method of determining whether [R(S), <c] is a V-negative assay of C(S) takes fewer than p(|C(S)|) times |C(S)| steps for some polynomial p, hence still polynomial time in relation to C(S).
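
As a sketch of this brute-force check (the function and toy instance are my own; the partition is given explicitly as lists and order_key realizes <c):

    from itertools import combinations

    def is_v_negative_assay(candidates, partition, order_key, V):
        # Check the defining implication: if V is false on the least element
        # of every cell of the partition, V must be false on every candidate.
        r_min = [min(cell, key=order_key) for cell in partition]
        if any(V(x) for x in r_min):
            return True     # antecedent fails, so the implication holds vacuously
        return not any(V(c) for c in candidates)

    # Toy instance: candidates are the subsets of {1, 2, 3}, one cell per
    # subset size, <c orders by sum, and V asks whether a subset sums to 6.
    cands = [c for r in range(4) for c in combinations([1, 2, 3], r)]
    cells = [[c for c in cands if len(c) == r] for r in range(4)]
    print(is_v_negative_assay(cands, cells, sum, lambda c: sum(c) == 6))
    # True (the vacuous case: V is true on the least element of one cell)

The scan visits each element of C(S) a bounded number of times with a polynomial-time V per element, matching the p(|C(S)|) times |C(S)| bound above.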

Thus we have proven Lemma 5, that we can determine in polynomial-time in relation to C(S), which is also S’, whether R(S) is a polynomial-time V-negative assay of C(S).

QED to Lemma 5.

Now we can easily prove our theorem.

Proof of Theorem 2: By Lemmas 3, 4 and 5, C’ satisfies the definition of an NP-Complete Problem. QED.

Why Partitioning Doesn’t Help

It is easy to verify the failure of any specific purported proof that P = NP, but it is hard to prove that any such purported proof will fail.

Now, what if we enumerate not the set of algorithms behind each and every possible proof that P=NP, but the set of algorithm classes behind some set of classes of proofs that P=NP? Could we not perhaps then find some way of classifying these proofs and their algorithms so that the set of these classes is recursively enumerable?

Suppose we did. Then for all but finitely many of the formulae with an exponential number of intersecting sets of its set of satisfying valuations, we would have to show that the choice of which poly-size intersecting set or sets the algorithm chooses for the formula makes no difference to the diagonal argument refuting it.

But we then run into the problem that for infinitely many formulae this choice, which we must prove irrelevant, is from an exponential number of possibilities. Thus any attribute we assert about all of these choices cannot be constructed as a recursively enumerable set. We cannot even assert the irrelevance of all these choices. Our assertion will be diagonalized as being an assertion about only a proper subset of these choices. In the end, then, there is simply no way we can avoid having our purported proof of P != NP getting diagonalized.

Cantor-Dependent Unsolvability Results

From Theorem 2 we can now show

Theorem 3: The problem of proving P != NP is Cantor-Unsolvable.

Solving C’ for any NP-Complete problem C consists of either:

a) proving that C is not just in NP but also in P, which proves P=NP, or

b) proving that C is not in P, which refutes any attempted proof of P=NP that relies on the definition of C as being in NP.

Now it would be natural to suppose that if we accomplish b), we could simply go on to show that our proof is independent of the definition of this particular C as being in NP. Then we could generalize our proof to mean that no C is in P, which is to say, that P != NP.

But ironically enough, if we do manage to generalize our proof in this way, we would be solving C’ in such a way that our method of proof would constitute a proof procedure for all NP-Complete problems C that none of them are in P. If only the set of all NP-Complete problems were recursively enumerable, for then we really would have proven that P != NP.

We have already shown, however, that the set of all NP-Complete problems is Cantor-uncountable, by which I mean it can be diagonalized according to the method used in Cantor’s uncountability theorems.

Thus, as long as we accept the Cantorian proof that there exist uncountable sets, we have to accept that the set of all NP-Complete problems is among them, and that it is thus not recursively enumerable. Since it is not r.e., there is no method of proving for all NP-Complete problems C that C is not in P. The problem of identifying a method of proving that P != NP is in this sense what I call a “Cantor-Unsolvable” problem.

Now, it may seem that if P != NP is unsolvable, then so must P = NP be. After all, proving P = NP by proving for some C that it is in both NP and P would prove P != NP false, right? That’s just the law of excluded middle. But no, my claim is not that P != NP is undecidable. My claim is that any attempt to decide that P != NP by taking a proof that a particular C in NP is not in P, and generalizing that proof to apply it to all C in NP, must fail in a mathematics founded upon Cantor’s uncountability theorems. There will always be a diagonal C not covered by the proof.

If we prove that P=NP by constructing some C that is in both NP and P, then yes, we will have rendered P != NP decidable, for we will have solved the problem of generalizing a proof that some C1 is in NP but not in P to show that all problems in NP are not in P, by showing that no such method can be generalized to apply to C. We would be able to show constructively how, given any method of proving that any other problem in NP is not in P, that method specifically fails to prove that C is not in P.

But if we prove non-constructively that P=NP, by proving the assertion that “for some C in NP, C is also in P,” then excluded middle would give us only a non-constructive proof that any method of proving some C1 in NP is not in P cannot be generalized to apply to all problems in NP. The question would remain whether such a method could be generalized to apply to all problems in NP except some C or species of C-like problems, none of which can be constructed. Such a generalized method would constitute a constructive proof that P != NP, and the uneasy truce between constructivists and non-constructivists would become a war, with the paraconsistent logicians tut-tutting both sides as loud as crows flocking around two wounded bulls in battle.

So it is true that since P != NP is Cantor-Unsolvable, P = NP is not constructively provable. That suggests that perhaps Cantor-Unsolvability and constructive provability are in fact one and the same attribute of problems in general. In fact we can now actually prove this is true:

Theorem 4: Every Cantor-Unsolvable problem is constructively unprovable, and vice versa.

Proof of Theorem 4: A Cantor-Unsolvable problem is one whose affirmative solution can only be achieved by generalizing a formula proven over a finite range to extend it over a non-r.e. range. Suppose a constructive proof of such a problem exists. Then we would have recursively enumerated a non-r.e. set. By contradiction, no such constructive proof exists.

Suppose now we have a constructively unprovable problem. If it were Cantor-Solvable, it could be proven by generalizing a formula over an r.e. set. Such a generalization of such a formula would constitute an effective construction, hence render the proof constructive, contradicting the premise that it is constructively unprovable. By contradiction, if a problem is constructively unprovable, it must also be Cantor-Unsolvable.

We have proven both directions of the equivalence. QED.

So the situation we are facing thus far appears to be that P != NP, and hence also P = NP, are both constructively unprovable, so P vs NP is constructively unsolvable. And P != NP is also non-constructively unsolvable, so we know P vs NP cannot be resolved in favor of P != NP either constructively or non-constructively.

But P = NP may yet be non-constructively provable, so the jury is still out on whether P vs NP is non-constructively solvable in favor of P=NP.

On Cantorian foundations, the P != NP camp are thwarted and on the defensive. They cannot prove P != NP constructively or non-constructively. The P = NP camp are half-thwarted but they still have the upper hand. They cannot prove P=NP constructively, but they can still hope to prove P=NP non-constructively. P != NP believers, and constructivists on both sides of the P vs NP debate, are out of luck in Cantorland. Only the non-constructivist P=NP believers are still in the game in Cantorland, and how many of those are there? Has anyone ever met one?

Why we cannot even effectively assert P != NP

To prove that every possible proof of P = NP, speaking in the generalizable terms of the 3SAT problem, fails to account for all 3-disjunct CNF formulae, we must assert something provable about all possible proofs of P=NP.

To do that, we must recursively enumerate the special exponentiation-busting search algorithms that distinguish the proofs of P=NP from one another.

To do that, we must show that the infinitely many formulae which have an exponentially large set of possible polynomial-size sets of valuations containing some satisfying valuation all conform uniformly to some polynomial-size partitioning of that set in each of them. That is, for all i and j, where i indexes the formulae and j indexes the partition sets, the exponentially many valuations in the <i,j>-th partition must all share some characteristic that renders the choice among them irrelevant to any distinction, among the search algorithms of purported P=NP proofs, that matters to their diagonalizability.

But any assertion that an infinite sequence of exponentially growing sets shares some characteristic must rely on a non-recursively-enumerable set to define that characteristic. Thus no proof of P != NP can even be computably asserted, never mind proven.

What’s more, I have just proven that every possible proof of P != NP is diagonalizable. And that proves that we cannot prove that every possible proof of P = NP is diagonalizable.

Beat the diagonalization of P != NP by rejecting Cantor’s Fallacy

So is that the end of the story? P != NP is not provable, but P = NP is, if only non-constructively?

We have to accept this situation only if we stubbornly insist upon maintaining the validity of Cantor’s uncountability theorems.

Cantor’s Fallacy and Bijection between Countable and Uncountable Sets

In my article on Cantor’s Fallacy, I show that Cantor’s proof that no one-to-one mapping of the positive integers with the real numbers exists, and his later proof that there is no one-to-one mapping between any infinite set and its power set, both rely on a category mistake: the confounding of intensional size with extensional size.

I showed that the true distinction between an infinite set and its power set is that the power set’s definition implicitly incorporates a finite reference to its infinite base set an infinite number of times, and is hence an infinitely long definition in relation to the finite size of the expression in it used to refer to its base set.

The base set itself may also turn out to have an infinitely long definition if it is itself the power set of its own base set. That set, in turn, could be a power set too. Ultimately, however, only a finite number of such iterations is possible, and there must be some base set at the core of it that is not itself a power set, and which has a finite definition with no expressions denoting any infinite sets. That base set is recursively enumerable.

I also showed that diagonalization arguments presume the recursive enumerability of the set being diagonalized, which is to say the finite definability of the set, and hence no diagonal argument applies to any set with an infinitely long minimal definition.

And I showed that this does not bar us from constructing meaningful subsets of finitely undefinable sets, because we can construct them by piggy-backing on the generating algorithm of the infinitely long definition itself, which consists of an infinite sequence of increasingly long finite meta-definitions, each incorporating fully the previous finite meta-definition. We call each successive meta-definition a “revision” of the definition and we take the infinitely long limit of that infinite sequence of revisions to be the definition of the non-recursively-enumerable set we are constructing.

Thus are we able to construct a one-to-one relation between the positive integers and the real numbers, but with the caveat that our method of construction prohibits us from referring to any pairing of a specific integer with a specific real number. This is because an infinitely long definition is, to us mere mortals, an ever-changeable creature.

On the Meaning of Constructivity

Thus our construction is, in a very meaningful sense, “non-constructive.” It is non-constructive in the sense in which that term is commonly understood, for “constructive” is commonly taken to mean that a proof lends itself to instantiation with constants and without variables. But that is not its logical definition.

Constructive proof, technically speaking, simply means proof susceptible to the inference rule of existential instantiation in a proof in classical propositional logic. That kind of instantiation still uses variables, but it treats them as having the characteristics of a typical instantiation, and hence facilitates proofs that rely on operations upon specific instances. Our construction of the one-to-one mapping of the positive integers with the real numbers, and of any infinite set with its power set, is certainly constructive in that sense, because it allows us to reason consistently and productively on the basis of the general behavior of specific instances.

The Intuitiveness of Intuitionism and All Logical Paradoxes Resolved

Moreover, there is nothing more non-intuitive about such a one-to-one mapping than there is about the well-ordering of the real numbers. In fact, our construction relieves us of the need for the Axiom of Choice, and the Well-Ordering Theorem that is its logical equivalent and its sole purpose in life.

And once one gets accustomed to our new way of incorporating Brouwer’s notion of choice sequences into set theory, one realizes that our construction of the one-to-one mapping of finitely definable infinity with finitely undefinable infinity is very intuitive indeed. For it clearly accounts for our intuition that there is a quantitative difference between the integers and the reals, and between any infinite set and its power set: we make it clear that this difference is the difference between the finite and the infinite, but in the intensions, not the extensions, of the sets thus distinguished.

And by relieving infinite extensions of the false burden of carrying our true intuitions about these truly intensional quantitative distinctions, we greatly simplify and clarify the mathematics of infinite extension, so that it matches our intuitions perfectly as well.

For how, after all, can there be a number greater than any number which is not as great as another number greater than any number? There cannot be, and we can finally dispense with the conundra that result from assuming that there can.

We have resolved Russell’s Paradox. We have resolved Richard’s Paradox. We have resolved, in fact, every single logical paradox known. How much better can we do by our intuitions than to resolve all known inconsistencies in the foundations of logic and mathematics?

A New Logical Positivism That Embraces Intuitive Reason

In doing so, have we reopened the insanity of logical positivist reductionism? Certainly not. Logical positivism thrived on its project of disciplining reason against the errors that lead it into paradox. Godel’s theorem finally convinced Russell that reason simply could not be saved from paradox. Logical positivism then splintered into various more humble projects, but none of them gave up its fundamental premise that reasoning as an intuitive act was not to be trusted.

I reject that premise. Reasoning as an intuitive act must be trusted, but not because it is infallible. It is to be trusted because trusting it is simply how it works.

So the art of reasoning is not in whether we trust our inferential intuitions, fallacies and all, but in how.

Or to put it another way, I have proven here that First-Order Logic is consistent, but Godel’s result still stands, showing that First-Order Logic is incomplete. First-Order Classical Logic is sound and it does lead reliably from certainty to certainty, but it does not supplant intuition in the construction of knowledge.

The Ross-Littlewood Paradox Resolved By Rejecting Cantor’s Fallacy

Priority Removal Scheduling

Consider, for example, the Ross-Littlewood Paradox. I will state it in terms of set construction. Suppose we construct a set of non-negative integers by inserting 0, then removing 0 and inserting 1 through 10. Then remove 1 and insert 11 through 20, and for every n > 0, remove n and insert 10n+1 through 10(n+1). How many positive integers are in this set? Infinitely many or zero?

Clearly the answer depends on the order in which the additions and removals are performed. The construction calls for adding every non-negative integer and removing every non-negative integer at some point or another. Any integer we remove first, then later add, will end up in the set. Any integer we add first, then later remove, will end up not in the set.

The way the construction is defined, it is clear that at any given step n, n has already been added at some previous step (or in the case of the first step, the same step), so the answer to the question is clearly that n is added first, then removed. Thus we can answer without reservation that the resulting set is empty.
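
A quick simulation makes the schedule visible (a minimal sketch; the numbers match the statement above):

    def ross_littlewood(steps):
        # Step 0 inserts 0 and removes it; step n removes n and inserts
        # 10n+1 through 10(n+1).
        s = {0}
        for n in range(steps):
            s.discard(n)
            s.update(range(10 * n + 1, 10 * (n + 1) + 1))
        return s

    for steps in (1, 10, 100):
        s = ross_littlewood(steps)
        print(steps, len(s), min(s))   # the set grows (|s| = 9*steps + 1),
                                       # yet min(s) = steps: any fixed k is
                                       # gone once steps > k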

But the paradox is not thereby resolved. The paradox lies precisely in the fact that the contents of the set depend upon the way the constitutive acts of construction, of adding and removing members of the set, are bundled together within the ordered infinite sequence of revisions of the set’s membership. The strict linear ordering of revision allows later revisions to trump earlier revisions.

The contradiction between the scheduled removal of all the non-negative integers and their scheduled insertion is resolved by a temporal metaphor, though the metaphor is really not about time but about priority. Later revisions get priority over earlier revisions as to the final membership of the set being constructed. Because of this infinite deferral of the final say, the membership of the set is never settled at any finite step of the construction. But since an increasing finite segment of the set’s characteristic function is settled at each higher step in the construction, it is clear that the set is scheduled to be emptied out completely.

Emptying Out a Cantorian Ross-Littlewood Set

If we revise the construction to insert not just 10 times the number of integers we remove, but 2^n higher integers for each integer n we remove, then we make this paradox relevant to the question of Cantor’s fallacy.

Even though at each step we are adding a number of integers exponentially larger than the integer we are removing, the priority order in which the insertion and removal are performed compels us to conclude that the set must still ultimately be empty. Even infinitely accelerated accumulation cannot cause one infinite pile to be larger than another infinite pile.

If the defining construction calls for the removal of every element to take priority over its insertion, then the set is empty, regardless of how many other elements are also inserted prior to any given element’s removal.

The Crux of the Ross-Littlewood Paradox

The truly paradoxical core of this paradox is the fact that a strictly ascending infinite sequence of non-negative integers can approach infinity but ultimately converge to zero in the limit, for that is precisely what happens to the cardinality of the nth revision of this set in its construction as n diverges to infinity.

And the only way to resolve this paradox is to accept it, by identifying and rejecting the assumption it proves to be unfounded. The unfounded assumption here is that all strictly increasing infinite sequences of non-negative integers diverge to infinity. This may seem non-intuitive, but only because we too easily fall back on fallacious finitary assumptions about infinity. Cantor’s fallacy contributes to this error by confabulating infinities that are greater than other infinities, when cardinality is clearly a finitary concept that has no applicability to infinite quantities in relation to each other.

In fact Cantor’s uncountability theorem can be used to prove the absurd result that a certain variation of the Ross-Littlewood set is uncountable.

The proof is simple. Construct a set as follows. Insert 0 in it. Remove 0 and insert every integer from 1 to 2^1, inclusive. Remove 1 and insert every integer from 2 to 2^2 inclusive. For every n >= 0, remove 2^n and insert every integer from (2^n + 1) to 2^(n+1) inclusive. Read as characteristic functions, the binary representations of these integers define all the non-singleton subsets of the set containing the first n non-negative integers.

As n approaches infinity, the set we are constructing contains a larger and larger finite segment of every possible non-singleton subset of the non-negative integers. What will be the ultimate result of this construction?

It is certainly wrong to say it will be the union of all the partially constructed versions of the set at each step in its construction, for every step removes as well as inserts something in the set.

One could still try to argue that the set contains only integers with finitely long binary representations, hence that it contains the characteristic functions of only finite sets. This would certainly be the natural conclusion if each step was merely inserting more elements in the set. But since each step both removes and inserts, we know that the set does not necessarily ultimately contain anything that it contains at any finite step in its construction.

If we believe in an infinite ordinal, then the final, w-th, step in the construction of this set could plausibly be construed as inserting all the completed infinite characteristic functions whose increasingly large finite segments are inserted at each successive step in its construction.

One could argue, however, that it could also plausibly be construed as inserting merely the finite initial segments of all the infinite characteristic functions of the infinite subsets.

Conventional wisdom in a Cantor-believing worldview would hold the former construal unreasonable and the latter obvious. But I think the reverse is true, and I can explain why in a way that is compelling to Cantorians and non-Cantorians alike.

Infinitizing the ever-increasing in the leap to infinity

I believe in an infinite ordinal, but only one, w. And I believe that at the w-th step in our construction, there is a leap to infinity, not merely a hand-waving generalization over all the finite steps that precede it. Thus the w-th construction step, it makes sense to me to presume, takes each and every strictly increasing finite quantity in the finite-ordinal steps and infinitizes it as it culminates in the limit at infinity.

Thus if we insert all the characteristic functions of length n at every step n, then at step w, we do not merely acknowledge insertion of all characteristic functions of any finite length n at prior steps; we insert all characteristic functions of infinite length w. Thus we insert at step w enough elements to map one-to-one with the power set of the infinite set of non-negative integers.

Does this mean that at step w, the insertions finally get to trump the removals? If we believe there are more subsets of the non-negative integers than there are non-negative integers, then it is hard to escape this conclusion. But if we acknowledge the simple fact that all infinite quantities have the same cardinality, then the insertion into the set we are constructing of enough integers to map one-to-one with the entire power set of the non-negative integers does no harm to our ultimate conclusion: according to the clear definition of the construction, each number we insert in the set gets removed at a later, higher-priority, step in the construction, and hence the ultimate content of the set is nothing.

If we believe there are more subsets of the integers than integers, however, we can easily convince ourselves that the w-th step in our construction inserts so many elements in our set that the removal it calls for cannot possibly remove them all.

Cantorian objections laid to rest

Of course, the typical Cantorian will simply reject the notion that the w-th step requires insertion of any infinite numbers in the set. For the infinite characteristic functions I speak of in my argument are actually 2-adic binary numbers with infinitely many digits of increasing place-value, so the Cantorian can argue that by adding them in at infinity, I am adding numbers that simply do not meet the membership definition of the set.

My answer to this, if we must keep doing tit for tat, is simply to say that the recursion defines the set, and thus to insist we define the limit behavior of the recursion one plausible way rather than another is simply to choose one of two possible definitions of the set, not to prove that one definition is correct or consistent and the other not.

In any case, it doesn’t matter whether we interpret the construction rule to mean that at the nth step, 2^n - n finite numbers will be inserted, or if we interpret it to mean that all characteristic functions of non-singleton sets of length n will be inserted. Either way, the w-th step inserts what Cantorians would consider an uncountable number of numbers, most of them 2-adic numbers.

In this way, then, Cantorians believe that this particular kind of Ross-Littlewood set is uncountable. But I say, all Ross-Littlewood sets, including this one, are empty, because they all are constructed by removing each and every element added to the set, with the removal taking priority over, trumping, the insertion. This straightforward resolution of the Ross-Littlewood paradox becomes problematic only if we insist on taking the Cantorian view that some infinities are larger than others, and that hence some Ross-Littlewood sets are constructed by inserting more into them than we remove.

Cantorians Are Finitarians

Cantorians, when it comes down to it, are actually finitarians. They do not believe in a true infinite ordinal. The behavior of the Cantorian w ordinal, in fact, is not infinitary at all. It is the behavior of a terminal finite ordinal.

Suppose for some arbitrarily large positive integer w, we define the sub-transfinite numbers as those less than w, and the transfinite numbers as those equal to or greater than w. The result of this will be all of the mathematics of transfinite sets, without the infinite sets.

How large is w? Let it be greater than the absolute value of any integer ever referred to, and greater than the round-up of the absolute value of any real number ever referred to. Thus whenever any integer or real number is mentioned, w is greater in absolute value.
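
As a toy illustration of this definition (entirely my own, in Python): a sentinel object that reports itself greater than any number it is ever actually compared against. Like the w described here, it is “greater than all numbers ever referred to” precisely because a number only gets referred to when the program mentions it.

    from functools import total_ordering

    @total_ordering
    class W:
        # Toy model of w: strictly greater than every concrete number
        # the program ever mentions, and equal only to itself.
        def __eq__(self, other):
            return isinstance(other, W)
        def __gt__(self, other):
            return not isinstance(other, W)

    w = W()
    print(w > 10**100, w > -5.0, w == W())   # True True True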

I’m not saying there can’t be a set of infinitely many finite sets. I’m saying if we construct an infinite set in an infinite sequence of stages, and at each stage we increment the size of its members, then that set will have infinite-sized members. To claim that such a construction results in only an infinite number of finite sets is to ignore the fact that the size of its members gets incremented infinitely many times.

In an infinite-step construction whose steps map one-to-one with the positive integers, we can certainly construct a set that consists only of elements of every finite size n, and no sets of infinite size, but we can also construct a set that has 2^w sets of size w.

Cantorians treat the latter kind of construction as if it were the construction of a finite set with 2^w elements, where w is defined as above, as the number that is greater in absolute value than any number ever mentioned. I treat it as what it is: a construction of an infinite set of infinite sets.

But I have nothing against the Cantorian construction of w. It is very interesting, even as a foundation for mathematics. Though I think I can show that it results in many less-than-ideal difficulties when pressed into that service.

Cantorians believe such a set is “uncountable,” incapable of one-to-one mapping with the positive integers, because they define “finite” as being less than w, and “countable” as mapping one-to-one with w. But 2^w is obviously greater than w, so it must be some transfinite number greater than the first transfinite number, w.

Yet as we have seen, w is just an arbitrarily large finite number deemed to be greater than all specified finite numbers, and defined as the set of all “finite” numbers. Cantorians implicitly assume that all finite numbers are specified, and they are wrong about that. Stating an algorithm for specifying any finite number does not actually specify every finite number. It merely describes how to specify any one of them. The algorithm for specifying any finite number does, in a sense, refer to all finite numbers, but it does not specify all of them because it refers to them in a collective, non-individuating manner, and thus does not specify any of them.

The fact is, 2^w = w, if we are talking about the real w, the truly infinite set of all finite sets. But for the Cantorian w, 2^w > w, because w is actually finite (though unspecified).

Lowenheim-Skolem Paradox Resolved

This sheds new light on the meaning of the Lowenheim-Skolem Paradox, which is the existence of countable models of first-order theories despite the existence of uncountable ranges of first-order predicates definable within those theories.

The conventional wisdom is that the Lowenheim-Skolem Paradox is not a true paradox, because it does not break the consistency or completeness of first-order logic with any formal antinomy the way Russell’s Paradox did to Frege’s naïve set theory.

The reason first-order logic has a countable model, despite being capable of proving the existence of an uncountable set, is that all uncountable sets are actually finite. Thus the paradox disappears. The meaning of “uncountable” in the statements proven in the theory is modeled accurately in the countable model for what it truly is: the attribute of being exponentially greater than the finite number w, which is defined in the model as the finite set of all “finite” sets.

I am not arguing that there actually are only a finite number of finite-sized objects. I am merely showing that a model of the counting numbers that corresponds, in the realm of numbers, with that physical notion makes perfect sense out of first-order logic and every relatively well-studied axiomatization of set theory.

The finite reality underlying Cantor’s transfinite ordinals and cardinals also sheds light on the Continuum Hypothesis (CH). The reason CH is independent of ZFC is that it is merely a choice of whether or not to count transfinite ordinals as transfinite cardinals, or simply to treat the limit ordinals as the cardinality of all the ordinals obtainable from them by finite recursion of the successor operation. CH is clearly definitional in nature when understood in that light.

If it seems nonsensical to model Cantorian talk of infinite ordinals and cardinals using finite numbers that just happen to be greater than the finite number w, which in our model is the least finite number larger than all finite numbers ever specified (uniquely referred to), consider that this is no more absurd than modeling Cantorian talk of uncountable sets using countable ones. And that is something Cantorians are comfortable doing day in and day out.

And so am I. And for the same reasons – because doing so is useful and interesting.

Making Room for True Infinity

But let’s go back to the Ross-Littlewood paradox, particularly my version with the exponentiating insertions.

I have claimed it is obvious that since every number inserted is eventually removed, the set is empty. But how does that square with the intuition that the ratio of numbers removed to the numbers inserted approaches zero, and fast, as n goes to infinity?

It squares with it because that vanishing ratio is the ratio between two finite variables as their assigned values increase with each revision of the set. But the nature of infinity is that it collapses measure.
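A quick numeric sketch (my own illustration, assuming the exponentiating version brings the total inserted up to 2^n at stage n while removing the single integer n) shows both facts side by side: the removed-to-inserted ratio n/2^n vanishes fast, yet every integer k is removed at the definite finite stage k.

    # Exponentiating Ross-Littlewood, as assumed here: by stage n,
    # 2**n integers have been inserted and the integers 1..n removed.
    for n in (1, 5, 10, 20, 30):
        inserted, removed = 2 ** n, n
        print(n, inserted, removed, removed / inserted)  # ratio -> 0 fast

    # Yet the integer k is removed at stage k, so no integer survives
    # every stage; at the infinite step nothing remains.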

Compared to infinity, every finite quantity is the same: infinitesimal. It truly does not matter how large the finite increments are that one variable receives as compared to another, because as long as the difference between the two variables remains finite, that difference will never get large enough to be more than infinitesimal to the ultimate value of both variables, infinity.

The counterargument would be that another variable, set equal to the increasing difference between the two variables, would be strictly increasing, and hence would also have to leap to infinity as we leap to the infinite step. But this does not mean the two variables that increasingly differ end up differing at infinity. Sure, their difference at infinity is infinity, but infinity plus infinity is just infinity, so an infinite difference between two infinite values is no difference at all. It does not cause there to be a greater and a lesser infinity.

It is only the Cantorian’s refusal to accept the inherent nature of infinity that compels the Cantorian to suppose there is something incorrect in this result.

Thus it becomes even clearer that when Cantorians insist that vanishing ratios and divergent differences matter at infinity, they are ignoring the fact that finite quantities, no matter how large, and even infinite differences, do not matter at infinity. Consequently, they are treating infinity as if it were really just a very, very large finite number, “as large as you please.”

Now, it turns out this Cantorian finitism, most commonly pursued today in the form of ZFC, is capable of modeling quite a lot of mathematics. But believing that ZFC is truly interpreted by its infinite model, believing that it actually is a mathematics of infinity, rather than what it really is, which is a mathematics of arbitrarily large finite quantities, prevents its practitioners from recognizing their enterprise for what it is best suited for, and from realizing that beyond it lies a neglected true mathematics of infinity that also has its own place in mathematics, science and philosophy.

A General Method For Refuting Attempted Proofs of P=NP

I have shown how Cantor’s diagonalization method of producing counterexamples to one-to-one mappings actually presumes the crypto-finiteness of the sets being compared. I have also shown how this flaw in Cantor’s proof of the uncountability of the real numbers and of the power set of the integers actually saves our attempts to prove P != NP from diagonal doom.

I am now in a position to explain exactly how one goes about refuting any specific unsuccessful proof that P = NP, the specifics of which I deliberately glossed over earlier in this exegesis.

Above I wrote this:

“All NP-Complete problems concern finite initial conditions and finite candidate sets, but proving P != NP or P = NP requires proving something about all possible finite initial conditions, thus about arbitrarily large finite initial conditions.”

Now that I have also explained that the Cantorian least transfinite ordinal w is actually an arbitrarily large finite number, the above statement takes on new meaning. It can be restated, with substitution:

“All NP-Complete problems concern finite initial conditions and finite candidate sets, but proving P != NP or P = NP requires proving something about all possible finite initial conditions, thus about Cantorian transfinite initial conditions.”

So a proof that P = NP is hard because it requires proving that for every actually finite initial condition, including the elusively defined arbitrarily large Cantorian transfinite w-th finite initial condition, there exists a recursively enumerable sequence of sets Q such that, for some finite integer t, for all initial condition sizes n, Q(n) intersects the satisfier set (if non-empty) for the n-sized initial condition, and |Q(n)| < n^t.
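Spelled out in quantifier form, with S(n) denoting the satisfier set for the n-sized initial condition, the requirement reads:

    there exist an r.e. sequence Q and a finite integer t such that, for all n:
        S(n) != {}  implies  Q(n) ∩ S(n) != {},   and   |Q(n)| < n^t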

The reason this is hard to prove is that it is unclear what “finite” means.

If by every instance of the term “finite” we mean specifiable (Cantorian sub-transfinite), then we can neither prove nor disprove P=NP in the case of any specific polynomial time algorithm that generates the sequence Q(n), without essentially assuming what we are trying to prove. Suppose we have such an algorithm. Then for any Q(n) we can construct, in a number of steps polynomial in n, a minimal subset Q'(n) that contains only one element, which of course is also in S(n). Thus we have a polynomial time algorithm for generating the sequence Q'(n) of singleton sets of a member of S(n), from which in polynomial time we can also construct a sequence Q”(n) that maps each n to a member of S(n) if the latter is non-empty, and to the empty set if it is empty.
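Here is a minimal sketch of that refinement (my own illustration; Q and verify are hypothetical placeholders, with Q(n) yielding the polynomially bounded candidate set and verify(n, c) checking membership in S(n) in polynomial time):

    # Sketch of the refinement Q(n) -> Q'(n) -> Q''(n) described above.
    # Q and verify are assumed, hypothetical inputs.
    def make_Q_double_prime(Q, verify):
        def choose(n):
            for c in Q(n):
                if verify(n, c):
                    return {c}   # a singleton subset of S(n): this is Q'(n)
            return set()         # the empty set when S(n) is empty
        return choose

    # Toy usage: let S(n) be the divisors of n greater than 1,
    # with candidates Q(n) = 2..n.
    toy_Q = lambda n: range(2, n + 1)
    toy_verify = lambda n, c: n % c == 0
    choose = make_Q_double_prime(toy_Q, toy_verify)
    print(choose(12), choose(7), choose(1))  # {2} {7} set()

If |Q(n)| is polynomial in n and verify runs in polynomial time, then choose(n) runs in polynomial time and maps each n to a member of S(n) whenever one exists, which is exactly the choice-function behavior of Q”(n).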

It is not hard to see that Q”(n) is a choice function, and that any assertion of its existence for all transfinite n implies the Axiom of Choice. It is also straightforward to see that the Axiom of Choice implies it. Now, of course, in ZF the Axiom of Choice is not needed for countable sets, never mind finite sets, but in our model we are representing the entire arithmetic hierarchy with finite integers. So for the integers greater than or equal to w, whether they be the transfinite ordinals (ZFC + ~CH) or the transfinite cardinals (ZFC + CH), we need the Axiom of Choice to prove P = NP, and its negation to prove P != NP.

Thus ZFC |= P=NP and ZF+~AC |= P!=NP. To see this, however, it is necessary to model ZF in such a way that proves it is consistent, and the only way to do that is to represent w as “the least positive integer greater than any positive integer ever specified.”

The Axiom of Choice and its negation are each sufficient and necessary for deciding P vs NP because they each supply the “something about all possible finite initial conditions, thus about Cantorian transfinite initial conditions” required to do it.

So the mystery of P vs NP is resolved to the extent it has been puzzled over up to now.

However, like all great mysteries, its resolution opens a new vista upon an even greater mystery.

The Almost-Trivial Proof that P=NP

Suppose now we define “finite” as the opposite of true infinity, not of Cantorian pseudo-infinity. This is the Brouwer infinity of infinite choice, of patternless randomness. It is the infinity that answers the paradox of random sequences: in order to avoid conforming to any pattern, they must have no definable property, and must hence be so bland as to seem perfectly homogeneous. But how can something perfectly homogeneous be random?

The answer to this paradox that we can now give, is that truly random sequences have finitely undefinable properties and hence, far from being homogeneous, are infinitely heterogeneous.

So here is the new mystery. Suppose we mean the opposite of true infinity, not the opposite of Cantorian pseudo-infinity, when we say:

Theorem 3: P=NP.

Definition: “poly-n” is any polynomial expression in the variable n.

Lemma 7: P=NP iff there exists a recursively enumerable sequence of sets Q such that, for some finite integer t, for all initial conditions n of size poly-n, Q(n) intersects the satisfier set (if non-empty) for the poly-n-sized initial condition, and |Q(n)| < (poly-n)^t.

Proof of Lemma 7: This follows directly from Lemma 1 by applying it to the case where |S| = w, the Cantorian pseudo-infinity.

QED to Lemma 7.

Then by Lemma 7, t could be w or any higher Cantorian pseudo-infinite number, if we allow that such numbers exist. If they do, then it is trivial to prove that P=NP: the term “recursively enumerable sequence” allows us to enumerate the entire arithmetic hierarchy, because we are running the recursion in our countable model and not inside the narrative world of the (also countable) theory of Cantorian sentences, in which the Cantorian pseudo-infinite numbers are regarded as being not merely actual infinities, but all the infinities there are.

We simply set t = w and we are done, because “for all initial condition sizes n” refers only to specified values of n, for initial conditions by definition are specified conditions, and thus by the definition of w “all initial condition sizes n” are less than w. Trivially, then, even if our clever search algorithm is not so clever after all, and is even equivalent to brute force search, say with |Q(n)| = 2^n or even n^n, then |Q(n)| < n^t for all n, because we have set t = w, and by the definition of w and of the initial condition n, m^poly-n < n^w for all m, n < w.
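To see arithmetically why even brute-force bounds clear the bar once t = w, note (my own restatement of the inequality above) that for 2 <= n < w:

    2^n = n^(n / log2(n)) <= n^n < n^w

since n / log2(n) <= n and n < w. The same rewriting handles any m^poly-n with m, n < w, because the resulting exponent is itself a specified finite quantity and hence less than w.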

QED to Theorem 3.

So in the real world, the world where there exists such a thing as true infinity, P = NP, but trivially so, since the equation tilts in the opposite direction from the one we expected. And therein lies its kernel of non-triviality.

The Non-Trivial Real-World Implications of Trivial P=NP

Typically we have taken P = NP to mean that NP problems become tractable for computation, that NP problems all reduce to easy P problems. But it turns out that P = NP means the converse: even polynomial-time computable problems can require, and for all the interesting and hard NP cases we want to solve do require, in reality, nearly exponential time to solve. Which is to say, proving a problem is in P does not necessarily prove it can be computed significantly faster than any problem in NP.

Well, the good news is that we already knew that. So my result is not shocking, but rather confirms in theory what experience in practice has already shown: that in the real world P = NP, meaning the classification of a problem in terms of the exponential or polynomial expanse of its possible solutions in relation to the size of its initial conditions is not even a meaningful practical measure of its finite computational complexity. We have proven here merely that this is the case even in theory. And I believe theory that matches practical experience is in some important sense superior to theory that does not. It is precisely this kind of correspondence to practical experience that lies at the heart of foundational decisions in mathematics, such as in the debate over which specific set of axioms to adopt for any given area of mathematics.

The key thing to understand here is what we are doing when we set t=w. We are saying that we will still count a problem as being in P if the computation time is the initial condition size n, raised to the power of the least finite positive integer greater than any positive integer ever specified, with the understanding that setting any initial condition of size n is specifying n.

Yes, this is cheating. Yes, we have simply in some sense assumed what we are proving. But no, this does not mean we have not proven P = NP. We have indeed, and the proof is indeed trivial.

What is not trivial is that we have shown that by modeling ZFC in the only way it can be modeled as both consistent and complete, then looking at the P vs NP question in that canonical model, we find that P = NP becomes an almost trivial result, but not quite trivial.

The almost trivial proof that P = NP is actually very significant because it unmasks the real question behind P vs NP, which is whether there is a way to achieve specific-exponent polynomial-time computation of an NP problem. In other words, if we consider only t < w, does P = NP?

This reframing of the real question actually opens the way to a decidedly non-trivial answer.

Definition: Let us call “P<” the class of problems solvable in n^t time, where t < w.

Then the real question is whether P< = NP.

The answer is that P< != NP.

But don’t take my word for it. Let’s prove it.

Proof that P< != NP

Theorem 4: P< != NP

Proof: Since we reject Cantor’s fallacy, we accept the existence of a one-to-one mapping between any infinite set and its power set, with the caveat that we can only refer to a generic, not any specific, member of that mapping. Thus we can at least assert P< != NP, because our list of all possible choice functions that shortcut a search into an exponential candidate pool under some finite number t < w cannot be diagonalized.

Since we can generalize over all possible polynomializing choice functions as a non-diagonalizably listed set, we can partition that set into equivalence classes based on the least satisfying element mapped to for each initial condition. Each partition’s members reduce to that partition’s singular choice function, which maps each initial condition n < w to some satisfying element, or to the empty set if there is no satisfying element for that n.
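In symbols, each partition’s singular choice function amounts to something like the following (my restatement of the construction just described):

    f*(n) = the least satisfying element of S(n),   if S(n) != {}
    f*(n) = {},                                     if S(n) = {}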

This puts us in position to diagonalize. In the case of 3SAT, given any such choice function, we can construct a 3-disjunct CNF formula such that no singular choice function is able to specify a satisfying valuation for it, even if one exists.

There are two cases.

First, if w exists, then there are only finitely many specific CNF formulae, because there are only finitely many CNF formulae of size < w. Thus we simply assert the non-constructive existence of some CNF formula of finite size > w, and no choice function can map it to either a satisfying valuation or to the empty set, because doing so would require specifying it as a member of an ordered pair which is, in turn, a member of the choice function.

Second, if w does not exist, then there is no unspecified finite number, and hence there are truly an infinite number of specific CNF formulae, because there are CNF formulae of any finite size. But what becomes of our choice functions when they are constructed in the infinite limit to range over the true infinity of all CNF formulae? There are two sub-cases, call them Case 2a and Case 2b.

In Case 2a, each such choice function infinitizes the formulae along with itself. Thus each choice function includes in its domain every possible infinite conjunction of triple-disjunctions. This, however, is impossible for any choice function, because a choice function by definition specifies all its members, uniquely pairing each formula with either a specific satisfying valuation or the empty set if no such valuation exists, for otherwise it is not serving its defined purpose of separating satisfiable formulae from non-satisfiable ones. Since all choice functions with infinitized domains that infinitize the members of those domains are finitely undefinable, the mappings they represent are unspecifiable; hence, in being infinitized, they cease to be choice functions, and thus cease to prove that P< = NP.

In Case 2b, each such choice function infinitizes none of the formulae in its domain along with itself. Thus each choice function includes in its domain every possible finite conjunction of triple-disjunctions, but no infinitely long formulae. We have already shown, however, that any set constructed as the infinite limit of a sequence of recursively defined increasingly inclusive sets cannot be truly infinite without infinitizing its members. For if we define such a set so as to avoid infinitizing its members, for example as the union of all members of some recursively defined set of increasingly inclusive sets, then we are defining a set with infinite extension but finite intension. Such a set, as it turns out, and as we have shown above, is in fact truly finite in cardinality, as it has true cardinality w in the only consistent model of ZFC there is. Thus Case 2b simply reduces to Case 1.

In conclusion, we have shown how P< != NP by showing that any proof that P< = NP relies on the existence of a choice function that ranges over all possible initial conditions of the problem and picks out a unique satisfier, if one exists, for those initial conditions from among exponentially many candidates, and we have shown that this choice function must fall into one of the following three categories:

1) Cantorian sub-transfinite, meaning truly and specifiably finite in extension, thus failing to account for initial conditions of size greater than its own size, which is the same as saying in Cantorian terms that the set is recursive and thus diagonalizable, or

2) Cantorian pseudo-infinite (recursively “infinite” in extension but finite in intension, thus not infinitizing its elements), thus in fact truly (though unspecifiably) finite in extension, thus, like in category 1), failing to account for initial conditions of size greater than its own size, which is the same as saying in Cantorian terms that the set is recursive and thus diagonalizable, or

3) Truly infinite (infinite in both extension and intension, thus infinitizing its elements), thus incapable of uniquely specifying any satisfying valuations at all.

Since in all three cases, the choice function fails to tie more than a proper subset of all possible initial conditions to satisfying valuations, no choice function exists that can do so, thus an essential ingredient of any proof that P< = NP is missing, and hence P< != NP.

Moreover, this is true whether or not one assumes that there is such a thing as an unspecifiable finite number, for our cases cover both the possibility that such numbers exist and the possibility that they do not.

QED.

Godel’s Second Incompleteness Theorem DeCantorized

Godel’s Second Incompleteness Theorem, in light of our finitary canonical model of ZFC and the Brouwerian true infinity that lies outside that model’s scope, also becomes a proof that P< != NP. It states that any finite set of axioms of ZFC can be proven consistent in ZFC (so there is a poly-time verification algorithm), but that no proof exists in ZFC that all finite sets of its axioms are consistent (so there is no poly-time search algorithm to recursively enumerate the models proving each set of its axioms consistent). Thus it amounts to saying that the consistency of the finite sub-theories of ZFC is an NP problem but not in P<, because it is equivalent to saying you can construct a choice function that specifies a model for each and every finite sub-theory, but only as a set that falls into one of the three cases described above:

1) Cantorian sub-transfinite, meaning truly and specifiably finite in extension, thus failing to account for initial conditions of size greater than its own size, which is the same as saying in Cantorian terms that the set is recursive and thus diagonalizable, or

2) Cantorian pseudo-infinite (recursively “infinite” in extension but finite in intension, thus not infinitizing its elements), thus in fact truly (though unspecifiably) finite in extension, thus, like in category 1), failing to account for initial conditions of size greater than its own size, which is the same as saying in Cantorian terms that the set is recursive and thus diagonalizable, or

3) Truly infinite (infinite in both extension and intension, thus infinitizing its elements), thus incapable of uniquely specifying any models at all.

But Godel’s Second Incompleteness Theorem can also be understood with the assumption that the upper-bounding exponent t is allowed to be w, the least unspecified finite number. Under that assumption, like in our general treatment above, it becomes an almost trivial proof that P = NP.

Trivial Proof of the Consistency and Completeness of ZFC

But what does this almost-trivial equation P = NP mean for the consistency of ZFC? It means the finitary model of ZFC that proves its consistency is also complete. We can actually formulate and prove within ZFC the theorem that ZFC is consistent under our finitary model, because in our finitary model the statement enumerating the set of all sets of axioms of ZFC, which is forebodingly non-recursively-enumerable in infinitary models of ZFC (models that attempt to interpret Cantorian pseudo-infinity as true infinity), becomes a trivially finite statement of size 2^w.

Pseudo-Infinity in ZFC and Heisenberg’s Uncertainty Principle

Now, of course, the odd thing about this is that within the pseudo-infinite world of ZFC theory this finite statement does not stand out as anything special, for it is just another finite formula, and ZFC thinks of its own axiom schemas as infinite, so although ZFC does in fact prove its own consistency, in some basic sense it still does not know it.

Fortunately this blissful ignorance is merely a subjective matter, for we can show with full rigor that ZF does in fact prove a statement that our model dutifully interprets as asserting the consistency of ZF. Thus our model is complete, and under our finitary interpretation of ZF, ZF is both consistent and complete.

Since the sense in which ZF does not “know” it proves its own consistency is entirely subjective, is there a way of “doing” ZF so as to realize, as we are doing it, that this otherwise nondescript finite theorem is in fact CON(ZF)? Well, the answer to that is yes and no. Yes, we can do ZF as if we knew it proved its own consistency, for that is actually what we do all the time. After all, if we ever actually did math or even set theory in ZFC without assuming it was consistent, we would surely have an easy time proving all our conjectures, since they would all be true, including the ones that imply each other’s negations.

But the answer is “no” in the sense that we cannot, in ZFC, ever, as it were, become lucid in its dreamworld and realize that the infinities it flies around in are actually just unspecified finite numbers. And the finite formula that encodes its consistency can never be recognized as such within ZF because to recognize it we must first specify it, and we cannot specify it either within ZF or even outside it, without in effect chasing the spirit of its definition to reside in some larger formula of unspecified size. So it behaves according to something akin to Heisenberg’s Uncertainty Principle. We cannot both specify it and recognize its existence at once. If we recognize its existence, we also recognize that we cannot specify it. If we specify it, it cannot exist as what we have specified. Its ontological position and momentum do not coexist, if we understand its ontological position to mean its unique reference, its specifiability, its extension, and understand its ontological momentum to mean its meaning, its definition, its intension.

We can conjecture, in fact, that the Heisenberg Uncertainty Principle is in fact just a special case of the quixotic nature of the least unspecifiable infinity, for the finite speed of light and Planck’s constant are just applications of that number to theoretical physics, and their function in establishing a virtual infinity (finite maximum) and a virtual zero (finite minimum) for energy lies at the heart of quantum mechanics.

True and Pseudo Infinities at the Foundations of Mathematics

True infinity also has some interesting cosmological consequences. For example, if I were to sit in this room forever, with my limited view of the universe, would I eventually see every event that has ever happened anywhere repeated before my eyes? The existence of true infinity implies that yes, I would. But if the only infinity is Cantorian pseudo-infinity, I could prove that no, I would not.

What I have shown here is that whatever may be proven about Cantorian pseudo-infinity is simply not true of true infinity, and that there is nothing inconsistent in this fact. We can make use of Cantorian pseudo-infinity for all it is worth to us, and it is worth a great deal, yet also recognize that it is not about true infinity, and that we can also theorize about true infinity for all that is worth to us, and it is also worth a great deal.

And I think this position on the foundations of mathematics and logic reflects the general consensus today among mathematicians and logicians, that however we may like or dislike this or that foundational program or project, we are all as a community better off on the whole for having all these programs and projects to ponder, than we would be if some of them were not open to us for inquiry. I have proven here that they may coexist in a single complete and consistent model for the foundations of mathematics.

The discovery here of the finiteness of Cantorian pseudo-infinity is especially important to modern mathematics precisely because so much of mathematics is currently modeled in ZFC. Thus a lot of confusion has also arisen in mathematics over the last century and a half because of Cantor’s fallacy. By correcting that fallacy, we are able to provide simple and elegant solutions to many problems involving Cantorian pseudo-infinities being mistaken for true infinity.

One of these, for example, is the Poincare Conjecture, which was solved by Perelman by demonstrating that a Cantorian pseudo-infinity actually reduces to a finite number. Perelman showed no understanding in his proof, however, that this was in fact what he had done. He understood his proof as something anomalous within the Cantorian worldview.

What I have shown in this paper is that Perelman’s result comes into focus, makes sense, in a way becomes a trivial result, from the standpoint of a Kantified Brouwerian and Kroneckerian worldview that embraces the Cantorian worldview as a useful fiction within its imaginary ambit.

It would be interesting to explore, as well, how Abraham Robinson’s “non-standard” analysis may also be understood as an instance of seeing through to the true finiteness of a Cantorian infinity, such that it ought to be called “true analysis” rather than “non-standard.”

An exploration of the relationship between seeing through to the finiteness of Cantorian pseudo-infinities and seeing through to the regularity of Mandelbrot’s monsters also promises an interesting journey.

May 07

Dred Scott and Covington Drawbridge – Taney’s Crucible For Civil Liberty and Equality

Taney’s Defense of Robust Citizen Liberty and Equality in Dred Scott

U.S. Supreme Court Chief Justice Roger B. Taney ruled in the 1857 Dred Scott case that state “citizens” in the meaning of the U.S. Constitution are protected by the Bill of Rights against all state, territorial and federal laws except their home state laws. This makes perfect sense since each state’s citizens have never delegated any of their sovereignty to other state governments, only to their own and to the Union, and are not represented in other state legislatures, only in their own and in the federal legislature. The laws of other states apply to them only via the U.S. Constitution’s Full Faith and Credit Clause, which requires all states to accept the authority of each state’s laws over anyone in its own territory and over its own residents wherever applicable.

Thus under pre-Reconstruction Dred Scott, citizens may lack privileges or immunities in their home states that they enjoy in other states. This fact may seem to violate the notion that one should be at least as free at home as anywhere else, but in fact one is. For each citizen is free to change the laws at home but has no say in the laws of other states. Thus any citizens who have reserved fewer privileges and immunities under their own state laws than they are guaranteed in the rest of the Union still have full authority to seek redress of this inequity through petition or representation in their own state government. And they are even free to grant themselves greater privileges and immunities at home than the federal Constitution guarantees them elsewhere in the Union.

The only inequality in Taney’s construction of Article IV is that each state’s citizens are free to impose stronger or weaker legal restrictions on themselves than they are permitted under the Constitution to impose on citizens visiting from other states. This is an inequality no more repugnant to democracy or republican government than the ability of any private association of citizens to impose terms of an associational contract upon themselves that they may not impose upon citizens outside their association. It is only the difference between self-determination being permitted and the usurpation of others’ self-determination being forbidden.

Taney concluded that the word “citizens” in the Constitution was not meant to include blacks, nor any other non-whites, since at the time the Constitution was ratified:
– every state but Pennsylvania had anti-miscegenation laws, and even Pennsylvania did at the time of the Declaration of Independence
– all states denied blacks the right or duty to serve in militia
– most states severely restricted the rights of free blacks visiting from other states in almost every way states are explicitly prohibited, under the Bill of Rights, from restricting citizens visiting from other states
– no state ever questioned their own authority to continue enforcing all these restrictive laws against free blacks visiting from other states
– no state complained to another state that its own free blacks were restricted in ways their white citizens were not when in the other state

Taney said, however, that the same generic language, if used in a more contemporary law, would presumably include non-whites. This aspect of Taney’s ruling could have been used to strike down the draconian Black Codes in “free” states like Illinois, whose 1853 revised state constitution used generic language in its exhaustive Declaration of Rights equally protecting all people in its territory. It might be supposed that the Illinois legislature’s apparent sanguinity about the compatibility of its Black Codes with its 1853 Constitution’s Declaration of Rights would justify reading that Declaration as implicitly excluding blacks, just as Taney read the Constitution as such in light of the framers’ contentment with its compatibility with the Black Codes in force in their states at that time.

But Taney believed the times had changed. After twenty years of what amounted to almost a national popular obsession with radical egalitarian abolitionist public agitation and debate on the Negro question, enough citizens might be expected to read the Declaration’s generic language, as approved by the Illinois legislature in 1853, to include blacks. Thus, according to the strict construction the Taney Court had applied to all State constitutions for two decades running, the State of Illinois must be held to a presumptively inclusive meaning of the generic terms in its recently approved constitutional Declaration of Rights.

By contrast, the framers of the U.S. Constitution not only would have found it necessary to make express any intent to include Negroes in generic terms referring to citizens, but were also framing a Constitution for a government that did not yet exist, and that thus had no existing laws of its own at all, never mind Black Codes, that the new Constitution might potentially overturn. Nor did the Constitution delegate any powers to the new government it proposed forming which would in any way give it scope to strike down any state’s laws instituting social hierarchy among its own inhabitants, such as the Black Codes.

This new Constitution for a new government would also need to be ratified by three-fourths of the state legislatures before it would have any legal effect at all, and thus the new Constitution was actually, at the time of its passage, still subordinate to the collective will of the state legislatures, who had no inclination to part with their Black Codes, and it carried no authority under law over those Black Codes until it was ratified. Thus it only made sense to construe that new Constitution only and strictly in ways that were compatible with the existing state constitutions and laws of three-fourths of the states in force at that time, or which were repealed shortly after ratification.

The 1853 Illinois Constitution, however, had direct legal supremacy over all Illinois state laws from the moment it was passed going forward, including over its Black Codes. While its passage did not immediately overturn those laws, it did leave them open to challenge in Illinois courts, whose constructions and applications of its own state constitution were ultimately subject to final review by the U.S. Supreme Court. Thus in passing the 1853 Illinois Constitution with conspicuously generic language referring to people, inhabitants and citizens in its Declaration of Rights, it is plausible to suppose that the legislature was acting quietly to assure that its Black Codes would ultimately, though not immediately, be overturned, if not by state courts then ultimately by the U.S. Supreme Court. It could have intended to set the Black Codes on a course of extinguishment, but in a way that left open the possibility of leaving the Federal judiciary to make the final call and take the final political responsibility for doing so.

Abolitionists did not want to accept Taney’s conclusion, because they felt unable to muster support for a Constitutional Amendment granting citizenship to free blacks, especially if citizenship must come, as Taney insisted it must, with both full Bill of Rights protection everywhere but one’s home state and genuine equality with whites. For the Bill of Rights would exempt black citizens of each state from the Black Codes in every other state they visit, and genuine equality would strike down anti-miscegenation laws everywhere in the United States.

The Republicans could not maintain national power without the alliance between the more mildly racist free states like Pennsylvania and Massachusetts and the more virulently racist free states like Illinois, which not only maintained pseudo-slavery status for free blacks but permitted actual slaves to be rented in from out of state a year at a time.

So Republicans had to reject one or both of Taney’s premises if they were to forge a coalition to grant citizenship to blacks. They had to contend that granting them citizenship in the meaning of the Constitution either did not give all citizens genuine equality under the law, thus letting anti-miscegenation laws stand, or did not guarantee citizens full Bill of Rights protections in federal territories and other states, thus letting Black Codes stand, or both. Even radical abolitionists like Arthur Tappan and William Lloyd Garrison had from their earliest publications disavowed any intent to promote or encourage interracial sex or marriage, and Democrat politicos tried their best to baste Republican candidates with the charge that they favored amalgamation of the races, eliciting their frequent and indignant disavowals and charges of libel.

Taney, however, saw the danger in interpreting the Constitution as permitting states to invalidate the marriages of each other’s citizens to preserve traditional barriers of social caste. He himself was a Catholic married to an evangelical Protestant in a nation that had given 23% of the popular vote to the violently anti-Catholic “Know Nothing” American Party ticket in the Presidential election held just months before he handed down the Dred Scott decision.

Moreover, among Taney’s close political allies at Jackson’s right hand in the Democratic Party were two miscegenists, one open and one closeted:

– Vice President Richard Mentor Johnson, Andrew Jackson’s hand-picked running mate for his hand-picked successor Martin Van Buren, the year after Jackson won a three-year battle to get Taney confirmed to the Supreme Court. Johnson, while earlier serving thirty years in Congress, had openly avowed the sanctity of his illegal common-law marriage to his one-eighth-black slave, and lavishly celebrated the weddings of their two mixed-race daughters to scions of prominent white families.
– Charles Carroll of Carrollton, Taney’s third cousin once removed, and also the illustrious sole Catholic Signer of the Declaration, for whose sole financial support, as the wealthiest man in the colonies, religious tolerance towards Catholics was written into the Constitution. Carroll’s illegitimate mulatto son Daniel married a slave named Rachel and raised a son, Charles Henry, on the family’s historic estate at Doughoregan; Charles Henry became a Methodist minister, and his daughter Lillie Mae Carroll married a black preacher, Keifer Jackson, and went on to fame as the Mother of the Civil Rights Movement from her base of operations in the Baltimore NAACP from 1935 to 1974. Taney may not have known that Carroll’s great-granddaughter, who was also his fourth cousin twice removed, would grow up to found the Civil Rights Movement, but as Carroll’s personal lawyer who had drawn up his will, Taney was certainly intimately familiar with all the legally significant details of Carroll’s family circumstances.

And Carroll was not the only cousin Taney helped with legal issues involving miscegenation. In 1829 Taney came up with an ingenious legal strategy to help his mulatto cousin Nicholas Darnall win affirmation from the U.S. Supreme Court that he was free by implied manumission through bequest of property, establishing a landmark principle in the law of manumission, and that his racial heritage was therefore no impediment to his valid title to the sizeable landed estate and grand manor he had inherited. Taney managed to pull off this feat by evading the question of Darnall’s standing to sue in federal court. Officially representing the other party in the suit, Legrand, who was a friend of Taney’s playing along to help, Taney orchestrated both sides of the case, and when the opportunity arose to challenge Darnall’s standing to sue, Taney simply declined on behalf of his friend Legrand, and the case proceeded to the merits. Taney openly explained all of this in several paragraphs of detail right in his Dred Scott ruling, in response to Scott’s counsel raising the case as an issue in oral argument and brief.

Taney’s fears concerning the allowance of a caste system among citizens were realized in 1879, when the post-Reconstruction Texas Court of Appeals (the highest court in Texas) ruled that the state had every right to criminalize even interfaith marriages (using the example of Christians and Jews, but clearly implying Protestants and Catholics could be banned from marrying as well), never mind interracial marriages. The court based its finding on the U.S. Supreme Court’s 1873 Slaughterhouse Cases, which presumed the Curtis construction of the Privileges and Immunities Clause, following the spurious ruling to that effect four years earlier in Paul v Virginia, without even acknowledging the Taney construction in Dred Scott.

Indeed, the British Parliament is only just this year, 2013, planning to repeal a law forbidding the Royal Family from marrying Catholics. Taney had good reason to fear that states might use any weakening of full citizen equality jurisprudence to start persecuting Catholics again, to the point of even annulling or even criminalizing his own marriage. Taney’s beloved wife of forty-eight years died suddenly only months before the Dred Scott case came onto the docket in late 1855. The only public statement she ever recorded was her sworn testimony in support of the freedom of a black man she knew as a child against an attempt to remove him from the state of Maryland as a fugitive slave. Her husband battled in court for years before finally vindicating the man’s freedom.

Before the Curtis construction took root, however, some pragmatically egalitarian abolitionists in the Republican Party first tried and failed to achieve Taney’s vision of robust citizen liberty and equality while rejecting, for political and ideological reasons, his method of constructing it from the Constitution.

The Failed Abolitionist Gambit on Citizen Higher Law Equality

John Bingham first formulated the Fourteenth Amendment as a mere empowerment of Congress to enforce the Privileges and Immunities Clause. He believed that clause already guaranteed full Bill of Rights protection to citizens against the laws of every state, including their own states. In this he disagreed with Taney only in that Taney believed its protection did not apply to a citizen’s home state laws. But this difference was huge in its implications, for Taney had for decades championed state sovereign independence from Congressional control on civil rights.

Like Hamilton, Taney believed enforcement of the Privileges and Immunities Clause was best left to the federal judiciary in diversity jurisdiction. Unlike Taney, Bingham rejected Barron v Baltimore, and thus believed that the people of each state, in ratifying the Constitution, had relinquished their authority to impose upon themselves through state law any restrictions on their rights that the federal government was forbidden to impose, including at least those enumerated in the first eight Amendments of the Bill of Rights.

Unlike Taney, Bingham read the phrase “citizens in the several states” to mean “citizens of the United States, under the laws of every state” so that the full meaning of the clause for Bingham was:

“The citizens of each state shall be entitled to all the privileges and immunities of citizens of the United States, under the laws of every state.”

Bingham agreed with Taney, however, that Congress had been granted no authority under the Constitution to enforce the clause upon the states, as Taney had ruled in Dennison. Bingham believed Congress should be granted authority to enforce it. Taney believed that authority was best left with the federal judiciary.

Alexander Hamilton, in Federalist No. 80, explained that the framers had chosen to place the burden of enforcing the Privileges and Immunities Clause solely in the federal judicial branch to prevent Congress from gaining, in effect, plenary legislative authority over state governments. But Bingham was not satisfied with leaving the federal judiciary to enforce the Privileges and Immunities Clause.

If Bingham was right that the clause originally meant to restrain citizens of each state from violating their own federally enumerated rights through state law, then Hamilton must have been terribly confused about that clause when he asserted that the Diversity Clause provides for the judicial enforcement of the Privileges and Immunities Clause. For if Bingham’s construction was correct, then clearly the Diversity Clause does not give the federal judiciary full enforcement power over that clause, because it does not provide federal jurisdiction for citizens to sue their own states for violations of the Bill of Rights (Hamilton was writing years before the Eleventh Amendment forbade suits by individuals against states). But apparently Bingham believed Hamilton misunderstood the Privileges and Immunities Clause, and that he and the other framers had goofed and omitted to fully secure its enforcement even by the judiciary, despite Hamilton having esteemed it “the basis of the Union” and declared that it had been plainly enforced through the Diversity Clause.

In light of Hamilton’s view, it certainly seems that Taney’s construction was far more plausible than Bingham’s, but Bingham was a legislator, not a judge, and he was not doing anything terribly unexpected of a legislator in trying to force-fit the Constitution to suit the prevailing Republican political agenda.

Bingham also did not subscribe to Taney’s reasoning that state laws were restrained by the Bill of Rights only because they were applied to citizens of other states exclusively through the authority of the U.S. Constitution. His reason for rejecting Taney’s approach was most likely politically ideological. Taney had used that reasoning in Dred Scott to strike down the already-repealed Missouri Compromise, ruling that Congress had no power under the Constitution to ban slavery anywhere, not even in U.S. territories. The entire Republican platform in the 1860 election turned on rejection of that reasoning and that conclusion in Dred Scott. Having fought and won the election on that platform, then fought and won a horrendous war to enforce their electoral victory, the Republicans were not about to turn around and admit that Taney was right after all.

Instead, Bingham believed states had all agreed, in ratifying the Constitution, to allow any person that any state deemed a citizen of that state at that time, including free black citizens of a number of states who were regarded by those states as its citizens, to be protected by the Bill of Rights against all state and federal laws throughout the Union. He believed that states were in violation of the Constitution when they applied their laws restricting free blacks to those few visiting free blacks who were deemed citizens by their home states, but that the Constitution had left the enforcement of those rights to the federal judiciary as Hamilton had explained, and that the federal judiciary, beholden to the Slave Power from the outset, had unjustly refused to enforce those rights at all. He also believed free blacks deemed citizens of their own states had full Bill of Rights protection against their own state’s laws, but that the Constitution made no provision at all to enforce those home state rights, but rather, left it up to each state to adhere to voluntarily.

Thus Bingham believed Barron v Baltimore’s finding in 1833 that the Bill of Rights do not apply against state laws had been decided wrongly by the Marshall Court and should long since have been reversed. For Bingham, the Marshall Court should have recognized state citizens’ Bill of Rights protections against state law, but begged off enforcing them since the Constitution gave the federal government no power to do so. Bingham may have been inspired to take this view by Taney’s own recent ruling in Dennison, in which he chastised two Ohio governors for refusing to fulfill an extradition request from another state, but avowed that the federal government had no power under the Constitution to force their hand.

Bingham also believed that the entire thicket of jurisprudence the Taney Court had developed around the Tenth Amendment reserved police powers of states should be thrown out. He believed every ruling of the Supreme Court upholding state laws that infringed upon any privilege or immunity guaranteed to citizens under the Bill of Rights ought to be overturned. He believed that at the time each original state ratified the Constitution, the citizens of each state whose state constitution did not already explicitly incorporate equivalent language to the Bill of Rights to restrain their own state laws as the Bill of Rights restrained federal laws, had implicitly done so by ratifying the Constitution.

Believing the federal judiciary had utterly failed in its duty to enforce the Bill of Rights against state laws even for citizens of other states, Bingham concluded that Congress must be empowered to do the job instead, and fully empowered unlike the federal judiciary had originally been, and he intended the Fourteenth Amendment to empower Congress to do just that.

At one point in the Constitutional Convention, James Madison had proposed that Congress have direct veto power over all state legislation. Madison’s proposal was defeated, but Bingham’s idea for the Fourteenth Amendment was to revive Madison’s proposal to a certain degree, to allow Congress to make it the duty of the federal judiciary to strike down state legislation, but only to the extent Congress decided it violated the Bill of Rights.

After winning the Civil War most Republicans were at first resolved to act as if Dred Scott had never happened, but Bingham believed it had to be repudiated with a Constitutional Amendment. After procuring a solid majority on the court with five Republican appointments in the course of the war, including replacement of Taney upon his death in 1864 with Salmon Chase, a radical egalitarian abolitionist, the Republican Congress passed the Civil Rights Act of 1866 in direct defiance of Dred Scott’s principal holding that Congress had no authority to grant citizenship to blacks. But they soon realized Bingham was right, that even their abolitionist Supreme Court majority could not guarantee citizenship to blacks as fully and enduringly as a Constitutional Amendment could.

Having already passed the Thirteenth Amendment banning slavery and involuntary servitude, and in response to the rapid passage of draconian Black Codes in the South protectively imitating similar Black Codes in the North and West, including that of Lincoln’s home state of Illinois, the Republicans realized they must pass a Fourteenth Amendment to strike down those Black Codes. But they must do it carefully, so as not to offend the loyal Union states in the North and West who coveted what they thought of as their own good and righteous Black Codes and free black exclusion laws as much as they wished to punish the former Confederate States for what they saw as their sin of trying to carry their abjectly evil practice of slaveowning into the “free” states, even though some of those “free” states treated free blacks pretty much as state-owned slaves.

Bingham had gotten nowhere in 1858 trying to muster support to block Oregon’s statehood admission because of its state constitution’s free black exclusion provision. He knew that even if he could get the two-thirds vote in Congress for full black citizen equality as Taney had envisioned it in Dred Scott, it was unlikely that enough states in the West and the South would support it to achieve the three-fourths required for ratification.

It will be valuable to recite here Taney’s vision of black citizen equality as he expressed it in Dred Scott:

“More especially, it cannot be believed that the large slaveholding States regarded them as included in the word citizens, or would have consented to a Constitution which might compel them to receive them in that character from another State. For if they were so received, and entitled to the privileges and immunities of citizens, it would exempt them from the operation of the special laws and from the police regulations which they considered to be necessary for their own safety. It would give to persons of the negro race, who were recognised as citizens in any one State of the Union, the right to enter every other State whenever they pleased, singly or in companies, without pass or passport, and without obstruction, to sojourn there as long as they pleased, to go where they pleased at every hour of the day or night without molestation, unless they committed some violation of law for which a white man would be punished; and it would give them the full liberty of speech in public and in private upon all subjects upon which its own citizens might speak; to hold public meetings upon political affairs, and to keep and carry arms wherever they went. And all of this would be done in the face of the subject race of the same color, both free and slaves, and inevitably producing discontent and insubordination among them, and endangering the peace and safety of the State.”

Clearly Taney made it essential to his argument in Dred Scott that if citizenship in the meaning of the Constitution were conferred upon blacks, it would give the most despised and debased among them absolute and total equality under the law with the most powerful and wealthy of whites in every way, everywhere in the nation.

Former loyal Jacksonian Democrat Congressman Robert Dale Owen, who as a pro-war Democrat and later a Republican convert had led a special commission on Reconstruction planning during the war that led to the creation of the Freedmen’s Bureau, and had pressured Lincoln publicly to issue the Emancipation Proclamation, drafted a far more comprehensive version of the Fourteenth Amendment, and his old friend and leading abolitionist in the House Thaddeus Stevens submitted it to the House Committee drafting the amendment.

Owen’s version called for specific guarantees of both civil and political rights for blacks. It disenfranchised former Confederate government officials and military officers, but not soldiers. It resolved the conundrum that the Thirteenth Amendment had already ballooned the representation in Congress of all the recently emancipated slave states because slaves who had only counted for three-fifths of a person now counted, being free blacks, as whole persons for the purpose of apportioning seats in Congress. It solved this problem by denying apportionment count for any blacks who were not given the right to vote, with a sunset provision that granted suffrage to all blacks unconditionally after ten years.

Owen’s version managed to avoid requiring all states to enfranchise their free blacks right away because many “free” states still vehemently opposed enfranchising their minuscule free black populations. It significantly punished the disenfranchisement of blacks only by states with enough blacks for the change in apportionment count to impact their share of seats in Congress.

For states in the North and West, choosing to continue disenfranchising blacks at the cost of losing the one or two percent of their census counts that figured into their apportionment of Congressional seats would be unlikely to cost them a single seat. So the impact of Owen’s Amendment would be to coerce recently emancipated slave states into enfranchising their large black populations, while permitting all the “free” states to continue disenfranchising their blacks without penalty.

Owen certainly did not relish this result. He had been infamous decades earlier as a Utopian socialist, in fact, and had published extensively supporting not only the absolute equality of all races, but that of women with men, and had pioneered with Fanny Wright the promotion of (natural and voluntary) birth control for the self-liberation of women. He was now focused, however, on assuring concrete and expeditious results, and trying to forge a national consensus merging the radical egalitarian abolitionist ideals of black equality with the Jacksonian Democratic ideals of common-man citizen sovereignty, while recognizing the political limits of what this rare moment in history would afford such ambitions.

Unfortunately Owen’s original draft did not survive the committee’s reworking of it. Certain they could not muster support for even the eventual requirement of black suffrage in every state, they tossed out the sunset grant of suffrage ten years later, but kept the conditional apportionment measure as a permanent provision. This gave the “free” states a permanent free pass on ever having to enfranchise blacks, and encouraged all recently emancipated slave states to devise ways either to grant sham legal voting rights to blacks that they could not effectively exercise, or devise ways to rid themselves of blacks altogether.

Within a couple of years of the Fourteenth Amendment’s ratification in 1868, the odiousness of this negative incentive led egalitarian abolitionist Republicans to push through the Fifteenth Amendment, requiring all states to enfranchise all black males.

Another critical part of the Owen draft that got removed was the specific guarantee of civil rights to blacks against all state laws. In its place Bingham inserted his own reworked version of his original Privileges and Immunities enforcement provision. He had revised it to adopt the self-executing modality of the entire Owen draft, so that it no longer required an act of Congress to go into effect, but would have full force of law upon ratification. Owen’s draft, thus gutted of all its substantive and specific guarantees of black civil and political rights against state law, became the Fourteenth Amendment as it reads in the Constitution to this day.

Unfortunately Bingham’s civil rights language did not specify which possible construction of the Privileges and Immunities Clause it meant to enforce on the states. This left an opening for the Supreme Court to render the provision almost meaningless. This the Reconstruction Court did, not as a betrayal of its Republican commitments, but in accord with its aristocratic Hamiltonian roots, and in tragic fidelity to Bingham’s ultimately unworkable project of repudiating Dred Scott.

Historians and commentators generally assume this rejection of Bingham’s intended construction of the Privileges or Immunities Clause of the Fourteenth Amendment occurred first in the Slaughterhouse Cases in 1873, and was explicitly completed in the Cruikshank case in 1876. The historians and commentators are all wrong.

Bingham’s intended construction of the Privileges or Immunities Clause was gutted and hollowed out first by the Salmon Court in Paul v Virginia in 1869, just a year after the Fourteenth Amendment was ratified. In that case, former Associate Justice Curtis, who had retired from the Supreme Court months after the Dred Scott case to resume his lucrative career as a corporate lawyer representing northern corporations owned by wealthy aristocrats, argued for the plaintiff that corporations were citizens in the meaning of the Article IV Section 2 Privileges and Immunities Clause, and thus deserved full Bill of Rights protection against all state laws.

Justice Field ruled against Curtis’ argument for a unanimous court, rehashing Taney’s ruling in Augusta Bank v Earle three decades earlier to declare that corporations, being the legal vehicle for the expression of special privileges and immunities given to its owner-investors in its state charter, could not carry those special privileges or immunities into another state. But Field’s rehash differed significantly from Taney’s original ruling in its construction of the Article IV Privileges and Immunities Clause. Field held that it only guaranteed to citizens of other states the same privileges and immunities under a state’s laws that the state granted its own citizens.

Field’s handling of the case was a feat of misdirection. He subtly misconstrued Taney’s less developed 1839 reasoning on corporate rights in Augusta Bank v Earle, and ignored Taney’s more sophisticated Dred Scott-based update of Earle in Covington Drawbridge in 1858. While spewing out an enormous cloud of needless verbiage rehearsing how Taney’s ruling in Earle clearly invalidated Curtis’ reiteration of the corporate citizenship claim already rejected in Earle, Field slipped in the following concurrence with the specious construction of the Privileges and Immunities Clause that Curtis had dropped subtly into his oral argument and brief as a mere unsupported premise to his ostensible claim:

“But the privileges and immunities secured to citizens of each State in the several States by the provision in question are those privileges and immunities which are common to the citizens in the latter States under their constitution and laws by virtue of their being citizens.”

In this manner Field ignored and evaded Taney’s definitive construction of the Privileges and Immunities Clause in Dred Scott, setting it aside without having to argue against it, which he could not successfully have done, and affirming in its stead the barebones construction of that clause that Curtis had offered in his Dred Scott dissent. Field and the unanimous Salmon Court of 1869, some of them perhaps in collusion with Curtis’ sleight of hand, others perhaps unwittingly, had stolen into the crib of the newborn Fourteenth Amendment’s guarantee of black citizen equality, ripped its heart out and replaced it with corporate personhood.

Taney in Dred Scott and Covington Drawbridge set forth an overwhelmingly forceful and undeniably correct construction of the Privileges and Immunities Clause that since 1858 had made natural person citizens robust with Bill of Rights protections against state laws outside their home states, and had subordinated corporations far beneath them there as non-citizens with nothing more than access to the courts, equal protection only with other non-citizen corporations, and procedural due process rights, none of them at that time construed substantively.

Under Curtis’ construction, any rights guaranteed to citizens against state laws under the Fourteenth Amendment would be automatically guaranteed to corporations as well. The implied result of this groundlessly and surreptitiously affirmed construction of the Privileges and Immunities Clause in Paul v Virginia was actually the exact opposite of its ostensible principal holding, that corporations are subordinate to citizens, who are natural persons, under state and federal law.

By ignoring Dred Scott and Covington Drawbridge in Paul v Virginia, the Salmon Court wheedled its way to reducing natural person citizens to having only the same rights against state laws that corporations had.

This was certainly a betrayal of Bingham’s intent in one sense, but it was the fulfillment of his intent in another sense. For it was certainly Bingham’s intent for the Fourteenth Amendment to be interpreted in defiance of, not in accord with, all of Taney’s reasoning in Dred Scott. Bingham thought his own reasoning (or lack thereof) was sufficient to arrive at the same robust construction of the Privileges and Immunities Clause that Taney had arrived at, but without the political baggage of Taney’s mooted implication that only a Constitutional amendment could make blacks citizens or abolish slavery in a territory. Bingham was not alone, for the ardent rejection of Taney’s reasoning in regard to the Privileges and Immunities Clause in Dred Scott had become a politically indispensable article of undying faith among Republicans even before the ruling went to press, unifying the entire spectrum of the Republican coalition on a common point of moralistic Constitutional dogma.

Unfortunately it was dogma without a working theory of how to construe it from the text of the Constitution, so Bingham did his best to supply such a construal. In the end, however, the only Constitutional theory that could coherently compete with Taney’s intricately sound exegesis of the republican machinery of the Constitution in Dred Scott was Higher Law theory, and that was a theory whose coherence ultimately relied upon the vesting of moral dictatorship in some branch or other of the federal government, with authority to subdue all state governments and individuals in the nation to its dictates.

Lincoln had clothed himself in moral dictatorial authority in seizing neutral ships without proper warning and without a declaration of war, suspending habeas corpus, declaring fiat notes legal tender, and replacing the federal judiciary in effect with a military tribunal system that suspended all pretense of due process of law. Higher Law theory, alas, was simply not a theory of Republican government, but one of theocracy. The Second Great Awakening had fully played itself out in the political arena with the holy war of the North over the South, and in the aftermath it was playing itself out in the legal arena through the Reconstruction Acts and Amendments.

Bingham’s attempt at construal was simply that “the several states” included each citizen’s home state and that the privileges and immunities referred to were not those defined by any state’s laws but, rather, were general privileges and immunities to be found enumerated in the Bill of Rights. It was this robust construction of the Privileges and Immunities Clause, shorn of the compelling grounding in republican principles of citizen sovereignty and representation that Taney had rooted it in, that Bingham clearly meant to be enforced through his Fourteenth Amendment Privileges or Immunities Clause.

To be fair, there was a republican justification for Bingham’s reading, just not a very plausible one. If ratification of the Constitution and the Bill of Rights Amendments implied incorporation of the Bill of Rights through the Privileges and Immunities Clause into each ratifying state’s own state constitution, then Bingham could argue that each state citizen had assented to restraining his or her own state government from violating its inhabitants’ rights as enumerated in the first eight amendments of the Bill of Rights. Certainly the losing plaintiff in Barron v Baltimore believed this was true. And he was not alone.

The problem was that the tussling together of each state constitution with eight chunks of text from the U.S. Constitution, most of which was already covered by provisions in each state constitution with other language, would tend to make a muddle of state law, and wreak havoc with stare decisis in every state’s judicial system as well as in the federal judiciary in suits in diversity and in review of state laws. It is a great idea to have the same restrictions on all federal and state governments throughout the nation. But for that very reason it seems highly implausible that the framers, with such a laudable objective in mind, would implement it in so bizarre and ineffectual a manner: by hinging it upon an oracular alternate reading of a clause that replaced a clause which, in its original wording in the old Articles of Confederation, was clearly meant to say exactly what Justice Field claimed the new version still meant in the Constitution, namely that states may not treat citizens of other states worse than they treat their own.

Taney’s careful reconstruction of the framers’ genuine revision of the Privileges and Immunities Clause succeeded in avoiding this reversion of its meaning without disturbing existing fundamental state law the way Bingham’s bold facelift of that clause required. By tying the clause tightly to the framers’ clear intent to cure the original version in the Articles of its ill of allowing states to give their own non-citizens citizen rights in other states, which Madison said was potentially “the cause of much embarrassment,” Taney cogently elucidated its complementarity with the exclusive federal power over citizenship status determination and change that the Constitution introduced to cure that same ill.

This exposed a fatal weakness in Curtis’ construction, which Curtis pursued to its bitter logical requirement that the framers had bizarrely decided to grant the Federal Government authority only over the citizenship status of foreign-born persons, while leaving each state to continue determining the citizenship status of all its native-born inhabitants. Curtis did not expose himself, however, to the embarrassment of attempting to explain just how state and federal courts would resolve the frequent deficiencies in the factual record regarding specific persons’ place of birth, a problem practically surmountable in the limited case of candidates for U.S. President, but quite a different challenge when posed by every person seeking or claiming citizenship. Nor did Curtis attempt to explain, because he could not in any coherent way explain, why the framers would have regarded the conferral of state citizenship on foreign-born inhabitants of a state as a federal matter at all, given Curtis’ claim that there was no such thing as U.S. citizenship apart from state citizenship as defined by each state.

And even assuming they had such a reason, there is no reason why it would be a particularly federal matter in contrast to the conferral of citizenship on a state’s native-born inhabitants. Such a half-federalization of the power to confer citizenship would fail to cure the very ill Madison claimed the Constitution cured, because it would still allow each state to confer citizenship rights in other states on any of its native-born non-citizen inhabitants, including all blacks, simply by making them state citizens. Curtis seemed to think it would suffice that each other state could defend itself from this ill by categorically denying rights to different social categories. But it is precisely this supposed authority of any state to nullify as many or even all the rights of any category of citizens visiting from other states, alongside its own citizens of the same category, that Taney so justly dismisses as rendering “unmeaning” the Privileges and Immunities Clause, which Hamilton had “esteemed” to be “the basis of the union.” When run through the rigors of Taney’s construction, Curtis’ construction of the Privileges and Immunities Clause simply crumbles into nonsense, even as Bingham’s construction stalls on its facial implausibility and bogs down completely under the weight of its judicial impracticability.

But the year after Bingham’s language was ratified, Justice Field adopted former Justice Curtis’ construction of the “in” as simply synonymous with “of,” ignoring both Bingham’s and Taney’s constructions entirely, and insisting that the only alternative construction was that each citizen brought his own state laws into other states, which those other states were bound to enforce instead of their own laws against him. This straw man argument, of course, Field easily dismissed as absurdly untenable under any notion of state sovereignty, representative government or comity among nations, and laughably impossible to administer. Field was a loyal Republican. He ignored Dred Scott like everyone expected him to. Perhaps he was in cahoots with Curtis and his corporate paymasters, or perhaps he simply could not support Bingham’s untenable construal, and so he went for the tried and true misconstrual that simply read back into the clause both the simplicity and the attendant defects of its progenitor in the Articles of Confederation.

Perhaps Field did not mention Dred Scott because he knew Taney had argued specifically against both the straw man construction Field argued against and the equally facile construction he ruled in favor of. Taney made this argument in an oft-misconstrued paragraph in Dred Scott. We will explicate this paragraph in four sections, in order to clarify its proper interpretation:

“But so far as mere rights of person are concerned, the provision in question is confined to citizens of a State who are temporarily in another State without taking up their residence there. It gives them no political rights in the State as to voting or holding office, or in any other respect. For a citizen of one State has no right to participate in the government of another. But if he ranks as a citizen in the State to which he belongs, within the meaning of the Constitution of the United States, then, whenever he goes into another State, the Constitution clothes him, as to the rights of person, with all the privileges and immunities which belong to citizens of the State.”

In this first section, Taney’s final words “the State” refer back to “the State to which he belongs,” not to “another State.” Taney is arguing here against the same construction Field later set up as a straw man in Paul v Virginia: the construction in which each citizen carries his or her home state privileges and immunities along while in other States. Then Taney proceeds to show the absurd result of this construction in the second section of the paragraph:

“And if persons of the African race are citizens of a State, and of the United States, they would be entitled to all of these privileges and immunities in every State, and the State could not restrict them, for they would hold these privileges and immunities under the paramount authority of the Federal Government, and its courts would be bound to maintain and enforce them, the Constitution and laws of the State to the contrary notwithstanding.”

Once again, the words “the State” in the final clause of this second section of Taney’s paragraph refer still to “the State to which he belongs” and not to “another State.” Taney is saying that even if citizens carry only the rights their home States afford them when they are in other States, the Constitution nonetheless, by a converse effect, augments those rights by restricting the application of other States’ laws upon the traveling citizen. It reserves to that citizen the full Bill of Rights protections all citizens enjoy against those other States’ laws, since those laws apply to the traveler only by virtue of his home State’s ratification of the Constitution, and are thus of a Federal character in that application, regardless of how much more heavily his home State’s laws may restrict his rights while back home in that State.

Taney here makes a much more profound and cogent argument against the portable-home-State construction of the Privileges and Immunities Clause than Field later would in Paul v Virginia. Taney is saying here that any attempt by a State to restrict the rights of its free black citizens more onerously than its other citizens, if indeed its conferral of state citizenship on those blacks had the effect of making them citizens under the Privileges and Immunities Clause, would fly right back in that State’s face once those blacks traveled into other States and exercised their federally reserved full Bill of Rights protections against those States’ laws. For under the Full Faith and Credit Clause, that home State would have to honor any judgments or proceedings in relation to those blacks’ exercise of their rights in those other States, including, for example, the right to marry whites and enlist in the U.S. military.

Having thus demolished this portable-home-State rights construction of the Privileges and Immunities Clause, but in a different and more profound way than Field later would, Taney moves on to demolish the construction Curtis argued for in his Dred Scott dissent, and which Field later affirmed under Curtis’ cunning influence at counsel in Paul v Virginia. Taney proceeds in the third section of his paragraph:

“And if the States could limit or restrict them, or place the party in an inferior grade, this clause of the Constitution would be unmeaning, and could have no operation, and would give no rights to the citizen when in another State. He would have none but what the State itself chose to allow him.”

Here Taney begins with the plural phrase “the States,” which refers back to “every State” in the prior sentence. While “every State” is not grammatically a plural construction, it is clear that Taney’s use of the plural to begin the next sentence can only refer to the semantically plural collective of non-home-States referred to by the singular quantifier “every” in the prior sentence, for no other collection of States is semantically invoked in the paragraph prior to that sentence. Taney in this third section thus argues that the non-home-States also cannot restrict the rights of the citizen as Curtis’ construction of citizenship under the Privileges and Immunities Clause would permit, because such a construction of citizenship would defeat the entire purpose of that clause, which is to ensure the rights of every State’s citizens as citizens of the Union while traveling in other States.

Taney concludes his paragraph, and his two-pronged argument, in a fourth and final section thus:

“This is evidently not the construction or meaning of the clause in question. It guaranties rights to the citizen, and the State cannot withhold them. And these rights are of a character and would lead to consequences which make it absolutely certain that the African race were not included under the name of citizens of a State, and were not in the contemplation of the framers of the Constitution when these privileges and immunities were provided for the protection of the citizen in other States.”

Here the phrase “the State” in the first sentence continues to mean the non-home-State. Taney repeats his argument in this context that the framers of the Constitution could not possibly have meant to include blacks in the term “citizens” if they had any intention of the Constitution being taken seriously by the twelve of thirteen States who at that time stood staunchly behind their anti-miscegenation laws. Those States certainly were not prepared to ratify a Constitution that would suddenly embolden every free black with the right to assemble, bearing arms and speaking freely, in any numbers, anywhere and everywhere in the Union where they might incite slaves and other more docile free blacks to revolt against their bondage and degradation.

Curtis, while losing the case itself in Paul v Virginia, had to have been pleased with his clever handiwork. Field had adopted his construction of the Privileges and Immunities Clause, and in it lay the cornerstone of an edifice of corporate dominance under color of equality under the law, a prize corporate lawyers had hankered for, battering at the gates of the federal courts, ever since they smelled the Jacksonian coffee of Taney’s slap across their face in Charles River Bridge Company and his rap against their knees in Augusta Bank v Earle.

By rejecting Taney’s reasoning, Bingham sidelined the only Constitutional Law expert with a coherent construction defending the common man from corporate-mediated aristocratic rule. In its place he wrote into the Fourteenth Amendment his own dogmatic appeal to unjusticiable Higher Law, to be brushed aside easily by the juggernaut of corporate interests and their relentless hammering of the courts by stables of well-heeled, Ivy League-educated lawyers, including former Justice Curtis himself, clamoring for what they considered, or feigned to consider, a level playing field between corporations and the individual citizens who often stood in the way of their speculative, government-subsidized schemes for a techno-aristocratic revolution.

Yet as we shall see, Bingham could not, try as he might, actually ignore or replace any of Taney’s reasoning or construction in Dred Scott. In the end, the entire purpose of Bingham’s Privileges or Immunities, Due Process and Equal Protection Clauses can only be understood as acknowledgment that the Higher Law construction of the Constitution was not nearly as sound and valid as Taney’s. Bingham, ultimately more concerned with guaranteeing equality for blacks than with vindicating Republican political disgust with the conclusions Taney reached in Dred Scott, had made his peace with Taney’s precedents. He had designed his clauses of the Fourteenth Amendment to ambiguate in just such a manner as to trace both paths, the Higher Law path and the Dred Scott path, to the same result: full and absolute non-white citizen equality with whites, and full application of the Bill of Rights for all citizens against all state laws.

Not knowing whether the Higher Law or the Dred Scott construction would ultimately prevail in the long future of Supreme Court construction of his historic Fourteenth Amendment clauses, Bingham hedged his bets, and used language that seemed almost oracular because he needed to make it agnostic as to whether Higher Law or Dred Scott would win out, or whether Supreme Court jurisprudence would flip back and forth between them in the decades and centuries to come. He did not foresee, unfortunately, that Curtis would be able to induce the Supreme Court to read the oracle in an entirely different direction.

The Republican Sacrifice of Citizen Sovereignty to Corporate Aristocracy

Justice Benjamin Curtis’ dissent in Dred Scott rejected the full Bill of Rights interpretation of the Privileges and Immunities Clause, insisting that it guaranteed to each State citizen only the rights assigned to that citizen’s social category by the laws of each other State the citizen may visit. Other privileges and immunities, Curtis argued, are granted by each state at its pleasure to different social categories of citizens within its territory. This, for Curtis, made blacks fully equal to whites under the Constitution, despite the fact that blacks might actually have no rights at all under any state’s constitution and laws. Taney pointed out this absurd result of Curtis’ construction of the Privileges and Immunities Clause and concluded that such an interpretation would render the clause “unmeaning” because it would give to the citizen only the rights each State chose to give him or her. Taney was absolutely right.

Yet Curtis’ view of black “freedom” and “equality” under the Constitution was also the mainstream Northern anti-slavery movement’s myopic corporation-enslaved vision of black “freedom” and “equality.” It was not Bingham’s. It was not Owen’s. And it was not Taney’s. Curtis was a protege of Daniel Webster, a lifelong tool of the Boston Brahmin families and other similar aristocratic American families, many of whom profited profusely from the African slave trade from colonial times right up until the thick of the Civil War.

Abraham Lincoln wholly endorsed Curtis’ dissenting Dred Scott opinion in his Illinois Senate campaign against Stephen Douglas, and in the 1860 Presidential campaign he promised as President to replace enough Supreme Court Justices to overturn Dred Scott and affirm Curtis’ dissent as the law of the land. In so doing Lincoln was dutifully representing the interests of his home state’s voters, for as Curtis had touted, his interpretation had the advantage of allowing blacks to be citizens under the U.S. Constitution without disturbing any of the draconian Black Codes that Illinois had at one time or another asserted its right and need to impose upon its free black “citizens”: the legality of indentured servitude of black “citizens” for terms up to 106 years; the imprisonment, indenture, deportation, or sale into slavery of any free black “citizen” arriving from another state; the requirement that each free black already residing in the state post a $1000 bond for the right to remain; the denial of the right to testify in court or serve on juries; and many other peonage-like restrictions.

The Maine Supreme Court, meanwhile, immediately and hotly rejected Dred Scott, insisting that its own anti-miscegenation law did not deny equality between blacks and whites since it equally punished both blacks and whites for marrying a member of the other race. Maine repealed its anti-miscegenation law in 1883, a largely symbolic act since hardly any blacks have ever lived in Maine. But the repeal did not overturn Maine’s high court ruling reserving its right to impose anti-miscegenation laws, and even Loving v Virginia a century later did not entirely repudiate its specious logic. In striking down all anti-miscegenation laws in 1967, the Loving Court felt it necessary to rely on the argument that since the Virginia law at issue only outlawed interracial marriage with whites, it was clearly designed to sustain White Supremacy, in violation of the Equal Protection Clause of the Fourteenth Amendment.

Thus Loving left open the possibility that a state could constitutionally ban all interracial marriages, if it could but find a compelling state interest for doing so. Loving concluded with the troubling assertion that a grand total of two Justices in the majority could not, try as they might, even imagine a compelling state interest that could justify such a law. One wonders what the other seven Justices did imagine.

The Loving Court did not make what ought to have been the obvious observation that any ban on interracial marriage is inherently unequal on its face as applied to different races, because it bans each race from marrying a distinct set of categories of potential marriage partners. The putative equity in the language was thus logically specious: the law’s self-referentiality to its own invidious racial categorization of each person convicted under it means it is merely shorthand for a set of distinct but related laws, one for each racial category, banning that category’s members from marrying members of one or more other racial categories. For those laws to be genuinely a single law applying equally to all, they would have to ban each race from marrying members of precisely the same set of races.
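The speciousness can be made explicit with a bit of predicate logic (the notation is mine, introduced purely for illustration). Write the ostensibly single rule as:

for all persons x and y: race(x) != race(y) implies not MayMarry(x, y)

Fixing the race of x at some category r, this instantiates into one rule per category: every x with race(x) = r is banned from marrying any y whose race lies in R \ {r}, where R is the set of recognized racial categories. Each category r is thus banned from a different partner set, namely R \ {r}, and the per-category rules coincide only if the banned partner set is identical for every r, which happens only when that set is all of R, each race banned from marrying its own members included.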

Thus the only genuinely equitable ban against blacks marrying whites and whites marrying blacks would also ban whites from marrying whites, and blacks from marrying blacks. Indeed, for a decade or so during Reconstruction, every former Confederate State’s Supreme Court but Georgia’s struck down its state’s anti-miscegenation laws on just that non-specious interpretation of black-white equality, citing some combination of Reconstruction Civil Rights Acts, Constitutional Amendments, and state constitution revisions for justification. By 1890, however, they had all reversed these decisions, citing the Slaughterhouse Cases for support. Maine, for its part, remains today the whitest of all states by proportion of population.

Rather than directly close the self-referential loophole, the Loving Court chose instead to disregard the question of equal application of the anti-miscegenation law to the targeted racial categories in favor of scrutinizing the law’s justification for using racial categories at all. It declared any use of racial categories in a marriage law subject to the Korematsu test, that it serve a compelling state interest that outweighs the discriminatory impact of the law, and found Virginia’s law lacking under that test.

But Taney was not in favor of sweeping bans on the use of racial categorizations in state laws, whether they closed the self-referential predicate logic loophole or not. He was, instead, simply against any use of the state’s police powers to create or preserve caste stratifications among citizens, as being repugnant to a republican form of government and to the Privileges and Immunities Clause.

Thus, for Taney, anti-miscegenation laws are not unconstitutional for referencing race categories without a compelling state interest, as the Loving Court ruled. For Taney, anti-miscegenation laws applied to citizens of a lower social stratum are repugnant and absolutely inimical to the very existence of a republican form of government in a state. No duty or objective of a state government can justify such a law, for such a law effectively dissolves the republican state and converts it into an aristocracy; nor can such a law be justified, for Taney, by serving any state interest, no matter how compelling.

Thus a Taney approach to Brown v Board of Education would have overturned Plessy v Ferguson, but only on the grounds that it uses state police powers to maintain the social segregation of a socially inferior caste of citizens. To reach this ruling, Taney would only require the help of the Citizenship Clause of the Fourteenth Amendment, and would not even require the Thirteenth Amendment.

In Charles River Bridge in 1837 Taney took his first crack at putting corporations in their proper subordinate status below natural persons. He ruled that a state grant of monopoly to a corporation in a corporate charter must be construed narrowly in favor of the public interest, such that no implied grant, only an explicit and unambiguous grant, of any privilege or immunity to the corporation may be construed.

Taney’s second crack at corporate pretensions came in Augusta Bank v Earle in 1839. There he ruled that although a corporation can be regarded as a citizen for the purpose of access to federal courts, to sue and to be sued in them, because it represents its owners, who are natural persons and citizens, the Privileges and Immunities Clause does not apply to corporations. If it did, it would extend the special privileges granted to those owners under one state’s law, through the state corporate charter, into another state, thus usurping the special privilege from the citizens of that other state, who never elected to grant it. The rights of a corporation in its home state, Taney ruled, consist only of the rights granted in its charter by that state, and its rights in another state consist only of the subset of those home state rights that the other state grants the corporation when it operates in that state’s jurisdiction.

Taney ultimately repositioned corporations as non-citizen artificial persons in Covington Drawbridge in 1858, fitting corporations into a securely subordinate status to citizens within the more fully worked out scheme of citizenship and legal personhood he had set forth in Dred Scott. He placed corporations in a position slightly higher than free blacks, but decidedly lower than every citizen, and clarified that his ruling in Earle, that corporations would be regarded as if they were citizens, was only an analogy, and that in fact corporations were, like blacks, not citizens at all in the meaning of the Constitution, nor could they ever be citizens since their exercising the privileges and immunities of citizens would augment the citizenship of each citizen directing or holding shares in a corporation over and against all other citizens, thus elevating corporate directors, officers and shareholders into a higher caste of citizenship than the rest.

It should be noted, however, that Taney had originally granted corporations standing in federal court not so much so they could sue as so they could be sued. His denial of the same privilege to free blacks could be seen as a protective move, to prevent free blacks, debilitated as they were in every state by onerous restrictions on their rights and freedoms, from having to face lawsuits against whites wearing the full regalia of citizen rights. He knew free blacks would be annihilated in court every time under such conditions of spurious citizen equality under the law.

Having spent decades representing free blacks in case after case, he was an expert on their treatment under the law, as he showed in his ingenious maneuvering in Legrand v Darnall in 1829 to win Supreme Court affirmation that all slaves bequeathed property by their masters were thereby manumitted by implication, deftly evading the question of whether the slave Nicholas Darnall, Taney’s mulatto cousin and friend, had standing in federal court. Taney explained his handling of the case in his Dred Scott ruling and commented that there were many ways blacks could become parties in federal cases without having standing to sue or be sued as citizens in diversity.

Thus when Taney died in 1864 he left the status of corporations clearly defined: non-citizen artificial persons whose privileges and immunities were strictly limited to those explicitly granted, not merely implied, by their state charters and the laws pertaining to them, and thus strictly subordinate to citizens under the law. Taney had made unconstitutional any attempt to deploy state or federal power to construct or preserve caste hierarchy among citizens, rebuffing repeated attempts by corporate lawyers to get the Supreme Court to amplify the citizenship of the wealthy through grants of citizen rights to corporations. Corporations, the robot minions of the wealthy, would never be permitted in Taney’s America to stand equal to the common citizen under the law, and thereby lift their wealthy masters above the law, making common citizens every bit as much their slaves as their corporations were.

Unfortunately, the ratification of the Fourteenth Amendment in 1868, with Owen’s original Civil Rights Clause replaced by Bingham’s version relying on the Privileges and Immunities Clause, and the triumph just a year later of the Curtis interpretation of that clause in Paul v Virginia, undid Taney’s work. The victory that hordes of corporate lawyers could not bend Taney to give them for thirty years, they managed to wring out of the Salmon Court only five years after Taney’s death. By allowing each state to construct U.S. citizenship in its own chosen ladder of gradations, Paul v Virginia enabled states to reduce the citizens on the lowest rung, blacks, to a condition below that in which Taney had placed corporations. The Supreme Court managed to equalize the robot slaves of the wealthy, their corporations, with common citizens, thus making those citizens effectively their slaves as well.

Corporations finally had their corporate amplification of the rights of corporate directors, officers and shareholders over the rights of other citizens. Directors’, officers’ and majority shareholders’ control over a corporation amounted to super-citizen status, with minority shareholders reaping the profits if not the enhanced power. Just as anti-miscegenation laws deployed state power to rigidify lineage-based caste hierarchy, the Fourteenth Amendment as warped by Paul v Virginia deployed federal power to rigidify wealth-based caste hierarchy.

Justice Miller’s ruling in the Slaughterhouse Cases four years later was actually an attempt to forestall the damage by limiting the human-enslaving equality of corporate and human citizens to a few areas of federal law. Miller knew exactly what the corporate lawyers had done in Paul v Virginia, and in his private letters he bemoaned the fact that so many of his brethren on the court had come from the ranks of railroad and bank lawyers. The Cruikshank decision three years after that only finalized Miller’s stalemate. The aristocrats had won, and they planned to secure full civil rights for corporations by using blacks as their sympathetic proxy, but Miller saw through their ruse and bound up the Fourteenth Amendment’s Privileges or Immunities Clause in an interpretive straitjacket, to contain the corporate monster it had become.

Miller must have realized as well that the other surreptitiously decided pro-corporate result in Paul v Virginia had been Field’s uncritical affirmation of Curtis’ argument that the federal Commerce Power, even in its dormant state, would have invalidated the Virginia law requiring insurance agents of a New York company to register with the state and pay a fee before operating in Virginia, if only insurance policies were actually articles of commerce. Thus Field in this other way again ostensibly rejected Curtis’ claim for his corporate plaintiff, but in so doing set a sideways precedent that cleared the way for a sweeping nullification of all state laws regulating out-of-state corporations wherever those laws lie within the giant shadow of the dormant Commerce Clause.

Taney had fought a long pitched battle in the 1840s against a politically motley, power-hungry majority arrayed against him on just this issue, each member of the majority wanting to clobber state sovereignty, state police powers and the Tenth Amendment with the dormant Commerce Clause for his own purposes. The best Taney could do was eke out a compromise, which Curtis himself delivered in Cooley in 1852. That remains the only significant ruling Curtis delivered in his short tenure on the Court, and it was significant only in what it conceded to Taney’s point of view on the matter. To Curtis’ credit, however, he did publicly support Taney’s attempts to restrain the unconstitutional excesses of the Lincoln Administration during the war.

It has never been too late, however, nor is it now too late, for the Supreme Court to correct its course and take up the Taney Sovereign Citizen Equality construction of the Constitution. It could have done so at any moment from the time Dred Scott and Covington Drawbridge were handed down, and it could still do so now. To illustrate this point, I will now go through a series of landmark rulings on civil rights and corporate rights, and show how the Court in each case could have corrected course and followed the Taney Sovereign Citizen Equality construction of the Constitution to arrive at much better decisions than the ones it arrived at in historical reality.

In what follows we will use the acronym “TSCE” to refer to the Taney Sovereign Citizen Equality construction of the Constitution described in the foregoing.

Revisiting Slaughterhouse

If a majority of the Slaughterhouse Court had acted on the basis of Taney’s Sovereign Citizen Equality construction of the Constitution, it would have cited Charles River Bridge to say the state has authority to charter monopolies in the public interest, and cited Covington Drawbridge to set aside the butchers’ Equal Protection claim because the slaughterhouse monopoly, being a corporation, need only be treated equally with other non-citizen artificial persons. Given that the corporation lacks the basic Bill of Rights guarantees that the butchers as citizens enjoy, the state did not place it above citizens by granting it the monopoly.

As a creature of state law with no rights other than those tailored to achieve a specific public purpose, the slaughterhouse monopoly presents no danger to the butchers’ equal rights as citizens, nor their sovereign rights as citizens above those of corporations as their non-citizen servants. The corporation, as the collectively indentured servant of all the state’s citizens, cannot be said to abridge the rights of any of the citizens it serves, except by the consent of those citizens through their equal voice in the state government.

Revisiting Cruikshank

The Cruikshank Court, if taking a TSCE approach, would have overturned both Slaughterhouse and Paul v Virginia as blatant violations of stare decisis in regard to Dred Scott. The ruling would go something like this:

All rights of citizenship in the Constitution were denied to blacks in Dred Scott on the argument that those rights include full Bill of Rights protections for citizens of any state against all laws of any other state by virtue of the Privileges and Immunities Clause. The Fourteenth Amendment declared all blacks citizens in the meaning of the Constitution, and therefore vested blacks with full Bill of Rights protection against all state laws, even in their own home states by virtue of the Equal Protection Clause. By violating stare decisis in Paul v Virginia without giving any justification for it whatsoever, the unanimous Salmon Court capriciously and arbitrarily deprived blacks of the full and robust citizen rights and equality that Dred Scott promised them should they ever become citizens, and that the Fourteenth Amendment gave them by making them citizens.

The framers of the Fourteenth Amendment replaced a draft version of its civil rights provision that mirrored the language of the Civil Rights Act of 1866, which had directly challenged the Court’s finding in Dred Scott that Congress had no power to make blacks citizens, with one that referred instead to the Privileges and Immunities Clause. The replacement wording could only have had a meaning similar to the draft it replaced on the assumption that the Privileges and Immunities Clause guaranteed full Bill of Rights protection to traveling citizens in other states against state laws there, just as Taney had ruled in Dred Scott. Thus the replacement was clearly made to render the provision consistent with Dred Scott in that regard, not repudiative of it. The Fourteenth Amendment also included the Citizenship Clause, tacitly acknowledging the continuing authority of Dred Scott and the possibility that it might never be overturned, or might not remain forever overturned, which would leave the rights of the Negro forever uncertain in the face of a possible constitutional deficiency in the Civil Rights Act of 1866 in respect of Dred Scott‘s findings.

The Equal Protection Clause appears to have been included to bridge the gap between the Dred Scott construction of the Privileges and Immunities Clause and the framers’ preferred construction, which included an implied Bill of Rights incorporation into every state’s Constitution. The intent of the framers was clearly to assure that whether the Supreme Court adopted the framers’ construction of the Privileges and Immunities Clause, or affirmed Dred Scott‘s construction, the result would be the same in achieving their frequently avowed purpose of applying the Bill of Rights to all state laws as applied to all citizens of all states.

Thus it is clear from the record and its plain language that the Fourteenth Amendment, whether on the basis of the framers’ or the Dred Scott court’s construction of the Privileges and Immunities Clause, did apply the Bill of Rights to all state laws as applied to all citizens of all states.

Paul v Virginia capriciously set aside the Dred Scott construction of the Privileges and Immunities Clause, but it did so a year after the Fourteenth Amendment had been ratified by the states. Thus the states ratified the amendment on the understanding that it referred to the construction of the Privileges and Immunities Clause found either in Dred Scott, which was the controlling construction under the law at the time, or in the record of the framers’ construction of it.

As the Court found in Cooley and in Dred Scott, a constitutional provision or a statute forever has the meaning it had when it was made into law. Redefinition of the terms used, whether by judicial ruling or by unrelated legislation, or by general usage or definition by executive action, cannot change the meaning of the terms as they apply in the law that predates those redefinitions. Since Paul v Virginia‘s construction of the Privileges and Immunities Clause fundamentally alters the definition of “citizen” and “privileges” and “immunities” in the Constitution, it has no effect on the meaning of those terms in the Fourteenth Amendment, even if we were to let stand its construction of the original Privileges and Immunities Clause.

Thus even without overturning Paul v Virginia we must overturn the Slaughterhouse Cases, as its majority opinion unjustifiably read the Paul v Virginia construction of the Privileges and Immunities Clause backward into the Privileges or Immunities Clause of the Fourteenth Amendment, effectively rewriting that recently ratified provision in a brazenly unconstitutional act of judge-made legislation.

But we will simplify matters by overturning the facile construction in Paul v Virginia of that original clause, not merely point out its inapplicability to the Fourteenth Amendment. Dred Scott is hereby restored as giving the proper construction, and the construction to be observed by all courts in the nation, of the Privileges and Immunities Clause, and hence also of the Privileges or Immunities Clause whose meaning derives from it. Indeed we recognize that Dred Scott has always and continuously given the controlling construction of that clause from the time it was handed down forward.

The appeal of the mobs of Negro-hating thugs who seek to be protected by an invalid ex post facto redefinition of the clearly intended terms of the Fourteenth Amendment, a redefinition matching the aberrant construction of the Constitution in the Paul v Virginia ruling, which did not even acknowledge the Dred Scott precedent it presumed to set aside without reason, is vehemently denied. The Enforcement Act must be interpreted under the proper construction of the Privileges or Immunities Clause of the Fourteenth Amendment, which vests the victims in this case with all the rights the hateful mobs are accused of violating by their vicious mayhem. By restoring the proper meaning to both the Privileges or Immunities Clause and the Privileges and Immunities Clause, we release the law to do its just work in redressing the wrongs committed by these cold-blooded, violent minions of fratricidal intolerance.

Revisiting Lochner

The Lochner Court, if taking a TSCE approach, would have held that while bakers who are citizens may not be deprived in general of their right of free contract, and states may not restrain the right of any citizen to enter into contracts the citizen sincerely believes he or she can fulfill safely and in maintenance of good health, the state may prohibit corporations in general from entering into, or demanding fulfillment of, contracts with any citizen that, in the view of that citizen, endanger the health, safety, welfare or morality of that citizen or of other citizens of the state or of other states.

Thus any citizen may obtain a court order nullifying any contract with a corporation on a showing that in the citizen’s sincere and rational opinion the fulfillment of the contract would imperil the safety, health, welfare or morality of the citizen or of other citizens. And if a corporation claims that a citizen has entered into a contract, but the citizen disputes that the contract was validly consummated, a showing by the citizen that the purported contract would be against the interest of the citizen’s safety, health, welfare or morality, or against that of other citizens, would be sufficient to dismiss the suit.

This would place bakers in a position of authority over their freedom of contract that would allow them to organize formally or informally into a mutual benefit association and speak with one voice to corporate employers to declare what working hours and conditions shall be considered by them, the citizens who are bakers, to be non-injurious to their health, safety, welfare and morality.

This would differ from union collective bargaining under closed shop in that no individual citizen would be bound by this declaration, nor bound to maintain it once joined in it, for being bound to rigid safety specifications that may prove inaccurate to real working conditions later on would surely be rationally regarded by any individual baker thus imperiled as a contractual obligation that is injurious to the baker’s health and safety, and thus nullifiable through an appropriate showing in suit.

Yet this citizens’-interest nullifiability of citizen contracts with corporations dispenses with any need for a closed shop. The corporation or corporate industry cannot use divide-and-conquer tactics or individual secrecy rules to prevent organized demands from arising among its employees, because each individual employee has the right to nullify the contract on his or her own rational-basis-tested showing that the contract is contrary either to the employee’s individual interest or to the public interest.

Thus even if the employees never communicate or coordinate with each other at all, their common concerns would simply emerge from the pattern of their individual nullification actions in the state courts, or federal courts if the citizen is from a different state than the corporation. And all potential scabs who are citizens would have the same right of nullification, and would themselves be unlikely to put up with the oppressive conditions for very long.

Non-citizen natural persons, however, could be hired as scabs by a corporation, but citizens could then simply pass laws banning such persons from working in those jobs. Better yet, citizen employees could simply demand that corporations write nullifiability rights equal to those of citizens in all contracts with non-citizens, and declare that the lack of such provisions in non-citizen contracts constitutes a danger to the welfare of themselves as citizens, and of all citizens, as well as of non-citizens among the state’s inhabitants generally.

Citizens’-interest and public-interest nullifiability, then, replaces all strike and other industrial action with a simple right of individual supremacy of any citizen over any corporation in contract rights.

A transitivity rule would also have to apply, allowing a subcontracted citizen to nullify a contract with another citizen who is contracted in turn with a corporation, and so on through any recursive iteration of subcontracting. The nullifiability of corporate contracts with citizens would thus be transferable to all recursive subcontracts of that citizen with other citizens, for each recursively subcontracted citizen to nullify, as sketched below.
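Stated as an inductive rule (the formulation is mine, a sketch of what such a transitivity provision might say, with “nullifiable” carrying the citizens’-interest sense developed above): a contract c held by a citizen is nullifiable if (1) the counterparty to c is a corporation, or (2) c is a subcontract under some contract c’ that is itself nullifiable under this same rule. Clause (2), applied repeatedly, propagates nullifiability down any finite chain of subcontracts that originates in a corporate contract, which is precisely the recursive reach the rule requires.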

Revisiting Pace v Alabama

Dred Scott clearly invalidated all anti-miscegenation laws as applied to out-of-state citizens, and the Fourteenth Amendment’s Equal Protection Clause extends that ban to cover in-state citizens as well. Thus a TSCE ruling would have summarily voided all anti-miscegenation laws across the country as repugnant to the definition of citizenship inherent in the Privileges and Immunities Clause.

Revisiting Plessy v Ferguson

The Plessy Court, if taking a TSCE approach, would have struck down all segregation laws as state enforcement of social hierarchy. The Plessy ruling directly contradicted Taney’s ruling in Dred Scott that any anti-miscegenation law is fundamentally incompatible with citizenship and cannot be constitutional if it bans any citizen from marrying any other citizen. In Plessy it was stated:

“We consider the underlying fallacy of the plaintiff’s argument to consist in the assumption that the enforced separation of the two races stamps the colored race with a badge of inferiority.”

If the Plessy Court had followed Dred Scott, it would have ruled just the opposite, that segregation of any kind is indeed a badge of inferiority and therefore forbidden by the Constitution to be applied to any citizen, even without any appeal to the Thirteenth Amendment or the Equal Protection Clause of the Fourteenth Amendment.

Dred Scott plus the Citizenship Clause in Plessy would have equaled the resounding overturning of all segregation laws applied to any citizens, period.

Revisiting Korematsu

The Korematsu Court, if taking a TSCE approach, would have struck down the internment order as it would any segregation law, citing Merryman to denounce the executive order interning all persons of Japanese descent as unconstitutional, not only because it enforces social hierarchy, but because it does so by flagrantly suspending habeas corpus en masse. It would have issued an injunction invalidating any right of any agent of the federal government to carry out the order, and authorizing all state militia, law enforcement agencies and citizens to resist any attempt to enforce the order by any means necessary, including the arrest and detention of any agent of the federal government attempting to enforce it. It would have declared anyone aiding, abetting or directly enforcing the order to be a criminal acting individually and outside of any government authority, personally and individually culpable for his crimes as if acting entirely alone in the capacity of a citizen, without any hint or color of government authority.

There would have been no thought whatsoever of any compelling state interest that could justify violating a citizen’s rights under the Constitution, let alone doing so in a manner that so heinously enforces social hierarchy. A citizen’s rights are absolutely inviolable under the Constitution. The Court would have vehemently struck down an order that so flagrantly annihilated the rights of even one citizen, never mind an entire multitudinous class of citizens and their non-citizen family members. Given the flagrant unconstitutionality of the executive order, it would have struck it down entirely, even though the authority to detain non-citizens might have been upheld under a narrowly tailored executive order for that purpose.

Revisiting Brown v Board of Education

In Brown v Board of Education a divided Supreme Court issued a vague unanimous ruling declaring unconstitutional all state laws forbidding black and white children from attending public schools together. The only remedy it offered was a vague directive to federal district court judges to rule in favor of any suits brought by parents of black children to compel school boards to desegregate as soon as those segregationist-dominated school boards felt they could.

The result was a regime of minimalist compliance from school boards leading to token desegregation, usually involving minority-to-majority transfer options and race-proxy criteria for pupil placement. Ten years later perhaps one in a thousand black students in the Deep South was attending school with whites. Even the greater integration in the border states amounted only to token mixes of blacks in predominantly white schools and vice versa.

Even after the Court followed the lead of the Kennedy Administration in taking a more aggressive stance towards desegregation enforcement, starting with its reversal in Cooper of its endorsement of gradualism in Brown and culminating in its upholding of the constitutionality of forced busing in the 1970s, the ultimate impact of school integration as a whole was merely the segregation of housing at the school catchment area and school district levels, as white families moved farther and farther out of black neighborhoods to avoid their children having to attend schools with majorities or substantial minorities of blacks.

By forcing all the children, the citizens least prepared for the job, to lead and guide the rest of the people, all the adults, in working through the interpersonal, social and institutional complexities of racial desegregation, Brown v Board of Education doomed the entire project of racial desegregation as a whole. Today we live in the enduring aftermath of that project’s abject failure.

This could have been avoided. We have seen already how taking Taney’s construction of Article IV in conjunction with the Fourteenth Amendment’s Citizenship and Equal Protection Clauses would have produced opposite results in Cruikshank, Lochner, Plessy v Ferguson and Korematsu, even if each case were the first in which Taney’s construction had been applied. It would have concurred with Brown v Board of Education as to the unconstitutionality of school segregation, but would have established a clear and effective remedy that did not violate anyone’s citizen rights.

Since the Equal Protection Clause serves only to filter each state’s powers through the Bill of Rights as applied to its own citizens as well as to citizens of other states, and to require it to treat all non-citizens equally with each other, it would play only a personal jurisdiction role in a TSCE decision in Brown v Board of Education. The case on its merits would be decided by direct application of the Bill of Rights and the principle of no state support of social hierarchy. The decision that segregation of schools is unconstitutional would have been reached in one sentence:

“No state may segregate any citizens from any group of citizens holding the balance of power in the state.”

It must first be understood that compulsory education itself almost counts as a violation of the Thirteenth Amendment under TSCE, since it forces children to labor under involuntary confinement at the direction of another person. But a child has no volition under the law and cannot give or withhold legal consent, and thus is, in that sense, like an indentured servant of his or her legal guardian until adulthood. Unlike an indentured servant, however, the child’s legal guardian is also the child’s servant, and thus by reflection a child is truly an indentured servant only to him or herself, indentured for the purpose of learning to become a self-sufficient and responsible citizen. A child therefore may not be used for any purpose other than learning to exercise the rights and carry out the duties of a citizen.

Any other use of a child is a violation of that child’s sovereignty as a citizen. Indeed, this understanding that children in our republic are, in effect, all crown princes and princesses, and all need to be raised to carry the weighty responsibility of sovereignty, was commonly emphasized in the first few decades after the American Revolution in publications and public discourse over child rearing and children’s education.

Since the child is incapable of volition, her or his compulsory education is a-voluntary, not involuntary, and therefore is not a violation of the Thirteenth Amendment. But the child’s sovereignty is exercised on her or his behalf by her or his guardian, and thus the guardian has full discretion to decide how to educate the child to exercise the rights and carry out the duties of a Sovereign Citizen, and can only be held accountable for making poor decisions in this regard in view of the overall results of those decisions.

Thus a state may not constitutionally require any specific method or content in a child’s education. It may, however, offer specific methods and content through a public education system, so long as it is optional for the child to attend.

Having explained how Taney Sovereign Citizen Equality leaves a guardian to take or leave on behalf of a child whatever the state offers in the way of education, all that remains is to assure that whatever the state does offer does not create or sustain social hierarchy.

But this inquiry cannot be confined to the method by which the state delivers education to children. Social hierarchy is a pervasive dynamic system in society. To assess the impact on social hierarchy of the state’s options regarding how to offer education to children, one must look at the options in the context of everything else the state may also be doing that creates or sustains social hierarchy.

In short, a TSCE ruling in Brown v Board of Education would have immediately expanded its scope to the full gamut of social hierarchy enforcing state action, and would only ban school segregation in the context of a clean sweep of all the state’s laws and administrative procedures that enforce social hierarchy.

The Court would thus look at how the elimination of school segregation must fit into an overall remedy for eliminating all social hierarchy enforcement by the state. This is not to say that the remedy must eliminate all social hierarchy. The state need not be in the business of social engineering. On the contrary, its purpose must be to remove itself as a force for social hierarchy, in a systemic manner that naturally results in the dissolution of social hierarchy and the maximization of social mobility.

The Court would consider, for example, that as long as blacks have substantially less wealth and income than whites, and less access to credit, there will be a tendency for school desegregation to become a driver of housing segregation as whites flee neighborhoods to prevent their children from attending school with blacks, or with too many blacks.

The Court would conclude that school desegregation would ultimately reinforce racist social hierarchy if implemented in the context of the potential for white flight. Thus it would first target equal access to housing, while at the same time addressing the reasons why whites would rather leave a neighborhood than allow their children to attend school with blacks.

The state would also be required to take measures to achieve racial proportionality of attendance at desegregated schools. Since the school environment is one created and managed by the state, it must not reflect the social hierarchies of the society because in doing so, it would work to reinforce and perpetuate those hierarchies. Rigidity of social hierarchy is inimical to republican government, and the Constitution only encourages, never restrains, all use of lawful power to alleviate that rigidity.

When the rare district court judge applied a strong hand to desegregate a school district after Brown, the common response was for the school board to threaten to shut down the public schools there altogether. A TSCE-empowered district court judge would not have flinched and cowered at this, but would instead have called the bluff, declaring that if one school district shuts down then every public school in the state must do the same, lest the state violate the Equal Protection Clause. Meanwhile, the judge would turn the shutdown argument back against the defiant school district, declaring that if the school board did not adopt a plan that achieves desegregation, by appropriate incentive to the legal guardians of children to enroll their children in sufficiently mixed schools by a certain deadline, the court would declare the entire state’s public education system an unconstitutional use of state police powers to enforce social hierarchy, and shut it down immediately.

Forcing children to go to a particular school, however, cannot be done as it would violate their Citizen Sovereignty. Children’s guardians must be given the choice of them attending any school in the state. Private housing discrimination against blacks must be monitored and prosecuted as a violation of black children’s right to unsegregated education. Any private discrimination of any kind that discourages blacks from moving their children into school districts with top schools in particular specialized areas should be prosecuted as well for that reason.

Magnet schools should be created in every district for specialized training in a wide variety of areas, with open enrollment criteria, and busing provided for any students who are not in walking distance. Voluntary busing for the individual purpose of obtaining specialized training, not forced busing for the state purpose of social engineering, would dampen and defuse and isolate and expose any racist resistance against black children being bused into a white neighborhood where a magnet school is located.

The effect would be to bring students together of different races who have common learning interests and talents. This would give them both the opportunity and the excuse to associate in mutual respect, appreciation and cooperation on common interests. The same would be true of the teachers, who would also be moved to magnet schools based on subject matter expertise and interest.

Rather than mandate this plan, however, the Court would simply sketch it out clearly enough to show such a plan is possible to imagine. They would then require all school boards to draft such plans within 18 months, and district courts to review and approve all plans within 6 months of submission.

The key element the plans must include is some strong incentive for parents to place their children in each particular school that denotes a common educational interest for their children, irrespective of race, which can serve to overcome the white supremacist pressure to place one’s children, all other things being equal, with other white children instead of with black children. Every school must have at least one specialized educational attraction built into its program that is unavailable anywhere else in the district. By making all the schools in each district equal in quality but educationally different, the school district would force no one to choose between prejudice and quality, but everyone to choose between racial preference and educational specialization preference.

In reviewing plans, the district courts would be required to use the following test: A school district’s plan must offer open school choice to all parents in the district, and must clearly refuse to accommodate segregational preferences among white parents in their choice of schools for their children. It must demonstrate its refusal by implementing a school differentiation and pupil assignment plan that strongly advantages the academic opportunities of children whose parents choose schools for their children without regard to the higher proportion of blacks in the student body, or to any factors that show any statistically significant correlation with higher proportions of black students.

States are also required to redraw school district boundaries on the basis of some reasonable set of criteria that in no way statistically correlates with the housing segregation of whites from blacks. There should be no majority-black school districts if at all possible, and, to the extent possible, no school districts in which the proportion of black children is less than half the proportion of black children in the state as a whole.
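
To make the boundary-criterion requirement concrete, here is a minimal sketch in Python of how a reviewing court’s statistician might test a proposed criterion for correlation with racial housing patterns. The data values, the example criterion and the 0.05 significance level are all hypothetical illustrations, not anything prescribed above.

```python
# Hypothetical sketch: test whether a proposed district-boundary criterion
# correlates with racial housing patterns. All data values are invented.
from scipy.stats import pearsonr

# One value per neighborhood: the proposed criterion (e.g., road distance
# to the nearest district office, in miles) and the black share of residents.
criterion = [3.1, 4.7, 2.9, 5.2, 4.1, 1.8]
black_share = [0.05, 0.35, 0.10, 0.40, 0.20, 0.03]

r, p = pearsonr(criterion, black_share)
if p < 0.05:  # statistically significant correlation: criterion is disallowed
    print(f"Criterion tracks housing segregation (r = {r:.2f}); redraw it.")
else:
    print(f"No significant correlation (r = {r:.2f}, p = {p:.2f}); acceptable.")
```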

The plans must be reviewed every two years, and where it appears children are being deprived of educational opportunities by their own parents’ white supremacist, segregationist school choices, the district must implement sufficient academic specialization focus in each school failing to meet the desegregation statistical threshold to overcome the white supremacist bias among enough parents to surpass that threshold. Every two years the specialization incentives must increase until every school has met the desegregation statistical thresholds.
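
As a rough illustration of the biennial review cycle just described, the following Python sketch escalates the specialization incentive at every school that fails a desegregation threshold. The threshold rule, the data structures and the integer “specialization level” are hypothetical stand-ins for whatever a real plan would specify.

```python
# Hypothetical sketch of the biennial review cycle described above.
# The 50%-of-statewide-share threshold and the integer incentive level
# are invented placeholders, not anything prescribed by the text.

def meets_threshold(school_black_share: float, state_black_share: float) -> bool:
    """A school passes if its black share is at least half the statewide share."""
    return school_black_share >= 0.5 * state_black_share

def biennial_review(schools: list[dict], state_black_share: float) -> None:
    """Escalate the specialization incentive at every school still failing."""
    for school in schools:
        if not meets_threshold(school["black_share"], state_black_share):
            school["specialization_level"] += 1  # strengthen the academic draw

schools = [
    {"name": "Oak", "black_share": 0.04, "specialization_level": 1},
    {"name": "Elm", "black_share": 0.22, "specialization_level": 1},
]
biennial_review(schools, state_black_share=0.20)
print(schools)  # Oak is escalated to level 2; Elm is left unchanged
```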

In this way, every parent who values the choice of specialized education for their children over their choice of specialized racial association for their children will get preferred treatment. And those who do not will get what is left over. This does not punish parents for having white supremacist preferences, but it does punish them for prioritizing their white supremacist preferences over the educational advantage of their own children. And it does not deny basic, equal, quality general or specialized education to any child. To the extent children are deprived of their or their parents’ choice of specialization in their studies, it is their parents who are depriving them of it on the basis of their irrelevant racial bias, not the state.

Revisiting Naim v Naim and Loving v Virginia

See Pace v Alabama above. Under TSCE the rulings in Naim v Naim (1955) and Loving v Virginia (1967) would have been identical to the ruling described there for Pace v Alabama.

Revisiting Bowers v Hardwick

To the extent homosexual conduct in specific sex acts may be deemed by a State to endanger the health, welfare, safety or morality of its people, the State may outlaw such acts, to the extent the State’s own constitution permits. But to the extent the State attempts to enforce such prohibitions, the Fourth and Fifth Amendment obstacles to search and seizure of evidence of such intimate acts engaged in within the homes of married couples, and the immunity of married couples from testifying against one another, work to render any such prohibitions irrational, and tend to encourage unconstitutional state action by state law enforcement agents. Thus any such prohibitions as applied to married couples must be struck down as repugnant to the Constitution.

Laws forbidding homosexual sex acts performed outside the home (including temporary lodging) or outside of marriage, however, may theoretically pass Constitutional muster, to the extent the State’s own constitution permits them, but to the extent they tend to stigmatize and discourage differentially, in correlation with citizens’ sex or their sexual preference, the full and equal exercise of their rights, even such prohibitions may be found unconstitutional when the effective impact of their enforcement upon the institution or support of social hierarchy among citizens is taken into full consideration.

Citizens of other States do reserve all their unenumerated Ninth Amendment rights against the State of Georgia’s laws, but unlike in a Federal Territory that reservation of rights must be balanced against the State of Georgia’s sovereign police powers. The Full Faith and Credit Clause requires the visiting citizen’s home State to subject its citizens to Georgia’s laws when they are within Georgia’s territorial jurisdiction, but only to the extent Georgia’s laws are not repugnant to the home State’s own laws that are designed to sustain the health, welfare, safety and morality of its own citizens and inhabitants while traveling outside the State.

A State’s laws can have extraterritorial effect over its own citizens and inhabitants if its own citizens have designed and approved those laws with such application, but only to the extent they are not repugnant to the laws of the other States in which the home State would like them to apply. Thus if a State deems it harmful to the morality of its citizens to submit to laws restraining certain freedoms they enjoy in their home States and which are regarded as fundamental liberties in their home States, then the Constitution requires all other States to extend Full Faith and Credit to any precedent case law in the home State absolving its citizens from violating such repugnant laws in other States. Yet to the extent the prosecuting State deems intolerably harmful to its own citizens and inhabitants any allowance of citizens of the other State to exercise what their home State deems a fundamental liberty in violation of the prosecuting State’s laws, the home State is also bound by the Full Faith and Credit Clause to yield to the prosecution of its citizen for that violation while in the other State.

One way for States to resolve isolated incidents of this kind of irreconcilable conflict of laws is for the home State to issue an extradition order for prosecution of its own citizen for violating the other State’s law, so that the citizen is forcibly removed by the other State’s authority back to the home State, where the charges are then peremptorily dropped against the citizen. But this method of resolution will not be workable if the frequency of the violation, or the extent of the perceived harm by the prosecuting State, is significant, and in any case it is only a voluntary solution that must be worked out between the States in conflict.

Absent such an amicable workaround to the conflict of laws, the federal judiciary may resolve such cases in equity if they are only infrequent and not perceived as a great threat by the prosecuting State, but may require establishing legal precedent in diversity if the prosecuting State deems the overall impact of the violations by citizens of the other State or States to be intolerable. In that case, the federal judiciary would most likely establish a balancing test between the prosecuting State’s police interests and the extraterritorially rights-assertive State’s police interests. In most cases the prosecuting State’s interests will tend to prevail, since territorial jurisdiction tends to outweigh personal jurisdiction in the application of State police powers.

However, the Equal Protection Clause then operates in two directions to uphold the rights of Georgia citizens engaging in prohibited acts outside of marriage. First, the prohibition must not tend to sustain or increase social hierarchy of married people over unmarried people, and second, it must not tend to sustain or increase social hierarchy of out-of-state citizens over in-state citizens. In the latter case the question is whether the prohibition tends to reduce citizens of the prohibiting State to a lower social position than citizens of other States.

The second question would turn on whether the prohibition is the best method of achieving the State’s general objective of protecting the health, safety, welfare and morality of its people as compared to how those objectives are achieved in regard to similar matters under the laws of every other State. If another State has an established and proven method of achieving the objective that requires less restraint of the rights of its citizens, then Georgia must adopt a method at least as liberal.

The first question turns on the impact of the prohibition on unmarried citizens: whether it tends to coerce into marriage citizens who might otherwise not choose to associate in that manner, and whether it tends to stigmatize unmarried people or to deprive them of life, liberty or property in a manner in which married people are not thus deprived.

The first question thus brings us to the edge of substantive due process, but just to the edge. It seems fairly evident that any significant and systematic deprivation of life, liberty or property, or any systematic burden that causes such deprivation, if it works more severely upon one class of citizens than on another, by definition institutes and sustains social hierarchy of the less burdened class over the more burdened class. Thus any law that tends significantly to result in the deprivation of citizens’ life, liberty or property must have the same depriving impact on all citizens, or it violates not only the Equal Protection Clause of the Fourteenth Amendment, but the very definition of citizenship under the Privileges and Immunities Clause in and of itself.

Thus if out-of-State unmarried citizens are more heavily deprived of life, liberty or property by Georgia’s anti-sodomy law than out-of-State married citizens, then Georgia’s anti-sodomy law must not be applied against out-of-State citizens at all, lest it institute or sustain social hierarchy among them in violation of the Privileges and Immunities Clause. But if out-of-State citizens are thus exempted from the law’s restrictions, in-State citizens must also be exempted lest the law violate the Equal Protection Clause of the Fourteenth Amendment.

The final result of this federalist calculus of immunities and privileges is that the Georgia anti-sodomy law must be struck down as unconstitutional in its entirety.

Though the route to arriving at this conclusion seems roundabout, the round trip is necessary to respect the delicate balances inherent in our federalist system of government. And while the elaborate pathway to this result may seem unnecessarily convoluted in this case, there are other cases in which the respectful path taken here sets a valuable precedent by which rights and duties, and conflicts of law, may be resolved from fundamental principles of federalism in ways that would not be possible under more direct, blunt and ultimately anti-republican means of adjudication.
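
For readers who prefer the pathway reduced to its logical skeleton, here is a minimal sketch of the two-step calculus in Python. The boolean input and function name are hypothetical labels for the findings discussed above, not a claim about how any court would formalize them.

```python
# Hypothetical skeleton of the "federalist calculus" above, reduced to booleans.

def law_survives(unequal_burden_on_out_of_state_unmarried: bool) -> bool:
    # Step 1: if the law deprives out-of-State unmarried citizens more heavily
    # than out-of-State married citizens, the Privileges and Immunities Clause
    # exempts out-of-State citizens from the law entirely.
    out_of_state_exempt = unequal_burden_on_out_of_state_unmarried
    # Step 2: if out-of-State citizens are exempt, the Equal Protection Clause
    # requires exempting in-State citizens as well.
    in_state_exempt = out_of_state_exempt
    # The law stands only if it still applies to someone.
    return not (out_of_state_exempt and in_state_exempt)

# With the unequal burden found as described above, the law falls entirely:
print(law_survives(True))  # -> False
```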

Revisiting Citizens United v FEC

Under TSCE a corporation has no right not explicitly granted in its charter, and is not a citizen under the Constitution; thus it has no rights at all in States other than its home State, the State that has chartered it, except to the extent each other State explicitly or implicitly grants it a subset of the rights granted in its charter. Thus unless its charter explicitly grants it the right to speak politically, it may not do so, and any political speech made by its agents must be considered the individual action of those agents and not an action of the corporation at all.

Any expenditure from corporate funds for political speech not spent in conformity to an explicitly chartered right to make such speech is thus a form of embezzlement. Any agent of the corporation who fails to act in due diligence to prevent or promptly expose, correct and punish such embezzlement is aiding and abetting in the embezzlement.

Every shareholder and agent of every corporation already has whatever free speech rights the law affords them. They have no claim to exercise additional free speech rights under a corporate persona. If the State includes any rights to speech in a corporation’s charter, it is to aid in the accomplishment of the State’s overall public purpose in issuing the charter, not to increase the speech rights of the corporation’s shareholders or agents, and certainly not to entitle its shareholders or agents to any such extra rights.

Early in our nation’s history property qualifications for voting in every state were overturned, first by massive disobedience, then by legislative affirmation of the popular status quo. An insurrection, the Dorr Rebellion, was actually fought over the issue in Rhode Island in the 1840s, producing a dispute between two rival governments claiming legitimacy. When the question of which government was legitimate came before the Taney Court in 1849, in Luther v Borden, Taney ruled for the majority that the Supreme Court lacked jurisdiction to decide the matter because it was essentially political, not judicial, in nature. This was Taney’s famous introduction of the “political question” doctrine into Supreme Court jurisprudence.

Benjamin R Curtis, who ascended at a relatively young age from his brief pit-stop on the Supreme Court to an apparently higher office as the chief advocate for Northern aristocratic corporate interests, deftly thwarted the egalitarian ideals of both the Jacksonian Democrats and the Radical Republicans by inducing the Chase Court to construct the Privileges and Immunities Clause in Paul v Virginia exactly as Curtis had unsuccessfully proposed to construct it in his Dred Scott dissent a decade earlier. In so doing he accomplished what Taney had blocked him from doing, which is to render the Privileges and Immunities Clause “unmeaning” and to “have no operation,” as Taney so aptly declared for the 7-2 majority against Curtis in Dred Scott. In effecting his reversal on the construction of that clause in Paul v Virginia, Curtis got the Chase Court to chain all human citizens down into the same dungeon of baseline zero privileges and immunities under State law to which Taney had relegated corporations in Covington Drawbridge the year after Dred Scott.

In effect, by eviscerating the Fourteenth Amendment’s Privileges or Immunities Clause, and thus equating non-citizen corporations with human citizens under the Fourteenth Amendment’s Due Process and Equal Protection Clauses, Curtis reinstated the apportionment of political power by wealth rather than by household. He thereby undid the post-Revolutionary de facto democratization of government that Taney had helped legitimize and enshrine into law as a young man in the Maryland state legislature, through his activism to repeal all property qualifications for voting and to criminalize the then common practice of corrupt electioneering through the distribution of food and liquor at voting places.

Today we live in a Curtis-architected political environment, hemmed in between the juggernaut of the aristocrats’ robot slave corporations and their captive federal government’s “compelling state interest” whip to keep those robot slave corporations, and their own subordinate human slaves and their hapless State governments, in line in the chain gang of service to the whims, proclivities and, most frighteningly, the megalomaniacal social engineering agendas of the wealthy. Nothing illustrates this fact more bluntly than the following declaration by Justice John Paul Stevens in his ringing dissent joined by three other Justices in Citizens United v FEC:

“The majority grasps a quotational straw from Bellotti, that speech does not fall entirely outside the protection of the First Amendment merely because it comes from a corporation. Ante , at 30–31. Of course not, but no one suggests the contrary and neither Austin nor McConnell held otherwise. They held that even though the expenditures at issue were subject to First Amendment scrutiny, the restrictions on those expenditures were justified by a compelling state interest.”

Neither the majority nor the dissenting minority in Citizens United had any compunction about acknowledging the fundamental right of corporate robot slaves to claim on behalf of their masters superhuman turbo-charge-pumped uncensored conduits of speech into the marketplace of ideas, through which the wealthy are able to propagate elaborate tissues of half-truths inducing specifically intended popular belief in certain lies, drown out selected individual citizen voices through delegitimization campaigns in collusion with mercenary corporate mass media purveyors, and amplify selected citizen voices to forward the political aims of the wealthy, who remote control these political machinations of the masses’ frames of political and factual perception from behind the blast wall of the corporate veil.

In Charles River Bridge Taney ruled for the majority:

“The object and the end of all Government is to promote the happiness and prosperity of the community by which it is established, and it can never be assumed that the Government intended to diminish its power of accomplishing the end for which it was created. …

The continued existence of a Government would be of no great value if, by implications and presumptions, it was disarmed of the powers necessary to accomplish the ends of its creation, and the functions it was designed to perform transferred to the hands of privileged corporations.”

If this principle of strict construction in favor of the maximal preservation of State power against its being “transferred to the hands of privileged corporations” is fit to be applied to a corporate charter concerning a single bridge, how much more mandatory is its application to the Privileges and Immunities Clause and the Privileges or Immunities Clause of the Constitution? Applying it to the Constitution in the context of revisiting the Citizens United decision would compel a TSCE Court to reject the common presumption of the majority and the dissent in that decision that corporations have a presumptive First Amendment right of free speech, for nothing in the Constitution either requires or justifies such a presumptive transfer of sovereign-immunity imbued speech “to the hands of privileged corporations.”

To the extent a State’s charter for a corporation can be construed as granting the corporation a right of speech at all, that grant must be “to promote the happiness and prosperity of the community by which it [the State] is established,” and the permitted speech must serve at once both that end and the specific end for which the corporation has been chartered, not one or the other alone, and it must also do so in conformity to the Constitution’s guarantee of a republican form of government in each State. Taney’s construction of the Constitution promotes that end while minimizing any requisite diminution of State power through its transfer or surrender to any corporation it charters, or indeed to the Federal Government itself.

State governments have little interest in the outcome of a contest for power between a Federal Government agency like the FEC and a State-chartered corporation like Citizens United. Their interest is uniformly in minimizing the aggrandizement of power by either party, while maximizing the extent to which the actions of each support each State’s own ultimate end, the welfare and happiness of its people. And they must seek to achieve this end with as little constitutional grant of power to restrain the rights of their citizens as possible.

A closer examination of the act of incorporation, however, is required to bring us to the heart of the matter presented in the Citizens United case. Although nowadays State executive agencies issue corporate charters, they do so only by direction from State legislatures, which originally chartered corporations through specific Acts. The authority under which corporations are chartered remains under the State legislature today, and the executive merely administers the process.

State legislatures are very limited in their political speech and for obvious reasons. They are elected directly by voters, so they should not as a body have any power to lobby voters. What they may not do directly, they may not do indirectly either. For this reason, no corporation, being the creature of a State legislature, has any legitimate authority under any State’s laws to lobby voters in any way. Thus by ruling in case after case that corporations are persons in the meaning of the Due Process and Equal Protection Clauses of the Fourteenth Amendment, the U.S. Supreme Court has effectively inserted into every State corporate charter a delegation of powers to corporations that neither the State legislature nor the Congress, nor the U.S. Supreme Court, itself possesses under their respective Constitutions. How can any of these government bodies delegate to corporations these powers they do not themselves possess? They cannot do so lawfully. In doing so, they illegally usurp the power they thus delegate.

But we must go further. Under today’s Curtis-architected political environment, political power is apportioned as much by wealth as by vote, and thus to the extent State legislatures or the U.S. Supreme Court authorize corporations to engage in commercial speech, and to the extent that speech acts to lobby rather than even-handedly inform public opinion for or against certain consumer choices, that speech, every bit as much as political speech lobbying voters, acts to influence the allocation of political power in a manner that directly redounds to the benefit or detriment of specific legislation or candidacies for public office. Thus even the right of corporations to speak commercially to lobby rather than merely inform consumers amounts to an illegitimate exercise of power by the State legislature, an exercise of power it does not have, and cannot have without undermining the specific guarantee under the Constitution of a republican form of government in every State.

Under TSCE, individual citizens could bring suit to silence corporations whose speech can be rationally established as the cause of some public ill or private harm against the plaintiff, and judges would be required to rule strictly in favor of the reservation of power from corporations to act injuriously to citizens in every case. Thus the wealthy would gain no political amplification of their own speech through their investment in or control over corporate speech. This would be good for corporations, for they would then once again be legitimately empowered to lobby consumers without the risk of undermining the integrity of the State’s republican form of government, but they would still have no authority to lobby voters or legislators in any way.

As creatures of the State legislature, corporations must be as silent as the legislature itself on its own pending legislation, on its own members’ re-election campaigns, and on any other elections. The majority in a State legislature certainly may not legally spend public funds on a mass media campaign in favor of legislation it is planning to bring up for a vote, nor can it do so after it has passed a piece of very unpopular legislation that it hopes to persuade voters to view more favorably in time for the next legislative election campaign season, nor can it issue resolutions or fund campaigns for or against candidates for executive office.

For the same reason, State legislatures have no authority to charter corporations with the power to do the same, nor does the U.S. Supreme Court have any authority to inject such powers into corporate charters through judge-made laws of a kind even the Congress has no authority to pass, namely, laws effectively amending the Constitution to grant Bill of Rights reserved rights to each State legislature as if that body were a sovereign citizen in and of itself, with the authority to procreate at will as many offspring corporate sovereign citizens as it pleases to do its bidding, rather than the duly elected representative body of its citizens it truly is, with no inherent rights of its own.

Returning to our examination of the act of incorporation, it is clear today’s corporations are not actually owned by their shareholders, and indeed no corporation in the past that has limited the liability of its shareholders has actually established those shareholders as its true owners. Ownership necessarily entails ultimate total liability. By severing the tie of ultimate liability, the State legislature severs the tie of ownership, and converts shareholders into mere lenders.

Shareholder agreements are in fact money lease agreements. The founding body of shareholders acts as fiduciary intermediary between the shareholders and the directors who lease the money from the body of shareholders. When the shareholders appoint the directors, they are actually contracting with the directors to lease to them the capital funds deposited by the shareholders into the corporate bank account in exchange for share certificates of deposit. The directors borrow that money on an annual basis, and the shareholders hold annual meetings to renegotiate the terms of the lease with the directors and the officers they appoint to direct the operations of the business enterprise.

The same is true of non-profit corporations, except that instead of shares in a joint lease of funds, the capital is accumulated from grants, donations and member dues. The money, however, is nonetheless held in deposit by a shareholder body, and the money is no less leased from that body to the directors and the lease annually renegotiated just as in a for-profit corporation.

Some have argued that corporate directors and officers are more akin to owners than shareholders are. That is only true, however, to the extent one identifies ownership with immediacy of control. Ownership, however, is most accurately defined as ultimate control. Directors and officers most definitely do not hold or exercise ultimate control over corporations, for with ultimate control comes ultimate liability, and directors and officers are always indemnified by corporations specifically because they are recognized as not being ultimately in control of, or liable for, the corporation’s deeds. And neither are the shareholders ultimately liable. Who, then, is?

The answer ought now to be patently obvious. The State legislature creates the corporation, defines its existence and certifies it as a legal entity, regulates its operation, withdraws dividends out of its profits at will through its taxation power, and severs the ownership tie between the shareholders and the business enterprise by absolving the shareholders of ultimate liability for the corporation’s actions and debts. In thus relieving the shareholders of the defining burdens of ownership, the State legislature assumes those burdens on behalf of the citizens who created the State itself.

The people of each State in fact own all the corporations chartered in that State. And nothing makes that fact more clear than the phenomenon of “Too Big To Fail,” in which State governments, or the Federal Government on behalf of the States, “bail out,” which is to say exercise ultimate liability for, or in other words ownership of, the acts and debts of a corporation.

No one would dispute that the State has authority to bail out a corporation it has chartered. And no one would dispute that the State has authority to choose not to bail it out. The State is free to inject or not inject lifesaving capital into any corporation it charters. And it is free to call upon the assistance of the Federal Government for that purpose as well. This authority is inherent in the power to charter corporations. For no one would claim that the State has a similar authority in regard to sole proprietorships or private partnerships.

The State has no authority to bail out sole proprietorships or private partnerships because to do so would be to transfer public funds directly into the hands of private citizens. Yet the State may bail out any corporation it chooses precisely because in doing so it is merely transferring public funds to a publicly owned enterprise.

But if a corporation is owned by the State, why is the State not liable for its debts upon dissolution? The answer is very obviously because the Eleventh Amendment forbids anyone but another State or the Federal Government from suing the State for those debts, except to the extent the State consents to be sued for them, and no State would ever consent to such a suicidal thing as that. Nor would any State sue another State, or the Federal Government sue any State, to recover corporate debts, for fear of starting a litigation civil war.

To the extent the outcome of elections and the outcome of legislative, executive and judicial decision-making are influenced by constituents in proportion to their wealth rather than to their vote, and to the extent the State delegates its power unequally with a bias toward the wealthy, State legislatures and the courts, and the corporations and executive agencies they create, must be restrained from engaging in commercial speech or commercially biased activity of any kind. Obviously this would severely hamper traditional business practices. Only unbiased informational commercial communication would be permitted, and all marketing would be forbidden. While such a marketplace may sound idyllic in some sense, it would most likely prove completely inoperable and lend itself to corrupt regulation, because the regulatory power of the State policing commercial speech would essentially dictate the winners and losers in the marketplace.

Under TSCE individual citizens may silence corporate speech and forestall corporate action in court by a mere showing of individual harm or harm to the public good, thus neutralizing the political bias created by the State legislature’s delegation of its power to wealthy investors in proportion to the number and value of shares those investors hold in corporations, and the degree to which some of those large shareholders also tend to gain appointment as directors and officers of corporations. Thus TSCE would allow the status quo manner in which States now charter corporations to persist, including the grant of broad latitude to engage freely in commercial speech to lobby consumers, investors and business leaders to influence their commercial decisions.

Nowadays for-profit and even tax-exempt non-profit “public benefit” corporations are created by State legislatures through pro forma procedures administered by executive agencies, without any need for the incorporator to avow any particular public purpose or promise any particular public benefit in exchange for the privilege of incorporation. This may seem odd, particularly in regard to “public benefit” non-profits, but the reason for this practice is quite simple. The public purpose of your run-of-the-mill for-profit and tax-exempt non-profit corporation alike is to generate tax revenue for the State.

In the case of for-profits, this explains why States rarely take action to hold them accountable to any apparent public purpose at all, except the vague purpose of engaging in economic development while sharing the risk with the State. States permit, encourage, even in many ways require by law every for-profit corporation to prioritize the maximization of profit, market expansion and market share, often at the cost of consumer health, safety, security, morality or welfare. States do this for one paramount reason, to drive their corporate offspring to generate as much taxable revenue as possible for the State.

In the case of non-profits, there are the enormous sums of foundation and corporate grant money that go unawarded each year, the dues collected from members by membership non-profits, and the enormous market for selling “wares of conscience” in the form of promised charitable, educational, scientific or religious services in exchange for donations, sometimes coupled with “incentive gifts” that tokenize the indulgence embodied in the gift. From these sources non-profit corporations raise enormous sums of capital every year, and never as much as is available in the market. Non-profits often also compete with for-profits in marketing and selling goods and services at market rates, or even engage in full-scale commercial production, research and development. While States do not collect taxes directly from these commercial activities of non-profits, either from their market income or from their expenditures, the capital non-profits accumulate from many otherwise untapped in-state and out-of-state sources is usually disproportionately spent in-state, creating jobs directly and, through purchasing, indirectly, and thus generating both income tax revenue and sales tax revenue downstream for the State.

Non-profits also tend to raise the public image of the States in which they are chartered and operate, potentially increasing the State’s overall attractiveness for out-of-state investment, immigration of highly paid professionals, tourism and additional non-profit clustering activity and headquartering.

States thus need look no further than the overall economic benefits of both for-profits and tax-exempt non-profits for the public good those corporations do for the State. Only when enough of their offspring corporations do such damage to the welfare of its citizens, or to the State’s reputation, that the short-term social losses outweigh the long-term economic benefits of maintaining a fundamentally economic mission for its entire corporate portfolio will a State consider altering the basic policy that equates the public benefit of all its corporations with the bottom line tax revenue they collectively generate for the State.

In truth a State likely looks also to a mix of economic factors to evaluate the overall public benefit it derives from its corporate portfolio, including market diversity, job creation, income inequality, etc. It is out of this overall evaluation of the State’s economic well-being that the sensibility of the majority in the actual Citizens United v FEC ruling springs. In its ruling the Citizens United Court waxed glowing about the enormous benefit the State legislature gleans from the expert and insightful corporate speech of its massive stable of corporate entities when it comes to vital concerns of an economic nature that continually press themselves upon the beleaguered minds of State legislators and their paltry staffs of mostly fresh-out-of-college policy aides, not to mention the even more beleaguered minds of voters on such difficult issues.

It is hard to argue with the basic rationale. The people own the corporations, and the corporations do their good work of generating jobs, tax revenue, food, shelter, entertainment, religion, science, education and charity, or its more popular epithet among the younger indulgence-seekers, “social justice,” for the people. Why shouldn’t the people also benefit from the sage guidance on economically tinged political questions from the same public servant corporations who have been doing so much good for the people in the economic realm for as long as anyone can remember?

Indeed, there is no one alive today who can remember the early days of the republic, the days Taney remembered from his youth, nor the days Taney witnessed over the course of his life, in which he saw the transformation of our nation from an agrarian-artisan economy with some significant import-export activity, to a bustling manufacturing and service economy well on its way to becoming today’s metropolitan criss-cross of corporate inter-relations threading through the warp and weave of the average citizen’s daily existence.

Corporate directors and officers today are self-appointed public officials of a second executive branch of government created by State legislatures through their power of incorporation. The mission of this branch is to manage the economy of the State in a highly monetized manner that generates maximal tax revenue for the State, as well as fulfills the economic needs of the great majority of its consumers. Corporations chartered in other States and overseas, however, play an increasingly dominant role in this economic branch of government in most States. The local branch managers of those corporations then serve in the role of State officials for this branch in each State. Thus, increasingly, the economic State officials in some States are being appointed by the self-appointed economic State officials in other States where the largest multi-State and multinational corporations are chartered or headquartered.

Yet Taney made it clear in Bank of Augusta v Earle and in Covington Drawbridge that a corporation exists in a State in which it is not chartered only at the pleasure of that State’s legislature. Thus any branch of a corporation chartered in another State is no less owned by the State in which it operates than is a corporation both chartered in that State and operating there.

Under TSCE there will be no need to alter the status quo in regard to the de facto self-appointment of State economic branch officials, or their appointment of State economic branch officials in other States, because all of these officials will be held strictly and directly accountable to each and every citizen for the demonstrable impact of their actions on the welfare of individuals and of the public.

The impact of legislative decisions on the welfare of the people nowadays is less direct and profound than that of the economic decisions of corporate directors and officers. Making those directors and officers directly accountable in the courts to individual citizens, representing the public ownership interests in the corporations those directors and officers manage on behalf of the citizens, will go a long way towards neutralizing any pressing need to suppress the corporate-funded speech of those directors and officers on political matters. Once the public understands that they actually own all of the corporations chartered or operating in their States, they will vote in accord with their sense of ultimate responsibility for the deeds of those corporations.

Corporate speech only has undue political influence to the extent it successfully masquerades as the voice of independent wealth and industry that stands apart from the State as if its political equal. Once it is fully understood among voters, consumers and shareholders alike that all corporations are their property in both title and lien, they will perceive corporate political speech in a very different light. They will see it for what it is, the voice of a subordinate speaking out of line, its temerity matched only by its negligence of its delegated duty. Self-appointed or not, when citizens realize that corporate directors and officers are actually public officials, the citizens will put them in their place, and the republic will experience the flourishing of civic prosperity that every fairy tale recounts upon the momentous occasion of the return of the good and sensible sovereign, in our case the sovereign people, to her throne.

A TSCE decision in Citizens United would strike down the law restricting corporate expenditures on political speech, but completely forbid all corporate political speech as a species of prohibited State legislature political speech unless the State’s constitution explicitly allows such speech by its State legislature, and allows it to delegate that power of speech to its corporate offspring. It would reaffirm Dred Scott and Covington Drawbridge with the effects described under the descriptions of TSCE rulings in those cases given elsewhere in this exposition. And it would suggest to State Legislatures that if they seek to empower themselves and their corporate offspring to speak politically, they must amend their State Constitutions to allow it if they do not already, and in so doing they must add a provision in those constitutions requiring full disclaimer and disclosure, in each corporate or State Legislative communication, presented in a manner no less prominent than the primary message of the communication itself, that “The views presented in this message are offered humbly by the following named public servants to influence the popular will of the people who own and ultimately control all corporations and government bodies operating in this State, in the hopes that their Highness, the honored sovereign citizens who own this State, will adopt the measures these servants humbly recommend: <names of the directors, officers or legislators issuing the message>.”


Looking Forward to United States v. Windsor

Under TSCE the Federal Government has no jurisdiction over marriage law, so DOMA is blatantly unconstitutional and is summarily voided. The refusal of the executive branch to defend DOMA in the case is no bar to reaching the merits. The Federal Government cannot frustrate the judicial process by its inaction. To allow its inaction to have that effect would be to grant it a power to interfere in the judicial branch’s activities in a way not delegated to it by the Constitution.

On the issue of the “Equal Protection Component” of the Fifth Amendment, although the question does not arise in this case, it is interesting to note that TSCE would find equal protection against federal laws in the Ninth and Tenth Amendments, not in the Fifth, simply because no text in the Constitution either expressly grants or logically entails a delegation of power to the Federal Government to make laws that treat different social categories of citizens differently. All the areas of law that could possibly have need of social categorization of citizens are reserved to the States, and outside the province of the Federal Government, whose purpose is specifically to deal with citizens of the United States in a manner that bears no need of variation for local purposes in different States. Since there is no social categorization of citizens that can be said to have a valid legislative purpose nationally which may not require local variation, there is no regulatory justification for the Federal Government to have jurisdiction to legislate in any way on the basis of socially differentiating categorizations of citizens.

Looking Forward to Hollingsworth v. Perry

Under TSCE the Privileges and Immunities Clause defines citizenship so as to require the absence of any discriminatory restraint of any state citizen’s equal rights compared to any other state citizen’s under that state’s laws. It also applies Bill of Rights protections against all State laws as applied to out-of-state citizens, and the Equal Protection Clause of the Fourteenth Amendment equalizes treatment of in-state versus out-of-state citizens under each State’s laws. These three constructions under TSCE effectively void any state law as applied to any citizens that deploys state police powers to support or institute social hierarchy.

The California state law passed by Proposition 8 specifically institutes social hierarchy under the law by barring male citizens from marrying into the socially dominant class consisting of all males, just as it would if it barred whites from marrying whites, and barring female citizens from marrying anyone other than a member of the socially dominant class consisting of all males, just as it would if it barred blacks from marrying blacks.

Proposition 8 bars female citizens from enjoying the legal privileges and immunities that come with legal recognition of their primary intimate association in marriage except under conditions by which they engage in that primary intimate association in a subordinate social position to their intimate partner in marriage. By requiring social subordination by sex of a female to a male as a condition of the state’s grant of the associational privileges and immunities attendant to a civil marriage union under the law, Proposition 8 deploys state power to institute sex-based social hierarchy among citizens, and therefore must be voided as entirely repugnant to the Constitution, as it vitiates the very meaning of citizenship by rendering the Privileges and Immunities Clause “unmeaning.”

By forbidding males from marrying males, Proposition 8 does similar violence to the same clause of the Constitution by forbidding citizens from marrying into a socially dominant class of citizens.

Furthermore, by singling out homosexuals for denial of the privileges and immunities attendant to marital association that are afforded to heterosexuals, Proposition 8 deploys state power to support and institute social hierarchy on the basis of sexual preference, which also does violence to the Privileges and Immunities Clause.

To the extent homosexual conduct in specific sex acts may be deemed by a State to endanger the health, welfare, safety or morality of its people, the State may theoretically outlaw such acts, but may not deny marital association status to any pair of citizens on the presumption that such an association necessarily entails engagement in the prohibited sex acts. In general the State may outlaw such acts, but to the extent the State attempts to enforce such prohibitions, the Fourth and Fifth Amendment obstacles to search and seizure of evidence of such intimate acts engaged in within the homes of married couples, and the immunity of married couples from testifying against one another, work to render any such prohibitions irrational, and tend to encourage unconstitutional state action by state law enforcement agents. Thus any such prohibitions as applied to married couples in their own homes must be struck down as repugnant to the Constitution.

Ultimately the police power of a State to prohibit homosexual sex acts is defeated as instituting or sustaining social hierarchy; thus all anti-sodomy laws are prohibited by the Constitution under TSCE. See the details of this analysis under “Revisiting Bowers v Hardwick” above.

Reaffirming Dred Scott for Genuine Liberty and Equality

Taney’s construction of the Privileges and Immunities Clause has never been overturned, merely ignored. It cannot be overturned because its construal of the intricate mechanics of federalism in the Constitution is too masterful and flawless. It stands, and always has stood, as the law of the land, not in the shadows, but right there in the open light of reason, dressed in the full regalia of its pre-eminence as the unrefuted ruling of a 7-2 Supreme Court majority, and if we do not see it as such, it is only because we have chosen to look to some other light — be it war, be it empire, be it postmillennial rapture, be it corporate branded slavery — than the Constitution and the law as the guiding principle for our government and our pursuits of happiness.

Taney was not just a textual originalist. He was an earwitness textual originalist. He learned the meaning of the Constitution directly from the lips of many of its original framers. There will never be an interpreter of the Constitution with a better vantage from which to plumb and elucidate the proper manner in which it can best be symphonically rendered as sound government of, by and for the people.

Nor did the Reconstruction Amendments weaken the validity of Taney’s interpretation of the Constitution in any way. All they have done is fill the part-vacant crucible of Taney’s Dred Scott construction of the Constitution by including non-whites under all generic terms referring to people, such as the term “citizens”, and flip Barron v Baltimore on its head by applying the Bill of Rights against all state laws.

The Equal Protection Clause of the Fourteenth Amendment, if applied to Taney’s construction of the Privileges and Immunities Clause in Dred Scott, would require all states to grant full Bill of Rights protections under their own state laws to their own state citizens, effectively reversing the result in Barron v Baltimore, for states would no longer be allowed to restrict the privileges and immunities of their own citizens more stringently than they do those of citizens of other states. Its effect in this regard would be identical to Bingham’s intended effect for the Privileges or Immunities Clause of the Fourteenth Amendment, and to the intent of Congress in passing it.

Taney clearly implied that genuine equality, including the liberty to choose a spouse of any traditionally forbidden social caste, would be included under the Bill of Rights as a Ninth Amendment unenumerated reserved privilege or immunity. Thus even without overturning the specious construction of the Equal Protection Clause, not struck down in Loving, as permitting self-referential discrimination, the Taney construction of equality under the Privileges and Immunities Clause needs the Equal Protection Clause only to get itself applied to every state’s own citizens, and it can do that without stumbling over the self-referential blind spot designed into the specious construction of the Equal Protection Clause that has been lodged in the gut of American jurisprudence by a century and a half of corporate-funded stare decisis concrete.

Feb 13

Born Free

Who in my generation can forget the appeal of the movie “Born Free”?  It captured the ideal of a life, for animals and humans alike, defined by both individual freedom and harmony with nature.

Similar ideals came to us through children’s books and movies like Bambi and Charlotte’s Web.  I remember Charlotte’s Web was the first book I ever read that had more words than pictures.  Then I loved the movie Bambi so much I insisted on getting the book from the library and, despite barely understanding half the words and not being very deft yet at using a dictionary, I fought my way through every single page, guessing the meaning of every word I did not know.

Forty years later I find myself still struggling to understand half the terms of engagement between our natural rights and our place in nature, and I am still guessing at their meaning.

One area that has had me scratching my head for decades has been the field of economics.  I studied and thought I understood “mainstream” economics in my high school and undergraduate years.  My exposure to Marxism, world systems theory, feminist anthropology in Third World development and ecofeminism in graduate school advanced my understanding of economics tremendously.  More recently, my introduction to Austrian economics through my avid appreciation of and support for the Ron Paul for President campaign has advanced my understanding of economics even further.

If you share either my concern or my perplexity, I invite you to join me in this blog, in which we will explore and develop ways of thinking, living and legislating that refuse to compromise on our fidelity either to the social practice of individual liberty as defined in the Declaration of Independence and in the U.S. Constitution, or to the ideals of conservation of nature’s ecological integrity as most definitively set forth in Rachel Carson’s Silent Spring.

Aug 19

Cantor’s Fallacy – Proof that All Sets are Countable in a True Infinitism Foundation for Mathematics

Definitions

Let a “class” be anything with parts.

Let an “atom” be anything that is not a class.

Let a thing x be “distinguishable from” a thing y if and only if x is part of a class y is not, or y is part of a class x is not.

Let the act of assigning x and y to classes so as to make them distinguishable from each other be called, “to distinguish.”

Let a “thinker” be a thing that distinguishes things.

Let a class be “discrete” if and only if it has no part, including its whole self, that is indistinguishable from another part or distinguishable from itself.

Let a “set” be a discrete class.

Let a set’s parts be called its “elements.”

Let there be an atom named “zero” that is part of a class named “the natural numbers” and let any part of the natural numbers except itself be called a “natural number.”

Let “the successor of” any natural number be the set containing that natural number and its elements.

Let “the zeroth successor” of any natural number be that natural number itself.

Let “succession” be the act of forming the successor of a natural number.

Let “the strict successors of” any natural number be the set of natural numbers formed by recursive succession on that number, and let any element of its strict successors be called a “strict successor” of it.

Let “the successors of” any natural number be the set consisting of itself and its strict successors.
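
As a concrete, purely illustrative rendering of the definitions above, here is a minimal Python sketch of succession. Note an assumption of the sketch: the formal system takes zero to be an atom, while the code models it as the empty set so that numbers can serve as set elements.

```python
# A minimal sketch of succession as defined above: the successor of a
# natural number n is the set containing n and all of n's elements.
# NOTE: zero is an atom in the formal system; modeling it as the empty
# set here is an assumption made purely for illustration.

zero = frozenset()

def successor(n):
    """Return the set containing n together with n's elements."""
    return frozenset(n | {n})

one = successor(zero)      # {zero}
two = successor(one)       # {zero, one}
three = successor(two)     # {zero, one, two}

# Each natural number so rendered has as many elements as its index,
# previewing the definition of cardinality given below.
assert len(three) == 3
```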

Let a “sentence” of a set be any many-to-one mapping of zero and its successors to the members of that set.

Let a “sequencing” of a set be any one-to-one mapping of zero and its successors to the members of that set.

Let the “index” of an element of a sentence or sequencing be the first element of its ordered pair.

Let the act of generating a sequencing through acts of distinguishing be called “to sequence” a set.

Let a sentence of a set of atoms “specify” a set if and only if it enables a thinker to sequence that set.

Let the sentence of atoms that specifies a set be called a “specification” of the set.

Let the set of atoms used to specify a set be called the “minimal alphabet of the set.”

Let a “revision of” any specification of a set be another specification that strictly includes the first specification as a part.

Let any specification be called the “zeroth revision” of itself.

Let a “revision sequence” of a specification of some set be a sequence of recursed revisions of a specification beginning with itself as the zeroth revision; let the set be called the “base set” of the revision sequence.

Let the “restriction to” a set R of a mapping, including the mappings from the successors of zero to atoms that are defined to be sentences, have its usual meaning: the subset of the mapping consisting of those ordered pairs whose first element is a member of the intersection of the entire mapping’s domain with the set R to which the mapping is being restricted.

Let an “initial clause” of length n of any sentence S, n a successor of zero, be the restriction of S to the set consisting of zero and the first n successors of zero.

Let the “final set” of a revision sequence be the set specified by either the last revision in the revision sequence or, if there is no last revision, by the sentence F formed by the concatenation of all members of the revision sequence in order of succession separated by some atom, called the “revision operator” of that revision sequence, that is not in the minimal alphabet of any revision in the sequence, and whose occurrence in the sentence F is interpreted to mean, “Given the set specified by the restriction of this sentence to the successors of zero less than my index,” which has the effect of deferring the sequencing of the set specified by the entire concatenated sentence to the subsentence with indexes higher than that occurrence.
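
Since the concatenation rule above is intricate, a minimal sketch may help. The following Python fragment is illustrative only: revisions are modeled as plain strings, and the way the separator atom is found is an assumption of the sketch, not part of the definition.

```python
# A sketch of forming the sentence F of a revision sequence with no
# last revision: concatenate the revisions in order, separated by a
# "revision operator" atom found in no revision's minimal alphabet.

def final_sentence(revisions):
    """Join revisions with a separator atom used by none of them."""
    used = set("".join(revisions))
    # pick the first character that appears in no revision (assumption:
    # characters stand in for atoms)
    separator = next(chr(code) for code in range(0x21, 0x30000)
                     if chr(code) not in used)
    return separator.join(revisions)

# each revision strictly includes the one before it, per the definition
revs = ["A is {0}",
        "A is {0} then add 1",
        "A is {0} then add 1 then add 2"]
print(final_sentence(revs))   # e.g. "A is {0}!A is {0} then add 1!..."
```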

Let “the revisions of” any specification of a set be the set of successively recursed revisions of that specification.

Let a “definition” of any set be the set containing all the revisions in a revision sequence of the set, as well as that revision sequence itself.

Let the “extension” of any set be the set of things a thinker ultimately sequences as a result of applying all the revisions of the set in the order given by the revision sequence of the set.

Let two revision sequences be “informationally equivalent” if they result in the exact same sequencing of atoms.

Let a “defining sequence” of any set be any sequence the thinker generates using the set’s definition to specify the set’s extension.

For any number n and revision sequence V of any set, let the “nth set rendition in V” be the set of things specified by the thinker’s sequencing of the set based on the nth revision in V.

Let a set be called “finite” if and only if any sequencing of it includes a number but not its successor; let it be called “infinite” if it is not finite.

Let a set be called “finitely definable” if and only if there exists a finite sequence of atoms that defines its extension; let it be called “finitely undefinable” if it is not finitely definable.

Let the “cardinality” of a set be the number, if it exists, that is not mapped to any member of the set by any sequencing of that set, but is the successor of a number that is thus mapped.
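
To make the definitions of finiteness and cardinality concrete, here is a small illustrative Python sketch. Modeling a sequencing as a dict from initial indices to distinct elements is an assumption of the sketch, not part of the formal system.

```python
# A sketch of "cardinality" as defined above: the number that a
# sequencing does not map to any member of the set, but whose
# predecessor is so mapped.

def cardinality(sequencing):
    """Return the first index not mapped by the sequencing."""
    n = 0
    while n in sequencing:
        n += 1
    # n is unmapped, while n - 1 (when the set is nonempty) is mapped,
    # so n is the cardinality in the sense defined above.
    return n

seq = {0: "a", 1: "b", 2: "c"}   # a sequencing of the set {a, b, c}
assert cardinality(seq) == 3     # the set is finite, per the definition
```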

Lemmas

Lemma 0: Every extension of a set is the same.

Proof of Lemma 0: Suppose a set has two distinct extensions. Then that set belongs to two classes, the class containing all classes without the one extension, and the class containing all classes without the other extension. Thus the set is distinguishable from itself. But by definition, a set is not distinguishable from itself. By contradiction, then, every extension of a set is the same.

Lemma 1: A set has a cardinality if and only if it is finite.

Proof of Lemma 1: By definition a sequencing of a finite set maps a number whose successor it does not also map, and that successor is by definition the set’s cardinality. Conversely, if a set has a cardinality, the cardinality is the successor of a number mapped by a sequencing of the set, yet is not itself mapped by that sequencing, so by definition the set is finite.

Lemma 2: The definition of a finitely definable set has a finite number of finite revisions.

Proof of Lemma 2: If the definition contained an infinite number of finite revisions, or any infinite revision, it would have infinitely many atoms, and would therefore not be the definition of a finitely definable set as assumed.

Lemma 3: The set of all sequencings of any finitely definable infinite set is finitely undefinable.

Proof of Lemma 3: Suppose not. Then there is a finitely definable infinite set A whose set of all sequencings Q is also finitely definable.

Thus there is a finite sequencing of atoms B defining A, and since B is finite it has a cardinality b.

And there is a finite sequencing of atoms R defining Q, and since R is finite it has a cardinality r.

We will use Cantor’s familiar diagonal argument to show there is a sequencing x of A that is not in Q.

R enables a thinker T to sequence Q with the sequencing RQ, which is a sequencing of all the sequencings of A.

We now give the following finite sequencing of atoms that will enable thinker T to sequence A in a way not listed in RQ:

“For each number n, let x be the sequencing of A that maps n to the member of A to which the nth member of RQ maps the successor of n.”

Since each member of RQ is one-to-one, it maps n and the successor of n to distinct members of A, so for each n the nth member of x differs from the nth member of the nth sequencing in RQ. Thus x is not a member of RQ.

Since this is true of any finite sequencing R of the set Q of all sequencings of A, there is no finite sequencing of Q, hence there is no finite definition of Q, the set of all the sequencings of A.

This is true of any finitely definable infinite set A, thus the set of all sequencings of any finitely definable infinite set is finitely undefinable.
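
The diagonal step in this proof can be made concrete on finite prefixes. The following Python sketch is illustrative only: the family shift_by is an invented stand-in for a listing RQ of sequencings, and A is taken to be the even numbers.

```python
# A sketch of the diagonal step in Lemma 3: given a listing RQ of
# sequencings of A, define x(n) = RQ[n](n + 1). Since each RQ[n] is
# one-to-one, RQ[n](n + 1) != RQ[n](n), so x differs from the nth
# listed sequencing at index n. (Finite prefixes only.)

def shift_by(k):
    """A sample sequencing of the even numbers: n -> 2 * (n + k)."""
    return lambda n: 2 * (n + k)

RQ = [shift_by(k) for k in range(10)]    # prefix of a listing of sequencings

def x(n):
    return RQ[n](n + 1)                  # the diagonal sequencing

for n in range(10):
    assert x(n) != RQ[n](n)              # x escapes each listed sequencing
```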

Lemma 4: Any set with a revision sequence that includes an infinite revision has another revision sequence consisting of infinitely many finite revisions which provides the same information to any thinker.

Proof of Lemma 4: Let S be a set. By definition of “revision sequence” there is no such thing as a finitely undefinable set of revisions of S. Thus any revision sequence V of S contains a finitely definable set of revisions. Thus we can replace the (finitely definable set of) infinite revisions in V with an infinite number of finite revisions by taking, as each successive revision Wn, increasingly long finite segments of each infinite revision, drawn from an increasing number of the infinite revisions. As n increases, the revisions Wn approach the full content of V, thus V and the new sequence W provide the same information to any thinker T.
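
The replacement used in this proof can be sketched directly. The following Python fragment is illustrative only: “infinite revisions” are modeled as endless character streams, and the length-n-prefix-of-the-first-n-streams scheme is one concrete way, among many, to realize the segment-taking described above.

```python
# A sketch of Lemma 4's replacement: given finitely many infinite
# revisions (modeled as generators of atoms), build infinitely many
# finite revisions W_n by taking length-n prefixes of the first n
# infinite revisions. As n grows, W_n carries ever more of V's content.

from itertools import count, islice

def infinite_revision(seed):
    """A made-up infinite revision: an endless stream of atoms."""
    return (f"{seed}{i}" for i in count())

def finite_revisions(makers):
    """Yield W_1, W_2, ...: prefixes of growing length and breadth."""
    for n in count(1):
        yield [list(islice(make(), n)) for make in makers[:n]]

makers = [lambda: infinite_revision("a"),
          lambda: infinite_revision("b")]
for W_n in islice(finite_revisions(makers), 3):
    print(W_n)   # each W_n is finite, yet the W_n exhaust both streams
```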

Lemma 5: Any finitely definable set has a defining sequence and thus an extension.

Proof of Lemma 5: Suppose the finitely definable set S has no definition. S either has no elements, has some element, or has both no elements and some elements.

Case 1: If S has no elements, it has a definition containing the following:

a) a revision sequence that includes only one specification, which is an empty mapping of the elements of S with the non-negative integers, and

b) that empty mapping as the only revision in the sequence.

Case 2: If S has a finitely definable set of elements, it has a definition containing the following:

a) a revision sequence that includes only one specification, which is a one-to-one mapping of the non-negative integers to the finite unique identifiers of all its elements, and

b) that mapping as the only revision in the sequence.

Case 3: If S has both no elements and some elements, then it belongs to the class of all things with no elements and the class of all things with some elements, hence is distinguishable from itself, and is therefore not a set.

Since all cases lead to contradiction when assuming the contrary, any finitely definable set S has a definition.

Lemma 6: Any finitely undefinable set has a defining sequence and thus an extension.

Proof of Lemma 6: Let U be any finitely undefinable set. Let V be a revision sequence of U, and let |V| be the set of revisions in it. Then |V| either consists of infinitely many finite revisions or contains an infinite revision (containing infinitely many atoms).

Case 1: If |V| consists of infinitely many finite revisions, let Z be the result of the sequential application of all members of V according to their ordering in V.

Let Un be the nth revision of U in V. Let Zn be the one-to-one mapping, to the members of Un, of the positive integer indices that are exactly the set of products of the nth least prime and any number of instances of itself and of any lesser primes.

Then as n increases, the domain of Zn approaches the set of all non-negative integers and its range approaches U. Thus Zn approaches a one-to-one mapping of the non-negative integers with U, and that mapping, Z, is a defining sequencing of U.

Case 2: If |V| contains finitely many revisions, including one or more infinite revisions, then by Lemma 4 U has another informationally equivalent revision sequence W that fits Case 1.
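
The prime-based indexing in Case 1 can be visualized directly. The following Python sketch is illustrative only; it enumerates, for each n, the indices reserved for the nth revision, namely the positive integers whose largest prime factor is the nth least prime. These index sets are infinite and pairwise disjoint, so every revision gets its own inexhaustible supply of indices.

```python
# A sketch of the index sets in Lemma 6, Case 1: the nth revision's
# elements are indexed by the products of the nth least prime with any
# number of instances of itself and of any lesser primes, i.e. the
# integers whose largest prime factor is the nth least prime.

def largest_prime_factor(k):
    """Largest prime factor of k >= 2, by trial division."""
    factor, p = None, 2
    while p * p <= k:
        while k % p == 0:
            factor, k = p, k // p
        p += 1
    return k if k > 1 else factor

PRIMES = [2, 3, 5, 7, 11]   # the first few primes, in increasing order

def index_set(n, limit):
    """Indices below limit reserved for the nth revision."""
    p = PRIMES[n - 1]
    return [k for k in range(2, limit) if largest_prime_factor(k) == p]

print(index_set(1, 40))   # [2, 4, 8, 16, 32]
print(index_set(2, 40))   # [3, 6, 9, 12, 18, 24, 27, 36]
print(index_set(3, 40))   # [5, 10, 15, 20, 25, 30]
```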

Theorems

Theorem 1: There exists a one-to-one mapping of the real numbers with the non-negative integers.

Proof of Theorem 1: The set of real numbers maps one-to-one with the set of all sequencings of the non-negative integers, which is finitely undefinable by Lemma 3. By Lemma 6 it has a defining sequence, which by definition is a one-to-one mapping of its members with the non-negative integers. By transitivity of one-to-one mappings, the set of real numbers maps one-to-one with the set of non-negative integers.

Theorem 2: All sets are countable.

Proof of Theorem 2: Since there exists a one-to-one mapping of the real numbers with the non-negative integers, by definition the real numbers are countable. Let Q be any set. Since Q is a set, by definition it is indistinguishable from itself, thus it is either finitely definable or finitely undefinable, and not both. If it is finitely definable, it has a one-to-one mapping with the non-negative integers by Lemma 5. If it is finitely undefinable, it has a one-to-one mapping with the non-negative integers by Lemma 6. In both cases, the mapping proves by definition that Q is countable.

Cantor’s Fallacy in His First Uncountability Theorem

Cantor’s First Uncountability Theorem held that for any sequencing W of the real interval [0,1] and any real interval [a,b] within [0,1], one can form the sequencing X of a subset of W that takes the first element a’ of W in [a,b], then the next element b’ of W that lies in [a’, b], then the next element a” of W that lies in [a’,b’], then the next element b” of W that lies in [a”, b’], and so on and so forth with increasing index of iteration n in X. The upper and lower bounds of the resulting shrinking interval must either approach the same real number as n increases, or approach different real numbers with a real interval between them. In either case, there must be a real number in the interval [a,b] that is not an element of W.
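
To fix ideas, here is a purely illustrative Python sketch of this nested-interval construction, run against a short, made-up prefix of a sequencing W of rationals in [0,1].

```python
# A sketch of Cantor's nested-interval construction: scan the
# sequencing W in order, alternately tightening the lower and upper
# bounds, so that every scanned element lands outside the shrinking
# gap. (Illustrative only; W here is a made-up finite prefix.)

from fractions import Fraction as F

def nested_intervals(W, a, b):
    """Yield the successively shrunken intervals [a', b], [a', b'], ..."""
    lo, hi, tighten_lower = a, b, True
    for w in W:
        if lo < w < hi:
            if tighten_lower:
                lo = w
            else:
                hi = w
            tighten_lower = not tighten_lower
            yield (lo, hi)

W = [F(1, 2), F(3, 4), F(5, 8), F(11, 16), F(21, 32), F(43, 64)]
for interval in nested_intervals(W, F(0), F(1)):
    print(interval)   # the gap shrinks but never closes
```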

Cantor’s theorem is just a simple application of our Lemma 3 if by W he means a finitely definable sequencing of real numbers in [0,1]. Certainly no finitely definable sequencing will exhaust the real numbers in [0,1], because they are finitely undefinable. But if Cantor here means any sequencing, finitely definable or not, will fail to exhaust the real numbers in [0,1], then he is wrong. And it is clear from the conclusions he later draws from this theorem that he does indeed fallaciously believe he has proven here that no sequencing of the real numbers in [0,1] at all can define the real numbers in [0,1].

Suppose, then, W sequences a finitely undefinable set. Then by Lemma 6 we can have W be the limit of an infinite sequence of revisions of the set of real numbers in [0,1], each revision a one-to-one mapping of an infinite subset of the set of real numbers in [0,1] with an infinite subset of the non-negative integers, and each revision specifying the set’s extension in a finite string of symbols that includes the entire specifying string from the immediately preceding revision.

Of course, since there is no telling how the set’s specified extension changes from one revision to the next, the revision sequence does not “converge,” but it does not have to. We know that the specification string diverges, and that is all we require to maintain that the sequence is not finitely definable. We know that each revision includes the information from all prior revisions, though it may use it in a very different way, potentially changing its mind about how to use it to specify an extension, over and over again. A finitely undefinable set’s ultimate specification of its extension, its ultimate sequencing of its membership, is the result of a linearly ordered but endless process of deliberative revision.

The fact that this deliberative process takes infinitely many steps instead of finitely many does not mean the resulting infinite set has “more” members than the infinite set whose membership is specified in finitely many steps. The “moreness” is not in the size of the set’s resulting extension, but in the number of successive independent choices made to arrive at the final extension, an infinite number versus a finite number, and in the total size of the specifying language string, an infinite string versus a finite string.

Now, if we define another set with a specification string that uses a finite string as a name to refer to the extension of a set already defined by an infinite string, and then we revise infinitely many times to build up an infinite string as well, then in some sense the new set is based on a “higher infinity,” but only because it is infinite at two levels of reference, the result of two recursive loops of infinite-revision limit-taking. This also, however, has no bearing on the size of the resulting set extension thus specified. The “moreness” in this case, rather, has to do with the number of levels of embedded reference to infinitely defined sets the specification string contains.

So the problem with Cantor’s proof is that his assumed sequencing of W is not general, but implicitly a finitely defined sequencing. We can show this by describing a finitely undefinable instance of W which is not susceptible to his diagonalizing argument.

For any n, let Wn be the nth revision of W defined by the finite string Dn, and let the range of Wn be distinct from that of any prior revision in the sequence. Let Zn be the mapping defined by Dn of the members of Wn one-to-one with the set containing zero, the nth least prime number and its products with any number of instances of itself and any lesser prime numbers. As n increases, let Wn approach the set of all real numbers in [0,1], while Zn approaches a one-to-one mapping of the real numbers in [0,1] with the non-negative integers.

What justifies our “letting” Wn approach the set of all real numbers in [0,1] as n increases? We know that for any n, the finite string Dn that defines Wn cannot map to every real number in the interval [0,1], because, being a finite string, it is susceptible to diagonalization. What justifies us assuming that an infinite sequence of such diagonalizable mappings will approach in the limit a non-diagonalizable mapping?

Our justification lies precisely in the fact that finitely undefinable mappings are not diagonalizable, and that hence an infinite sequence of optimally short finite definition strings, each necessarily incorporated into the next so that the next is necessarily longer, approaches a necessarily infinitely long string in the limit. In the case of our Dn sequence, the infinitely long definition in the limit, D, cannot be reduced to a finitely long definition, because the range that each Dn defines for Wn is distinct from that which every other Dn defines; thus D contains infinitely many substrings that define infinitely many distinct sets. And since the manner in which each successive Dn distinguishes the set it defines as the range of Wn is entirely arbitrary, a diagonal argument can prove that there exists a sequence Wn whose range sets defy generation by any finite string of any language.

Thus we may simply let Wn be such a diagonal sequence, and the Dn approach an infinitely long limiting definition string D that defines a limiting mapping W of the Wn whose range is a subset of the real interval [0,1] that is distinct from any of the finitely defined range sets of the Wn. Since the range of W cannot be finitely defined, it cannot be the range of any mapping of any sequence Wn, for any n, for any sequence of definition strings Dn. But those ranges are all the diagonalizably defined subsets of the real interval [0,1]. Thus the range of W must be a non-diagonalizable subset of real interval [0,1]. Without loss of generality, we can simply let W be the total subset of the real interval [0,1], namely, the entire interval itself.

Applying Cantor’s proof to W thus constructed, consider a real interval [a,b], and the sequencing X of a subset of W that takes the first element a’ in W in that interval. Now, how do we determine what a’ is? We have to determine what non-negative integer a’ gets mapped with by Z. But we cannot determine this in a finite number of steps. The ultimate sequential non-negative integer indexing of the real numbers in [0,1] by Z is only accomplished through an infinite succession of revisions, and at each revision the real numbers that had been assigned certain non-negative indices in the previous revision will likely be reassigned to new indices. So when Cantor blithely refers to “the next real number in the list W that is greater than a and less than b,” he is actually writing shorthand for an infinite length specification, and is thus introducing an infinite length string into his proof, making his proof itself infinitely long.

But his proof relies on a construction. To work, it must succeed in constructing the sequencing X that marches the left and right legs of the original interval [a,b] it begins with, left-right-left-right deeper and deeper inward but never such that left and right meet. Each step in that inward march is meant to peg an ever-higher non-negative integer index, in our sequencing W of the real numbers in [0,1], to a left or right footprint, a lower bound or upper bound of a shrinking interval on the real line, thus assuring that the gap between those bounds, the forever shrinking but indelible gap between the left and right feet, will forever be out of reach of any index in our sequencing W. The march is meant to stomp down the entire infinite stretch of W outside that unreachable gap.

The problem is, for X to stomp down W in that way, it must determine the next element in W to stomp on at each step. To determine that, it must take each element of W in order and compare it to the last two elements in X to see if it falls between them on the number line. To take the next element of W it must first complete the construction of W; otherwise the mapping of the reals in [0,1] with the non-negative integers is incomplete and provisional only, and can provide no basis for stomping down W outside the gap with finality. But we have shown that W, to be a sequencing of the real numbers in [0,1], must be the limit of an infinite sequence of revised mappings of subsets of that real interval with a progressively inclusive infinite subset of the non-negative integers. To complete such a construction requires an infinite number of steps.

But we cannot afford to take an infinite number of steps before taking our first stomp. If we do that, we make our construction of X itself infinite in length, and in particular we make the entire infinite construction of W a required first step in the construction of X, with the rest of the construction only possible afterwards. This makes |X| a finitely undefinable set just like |W|, so either W and X can both be mapped one-to-one with the non-negative integers, or both cannot be. If both cannot be mapped one-to-one with the non-negative integers, then X cannot be constructed. If both can be mapped one-to-one with the non-negative integers, then X can be constructed and the proof may proceed as planned, and it leads to contradiction as planned, proving that |W| cannot be mapped one-to-one with the non-negative integers. But if |W| cannot be, |X| cannot be either since X is constructed by reference to W, and W does not exist. If that is the case, the proof breaks down again since there is no way to construct the sequence X required for the proof.

So to avoid this collapsing of Cantor’s proof into self-negating paradox, we have to find a way to construct |X| without first having to complete the construction of |W|. The only way we can do this is to try to construct |X| and |W| simultaneously, hopefully keeping our construction of |W| one step ahead of our construction of |X| so that as we pin down the values of W we are able to stomp them down outside our gap by assigning them as needed as values of X. Can we accomplish this feat?

No, we cannot. For the construction of W requires there to be no rule-driven restriction on the changes to the extension of |W| from one revision of W to the next. So the values of W at every index are utterly unsettled through every step of its construction until the very end, at infinity. Only upon the completion of an infinite number of revision steps does each index of the sequence W settle finally on a one-to-one assigned real value in [0,1]. Thus Cantor’s proof cannot avoid completing the infinite construction of W before embarking on its construction of X, hence it cannot avoid collapsing into self-negating paradox, rendering it meaningless and inconclusive.

Cantor’s Fallacy in His Second Uncountability Theorem

Cantor’s fallacy was his assumption that no method of construction exists to map the real interval [0,1] one-to-one with the non-negative integers. We have provided such a construction using the very same principle of the supremum construction of infinite sets that was essential to Cantor’s set theory, and we have shown that the inaccessibility of the resulting bijection’s specific values to finite-length definitions of sets and sequences turns out to entail the non-existence of cardinals above countable infinity.

Cantor’s second uncountability proof meets a similar fate. In that one he claimed that any function f mapping a set S one-to-one and onto its power set P(S) can be used to construct a paradoxical set X, namely, the set of all elements s of S for which s is not an element of f(s). Since f maps some element of S to each and every subset of S, there must be some element x that it maps to X. But if x is an element of X, then by the definition of X it is not an element of X. And if x is not an element of X, by definition of X it is indeed an element of X. Since either way x both is and is not an element of X, the premises lead to contradiction and thus their conjunction must be false.
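
Cantor’s diagonal set can be checked exhaustively on a small finite example. The following Python sketch is illustrative only; it verifies that for a three-element S, no function from S into P(S), bijective or not, ever has X in its range.

```python
# A sketch of the diagonal set in Cantor's second proof, checked
# exhaustively on a tiny S: for every f from S into its power set,
# X = {s in S : s not in f(s)} never appears in f's range, since
# f(x) = X would make x both in and not in X.

from itertools import combinations, product

S = (0, 1, 2)
power_set = [frozenset(c) for r in range(len(S) + 1)
             for c in combinations(S, r)]

for images in product(power_set, repeat=len(S)):
    f = dict(zip(S, images))
    X = frozenset(s for s in S if s not in f[s])
    assert X not in f.values()   # no f ever hits its own diagonal set

print("checked", len(power_set) ** len(S), "functions; X escaped every one")
```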

There are actually two ways our foundation for mathematics, which we are calling True Infinitism, dispenses with Cantor’s proof as fallacious.

First, X is simply not a set, because it is distinguishable from itself.

Second, X is constructible only with an infinite definition, because its definition incorporates that of the finitely undefinable function f. As a formality, we first point out that we know the domain S and range P(S) of f are both infinite, because if not, the cardinality of P(S) would exceed that of S and f would not be one-to-one and onto. Now, if P(S) were finitely definable, then since it and S are infinite, f could not be the limit of any infinite sequence of revisions fn mapping subsets Sn one-to-one and onto their power sets P(Sn). But that limit is precisely what f must be constructed as according to True Infinitism. Thus by contradiction f, and with it X, must be constructible only with an infinite definition. Both f and X are thus finitely undefinable.

Since f is finitely undefinable, it maps S one-to-one and onto P(S), but yields no method of determining which element of S maps to which element of P(S). So if we try to construct X by using f, we find we cannot. The specific values of S and P(S) mapped by f can only be established by comprehending the infinitely long definition of f, which is impossible to do.

But this does not mean X cannot be constructed. X must simply be constructed the same way as f or any finitely undefinable set: as the limit of an infinite sequence of revisions, each of which is a finitely defined set incorporating the definition of the prior revision. But each revision Xn is defined on the basis of the corresponding fn in the construction of f, which is not onto P(S), and among the subsets of S that fn does not map any member of S to is Xn, for otherwise the paradox would be invoked for Xn and fn. We cannot have Xn be self-contradictorily defined, for that would stall and break our construction of X. So we have to conclude that for any n, Xn is not in the range of fn.

It may seem at first fine to allow Xn to be excluded from the range of fn, and yet include X in the range of f, since there are obviously infinitely many subsets of S that are not in the range of any fn, yet they are all in the range of f. It seems even more plausible since the Xn for any n could well be in the range of fm for any m not equal to n. But the Xn are a special subset of the set of subsets of S that are not in the range of fn, because the Xn are necessarily excluded from the range of their corresponding fn and thus are excluded as an inherent aspect of the definition of the fn. Each fn does not just happen to exclude the Xn; it excludes Xn because if it does not, it collapses into paradox. Thus by definition, not just by happenstance, fn does not have Xn in its range.

Now, consider all the subsets of S. Given any fn, every subset of S must be either in the range of fn, or not. And presumably we can determine the exact membership of Xn by running through each member s of S and checking whether fn(s) exists, and if so, whether s is an element of it. Suppose we do so, and have a precise infinite enumeration of Xn. There is no reason, then, why we should not be able to revise fn to obtain fn+1 by taking any element s1 of S that fn maps to some subset fn(s1) of S, and map it instead to Xn.

Now, suppose that s1 was not an element of fn(s1). Then it was an element of Xn. Thus fn+1 would map s1 to Xn, of which it is an element, hence s1 would not be an element of Xn+1.

But let us revise fn+1 to obtain fn+2 by mapping s1 to Xn+1 instead of to Xn. Then s1 is not an element of fn+2(s1), thus it is an element of Xn+2. We can begin this pattern with n=0, and continue this alternation ad infinitum, so that s1 is in Xn for even n, and not in Xn for odd n.

Now, suppose we also make f0 differ from f2 in a specific way. Consider some s2 in S, not equal to s1. Let f0(s2) be some subset of S that does contain s2. Then s2 is not an element of X0. Now let f2(s2) = X0. Then since s2 is not an element of f2(s2), s2 is an element in X2. Now, suppose we repeat this pattern of revision in relation to s2 for all even n, so that s2 is not in Xn for values of n that are even multiples of 2, and s2 is in Xn for values of n that are odd multiples of 2.

Now, suppose we also make f0 differ from f3 in a specific way. Consider some s3 in S, not equal to s1 or s2. Let f0(s3) be some subset of S that does not contain s3. Then s3 is an element of X0. Now let f3(s3) = X0. Then since s3 is not an element of f3(s3), s3 is not in X3. Now, suppose we repeat this pattern of revision in relation to s3 for all n divisible by 3, starting with n=0. Then s3 is in Xn for values of n that are even multiples of 3, and s3 is not in Xn for values of n that are odd multiples of 3.

Suppose we repeat this pattern ad infinitum for s4, s5, … sm, sm+1, … and suppose this sequence of sm includes all elements of S.

Then for every element s of S, there exists some subsequence of the sequence of revisions Xn for which that element s keeps flipping in and out of Xn, ad infinitum. Far from being a sequence of arbitrary revisions, then, the sequence of Xn that in the limit defines X actually has a very definite pattern in relation to every member of S. There is some infinite subsequence of the Xn that oscillates between including and excluding each and every member of S, ad infinitum.
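
The oscillation pattern just described can be tabulated. The following Python sketch is illustrative only; its phase table simply encodes the worked cases above (s1 in Xn for even n, s2 in Xn for odd multiples of 2, s3 in Xn for even multiples of 3).

```python
# A sketch of the oscillation pattern: along the subsequence of
# revisions indexed by multiples of m, the element s_m keeps flipping
# in and out of X_n forever. The phases below follow the worked cases
# in the text; they are an assumption of the sketch, fixed by the
# arbitrary choices made for f_0.

PHASE = {1: 0, 2: 1, 3: 0}   # parity of n // m at which s_m is in X_n

def in_X(m, n):
    """Membership of s_m in X_n, for n a multiple of m (illustrative)."""
    assert n % m == 0
    return (n // m) % 2 == PHASE[m]

for m in (1, 2, 3):
    trace = ["in" if in_X(m, n) else "out" for n in range(0, 13, m)]
    print(f"s_{m} along n = 0, {m}, {2 * m}, ...:", trace)
```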

Thus it is clear that as n approaches infinity, the Xn converge in the limit to an ambiguous set that both contains and does not contain each and every element in S, and is therefore distinguishable from itself and thus not a set but a class without extension. By contrast, the conditions we have placed on the fn to construct X in this manner do not cause the fn to converge to any particular kind of function, and hence the fn are still free to converge to a finitely undefinable set that maps the elements of S one-to-one and onto the power set of S.

Now, it is certainly worthy of note that f can easily be constructed so as to render the concomitant construction of X as the convergence of the oscillating Xn sequence, which is so in accord with the paradoxical nature of X itself, and we continue to maintain that X is therefore not a set at all but an extensionless class. But what we have proven beyond that here is that X can be defined in a finite string of symbols. We did that very thing when we defined it as the limit of a sequence of subsets of S that oscillate on the membership of every member of S in perpetuity and at different frequencies. This definition fully characterizes the extensionality of X. Thus X is finitely definable, and cannot therefore be either a set or a class that is genuinely defined upon the basis of the finitely undefinable function f that maps S one-to-one and onto the power set of S. If it were thus defined, it would not be susceptible to finite definition.

The downfall of X, which is a fully generalizable version of the paradoxical Russell set, is precisely the susceptibility of its defining sequence of revisions to rule-driven lockdown of its divergent behavior on each input, a direct result of its lack of any independent grounding other than the arbitrary behavior of a finitely undefinable function’s freewheeling perpetual revisioning of the relationship between the members of its domain and the members of its range. Any set defined solely in terms of the relationship between the domain and the range of a finitely undefinable function can ultimately be defined finitely in its entirety, because the relation between its own revisions is finitely and rigidly defined in terms of the relation between that domain and range.

What we have done is to specify precisely the characteristic of a Russell set that causes it to be paradoxical, and to show how the paradox can be avoided by the exclusion of the Russell set, really an extensionless class, without excluding any non-paradoxical sets, and in a perfectly understandable, intuitive, consistent and coherent manner. We have provided, in short, a new foundation for mathematics that suffers from no antinomies, and is both complete and consistent.

The same is true of the complement of the Russell set; call it Y. We define it as the set of all elements s of S that are members of the set f(s) to which f maps them.

We can construct Y like we constructed X, by revising fn to obtain fn+1, for each n, such that fn+1(s1) = fn(s) for some s1 and s in S. The result is that Yn can be constructed either to oscillate or to stay steady on each candidate element as n increases, and which of the two it does is entirely an arbitrary result of initial conditions. Thus Y is also finitely definable, and therefore although it exists for each revision fn, it does not exist in relation to the finitely undefinable limit function f.

Conclusion

All sets are countable. Cantor’s two uncountability theorems are premised on a single fallacy, mistaking infinite intension for a purportedly higher kind of extensional infinity. By DeCantorizing infinity, we are able to set forth a new foundation for mathematics we call “True Infinitism” that does not suffer from the antinomies of naïve set theory, nor from the semantic limitations of less naïve set theories that attempt to evade those antinomies by axiomatizing them out of the syntax.

In axiomatizing away the paradoxes created by Cantor’s fallacy, we merely end up chasing the underlying fallacy into the woodwork of the semantics, where it continues to wreak havoc in the form of vexing problems of uncertainty and inherent trade-offs over ontology, epistemology, modality, reference, constructibility, decidability, consistency, completeness, non-standard interpretive models, convergence in its many meanings and fidelity to mathematical intuition.

By shifting our foundations instead onto the bedrock of True Infinitism, we resolve all these lingering difficulties.
