Tuesday, July 24, 2018

Can You Prove a Negative?

People who learn a little about Scientific Method or Logic often proudly declare that You Can't Prove A Negative.  The problem is that they stopped learning science just a wee bit too early.  What they think they know about Scientific Method is really a misapplication of a quickie rule of thumb for how to construct a well-stated hypothesis for the kind of experiment you have in mind.  Prove a Negative is shorthand for confirm a negative hypothesis.

A negative hypothesis is just a testable statement framed as a negative, along the lines of humans can't fly.  How can we ever prove that?  We could certainly disprove it by producing even one verified instance of a human that flies.  But to positively prove that humans can't fly (beyond noting specific instances of specific humans failing to fly on specific days) defies logic.  The best you can do is to show evidence that supports it (humans hitting the ground in undeniably un-flight-like fashion), and then to conditionally accept the hypothesis, or regard it as indistinguishable from true.  For the time being.  Until we someday get better at distinguishing.

The perfect hypothesis in the perfect experiment would either be fully confirmed or fully disconfirmed by the experiment.  But this rarely happens.  Usually the best we can do is to either disconfirm a falsifiable hypothesis, or to confirm a confirmatory hypothesis.

To falsify a falsifiable hypothesis means that the hypothesis is definitely disproved.  It is conclusively rejected, never to rise again.  If we fail to disprove a falsifiable hypothesis, then it might be true, but it might still be false as well.  It just means we were unable to reject it this time on the basis of this evidence from this experiment.  In the light of lots of other experiments all of which likewise failed to falsify the hypothesis, we might be forced to accept the hypothesis as indistinguishable from true around the time we run out of ways of testing the hypothesis.

The Natural Sciences tend to use falsifiable hypotheses because it is the most reliable way of finding out objective facts.  Only something that is virtually indistinguishable from true can withstand a determined, persistent onslaught of experiments seeking to disconfirm it.  It is just too easy to find evidence that confirms (erroneously) what you want to be true, because this is a basic operational bias of the human brain.

Falsifying is also a very efficient way of clearing out the myriad possible explanations that are wrong in order to zero in on the one that is right.  And it is generally easier to falsify or prove wrong a positive statement about what is, rather than a negative statement about what isn't.  All humans can fly is readily disproved; whereas no humans can fly is a lot harder to disprove, and these are not the only two possibilities.

You can see how easy it is to frame a hypothesis as either positive or negative.  Just be careful about false binaries.  If one explanation fails, it isn't automatically the competing one.  If all humans can fly fails, it does not automatically follow that no humans can fly, or that all humans can't fly.  It just means you weren't able to do it that time.  Perhaps humans can sometimes fly, with the right sort of suit and a good running start.  NO capes, though.  No capes!

Applied Science (e.g. engineering) tends to use confirmatory or provable hypotheses because it is the more efficient way to invent things.  "Cameras are possible, because look at this working prototype I just made."  Hypothesis confirmed in one experiment.  Or you could frame the hypothesis as cameras are impossible and disconfirm that with the same single experiment.  But most of the time x is impossible is not the preferred way to frame a hypothesis in Applied Science.  The first ninety-nine camera prototypes that didn't work could be taken (erroneously) as evidence supporting this poorly framed hypothesis.  The injunction you can't prove a negative is more a rule against framing hypotheses as negatives - a rule that is also frequently broken when appropriate to do so.

I still haven't answered the question - can you ever prove a negative hypothesis of the type XYZ does not exist?  Is it ever possible to confirm this kind of hypothesis?  Many top scientists will say, on principle, "No."  But what if I gave you an example of Science positively proving the non-existence of something?  And to make it harder, what if the thing already had scientific evidence for it and the beginnings of a consensus on its existence?

Our story starts in France in 1903.  Respected and accomplished physicist Prosper-René Blondlot (1849 - 1930) needed an explanation for some weird shit his experiments were doing, and he also really needed to keep up with the other scientists of the age who were discovering new rays left and right.  X-rays.  Alpha Rays.  Gamma Rays.  Beta Rays.  Is there no end to the vast variety of rays to be discovered in Nature?  Well, actually, that was . . . um . . . that's pretty much all of them.

In any case, Blondlot wishfully assumed an experimental glitch he was seeing must be caused by one of these new Rays everyone was constantly discovering, which he named N-Rays.  He set up a new cockamamie experiment that was guaranteed to not detect any other previously discovered kind of ray, and immediately found what he was looking for. Once he reported this amazing discovery, other scientists started reporting confirmations of his discovery in their own labs.  Soon, N-Rays were being reported emanating from a wide range of materials with the oddly specific exceptions of green wood and rock salt.

N-Ray Mania swept France, and in just the first half of 1904 over 50 scientific papers were published on the subject (compared to just 3 about X-Rays).  Eventually there would be over 300 papers on N-Rays.  Over 120 other scientists confirmed the existence and properties of N-Rays.  A serious dispute arose over who was the first to discover N-Rays emanating from the human body.  The French Academy of Sciences awarded Blondlot a prize of Fr 50,000 (almost $600,000 in today's money).  He had international fame, was a national hero in France, and he had his eye on a Nobel Prize.

Only problem was, N-Rays don't exist.  They never did.

And Science proved it with one simple experiment.  Well, a meta-experiment actually.  A wickedly clever and devious experiment conducted on the way N-Ray Science was being done.

So how do you confirm the hypothesis N-Rays Don't Exist?  Well, if N-Rays are a mere phantasm in the minds of scientists (and there was some reason to suspect this alternative explanation) then crucially disrupting the experiment without the knowledge of the scientist should have no effect on the "positive" results of the experiment.

The fact that many eminent German and British scientists were entirely unable to replicate Blondlot's discoveries, and that the detection relied on the unaided natural senses of the researchers, made hallucination one possible explanation for the phenomenon of N-Rays.  This hypothesis is testable in an experiment conducted on N-Ray Science rather than on N-Rays themselves.  If N-Ray Science is not affected by disabling the apparatus in a way unknown to and undetectable by the researcher, then this should only be possible if N-Ray Science is basically hallucinations (or outright fraud) on the part of the researchers.  An objective phenomenon is supremely sensitive to technical faults in the apparatus; a subjective experience requires only a belief in the apparatus.

Unfair, you say?  Unethical?  Well they started it.  Those researchers inserted themselves into the experiment, so I say it is fair game to conduct experiments upon them without their knowledge.  Today most science utilizes electronic or otherwise mechanized detection, measurement, and data collection instrumentation in order to prevent the biases of the faulty human brain from contaminating the data.  In cases where this is not possible, the gold standard is a double-blind study, in which neither the researchers nor the subjects (where relevant) know what they are doing or often what the experiment is even about.  Only the designer of the study knows, and she is explicitly forbidden from participating in the data collection or analysis, and definitely forbidden from winking and gesturing in the researchers' direction.

In this case, American physicist Robert W. Wood traveled to Blondlot's laboratory in Nancy, France in September 1904.  His perceived impartiality as neither British nor German was essential to this project.  Whilst Blondlot happily demonstrated the "detection" of N-Rays, Wood removed a crucial part of the experimental apparatus without Blondlot's knowledge, which according to N-Ray theory should have rendered detection of N-Rays utterly impossible.  But Blondlot continued counting and recording N-Rays on his phosphor screen, oblivious to the sabotage.  Wood meanwhile was unable to observe any N-Rays, either before or after disabling the apparatus.  "Not those big, obvious flashes of light," Blondlot explained, "Look for the much, much fainter ones."  Yeah right - the ones that look just like what you see when you close your eyes.

Since N-Rays were detectable only by certain people and detectable whether the apparatus was operational or not, since N-Rays were never able to be recorded photographically in spite of numerous determined efforts, and since N-Rays had neither theoretical explanation nor theoretical reason to exist, there was only one conclusion to make, and that conclusion did not have to be conditional, tentative, or subject to qualification.  Science had confirmed the hypothesis that N-Rays Do Not Exist.

So it is possible to prove a negative if you can do the following:

  1. Demonstrate the absence of any solid evidence for the thing.  Solid evidence means evidence which bears no other explanation and is repeatable.  Absence of evidence is very much evidence of absence, in cases where the hypothesis demands such evidence to be found precisely where it is not.
  2. Propose and test alternative explanations for all the evidence that does exist, such as it is.  In our story this test consisted of showing that N-Ray detection was unaffected by disabling the apparatus, a state of affairs not concordant with objective phenomena, but perfectly concordant with subjective ones.
  3. Demonstrate that the non-existence of the thing is entirely concordant with all other available data.

Things that definitely do not exist can be proven not to exist if these three hurdles can be cleared.  It is not necessary to be agnostic about every potentiality and every absurd claim on the basis that "you can't prove a negative."  Because quite often, you can.







Sources:
Klotz, I.M.  "The N-Ray Affair," Scientific American, May 1980
Wood, R.W.  "The N-Rays," Nature, Sept 1904, 70 (1822): 530–531

Saturday, July 21, 2018

Examining Objectivity

Describing statements or reasoning as Subjective means that their validity depends on (is subject to) who is making them.  While it may well be objectively true that mushrooms are good food based on analysis of their nutrients, composition, and demonstrated absence of toxicity, it is not a true objective statement to assert that I like mushrooms and I think they are good, taste good, and are nice to eat.  Some people can truthfully make that statement, but I cannot.  So the truth of that statement is subject to the condition of who is making the statement.

An objective statement is one that anyone can make without changing its veracity status.  Mushrooms often have a rubbery texture. This is demonstrably true whether I say it or someone else does. It is a statement in objective concordance with reality rather than a personal value judgement.  If I wanted to defend or explain my disinclination to eat mushrooms, I might appeal to objective facts such as this, followed by an ultimately subjective statement such as, "and I don't like rubbery things to be in my mouth. Bleah."

Whatever flavor mushrooms may or may not objectively have, to my subjective tastes this alleged flavor does not offset the most unpleasant feeling of rubbery fungus between my teeth.  But I am getting a little bit off track I see.

Objectivity roughly means the ability to reach the same conclusion as anyone else.  More precisely it means the ability to reason or reach a specific conclusion that is determined by the outside reality of things rather than predetermined by who you are or where you were born.

I assert that it is not possible for a religious person to reason objectively about religion.  That was certainly true for myself once upon a time in a galaxy far, far away.  But why would it be universally true?  It's because "My religion is true" and "I like mushrooms" are both subjective statements that only certain people can make, and not objective conclusions that should or even could apply to everybody.

By definition a religious person believes in or accepts a religious conclusion a priori.  When one asks a religious person to reason about a substantive question of religion one finds that there is already a conclusion in place that the person must reach.  He is not free to even consider certain possibilities, such as that this religion is false or that a specifically believed religious premise or claim is wrong.

"But surely," you say, "It is not impossible for an intellectually honest religious person to suspend belief and consider a question objectively."  Perhaps, but how would we know if or when that ever happens?  More to the point, how would he know?  The brain is very adept at concealing intent from itself and forcing reasoning processes into a predetermined outcome, leaving the Reasoner to invent whatever reasons it needs.  This is called Motivated Reasoning and it in no way needs to be at a conscious level of awareness or intent.  A religious person can be completely unaware of being controlled by motivated reasoning.  He might not be at all aware of whatever feats of cognitive gymnastics he had to perform in order to get to the necessary predetermined conclusion, and can even be cognitively blind to egregious fallacies and errors he may be committing.

The simple fact is that when someone needs one conclusion to be reached over any other, or is invested personally, emotionally, socially (and yes - often financially) in one outcome over another, that person will not reason objectively, full stop.  Even if by some chance that person reaches a valid conclusion, the conclusion and more importantly the process cannot be relied upon.  It is unreliable, and therefore wrong even if accidentally valid on occasion.

It is useless therefore to ask a religious person to reason through questions of, for instance, whether their religion's origin story is factual (historically accurate), or whether testable claims made by the religion are supported by the evidence.  Whether gods exist and sometimes modify or directly influence events in the physical universe.  As James Randi was fond of saying, "Those who believe without reason cannot be convinced by reason."  They are playing a fundamentally different game. It may look and sound a bit like reasoning, but it isn't reasoning. It's justifying a belief, also known as Apologetics.

To which the religious person may reply with turnaround: "Well your atheism is just another belief!  You are motivated to believe my God doesn't exist because you need that to be true."

Bollocks.  Boll.  Locks.

Non-belief is just another belief in the same way that OFF is a TV network and not collecting stamps is a hobby.  Never playing tennis is a sport.  Being dead is a lifestyle.  Baldness is a hairstyle.  None is a breed of dog one owns as a pet.  And accusing me of having "just another belief" is an admission that mere belief is not an intellectually honest or sound position to hold in the first place.

I neither need a religious premise to be true nor do I need it to be false.  But back in the day when I did need a religion to be true and when I was invested in it, it seemed totally true.  If it were objectively true, it should still seem at least a little bit true now, even though I no longer need anything it has. However once that link breaks and Motivated Reasoning is no longer operating in one direction or the other, objective conclusions are possible to reach.  Reliable conclusions can be reached - not just one-off conclusions, but replicated and documented ones that anyone can get to.

Do I need there to be no gods because if there were, they would want to punish me?  On the contrary.  If I thought that, I would be motivated to believe in gods, not to disbelieve.  "But you really need there to be no Hell, because you love your sins and you are for sure going there!"  No, I certainly am doing no such thing.  If I really thought there was such a thing as hell, once again, I would be strongly motivated to defy logic and evidence and believe differently.

I am not going to try to convince a religious person that I am a good and ethical person worthy of the best of all possible afterlife scenarios.  It is an unacceptable powerplay and oppressive control tactic to make people assert their worthiness to the satisfaction of some goddamned self-righteous religious authority, and it ain't gonna happen.  But if I needed to, I could prove beyond doubt that I am above ethical reproach by all the major gods or goddesses in most popular storybooks.  Except for some of the wackier and capricious rules like weird hairstyles, removed body parts, the wearing of silly hats, or arbitrary cultural constructs like calendar-based numerological and astrological observances.  And if the only crime that can be laid at my feet in some sort of post-mortality reality TV show eviction episode is that I did not believe in some bullshit that went against all logic and evidence, then the producers can kiss my big, bare, pasty white ass.

I would be happy to accept the reality of gods or goddesses provided A) evidence concordant with no other ordinary explanation were produced, and B) satisfactory explanations for all the evidence against gods were also produced.

In short, either produce a specimen for examination and/or interrogation, or fuck right off with this 'gods' bullshit and let us hear no more about it.








Saturday, July 14, 2018

Santa Claus Agnosticism

You know who really shits me?  Smart-arses who smugly proclaim,

"Well actually, you can't technically prove Santa Claus doesn't exist, so being agnostic about Santa Claus is the only tenable position to take."

This is a disingenuous and intellectually lazy position.  Such fence-sitters just can't be bothered to understand the evidence behind the fact that there is no Santa Claus; that it is and always was a fiction.  I submit that the reason for taking such a patently absurd position is out of fear of offending people who believe in Santa Claus.

But it gets worse.  There are people either so petrified of offending anybody, or so lazy that they do not even want to touch the subject, that they are led to say things like,
"Nobody should even care whether there is a Santa Claus or not."
Really?   Really?  The discovery that Santa Claus was actually real would not be the most astonishing, world-changing event in history?  The discovery that all of reality, all of history, all of science is now proven to be upside-down, inside out, and just plain wrong? If there's even a chance that Santa Claus was a real being who did what the stories claim he does, do you honestly expect me to believe that you wouldn't want to know this?  Well, I don't believe you.

Santa Claus Agnosticism and Santa Claus Apathy are wholly unnecessary positions to take because it can be demonstrated beyond any reasonable person's doubt that there never was any Santa Claus apart from possibly Nicholas of Myra, a 4th century figure of questionable historicity who scarcely resembles the Santa Claus of popular mythology, and who is now widely acknowledged as being dead.


So how do we know there really is no Santa Claus?

1.  Provenance of source materials.  There is no corroborating documentary evidence for Santa Claus apart from a small number of source documents which cannot be regarded as anything other than fictional works.  They have always been recognized as fiction, and were never intended to be anything but.  Also we can trace the development through history of the myth of Santa Claus, demonstrating that rather than arising from an individual's actual acts, it arose in a syncretic manner through ordinary myth-making processes.  Basically, just people making up stories.

2. Demonstrated physical impossibility.  The acts attributed to Santa Claus are provably not things that this universe's physics support or permit.  For Santa Claus to be real, physics, biology, chemistry, and even economics would have had to turn out quite differently to how they are actually discovered to be.

3.  Absence.  No real Santa Claus has ever been observed, communicated with, met with, photographed, or has ever left any physical or documentary evidence not explainable by entirely mundane non-Santa circumstances.  Neither would any real Santa have a legitimate reason for concealing his existence, the existence of elves, flying reindeer, gifts not actually manufactured and delivered by humans, or those three women he's always talking about (rather disparagingly, at that).  Absence of evidence precisely where that evidence must be found to support a hypothesis is perfectly valid evidence of absence.

If you are one of those who crouch behind a misunderstanding and misrepresentation of the Scientific Method and Karl Popper's Falsificationism and squawk about being agnostic while feeling superior about it, then you, my friend, are Dumb and Wrong.  Agnosticism is not a reasonable position to take when there is ample evidence against a proposition and zero evidence for it, regardless of how many people believe in some piece of demonstrable nonsense. No, you are merely afraid of ruffling feathers by offending people's cherished delusions.

So don't pretend that agnosticism is a superior position to take, and that it is anything other than disingenuous intellectual laziness.

Good day.



Friday, July 13, 2018

Understanding Mathematics

What is Mathematics?  Everyone has an answer for this; few answers make any sense.  Even highly regarded philosophers and scientists answer this question incorrectly, probably because nobody has challenged the explanations, and they have better things to do than sit around worrying about it.

Daniel Dennett (philosopher) calls mathematics a scientific system attempting to be internally consistent without any direct empirical basis or meaning.  He is sadly wrong.  Lawrence Krauss (physicist) calls mathematics a kind of philosophy - a tool for thinking about things and working out "what if" sorts of questions.  He is slightly less wrong.  Wikipedia and most pedagogical sources define Mathematics as a Formal Science, declaring that it is "not concerned with the validity of theories based on observations in the real world, but instead with the properties of formal systems based on definitions and rules."  This is also completely wrong.  Mathematics is supremely concerned with what statements are objectively, universally true.

Any number of statements about what Mathematics is or is not have been made: Mathematics is a language.  Mathematics is a tool.  Mathematics is a game.  Mathematics is puzzles.  Mathematics is an arbitrary construct.  Mathematics is imperialist male-centric in-group signalling.  Bla bla bla.  All mostly misinformed rubbish. And not only offered by people who never use it or who know nothing about it beyond what they were unable to grasp in high school.  People who use mathematics professionally, and even people who consider themselves professional mathematicians, usually have not really thought about what this thing is, why it works, and what it means.

So what is Mathematics really?  Let's break it down. The syntax and notation of mathematics should be considered separately to the facts, discoveries, or conclusions of mathematics.  The way we communicate and work with mathematical ideas is an invention that has been developed over hundreds of years.  Few people for instance can today understand and work through the mathematical notations of early discoverers like Newton, Euler, Gauss, or Leibniz.  Also, different cultures that discovered mathematical facts and principles independently often have mutually unintelligible ways of expressing those facts.  Mathematical systems of notation therefore share characteristics of a language: arbitrary, relative to the culture that created it, and not universal or unique.

And then there's the ideas themselves.  The content of mathematics (as divorced from the syntax) consists of discovered truths about quantities (numbers), the relationships between quantities and groups of quantities, and truths about operations on quantities and the transformation and manipulation of identifiable quantitative objects.  These facts, being discoveries and not inventions, are universal.  They are also absolute, and not (as many claim) merely relative to a set of assumed axioms.  I assert that the basis of mathematics is not axiomatic, but empirical, beginning with the empirical existence of integer counting numbers and their properties, and extending to the empirical, discoverable, and absolute properties of idealized geometric objects, spaces, operations, and functions.  Once we understand and get over the problem of syntax, it is easier to understand mathematics as the following:
Mathematics is the taxonomy of discovered quantities, quantitative objects, their properties, and of mathematical operations, and the discovered relationships between quantities, operations, and quantitative objects.  Mathematics makes use of arbitrary and non-unique invented systems of syntax and notation in order to document and explore this taxonomy.
Mathematical quantities and quantitative objects exist not because we (or any other agency) call them into existence, but because of nothing more than the possibility of the existence of one, two, three, or any other number of distinguishable things, be they asteroids, universes, atoms, oranges, or just abstract units or groups of units.  And that is a very, very low bar for the existence of anything - so low that one may as well accept that mathematics is as self-existing as anything could be, existing and awaiting only discovery by competent observers.

Meanwhile the syntax and notation of mathematics are clearly human inventions.  They are somewhat arbitrary in that respect, are not unique (i.e. by no means the only possible systems of notation), have regional dialects, and clearly evolve over time.  While each system of syntax strives to be internally consistent and unambiguous, they are not always perfect.  Syntax, like language, must be learned from other users of the syntax and through determined effort.  We develop with effort and practice the facility to interpret and manipulate the syntax and notation of mathematics in order to read, manipulate, or write universal mathematical ideas.

Syntax and notation allow us to do three things.

1. Using syntax one can read and interpret a mathematical statement.  This may be in the form of defining a stand-alone mathematical object (quantity or group of quantities) or in the form of a statement about a relationship between quantities, usually in the form of an equality relationship involving an operation.  Mathematical statements often express information about how aggregate quantities are comprised of or related to other quantities, e.g. by addition or multiplication.  The area of a rectangle is the quantity representing the base of the rectangle multiplied by the quantity representing the height.  The length of an object in centimeters is related to the length of that object in inches through multiplication by 2.54.  These cumbersome statements of fact are abbreviated succinctly using an invented notation: A = B*H;   1 in = 2.54 cm.
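The two abbreviations above can be sketched as runnable code; the variable names (base, height, CM_PER_INCH) are illustrative, not standard, and the example values are made up.

```python
import math

# A = B*H: the area of a rectangle is its base times its height.
base, height = 4.0, 3.0          # B and H, in any consistent unit
area = base * height             # A = B*H
assert area == 12.0

# 1 in = 2.54 cm: lengths in inches relate to centimeters by a factor of 2.54.
CM_PER_INCH = 2.54
assert math.isclose(10 * CM_PER_INCH, 25.4)  # ten inches is 25.4 centimeters
```

The code says nothing the notation does not already say; it merely demonstrates that the abbreviated statements are precise enough to be executed.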

2.  Through the use of a suitable syntax we can manipulate mathematical statements to find equivalent statements.  The statement "This tree weighs five tonnes," stripped of the ambiguity and possible smarty-pants alternative meanings of a natural language statement, becomes Wtree = 5 t.  As such it can be manipulated to express what else can be known from this statement alone.  This includes the conclusions that two identical such trees would necessarily weigh 10 tonnes together, a tonne is one equal fifth part of this tree, dividing the tree into two parts of equal mass must yield parts of 2.5 t each, and so on.  Notice that such manipulations and transformations do not add any new knowledge, but at best re-state the existing knowledge in order to be applicable or relevant to specific questions one may ask.  It tells us only what we already know, although sometimes in a way that we did not at first appreciate.
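The tree example can be sketched the same way. Each assertion below is merely the premise re-stated, not new knowledge; W_tree is an illustrative name for the quantity in the text.

```python
# Premise: this tree weighs five tonnes (W_tree = 5 t).
W_tree = 5.0

assert 2 * W_tree == 10.0   # two identical trees weigh 10 t together
assert W_tree / 5 == 1.0    # a tonne is one equal fifth part of this tree
assert W_tree / 2 == 2.5    # dividing the tree into two equal-mass parts gives 2.5 t each
```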

For example, if we know that the speed of a certain body increases by 32 feet per second each second, it is pretty obvious that after 3 seconds of accelerating from rest, the object must be travelling at a speed of 3 x 32 = 96 feet/sec.  This we know by mere extension or re-statement of the premise.  No new information is required.  But far less obvious is the fact that after 3 seconds it must also necessarily be 144 feet away.  This non-obvious fact is not new information; it is actually contained within the premise.  It only becomes obvious when the statement, expressed in mathematical syntax, undergoes valid syntactical transformations and manipulations leading to other entirely equivalent statements which we can then interpret.  This is not generally possible with natural-language statements.
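A quick sketch of this example, using the standard constant-acceleration relations v = at and d = ½at² (variable names are illustrative):

```python
# A body accelerating from rest at a constant 32 feet per second each second.
a = 32.0   # acceleration, ft/s^2
t = 3.0    # elapsed time, s

speed = a * t               # the obvious re-statement of the premise
distance = 0.5 * a * t**2   # the non-obvious fact, also contained in the premise

assert speed == 96.0        # 3 x 32 = 96 ft/s
assert distance == 144.0    # (1/2)(32)(3^2) = 144 ft
```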

Mathematical manipulations of the syntactical expression of a statement accepted as true, done in such a way that each re-statement is also true, permit us to uncover many other true statements implied or required by the original one.  It may be the case however that not all possible true statements about the premise can be discovered.  The only guarantee is that if each manipulation is valid, the result is a true re-statement of the premise.

3.  We can express mathematical ideas including asking questions about quantities and testing hypotheses about the relationships between mathematical objects.  A mathematical object is sometimes literally an object like a line, a triangle, or a sphere, but more generally it is an identifiable collection of numbers, often which have a simple rule for determining which numbers belong to the object.  Sometimes the numbers have to be in a specific order; sometimes it is something simple like "all numbers less than five."

For example, one may ask in natural language, "Which is the larger of the two - the area of a circle of diameter equal to one Glaaarrghtoot (aka a Glaaarrghsnaffian Inchmeter - 1/100,000 the mean circumference of Planet Glaaarrghsnaffia VII), or the area of a triangle, each side of which is also exactly one Glaaarrghtoot?"  And while philosophers and theologians unconstrained by knowledge endlessly dispute the meaning of words like "circle," "diameter," "triangle," "planet," and "equal," and while engineers Space-Google the mean circumference of G. VII in Spaceyards, we can cut right to the chase using mathematical syntax:

√3 >? π

In this form the question is readily and unambiguously answered: the circle has the larger area, on any planet or on no planet at all; in any universe or in no universe at all.  However, as you may have noticed, finding the answer was not possible by syntax alone, but is also inextricably linked to the meaning of mathematical objects such as "3" and "π," and by the existence of operations such as the square-root. So we leave syntax aside for now with the understanding that while itself an invention, the things syntax expresses are not inventions but discoveries.
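A numerical sanity check of the comparison, using the standard area formulas πr² for the circle and (√3/4)s² for the equilateral triangle. Since both areas carry the same unit squared, the Glaaarrghtoot cancels and the question reduces to π versus √3:

```python
import math

d = 1.0  # circle diameter, one unit (any unit)
s = 1.0  # triangle side, the same unit

circle_area = math.pi * (d / 2) ** 2        # pi/4, about 0.785
triangle_area = math.sqrt(3) / 4 * s ** 2   # sqrt(3)/4, about 0.433

assert circle_area > triangle_area          # pi > sqrt(3): the circle wins
```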

Mathematical discoveries include all numbers; groups of numbers; relationships between numbers; mathematical objects including geometric shapes, functions, and other identifiable groupings of quantities; operations that transform numbers, groups of numbers and objects; and relationships between mathematical objects.

These discoveries begin with the discovery of 1. The unit. A thing. Any single thing. Then along comes another thing, and we immediately discover 2. Two things. The idea of two. Two of something. Also, we make the discovery that one and one is two; that two ones make two; that two divided equally into two is one; and that one removed from two is once again one. In our notation,


1 + 1 = 2
1 x 2 = 2
2 / 2 = 1
2 - 1 = 1

By careful observation of two units both together and apart, we have also thus discovered (not invented) the operations of addition, subtraction, multiplication, and division. Then we discover 3, 4, 5, and all the other cardinal numbers, and a myriad of facts about the relationships between them. We discover even numbers, odd numbers, square numbers, prime numbers, factors, divisors, etc., in endless variety.
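Each of those discovered kinds of numbers is nothing more than a membership rule, and the rules are short enough to state in code. A sketch in Python (the function names are my own, chosen for readability):

```python
def is_even(n):
    return n % 2 == 0

def is_square(n):
    return int(n ** 0.5) ** 2 == n   # a whole number times itself gives n back

def is_prime(n):
    return n > 1 and all(n % d for d in range(2, int(n ** 0.5) + 1))

numbers = range(1, 21)
print([n for n in numbers if is_even(n)])    # [2, 4, 6, ..., 20]
print([n for n in numbers if is_square(n)])  # [1, 4, 9, 16]
print([n for n in numbers if is_prime(n)])   # [2, 3, 5, 7, 11, 13, 17, 19]
```

The rules pick out the members; nothing about which numbers qualify was up to us.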

We can also discover without any further assumptions (or axioms) whatsoever the existence of an infinity of numbers between the cardinal (integer) numbers, as well as negative numbers. These non-obvious numbers are called into existence by the very existence of the operations we discovered at the start. Because Division empirically exists (you can divide objects or collections of things and count the results), non-integer numbers must therefore also exist. Five must ultimately be capable of being divided by two, for instance. Because Subtraction is a thing, negative numbers must therefore also be a thing. You can have an actual deficit of frogs - negative frogs - if someone owes you a frog.
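Both forcings can be seen concretely. A small sketch in Python, with its built-in Fraction type standing in for exact division:

```python
from fractions import Fraction

# Division applied to 5 and 2 forces a number between 2 and 3 into existence.
print(Fraction(5, 2))   # 5/2
print(5 / 2)            # 2.5 -- not an integer

# Subtraction applied to 1 and 2 forces a number below zero into existence.
frogs_owned, frogs_owed = 1, 2
print(frogs_owned - frogs_owed)   # -1: an actual deficit of frogs
```

The operations came first; the new kinds of numbers are just what the operations demand.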

Because endless sequences of divisions of whole numbers have no reason not to exist, irrational numbers (not expressible as any ratio of whole numbers) likewise are permitted to exist, and it can be shown that they do. Even less obviously, if multiplication is to be a logically self-consistent thing that exists, "imaginary" numbers must also exist - numbers which, when multiplied by themselves, produce negative numbers, which we already know exist.
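Both of these less-obvious claims can be poked at directly. Python's complex type supplies a number whose square is negative, and ratios of whole numbers can be watched closing in on √2 without ever landing on it (a sketch only; the impossibility of an exact ratio is established by proof, not by computation):

```python
from fractions import Fraction

# "Imaginary" numbers: i * i = -1, a square that is negative.
i = complex(0, 1)
print(i * i)   # (-1+0j)

# Ratios of whole numbers approximating sqrt(2): each is close, none is exact.
for num, den in [(3, 2), (7, 5), (17, 12), (41, 29)]:
    r = Fraction(num, den)
    print(r, r * r, r * r == 2)   # squares hover around 2 but never equal it
```

The approximating ratios here are the classic convergents to √2; any other sequence of ever-better fractions would make the same point.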



Besides quantities, there are endless other kinds of mathematical objects to be discovered: the point, the line, the plane, points on a plane, geometric shapes on the plane as identifiable groups of related points, n-dimensional spaces, and n-dimensional geometric objects. There are functions in endless variety: groups of numbers that are related to each other through a sequence of operations. y = 5x. y = sin(2x). All just awaiting discovery, and the discovery of their numerous obvious and not-so-obvious interrelationships. New techniques and better systems of syntax often need to be invented in order to more easily work with more complicated discovered objects.
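The two example functions read nearly the same in code as on the page; a minimal sketch in Python (the names f and g are mine):

```python
import math

def f(x):
    return 5 * x            # y = 5x

def g(x):
    return math.sin(2 * x)  # y = sin(2x)

print(f(3))             # 15
print(g(math.pi / 4))   # sin(pi/2) = 1.0
```

Each function is just a rule relating one group of numbers to another - which is the sense in which it was sitting there waiting to be discovered.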

But you may say all these "empirical" discoveries are merely of abstract ideals, not objects of physical existence. What is the connection then to the physical universe? Why do so many natural phenomena lend themselves so well to mathematical descriptions? What is the nature of the strange link between the real world and the purely abstract world of mathematics?

We need to walk back some of the question-begging smuggled in with these questions. Firstly, it is in no way "purely abstract" to observe that discrete items in the physical universe correspond to the cardinal numbers one discovers in mathematics. This is, indeed, how the cardinal numbers were originally discovered. One rock. Two rocks. Another makes three. One bear. Two bears. Holy shit - run for your lives! In no way is this purely abstract or hypothetical. One gallon - two gallons - not to mention the practically unlimited divisibility of gallons of liquids into smaller non-integer quantities. Mathematics is simply not the abstraction that so many have claimed it was.

Natural phenomena and the mathematics that describe them are likewise not the separate entities that the above questions presuppose, either. We discover natural phenomena at about the same time we discover the mathematics that describes them precisely because they are often one and the same. The inverse square laws of physical phenomena such as gravity, radiation, luminescence, and electric fields are not eerily mathematical due to some kind of conspiracy or fine tuning, or some deep mystery of surprising profundity, nor is the mathematics "just a model of reality." Rather, all these physical laws are nothing more than re-statements of the rather mundane mathematical discovery that the surface of any sphere increases as the square of its radius. Exponential radioactive decay and the law of half-lives is not atoms being artificially forced to obey an invented mathematical abstraction by some mysterious conspiracy; instead it is merely an instance of probability (another mathematical discovery) happening to large numbers of objects, not abstractly, but in real life.
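Both claims can be checked in a few lines. A sketch in Python (the source strength, population size, decay chance, and seed are arbitrary choices of mine): the inverse square behavior is just the sphere's surface area 4πr² sitting in a denominator, and the half-life law emerges on its own once every atom gets the same fixed chance of decaying per tick.

```python
import math
import random

# Inverse square: a source's output spread evenly over a sphere of radius r.
def intensity(source, r):
    return source / (4 * math.pi * r ** 2)

print(intensity(100.0, 2.0) / intensity(100.0, 1.0))   # ~0.25: double r, quarter the intensity

# Exponential decay from pure probability: no law imposed on the atoms.
random.seed(42)    # reproducible run
N = 100_000        # starting population of atoms
p_decay = 0.01     # per-tick decay chance for each atom

# Ticks until half survive: solve (1 - p)^t = 1/2 for t.
half_life = math.log(0.5) / math.log(1 - p_decay)   # about 69 ticks

atoms = N
for _ in range(round(half_life)):
    atoms = sum(1 for _ in range(atoms) if random.random() > p_decay)

print(atoms, "of", N, "remain after one half-life")   # close to 50,000
```

Nothing in the loop "knows" about exponentials; the exponential is simply what independent fixed-probability events look like in aggregate.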

It is not at all mysterious, nor should it be, that mathematics works so well in the natural and applied sciences, any more than it should be confusing that one rock and another rock makes . . . two rocks. It is a basic truth about things in the universe that they represent and are represented by numbers. Numbers have relationships that we can discover, and those relationships are again reflected in the real objects in the universe that embody these numbers, as a direct consequence of embodying those numbers. What's interesting is that while numbers are readily embodied by physical objects, they can also be embodied by abstractions. Numbers don't even need a universe in order to exist.


The historical development of mathematics is a confused story of simultaneously discovering mathematical objects while struggling to invent ways of talking about them. These are often ad hoc shortenings of natural language descriptions that evolved into some kind of operative syntax. This can easily account for why it is not obvious to more people - even mathematicians - that mathematics is really two very different things bound together: a taxonomy of discovered universal truths about quantities, and an invented arbitrary system of syntax needed to talk about them.