# What is 0^0? And is math true, or just useful?

April 22, 2011

When you hear mathematicians talk about “searching” for a proof or having “discovered” a new theorem, the implication is that math is something that exists out there in the world, like nature, and that we gradually learn more about it. In other words, mathematical questions are objectively true or false, independent of us, and it’s up to us to discover the answer. That’s a very popular way to think about math, and a very intuitive one.

The alternate view, however, is that math is something we invent, and that math has the form it does because we decided that form would be useful to us, not because we discovered it to be true. Skeptical? Consider imaginary numbers: The square root of X is the number which, when you square it, yields X. And there’s no real number which, when you square it, yields -1. But mathematicians realized centuries ago that it would be useful to be able to use square roots of negative numbers in their formulas, so they decided to define an imaginary number, “i,” to mean “the square root of -1.” So this seems like a clear example in which a mathematical concept was invented, rather than discovered, and in which our system of math has a certain form simply because we decided it would be useful to define it that way, not because that’s how things “really are.”

This is too large of a debate to resolve in one blog post, but I do want to bring up one interesting case study I came across that points in favor of the “math is invented” side of the debate. My friends over at the popular blog Ask a Mathematician, Ask a Physicist did a great post a while ago addressing one of their readers’ questions: What is 0^0?

The reason this question is a head-scratcher is that our rules about how exponents work seem to yield two contradictory answers. On the one hand, we have a rule that zero raised to any power equals zero. But on the other hand, we have a rule that anything raised to the power of zero equals one. So which is it? Does 0^0 = 0 or does 0^0 = 1?

Well, I asked Google, and according to their super-official calculator, the answer is unambiguous: 0^0 = 1.

Indeed, the Mathematician at AAMAAP confirms, mathematicians in practice act as if 0^0 = 1. But why? Because it’s more convenient, basically. If we let 0^0=0, there are certain important theorems, like the Binomial Theorem, that would need to be rewritten in more complicated and clunky ways. Note that it’s not even the case that letting 0^0=0 would contradict our theorems (if so, we could perhaps view that as a disproof of the statement 0^0=0). It’s just that it would make our theorems less elegant. Says the mathematician:

“There are some further reasons why using 0^0 = 1 is preferable, but they boil down to that choice being more useful than the alternative choices, leading to simpler theorems, or feeling more ‘natural’ to mathematicians. The choice is not ‘right’, it is merely nice.”

I’ve always felt that this is one of those debates that sounds deeply meaningful but in fact turns out to be spurious. Isn’t it simply the case that if you set up certain parameters, as we have done with mathematics in our axiomatic framework, then structure and discoveries come directly out of the framework, because they are intrinsic to it? That does not mean the structure is “out there” somewhere; rather, it lies in how we set up the system (invention) and then explore it (discovery).

HAHA! I’ve always wanted to know why i was defined as the square root of -1: it’s because there is no real number whose square equals -1. That’s hilarious….. Whoever invented i must have had a lot of gumption.

http://en.wikipedia.org/wiki/Euler's_formula

Indeed, the originator of this idea was much maligned by the mathematical community. Every time someone has extended an idea in a nonobvious way, there has been push back from the field.

What if mathematics is both invented and discovered, analogous to the nature vs. nurture debate? It seems to me that at least some of mathematics is inherent to the universe, otherwise it wouldn’t be so useful in modelling that universe. On the other hand, I’ve read of some mathematicians who have come up with systems of mathematics that rest on completely different axioms than “mainstream” mathematics. Assuming that my understanding is correct, even though these systems are internally consistent and logical, they produce wildly different results than what the rest of us might expect. IIRC, there have also been similar things done with systems of formal logic. (Caveat: my math education was a long time ago.)

In any case, I think I need to add AAMAAP to my Google Reader instead of just reading articles you link to on Facebook!

I’m not sure that I agree with the importance of that distinction. Let me try to illustrate with some examples:

The set of integers {0, ±1, ±2, …} is not closed under the operation of division. To find a set that is closed under division (except by zero), we need to “invent” a new type of number, called a rational number q, where q = a/b and a, b are integers with b ≠ 0.

Similarly, the set of real numbers is not closed under the operation of square roots. To find a set that is closed under the operation of square roots, we need to “invent” a new type of number, called an imaginary number i, where i = sqrt(-1).

The complex numbers extend the real numbers, and in a similar way, the rational numbers extend the integers. (This can go further; quaternions extend the complex numbers, and octonions extend the quaternions.) I think that imaginary numbers seem more “invented” than rational numbers because fractions are more obviously visible in everyday life, but imaginary numbers do have important uses in describing physical phenomena, such as electrical signals.

In another sense, I don’t think that this is any different than what, e.g., particle physicists do. When we find a new “particle,” what’s really happening is that we have evidence for some object with specific properties, and that object isn’t in the catalog of objects we already know about. With enough evidence, we decide that we’re confident enough to assign this object a name and add it to the catalog. Calling it a particle is one description; other physicists might describe it instead as an excitation of a field, or a resonance. Mathematicians did the same thing with sqrt(-1), by assigning it a name, further investigating its properties, and eventually using those properties to find even newer mathematical objects, as well as many applications.

As Chana Messinger said above, a new mathematical object is intrinsic to the mathematical system, just as a new particle is intrinsic to the laws of physics. We do have some degree of freedom in which system we choose, but that’s true in physics, too. We can choose the origin of coordinate axes, gauge conditions, and so forth. However, once we do, certain conclusions are inevitable, such as complex numbers. I think the 0^0 example is one of those situations with a degree of freedom. (I would like to note, though, that if we refer to the Wikipedia section on 0^0, there are arguments for it being either 1 or indeterminate, but it seems as though agreement has been reached that it’s not 0.)

It’s not necessarily true that mathematics has to be something inherent to the universe for it to be scientifically useful. Math would be just as empirically useful even if it was a contradictory system.

Sidenote: Was Riemann really just a hipster who thought that Euclidean geometry was too mainstream? 😛

We know that mathematics is true, because it is useful; and we know that it is useful, because it is true. (They love it when you shuffle the words around!)

Seriously, though, the fact that mathematicians can agree on a newly proven result because they are shown the proof and they can validate it, I think, casts extreme doubt on any theory that goes so far as to say that mathematics is purely a cultural phenomenon. Granted, all the mathematicians are (arguably, roughly speaking) trained in the same way, using the same basic axioms. But when a previously open problem is conclusively resolved one way or another, it is not merely chance – the solution was implicit in the set of axioms and/or cultural biases, but undiscovered. (The case of Newton and Leibniz is perhaps the most familiar example of two mathematicians reaching highly non-trivial and equivalent conclusions without directly influencing one another.)

Sometimes, however, a question cannot be resolved one way or another. Sometimes you can even prove that a given question cannot be resolved one way or another. Sometimes you can even resolve the question of whether it is possible to prove whether or not all questions can be resolved, as Gödel did.

And sometimes, a question of seemingly the purest mathematics can be revealed as…a DISGUISED QUERY. This is the case when it comes to the matter of zero to the zeroth power. It is impossible to prove that this value is equal to anything: no matter how many axioms of mathematics you apply and in what order, you will never derive a formula for it (unless, of course, your mathematics has an axiom which expressly covers this case). There is simply not enough information in the question “What is 0^0?” to produce an answer.

However, with a little bit of context, we might be able to come up with something. For instance, if you want a continuous function f(x) such that f(x) = 0^x anywhere that 0^x is defined, the only function that will satisfy this criterion also satisfies f(0) = 0. Similarly, if you want a continuous function g(x) such that g(x) = x^0 anywhere that x^0 is defined, the only function that will satisfy this criterion also satisfies g(0) = 1. And if you want a continuous function h(x,y) such that h(x,y) = x^y everywhere x^y is defined, well, you’re out of luck: that problem is overconstrained, and no such function h(x,y) exists.

Continuity is a useful property because it eliminates “sensitivity” to small perturbations; and the reason that the convention 0^0 = 1 (g(0) instead of f(0)) is popularly chosen is that in the overwhelming majority of cases where a base is not constrained to a particular positive constant, perturbations are more likely to occur in the value of the base than in the value of the exponent. (Put another way, we are more accustomed to viewing “raising x to a power” as an operator than to viewing “raising a base to the xth power” as an operator.)
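The two one-variable limits described above can be seen numerically; this is an illustrative sketch (mine, not the commenter's), evaluating 0^x, x^0, and the diagonal x^x for small positive x:

```python
# Illustration of the continuity argument: 0^x -> 0 and x^0 -> 1 as
# x -> 0 from the right, so no continuous h(x, y) = x^y can exist at (0, 0).
# Along the diagonal, x^x happens to approach 1 as well.
for x in [0.1, 0.01, 0.001, 0.0001]:
    print(f"x={x}: 0^x={0.0 ** x}, x^0={x ** 0.0}, x^x={x ** x:.6f}")
```

The 0^x column is exactly 0 and the x^0 column exactly 1 for every positive x, which is the overconstraint at the origin.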

So, indeed, the convention that Google has adopted here – following the IEEE floating-point standard – is in the spirit of an *approximation* – a perturbation on mathematics that is more convenient, under most circumstances. In fact, the entire floating-point standard is an approximation of the concept of a real number, which (ironically) can never be fully realized in our universe.
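For what it's worth, the IEEE floating-point convention mentioned above is easy to observe in any language that follows it; a quick check in Python (an illustration, not part of the original post):

```python
import math

# The IEEE 754 convention: pow(0, 0) evaluates to 1 rather than raising
# an error or returning NaN. Python's integer exponentiation agrees.
print(math.pow(0.0, 0.0))  # 1.0
print(0 ** 0)              # 1
```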

However! All of this uncertainty and doubt should not serve to taint the image of mathematics as a whole: for mathematicians can indeed prove some propositions with absolute certainty, given a particular set of premises. These *relationships* between propositions and their premises are outside the influence of culture or language – it is through the application of reason that we may refine our image of them, and the fact that many minds through reason perceive the same images serves as strong evidence that their existence is neither fabricated nor illusory.

Math gives you freedom to invent two things: axioms themselves and names for relationships. For example, the axioms of number theory don’t say anything about a Fibonacci sequence. That’s just a name for a particular sequence. Now, given a definition of that sequence, you can discover/prove its properties, like the fact that every third number of the sequence is even.

By the same token, axioms of the real line only define addition and multiplication, not exponentiation, so you’re free to define exponentiation in a way that’s useful or convenient.
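The Fibonacci fact mentioned above (every third number of the sequence is even) is exactly the kind of property you discover once the definition is fixed; a quick illustrative check, assuming the indexing F(1) = F(2) = 1:

```python
# Check that F(3), F(6), F(9), ... are all even, where F(1) = F(2) = 1.
def fib(n):
    a, b = 1, 1
    for _ in range(n - 1):
        a, b = b, a + b
    return a

evens = [fib(3 * k) for k in range(1, 8)]
print(evens)  # [2, 8, 34, 144, 610, 2584, 10946]
assert all(x % 2 == 0 for x in evens)
```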

I think you guys all described the situation aptly. My original post was, I now realize, somewhat unclear in that I conflated two similar debates: “Is math discovered or invented?” with “Is math true or just useful?” Here’s my sense of how those two debates relate to each other:

(1) If you believe that mathematical statements are inherently true or false (e.g., “1+1=2” is inherently true, and it would be true no matter what physical world we lived in), then you view the process of mathematical inquiry as being one of discovery. That’s because we usually use the word “discovery” when there is one objectively correct answer and we are trying to figure out what it is.

(2) However, if you believe that mathematical statements like “1+1=2” are only true conditional on certain axioms, then the process of mathematical inquiry looks more like one of invention: we invent axioms that yield the theorems which will be useful, both in terms of making our math simpler and cleaner, and in terms of allowing us to model the physical world we happen to live in.

Someone in camp (2) would say that axioms can’t themselves be true or false. But someone in camp (1) would say that those axioms which lead to true statements like “1+1=2” are true axioms; those that don’t are false axioms.

This post made me think of 1 and the definition of a prime number; also, of 0!.

Certainly, some aspects of mathematics are based on convention; however, it appears to me that most (at least, of my acquaintance) are descriptive. I think Cory’s suggestion regarding the usefulness of mathematics in modeling the universe hits the nail on the head. What came to mind as I read the post was the concept of numbers for counting. If we have, say, two sets of n objects, it is not merely by convention that we recognize each set as containing the same quantity of objects–whatever those objects might be.

Why can’t it be both? I.e., some parts of it are discovered, and inherent to nature, while others are shortcuts and conveniences “invented” by human beings.

In my opinion, mathematics is a language or tool invented to understand and explain the world around us, much the same way as, say, the English language was invented. Numbers are analogous to letters of the alphabet, mathematical operations to rules of grammar, and proofs to sentences.

Mathematical theorems are based on axioms, assumptions, definitions and earlier theorems. Similarly, physics is based on “laws of nature”, meaning the “whys” cannot be explained, but the “hows” describe the consistent aspects of nature’s behavior. As new consistent behavior is discovered, new mathematics gets invented to explain it.

0^0 = anything; there needs to be a context for something that is undefined on one side.

“(2) However, if you believe that mathematical statements like “1+1=2” are only true conditional on certain axioms, then the process of mathematical inquiry looks more like one of invention: we invent axioms that yield the theorems which will be useful, both in terms of making our math simpler and cleaner, and in terms of allowing us to model the physical world we happen to live in.”

Even in this case, though, if one believes that given a particular axiom system there are objective truths about whether any given theorem (or its negation) would or wouldn’t be derivable from those axioms, and that these objective truths are independent of whether human beings ever figure them out or not, then this still seems like a weakly “Platonistic” position; the Stanford Encyclopedia of Philosophy article on mathematical Platonism refers to this as truth-value realism. Just to pick an example, I think that once you’ve given some axioms defining the complex numbers and then given the rule that defines the Mandelbrot set, there’s an objective truth about whether any given complex number is a member of the set or not, so in some sense the Mandelbrot set exists “out there” and we just discover parts of it. So maybe I’m not a full Platonist, but I’m definitely more “Platon-ish” than someone who thinks math is a pure cultural invention defined only by human thought processes.

Also, this is a pretty minor nitpick, but it always kind of bugs me when people suggest that imaginary numbers are somehow more unreal, in the ordinary English sense, than “real numbers”, or more a product of human imagination. “Real” and “imaginary” are just arbitrary labels mathematicians came up with that shouldn’t be taken too literally; after all, one wouldn’t say that “transcendental numbers” like pi are more “transcendent” than integers in the ordinary English sense of the word. Sure, you can’t hold i apples in your hand, but then you can’t hold a negative number of apples in your hand either! And just like negative numbers, imaginary numbers are pretty essential for modeling some real physical phenomena, like in quantum mechanics, where the quantum “state vector” has a complex amplitude. Different people may take different views of the ontological status of numbers, and of whether they “exist” independently of us in any sense, but I don’t think any philosophers would suggest real numbers have a different ontological status than imaginary ones…

0^0 = the limit of x^x as x -> 0. But x^x = exp(ln(x^x)) = exp(x ln(x)). And x ln(x) -> 0 as x -> 0, and exp() is continuous, so exp(x ln(x)) -> exp(0) = 1.

There’s nothing inconsistent here.

The “Ask a Mathematician” post labels that version the “Cleverest Student”.

But the simpler limit of 0^x as x->0 = 0.

Your maths is right, but you have an error in the first equation: it is the same as asserting infinity * infinity = lim(…), and that’s nonsense. There is a big difference between the definition of lim(x) as x -> 0 and 0 itself. Generally a^x is not defined when a = 0; there is a possibility of computing it, but we have to specify the function, and that’s not clear: it depends on whether you use 0^x or x^0.

I take issue with the assertion in the original post that imaginary numbers are an example of something “made up”. They are thought of this way because they are really poorly named and not taught well in high school math (big surprise). Complex numbers are pretty intuitively obvious if one interprets them as describing geometric operations on a plane like stretching and rotating.

Think of it this way: if we multiply any number by -1, that corresponds to starting with the real number line and flipping it around about 0. If we flip it again, we are back to where we started, so -1 is a geometric operation on a line which has the property that doing it twice gets you back to where you started. -1 can also be thought of as acting on a plane: -1 flips the plane about the y-axis and about the x-axis. But we can also rotate a plane; there is nothing “invented” or counterintuitive about rotating some flat object. The number “i” is best thought of as that geometric operation on a plane which, when you do it twice, is the same as doing -1, which was just flipping the plane about its axes. Doing the operation twice is the analogue of taking i^2, which is -1.

I forgot to mention that the operation determined by the number i is just a 90 degree rotation of the plane, where -1 determined a 180 degree rotation (a flip). The number 1 corresponds to doing nothing since multiplying anything by 1 yields the original number.
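This rotation picture is easy to check in any language with built-in complex numbers; a minimal sketch in Python (my own, not from the comment):

```python
# Multiplication by i rotates a point of the plane by 90 degrees;
# doing it twice is multiplication by i^2 = -1, a 180 degree rotation.
z = 1 + 0j          # start at (1, 0) on the real axis
print(z * 1j)       # rotated to (0, 1)
print(z * 1j * 1j)  # two rotations land at (-1, 0)
assert 1j * 1j == -1
```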

We invent everything, but that doesn’t mean there is no corresponding reality to our ideas. The idea of believing what is useful, simpler, etc. is exactly what every epistemology requires to have any beliefs about anything. There are many qualities of hypotheses and theories that we take as positive rather than negative and we can’t always (if ever) attain absolute certainty.

The subject (is math real/true/…) is covered in good detail here: http://www.amazon.co.uk/Logicomix-Search-Truth-Apostolos-Doxiadis/dp/0747597200/ref=sr_1_1?ie=UTF8&qid=1303727802&sr=8-1-spell

It should be accessible to average readers with no specific knowledge of math 🙂

You’re posing a false dichotomy. Formal systems are invented. AND there are objective truths about such systems. Chess is invented. Yet we can prove that a king and rook can force mate against a lone king.

Imaginary numbers are as “imaginary” as real numbers. You won’t find a 7 on the street. All numbers are abstract concepts that may be used to model reality. Points along the real number line describe scalar quantities like distance and temperature, while points in the complex plane describe things with a magnitude and angle/phase, like 2D motion and sine waves.
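As a small illustration of the magnitude-and-angle view (my own sketch, using Python's built-in complex type):

```python
import cmath

# A point in the complex plane carries both a magnitude and a phase angle.
z = 1 + 1j
r, phi = cmath.polar(z)  # magnitude and phase in radians
print(r)                 # sqrt(2), about 1.414
print(phi)               # pi/4, about 0.785
```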

“God gave us the integers, all else is the work of man.” (Kronecker)


I just wrote about another part of this issue on my blog here:

http://mathmeteo.blogspot.com

Julia, I think I disagree with your most recent comment in the thread…specifically, I think position (1) is a bit of a strawman (not a malicious one, I’m sure, just a bit inaccurate) of mathematical Platonism. I think any modern mathematician would admit that 1+1=2 depends strongly on your axiomatic system (for example, if you have an inconsistent one, it’s trivial to prove 1+1!=2), but those of a Platonist bent would argue that it is the connections between axioms, and the theorems you can prove from them, which form the objective truth of mathematics. I.e. given an axiomatic system, the provable theorems have a truth independent of human minds. In contrast, non-Platonists have a few options: a fallibilist might question the basis of human mathematical knowledge, and a (reformed, post-Gödelian) formalist might say that’s a watered-down definition of “objectively true.”

For a radical mathematical Platonist’s (slightly technical) take on the objective reality of mathematics, this paper by Tegmark is pretty thought-provoking:

http://arxiv.org/abs/0704.0646

Cheers,

Nick

Yes! What Nick said. I definitely fall into the “radical Platonist” camp myself. Another interesting take is Jürgen Schmidhuber’s:

http://arxiv.org/abs/quant-ph/0011122

Very interesting paper. With the caveat that I haven’t read the whole thing yet, the contrarian physicist in me would like to mention one interesting wrinkle, which is that there’s no a priori reason to think the universe is formally describable. We’ve been quite fortunate to have computable and decidable physical laws so far, but there’s no guarantee that will continue as we home in (perhaps) on a ToE. For example, if the ToE involves sums over superpositions of spacetimes with different topologies, my understanding is that it’s quite likely that that theory will be noncomputable (given that classification of four-topologies is isomorphic to the halting problem). I believe, although I’m less sure of this, that similar computability problems can arise in noncausal universes (and GR already admits closed timelike curves in principle).

-Nick


when old age shall this generation waste

thou shalt remain, in midst of other woe

than ours, a friend to man, to whom thou say’st

‘beauty is truth, truth beauty,- that is all

ye know on earth, and all ye need to know

I’ve never managed to figure out what it means for a mathematical concept to be “discovered” rather than “invented” – my requests to have this distinction explained to me in words I can understand never seem to lead anywhere good. I would like to point out, though, that you could argue that imaginary numbers were discovered, not invented, at least under some plausible meanings of these words, which may or may not be the meanings you had in mind.

The story goes like this. Gerolamo Cardano was obsessed with solving cubic equations. After decades of work, and several cases of probable plagiarism, he finally triumphed and derived his famous formula. But there was a curious feature to his results: even though the answer was always right, in some cases the intermediate steps of the computation did not make sense. For example, applied to the equation

x^3 = 15x+4

his formula gave an expression which simplified to

2 + sqrt(-1) + 2 – sqrt(-1)

Now 4 is indeed a solution to the original equation, and as long as you are comfortable canceling out those pesky square roots, everything is great.

But Cardano was not comfortable: the square root of -1 does not exist, so how can it be cancelled with its negative? He tried to derive formulas which would make sense (i.e. produce real numbers) from beginning to end, but failed repeatedly (and, as far as I know, so did all those after him who attempted the same).

In his monograph, Cardano freely admits that his formulas produce nonsense, but he cannot bring himself to abandon them either, deriving formula after formula which produce correct answers once the negative square roots cancel out. In the end, Cardano produced some quasi-mystical ramblings on concepts which do not exist in an attempt to justify his manipulations. It goes without saying that all this really caught the attention of the mathematical world, and over the next couple of centuries many people thought about the sort of logical arguments which might justify Cardano’s equations.

With this in mind, it seems plausible to say that we did not invent imaginary numbers, but instead they really forced themselves upon us.
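The cancellation described above can also be checked numerically. This is a sketch using Python's built-in complex arithmetic (not part of the original comment); the cube roots taken are the principal branches, which happen to pair up correctly for this equation:

```python
import cmath

# For x^3 = 15x + 4 (p = 15, q = 4), Cardano's formula gives
# x = cbrt(q/2 + sqrt((q/2)^2 - (p/3)^3)) + cbrt(q/2 - sqrt(...)).
# Here (q/2)^2 - (p/3)^3 = 4 - 125 = -121, so the intermediate values
# are complex, yet their sum is the real root 4.
p, q = 15, 4
disc = cmath.sqrt((q / 2) ** 2 - (p / 3) ** 3)  # 11i
u = (q / 2 + disc) ** (1 / 3)                   # principal cube root of 2 + 11i, i.e. 2 + i
v = (q / 2 - disc) ** (1 / 3)                   # principal cube root of 2 - 11i, i.e. 2 - i
x = u + v
print(x)  # approximately (4+0j)
assert abs(x ** 3 - (15 * x + 4)) < 1e-6
```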

Rereading this comment, I feel my main point did not come through as clearly as I would have liked. Which is: both Cardano and the mathematicians who followed him tried very, very hard to avoid dealing with imaginary numbers. A lot of effort was spent trying to find arguments which avoided them. However, this turned out to be untenable, and after a century and a half or so, the mathematical world had no choice but to give in.

Read Lakoff’s “Where Mathematics Comes From”

He argues for a cognitive science of mathematics and says that the “romance of mathematics”, the notion of it being transcendent, is false, and that numerical cognition is a shared subjective bias of humans.

People often presume complex numbers are merely a clever device and as such don’t or can’t *directly* describe phenomena, but – although this is true for physical scales of personal experience – it is in general false. You might as well say other algebraic or differential structures can’t fundamentally describe aspects of reality, e.g. integers modulo 12 for clocktime or manifolds for curved spacetime. The notion of “number” has broadened over time, and at every step people perceived the new numbers as something not belonging to the old set.

Moreover, our notation is sometimes imperfect in capturing the structure of a mathematical theory at various points of concern, so we are forced to make exceptional cases in order to consistently represent the whole picture, even if the theory appears arbitrary or convention-based in specific situations (such as with 0^0=1). Other times we simply don’t bother and leave them undefined because they are not intrinsic to the theory (such as with 0/0=nullity).

I’ll repost what I posted on math.stackexchange:

To say a mathematical object exists is to say it is logically possible for affairs to exemplify its structure. Logical possibility exists, by assumption, independent of the human mind, and hence mathematical objects are discovered. On the other hand, we may say an idea is created if it is creatively fashioned from other, previously known concepts in a meaningful way. Therefore mathematical objects are also created. So long as the words “discovery” and “invention” hold these straightforward definitions, we must conclude mathematics is a process of both simultaneously.

I feel this was written by someone with very little mathematical knowledge.

To say that i is invented, as a method of abstracting and generalizing is no different than to say the number 1 was invented as a method of abstracting and quantifying.

There is no such “thing” as ‘1’, it is merely a symbol to represent an idea, no different than ‘house’ is a group of symbols to represent an object. i is no less ‘real’ than ‘real’ numbers, and it was not ‘invented’ any less than ‘1’ or discovered any more than ‘1’ was.

Deep study of complex analysis yields amazing results like Euler’s identity, which is an often-quoted example of how ‘i’ was more of a leap in intuition and a ‘discovery’ in mathematics, given its extremely important consequences. So to say that ‘i’ was ‘invented’ or ‘discovered’ is wrong, each in its own way.

Just to stir up the debate: math is no less true than the fact that you are reading this, since in math you assume a set of axioms, and in life you assume that what you see is in fact the world (e.g. Descartes’ Discourse on the Method), which is itself an axiom. Admitting an axiom implicitly admits its consequences, and thus math is no less true and no less false than ‘reality’.

So the answer is that the question is faulty, since there is no answer.

I think 0^0 is undefined. It is like saying 0^(1-1), which translates to 0^1 / 0^1, or 0/0. Does this seem plausible to anyone else?