
1 [caption]: If Kurt Gödel were alive today: {image of Gödel sitting at a computer}
1 Kurt Gödel: Hmmm.
2 Kurt Gödel: It seems no matter what anyone does, there always remain statements which are true, but which cannot be proven within the system.
3 Kurt Gödel: Stupid Wikipedia.
3 [computer screen]: Elementary arithmetic
3 [computer screen]: From Wikipedia, the free encyclopedia
3 [computer screen]: Elementary arithmetic is the most basic kind of mathematics[citation needed]: it concerns the operations of addition, subtraction, multiplication, and division[citation needed]. Most people learn elementary arithmetic in elementary school[citation needed].
3 [computer screen]: Elementary arithmetic starts with the natural numbers and the Arabic numerals used to represent them[citation needed]. It requires the memorization of addition tables and multiplication tables for adding and multiplying pairs of digits[citation needed]. Knowing these tables, a person can perform certain well-known procedures for adding and multiplying natural numbers[citation needed]. Other algorithms are used for subtraction and division. Mental arithmetic is elementary arithmetic performed in the head, for example to know that 100 − 37 = 63 without use of paper. It is an everyday skill[citation needed].
This strip's permanent URL: http://www.irregularwebcomic.net/1845.html

Kurt Gödel was a mathematician and logician, born in 1906 in Brünn, then part of the empire of Austria-Hungary (now known as Brno, and in the Czech Republic). He fled Austria following the Anschluss, travelling to the USA by train across Siberia and via Japan, eventually becoming an American citizen after World War II.
Gödel is best known for his work in formal mathematical logic, and in particular a result that he proved, which is known as Gödel's incompleteness theorem.
[Aside: I've attempted to explain a lot of quite complicated things in science and mathematics in simple terms in comic annotations before, but frankly this one is by far the most daunting. In fact, I've mentioned it before, in #1068, but I shied away from trying to explain it in any detail. Feeling somewhat like I took the soft option that time, I'm giving it a go now. I just hope I can do it justice.]
Gödel's famous incompleteness theorem (he actually produced two "incompleteness" theorems, but one is much better known) states:
For any consistent, nontrivial, formal, computably enumerable theory that proves basic arithmetical truths, an arithmetical statement that is true, but not provable in the theory, can be constructed. That is, any effectively generated theory capable of expressing elementary arithmetic cannot be both consistent and complete. [Formulation taken from Wikipedia, with the addition of the word "nontrivial".]
This is a pretty dense couple of statements, full of words with precise mathematical definitions. Let's try to break it down.
A formal theory is a way of doing mathematics by manipulating symbols according to strict rules. A simple example is given by the way we do addition. Normally when doing addition we don't really think about the symbols for the numbers; we sort of think about the numbers "themselves", whatever that means. Applying a few ingrained rules learnt by heart early in our education, we work out in our heads that 12 plus 34 is equal to 46. We don't really have to think very hard to do this.
To do this addition formally, we first need to define our symbols. In this case, they are the numerals 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, and the two symbols + and =. Next we need to define our rules. In the case of addition, the rules are, in essence, the ones we memorise at school: a table giving the sum of any pair of digits, and the carrying procedure for when a column's sum exceeds 9.
An important feature of this theory of addition is that you can then use the system to prove any statement of addition. "Prove" in this context has a very precise meaning. It means that you can take a statement written in symbols, like 79+3=82, and generate that sequence of symbols using nothing but the explicitly defined rules of the theory. How does one generate this sequence of symbols?
The simplest method is to start with the numbers before the equals sign (79 and 3) and add them using our rules. That produces 82, and we know we can write the resulting sum as 79+3=82. That proves (using the rules of our theory) that the statement is correct. The important thing to note is that we are doing all of this merely by manipulation of symbols. We don't have to think about, understand, or even know that there exist semantic meanings for these symbols. We don't need to know that "79" can mean a number of objects. In other words, we could teach this sort of addition to someone without explaining to them that the symbol "3" means "three". We can instruct a computer to do addition like this without the computer having any understanding of what it means to have three objects. All the computer is doing is manipulating a bunch of arbitrary symbols based upon a set of well-defined rules.
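The flavour of this symbol-pushing can be sketched in Python. This is my own illustration, not part of the comic; the names add_symbols and proves are invented, and the memorised digit table stands in for the formal rules. The only arithmetic "knowledge" in it is that table; everything else is shuffling symbols:

```python
DIGITS = "0123456789"

# The memorised addition table: maps a pair of digit symbols to a
# (carry symbol, result digit symbol) pair, e.g. ('9', '3') -> ('1', '2').
# Ordinary integers are used only here, to write the table down.
ADD_TABLE = {}
for i, a in enumerate(DIGITS):
    for j, b in enumerate(DIGITS):
        ADD_TABLE[(a, b)] = (DIGITS[(i + j) // 10], DIGITS[(i + j) % 10])

def add_symbols(x, y):
    """Add two numerals digit by digit, right to left, with carrying."""
    width = max(len(x), len(y))
    x, y = x.rjust(width, "0"), y.rjust(width, "0")
    carry, out = "0", []
    for a, b in zip(reversed(x), reversed(y)):
        c1, d = ADD_TABLE[(a, b)]       # add the column's digits
        c2, d = ADD_TABLE[(d, carry)]   # then add the incoming carry
        carry = "1" if "1" in (c1, c2) else "0"  # at most one carry results
        out.append(d)
    if carry != "0":
        out.append(carry)
    return "".join(reversed(out))

def proves(statement):
    """'Prove' a statement like '79+3=82' by generating it from the rules."""
    lhs, rhs = statement.split("=")
    a, b = lhs.split("+")
    return add_symbols(a, b) == rhs

print(proves("79+3=82"))  # True
print(proves("2+2=5"))    # False
```

At no point does the code "know" that "3" means three of anything. It only matches and rewrites symbols, which is exactly the sense in which a formal proof is mechanical.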
If you set up a formal theory of mathematics correctly, you should be able to prove any true statement, and be unable to prove any false statement. This is true of our simple theory of addition. In fact, we can prove every true statement about the addition of two nonnegative integers with nothing more than what I have described so far.
To do this, perform the following steps:
Addition of nonnegative integers all by itself is a pretty limited domain, though. In fact, this theory of addition is in some sense trivial. It doesn't allow us to make any particularly interesting statements about the properties of numbers.
What you really want to make this theory interesting, in a nontrivial sense, is a bunch of new symbols. Symbols that allow you to make much grander statements about the theory of numbers. Logical symbols. Here are a few: ∀, meaning "for all", and ∃, meaning "there exists". With them we can write statements like:
∀a ∃b : a + b = a
What this says is that for every possible number a, there exists another number b (maybe different, maybe the same, it doesn't matter), so that the statement a+b=a is true. In other words: for every number, there is another number which when added to the first number gives the first number as the answer. You can probably work out that the number b we are talking about is zero. For every number, if you add zero, you get your original number as the answer. This statement does not say that "if you add zero to a number a you get the same number a"; what it is saying is that "for every number a, there is some number b that when added to the first number gives you the same number a".
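Over any finite set of numbers we can check such a statement by brute force, which makes the quantifiers' meaning concrete. A sketch in Python (my own illustration; the function name is invented):

```python
def forall_exists_identity(domain):
    # ∀a ∃b : a + b = a -- for every a, some b in the domain
    # leaves a unchanged when added.  Over the naturals, b = 0.
    return all(any(a + b == a for b in domain) for a in domain)

print(forall_exists_identity(range(100)))  # True: b = 0 works for every a
print(forall_exists_identity([1, 2, 3]))   # False: this domain has no zero
```

Notice how the nesting mirrors the quantifiers: the outer all(...) is the ∀, the inner any(...) is the ∃.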
This is not a statement about the answer to an addition of two numbers. It is a statement about the existence of a certain number with a specific property (that is defined in terms of addition).
These few symbols are extremely powerful enhancements of our theory. But to go along with them, we also need to add some new rules, governing how to manipulate sequences of symbols which contain them. Otherwise we'd have no way to actually prove (within our formal theory) that such a number as zero exists. At this point, let it suffice to say that such rules do exist. They are too complex for me to go into detail here in what is already in danger of becoming the most deeply theoretical of any of my scientific annotations. But rest assured they exist, and they allow the proof (within the formal system) of statements like this.
Another very powerful enhancement to our theory of numbers would be this:
∀a : a × 0 = 0 (any number multiplied by 0 equals 0)
∀a : a × 1 = a (any number multiplied by 1 equals itself)
∀a ∀b ∀c : a × (b + c) = a×b + a×c (the distributive law of multiplication over addition)
And the cool thing is that within the expanded formal theory (which I haven't described fully) you can prove these statements by doing nothing more than manipulating the symbols according to the rules of the system.
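These universally quantified statements can likewise be checked exhaustively over a finite domain. Here's a Python sketch of my own; note this is verification over a finite set, not a formal proof within the theory:

```python
from itertools import product

def laws_hold(domain):
    # Brute-force check of the three multiplication statements.
    return (all(a * 0 == 0 for a in domain)          # ∀a : a × 0 = 0
            and all(a * 1 == a for a in domain)      # ∀a : a × 1 = a
            and all(a * (b + c) == a * b + a * c     # distributivity
                    for a, b, c in product(domain, repeat=3)))

print(laws_hold(range(20)))  # True
```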
We can also make statements like this:
∀a : a + a = 0 (any number added to itself equals 0)
∀a ∃b : a + b = 0 (for every number, there is another number that when added to it gives 0)
The first statement is simply false. If we add 1+1 we get 2, not 0. And the nice thing is that you cannot prove this statement within the formal theory. It's a false statement about numbers, and you can't prove it to be true within the theory. Yeah, nice.
The second statement is more interesting. So far we've only been talking about nonnegative integers. I haven't introduced the concept of subtraction, or even the minus sign symbol, at all yet. Within our theory as it stands, this statement is false. Given a nonnegative integer a, it's possible (in fact highly likely) that there is no such nonnegative integer b so that a+b=0.
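The same kind of brute-force check (again a sketch of my own, over a small finite domain) shows the statement failing for the nonnegative integers:

```python
domain = range(10)  # nonnegative integers only

# ∀a ∃b : a + b = 0 -- false here: already a = 1 has no partner.
claim = all(any(a + b == 0 for b in domain) for a in domain)
print(claim)  # False

# The only a with an additive inverse in this domain is 0 itself.
witnesses = [a for a in domain if any(a + b == 0 for b in domain)]
print(witnesses)  # [0]
```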
ASIDE: This is slightly off the main track of this annotation, but it's so amazingly stunning that I feel compelled to divert briefly into a further exploration of this last statement. No doubt many of you are thinking that there are such numbers so that given any number a you can find a number b so that a+b=0. We call them negative numbers. If a is 3, then b is −3. It's just that (for some reason) I'm not considering them at the moment. You are of course correct. The statement points out a deficiency in the sorts of numbers we are considering. It points us to the existence of more numbers, with different properties to the sort of numbers we were originally talking about. When negative numbers were first conceived of by mathematicians, they were controversial and regarded by many with scepticism. After all, you can see 3 apples. What does −3 apples look like? It makes no sense! The very concept of −3 is absurd!
To people with a simplistic understanding of numbers and what they can be used for, the concept of negative numbers is indeed strange. We all go through this process at school, when we graduate from counting apples to doing more abstract mathematics involving subtractions. We are often introduced to negative numbers via the concept of the number line. This makes it seem a bit more natural that there are "more numbers" over there, on the left side of the normal counting numbers. And we discover that negative numbers are useful for things like accounting, and keeping track of debts, and so on.
By the time most of us leave school, we are familiar and comfortable with the concept of negative numbers.
This is not the only example of finding "more numbers" that we weren't previously aware of. Another thing we learn early in our school lives is fractions. When we divide three apples between two people, how many apples does each person get? The answer cannot be found in the realm of the counting numbers. We have to extend our knowledge of numbers to include rational numbers: all the possible results of dividing one integer by another. And there are natural places on the good old number line for these too. One and a half (the answer to how many apples each person gets) sits halfway between 1 and 2 on the line. So we've filled in more spaces on our number line.
But wait, there's more. During high school algebra we learn about square roots. And if we have a decent mathematics teacher, we manage to prove and understand that there are some numbers for which the square root is not a rational number. The square root of 2 is the classic example. We can even construct a line interval, geometrically, that has a length equal to the square root of 2 (compared to a reference interval of length 1). So this number must exist somewhere on our number line, but not at a point that can be described as the ratio of two integers. In other words, there are still gaps on our number line, in between all those rational numbers.
So we fill in the gaps with these extra numbers. And then we get on to logarithms and trigonometry, and we discover there are still gaps in the number line, between these immensely densely packed numbers we already know about. We learn about transcendental numbers, the best known examples of which are π and e. More numbers.
But wait... you guessed it. There are still gaps in the numbers. We can construct very simple statements like ∃a:a×a+1=0 (there is a number a which has the property that when multiplied by itself, the answer can be added to 1 to give zero; i.e. a multiplied by itself is −1) that still look false in the numbers we know so far. This is where things get tricky, and for one simple reason. We can easily conceive of some more numbers, unlike any of the numbers we know, that have different properties to them, and which satisfy statements like this. After all, we've done it before, many times over. We discovered negative numbers, and how useful they were, and we became familiar with them so they no longer seem strange. We found rational numbers and recognised their utility, so they are like old friends now. If we did well at high school mathematics, we learnt of more missing numbers, the irrationals and transcendentals, and we realised how useful they were, and accepted them into our ever-growing collection of things we regard as numbers.
The problem is that this latest enhancement to the realm of numbers was discovered in the 16th century, at a time when mathematical theory had become partially formalised. Again, these new numbers were controversial, just as negative numbers, rational numbers, and irrational numbers were before them when they were discovered. They were so controversial, in fact, that many prominent mathematicians refused to believe that such numbers could exist. Thus these new numbers were given the rather unfortunate name of imaginary numbers.
And this is where the minds of many people learning mathematics today rebel. "Well," we reason, "if they're imaginary, then they mustn't really exist. It's some sort of trickery that doesn't really mean anything, really."
But this impression is wrong. After all, does a negative number "really exist"? Show me −3 apples. Yet we use negative numbers all the time, with nary a second thought. Why should numbers which, when multiplied by themselves, produce negative numbers be any less real or useful?
In fact, the imaginary numbers are incredibly useful. But where are they on the number line? Actually, they're at right angles to it. Seriously. If the normal number line runs from left to right, the imaginary numbers run in a line running up and down from the zero point. How cool is that?
And we're still not done. If we can enhance our number line with another line running up and down, what about all the other space above and below? Sure, there are numbers there too. They're called complex numbers.
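Python has complex numbers built in, which makes this picture easy to play with (the example values here are my own choices):

```python
import cmath

i = 1j                  # Python's notation for the imaginary unit
print(i * i)            # (-1+0j): i multiplied by itself gives -1
print(cmath.sqrt(-1))   # 1j

# A complex number is a point in the plane: its real part lies on the
# ordinary number line, its imaginary part at right angles to it.
z = 3 + 4j
print(z.real, z.imag)   # 3.0 4.0
print(abs(z))           # 5.0: the distance from zero
```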
But are these new numbers really useful? Sure, I can't show you the square root of minus one apples, but then neither can I show you −3 apples. The number −3 is incredibly useful for accounting. The square root of minus one is incredibly useful for work in fields as diverse as electronics, audio and video signal processing, power engineering, quantum mechanics, relativity, geology, seismology, materials stress analysis, fluid dynamics, system control theory, and so on. Some of these names are a little abstract, but they encompass the theories of operation of things including computers, aeroplanes, space probes, earthquake analysis and prediction, submarines, and even industrial sewing machines. (The physical properties of fabric and how it deforms under the stress of being sewn are most effectively characterised by using complex number theory.)
The imaginary and complex numbers are not some bizarre thing that a bored mathematician "made up" some day, that are only of interest to abstruse theoretical mathematicians. They are respectable, important, and, for want of a better word, real numbers with real world applications. Most people will never need to use them, but most people never need to use irrational numbers either. But they exist, and it's a good thing they do.
Back to Gödel. One final enhancement to our developing number theory is the "not" symbol, ~, which negates whatever statement follows it. With it we can write pairs of statements like these:
∀a ∃b : a + b = 0 (for every number, there is another number that when added to it gives 0)
~∀a ∃b : a + b = 0 (it is not the case that for every number, there is another number that when added to it gives 0)
The second statement is simply the reverse of the first. If we consider only nonnegative integers, the first statement is false, which means necessarily that the second is true. If we do allow negative integers, the first statement is true, and the second necessarily false.
Now we're in a position where we can express the negation of a false statement, which will then be a true statement. So if we can't prove something like ∀a:a+a=0 (because it's actually false), we should be able to prove that it's false by proving that ~∀a:a+a=0 is true. So we now have a mechanism for proving that some statement is false: we add the "not" symbol in front and then prove that the resulting statement is true. This step brings us firmly out of the realm of trivial theories of numbers and into the realm where things are decidedly nontrivial.
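The negation mechanism is easy to see over a finite domain (again my own sketch, not the formal theory itself):

```python
domain = range(10)

g = all(a + a == 0 for a in domain)  # ∀a : a + a = 0
print(g)       # False: we could never hope to prove this statement
print(not g)   # True:  but its negation ~∀a : a + a = 0 is true,
               #        and so is a candidate for proof
```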
This pretty much completes our formal theory of numbers. I have glossed past the actual symbol manipulation rules that we use to generate our proofs, but I assure you they exist, and the resulting theory is indeed very powerful, in that it can generate a proof for many nontrivial statements about numbers, such as:
∃a ∃b ∃c : a×a + b×b = c×c (there are integers that satisfy the Pythagorean theorem)
The other important thing about this theory is that it does not ever generate a "proof" of a false statement about numbers. This is important. If it could generate proofs of false statements, then it wouldn't be much use. This property of the formal theory is called consistency.
Right. Now we know what a "consistent, nontrivial, formal, computably enumerable theory that proves basic arithmetical truths" is. That's the type of thing that Gödel's theorem talks about. What it says about it is: "an arithmetical statement that is true, but not provable in the theory, can be constructed."
Whoa. What does that mean? It means that there is some true statement about numbers that the formal theory cannot produce a proof for. Let's call that statement G as shorthand. Gödel's theorem says that G is true, but cannot be proven. And since we know that the theory can never produce a proof of a false statement, we can't prove ~G either. In fact, within the formal system, we can never decide if G is true or not.
What's more, Gödel's theorem doesn't just assert that G exists. It also says we can construct the statement G. In fact, the proof of Gödel's theorem explicitly constructs the statement G(*). This is a little mind-bending. We can construct a statement that we know is true, but we cannot prove it is true? The catch lies in the fact that we cannot prove it is true within the formal theory. To construct the statement G we have to use knowledge and rules about mathematics that lie outside the formal theory. We're allowed to do this.
[* No, this annotation will not provide an actual proof of Gödel's theorem. That would require an entire book, not just this annotation. If you're interested in learning how Gödel's theorem is actually proved, I highly recommend the book Gödel, Escher, Bach, by Douglas Hofstadter.]
But what it implies is that the formal system cannot prove everything about numbers that we can prove by thinking outside the formal system.
The kick in the pants is that this applies to all possible formulations of mathematics. Because no matter how you extend or enhance your formal theory to try to account for the stuff you couldn't prove before, Gödel's theorem applies all over again to the new system, and shows that there is another mathematical truth that you can't prove. And you can step this out as many levels as you like: you will never, ever, reach a theory of mathematics that is capable of proving everything that is actually true.
In other words, there are, incontrovertibly, some statements about the properties of numbers, that are true, but that we cannot ever prove to be true. That's quite a stunning thing to think about.
But Gödel's result doesn't only apply to this rather abstract sort of mathematics. Computers are formal systems. Think about it. Computers manipulate symbols. That's all they do. They have no understanding of what they're doing. And they manipulate symbols according to strict rules laid down by their builders and programmers. The rules can be very complex, but they are strict, formal rules. Gödel's theorem means that there are some things that are true, but that computers can never work out for us. (This is intimately related to the famous halting problem, a specific well-known example of a problem that we know no computer can ever solve.)
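A toy sketch makes the flavour of the halting problem concrete (my own illustration; the names halts_within, count_down, and loop_forever are invented). Running a program can confirm that it halts, but no fixed step budget can ever confirm that it runs forever:

```python
def halts_within(program, arg, max_steps):
    """Run a generator-style 'program' for at most max_steps steps.

    True means it halted in time.  False is inconclusive: the program
    might halt at step max_steps + 1, and no finite budget removes that
    doubt -- which is the heart of the halting problem.
    """
    execution = program(arg)
    for _ in range(max_steps):
        try:
            next(execution)
        except StopIteration:
            return True
    return False

def count_down(n):
    # A "program" that halts after n steps.
    while n > 0:
        n -= 1
        yield  # one step of computation

def loop_forever(_):
    # A "program" that never halts.
    while True:
        yield

print(halts_within(count_down, 5, 100))       # True: halts after 5 steps
print(halts_within(loop_forever, None, 100))  # False: but from the outside
                                              # we can't tell "not yet"
                                              # apart from "never"
```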
Some philosophers have gone further, and applied Gödel's theorem to the human mind. The mind, they argue, at its most fundamental level merely manipulates symbols, according to some strict rules programmed by biochemistry. We can no more break these rules than we can effectively decide to block certain chemicals reaching certain neurons in our brain. If this is the case, then Gödel's theorem implies that there are truths about the universe that we, human beings, can never prove. Other philosophers have dismissed this argument as a load of bollocks, pointing out that Gödel's theorem applies only to consistent formal systems, and the human mind is anything but consistent: we can readily convince ourselves of the truth of something that is in fact false.
Phew. I hope those of you who have persisted this far got something out of all that. I'll leave you with a small reward:
Gödel's incompleteness theorem would make a snazzy T-shirt design. "Formulations of number theory: Complete, Consistent, Nontrivial. Choose two."
LEGO® is a registered trademark of the LEGO® Group of companies,
which does not sponsor, authorise, or endorse this site. This material is presented in accordance with the LEGO® Fair Play Guidelines.