- Numbers can be understood as abstract entities, as symbols created by us, or as logical objects whose existence is supported by axioms and set theory.
- The formal construction of the natural numbers using the empty set, Peano's axioms, and the Recurrence Theorem allows for a rigorous definition of sum, product, and powers.
- Integers, rationals, irrationals, and reals are obtained by expanding ℕ step by step, using equivalence classes and Dedekind cuts to capture phenomena such as the continuum and irrationality.
- The history of number systems and Gödel's incompleteness theorems show that numbers are powerful cultural tools but also structures with unavoidable logical limits.

When we use numbers to tell time, pay at the supermarket, or check our bank balance, we take them for granted, as if they were as real as our house keys. But if we think about it carefully, things get more complicated: in what sense do numbers really "exist"? Are they something we discover, like the planets, or something we invent, like the characters in a novel?
This debate blends philosophy, history, and mathematics in a rather fascinating way. Throughout the centuries, various answers have been proposed: from those who believe that numbers are part of a kind of "abstract world" independent of us, to those who maintain that they are nothing more than symbolic tools that we have created for counting, measuring, and reasoning. Along the way, ideas such as Peano's axioms, set theory, the formal construction of natural, integer, rational, irrational, and real numbers, and even the famous limitations discovered by Gödel, appear.
What does it mean for a number to "exist"?
Before we delve into formulas and axioms, it's worth clarifying what on earth we mean by "existence." The existence of a table is not the same as the existence of Sherlock Holmes or the existence of... a number like 24. The table is a physical object; Holmes is a fictional but well-defined character; the number 24, on the other hand, takes up no space, weighs nothing, and cannot be stored in a drawer.
One way of approaching the issue, which comes from Plato, maintains that numbers are abstract entities that live in a non-physical domain. They are not made of matter, but they are as "real" as justice or beauty in Platonic philosophy. From this perspective, mathematicians do not invent numbers, but rather discover them: the number 24 was "there" even before anyone had thought of it.
Other philosophers and mathematicians argue something different: numbers are rather symbols and conceptual constructions that we develop to model the world. They would not exist outside of our theories and conventions, although once those rules are established, the mathematical results are as rigid as anyone could wish. In this approach, 24 is the result of a system of symbols and operations that we have agreed upon, not a piece of an independent mathematical universe.
There are also interesting intermediate proposals: some authors argue that a number is a kind of abstract object with the peculiar property that "if it could exist, it would exist". In other words, a concept only needs to be possible and well-defined to have a certain kind of logical or mathematical existence. This way of speaking allows us to include not only numbers, but also sets, areas, functions, geometric figures, and many other entities that we use daily in mathematics.
From any of these points of view, the underlying problem is similar: how does the existence of a number differ from the existence of a fictional character? Everyone knows what the number 5 is and everyone knows who Sherlock Holmes is, but we don't attribute the same kind of reality to them. The discussion, far from being settled, usually raises more questions than it answers.
Numbers, symbols and meaning: what is a "2" really?
If we strip away what we take for granted and look at numbers objectively, the first thing we see is written symbols, or sounds when they are pronounced. The "2" that we write on paper, the "two" that we say out loud, or the Roman "II" are not the number itself, but representations.
A symbol, by itself, is a simple stroke or noise without content. What gives it meaning is collective agreement: we decided that this stroke represents a quantity, an order, a measure. Just like the letters of the alphabet, which mean nothing on their own but, combined, form words that we do associate with ideas, things, or actions.
This symbolic perspective reveals something important: there is nothing "magical" about the concrete form of numbers. We could use entirely different symbols, and as long as we agreed on the same rules and meanings, mathematics would still work. In fact, throughout history there have been many number systems, with completely different symbols and rules, and yet they all served to count, measure, and calculate.
However, the everyday use of numbers goes far beyond simply writing them down: the power of numbers becomes apparent when we work with them. Adding, subtracting, multiplying, dividing, raising to powers… All these operations allow us to model real phenomena: from dividing a cake to designing a GPS navigation system or calculating the dose of a vaccine.
Precisely because mathematics underpins almost all modern technology, mathematicians were forced, especially from the 19th century onwards, to define with maximum precision what they understood by "number". It wasn't enough to simply say "it's what we use to count"; a formal definition was needed to avoid contradictions and allow the entire theory to be built with certainty.
Are there infinite numbers, or is that not so clear either?
One of the most perplexing issues when discussing the existence of numbers is the theme of infinity. We are used to saying that there are infinitely many natural numbers: 0, 1, 2, 3… and so on. But if we accept this, some curious questions arise.
For example: if we think about the "set of all numbers" and want to choose one "at random," what is the probability of getting a 5? Intuitively, we might say something like 1 divided by infinity, which would seem like zero. And if the probability is zero, one might be tempted to say that 5 "does not appear" in that set, which sounds absurd because 5 is clearly there.
This type of reasoning illustrates the clash between everyday intuitions about infinity and the rigorous way in which probability and infinite sets are treated in mathematics. In measure theory and probability, something having zero probability does not mean it is impossible; it simply indicates that, within an infinite continuum, its "weight" is negligible. In other words, the idea that "zero probability = does not exist" is not correct in mathematics.
From this arises another, more philosophical proposal: perhaps numbers are not "given" as a completed infinity, but rather generated step by step, advancing without limit but without ever reaching a finished infinity. In other words, the numbers would be potentially infinite (we can always keep adding 1), but there would not be a "total" of all of them as something closed.
This position connects with the notion of natural numbers as objects that are constructed by succession (0, then its successor, then the successor of the successor, and so on), which leads us to the famous Peano axioms and to set theory as the formal basis of modern mathematics.
From nothing to zero: sets, empty space, and natural numbers
To rigorously construct the natural numbers, many 19th-century mathematicians relied on a common language: set theory. The idea is simple in appearance: we work with "sets" (collections) and "elements" (what belongs to those collections) and give a few basic axioms about how they behave.
One of the fundamental axioms is that of extension: two sets are equal if they have exactly the same elements. Another, the axiom of specification, allows us to form subsets from a condition: given a set A and a property T, there exists the set of all elements of A that satisfy T.
With these tools we can define something key: the empty set, which is the set that has no elements. It can be presented as the set of all x in A such that x ≠ x (an impossible condition), so nobody enters that club. This set is usually called 0 and becomes the cornerstone of the formal construction of the natural numbers.
From there, we can "name" the first numbers as certain sets: we call the empty set 0, the set containing only 0 we call 1, the set containing both 0 and 1 we call 2, and so on. Each number is constructed as a set that collects all the previous numbers. This way of encoding natural numbers (similar to Frege's proposal and later to von Neumann's) makes it possible to relate the "less than" order to the inclusion of sets.
To move forward, we need the union axiom: given a collection of sets, there exists a set containing all elements that belong to at least one of them. And we also define the successor of a set A as A+ = A ∪ {A}. That is, we add the set itself as a new element, which allows us to go "up" number by number.
This introduces the concept of a successor set. A set S is a successor set if it contains 0 and, whenever it contains an element A, it also contains its successor A+. A key axiom states that there exists at least one successor set. If we take the intersection of all possible successor sets, we obtain the smallest successor set of all: this is precisely the set of natural numbers, ℕ.
Peano's axioms: ensuring that 1 + 1 = 2 is not so trivial
Once we identify ℕ as that minimal set containing 0 and stable by succession, we can study its properties. Giuseppe Peano formulated a very compact list of axioms at the end of the 19th century that captures the essence of the behavior of natural numbers.
In a typical version, starting from 1 instead of 0, Peano's axioms state, in broad terms, the following. First, 1 is a natural number. Second, every natural number has a successor, which is also a natural number. Third, no natural number has 1 as its successor (or, in another formulation, 0 is not the successor of any natural number). Fourth, if a set of natural numbers contains 1 and is closed under succession, then it contains all natural numbers: this is the principle of induction. Fifth, if two numbers have the same successor, then the two numbers are equal.
These axioms, although they seem formal and somewhat dry, encompass ideas we've used unconsciously since childhood. For example, induction allows us to prove properties of the type "all natural numbers satisfy X" by proving that X holds for the first one and that, if it holds for a number, then it holds for its successor. It's a kind of logical domino effect.
From these axioms, basic properties of natural numbers are deduced, such as that there is no number whose successor is 0, or that the "successor" operation is injective (if two numbers have the same successor, they are the same number). They also allow us to characterize ℕ as the only set that satisfies certain combined conditions of succession and induction.
The most interesting thing is that, starting from this logical framework and the notion of successor, one can rigorously construct the usual arithmetic operations: addition, multiplication and powers, and demonstrate their classical properties (commutativity, associativity, existence of neutral elements, etc.) without appealing to "intuitively it is so".
How to construct sum, product, and powers over ℕ
Once we accept Peano's axioms and have the set ℕ well-defined, we can ask ourselves: how exactly do we define operations like addition, without taking them for granted? For this, we use a very powerful tool: the Recurrence Theorem, which guarantees the existence and uniqueness of certain functions defined step by step on the natural numbers.
The idea is as follows: if we have a set X, an initial element a in X and a function f: X → X, the theorem ensures that there exists a unique function u: ℕ → X such that u(0) = a and u(n+) = f(u(n)) for all natural numbers n. That is, we can construct u by applying f over and over again starting from a, and there will not be two distinct ways of doing so that respect that definition.
Applying this idea to the natural numbers, we can define the sum of a fixed number m with any n. We take X = ℕ, a = m, and the function s: ℕ → ℕ that maps each n to its successor n+. Then, the Recurrence Theorem gives us a function S_m: ℕ → ℕ with S_m(0) = m and S_m(n+) = s(S_m(n)). We interpret this function as the sum m + n; that is, we define S_m(n) = m + n.
With this formal definition, something as common as 1 + 1 becomes a small chain of applications: 1 + 1 = S_1(1) = S_1(0+) = s(S_1(0)) = s(1) = 2. It's not that mathematicians don't know that 1 + 1 equals 2; it's that they want to justify why, within the axiomatic system, that equality is inevitable.
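The recursive definition of addition can be sketched in a few lines. In this hedged illustration (my own, not the article's), ordinary Python ints stand in for the formal naturals and `s(n) = n + 1` plays the role of the successor; only `add` itself uses the recursion S_m(0) = m, S_m(n+) = s(S_m(n)):

```python
def s(n):
    """Successor function: n ↦ n+."""
    return n + 1

def add(m, n):
    """m + n defined by recursion on n, as the Recurrence Theorem prescribes."""
    if n == 0:
        return m                 # S_m(0) = m
    return s(add(m, n - 1))      # S_m(n+) = s(S_m(n))

assert add(1, 1) == 2            # the chain 1 + 1 = s(S_1(0)) = s(1) = 2
assert add(0, 7) == 7 and add(7, 0) == 7   # 0 acts as the identity element
```

Note that `+` never appears between the two arguments: the sum is rebuilt purely from "take the successor, n times".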
From this definition, one can prove properties such as that 0 acts as an identity element for addition (m + 0 = m and 0 + m = m for all m), that addition is commutative (a + b = b + a), and that it is also associative ((a + b) + c = a + (b + c)). All these proofs rely on the principle of induction and the behavior of the successor.
The product is defined similarly. We fix a number m and take a function P_m: ℕ → ℕ such that P_m(0) = 0 and P_m(n+) = S_m(P_m(n)). We interpret P_m(n) as m × n. Thus, for example, 1 × 2 unfolds as P_1(2) = P_1(1+) = S_1(P_1(1)) = S_1(1) = 2. Then, using induction again, its properties are demonstrated: commutativity, associativity, and that 1 is the identity element of the product.
Powers are constructed by taking a further step: we define E_m: ℕ → ℕ with E_m(0) = 1 and E_m(n+) = P_m(E_m(n)), and we write E_m(n) = m^n. From this definition follow identities such as m^(n + k) = m^n × m^k, again with the help of the principle of induction and the already demonstrated properties of the product.
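The same recursion tower can be continued in code: multiplication as repeated addition (P_m(0) = 0, P_m(n+) = S_m(P_m(n))) and exponentiation as repeated multiplication (E_m(0) = 1, E_m(n+) = P_m(E_m(n))). As before, this is an illustrative sketch with plain ints standing in for the formal naturals:

```python
def add(m, n):
    """S_m: m + n by repeated succession."""
    return m if n == 0 else add(m, n - 1) + 1

def mul(m, n):
    """P_m: m × n by recursion — P_m(0) = 0, P_m(n+) = m + P_m(n)."""
    return 0 if n == 0 else add(m, mul(m, n - 1))

def power(m, n):
    """E_m: m^n by recursion — E_m(0) = 1, E_m(n+) = m × E_m(n)."""
    return 1 if n == 0 else mul(m, power(m, n - 1))

assert mul(1, 2) == 2                 # P_1(2) = S_1(P_1(1)) = 2, as in the text
assert power(2, 5) == 32              # 2^5 via repeated products
assert power(2, 3 + 4) == mul(power(2, 3), power(2, 4))   # m^(n+k) = m^n × m^k
```

Each level is defined only in terms of the level below it, mirroring how the formal construction never assumes an operation it has not yet built.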
This whole process, although formal and somewhat technical, illustrates that the edifice of elementary arithmetic is not "in the air," but supported by a few very clear axioms and a handful of logical arguments. From this perspective, the "existence" of natural numbers means that there is a model (for example, sets constructed from the empty set) that satisfies those axioms.
From natural numbers to integers, rational and irrational numbers
Once natural numbers are firmly established, the story doesn't stop there. Everyday and scientific problems compel us to expand this numerical universe. For example, with natural numbers we only know how to count and add, but not how to subtract or divide in general.
The next step is usually to introduce the integer numbers, which include the natural numbers and their negative versions: …, -2, -1, 0, 1, 2, … Historically, fractions came before negative numbers, but from a formal point of view, it is convenient to start with the integers. An integer can be defined as an equivalence class of pairs of natural numbers (a, b), where we consider two pairs (a, b) and (c, d) equivalent if a + d = b + c. Intuitively, this corresponds to thinking of the "subtraction" a − b, although formally that subtraction does not yet exist within ℕ.
Then come the rational numbers, which correspond to the fractions we've always known. They are used to measure quantities that are not a whole number of units, such as half a cake, a third of a liter, or three-quarters of an hour. A rational number is usually represented as a/b, where a and b are integers and b ≠ 0. Formally, each rational number is defined as an equivalence class of pairs (a, b), with b not equal to zero, where two pairs (a, b) and (c, d) are equivalent if a·d = b·c; that is, if they represent the same proportion.
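The cross-product test a·d = b·c avoids division entirely, which matters because division is precisely what is being defined. A minimal sketch of that test:

```python
# Rationals as equivalence classes of pairs (a, b) with b ≠ 0:
# (a, b) ~ (c, d) iff a·d = b·c, i.e. the same proportion.

def equivalent(p, q):
    """Check (a, b) ~ (c, d) via cross-products, with no division."""
    a, b = p
    c, d = q
    if b == 0 or d == 0:
        raise ValueError("denominator must be nonzero")
    return a * d == b * c

assert equivalent((1, 2), (3, 6))        # 1/2 and 3/6 are the same rational
assert equivalent((-2, 4), (1, -2))      # the sign may sit in either component
assert not equivalent((1, 3), (2, 5))    # 1/3 ≠ 2/5
```

Python's own `fractions.Fraction` implements exactly this idea, storing each class by its reduced representative.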
The Pythagoreans believed that "everything is number" in the sense of "everything is rational," but this view was shattered when it was discovered that the diagonal of a square with side length 1 (the square root of 2) cannot be written as a fraction of integers. It was later also shown that π and e are irrational numbers; that is, they cannot be expressed as a/b with integers a and b.
Constructing the irrational numbers rigorously is a bit more delicate. An elegant way to do it is through the so-called Dedekind cuts. The idea is to consider certain subsets of rational numbers that are bounded above but have no largest element. For example, we can take the set of all rational numbers that are negative or whose square is less than 2; its natural "edge" is √2, which is not rational. In this way, each suitable cut can be viewed as a real number, and some of these cuts do not correspond to any rational number.
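A cut is an infinite set, so in code we can only represent it by its membership test; still, that is enough to squeeze √2 between rationals as tightly as we like. A hedged sketch using exact fractions and plain bisection:

```python
from fractions import Fraction

def in_cut(q):
    """True iff the rational q belongs to the cut defining √2:
    all rationals that are negative or whose square is less than 2."""
    return q < 0 or q * q < 2

# Bisect between a rational inside the cut and one outside it:
lo, hi = Fraction(1), Fraction(2)        # in_cut(1) is True, in_cut(2) is False
for _ in range(30):
    mid = (lo + hi) / 2
    if in_cut(mid):
        lo = mid
    else:
        hi = mid

# lo and hi squeeze √2 without ever leaving the rationals:
assert lo * lo < 2 < hi * hi
assert hi - lo == Fraction(1, 2 ** 30)   # the gap halves at every step
assert abs(float(lo) - 2 ** 0.5) < 1e-8
```

No rational ever equals the edge (lo² < 2 < hi² always holds), which is exactly why the cut itself, not any single rational, must play the role of √2.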
By combining all the rational numbers and all these cuts that give rise to irrational numbers, we construct the set of the real numbers, ℝ. In ℝ live all the numbers we use to measure continuous magnitudes: lengths, areas, times, speeds, etc. Within the real numbers are still "embedded" the natural, integer and rational numbers, each with its specific interpretation.
A quick tour through the history of number systems
The question of the existence of numbers is not only abstract; it is also reflected in the history of how different cultures have learned to count and write quantities. The earliest evidence of numbering dates back to around 7000 BC, with tally marks on bones used to keep simple counts.
In ancient Egypt, during the First Dynasty, a hieroglyphic decimal numbering system was developed. Each power of ten had its own symbol, and quantities were grouped in tens. It was used for practical tasks such as calculating taxes, measuring agricultural fields, or building temples.
In Mesopotamia, the Sumerians and later the Babylonians used a sexagesimal numbering system, that is, base 60. Its complexity lay in the large number of symbols and possible combinations, but it proved extremely effective for astronomy and timekeeping. In fact, we still use that legacy today in hours, minutes, and seconds.
The Greeks took the Egyptian base ten as a reference and developed a system in which they used letters of their alphabet to represent numbers. The Attic system, however, proved quite rigid and somewhat limited the development of advanced arithmetic, although the Greeks shone spectacularly in geometry and logical proofs.
The Roman system, more familiar to us, assigned numerical values to certain letters (I, V, X, L, C, D, M). Although simpler than others in appearance, it was not positional, which made performing complicated calculations very cumbersome. It's fine for a couple of dates on a building facade; not so much for algebra.
In parallel, a decimal and positional system emerged in India around the 5th century AD. In this system, the value of each digit depends on its position, and ten units of one order are equivalent to one unit of the next higher order. This system, which explicitly incorporated zero as a number, proved to be incredibly powerful and practical.
The Arabs, in contact with cultures such as the Hindu, Greek, and Egyptian, adopted and spread this decimal positional system. Although we speak of "Arabic numerals," their origin is in fact in India; it was the Islamic peoples who transmitted them to Europe through, among other places, Al-Andalus. Over time, this system displaced Roman numerals and became the world standard.
In pre-Columbian America, the Mayan civilization developed an extraordinarily advanced numerical system, based on 20 and also positional. Furthermore, they explicitly recognized zero. They represented numbers by combining dots and bars: dots for units and bars to group by fives. Their handling of the calendar and astronomy was astonishingly accurate.
This entire historical overview reinforces the idea that, although forms and rules change, the need to count, measure, and order the world is universal. Numbers, in their various incarnations, seem to emerge time and again wherever there is a civilization that wants to organize its experience of the environment.
The limits of the system: Gödel and faith in mathematics
At the end of the 19th and beginning of the 20th centuries, many mathematicians sought to turn mathematics into a completely solid building, free of contradictions. The idea was to find a finite set of basic axioms from which all other mathematical results could be deduced using pure logic.
Figures like Henri Poincaré were skeptical and saw this ambition as unattainable, while others, led by David Hilbert, were confident that a perfect axiomatic system could be achieved for arithmetic and, by extension, for the rest of the branches of mathematics.
Then Kurt Gödel appeared and proved two theorems that changed the landscape forever. The first states, greatly simplifying, that in any system powerful enough to include basic arithmetic (for example, the Peano axioms), there will always be true propositions that cannot be proven within the system itself. In other words: arithmetic cannot be both complete and consistent.
Gödel's second theorem is even more unsettling: it shows that if an axiomatic system like that of arithmetic is consistent (has no contradictions), then that consistency cannot be demonstrated from within the system itself. If someone were to prove that there are no contradictions in arithmetic using only its axioms and rules, that would mean, paradoxically, that the system was not consistent after all.
These conclusions have sometimes been interpreted as a kind of "cosmic joke": if we rely so heavily on mathematics as the ultimate tool for knowledge, we have to accept that, in a certain sense, we must also believe in something that we cannot prove from within the mathematical framework itself. The "existence" of a reasonable, contradiction-free arithmetic system requires a minimal act of faith.
When we put together this entire journey—from the symbols and the Ishango bone, through Egypt, Babylon, India and the Maya, to set theory, Peano's axioms, the formal constructions of the different types of numbers and Gödel's theorems—what we see is that numbers are, at the same time, human tools and surprisingly robust structures. We can debate whether they "exist" as abstract entities or as sophisticated conventions, but it is clear that they shape our understanding of the universe and, in some way, transcend us: even if we were to disappear, it is hard to imagine a cosmos in which 1 + 1 would no longer be 2.
