Introduction
In 1931 a young mathematician with round glasses published a short paper titled “Über formal unentscheidbare Sätze der Principia Mathematica und verwandter Systeme” (“On Formally Undecidable Propositions of Principia Mathematica and Related Systems”). He was only 25 years old. So young that, instead of tearing the foundations of mathematics apart with the energy of his youth, he could simply have idled those years away. He worked at the University of Vienna and later became a permanent member of the Institute for Advanced Study in Princeton.

The work was cited as one of the most significant developments in logic in modern times when Gödel received an honorary degree from Harvard University in 1952. When the paper was published, only readers steeped in the technical literature could follow and understand the argument. The book, written by Ernest Nagel and James R. Newman, makes it accessible to readers with no specialized background in mathematics.
Axioms
Before moving on to the main concept, some preliminaries will help us understand it thoroughly. More than two thousand years ago the Ancient Greeks invented the logical proof. They used a set of axioms to derive theorems. Axioms are statements accepted as true without proof: things taken to be true simply because we feel or sense them to be. In short, axioms form the foundation of the system; theorems are the superstructure, obtained from the axioms. One of the most important axiomatic systems is the one created by Euclid, who laid down five axioms to systematize synthetic geometry.
The Problem of Consistency
The 19th century brought great improvements to mathematics. Before then, much of mathematics was ambiguous because of its primitive foundations. During and after the 19th century, something inside mathematics changed. Mathematicians developed new, precise techniques for the old foundations of mathematics, and new branches extending the old ones were created. Fundamental, ancient problems were finally settled. In 1837, Pierre Wantzel showed that trisecting an arbitrary angle with compass and straightedge is impossible.
Rigorous definitions were eventually supplied for negative, complex, and irrational numbers, and a logical basis was constructed for the real number system. Until the 19th century, mathematicians used real numbers only intuitively, without constructing them. Thanks to Cauchy, Cantor, and Dedekind we now have a solid understanding of the real numbers.

The other long-standing problem was Euclid’s parallel postulate (it states that through a point outside a line, only one line parallel to it can be drawn). For nearly 2000 years, generations of mathematicians struggled with this axiom: feeling it was less “self-evident” than the other four, they tried to prove it from them. In the 19th century, four mathematicians (Gauss, Bolyai, Lobachevsky, and Riemann) showed that this, too, was impossible.
Proving that the postulate cannot be derived from the others showed, more generally, that certain propositions cannot be proved within a system. This shattered the traditional belief that axioms were self-evident truths. The focus of pure mathematics shifted: it was no longer about discovering truths of the real world but about deriving theorems from a given set of postulates, regardless of whether those postulates were “true”. Mathematics became more abstract and formal.
New kinds of algebras and geometries were developed, marking significant departures from traditional mathematics. As the meanings of certain terms became more general, their use became broader and the inferences that could be drawn from them less confined. For example, the definition of a limit can be given in several ways, over the real numbers or in general topology.
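As a concrete illustration of a term growing more general, here are two standard definitions of a limit: the classical one over the real numbers, and its topological generalization, which drops every mention of distance.

```latex
% Epsilon-delta definition over the reals:
\lim_{x \to a} f(x) = L
\quad\iff\quad
\forall \varepsilon > 0 \;\exists \delta > 0 \;\forall x :\;
0 < |x - a| < \delta \;\implies\; |f(x) - L| < \varepsilon.

% Topological generalization: for every open neighborhood V of L
% there is an open neighborhood U of a such that
f\bigl(U \setminus \{a\}\bigr) \subseteq V.
```

The second definition makes no use of $|\cdot|$ at all, so it applies in any topological space; the first becomes just a special case.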
Some scientific and mathematical results may not sound logical, or even intuitive. That is perfectly normal, and it does not make a theorem or fact worthless. In the book, Nagel and Newman give a very good example of this: our children will probably have no difficulty accepting the paradoxes of relativity, just as we do not boggle at ideas that were regarded as wholly counterintuitive a couple of generations ago.
A 2000-year-old problem
However, the increased abstractness of mathematics raised a more serious problem. If axioms are not guaranteed to be true, how can we be sure they are internally consistent? How do we know that a set of axioms won’t eventually lead to a pair of theorems that contradict each other? Before the 19th century, no mathematician worried whether two contradictory theorems could be inferred from the Euclidean axioms, since those axioms were assumed to be valid statements about space.
If a mathematical system is consistent, it is impossible to prove both a statement and its negation within the system. In other words, the system does not yield contradictory results. For example, if a system based on axioms could prove both that two plus two equals four and that two plus two does not equal four, the system would be inconsistent.
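For a small enough formal system, this definition can be checked mechanically: derive every theorem and look for a statement that appears alongside its negation. The sketch below is a toy, not any system from the book; the formulas, axioms, and single inference rule are all illustrative assumptions.

```python
# Toy proof system: formulas are strings, "~X" denotes the negation of "X".
# The only rule is a listed form of modus ponens: from a premise and an
# implication, derive the conclusion.
axioms = {"p", "p->q"}
implications = {("p", "p->q", "q")}  # (premise, implication, conclusion)

def derive(axioms, implications):
    """Close the axiom set under the toy modus ponens rule."""
    theorems = set(axioms)
    changed = True
    while changed:
        changed = False
        for premise, implication, conclusion in implications:
            if premise in theorems and implication in theorems \
                    and conclusion not in theorems:
                theorems.add(conclusion)
                changed = True
    return theorems

def is_consistent(theorems):
    """Consistent = no statement is provable together with its negation."""
    return not any("~" + t in theorems for t in theorems)

theorems = derive(axioms, implications)  # {"p", "p->q", "q"} -- consistent
```

Adding `"~q"` as a further axiom would make the closure contain both `"q"` and `"~q"`, and `is_consistent` would report the contradiction. For real systems such as arithmetic this brute-force check is impossible, which is exactly the difficulty discussed below.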
For 2000 years, mathematicians tried to prove the parallel postulate from the other four axioms, and in the process they created non-Euclidean geometries. For these geometries, whose axioms were considered false of real space, proving consistency was a critical and difficult task: without such a proof, they could not be considered valid mathematical systems. For example, the non-Euclidean geometry called elliptic geometry, discovered by Bernhard Riemann, results from reversing Euclid’s parallel postulate (through a given point outside a line, no line parallel to it can be drawn).
Modeling the problem
Mathematicians developed a general method to prove consistency: finding a model or an interpretation for the abstract axioms that makes them all true statements about the model.
The text provides a set of five abstract postulates about two classes, K and L. To prove them consistent, it uses a simple model: let K be the three vertices of a triangle and L be its three sides. By checking each postulate, one can see that every one becomes a true statement about this triangle (e.g., “Any two vertices lie on just one side”). Since a model exists in which all the postulates hold, the postulates are consistent.
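The triangle check is small enough to carry out mechanically. The sketch below encodes one common rendering of the five K-and-L postulates (the exact wording is an assumption here, not quoted from the text) and verifies each against the triangle model.

```python
from itertools import combinations

# Model: K = the vertices of a triangle, L = its sides (as vertex pairs).
K = {"a", "b", "c"}
L = {frozenset({"a", "b"}), frozenset({"b", "c"}), frozenset({"a", "c"})}

# Postulate 1: any two members of K are contained in just one member of L.
p1 = all(sum(1 for s in L if {x, y} <= s) == 1 for x, y in combinations(K, 2))
# Postulate 2: no member of K is contained in more than two members of L.
p2 = all(sum(1 for s in L if v in s) <= 2 for v in K)
# Postulate 3: the members of K are not all contained in a single member of L.
p3 = not any(K <= s for s in L)
# Postulate 4: any two members of L have just one member of K in common.
p4 = all(len(s & t) == 1 for s, t in combinations(L, 2))
# Postulate 5: no member of L contains more than two members of K.
p5 = all(len(s) <= 2 for s in L)
```

All five checks come out true for this model, which is what establishes the consistency of the abstract postulates.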
To prove the consistency of a non-Euclidean (elliptic) geometry, mathematicians used a Euclidean model. They interpreted the non-Euclidean ‘plane’ as the surface of a sphere and a ‘straight line’ as a great circle on that sphere. Under this interpretation, each non-Euclidean axiom becomes a provable theorem in standard Euclidean geometry, thus showing its consistency.
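The key feature of the sphere model is that it really does satisfy the reversed parallel postulate: any two distinct great circles intersect (in a pair of antipodal points), so no ‘line’ has a parallel. A minimal numerical sketch, assuming only that a great circle is determined by the normal vector of its plane through the sphere’s center:

```python
import math

def cross(u, v):
    """Cross product of two 3-vectors."""
    return (u[1]*v[2] - u[2]*v[1],
            u[2]*v[0] - u[0]*v[2],
            u[0]*v[1] - u[1]*v[0])

def norm(u):
    return math.sqrt(sum(x * x for x in u))

def great_circle_intersections(n1, n2):
    """Where the great circles with plane normals n1, n2 meet on the unit sphere."""
    d = cross(n1, n2)
    length = norm(d)
    if length == 0:          # parallel normals: the same great circle
        return None
    p = tuple(x / length for x in d)   # one intersection point
    return p, tuple(-x for x in p)     # and its antipode

# Two distinct great circles: the equator (normal along z)
# and the meridian in the plane y = 0 (normal along y).
p, q = great_circle_intersections((0, 0, 1), (0, 1, 0))
```

Whenever the two normals differ, the cross product is nonzero and the function returns two antipodal intersection points, mirroring the fact that elliptic ‘lines’ always meet.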
There is a logical flaw
The proofs are relative, not absolute. The model for non-Euclidean geometry only proves that if Euclidean geometry is consistent, then non-Euclidean geometry is consistent too. This merely shifts the problem: how do we prove that Euclidean geometry is consistent?
The mathematician David Hilbert created an algebraic model for Euclidean geometry, translating its axioms into statements about numbers and equations. But this, too, was a relative proof. It showed that if algebra is consistent, then Euclidean geometry is consistent. This just pushes the problem back another step.

The core difficulty is that the most important mathematical systems, like arithmetic, require models with an infinite number of elements. You can’t verify the truth of the axioms by inspecting an infinite number of cases, so you can never be absolutely sure the model itself is free of contradictions. We often rely on infinite models, but they are so complex that they might secretly contain inconsistencies, even if they look fine at first. Nor is it enough for a system to feel intuitively correct, because antinomies can arise unexpectedly. The problem of consistency therefore remains deep and important.
The text states that history has proven it a terrible guide to argue that some concepts are so “clear and distinct” that they must be consistent. A very good example is Russell’s Paradox: a paradox in the naive set theory used by Cantor, in which Bertrand Russell caught a fatal flaw.

Divide classes into two kinds: those that contain themselves as members and those that do not. A class that does not contain itself is called normal; a class that contains itself is non-normal. Now define a class $\mathrm{N}$, the class of all normal classes. The question that leads to disaster is: “Is $\mathrm{N}$ itself a normal class?”
If $\mathrm{N}$ is a normal class, then by the definition of $\mathrm{N}$ it belongs to $\mathrm{N}$, so $\mathrm{N}$ contains itself as a member and is therefore non-normal: a contradiction. If $\mathrm{N}$ is a non-normal class, then $\mathrm{N}$ contains itself as a member; but every member of $\mathrm{N}$ is a normal class, so $\mathrm{N}$ is normal: again a contradiction. The unavoidable conclusion is that the statement “$\mathrm{N}$ is normal” is true if and only if it is false.
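The conclusion can be checked by brute force: a statement that is true if and only if it is false would need a truth value $v$ with $v = \lnot v$, and no such value exists. A one-line sketch:

```python
# "N is normal" would need a truth value v satisfying v == (not v).
# Checking both possible truth values shows no assignment works, so the
# naive definition of N cannot be consistently evaluated at all.
satisfiable = any(v == (not v) for v in (True, False))  # False
```

Since `satisfiable` is `False`, the biconditional has no model, which is exactly why the paradox forces a revision of naive set theory rather than a clever choice of answer.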
This earth-shattering paradox demonstrated that even the most basic, intuitive logical concepts can lead to contradictions, showing that the problem of consistency was far from solved.