formalization-of-mathematics.bigb
= Formalization of mathematics
= Foundation of mathematics
{synonym}
Mathematics is a <art>[beautiful game] played on https://en.wikipedia.org/wiki/String_(computer_science)[strings], which <mathematicians> call https://en.wikipedia.org/wiki/Theorem["theorems"].
Here is a more understandable description of the semi-satire that follows: https://math.stackexchange.com/questions/53969/what-does-formal-mean/3297537#3297537
You start with a very small list of:
* certain arbitrarily chosen initial strings, which mathematicians call "<axioms>"
* rules of how to obtain new strings from old strings, called https://en.wikipedia.org/wiki/Rule_of_inference["rules of inference"]. Every transformation rule is very simple, and can be verified by a computer.
Using those rules, you choose a target string that you want to reach, and then try to reach it. Before the target string is reached, mathematicians call it a https://en.wikipedia.org/wiki/Conjecture["conjecture"].
Mathematicians call the list of transformation rules used to reach a string a https://en.wikipedia.org/wiki/Mathematical_proof["proof"].
Since every step of the proof is very simple and can be verified by a computer automatically, the entire proof can also be automatically verified by a computer very easily.
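To make that concrete, here is a minimal Python sketch of a checker for a toy string-rewriting system (Hofstadter's well-known MU puzzle, which is of course not a real foundation of mathematics): the axiom and the rules are tiny, and checking a candidate proof is purely mechanical.
``
# Toy "proof system": axioms are strings, rules of inference turn old strings
# into new strings, and a proof is a checkable list of steps (the MU puzzle).
AXIOM = "MI"

def rule1(s):  # if the string ends in I, you may append U
    return [s + "U"] if s.endswith("I") else []

def rule2(s):  # from Mx derive Mxx
    return [s + s[1:]] if s.startswith("M") else []

def rule3(s):  # replace any occurrence of III with U
    return [s[:i] + "U" + s[i + 3:] for i in range(len(s)) if s[i:i + 3] == "III"]

def rule4(s):  # delete any occurrence of UU
    return [s[:i] + s[i + 2:] for i in range(len(s)) if s[i:i + 2] == "UU"]

RULES = [rule1, rule2, rule3, rule4]

def check_proof(steps):
    """Each step after the axiom must follow from the previous string
    by applying exactly one rule. Trivial for a computer to verify."""
    if steps[0] != AXIOM:
        return False
    for prev, cur in zip(steps, steps[1:]):
        if not any(cur in rule(prev) for rule in RULES):
            return False
    return True

print(check_proof(["MI", "MII", "MIIII", "MIIIIU", "MUIU"]))
# True: "MUIU" is a theorem of this toy system
``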
Finding proofs however is undoubtedly an <computable problem>[uncomputable problem].
Most mathematicians can't code or deal with the real world in general however, so they haven't created the obviously necessary: <website front-end for a mathematical formal proof system>.
The fact that Mathematics happens to be the best way to describe <physics> and that humans can use physical intuition heuristics to reach the NP-hard proofs of mathematics is one of the great miracles of the universe.
Once we have mathematics formally modelled, one of the coolest results is https://en.wikipedia.org/wiki/G%C3%B6del%27s_incompleteness_theorems[Gödel's incompleteness theorems], which state that for any reasonable set of axioms, there are necessarily theorems that can be proven neither true nor false from those axioms: those theorems are independent of those axioms. Therefore, there are three possible outcomes for any hypothesis: true, false or independent!
Some famous theorems have even been proven to be independent of some famous <axioms>. One of the most notable is that the http://en.wikipedia.org/wiki/Continuum_hypothesis[Continuum Hypothesis] is <independent (mathematical logic)> from <Zermelo-Fraenkel set theory>! Such independence proofs rely on modelling the proof system inside another proof system, and https://en.wikipedia.org/wiki/Forcing_(mathematics)[forcing] is one of the main techniques used for this.
\Image[https://web.archive.org/web/20190430151331im_/http://abstrusegoose.com/strips/i_dont_give_a_shit_about_your_mountain.png]
{title=The landscape of modern Mathematics comic by Abstruse Goose}
{description=This comic shows that Mathematics is one of the most diversified areas of <art>[useless] human knowledge.}
{source=https://abstrusegoose.com/211}
= Proof assistant
{parent=Formalization of mathematics}
{wiki}
Much of this section will be dumped at <website front-end for a mathematical formal proof system>{full} instead.
= QED manifesto
{c}
{parent=Proof assistant}
{wiki}
If <Ciro Santilli> ever becomes rich, he's going to solve this with: <website front-end for a mathematical formal proof system>, promise.
= Web-based proof assistant
{parent=Proof assistant}
A more verbose description of this at: <Website front-end for a mathematical formal proof system>{full}.
= The Math Genome Project
{c}
{parent=Web-based proof assistant}
{tag=Closed source software}
{title2=2023-}
https://www.themathgenome.com/
Appears to support multiple <proof assistant> backends including <Lean (proof assistant)>, HOL and <Coq>.
A discussion on the <Lean (proof assistant)> Zulip: https://leanprover.zulipchat.com/#narrow/stream/113488-general/topic/The.20Math.20Genome.20Project/near/352639129[]. It seems however that the Lean people are not convinced about the model in general.
TODO <closed source>? Really? https://www.themathgenome.com/pricing
TODO not viewable without login?
Has <conjectures> feature.
Built by this dude John Mercer: https://www.linkedin.com/in/johnmercer/[]. He must be <independently wealthy> or something? What a hero.
A failed <Hacker News> self post: https://news.ycombinator.com/item?id=35775071
<Ciro Santilli> asked: https://discord.com/channels/1096393420408360989/1096393420408360996/1137047842159079474
\Q[Does the website actually automatically check the formal proofs, or is this intended to be implemented at some point? And if yes, is it intended to allow proofs to depend on other proofs of the website (possibly by other people)]
Owner:
\Q[Hi Ciro, yes we will be releasing in-browser proof assistant environments/checkers (e.g. Lean). Our goal is not to replace the underlying open-source repos (e.g. Mathlib) so the main dependency will be on the current repos; then when statement formalizations and proofs come in and are certified they can be PR'd to the respective repos. So we will be the source of truth for the informal latex code but only a stepping stone and orchestration layer on the way to the respective formal libraries.]
So apparently there will be proof checking, but no dependencies between proofs: you still have to pull request everything back and face the pain.
= List of proof assistants
{parent=Proof assistant}
= Lean
{c}
{disambiguate=proof assistant}
{parent=List of proof assistants}
{tag=Microsoft product}
{tag=Open source software}
{wiki}
* https://github.com/leanprover/lean
* https://github.com/leanprover/lean4
= Coq
{c}
{disambiguate=software}
{parent=List of proof assistants}
{wiki}
= Coq
{synonym}
= Formal proof
{parent=Formalization of mathematics}
{wiki}
A proof in some system for the <formalization of mathematics>.
= Formal proof is useless
{parent=Formal proof}
= Formal proofs are useless
{synonym}
The only cases where formal proofs of <theorems> seem to have had actual mathematical value are theorems that require checking a very large number of cases, so many that no human can be fully certain that no mistakes were made. Some examples:
* https://en.wikipedia.org/wiki/Four_color_theorem[Four color theorem]
* <BB(5)>
* <classification of finite simple groups>
= Mathematical proof
{parent=Formal proof}
{wiki}
= Formal system
{parent=Formal proof}
= Formal proof system
{synonym}
= Zermelo-Fraenkel set theory
{c}
{parent=Formal system}
{title2=ZF}
{wiki=Zermelo–Fraenkel_set_theory}
One of the first <formal proof systems>. This is actually understandable!
This is <Ciro Santilli>-2020 definition of the <foundation of mathematics> (and the only one he had any patience to study at all).
TODO what are its limitations? Why were other systems created?
= Zermelo-Fraenkel axioms
{c}
{parent=Zermelo-Fraenkel set theory}
{title2=ZF}
= Zermelo-Fraenkel axioms with the axiom of choice
{c}
{parent=Zermelo-Fraenkel axioms}
= ZFC
{c}
{synonym}
{title2}
= Set theory
{parent=Zermelo-Fraenkel set theory}
{wiki}
When <Ciro Santilli> says <set theory>, he basically means <Zermelo-Fraenkel set theory>.
= Metamath
{c}
{parent=Zermelo-Fraenkel set theory}
{wiki}
http://metamath.org/
It seems to implement <Zermelo-Fraenkel set theory>.
= Axiom
{parent=Formal proof}
{wiki}
= Consistency
{parent=Axiom}
{wiki}
= Consistent
{synonym}
A set of <axioms> is consistent if they don't lead to any contradictions.
When a set of axioms is not consistent, false can be proven, and then everything is true, making the set of axioms useless.
= Independence
{disambiguate=mathematical logic}
{parent=Axiom}
{wiki}
= Independent
{disambiguate=mathematical logic}
{synonym}
A <theorem> is said to be independent from a set of <axioms> if it can be proven neither true nor false from those axioms.
It or its negation could therefore be arbitrarily added to the set of axioms.
= Open problem in mathematics
{parent=Formal proof}
{wiki=List_of_unsolved_problems_in_mathematics}
= Conjecture
{parent=Open problem in mathematics}
{wiki}
A <conjecture> is an <open problem in mathematics> for which some famous dude gave heuristic arguments which indicate whether the <theorem> is true or false.
= Famous conjecture
{parent=Conjecture}
This section groups conjectures that are famous, solved or unsolved.
They are usually conjectures that have a strong intuitive reasoning, but took a very long time to prove, despite great efforts.
The list: https://en.wikipedia.org/wiki/List_of_unsolved_problems_in_mathematics
= Collatz conjecture
{c}
{parent=Famous conjecture}
{tag=Simple to state but hard to prove}
{title2=1937-}
{wiki}
Given stuff like https://arxiv.org/pdf/2107.12475.pdf on <Erdős' conjecture on powers of 2>, it feels like this one will be closer to <computer science>/<Halting problem> issues than to <number theory>. Who knows. This is suggested e.g. at <The Busy Beaver Competition: a historical survey by Pascal Michel>.
= Collatz-like problem
{c}
{parent=Collatz conjecture}
We just use the "if mod" notation definition as mentioned at: https://math.stackexchange.com/questions/4305972/what-exactly-is-a-collatz-like-problem/4773230#4773230
= The Busy Beaver Competition: a historical survey by Pascal Michel
{c}
{parent=Collatz conjecture}
{title2=2013}
https://arxiv.org/abs/0906.3749
= Erdős' conjecture on powers of 2
{c}
{parent=Collatz conjecture}
Described at: https://arxiv.org/pdf/2107.12475.pdf[] where a relation to the <Busy beaver scale> is proven, and the intuitive relation to the <Collatz conjecture> described. Perhaps more directly: https://demonstrations.wolfram.com/CollatzSequenceComputedByATuringMachine/
= Theorem
{parent=Formal proof}
{wiki}
= Corollary
{parent=Theorem}
{wiki}
An easy-to-prove <theorem> that follows from a harder-to-prove theorem.
= Lemma
{disambiguate=mathematics}
{parent=Formalization of mathematics}
A <theorem> that is not very important on its own, often an intermediate step to proving something that the author feels deserves the name "<theorem>".
= Set
{disambiguate=mathematics}
{parent=Formalization of mathematics}
= Set
{synonym}
Intuitively: unordered container where all the values are unique, just like C++ `std::set`.
More precisely, for the set theory <formalization of mathematics>:
* everything is a set, including the elements of sets
* string manipulation wise:
  * `{}` is an empty set. The natural number `0` is defined as `{}` as well.
  * `{{}}` is a set that contains an empty set
  * `{{}, {{}}}` is a set that contains two sets: `{}` and `{{}}`
  * `{{}, {}}` is not well formed, because it contains `{}` twice
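As a rough sketch of that encoding in Python, with `frozenset` standing in for sets and one common convention (von Neumann's) for the natural numbers, where $n + 1 = n \cup \{n\}$:
``
# Sketch: "everything is a set", with Python frozensets standing in for sets
# (frozenset so that sets can be elements of other sets).
# Von Neumann convention assumed here: 0 = {} and n + 1 = n ∪ {n}.
zero = frozenset()                  # {}
def successor(n):
    return n | frozenset({n})       # n ∪ {n}
one = successor(zero)               # {{}}
two = successor(one)                # {{}, {{}}}
print(len(two))                     # 2: the number n ends up being a set with n elements
print(zero in two, one in two)      # True True
# {{}, {}} cannot even be written: duplicates collapse automatically.
print(frozenset({zero, zero}) == frozenset({zero}))  # True
``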
= Union
{disambiguate=set theory}
{parent=Set (mathematics)}
{title2=$A \bigcup B$}
{wiki}
= Set union
{synonym}
= Cardinality
{parent=Set (mathematics)}
The size of a set.
For <finite> sizes, the definition is simple, and the intuitive name "size" matches well.
But for infinity, things are messier, e.g. the size of the <real numbers> is strictly larger than the size of the <integers> as shown by <Cantor's diagonal argument>, which is kind of what justifies a fancier word "cardinality" to distinguish it from the more normal word "size".
The key idea is to compare set sizes with <bijections>.
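For instance, the Cantor pairing function is a <bijection> between pairs of <natural numbers> and the <natural numbers> themselves, which is exactly what it means to say those two sets have the same cardinality. A quick finite sanity check in Python (a sketch, not a proof):
``
# Sketch: the Cantor pairing function, a bijection between N x N and N.
def pair(x, y):
    return (x + y) * (x + y + 1) // 2 + y

values = [pair(x, y) for x in range(50) for y in range(50)]
print(len(values) == len(set(values)))   # True: no two pairs collide (injective here)
print(set(range(100)) <= set(values))    # True: every small natural is hit
``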
= Function
{disambiguate=mathematics}
{parent=Formalization of mathematics}
{wiki}
= Function
{synonym}
<Set (mathematics)> of <ordered pairs>. That's it! This is illustrated at: https://math.stackexchange.com/questions/1480651/is-fx-x-1-x-2-a-function/1481099#1481099
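A minimal Python sketch of that view, where a finite function is literally a set of pairs, plus the one condition that makes such a set a function (no input maps to two different outputs):
``
# A finite function represented as a set of ordered pairs.
f = {(1, 1), (2, 4), (3, 9)}  # the squaring function on {1, 2, 3}

def is_function(pairs):
    inputs = [x for x, _ in pairs]
    return len(inputs) == len(set(inputs))  # no input appears twice

def apply(pairs, x):
    return next(y for (a, y) in pairs if a == x)

print(is_function(f), apply(f, 3))    # True 9
print(is_function({(1, 1), (1, 2)}))  # False: 1 would have two images
``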
= Domain, codomain and image
{parent=Function (mathematics)}
= Bijection
{parent=Domain, codomain and image}
{wiki}
= Bijective
{synonym}
= One-to-one
{synonym}
= 1-to-1
{synonym}
= Injective function
{parent=Bijection}
{wiki}
= Injection
{synonym}
= Injective
{synonym}
Mnemonic: in means into. So we are going into a <codomain> that is large enough so that we can have a different image for every input.
= Surjective function
{parent=Bijection}
{wiki}
= Surjection
{synonym}
= Surjective
{synonym}
Mnemonic: sur means over. So we are going over the <codomain>, and covering it entirely.
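For finite functions both properties are easy to check mechanically. A small Python sketch, with the codomain passed in explicitly since surjectivity only makes sense relative to a chosen <codomain>:
``
# Sketch: injectivity and surjectivity of a finite function given as a dict.
def is_injective(f):
    return len(set(f.values())) == len(f)    # distinct inputs have distinct images

def is_surjective(f, codomain):
    return set(f.values()) == set(codomain)  # the whole codomain is covered

f = {0: 'a', 1: 'b', 2: 'c'}
print(is_injective(f))                         # True
print(is_surjective(f, {'a', 'b', 'c'}))       # True: with this codomain f is a bijection
print(is_surjective(f, {'a', 'b', 'c', 'd'}))  # False: 'd' is never reached
``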
= Domain of a function
{parent=Domain, codomain and image}
{wiki}
= Domain
{disambiguate=function}
{synonym}
= Codomain
{parent=Domain, codomain and image}
{wiki}
Vs: <image (mathematics)>: the codomain is the set that the function might reach.
The <image (mathematics)> is the exact set that it actually reaches.
E.g. the function:
$$
f(x) = x^2
$$
could have:
* codomain $\R$
* image $\R_{+}$
Note that the definition of the codomain is somewhat arbitrary, e.g. $x^2$ could as well technically have codomain:
$$
\R \bigcup \R^2
$$
even though it will obviously never reach any value in $\R^2$.
The exact image is in general therefore harder to characterize.
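A tiny Python sketch of the distinction, restricting $f(x) = x^2$ to a small finite domain so that everything can be computed explicitly:
``
domain = range(-3, 4)
codomain = set(range(0, 100))    # declared somewhat arbitrarily, could be even larger
image = {x * x for x in domain}  # computed from the function itself
print(sorted(image))             # [0, 1, 4, 9]
print(image <= codomain)         # True: the image is always contained in the codomain
``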
= Endofunction
{parent=Codomain}
A <function> where the <domain (function)> is the same as the <codomain>.
= Image
{disambiguate=mathematics}
{parent=Domain, codomain and image}
{wiki}
= Image of a function
{synonym}
= Image
{disambiguate=function}
{synonym}
= Periodic function
{c}
{parent=Function (mathematics)}
{wiki}
= Square wave
{parent=Periodic function}
{wiki}
= Rectangular wave
{parent=Square wave}
= Function by signature
{parent=Function (mathematics)}
In this section we classify some functions by the type of inputs and outputs they take and produce.
= Functional function
{parent=Function by signature}
This is about functions that take functions as input or output.
= Convolution
{parent=Functional function}
{wiki}
= Set function
{parent=Function by signature}
This section is about functions that operate on arbitrary <sets>.
= Cartesian product
{c}
{parent=Set function}
{wiki}
A function that maps two <sets> to a third set.
= Direct product
{c}
{parent=Set function}
{wiki}
A <Cartesian product> that carries over some extra structure of the input groups.
E.g. the <direct product of groups> carries over <group> structure on both sides.
= Numeric function
{parent=Function by signature}
This section is about functions that operate on numbers such as the <integers> or <real numbers>.
= Addition
{parent=Numeric function}
{wiki}
= Subtraction
{parent=Numeric function}
{wiki}
= Multiplication
{parent=Numeric function}
{wiki}
= Division
{parent=Numeric function}
{wiki}
= Exponentiation
{parent=Numeric function}
{title2=$x^y$}
{wiki}
= nth root
{parent=Exponentiation}
{wiki}
= Square root
{parent=nth root}
{wiki}
= Exponentiation functional equation
{parent=Exponentiation}
{tag=Functional equation}
We define this as the <functional equation>:
$$
f(x + y) = f(x)f(y)
$$
It is a bit like <Cauchy's functional equation> but with <multiplication> instead of <addition>.
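A quick numeric sanity check in Python that <exponentiation> satisfies it for a couple of bases:
``
import math

x, y = 0.7, 1.3
print(math.isclose(math.exp(x + y), math.exp(x) * math.exp(y)))  # True
print(math.isclose(2 ** (x + y), 2 ** x * 2 ** y))               # True: any base b > 0 works
``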
= Exponential function
{parent=Exponentiation}
{title2=$e^x$}
{wiki}
= Exponential
{synonym}
= Exponential function differential equation
{parent=Exponential function}
The <differential equation> that is solved by the <exponential function>:
$$
y'(x) = y(x)
$$
with <initial condition>:
$$
y(0) = 1
$$
TODO find better name for it, "<linear differential equation>[linear] homogeneous differential equation of degree one" almost fully constrains it except for the exponent constant and initial value.
= Definition of the exponential function
{parent=Exponential function}
{wiki=Characterizations_of_the_exponential_function}
= Taylor expansion definition of the exponential function
{c}
{parent=Definition of the exponential function}
The <Taylor series> expansion is the most direct definition of the exponential, as it obviously satisfies the <exponential function differential equation>:
* the first constant term dies
* each other term gets converted to the one before
* because we have <infinite>[infinitely] many terms, we get what we started with!
$$
e^x = \sum_{n=0}^\infty \frac{x^n}{n!} = 1 + \frac{x}{1} + \frac{x^2}{2} + \frac{x^3}{2 \times 3} + \frac{x^4}{2 \times 3 \times 4} + \ldots
$$
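A small Python sanity check that the partial sums of this series converge to the same values as `math.exp`:
``
import math

# Sketch: partial sums of sum x^n / n! converge to exp(x).
def exp_taylor(x, terms=20):
    total, term = 0.0, 1.0
    for n in range(terms):
        total += term
        term *= x / (n + 1)   # turn x^n/n! into x^(n+1)/(n+1)!
    return total

print(exp_taylor(1.0), math.exp(1.0))    # both about 2.718281828459045
print(exp_taylor(-2.0), math.exp(-2.0))  # both about 0.1353352832366127
``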
= Product definition of the exponential function
{parent=Definition of the exponential function}
$$
e^x = \lim_{n\to\infty} \left(1+\frac x n \right)^n
$$
The basic intuition for this is to start from the origin and make small changes to the function based on its known derivative at the origin.
More precisely, we know that for any base $b$, <exponentiation> satisfies:
* $b^{x + y} = b^x b^y$.
* $b^{0} = 1$.
And we also know that for $b = e$ in particular that we satisfy the <exponential function differential equation> and so:
$$
\dv{e^x}{x}(0) = 1
$$
One interesting fact is that the only thing we use from the <exponential function differential equation> is the value around $x = 0$, which is quite little information! This idea is basically what is behind the importance of the <Lie group-Lie algebra correspondence> via the <exponential map>. In the more general setting of groups and <manifolds>, restricting ourselves to be near the origin is a huge advantage.
Now suppose that we want to calculate $e^1$. The idea is to start from $e^0$ and then use the first order of the <Taylor series> to extend the known value of $e^0$ to $e^1$.
E.g., if we split into 2 parts, we know that:
$$
e^1 = e^{1/2}e^{1/2}
$$
or in three parts:
$$
e^1 = e^{1/3}e^{1/3}e^{1/3}
$$
so we can just use arbitrarily many parts $e^{1/n}$ that are arbitrarily close to $x = 0$:
$$
e^1 = (e^{1/n})^n
$$
and more generally for any $x$ we have:
$$
e^x = (e^{x/n})^n
$$
Let's see what happens with the Taylor series. We have near $y = 0$ in <little-o notation>:
$$
e^y = 1 + y + o(y)
$$
Therefore, for $y = x/n$, which is near $y = 0$ for any fixed $x$:
$$
e^{x/n} = 1 + x/n + o(1/n)
$$
and therefore:
$$
e^x = (e^{x/n})^n = (1 + x/n + o(1/n))^n
$$
which is basically the formula that we wanted. We just have to convince ourselves that as $n \to \infty$, the $o(1/n)$ disappears, i.e.:
$$
(1 + x/n + o(1/n))^n = (1 + x/n)^n
$$
To do that, let's multiply $e^y$ by itself once:
$$
e^y e^y = (1 + y + o(y))(1 + y + o(y)) = 1 + 2y + o(y)
$$
and multiplying a third time:
$$
e^y e^y e^y = (1 + 2y + o(y))(1 + y + o(y)) = 1 + 3y + o(y)
$$
TODO conclude.
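Numerically the convergence is easy to observe. A small Python sketch (an illustration, not the missing conclusion of the argument):
``
import math

x = 1.0
for n in (10, 1000, 100000):
    approx = (1 + x / n) ** n
    print(n, approx, abs(approx - math.exp(x)))
# 10      2.5937...   error about 1.2e-01
# 1000    2.7169...   error about 1.4e-03
# 100000  2.71826...  error about 1.4e-05, shrinking roughly like 1/n
``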
= Gaussian function
{c}
{parent=Exponential function}
{title2=$e^{-x^{2}}$}
= Gaussian
{c}
{synonym}
= Logarithm
{parent=Exponential function}
{wiki}
= Matrix exponential
{parent=Exponential function}
{wiki}
The matrix exponential is the solution to a <system of linear ordinary differential equations>; the <exponential function> is just the 1-dimensional special case.
Note that more generally, the <matrix exponential> can be defined on any <ring (mathematics)>.
The matrix exponential is of particular interest in the study of <Lie groups>, because in the case of the <Lie algebra of a matrix Lie group>, it provides the correct <exponential map>.
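A small Python sketch of both points, assuming NumPy and SciPy: the same power series $\sum A^n / n!$ defines it, and exponentiating the generator of 2D rotations indeed produces a rotation matrix (an element of the corresponding <Lie group>):
``
import numpy as np
from scipy.linalg import expm

# Sketch: matrix exponential as the truncated power series sum A^n / n!,
# checked against scipy.linalg.expm.
def expm_series(A, terms=30):
    result = np.eye(A.shape[0])
    term = np.eye(A.shape[0])
    for n in range(1, terms):
        term = term @ A / n          # A^n / n! from A^(n-1) / (n-1)!
        result = result + term
    return result

theta = np.pi / 2
A = np.array([[0.0, -theta], [theta, 0.0]])  # generator of 2D rotations
print(np.allclose(expm_series(A), expm(A)))  # True
print(np.round(expm(A), 6))                  # approximately [[0, -1], [1, 0]]: rotation by 90 degrees
``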
\Video[https://www.youtube.com/watch?v=O85OWBJ2ayo]
{title=How (and why) to raise e to the power of a matrix by <3Blue1Brown> (2021)}
= Logarithm of a matrix
{parent=Matrix exponential}
{wiki}
= Matrix logarithm
{synonym}
= Existence of the matrix logarithm
{parent=Logarithm of a matrix}
= The matrix exponential is not surjective
{synonym}
https://en.wikipedia.org/wiki/Logarithm_of_a_matrix#Existence mentions it always exists for all <invertible> <complex> matrices. But the <real> case is more complicated. Notable counterexample: -1 cannot be reached by any real $e^{tk}$.
The <Lie algebra exponential covering problem> can be seen as a generalized version of this problem, because
* <Lie algebra> of <GL(n)> is just the entire <M_n>
* we can immediately exclude non-invertible matrices from being the result of the exponential, because $e^{tM}$ has the inverse $e^{-tM}$, so non-invertible matrices are never reachable
= Step function
{parent=Numeric function}
{wiki}
= Heaviside step function
{c}
{parent=Step function}
{wiki}
= Function space
{parent=Formalization of mathematics}
{wiki}
Most notable example: <l2>.
= Number
{parent=Formalization of mathematics}
{wiki}
= Scientific notation
{parent=Number}
{wiki}
= E notation
{c}
{parent=Scientific notation}
{tag=good}
{{wiki=Scientific_notation#E_notation}}
What do you prefer, `1 \times 10^{10}` or `1E10`?
= Natural number
{parent=Number}
{title2=$\N$}
{title2=0, 1, 2, ...}
{wiki}
= Integer
{parent=Number}
{tag=Commutative ring}
{title2=$\Z$}
{title2=..., -2, -1, 0, 1, 2, ...}
{wiki}
= Rational number
{parent=Number}
{title2=$\Q$}
{title2=$\frac{1}{2}$, $\frac{4}{3}$}
{wiki}
= Rationals
{synonym}
= Fraction
{parent=Rational number}
{wiki}
= Reduced fraction
{parent=Fraction}
A "reduced fraction" is a fraction that has the smallest possible <integer> <numerator> and <denominator> for its value.
For example:
$$
\frac{3}{6}
$$
is not a reduced fraction, because there is another fraction equal to it but with smaller <numerator> and <denominator>:
$$
\frac{1}{2}
$$
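Computationally, reducing a fraction just means dividing the <numerator> and <denominator> by their greatest common divisor, which is what Python's `fractions.Fraction` does automatically. A tiny sketch:
``
from fractions import Fraction
from math import gcd

# Sketch: reduce a fraction by dividing numerator and denominator by their GCD.
def reduce(num, den):
    g = gcd(num, den)
    return num // g, den // g

print(reduce(3, 6))    # (1, 2)
print(Fraction(3, 6))  # 1/2: Fraction reduces automatically
``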
= Farey sequence
{c}
{parent=Reduced fraction}
{title2=$F_n$}
{wiki}
= Farey sunburst
{c}
{parent=Farey sequence}
https://en.wikipedia.org/w/index.php?title=Farey_sequence&oldid=1223364653#Farey_sunburst
= Number of elements in a Farey sequence
{c}
{parent=Reduced fraction}
{wiki}
https://en.wikipedia.org/w/index.php?title=Farey_sequence&oldid=1223364653#Sequence_length_and_index_of_a_fraction
$$
|F_n| = 1 + \Phi(n)
$$
where $\Phi(n)$ is the <totient summatory function>.
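A brute-force Python sketch checking the formula for small $n$, counting reduced fractions $p/q$ in $[0, 1]$ with $q \le n$ directly versus summing the totients:
``
from math import gcd

# Sketch: |F_n| counted directly vs the formula 1 + Phi(n).
def farey_count(n):
    return sum(1 for q in range(1, n + 1)
                 for p in range(0, q + 1) if gcd(p, q) == 1)

def phi(k):  # Euler's totient, brute force
    return sum(1 for i in range(1, k + 1) if gcd(i, k) == 1)

def formula(n):
    return 1 + sum(phi(k) for k in range(1, n + 1))

print([farey_count(n) for n in range(1, 9)])  # [2, 3, 5, 7, 11, 13, 19, 23]
print([formula(n) for n in range(1, 9)])      # same list
``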
= Numerator
{parent=Fraction}
{title2=Number on top}
= Denominator
{parent=Fraction}
{title2=Number on bottom}
= Real number
{parent=Number}
{title2=$\R$}
{wiki}
= Real
{synonym}
A good definition is by using <Dedekind cuts>.
= Dedekind cut
{c}
{parent=Real number}
{wiki}
= Cardinality of the continuum
{parent=Real number}
= Countable set
{parent=Cardinality of the continuum}
{wiki}
= Cantor's diagonal argument
{c}
{parent=Cardinality of the continuum}
{wiki}
= Mathematical constant
{parent=Real number}
{wiki}
= Pi
{parent=Mathematical constant}
{tag=Transcendental number}
{title2=$\pi$}
{wiki}
= Complex number
{parent=Formalization of mathematics}
{wiki}
= Complex
{synonym}
An <ordered pair> of two <real numbers> with the complex addition and multiplication defined.
Forms both a:
* <division algebra> if thought of as <\R^2> with complex multiplication as the bilinear map of the <algebra over a field>[algebra]
* <field (mathematics)>
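A minimal Python sketch of this ordered-pair definition, with multiplication given by the familiar rule $(a, b)(c, d) = (ac - bd, ad + bc)$:
``
# Sketch: complex numbers as ordered pairs of reals.
def cadd(z, w):
    return (z[0] + w[0], z[1] + w[1])

def cmul(z, w):
    a, b = z
    c, d = w
    return (a * c - b * d, a * d + b * c)

i = (0.0, 1.0)
print(cmul(i, i))            # (-1.0, 0.0): i^2 = -1
print(cmul((1, 2), (3, 4)))  # (-5, 10), same as Python's complex(1, 2) * complex(3, 4)
``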
= Complex conjugate
{parent=Complex number}
{title2=$\overline{x}$}
{wiki}
= Imaginary number
{parent=Complex number}
{wiki}
= Imaginary unit
{parent=Complex number}
{title2=$i$}
{wiki}
= Cayley-Dickson construction
{c}
{parent=Complex number}
{wiki=Cayley–Dickson construction}
Constructs the <quaternions> from <complex numbers>, <octonions> from <quaternions>, and keeps doubling like this indefinitely.
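A rough Python sketch of the doubling step under one common sign convention (conventions vary): pairs multiply as $(a, b)(c, d) = (ac - \bar{d}b, da + b\bar{c})$ with conjugate $\overline{(a, b)} = (\bar{a}, -b)$, applied recursively starting from the <real numbers>:
``
# Sketch: Cayley-Dickson doubling on nested pairs, one sign convention assumed.
def conj(x): return (conj(x[0]), neg(x[1])) if isinstance(x, tuple) else x
def neg(x):  return (neg(x[0]), neg(x[1])) if isinstance(x, tuple) else -x
def add(x, y): return (add(x[0], y[0]), add(x[1], y[1])) if isinstance(x, tuple) else x + y
def sub(x, y): return (sub(x[0], y[0]), sub(x[1], y[1])) if isinstance(x, tuple) else x - y

def mul(x, y):
    if not isinstance(x, tuple):
        return x * y
    a, b = x
    c, d = y
    return (sub(mul(a, c), mul(conj(d), b)), add(mul(d, a), mul(b, conj(c))))

# Doubling the reals gives the complex numbers:
i = (0, 1)
print(mul(i, i))  # (-1, 0), i.e. i^2 = -1

# Doubling again gives the quaternions, which are no longer commutative:
qi, qj = ((0, 1), (0, 0)), ((0, 0), (1, 0))
print(mul(qi, qj))  # ((0, 0), (0, 1))
print(mul(qj, qi))  # ((0, 0), (0, -1)): opposite sign, so ij != ji
``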
= Quaternion
{parent=Cayley-Dickson construction}
{title2=$\H$}
{title2=4}
{wiki}
Kind of extends the <complex numbers>.
Some facts that make them stand out:
* one of the only three real <associative> <division algebras> in addition to the <real numbers> and <complex numbers>, according to the <classification of associative real division algebras>
* the simplest non-<commutative> <division algebra>. Contrast for example with <complex numbers> where multiplication is <commutative>
= Octonion
{parent=Cayley-Dickson construction}
{title2=8}
{wiki}
Unlike the <quaternions>, it is non-<associative>.
= Sedenion
{parent=Cayley-Dickson construction}
{title2=16}
{wiki}
= Ordered pair
{parent=Formalization of mathematics}
{wiki}
<Sets> are unordered, but we can use them to create ordered objects, which are of fundamental importance. Notably, they are used in the definition of <functions>.
= Logic
{parent=Formalization of mathematics}
{wiki}
= Propositional logic
{parent=Logic}
{wiki}
This is the part of the <formalization of mathematics> that deals only with the propositions.
In some systems, e.g. <Metamath>, <modus ponens> alone tends to be enough: everything else can be defined based on it.
= Modus ponens
{parent=Propositional logic}
{wiki}
= If and only if
{parent=Propositional logic}
{wiki}
= Iff
{synonym}
{title2}
= First-order logic
{parent=Logic}
{wiki}
Builds on top of <propositional logic>, adding notably <existential quantification>.
= Existential quantification
{parent=First-order logic}
{title2=$\exists$}
{wiki}
Models <existence> in the context of the <formalization of mathematics>.
= Existence and uniqueness
{parent=Existential quantification}
{wiki}
Existence and uniqueness results are fundamental in <mathematics> because we often define objects by their properties, and then start calling them "the object", which is fantastically convenient.
But calling something "the object" only makes sense if there exists exactly one, and only one, object that satisfies the properties.
One particular context where these come up very explicitly is in solutions to <differential equations>, e.g. <existence and uniqueness of solutions of partial differential equations>.
= Existence
{parent=Existence and uniqueness}
= Exists
{synonym}
= Exist
{synonym}
= Uniqueness
{parent=Existence and uniqueness}
= Unique
{synonym}
= Universal quantification
{parent=First-order logic}
{title2=$\forall$}
{wiki}