= Geometry
{wiki}

= Minimum bounding box
{parent=Geometry}
{wiki}

= Bounding box
{parent=Geometry}
{wiki}

= Fractal
{parent=Geometry}
{wiki}

= Point
{disambiguate=geometry}
{parent=Geometry}
{wiki}

= Point
{synonym}

= Line
{disambiguate=geometry}
{parent=Point (geometry)}
{wiki}

= Line
{synonym}

= Hyperplane
{parent=Point (geometry)}
{wiki}

Generalization of a <plane> to any number of dimensions.

Kind of the opposite of a line: the line has dimension 1, and the hyperplane has dimension D-1.

In $D=2$, both happen to coincide, a boring example of an <exceptional isomorphism>.

= Plane
{disambiguate=geometry}
{parent=Hyperplane}
{wiki}

= Plane
{synonym}

= n-sphere
{parent=Geometry}
{title2=$S^n$}
{wiki}

= Antipodal point
{parent=n-sphere}
{wiki}

= Diameter
{parent=n-sphere}
{wiki}

= Radius
{parent=Diameter}
{wiki}

= Circle
{parent=n-sphere}
{title2=$S^1$}
{wiki}

= 1-sphere
{synonym}
{title2}

= Squaring the circle
{parent=Circle}
{wiki}

= Tarski's circle-squaring problem
{c}
{parent=Circle}
{title2=Cut a circle into a square}
{wiki}

Does not require straight line cuts.

= Sphere
{parent=n-sphere}
{title2=$S^2$}
{wiki}

= 2-sphere
{synonym}
{title2}

= Great circle
{parent=Sphere}
{wiki}

= 3-sphere
{parent=n-sphere}
{title2=$S^3$}
{wiki}

Diffeomorphic to <SU(2)>.

= Projective geometry
{parent=Geometry}
{wiki}

= Projective space
{parent=Projective geometry}
{title2=$\projectiveSpace(V)$}
{wiki}

A <unique> projective space can be defined for any <vector space>.

The projective space associated with a given <vector space> $V$ is denoted $\projectiveSpace(V)$.

The definition is to take the vector space, remove the zero element, and identify all elements that lie on the same line through the origin, i.e. $\vec{v} = \lambda \vec{w}$ for some nonzero scalar $\lambda$.

The most important initial example to study is the <real projective plane>.

= Projective plane
{parent=Projective space}
{wiki}

= Real projective space
{parent=Projective geometry}
{title2=$RP^n$}
{title2=$\projectiveSpace(\R^{n+1})$}

In the real case at least, it is possible to add a <metric (mathematics)> to these spaces, leading to <elliptic geometry>.

= Real projective line
{parent=Real projective space}
{title2=$RP^1$}
{title2=$\projectiveSpace(\R^2)$}
{wiki}

Just a <circle>.

Take $\R^2$ with a line at $x = 0$. Identify all the points that an observer 

= Real projective plane
{parent=Real projective space}
{title2=$RP^2$}
{title2=$\projectiveSpace(\R^3)$}
{wiki}

For some reason, <Ciro Santilli> is mildly obsessed with understanding and visualizing the real projective plane.

To see why this is called a plane, move the center of the sphere to $z=1$, and project each line passing through the center of the sphere onto the x-y plane. This works for all points of the sphere, except those on the equator $z=1$. Those are the <points at infinity>. Note that there is one such point at infinity for each direction in the x-y plane.

= Synthetic geometry of the real projective plane
{parent=Real projective plane}

It is good to think about what <Euclid's postulates> look like in the real projective plane:
* two parallel lines on the plane meet at a point on the sphere!

  Since there is one point of infinity for each direction, there is one such point for every direction the two parallel lines might be at. The <parallel postulate> does not hold, and is replaced with a simpler more elegant version: every two lines meet at exactly one point.

  One thing to note however is that the <real projective plane> does not have <angles> defined on it by definition. Those can be defined, forming <elliptic geometry> through the <projective model of elliptic geometry>, but we can also just interpret "parallel lines" as "two lines that meet at a point at infinity".
* points in the real projective plane are lines in <\R^3>
* lines in the real projective plane are planes in <\R^3>.

  For every two projective points there is a single projective line that passes through them.

  Since it is a plane in <\R^3>, it always intersects the real plane at a line.

  Note however that not all lines in the real plane correspond to a projective line: only lines tangent to a circle at zero do.

Unlike the <real projective line> which is <homotopic> to the <circle>, the <real projective plane> is not <homotopic> to the <sphere>.

The <topological> difference between the <sphere> and the <real projective space> is that for the <sphere> all those points in the x-y circle are identified to a single point.

A more general version of this argument is the <classification of closed surfaces>, in which the <real projective plane> is a <sphere> with a hole cut out and one <Möbius strip> glued in.

= Model of the real projective plane
{parent=Real projective plane}

= Lines through origin model of the real projective plane
{parent=Model of the real projective plane}

This is the standard model.

= Spherical cap model of the real projective plane
{parent=Model of the real projective plane}

<Ciro Santilli>'s preferred visualization of the real projective plane is a small variant of the standard "lines through origin in <\R^3>".

Take an open half <sphere>, e.g. a sphere but only the points with $z > 0$.

Each point in the half sphere identifies a unique line through the origin.

Then, the only lines missing are the lines in the x-y plane itself.

For those sphere points on the <circle> in the x-y plane, you should think of them as magic points that are identified with the corresponding <antipodal point>, also on the x-y plane, but on the other side of the origin. So basically you can teleport from one of those to the other side, and you are still at the same point.

Ciro likes this model because then all the magic is confined just to the $z=0$ part of the model, and everything else looks exactly like the sphere.

It is useful to contrast this with the sphere itself. In the sphere, all points in the circle $z = 0$ are the same point. But this is not the case for the <projective plane>. You cannot instantly go to any other point on the $z=0$ circle by just moving a little bit, you have to walk around that circle.

\Image[https://raw.githubusercontent.com/cirosantilli/media/master/spherical-cap-model-of-the-real-projective-plane.svg]
{title=Spherical cap model of the real projective plane}
{description=On the x-y plane, you can magically travel immediately between <antipodal points> such as A/A', B/B' and C/C'. Or equivalently, those pairs are the same point. Every other point outside the x-y plane is just a regular point like a normal <sphere>.}

= The real projective plane is not simply connected
{parent=Real projective plane}

To see that the <real projective plane> is not a <simply connected space>, consider the <lines through origin model of the real projective plane>, and take a <loop (topology)> that starts at $(1, 0, 0)$, moves along the $y=0$ <great circle> and ends at $(-1, 0, 0)$.

Note that both of those points are the same, so we have a loop.

Now try to shrink it to a point.

There's just no way!

= Point at infinity
{parent=Real projective plane}
{wiki}

= Points at infinity
{synonym}

= Homogenous coordinates
{parent=Real projective plane}
{wiki}

= Polytope
{parent=Geometry}
{wiki}

A <polygon> is a 2-dimensional <polytope>, and a <polyhedron> is a 3-dimensional <polytope>.

= Convex polytope
{parent=Polytope}
{wiki}

= Convex
{synonym}

= Regular polytope
{parent=Polytope}
{wiki}

TODO understand and explain definition.

= Classification of regular polytopes
{parent=Regular polytope}
{{wiki=Regular_polytope#Classification_and_description}}

The 3D regular convex polyhedra are super famous, have a name, <Platonic solid>, and have been known since antiquity. In particular, there are only 5 of them.

The counts per dimension are:
\Table[
|| Dimension
|| Count

| 2
| Infinite

| 3
| 5

| 4
| 6

| >4
| 3
]
{title=Number of regular polytopes per dimension}

The cool thing is that the 3 that exist in 5+ dimensions are all of one of the three families:
* <simplex>
* <hypercube>
* <cross polytope>
Then, the 2 remaining 3D ones have 4D analogues, and the sixth one in 4D does not have a 3D analogue: https://en.wikipedia.org/wiki/24-cell[the 24-cell]. Yes, this is the kind of irregular stuff <Ciro Santilli> lives <the beauty of mathematics>[for].

= Simplex
{parent=Classification of regular polytopes}
{wiki}

<Triangle>, <tetrahedron>.

The name does not imply regular by default. For regular ones, you should say "regular polytope".

Non-regular description: take the convex hull of $D + 1$ vertices that do not all lie on a single hyperplane.

= Hypercube
{parent=Classification of regular polytopes}
{wiki}

<square>[Square], cube. The 4D case is known as the <tesseract>.

Convex hull of all $\{-1, 1\}^D$ (<Cartesian product> power) D-tuples, e.g. in <3D>:
``
( 1,  1,  1)
( 1,  1, -1)
( 1, -1,  1)
( 1, -1, -1)
(-1,  1,  1)
(-1,  1, -1)
(-1, -1,  1)
(-1, -1, -1)
``

From this we see that there are $2^D$ <vertices>.

Two <vertices> are linked iff they differ by a single number. So each vertex has D neighbors.
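
Both counts are easy to check by brute force enumeration; a minimal Python sketch:
``
import itertools

D = 4  # dimension, can be changed

# All 2^D vertices of the hypercube: D-tuples with entries in {-1, 1}.
vertices = list(itertools.product([-1, 1], repeat=D))
assert len(vertices) == 2 ** D

# Two vertices are linked iff they differ in exactly one coordinate.
def linked(u, v):
    return sum(1 for a, b in zip(u, v) if a != b) == 1

# Every vertex has exactly D neighbors.
for u in vertices:
    assert sum(linked(u, v) for v in vertices) == D
``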

= Hyperrectangle
{parent=Hypercube}
{wiki}

The <regular polytope>[non-regular] version of the <hypercube>.

= Cross polytope
{parent=Classification of regular polytopes}
{wiki}

Examples: <square>, <octahedron>.

Take $(0, 0, 0, \dots, 0)$ and flip one of the 0's to $\pm 1$. Therefore it has $2 \times D$ <vertices>.

Each vertex $V$ is linked to every other vertex, except its opposite $-V$.
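
Again easy to check by brute force; a minimal Python sketch analogous to the <hypercube> one:
``
D = 4  # dimension

# The 2 * D vertices: all vectors with a single entry set to +1 or -1.
vertices = []
for i in range(D):
    for sign in (1, -1):
        v = [0] * D
        v[i] = sign
        vertices.append(tuple(v))
assert len(vertices) == 2 * D

# Each vertex is linked to every other vertex except its opposite.
def opposite(u, v):
    return all(a == -b for a, b in zip(u, v))

for u in vertices:
    neighbors = [v for v in vertices if v != u and not opposite(u, v)]
    assert len(neighbors) == 2 * D - 2
``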

= Polygon
{parent=Polytope}
{wiki}

= Quadrilateral
{parent=Polygon}
{wiki}

= Rectangle
{parent=Quadrilateral}
{wiki}

= Parallelogram
{parent=Polygon}
{wiki}

= Parallelepiped
{parent=Parallelogram}
{wiki}

The <3D> analogue of the <parallelogram>.

= Volume of the parallelepiped
{parent=Parallelepiped}

= Volume of a parallelepiped
{synonym}

= Regular polygon
{parent=Polygon}
{wiki}

= Regular convex polygon
{parent=Regular polygon}

= Triangle
{parent=Regular convex polygon}
{wiki}

= Square
{parent=Regular convex polygon}
{tag=Rectangle}
{wiki}

= Pentagon
{parent=Regular convex polygon}
{wiki}

= Hexagon
{parent=Regular convex polygon}
{wiki}

= Octagon
{parent=Regular convex polygon}
{wiki}

= Polyhedron
{parent=Polytope}
{wiki}

= Polyhedra
{synonym}

= Tetrahedron
{parent=Polyhedron}
{wiki}

= Octahedron
{parent=Polyhedron}
{wiki}

= Regular polyhedron
{parent=Polytope}
{wiki}

= Platonic solid
{c}
{parent=Regular polyhedron}
{wiki}

A <convex> <regular polyhedron>.

Their <the beauty of mathematics>[beauty is a classification type result] as stated at <classification of regular polytopes>.

https://en.wikipedia.org/wiki/Platonic_solid#Topological_proof

= 4-polytope
{parent=Polytope}
{wiki}

= Regular 4-polytope
{parent=4-polytope}
{wiki}

= Tesseract
{parent=Regular 4-polytope}
{wiki}

= Differential geometry
{parent=Geometry}

Bibliography:
* https://maths-people.anu.edu.au/~andrews/DG/ Lectures on Differential Geometry by Ben Andrews

= Lie group
{c}
{parent=Differential geometry}
{wiki}

The key and central motivation for studying Lie groups and their <Lie algebras> appears to be to characterize <symmetry> in <Lagrangian mechanics> through <Noether's theorem>, just start from there.

Notably, <local symmetries> appear to map to forces, and "local" means "around the identity". More precisely: <local symmetries of the Lagrangian imply conserved currents>.

TODO <Ciro Santilli> really wants to understand what all the fuss is about:
* https://math.stackexchange.com/questions/1322206/lie-groups-lie-algebra-applications
* https://mathoverflow.net/questions/58696/why-study-lie-algebras
* https://math.stackexchange.com/questions/405406/definition-of-lie-algebra

Oh, there is a low dimensional classification! Ciro is <high flying bird vs gophers>[a sucker for classification theorems]! https://en.wikipedia.org/wiki/Classification_of_low-dimensional_real_Lie_algebras

The fact that there are elements arbitrarily close to the identity, which is only possible due to the group being continuous, is the key factor that simplifies the treatment of Lie groups, and follows the philosophy of <continuous problems are simpler than discrete ones>.

Bibliography:
* https://youtu.be/kpeP3ioiHcw?t=2655 "Particle Physics Topic 6: Lie Groups and Lie Algebras" by Alex Flournoy (2016). Good <special orthogonal group>[SO(3)] explicit exponential expansion example. Then next lecture shows why SU(2) is the representation of SO(3). Next ones appear to eventually get to the physical usefulness of the thing, but I lost patience. Not too far out though.
* https://www.youtube.com/playlist?list=PLRlVmXqzHjURZO0fviJuyikvKlGS6rXrb "Lie Groups and Lie Algebras" playlist by XylyXylyX (2018). Tutorial with infinitely many hours
* http://www.staff.science.uu.nl/~hooft101/lectures/lieg07.pdf
* http://www.physics.drexel.edu/~bob/LieGroups.html

\Video[https://www.youtube.com/watch?v=ZRca3Ggpy_g]
{title=What is Lie theory? by Mathemaniac 2023}

= Lie derivative
{c}
{parent=Lie group}
{wiki}

Bibliography:
* https://takeshimg92.github.io/posts/lie_derivatives.html

= Applications of Lie groups to differential equations
{parent=Lie group}

= How to use Lie Groups to solve differential equations
{synonym}
{title2}

Solving <differential equations> was apparently Lie's original motivation for developing <Lie groups>. It is therefore likely one of the most understandable ways to approach it.

It appears that Lie's goal was to understand when can a differential equation have an explicitly written solution, much like <Galois theory> had done for <algebraic equations>. Both approaches use <symmetry> as the key tool.

* https://www.researchgate.net/profile/Michael_Frewer/publication/269465435_Lie-Groups_as_a_Tool_for_Solving_Differential_Equations/links/548cbf250cf214269f20e267/Lie-Groups-as-a-Tool-for-Solving-Differential-Equations.pdf Lie-Groups as a Tool for Solving Differential Equations by Michael Frewer. Slides with good examples.

= Lie algebra
{c}
{parent=Lie group}
{wiki}

Like everything else in <Lie groups>, first start with the <matrix> case as discussed at <Lie algebra of a matrix Lie group>{full}.

Intuitively, a <Lie algebra> is a simpler object than a <Lie group>. Without any extra structure, groups can be very complicated non-linear objects. But a <Lie algebra> is just an <algebra over a field>, and one with a restricted <bilinear map> called the <Lie bracket>, that has to also be <alternating multilinear map>[alternating] and satisfy the <Jacobi identity>.

Another important way to think about Lie algebras, is as <infinitesimal generators>.

Because of the <Lie group-Lie algebra correspondence>, we know that there is almost a <bijection> between each <Lie group> and the corresponding <Lie algebra>. So it makes sense to try and study the algebra instead of the group itself whenever possible, to try and get insight and proofs in that simpler framework. This is the key reason why people study Lie algebras. One is philosophically reminded of how <normal subgroups> are a simpler representation of <group homomorphisms>.

To make things even simpler, because <all vector spaces of the same dimension on a given field are isomorphic>, the only things we need to specify a <Lie group> through a <Lie algebra> are:
* the dimension
* the <Lie bracket>
Note that the <Lie bracket> can look different under different bases of the <Lie algebra> however. This is shown for example at <Physics from Symmetry by Jakob Schwichtenberg (2015)> page 71 for the <Lorentz group>.

As mentioned at <Lie Groups, Physics, and Geometry by Robert Gilmore (2008)> Chapter 4 "Lie Algebras", taking the <Lie algebra> around the identity is mostly a convention, we could treat any other point, and things are more or less equivalent.

Bibliography:
* https://physicstravelguide.com/advanced_tools/group_theory/lie_algebras#tab__concrete on <Physics Travel Guide>
* http://jakobschwichtenberg.com/lie-algebra-able-describe-group/ by <Jakob Schwichtenberg>

= Infinitesimal generator
{parent=Lie algebra}

Elements of a <Lie algebra> can (should!) be seen as a continuous analogue to the <generating set of a group> in finite groups.

For continuous groups however, we can't have a finite generating set in the strict sense, as a finite set won't ever cover every possible point.

But the <generator of a Lie algebra> can be finite.

And just like in finite groups, where you can specify the full group by specifying only the relationships between generating elements, in the Lie algebra you can almost specify the full group by specifying the relationships between the elements of a <generator of the Lie algebra>.

This "specification of a relation" is done by defining the <Lie bracket>.

The reason why the algebra works out well for continuous stuff is that by definition an <algebra over a field> is a <vector space> with some extra structure, and we know very well how to make infinitesimal elements in a vector space: just multiply its vectors by a constant $c$ that can be arbitrarily small.

= Lie group-Lie algebra correspondence
{c}
{parent=Lie algebra}
{wiki=Lie_group–Lie_algebra_correspondence}

Every <Lie algebra> corresponds to a single <simply connected> <Lie group>.

The <Baker-Campbell-Hausdorff formula> basically defines how to map an algebra to the group.

Bibliography:
* <Lie Groups, Physics, and Geometry by Robert Gilmore (2008)> Chapter 7 "EXPonentiation"

= Lie algebra exponential covering problem
{c}
{parent=Lie group-Lie algebra correspondence}

<Lie Groups, Physics, and Geometry by Robert Gilmore (2008)> 7.2 "The covering problem" gives some amazing intuition on the subject as usual.

= A single exponential map is not enough to recover a simple Lie group from its algebra
{parent=Lie algebra exponential covering problem}

Example at: <Lie Groups, Physics, and Geometry by Robert Gilmore (2008)> Chapter 7 "EXPonentiation".

= The product of a exponential of the compact algebra with that of the non-compact algebra recovers a simple Lie from its algebra
{parent=Lie algebra exponential covering problem}

Example at: <Lie Groups, Physics, and Geometry by Robert Gilmore (2008)> Chapter 7 "EXPonentiation".

Furthermore, the non-<compact> part is always <isomorphic> to <\R^n>; only the <compact> part can have more interesting structure.

= Two different Lie groups can have the same Lie algebra
{parent=Lie group-Lie algebra correspondence}

The most important example is perhaps <SO(3)> and <SU(2)>, both of which have the same <Lie algebra>, but are not isomorphic.

= Every Lie algebra has a unique single corresponding simply connected Lie group
{parent=Two different Lie groups can have the same Lie algebra}

This <simply connected> group is called the <universal covering group>.

E.g. in the case of <SO(3)> and <SU(2)>, <SU(2)> is <simply connected>, but <SO(3)> is not.

= Universal covering group
{parent=Every Lie algebra has a unique single corresponding simply connected Lie group}

The <unique> group referred to at: <every Lie algebra has a unique single corresponding simply connected Lie group>.

= Every Lie group that has a given Lie algebra is the image of an homomorphism from the universal cover group
{parent=Two different Lie groups can have the same Lie algebra}

= Lie bracket
{c}
{parent=Lie algebra}

= Exponential map
{parent=Lie algebra}
{wiki}

Most commonly refers to: <exponential map (Lie theory)>.

= Exponential map
{disambiguate=Lie theory}
{parent=Exponential map}
{wiki}

Like everything else in <Lie group> theory, you should first look at the <matrix> version of this operation: the <matrix exponential>.

The <exponential map> links small transformations around the origin (infinitely small) back to larger finite transformations, and small transformations around the origin are something we can deal with via a <Lie algebra>, so this map links the two worlds.

The idea is that we can decompose a finite transformation into infinitely many arbitrarily small ones around the origin, and proceed just like the <product definition of the exponential function>.

The definition of the exponential map is simply the same as that of the regular exponential function as given at <Taylor expansion definition of the exponential function>, except that the argument $x$ can now be an operator instead of just a number.

Examples:
* <the derivative is the generator of the translation group>
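
To see the series definition in action on a concrete operator (here just a <matrix>), we can compare truncated partial sums with a library implementation of the <matrix exponential>; a minimal sketch assuming NumPy and SciPy are available:
``
import numpy as np
from scipy.linalg import expm

X = np.array([[0.0, -1.0], [1.0, 0.0]])  # generator of 2D rotations

# Truncated Taylor series: sum of X^k / k! for k = 0..N.
def exp_taylor(X, N):
    result = np.zeros_like(X)
    term = np.eye(X.shape[0])
    for k in range(N + 1):
        result = result + term
        term = term @ X / (k + 1)
    return result

# The partial sums converge to the true matrix exponential.
for N in (2, 5, 20):
    print(N, np.abs(exp_taylor(X, N) - expm(X)).max())
``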

= Baker-Campbell-Hausdorff formula
{c}
{parent=Lie algebra}
{title2=BCH formula}
{wiki=Baker–Campbell–Hausdorff formula}

Solution $Z$ for given $X$ and $Y$ of:
$$
e^Z = e^X e^Y
$$
where $e$ is the <exponential map>.

If we consider just <real number>[real numbers], $Z = X + Y$, but when $X$ and $Y$ are <non-commutative>, things are not so simple.

Furthermore, TODO confirm it is possible that a solution does not exist at all if $X$ and $Y$ aren't sufficiently small.

This formula is likely the basis for the <Lie group-Lie algebra correspondence>. With it, we express the actual <group operation> in terms of the Lie algebra operations.

Notably, remember that an <algebra over a field> is just a <vector space> with one extra product operation defined.

Vector spaces are simple because <all vector spaces of the same dimension on a given field are isomorphic>, so besides the dimension, once we define a <Lie bracket>, we also define the corresponding <Lie group>.

Since a group is basically defined by what the group operation does to two arbitrary elements, once we have that defined via the <Baker-Campbell-Hausdorff formula>, we are basically done defining the group in terms of the algebra.
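
To get a concrete feeling for how badly $e^{X}e^{Y} = e^{X + Y}$ fails when $X$ and $Y$ don't commute, and how the $[X, Y]/2$ term of the BCH formula fixes most of the error, here is a minimal numerical sketch, assuming NumPy and SciPy are available:
``
import numpy as np
from scipy.linalg import expm

# Two small non-commuting matrices.
X = np.array([[0.0, 0.1], [0.0, 0.0]])
Y = np.array([[0.0, 0.0], [0.2, 0.0]])

lhs = expm(X) @ expm(Y)

# Naive guess: fails because X and Y do not commute.
naive = expm(X + Y)

# First BCH correction: Z = X + Y + [X, Y]/2 + higher order terms.
bch2 = expm(X + Y + (X @ Y - Y @ X) / 2)

print(np.abs(lhs - naive).max())  # noticeably nonzero
print(np.abs(lhs - bch2).max())   # much smaller, but still nonzero
``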

= Generator of a Lie algebra
{parent=Lie algebra}

= Generators of a Lie algebra
{parent=Lie algebra}

= Generator of the Lie algebra
{synonym}

Cardinality $\leq$ dimension of the vector space.

= Continuous symmetry
{parent=Lie group}
{wiki}

Basically a synonym for <Lie group> which is the way of modelling them.

= Local symmetry
{parent=Continuous symmetry}
{wiki}

Local symmetries appear to be a synonym to <internal symmetry>, see description at: <internal and spacetime symmetries>{full}.

As mentioned at <quote axelmaas local symmetry>, local symmetries map to forces in the <Standard Model>.

Appears to be a synonym for: <gauge symmetry>.

A local symmetry is a symmetry where you apply a different transformation at each point, instead of a single transformation for every point.

TODO what's the point of a local symmetry?

Bibliography:
* <quantum field theory lecture by tobias osborne 2017/lecture 3>
* https://physics.stackexchange.com/questions/48188/local-and-global-symmetries
* https://www.physics.rutgers.edu/grad/618/lects/localsym.pdf by Joel Shapiro gives one nice high level intuitive idea:
  \Q[In relativistic physics, global objects are awkward because the finite velocity with which effects can propagate is expressed naturally in terms of local objects. For this reason high energy physics is expressed in terms of a field theory.]
* <Quora>:
  * https://www.quora.com/What-does-a-local-symmetry-mean-in-physics
  * https://www.quora.com/What-is-the-difference-between-local-and-global-symmetries-in-physics
  * https://www.quora.com/What-is-the-difference-between-global-and-local-gauge-symmetry

= Local symmetries of the Lagrangian imply conserved currents
{parent=Local symmetry}

TODO. I think this is the key point. Notably, <U(1)> symmetry implies <charge conservation>.

More precisely, each <generator of a Lie algebra>[generator of the corresponding Lie algebra] leads to one separate conserved current, such that a single symmetry can lead to multiple conserved currents.

This is basically the <local symmetry> version of <Noether's theorem>.

Then to maintain charge conservation, we have to maintain <local symmetry>, which in turn means we have to add a <gauge field> as shown at <video Deriving the qED Lagrangian by Dietterich Labs (2018)>.

Forces can then be seen as kind of a side effect of this.

Bibliography:
* https://photonics101.com/relativistic-electrodynamics/gauge-invariance-action-charge-conservation#show-solution has a good explanation of the Gauge transformation. TODO how does that relate to <U(1)> symmetry?
* https://physics.stackexchange.com/questions/57901/noether-theorem-gauge-symmetry-and-conservation-of-charge

= Important Lie group
{parent=Lie group}

= Important Lie groups
{synonym}

= Matrix Lie group
{parent=Important Lie group}

This important and common simple case has easy properties.

= Every closed subgroup of $GL(n, \C)$ is a Lie group
{parent=Matrix Lie group}

<An Introduction to Tensors and Group Theory for Physicists by Nadir Jeevanjee (2011)> page 146.

= Lie algebra of a matrix Lie group
{c}
{parent=Matrix Lie group}

For this sub-case, we can define the <Lie algebra> of a Lie group $G$ as the set of all matrices $M$ such that for all $t \in \R$:
$$
e^{tM} \in G
$$
If we fix a given $M$ and vary $t$, we obtain a <subgroup> of $G$. This type of subgroup is known as a <one parameter subgroup>.

The immediate question is then if every element of $G$ can be reached in a unique way (i.e. is the exponential map a <bijection>). By looking at the <matrix logarithm> however we conclude that this is not the case for <real> matrices, but it is for <complex> matrices.

Examples:
* <Lie algebra of GL(n)>{child}
* <Lie algebra of SL(2)>{child}
* <Lie algebra of SO(3)>{child}
* <Lie algebra of SU(2)>{child}

TODO example. It can be seen that the Lie algebra is not closed under <matrix multiplication>, even though the corresponding group is by definition. But it is closed under the <Lie bracket> operation.
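
A concrete instance of that closure statement, using two antisymmetric matrices (elements of the <Lie algebra of SO(3)>), in a minimal sketch assuming NumPy is available:
``
import numpy as np

# Two elements of the Lie algebra of SO(3): antisymmetric matrices.
Lx = np.array([[0., 0., 0.], [0., 0., -1.], [0., 1., 0.]])
Ly = np.array([[0., 0., 1.], [0., 0., 0.], [-1., 0., 0.]])

def is_antisymmetric(A):
    return np.allclose(A.T, -A)

# The plain matrix product leaves the algebra...
assert not is_antisymmetric(Lx @ Ly)

# ...but the Lie bracket [X, Y] = XY - YX stays in it.
assert is_antisymmetric(Lx @ Ly - Ly @ Lx)
``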

= Lie bracket of a matrix Lie group
{c}
{parent=Lie algebra of a matrix Lie group}

$$
[X, Y] = XY - YX
$$

This makes it clear how the <Lie bracket> can be seen as a "measure of non-<commutativity>".

Because the <Lie bracket> has to be a bilinear map, all we need to do to specify it uniquely is to specify how it acts on every pair of some basis of the <Lie algebra>.

Then, together with the <Baker-Campbell-Hausdorff formula> and the <Lie group-Lie algebra correspondence>, this forms an exceptionally compact description of a <Lie group>.

= One parameter subgroup
{parent=Lie algebra of a matrix Lie group}

The one parameter subgroup of a <Lie group> $G$ for a given element $M$ of its <Lie algebra> is the <subgroup> of $G$ given by:
$$
{ e^{tM} | t \in \R }
$$

Intuitively, $M$ is a direction, and $t$ is how far we move along a given direction. This intuition is especially vivid for example in the case of the <Lie algebra of SO(3)>, the <rotation group>.

One parameter subgroups can be seen as the continuous analogue to the <cycle of an element of a group>.

= Classical group
{parent=Important Lie group}
{wiki}

= Symplectic group
{parent=Classical group}
{title2=$Sp(n, F)$}

Intuition, please? Example? https://mathoverflow.net/questions/278641/intuition-for-symplectic-groups The key motivation seems to be related to <Hamiltonian mechanics>. The two arguments of the <bilinear form> correspond to each set of variables in Hamiltonian mechanics: the generalized positions and generalized momentums, which appear in the same number each.

Seems to be the set of matrices that preserve a <skew-symmetric bilinear form>, which is comparable to the <orthogonal group>, which preserves a <symmetric bilinear form>. More precisely, the orthogonal group has:
$$
O^T I O = I
$$
and its generalization the <indefinite orthogonal group> has:
$$
O^T S O = S
$$
where S is symmetric. So for the symplectic group we have matrices Y such that:
$$
Y^T A Y = A
$$
where A is antisymmetric. This is explained at: https://www.ucl.ac.uk/~ucahad0/7302_handout_13.pdf They also explain there that unlike in the analogous <orthogonal group>, that definition ends up excluding determinant -1 automatically.

Therefore, just like the <special orthogonal group>, the symplectic group is also a <subgroup> of the <special linear group>.

= Symplectic matrix
{parent=Symplectic group}
{tag=Named matrix}

= Unitary symplectic group
{parent=Symplectic group}
{title2=$Sp(n)$}

= General linear group
{parent=Important Lie group}
{wiki}

= $GL(n)$
{synonym}
{title2}

= $GL(n, F)$
{synonym}
{title2}

Invertible matrices. Or if you think a bit more generally, an invertible <linear map>.

When the <field (mathematics)> $F$ is not given, it defaults to the <real numbers>.

Non-invertible are excluded "because" otherwise it would not form a <group (mathematics)> (every element must have an inverse). This is therefore the largest possible group under <matrix multiplication>, other matrix multiplication groups being subgroups of it.

= Finite general linear group
{parent=General linear group}
{title2=$GL(n, F_m)$}

= $GL(n, m)$
{synonym}
{title2}

<general linear group> over a <finite field> of order $m$. Remember that due to the <classification of finite fields>, there is one single field for each <prime power> $m$.

Exactly as over the <real numbers>, you just put the finite field elements into a $n \times n$ matrix, and then take the invertible ones.

= Lie algebra of $GL(n)$
{c}
{parent=Important Lie group}

Is the <set of all n-by-y square matrices>.

Because <GL(n)> is a <Lie group> we can use <Lie algebra of a matrix Lie group>{full}.

For every matrix $x$ in the <set of all n-by-y square matrices> $M_n$, $e^x$ has inverse $e^{-x}$.

Note that this works even if $x$ is not <invertible>, and therefore not in <GL(n)>!

Therefore, the Lie algebra of <GL(n)> is the entire <M_n>.
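
A minimal numerical illustration of this, assuming NumPy and SciPy are available:
``
import numpy as np
from scipy.linalg import expm

# A non-invertible (singular) matrix: it is not in GL(2)...
x = np.array([[1.0, 2.0], [2.0, 4.0]])
assert np.isclose(np.linalg.det(x), 0.0)

# ...but e^x is invertible, with inverse e^(-x), so x is in the Lie algebra.
assert np.allclose(expm(x) @ expm(-x), np.eye(2))
``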

= Special linear group
{parent=Important Lie group}
{title2=$SL(n)$}
{wiki}

Special sub-case of the <general linear group> where the determinant equals exactly 1.

= Special linear group of dimension 2
{parent=Special linear group}
{title2=$SL(2)$}
{wiki=SL2(R)}

= Lie algebra of $SL(n)$
{c}
{parent=Special linear group}

= Lie algebra of the special linear group
{c}
{synonym}

= Lie algebra of $SL(2)$
{c}
{parent=Lie algebra of SL(n)}

= Lie algebra of the special linear group of degree 2
{c}
{synonym}

This is a good first concrete example of a Lie algebra. <Lie Groups, Physics, and Geometry by Robert Gilmore (2008)> Chapter 4.2 "How to linearize a Lie Group" works it out in detail.

We can use the following parametrization of the <special linear group> on variables $x$, $y$ and $z$:
$$
M =
\begin{bmatrix}
1 + x & y \\
z & (1 + yz)/(1 + x) \\
\end{bmatrix}
$$

Every element with this parametrization has <determinant> 1:
$$
det(M) = (1 + x)(1 + yz)/(1 + x) - yz = 1
$$
Furthermore, any element can be reached, because by independently setting $x$, $y$ and $z$, $M_{00}$, $M_{01}$ and $M_{10}$ can have any value, and once those three are set, $M_{11}$ is fixed by the determinant.

To find the elements of the <Lie algebra>, we evaluate the derivative on each parameter at 0:
$$
\begin{aligned}
M_x
&=
\evalat{\dv{M}{x}}{(x,y,z) = (0,0,0)}
&=
\evalat{
\begin{bmatrix}
1 & 0 \\
0 & -(1 + yz)/(1 + x)^2 \\
\end{bmatrix}
}{(x,y,z) = (0,0,0)}
&=
\begin{bmatrix}
1 & 0 \\
0 & -1 \\
\end{bmatrix}
\\

M_y
&=
\evalat{\dv{M}{y}}{(x,y,z) = (0,0,0)}
&=
\evalat{
\begin{bmatrix}
0 & 1 \\
0 & z/(1 + x) \\
\end{bmatrix}
}{(x,y,z) = (0,0,0)}
&=
\begin{bmatrix}
0 & 1 \\
0 & 0 \\
\end{bmatrix}
\\

M_z
&=
\evalat{\dv{M}{z}}{(x,y,z) = (0,0,0)}
&=
\evalat{
\begin{bmatrix}
0 & 0 \\
1 & y/(1 + x) \\
\end{bmatrix}
}{(x,y,z) = (0,0,0)}
&=
\begin{bmatrix}
0 & 0 \\
1 & 0 \\
\end{bmatrix}
\\

\end{aligned}
$$

Remembering that the <Lie bracket of a matrix Lie group> is really simple, we can then observe the following <Lie bracket> relations between them:
$$
\begin{aligned}
[M_x, M_y] &= M_xM_y - M_yM_x &= \begin{bmatrix}0 & 1 \\  0 & 0 \\\end{bmatrix} &- \begin{bmatrix}0 & -1 \\ 0 & 0 \\\end{bmatrix} &= \begin{bmatrix}0 & 2 \\  0 &  0 \\\end{bmatrix} &=  2M_y\\
[M_x, M_z] &= M_xM_z - M_zM_x &= \begin{bmatrix}0 & 0 \\ -1 & 0 \\\end{bmatrix} &- \begin{bmatrix}0 &  0 \\ 1 & 0 \\\end{bmatrix} &= \begin{bmatrix}0 & 0 \\ -2 &  0 \\\end{bmatrix} &= -2M_z\\
[M_y, M_z] &= M_yM_z - M_zM_y &= \begin{bmatrix}1 & 0 \\  0 & 0 \\\end{bmatrix} &- \begin{bmatrix}0 &  0 \\ 0 & 1 \\\end{bmatrix} &= \begin{bmatrix}1 & 0 \\  0 & -1 \\\end{bmatrix} &=   M_x\\
\end{aligned}
$$

One key thing to note is that the specific matrices $M_x$, $M_y$ and $M_z$ are not really fundamental: we could easily have had different matrices if we had chosen any other parametrization of the group.

TODO confirm: however, no matter which parametrization we choose, the <Lie bracket> relations between the three elements would always be the same, since it is the number of elements, and the definition of the <Lie bracket>, that is truly fundamental.
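
The calculations above are easy to double check with a computer algebra system; a minimal sketch assuming SymPy is available:
``
import sympy as sp

x, y, z = sp.symbols('x y z')
M = sp.Matrix([[1 + x, y], [z, (1 + y*z) / (1 + x)]])

# The parametrization always has determinant 1.
assert sp.simplify(M.det()) == 1

# Derivatives with respect to each parameter, evaluated at the identity.
at0 = {x: 0, y: 0, z: 0}
Mx = M.diff(x).subs(at0)
My = M.diff(y).subs(at0)
Mz = M.diff(z).subs(at0)
assert Mx == sp.Matrix([[1, 0], [0, -1]])
assert My == sp.Matrix([[0, 1], [0, 0]])
assert Mz == sp.Matrix([[0, 0], [1, 0]])

# The Lie bracket relations.
def bracket(A, B):
    return A * B - B * A

assert bracket(Mx, My) == 2 * My
assert bracket(Mx, Mz) == -2 * Mz
assert bracket(My, Mz) == Mx
``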

<Lie Groups, Physics, and Geometry by Robert Gilmore (2008)> Chapter 4.2 "How to linearize a Lie Group" then calculates the <exponential map> of the vector $M = xM_x + yM_y + zM_z$ as:
$$
e^M = I cosh(\theta) + M sinh(\theta)/\theta
$$
with:
$$
\theta^2 = x^2 + yz
$$

TODO now the natural question is: can we cover the entire Lie group with this exponential? <Lie Groups, Physics, and Geometry by Robert Gilmore (2008)> Chapter 7 "EXPonentiation" explains why not.

= Finite special general linear group
{parent=Special linear group}

= $SL(n, m)$
{synonym}
{title2}

Just like for the <finite general linear group>, the definition of special also works for finite fields, where 1 is the multiplicative identity!

Note that the definition of <orthogonal group> may not have such a clear finite analogue on the other hand.

= Isometry group
{parent=Important Lie group}
{wiki}

The <group (mathematics)> of all transformations that preserve some <bilinear form>, notable examples:
* <orthogonal group>{child} preserves the <inner product>
* <unitary group>{child} preserves a <Hermitian form>
* <Lorentz group>{child} preserves the <Minkowski inner product>

= Lie algebra of a isometry group
{c}
{parent=Isometry group}
{wiki}

We can almost reach the <Lie algebra> of any <isometry group> in a single go. For every $X$ in the <Lie algebra> we must have:
$$
\forall v, w \in V, t \in \R (e^{tX}v|e^{tX}w) = (v|w)
$$
because $e^{tX}$ has to be in the isometry group by definition as shown at <Lie algebra of a matrix Lie group>{full}.

Then:
$$
\evalat{\dv{(e^{tX}v|e^{tX}w)}{t}}{0} = 0
\implies
\evalat{(Xe^{tX}v|e^{tX}w) + (e^{tX}v|Xe^{tX}w)}{0} = 0
\implies
(Xv|w) + (v|Xw) = 0
$$
so we reach:
$$
\forall v, w \in V (Xv|w) = -(v|Xw)
$$
With this relation, we can easily determine the <Lie algebra> of common isometries:
* <Lie algebra of O(n)>
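
As a sanity check of this derivation for the <dot product> case, where the relation forces $X^T = -X$, here is a minimal numerical sketch assuming NumPy and SciPy are available:
``
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(0)

# An antisymmetric matrix X satisfies (Xv|w) = -(v|Xw) for the dot product...
A = rng.standard_normal((3, 3))
X = A - A.T
v = rng.standard_normal(3)
w = rng.standard_normal(3)
assert np.isclose((X @ v) @ w, -(v @ (X @ w)))

# ...and e^(tX) is then an isometry (an orthogonal matrix) for every t.
for t in (0.1, 1.0, 7.0):
    O = expm(t * X)
    assert np.allclose((O @ v) @ (O @ w), v @ w)
``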

Bibliography:
* <An Introduction to Tensors and Group Theory for Physicists by Nadir Jeevanjee (2011)> page 151

= Orthogonal group
{parent=Important Lie group}
{wiki}

= $O(n)$
{synonym}
{title2}

= Definition of the orthogonal group
{parent=Orthogonal group}

Intuitive definition: real group of rotations + reflections.

Mathematical definition that most directly represents this: <the orthogonal group is the group of all matrices that preserve the dot product>.

= The orthogonal group is the group of all matrices that preserve the dot product
{parent=Definition of the orthogonal group}

When viewed as matrices, it is the group of all matrices that preserve the <dot product>, i.e.:
$$
O(n) = { O \in M(n) | \forall x, y, x^Ty = (Ox)^T (Oy) }
$$
This implies that it also preserves important geometric notions such as <norm (mathematics)> (intuitively: distance between two points) and <angles>.

This is perhaps the best "default definition".

= What happens to the definition of the orthogonal group if we choose other types of symmetric bilinear forms
{parent=The orthogonal group is the group of all matrices that preserve the dot product}

Looking at the definition <the orthogonal group is the group of all matrices that preserve the dot product>, we notice that the <dot product> is one example of a <positive definite symmetric bilinear form>, which in turn can also be represented by a matrix as shown at: <matrix representation of a symmetric bilinear form>{full}.

By looking at this more general point of view, we could ask ourselves what happens to the group if instead of the <dot product> we took a more general <bilinear form>, e.g.:
* $I_2$: another <positive definite symmetric bilinear form> such as $(x_1, x_2)^T(y_1, y_2) = 2 x_1 y_1 + x_2 y_2$?
* $I_-$ what if we drop the <positive definite> requirement, e.g. $(x_1, x_2)^T(y_1, y_2) = - x_1 y_1 + x_2 y_2$?
The answers to those questions are given by the <Sylvester's law of inertia> at <all indefinite orthogonal groups of matrices of equal metric signature are isomorphic>{full}.

= The orthogonal group is the group of all invertible matrices where the inverse is equal to the transpose
{parent=Definition of the orthogonal group}

Let's show that this definition is equivalent to <the orthogonal group is the group of all matrices that preserve the dot product>.

Note that:
$$
x^Ty = (Ox)^T (Oy) = x^T O^T O y
$$
and for that to be true for all possible $x$ and $y$ then we must have:
$$
O^T O = I
$$
i.e. the <matrix inverse> is equal to the <transpose>.

Conversely, if:
$$
O^T O = I
$$
is true, then
$$
(Ox)^T (Oy) = x^T (O^T O) y = x^Ty
$$

These matrices are called the <orthogonal matrices>.

TODO is there any more intuitive way to think about this?
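
One way to convince yourself that the two definitions match is to just test them numerically on a rotation matrix; a minimal sketch assuming NumPy is available:
``
import numpy as np

theta = 0.3
# A rotation matrix, the classic example of an orthogonal matrix.
O = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])

# Inverse equals transpose...
assert np.allclose(O.T @ O, np.eye(2))

# ...and therefore dot products (and so norms and angles) are preserved.
rng = np.random.default_rng(0)
x = rng.standard_normal(2)
y = rng.standard_normal(2)
assert np.isclose(x @ y, (O @ x) @ (O @ y))
``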

= Elements of the orthogonal group have determinant plus or minus one
{parent=The orthogonal group is the group of all invertible matrices where the inverse is equal to the transpose}

From <the orthogonal group is the group of all invertible matrices where the inverse is equal to the transpose>: $O^T O = I$ implies $det(O)^2 = det(O^T)det(O) = det(I) = 1$, and therefore $det(O) = \pm 1$.

= The orthogonal group is the group of all matrices with orthonormal rows and orthonormal columns
{parent=Definition of the orthogonal group}

Or equivalently, the set of rows is <orthonormal>, and so is the set of columns. TODO proof that it is equivalent to <the orthogonal group is the group of all matrices that preserve the dot product>.

= Topology of the orthogonal group
{parent=Orthogonal group}

= The orthogonal group is compact
{parent=Topology of the orthogonal group}

= Connected components of the orthogonal group
{parent=Topology of the orthogonal group}

= The orthogonal group has two connected components
{synonym}
{title2}

The <orthogonal group> has 2 <connected components>:
* one with determinant +1, which is itself a <subgroup> known as the <special orthogonal group>. These are pure <rotations> without a reflection.
* the other with determinant -1. This is not a <subgroup> as it does not contain the identity. It represents <rotations> with a reflection.

It is instructive to visualize what the $\pm1$ determinant choice looks like in <SO(3)>:
* you take the first basis vector and move it to any other position. You therefore have two angular parameters.
* you take the second one, and move it to be orthogonal to the first new vector. You can choose any point on a circle around the first new vector, and so you have another angular parameter.
* at last, for the last one, there are only two choices that are orthogonal to both previous ones, one in each direction. It is this direction, relative to the others, that determines the "has a reflection or not" thing.

As a result, when $n$ is odd it is <group isomorphism>[isomorphic] to the <direct product of groups>[direct product] of the special orthogonal group by the <cyclic group> of <order (algebra)> 2:
$$
O(n) \cong SO(n) \times C_2
$$
since in that case $-I$ has determinant -1 and commutes with everything. For even $n$ the two components still exist, but $O(n)$ is only a semidirect product of $SO(n)$ and $C_2$.

A low dimensional example:
$$
O(1) \cong SO(1) \times C_2
$$
because you can only do two things: to flip or not to flip the line around zero.

Note that having the determinant plus or minus 1 is not a definition: there are non-orthogonal matrices with determinant plus or minus 1. This is just a property. E.g.:
$$
M = \begin{bmatrix} 2 & 3 \\ 1 & 2 \\ \end{bmatrix}
$$
has determinant 1, but:
$$
M^TM = \begin{bmatrix} 5 & 8 \\ 8 & 11 \\ \end{bmatrix}
$$
so $M$ is not orthogonal.

= Lie algebra of $O(n)$
{c}
{parent=Orthogonal group}

From <Lie algebra of a isometry group>{full} we reach $(Xv|w) = -(v|Xw)$, which for the <dot product> means $X^T = -X$: the Lie algebra of $O(n)$ is the set of antisymmetric matrices.

= Special orthogonal group
{parent=Orthogonal group}

= $SO(n)$
{synonym}
{title2}

= Rotation group
{synonym}
{title2}

= Rotation
{synonym}

= Rotate
{synonym}

Group of rotations of a rigid body.

Like <orthogonal group> but without reflections. So it is a "special case" of the orthogonal group.

This is a subgroup of both the <orthogonal group> and the <special linear group>.

= Lie algebra of $SO(3)$
{c}
{parent=Special orthogonal group}

We can reach it by taking the rotations in three directions, e.g. a rotation around the z axis:
$$
R_z(\theta)
=
\begin{bmatrix}
cos(\theta) & -sin(\theta) & 0 \\
sin(\theta) & cos(\theta) & 0 \\
0 & 0 & 1 \\
\end{bmatrix}
$$
then we derive and evaluate at 0:
$$
L_z
=
\evalat{\dv{R_z(\theta)}{\theta}}{0}
=
\evalat{\begin{bmatrix}
-sin(\theta) & -cos(\theta) & 0 \\
cos(\theta) & -sin(\theta) & 0 \\
0 & 0 & 0 \\
\end{bmatrix}}{0}
=
\begin{bmatrix}
0 & -1 & 0 \\
1 & 0 & 0 \\
0 & 0 & 0 \\
\end{bmatrix}
$$
$L_z$ therefore represents the infinitesimal rotation.

Note that the <exponential map> reverses this and gives a finite rotation around the Z axis back from the <infinitesimal generator> $L_z$:
$$
e^{\theta L_z} = R_z(\theta)
$$

Repeating the same process for the other directions gives:
$$
L_x =
\begin{bmatrix}
0 & 0 & 0 \\
0 & 0 & -1 \\
0 & 1 & 0 \\
\end{bmatrix}
\quad
L_y =
\begin{bmatrix}
0 & 0 & 1 \\
0 & 0 & 0 \\
-1 & 0 & 0 \\
\end{bmatrix}
$$
We have now found 3 <linearly independent> elements of the Lie algebra, and since $SO(3)$ has dimension 3, we are done.
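
A quick numerical check that the <exponential map> really takes the generator back to the finite rotation, assuming NumPy and SciPy are available:
``
import numpy as np
from scipy.linalg import expm

def Rz(theta):
    return np.array([[np.cos(theta), -np.sin(theta), 0.0],
                     [np.sin(theta),  np.cos(theta), 0.0],
                     [0.0,            0.0,           1.0]])

# The infinitesimal generator of rotations around the z axis.
Lz = np.array([[0.0, -1.0, 0.0],
               [1.0,  0.0, 0.0],
               [0.0,  0.0, 0.0]])

# The exponential map recovers the finite rotation: e^(theta Lz) = Rz(theta).
for theta in (0.1, 1.0, 2.5):
    assert np.allclose(expm(theta * Lz), Rz(theta))
``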

= Lie bracket of the rotation group
{c}
{parent=Lie algebra of SO(3)}

Based on the $L_x$, $L_y$ and $L_z$ derived at <Lie algebra of SO(3)> we can calculate the <Lie bracket> as:
$$
\begin{aligned}
[L_x, L_y] &= L_xL_y - L_yL_x &= L_z \\
[L_y, L_z] &= L_yL_z - L_zL_y &= L_x \\
[L_z, L_x] &= L_zL_x - L_xL_z &= L_y \\
\end{aligned}
$$

= 3D rotation group
{parent=Special orthogonal group}
{wiki}

= Special orthogonal group of degree 3
{synonym}

= $SO(3)$
{synonym}
{title2}

Has <SU(2)> as a <double cover>.

= Unitary group
{parent=Orthogonal group}
{wiki}

= $U(n)$
{synonym}
{title2}

<Group (mathematics)> of the <unitary matrices>.

<complex number>[Complex] analogue of the <orthogonal group>.

One notable difference from the orthogonal group however is that the unitary group is connected "because" its determinant is not fixed to two disconnected values 1/-1, but rather goes around in a continuous <unit circle>. $U(1)$ \i[is] the unit circle.

= Unitary group of degree 1
{parent=Unitary group}

= $U(1)$
{synonym}
{title2}

= Unitary group of degree 2
{parent=Unitary group}

= $U(2)$
{synonym}
{title2}

Not to be confused with <SU(2)>, which is diffeomorphic to the <3-sphere>: $U(2)$ has dimension 4, and is diffeomorphic to $S^3 \times S^1$.

= Unit circle
{parent=Unitary group}
{wiki}

The $U(1)$ <unitary group> is one very over-generalized way of looking at it :-)

= Special unitary group
{parent=Unitary group}
{wiki}

= $SU(n)$
{synonym}
{title2}

The complex analogue of the <special orthogonal group>, i.e. the subgroup of the <unitary group> with determinant equals exactly 1 instead of an arbitrary complex number with absolute value equal 1 as is the case for the unitary group.

= Special unitary of degree 2
{parent=Special unitary group}
{wiki}

= $SU(2)$
{synonym}
{title2}

https://en.wikipedia.org/wiki/Representation_theory_of_SU(2)

<Double cover> of <SO(3)>.

<Isomorphic> to the group of unit <quaternions>.

= Representations of $SU(2)$
{parent=Special unitary of degree 2}

= Lie algebra of $SU(2)$
{c}
{parent=Representations of SU(2)}

Bibliography:
* <Physics from Symmetry by Jakob Schwichtenberg (2015)> page 54.

= 2D representation of $SU(2)$
{parent=Representations of SU(2)}

<Pauli matrix>.

= Projective linear group
{parent=Important Lie group}
{wiki}

TODO motivation. Motivation. Motivation. Motivation. The definition with <quotient group> is easy to understand.

= Finite projective linear group
{parent=Projective linear group}
{{wiki=Projective_linear_group#Finite_fields}}

= $PGL(q, p)$
{synonym}
{title2}

= Projective special linear group
{parent=Projective linear group}

= Finite projective special linear group
{parent=Projective special linear group}

= $PSL(p, q)$
{synonym}
{title2}

= $PSL(2, p)$
{parent=Finite projective special linear group}

= PSL(2,7)
{parent=PSL(2, p)}
{wiki}

The second smallest non-<Abelian> finite <simple group> after the <alternating group of degree 5>.

= Poincaré group
{c}
{parent=Important Lie group}
{wiki}

= Poincaré transformation
{c}
{synonym}

Full set of all possible <special relativity> symmetries:
* translations in space and time
* rotations in space
* <Lorentz boosts>

In simple and concrete terms: suppose you observe $N$ particles following different trajectories in <Spacetime>.

There are two observers traveling at constant speed relative to each other, and so they see different trajectories for those particles:
* space and time shifts, because their space origin and time origin (time they consider 0, i.e. when they started their timers) are not synchronized. This can be modelled with a 4-vector addition.
* their space axes are rotated relative to one another. This can be modelled with a 4x4 matrix multiplication.
* and they are moving relative to each other, which leads to the usual spacetime interactions of <special relativity>. Also modelled with a 4x4 matrix multiplication.
Note that the first two types of transformation are exactly the non-relativistic <Galilean transformations>.

The Poincaré group is the set of all transformations such that a relationship like this exists between the two frames of reference.

= Galilean transformation
{c}
{parent=Poincaré group}
{wiki}

= Translation
{disambiguate=geometry}
{c}
{parent=Galilean transformation}
{wiki}

Subset of the <Galilean transformation>[Galilean transformations] with speed equal to 0.

= Translation group
{parent=Translation (geometry)}

This is a good and simple first example of <Lie algebra> to look into.

= The derivative is the generator of the translation group
{parent=Translation group}

Take the group of all <Translation (geometry)> in <\R^1>.

Let's see how the <generator of a Lie algebra>[generator] of this group is the <derivative> <operator>:
$$
\pdv{}{x}
$$

The way to think about this is:
* the translation group operates on the argument of a function $f(x)$
* the generator is an <operator> that operates on $f$ itself

So let's take the <exponential map (Lie theory)>:
$$
e^{x_0\pdv{}{x}}f(x) = \left( 1 + x_0 \pdv{}{x} + \frac{x_0^2}{2!} \pdv{^2}{x^2} + \ldots\right)f(x)
$$
and we notice that this is exactly the <Taylor series> of $f(x)$ around the identity element of the translation group, which is 0! Therefore, if $f(x)$ behaves nicely enough, within some <radius of convergence> around the origin we have for finite $x_0$:
$$
e^{x_0\pdv{}{x}}f(x) = f(x + x_0)
$$

This example shows clearly how the <exponential map (Lie theory)> applied to a (differential) <operator> can generate finite (non-infinitesimal) <Translation (geometry)>!
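
For a polynomial the Taylor series is finite, so the identity can be checked exactly; a minimal sketch assuming SymPy is available:
``
import sympy as sp

x, x0 = sp.symbols('x x0')
f = x**3 - 2*x + 1  # any polynomial: its Taylor series is finite

# Apply the exponential of the derivative operator, e^(x0 d/dx), term by term.
shifted = sum(x0**k / sp.factorial(k) * sp.diff(f, x, k) for k in range(10))

# The result is exactly the translated function f(x + x0).
assert sp.simplify(shifted - f.subs(x, x + x0)) == 0
``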

= Galilean invariance
{c}
{parent=Galilean transformation}
{wiki}

= Galilean invariant
{c}
{synonym}

A <law of physics> is Galilean invariant if the same formula works both when you are standing still on land, or when you are on a boat moving at constant velocity.

For example, if we were describing the movement of a <point particle>, the exact same formulas that predict the evolution of $x_{land}(t)$ must also predict $x_{boat}(t)$, even though of course both of those $x(t)$ will have different values. 

It would be extremely unsatisfactory if the formulas of the <laws of physics> did not obey <Galilean invariance>. Especially if you remember that <Earth> is travelling extremely fast relative to the <Sun>. If there was no such invariance, that would mean for example that the <laws of physics> would be different on other <planets> that are moving at different speeds. That would be a strong sign that our laws of physics are not complete.

The consequence/cause of that is that you cannot know if you are moving at a constant speed or not.

<Lorentz invariance> generalizes <Galilean invariance> to also account for <special relativity>, in which a more complicated invariant that also takes into account different times observed in different <inertial frames of reference> is also taken into account. But the fundamental desire for the <Lorentz invariance> of the <laws of physics> remains the same.

= Covariance
{parent=Galilean invariance}
{wiki}

= Covariant
{synonym}

Generally means that the form of the equation $f(x)$ does not change if we transform $x$.

This is generally what we want from the laws of physics.

E.g. a <Galilean transformation> generally changes the exact values of coordinates, but not the form of the laws of physics themselves.

<Lorentz covariance> is the main context under which the word "covariant" appears, because we really don't want the form of the equations to change under <Lorentz transforms>, and "covariance" is often used as a synonym of "Lorentz covariance".

TODO some sources distinguish "invariant" from "covariant": <invariant vs covariant>.

= Invariant vs covariant
{parent=Covariance}

Some sources distinguish "invariant" from "covariant" such that under some transformation (typically <Lie group>):
* invariant: the value of $f(x)$ does not change if we transform $x$
* covariant: the form of the equation $f(x)$ does not change if we transform $x$.
TODO examples.

Bibliography:
* https://physics.stackexchange.com/questions/7700/definitions-and-usage-of-covariant-form-invariant-invariant
* https://physics.stackexchange.com/questions/270296/what-is-the-difference-between-lorentz-invariant-and-lorentz-covariant

= Lorentz group
{c}
{parent=Poincaré group}
{wiki}

= $SO(1,3)$
{synonym}
{title2}

<Subgroup> of the <Poincaré group> without translations. Therefore, in those, the spacetime origin is always fixed.

Or in other words, it is as if two observers had their space and time origins at the exact same place. However, their space axes may be rotated, and one may be at a relative speed to the other to create a <Lorentz boost>. Note however that if they are at relative speeds to one another, then their axes will immediately stop being at the same location in the next moment of time, so things are only valid infinitesimally in that case.

This group is made up of matrix multiplication alone, no need to add the offset vector: space rotations and <Lorentz boost> only spin around and bend things around the origin.

One definition: set of all 4x4 matrices that keep the <Minkowski inner product>, mentioned at <Physics from Symmetry by Jakob Schwichtenberg (2015)> page 63. This then implies:
$$
\Lambda ^ T \eta \Lambda = \eta
$$
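
A quick numerical check of that defining property for a <Lorentz boost> along the x axis, assuming NumPy is available:
``
import numpy as np

# Minkowski metric, signature (+, -, -, -).
eta = np.diag([1.0, -1.0, -1.0, -1.0])

# Lorentz boost along x with rapidity phi.
phi = 0.7
L = np.array([[np.cosh(phi), -np.sinh(phi), 0.0, 0.0],
              [-np.sinh(phi), np.cosh(phi), 0.0, 0.0],
              [0.0,           0.0,          1.0, 0.0],
              [0.0,           0.0,          0.0, 1.0]])

# The defining property of the Lorentz group.
assert np.allclose(L.T @ eta @ L, eta)
``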

= Representation theory of the Lorentz group
{c}
{parent=Lorentz group}
{wiki}

<Physics from Symmetry by Jakob Schwichtenberg (2015)> page 66 shows one in terms of 4x4 complex matrices.

More important though are the representations of the <Lie algebra of the Lorentz group>, which are generally also just called "representations of the Lorentz group", since you can reach the representation from the algebra via the <exponential map>.

Bibliography:
* <Physics from Symmetry by Jakob Schwichtenberg (2015)> chapter 3.7 "The Lorentz Group O (1, 3)"

= Representation of the Lorentz group
{c}
{parent=Representation theory of the Lorentz group}

= Representations of the Lorentz group
{synonym}

One of the representations of the <Lorentz group> that show up in the <Representation theory of the Lorentz group>.

= Lie algebra of the Lorentz group
{c}
{parent=Representation of the Lorentz group}

= Spinor
{parent=Representation of the Lorentz group}
{wiki}

TODO understand a bit more intuitively.

* <Physics from Symmetry by Jakob Schwichtenberg (2015)> page 72
* https://physics.stackexchange.com/questions/172385/what-is-a-spinor
* https://physics.stackexchange.com/questions/41211/what-is-the-difference-between-a-spinor-and-a-vector-or-a-tensor
* https://physics.stackexchange.com/questions/74682/introduction-to-spinors-in-physics-and-their-relation-to-representations
* http://www.weylmann.com/spinor.pdf

= Lorentz boost
{c}
{parent=Lorentz group}

Two observers travel at fixed speed relative to each other. They synchronize origins at $x=0$ and $t=0$, and their spatial axes are perfectly aligned. This is a subset of the <Lorentz group>. It does not form a subgroup however: the composition of boosts in different directions is in general a boost combined with a rotation (Thomas precession).

= Indefinite orthogonal group
{parent=Lorentz group}
{wiki}

= $O(m,n)$
{synonym}
{title2}

Generalization of <orthogonal group> to preserve different <bilinear forms>. Important because the <Lorentz group> is <SO(1,3)>.

= Definition of the indefinite orthogonal group
{parent=Indefinite orthogonal group}

Given a <matrix> $A$ with <metric signature> containing $m$ positive and $n$ negative entries, the <indefinite orthogonal group> is the set of all matrices that preserve the <matrix representation of a bilinear form>[associated bilinear form], i.e.:
$$
O(m, n) = {O \in M(m + n) | \forall x, y x^T A y = (Ox)^T A (Oy)}
$$
Note that if $A = I$, we just have the standard <dot product>, and that subcase corresponds to the following definition of the <orthogonal group>: <the orthogonal group is the group of all matrices that preserve the dot product>{full}.

As shown at <all indefinite orthogonal groups of matrices of equal metric signature are isomorphic>, due to the <Sylvester's law of inertia>, only the metric signature of $A$ matters. E.g., if we take two different matrices with the same metric signature such as:
$$
\begin{bmatrix}
1 & 0 \\
0 & -1 \\
\end{bmatrix}
$$
and:
$$
\begin{bmatrix}
2 & 0 \\
0 & -3 \\
\end{bmatrix}
$$
both produce <isomorphic> spaces. So it is customary to just always pick the matrix with only +1 and -1 as entries.

= All indefinite orthogonal groups of matrices of equal metric signature are isomorphic
{parent=Definition of the indefinite orthogonal group}

Following the <definition of the indefinite orthogonal group>, we want to show that only the <metric signature> matters.

First we can observe that the exact matrices are different. For example, taking the standard matrix of $O(2)$:
$$
\begin{bmatrix}
1 & 0 \\
0 & 1 \\
\end{bmatrix}
$$
and:
$$
\begin{bmatrix}
2 & 0 \\
0 & 1 \\
\end{bmatrix}
$$
both have the same <metric signature>. However, we notice that a rotation of 90 degrees, which preserves the first form, does not preserve the second one! E.g. consider the vector $x = (1, 0)$: under the second form, $x \cdot x = 2$. But after a rotation of 90 degrees, it becomes $x_2 = (0, 1)$, and now $x_2 \cdot x_2 = 1$! Therefore, we have to search for an <isomorphism> between the two sets of matrices.

For example, consider the <orthogonal group>, which can be defined as shown at <the orthogonal group is the group of all matrices that preserve the dot product>.

= Indefinite special orthogonal group
{parent=Indefinite orthogonal group}

= $SO(m,n)$
{synonym}
{title2}

Like the <special orthogonal group> is to the <orthogonal group>, <SO(m,n)> is the subset of <O(m,n)> with <determinant> equal to exactly 1.

= Representation theory
{parent=Lie group}
{wiki}

= Representation
{disambiguate=group theory}
{synonym}

Basically, a "representation" means associating each group element as an invertible <matrices>, i.e. a matrix in (possibly some subset of) <GL(n)>, that has the same properties as the group.

Or in other words, associating to the more abstract notion of a <group (mathematics)> more concrete objects with which we are familiar (e.g. a matrix). 

Each such matrix then represents one specific element of the group.

This is basically what everyone does (or should do!) when starting to study <Lie groups>: we start looking at <matrix Lie groups>, which are very concrete.

Or more precisely, mapping each group element to an invertible <linear map> over some <vector space> $V$ (which in finite dimension can be represented by a matrix), in a way that respects the group operations:
$$
R : G \to GL(V)
$$

As shown at <Physics from Symmetry by Jakob Schwichtenberg (2015)>
* page 51, a representation is not unique, we can even use matrices of different dimensions to represent the same group
* 3.6 classifies the <representations of SU(2)>. There is only one possibility per dimension!
* 3.7 "The Lorentz Group O(1,3)" mentions that even for a "simple" group such as the <Lorentz group>, not all representations can be described in terms of matrices, and that we can construct such representations with the help of <Lie group> theory, and that they have fundamental physical application

Motivation:
* https://math.stackexchange.com/questions/1628464/what-is-representation-theory

Bibliography:
* https://www.youtube.com/watch?v=9rDzaKASMTM "RT1: Representation Theory Basics" by <MathDoctorBob> (2011). Too much theory, give me the motivation!
* https://www.quantamagazine.org/the-useless-perspective-that-transformed-mathematics-20200609 The "Useless" Perspective That Transformed Mathematics by <Quanta Magazine> (2020). Maybe there is something in there amidst the "the reader might not know what a <matrix> is" stuff.

= Irreducible representation
{parent=Representation theory}
{wiki}

= Casimir element
{c}
{parent=Irreducible representation}
{wiki}

= Schur's lemma
{c}
{parent=Representation theory}
{wiki}

= Simple Lie group
{parent=Lie group}
{wiki}

= Classification of simple Lie groups
{parent=Simple Lie group}

https://en.wikipedia.org/wiki/Simple_Lie_group#List

A bit like the <classification of simple finite groups>, they also have a few exceptional groups, analogous to the <sporadic groups>! Not as spectacular, since as usual <continuous problems are simpler than discrete ones>, but still, not bad.

= Lie group bibliography
{parent=Lie group}

Recommended from <Physics from Symmetry by Jakob Schwichtenberg (2015)> page 92:
* <An Introduction to Tensors and Group Theory for Physicists by Nadir Jeevanjee (2011)>
* <Naive Lie theory by John Stillwell (2008)>
* <Lie Algebras In Particle Physics by Howard Georgi (1999)>

= An Introduction to Tensors and Group Theory for Physicists by Nadir Jeevanjee (2011)
{c}
{parent=Lie group bibliography}

This does not seem to go deep into the <Standard Model> as <Physics from Symmetry by Jakob Schwichtenberg (2015)>, appears to focus more on more basic applications.

But because it is more basic, it does explain some things quite well.

= Lie Groups, Physics, and Geometry by Robert Gilmore (2008)
{c}
{parent=Lie group bibliography}

The author seems to have uploaded the entire book by chapters at: https://www.physics.drexel.edu/~bob/LieGroups.html

And the author is the cutest: https://www.physics.drexel.edu/~bob/Personal.html[].

Overview:
* Chapter 3: gives a bunch of examples of important <matrix Lie groups>. These are done by imposing certain types of constraints on the <general linear group>, to obtain <subgroups> of the general linear group. Feels like the start of a <classification (mathematics)>
* Chapter 4: defines <Lie algebra>. Does some basic examples with them, but not much of deep interest, that is mostly left for Chapter 7
* Chapter 5: calculates the <Lie algebra> for all examples from chapter 3
* Chapter 6: don't know
* Chapter 7: describes how the <exponential map> links <Lie algebras> to <Lie groups>

= Naive Lie theory by John Stillwell (2008)
{c}
{parent=Lie group bibliography}

= Lie Algebras In Particle Physics by Howard Georgi (1999)
{c}
{parent=Lie group bibliography}

= Tesselation
{parent=Geometry}
{wiki}

= Aperiodic tiling
{parent=Tesselation}
{wiki}

https://www.quantamagazine.org/nasty-geometry-breaks-decades-old-tiling-conjecture-20221215/

= Tiling of the plane
{parent=Tesselation}

https://math.libretexts.org/Bookshelves/Arithmetic_and_Basic_Math/Book%3A_Basic_Math_(Grade_6)/01%3A_Area_and_Surface_Area/01%3A_Lessons_Reasoning_to_Find_Area/1.01%3A_Tiling_the_Plane

\Video[https://www.youtube.com/watch?v=thOifuHs6eY]
{title=Hexagons are the Bestagons by CGP Grey (2020)}

= Aperiodic monotile
{parent=Tiling of the plane}

= Smith aperiodic monotile
{c}
{parent=Aperiodic monotile}
{title2=needs reflections}
{title2=March 2023}

<Preprint>: https://arxiv.org/abs/2303.10798

\Image[https://upload.wikimedia.org/wikipedia/commons/thumb/2/25/Smith_aperiodic_monotiling.svg/512px-Smith_aperiodic_monotiling.svg.png]

= Spectre aperiodic monotile
{c}
{parent=Aperiodic monotile}
{title2=no reflections}
{title2=May 2023}

https://aperiodical.com/2023/05/now-thats-what-i-call-an-aperiodic-monotile/