Ciro Santilli
Generalization of a plane for any number of dimensions.
Kind of the opposite of a line: the line has dimension 1, and the plane has dimension D-1.
In two dimensions, both happen to coincide, a boring example of an exceptional isomorphism.
Does not require straight line cuts.
Diffeomorphic to .
A unique projective space can be defined for any vector space.
The projective space associated with a given vector space is denoted .
The definition is to take the vector space, remove the zero element, and identify all elements that lie on the same line, i.e.
The most important initial example to study is the real projective plane.
In those cases at least, it is possible to add a metric to the spaces, leading to elliptic geometry.
Just a circle.
Take with a line at . Identify all the points that an observer
For some reason, Ciro Santilli is mildly obsessed with understanding and visualizing the real projective plane.
To see why this is called a plane, move the center of the sphere to , and project each line passing through the center of the sphere onto the x-y plane. This works for all points of the sphere, except those at the equator . Those are the points at infinity. Note that there is one such point at infinity for each direction in the x-y plane.
It is good to think about what Euclid's postulates look like in the real projective plane:
  • two parallel lines on the plane meet at a point on the sphere!
    Since there is one point of infinity for each direction, there is one such point for every direction the two parallel lines might be at. The parallel postulate does not hold, and is replaced with a simpler more elegant version: every two lines meet at exactly one point.
    One thing to note however is that the real projective plane does not have angles defined on it by definition. Those can be defined, forming elliptic geometry through the projective model of elliptic geometry, but we can also just interpret "parallel lines" as "two lines that meet at a point at infinity".
  • points in the real projective plane are lines in
  • lines in the real projective plane are planes in .
    For every two projective points there is a single projective line that passes through them.
    Since it is a plane in , it always intersects the real plane at a line.
    Note however that not all lines in the real plane correspond to a projective line: only lines tangent to a circle at zero do.
Unlike the real projective line which is homotopic to the circle, the real projective plane is not homotopic to the sphere.
The topological difference between the sphere and the real projective space is that for the sphere all those points in the x-y circle are identified to a single point.
A more general version of this argument is the classification of closed surfaces, in which the real projective plane is a sphere with a hole cut out and one Möbius strip glued in.
This is the standard model.
Ciro Santilli's preferred visualization of the real projective plane is a small variant of the standard "lines through origin in ".
Take an open half sphere, e.g. a sphere but only the points with .
Each point in the half sphere identifies a unique line through the origin.
Then, the only lines missing are the lines in the x-y plane itself.
For those sphere points on the circle in the x-y plane, you should think of them as magic points that are identified with the corresponding antipodal point, also on the x-y plane, but on the other side of the origin. So basically you can teleport from one of those to the other side, and you are still at the same point.
Ciro likes this model because then all the magic is confined just to the part of the model, and everything else looks exactly like the sphere.
It is useful to contrast this with the sphere itself. In the sphere, all points in the circle are the same point. But this is not the case for the projective plane. You cannot instantly go to any other point on the circle by just moving a little bit, you have to walk around that circle.
Figure 1. Spherical cap model of the real projective plane. On the x-y plane, you can magically travel immediately between antipodal points such as A/A', B/B' and C/C'. Or equivalently, those pairs are the same point. Every other point outside the x-y plane is just a regular point like a normal sphere.
To see that the real projective plane is not a simply connected space, consider the lines through origin model of the real projective plane, and take a loop that starts at and moves along the great circle, ending at .
Note that both of those points are the same, so we have a loop.
Now try to shrink it to a point.
There's just no way!
A polygon is a 2-dimensional polytope, and a polyhedron is a 3-dimensional polytope.
TODO understand and explain definition.
The 3D regular convex polyhedra are super famous, have a name of their own, Platonic solids, and have been known since antiquity. In particular, there are only 5 of them.
The counts per dimension are:
Table 1. Number of regular polytopes per dimension.
Dimension Count
2 Infinite
3 5
4 6
>4 3
The cool thing is that the 3 that exist in 5+ dimensions all belong to one of three families: the simplex, the hypercube, and the cross-polytope.
Then, the 2 missing 3D ones have 4D analogues, and the sixth one in 4D does not have a 3D analogue: the 24-cell. Yes, this is the kind of irregular stuff Ciro Santilli lives for.
The name does not imply regular by default. For regular ones, you should say "regular polytope".
Non-regular description: take the convex hull of D + 1 vertices that do not all lie on a single hyperplane.
square, cube. 4D case known as tesseract.
Convex hull of all (Cartesian product power) D-tuples, e.g. in 3D:
( 1,  1,  1)
( 1,  1, -1)
( 1, -1,  1)
( 1, -1, -1)
(-1,  1,  1)
(-1,  1, -1)
(-1, -1,  1)
(-1, -1, -1)
From this we see that there are 2^D vertices.
Two vertices are linked iff they differ in a single coordinate. So each vertex has D neighbors.
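Those counts can be checked by brute force. A minimal Python sketch (my own illustration, not from the original text):

```python
import itertools

def hypercube_vertices(D):
    """All vertices of the D-dimensional hypercube: the D-tuples in {-1, 1}^D."""
    return list(itertools.product([-1, 1], repeat=D))

def neighbors(v, vertices):
    """Vertices linked to v, i.e. those differing from v in exactly one coordinate."""
    return [w for w in vertices if sum(a != b for a, b in zip(v, w)) == 1]

verts = hypercube_vertices(3)
print(len(verts))                       # 8 == 2^3 vertices
print(len(neighbors(verts[0], verts)))  # 3 == D neighbors per vertex
```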
The non-regular version of the hypercube.
Examples: square, octahedron.
Take and flip one of the 0's to . Therefore it has vertices.
Each vertex V is linked to every other vertex, except its opposite -V.
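As with the hypercube, a quick brute-force check of those properties (Python sketch, my own illustration):

```python
def cross_polytope_vertices(D):
    """The 2*D vertices of the D-dimensional cross-polytope: +/- each basis vector."""
    verts = []
    for i in range(D):
        for sign in (1, -1):
            v = [0] * D
            v[i] = sign
            verts.append(tuple(v))
    return verts

verts = cross_polytope_vertices(3)  # the octahedron
v = verts[0]
opposite = tuple(-x for x in v)
linked = [w for w in verts if w != v and w != opposite]
print(len(verts))   # 6 == 2 * D vertices
print(len(linked))  # 4: every other vertex except the opposite one
```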
The key and central motivation for studying Lie groups and their Lie algebras appears to be to characterize symmetry in Lagrangian mechanics through Noether's theorem, just start from there.
Notably local symmetries appear to map to forces, and local means "around the identity", notably: local symmetries of the Lagrangian imply conserved currents.
The fact that there are elements arbitrarily close to the identity, which is only possible due to the group being continuous, is the key factor that simplifies the treatment of Lie groups, and follows the philosophy of continuous problems are simpler than discrete ones.
Solving differential equations was apparently Lie's original motivation for developing Lie groups. It is therefore likely one of the most understandable ways to approach it.
It appears that Lie's goal was to understand when can a differential equation have an explicitly written solution, much like Galois theory had done for algebraic equations. Both approaches use symmetry as the key tool.
Like everything else in Lie groups, first start with the matrix as discussed at Section "Lie algebra of a matrix Lie group".
Intuitively, a Lie algebra is a simpler object than a Lie group. Without any extra structure, groups can be very complicated non-linear objects. But a Lie algebra is just an algebra over a field, with a bilinear map called the Lie bracket that is subject to restrictions: it has to be alternating and satisfy the Jacobi identity.
Another important way to think about Lie algebras, is as infinitesimal generators.
Because of the Lie group-Lie algebra correspondence, we know that there is almost a bijection between each Lie group and the corresponding Lie algebra. So it makes sense to try and study the algebra instead of the group itself whenever possible, to try and get insight and proofs in that simpler framework. This is the key reason why people study Lie algebras. One is philosophically reminded of how normal subgroups are a simpler representation of group homomorphisms.
To make things even simpler, because all vector spaces of the same dimension on a given field are isomorphic, the only things we need to specify a Lie group through a Lie algebra are:
Note that the Lie bracket can look different under different basis of the Lie algebra however. This is shown for example at Physics from Symmetry by Jakob Schwichtenberg (2015) page 71 for the Lorentz group.
As mentioned at Lie Groups, Physics, and Geometry by Robert Gilmore (2008) Chapter 4 "Lie Algebras", taking the Lie algebra around the identity is mostly a convention, we could treat any other point, and things are more or less equivalent.
Elements of a Lie algebra can (should!) be seen a continuous analogue to the generating set of a group in finite groups.
For continuous groups however, we can't have a finite generating set in the strict sense, as a finite set won't ever cover every possible point.
But the generator of a Lie algebra can be finite.
And just like in finite groups, where you can specify the full group by specifying only the relationships between generating elements, in the Lie algebra you can almost specify the full group by specifying the relationships between the elements of a generator of the Lie algebra.
This "specification of a relation" is done by defining the Lie bracket.
The reason why the algebra works out well for continuous stuff is that by definition an algebra over a field is a vector space with some extra structure, and we know very well how to make infinitesimal elements in a vector space: just multiply its vectors by a constant that can be arbitrarily small.
Every Lie algebra corresponds to a single simply connected Lie group.
The Baker-Campbell-Hausdorff formula basically defines how to map an algebra to the group.
Lie Groups, Physics, and Geometry by Robert Gilmore (2008) 7.2 "The covering problem" gives some amazing intuition on the subject as usual.
Example at: Lie Groups, Physics, and Geometry by Robert Gilmore (2008) Chapter 7 "EXPonentiation".
Example at: Lie Groups, Physics, and Geometry by Robert Gilmore (2008) Chapter 7 "EXPonentiation".
Furthermore, the non-compact part is always isomorphic to , only the compact part can have more interesting structure.
The most important example is perhaps and , both of which have the same Lie algebra, but are not isomorphic.
E.g. in the case of and , is simply connected, but is not.
Most commonly refers to: exponential map.
Like everything else in Lie group theory, you should first look at the matrix version of this operation: the matrix exponential.
The exponential map links small transformations around the origin (infinitely small) back to larger finite transformations, and small transformations around the origin are something we can deal with via a Lie algebra, so this map links the two worlds.
The idea is that we can decompose a finite transformation into infinitely many arbitrarily small ones around the origin, and proceed just like the product definition of the exponential function.
The definition of the exponential map is simply the same as that of the regular exponential function as given at Taylor expansion definition of the exponential function, except that the argument can now be an operator instead of just a number.
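A numeric sketch of this idea (Python with NumPy; the Taylor-series implementation below is an illustration, not an optimized matrix exponential): exponentiating an infinitesimal rotation generator yields a finite rotation.

```python
import numpy as np

def expm_taylor(X, terms=30):
    """Matrix exponential via the Taylor series: sum over k of X^k / k!."""
    result = np.eye(X.shape[0])
    term = np.eye(X.shape[0])
    for k in range(1, terms):
        term = term @ X / k
        result = result + term
    return result

# Infinitesimal generator of 2D rotations.
L = np.array([[0.0, -1.0],
              [1.0,  0.0]])
theta = np.pi / 2
R = expm_taylor(theta * L)
# R is the finite rotation by theta: [[cos, -sin], [sin, cos]].
print(np.allclose(R, [[0.0, -1.0], [1.0, 0.0]]))  # True
```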
Solution for given and of:
where is the exponential map.
If we consider just real numbers, then Z = X + Y, but when X and Y are non-commutative, things are not so simple.
Furthermore, TODO confirm it is possible that a solution does not exist at all if and aren't sufficiently small.
This formula is likely the basis for the Lie group-Lie algebra correspondence. With it, we express the actual group operation in terms of the Lie algebra operations.
Notably, remember that an algebra over a field is just a vector space with one extra product operation defined.
Vector spaces are simple because all vector spaces of the same dimension on a given field are isomorphic, so besides the dimension, once we define a Lie bracket, we also define the corresponding Lie group.
Since a group is basically defined by what the group operation does to two arbitrary elements, once we have that defined via the Baker-Campbell-Hausdorff formula, we are basically done defining the group in terms of the algebra.
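A numeric illustration of the non-commutative correction (Python with NumPy; my own sketch, not from the original text): for small non-commuting X and Y, e^X e^Y differs from e^(X+Y), and adding the first Lie bracket term of the Baker-Campbell-Hausdorff series shrinks the error.

```python
import numpy as np

def expm_taylor(X, terms=30):
    """Matrix exponential via Taylor series (illustration only)."""
    result = np.eye(X.shape[0])
    term = np.eye(X.shape[0])
    for k in range(1, terms):
        term = term @ X / k
        result = result + term
    return result

t = 0.1  # keep the elements small so the series converges quickly
X = t * np.array([[0.0, 1.0], [0.0, 0.0]])
Y = t * np.array([[0.0, 0.0], [1.0, 0.0]])

target = expm_taylor(X) @ expm_taylor(Y)
err_naive = np.linalg.norm(expm_taylor(X + Y) - target)
bracket = X @ Y - Y @ X  # the Lie bracket [X, Y]
err_bch = np.linalg.norm(expm_taylor(X + Y + bracket / 2) - target)

print(err_naive > 1e-6)    # True: X and Y do not commute
print(err_bch < err_naive)  # True: the bracket term improves the approximation
```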
Cardinality: the dimension of the vector space.
Basically a synonym for Lie group, which is the way of modelling them.
Local symmetries appear to be a synonym to internal symmetry, see description at: Section "Internal and spacetime symmetries".
As mentioned at Quote , local symmetries map to forces in the Standard Model.
Appears to be a synonym for: gauge symmetry.
A local symmetry is a transformation where you apply a different transformation at each point, instead of a single transformation for every point.
TODO what's the point of a local symmetry?
TODO. I think this is the key point. Notably, symmetry implies charge conservation.
More precisely, each generator of the corresponding Lie algebra leads to one separate conserved current, such that a single symmetry can lead to multiple conserved currents.
This is basically the local symmetry version of Noether's theorem.
Then to maintain charge conservation, we have to maintain local symmetry, which in turn means we have to add a gauge field as shown at Video "Deriving the QED Lagrangian by Dietterich Labs (2018)".
Forces can then be seen as kind of a side effect of this.
This important and common simple case has easy properties.
For this sub-case, we can define the Lie algebra of a Lie group as the set of all matrices such that for all :
If we fix a given and vary , we obtain a subgroup of . This type of subgroup is known as a one parameter subgroup.
The immediate question is then if every element of can be reached in a unique way (i.e. is the exponential map a bijection). By looking at the matrix logarithm however we conclude that this is not the case for real matrices, but it is for complex matrices.
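Part of this can be seen numerically (Python/NumPy sketch, my own illustration): for real X, det(e^X) = e^(tr X) > 0, so a real matrix with negative determinant, such as a reflection, can never be reached as e^X with X real.

```python
import numpy as np

def expm_taylor(X, terms=40):
    """Matrix exponential via Taylor series (illustration only)."""
    result = np.eye(X.shape[0])
    term = np.eye(X.shape[0])
    for k in range(1, terms):
        term = term @ X / k
        result = result + term
    return result

rng = np.random.default_rng(42)
X = rng.standard_normal((3, 3))
E = expm_taylor(X)

# det(e^X) = e^{tr X}, which is always strictly positive:
print(np.isclose(np.linalg.det(E), np.exp(np.trace(X))))  # True
print(np.linalg.det(E) > 0)                               # True
```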
TODO example. It can be seen that the Lie algebra is not closed under matrix multiplication, even though the corresponding group is by definition. But it is closed under the Lie bracket operation.
This makes it clear how the Lie bracket can be seen as a "measure of non-commutativity".
Because the Lie bracket has to be a bilinear map, all we need to do to specify it uniquely is to specify how it acts on every pair of some basis of the Lie algebra.
Then, together with the Baker-Campbell-Hausdorff formula and the Lie group-Lie algebra correspondence, this forms an exceptionally compact description of a Lie group.
The one parameter subgroup of a Lie group for a given element of its Lie algebra is a subgroup of given by:
Intuitively, is a direction, and is how far we move along a given direction. This intuition is especially vivid, for example, in the case of the Lie algebra of , the rotation group.
One parameter subgroups can be seen as the continuous analogue to the cycle of an element of a group.
Intuition, please? Example? The key motivation seems to be related to Hamiltonian mechanics. The two arguments of the bilinear form correspond to the two sets of variables in Hamiltonian mechanics: the generalized positions and generalized momenta, which appear in equal numbers.
Seems to be the set of matrices that preserve a skew-symmetric bilinear form, which is comparable to the orthogonal group, which preserves a symmetric bilinear form. More precisely, the orthogonal group has:
and its generalization the indefinite orthogonal group has:
where S is symmetric. So for the symplectic group we have matrices Y such that:
where A is antisymmetric. This is explained at: They also explain there that, unlike in the analogous orthogonal group, that definition ends up excluding determinant -1 automatically.
Therefore, just like the special orthogonal group, the symplectic group is also a subgroup of the special linear group.
Invertible matrices. Or if you think a bit more generally, an invertible linear map.
When the field is not given, it defaults to the real numbers.
Non-invertible are excluded "because" otherwise it would not form a group (every element must have an inverse). This is therefore the largest possible group under matrix multiplication, other matrix multiplication groups being subgroups of it.
The general linear group over a finite field of order . Remember that due to the classification of finite fields, there is one single field for each prime power .
Exactly as over the real numbers, you just put the finite field elements into a matrix, and then take the invertible ones.
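A brute-force count for 2x2 matrices over small prime fields (Python sketch, my own illustration; the comparison formula |GL(2, q)| = (q² − 1)(q² − q) is the standard order of GL(2, q)):

```python
import itertools

def gl2_count(q):
    """Count invertible 2x2 matrices over the prime field F_q by brute force."""
    count = 0
    for a, b, c, d in itertools.product(range(q), repeat=4):
        if (a * d - b * c) % q != 0:  # invertible iff det != 0 mod q
            count += 1
    return count

# Matches the known formula |GL(2, q)| = (q^2 - 1) * (q^2 - q):
print(gl2_count(2))  # 6
print(gl2_count(3))  # 48
```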
For every matrix in the set of all n-by-n square matrices , has inverse .
Note that this works even if is not invertible, and therefore not in !
Therefore, the Lie algebra of is the entire .
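A quick numeric check of the invertibility claim (Python/NumPy sketch, my own example): even a singular M exponentiates to an invertible matrix, with inverse e^(-M).

```python
import numpy as np

def expm_taylor(X, terms=30):
    """Matrix exponential via Taylor series (illustration only)."""
    result = np.eye(X.shape[0])
    term = np.eye(X.shape[0])
    for k in range(1, terms):
        term = term @ X / k
        result = result + term
    return result

M = np.array([[1.0, 1.0],
              [1.0, 1.0]])  # singular: det(M) == 0, so M is not invertible
print(np.isclose(np.linalg.det(M), 0.0))            # True
E = expm_taylor(M)
print(np.allclose(E @ expm_taylor(-M), np.eye(2)))  # True: e^M has inverse e^-M
```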
Special sub-case of the general linear group when the determinant equals exactly 1.
This is a good first concrete example of a Lie algebra, shown at Lie Groups, Physics, and Geometry by Robert Gilmore (2008) Chapter 4.2 "How to linearize a Lie Group".
We can use the following parametrization of the special linear group on variables , and :
Every element with this parametrization has determinant 1:
Furthermore, any element can be reached, because by independently setting , and , , and can have any value, and once those three are set, is fixed by the determinant.
To find the elements of the Lie algebra, we evaluate the derivative on each parameter at 0:
Remembering that the Lie bracket of a matrix Lie group is really simple, we can then observe the following Lie bracket relations between them:
One key thing to note is that the specific matrices , and are not really fundamental: we could easily have had different matrices if we had chosen any other parametrization of the group.
TODO confirm: however, no matter which parametrization we choose, the Lie bracket relations between the three elements would always be the same, since it is the number of elements, and the definition of the Lie bracket, that is truly fundamental.
Lie Groups, Physics, and Geometry by Robert Gilmore (2008) Chapter 4.2 "How to linearize a Lie Group" then calculates the exponential map of the vector as:
TODO now the natural question is: can we cover the entire Lie group with this exponential? Lie Groups, Physics, and Geometry by Robert Gilmore (2008) Chapter 7 "EXPonentiation" explains why not.
Just like for the finite general linear group, the definition of special also works for finite fields, where 1 is the multiplicative identity!
Note that the definition of orthogonal group may not have such a clear finite analogue on the other hand.
The group of all transformations that preserve some bilinear form, notable examples:
We can almost reach the Lie algebra of any isometry group in a single go. For every in the Lie algebra we must have:
because has to be in the isometry group by definition as shown at Section "Lie algebra of a matrix Lie group".
so we reach:
With this relation, we can easily determine the Lie algebra of common isometries:
Intuitive definition: real group of rotations + reflections.
Mathematical definition that most directly represents this: the orthogonal group is the group of all matrices that preserve the dot product.
When viewed as matrices, it is the group of all matrices that preserve the dot product, i.e.:
This implies that it also preserves important geometric notions such as norm (intuitively: distance between two points) and angles.
This is perhaps the best "default definition".
Looking at the definition "the orthogonal group is the group of all matrices that preserve the dot product", we notice that the dot product is one example of a positive definite symmetric bilinear form, which in turn can also be represented by a matrix as shown at: Section "Matrix representation of a symmetric bilinear form".
By looking at this more general point of view, we could ask ourselves what happens to the group if instead of the dot product we took a more general bilinear form, e.g.:
The answers to those questions are given by the Sylvester's law of inertia at Section "All indefinite orthogonal groups of matrices of equal metric signature are isomorphic".
Note that:
and for that to be true for all possible and then we must have:
i.e. the matrix inverse is equal to the transpose.
Conversely, if:
is true, then
These matrices are called the orthogonal matrices.
TODO is there any more intuitive way to think about this?
Or equivalently, the set of rows is orthonormal, and so is the set of columns. TODO proof that it is equivalent to the orthogonal group is the group of all matrices that preserve the dot product.
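A quick numeric check tying these definitions together (Python/NumPy sketch, my own example): a rotation matrix satisfies R^T R = I, so its rows and columns are orthonormal, and it preserves dot products.

```python
import numpy as np

theta = 0.7  # arbitrary example angle
R = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])  # a rotation, hence orthogonal

# The inverse equals the transpose:
print(np.allclose(R.T @ R, np.eye(2)))  # True

# And the dot product of arbitrary vectors is preserved:
x = np.array([1.0, 2.0])
y = np.array([-3.0, 0.5])
print(np.isclose((R @ x) @ (R @ y), x @ y))  # True
```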
The orthogonal group has 2 connected components:
It is instructive to visualize how this looks in :
  • you take the first basis vector and move it to any other. You have therefore two angular parameters.
  • you take the second one, and move it to be orthogonal to the first new vector. You can choose any point on a circle around the first new vector, and so you have another angular parameter.
  • at last, for the last one, there are only two choices that are orthogonal to both previous ones, one in each direction. It is this direction, relative to the others, that determines the "has a reflection or not" thing
As a result it is isomorphic to the direct product of the special orthogonal group by the cyclic group of order 2:
A low dimensional example:
because you can only do two things: to flip or not to flip the line around zero.
Note that having determinant plus or minus 1 is not a definition: there are non-orthogonal matrices with determinant plus or minus 1. This is just a property. E.g.:
has determinant 1, but:
so is not orthogonal.
Group of rotations of a rigid body.
Like orthogonal group but without reflections. So it is a "special case" of the orthogonal group.
This is a subgroup of both the orthogonal group and the special linear group.
We can reach it by taking the rotations in three directions, e.g. a rotation around the z axis:
then we derive and evaluate at 0:
therefore represents the infinitesimal rotation.
Note that the exponential map reverses this and gives a finite rotation around the Z axis back from the infinitesimal generator :
Repeating the same process for the other directions gives:
We have now found 3 linearly independent elements of the Lie algebra, and since has dimension 3, we are done.
Based on the , and derived at Lie algebra of we can calculate the Lie bracket as:
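These bracket relations can be checked directly (Python/NumPy sketch using the standard so(3) generator matrices; my own illustration):

```python
import numpy as np

# Standard infinitesimal rotation generators of so(3).
L_x = np.array([[0, 0, 0], [0, 0, -1], [0, 1, 0]], dtype=float)
L_y = np.array([[0, 0, 1], [0, 0, 0], [-1, 0, 0]], dtype=float)
L_z = np.array([[0, -1, 0], [1, 0, 0], [0, 0, 0]], dtype=float)

def bracket(A, B):
    """The Lie bracket of a matrix Lie algebra: the commutator AB - BA."""
    return A @ B - B @ A

# The cyclic bracket relations of so(3):
print(np.allclose(bracket(L_x, L_y), L_z))  # True
print(np.allclose(bracket(L_y, L_z), L_x))  # True
print(np.allclose(bracket(L_z, L_x), L_y))  # True
```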
Complex analogue of the orthogonal group.
One notable difference from the orthogonal group however is that the unitary group is connected, "because" its determinant is not fixed to two disconnected values 1/-1, but rather goes around a continuous unit circle. is the unit circle.
Diffeomorphic to the 3 sphere.
The unitary group is one very over-generalized way of looking at it :-)
The complex analogue of the special orthogonal group, i.e. the subgroup of the unitary group with determinant equals exactly 1 instead of an arbitrary complex number with absolute value equal 1 as is the case for the unitary group.
TODO motivation. Motivation. Motivation. Motivation. The definition with quotient group is easy to understand.
The second smallest non-Abelian finite simple group after the alternating group of degree 5.
Full set of all possible special relativity symmetries:
In simple and concrete terms: suppose you observe N particles following different trajectories in spacetime.
There are two observers traveling at constant speed relative to each other, and so they see different trajectories for those particles:
  • space and time shifts, because their space origin and time origin (time they consider 0, i.e. when they started their timers) are not synchronized. This can be modelled with a 4-vector addition.
  • their space axes are rotated relative to one another. This can be modelled with a 4x4 matrix multiplication.
  • and they are moving relative to each other, which leads to the usual spacetime interactions of special relativity. Also modelled with a 4x4 matrix multiplication.
Note that the first two types of transformation are exactly the non-relativistic Galilean transformations.
The Poincaré group is the set of all matrices such that a relationship like this exists between two frames of reference.
Subset of Galilean transformation with speed equals 0.
This is a good and simple first example of Lie algebra to look into.
Take the group of all Translation in .
Let's see how the generator of this group is the derivative operator:
The way to think about this is:
  • the translation group operates on the argument of a function
  • the generator is an operator that operates on the function itself
So let's take the exponential map:
and we notice that this is exactly the Taylor series of around the identity element of the translation group, which is 0! Therefore, if behaves nicely enough, within some radius of convergence around the origin we have for finite :
This example shows clearly how the exponential map applied to a (differential) operator can generate finite (non-infinitesimal) Translation!
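This can be verified numerically for a function whose Taylor series terminates, e.g. f(x) = x³ (Python sketch, my own example values):

```python
import math

# f(x) = x^3 and its successive derivatives; the series terminates after f'''.
derivs = [lambda x: x**3, lambda x: 3 * x**2, lambda x: 6 * x, lambda x: 6.0]

def exp_translation(derivs, x, a):
    """Apply exp(a d/dx) to f at x: sum over k of a^k / k! * f^(k)(x)."""
    return sum(a**k / math.factorial(k) * d(x) for k, d in enumerate(derivs))

x, a = 2.0, 0.5
print(exp_translation(derivs, x, a))  # 15.625
print((x + a)**3)                     # 15.625: same as the finite translation
```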
A law of physics is Galilean invariant if the same formula works both when you are standing still on land, or when you are on a boat moving at constant velocity.
For example, if we were describing the movement of a point particle, the exact same formulas that predict the evolution of must also predict , even though of course both of those will have different values.
It would be extremely unsatisfactory if the formulas of the laws of physics did not obey Galilean invariance. Especially if you remember that Earth is travelling extremely fast relative to the Sun. If there were no such invariance, that would mean for example that the laws of physics would be different on other planets that are moving at different speeds. That would be a strong sign that our laws of physics are not complete.
The consequence/cause of that is that you cannot know if you are moving at a constant speed or not.
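A concrete numerical illustration (Python/NumPy sketch with made-up example values): the acceleration, which is what Newton's second law constrains, is identical in the land frame and in a frame moving at constant velocity.

```python
import numpy as np

t = np.linspace(0.0, 1.0, 1001)
x_land = 0.5 * 9.8 * t**2  # free fall as seen from land
v = 3.0                    # constant relative velocity of the boat
x_boat = x_land + v * t    # Galilean-transformed trajectory

# The second derivative (acceleration) agrees in both frames, since the
# frames differ only by a term linear in t:
a_land = np.gradient(np.gradient(x_land, t), t)
a_boat = np.gradient(np.gradient(x_boat, t), t)
print(np.allclose(a_land, a_boat))  # True
```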
Lorentz invariance generalizes Galilean invariance to also account for special relativity, in which a more complicated invariant that also takes into account different times observed in different inertial frames of reference is also taken into account. But the fundamental desire for the Lorentz invariance of the laws of physics remains the same.
Generally means that the form of the equation does not change if we transform .
This is generally what we want from the laws of physics.
E.g. a Galilean transformation generally changes the exact values of coordinates, but not the form of the laws of physics themselves.
Lorentz covariance is the main context under which the word "covariant" appears, because we really don't want the form of the equations to change under Lorentz transforms, and "covariance" is often used as a synonym of "Lorentz covariance".
TODO some sources distinguish "invariant" from "covariant": invariant vs covariant.
Some sources distinguish "invariant" from "covariant" such that under some transformation (typically Lie group):
  • invariant: the value of does not change if we transform
  • covariant: the form of the equation does not change if we transform .
TODO examples.
Subgroup of the Poincaré group without translations. Therefore, in those, the spacetime origin is always fixed.
Or in other words, it is as if two observers had their space and time origins at the exact same place. However, their space axes may be rotated, and one may be at a relative speed to the other to create a Lorentz boost. Note however that if they are at relative speeds to one another, then their axes will immediately stop being at the same location in the next moment of time, so things are only valid infinitesimally in that case.
This group is made up of matrix multiplication alone, no need to add the offset vector: space rotations and Lorentz boost only spin around and bend things around the origin.
One definition: set of all 4x4 matrices that keep the Minkowski inner product, mentioned at Physics from Symmetry by Jakob Schwichtenberg (2015) page 63. This then implies:
Physics from Symmetry by Jakob Schwichtenberg (2015) page 66 shows one in terms of 4x4 complex matrices.
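The defining property can be checked numerically for a boost (Python/NumPy sketch; β = 0.6 is an arbitrary example value): a Lorentz transformation Λ preserves the Minkowski inner product, i.e. Λ^T η Λ = η with η = diag(1, −1, −1, −1).

```python
import numpy as np

eta = np.diag([1.0, -1.0, -1.0, -1.0])  # Minkowski metric, signature (1, 3)

def boost_x(beta):
    """Lorentz boost along the x axis with velocity beta (in units of c)."""
    gamma = 1.0 / np.sqrt(1.0 - beta**2)
    L = np.eye(4)
    L[0, 0] = L[1, 1] = gamma
    L[0, 1] = L[1, 0] = -gamma * beta
    return L

Lam = boost_x(0.6)
# The Minkowski inner product is preserved:
print(np.allclose(Lam.T @ eta @ Lam, eta))  # True
```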
More importantly though are the representations of the Lie algebra of the Lorentz group, which are generally also just called "representations of the Lorentz group", since you can reach the representation from the algebra via the exponential map.
One of the representations of the Lorentz group that show up in the Representation theory of the Lorentz group.
TODO understand a bit more intuitively.
Two observers travel at fixed speed relative to each other. They synchronize origins at x=0 and t=0, and their spatial axes are perfectly aligned. This is a subset of the Lorentz group. TODO confirm it does not form a subgroup however.
Generalization of orthogonal group to preserve different bilinear forms. Important because the Lorentz group is .
Given a matrix with metric signature containing positive and negative entries, the indefinite orthogonal group is the set of all matrices that preserve the associated bilinear form, i.e.:
Note that if , we just have the standard dot product, and that subcase corresponds to the following definition of the orthogonal group: Section "The orthogonal group is the group of all matrices that preserve the dot product".
As shown at all indefinite orthogonal groups of matrices of equal metric signature are isomorphic, due to the Sylvester's law of inertia, only the metric signature of matters. E.g., if we take two different matrices with the same metric signature such as:
both produce isomorphic spaces. So it is customary to just always pick the matrix with only +1 and -1 as entries.
Following the definition of the indefinite orthogonal group, we want to show that only the metric signature matters.
First we can observe that the exact matrices are different. For example, taking the standard matrix of :
both have the same metric signature. However, we notice that a rotation of 90 degrees, which preserves the first form, does not preserve the second one! E.g. consider the vector , then . But after a rotation of 90 degrees, it becomes , and now ! Therefore, we have to search for an isomorphism between the two sets of matrices.
For example, consider the orthogonal group, which, as shown at "the orthogonal group is the group of all matrices that preserve the dot product", can be defined as:
Like the special orthogonal group is to the orthogonal group, is the subset of with determinant equal to exactly 1.
Basically, a "representation" means associating each group element with an invertible matrix, i.e. a matrix in (possibly some subset of) , that has the same properties as the group.
Or in other words, associating to the more abstract notion of a group more concrete objects with which we are familiar (e.g. a matrix).
Each such matrix then represents one specific element of the group.
This is basically what everyone does (or should do!) when starting to study Lie groups: we start looking at matrix Lie groups, which are very concrete.
Or more precisely, mapping each group element to a linear map over some vector space (which can be represented by a matrix in finite dimensions), in a way that respects the group operations:
As shown at Physics from Symmetry by Jakob Schwichtenberg (2015)
  • page 51, a representation is not unique, we can even use matrices of different dimensions to represent the same group
  • 3.6 classifies the representations of . There is only one possibility per dimension!
  • 3.7 "The Lorentz Group O(1,3)" mentions that even for a "simple" group such as the Lorentz group, not all representations can be described in terms of matrices, and that we can construct such representations with the help of Lie group theory, and that they have fundamental physical application
A bit like the classification of simple finite groups, they also have a few sporadic groups! Not as spectacular since as usual continuous problems are simpler than discrete ones, but still, not bad.
This does not seem to go as deep into the Standard Model as Physics from Symmetry by Jakob Schwichtenberg (2015); it appears to focus more on basic applications.
But because it is more basic, it does explain some things quite well.
The author seems to have uploaded the entire book by chapters at:
Video 1. Hexagons are the Bestagons by CGP Grey (2020) Source.