
# Linear algebra

## Linear function

The term is not very clear, as it could either mean:
• a real number function whose graph is a line, i.e.: $$f(x) = ax + b \quad (1)$$ or, for higher dimensions, a hyperplane: $$f(x_1, x_2, \ldots, x_n) = c_1 x_1 + c_2 x_2 + \ldots + c_n x_n + b \quad (2)$$
• a linear map. Note that the above linear functions are not linear maps unless $b = 0$ (known as the homogeneous case), because e.g.: $$f(x + y) = ax + ay + b \quad (3)$$ but $$f(x) + f(y) = ax + b + ay + b \quad (4)$$ For this reason, it is better never to refer to linear maps as linear functions.

## Linear map (linear operator)

A linear map is a function $f : V_1 \to V_2$ where $V_1$ and $V_2$ are two vector spaces over an underlying field $F$ such that: $$\forall v_1, v_2 \in V_1, c_1, c_2 \in F: f(c_1 v_1 + c_2 v_2) = c_1 f(v_1) + c_2 f(v_2) \quad (5)$$
A common case is $V_1 = \mathbb{R}^m$, $V_2 = \mathbb{R}^n$ and $F = \mathbb{R}$.
One thing that makes such functions particularly simple is that they can be fully specified by specifying how they act on the basis vectors of the domain: they are therefore specified by only a finite number of elements of $V_2$.
Every linear map in finite dimension can be represented by a matrix, the points of the domain being represented as vectors.
As such, when we say "linear map", we can think of a generalization of matrix multiplication that also makes sense in infinite dimensional spaces like Hilbert spaces. Calling such infinite dimensional maps "matrices" would be stretching it a bit, since we would need to specify infinitely many rows and columns.
The prototypical building block of infinite dimensional linear maps is the derivative. In that case, the vectors being operated upon are functions, which cannot in general be specified by a finite number of parameters.
For example, the left side of the time-independent Schrödinger equation is a linear map. And the time-independent Schrödinger equation can be seen as an eigenvalue problem.
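The defining property of a linear map can be checked numerically for a matrix map. A minimal sketch, assuming NumPy and an arbitrary made-up matrix:

```python
import numpy as np

# Hypothetical 2x3 matrix representing a linear map from R^3 to R^2.
A = np.array([[1.0, 2.0, 3.0],
              [4.0, 5.0, 6.0]])

v1 = np.array([1.0, 0.0, -1.0])
v2 = np.array([2.0, 1.0, 0.0])
c1, c2 = 3.0, -2.0

# f(c1 v1 + c2 v2) == c1 f(v1) + c2 f(v2)
lhs = A @ (c1 * v1 + c2 * v2)
rhs = c1 * (A @ v1) + c2 * (A @ v2)
print(np.allclose(lhs, rhs))  # True
```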

### Form (mathematics)

A form is a function from a vector space to elements of the underlying field of the vector space.
Examples:

### Linear form

The set of all linear forms over a vector space forms another vector space called the dual space.

#### Matrix representation of a linear form

For the typical case of a linear form over $\mathbb{R}^n$, the form can be seen just as a row vector with n elements, the full form being specified by the value it takes on each of the basis vectors.

#### Dual space ($V^*$)

The dual space of a vector space $V$, sometimes denoted $V^*$, is the vector space of all linear forms over $V$, with the obvious addition and scalar multiplication operations defined.
Since a linear form is completely determined by how it acts on a basis, and since for each basis element it is specified by a single scalar, the dimension of the dual space is, at least in finite dimension, the same as the dimension of $V$. They are therefore isomorphic, because all vector spaces of the same dimension on a given field are isomorphic, and so the dual is a rather boring concept in the context of finite dimension.
One place where duals are different from the non-duals however is when dealing with tensors, because they transform differently than vectors from the base space $V$.
##### Dual vector ()
Dual vectors are the members of a dual space.
In the context of tensors, we use raised indices to refer to members of the dual basis vs the underlying basis: $$e_1, e_2, e_3 \in V \quad e^1, e^2, e^3 \in V^* \quad (6)$$ The dual basis vectors are defined to "pick the corresponding coordinate" out of elements of $V$. E.g.: $$e^1(4, -3, 6) = 4 \quad e^2(4, -3, 6) = -3 \quad e^3(4, -3, 6) = 6 \quad (7)$$ By expanding into the basis, we can put this more succinctly with the Kronecker delta as: $$e^i(e_j) = \delta^i_j \quad (8)$$
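Numerically, if we store a made-up basis of $\mathbb{R}^3$ as the columns of a matrix, the dual basis vectors are the rows of the matrix inverse, which makes the Kronecker delta property immediate. A sketch assuming NumPy:

```python
import numpy as np

# Hypothetical basis of R^3: basis vectors as the columns of B.
B = np.array([[1.0, 1.0, 0.0],
              [0.0, 1.0, 1.0],
              [0.0, 0.0, 1.0]])

# The dual basis vectors are the rows of B^{-1}:
# applying row i to column j gives the Kronecker delta.
dual = np.linalg.inv(B)
print(np.allclose(dual @ B, np.eye(3)))  # True: e^i(e_j) = delta^i_j
```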
Note that in Einstein notation, the components of a dual vector have lower indices. This works well with the upper indices of the dual basis vectors, allowing us to write a dual vector as: $$f = f_i e^i \quad (9)$$
In the context of quantum mechanics, the bra notation is also used for dual vectors.

### Linear operator

We define it as a linear map where the domain is the same as the codomain, i.e. an endofunction.
Examples:
• a 2x2 matrix can represent a linear map from $\mathbb{R}^2$ to $\mathbb{R}^2$, so it is a linear operator
• the derivative is a linear map from a space of differentiable functions to itself, so it is also a linear operator

#### Adjoint operator ()

Given a linear operator $A$ over a space $S$ that has an inner product defined, we define the adjoint operator $A^\dagger$ (the $\dagger$ symbol is called "dagger") as the unique operator that satisfies: $$\forall v, w \in S, \langle Av, w \rangle = \langle v, A^\dagger w \rangle \quad (10)$$

### Multilinear map


#### Bilinear map

Linear map of two variables.
More formally, given 3 vector spaces X, Y, Z over a single field, a bilinear map is a function: $$f : X \times Y \to Z \quad (11)$$ that is linear in each of the two arguments separately, e.g. for the first one: $$f(a_1 x_1 + a_2 x_2, y) = a_1 f(x_1, y) + a_2 f(x_2, y) \quad (12)$$ Note that the definition only makes sense if all three vector spaces are over the same field, because linearity can mix up each of them.
The most important example by far is the dot product of $\mathbb{R}^n$, which is more specifically also a symmetric bilinear form.

#### Bilinear form ()

Analogous to a linear form, a bilinear form is a bilinear map where the image is the underlying field of the vector space, e.g. $\mathbb{R}$.
Some definitions require both of the input spaces to be the same, i.e. $X = Y$, but it doesn't make much difference in general.
The most important example of a bilinear form is the dot product. It is only defined if both the input spaces are the same.
##### Matrix representation of a bilinear form
As usual, it is useful to think about what a bilinear form looks like in terms of vectors and matrices.
Unlike a linear form, which was represented by a single vector, the bilinear form has two inputs, and is therefore represented by a matrix, which encodes the value of the form for each possible pair of basis vectors.
In terms of that matrix $M$, the form is then given by: $$B(x, y) = x^T M y \quad (13)$$
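The matrix picture can be played with directly; a sketch with an arbitrary made-up matrix $M$, assuming NumPy:

```python
import numpy as np

# Hypothetical matrix of a bilinear form on R^2.
M = np.array([[2.0, 1.0],
              [0.0, 3.0]])

def B(x, y):
    # B(x, y) = x^T M y
    return x @ M @ y

x = np.array([1.0, 2.0])
x2 = np.array([0.0, 1.0])
y = np.array([3.0, -1.0])
a1, a2 = 2.0, -1.0

# Bilinearity in the first argument:
print(np.isclose(B(a1 * x + a2 * x2, y), a1 * B(x, y) + a2 * B(x2, y)))  # True
```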
###### Effect of a change of basis on the matrix of a bilinear form ()
If $C$ is the change of basis matrix, and the matrix representation of a bilinear form looked like: $$B(x, y) = x^T M y \quad (14)$$ then the matrix in the new basis is: $$C^T M C \quad (15)$$ Sylvester's law of inertia then tells us that the number of positive, negative and 0 eigenvalues of both of those matrices is the same.
Proof: the value of a given bilinear form cannot change due to a change of basis, since the bilinear form is just a function, and does not depend on the choice of basis. The only thing that changes is the matrix representation of the form. Therefore, we must have: $$x^T M y = x_{new}^T M_{new} y_{new} \quad (16)$$ and in the new basis: $$x = C x_{new} \quad y = C y_{new}$$ $$x_{new}^T M_{new} y_{new} = x^T M y = (C x_{new})^T M (C y_{new}) = x_{new}^T (C^T M C) y_{new} \quad (17)$$ and so, since this holds for all vectors: $$\forall x_{new}, y_{new}: x_{new}^T M_{new} y_{new} = x_{new}^T (C^T M C) y_{new} \implies M_{new} = C^T M C \quad (18)$$
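The invariance argument above can be verified numerically with random matrices; a sketch assuming NumPy:

```python
import numpy as np

rng = np.random.default_rng(0)
M = rng.standard_normal((3, 3))  # matrix of the form in the old basis
C = rng.standard_normal((3, 3))  # change of basis matrix (invertible with probability 1)
M_new = C.T @ M @ C              # matrix of the form in the new basis

x_new = rng.standard_normal(3)
y_new = rng.standard_normal(3)
x, y = C @ x_new, C @ y_new

# The value of the form is the same in both bases.
print(np.isclose(x @ M @ y, x_new @ M_new @ y_new))  # True
```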

#### Multilinear form

See form.
Analogous to a linear form, a multilinear form is a multilinear map where the image is the underlying field of the vector space, e.g. $\mathbb{R}$.

#### Symmetric bilinear map

Subcase of symmetric multilinear map: $$f(x,y)=f(y,x) (19)$$
Requires the two inputs $x$ and $y$ to be in the same vector space of course.
The most important example is the dot product, which is also a positive definite symmetric bilinear form.
##### Symmetric bilinear form
###### Matrix representation of a symmetric bilinear form
Like the matrix representation of a bilinear form, it is a matrix, but now the matrix has to be a symmetric matrix.
We can then immediately see that if the matrix is symmetric, then so is the form. We have: $$B(x, y) = x^T M y \quad (20)$$ But because $B(x, y)$ is a scalar, we have: $$B(x, y) = B(x, y)^T \quad (21)$$ and, using $M^T = M$ in the last step: $$B(x, y) = B(x, y)^T = (x^T M y)^T = y^T M^T x = y^T M x = B(y, x) \quad (22)$$
##### Hermitian form
The prototypical example of it is the complex dot product.
Note that this form is not strictly symmetric; it satisfies: $$\langle x, y \rangle = \overline{\langle y, x \rangle} \quad (23)$$ where the overbar indicates the complex conjugate; nor is it linear for complex scalar multiplication on the second argument.
Bibliography:
##### Quadratic form
Multivariate polynomial where each term has degree 2, e.g.: $$f(x, y) = 2y^2 + 10yx + x^2 \quad (24)$$ is a quadratic form because each term has degree 2.
but e.g.: $$f(x, y) = 2y^2 + 10yx + x^3 \quad (25)$$ is not, because the term $x^3$ has degree 3.
More generally, for any number of variables it can be written as: $$f(x_1, x_2, \ldots, x_n) = \sum_{i,j} a_{ij} x_i x_j \quad (26)$$
There is a 1-to-1 relationship between quadratic forms and symmetric bilinear forms. In matrix representation, this can be written as: $$x^T B x \quad (27)$$ where $x$ contains each of the variables of the form, e.g. for 2 variables: $$x = [x, y]^T \quad (28)$$
Strictly speaking, the associated bilinear form would not need to be a symmetric bilinear form, at least for the real numbers or complex numbers, which are commutative. E.g.: $$\begin{bmatrix} x & y \end{bmatrix} \begin{bmatrix} 0 & 1 \\ 2 & 0 \end{bmatrix} \begin{bmatrix} x \\ y \end{bmatrix} = \begin{bmatrix} x & y \end{bmatrix} \begin{bmatrix} y \\ 2x \end{bmatrix} = xy + 2yx = 3xy \quad (29)$$ But that same form could also be represented by the symmetric matrix: $$\begin{bmatrix} 0 & 1.5 \\ 1.5 & 0 \end{bmatrix} \quad (30)$$ so we might as well always pick the symmetric representation, since it is simpler and more restricted.
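We can check numerically that the asymmetric matrix and its symmetric part give the same quadratic form; a sketch assuming NumPy:

```python
import numpy as np

A = np.array([[0.0, 1.0],
              [2.0, 0.0]])
S = (A + A.T) / 2  # symmetric part: [[0, 1.5], [1.5, 0]]

v = np.array([2.0, 5.0])  # v = [x, y]
# Both give 3*x*y = 30.
print(v @ A @ v, v @ S @ v)
```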
##### Positive definite symmetric bilinear form
Symmetric bilinear form that is also positive definite, i.e.: $$\forall x \neq 0, B(x, x) > 0 \quad (31)$$
##### Skew-symmetric bilinear map
Subcase of antisymmetric multilinear map: $$f(x,y)=−f(y,x) (32)$$

#### Symmetric multilinear map

Same value if you swap any two input arguments.
##### Antisymmetric multilinear map
Change sign if you swap two input values.


## Dot product

The definition of the "dot product" of a general space varies quite a lot with different contexts.
Most definitions tend to be bilinear forms.
The unqualified term generally refers to the dot product of real coordinate spaces, which is a positive definite symmetric bilinear form. Other important examples include:
The rest of this section is about the real coordinate space case.
The positive definite part of the definition likely comes in because we are so familiar with metric spaces, which require the norm induced by the inner product to be positive.
For the default Euclidean space definition, we use as the matrix representation of the symmetric bilinear form the identity matrix, e.g. in $\mathbb{R}^3$: $$M = \begin{bmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{bmatrix} \quad (33)$$ so that: $$x \cdot y = \begin{bmatrix} x_1 & x_2 & x_3 \end{bmatrix} \begin{bmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{bmatrix} \begin{bmatrix} y_1 \\ y_2 \\ y_3 \end{bmatrix} = x_1 y_1 + x_2 y_2 + x_3 y_3 \quad (34)$$
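This is easy to check numerically; a sketch assuming NumPy:

```python
import numpy as np

x = np.array([1.0, 2.0, 3.0])
y = np.array([4.0, 5.0, 6.0])
M = np.eye(3)  # identity matrix of the Euclidean bilinear form

# x^T M y equals the usual dot product: 1*4 + 2*5 + 3*6 = 32.
print(x @ M @ y, np.dot(x, y))
```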

## Index picking function


### Levi-Civita symbol ()

Denoted by the Greek letter epsilon with \varepsilon encoding in LaTeX.
Definition:

#### Levi-Civita symbol as a tensor

It takes as input three vectors, and outputs one real number, the volume. And it is linear on each vector. This perfectly satisfies the definition of a tensor of order (3,0).
Given a basis and a function that returns the volume of the parallelepiped spanned by three vectors.

## Matrix


### Matrix operation


#### Determinant ()

Name origin: likely because it "determines" whether a matrix is invertible or not, as a matrix is invertible iff its determinant is not zero.

#### Matrix inverse ()

When it exists, which is not the case for all matrices, but only for invertible matrices, the inverse is denoted: $$M^{-1} \quad (35)$$
##### Invertible matrix
The set of all invertible matrices forms a group: the general linear group with matrix multiplication. Non-invertible matrices don't form a group due to the lack of inverse.

#### Transpose ()

##### Transpose of a matrix multiplication
When distributed over a matrix multiplication, the transpose inverts the order of the product: $$(MN)^T = N^T M^T \quad (36)$$
##### Inverse of the transpose
The transpose and matrix inverse commute: $$(M^T)^{-1} = (M^{-1})^T \quad (37)$$
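Both transpose identities are easy to spot-check with random matrices; a sketch assuming NumPy:

```python
import numpy as np

rng = np.random.default_rng(1)
M = rng.standard_normal((3, 3))
N = rng.standard_normal((3, 3))

# (MN)^T = N^T M^T
print(np.allclose((M @ N).T, N.T @ M.T))                     # True
# Transpose and inverse commute.
print(np.allclose(np.linalg.inv(M.T), np.linalg.inv(M).T))   # True
```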

### Matrix multiplication

Since a matrix $M$ can be seen as a linear map $f_M$, the product of two matrices $MN$ can be seen as the composition of two linear maps: $$f_M(f_N(x)) \quad (38)$$ One cool thing about linear maps is that we can pre-calculate this product only once to obtain a new matrix, and so we don't have to do both multiplications separately each time.
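A sketch of composition versus precomputing the product, assuming NumPy and two made-up matrices:

```python
import numpy as np

M = np.array([[1.0, 2.0],
              [3.0, 4.0]])
N = np.array([[0.0, 1.0],
              [1.0, 0.0]])
x = np.array([5.0, 6.0])

# Applying the maps one at a time vs. precomputing the product matrix once.
one_at_a_time = M @ (N @ x)
precomputed = (M @ N) @ x
print(np.allclose(one_at_a_time, precomputed))  # True
```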


### Eigenvalues and eigenvectors


#### Eigenvalue

##### Spectrum (functional analysis)
###### Continuous spectrum (functional analysis)
Unlike the simple case of a matrix, in infinite dimensional vector spaces, the spectrum may be continuous.
The quintessential example of that is the spectrum of the position operator in quantum mechanics, in which any real number is a possible eigenvalue, since the particle may be found in any position. The associated eigenvectors are the corresponding Dirac delta functions.

#### Eigendecomposition of a matrix

Every diagonalizable matrix can be written as: $$M = Q D Q^{-1} \quad (39)$$ where:
• $D$ is a diagonal matrix containing the eigenvalues of $M$
• the columns of $Q$ are the corresponding eigenvectors
Note therefore that this decomposition is unique up to swapping the order of eigenvectors. We could fix a canonical form by sorting eigenvectors from smallest to largest eigenvalue in the case of real eigenvalues.
Intuitively, note that this is just the change of basis formula, and so:
• $Q^{-1}$ changes basis to align to the eigenvectors
• $D$ multiplies each eigenvector simply by its eigenvalue
• $Q$ changes back to the original basis
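As a sanity check on the decomposition, assuming NumPy and a made-up symmetric (hence diagonalizable) matrix:

```python
import numpy as np

# Hypothetical symmetric matrix, guaranteed diagonalizable.
M = np.array([[2.0, 1.0],
              [1.0, 2.0]])

eigvals, Q = np.linalg.eig(M)  # columns of Q are the eigenvectors
D = np.diag(eigvals)

# Reconstruct M = Q D Q^{-1}.
print(np.allclose(M, Q @ D @ np.linalg.inv(Q)))  # True
```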
##### Eigendecomposition of a real symmetric matrix
The general result from eigendecomposition of a matrix: $$M = Q D Q^{-1} \quad (40)$$ becomes: $$M = O D O^T \quad (41)$$ where $O$ is an orthogonal matrix, and therefore has $O^{-1} = O^T$.
##### Sylvester's law of inertia
The theorem states that the number of 0, 1 and -1 in the metric signature is the same for two symmetric matrices that are congruent matrices.
For example, consider: $$A = \begin{bmatrix} 2 & \sqrt{2} \\ \sqrt{2} & 3 \end{bmatrix} \quad (42)$$
The eigenvalues of $A$ are $1$ and $4$, and the associated eigenvectors are: $$v_1 = [-\sqrt{2}, 1]^T \quad v_4 = [\sqrt{2}/2, 1]^T \quad (43)$$ SymPy code:
```python
from sympy import Matrix, sqrt

A = Matrix([[2, sqrt(2)], [sqrt(2), 3]])
print(A.eigenvects())
```
and from the eigendecomposition of a real symmetric matrix (with the eigenvectors normalized to unit length so that $P$ is orthogonal) we know that: $$A = P D P^T = \begin{bmatrix} -\sqrt{2}/\sqrt{3} & 1/\sqrt{3} \\ 1/\sqrt{3} & 2/\sqrt{6} \end{bmatrix} \begin{bmatrix} 1 & 0 \\ 0 & 4 \end{bmatrix} \begin{bmatrix} -\sqrt{2}/\sqrt{3} & 1/\sqrt{3} \\ 1/\sqrt{3} & 2/\sqrt{6} \end{bmatrix} \quad (44)$$
Now, instead of $P$, we could use $PE$, where $E$ is an arbitrary diagonal matrix of type: $$E = \begin{bmatrix} e_1 & 0 \\ 0 & e_2 \end{bmatrix} \quad (45)$$ With this, we would reach a new matrix $B$: $$B = (PE) D (PE)^T = P (E D E^T) P^T = P (E^2 D) P^T \quad (46)$$ Therefore, with this congruence, we are able to multiply the eigenvalues of $D$ by the arbitrary positive numbers $e_1^2$ and $e_2^2$. Since we are multiplying by two arbitrary positive numbers, we cannot change the signs of the original eigenvalues, and so the metric signature is maintained, but within that restriction any positive scaling can be reached.
Note that the matrix congruence relation looks a bit like the eigendecomposition of a matrix: $$D = S M S^T \quad (47)$$ but note that $D$ does not have to contain eigenvalues, unlike the eigendecomposition of a matrix. This is because here $S$ is not fixed to having eigenvectors in its columns.
But because the matrix $M$ is symmetric, we could always choose $S$ to actually diagonalize it as mentioned at eigendecomposition of a real symmetric matrix. Therefore, the metric signature can be seen directly from the eigenvalues.
Also, because $D$ is a diagonal matrix, and thus symmetric, it must be that: $$S^T = S^{-1} \quad (48)$$
What this does represent, is a general change of bases that maintains the matrix a symmetric matrix.
###### Congruent matrix
Two symmetric matrices $A$ and $B$ are defined to be congruent if there exists an invertible matrix $S$ such that: $$A = S B S^T \quad (49)$$
###### Matrix congruence can be seen as the change of basis of a bilinear form
From effect of a change of basis on the matrix of a bilinear form, remember that a change of basis $C$ modifies the matrix representation $M$ of a bilinear form as: $$C^T M C \quad (50)$$
So, by taking $S = C^T$, we understand that two matrices being congruent means that they can both correspond to the same bilinear form in different bases.


#### Spectral theorem

##### Hermitian matrix (complex analogue of symmetric matrix)
###### Hermitian operator
This is the possibly infinite dimensional version of a Hermitian matrix, since linear operators are the possibly infinite dimensional version of matrices.
There's a catch though: we no longer have explicit matrix indices here in general. The generalized definition is shown at: en.wikipedia.org/w/index.php?title=Hermitian_adjoint&oldid=1032475701#Definition_for_bounded_operators_between_Hilbert_spaces

### Named matrix



#### Square matrix

##### Matrix ring (matrix ring of degree n, set of all n-by-n square matrices)
The matrix ring of degree n is the set of all n-by-n square matrices together with the usual vector space and matrix multiplication operations.
This set forms a ring.

#### Orthogonal matrix

Members of the orthogonal group.
Applications:

#### Symmetric matrix

A matrix that equals its transpose: $$M=MT (51)$$
##### Definite matrix
The definition implies that this is also a symmetric matrix.
###### Positive definite matrix
The matrix of the dot product (the identity) is a positive definite matrix, and so we see that positive definite matrices have an important link to familiar geometry.
##### Skew-symmetric matrix (Antisymmetric matrix)
WTF is a skew? "Antisymmetric" is just such a better name! And it also appears in other definitions such as antisymmetric multilinear map.

## Vector space


### Basis (linear algebra)


#### Change of basis

$$N = B M B^{-1} \quad (52)$$ where:
• $M$: matrix in the old basis
• $N$: matrix in the new basis
• $B$: change of basis matrix
##### Change of basis matrix
The change of basis matrix $C$ is the matrix that allows us to express the new basis in an old basis: $$x_{old} = C x_{new} \quad (53)$$
Mnemonic is as follows: consider we have an initial basis $(x_{old}, y_{old})$. Now, we define the new basis in terms of the old basis, e.g.: $$x_{new} = 1 x_{old} + 2 y_{old} \quad y_{new} = 3 x_{old} + 4 y_{old} \quad (54)$$ which can be written in matrix form as: $$\begin{bmatrix} x_{new} \\ y_{new} \end{bmatrix} = \begin{bmatrix} 1 & 2 \\ 3 & 4 \end{bmatrix} \begin{bmatrix} x_{old} \\ y_{old} \end{bmatrix} \quad (55)$$ and so if we set: $$M = \begin{bmatrix} 1 & 2 \\ 3 & 4 \end{bmatrix} \quad (56)$$ we have: $$\vec{x}_{new} = M \vec{x}_{old} \quad (57)$$
The usual question then is: given a vector in the new basis, how do we represent it in the old basis?
The answer is that we simply have to calculate the matrix inverse of $M$: $$\vec{x}_{old} = M^{-1} \vec{x}_{new} \quad (58)$$
That is, the change of basis matrix of equation (53) is the matrix inverse $C = M^{-1}$.
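A sketch of the mnemonic with the made-up matrix above, assuming NumPy:

```python
import numpy as np

# New basis in terms of the old basis, as in the mnemonic:
# x_new = 1*x_old + 2*y_old, y_new = 3*x_old + 4*y_old.
M = np.array([[1.0, 2.0],
              [3.0, 4.0]])
M_inv = np.linalg.inv(M)

# Going old -> new and then new -> old is the identity.
print(np.allclose(M_inv @ M, np.eye(2)))  # True
```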
##### Change of basis between symmetric matrices
When we have a symmetric matrix, a change of basis keeps symmetry iff it is done by an orthogonal matrix $O$, in which case: $$N = B M B^{-1} = O M O^T \quad (59)$$

### Underlying field of a vector space

Every vector space is defined over a field.
E.g. in $\mathbb{R}^n$, the underlying field is $\mathbb{R}$, the real numbers. And in $\mathbb{C}^n$ the underlying field is $\mathbb{C}$, the complex numbers.
Any field can be used, including finite fields. But the underlying thing has to be a field, because the definition of a vector space needs all of the field properties to hold to make sense.
Elements of the underlying field of a vector space are known as scalars.

### Vector (mathematics)


#### Scalar (mathematics)

A member of the underlying field of a vector space. E.g. in $\mathbb{R}^n$, the underlying field is $\mathbb{R}$, and a scalar is a member of $\mathbb{R}$, i.e. a real number.

## Tensor

A multilinear form with a domain that looks like: $$V^m \times (V^*)^n \to \mathbb{R} \quad (60)$$ where $V^*$ is the dual space.
Because a tensor is a multilinear form, it can be fully specified by how it acts on all combinations of basis vectors, which can be done in terms of components. We refer to each component as: $$T_{i_1 \ldots i_m}^{j_1 \ldots j_n} = T(e_{i_1}, \ldots, e_{i_m}, e^{j_1}, \ldots, e^{j_n}) \quad (61)$$ where we remember that the raised indices refer to dual vectors.
Some examples:

### A linear map is a (1,1) tensor

A linear map $A$ can be seen as a (1,1) tensor because: $$T(w, v^*) = v^* A w \quad (62)$$ is a number. $v^*$ is a dual vector, and $w$ is a vector. Furthermore, $T$ is linear in both $w$ and $v^*$. All of this makes $T$ fulfill the definition of a (1,1) tensor.

Bibliography:


### Einstein notation (Einstein summation convention)

The Wikipedia page of this article is basically a masterclass why Wikipedia is useless for learning technical subjects. They are not even able to teach such a simple subject properly there!
Bibliography:

#### Raised and lowered indices

TODO what is the point of them? Why not just sum over every index that appears twice, regardless of where it is, as mentioned at: www.maths.cam.ac.uk/postgrad/part-iii/files/misc/index-notation.pdf.
Vectors with the index on top such as $x^i$ are the "regular vectors"; they are called contravariant vectors.
Those with indices on the bottom are called covariant vectors.
It is possible to change between them by raising and lowering indices.
The values are different only when the metric signature matrix is different from the identity matrix.

#### Implicit metric signature in Einstein notation

When a specific metric is involved, sometimes we want to automatically add it to products.
E.g., in a context considering the common Minkowski inner product matrix, where $\eta_{\mu\nu}$ is the 4x4 metric matrix and $x$ is a 4-vector: $$x_\mu x^\mu = x^\mu \eta_{\mu\nu} x^\nu = -x_0^2 + x_1^2 + x_2^2 + x_3^2 \quad (63)$$ which leads to the change of sign of some terms.

#### Einstein notation for partial derivatives

The Einstein summation convention works well with partial derivatives, and this case is widely used in particle physics.
Partial index partial derivative notation is the partial derivative notation commonly used in this context, as we want to do operations by index rather than by labels such as $x$, $y$, $z$.
This notation also allows us to have raised and lowered indices on the partial derivative symbol TODO how are they different?
##### Divergence in Einstein notation
Given a vector function of three variables: $$F(x_0, x_1, x_2) = (F^0(x_0, x_1, x_2), F^1(x_0, x_1, x_2), F^2(x_0, x_1, x_2)) : \mathbb{R}^3 \to \mathbb{R}^3 \quad (64)$$ note that we are denoting each component of $F$ as $F^i$ with a raised index.
Then, the divergence can be written in Einstein notation as: $$\nabla \cdot F = \partial_i F^i(x_0, x_1, x_2) = \frac{\partial F^i(x_0, x_1, x_2)}{\partial x^i} = \frac{\partial F^0(x_0, x_1, x_2)}{\partial x^0} + \frac{\partial F^1(x_0, x_1, x_2)}{\partial x^1} + \frac{\partial F^2(x_0, x_1, x_2)}{\partial x^2} \quad (65)$$
It is common to just omit the variables of the function, so we tend to just say: $$\nabla \cdot F = \partial_i F^i \quad (66)$$ or equivalently, when referring just to the operation: $$\nabla \cdot = \partial_i \quad (67)$$
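The sum $\partial_i F^i$ can be carried out symbolically; a sketch for a made-up vector field, assuming SymPy:

```python
import sympy as sp

x0, x1, x2 = sp.symbols('x0 x1 x2')
xs = [x0, x1, x2]

# Hypothetical vector field F : R^3 -> R^3, components F^i.
F = [x0**2, x0 * x1, sp.sin(x2)]

# Divergence as the Einstein-notation sum partial_i F^i.
div = sum(sp.diff(F[i], xs[i]) for i in range(3))
print(div)  # 3*x0 + cos(x2)
```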
##### Laplacian in Einstein notation ()
Given a real function of three variables: $$F(x_0, x_1, x_2) : \mathbb{R}^3 \to \mathbb{R} \quad (68)$$ its Laplacian can be written as: $$\nabla^2 F = \partial_i \partial^i F = \partial_0 \partial^0 F + \partial_1 \partial^1 F + \partial_2 \partial^2 F = \partial_0^2 F + \partial_1^2 F + \partial_2^2 F \quad (69)$$
It is common to just omit the variables of the function, so we tend to just say: $$\nabla^2 F = \partial_i \partial^i F \quad (70)$$ or equivalently, when referring just to the operation: $$\nabla^2 = \partial_i \partial^i \quad (71)$$
###### D'alembert operator in Einstein notation ()
Given the function $\psi$: $$\psi : \mathbb{R}^4 \to \mathbb{C} \quad (72)$$ the operator can be written in Planck units as: $$\partial_i \partial^i \psi(x_0, x_1, x_2, x_3) - m^2 \psi(x_0, x_1, x_2, x_3) = 0 \quad (73)$$ often written without function arguments as: $$\partial_i \partial^i \psi \quad (74)$$ Note how this looks just like the Laplacian in Einstein notation, since the D'alembert operator is just a generalization of the Laplace operator to Minkowski space.
###### Klein-Gordon equation in Einstein notation
The Klein-Gordon equation can be written in terms of the D'alembert operator as: $$\Box \psi + m^2 \psi = 0 \quad (75)$$ so we can expand the D'alembert operator in Einstein notation to: $$\partial_i \partial^i \psi - m^2 \psi = 0 \quad (76)$$

## Linear algebra bibliography


### Interactive Linear Algebra by Margalit and Rabinoff

Written in MathBook XML.