
Linear function

The term is not very clear, as it could either mean:
  • a real number function whose graph is a line, i.e.:
    $f(x) = ax + b$
    or for higher dimensions, a hyperplane:
    $f(x_1, x_2, \ldots, x_n) = a_1 x_1 + a_2 x_2 + \ldots + a_n x_n + b$
  • a linear map. Note that the above linear functions are not linear maps unless $b = 0$ (known as the homogeneous case), because e.g.:
    $f(x + y) = a(x + y) + b$
    but:
    $f(x) + f(y) = ax + b + ay + b = a(x + y) + 2b$
    For this reason, it is better never to refer to linear maps as linear functions.

Linear map (linear operator)

A linear map is a function $f : V \to W$ where $V$ and $W$ are two vector spaces over the same underlying field, such that:
$f(\alpha v_1 + \beta v_2) = \alpha f(v_1) + \beta f(v_2)$
A common case is $V = \mathbb{R}^n$ and $W = \mathbb{R}^m$.
One thing that makes such functions particularly simple is that they can be fully specified by specifying how they act on each of the input basis vectors: they are therefore specified by only a finite number of scalars of the underlying field.
Every linear map in finite dimension can be represented by a matrix, the points of the domain being represented as vectors.
As such, when we say "linear map", we can think of a generalization of matrix multiplication that also makes sense in infinite dimensional spaces like Hilbert spaces; calling such infinite dimensional maps "matrices" would be stretching it a bit, since we would need to specify infinitely many rows and columns.
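As a minimal sketch of the finite dimensional case (an added NumPy illustration, not from the original text), we can check that a matrix is fully determined by its action on the basis vectors, and that it acts linearly:

import numpy as np

# A linear map f : R^3 -> R^2 represented by a 2x3 matrix.
M = np.array([[1.0, 2.0, 3.0],
              [4.0, 5.0, 6.0]])
f = lambda v: M @ v

# The columns of M are exactly the images of the basis vectors:
e = np.eye(3)
assert np.allclose(M[:, 0], f(e[0]))

# Linearity: f(a*u + b*v) == a*f(u) + b*f(v)
u, v, a, b = np.random.rand(3), np.random.rand(3), 2.0, -3.0
assert np.allclose(f(a * u + b * v), a * f(u) + b * f(v))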
The prototypical building block of infinite dimensional linear maps is the derivative. In that case, the vectors being operated upon are functions, which cannot therefore be specified by a finite number of parameters, e.g. the map:
$f \mapsto \frac{df}{dx}$
For example, the left side of the time-independent Schrödinger equation is a linear map. And the time-independent Schrödinger equation can be seen as an eigenvalue problem.
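A small SymPy check (added here as an illustration) that the derivative is indeed a linear map on functions:

from sympy import symbols, diff, sin, exp, simplify

x, a, b = symbols('x a b')
f = sin(x)
g = exp(x)

# d/dx is linear: d(a*f + b*g)/dx == a*df/dx + b*dg/dx
lhs = diff(a * f + b * g, x)
rhs = a * diff(f, x) + b * diff(g, x)
assert simplify(lhs - rhs) == 0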
A form is a function from a vector space to elements of the underlying field of the vector space.
Examples:

Linear form

A linear map where the image is the underlying field of the vector space, e.g. $f : \mathbb{R}^n \to \mathbb{R}$.
The set of all linear forms over a vector space forms another vector space called the dual space.
For the typical case of a linear form over $\mathbb{R}^n$, the form can be seen just as a row vector with $n$ elements, the full form being specified by its value on each of the basis vectors.
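A minimal NumPy sketch (added illustration): a linear form on $\mathbb{R}^3$ as a row vector, determined by its values on the basis vectors:

import numpy as np

# A linear form f : R^3 -> R as a row vector.
f = np.array([10.0, 20.0, 30.0])

# Its entries are its values on the basis vectors:
e = np.eye(3)
assert np.isclose(f[1], f @ e[1])

# Applying the form to a vector is a dot product:
v = np.array([1.0, 2.0, 3.0])
assert np.isclose(f @ v, 10.0 + 40.0 + 90.0)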

Dual space ($V^*$)

The dual space of a vector space $V$, sometimes denoted $V^*$, is the vector space of all linear forms over $V$ with the obvious addition and scalar multiplication operations defined.
Since a linear form is completely determined by how it acts on a basis, and since for each basis element it is specified by a single scalar, at least in finite dimension, the dimension of the dual space is the same as the dimension of $V$ itself. They are therefore isomorphic, because all vector spaces of the same dimension on a given field are isomorphic, and so the dual is quite a boring concept in the context of finite dimension.
Infinite dimension seems more interesting however, see: en.wikipedia.org/w/index.php?title=Dual_space&oldid=1046421278#Infinite-dimensional_case
One place where duals are different from the non-duals however is when dealing with tensors, because they transform differently than vectors from the base space $V$.
Dual vector ()
Dual vectors are the members of a dual space.
In the context of tensors, we use raised indices to refer to members of the dual basis vs the underlying basis:
$e^1, e^2, e^3 \in V^* \quad \text{vs} \quad e_1, e_2, e_3 \in V$
The dual basis vectors are defined to "pick the corresponding coordinate" out of elements of V. E.g.:
$e^1(4, -3, 6) = 4 \quad e^2(4, -3, 6) = -3 \quad e^3(4, -3, 6) = 6$
By expanding into the basis, we can put this more succinctly with the Kronecker delta as:
$e^i(e_j) = \delta^i_j$
Note that in Einstein notation, the components of a dual vector have lower indices. This works well with the raised indices of the dual basis vectors, allowing us to write a dual vector $f$ as:
$f = f_i e^i$
In the context of quantum mechanics, the bra notation is also used for dual vectors.
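A small NumPy illustration (added, assuming the standard basis of $\mathbb{R}^3$) of the dual basis picking coordinates and of the Kronecker delta relation:

import numpy as np

e = np.eye(3)       # basis vectors e_j as rows
dual = np.eye(3)    # dual basis vectors e^i as rows (standard basis case)

v = np.array([4.0, -3.0, 6.0])

# e^2 "picks" the second coordinate:
assert np.isclose(dual[1] @ v, -3.0)

# Kronecker delta: e^i(e_j) = delta_ij
assert np.allclose(dual @ e.T, np.eye(3))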

Linear operator

We define it as a linear map where the domain is the same as the codomain, i.e. an endofunction.
Examples:
Given a linear operator $A$ over a space that has an inner product defined, we define the adjoint operator $A^\dagger$ (the $\dagger$ symbol is called "dagger") as the unique operator that satisfies:
$\langle A v, w \rangle = \langle v, A^\dagger w \rangle \quad \forall v, w$
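A numerical sketch (added illustration): for complex matrices with the standard inner product, the adjoint is the conjugate transpose:

import numpy as np

rng = np.random.default_rng(0)
A = rng.random((3, 3)) + 1j * rng.random((3, 3))
v = rng.random(3) + 1j * rng.random(3)
w = rng.random(3) + 1j * rng.random(3)

A_dag = A.conj().T  # adjoint = conjugate transpose

# <Av, w> == <v, A_dag w>, with <x, y> = sum(conj(x) * y)
assert np.allclose(np.vdot(A @ v, w), np.vdot(v, A_dag @ w))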

Multilinear map


Bilinear map

A map of two variables that is linear in each of them separately.
More formally, given 3 vector spaces X, Y, Z over a single field, a bilinear map is a function:
$f : X \times Y \to Z$
that is linear in each of the two arguments from X and Y, i.e.:
$f(\alpha x_1 + \beta x_2, y) = \alpha f(x_1, y) + \beta f(x_2, y)$
$f(x, \alpha y_1 + \beta y_2) = \alpha f(x, y_1) + \beta f(x, y_2)$
Note that the definition only makes sense if all three vector spaces are over the same field, because linearity can mix up each of them.
The most important example by far is the dot product from $\mathbb{R}^n \times \mathbb{R}^n$ to $\mathbb{R}$, which is more specifically also a symmetric bilinear form.

Bilinear form ()

Analogous to a linear form, a bilinear form is a bilinear map where the image is the underlying field of the vector space, e.g. $B : \mathbb{R}^m \times \mathbb{R}^n \to \mathbb{R}$.
Some definitions require both of the input spaces to be the same, e.g. $B : \mathbb{R}^n \times \mathbb{R}^n \to \mathbb{R}$, but it doesn't make much difference in general.
The most important example of a bilinear form is the dot product. It is only defined if both the input spaces are the same.
As usual, it is useful to think about what a bilinear form looks like in terms of vectors and matrices.
Unlike a linear form, which could be represented by a single vector, the bilinear form is represented by a matrix, because it has two inputs: the matrix encodes the value of the form for each possible pair of basis vectors.
In terms of that matrix $M$, the form is then given by:
$B(x, y) = x^T M y$
If $C$ is the change of basis matrix, and the matrix representation of the bilinear form in the old basis was $M$:
$B(x, y) = x^T M y$
then the matrix in the new basis is:
$M' = C^T M C$
Sylvester's law of inertia then tells us that the number of positive, negative and 0 eigenvalues of both of those matrices is the same.
Proof: the value of a given bilinear form cannot change due to a change of basis, since the bilinear form is just a function, and does not depend on the choice of basis. The only thing that changes is the matrix representation of the form. Therefore, we must have:
$x^T M y = x'^T M' y'$
and in the new basis:
$x = C x' \quad y = C y'$
and so since:
$x^T M y = (C x')^T M (C y') = x'^T (C^T M C) y'$
we conclude that $M' = C^T M C$.
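A quick NumPy sanity check of this change of basis rule (an added sketch, not from the original text):

import numpy as np

rng = np.random.default_rng(1)
M = rng.random((3, 3))    # matrix of the bilinear form in the old basis
C = rng.random((3, 3))    # change of basis matrix
x_new = rng.random(3)
y_new = rng.random(3)

# Old-basis coordinates of the same vectors:
x_old = C @ x_new
y_old = C @ y_new

M_new = C.T @ M @ C
assert np.allclose(x_old @ M @ y_old, x_new @ M_new @ y_new)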
Related:
See form.
Analogous to a linear form, a multilinear form is a multilinear map where the image is the underlying field of the vector space, e.g. $f : \mathbb{R}^{n_1} \times \ldots \times \mathbb{R}^{n_k} \to \mathbb{R}$.

Symmetric bilinear map

Subcase of symmetric multilinear map:
$B(x, y) = B(y, x)$
Requires the two inputs $x$ and $y$ to be in the same vector space of course.
The most important example is the dot product, which is also a positive definite symmetric bilinear form.
Symmetric bilinear form
A symmetric bilinear map that is also a bilinear form.
Like the matrix representation of a bilinear form, it is a matrix, but now the matrix has to be a symmetric matrix.
We can then immediately see that if the matrix is symmetric, then so is the form. We have:
$B(x, y) = x^T M y$
But because $x^T M y$ is a scalar, we have:
$x^T M y = (x^T M y)^T$
and:
$(x^T M y)^T = y^T M^T x = y^T M x = B(y, x)$
Hermitian form
The complex number analogue of a symmetric bilinear form.
The prototypical example of it is the complex dot product.
Note that this form is neither strictly symmetric, since it satisfies:
$B(x, y) = \overline{B(y, x)}$
where the overbar indicates the complex conjugate, nor is it linear for complex scalar multiplication on the second argument.
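A short NumPy illustration (added) using the complex dot product:

import numpy as np

x = np.array([1 + 2j, 3 - 1j])
y = np.array([2 - 1j, 1 + 4j])

B = lambda a, b: np.vdot(b, a)  # complex dot product, conjugating the second argument

# Conjugate symmetry instead of plain symmetry:
assert np.allclose(B(x, y), np.conj(B(y, x)))

# Not linear in the second (conjugated) argument:
assert not np.allclose(B(x, 2j * y), 2j * B(x, y))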
Bibliography:
A Hermitian matrix.
Quadratic form
Multivariate polynomial where each term has degree 2, e.g.:
$x^2 + 2xy + y^2$
is a quadratic form because each term has degree 2: $x^2$, $xy$ and $y^2$,
but e.g.:
$x^3 + 2xy + y^2$
is not, because the term $x^3$ has degree 3.
More generally for any number of variables it can be written as:
$q(x_1, \ldots, x_n) = \sum_{i,j} a_{ij} x_i x_j$
There is a 1-to-1 relationship between quadratic forms and symmetric bilinear forms. In matrix representation, this can be written as:
$q(\vec{x}) = \vec{x}^T M \vec{x}$
where $\vec{x}$ contains each of the variables of the form, e.g. for 2 variables:
$\vec{x} = \begin{bmatrix} x \\ y \end{bmatrix}$
Strictly speaking, the associated bilinear form would not need to be a symmetric bilinear form, at least for the real numbers or complex numbers which are commutative. E.g.:
$\begin{bmatrix} x & y \end{bmatrix} \begin{bmatrix} 1 & 2 \\ 0 & 1 \end{bmatrix} \begin{bmatrix} x \\ y \end{bmatrix} = x^2 + 2xy + y^2$
But that same matrix could also be written in symmetric form as:
$\begin{bmatrix} 1 & 1 \\ 1 & 1 \end{bmatrix}$
so why not I guess, it's simpler/more restricted.
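A SymPy check (added illustration) that the asymmetric matrix and its symmetrization produce the same quadratic form:

from sympy import Matrix, symbols, expand

x, y = symbols('x y')
v = Matrix([x, y])

M = Matrix([[1, 2], [0, 1]])
M_sym = (M + M.T) / 2  # gives [[1, 1], [1, 1]]

q1 = expand((v.T * M * v)[0])
q2 = expand((v.T * M_sym * v)[0])
assert q1 == q2 == expand(x**2 + 2*x*y + y**2)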
Symmetric bilinear form that is also positive definite, i.e.:
$B(x, x) > 0 \quad \forall x \neq 0$
A positive definite matrix that is also a symmetric matrix.
Subcase of antisymmetric multilinear map:
$B(x, y) = -B(y, x)$
Skew-symmetric bilinear map that is also a bilinear form.

Symmetric multilinear map

Same value if you swap any input arguments.
Change sign if you swap two input values.
Being zero whenever two input values are equal (the alternating property) implies antisymmetric multilinear map.

Dot product

The definition of the "dot product" of a general space varies quite a lot with different contexts.
Most definitions tend to be bilinear forms.
The unqualified term generally refers to the dot product of real coordinate spaces $\mathbb{R}^n$, which is a positive definite symmetric bilinear form. Other important examples include:
The rest of this section is about the $\mathbb{R}^n$ case.
The positive definite part of the definition likely comes in because we are so familiar with metric spaces, which require a positive norm, as in the norm induced by an inner product.
For the default Euclidean space definition, we use as the matrix representation of the symmetric bilinear form the identity matrix, e.g. in $\mathbb{R}^3$:
$M = \begin{bmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{bmatrix}$
so that:
$\vec{x} \cdot \vec{y} = \vec{x}^T M \vec{y} = x_1 y_1 + x_2 y_2 + x_3 y_3$
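A one-line NumPy confirmation (added) that this dot product is the bilinear form of the identity matrix:

import numpy as np

x = np.array([1.0, 2.0, 3.0])
y = np.array([4.0, 5.0, 6.0])
assert np.isclose(x @ y, x @ np.eye(3) @ y)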

Orthogonality


Index picking function


Levi-Civita symbol ($\varepsilon$)

Denoted by the Greek letter epsilon, encoded as \varepsilon in LaTeX.
Definition:
  • $\varepsilon_{ijk} = 1$ if $(i, j, k)$ is an even permutation of $(1, 2, 3)$
  • $\varepsilon_{ijk} = -1$ if $(i, j, k)$ is an odd permutation of $(1, 2, 3)$
  • $\varepsilon_{ijk} = 0$ if any index is repeated
An Introduction to Tensors and Group Theory for Physicists by Nadir Jeevanjee (2011) shows that this is a tensor that represents the volume of a parallelepiped.
It takes as input three vectors, and outputs one real number, the volume. And it is linear in each vector. This perfectly satisfies the definition of a tensor of order (3,0).
Given a basis $(e_1, e_2, e_3)$ and a function $V(v_1, v_2, v_3)$ that returns the volume of the parallelepiped spanned by three vectors $v_1$, $v_2$ and $v_3$, the symbol can be seen as $\varepsilon_{ijk} = V(e_i, e_j, e_k)$.
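A short NumPy sketch (added illustration) of the symbol and its relation to the signed volume, which is the determinant:

import numpy as np

def levi_civita(i, j, k):
    # 0-based indices: sign of the permutation, 0 if any index repeats
    if len({i, j, k}) < 3:
        return 0
    return 1 if (i, j, k) in {(0, 1, 2), (1, 2, 0), (2, 0, 1)} else -1

a = np.array([1.0, 0.0, 0.0])
b = np.array([0.0, 2.0, 0.0])
c = np.array([0.0, 0.0, 3.0])

# Signed volume of the parallelepiped: eps_ijk * a_i * b_j * c_k
vol = sum(levi_civita(i, j, k) * a[i] * b[j] * c[k]
          for i in range(3) for j in range(3) for k in range(3))
assert np.isclose(vol, np.linalg.det(np.column_stack([a, b, c])))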

Matrix


Matrix operation

Name origin: likely because it "determines" if a matrix is invertible or not, as a matrix is invertible iff determinant is not zero.

Matrix inverse ($M^{-1}$)

When it exists, which is not the case for all matrices, but only for invertible matrices, the inverse is denoted:
$M^{-1}$
The set of all invertible matrices forms a group: the general linear group with matrix multiplication. Non-invertible matrices don't form a group due to the lack of inverse.

Transpose ($M^T$)

It distributes over matrix multiplication by inverting the order of the product:
$(MN)^T = N^T M^T$
The transpose and matrix inverse commute:
$(M^{-1})^T = (M^T)^{-1}$
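A quick NumPy verification of both identities (added sketch):

import numpy as np

rng = np.random.default_rng(2)
M = rng.random((3, 3))
N = rng.random((3, 3))

assert np.allclose((M @ N).T, N.T @ M.T)
assert np.allclose(np.linalg.inv(M).T, np.linalg.inv(M.T))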

Matrix multiplication

Since a matrix $M$ can be seen as a linear map $f_M$, the product of two matrices can be seen as the composition of two linear maps:
$f_{MN} = f_M \circ f_N$
One cool thing about linear functions is that we can easily pre-calculate this product only once to obtain a new matrix, and so we don't have to do both multiplications separately each time.
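A NumPy illustration (added) that the precomputed product implements the composition:

import numpy as np

rng = np.random.default_rng(3)
A = rng.random((2, 3))
B = rng.random((3, 4))
v = rng.random(4)

# Applying B and then A equals applying the precomputed product A @ B:
assert np.allclose(A @ (B @ v), (A @ B) @ v)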

System of linear equations

No 2x2 examples please. I'm talking about large matrices that would be used in supercomputers.
For positive definite matrices only.
TODO application.
TODO speedup over algorithm for general matrices.
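A minimal SciPy sketch of the conjugate gradient method (added here as an illustration; assumes scipy is installed) on a small symmetric positive definite system:

import numpy as np
from scipy.sparse.linalg import cg

# A symmetric positive definite matrix:
A = np.array([[4.0, 1.0],
              [1.0, 3.0]])
b = np.array([1.0, 2.0])

x, info = cg(A, b)
assert info == 0  # converged
assert np.allclose(A @ x, b, atol=1e-6)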
www.studentclustercompetition.us/ comments:
The HPCG benchmark uses a preconditioned conjugate gradient (PCG) algorithm to measure the performance of HPC platforms with respect to frequently observed but challenging patterns of computing, communication, and memory access. While HPL provides an optimistic performance target for applications, HPCG can be considered as a lower bound on performance. Many of the top 500 supercomputers also provide their HPCG performance as a reference.
math.stackexchange.com/questions/41706/practical-uses-of-matrix-multiplication/4647422#4647422 highlights deep learning applications.

Matrix multiplication algorithm

math.stackexchange.com/questions/30330/fast-algorithm-for-solving-system-of-linear-equations/259372#259372
The terminology GEMM is present in BLAS, and has pretty much stuck.

Eigenvalues and eigenvectors


Eigenvalue

See: eigenvalues and eigenvectors.
Spectrum (functional analysis)
Set of eigenvalues of a linear operator.
Unlike the simple case of a matrix, in infinite dimensional vector spaces, the spectrum may be continuous.
The quintessential example of that is the spectrum of the position operator in quantum mechanics, in which any real number is a possible eigenvalue, since the particle may be found in any position. The associated eigenvectors are the corresponding Dirac delta functions.

Eigendecomposition of a matrix

Every diagonalizable matrix $M$ can be written as:
$M = P D P^{-1}$
where:
  • $D$ is a diagonal matrix containing the eigenvalues of $M$
  • $P$ is a matrix whose columns are the corresponding eigenvectors of $M$
Note therefore that this decomposition is unique up to swapping the order of eigenvectors. We could fix a canonical form by sorting eigenvalues from smallest to largest in the case of real eigenvalues.
Intuitively, note that this is just the change of basis formula, and so:
  • $P^{-1}$ changes basis to align to the eigenvectors
  • $D$ simply multiplies each eigenvector coordinate by its eigenvalue
  • $P$ changes back to the original basis
For a real symmetric matrix, the general result from eigendecomposition of a matrix:
$M = P D P^{-1}$
becomes:
$M = O D O^T$
where $O$ is an orthogonal matrix, and therefore has $O^{-1} = O^T$.
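A NumPy sketch of both decompositions (added illustration):

import numpy as np

M = np.array([[2.0, 1.0],
              [1.0, 3.0]])

# General diagonalizable case:
eigvals, P = np.linalg.eig(M)
assert np.allclose(M, P @ np.diag(eigvals) @ np.linalg.inv(P))

# Symmetric case: eigh returns an orthogonal eigenvector matrix O:
eigvals, O = np.linalg.eigh(M)
assert np.allclose(M, O @ np.diag(eigvals) @ O.T)
assert np.allclose(O.T @ O, np.eye(2))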
Sylvester's law of inertia
The main interest of this theorem is in classifying the indefinite orthogonal groups, which in turn is fundamental because the Lorentz group is an indefinite orthogonal group, see: all indefinite orthogonal groups of matrices of equal metric signature are isomorphic.
It also tells us that a change of basis does not alter the metric signature of a bilinear form, see matrix congruence can be seen as the change of basis of a bilinear form.
The theorem states that the number of 0, 1 and -1 in the metric signature is the same for two symmetric matrices that are congruent matrices.
For example, consider:
$A = \begin{bmatrix} 2 & \sqrt{2} \\ \sqrt{2} & 3 \end{bmatrix}$
The eigenvalues of $A$ are $\lambda_1 = 1$ and $\lambda_2 = 4$, and the associated eigenvectors are:
$v_1 = (-\sqrt{2}, 1) \quad v_2 = (\sqrt{2}, 2)$
SymPy code:
from sympy import Matrix, sqrt
A = Matrix([[2, sqrt(2)], [sqrt(2), 3]])
A.eigenvects()
and from the eigendecomposition of a real symmetric matrix we know that:
$A = O D O^T$
Now, instead of $O$, we could use $OE$, where $E$ is an arbitrary diagonal matrix of type:
$E = \begin{bmatrix} e_1 & 0 \\ 0 & e_2 \end{bmatrix}$
With this, the congruence would reach a new matrix $B$:
$B = (OE)^T A (OE) = E^T (O^T A O) E = E D E = \begin{bmatrix} e_1^2 \lambda_1 & 0 \\ 0 & e_2^2 \lambda_2 \end{bmatrix}$
Therefore, with this congruence, we are able to multiply the eigenvalues of $A$ by the arbitrary positive numbers $e_1^2$ and $e_2^2$. Since we are multiplying by two arbitrary positive numbers, we cannot change the signs of the original eigenvalues, and so the metric signature is maintained, but within that constraint the magnitudes can reach any value.
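Continuing the SymPy example above (an added sketch; the eigenvector columns are not normalized, which only rescales the diagonal further):

from sympy import Matrix, sqrt, diag, simplify

A = Matrix([[2, sqrt(2)], [sqrt(2), 3]])
O = Matrix([[-sqrt(2), sqrt(2)], [1, 2]])  # columns: the eigenvectors of A
E = diag(3, 5)                             # arbitrary positive scalings

B = simplify((O * E).T * A * (O * E))
# B is diagonal and both entries are still positive: the signature is preserved.
assert B[0, 1] == 0 and B[1, 0] == 0
assert B[0, 0] > 0 and B[1, 1] > 0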
Note that the matrix congruence relation looks a bit like the eigendecomposition of a matrix:
$S^T A S \quad \text{vs} \quad P D P^{-1}$
but note that the diagonal matrix reached by a congruence does not have to contain eigenvalues, unlike the eigendecomposition of a matrix. This is because here $S$ is not fixed to having eigenvectors in its columns.
But because the matrix $A$ is symmetric however, we could always choose $S$ to actually diagonalize it, as mentioned at eigendecomposition of a real symmetric matrix. Therefore, the metric signature can be seen directly from the eigenvalues.
Also, because the result of a congruence is symmetric whenever $A$ is, since:
$(S^T A S)^T = S^T A^T S = S^T A S$
what this represents is a general change of basis that keeps the matrix a symmetric matrix.
Related:
Congruent matrix
Two symmetric matrices $A$ and $B$ are defined to be congruent if there exists an invertible matrix $S$ in $GL(n)$ such that:
$B = S^T A S$
From the effect of a change of basis on the matrix of a bilinear form, remember that a change of basis with matrix $C$ modifies the matrix representation $M$ of a bilinear form as:
$M' = C^T M C$
So, by taking $S = C$, we understand that two matrices being congruent means that they can both correspond to the same bilinear form in different bases.
Metric signature

Eigenvector

See: eigenvalues and eigenvectors.
math.stackexchange.com/questions/1507290/linear-algebra-identity-matrix-and-its-relation-to-eigenvalues-and-eigenvectors/3934023#3934023

Spectral theorem

Hermitian operator
This is the possibly infinite dimensional version of a Hermitian matrix, since linear operators are the possibly infinite dimensional version of matrices.
There's a catch though: in general we no longer have explicit matrix indices; the generalized definition is shown at: en.wikipedia.org/w/index.php?title=Hermitian_adjoint&oldid=1032475701#Definition_for_bounded_operators_between_Hilbert_spaces

Named matrix


Dense and sparse matrices

A good definition is that a sparse matrix has a number of non-zero entries proportional to its number of rows $n$, i.e. $O(n)$ non-zero entries, which in big O notation is less than the $O(n^2)$ entries of a general dense matrix. Of course, this only makes sense when generalizing to larger and larger matrices; otherwise, we could take the constant of proportionality very high for one specific matrix.

Diagonal matrix

The invertible ones form a subgroup of the general linear group, though not a normal subgroup.
Scalar matrix ()
Forms a normal subgroup of the general linear group.

Square matrix

The matrix ring of degree n is the set of all n-by-n square matrices together with the usual vector space and matrix multiplication operations.
This set forms a ring.
Related terminology:

Orthogonal matrix

Members of the orthogonal group.
Unitary matrix
Complex analogue of orthogonal matrix.
Applications:

Symmetric matrix

A matrix that equals its transpose:
$M = M^T$
Can represent a symmetric bilinear form as shown at matrix representation of a symmetric bilinear form, or a quadratic form.
Definite matrix
The definition implies that this is also a symmetric matrix.
The matrix representation of the dot product is a positive definite matrix (the identity), and so we see that positive definite matrices will have an important link to familiar geometry.
WTF is a skew? "Antisymmetric" is just such a better name! And it also appears in other definitions such as antisymmetric multilinear map.

Vector space


Basis (linear algebra)


Change of basis

$M' = C^{-1} M C$
where:
  • $M$: matrix representation of the linear map in the old basis
  • $M'$: matrix in the new basis
  • $C$: change of basis matrix
The change of basis matrix $C$ is the matrix that allows us to express the new basis in the old basis: its columns hold the coordinates of the new basis vectors in the old basis.
Mnemonic is as follows: consider we have an initial basis $(e_1^{old}, e_2^{old})$. Now, we define the new basis in terms of the old basis, e.g.:
$e_1^{new} = c_{11} e_1^{old} + c_{21} e_2^{old}$
$e_2^{new} = c_{12} e_1^{old} + c_{22} e_2^{old}$
which can be written in matrix form as:
$\begin{bmatrix} e_1^{new} & e_2^{new} \end{bmatrix} = \begin{bmatrix} e_1^{old} & e_2^{old} \end{bmatrix} \begin{bmatrix} c_{11} & c_{12} \\ c_{21} & c_{22} \end{bmatrix}$
and so if we set:
$C = \begin{bmatrix} c_{11} & c_{12} \\ c_{21} & c_{22} \end{bmatrix}$
we have:
$\begin{bmatrix} e_1^{new} & e_2^{new} \end{bmatrix} = \begin{bmatrix} e_1^{old} & e_2^{old} \end{bmatrix} C$
The usual question then is: given the coordinates of a vector in one basis, how do we represent it in the other? With the above convention, coordinates in the new basis convert to the old basis with $C$ itself:
$v_{old} = C v_{new}$
and to go from old coordinates to new ones we simply have to calculate the matrix inverse of $C$:
$v_{new} = C^{-1} v_{old}$
When we have a symmetric matrix, a change of basis keeps symmetry iff it is done by an orthogonal matrix, in which case:
$M' = O^{-1} M O = O^T M O$
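A NumPy sketch (added; it assumes the convention above that the columns of C hold the new basis vectors expressed in the old basis):

import numpy as np

rng = np.random.default_rng(4)
M = rng.random((2, 2))      # a linear map in the old basis
C = np.array([[1.0, 1.0],
              [0.0, 2.0]])  # columns: new basis vectors in old coordinates

M_new = np.linalg.inv(C) @ M @ C

# Acting with the map in either basis gives the same vector:
v_new = rng.random(2)
v_old = C @ v_new
assert np.allclose(C @ (M_new @ v_new), M @ v_old)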
en.wikipedia.org/wiki/Dimension_(vector_space)#Facts
Every vector space is defined over a field.
E.g. in $\mathbb{R}^n$, the underlying field is $\mathbb{R}$, the real numbers. And in $\mathbb{C}^n$ the underlying field is $\mathbb{C}$, the complex numbers.
Any field can be used, including finite fields. But the underlying thing has to be a field, because the definition of a vector space needs all of the field properties to hold to make sense.
Elements of the underlying field of a vector space are known as scalars.

Scalar (mathematics)

A member of the underlying field of a vector space. E.g. in $\mathbb{R}^n$, the underlying field is $\mathbb{R}$, and a scalar is a member of $\mathbb{R}$, i.e. a real number.

Tensor

A multilinear form with a domain that looks like:
$V \times \ldots \times V \times V^* \times \ldots \times V^*$
where $V^*$ is the dual space of $V$.
Because a tensor is a multilinear form, it can be fully specified by how it acts on all combinations of basis sets, which can be done in terms of components. We refer to each component as:
$T^{i_1 \ldots i_m}_{j_1 \ldots j_n}$
where we remember that the raised indices refer to the dual basis.
Some examples:
A linear map $A : V \to V$ can be seen as a (1,1) tensor because:
$T(f, w) = f(A(w))$
is a number, where $f$ is a dual vector and $w$ is a vector. Furthermore, $T$ is linear in both $f$ and $w$. All of this makes $T$ fulfill the definition of a (1,1) tensor.
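A NumPy illustration (added): a matrix as a (1,1) tensor, eating one dual vector (a row vector) and one vector:

import numpy as np

A = np.array([[1.0, 2.0],
              [3.0, 4.0]])

T = lambda f, w: f @ A @ w  # f: dual vector, w: vector; output: a number

f = np.array([1.0, -1.0])
g = np.array([0.5, 4.0])
w = np.array([2.0, 5.0])

# Linear in each slot separately:
a, b = 2.0, -3.0
assert np.isclose(T(a * f + b * g, w), a * T(f, w) + b * T(g, w))
assert np.isclose(T(f, a * w), a * T(f, w))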

Tensor space ($T^{(m,n)}$)

Bibliography:
A tensor $T^{(m,n)}$ has order $(m, n)$.
The Wikipedia page of this article is basically a masterclass why Wikipedia is useless for learning technical subjects. They are not even able to teach such a simple subject properly there!
Bibliography:

Raised and lowered indices

TODO what is the point of them? Why not just sum over every index that appears twice, regardless of where it is, as mentioned at: www.maths.cam.ac.uk/postgrad/part-iii/files/misc/index-notation.pdf.
Vectors with the index on top such as $x^i$ are the "regular vectors"; they are called contravariant vectors.
Those with indices on the bottom such as $x_i$ are called covariant vectors.
It is possible to change between them by raising and lowering indices.
The values are different only when the metric signature matrix is different from the identity matrix.
When a specific metric is involved, we sometimes want to automatically add it to products.
E.g., in a context considering the common Minkowski inner product matrix, where $\eta$ is the 4x4 Minkowski metric matrix and $x$ is a vector in $\mathbb{R}^4$, lowering an index means:
$x_\mu = \eta_{\mu\nu} x^\nu$
which leads to the change of sign of some terms.
The Einstein summation convention works well with partial derivatives, and it is widely used in particle physics.
In particular, the divergence and the Laplacian can be succinctly expressed in this notation, as shown below.
In order to express partial derivatives, we must use what Ciro Santilli calls the "partial index partial derivative notation", which refers to variables with indices such as $x^0$, $x^1$ and $x^2$ instead of the usual letters $x$, $y$ and $z$.
First we write a vector field as:
$F(x) = F^i(x) e_i$
Note how we are denoting each component of $F$ as $F^i$ with a raised index.
Then, the divergence can be written in Einstein notation as:
$\nabla \cdot F = \partial_i F^i(x) = \frac{\partial F^0}{\partial x^0}(x) + \frac{\partial F^1}{\partial x^1}(x) + \frac{\partial F^2}{\partial x^2}(x)$
It is common to just omit the variables of the function, so we tend to just say:
$\nabla \cdot F = \partial_i F^i$
or equivalently when referring just to the operator:
$\nabla \cdot = \partial_i$
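A SymPy check (added illustration) that the Einstein-notation sum reproduces the usual divergence:

from sympy import symbols, diff

x0, x1, x2 = symbols('x0 x1 x2')
# A vector field F with components F^i:
F = [x0 * x1, x1 * x2, x0 * x2]

# Divergence as the Einstein sum over the repeated index i:
div = sum(diff(F[i], v) for i, v in enumerate((x0, x1, x2)))
assert div == x0 + x1 + x2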
Consider a real valued function of three variables:
$f : \mathbb{R}^3 \to \mathbb{R}$
Its Laplacian can be written as:
$\nabla^2 f(x) = \partial_i \partial^i f(x)$
It is common to just omit the variables of the function, so we tend to just say:
$\nabla^2 f = \partial_i \partial^i f$
or equivalently when referring just to the operator:
$\nabla^2 = \partial_i \partial^i$
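And similarly for the Laplacian (added sketch):

from sympy import symbols, diff

x0, x1, x2 = symbols('x0 x1 x2')
f = x0**2 + x1**2 + x2**2

# Laplacian as the Einstein sum of second derivatives:
lap = sum(diff(f, v, 2) for v in (x0, x1, x2))
assert lap == 6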
Given the function $\phi$:
$\phi : \mathbb{R}^4 \to \mathbb{C}$
the operator can be written in Planck units as:
$\Box \phi = \partial_i \partial^i \phi$
often written without function arguments as:
$\Box = \partial_i \partial^i$
Note how this looks just like the Laplacian in Einstein notation, since the D'Alembert operator is just a generalization of the Laplace operator to Minkowski space.
The Klein-Gordon equation can be written in terms of the D'Alembert operator as:
$(\Box + m^2) \phi = 0$
so we can expand the D'Alembert operator in Einstein notation to:
$(\partial_i \partial^i + m^2) \phi = 0$

Linear algebra bibliography

textbooks.math.gatech.edu/ila/index.html
Source: github.com/QBobWatson/ila.
Written in MathBook XML.
