= Linear algebra
{wiki}
= Linear function
{parent=Linear algebra}
{wiki}
= Linear
{synonym}
= Linearly
{synonym}
The term is not very clear, as it could either mean:
* a function whose graph is a line, i.e.:
$$
f(x) = ax + b
$$
or for higher dimensions, a hyperplane:
$$
f(x_1, x_2, \ldots, x_n) = c_1 x_1 + c_2 x_2 + \ldots + c_n x_n + b
$$
* a linear map. Note that the above linear functions are not linear maps unless $b = 0$ (known as the homogeneous case), because e.g.:
$$
f(x + y) = ax + ay + b
$$
but
$$
f(x) + f(y) = ax + b + ay + b
$$
For this reason, it is better never to refer to linear maps as linear functions.
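The failure of additivity when $b \neq 0$ can be checked numerically. A minimal sketch (the values of $a$, $b$, $x$ and $y$ are arbitrary, chosen for illustration):

```python
# Hypothetical illustration: an affine function f(x) = a*x + b
# is not a linear map unless b == 0.
a, b = 2.0, 3.0
f = lambda x: a * x + b

x, y = 1.0, 4.0
print(f(x + y))        # a*(x + y) + b   = 13.0
print(f(x) + f(y))     # a*x + a*y + 2*b = 16.0

# With b = 0, the homogeneous case, the two sides agree:
g = lambda x: a * x
assert g(x + y) == g(x) + g(y)
```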
= Linear map
{parent=Linear algebra}
{title2=linear operator}
{wiki}
A linear map is a function $f : V_1(F) \to V_2(F)$ where $V_1(F)$ and $V_2(F)$ are two vector spaces over the same underlying field $F$ such that:
$$
\forall v_{1}, v_{2} \in V_1, c_{1}, c_{2} \in F \\
f(c_{1} v_{1} + c_{2} v_{2}) = c_{1} f(v_{1}) + c_{2} f(v_{2})
$$
A common case is $F = \R$, $V_1 = \R^m$ and $V_2 = \R^n$.
One thing that makes such functions particularly simple is that they are fully specified by how they act on the basis vectors of the input space: in finite dimension they are therefore specified by only a finite number of elements of $F$.
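This can be made concrete with a small NumPy sketch: applying a linear map to each standard basis vector and stacking the results as columns recovers its matrix (the particular map `f` below is made up for illustration):

```python
import numpy as np

# A linear map f : R^3 -> R^2 is fully determined by its values on the
# three basis vectors e_1, e_2, e_3. Stacking f(e_j) as columns gives
# the matrix representation of f.
def f(v):
    # an arbitrary linear map, chosen for illustration
    x, y, z = v
    return np.array([2*x + y, y - 3*z])

basis = np.eye(3)
M = np.column_stack([f(e) for e in basis])  # 2x3 matrix

v = np.array([1.0, 2.0, 3.0])
assert np.allclose(M @ v, f(v))  # the matrix fully reproduces the map
```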
Every linear map in finite dimension can be represented by a matrix, the points of the domain being represented as column vectors.
As such, when we say "linear map", we can think of it as a generalization of matrices that also makes sense in infinite dimensional vector spaces, since calling such infinite dimensional maps "matrices" is stretching it a bit: we would need to specify infinitely many rows and columns.
The prototypical building block of infinite dimensional linear maps is the derivative. In that case, the vectors being operated upon are functions, which cannot in general be specified by a finite number of parameters.
For example, the left side of the time-independent Schrödinger equation is a linear map, and the equation itself can be seen as an eigenvalue problem.
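Although the derivative acts on an infinite dimensional space, restricting it to a finite dimensional subspace makes its matrix explicit. A sketch on polynomials of degree at most 3, represented as coefficient vectors (the choice of degree is arbitrary):

```python
import numpy as np

# The derivative is a linear map. Restricted to polynomials of degree
# <= 3, written as coefficient vectors [a0, a1, a2, a3] meaning
# a0 + a1 x + a2 x^2 + a3 x^3, it becomes a 4x4 matrix.
D = np.array([
    [0, 1, 0, 0],
    [0, 0, 2, 0],
    [0, 0, 0, 3],
    [0, 0, 0, 0],
], dtype=float)

p = np.array([5.0, 0.0, 1.0, 2.0])   # 5 + x^2 + 2x^3
dp = D @ p                           # derivative: 2x + 6x^2
assert np.allclose(dp, [0.0, 2.0, 6.0, 0.0])
```

On the full space of all polynomials the same pattern would require infinitely many rows and columns, which is exactly why "matrix" is a stretch there.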
= Form
{disambiguate=mathematics}
{parent=Linear map}
A form is a function from a vector space to elements of the underlying field of the vector space.
Examples:
* linear form
* bilinear form
* multilinear form
= Linear form
{parent=Linear map}
{wiki}
A linear map where the image is the underlying field of the vector space, e.g. $\R^n \to \R$.
The set of all linear forms over a vector space forms another vector space called the dual space.
= Matrix representation of a linear form
{parent=Linear form}
For the typical case of a linear form over $\R^n$, the form can be seen just as a row vector with $n$ elements, the full form being specified by its value on each of the basis vectors.
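A minimal sketch of this row vector view (the numbers are arbitrary):

```python
import numpy as np

# A linear form on R^3 is just a row vector; applying the form to a
# vector is a dot product.
f_row = np.array([2.0, -1.0, 0.5])   # an arbitrary linear form
v = np.array([1.0, 2.0, 4.0])
assert f_row @ v == 2.0              # 2*1 - 1*2 + 0.5*4
```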
= Dual space
{parent=Linear form}
{title2=$V^*$}
{wiki}
The dual space of a vector space $V$, sometimes denoted $V^*$, is the vector space of all linear forms over $V$ with the obvious addition and scalar multiplication operations defined.
Since a linear form is completely determined by how it acts on a \x[basis]{magic}, and since for each basis element it is specified by a single scalar, at least in finite dimension the dimension of the dual space is the same as that of $V$, and so the two are isomorphic, because all vector spaces of the same finite dimension over a given field are isomorphic; the dual is therefore a rather boring concept in the context of finite dimension.
Infinite dimension seems more interesting however, see: https://en.wikipedia.org/w/index.php?title=Dual_space&oldid=1046421278#Infinite-dimensional_case
One place where duals do differ from non-duals however is when dealing with a change of basis, because dual vectors transform differently than vectors from the base space $V$.
= Dual vector
{parent=Dual space}
{title2=$e^i$}
Dual vectors are the members of a dual space.
In the context of tensors, we use raised indices to refer to members of the dual basis vs the underlying basis:
$$
\begin{aligned}
e_1 & \in V \\
e_2 & \in V \\
e_3 & \in V \\
e^1 & \in V^* \\
e^2 & \in V^* \\
e^3 & \in V^* \\
\end{aligned}
$$
The dual basis vectors $e^i$ are defined to "pick the corresponding coordinate" out of elements of $V$. E.g.:
$$
\begin{aligned}
e^1 (4, -3, 6) & = 4 \\
e^2 (4, -3, 6) & = -3 \\
e^3 (4, -3, 6) & = 6 \\
\end{aligned}
$$
By expanding into the basis, we can put this more succinctly with the Kronecker delta as:
$$
e^i(e_j) = \delta_{ij}
$$
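In $\R^n$ with the standard basis and standard coordinates, this "pick the coordinate" behavior is easy to verify numerically. A sketch:

```python
import numpy as np

# In R^3 with the standard basis, the dual basis vector e^i is the
# linear form that picks out the i-th coordinate. As a row vector it is
# the i-th row of the identity matrix.
dual_basis = np.eye(3)   # row i represents e^(i+1)

v = np.array([4.0, -3.0, 6.0])
assert dual_basis[0] @ v == 4.0    # e^1(v)
assert dual_basis[1] @ v == -3.0   # e^2(v)
assert dual_basis[2] @ v == 6.0    # e^3(v)

# e^i(e_j) = delta_ij:
basis = np.eye(3)
for i in range(3):
    for j in range(3):
        assert dual_basis[i] @ basis[j] == (1.0 if i == j else 0.0)
```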
Note that in Einstein notation, the components of a dual vector have lower indices. This works well with the upper indices of the dual basis vectors, allowing us to write a dual vector $f$ as:
$$
f = f_i e^i
$$
In the context of quantum mechanics, the bra notation is also used for dual vectors.
= Linear operator
{parent=Linear map}
{wiki}
= Operator
{synonym}
We define it as a linear map where the image space is the same as the domain, i.e. an endofunction.
Examples:
* a 2x2 matrix can represent a linear map from $\R^2$ to $\R^2$, which is therefore also a linear operator
* the derivative is a linear map from the space of differentiable functions to the space of functions, and is therefore also a linear operator
= Adjoint operator
{parent=Linear operator}
{title2=$A^\dagger$}
Given a linear operator $A$ over a space $S$ that has an inner product defined, we define the adjoint operator $A^\dagger$ (the $\dagger$ symbol is called "dagger") as the unique operator that satisfies:
$$
\forall v, w \in S, \langle Av, w \rangle = \langle v, A^\dagger w \rangle
$$
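For matrices over $\C^n$ with the standard inner product, the adjoint is the conjugate transpose, which can be checked numerically. A sketch (the random matrix and vectors are arbitrary):

```python
import numpy as np

# For C^n with the standard inner product <v, w> = conj(v)^T w
# (conjugate-linear in the first argument), the adjoint of a matrix A
# is its conjugate transpose.
rng = np.random.default_rng(0)
A = rng.standard_normal((3, 3)) + 1j * rng.standard_normal((3, 3))
A_dag = A.conj().T

v = rng.standard_normal(3) + 1j * rng.standard_normal(3)
w = rng.standard_normal(3) + 1j * rng.standard_normal(3)

inner = lambda a, b: np.vdot(a, b)   # np.vdot conjugates its first argument
assert np.isclose(inner(A @ v, w), inner(v, A_dag @ w))
```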
= Self-adjoint operator
{parent=Linear map}
{wiki}
= Self-adjoint
{synonym}
= Multilinear map
{parent=Linear map}
{wiki}
= Bilinear map
{parent=Multilinear map}
{wiki}
= Bilinear product
{synonym}
A map that is linear in each of two variables.
More formally, given 3 vector spaces $X$, $Y$, $Z$ over a single field, a bilinear map is a function:
$$
f : X \times Y \to Z
$$
that is linear in each of the two arguments from $X$ and $Y$, e.g. for the first argument:
$$
f(a_1\vec{x_1} + a_2\vec{x_2}, \vec{y}) = a_1f(\vec{x_1}, \vec{y}) + a_2f(\vec{x_2}, \vec{y})
$$
Note that the definition only makes sense if all three vector spaces are over the same field, because the linearity conditions mix elements of all of them.
The most important example by far is the dot product from $\R^n \times \R^n \to \R$, which is more specifically also a bilinear form.
= Bilinear form
{parent=Multilinear map}
{title2=$B(x, y)$}
{wiki}
Analogous to a linear form, a bilinear form is a bilinear map where the image is the underlying field of the vector space, e.g. $\R^n \times \R^m \to \R$.
Some definitions require both of the input spaces to be the same, e.g. $\R^n \times \R^n \to \R$, but it doesn't make much difference in general.
The most important example of a bilinear form is the dot product. It is only defined if both input spaces are the same.
= Matrix representation of a bilinear form
{parent=Bilinear form}
As usual, it is useful to think about how a bilinear form looks in terms of matrices and vectors.
Unlike a linear form, which could be represented by a single row vector, a bilinear form has two inputs, and is therefore represented by a matrix $M$ which encodes the value of the form for each possible pair of basis vectors.
In terms of that matrix $M$, the form $B(x,y)$ is then given by:
$$
B(x,y) = x^T M y
$$
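A small NumPy sketch of this representation (the matrix $M$ and the vectors are arbitrary, chosen for illustration):

```python
import numpy as np

# A bilinear form B on R^2 x R^2 is encoded by a 2x2 matrix M, with
# M[i, j] = B(e_i, e_j). Then B(x, y) = x^T M y.
M = np.array([[1.0, 2.0],
              [0.0, -1.0]])
B = lambda x, y: x @ M @ y

x = np.array([1.0, 3.0])
y = np.array([2.0, 1.0])

# bilinearity in the first argument:
a1, a2 = 2.0, -1.0
x2 = np.array([0.0, 1.0])
assert np.isclose(B(a1*x + a2*x2, y), a1*B(x, y) + a2*B(x2, y))

# M's entries are the values of the form on pairs of basis vectors:
e = np.eye(2)
assert B(e[0], e[1]) == M[0, 1]
```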
= Effect of a change of basis on the matrix of a bilinear form
{parent=Matrix representation of a bilinear form}
{title2=$B_2 = C^T B C$}
If $C$ is the change of basis matrix, and the bilinear form in the old basis is given by the matrix $M$:
$$
B(x,y) = x^T M y
$$
then the matrix in the new basis is:
$$
C^T M C
$$
Sylvester's law of inertia then tells us that the number of positive, negative and 0 eigenvalues of both of those matrices is the same.
Proof: the value of a given bilinear form cannot change due to a \x[change of basis]{magic}, since the bilinear form is just a function, and its values do not depend on the choice of basis. The only thing that changes is the matrix representation of the form. Therefore, we must have:
$$
x^T M y = x_{new}^T M_{new} y_{new}
$$
and, expressing the old coordinates in terms of the new ones:
$$
x = C x_{new} \\
y = C y_{new} \\
x_{new}^T M_{new} y_{new} = x^T M y = (Cx_{new})^T M (Cy_{new}) = x_{new}^T (C^T M C) y_{new} \\
$$
and so, since the equality holds for all $x_{new}$ and $y_{new}$:
$$
\forall x_{new}, y_{new}: \quad x_{new}^T M_{new} y_{new} = x_{new}^T (C^T M C) y_{new} \implies M_{new} = C^T M C
$$
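Both the invariance of the form's value and the preservation of eigenvalue signs can be checked numerically. A sketch (the form matrix and the random change of basis are arbitrary):

```python
import numpy as np

# Changing basis by an invertible C turns the matrix M of a bilinear
# form into C^T M C, while the value B(x, y) stays the same.
rng = np.random.default_rng(1)
M = np.array([[2.0, 1.0],
              [1.0, -1.0]])          # symmetric, indefinite (det < 0)
C = rng.standard_normal((2, 2))      # change of basis, invertible a.s.
M_new = C.T @ M @ C

x_new = np.array([1.0, 2.0])
y_new = np.array([-1.0, 3.0])
x, y = C @ x_new, C @ y_new          # the same vectors in the old basis

assert np.isclose(x @ M @ y, x_new @ M_new @ y_new)

# Sylvester's law of inertia: the eigenvalue signs of the symmetric
# matrices M and C^T M C agree (here: one positive, one negative).
assert np.array_equal(np.sign(np.linalg.eigvalsh(M)),
                      np.sign(np.linalg.eigvalsh(M_new)))
```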
Related:
* https://proofwiki.org/wiki/Matrix_of_Bilinear_Form_Under_Change_of_Basis
= Multilinear form
{parent=Multilinear map}
{wiki}
A multilinear map where the image is the underlying field of the vector space, analogously to a linear form and a bilinear form.