= Calculus
{wiki}

Well summarized as "the branch of mathematics that deals with <limit (mathematics)>[limits]".

= Mathematical analysis
{parent=Calculus}
{wiki}

= Analytical
{synonym}

A fancy name for <calculus>, with the "more advanced" connotation.

= Limit
{disambiguate=mathematics}
{parent=Calculus}
{wiki}

= Limit
{synonym}

The fundamental concept of <calculus>!

The reason why the epsilon delta definition is so venerated is that it fits directly into well known methods of the <formalization of mathematics>, making the notion completely precise.

= Convergent series
{parent=Limit (mathematics)}
{wiki}

= Convergence
{disambiguate=mathematics}
{synonym}

= Converges
{disambiguate=mathematics}
{synonym}

= Convergent
{disambiguate=mathematics}
{synonym}

= Continuous function
{parent=Limit (mathematics)}
{wiki}

= Continuity
{synonym}

= Continuous
{synonym}

= Continuous problems are simpler than discrete ones
{parent=Continuous function}

This is a general philosophy that <Ciro Santilli>, and likely others, observes over and over.

Basically, <continuity>, or higher order conditions like <differentiability> seem to impose greater constraints on problems, which make them more solvable.

Some good examples of that:
* complex <discrete> problems:
  * <classification of finite groups>
* simple <continuous> problems:
  * characterization of <Lie groups>

= Discrete
{parent=Continuous function}

Something that is very not <continuous>.

Notably studied in <discrete mathematics>.

= Discretization
{parent=Discrete}
{wiki}

= Discretize
{synonym}

= Infinity
{parent=Limit (mathematics)}
{title2=$\infty$}
{wiki}

= Infinite
{synonym}

\Q[Chuck Norris counted to infinity. Twice.]

= Finite
{synonym}

There are a few related concepts that are called infinity in <mathematics>:
* <limits> that are greater than any number
* the <cardinality> of a <set> that does not have a finite number of elements
* in some number systems, there is an explicit "element at infinity" that is not a <limit>, e.g. <projective geometry>

= L'Hôpital's rule
{parent=Limit (mathematics)}
{title2=limit of a ratio}
{wiki}
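
The rule states that, when $f(x)$ and $g(x)$ both tend to $0$ (or both to $\pm\infty$) at the point, and the right-hand side limit exists:
$$
\lim_{x \to c} \frac{f(x)}{g(x)} = \lim_{x \to c} \frac{f'(x)}{g'(x)}
$$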

= Derivative
{parent=Calculus}
{wiki}

The derivative of a function gives its slope at a point.

More precisely, it gives the inclination of a tangent line that passes through that point.

\Image[https://web.archive.org/web/20240417202558if_/https://upload.wikimedia.org/wikipedia/commons/0/0f/Tangent_to_a_curve.svg]
{source=https://en.wikipedia.org/wiki/File:Tangent_to_a_curve.svg}
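
A minimal numerical sketch of the slope idea in plain <Python> (the function $x^2$, the point and the step size are arbitrary choices):
``
# Estimate the derivative of f(x) = x^2 at x = 3 with a central difference.
# The exact answer is f'(x) = 2x = 6.
def f(x):
    return x * x

h = 1e-6
slope = (f(3 + h) - f(3 - h)) / (2 * h)
print(slope)  # ~6.0
``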

= Chain rule
{parent=Derivative}
{wiki}

Here's an example of the chain rule. Suppose we want to calculate:
$$
\dv{e^{2x}}{x}
$$
So we have:
$$
f(x) = e^x \\
g(x) = 2x
$$
and so:
$$
f'(x) = e^x \\
g'(x) = 2
$$
Therefore the final result is:
$$
f'(g(x))g'(x) = e^{2x} \cdot 2 = 2 e^{2x}
$$
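
We can sanity check the result with <SymPy> (a minimal sketch, assuming `sympy` is installed):
``
import sympy as sp

x = sp.symbols('x')
# d/dx e^(2x): f(x) = e^x composed with g(x) = 2x.
print(sp.diff(sp.exp(2 * x), x))  # 2*exp(2*x)
``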

= Multivariable chain rule
{parent=Chain rule}

= Differentiable function
{parent=Derivative}
{wiki}

= Differentiable
{synonym}

= Differentiability
{synonym}

= Smoothness
{parent=Differentiable function}
{wiki}

= Infinitely differentiable function
{parent=Differentiable function}

= $C^{\infty}$
{synonym}
{title2}

= Bump function
{parent=Infinitely differentiable function}
{wiki}
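
The classic example, supported on $[-1, 1]$, is the following; every derivative vanishes at $x = \pm 1$, which is what lets it be infinitely differentiable despite dropping to exactly zero:
$$
f(x) =
\begin{cases}
e^{-\frac{1}{1 - x^2}} & \text{if } |x| < 1 \\
0 & \text{otherwise}
\end{cases}
$$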

= Flat top bump function
{parent=Bump function}

https://math.stackexchange.com/questions/1786964/is-it-possible-to-construct-a-smooth-flat-top-bump-function

= Maxima and minima
{parent=Derivative}
{wiki}

Given a <function> $f$:
* from some space, typically the <real numbers> for beginners, but more generally any <topological space>
* to the <real numbers>
we want to find the points $x$ of the <domain (function)> of $f$ where the value of $f$ is smaller (for minima, or larger for maxima) than the value of $f$ at all other points in some <neighbourhood (mathematics)> of $x$.

In the case of <Functionals>, this problem is treated under the theory of the <calculus of variations>.

= Lifeguard problem
{parent=Maxima and minima}

https://pumphandle.consulting/2020/09/04/the-lifeguard-problem-solved/

= Derivative test
{parent=Maxima and minima}
{wiki}

= Saddle point
{parent=Maxima and minima}
{wiki}

= Newton dot notation
{c}
{parent=Derivative}

= Partial derivative
{parent=Derivative}
{wiki}

= Partial derivative notation
{parent=Partial derivative}

= Partial derivative symbol
{parent=Partial derivative notation}
{title2=$\partial$}

Nope, it is not a <Greek letter>, notably it is not a lowercase <delta>. It is just some random made up symbol that looks like a <letter D>. Which is of course derived from <delta>, which is why it is all so damn confusing.

I think the symbol is usually just read as "<D>" as in "d F d x" for $\pdv{F(x, y, z)}{x}$.

= Partial label partial derivative notation
{parent=Partial derivative notation}
{title2=$\partial_x F$}
{title2=$\partial_y F$}

= Partial index partial derivative notation
{parent=Partial derivative notation}
{title2=$\partial_0 F$}
{title2=$\partial_1 F$}

This notation is not so common in basic mathematics, but it is so incredibly convenient, especially with <Einstein notation> as shown at <einstein notation for partial derivatives>{full}:
$$
\partial_0 F(x, y, z) = \pdv{F(x, y, z)}{x} \\
\partial_1 F(x, y, z) = \pdv{F(x, y, z)}{y} \\
\partial_2 F(x, y, z) = \pdv{F(x, y, z)}{z} \\
$$

This notation is similar to <partial label partial derivative notation>, but it uses indices instead of labels such as $x$, $y$, etc.
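
A minimal <SymPy> sketch of the correspondence (the example function $F = x^2 y + \sin(z)$ is an arbitrary choice, assuming `sympy` is installed):
``
import sympy as sp

x, y, z = sp.symbols('x y z')
F = x**2 * y + sp.sin(z)

# partial_0 F, partial_1 F, partial_2 F in the index notation.
for i, var in enumerate((x, y, z)):
    print(i, sp.diff(F, var))
# 0 2*x*y
# 1 x**2
# 2 cos(z)
``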

= Total derivative
{parent=Derivative}
{wiki}

The total derivative of a function assigns to every point of the domain a linear map on that same domain, which is the best linear approximation to the function around that point, i.e. the tangent plane.

E.g. in 1D:
$$
D[f(x_0)](x) = f(x_0) + \pdv{f}{x}(x_0) \times x
$$
and in 2D:
$$
D[f(x_0, y_0)](x, y) = f(x_0, y_0) + \pdv{f}{x}(x_0, y_0) \times x + \pdv{f}{y}(x_0, y_0) \times y
$$
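
A minimal numerical sketch of the 2D case in plain <Python> (the function $f(x, y) = \sin(x) y$ is an arbitrary choice); the error of the tangent plane approximation is second order in the displacement:
``
import math

def f(x, y):
    return math.sin(x) * y

x0, y0 = 1.0, 2.0
# Partial derivatives of f at (x0, y0), computed by hand:
dfdx = math.cos(x0) * y0
dfdy = math.sin(x0)

# Tangent plane (total derivative) evaluated at a small displacement:
dx, dy = 1e-3, -2e-3
approx = f(x0, y0) + dfdx * dx + dfdy * dy
exact = f(x0 + dx, y0 + dy)
print(abs(approx - exact))  # ~1e-6: second order in the displacement
``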

= Directional derivative
{c}
{parent=Derivative}
{wiki}

= Integral
{parent=Calculus}
{wiki}

= Area
{parent=Integral}
{wiki}

= Volume
{parent=Area}
{wiki}

<3D> <area>.

= Riemann integral
{c}
{parent=Integral}
{wiki}

The easy and less generic <integral>. The harder one is the <Lebesgue integral>.

= Lebesgue integral
{c}
{parent=Integral}
{wiki=Lebesgue_integration}

"More complex and general" integral. Matches the <Riemann integral> for "simple functions", but also <Lebesgue integral vs Riemann integral>[works for some "funkier" functions that Riemann does not work for].

<Ciro Santilli> sometimes wonders how much someone can gain from learning this besides <the beauty of mathematics>, since we can hand-wave a <Lebesgue integral> on almost anything that is of practical use. The beauty is good reason enough though.

= Lebesgue integral vs Riemann integral
{c}
{parent=Lebesgue integral}

Advantages over Riemann:
* <Lebesgue integral of \LP is complete but Riemann isn't>.
* https://youtu.be/PGPZ0P1PJfw?t=710 you are able to switch the order of integrals and limits of function sequences on non-uniform convergence. TODO why do we care? This is linked to the <Fourier series> of course, but concrete example?

\Video[https://youtube.com/watch?v=PGPZ0P1PJfw]
{title=Riemann integral vs. Lebesgue integral by The Bright Side Of Mathematics (2018)}
{description=
https://youtube.com/watch?v=PGPZ0P1PJfw&t=808 shows how Lebesgue can be visualized as a partition of the function range instead of domain, and then you just have to be able to measure the size of pre-images.

One advantage of that is that the range is always one dimensional.

But the main advantage is that having infinitely many discontinuities does not matter.

Infinitely many discontinuities can make the Riemann partitioning diverge.

But in Lebesgue, you are instead measuring the size of preimage, and to fit infinitely many discontinuities in a finite domain, the size of this preimage is going to be zero.

So then the question becomes more of "how to define the measure of a subset of the domain".

Which is why we then fall into <measure theory>!
}

= Real world applications of the Lebesgue integral
{parent=Lebesgue integral vs Riemann integral}

In "practice" it is likely "useless", because the functions that it can integrate that Riemann can't are just too funky to appear in practice :-)

Its value is much more indirect and subtle, as in "it serves as a solid basis of <quantum mechanics>" due to the definition of <Hilbert spaces>.

Bibliography:
* https://math.stackexchange.com/questions/53121/how-do-people-apply-the-lebesgue-integration-theory
* https://www.quora.com/What-are-some-real-life-applications-of-Lebesgue-Integration

= Lebesgue measurable
{c}
{parent=Lebesgue integral}

= Lebesgue integral of $\LP$ is complete but Riemann isn't
{c}
{parent=Lebesgue integral}

$\LP$ is:
* <complete metric space>[complete] under the Lebesgue integral; this result may be called the <Riesz-Fischer theorem>
* not complete under the <Riemann integral>: https://math.stackexchange.com/questions/397369/space-of-riemann-integrable-functions-not-complete

And then this is why <quantum mechanics> basically lives in <l2>: not being complete makes no sense physically, it would mean that you can get closer and closer to states that don't exist!

TODO intuition

= Riesz-Fischer theorem
{c}
{parent=Lebesgue integral of LP is complete but Riemann isn't}
{wiki=Riesz–Fischer_theorem}

A measurable function defined on a closed interval is square integrable (and therefore in <l2>) if and only if its <Fourier series> converges to the function in <l2> norm:
$$
\lim_{N \to \infty} \left\| S_N f - f \right\|_2 = 0
$$

= $\LP$ is complete
{parent=Riesz-Fischer theorem}

TODO

= Fourier basis is complete for $\LTwo$
{id=fourier-basis-is-complete-for-l2}
{c}
{parent=Riesz-Fischer theorem}

https://math.stackexchange.com/questions/316235/proving-that-the-fourier-basis-is-complete-for-cr-2-pi-c-with-l2-norm

<Riesz-Fischer theorem> is a norm version of it, and <Carleson's theorem> is stronger pointwise almost everywhere version.

Note that the <Riesz-Fischer theorem> is weaker because, according to it alone, the pointwise limit might not even exist: <lp norm sequence convergence does not imply pointwise convergence>.

= $L^p$ norm sequence convergence does not imply pointwise convergence
{id=lp-norm-sequence-convergence-does-not-imply-pointwise-convergence}
{parent=fourier basis is complete for l2}

https://math.stackexchange.com/questions/138043/does-convergence-in-lp-imply-convergence-almost-everywhere

There are explicit examples of this. We can have ever thinner disturbances to convergence that keep getting less and less area, but never cease to move around.

If it does converge pointwise to something, then it must match of course.

= Carleson's theorem
{c}
{parent=fourier basis is complete for l2}
{wiki}

The <Fourier series> of an <l2> function (i.e. the function generated from the infinite sum of weighted sines) converges to the function pointwise almost everywhere.

The theorem also seems to hold (maybe trivially given the series result) for the <Fourier transform> (TODO if trivially, why trivially).

Only proved in 1966, and known to be a hard result without any known simple proof.

This theorem of course implies that <fourier basis is complete for l2>, as it explicitly constructs a decomposition into the Fourier basis for every single function.

TODO vs <Riesz-Fischer theorem>. Is this just a stronger pointwise result, while Riesz-Fischer is about norms only?

One of the many <fourier inversion theorems>.

= Lp space
{parent=Lebesgue integral of LP is complete but Riemann isn't}
{wiki}

= $\LP$
{synonym}
{title2}

Functions such that the integral of their absolute value raised to the power $p$ is finite, usually and in this text assumed under the <Lebesgue integral>, because: <Lebesgue integral of \LP is complete but Riemann isn't>

= $L^1$
{id=l1-space}
{parent=Lp space}

= $\LTwo$
{id=l2}
{parent=Lp space}

<\LP> for $p = 2$.

$\LTwo$ is by far the most important of $\LP$ because it is where <mathematical formulation of quantum mechanics>[quantum mechanics states] live, because the total probability of being in any state has to be 1!

<l2> has some crucially important properties that other $\LP$ don't (TODO confirm and make those more precise):
* it is the only $\LP$ that is <Hilbert space> because it is the only one where an inner product compatible with the metric can be defined:
  * https://math.stackexchange.com/questions/2005632/l2-is-the-only-hilbert-space-parallelogram-law-and-particular-ft-gt
  * https://www.quora.com/Why-is-L2-a-Hilbert-space-but-not-Lp-or-higher-where-p-2
* <fourier basis is complete for l2>, which is great for solving <differential equations>

= Plancherel theorem
{c}
{parent=l2}

Some sources say that this is just the part that says that the <norm (mathematics)> of a <l2> function is the same as the norm of its <Fourier transform>.
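
Explicitly, that norm statement reads:
$$
\int_{-\infty}^{\infty} |f(x)|^2 \, dx = \int_{-\infty}^{\infty} |\hat{f}(\xi)|^2 \, d\xi
$$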

Others say that this theorem actually says that the <Fourier transform> is <bijective>.

The comment at https://math.stackexchange.com/questions/446870/bijectiveness-injectiveness-and-surjectiveness-of-fourier-transformation-define/1235725#1235725 may be of interest, it says that the <bijection> statement is an easy consequence from the <norm (mathematics)> one, thus the confusion.

TODO does it require it to be in <l1 space> as well? <Wikipedia> https://en.wikipedia.org/w/index.php?title=Plancherel_theorem&oldid=987110841 says yes, but https://courses.maths.ox.ac.uk/node/view_material/53981 does not mention it.

= The Fourier transform is a bijection in $L^2$
{parent=Plancherel theorem}

As mentioned at <Plancherel theorem>{full}, some people call this part of <Plancherel theorem>, while others say it is just a corollary.

This is an important fact in <quantum mechanics>, since it is because of this that it makes sense to talk about <position and momentum space> as two dual representations of the <wave function> that contain the exact same amount of information.

= Every Riemann integrable function is Lebesgue integrable
{parent=Plancherel theorem}

But only for the proper Riemann integral: https://math.stackexchange.com/questions/2293902/functions-that-are-riemann-integrable-but-not-lebesgue-integrable

= Measure theory
{parent=Calculus}
{wiki=Measure_(mathematics)}

Main motivation: <Lebesgue integral>.

The Bright Side Of Mathematics 2019 playlist: https://www.youtube.com/watch?v=xZ69KEg7ccU&list=PLBh2i93oe2qvMVqAzsX1Kuv6-4fjazZ8j

The key idea is that we can't define a measure for the whole power set of $\R$. Rather, we must restrict ourselves to a sufficiently large family of measurable subsets, and the Borel sigma algebra is a good choice that matches intuitions.

= Fourier series
{c}
{parent=Calculus}
{wiki}

Approximates an original function by sines. If the function is "well behaved enough", the approximation is to arbitrary precision.
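
For a function of period $2\pi$, the series takes the form:
$$
f(x) = \frac{a_0}{2} + \sum_{n=1}^{\infty} \left( a_n \cos(nx) + b_n \sin(nx) \right)
$$
with coefficients obtained by integrating the function against each basis element:
$$
a_n = \frac{1}{\pi} \int_{-\pi}^{\pi} f(x) \cos(nx) \, dx \\
b_n = \frac{1}{\pi} \int_{-\pi}^{\pi} f(x) \sin(nx) \, dx
$$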

<Fourier>'s original motivation, and a key application, is <solving partial differential equations with the Fourier series>.

Can only be used to approximate periodic functions (obviously from its definition!). The <Fourier transform> however overcomes that restriction:
* https://math.stackexchange.com/questions/1115240/can-a-non-periodic-function-have-a-fourier-series
* https://math.stackexchange.com/questions/1378633/every-function-can-be-represented-as-a-fourier-series

The Fourier series behaves really nicely in <l2>, where it always exists and converges pointwise to the function: <Carleson's theorem>.

\Video[https://www.youtube.com/watch?v=r6sGWTCMz2k]
{title=But what is a <Fourier series>? by <3Blue1Brown> (2019)}
{description=Amazing 2D visualization of the decomposition of complex functions.}

= Applications of the Fourier series
{parent=Fourier series}

= Solving partial differential equations with the Fourier series
{parent=Applications of the Fourier series}

See: https://math.stackexchange.com/questions/579453/real-world-application-of-fourier-series/3729366#3729366 from <heat equation solution with Fourier series>.

<Separation of variables> of certain equations like the <heat equation> and <wave equation> are solved immediately by calculating the <Fourier series> of initial conditions!

Other basis besides the Fourier series show up for other equations, e.g.:
* <bessel function>
* <Hermite polynomials>
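
A minimal numerical sketch of the idea in plain <Python> (the initial condition $u(x, 0) = x(\pi - x)$, the boundary conditions $u(0) = u(\pi) = 0$ and the truncation order are arbitrary choices): for the <heat equation> $u_t = u_{xx}$ on $[0, \pi]$, each sine mode evolves independently, so once the initial condition is decomposed into sines, the solution at any later time is immediate:
``
import math

N = 50  # number of sine modes kept (truncation order, arbitrary)

def b(n):
    # Sine series coefficients of the initial condition u(x, 0) = x * (pi - x)
    # on [0, pi], computed analytically: 8 / (pi * n^3) for odd n, 0 otherwise.
    return 8 / (math.pi * n**3) if n % 2 == 1 else 0.0

def u(x, t):
    # Each sine mode sin(n x) is an eigenfunction of d^2/dx^2, so under
    # u_t = u_xx it just decays independently as exp(-n^2 t).
    return sum(b(n) * math.exp(-n**2 * t) * math.sin(n * x)
               for n in range(1, N + 1))

print(u(math.pi / 2, 0.0))  # ~2.4674 = pi^2/4: the initial condition at x = pi/2
print(u(math.pi / 2, 1.0))  # ~0.94: the heat is dissipating towards u = 0
``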

= Discrete Fourier transform
{parent=Fourier series}
{title2=DFT}
{wiki}

Input: a sequence of $N$ <complex numbers> $x_k$.

Output: another sequence of $N$ <complex numbers> $X_k$ such that:
$$
x_n = \frac{1}{N} \sum_{k=0}^{N-1} X_k e^{i 2 \pi \frac{k n}{N}}
$$
Intuitively, this means that we are breaking up the complex signal into $N$ <sinusoidal> frequencies:
* $X_0$: is kind of magic and ends up being a constant added to the signal because $e^{i 2 \pi \frac{k n}{N}} = e^{0} = 1$
* $X_1$: <sinusoidal> that completes one cycle over the signal. The larger the $N$, the larger the resolution of that <sinusoidal>. But it completes one cycle regardless.
* $X_2$: <sinusoidal> that completes two cycles over the signal
* ...
* $X_{N-1}$: <sinusoidal> that completes $N-1$ cycles over the signal
and each $X_k$ is the amplitude of the corresponding sinusoidal.

We use <Zero-based numbering> in our definitions because it just makes every formula simpler.

Motivation: similar to the <Fourier transform>:
* compression: a <sine> would use N points in the time domain, but in the frequency domain just one, so we can throw the rest away. A sum of two sines, only two. So if your signal has periodicity, in general you can compress it with the transform
* noise removal: many systems add noise only at certain frequencies, which are hopefully different from the main frequencies of the actual signal. By doing the transform, we can remove those frequencies to attain a better <signal-to-noise>
In particular, the <discrete Fourier transform> is used in <signal processing> after an <analog-to-digital converter>. <Digital signal processing> historically likely grew more and more over analog processing as digital <processor (computing)>[processors] got faster and faster, as it gives more flexibility in algorithm design.

Sample software implementations:
* <numpy.fft>, notably see the example: <numpy/fft.py>{file}
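
For a concrete feel, a minimal <NumPy> sketch (assuming `numpy` is installed; note that `np.fft.fft` computes the unnormalized analysis sum, with the $1/N$ factor of the definition above living in `np.fft.ifft`):
``
import numpy as np

N = 8
n = np.arange(N)
# A signal with a DC offset plus a sinusoidal completing 2 cycles.
x = 3 + np.sin(2 * np.pi * 2 * n / N)

X = np.fft.fft(x)
print(np.round(X, 10))
# X[0] = 24 = 3 * N: the DC component.
# X[2] = -4j and X[6] = conj(X[2]) = 4j: the 2-cycle sine.
# Everything else is 0.
``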

\Image[https://upload.wikimedia.org/wikipedia/commons/thumb/3/31/DFT_2sin%28t%29_%2B_cos%284t%29_25_points.svg/583px-DFT_2sin%28t%29_%2B_cos%284t%29_25_points.svg.png]

= Discrete Fourier transform of a real signal
{parent=Discrete Fourier transform}

See sections: "Example 1 - N even", "Example 2 - N odd" and "Representation in terms of sines and cosines" of https://www.statlect.com/matrix-algebra/discrete-Fourier-transform-of-a-real-signal

The transform still has complex numbers.

Summary:
* $X_0$ is real
* $X_1 = \conj{X_{N-1}}$
* $X_2 = \conj{X_{N-2}}$
* $X_k = \conj{X_{N-k}}$
Therefore, we only need about half of $X_k$ to represent the signal, as the other half can be derived by conjugation.

"Representation in terms of sines and cosines" from https://www.statlect.com/matrix-algebra/discrete-Fourier-transform-of-a-real-signal then gives explicit formulas in terms of $X_k$.

<NumPy> for example has "Real FFTs" for this: https://numpy.org/doc/1.24/reference/routines.fft.html#real-ffts
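
A minimal <NumPy> check of that conjugate symmetry (assuming `numpy` is installed):
``
import numpy as np

x = np.random.rand(8)  # a real signal
X = np.fft.fft(x)

print(np.allclose(X[1], np.conj(X[7])))  # True: X_1 = conj(X_{N-1})
print(np.allclose(X[2], np.conj(X[6])))  # True: X_2 = conj(X_{N-2})

# rfft returns only the N//2 + 1 non-redundant coefficients.
print(np.fft.rfft(x).shape)  # (5,)
``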

= Normalized DFT
{parent=Discrete Fourier transform}

There are actually two possible definitions for the DFT:
* $1/N$, given as "the default" in many sources:
  $$
  x_n = \frac{1}{N} \sum_{k=0}^{N-1} X_k e^{i 2 \pi \frac{k n}{N}}
  $$
* $1/\sqrt{N}$, known as the "normalized DFT" by some sources: https://www.dsprelated.com/freebooks/mdft/Normalized_DFT.html[], definition which we adopt:
  $$
  x_n = \frac{1}{\sqrt{N}} \sum_{k=0}^{N-1} X_k e^{i 2 \pi \frac{k n}{N}}
  $$

The $1/\sqrt{N}$ version is nicer mathematically, as the inverse becomes more symmetric, and power is conserved between time and frequency domains (see the <NumPy> sketch below):
* https://math.stackexchange.com/questions/3285758/scaling-magnitude-of-the-dft
* https://dsp.stackexchange.com/questions/63001/why-should-i-scale-the-fft-using-1-n
* https://www.dsprelated.com/freebooks/mdft/Normalized_DFT.html
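
In <NumPy> this choice is exposed as the `norm` parameter, where `norm='ortho'` gives the $1/\sqrt{N}$ convention; a minimal sketch of the conserved power:
``
import numpy as np

x = np.random.rand(16)
X = np.fft.fft(x, norm='ortho')

# Parseval: the energy is the same in both domains under 1/sqrt(N) scaling.
print(np.allclose(np.sum(np.abs(x)**2), np.sum(np.abs(X)**2)))  # True
``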

= Fast Fourier transform
{parent=Discrete Fourier transform}
{wiki}

An efficient <algorithm> to calculate the <discrete Fourier transform>: it brings the naive $O(N^2)$ summation down to $O(N \log N)$.

= Fourier transform
{c}
{parent=Fourier series}
{wiki}

Continuous version of the <Fourier series>.

Can be used to represent functions that are not periodic: https://math.stackexchange.com/questions/221137/what-is-the-difference-between-fourier-series-and-fourier-transformation while the <Fourier series> is only for periodic functions.

Of course, every function defined on a finite line segment (i.e. a <compact space>) can be extended to a periodic function on the whole <real line>, and can therefore also be represented with a <Fourier series>.

Therefore, the <Fourier transform> can be seen as a generalization of the <Fourier series> that can also decompose functions defined on the entire <real line>.

As a more concrete example, just like the <Fourier series> is how you solve the <heat equation> on a line segment with <Dirichlet boundary conditions> as shown at: <solving partial differential equations with the Fourier series>{full}, the <Fourier transform> is what you need to solve the problem when the <domain (function)> is the entire <real line>.

= Multidimensional Fourier transform
{parent=Fourier transform}

Lecture notes:
* http://www.robots.ox.ac.uk/~az/lectures/ia/lect2.pdf Lecture 2: 2D Fourier transforms and applications by A. Zisserman (2014)

\Video[https://www.youtube.com/watch?v=v743U7gvLq0]
{title=How the 2D FFT works by Mike X Cohen (2017)}
{description=Animations showing how the 2D Fourier transform looks like for simple input functions.}

= Fourier inversion theorem
{parent=Fourier transform}
{wiki}

A set of theorems that prove under different conditions that the <Fourier transform> has an inverse for a given space, examples:
* <Carleson's theorem> for <l2>

= Laplace transform
{c}
{parent=Fourier transform}

\Video[https://www.youtube.com/watch?v=7UvtU75NXTg]
{title=The Laplace Transform: A Generalized Fourier Transform by Steve Brunton (2020)}
{description=Explains how the Laplace transform works for functions that do not go to zero on infinity, which is a requirement for the <Fourier transform>. No applications in that video yet unfortunately.}

= History of the Fourier series
{parent=Fourier series}

First published by Fourier in 1807 to solve the <heat equation>.

= Topology
{parent=Calculus}
{wiki}

= Topological
{synonym}

Topology is the plumbing of <calculus>.

The key concept of topology is a <neighbourhood (mathematics)>.

Just by having the notion of neighbourhood, concepts such as <limit (mathematics)> and <continuity> can be defined without the need to specify a precise numerical value to the distance between two points with a <metric (mathematics)>.

As an example, consider the <orthogonal group>, which is also naturally a <topological space>. That group does not usually have a notion of distance defined for it by default. However, we can still talk about certain properties of it, e.g. that <the orthogonal group is compact>, and that <the orthogonal group has two connected components>.

= Covering space
{parent=Topology}
{wiki}

Basically it is a larger space such that there exists a <surjection> from the large space onto the smaller space, while still being compatible with the <topology> of the small space.

We can characterize the cover by how injective the function is. E.g. if two elements of the large space map to each element of the small space, then we have a <double cover> and so on.

= Double cover
{parent=Covering space}

= Neighbourhood
{disambiguate=mathematics}
{parent=Topology}
{wiki}

The key concept of <topology>.

= Topological space
{parent=Topology}
{wiki}

= Manifold
{parent=Topology}
{wiki}

We map each point and a small enough <neighbourhood (mathematics)> of it to <\R^n>, so we can talk about the manifold points in terms of coordinates.

Does not require any further structure besides a consistent <topological> map. Notably, does not require <metric (mathematics)> nor an addition operation to make a <vector space>.

Manifolds are <good>[cool]. Especially <differentiable manifolds> which we can do <calculus> on.

A notable example of a <Non-Euclidean geometry> manifold is the space of <generalized coordinates> of a <Lagrangian>. For example, in a problem such as the <double pendulum>, some of those generalized coordinates could be angles, which wrap around and thus are not <euclidean>.

= Atlas
{disambiguate=topology}
{parent=Manifold}
{wiki}

Collection of <coordinate charts>.

The key element in the definition of a <manifold>.

= Coordinate chart
{parent=Atlas (topology)}

= Covariant derivative
{parent=Manifold}
{wiki}

A generalized definition of <derivative> that works on <manifolds>.

TODO: how does it maintain a single value even across different <coordinate charts>?

= Differentiable manifold
{parent=Manifold}
{wiki}

TODO find a concrete numerical example of doing <calculus> on a differentiable manifold and visualizing it. Likely start with a boring circle. That would be sweet...

= Tangent space
{parent=Manifold}
{wiki}

TODO what's the point of it.

Bibliography:
* https://www.youtube.com/watch?v=j1PAxNKB_Zc Manifolds \#6 - Tangent Space (Detail) by WHYB maths (2020). This is worth looking into.
  * https://www.youtube.com/watch?v=oxB4aH8h5j4 actually gives a more concrete example. Basically, the vectors are defined by saying "we are doing the <Directional derivative> of any function along this direction".

    One thing to remember is that of course, the most convenient way to define a function $f$ and to specify a direction, is by using one of the <coordinate charts>.

    We can then just switch between charts by change of basis.
* http://jakobschwichtenberg.com/lie-algebra-able-describe-group/ by <Jakob Schwichtenberg>
* https://math.stackexchange.com/questions/1388144/what-exactly-is-a-tangent-vector/2714944 What exactly is a tangent vector? on <Stack Exchange>

= Tangent vector to a manifold
{parent=Tangent space}

A member of a <tangent space>.

= One-form
{parent=Manifold}
{wiki}

https://www.youtube.com/watch?v=tq7sb3toTww&list=PLxBAVPVHJPcrNrcEBKbqC_ykiVqfxZgNl&index=19 mentions that it is a bit like a <dot product> but for a <tangent vector to a manifold>: it measures how much that vector <derivative>[derives] along a given direction.

= Metric
{disambiguate=mathematics}
{parent=Topology}
{title2=$d(x, y)$}
{wiki}

= Distance
{synonym}

= Metric
{synonym}

A metric is a function that gives the distance, i.e. a <real number>, between any two elements of a space.

A metric may be induced from a <norm> as shown at: <metric induced by a norm>{full}.

Because a <norm induced by an inner product>[norm can be induced by an inner product], and the <inner product> given by the <matrix representation of a positive definite symmetric bilinear form>, in simple cases metrics can also be represented by a <matrix>.

= Metric space
{parent=Metric (mathematics)}
{wiki}

Canonical example: <Euclidean space>.

= Metric space vs normed vector space vs inner product space
{parent=Metric space}

TODO examples:
* <metric space> that is not a <normed vector space>
* <norm (mathematics)> vs <metric>: a norm gives size of one element. A <metric> is the distance between two elements. Given a norm in a space with subtraction, we can obtain a distance function: the <metric induced by a norm>.

\Image[https://upload.wikimedia.org/wikipedia/commons/7/74/Mathematical_Spaces.png]
{title=Hierarchy of topological, metric, normed and inner product spaces}

= Complete metric space
{parent=Metric space}
{wiki}

In plain English: the space has no visible holes. If you start walking less and less on each step, you always converge to something that also falls in the space.

One notable example where completeness matters: <Lebesgue integral of \LP is complete but Riemann isn't>.

= Normed vector space
{parent=Metric space}
{wiki}

= Inner product space
{parent=Normed vector space}
{wiki}

Subcase of a <normed vector space>, therefore also necessarily a <vector space>.

= Inner product
{parent=Inner product space}
{wiki}

Appears to be analogous to the <dot product>, but also defined for <infinite dimensions>.

= Norm
{disambiguate=mathematics}
{parent=Metric space}
{title2=$|x|$}

= Norm
{synonym}

Vs <metric>:
* a norm is the size of one element. A <metric> is the distance between two elements.
* a norm is only defined on a <vector space>. A <metric> could be defined on something that is not a vector space. Most basic examples however are also <vector spaces>.

= Norm induced by an inner product
{parent=Norm (mathematics)}
{wiki}

= Norm induced by the inner product
{synonym}

An <inner product> $x \cdot y$ induces a <norm> with:
$$
|x| = \sqrt{\langle x, x \rangle}
$$

= Metric induced by a norm
{parent=Norm (mathematics)}

In a <vector space>, a <metric> may be induced from a norm by using <subtraction>:
$$
d(x, y) = |x - y|
$$

= Pseudometric space
{parent=Metric space}
{wiki}

<Metric space> but where the distance between two distinct points can be zero.

Notable example: <Minkowski space>{child}.

= Compact space
{parent=Topology}
{wiki}

= Compact
{synonym}

= Dense set
{parent=Topology}
{wiki}

= Connected space
{parent=Topology}
{wiki}

= Disconnected space
{synonym}

= Connected component
{parent=Connected space}
{wiki}

When a <disconnected space> is made up of several smaller <connected spaces>, then each smaller component is called a "connected component" of the larger space.

See for example <the orthogonal group has two connected components>.

= Simply connected space
{parent=Connected space}
{wiki}

= Simply connected
{synonym}

= Loop
{disambiguate=topology}
{parent=Simply connected space}

= Homotopy
{parent=Topology}
{wiki}

= Homotopic
{synonym}

= Generalized Poincaré conjecture
{parent=Homotopy}

There are two cases:
* (topological) manifolds
* differential manifolds

Questions: are all compact manifolds / differential manifolds homotopic / diffeomorphic to the sphere in that dimension?
* for topological manifolds: this is a generalization of the <Poincaré conjecture>.

  The original problem posed was for $n = 3$ and topological manifolds.

  One of the <Millennium Prize Problems>.

  It was the last case to be proven, with only the 4-dimensional differential manifold case still missing as of 2013.

  Even the truth for all $n > 4$ was proven in the 60's!

  Why is low dimension harder than high dimension?? Surprise!

  AKA: classification of compact 3-manifolds. The result turned out to be even simpler than for compact 2-manifolds: there is only one <simply connected> compact 3-manifold, and it is equal to the 3-sphere.

  For dimension two, we know there are infinitely many: <classification of closed surfaces>
* for differential manifolds:

  Not true in general. First counter example is $n = 7$. Surprise: what is special about the number 7!?

  Counter examples are called <exotic spheres>.

  Totally unpredictable count table:
  | Dimension    | 1 | 2 | 3 | 4 | 5 | 6 | 7  | 8 | 9 | 10 | 11  | 12 | 13 | 14 | 15    | 16 | 17 | 18 | 19     | 20 |
  | Smooth types | 1 | 1 | 1 | ? | 1 | 1 | 28 | 2 | 8 | 6  | 992 | 1  | 3  | 2  | 16256 | 2  | 16 | 16 | 523264 | 24 |
  $n = 4$ is an open problem, there could even be infinitely many. Again, why are things more complicated in lower dimensions??

= Exotic sphere
{parent=Generalized Poincaré conjecture}
{wiki}

= Poincaré conjecture
{c}
{parent=Generalized Poincaré conjecture}
{wiki}

= Classification of closed surfaces
{parent=Generalized Poincaré conjecture}

* https://en.wikipedia.org/wiki/Surface_(topology)#Classification_of_closed_surfaces
* http://www.proofwiki.org/wiki/Classification_of_Compact_Two-Manifolds

So simple!! You can either:
* cut two holes and glue a handle. This is easy to visualize as it can be embedded in <\R^3>: you just get a <Torus>, then a double torus, and so on
* cut a single hole and glue a <Möbius strip> in it. Keep in mind that this is possible because the <Möbius strip> has a single boundary just like the hole you just cut. This leads to another infinite family that starts with:
  * 1: <real projective plane>
  * 2: <Klein bottle>

A handle cancels out a <Möbius strip>, so adding one of each does not lead to a new object.

You can glue a <Möbius strip> into a single hole in dimension larger than 3! And it gives you a <Klein bottle>!

Intuitively speaking, they can be seen as the smooth surfaces in N-dimensional space (called an embedding), such that deforming them is allowed. 4 dimensions are enough to cover all the cases: 3 is not enough because of the <Klein bottle> and family.

= Torus
{c}
{parent=Classification of closed surfaces}
{wiki}

= Möbius strip
{c}
{parent=Classification of closed surfaces}
{wiki}

= Klein bottle
{c}
{parent=Classification of closed surfaces}
{wiki}

<sphere> with two <Möbius strips> stuck into it as per the <classification of closed surfaces>.

= Real coordinate space
{c}
{parent=Topology}
{wiki}

= $\R^n$
{synonym}
{title2}

= Real line
{parent=Real coordinate space}
{wiki}

= $\R^1$
{synonym}
{title2}

= 1D
{synonym}

= Real plane
{parent=Real coordinate space}

= $\R^2$
{synonym}
{title2}

= 2D
{synonym}

= Real coordinate space of dimension three
{c}
{parent=Real coordinate space}

= $\R^3$
{synonym}
{title2}

= 3D
{synonym}

= Real coordinate space of dimension four
{c}
{parent=Real coordinate space}

= $\R^4$
{synonym}
{title2}

= Four-dimensional space
{synonym}

= Four-dimensional
{synonym}

= 4D
{synonym}
{title2}

Important 4D spaces:
* <3-sphere>

= Visualizing 4D
{parent=Real coordinate space of dimension four}

Simulate it. Just simulate it.

\Video[http://youtube.com/watch?v=0t4aKJuKP0Q]
{title=4D Toys: a box of four-dimensional toys by Miegakure (2017)}

= Dimension
{parent=Real coordinate space}
{wiki}

= Infinite dimensional
{parent=Dimension}

= Infinite dimensions
{synonym}

https://math.stackexchange.com/questions/466707/what-are-some-examples-of-infinite-dimensional-vector-spaces

= Finite dimensional
{parent=Infinite dimensional}

= Finite dimension
{synonym}

= Complex coordinate space
{parent=Real coordinate space}
{wiki}

= $\C^n$
{title2}
{synonym}

= Complex coordinate space of dimension 2
{parent=Complex coordinate space}

= $\C^2$
{synonym}
{title2}

= Complex dot product
{parent=Complex coordinate space}

This section is about the definition of the <dot product> over <c n>, which extends the definition of the <dot product> over <r n>.

Some motivation is discussed at: https://math.stackexchange.com/questions/2459814/what-is-the-dot-product-of-complex-vectors/4300169#4300169

The complex dot product is defined as:
$$
\sum a_i \overline{b_i}
$$

E.g. in $\C^1$:
$$
(a + bi) \cdot (c + di) = (a + bi) (\overline{c + di}) = (a + bi) (c - di) = (ac + bd) + (bc - ad)i
$$

We can see therefore that this is a <form (mathematics)>, and a positive definite one, because:
$$
(a + bi) \cdot (a + bi) = (aa + bb) + (ba - ab)i = a^2 + b^2
$$

Just like the usual <dot product>, this is positive definite by definition. Note however that, unlike the real case, it is conjugate symmetric (Hermitian) rather than a <positive definite symmetric bilinear form>.
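
A minimal <NumPy> check (assuming `numpy` is installed); note that `np.vdot` also implements a complex dot product, but it conjugates its *first* argument rather than the second as in the definition above:
``
import numpy as np

a = np.array([1 + 2j, 3 - 1j])
b = np.array([2 - 1j, 1 + 4j])

# sum a_i * conj(b_i), matching the definition above:
print(np.sum(a * np.conj(b)))

# Positive definiteness: a . a is real and non-negative.
print(np.sum(a * np.conj(a)))  # (15+0j) = |1+2i|^2 + |3-i|^2 = 5 + 10
``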

= Norm induced by the complex dot product
{parent=Complex dot product}
{tag=Norm induced by an inner product}

Given:
$$
x = (a_1 + b_1 i, \ldots, a_n + b_n i) \in \C^n, \quad a_k, b_k \in \R
$$
the norm ends up being:
$$
|x| = \sqrt{\sum_{k=1}^n a_k^2 + b_k^2}
$$

E.g. in <C 2>:
$$
|(2 + 3i, -1 + 5i)| = \sqrt{2^2 + 3^2 + (-1)^2 + 5^2} = \sqrt{4 + 9 + 1 + 25} = \sqrt{39}
$$

= Euclidean space
{c}
{parent=Real coordinate space}
{wiki}

= Euclidean
{synonym}

<\R^n> with extra structure added to make it into a <metric space>{parent}.

= Euclidean metric signature matrix
{parent=Euclidean space}

The <identity matrix>.

= Cartesian coordinate system
{c}
{parent=Euclidean space}
{wiki}

= Cartesian coordinate
{synonym}

= Polar coordinate system
{c}
{parent=Euclidean space}
{wiki}

= Polar coordinate
{synonym}

= Spherical coordinate system
{c}
{parent=Polar coordinate system}
{wiki}

= Spherical coordinate
{synonym}

= Pythagorean theorem
{c}
{parent=Euclidean space}
{wiki}

= Non-Euclidean geometry
{c}
{parent=Euclidean space}
{wiki}

= Non-Euclidean
{synonym}

= Elliptic geometry
{parent=Non-Euclidean geometry}
{wiki}

= Model of elliptic geometry
{parent=Elliptic geometry}

= Projective elliptic geometry
{parent=Model of elliptic geometry}

= Projective model of elliptic geometry
{synonym}

Each elliptic space can be modelled with a <real projective space>. The best thing is to just start thinking about the <real projective plane>.

= Hyperbolic geometry
{parent=Non-Euclidean geometry}
{wiki}

= Hyperbolic functions
{parent=Hyperbolic geometry}
{wiki}

= Hyperbolic sine
{parent=Hyperbolic functions}

= sinh
{synonym}
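
Defined as:
$$
\sinh(x) = \frac{e^x - e^{-x}}{2}
$$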

= Hyperbolic cosine
{parent=Hyperbolic functions}

= cosh
{synonym}
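
Defined as:
$$
\cosh(x) = \frac{e^x + e^{-x}}{2}
$$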

= Distribution
{disambiguate=mathematics}
{parent=Calculus}

Generalizes <function (mathematics)> to allow some useful things which people wanted to be classical functions, but which are not.

It therefore requires you to redefine and reprove all of calculus.

For this reason, most people are tempted to assume that all the hand wavy intuitive arguments <undergrad> teachers give are true and just move on with life. And they generally are.

One notable example where distributions pop up are the <eigenvectors> of the <position operator> in <quantum mechanics>, which are given by <Dirac delta functions>, which is most commonly rigorously defined in terms of <distribution (mathematics)>.

Distributions are also defined in a way that allows you to do calculus on them. Notably, you can define a <derivative>, and the derivative of the <Heaviside step function> is the <Dirac delta function>.

= Dirac delta function
{c}
{parent=Distribution (mathematics)}
{wiki}

The "0-width" pulse <distribution (mathematics)> that integrates to a step.

There's no way to describe it as a classical <function (mathematics)>, making it the most important example of a <distribution (mathematics)>.
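
Its defining property as a <distribution (mathematics)> is that, for every suitably smooth test function $f$:
$$
\int_{-\infty}^{\infty} f(x) \delta(x) \, dx = f(0)
$$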

Applications:
* <position operator> in <quantum mechanics>. It's not a coincidence that the function is named after <Paul Dirac>.

= Green's function
{c}
{parent=Dirac delta function}
{wiki}

= Heaviside step function
{c}
{parent=Dirac delta function}
{wiki}

= Normal distribution
{c}
{parent=Distribution (mathematics)}
{wiki}

= Complex analysis
{parent=Calculus}
{wiki}

The surprising thing is that a bunch of results are simpler in complex analysis!

= Complex analysis bibliography
{parent=Complex analysis}

= Complex Analysis by Juan Carlos Ponce Campuzano
{c}
{parent=Complex analysis bibliography}
{tag=Visual math HTML book}
{tag=CC BY-NC-SA}

https://complex-analysis.com

= Holomorphic function
{parent=Complex analysis}
{wiki}

Being a complex holomorphic function is an extremely strong condition.

The existence of the first derivative implies the existence of all derivatives.

Another extremely strong consequence is the <identity theorem>.

"Holos" means "entire" in Greek, so maybe this is a reference to the fact that due to the identity theorem, knowing the function on a small open ball implies knowing the function everywhere.

= Analytic continuation
{parent=Complex analysis}
{wiki}

<visualizing the Riemann hypothesis and analytic continuation by 3Blue1Brown (2016)> is a good quick visual non-mathematical introduction to it.

The key question is: how can this continuation be unique since we are defining the function outside of its original domain?

The answer is: due to the <identity theorem>.

= Visualizing the Riemann hypothesis and analytic continuation by 3Blue1Brown (2016)
{parent=Analytic continuation}

Good ultra quick visual non-mathematical introduction to the Riemann hypothesis and analytic continuation.

\Video[http://youtube.com/watch?v=sD0NjbwqlYw]

= Identity theorem
{parent=Analytic continuation}
{wiki}

Essentially, defining an <holomorphic function> on any open subset, no matter how small, also uniquely defines it everywhere.

This is basically why it makes sense to talk about <analytic continuation> at all.

One way to think about this is because the <Taylor series> matches the exact value of an holomorphic function no matter how large the difference from the starting point.

Therefore a holomorphic function basically only contains as much information as a countable sequence of numbers.

= Riemann zeta function
{c}
{parent=Identity theorem}
{wiki}

= Riemann hypothesis
{c}
{parent=Riemann zeta function}
{wiki}

<visualizing the Riemann hypothesis and analytic continuation by 3Blue1Brown (2016)> is a good quick visual non-mathematical introduction to it.

One of the <Millennium Prize Problems>{parent} and <Hilbert's problems>{parent}.

\Video[https://www.youtube.com/watch?v=e4kOh7qlsM4]
{title=What is the Riemann Hypothesis REALLY about? by HexagonVideos (2022)}

= Hilbert space
{c}
{parent=Calculus}
{wiki}

Key for <quantum mechanics>, see: <mathematical formulation of quantum mechanics>, the most important example by far being <l2>.

= Complete basis
{parent=Hilbert space}

Finding a complete basis such that each vector solves a given <differential equation> is the basic method of solving <partial differential equation> through <separation of variables>.

The first example of this you must see is <solving partial differential equations with the Fourier series>.

Notable examples:
* <Fourier series>{child} for the <heat equation> as shown at <fourier basis is complete for l2> and <solving partial differential equations with the Fourier series>
* <Hermite functions>{child} for the <quantum harmonic oscillator>
* <Legendre polynomials>{child} for <Laplace's equation> in <spherical coordinates>
* <bessel function>{child} for the <2D wave equation on a circular domain> in <polar coordinates>

= Differential equation
{parent=Calculus}
{tag=Functional equation}
{wiki}

= Euler number
{c}
{parent=Differential equation}
{title2=$e$}
{wiki}

= Natural logarithm
{parent=Euler number}
{title2=$\ln(n)$}
{title2=$\log_e(n)$}
{wiki}

= Logarithmic integral function
{parent=Natural logarithm}
{title2=$li(x) = \int _{0}^{x}{\frac {dt}{\ln t}}$}
{wiki}

= Logarithm integral
{synonym}
{title2}

Sample software implementations:
* <SymPy>: <python/sympy_cheat/logarithm_integral.py>{file}

= Euler-Mascheroni constant
{c}
{parent=Natural logarithm}
{wiki=Euler–Mascheroni constant}

<Convergence (mathematics)>: https://math.stackexchange.com/questions/629630/simple-proof-euler-mascheroni-gamma-constant

= Linear differential equation
{parent=Differential equation}
{wiki}

The name is a bit obscure if you don't think in very generalized terms right out of the gate. It refers to a <linear polynomial> of <multivariate polynomial>[multiple variables], which by definition must have the super simple form of:
$$
f(x_0, x_1, ..., x_n) = c_0x_0 + c_1x_1 + ... + c_nx_n + k
$$
and then we just put the unknown $y$ and each derivative into that simple polynomial:
$$
f(y(x), y'(x), ..., y^{(n)}(x)) = c_0y + c_1y' + ... + c_ny^{(n)} + k
$$
except that now the $c_i$ are not just constants, but they can also depend on the argument $x$ (but not on $y$ or its derivatives).

Explicit solutions exist for the very specific cases of:
* constant coefficients, any order (see the <SymPy> example below). These were known for a long time, and were studied when <Ciro Santilli's formal education>[Ciro was at university] in the <University of São Paulo>.
* order 1 and any coefficients
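
A minimal <SymPy> sketch of the constant coefficient case (the equation $y'' + 3y' + 2y = 0$ is an arbitrary example, assuming `sympy` is installed):
``
import sympy as sp

x = sp.symbols('x')
y = sp.Function('y')

# Constant coefficients, order 2: y'' + 3y' + 2y = 0.
sol = sp.dsolve(sp.Eq(y(x).diff(x, 2) + 3 * y(x).diff(x) + 2 * y(x), 0), y(x))
print(sol)  # y(x) = C1*exp(-x) + C2*exp(-2*x), up to how SymPy groups constants
``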

= Holonomic function
{parent=Linear differential equation}
{wiki}

= Order of a differential equation
{parent=Differential equation}
{wiki}

Order of the highest derivative that appears.

= Ordinary differential equation
{parent=Differential equation}
{title2=ODE}
{wiki}

= Existence and uniqueness of solutions of ordinary differential equations
{parent=Ordinary differential equation}
{tag=Existence and uniqueness}

= Peano existence theorem
{c}
{parent=Existence and uniqueness of solutions of ordinary differential equations}
{wiki}

= Picard-Lindelöf theorem
{c}
{parent=Existence and uniqueness of solutions of ordinary differential equations}
{wiki=Picard–Lindelöf theorem}

= System of ordinary differential equations
{parent=Ordinary differential equation}

= System of linear ordinary differential equations
{parent=System of ordinary differential equations}

= Partial differential equation
{parent=Differential equation}
{wiki}

= PDE
{c}
{synonym}
{title2}

= Analytical method to solve a partial differential equation
{parent=Partial differential equation}

* <how to use Lie Groups to solve differential equations>{child}

= Separation of variables
{parent=Analytical method to solve a partial differential equation}
{wiki}

Technique to solve <partial differential equations>.

Naturally leads to the <Fourier series>, see: <solving partial differential equations with the Fourier series>, and to other analogous expansions.

One notable application is the solution of the <Schrödinger equation> via the <time-independent Schrödinger equation>.

Bibliography:
* https://math.libretexts.org/Bookshelves/Differential_Equations/Book%3A_Differential_Equations_for_Engineers_(Lebl)/4%3A_Fourier_series_and_PDEs/4.06%3A_PDEs_separation_of_variables_and_the_heat_equation on <LibreTexts> for the <heat equation>

= Numerical method to solve a partial differential equation
{parent=Partial differential equation}
{wiki=Numerical_methods_for_partial_differential_equations}

= Numerical methods to solve partial differential equations
{synonym}

The <finite element method> is one of the most common ways to solve PDEs in practice.

= Variational formulation of a partial differential equation
{parent=Numerical method to solve a partial differential equation}

https://www.cis.upenn.edu/~cis515/cis515-12-sl11.pdf

Used for example in <FreeFem> and <FEniCS Project> as the input description of the PDEs, TODO why.

= Weak solution
{parent=Variational formulation of a partial differential equation}
{wiki}

= Finite element method
{parent=Numerical method to solve a partial differential equation}
{wiki}

Used to solve <partial differential equations>.

TODO understand, give intuition, justification of bounds and <JavaScript> demo.

= Important partial differential equation
{parent=Partial differential equation}

The majority likely comes from <physics>:
* <heat equation>{child}
* <wave equation>{child}
* <Maxwell's equations>{child}
* <Schrödinger equation>{child}
* <Navier-Stokes equations>{child}

= Laplace's equation
{c}
{parent=Important partial differential equation}
{wiki}

Like a <heat equation> but for functions without time dependence, space-only.
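
Explicitly, for a function $u$ of space only:
$$
\nabla^2 u = 0
$$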

TODO confirm: does the solution of the heat equation always converge to the solution of the Laplace equation as time tends to infinity?

In one dimension, the Laplace equation is boring as it is just a straight line since the second derivative must be 0. That also matches our intuition of the limit solution of the heat equation.

Uniqueness: <Uniqueness theorem for Poisson's equation>.

= Legendre polynomials
{c}
{parent=Laplace's equation}

Show up when solving the <Laplace's equation> on <spherical coordinates> by <separation of variables>, which leads to the <differential equation> shown at: https://en.wikipedia.org/w/index.php?title=Legendre_polynomials&oldid=1018881414#Definition_via_differential_equation[].

= Poisson's equation
{c}
{parent=Laplace's equation}
{wiki}

Generalization of <Laplace's equation> where the value is not necessarily 0.
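
Explicitly:
$$
\nabla^2 u = f
$$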

= Uniqueness theorem for Poisson's equation
{c}
{parent=Poisson's equation}
{wiki}

= Harmonic function
{parent=Laplace's equation}
{wiki}

A solution to <Laplace's equation>.

= Spherical harmonic
{parent=Harmonic function}
{wiki=Spherical_harmonics}

Correspond to the angular part of <Laplace's equation> in spherical coordinates after using <separation of variables> as shown at: https://en.wikipedia.org/wiki/Spherical_harmonics#Laplace's_spherical_harmonics

= Heat equation
{parent=Important partial differential equation}
{wiki}

Besides being useful in engineering, it was very important historically from a "development of mathematics point of view", e.g. <history of the Fourier series>[it was the initial motivation for the Fourier series].
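
In its simplest form, for a temperature field $u(x, t)$ and a diffusivity constant $\alpha$:
$$
\pdv{u}{t} = \alpha \nabla^2 u
$$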

Some interesting properties:
* TODO confirm: for a fixed boundary condition that does not depend on time, the solutions always approach one specific equilibrium function.

  This is in contrast notably with the <wave equation>, which can oscillate forever.
* TODO: for a given point, can the temperature go down and then up, or is it always monotonic with time?
* information propagates instantly to infinitely far. Again in contrast to the wave equation, where information propagates at wave speed.

Sample numerical solutions:
* with <FreeFem>:
  * <heat-dirichlet.1d.freefem>
  * <heat-dirichlet-2d-freefem>

= Heat equation solution with Fourier series
{parent=Heat equation}
{tag=Solving partial differential equations with the Fourier series}

See: https://math.stackexchange.com/questions/579453/real-world-application-of-fourier-series/3729366#3729366

= Wave equation
{parent=Important partial differential equation}
{wiki}

Describes perfect lossless waves on the surface of a string, or on a water surface.
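
In its simplest form, for a displacement field $u(x, t)$ and wave speed $c$:
$$
\frac{\partial^2 u}{\partial t^2} = c^2 \nabla^2 u
$$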

Uniqueness: https://math.stackexchange.com/questions/1113622/uniqueness-of-solutions-to-the-wave-equation

As mentioned at: https://math.stackexchange.com/questions/579453/real-world-application-of-fourier-series/3729366#3729366[] from <solving partial differential equations with the Fourier series> citing https://courses.maths.ox.ac.uk/node/view_material/1720[], analogously to the <heat equation>, the linear wave equation can be solved nicely with <separation of variables>.

= Wave equation solver
{parent=Wave equation}

This section talks about solvers/simulators dedicated to solving the <wave equation>. Of course, any serious solver will likely be able to solve a wider range of PDE, so this section contains mostly fun toys. For more serious stuff see: <PDE solver>{full}.

<JavaScript> toy solvers:
* https://jtiscione.github.io/webassembly-wave/index.html circular domain, create waves with mouse click
* https://dionyziz.com/graphics/wave-experiment/ with useless 3D <WebGL> visualization :-), waves with mouse click. Solving itself done on <CPU>, not GPU.

Related:
* https://stackoverflow.com/questions/69949335/how-to-simulate-a-wave-equation

= Wave equation solution with Fourier series
{parent=Wave equation}
{tag=Solving partial differential equations with the Fourier series}

https://web.archive.org/web/20200621205928/https://courses.maths.ox.ac.uk/node/view_material/1720 also mentioned at https://math.stackexchange.com/questions/579453/real-world-application-of-fourier-series/3729366#3729366 from <heat equation solution with Fourier series>.

= The wave equation can be seen as infinitely many infinitesimal coupled oscillators
{parent=Wave equation}

TODO confirm, see also: <coupled oscillators>. And then this idea can be used to define/motivate <quantum field theory> in terms of <quantum harmonic oscillators> with <second quantization>.

* https://youtu.be/SMmFgIEGYtw?t=324 Quantum Field Theory 2a - Field Quantization I by <ViaScience> (2018)

= Lossy 1D Wave Equation
{parent=Wave equation}
{wiki}

https://ccrma.stanford.edu/~jos/pasp/Lossy_1D_Wave_Equation.html

= Wave
{parent=Wave equation}
{wiki}

= Envelope
{disambiguate=waves}
{parent=Wave}
{wiki}

= Polarization
{parent=Wave equation}
{wiki=Polarization_(waves)}

Start with: <string polarization>{full}.

Then go to: <polarization of light>{full}.

= String polarization
{parent=Polarization}

This is about the polarization of a string in 3D space. That is the first concept of polarization you must have in mind! 

= Diffraction
{parent=Wave equation}
{wiki}

= Huygens-Fresnel principle
{c}
{parent=Diffraction}
{wiki=Huygens–Fresnel principle}

= Kirchhoff's diffraction formula
{c}
{parent=Huygens-Fresnel principle}
{wiki}

Approximation to <Huygens-Fresnel principle>.

= Fraunhofer diffraction
{c}
{parent=Kirchhoff's diffraction formula}
{wiki}

Far field approximation to <Kirchhoff's diffraction formula>, i.e. when the plane of observation is far from the object diffracting.

= Fresnel diffraction
{c}
{parent=Kirchhoff's diffraction formula}
{wiki}

Near field approximation to <Kirchhoff's diffraction formula>, i.e. when the plane of observation is near the object diffracting.

= Refraction
{parent=Wave equation}
{wiki}

= Resonance
{parent=Wave equation}
{wiki}

= Resonance frequency
{synonym}

= Resonate
{synonym}

= Resonates
{synonym}

Resonance is a really cool thing.

Examples:
* <mechanical resonance>, notably:
  * pipe instruments
* <electronic oscillators>, notably:
  * <LC oscillator>, and notably the lossy version <RLC circuit>

Perhaps a key insight of resonance is that any lossy system tends to settle into its resonant frequency quite quickly, even if the initial condition is not the resonant condition itself, because everything that is not the resonant frequency interferes destructively and becomes noise. Some examples of that:
* striking a bell or drum can be modelled by applying an impulse to the system
* playing a pipe instrument comes down to blowing a piece that vibrates randomly, and then leads the pipe to vibrate mostly in the resonant frequency. Likely the same applies to bowed string instruments, the bow must be creating a random vibration.
* playing a plucked string instrument comes down to initializing the system to a triangular waveform and then letting it evolve. TODO find a simulation of that!

Another cool aspect of resonance is that it was kind of the motivation for the <de Broglie hypothesis>, as <de Broglie> was kind of thinking that electrons might show discrete jumps on <atomic spectra> because of constructive interference.

= Wave interference
{parent=Wave equation}
{wiki}

= Interference pattern
{parent=Wave interference}

What you see along a line or plane in a <wave interference>.

Notably used for the pattern of the <double-slit experiment>.

= 2D wave equation on a circular domain
{parent=Wave equation}
{wiki=Vibrations_of_a_circular_membrane}

= Bessel function
{parent=2D wave equation on a circular domain}
{wiki}

Shows up when trying to solve the <2D wave equation on a circular domain> in <polar coordinates> with <separation of variables>, where we have to decompose the initial condition in terms of a <Fourier-Bessel series>, exactly like the <Fourier series> appears when solving the wave equation in linear coordinates.

For the same fundamental reasons, also appears when calculating the <Schrödinger equation solution for the hydrogen atom>.

= Fourier-Bessel series
{parent=Bessel function}
{wiki=Fourier–Bessel_series}

Completeness: https://math.stackexchange.com/questions/2192665/is-this-set-of-bessel-functions-a-basis-for-all-c10-a-functions TODO

This is the <bessel function> analogue to <fourier basis is complete for l2>.

= Helmholtz equation
{c}
{parent=Wave equation}
{wiki}

<eigenvalue> problem of <Laplace's equation>.
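
Explicitly:
$$
\nabla^2 f = -k^2 f
$$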

= Existence and uniqueness of solutions of partial differential equations
{parent=Partial differential equation}
{tag=Existence and uniqueness}

If you have a <PDE> that models <physics>[physical phenomena], it is fundamental that:
* there must exist a solution for every physically valid initial condition, otherwise it means that the equation does not describe certain cases of reality
* the solution must be unique, otherwise how are we to choose between the multiple solutions?

Unlike for <ordinary differential equations> which have the https://en.wikipedia.org/wiki/Picard–Lindelöf_theorem[Picard–Lindelöf theorem], the existence and uniqueness of solution is not well solved for PDEs.

For example, <Navier-Stokes existence and smoothness>{child} was one of the <Millennium Prize Problems>.

= Partial differential equation solver
{parent=Partial differential equation}
{tag=Numerical software}

= PDE solver
{c}
{synonym}

= FreeFem
{c}
{parent=Partial differential equation solver}
{wiki=FreeFem++}

https://freefem.org/

https://github.com/FreeFem/FreeFem-sources

Started in 1987 and written in Pascal, by the French from <Pierre and Marie Curie University>. The French are really strong in <numerical analysis>.

Ciro wasn't expecting it to be that old. It was ported to C++ in 1992.

The fact that French wrote it can be seen in the documentation, for example https://doc.freefem.org/tutorials/index.html uses file extension `mycode.edp` instead of `mycode.pde`, where `edp` stands for "https://fr.wikipedia.org/wiki/Équation_aux_dérivées_partielles[Équation aux dérivées partielles]".

Besides the painful build, using FreeFem is relatively simple, as can be seen from the examples on the website.

They do use a <domain-specific language> on the examples, which appears to be the main/only interface, which is a bad thing: Ciro would rather have a <Python> <API> as the "main API", which is more the approach taken by the <FEniCS Project>, but so be it. This <domain-specific language> business means that you always stumble upon basic stuff you want to do but can't, and then you have to think about how to share data between the simulation and the plotting. The plotting notably is super complex and they can't implement all of what people want, so upstream examples often offload that to gnuplot. This is potentially a big advantage of <FEniCS Project>.

It is nice though that they do have some graphics out of the box, as that allows you to quickly debug common problems.

Uses the <variational formulation of a partial differential equation>, which is not immediately obvious to beginners. The introduction https://doc.freefem.org/tutorials/poisson.html gives an ultra quick example, but you are mostly on your own with that.

On Ubuntu 20.04, the `freefem` package is a bit out-of-date (3.5.8, there isn't even a tag for that in the <GitHub> repo, and refs/tags/release_3_10 is from 2010!) and fails to run the examples from the website. It did work with the examples package though, but the output does not have color, which makes me sad :-)
``
sudo apt install freefem freefem-examples
freefem /usr/share/doc/freefem-examples/heat.pde
``

So let's just compile the latest v4.6 from source, on Ubuntu 20.04:
``
sudo apt build-dep freefem
git clone https://github.com/FreeFem/FreeFem-sources
cd FreeFem-sources
# Post v4.6 with some fixes.
git checkout 3df0e2370d9752801ac744b11307b14e16743a44

# Won't apply automatically due to tab hell.
# https://superuser.com/questions/607410/how-to-copy-paste-tab-characters-via-the-clipboard-into-terminal-session-on-gnom
git apply <<'EOS'
diff --git a/3rdparty/ff-petsc/Makefile b/3rdparty/ff-petsc/Makefile
index dc62ab06..13cd3253 100644
--- a/3rdparty/ff-petsc/Makefile
+++ b/3rdparty/ff-petsc/Makefile
@@ -204,7 +204,7 @@ $(SRCDIR)/tag-make-real:$(SRCDIR)/tag-conf-real
 $(SRCDIR)/tag-install-real :$(SRCDIR)/tag-make-real
     cd $(SRCDIR) && $(MAKE) PETSC_DIR=$(PETSC_DIR) PETSC_ARCH=fr install
     -test -x "`type -p otool`" && make changer
-    cd $(SRCDIR) && $(MAKE) PETSC_DIR=$(PETSC_DIR) PETSC_ARCH=fr check
+    #cd $(SRCDIR) && $(MAKE) PETSC_DIR=$(PETSC_DIR) PETSC_ARCH=fr check
     test -e $(DIR_INSTALL_REAL)/include/petsc.h
     test -e $(DIR_INSTALL_REAL)/lib/petsc/conf/petscvariables
     touch $@
@@ -293,7 +293,6 @@ $(SRCDIR)/tag-tar:$(PACKAGE)
     -tar xzf $(PACKAGE)
     patch -p1 < petsc-hpddm.patch
 ifeq ($(WIN32DLLTARGET),)
-    patch -p1 < petsc-metis.patch
 endif
     touch $@
 $(PACKAGE):
EOS

autoreconf -i
./configure --enable-download --enable-optim --prefix="$(pwd)/../FreeFem-install"
./3rdparty/getall -a
cd 3rdparty/ff-petsc
make petsc-slepc
cd -
./reconfigure
make -j`nproc`
make install
cd ../FreeFem-install
PATH="${PATH}:$(pwd)/bin" ./bin/FreeFem++ ../FreeFem-sources/examples/tutorial/
``

Ciro's initial build experience was a bit painful, possibly because it was done on a relatively new Ubuntu 20.04 as of June 2020, but in the end it worked: https://github.com/FreeFem/FreeFem-sources/issues/141

The main/only dependency appears to be https://en.wikipedia.org/wiki/Portable,_Extensible_Toolkit_for_Scientific_Computation[PETSc], which is used by default. That is a good sign, as that library appears to automatically parallelize a single input to several backends (single <CPU>, MPI, GPU), so you know things will scale up as you reach larger simulations.

The problem is that compiling such a complex dependency opens up much more room for hard-to-solve compilation errors, and takes a lot more time.

= FreeFem examples
{parent=FreeFem}

= heat-dirichlet.1d.freefem
{parent=FreeFem examples}

1-dimensional <heat equation> example with <Dirichlet boundary condition>:
* \a[freefem/heat-dirichlet.1d.freefem]

= heat-dirichlet.2d.freefem
{parent=FreeFem examples}

2-dimensional <heat equation> example with <Dirichlet boundary condition>:
* \a[freefem/heat-dirichlet.2d.freefem]

= FEniCS Project
{c}
{parent=Partial differential equation solver}
{wiki}

https://fenicsproject.org/

One big advantage over <FreeFem> is that it uses plain old <Python> to describe the problems instead of a <domain-specific language>. <Matplotlib> is used for plotting by default, so we get full Python power out of the box!

Also uses <variational formulation of a partial differential equation> like <FreeFem> which is a pain.

One downside is that its documentation is a Springer-published PDF, https://link.springer.com/content/pdf/10.1007%2F978-3-319-52462-7.pdf, which is several years out-of-date (tested with FEniCS 2016.2). Newbs. This causes problems e.g.: https://stackoverflow.com/questions/53730427/fenics-did-not-show-figure-nameerror-name-interactive-is-not-defined/57390687#57390687

<system of partial differential equations>[Systems of partial differential equations] are mentioned at: https://link.springer.com/content/pdf/10.1007%2F978-3-319-52462-7.pdf 3.5 "A system of advection–diffusion–reaction equations". You don't need to manually iterate between the equations.

On Ubuntu 20.04 as per https://fenicsproject.org/download/
``
sudo apt-get install software-properties-common
sudo add-apt-repository ppa:fenics-packages/fenics
sudo apt-get update
sudo apt-get install --no-install-recommends fenics
python3 -m pip install -U matplotlib
``
Before 2020-06, it was failing with:
``
E: The repository 'http://ppa.launchpad.net/fenics-packages/fenics/ubuntu focal Release' does not have a Release file.
``
but they seem to have created the Ubuntu 20.04 package as of 2020-06, so it now worked! https://askubuntu.com/questions/866901/what-can-i-do-if-a-repository-ppa-does-not-have-a-release-file

TODO heat equation <hello world>.
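
While that remains TODO, here is a minimal sketch of the classic Poisson <hello world> based on the tutorial PDF linked above, assuming the legacy `fenics` <Python> package installed by the commands above:
``
from fenics import *

# Unit square mesh and piecewise linear finite element space.
mesh = UnitSquareMesh(8, 8)
V = FunctionSpace(mesh, 'P', 1)

# Dirichlet boundary condition: u = 1 + x^2 + 2y^2 on the whole boundary.
u_D = Expression('1 + x[0]*x[0] + 2*x[1]*x[1]', degree=2)
bc = DirichletBC(V, u_D, 'on_boundary')

# Variational formulation of -Laplacian(u) = f with f = -6,
# whose exact solution is u_D itself.
u = TrialFunction(V)
v = TestFunction(V)
f = Constant(-6.0)
a = dot(grad(u), grad(v))*dx
L = f*v*dx

# Solve and plot with matplotlib.
u = Function(V)
solve(a == L, u, bc)
plot(u)
import matplotlib.pyplot as plt
plt.show()
``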

= Hans Petter Langtangen
{c}
{parent=FEniCS Project}
{wiki}

<GitHub> account: https://github.com/hplgit

It should be mentioned that when you start <Googling> for <partial differential equation>[PDE] stuff, you will reach Hans' writings a lot under his <GitHub Pages>: http://hplgit.github.io/[], and he is one of the main authors of the <FEniCS Project>.

Unfortunately he died of <cancer> in 2016. A shame, he seemed like a good educator.

He also published to <GitHub Pages> with his own crazy <markdown>-like multi-output <markup language>: https://github.com/hplgit/doconce[].

Rest in peace, Hans.

= System of partial differential equations
{parent=Partial differential equation}

In many important applications, what you have to solve is not just a single <partial differential equation>, but multiple partial differential equations coupled to each other. This is the case for many key PDEs including:
* <Maxwell's equations>, see: <explicit scalar form of the Maxwell's equations>{full}
* <Navier-Stokes equations>
* <Schrödinger equation>, see: <why are complex numbers used in the Schrodinger equation?>{full}

= Classification of second order partial differential equations into elliptic, parabolic and hyperbolic
{parent=Partial differential equation}

One major application of this classification is that different <boundary conditions> are suitable for different types of <partial differential equations> as explained at: <which boundary conditions lead to existence and uniqueness of a second order PDE>.

Bibliography:
* https://math.stackexchange.com/questions/1090299/why-are-elliptic-parabolic-hyperbolic-pdes-called-elliptic-parabolic-hyperb

= Elliptic partial differential equation
{parent=Classification of second order partial differential equations into elliptic, parabolic and hyperbolic}
{wiki}

= Parabolic partial differential equation
{parent=Classification of second order partial differential equations into elliptic, parabolic and hyperbolic}
{wiki}

= Hyperbolic partial differential equation
{parent=Classification of second order partial differential equations into elliptic, parabolic and hyperbolic}
{wiki}

= Which boundary conditions lead to existence and uniqueness of a second order PDE
{parent=Classification of second order partial differential equations into elliptic, parabolic and hyperbolic}

http://www.cns.gatech.edu/~predrag/courses/PHYS-6124-12/StGoChap6.pdf 6.1 "Classification of PDE's" clarifies which boundary conditions are needed for existence and uniqueness of each <classification of second order partial differential equations into elliptic, parabolic and hyperbolic>[type of second order of PDE]:
* <elliptic partial differential equation> and <parabolic partial differential equation>: <Dirichlet boundary condition> or <Neumann boundary condition>
* <hyperbolic partial differential equation>: <Cauchy boundary condition>

= Phase space
{parent=Differential equation}
{wiki}

This idea comes up particularly in the <phase space coordinate> of <Hamiltonian mechanics>.

= Boundary condition
{parent=Differential equation}

= Initial condition
{parent=Boundary condition}

Basically a special case of <boundary condition> for when one of the parameters is time and we specify the values at time 0.

= Boundary value problem
{parent=Boundary condition}
{wiki}

= Dirichlet boundary condition
{c}
{parent=Boundary condition}
{wiki}

Specifies fixed values of the solution itself on the boundary.
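
For a domain $\Omega$ and a given function $g$, it prescribes:
$$
u(x) = g(x) \quad \text{for } x \in \partial\Omega
$$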

Can be used for <elliptic partial differential equations> and <parabolic partial differential equations>.

Numerical examples:
* with <FreeFem>:
  * <heat-dirichlet.1d.freefem>
  * <heat-dirichlet.2d.freefem>

= Neumann boundary condition
{c}
{parent=Boundary condition}
{wiki}

Specifies the derivative in a direction normal to the boundary.
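
Writing $\dv{u}{n}$ for that normal derivative on the boundary of a domain $\Omega$, it prescribes, for a given function $g$:
$$
\dv{u}{n}(x) = g(x) \quad \text{for } x \in \partial\Omega
$$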

Can be used for <elliptic partial differential equations> and <parabolic partial differential equations>.

= Cauchy boundary condition
{c}
{parent=Neumann boundary condition}
{wiki}

Sets both a <Dirichlet boundary condition> and a <Neumann boundary condition> for a single part of the boundary.

Can be used for <hyperbolic partial differential equations>.

We understand intuitively that this imposes stricter requirements on solutions, which makes it easier to guarantee uniqueness, but also harder to have existence. TODO intuitively, why do hyperbolic equations need this extra level of restriction?

= Robin boundary condition
{c}
{parent=Neumann boundary condition}
{wiki}

Linear combination of a <Dirichlet boundary condition> and <Neumann boundary condition> at each point of the boundary.
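
Explicitly, for fixed coefficients $\alpha$ and $\beta$ and a given function $g$, with $\dv{u}{n}$ denoting the normal derivative as in the <Neumann boundary condition>:
$$
\alpha u(x) + \beta \dv{u}{n}(x) = g(x) \quad \text{for } x \in \partial\Omega
$$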

Examples:
* <heat equation> when a metal plate is immersed in a large external environment of fixed temperature.

  In this case, the normal derivative at the boundary is proportional to the difference between the temperature of the boundary and the fixed temperature of the external environment.

  The result as time tends to infinity is that the temperature of the plate tends to that of the environment.

  A solved example is shown in the <FreeFem> tutorial: https://doc.freefem.org/tutorials/thermalConduction.html (https://github.com/FreeFem/FreeFem-doc/blob/1d5996d8b891fd553fd318321249c2c30f693fc3/source/tutorials/thermalConduction.rst)

= Open boundary condition
{parent=Neumann boundary condition}

In the context of wave-like equations, an open-boundary condition is one that "lets the wave go through without reflection".

This condition is very useful when we want to simulate infinite domains with a numerical method. <Ciro Santilli> wants to do this all the time when trying to come up with demos for his <physics> writings.

Here are some resources that cover such boundary conditions:
* https://www.asc.tuwien.ac.at/~arnold/pdf/graz/graz.pdf lots of slides
* http://hplgit.github.io/wavebc/doc/pub/._wavebc_cyborg002.html mentions them and gives a 1D formula. It mentions that things get complicated in 2D and 3D, TODO why.

  The other page: http://hplgit.github.io/wavebc/doc/pub/._wavebc_cyborg003.html shows solution demos.

= Mixed boundary condition
{parent=Neumann boundary condition}
{wiki}

Multiple <boundary conditions> for different parts of the boundary.

= Time dependent boundary condition
{parent=Boundary condition}

Most commonly, <boundary conditions> such as the <Dirichlet boundary condition> are taken to be fixed values in time.

But it also makes sense to think about cases where those values vary in time.

Some bibliography:
* https://math.stackexchange.com/questions/261251/heat-equation-with-time-dependent-boundary-conditions
* https://secure.math.ubc.ca/~peirce/M257_316_2012_Lecture_20.pdf

= Control theory
{parent=Differential equation}
{wiki}

This basically adds one more ingredient to <partial differential equations>: a <function> that we can select.

And then the question becomes: if this function has such and such limitation, can we make the solution of the <differential equation> have such and such property?
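
For example, a rough sketch of a typical setup for <ordinary differential equations>:
$$
\dv{x}{t} = f(x(t), u(t))
$$
where $u$ is the control function that we are free to select within its limitations.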

It's quite fun from a mathematics point of view!

Control theory also takes into consideration possible <discretization> of the domain, which allows using <numerical methods to solve partial differential equations>, as well as digital, rather than analogue control methods.

= Control engineering
{parent=Control theory}
{wiki}

= Control system
{parent=Control theory}
{wiki}

= Feedback loop
{parent=Control theory}

= Control loop
{synonym}
{title2}

= Series
{disambiguate=mathematics}
{parent=Calculus}
{wiki}

= Power series
{parent=Series (mathematics)}
{wiki}

= Analytic function
{parent=Power series}
{wiki}

= Sine and cosine
{parent=Analytic function}
{wiki}

= Sinusoidal
{parent=Sine and cosine}
{tag=Periodic function}

A function that is either a <sine> or <cosine>, i.e. we don't know or care where the origin is exactly.
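
I.e., a function of the form:
$$
f(t) = A \sin(\omega t + \phi)
$$
where the amplitude $A$ and the frequency $\omega$ matter, but the phase $\phi$ does not.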

This is particularly relevant in <electronics>, where the <oscilloscope>'s time origin is set to match the wave.

= Sine
{parent=Sine and cosine}
{wiki}

= Cosine
{parent=Sine and cosine}
{wiki}

= Radius of convergence
{parent=Power series}
{wiki}

= Taylor series
{c}
{parent=Power series}
{wiki}

= Gradient, Divergence, Curl, and Laplacian
{parent=Calculus}

= Curl
{disambiguate=mathematics}
{parent=Gradient, Divergence, Curl, and Laplacian}
{title2=$\curl{}$}
{wiki}

Points along the axis around which a small wind spinner placed at that point would spin fastest, following the right-hand rule.

= Nabla symbol
{parent=Gradient, Divergence, Curl, and Laplacian}
{title2=$\nabla$}
{wiki}

= Nabla
{synonym}

As if <Greek letters> weren't enough, <physicists> and <mathematicians> also like to make up tons of symbols, <mathematical symbol that looks like a Greek letter but isn't>[some of which look like they could actually be Greek letters]!

Nabla is one of those: it was completely made up in modern times, and just happens to look like an inverted upper case <delta (letter)> to make things even more confusing!

Nabla means "harp" in Greek, the instrument that the symbol looks like.

= Del
{parent=Nabla symbol}
{wiki}

Oh, and as if that weren't enough, <mathematicians> have a separate name for the damned <nabla symbol>: "del" instead of "nabla".

TODO why is it called "del"? Is it because it is an inverted uppercase <delta (letter)>?

= Divergence
{parent=Gradient, Divergence, Curl, and Laplacian}
{title2=$\div{}$}
{title2=$div()$}
{wiki}

Takes a <vector (mathematics)> field as input and produces a scalar field.

Mnemonic: it gives out the amount of fluid that is going in or out of a given volume per unit of time.

Therefore, if you take a cubic volume:
* the input has to be the 6 flows across the faces, which pair up into 3 derivatives along the axes
* the output is the variation of the quantity of fluid, and therefore a scalar
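
Explicitly, for a field $F = (F_x, F_y, F_z)$ in 3D Cartesian coordinates, this gives the standard formula:
$$
\div{F} = \frac{\partial F_x}{\partial x} + \frac{\partial F_y}{\partial y} + \frac{\partial F_z}{\partial z}
$$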

= Gradient
{parent=Gradient, Divergence, Curl, and Laplacian}
{title2=$\grad{}$}
{wiki}

Takes a scalar field as input and produces a vector field.

Mnemonic: the gradient shows the direction in which the function increases fastest.

Think of a color gradient going from white to black from left to right.

Therefore, it has to:
* take a scalar field as input. Otherwise, how do you decide which vector is larger than the other?
* output a vector field that contains the direction in which the scalar increases fastest locally at each point. This has to give out vectors, since we are talking about directions
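
Explicitly, in 3D Cartesian coordinates:
$$
\grad{f} = \left( \frac{\partial f}{\partial x}, \frac{\partial f}{\partial y}, \frac{\partial f}{\partial z} \right)
$$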

= Laplace operator
{parent=Gradient, Divergence, Curl, and Laplacian}
{title2=$\Delta$}
{title2=$\nabla^2$}
{wiki}

= Laplacian
{c}
{synonym}

Can be denoted either by:
* the upper case <Greek letter> <delta>
* <nabla symbol> squared
Our default symbol is going to be:
$$
\laplacian{}
$$
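
In 3D Cartesian coordinates it expands to:
$$
\laplacian{f} = \frac{\partial^2 f}{\partial x^2} + \frac{\partial^2 f}{\partial y^2} + \frac{\partial^2 f}{\partial z^2}
$$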

= D'alembert operator
{c}
{parent=Laplace operator}
{title2=$\Box$}
{wiki}

The <laplace operator> for <Minkowski space>.
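
Explicitly, in one common sign convention:
$$
\Box = \frac{1}{c^2} \frac{\partial^2}{\partial t^2} - \laplacian{}
$$
with the overall sign flipped in the other metric signature convention.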

Can be nicely written with <Einstein notation> as shown at: <D'alembert operator in Einstein notation>{full}.

= Infinitesimal
{parent=Calculus}
{wiki}

Just use <limit (mathematics)> instead, please. The <French> are particularly guilty of this.