= Calculus {wiki} Well summarized as "the branch of mathematics that deals with [limits]". = Mathematical analysis {parent=Calculus} {wiki} = Analytical {synonym} A fancy name for calculus, with the "more advanced" connotation. = Limit {disambiguate=mathematics} {parent=Calculus} {wiki} = Limit {synonym} The fundamental concept of calculus! The reason why the epsilon delta definition is so venerated is that it fits directly into well known methods of the , making the notion completely precise. = Convergent series {parent=Limit (mathematics)} {wiki} = Convergence {disambiguate=mathematics} {synonym} = Converges {disambiguate=mathematics} {synonym} = Convergent {disambiguate=mathematics} {synonym} = Continuous function {parent=Limit (mathematics)} {wiki} = Continuity {synonym} = Continuous {synonym} = Continuous problems are simpler than discrete ones {parent=Continuous function} This is a general philosophy that , and likely others, observe over and over. Basically, continuity, or higher order conditions like differentiability, seem to impose greater constraints on problems, which make them more solvable. Some good examples of that: * complex problems: * * simple problems: * characterization of = Discrete {parent=Continuous function} Something that is very much not continuous. Notably studied in discrete mathematics. = Discretization {parent=Discrete} {wiki} = Discretize {synonym} = Infinity {parent=Limit (mathematics)} {title2=$\infty$} {wiki} = Infinite {synonym} \Q[Chuck Norris counted to infinity. Twice.] = Finite {synonym} There are a few related concepts that are called infinity in mathematics: * limits that are greater than any number * the cardinality of a set that does not have a finite number of elements * in some number systems, there is an explicit "element at infinity" that is not a , e.g. = L'Hôpital's rule {parent=Limit (mathematics)} {title2=limit of a ratio} {wiki} = Derivative {parent=Calculus} {wiki} = Chain rule {parent=Derivative} {wiki} Here's an example of the chain rule. Suppose we want to calculate: $$ \dv{e^{2x}}{x} $$ So we have: $$ f(x) = e^x \\ g(x) = 2x $$ and so: $$ f'(x) = e^x \\ g'(x) = 2 $$ Therefore the final result is: $$ f'(g(x))g'(x) = e^{2x} 2 = 2 e^{2x} $$ = Multivariable chain rule {parent=Chain rule} = Differentiable function {parent=Derivative} {wiki} = Differentiable {synonym} = Differentiability {synonym} = Smoothness {parent=Differentiable function} {wiki} = Infinitely differentiable function {parent=Differentiable function} = $C^{\infty}$ {synonym} {title2} = Bump function {parent=Infinitely differentiable function} {wiki} = Flat top bump function {parent=Bump function} https://math.stackexchange.com/questions/1786964/is-it-possible-to-construct-a-smooth-flat-top-bump-function = Maxima and minima {parent=Derivative} {wiki} Given a function $f$: * from some space. For beginners, the real numbers, but more general spaces should also work * to the real numbers we want to find the points $x$ of the domain of $f$ where the value of $f$ is smaller (for minima, or larger for maxima) than all other points in some neighbourhood of $x$. In the case of functionals, this problem is treated under the theory of the calculus of variations. = Lifeguard problem {parent=Maxima and minima} https://pumphandle.consulting/2020/09/04/the-lifeguard-problem-solved/ = Derivative test {parent=Maxima and minima} {wiki} = Saddle point {parent=Maxima and minima} {wiki} = Newton dot notation {c} {parent=Derivative} = Partial derivative {parent=Derivative} {wiki} = Partial derivative notation {parent=Partial derivative} = Partial derivative symbol {parent=Partial derivative notation} {title2=$\partial$} Nope, it is not a Greek letter, notably it is not a lowercase delta.
It is just some random made up symbol that looks like a "d". Which is of course derived from the letter "d", which is why it is all so damn confusing. I think the symbol is usually just read as "d" as in "d f d x" for $\pdv{F(x, y, z)}{x}$. = Partial label partial derivative notation {parent=Partial derivative notation} {title2=$\partial_x F$} {title2=$\partial_y F$} = Partial index partial derivative notation {parent=Partial derivative notation} {title2=$\partial_0 F$} {title2=$\partial_1 F$} This notation is not so common in basic mathematics, but it is so incredibly convenient, especially with as shown at {full}: $$ \partial_0 F(x, y, z) = \pdv{F(x, y, z)}{x} \\ \partial_1 F(x, y, z) = \pdv{F(x, y, z)}{y} \\ \partial_2 F(x, y, z) = \pdv{F(x, y, z)}{z} \\ $$ This notation is similar to , but it uses indices instead of labels such as $x$, $y$, etc. = Total derivative {parent=Derivative} {wiki} The total derivative of a function assigns to every point of the domain a linear map with the same domain, which is the best linear approximation to the function value around this point, i.e. the tangent plane. E.g. in 1D: $$ D[f(x_0)](x) = f(x_0) + \pdv{f}{x}(x_0) \times x $$ and in 2D: $$ D[f(x_0, y_0)](x, y) = f(x_0, y_0) + \pdv{f}{x}(x_0, y_0) \times x + \pdv{f}{y}(x_0, y_0) \times y $$ = Directional derivative {c} {parent=Derivative} {wiki} = Integral {parent=Calculus} {wiki} = Area {parent=Integral} {wiki} = Volume {parent=Area} {wiki} <3D> . = Riemann integral {c} {parent=Integral} {wiki} The easy and less generic integral. The harder one is the Lebesgue integral. = Lebesgue integral {c} {parent=Integral} {wiki=Lebesgue_integration} "More complex and general" integral. Matches the Riemann integral for "simple functions", but also [works for some "funkier" functions that Riemann does not work for]. Ciro Santilli sometimes wonders how much someone can gain from learning this besides , since we can hand-wave a on almost anything that is of practical use. The beauty is good reason enough though. = Lebesgue integral vs Riemann integral {c} {parent=Lebesgue integral} Advantages over Riemann: * . * https://youtu.be/PGPZ0P1PJfw?t=710 you are able to switch the order of integrals and limits of function sequences on non-uniform convergence. TODO why do we care? This is linked to the of course, but concrete example? \Video[https://youtube.com/watch?v=PGPZ0P1PJfw] {title=Riemann integral vs. Lebesgue integral by The Bright Side Of Mathematics (2018)} {description= https://youtube.com/watch?v=PGPZ0P1PJfw&t=808 shows how Lebesgue can be visualized as a partition of the function range instead of the domain, and then you just have to be able to measure the size of pre-images. One advantage of that is that the range is always one dimensional. But the main advantage is that having infinitely many discontinuities does not matter. Infinitely many discontinuities can make the Riemann partitioning diverge. But in Lebesgue, you are instead measuring the size of the preimage, and to fit infinitely many discontinuities in a finite domain, the size of this preimage is going to be zero. So then the question becomes more of "how to define the measure of a subset of the domain". Which is why we then fall into ! } = Real world applications of the Lebesgue integral {parent=Lebesgue integral vs Riemann integral} In "practice" it is likely "useless", because the functions that it can integrate that Riemann can't are just too funky to appear in practice :-) Its value is much more indirect and subtle, as in "it serves as a solid basis of " due to the definition of .
Bibliography: * https://math.stackexchange.com/questions/53121/how-do-people-apply-the-lebesgue-integration-theory * https://www.quora.com/What-are-some-real-life-applications-of-Lebesgue-Integration = Lebesgue measurable {c} {parent=Lebesgue integral} = Lebesgue integral of $\LP$ is complete but Riemann isn't {c} {parent=Lebesgue integral} $\LP$ is: * [complete] under the Lebesgue integral, a result which may be called the Riesz-Fischer theorem * not complete under the Riemann integral: https://math.stackexchange.com/questions/397369/space-of-riemann-integrable-functions-not-complete And then this is why quantum mechanics basically lives in $\LTwo$: not being complete makes no sense physically, it would mean that you can get closer and closer to states that don't exist! TODO intuition = Riesz-Fischer theorem {c} {parent=Lebesgue integral of LP is complete but Riemann isn't} {wiki=Riesz–Fischer_theorem} A measurable function defined on a closed interval is square integrable (and therefore in $\LTwo$) if and only if its Fourier series converges in $L^2$ norm to the function: $$ \lim_{N \to \infty} \left\| S_N f - f \right\|_2 = 0 $$ = $\LP$ is complete {parent=Riesz-Fischer theorem} TODO = Fourier basis is complete for $\LTwo$ {id=fourier-basis-is-complete-for-l2} {c} {parent=Riesz-Fischer theorem} https://math.stackexchange.com/questions/316235/proving-that-the-fourier-basis-is-complete-for-cr-2-pi-c-with-l2-norm The Riesz-Fischer theorem is a norm version of it, and Carleson's theorem is the stronger pointwise almost everywhere version. Note that the norm version is weaker, because according to it alone the pointwise limit might not even exist: $L^p$ norm sequence convergence does not imply pointwise convergence. = $L^p$ norm sequence convergence does not imply pointwise convergence {id=lp-norm-sequence-convergence-does-not-imply-pointwise-convergence} {parent=fourier basis is complete for l2} https://math.stackexchange.com/questions/138043/does-convergence-in-lp-imply-convergence-almost-everywhere There are explicit examples of this. We can have ever thinner disturbances to convergence that keep getting less and less area, but never cease to move around. If it does converge pointwise to something, then it must match of course. = Carleson's theorem {c} {parent=fourier basis is complete for l2} {wiki} The Fourier series of an $\LTwo$ function (i.e. the function generated from the infinite sum of weighted sines) converges to the function pointwise almost everywhere. The theorem also seems to hold (maybe trivially given the transform result) for the (TODO if trivially, why trivially). Only proved in 1966, and known to be a hard result without any known simple proof. This theorem of course implies that the Fourier basis is complete for $\LTwo$, as it explicitly constructs a decomposition into the Fourier basis for every single function. TODO vs the Riesz-Fischer theorem. Is this just a stronger pointwise result, while Riesz-Fischer is about norms only? One of the many . = Lp space {parent=Lebesgue integral of LP is complete but Riemann isn't} {wiki} = $\LP$ {synonym} {title2} Integrable functions to the power $p$, usually and in this text assumed under the Lebesgue integral because: = $L^1$ {id=l1-space} {parent=Lp space} = $\LTwo$ {id=l2} {parent=Lp space} <\LP> for $p = 2$. $\LTwo$ is by far the most important of $\LP$ because it is where [quantum mechanics states] live, because the total probability of being in any state has to be 1!
$\LTwo$ has some crucially important properties that other $\LP$ don't (TODO confirm and make those more precise): * it is the only $\LP$ that is a Hilbert space, because it is the only one where an inner product compatible with the metric can be defined: * https://math.stackexchange.com/questions/2005632/l2-is-the-only-hilbert-space-parallelogram-law-and-particular-ft-gt * https://www.quora.com/Why-is-L2-a-Hilbert-space-but-not-Lp-or-higher-where-p-2 * , which is great for solving = Plancherel theorem {c} {parent=l2} Some sources say that this is just the part that says that the norm of a function is the same as the norm of its Fourier transform. Others say that this theorem actually says that the Fourier transform is a bijection in $L^2$. The comment at https://math.stackexchange.com/questions/446870/bijectiveness-injectiveness-and-surjectiveness-of-fourier-transformation-define/1235725#1235725 may be of interest: it says that one statement is an easy consequence of the other, thus the confusion. TODO does it require the function to be in $L^1$ as well? https://en.wikipedia.org/w/index.php?title=Plancherel_theorem&oldid=987110841 says yes, but https://courses.maths.ox.ac.uk/node/view_material/53981 does not mention it. = The Fourier transform is a bijection in $L^2$ {parent=Plancherel theorem} As mentioned at {full}, some people call this part of the Plancherel theorem, while others say it is just a corollary. This is an important fact in quantum mechanics, since it is because of this that it makes sense to talk about position and momentum space as two dual representations of the wave function that contain the exact same amount of information. = Every Riemann integrable function is Lebesgue integrable {parent=Plancherel theorem} But only for the proper Riemann integral: https://math.stackexchange.com/questions/2293902/functions-that-are-riemann-integrable-but-not-lebesgue-integrable = Measure theory {parent=Calculus} {wiki=Measure_(mathematics)} Main motivation: the Lebesgue integral. The Bright Side Of Mathematics 2019 playlist: https://www.youtube.com/watch?v=xZ69KEg7ccU&list=PLBh2i93oe2qvMVqAzsX1Kuv6-4fjazZ8j The key idea is that we can't define a measure for the power set of $\R$. Rather, we must select a large measurable subset, and the Borel sigma algebra is a good choice that matches intuitions. = Fourier series {c} {parent=Calculus} {wiki} Approximates an original function by sines. If the function is "well behaved enough", the approximation is to arbitrary precision. Fourier's original motivation, and a key application, is solving the heat equation. Can only be used to approximate periodic functions (obviously from its definition!). The Fourier transform however overcomes that restriction: * https://math.stackexchange.com/questions/1115240/can-a-non-periodic-function-have-a-fourier-series * https://math.stackexchange.com/questions/1378633/every-function-can-be-represented-as-a-fourier-series The Fourier series behaves really nicely in $\LTwo$, where it always exists and converges pointwise to the function: Carleson's theorem. \Video[https://www.youtube.com/watch?v=r6sGWTCMz2k] {title=But what is a Fourier series? by <3Blue1Brown> (2019)} {description=Amazing 2D visualization of the decomposition of complex functions.} = Applications of the Fourier series {parent=Fourier series} = Solving partial differential equations with the Fourier series {parent=Applications of the Fourier series} See: https://math.stackexchange.com/questions/579453/real-world-application-of-fourier-series/3729366#3729366 from . Certain equations like the heat equation and the wave equation are solved immediately by calculating the Fourier series of the initial conditions!
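As a concrete illustration of the idea, here is a minimal numerical sketch (my own example, assuming Python and numpy; the discrete Fourier transform stands in for the Fourier series on a discretized periodic domain): each Fourier mode of the heat equation $u_t = \alpha u_{xx}$ decays independently as $e^{-\alpha k^2 t}$, so the solution at any time is just one transform, one scaling, and one inverse transform away.
``
import numpy as np

# Heat equation u_t = alpha * u_xx on the periodic domain [0, 2*pi):
# the Fourier mode of wave number k decays as exp(-alpha * k**2 * t).
N, alpha, t = 256, 1.0, 0.1
x = np.linspace(0.0, 2.0 * np.pi, N, endpoint=False)
u0 = np.sign(np.sin(x))  # square wave initial condition

k = np.fft.fftfreq(N, d=1.0 / N)  # integer wave numbers 0, 1, ..., -1
u_t = np.fft.ifft(np.fft.fft(u0) * np.exp(-alpha * k**2 * t)).real
print(u_t[:4])
``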
Other bases besides the Fourier series show up for other equations, e.g.: * * = Discrete Fourier transform {parent=Fourier series} {title2=DFT} {wiki} Input: a sequence of $N$ complex numbers $x_k$. Output: another sequence of $N$ complex numbers $X_k$ such that: $$ x_n = \frac{1}{N} \sum_{k=0}^{N-1} X_k e^{i 2 \pi \frac{k n}{N}} $$ Intuitively, this means that we are breaking up the complex signal into $N$ frequencies: * $X_0$: is kind of magic and ends up being a constant added to the signal because $e^{i 2 \pi \frac{k n}{N}} = e^{0} = 1$ * $X_1$: the sinusoid that completes one cycle over the signal. The larger the $N$, the larger the resolution of that sinusoid. But it completes one cycle regardless. * $X_2$: the sinusoid that completes two cycles over the signal * ... * $X_{N-1}$: the sinusoid that completes $N-1$ cycles over the signal and each $X_k$ is the amplitude of the corresponding sine. We use complex numbers in our definitions because it just makes every formula simpler. Motivation: similar to the Fourier series: * compression: a single sinusoid would use N points in the time domain, but in the frequency domain just one, so we can throw the rest away. A sum of two sines, only two. So if your signal has periodicity, in general you can compress it with the transform * noise removal: many systems add noise only at certain frequencies, which are hopefully different from the main frequencies of the actual signal. By doing the transform, we can remove those frequencies to attain a better signal In particular, the is used in after a . Digital signal processing historically likely grew more and more over analog processing as digital [processors] got faster and faster, as it gives more flexibility in algorithm design. Sample software implementations: * , notably see the example: {file} \Image[https://upload.wikimedia.org/wikipedia/commons/thumb/3/31/DFT_2sin%28t%29_%2B_cos%284t%29_25_points.svg/583px-DFT_2sin%28t%29_%2B_cos%284t%29_25_points.svg.png] = Discrete Fourier transform of a real signal {parent=Discrete Fourier transform} See sections: "Example 1 - N even", "Example 2 - N odd" and "Representation in terms of sines and cosines" of https://www.statlect.com/matrix-algebra/discrete-Fourier-transform-of-a-real-signal The transform still has complex numbers. Summary: * $X_0$ is real * $X_1 = \conj{X_{N-1}}$ * $X_2 = \conj{X_{N-2}}$ * $X_k = \conj{X_{N-k}}$ Therefore, we only need about half of the $X_k$ to represent the signal, as the other half can be derived by conjugation. "Representation in terms of sines and cosines" from https://www.statlect.com/matrix-algebra/discrete-Fourier-transform-of-a-real-signal then gives explicit formulas in terms of $X_k$. numpy for example has "Real FFTs" for this: https://numpy.org/doc/1.24/reference/routines.fft.html#real-ffts = Normalized DFT {parent=Discrete Fourier transform} There are actually two possible definitions for the DFT: * $1/N$, given as "the default" in many sources: $$ x_n = \frac{1}{N} \sum_{k=0}^{N-1} X_k e^{i 2 \pi \frac{k n}{N}} $$ * $1/\sqrt{N}$, known as the "normalized DFT" by some sources: https://www.dsprelated.com/freebooks/mdft/Normalized_DFT.html[], the definition which we adopt: $$ x_n = \frac{1}{\sqrt{N}} \sum_{k=0}^{N-1} X_k e^{i 2 \pi \frac{k n}{N}} $$ The $1/\sqrt{N}$ is nicer mathematically as the inverse becomes more symmetric, and power is conserved between time and frequency domains. * https://math.stackexchange.com/questions/3285758/scaling-magnitude-of-the-dft * https://dsp.stackexchange.com/questions/63001/why-should-i-scale-the-fft-using-1-n * https://www.dsprelated.com/freebooks/mdft/Normalized_DFT.html = Fast Fourier transform {parent=Discrete Fourier transform} {wiki} An efficient algorithm to calculate the discrete Fourier transform.
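To make this concrete, here is a minimal sketch (my own example, assuming numpy) of the naive $O(N^2)$ DFT, checked against numpy's $O(N \log N)$ FFT, which uses the same $1/N$-on-the-inverse convention as the default definition above:
``
import numpy as np

def dft_naive(x):
    # Naive O(N^2) DFT: X_k = sum_n x_n exp(-2 pi i k n / N), the forward
    # transform whose inverse is x_n = (1/N) sum_k X_k exp(+2 pi i k n / N).
    N = len(x)
    n = np.arange(N)
    k = n.reshape((N, 1))
    return (x * np.exp(-2j * np.pi * k * n / N)).sum(axis=1)

rng = np.random.default_rng(0)
x = rng.standard_normal(64) + 1j * rng.standard_normal(64)

# The FFT computes exactly the same numbers, just much faster.
assert np.allclose(dft_naive(x), np.fft.fft(x))
# And the inverse transform recovers the original signal.
assert np.allclose(np.fft.ifft(np.fft.fft(x)), x)
``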
= Fourier transform {c} {parent=Fourier series} {wiki} Continuous version of the Fourier series. Can be used to represent functions that are not periodic: https://math.stackexchange.com/questions/221137/what-is-the-difference-between-fourier-series-and-fourier-transformation while the Fourier series is only for periodic functions. Of course, every function defined on a finite line segment (i.e. a compact space) can be seen as a periodic function. Therefore, the Fourier transform can be seen as a generalization of the Fourier series that can also decompose functions defined on the entire real line. As a more concrete example, just like the Fourier series is how you solve the heat equation on a line segment with separation of variables as shown at: {full}, the Fourier transform is what you need to solve the problem when the domain is the entire real line. = Multidimensional Fourier transform {parent=Fourier transform} Lecture notes: * http://www.robots.ox.ac.uk/~az/lectures/ia/lect2.pdf Lecture 2: 2D Fourier transforms and applications by A. Zisserman (2014) \Video[https://www.youtube.com/watch?v=v743U7gvLq0] {title=How the 2D FFT works by Mike X Cohen (2017)} {description=Animations showing how the 2D Fourier transform looks like for simple input functions.} = Fourier inversion theorem {parent=Fourier transform} {wiki} A set of theorems that prove under different conditions that the Fourier transform has an inverse for a given space, examples: * for = Laplace transform {c} {parent=Fourier transform} \Video[https://www.youtube.com/watch?v=7UvtU75NXTg] {title=The Laplace Transform: A Generalized Fourier Transform by Steve Brunton (2020)} {description=Explains how the Laplace transform works for functions that do not go to zero on infinity, which is a requirement for the Fourier transform. No applications in that video yet unfortunately.} = History of the Fourier series {parent=Fourier series} First published by Fourier in 1807 to solve the heat equation. = Topology {parent=Calculus} {wiki} = Topological {synonym} Topology is the plumbing of calculus. The key concept of topology is a neighbourhood. Just by having the notion of neighbourhood, concepts such as limits and continuity can be defined without the need to specify a precise numerical value for the distance between two points with a metric. As an example, consider the , which is also naturally a . That group does not usually have a notion of distance defined for it by default. However, we can still talk about certain properties of it, e.g. that , and that . = Covering space {parent=Topology} {wiki} Basically it is a larger space such that there exists a function from the large space onto the smaller space, while still being compatible with the topology of the small space. We can characterize the cover by how injective the function is. E.g. if two elements of the large space map to each element of the small space, then we have a double cover, and so on. = Double cover {parent=Covering space} = Neighbourhood {disambiguate=mathematics} {parent=Topology} {wiki} The key concept of topology. = Topological space {parent=Topology} {wiki} = Manifold {parent=Topology} {wiki} We map each point and a small enough neighbourhood of it to <\R^n>, so we can talk about the manifold points in terms of coordinates. Does not require any further structure besides a consistent map. Notably, does not require a metric nor an addition operation to make a vector space. Manifolds are [cool]. Especially differentiable manifolds, which we can do calculus on. A notable example of a manifold is the space of generalized coordinates of a . For example, in a problem such as the , some of those generalized coordinates could be angles, which wrap around and thus are not . = Atlas {disambiguate=topology} {parent=Manifold} {wiki} Collection of coordinate charts. The key element in the definition of a manifold.
= Coordinate chart {parent=Atlas (topology)} = Covariant derivative {parent=Manifold} {wiki} A generalized definition of derivative that works on manifolds. TODO: how does it maintain a single value even across different charts? = Differentiable manifold {parent=Manifold} {wiki} TODO find a concrete numerical example of doing calculus on a differentiable manifold and visualizing it. Likely start with a boring circle. That would be sweet... = Tangent space {parent=Manifold} {wiki} TODO what's the point of it. Bibliography: * https://www.youtube.com/watch?v=j1PAxNKB_Zc Manifolds \#6 - Tangent Space (Detail) by WHYB maths (2020). This is worth looking into. * https://www.youtube.com/watch?v=oxB4aH8h5j4 actually gives a more concrete example. Basically, the vectors are defined by saying "we are taking the directional derivative of any function along this direction". One thing to remember is that of course, the most convenient way to define a function $f$ and to specify a direction, is by using one of the coordinate charts. We can then just switch between charts by change of basis. * http://jakobschwichtenberg.com/lie-algebra-able-describe-group/ by Jakob Schwichtenberg * https://math.stackexchange.com/questions/1388144/what-exactly-is-a-tangent-vector/2714944 What exactly is a tangent vector? on Mathematics Stack Exchange = Tangent vector to a manifold {parent=Tangent space} A member of a tangent space. = One-form {parent=Manifold} {wiki} https://www.youtube.com/watch?v=tq7sb3toTww&list=PLxBAVPVHJPcrNrcEBKbqC_ykiVqfxZgNl&index=19 mentions that it is a bit like a but for a : it measures how much that vector [derives] along a given direction. = Metric {disambiguate=mathematics} {parent=Topology} {title2=$d(x, y)$} {wiki} = Distance {synonym} = Metric {synonym} A metric is a function that gives the distance, i.e. a real number, between any two elements of a space. A metric may be induced from a norm as shown at: {full}. Because a [norm can be induced by an inner product], and the inner product can be given by a matrix, in simple cases metrics can also be represented by a matrix. = Metric space {parent=Metric (mathematics)} {wiki} Canonical example: Euclidean space. = Metric space vs normed vector space vs inner product space {parent=Metric space} TODO examples: * a metric space that is not a normed vector space * norm vs metric: a norm gives the size of one element. A metric is the distance between two elements. Given a norm in a space with subtraction, we can obtain a distance function: the metric induced by the norm. \Image[https://upload.wikimedia.org/wikipedia/commons/7/74/Mathematical_Spaces.png] {title=Hierarchy of topological, metric, normed and inner product spaces} = Complete metric space {parent=Metric space} {wiki} In plain English: the space has no visible holes. If you start walking less and less on each step, you always converge to something that also falls in the space. One notable example where completeness matters: the Lebesgue integral of $\LP$ is complete but the Riemann integral isn't. = Normed vector space {parent=Metric space} {wiki} = Inner product space {parent=Normed vector space} {wiki} Subcase of a normed vector space, therefore also necessarily a metric space. = Inner product {parent=Inner product space} {wiki} Appears to be analogous to the dot product, but also defined for more general vector spaces. = Norm {disambiguate=mathematics} {parent=Metric space} {title2=$|x|$} = Norm {synonym} Vs metric: * a norm is the size of one element. A metric is the distance between two elements. * a norm is only defined on a vector space. A metric could be defined on something that is not a vector space. Most basic examples however are also vector spaces.
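As a tiny numerical illustration of the norm vs metric distinction (my own example, assuming numpy):
``
import numpy as np

x = np.array([3.0, 4.0])
y = np.array([0.0, 1.0])

# Norm: the size of a single element.
print(np.linalg.norm(x))      # 5.0

# Metric induced by the norm: the distance between two elements,
# d(x, y) = |x - y|.
print(np.linalg.norm(x - y))  # sqrt(9 + 9) = 4.24...

# The triangle inequality, one of the metric axioms, holds:
z = np.array([1.0, 2.0])
assert np.linalg.norm(x - y) <= np.linalg.norm(x - z) + np.linalg.norm(z - y)
``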
= Norm induced by an inner product {parent=Norm (mathematics)} {wiki} = Norm induced by the inner product {synonym} An inner product $x \cdot y$ induces a norm with: $$ |x| = \sqrt{x \cdot x} $$ = Metric induced by a norm {parent=Norm (mathematics)} In a normed vector space, a metric may be induced from the norm by using subtraction: $$ d(x, y) = |x - y| $$ = Pseudometric space {parent=Metric space} {wiki} Like a metric space, but where the distance between two distinct points can be zero. Notable example: {child}. = Compact space {parent=Topology} {wiki} = Compact {synonym} = Dense set {parent=Topology} {wiki} = Connected space {parent=Topology} {wiki} = Disconnected space {synonym} = Connected component {parent=Connected space} {wiki} When a is made up of several smaller , then each smaller component is called a "connected component" of the larger space. See for example the = Simply connected space {parent=Connected space} {wiki} = Simply connected {synonym} = Loop {disambiguate=topology} {parent=Simply connected space} = Homotopy {parent=Topology} {wiki} = Homotopic {synonym} = Generalized Poincaré conjecture {parent=Homotopy} There are two cases: * (topological) manifolds * differential manifolds Questions: are all compact manifolds / differential manifolds homotopic / diffeomorphic to the sphere in that dimension? * for topological manifolds: this is a generalization of the Poincaré conjecture. The original problem was posed for $n = 3$ for topological manifolds. Last to be proven, only the 4-differential manifold case missing as of 2013. Even the truth for all $n > 4$ was proven in the 60's! Why is low dimension harder than high dimension?? Surprise! AKA: classification of compact 3-manifolds. The result turned out to be even simpler than compact 2-manifolds: there is only one, and it is equal to the 3-sphere. For dimension two, we know there are infinitely many: * for differential manifolds: Not true in general. First counter example is $n = 7$. Surprise: what is special about the number 7!? Counter examples are called exotic spheres. Totally unpredictable count table: | Dimension | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 | 10 | 11 | 12 | 13 | 14 | 15 | 16 | 17 | 18 | 19 | 20 | | Smooth types | 1 | 1 | 1 | ? | 1 | 1 | 28 | 2 | 8 | 6 | 992 | 1 | 3 | 2 | 16256 | 2 | 16 | 16 | 523264 | 24 | $n = 4$ is an open problem, there could even be infinitely many. Again, why are things more complicated in lower dimensions?? = Exotic sphere {parent=Generalized Poincaré conjecture} {wiki} = Poincaré conjecture {c} {parent=Generalized Poincaré conjecture} {wiki} = Classification of closed surfaces {parent=Generalized Poincaré conjecture} * https://en.wikipedia.org/wiki/Surface_(topology)#Classification_of_closed_surfaces * http://www.proofwiki.org/wiki/Classification_of_Compact_Two-Manifolds So simple!! You can either: * cut two holes and glue a handle. This is easy to visualize as it can be embedded in <\R^3>: you just get a torus, then a double torus, and so on * cut a single hole and glue a Möbius strip in it. Keep in mind that this is possible because the Möbius strip has a single boundary just like the hole you just cut. This leads to another infinite family that starts with: * 1: the real projective plane * 2: the Klein bottle A handle cancels out a Möbius strip, so adding one of each does not lead to a new object. You can glue a Möbius strip into a single hole in dimension larger than 3! And it gives you a Klein bottle! Intuitively speaking, they can be seen as the smooth surfaces in N-dimensional space (called an embedding), such that deforming them is allowed. 4 dimensions are enough to embed and cover all the cases: 3 are not enough because of the Klein bottle and family.
= Torus {c} {parent=Classification of closed surfaces} {wiki} = Möbius strip {c} {parent=Classification of closed surfaces} {wiki} = Klein bottle {c} {parent=Classification of closed surfaces} {wiki} A sphere with two Möbius strips stuck into it as per the classification of closed surfaces. = Real coordinate space {c} {parent=Topology} {wiki} = $\R^n$ {synonym} {title2} = Real line {parent=Real coordinate space} {wiki} = $\R^1$ {synonym} {title2} = 1D {synonym} = Real plane {parent=Real coordinate space} = $\R^2$ {synonym} {title2} = 2D {synonym} = Real coordinate space of dimension three {c} {parent=Real coordinate space} = $\R^3$ {synonym} {title2} = 3D {synonym} = Real coordinate space of dimension four {c} {parent=Real coordinate space} = $\R^4$ {synonym} {title2} = Four-dimensional space {synonym} = Four-dimensional {synonym} = 4D {synonym} {title2} Important 4D spaces: * <3-sphere> = Visualizing 4D {parent=Real coordinate space of dimension four} Simulate it. Just simulate it. \Video[http://youtube.com/watch?v=0t4aKJuKP0Q] {title=4D Toys: a box of four-dimensional toys by Miegakure (2017)} = Dimension {parent=Real coordinate space} {wiki} = Infinite dimensional {parent=Dimension} = Infinite dimensions {synonym} https://math.stackexchange.com/questions/466707/what-are-some-examples-of-infinite-dimensional-vector-spaces = Finite dimensional {parent=Infinite dimensional} = Finite dimension {synonym} = Complex coordinate space {parent=Real coordinate space} {wiki} = $\C^n$ {title2} {synonym} = Complex coordinate space of dimension 2 {parent=Complex coordinate space} = $\C^2$ {synonym} {title2} = Complex dot product {parent=Complex coordinate space} This section is about the definition of the dot product over complex numbers, which extends the definition of the dot product over the real numbers. Some motivation is discussed at: https://math.stackexchange.com/questions/2459814/what-is-the-dot-product-of-complex-vectors/4300169#4300169 The complex dot product is defined as: $$ \sum a_i \overline{b_i} $$ E.g. in $\C^1$: $$ (a + bi) \cdot (c + di) = (a + bi) (\overline{c + di}) = (a + bi) (c - di) = (ac + bd) + (bc - ad)i $$ We can see therefore that this is a sesquilinear form
, and positive definite, because: $$ (a + bi) \cdot (a + bi) = (aa + bb) + (ba - ab)i = a^2 + b^2 $$ Just like the usual dot product, this will be an inner product by definition. = Norm induced by the complex dot product {parent=Complex dot product} {tag=Norm induced by an inner product} Given: $$ x = (a_1 + b_1 i, \ldots, a_n + b_n i) \in \C^n, \quad a_k, b_k \in \R $$ the norm ends up being: $$ |x| = \sqrt{\sum_{k=1}^n a_k^2 + b_k^2} $$ E.g. in $\C^2$: $$ |(2 + 3i, -1 + 5i)| = \sqrt{2^2 + 3^2 + (-1)^2 + 5^2} = \sqrt{4 + 9 + 1 + 25} = \sqrt{39} $$ = Euclidean space {c} {parent=Real coordinate space} {wiki} = Euclidean {synonym} <\R^n> with extra structure added to make it into a {parent}. = Euclidean metric signature matrix {parent=Euclidean space} The identity matrix. = Cartesian coordinate system {c} {parent=Euclidean space} {wiki} = Cartesian coordinate {synonym} = Polar coordinate system {c} {parent=Euclidean space} {wiki} = Polar coordinate {synonym} = Spherical coordinate system {c} {parent=Polar coordinate system} {wiki} = Spherical coordinate {synonym} = Pythagorean theorem {c} {parent=Euclidean space} {wiki} = Non-Euclidean geometry {c} {parent=Euclidean space} {wiki} = Non-Euclidean {synonym} = Elliptic geometry {parent=Non-Euclidean geometry} {wiki} = Model of elliptic geometry {parent=Elliptic geometry} = Projective elliptic geometry {parent=Model of elliptic geometry} = Projective model of elliptic geometry {synonym} Each elliptic space can be modelled with a . The best thing is to just start thinking about the . = Hyperbolic geometry {parent=Non-Euclidean geometry} {wiki} = Hyperbolic functions {parent=Hyperbolic geometry} {wiki} = Hyperbolic sine {parent=Hyperbolic functions} = sinh {synonym} = Hyperbolic cosine {parent=Hyperbolic functions} = cosh {synonym} = Distribution {disambiguate=mathematics} {parent=Calculus} Generalizes the notion of function to allow adding some useful things which people wanted to be classical functions but which are not. It therefore requires you to redefine and reprove all of calculus. For this reason, most people are tempted to assume that all the hand wavy intuitive arguments teachers give are true and just move on with life. And they generally are. One notable example where distributions pop up are the eigenfunctions of the position operator in quantum mechanics, which are given by Dirac delta functions, which are most commonly rigorously defined in terms of distributions. Distributions are also defined in a way that allows you to do calculus on them. Notably, you can define a derivative, and the derivative of the Heaviside step function is the Dirac delta function. = Dirac delta function {c} {parent=Distribution (mathematics)} {wiki} The "0-width" pulse that integrates to a step. There's no way to describe it as a classical function, making it the most important example of a distribution. Applications: * in . It's not a coincidence that the function is named after Dirac. = Green's function {c} {parent=Dirac delta function} {wiki} = Heaviside step function {c} {parent=Dirac delta function} {wiki} = Normal distribution {c} {parent=Distribution (mathematics)} {wiki} = Complex analysis {parent=Calculus} {wiki} The surprising thing is that a bunch of results are simpler in complex analysis! = Complex analysis bibliography {parent=Complex analysis} = Complex Analysis by Juan Carlos Ponce Campuzano {c} {parent=Complex analysis bibliography} {tag=Visual math HTML book} {tag=CC BY-NC-SA} https://complex-analysis.com = Holomorphic function {parent=Complex analysis} {wiki} Being a complex holomorphic function is an extremely strong condition. The existence of the first derivative implies the existence of all derivatives. Another extremely strong consequence is the identity theorem.
"Holos" means "entire" in Greek, so maybe this is a reference to the fact that due to the identity theorem, knowing the function on a small open ball implies knowing the function everywhere. = Analytic continuation {parent=Complex analysis} {wiki} is a good quick visual non-mathematical introduction is to it. The key question is: how can this continuation be unique since we are defining the function outside of its original domain? The answer is: due to the . = Visualizing the Riemann hypothesis and analytic continuation by 3Blue1Brown (2016) {parent=Analytic continuation} Good ultra quick visual non-mathematical introduction to the Riemann hypothesis and analytic continuation. \Video[http://youtube.com/watch?v=sD0NjbwqlYw] = Identity theorem {parent=Analytic continuation} {wiki} Essentially, defining an on any open subset, no matter how small, also uniquely defines it everywhere. This is basically why it makes sense to talk about at all. One way to think about this is because the matches the exact value of an holomorphic function no matter how large the difference from the starting point. Therefore a holomorphic function basically only contains as much information as a countable sequence of numbers. = Riemann zeta function {c} {parent=Identity theorem} {wiki} = Riemann hypothesis {c} {parent=Riemann zeta function} {wiki} is a good quick visual non-mathematical introduction is to it. One of the {parent} and {parent}. \Video[https://www.youtube.com/watch?v=e4kOh7qlsM4] {title=What is the Riemann Hypothesis REALLY about? by HexagonVideos (2022)} = Hilbert space {c} {parent=Calculus} {wiki} Key for , see: , the most important example by far being . = Complete basis {parent=Hilbert space} Finding a complete basis such that each vector solves a given is the basic method of solving through . The first example of this you must see is . Notable examples: * {child} for the as shown at and * {child} for the * {child} for in * {child} for the <2D wave equation on a circular domain> in = Differential equation {parent=Calculus} {tag=Functional equation} {wiki} = Euler number {c} {parent=Differential equation} {title2=$e$} {wiki} = Natural logarithm {parent=Euler number} {title2=$ln(n)$} {title2=$log_e(n)$} {wiki} = Logarithmic integral function {parent=Natural logarithm} {title2=$li(x) = \int _{0}^{x}{\frac {dt}{\ln t}}$} {wiki} = Logarithm integral {synonym} {title2} Sample software implementations: * : {file} = Euler-Mascheroni constant {c} {parent=Natural logarithm} {wiki=Euler–Mascheroni constant} : https://math.stackexchange.com/questions/629630/simple-proof-euler-mascheroni-gamma-constant = Linear differential equation {parent=Differential equation} {wiki} The name is a bit obscure if you don't think in very generalized terms right out of the gate. It refers to a of [multiple variables], which by definition must have the super simple form of: $$ f(x_0, x_1, ..., x_n) = c_0x_0 + c_1x_1 + ... + c_nx_n + k $$ and then we just put the unknown $y$ and each derivative into that simple polynomial: $$ f(y(x), y'(x), ..., y^{(n)}(x)) = c_0y + c_1y' + ... + c_ny^{(n)} + k $$ except that now the $c_i$ are not just constants, but they can also depend on the argument $x$ (but not on $y$ or its derivatives). Explicit solutions exist for the very specific cases of: * constant coefficients, any degree. These were known for a long time, and are were studied when [Ciro was at university] in the . 
* degree 1 and any coefficient = Holonomic function {parent=Linear differential equation} {wiki} = Order of a differential equation {parent=Differential equation} {wiki} The order of the highest derivative that appears. = Ordinary differential equation {parent=Differential equation} {title2=ODE} {wiki} = Existence and uniqueness of solutions of ordinary differential equations {parent=Ordinary differential equation} {tag=Existence and uniqueness} = Peano existence theorem {c} {parent=Existence and uniqueness of solutions of ordinary differential equations} {wiki} = Picard-Lindelöf theorem {c} {parent=Existence and uniqueness of solutions of ordinary differential equations} {wiki=Picard–Lindelöf theorem} = System of ordinary differential equations {parent=Ordinary differential equation} = System of linear ordinary differential equations {parent=System of ordinary differential equations} = Partial differential equation {parent=Differential equation} {wiki} = PDE {c} {synonym} {title2} = Analytical method to solve a partial differential equation {parent=Partial differential equation} * {child} = Separation of variables {parent=Analytical method to solve a partial differential equation} {wiki} Technique to solve partial differential equations. Naturally leads to the Fourier series, see: , and to other analogous expansions: One notable application is the solution of the heat equation via the Fourier series. Bibliography: * https://math.libretexts.org/Bookshelves/Differential_Equations/Book%3A_Differential_Equations_for_Engineers_(Lebl)/4%3A_Fourier_series_and_PDEs/4.06%3A_PDEs_separation_of_variables_and_the_heat_equation on separation of variables for the heat equation = Numerical method to solve a partial differential equation {parent=Partial differential equation} {wiki=Numerical_methods_for_partial_differential_equations} = Numerical methods to solve partial differential equations {synonym} The finite element method is one of the most common ways to solve PDEs in practice. = Variational formulation of a partial differential equation {parent=Numerical method to solve a partial differential equation} https://www.cis.upenn.edu/~cis515/cis515-12-sl11.pdf Used for example by FreeFem and the FEniCS Project as the input description of the PDEs, TODO why. = Weak solution {parent=Variational formulation of a partial differential equation} {wiki} = Finite element method {parent=Numerical method to solve a partial differential equation} {wiki} Used to solve partial differential equations. TODO understand, give intuition, justification of bounds and demo. = Important partial differential equation {parent=Partial differential equation} The majority likely comes from physics: * {child} * {child} * {child} * {child} * {child} = Laplace's equation {c} {parent=Important partial differential equation} {wiki} Like a heat equation but for functions without time dependence, space-only. TODO confirm: does the solution of the heat equation always converge to the solution of the Laplace equation as time tends to infinity? In one dimension, the Laplace equation is boring as it is just a straight line, since the second derivative must be 0. That also matches our intuition of the limit solution of the heat equation. Uniqueness: . = Legendre polynomials {c} {parent=Laplace's equation} Show up when solving Laplace's equation in spherical coordinates by separation of variables, which leads to the differential equation shown at: https://en.wikipedia.org/w/index.php?title=Legendre_polynomials&oldid=1018881414#Definition_via_differential_equation[]. = Poisson's equation {c} {parent=Laplace's equation} {wiki} Generalization of Laplace's equation where the right-hand side is not necessarily 0. = Uniqueness theorem for Poisson's equation {c} {parent=Poisson's equation} {wiki} = Harmonic function {parent=Laplace's equation} {wiki} A solution to Laplace's equation.
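For a quick symbolic sanity check of the definition (my own example, assuming sympy), the classic function $x^2 - y^2$, the real part of the holomorphic $z^2$, has zero Laplacian:
``
import sympy as sp

x, y = sp.symbols('x y')
f = x**2 - y**2  # real part of z^2, a classic harmonic function

# Laplacian: the sum of unmixed second partial derivatives.
laplacian = sp.diff(f, x, 2) + sp.diff(f, y, 2)
assert sp.simplify(laplacian) == 0
``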
= Spherical harmonic {parent=Harmonic function} {wiki=Spherical_harmonics} Correspond to the angular part of Laplace's equation in spherical coordinates after using separation of variables, as shown at: https://en.wikipedia.org/wiki/Spherical_harmonics#Laplace's_spherical_harmonics = Heat equation {parent=Important partial differential equation} {wiki} Besides being useful in engineering, it was very important historically from a "development of mathematics point of view", e.g. [it was the initial motivation for the Fourier series]. Some interesting properties: * TODO confirm: for a fixed boundary condition that does not depend on time, the solutions always approach one specific equilibrium function. This is in contrast notably with the wave equation, which can oscillate forever. * TODO: for a given point, can the temperature go down and then up, or is it always monotonic with time? * information propagates instantly to infinitely far. Again in contrast to the wave equation, where information propagates at wave speed. Sample numerical solutions: * with : * * = Heat equation solution with Fourier series {parent=Heat equation} {tag=Solving partial differential equations with the Fourier series} See: https://math.stackexchange.com/questions/579453/real-world-application-of-fourier-series/3729366#3729366 = Wave equation {parent=Important partial differential equation} {wiki} Describes perfect lossless waves on the surface of a string, or on a water surface. Uniqueness: https://math.stackexchange.com/questions/1113622/uniqueness-of-solutions-to-the-wave-equation As mentioned at: https://math.stackexchange.com/questions/579453/real-world-application-of-fourier-series/3729366#3729366[] from citing https://courses.maths.ox.ac.uk/node/view_material/1720[], analogously to the heat equation, the linear wave equation can be solved nicely with separation of variables. = Wave equation solver {parent=Wave equation} This section talks about solvers/simulators dedicated to solving the wave equation. Of course, any serious solver will likely be able to solve a wider range of PDEs, so this section contains mostly fun toys. For more serious stuff see: {full}. Toy solvers: * https://jtiscione.github.io/webassembly-wave/index.html circular domain, create waves with mouse click * https://dionyziz.com/graphics/wave-experiment/ with useless 3D visualization :-), waves with mouse click. Solving itself done on the CPU, not the GPU. Related: * https://stackoverflow.com/questions/69949335/how-to-simulate-a-wave-equation = Wave equation solution with Fourier series {parent=Wave equation} {tag=Solving partial differential equations with the Fourier series} https://web.archive.org/web/20200621205928/https://courses.maths.ox.ac.uk/node/view_material/1720 also mentioned at https://math.stackexchange.com/questions/579453/real-world-application-of-fourier-series/3729366#3729366 from . = The wave equation can be seen as infinitely many infinitesimal coupled oscillators {parent=Wave equation} TODO confirm, see also: . And then this idea can be used to define/motivate quantum field theory in terms of quantum harmonic oscillators. * https://youtu.be/SMmFgIEGYtw?t=324 Quantum Field Theory 2a - Field Quantization I by (2018) = Lossy 1D Wave Equation {parent=Wave equation} {wiki} https://ccrma.stanford.edu/~jos/pasp/Lossy_1D_Wave_Equation.html = Wave {parent=Wave equation} {wiki} = Envelope {disambiguate=waves} {parent=Wave} {wiki} = Polarization {parent=Wave equation} {wiki=Polarization_(waves)} Start with: {full}. Then go to: {full}. = String polarization {parent=Polarization} This is about the polarization of a string in 3D space.
That is the first concept of polarization you must have in mind! = Diffraction {parent=Wave equation} {wiki} = Huygens-Fresnel principle {c} {parent=Diffraction} {wiki=Huygens–Fresnel principle} = Kirchhoff's diffraction formula {c} {parent=Huygens-Fresnel principle} {wiki} Approximation to the Huygens-Fresnel principle. = Fraunhofer diffraction {c} {parent=Kirchhoff's diffraction formula} {wiki} Far field approximation to Kirchhoff's diffraction formula, i.e. when the plane of observation is far from the object diffracting. = Fresnel diffraction {c} {parent=Kirchhoff's diffraction formula} {wiki} Near field approximation to Kirchhoff's diffraction formula, i.e. when the plane of observation is near the object diffracting. = Refraction {parent=Wave equation} {wiki} = Resonance {parent=Wave equation} {wiki} = Resonance frequency {synonym} = Resonate {synonym} = Resonates {synonym} Resonance is a really cool thing. Examples: * , notably: * pipe instruments * , notably: * , and notably the lossy version Perhaps a key insight of resonance is that any lossy system tends to look like its resonance frequency quite quickly, even if the initial condition is not the resonant condition itself, because everything that is not the resonant frequency interferes destructively and becomes noise. Some examples of that: * striking a bell or drum can be modelled by applying an impulse to the system * playing a pipe instrument comes down to blowing a piece that vibrates randomly, and then leads the pipe to vibrate mostly in the resonant frequency. Likely the same applies to bowed string instruments, the bow must be creating a random vibration. * playing a plucked string instrument comes down to initializing the system to a triangular waveform and then letting it evolve. TODO find a simulation of that! Another cool aspect of resonance is that it was kind of the motivation for , as was kind of thinking that electrons might show discrete jumps on because of constructive interference. = Wave interference {parent=Wave equation} {wiki} = Interference pattern {parent=Wave interference} What you see along a line or plane in a wave interference experiment. Notably used for the pattern of the double-slit experiment. = 2D wave equation on a circular domain {parent=Wave equation} {wiki=Vibrations_of_a_circular_membrane} = Bessel function {parent=2D wave equation on a circular domain} {wiki} Shows up when trying to solve the <2D wave equation on a circular domain> in polar coordinates with separation of variables, where we have to decompose the initial condition in terms of a Fourier-Bessel series, exactly like the Fourier series appears when solving the wave equation in linear coordinates. For the same fundamental reasons, also appears when calculating the . = Fourier-Bessel series {parent=Bessel function} {wiki=Fourier–Bessel_series} Completeness: https://math.stackexchange.com/questions/2192665/is-this-set-of-bessel-functions-a-basis-for-all-c10-a-functions TODO This is the analogue to . = Helmholtz equation {c} {parent=Wave equation} {wiki} Eigenvalue problem of the Laplace operator. = Existence and uniqueness of solutions of partial differential equations {parent=Partial differential equation} {tag=Existence and uniqueness} If you have a partial differential equation that models [physical phenomena], it is fundamental that: * there must exist a solution for every physically valid initial condition, otherwise it means that the equation does not describe certain cases of reality * the solution must be unique, otherwise how are we to choose between the multiple solutions? Unlike ordinary differential equations, which have the https://en.wikipedia.org/wiki/Picard–Lindelöf_theorem[Picard–Lindelöf theorem], the existence and uniqueness of solutions is not as well solved for PDEs. For example, {child} was one of the Millennium Prize Problems.
= Partial differential equation solver {parent=Partial differential equation} {tag=Numerical software} = PDE solver {c} {synonym} = FreeFem {c} {parent=Partial differential equation solver} {wiki=FreeFem++} https://freefem.org/ https://github.com/FreeFem/FreeFem-sources Started in 1987 and written in Pascal, by the French from . The French are really strong in . Ciro wasn't expecting it to be as old. Ported to C++ in 1992. The fact that the French wrote it can be seen in the documentation, for example https://doc.freefem.org/tutorials/index.html uses file extension `mycode.edp` instead of `mycode.pde`, where `edp` stands for "https://fr.wikipedia.org/wiki/Équation_aux_dérivées_partielles[Équation aux dérivées partielles]". Besides the painful build, using FreeFem is relatively simple, as can be seen from the examples on the website. They do use a domain-specific language on the examples, which appears to be the main/only interface, which is a bad thing: Ciro would rather have a regular programming language as the "main API", which is more the approach taken by the FEniCS Project, but so be it. This domain-specific language business means that you always stumble upon basic stuff you want to do but can't, and then you have to think about how to share data between the simulation and the plotting. The plotting notably is super complex and they can't implement all of what people want, upstream examples often offload that to gnuplot. This is potentially a big advantage of the FEniCS Project. It is nice though that they do have some graphics out of the box, as that allows to quickly debug common problems. Uses the variational formulation of a partial differential equation, which is not immediately obvious to beginners? The introduction https://doc.freefem.org/tutorials/poisson.html gives an ultra quick example, but you are mostly on your own with that. On Ubuntu 20.04, the `freefem` package is a bit out-of-date (3.5.8, there isn't even a tag for that in the repo, and refs/tags/release_3_10 is from 2010!) and fails to run the examples from the website. It did work with the example package though, but the output does not have color, which makes me sad :-) `` sudo apt install freefem freefem-examples freefem /usr/share/doc/freefem-examples/heat.pde `` So let's just compile the latest v4.6 from source, on Ubuntu 20.04: `` sudo apt build-dep freefem git clone https://github.com/FreeFem/FreeFem-sources cd FreeFem-sources # Post v4.6 with some fixes. git checkout 3df0e2370d9752801ac744b11307b14e16743a44 # Won't apply automatically due to tab hell.
# https://superuser.com/questions/607410/how-to-copy-paste-tab-characters-via-the-clipboard-into-terminal-session-on-gnom git apply <<'EOS' diff --git a/3rdparty/ff-petsc/Makefile b/3rdparty/ff-petsc/Makefile index dc62ab06..13cd3253 100644 --- a/3rdparty/ff-petsc/Makefile +++ b/3rdparty/ff-petsc/Makefile @@ -204,7 +204,7 @@ $(SRCDIR)/tag-make-real:$(SRCDIR)/tag-conf-real $(SRCDIR)/tag-install-real :$(SRCDIR)/tag-make-real cd $(SRCDIR) && $(MAKE) PETSC_DIR=$(PETSC_DIR) PETSC_ARCH=fr install -test -x "`type -p otool`" && make changer - cd $(SRCDIR) && $(MAKE) PETSC_DIR=$(PETSC_DIR) PETSC_ARCH=fr check + #cd $(SRCDIR) && $(MAKE) PETSC_DIR=$(PETSC_DIR) PETSC_ARCH=fr check test -e $(DIR_INSTALL_REAL)/include/petsc.h test -e $(DIR_INSTALL_REAL)/lib/petsc/conf/petscvariables touch $@ @@ -293,7 +293,6 @@ $(SRCDIR)/tag-tar:$(PACKAGE) -tar xzf $(PACKAGE) patch -p1 < petsc-hpddm.patch ifeq ($(WIN32DLLTARGET),) - patch -p1 < petsc-metis.patch endif touch $@ $(PACKAGE): EOS autoreconf -i ./configure --enable-download --enable-optim --prefix="$(pwd)/../FreeFem-install" ./3rdparty/getall -a cd 3rdparty/ff-petsc make petsc-slepc cd - ./reconfigure make -j`nproc` make install cd ../FreeFem-install PATH="${PATH}:$(pwd)/bin" ./bin/FreeFem++ ../FreeFem-sources/examples/tutorial/ `` Ciro's initial build experience was a bit painful, possibly because it was done on a relatively new Ubuntu 20.04 as of June 2020, but in the end it worked: https://github.com/FreeFem/FreeFem-sources/issues/141 The main/only dependency appears to be https://en.wikipedia.org/wiki/Portable,_Extensible_Toolkit_for_Scientific_Computation[PETSc] which is used by default, which is a good sign, as that library appears to automatically parallelize a single input to several backends (single CPU, MPI, GPU) so you know things will scale up as you reach larger simulations. The problem is that compiling such a complex dependency opens up much more room for hard to solve compilation errors, and takes a lot more time. = FreeFem examples {parent=FreeFem} = heat-dirichlet.1d.freefem {parent=FreeFem examples} 1-dimensional heat equation example with Dirichlet boundary condition: * \a[freefem/heat-dirichlet.1d.freefem] = heat-dirichlet-2d-freefem {parent=FreeFem examples} 2-dimensional heat equation example with Dirichlet boundary condition: * \a[freefem/heat-dirichlet.2d.freefem] = FEniCS Project {c} {parent=Partial differential equation solver} {wiki} https://fenicsproject.org/ One big advantage over FreeFem is that it uses plain old Python to describe the problems instead of a domain-specific language. Matplotlib is used for plotting by default, so we get full Python power out of the box! Also uses like which is a pain. One downside is that its documentation is a Springer published PDF https://link.springer.com/content/pdf/10.1007%2F978-3-319-52462-7.pdf which is several years out-of-date (tested with FEniCS 2016.2). Newbs. This causes problems e.g.: https://stackoverflow.com/questions/53730427/fenics-did-not-show-figure-nameerror-name-interactive-is-not-defined/57390687#57390687 Systems of partial differential equations are mentioned at: https://link.springer.com/content/pdf/10.1007%2F978-3-319-52462-7.pdf 3.5 "A system of advection–diffusion–reaction equations". You don't need to manually iterate between the equations.
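To give an idea of what the plain Python interface looks like, here is a sketch of the classic Poisson demo in the style of that tutorial (assuming the legacy `fenics` package as installed below; the manufactured solution $1 + x^2 + 2y^2$ with $f = -6$ is the tutorial's standard test case):
``
from fenics import *

# Poisson equation -Laplacian(u) = f on the unit square, with
# u = 1 + x^2 + 2*y^2 imposed on the boundary (Dirichlet), so that
# the exact solution is known and equals the boundary expression.
mesh = UnitSquareMesh(8, 8)
V = FunctionSpace(mesh, 'P', 1)

u_D = Expression('1 + x[0]*x[0] + 2*x[1]*x[1]', degree=2)
bc = DirichletBC(V, u_D, 'on_boundary')

# Variational formulation: find u in V such that a(u, v) = L(v) for all v.
u = TrialFunction(V)
v = TestFunction(V)
f = Constant(-6.0)
a = dot(grad(u), grad(v)) * dx
L = f * v * dx

u = Function(V)
solve(a == L, u, bc)
print('L2 error:', errornorm(u_D, u, 'L2'))
``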
On Ubuntu 20.04 as per https://fenicsproject.org/download/ `` sudo apt-get install software-properties-common sudo add-apt-repository ppa:fenics-packages/fenics sudo apt-get update sudo apt-get install --no-install-recommends fenics sudo apt install fenics python3 -m pip install --user matplotlib `` Before 2020-06, it was failing with: `` E: The repository 'http://ppa.launchpad.net/fenics-packages/fenics/ubuntu focal Release' does not have a Release file. `` but they seem to have created the Ubuntu 20.04 package as of 2020-06, so it now worked! https://askubuntu.com/questions/866901/what-can-i-do-if-a-repository-ppa-does-not-have-a-release-file TODO heat equation . = Hans Petter Langtangen {c} {parent=FEniCS Project} {wiki} GitHub account: https://github.com/hplgit It should be mentioned that when you start searching for [PDE] stuff, you will reach Hans' writings a lot under his domain: http://hplgit.github.io/[], and he is one of the main authors of the FEniCS Project. Unfortunately he died of cancer in 2016, shame, he seemed like a good educator. He also published web pages with his own crazy Markdown-like multi-output markup language: https://github.com/hplgit/doconce[]. Rest in peace, Hans. = System of partial differential equations {parent=Partial differential equation} In many important applications, what you have to solve is not just a single partial differential equation, but multiple partial differential equations coupled to each other. This is the case for many key PDEs including: * , see: {full} * * , see: {full} = Classification of second order partial differential equations into elliptic, parabolic and hyperbolic {parent=Partial differential equation} One major application of this classification is that different numerical methods are suitable for different types of equations, as explained at: . Bibliography: * https://math.stackexchange.com/questions/1090299/why-are-elliptic-parabolic-hyperbolic-pdes-called-elliptic-parabolic-hyperb = Elliptic partial differential equation {parent=Classification of second order partial differential equations into elliptic, parabolic and hyperbolic} {wiki} = Parabolic partial differential equation {parent=Classification of second order partial differential equations into elliptic, parabolic and hyperbolic} {wiki} = Hyperbolic partial differential equation {parent=Classification of second order partial differential equations into elliptic, parabolic and hyperbolic} {wiki} = Which boundary conditions lead to existence and uniqueness of a second order PDE {parent=Classification of second order partial differential equations into elliptic, parabolic and hyperbolic} http://www.cns.gatech.edu/~predrag/courses/PHYS-6124-12/StGoChap6.pdf 6.1 "Classification of PDE's" clarifies which boundary conditions are needed for existence and uniqueness of each type of second order PDE: * elliptic and parabolic: Dirichlet or Neumann * hyperbolic: Cauchy = Phase space {parent=Differential equation} {wiki} This idea comes up particularly in the Hamiltonian formulation of classical mechanics. = Boundary condition {parent=Differential equation} = Initial condition {parent=Boundary condition} Basically a subset of the boundary conditions for when one of the parameters is time and we are specifying values for time 0. = Boundary value problem {parent=Boundary condition} {wiki} = Dirichlet boundary condition {c} {parent=Boundary condition} {wiki} Specifies fixed values. Can be used for and . Numerical examples: * with : * * = Neumann boundary condition {c} {parent=Boundary condition} {wiki} Specifies the derivative in a direction normal to the boundary. Can be used for and . = Cauchy boundary condition {c} {parent=Neumann boundary condition} {wiki} Sets both a Dirichlet boundary condition and a Neumann boundary condition for a single part of the boundary.
Can be used for hyperbolic partial differential equations. We understand intuitively that this imposes stricter requirements on solutions, which makes it easier to guarantee uniqueness, but also harder to have existence. TODO intuitively why hyperbolic equations need this extra level of restriction. = Robin boundary condition {c} {parent=Neumann boundary condition} {wiki} Linear combination of a Dirichlet boundary condition and a Neumann boundary condition at each point of the boundary. Examples: * heat equation when a metal plaque is immersed in a large external environment of fixed temperature. In this case, the normal derivative at the boundary is proportional to the difference between the temperature of the boundary and the fixed temperature of the external environment. The result as time tends to infinity is that the temperature of the plaque tends to that of the environment. A solved example is shown in the FreeFem tutorial: https://doc.freefem.org/tutorials/thermalConduction.html (https://github.com/FreeFem/FreeFem-doc/blob/1d5996d8b891fd553fd318321249c2c30f693fc3/source/tutorials/thermalConduction.rst) = Open boundary condition {parent=Neumann boundary condition} In the context of wave-like equations, an open-boundary condition is one that "lets the wave go through without reflection". This condition is very useful when we want to simulate infinite domains with a numerical method. Ciro Santilli wants to do this all the time when trying to come up with demos for his writings. Here are some resources that cover such boundary conditions: * https://www.asc.tuwien.ac.at/~arnold/pdf/graz/graz.pdf lots of slides * http://hplgit.github.io/wavebc/doc/pub/._wavebc_cyborg002.html mentions them and gives a 1D formula. It mentions that things get complicated in 2D and 3D, TODO why. The other page: http://hplgit.github.io/wavebc/doc/pub/._wavebc_cyborg003.html shows solution demos. = Mixed boundary condition {parent=Neumann boundary condition} {wiki} Multiple types of boundary conditions for different parts of the boundary. = Time dependent boundary condition {parent=Boundary condition} Most commonly, boundary conditions such as the Dirichlet boundary condition are taken to be fixed values in time. But it also makes sense to think about cases where those values vary in time. Some bibliography: * https://math.stackexchange.com/questions/261251/heat-equation-with-time-dependent-boundary-conditions * https://secure.math.ubc.ca/~peirce/M257_316_2012_Lecture_20.pdf = Control theory {parent=Differential equation} {wiki} This basically adds one more ingredient to differential equations: a function that we can select. And then the question becomes: if this function has such and such limitation, can we make the solution of the differential equation have such and such property? It's quite fun from a mathematics point of view! Control theory also takes into consideration possible discretization of the domain, which allows using digital, rather than analogue, control methods. = Control engineering {parent=Control theory} {wiki} = Control system {parent=Control theory} {wiki} = Feedback loop {parent=Control theory} = Control loop {synonym} {title2} = Series {disambiguate=mathematics} {parent=Calculus} {wiki} = Power series {parent=Series (mathematics)} {wiki} = Analytic function {parent=Power series} {wiki} = Sine and cosine {parent=Analytic function} {wiki} = Sinusoidal {parent=Sine and cosine} {tag=Periodic function} A function that is either a sine or a cosine, i.e. we don't know or care where the origin is exactly. This is particularly relevant in , where the 's time origin is set to match the wave.
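A quick numerical way to see why the time origin does not matter (my own example, assuming numpy): by the angle addition formula, a phase-shifted sine is just a weighted sum of a sine and a cosine of the same frequency.
``
import numpy as np

t = np.linspace(0.0, 1.0, 1000)
A, omega, phi = 2.0, 2.0 * np.pi * 3.0, 0.7  # arbitrary values

# A sin(wt + phi) = (A cos(phi)) sin(wt) + (A sin(phi)) cos(wt):
# shifting the origin only reshuffles the sine and cosine amplitudes.
shifted = A * np.sin(omega * t + phi)
recombined = A * np.cos(phi) * np.sin(omega * t) + A * np.sin(phi) * np.cos(omega * t)
assert np.allclose(shifted, recombined)
``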
= Sine {parent=Sine and cosine} {wiki} = Cosine {parent=Sine and cosine} {wiki} = Radius of convergence {parent=Power series} {wiki} = Taylor series {c} {parent=Power series} {wiki} = Gradient, Divergence, Curl, and Laplacian {parent=Calculus} = Curl {disambiguate=mathematics} {parent=Gradient, Divergence, Curl, and Laplacian} {title2=$\curl{}$} {wiki} Points in the direction in which a wind spinner spins fastest. = Nabla symbol {parent=Gradient, Divergence, Curl, and Laplacian} {title2=$\nabla$} {wiki} = Nabla {synonym} As if Greek letters weren't enough, mathematicians and physicists also like to make up tons of symbols, [some of which look like they could actually be Greek letters]! Nabla is one of those: it was completely made up in modern times, and just happens to look like an inverted upper case Delta to make things even more confusing! Nabla means "harp" in Greek, which looks like the symbol. = Del {parent=Nabla symbol} {wiki} Oh, and as if that weren't enough, people have a separate name for the damned symbol: "del" instead of "nabla". TODO why is it called "Del"? Is it because it is an inverted uppercase Delta? = Divergence {parent=Gradient, Divergence, Curl, and Laplacian} {title2=$\div{}$} {title2=$div()$} {wiki} Takes a vector field as input and produces a scalar field. Mnemonic: it gives out the amount of fluid that is going in or out of a given volume per unit of time. Therefore, if you take a cubic volume: * the input has to be the 6 flows across each face, therefore 3 derivatives * the output is the variation of the quantity of fluid, and therefore a scalar = Gradient {parent=Gradient, Divergence, Curl, and Laplacian} {title2=$\grad{}$} {wiki} Takes a scalar field as input and produces a vector field. Mnemonic: the gradient shows the direction in which the function increases fastest. Think of a color gradient going from white to black from left to right. Therefore, it has to: * take a scalar field as input. Otherwise, how do you decide which vector is larger than the other? * output a vector field that contains the direction in which the scalar increases fastest locally at each point. This has to give out vectors, since we are talking about directions = Laplace operator {parent=Gradient, Divergence, Curl, and Laplacian} {title2=$\Delta$} {title2=$\nabla^2$} {wiki} = Laplacian {c} {synonym} Can be denoted either by: * the upper case Delta $\Delta$ * nabla squared $\nabla^2$ Our default symbol is going to be: $$ \laplacian{} $$ = D'alembert operator {c} {parent=Laplace operator} {title2=$\Box$} {wiki} The Laplace operator for Minkowski space. Can be nicely written with Einstein notation as shown at: {full}. = Infinitesimal {parent=Calculus} {wiki} Just use limits instead, please. The physicists are particularly guilty of this.