Abhimanyu Pallavi Sudhir

Fractional calculus

According to Ortigueira (2004), "The Grunwald-Letnikov fractional derivative is equal to the generalised Cauchy derivative". However, as readers who've studied the Grunwald derivative in some detail may recall, the Grunwald derivative is in fact ill-defined for rather innocuous-looking functions, like $f(x)=x$. For this function, the Grunwald derivative of order 1/2 is calculated as:

$$\begin{aligned} D^{1/2}x &= \lim_{h \to 0} h^{-1/2} \sum_{k=0}^{\infty} \binom{1/2}{k} (-1)^k (x - kh) \\ &= h^{-1/2}\left[ x \sum_{k=0}^{\infty} \binom{1/2}{k} (-1)^k - h \sum_{k=0}^{\infty} \binom{1/2}{k} k\, (-1)^k \right] \\ &= \frac{\sqrt{h}}{2} \sum_{k=0}^{\infty} \binom{-1/2}{k} (-1)^k \end{aligned}$$

The remaining summation is the binomial expansion of $(1-1)^{-1/2}$, which diverges, and when multiplied by the vanishing factor $\sqrt{h}/2$ it yields an indeterminate result. However, the Cauchy/Riemann-Liouville derivative does in fact give us a well-defined result: $\frac{2}{\sqrt{\pi}}\sqrt{x}$.
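
For reference, the Riemann-Liouville value quoted above comes from a standard computation:

$$D^{1/2}x = \frac{1}{\Gamma(1/2)}\frac{d}{dx}\int_0^x \frac{t}{\sqrt{x-t}}\,dt = \frac{1}{\sqrt{\pi}}\frac{d}{dx}\left(\frac{4}{3}x^{3/2}\right) = \frac{2}{\sqrt{\pi}}\sqrt{x}$$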

How might this be reconciled with the position that the two derivatives are the same – and indeed are the same wherever the Grunwald derivative converges? One way is to declare that the Cauchy derivative is a "principal value" of the Grunwald derivative. The only freedom we have in determining the value of the derivative lies in the two limits employed in its calculation, $\lim_{h \to 0}$ and $\lim_{N \to \infty}$ (the limit in which the truncated sum becomes the full infinite series). So we declare that $N$ and $h$ are not in fact independent variables whose limits are taken one after the other (which is equivalent to setting $N$ to "infinity over $h$"), but are instead related by some relation which, when used to unify the two limits, produces the Cauchy derivative as the principal value of the summation. One may easily calculate this relation for $D^{1/2}x$ – as it turns out, the relation that produces the principal value is $h=x/N$.
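
To see the coupling at work numerically, here is a minimal sketch (the function name and the test values are my own, purely illustrative) that evaluates the truncated Grunwald sum for $f(x)=x$ with $h=x/N$ and compares it against the Riemann-Liouville value $\frac{2}{\sqrt{\pi}}\sqrt{x}$:

```python
import math

def grunwald_half_derivative_of_x(x, N):
    """Truncated Grunwald sum for D^{1/2} x, with the coupling h = x/N."""
    h = x / N
    total = 0.0
    coeff = 1.0  # running value of (-1)^k * binom(1/2, k), starting at k = 0
    for k in range(N + 1):
        total += coeff * (x - k * h)
        coeff *= (k - 0.5) / (k + 1)  # advance to (-1)^(k+1) * binom(1/2, k+1)
    return total / math.sqrt(h)

x = 2.0
exact = 2.0 * math.sqrt(x) / math.sqrt(math.pi)  # Riemann-Liouville value
for N in (10, 100, 1000, 10000):
    print(N, grunwald_half_derivative_of_x(x, N), exact)
```

The truncated sums approach the Riemann-Liouville value as $N$ grows, whereas taking the two limits independently assigns no definite value.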

As it turns out, a more general relation is possible for other functions and other derivative orders (the relation arises as a solution to a certain characteristic polynomial – read the paper to find out!), but $h=x/N$ is an acceptable relation for any function with a Taylor expansion.

Determinant-like function

Suppose you wanted to extend a function $f:D \to R$ to another function $f':D'\to R'$, where $D\subset D'$, such that for all $x\in D$, $f'(x)=f(x)$. Such a function $f'$ would be called a "generalisation" of $f$. But such an $f'$ is hardly the only way to generalise $f$ to $D'$ – after all, the only constraint on the generalisation is that it agree with $f$ on $D$. There is no constraint at all on the value of $f'$ at any other point – for all you know, $f'(x)=x^2+\pi(x-|x|)$ is a valid generalisation of $f(x)=x^2$ from the positive numbers to the real numbers, and $f'(x)=x^2\cos(2\pi x)$ is a valid generalisation of $f(x)=x^2$ from the integers to the real numbers.

But these aren't particularly useful generalisations – or at least the first example isn't. A useful generalisation shares certain important properties with the original function. Of course, there can certainly be multiple useful generalisations of a function too, depending on which defining property you choose.

What does this have to do with the determinant? Well, at first glance it may seem nonsensical to generalise the determinant to non-square matrices. The determinant shows how volumes transform. Non-square matrices are inter-dimensional transformations – they either flatten volumes into lower-dimensional spaces or raise them into higher-dimensional spaces. In this sense, the determinant of a non-square matrix could only ever be 0 or infinity.

But "determinant is a volume" is not the only property of the determinant – sure, that is often treated as the axiomatic definition of the determinant, but there are several ways to define an axiomatic basis. Another defining property of the determinant is what I call the "determinant-wedge equivalence", or "the determinant of a matrix is equal to the exterior product of its column vectors". For square matrices, this obviously reduces to the volume, as the exterior product of the column vectors equals the volume. But for non-square matrices, this is calculated differently.

Indeed, this is the defining property I use to construct the determinant-like function. After a bit of calculation, I proved [1] that:

$$\operatorname{detl}(A)=\sqrt{(m-n)!}\sum_{k_1,k_2,\ldots,k_{m-n}}\left(\det\left(\operatorname{cross}_{k_1,k_2,\ldots,k_{m-n}}A\right)\right)^2$$

Here $A$ is an $m$ by $n$ matrix with more rows than columns (i.e. one that raises the dimensionality of the transformed vector), and the operator $\operatorname{cross}_{k_1,k_2,\ldots,k_{m-n}}A$ denotes deleting the rows $k_1,k_2,\ldots,k_{m-n}$ from the matrix (thus obtaining a square matrix whose determinant can be calculated conventionally).
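
As a rough illustration, here is a sketch that transcribes the displayed formula directly, assuming (as the notation suggests) that the sum runs over unordered choices of the $m-n$ rows to delete; the function name and the example matrix are mine, not from the paper:

```python
import itertools
import math
import numpy as np

def detl(A):
    """Direct transcription of the displayed formula for an m x n matrix A with m > n."""
    m, n = A.shape
    total = 0.0
    for deleted in itertools.combinations(range(m), m - n):
        kept = [i for i in range(m) if i not in deleted]
        minor = np.linalg.det(A[kept, :])  # det of the 'cross' submatrix
        total += minor ** 2
    return math.sqrt(math.factorial(m - n)) * total  # prefactor as printed above

# Example: a 3 x 2 matrix, i.e. a map that raises dimension from R^2 to R^3
A = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [1.0, 1.0]])
print(detl(A))
```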

There are other generalisations, including H. R. Pyle's (1962) and M. Radic's (1966). Interestingly, the former is a vector-valued determinant whose norm turns out to be equal to the determinant-like function – I discuss this in [2].

Archived

Tensors in unit vector notation?

My first paper was titled "The representation of matrices in unit vector notation". Obviously, this makes no sense. Matrices as they are introduced in linear algebra for kindergarteners are linear transformations of vectors. It doesn't make sense to write them in terms of vectors.

That is all very true. However, it is instructive to understand what exactly the representation I introduce means, if it is not a representation of matrices in unit vector notation. In fact, the paper is just a clumsily worded re-statement of the fact that tensors can be written as linear combinations of the outer products of vectors.
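
Concretely, that statement amounts to the decomposition $A = \sum_{i,j} A_{ij}\, e_i \otimes e_j$, where the $e_i$ are the standard unit vectors. A quick numerical check (variable names are mine):

```python
import numpy as np

A = np.array([[1.0, 2.0, 3.0],
              [4.0, 5.0, 6.0]])
m, n = A.shape
E_m, E_n = np.eye(m), np.eye(n)  # rows are the standard unit vectors of R^m and R^n

# Rebuild A as a linear combination of outer products e_i (x) e_j
reconstructed = sum(A[i, j] * np.outer(E_m[i], E_n[j])
                    for i in range(m) for j in range(n))

print(np.allclose(A, reconstructed))  # True
```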

Publication List

(numbering differs from the inline reference numbers used above and elsewhere)

  1. Abhimanyu Pallavi Sudhir, “The generalised Cauchy derivative as a principal value of the Grunwald-Letnikov fractional derivative for divergent expansions,” (2018) [arXiv:1809.08051], arxiv.org/abs/1809.08051
  2. Abhimanyu Pallavi Sudhir and Rahel Knoepfel, “PhysicsOverflow: A postgraduate-level physics Q&A site and open peer review system,” Asia-Pacific Physics Newsletter [World Scientific] 4-1 (2015): 53-55, dx.doi.org/10.1142/S2251158X15000193
  3. Abhimanyu Pallavi Sudhir, “On the Determinant-like function and the Vector Determinant,” Advances in Applied Clifford Algebras [Springer] 24-3 (2014): 805-807, dx.doi.org/10.1007/s00006-014-0455-3
  4. Abhimanyu Pallavi Sudhir, “On the properties of the determinant-like function,” (paper presented at the International Conference on Mathematical Sciences, Chennai, India, July 17-19, 2014).
  5. Abhimanyu Pallavi Sudhir, “Defining the Determinant-like Function for m by n Matrices Using the Exterior Algebra,” Advances in Applied Clifford Algebras [Springer] 23-4 (2013): 787-792, dx.doi.org/10.1007/s00006-013-0416-2
  6. Abhimanyu Pallavi Sudhir, “The Representation of Matrices in Unit Vector Notation,” Journal of Mathematics Research [Canadian Center of Science and Education] 4-4 (2012): 86-91, dx.doi.org/10.5539/jmr.v4n4p86