Taking half a derivative

March 13, 2021. Can you take half a derivative? Or π derivatives? Or even √–1 derivatives? It turns out the answer is yes, and there are two simple but apparently different ways to do it. I show that one implies the other!

Introduction

In calculus, the regular derivative is defined as the local gradient of a function:

\[f'(x) = \frac{d}{dx} f(x) = \lim_{h\to 0}\frac{f(x+h)-f(x)}{h}.\]

We will abbreviate this as $f' = Df$, understanding that $f$ is a function of $x$ and $D$ differentiates with respect to $x$. We can always differentiate again, and again, and in fact as many times as we want. Using our new notation, we can write the $n$th derivative as

\[D (D \cdots (Df)) = D^n f.\]

This is well-defined as long as $n$ is a whole number. But what if we could consider other types of derivatives, say half a derivative? Let’s call this $D^{1/2} = \sqrt{D}$. In the same way that applying two ordinary derivatives gives the second derivative, it seems reasonable to hope that two half derivatives give a full derivative:

\[f' = \sqrt{D} \sqrt{D}f = Df \quad \Longrightarrow \quad \sqrt{D} \cdot \sqrt{D} = D.\]

What could half a derivative look like?

To be continued

The easiest way to go about this is to use a trick called analytic continuation. This has a precise meaning in complex analysis, and we’re going to do something similar in spirit, but not quite as rigorous. The basic idea is to find some nice, specific function we can differentiate $n$ times, and which happens to give us a nice answer in terms of $n$. We then define the fractional derivative $D^\alpha$ acting on this function by replacing $n$ with $\alpha$. A sanity check will be that, for general $\alpha, \beta$, the fractional derivatives obey

\[D^\alpha \cdot D^\beta = D^{\alpha+\beta},\]

so, e.g., two half-derivatives give a full derivative, $\sqrt{D}\cdot \sqrt{D} = D$. We call this property multiplicativity after the identical-looking rule for indices. There are two issues with this approach. First, how do we extend the definition to general functions? And second, are the definitions for different functions in agreement? In general, the answers are very complicated, but in this post, I’ll consider the two simplest methods for defining fractional derivatives. This means we can talk about the functions they apply to, and check they agree, without a huge technical overhead.

Our first nice function is the exponential $e^{\omega x}$. Differentiating simply pulls down a factor of $\omega$ each time, so

\[D^n e^{\omega x} = \omega^n e^{\omega x}.\]

It’s very clear, then, how to define the fractional derivative acting on this:

\[D^\alpha e^{\omega x} = \omega^\alpha e^{\omega x}.\]

Great! We can easily check the multiplicative property, assuming that constants pass through the derivatives:

\[D^\alpha D^\beta e^{\omega x} = \omega^\alpha D^\beta e^{\omega x} = \omega^{\alpha + \beta} e^{\omega x} = D^{\alpha+\beta}e^{\omega x}.\]
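
If you’d like to see this index bookkeeping done symbolically, here is a toy sketch in Python using sympy. The helper `D` is my own invention, and it only knows how to act on constant multiples of $e^{\omega x}$:

```python
import sympy as sp

x = sp.symbols('x')
w = sp.symbols('omega', positive=True)
alpha, beta = sp.symbols('alpha beta', real=True)

def D(expr, a):
    # fractional derivative via the rule D^a e^{w x} = w^a e^{w x};
    # a toy helper that assumes expr is a constant multiple of exp(w*x)
    return w**a * expr

once = D(sp.exp(w * x), alpha)   # w**alpha * exp(omega*x)
twice = D(once, beta)            # w**(alpha + beta) * exp(omega*x)
print(sp.simplify(twice - w**(alpha + beta) * sp.exp(w * x)))  # 0
```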

Now, you might think this is useless because we can only take fractional derivatives of exponential functions. But at this point, we introduce another assumption, namely that the fractional derivatives are linear:

\[D^\alpha (\lambda_1 f_1 + \lambda_2 f_2) = \lambda_1 D^\alpha f_1 + \lambda_2 D^\alpha f_2,\]

where $f_1, f_2$ are functions and $\lambda_1, \lambda_2$ are constants. In particular, let’s suppose this linearity extends to an infinite collection of exponentials, weighted by a function $\lambda(\omega)$ and arranged into an integral

\[f(x) = \int_{-\infty}^\infty d\omega \, \lambda(\omega) e^{i\omega x}.\]

Then by linearity,

\[D^\alpha f(x) = \int_{-\infty}^\infty d\omega \, \lambda(\omega) D^\alpha e^{i\omega x} = \int_{-\infty}^\infty d\omega \, \lambda (\omega) (i\omega)^\alpha e^{i\omega x}. \tag{1} \label{exp}\]

Functions which can be written this way are said to have a Fourier representation, with the function $\lambda(\omega)$ called the Fourier transform. Most functions you’re likely to meet have one! Let’s do a very simple example: the sine function, bane of high school trigonometry classes everywhere. What is its half derivative? We start by writing sine in terms of exponentials as

\[\sin(x) = \frac{1}{2i}(e^{ix} - e^{-ix}).\]

We then take a half-derivative using our exponential rule and linearity:

\[\sqrt{D} \sin(x) = \frac{1}{2i}(\sqrt{D} e^{ix} - \sqrt{D} e^{-ix}) = \frac{1}{2i}\left(\sqrt{i} e^{ix} - \sqrt{-i} e^{-ix}\right).\]

There are a few things to note. First, there is some ambiguity about which roots we choose. This ambiguity is usually harmless, and we just take the principal values (with arguments between $-\pi$ and $\pi$), but it will crop up again below in a subtle way. With principal values, $\sqrt{i} = e^{i\pi/4}$ and $\sqrt{-i} = e^{-i\pi/4}$, so the expression above tidies up to $\sin(x + \pi/4)$: half a derivative shifts the phase of sine by $\pi/4$, halfway to the full derivative $\cos(x) = \sin(x + \pi/2)$. Although this example happens to be real, half derivatives of real functions need not be; for instance, $\sqrt{D}\, e^{-x} = \sqrt{-1}\, e^{-x} = i e^{-x}$. Finally, observe that we can just as easily do crazy things like take $i$ derivatives! We set $\alpha = i$, so the $i$th derivative of sine is

\[D^i \sin(x) = \frac{1}{2i}\left(i^i e^{ix} - (-i)^i e^{-ix}\right) = \frac{1}{2i}(e^{-\pi/2 + ix} - e^{+\pi/2 - ix}),\]

since the principal values are

\[i^i = e^{i (i \pi/2)} = e^{-\pi/2}, \quad (-i)^i = e^{i (-i \pi/2)} = e^{\pi/2}.\]
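
As a quick sanity check, Python’s complex power takes exactly these principal values:

```python
import math

print(1j ** 1j)                # (0.2078795763...+0j)
print(math.exp(-math.pi / 2))  # 0.2078795763...
print((-1j) ** 1j)             # (4.8104773809...+0j)
print(math.exp(math.pi / 2))   # 4.8104773809...
```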

I’m not sure if this has any applications, but it’s cute. I invite the interested reader to take $\pi$ derivatives of sine. What better way to celebrate $\pi$ day!
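
Before moving on, it’s fun to watch rule (\ref{exp}) work numerically. Below is a minimal sketch in Python, where the discrete Fourier transform stands in for the integral; the routine name is mine, and treating the function as periodic on a finite grid is an assumption of this discretization, not of the rule itself.

```python
import numpy as np

def frac_deriv_fourier(f_vals, alpha, dx):
    # rule (1): multiply each Fourier mode by (i*omega)**alpha
    # (principal values), then transform back
    n = len(f_vals)
    omega = 2 * np.pi * np.fft.fftfreq(n, d=dx)  # angular frequencies
    return np.fft.ifft((1j * omega) ** alpha * np.fft.fft(f_vals))

x = np.linspace(0, 2 * np.pi, 256, endpoint=False)
half = frac_deriv_fourier(np.sin(x), 0.5, x[1] - x[0])

# one half derivative gives the phase-shifted sine found above...
print(np.max(np.abs(half - np.sin(x + np.pi / 4))))  # tiny (round-off)
# ...and a second half derivative lands on the full derivative, cos(x)
full = frac_deriv_fourier(half, 0.5, x[1] - x[0])
print(np.max(np.abs(full - np.cos(x))))              # tiny (round-off)
```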

Fractorials

Exponentials aren’t the only nice functions we can use to define fractional derivatives. In fact, a more common approach is to use powers. The first function we encounter in high school is usually the identity function, $f(x) = x$. From there, we build up to polynomials $x^m$, and then arbitrary powers $x^s$. The derivative of a power has a very simple form:

\[D x^s = s x^{s-1}.\]

If we differentiate again, we bring down a factor of $s - 1$ and reduce the index again. And so on and so forth. This leads to the expression for $n$ derivatives:

\[D^n x^s = s(s- 1) \cdots (s - n + 1) x^{s-n}.\]

So far, this doesn’t look like something we can easily continue to non-integer values of $n$. But let’s assume for a moment that $s$ is a positive integer, with $s \geq n$. Then we can write

\[s(s- 1) \cdots (s - n + 1) = \frac{s(s - 1) (s-2) \cdots 1}{(s - n)(s-n - 1) \cdots 1} = \frac{s!}{(s -n)!},\]

where we have used the good old factorial function $s!$. Thus, we can write

\[D^n x^s = \frac{s!}{(s -n)!} x^{s-n}.\]

To analytically continue this, we need a beautiful object called the Gamma function $\Gamma$. We’ll define it properly below, but for the moment, the only properties we need are that (a) it agrees with the factorial function at (shifted) integer values,

\[\Gamma(k + 1) = k!;\]

and (b) is defined for non-integer values as well. I like to think of it as the “fractorial” because it makes sense for fractional arguments! In addition to delightfully bad puns, the Gamma function lets us write

\[D^n x^s = \frac{\Gamma(s + 1)}{\Gamma(s -n + 1)} x^{s-n},\]

and immediately continue to the fractional derivative:

\[D^\alpha x^s = \frac{\Gamma(s + 1)}{\Gamma(s -\alpha + 1)} x^{s-\alpha}. \tag{2} \label{power}\]

Too easy! Once again, we can check the multiplicative property:

\[\begin{align*} D^\alpha D^\beta x^s & = \frac{\Gamma(s + 1)}{\Gamma(s -\beta + 1)} D^\alpha x^{s-\beta} \\ & = \frac{\Gamma(s + 1)}{\Gamma(s -\beta + 1)} \cdot \frac{\Gamma(s - \beta + 1)}{\Gamma(s -\alpha - \beta + 1)} x^{s-\beta - \alpha} \\ & = \frac{\Gamma(s + 1)}{\Gamma(s -\alpha -\beta + 1)}x^{s-\beta - \alpha} = D^{\alpha+\beta} x^s. \end{align*}\]
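
In code, rule (\ref{power}) is a one-liner. Here’s a small sketch using scipy’s Gamma function; the helper name and sample values are mine:

```python
from scipy.special import gamma

def frac_power_coeff(s, alpha):
    # coefficient of x**(s - alpha) in D^alpha x**s, from rule (2)
    return gamma(s + 1) / gamma(s - alpha + 1)

# half a derivative of x: the classic result D^(1/2) x = 2*sqrt(x/pi)
print(frac_power_coeff(1, 0.5))  # 1.1283791... = 2/sqrt(pi)

# two half derivatives recover D x = 1, checking multiplicativity
print(frac_power_coeff(1, 0.5) * frac_power_coeff(0.5, 0.5))  # 1.0
```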

So this gives us another, evidently different way to define fractional derivatives. It will apply to any sum or integral of powers of $x$, for instance the infinite polynomials called power series, and their close cousins, Laurent series, which include reciprocal powers:

\[\sum_{k = 0}^\infty a_k x^k, \quad \sum_{k = -\infty}^\infty b_k x^k.\]

These cover a lot of ground, and there is an even more general object called the Mellin transform, analogous to the Fourier transform. But we won’t go there. Instead, let’s do another simple example. One of the interesting properties of the Gamma function is that it blows up at the nonpositive integers, where it has poles:

\[|\Gamma(-n)| = \infty, \quad n = 0, 1, 2, \ldots.\]

This is actually essential to get sensible answers! For instance, let’s take the derivative of a constant, $1 = x^0$. Then according to our definition,

\[D x^0 = \frac{\Gamma(0 + 1)}{\Gamma(0 -1 + 1)} x^{0 - 1} = \frac{\Gamma(1)}{\Gamma(0)} x^{- 1} = 0,\]

since the $\Gamma(0)$ in the denominator makes the whole thing vanish. More intriguingly, these infinities sometimes cancel in sensible ways. For instance, if we take a derivative of $1/x$, we should get $-1/x^2$. If we plug $x^{-1}$ into our formula, it gives

\[D x^{-1} = \frac{\Gamma(-1 + 1)}{\Gamma(-1 -1 + 1)} x^{-1 - 1} = \frac{\Gamma(0)}{\Gamma(-1)} x^{-2}.\]

Both the numerator and the denominator blow up, which should make us queasy. But there is a trick here. It turns out that for any $z$, the Gamma function obeys the functional equation

\[\Gamma(1 + z) = z\Gamma(z).\]

Since $\Gamma(k + 1) = k!$, this gives the usual relation for factorials,

\[k! = \Gamma(k + 1) = k\Gamma(k) = k \cdot (k - 1)!.\]

It also gives the sneaky result $\Gamma(0) = (-1)\Gamma(-1)$. Both $\Gamma(0)$ and $\Gamma(-1)$ blow up of course, but in the derivative of $1/x$, the $\Gamma(-1)$ factors cancel, leaving $(-1)x^{-2} = -1/x^2$ as required.
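
One way to make this cancellation honest is to regularize: nudge the exponent away from the pole by a small $\epsilon$ and take $\epsilon \to 0$ at the end. A quick symbolic sketch in Python with sympy (the regularization is my own bookkeeping, not the only way to do it):

```python
import sympy as sp

eps = sp.symbols('epsilon')

# D x^(-1), with the exponent nudged from -1 to -1 + eps: the
# coefficient Gamma(eps) / Gamma(eps - 1) is a ratio of two poles
coeff = sp.gamma(eps) / sp.gamma(eps - 1)
print(sp.gammasimp(coeff))      # epsilon - 1
print(sp.limit(coeff, eps, 0))  # -1, so D x^(-1) = -x^(-2)
```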

Gamma and tongs

This all sounds great, but you might be wondering why the Gamma function is the right way to extend the factorial function away from whole numbers. In fact, any old function that interpolates between the factorials would also work and satisfy the multiplicative property. What we’re going to do in this last section is use the fractional derivatives, defined using exponentials, to derive the Gamma function continuation. And in order to do this, we have to grit our teeth and define the Gamma function in all its glory:

\[\Gamma(s) = \int_{0}^\infty dt\, t^{s-1} e^{-t}.\]
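
It’s reassuring to check numerically that this integral really does interpolate the factorials. A quick sketch, with arbitrary sample points:

```python
import math
from scipy.integrate import quad

for s in [0.5, 1.0, 4.0]:
    val, _ = quad(lambda t: t**(s - 1) * math.exp(-t), 0, math.inf)
    print(s, val, math.gamma(s))  # e.g. Gamma(4) = 3! = 6
```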

If you’re interested, you can find proofs of the functional equation and so on elsewhere. Instead, we’re going to make the sneaky change of variables $t = \omega x$ (for $x > 0$), yielding

\[\Gamma(s) = x^{s} \int_{0}^\infty d\omega\, \omega^{s-1} e^{-\omega x}.\]

If we change $s \to -s$ and rearrange, we get a formula for $x^s$ in terms of exponentials, convergent for $s < 0$:

\[x^{s} = \frac{1}{\Gamma(-s)}\int_{0}^\infty d\omega\, \omega^{-(1+ s)} e^{-\omega x}. \tag{3} \label{gamma}\]
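
Here’s a quick numerical check of this representation; the values of $s$ and $x$ are arbitrary, subject to $s < 0$ and $x > 0$:

```python
import math
from scipy.integrate import quad

s, x = -0.5, 2.0  # need s < 0 for convergence at omega = 0
integral, _ = quad(lambda w: w**(-(1 + s)) * math.exp(-w * x), 0, math.inf)
print(integral / math.gamma(-s))  # 0.7071067... = 2**(-1/2)
print(x**s)                       # 0.7071067...
```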

Great! Now we just go ahead and use rule (\ref{exp}), with the hope we will get rule (\ref{power}). As usual, we proceed using linearity:

\[\begin{align*} D^\alpha x^{s} & = \frac{1}{\Gamma(-s)}\int_{0}^\infty d\omega\, \omega^{-(1+ s)} D^\alpha e^{-\omega x} \\ & = \frac{1}{\Gamma(-s)}\int_{0}^\infty d\omega\, \omega^{-(1+ s)} (-\omega)^\alpha e^{-\omega x} \\ & = \frac{(-1)^\alpha}{\Gamma(-s)}\int_{0}^\infty d\omega\, \omega^{-(1+ s - \alpha)} e^{-\omega x} \\ & = \frac{(-1)^\alpha}{\Gamma(-s)} \cdot \Gamma[-(s-\alpha)]x^{s-\alpha}, \end{align*}\]

where on the last line we used (\ref{gamma}), but with $s -\alpha$ instead of $s$. This isn’t quite what we want. To make progress, we’ll take advantage of the reflection formula for the Gamma function (derived here for instance):

\[\Gamma(z) \Gamma(1 - z) = \frac{\pi}{\sin(\pi z)}.\]
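
This one is also easy to spot-check numerically:

```python
import math

z = 0.3  # any non-integer will do
print(math.gamma(z) * math.gamma(1 - z))  # 3.8832...
print(math.pi / math.sin(math.pi * z))    # 3.8832...
```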

We can apply this to both $\Gamma(-s)$ and $\Gamma[-(s-\alpha)]$ to get

\[\begin{align*} D^\alpha x^{s} & = (-1)^\alpha \frac{\sin(\pi s)}{\sin[\pi(s-\alpha)]}\cdot \frac{\Gamma(s+1)}{\Gamma(s-\alpha + 1)} x^{s-\alpha}. \end{align*}\]

This is almost (\ref{power}), the thing we were after! But there is this strange factor with sines out the front. Recall the definition of sine in terms of complex exponentials. This lets us write the funny factor as

\[(-1)^\alpha \frac{\sin(\pi s)}{\sin[\pi(s-\alpha)]} = \frac{e^{\pi i s} - e^{-\pi i s}}{(-1)^{-\alpha} e^{\pi i (s-\alpha)} - (-1)^{-\alpha} e^{-\pi i (s-\alpha)}},\]

where the $(-1)^\alpha$ upstairs turns into $(-1)^{-\alpha}$ on its way downstairs. It would be magical if that factor could somehow behave differently in the two terms and cancel the $\alpha$s floating around, right? Well, turns out it does! We can write $-1 = e^{\pm \pi i}$, and hence

\[(-1)^{-\alpha} = e^{\mp \pi i \alpha}.\]

I won’t spell out the details, but if you look at this proof of the reflection formula, the two exponential terms in the sine arise from parts of an integration contour which lie in almost the same place, but where we take roots in different ways. In particular, evaluating $(-1)^{-\alpha}$ gives $e^{\pi i \alpha}$ and $e^{-\pi i \alpha}$ respectively, so they cancel the $\alpha$ terms after all. The upshot is that our funny factor is just unity:

\[\frac{e^{\pi i s} - e^{-\pi i s}}{(-1)^{-\alpha} e^{\pi i (s-\alpha)} - (-1)^{-\alpha} e^{-\pi i (s-\alpha)}} = \frac{e^{\pi i s} - e^{-\pi i s}}{e^{\pi i \alpha} e^{\pi i (s-\alpha)} - e^{-\pi i \alpha} e^{-\pi i (s-\alpha)}} = \frac{e^{\pi i s} - e^{-\pi i s}}{e^{\pi i s} - e^{-\pi i s}} = 1.\]

Thus, our exponential rule actually reproduces the rule for powers of $x$ involving the Gamma function! Now, to be clear, fractional derivatives are a big and mathematically heavy topic, and I’ve only skimmed the surface. But it’s neat that the two simplest approaches agree.

Acknowledgments

Thanks to J.A. for chatting about fractional derivatives, and getting me thinking about the simplest way to define them.
