(The presentation that follows is a light introduction. For more rigorous results, consult Paper 1.)

While looking for an analytic continuation of the hyper4 operator, one is tempted to
search for series expansions for ^{n}x. Having such an expansion at hand may
offer additional insight into what an analytic ^{y}x can look like, for y in
R.

Instead of searching for an expansion of ^{n}x, it is often convenient to
look for an expansion for ^{n}(e^{x}) first. We are then faced with the
following problem:

Let a series s with coefficients a_{n} be given:

s = Σ_{n=0}^{∞}a_{n}*x^{n}

Determine the coefficients b_{n} of the series expansion (initially
disregarding issues of convergence) of s1 = e^{s*x}:

s1 = e^{s*x} = Σ_{n=0}^{∞}b_{n}*x^{n}

The author posted the above problem to sci.math a little while ago, and here are two answers:

The first one, by Robert Israel:

b_{n} = Σ Π_{k}a_{k-1}^{s_{k}}/s_{k}!

where the sum is over all sequences s = [s_{1}, s_{2},
s_{3}, ...] of nonnegative integers with:

Σ_{k≥1}k*s_{k} = n

and the product is over those k for which s_{k} > 0 (a finite set).

For example, the partitions of 4 correspond to s = [4,0...], [2,1,0...], [0,2,0...], [1,0,1,0...] and [0,0,0,1,0...], so:

b_{4} = a_{0}^{4}/4! +
(a_{0}^{2}*a_{1})/(2!*1!) + a_{1}^{2}/2! +
a_{0}*a_{2} + a_{3}.
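To make the formula concrete, here is a small sketch in Python (rather than Maple) that computes b_{n} by summing over partitions with exact rational arithmetic; the function names are for illustration only. Taking a_{n} = 1/n! (i.e. s = e^{x}, so s1 = e^{x*e^{x}} = ^{2}(e^{x})), the worked example above evaluates to b_{4} = 41/24.

```python
from fractions import Fraction
from math import factorial

def partitions(n, max_part=None):
    """Yield all partitions of n as lists of parts (largest part first)."""
    if max_part is None:
        max_part = n
    if n == 0:
        yield []
        return
    for k in range(min(n, max_part), 0, -1):
        for rest in partitions(n - k, k):
            yield [k] + rest

def b(n, a):
    """Robert Israel's formula: sum over partitions of n of
    prod a[k-1]^s_k / s_k!, where s_k is the multiplicity of part k."""
    total = Fraction(0)
    for p in partitions(n):
        term = Fraction(1)
        for k in set(p):
            s_k = p.count(k)
            term *= Fraction(a[k - 1]) ** s_k / factorial(s_k)
        total += term
    return total

# a_n = 1/n!  (s = e^x), so s1 = e^(x*e^x) = ^2(e^x)
a = [Fraction(1, factorial(n)) for n in range(10)]
print(b(4, a))   # 41/24, matching the worked example above
```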

Robert has also provided Maple code to explicitly calculate the b_{n}.

> b:=(n::nonnegint)->add( mul(a[q[1]-1]^q[2]/q[2]!, q = map(j -> [j, numboccur(j,P)], convert(P,set))), P = combinat[partition](n));

The second one, by Leroy Quet, from a simple differentiation property of exp(z):

b_{0} = 1, and for n ≥ 1:

b_{n} = Σ_{j=1}^{n}j*a_{j-1}*b_{n-j}/n

Leroy derived the above recursive expression by noticing that if B(x) = exp(C(x)), with C(x) = x*s = Σ_{j=1}^{∞}a_{j-1}*x^{j}, then:

B'(x) = C'(x)*B(x). Ignoring temporarily issues of convergence inside a specified range, the expression follows by equating coefficients and changing indices.
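The recursion is equally easy to code; the Python sketch below (names hypothetical) uses exact rationals and, with a_{n} = 1/n!, agrees with the partition formula on b_{4} = 41/24.

```python
from fractions import Fraction
from math import factorial

def b_rec(n, a):
    """Leroy Quet's recursion: b_0 = 1 and
    b_n = (1/n) * sum_{j=1..n} j * a_{j-1} * b_{n-j}."""
    bs = [Fraction(1)]
    for m in range(1, n + 1):
        s = sum(j * Fraction(a[j - 1]) * bs[m - j] for j in range(1, m + 1))
        bs.append(s / m)
    return bs[n]

# Same input as before: a_n = 1/n!, i.e. s = e^x
a = [Fraction(1, factorial(k)) for k in range(10)]
print(b_rec(4, a))   # 41/24, agreeing with the partition formula
```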

Both the above solutions are of course correct. Can we perhaps use either solution
to calculate the series expansion of ^{n}(e^{x})? Yes we can. The
author will use the second solution because it is easier to handle when using induction
later on.

Before we proceed to calculate the coefficients, note that a simple inductive
argument shows that all the iterates ^{m}(e^{x}), m in N, are entire
functions, so for any m in N, the corresponding series expansion converges for all x in
R.

Now we need some consistent notation: The author will use a_{m,n} for the
coefficients of the expansion of ^{m}(e^{x}). In other words, m ranges
in N = {1, 2, 3, ...} and n ranges in {0} U N = {0, 1, 2, 3, ...}. The coefficients of
e^{x} then will be: {a_{1,0}, a_{1,1}, a_{1,2}, ...}. The coefficients of
^{2}(e^{x}) will be: {a_{2,0}, a_{2,1}, a_{2,2}, ...}, etc.

The recursion then becomes:

a_{1,n} = 1/n! for all n, and for m > 1:

a_{m,n} = Σ_{j=1}^{n}j*a_{m-1,j-1}*a_{m,n-j}/n

It is immediately clear that a_{m,0} = 1, for all m in N.

Since we are also in possession of the coefficients of e^{x}, the recursion
is complete, and we can immediately write Maple code to generate any
a_{m,n}.

> a:=proc(m,n)
>   option remember;
>   local j;
>   if n=0 then 1
>   elif m=1 then 1/n!
>   else add(j*a(m-1,j-1)*a(m,n-j),j=1..n)/n;
>   fi;
> end:
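For readers without Maple, the same memoized recursion can be sketched in Python with exact rationals (names hypothetical); it reproduces, for instance, a_{2,4} = 41/24 and a_{6,6} = 16807/720.

```python
from fractions import Fraction
from functools import lru_cache
from math import factorial

@lru_cache(maxsize=None)
def a(m, n):
    """Coefficient of x^n in the series expansion of ^m(e^x)."""
    if n == 0:
        return Fraction(1)
    if m == 1:
        return Fraction(1, factorial(n))
    # a_{m,n} = (1/n) * sum_{j=1..n} j * a_{m-1,j-1} * a_{m,n-j}
    return sum(j * a(m - 1, j - 1) * a(m, n - j)
               for j in range(1, n + 1)) / n

for m in range(1, 7):
    print([a(m, n) for n in range(7)])
```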

Here's a table with the coefficients of x^{n} in the series expansions for
the first few ^{m}(e^{x}):

a_{m,n} | n=0 | n=1 | n=2 | n=3 | n=4 | n=5 | n=6
m=1 | 1 | 1 | 1/2 | 1/6 | 1/24 | 1/120 | 1/720
m=2 | 1 | 1 | 3/2 | 5/3 | 41/24 | 49/30 | 1057/720
m=3 | 1 | 1 | 3/2 | 8/3 | 101/24 | 63/10 | 6607/720
m=4 | 1 | 1 | 3/2 | 8/3 | 125/24 | 49/5 | 12847/720
m=5 | 1 | 1 | 3/2 | 8/3 | 125/24 | 54/5 | 16087/720
m=6 | 1 | 1 | 3/2 | 8/3 | 125/24 | 54/5 | 16807/720

The keen eye will notice the pattern that emerges. But before we actually investigate it, let us prove that there is a pattern, whatever it is:

Lemma:

a_{m,n} = a_{n,n}, for all m ≥ n.

Proof:

By induction on n. It is clear that a_{1,1} = 1 and
a_{m,0} = 1 for all m in N. If a_{k,1} = 1 then
a_{k+1,1} = Σ_{j=1}^{1}j*a_{k+1,1-j}*a_{k,j-1}/1
= a_{k+1,0}*a_{k,0} = 1 = a_{1,1}.

Therefore, a_{m,1} = a_{1,1} for all m in N.

Now assume, as the (strong) induction hypothesis, that a_{m,i} = a_{i,i}
holds for every i ≤ k and all m ≥ i.

We have to show that a_{m,k+1} = a_{k+1,k+1}
for all m ≥ k+1. However:

a_{m,k+1} = Σ_{j=1}^{k+1}j*a_{m,k+1-j}*a_{m-1,j-1}/(k+1)

a_{k+1,k+1} = Σ_{j=1}^{k+1}j*a_{k+1,k+1-j}*a_{k,j-1}/(k+1)

m ≥ k+1 => m ≥ k+1-j, for all j in {1..k+1}, so
a_{m,k+1-j} = a_{k+1-j,k+1-j}, by the induction
hypothesis. (1)

k+1 ≥ k+1-j, for all j in {1..k+1}, so a_{k+1,k+1-j} =
a_{k+1-j,k+1-j}, again by the induction hypothesis. (2)

(1) and (2) establish: a_{m,k+1-j} = a_{k+1,k+1-j}.

j ≤ k+1 => j-1 ≤ k and m ≥ k+1 => m-1 ≥ k, so j-1 ≤ m-1, so
a_{m-1,j-1} = a_{j-1,j-1}, by the induction
hypothesis as well. (3)

And finally: j ≤ k+1 => j-1 ≤ k, so a_{k,j-1} =
a_{j-1,j-1}, by the induction hypothesis. (4)

(3) and (4) establish: a_{m-1,j-1} = a_{k,j-1}. The two sums above
therefore agree term by term, so a_{m,k+1} = a_{k+1,k+1} and we are done.
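As a numerical sanity check on the lemma, a short Python sketch (the same recursion as before, names hypothetical) verifies a_{m,n} = a_{n,n} for all m ≥ n up to m, n = 12.

```python
from fractions import Fraction
from functools import lru_cache
from math import factorial

@lru_cache(maxsize=None)
def a(m, n):
    """Coefficient of x^n in the series expansion of ^m(e^x)."""
    if n == 0:
        return Fraction(1)
    if m == 1:
        return Fraction(1, factorial(n))
    return sum(j * a(m - 1, j - 1) * a(m, n - j)
               for j in range(1, n + 1)) / n

# Lemma: a(m, n) == a(n, n) whenever m >= n (checked here for m, n <= 12)
ok = all(a(m, n) == a(n, n)
         for n in range(1, 13) for m in range(n, 13))
print(ok)   # True
```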

So, the coefficients eventually "stabilize". To what though? Well, here comes the
surprise. Assuming that lim_{n->+∞}^{n}x converges, it
converges to e^{-W(-log(x))}, as established in
article 1. Therefore, assuming that
lim_{n->+∞}^{n}(e^{x}) converges, it converges to
e^{-W(-log(exp(x)))} = e^{-W(-x)}. And by the defining identity of Lambert's W
function, W(z)*e^{W(z)} = z, this is W(-x)/(-x).
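These identities are easy to check numerically. The Python sketch below is an illustrative stand-in, not a production W implementation: it solves w*e^{w} = -x for w = W(-x) by Newton's method and compares e^{-W(-x)}, W(-x)/(-x), and the limit of the iteration t -> e^{x*t} (which is ^{n}(e^{x}) when started at t = 1).

```python
from math import exp

def lambert_w(z, tol=1e-15):
    """Principal branch of W via Newton iteration on w*e^w = z
    (adequate for small |z|; illustrative only)."""
    w = z if abs(z) < 0.5 else 1.0
    for _ in range(100):
        e = exp(w)
        step = (w * e - z) / (e * (w + 1))
        w -= step
        if abs(step) < tol:
            break
    return w

x = 0.2                  # well inside the radius of convergence 1/e
w = lambert_w(-x)
lhs = exp(-w)            # e^{-W(-x)}
rhs = w / (-x)           # W(-x)/(-x)

t = 1.0                  # iterate t -> e^{x t}, i.e. ^n(e^x)
for _ in range(200):
    t = exp(x * t)

print(lhs, rhs, t)       # all three agree
```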

Now, knowing that the series expansion for Lambert's W function is:

W(x) = Σ_{n=1}^{∞}(-n)^{n-1}/n!*x^{n}

it follows immediately that whenever we have convergence, W(-x)/(-x) is:

W(-x)/(-x) = Σ_{n=0}^{∞}(n+1)^{n}/(n+1)!*x^{n}

Now we know exactly what those coefficients a_{m,n} in the
expansion of ^{m}(e^{x}) will be. To summarize:

For m = 1:

^{m}(e^{x}) = e^{x}, of course.

For m > 1:

^{m}(e^{x}) = Σ_{n=0}^{m}(n+1)^{n}/(n+1)!*x^{n} + Σ_{n=m+1}^{∞}a_{m,n}*x^{n}

with:

a_{m,n} = Σ_{j=1}^{n}j*a_{m-1,j-1}*a_{m,n-j}/n.

And here is the corresponding Maple code to calculate the series expansions of the
iterates ^{m}(e^{x}), given m and x:

> a:=proc(m,n) #coefficients calculation
>   option remember;
>   local j;
>   if n=0 then 1
>   elif m=1 then 1/n!
>   else add(j*a(m-1,j-1)*a(m,n-j),j=1..n)/n; #recursion
>   fi;
> end:

> f:=proc(m,x) #expansion as per above
>   local s1,s2;
>   s1:=1+sum('(n+1)^n/(n+1)!*x^n','n'=1..m);
>   s2:=sum('a(m,n)*x^n','n'=m+1..100);
>   s1+s2;
> end:

> g:=proc(m,x) #for comparison
>   if m=1 then exp(x)
>   else exp(x*g(m-1,x));
>   fi;
> end:

Let's do some comparisons:

> evalf(g(2,0.12345));

1.149894868

> evalf(f(2,0.12345));

1.149894869

> evalf(g(10,0.22));

1.344055119

> evalf(f(10,0.22));

1.344055119

> evalf(g(10,exp(-1)));

2.320458538

> evalf(f(10,exp(-1)));

2.320454949

> evalf(g(10,0.33*I));

.8788015388 + .2622477964 I

> evalf(f(10,0.33*I));

.8788015388 + .2622477963 I

> evalf(g(20,exp(-1)*I));

.8576167143 + .2799279251 I

> evalf(f(20,exp(-1)*I));

.8578364601 + .2796999407 I

So we have a sequence of entire functions f_{m}(x) =
^{m}(e^{x}), which converge to the limit function:
f_{∞}(x) = e^{-W(-x)}.

It is easy to see via the Ratio Test that the radius of convergence of the limit
function f_{∞}(x) is R = 1/e.[1]
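For the record, the Ratio Test computation: with c_{n} = (n+1)^{n}/(n+1)!, the ratio c_{n+1}/c_{n} simplifies to ((n+2)/(n+1))^{n}, which tends to e as n -> +∞, giving R = 1/e. A quick Python check (illustrative only):

```python
from math import e, factorial
from fractions import Fraction

def c(n):
    """Coefficient of x^n in the limit series e^{-W(-x)} (exact)."""
    return Fraction((n + 1) ** n, factorial(n + 1))

# Ratio Test: c_{n+1}/c_n simplifies to ((n+2)/(n+1))^n -> e,
# so the radius of convergence is R = 1/e.
for n in (5, 50, 500):
    ratio = c(n + 1) / c(n)                       # exact Fraction
    assert ratio == Fraction(n + 2, n + 1) ** n   # algebraic simplification
    print(n, float(ratio))                        # approaches e = 2.718...
```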

Because of the above fact, it is natural to expect "bad" convergence behavior around
1/e in the corresponding series expansions of the functions f_{m}(x). What this
means is that while the Maple code above works perfectly for |x| ≤ 1/e, all the
corresponding series expansions will most likely misbehave for values of x of the form
x = 1/e + dx(m), where dx(m) > 0 and dx(m) is a function of m. If we were to
describe the situation in very rough terms, we could say that the series expansion for
f_{m}(x) will "misbehave" around 1/e + dx(m), with dx(m) -> 0 as m ->
+∞. And this is to be expected: if convergence of the series expansions of the
iterates ^{m}(e^{x}) were everywhere nice, the limit function would be
entire. But it's not.
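This misbehavior is visible directly in the terms of the limit series: for |x| < 1/e the terms c_{n}*x^{n} shrink to 0, while just past 1/e they eventually grow without bound, so the partial sums cannot converge. A rough Python illustration, working with logarithms of the terms to avoid overflow (sample points chosen arbitrarily):

```python
from math import lgamma, log

def log_term(n, x):
    """log of the n-th term (n+1)^n/(n+1)! * x^n of the limit series."""
    return n * log(n + 1) - lgamma(n + 2) + n * log(x)

inside, outside = 0.36, 0.40   # 1/e ~ 0.3679 sits between these
print(log_term(50, inside), log_term(400, inside))    # decreasing: terms -> 0
print(log_term(50, outside), log_term(400, outside))  # eventually increasing
```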

[1] Compare with the radius of convergence of W.