## A Detailed Look at the Hyperroot Functions Using Lambert's W Function

(The presentation that follows is a very light introduction. For more rigorous results, consult Paper 6.)

Main Problem:

If F(x,n) = x^x^x^... (the exponentiation iterated n-1 times), is there a definition of a continuous function identical to F, not limited to integral values of n, and valid for larger values of x? (Greg Kavalec in sci.math)

We need some consistent notation. One such commonly used notation for iterated exponentials is the hyper4 or tetration operator described in Dave L. Renfro's Big Number article in sci.math and Robert Munafo's pages. The author will sometimes use the following alternative notation concurrently, because it renders well in HTML and because the extension that is about to be constructed follows some sort of natural symmetry with regular exponents:

<sup>n</sup>m = m^m^m^...^m (n m's).

Following the standard convention for iterated exponentials, the above is essentially equal to:

<sup>n</sup>m = m^(m^(m^(...(m^m)...))) (n m's).

Whenever a simple superscript is not enough to indicate the hyperexponent, we will use the regular notation,

m^^n = <sup>n</sup>m (n m's).

Now that hyperexponential notation has been defined as such, we can define iterated hyperexponential towers to aid us in some extended cases as follows:

m^^m^^...^^m = m^^(m^^(m^^(...(m^^m)...))).
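The right-associative convention above is easy to pin down in code. Here is a minimal sketch in Python (rather than the Maple used later in this article); `tet` is an illustrative helper name, not part of the article's code:

```python
def tet(m, n):
    # m^^n: a right-associative power tower of n m's
    return m if n == 1 else m ** tet(m, n - 1)

print(tet(2, 3))  # 2^(2^2) = 16
print(tet(2, 4))  # 2^(2^(2^2)) = 65536
```

Left-associative grouping would instead give ((2^2)^2)^2 = 256 for n = 4, which is why the convention matters.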

In Greg's notation, above, then, F(x,n) = <sup>n</sup>x (n x's) and his question really amounts to extending <sup>n</sup>x to <sup>y</sup>x, with y real or possibly complex. If we want to have any hope of extending the above notation to the reals, we should first try to define hyperexponentials with rational hyperexponents and then pass onto the reals via Cauchy sequences or density.

First then, we need to define: <sup>1/n</sup>m = m^^(1/n) (the n-th hyperroot of m).

The only natural choice here is to think in terms of a definition symmetric to that of the regular n-th root of a number. So, roughly speaking,

<sup>1/n</sup>m = (some object), iff m = <sup>n</sup>(some object).

For given c in C, c ≠ 0, then, define <sup>1/n</sup>c to be "the object" z (possibly in C and perhaps not singly defined, but we'll take care of this later) which satisfies:

c = <sup>n</sup>z. (1)

That is:

c = z^^n (n z's). (2)

For n=2, equation (1) has a nice solution, which is reminiscent of the fixed points of the iterated exponential in the first article. The solution is:

z = Log(c)/W(Log(c)). (3)

Since the Lambert W function has infinitely many branches, equation (1) really has infinitely many solutions, but for our purposes, let us stick with the principal branch of W. Let us verify that this z satisfies (2) for n=2:

Using the definition of W, z can be rewritten as z = e^W(Log(c)), so plugging this into (2) we get <sup>2</sup>z = z^z = [e^W(Log(c))]^[e^W(Log(c))] = e^(W(Log(c))*e^W(Log(c))). Note that the exponent is nothing more than Log(c), by the definition of W again, so the whole expression equals e^Log(c) = c.
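This computation is easy to check numerically. Below is a small Python sketch (not the author's Maple); `lambert_w` is a hand-rolled Newton approximation of the principal branch of W, standing in for a library routine, and `second_hyperroot` is an illustrative name:

```python
import math

def lambert_w(x, tol=1e-14):
    # Principal branch of Lambert's W (w*e^w = x) via Newton's method, x >= -1/e.
    w = 0.0 if x < math.e else math.log(x)  # rough starting guess
    for _ in range(100):
        ew = math.exp(w)
        step = (w * ew - x) / (ew * (w + 1.0))
        w -= step
        if abs(step) < tol:
            break
    return w

def second_hyperroot(c):
    # z = e^W(Log(c)) solves z^z = c (equivalently, z = Log(c)/W(Log(c)))
    return math.exp(lambert_w(math.log(c)))

z = second_hyperroot(4.0)  # z^z = 4 has the solution z = 2
print(z)
```

The two closed forms agree because W(x)*e^W(x) = x implies x/W(x) = e^W(x).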

For n>2 the situation is more complicated. In order to understand exactly what's going on, we first need to define a suitable function in the spirit of Greg's F, above, which we can fall back on when we have problems with our definitions (assume all good things: Log is the principal branch of the complex log function, use careful inductive definitions on n, etc.).

Let f then be defined as follows:

f(z,w,n) = { z^w, if n=1; z^f(z,w,n-1), if n>1 }. (4)

f is a general purpose complex exponential, in which we can control the base, the exponent and the level of hyperexponentiation. Its use will come forth later when we examine rational powers. So:

f(z,1,1) = <sup>1</sup>z = z,

f(z,1,2) = <sup>2</sup>z = z^z,

f(z,1,3) = <sup>3</sup>z = z^(z^z),

etc., while,

f(z,w,1) = z^w,

f(z,w,2) = z^(z^w),

etc.
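Definition (4) transcribes directly into Python (an illustrative sketch; the article's own Maple version, f_N, appears further below):

```python
def f(z, w, n):
    # equation (4): f(z,w,n) = z^w if n = 1, else z^f(z,w,n-1)
    return z ** w if n == 1 else z ** f(z, w, n - 1)

print(f(2, 1, 3))  # 2^(2^2) = 16, the third tetration of 2
print(f(2, 3, 2))  # 2^(2^3) = 256
```

Note that f(z, 1, n) reproduces the n-th tetration of z, which is exactly how it is used below.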

Our aim is to show:

Lemma #1:

Let c = e^(-1/e). Then, given n > 2 in N and y in R such that y ≥ f(e^W(Log(c)), c, n-2), the equation <sup>n</sup>z = y admits (at least) one solution for z.

Proof:

For n = 2, the point z = Log(y)/W(Log(y)) given by (3) satisfies the equation, so the hyperexponential equation <sup>2</sup>z = y admits (at least) one solution, and we are done.

For n>2, the situation is much more complex, but can be resolved, with some effort. Let us look at the following recurrence:

a(2) = <sup>2</sup>z = z^z, a(n) = z^a(n-1). (5)

Now write:

z = e^Log(z)

= e^W(Log(z)*e^Log(z))

= e^W(z*Log(z))

= e^W(Log(z^z))

= e^W(Log(a(2))).

Accordingly, <sup>n</sup>z = f(e^W(Log(z^z)), z^z, n-2). Calling a(2) = b, recurrence (5) gets transformed into:

a(2) = b, a(n) = (e^W(Log(b)))^a(n-1) = e^(a(n-1)*W(Log(b))). (6)

If we can determine a suitable b in recurrence (6) above, then we are essentially done, since we know that z^z = b = a(2) and then z is given by (3), z = e^W(Log(b)). Such a z will satisfy our recurrence (5).

First, we have to restrict our y suitably, so we don't have a wild goose chase on the b. Note that W(w) is real-valued for w in [-1/e, +∞). Since our final z will be given via (3), we have to have:

e^W(Log(b)) in R. For this to happen, we must force Log(b) into [-1/e, +∞), which gives us a lower bound for b: b ≥ e^(-1/e).

Having a lower bound for b, it is easy to calculate all the y's for which we hope recurrence (6) will have a solution for b.

Now we are ready for the heavy artillery. The author will quickly go over the following lemmas, which are almost trivial.

First, it is obvious that when b ranges in [e^(-1/e), +∞), then z ranges in [1/e, +∞).

Lemma #2:

For fixed n, the corresponding function <sup>n</sup>x is continuous on [1/e, +∞).

Sketch of Proof:

By induction on n. All the partial iterates <sup>1</sup>x, <sup>2</sup>x, <sup>3</sup>x, ..., <sup>n</sup>x are continuous on the indicated interval. The inductive step can be dealt with by considering the decomposition <sup>k+1</sup>x = e^(Log(x)*<sup>k</sup>x), which is itself a composition of continuous functions on the indicated interval, with the aid of the induction hypothesis.

Lemma #3

For fixed n, the corresponding function <sup>n</sup>x is strictly increasing on [1/e, +∞).

Sketch of Proof:

By induction on n. The n=2 case has a little trickery in it, in that the function <sup>2</sup>x needs to be examined closely. The minimum of the function occurs at x = 1/e, and this is precisely the point we are interested in. To the right of this point, <sup>2</sup>x is increasing. This takes care of the case n=2. For the inductive step, we have <sup>k</sup>x<sub>1</sub> < <sup>k</sup>x<sub>2</sub> by the induction hypothesis; consider again the decomposition <sup>k+1</sup>x = e^(Log(x)*<sup>k</sup>x) and note that the functions exp and Log are both increasing.

Continuing with the proof of Lemma #1:

If c = e^(-1/e), then the minimum y in recurrence (6) will be f(e^W(Log(c)), c, n-2) (which is also f(1/e, 1, n)). Now let h(x) = f(e^W(Log(x)), x, n-2) - y, as usual. Since y ≥ f(e^W(Log(c)), c, n-2) by our hypothesis, h(c) ≤ 0. If h(c) = 0, we are done and b = c is a solution. If h(c) < 0, then we consider two cases.

1. y > 1. In this case, look at h(y^y). We have h(y^y) = f(e^W(Log(y^y)), y^y, n-2) - y = <sup>n</sup>y - y > 0.
2. y < 1. Here, look at h(1). h(1) = 1 - y > 0. (For y = 1, b = z = 1 solves everything trivially.)

Now apply the IVT, and we get a b such that recurrence (6) is satisfied. Having this b at hand, equation (3) gives us the required z that satisfies our hyperexponential equation, and Lemma #1 is proved.
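The IVT argument above translates directly into a numerical procedure: bracket a root of a(n, b) = y in b, bisect (bisection is valid since, by Lemma #3, a(n, .) is increasing on the relevant interval), then recover z from (3). A Python sketch, not the author's Maple hr_N (which uses fsolve instead); `lambert_w` is again a hand-rolled Newton approximation of the principal branch of W, and `hyperroot` is an illustrative name:

```python
import math

def lambert_w(x, tol=1e-14):
    # Principal branch of Lambert's W (w*e^w = x) via Newton's method, x >= -1/e.
    w = 0.0 if x < math.e else math.log(x)
    for _ in range(100):
        ew = math.exp(w)
        step = (w * ew - x) / (ew * (w + 1.0))
        w -= step
        if abs(step) < tol:
            break
    return w

def a(n, b):
    # recurrence (6): a(2) = b, a(k) = e^(a(k-1)*W(log b))
    v, wl = b, lambert_w(math.log(b))
    for _ in range(n - 2):
        v = math.exp(v * wl)
    return v

def hyperroot(y, n):
    # bisect on b in [e^(-1/e), hi] for a(n, b) = y, then z = e^W(log b)
    lo, hi = math.exp(-math.exp(-1)), max(2.0, y)
    while a(n, hi) < y:          # expand until the root is bracketed
        hi *= 2.0
    for _ in range(200):
        mid = (lo + hi) / 2.0
        if a(n, mid) < y:
            lo = mid
        else:
            hi = mid
    return math.exp(lambert_w(math.log((lo + hi) / 2.0)))

z = hyperroot(3.0, 4)            # order-4 hyperroot of 3, about 1.5636
print(z)
```

Verifying: a power tower of four copies of the returned z comes back to 3, matching the Maple session below.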

Corollary:

For fixed n, the hyperroot function is continuous on [1/e,+∞).

Sketch of Proof:

By Lemma #2, for fixed n the corresponding function <sup>n</sup>x is continuous on [1/e, +∞), and the hyperroot function <sup>1/n</sup>x is an inverse of <sup>n</sup>x on [1/e, +∞), since f(<sup>1/n</sup>x, 1, n) = x. The result follows by Theorem 3 on Spivak's page 220.

Let's now see what Maple can do with our results.

> restart;
> W:=LambertW;

> a:=proc(n)
> option remember;
> global b;
> b:='b';
> if n=2 then b
> else exp(a(n-1)*W(log(b)));
> fi;
> end:

> f_N:=proc(z,w,n)
> option remember;
> if n=1 then z^w;
> else z^f_N(z,w,n-1);
> fi;
> end:

so,

> b:='b';  # clear previous b
> b:=fsolve(a(4)=3,b,1..infinity);  # get b
b := 2.011649992

> z:=exp(W(Log(b)));  #get z, solution to hyperexponential equation
z := 1.563627903

> evalf(f_N(z,1,4));
2.999999996

Cool. So 1.563627903 is the hyperroot of order 4 of the number 3. Let us now construct the full hyperroot function:

> hr_N:=proc(y,n)
> global b;
> local c, minb, miny;
> b:='b';
> minb:=evalf(f_N(exp(1),-exp(-1),1));
> miny:=evalf(f_N(exp(W(log(minb))),minb,n-2));
> if evalf(y)>=miny then
>  if n<>0 then
>   b:=fsolve(a(n)=y,b);    # but see Note 2, below
>   evalf(exp(W(log(b))));
>  else
>   ERROR(`division by zero.`,n);
>  fi;
> else
>  b:=fsolve(a(n)=y,b,complex);
>  evalf(exp(W(log(b))));
> fi;
> end:

Let's see some examples:

> evalf(hr_N(134,3));
2.238885474

And the verification:

> evalf(f_N(",1,3));
134.0000001

> evalf(hr_N(2,4));
1.446601432

> evalf(f_N(",1,4));
1.999999997

> evalf(hr_N(8170554,3));
2.738985962

> evalf(f_N(",1,3));
.8170554501*10^7

> evalf(hr_N(89,4));
1.861796185

> evalf(f_N(",1,4));
89.00000035

> evalf(hr_N(90,5));
1.716715302

> evalf(f_N(",1,5));
89.99999930

> evalf(hr_N(0.124,5));
1.839576693 - .2214100457*I

> evalf(f_N(",1,5));
.1239999998 + .3218906008*10^(-8)*I

Let's now see the graph of the hr function:

> with (plots):
> plot({'hr_N(x,3)','hr_N(x,4)','hr_N(x,5)'},x=1..10);

Here is another little interesting fact. Could we possibly make sense of lim(n->+∞) <sup>1/n</sup>x for fixed x? Well, if we proceed through the definition <sup>1/n</sup>x = y <=> x = <sup>n</sup>y, and extend it to include the limiting case, that is, lim(n->+∞) <sup>1/n</sup>x = y <=> x = lim(n->+∞) <sup>n</sup>y, then since we know that lim(n->+∞) <sup>n</sup>y exists for y in [(1/e)^e, e^(1/e)] and is equal to e^(-W(-log(y))), let us solve for y the following:

> solve(x=exp(-W(-log(y))),y);

Surprise!

y = x^(1/x), which is a partial inverse of the infinite hyperpower function F(x) = <sup>∞</sup>x = x^x^x^.... Therefore, if x is in [1/e, e], we can define as well:

lim(n->+∞) <sup>1/n</sup>x = x^(1/x).

This is another way of saying that, on the indicated interval, the hyperroot functions converge pointwise to the function x^(1/x).
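This pointwise limit is easy to probe numerically: take y = x^(1/x) and watch the tower <sup>n</sup>y crawl back to x as n grows. A small self-contained Python check (illustrative, for x = 2; `tower` is a hypothetical helper name):

```python
def tower(y, n):
    # the n-story power tower y^y^...^y (n y's)
    t = y
    for _ in range(n - 1):
        t = y ** t
    return t

x = 2.0
y = x ** (1.0 / x)      # = sqrt(2), the claimed limit of the hyperroots of 2
print(tower(y, 200))    # converges back toward x = 2
```

The same check works anywhere on the interval, e.g. y = 0.5^(1/0.5) = 0.25 gives a tower converging (with oscillation) to 0.5.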

Here, then, are the graphs of the first 3 hyperroot functions (order 2 (red), order 3 (green), order 4 (blue)), along with their limit (yellow).

> plot({exp(W(log(x))),'hr_N(x,3)','hr_N(x,4)',x^(1/x)},x=evalf((exp(-1)))..evalf(exp(1)));

Notes

1. Alternatively, z = e^W(Log(c)), by the definition of the Lambert W function.
2. The code for hr_N above may go as high as n=6 under certain circumstances. If it fails for large values of y, replace the line b:=fsolve(a(n)=y,b); with b:=fsolve(a(n)=y,b,minb..infinity);. Sometimes Maple seems to need a default interval, although in some cases it doesn't help at all. The author has no idea why Maple misbehaves on this; perhaps it internally has a more accurate value-picking mechanism for the root-finding algorithm.