Talk:Half-exponential function


Proved only for "positive" combinations?

"Not expressible in terms of elementary functions", really? Following the link (to Scott) I see a much weaker statement, and a question: "And of course, how do we handle subtraction and division?" Or is this question answered already? Boris Tsirelson (talk) 19:58, 22 May 2014 (UTC)

I agree with you - Scott's proof definitely can't handle subtraction and division - so I've changed the wording of the sentence and removed the "factual inaccuracy" template. David9550 (talk) 00:17, 11 April 2016 (UTC)
After a bit more searching, I found a MathOverflow post that points to some references that claim to do the full case. So I've added that in. David9550 (talk) 00:29, 11 April 2016 (UTC)

The article says...

It has been proven that for every function ƒ composed of basic arithmetic operations, exponentials, and logarithms, ƒ(ƒ(x)) is either subexponential or superexponential:[3] half-exponential functions are not expressible in terms of elementary functions.
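As an illustration of the dichotomy being quoted (standard examples, not taken from the cited source): composing an elementary function with itself typically undershoots or overshoots exponential growth. For instance, with k > 1 a constant,
\[
f(x) = x^{k} \;\Rightarrow\; f(f(x)) = x^{k^{2}} \quad \text{(subexponential)},
\qquad
f(x) = e^{x} \;\Rightarrow\; f(f(x)) = e^{e^{x}} \quad \text{(superexponential)}.
\]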

Here is a stronger statement; I would like to know whether it's true:

The smallest set of functions that contains all elementary functions and is closed under indefinite integration still does not contain any half-exponential function.

In other words, if f is a half-exponential function, then no function in the sequence whose first function is f and in which each subsequent function is the derivative of the preceding one will be elementary. Georgia guy (talk) 20:25, 22 May 2014 (UTC)

That could be interesting, but for now the statement in the article seems to be too strong, not too weak, as compared to what is proved. Boris Tsirelson (talk) 04:58, 23 May 2014 (UTC)

Aspiration for smoothness

One would like to have a solution f to f(f(x))=exp(x) which is not merely continuous and strictly increasing, but as smooth as possible. One might hope for C∞, but even that would still leave the possibility of undulations in the rate of increase. Analyticity would be desirable, but the sources suggest that it is not possible. If there were a place where the function was especially smooth, it could be used to extend the definition of such a smooth function to the whole real line using the facts:
f(exp(x)) = exp(f(x))
and
f(ln(x)) = ln(f(x))
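For the record, both identities are forced by the defining equation alone; a short derivation:
\[
f(e^{x}) = f\bigl(f(f(x))\bigr) = e^{f(x)},
\qquad
f(x) = f\bigl(e^{\ln x}\bigr) = e^{f(\ln x)} \;\Rightarrow\; f(\ln x) = \ln f(x) \quad (x > 0).
\]
The first groups the triple composition of f in the two possible ways; the second substitutes ln(x) into the first and takes logarithms.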
Define A=f(0). We can show that as x approaches -∞, f(x) approaches ln(A)<0, so the line y=ln(A) is a horizontal asymptote. So we might hope that f will take an especially simple form in this limit. Let us take the derivative of f(exp(x)) = exp(f(x)) using the chain rule as x approaches -∞:
f′(exp(x))·exp(x) = exp(f(x))·f′(x)
So for some constant c1,
f′(x) ≈ c1·exp(x)
Integrating from -∞ to x gives
f(x) ≈ ln(A) + c1·exp(x)
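A sketch of how the constant arises, assuming f′ is continuous at 0: differentiating f(exp(x)) = exp(f(x)) gives f′(exp(x))·exp(x) = exp(f(x))·f′(x); as x → -∞, exp(f(x)) → A and f′(exp(x)) → f′(0), so
\[
f'(x) \;=\; \frac{f'(e^{x})\,e^{x}}{e^{f(x)}} \;\sim\; \frac{f'(0)}{A}\,e^{x},
\qquad\text{suggesting } c_{1} = \frac{f'(0)}{A},
\]
and integrating f′(t) ≈ c1·exp(t) over (-∞, x], with f(t) → ln(A), recovers f(x) ≈ ln(A) + c1·exp(x).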
If we go to the second derivative, we get
f″(exp(x))·exp(2x) + f′(exp(x))·exp(x) = exp(f(x))·(f′(x))² + exp(f(x))·f″(x)
So for some constants c1 and c2,
f″(x) ≈ c1·exp(x) + 4·c2·exp(2x)
Integrating from -∞ to x gives
f′(x) ≈ c1·exp(x) + 2·c2·exp(2x)
Integrating from -∞ to x gives
f(x) ≈ ln(A) + c1·exp(x) + c2·exp(2x)
The pattern suggests that
f(x) ≈ c0 + c1·exp(x) + c2·exp(2x) + c3·exp(3x) + ...
where c0=ln(A). In other words, we can hope that there is a function h analytic in a neighborhood of 0 such that
f(x) = h(exp(x))
since exp(x) approaches 0 as x approaches -∞. Notice that

and consequently h cannot be entire because f−1(ln(A)) is undefined. JRSpriggs (talk) 00:32, 26 May 2014 (UTC)
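One consistency check on this ansatz: writing f(x) = h(exp(x)) and y = exp(x), the defining equation f(f(x)) = exp(x) becomes a functional equation for h alone,
\[
h\!\left(e^{\,h(y)}\right) = y \qquad (y > 0),
\]
and letting y → 0 (so that h(y) → ln(A) and e^{h(y)} → A) gives h(A) = 0, consistent with f(ln(A)) = 0.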

The maximum value of A consistent with it lying within the radius of convergence of
h(y) = c0 + c1·y + c2·y² + c3·y³ + ...
is Ω, the Omega constant, which satisfies Ω=−ln(Ω). JRSpriggs (talk) 01:26, 27 May 2014 (UTC)
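For concreteness, Ω can be computed by fixed-point iteration on A = exp(−A), which is equivalent to A = −ln(A); a minimal sketch:

import math

# Omega constant: the unique root of A = exp(-A), equivalently A = -ln(A).
a = 0.5
for _ in range(100):
    a = math.exp(-a)      # converges: |d/dA exp(-A)| = exp(-A) < 1 near the root
print(a)                  # 0.56714329...
print(-math.log(a))       # same value, confirming A = -ln(A)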

For a while, I explored the possibility that A=Ω=0.5671... . Then I realized that that value was too large to be consistent with my belief that the appropriate f should be C∞ and that all its derivatives should be positive on the whole real line. According to the mean value theorem, there should be a point in the interval (ln(A),0) where the first derivative is
(A−0)/(0−ln(A)) = −A/ln(A)
and there should be a point in the interval (0,A) where the first derivative is
(1−A)/(A−0) = (1−A)/A
If we require the first of these to be less than the second and cross-multiply, we get
A² < −(1−A)·ln(A)
If A were Ω, then
Ω² < −(1−Ω)·ln(Ω) = (1−Ω)·Ω
which would imply
Ω < 1−Ω, that is, Ω < 1/2,
a clear falsehood. JRSpriggs (talk) 04:46, 29 May 2014 (UTC)
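A quick numerical check of the cross-multiplied inequality above, A² < −(1−A)·ln(A), confirming that it fails at A = Ω:

import math

omega = 0.5
for _ in range(100):                   # fixed-point iteration for the Omega constant
    omega = math.exp(-omega)

lhs = omega ** 2                       # A^2            ~ 0.3217
rhs = -(1 - omega) * math.log(omega)   # -(1-A)*ln(A)   ~ 0.2455
print(lhs, rhs, lhs < rhs)             # prints False: the inequality does not hold at A = Omega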

From
f(f(x)) = exp(x)
we get
f′(f(x))·f′(x) = exp(x)
by the chain rule. If we then substitute f−1(x) in place of x, we get
f′(x)·f′(f−1(x)) = exp(f−1(x))
from which it follows that
f′(x) = exp(f−1(x))/f′(f−1(x))
This will allow us to calculate the derivative of f at larger arguments, if we know it at smaller arguments. By analogy with

we could calculate a series expansion for f near x, if we know a series for its derivative at f−1(x). We begin with

for

Applying our formula and using analytic continuation, we get

for

Applying it again

for

Applying it yet again

for

JRSpriggs (talk) 06:28, 11 June 2014 (UTC)
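To see the recursion f′(x) = exp(f−1(x))/f′(f−1(x)) in action numerically, here is a minimal sketch. The seed piece (f linear from [0, A] onto [A, 1], with A = 0.4) is an arbitrary illustrative choice rather than one of the series pieces above; on (A, 1] the functional equation forces f(x) = exp(f−1(x)), and the recursion reproduces the derivative of that extension:

import math

A = 0.4                                 # illustrative value of f(0)
                                        # seed piece on [0, A]: f(x) = A + (1-A)/A * x  (linear onto [A, 1])

def f_base_inv(y):                      # inverse of the seed piece, for y in [A, 1]
    return A * (y - A) / (1 - A)

def f_base_prime(x):                    # derivative of the seed piece on [0, A]
    return (1 - A) / A

def f_ext(x):                           # forced extension to (A, 1]: f(x) = exp(f^{-1}(x))
    return math.exp(f_base_inv(x))

def f_prime_ext(x):                     # the recursion: f'(x) = exp(f^{-1}(x)) / f'(f^{-1}(x))
    u = f_base_inv(x)
    return math.exp(u) / f_base_prime(u)

for x in (0.5, 0.7, 0.9):
    fd = (f_ext(x + 1e-6) - f_ext(x - 1e-6)) / 2e-6   # finite-difference check
    print(x, f_prime_ext(x), fd)                      # the two columns agree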

For the second derivative, the recursion rule is
f″(x) = (exp(f−1(x)) − f′(x)·f″(f−1(x))) / (f′(f−1(x)))²
Here is a table of values

JRSpriggs (talk) 11:04, 11 June 2014 (UTC)
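One way to arrive at a second-derivative rule of this type is to differentiate f′(f(x))·f′(x) = exp(x) once more and then substitute f−1(x) for x, just as for the first derivative (the algebra can be arranged in several equivalent ways):
\[
f''(f(x))\,f'(x)^{2} + f'(f(x))\,f''(x) = e^{x}
\;\Longrightarrow\;
f''(x) = \frac{e^{\,f^{-1}(x)} - f'(x)\, f''\!\left(f^{-1}(x)\right)}{f'\!\left(f^{-1}(x)\right)^{2}} .
\]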

We need constraints on the possible values of the parameters A, c0, c1, c2, c3, ... :

Assuming that the derivative of f is strictly increasing, we get

Assuming that the second derivative of f is strictly increasing, we get

And so forth. JRSpriggs (talk) 04:19, 12 June 2014 (UTC)
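Independently of the monotonicity assumptions, a few constraints are already forced by the interpolation conditions f(ln(A)) = 0, f(0) = A and f(A) = 1. Written through the ansatz f(x) = h(exp(x)), and treating the series formally (convergence at exp(A) is not guaranteed), they read:
\[
\sum_{n\ge 0} c_{n} A^{\,n} = 0, \qquad
\sum_{n\ge 0} c_{n} = A, \qquad
\sum_{n\ge 0} c_{n} e^{\,nA} = 1 .
\]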

So far, it appears that

JRSpriggs (talk) 07:15, 14 June 2014 (UTC)

Link to Tetration?

I think this article should include some mention of tetration. While the article defines f(x) piecewise, it would make more sense to define it using tetration.

Let's focus on this for example:

This can be rewritten using tetration:

Here's a proof to show that you indeed get as a result:

Unfortunately, I can't figure out a way to get that coefficient in the front, so using the function above only satisfies when . Regardless, I believe it's worth mentioning. Not only would using tetration help make it easier to define functions such as , but the fact that the link exists means that there might be some way to calculate , although this assumes that the value of a half-exponential function at is known. Expfac user (talk) 20:05, 20 January 2021 (UTC)
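For what it is worth, here is a minimal sketch of the idea in code, using the common piecewise-linear approximation of tetration (tet(t) = t + 1 on (−1, 0], extended by tet(t + 1) = exp(tet(t))). Advancing the tetration height by 1/2 then gives a continuous, strictly increasing f with f(f(x)) = exp(x), although the result is not smooth at the seams. The function names below are mine, not from the article:

import math

def tet(t):
    # Piecewise-linear tetration: tet(t) = t + 1 on (-1, 0],
    # tet(t + 1) = exp(tet(t)), tet(t - 1) = log(tet(t)).
    if t <= -1:
        return math.log(tet(t + 1))
    if t <= 0:
        return t + 1.0
    return math.exp(tet(t - 1.0))

def slog(x):
    # Inverse of tet (the super-logarithm) under the same approximation.
    if x <= 0:
        return slog(math.exp(x)) - 1.0
    if x <= 1:
        return x - 1.0
    return slog(math.log(x)) + 1.0

def half_exp(x):
    # Candidate half-exponential: advance the tetration height by one half.
    return tet(slog(x) + 0.5)

for x in (-2.0, -0.5, 0.0, 0.3, 1.0, 2.0):
    print(x, half_exp(half_exp(x)), math.exp(x))   # the last two columns agree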

I don't see the point. Functional iteration is a mature, self-standing field, as you can see from the cited bibliography, and no salutary information or techniques can come from tetration. You might consider linking this one to the iterated function page, however. Cuzkatzimhut (talk) 01:20, 21 January 2021 (UTC)

"linking this one to the iteration page" as in, linking the half exponential function to the iteration page? Because if that's the case, the page already has a link to iterated functions. As for the comment on tetration...yeah, I guess it isn't practical. Although, would it be worth it to mention half-exponentials on the tetration page, assuming it hasn't been done already?
Other than that, I appreciate the advice; I suppose there's no point mentioning tetration if it's not practical. Expfac user (talk) 16:03, 21 January 2021 (UTC)
Apologies; I took my own advice and linked the iterated function page to this one, after my comment above! Personally, more Kneser stuff is in order, but I lack the energy to do that. Cuzkatzimhut (talk) 22:00, 21 January 2021 (UTC)