Wikipedia:Reference desk/Archives/Mathematics/2012 April 28

Mathematics desk
Welcome to the Wikipedia Mathematics Reference Desk Archives
The page you are currently viewing is an archive page. While you can leave answers for any questions shown below, please ask new questions on one of the current reference desk pages.


April 28

Cubic Interpolation Polynomials

Related to the interpolation polynomial question I asked above: in one dimension, if I have two points and I know both the function values and the derivatives at those two points, I can fit a unique cubic polynomial through them, because I have four unknowns and four constraints. How can I extend this to two dimensions? Let's say I have four points (one cell). What would be the form of the polynomial? Would it be a general bivariate cubic

or a bicubic like this one

The first one has fifteen unknowns and the second has eight. So which constraints are natural to use here? I am thinking of the function value, the x-partial, and the y-partial at each of the four points, but that only gives me twelve constraints. Are some constraints redundant, or am I missing some? I am just confused about the "natural" extension of the above method to two variables. Thanks! 75.171.224.232 (talk) 00:49, 28 April 2012 (UTC)
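For the one-dimensional construction described above (two points, two values, two derivatives), the 4×4 system can be written down and solved directly. A minimal Python sketch with made-up sample values; the function name is just for illustration:

  import numpy as np

  def hermite_cubic_coeffs(x0, x1, f0, f1, df0, df1):
      """Coefficients (a0, a1, a2, a3) of p(x) = a0 + a1*x + a2*x**2 + a3*x**3
      matching the values f0, f1 and derivatives df0, df1 at x0 and x1."""
      A = np.array([
          [1.0, x0,  x0**2, x0**3],    # p(x0)  = f0
          [1.0, x1,  x1**2, x1**3],    # p(x1)  = f1
          [0.0, 1.0, 2*x0,  3*x0**2],  # p'(x0) = df0
          [0.0, 1.0, 2*x1,  3*x1**2],  # p'(x1) = df1
      ])
      return np.linalg.solve(A, np.array([f0, f1, df0, df1], dtype=float))

  a = hermite_cubic_coeffs(0.0, 1.0, f0=2.0, f1=3.0, df0=0.5, df1=-1.0)
  print(np.polyval(a[::-1], 0.0), np.polyval(a[::-1], 1.0))  # reproduces f0 and f1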

Bicubic interpolation may get you started. —Tamfang (talk) 17:40, 28 April 2012 (UTC)

Right, I have looked at that article (and the others on interpolation I could find on Wikipedia), but my question is a little different. If I have ten points and I want to use cubic splines, then I need nine cubic polynomials, one for each interval. The way these "standard" cubic splines work is that I derive a large system and then solve for all nine cubics simultaneously. The problem is that I have far too many points, so interpolating this way is infeasible. So I was asking whether anyone knows of a way to solve only one cubic (for one interval) at a time, as I need it. Linear interpolation would do this, but I also need the gradient to be continuous. So then I said to myself: in one dimension, if I have two points x1 and x2 and I know f(x1), f(x2), f'(x1), f'(x2), then I can find the cubic polynomial for that one interval without having to solve for all the others. But I am a little confused about how to extend this to two dimensions (or even higher). If I can do something like this, it will solve a lot of problems I am having. Any ideas? Thanks. 75.171.224.232 (talk) 20:50, 28 April 2012 (UTC)
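In one dimension the per-interval cubic can also be written in closed form using the standard cubic Hermite basis functions, so no linear solve is needed at all. A sketch (the variable names are arbitrary):

  def hermite_eval(x, x0, x1, f0, f1, df0, df1):
      """Evaluate the unique cubic with p(x0)=f0, p(x1)=f1, p'(x0)=df0, p'(x1)=df1."""
      h = x1 - x0
      t = (x - x0) / h                  # map the interval onto [0, 1]
      h00 = 2*t**3 - 3*t**2 + 1         # standard cubic Hermite basis functions
      h10 = t**3 - 2*t**2 + t
      h01 = -2*t**3 + 3*t**2
      h11 = t**3 - t**2
      return h00*f0 + h*h10*df0 + h01*f1 + h*h11*df1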

Have you looked at Numerical Recipes (http://www.nr.com/)? They have a much better treatment of the subject than Wikipedia and focus more on application issues. Most of the interpolation methods they use are piecewise ones which only use a small number of points at a time.--Salix (talk): 22:23, 28 April 2012 (UTC)
One 2D technique they mention (NR 3.6.3) is first calculating values for the first derivatives ∂f/∂x and ∂f/∂y, and the cross derivative ∂²f/∂x∂y, at each point. With this information you can then treat each patch separately using bicubic interpolation.--Salix (talk): 23:36, 28 April 2012 (UTC)
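A sketch of that first step, estimating the partials on the grid with finite differences via numpy; the grid and the sampled function here are placeholders for your own tabulated data:

  import numpy as np

  x = np.linspace(0.0, 1.0, 50)                                 # placeholder grid
  y = np.linspace(0.0, 1.0, 40)
  F = np.sin(2*np.pi*x)[:, None] * np.cos(2*np.pi*y)[None, :]   # placeholder data

  Fx  = np.gradient(F, x, axis=0)               # ∂f/∂x by central differences
  Fy  = np.gradient(F, y, axis=1)               # ∂f/∂y
  Fxy = np.gradient(Fx, y, axis=1)              # cross derivative ∂²f/∂x∂y
  # F, Fx, Fy and Fxy at the four corners of a cell are the 16 numbers a bicubic
  # patch needs, so each cell can now be handled independently of the others.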

Yep, I looked at it and it looks very promising. Thanks for the suggestion! One question, though: did I read it right that, if I don't know the values of the partials at the grid points, I can basically just assume whatever values I want? That way I will have the smoothness I want. It may not be very accurate, but it will be smooth. 75.171.224.232 (talk) 03:41, 30 April 2012 (UTC)

Yes, you can just assume the values for the partials. For example, in 1D you could assume the derivatives are always zero, giving a smooth but bumpy surface. Better is to estimate them using finite differences.--Salix (talk): 05:16, 30 April 2012 (UTC)
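A small 1D illustration of both choices, using scipy's cubic Hermite spline helper; the data values are made up:

  import numpy as np
  from scipy.interpolate import CubicHermiteSpline

  x = np.arange(9.0)
  f = np.array([2, 3, 4, 6, 5, 2, 1, 4, 2], dtype=float)

  flat = CubicHermiteSpline(x, f, np.zeros_like(f))   # assume every slope is zero
  fd   = CubicHermiteSpline(x, f, np.gradient(f, x))  # slopes from finite differences

  xx = np.linspace(0.0, 8.0, 5)
  print(flat(xx))   # C1 everywhere, but flat (hence bumpy) at every data point
  print(fd(xx))     # C1 everywhere and usually a much better approximation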

Trigonometric Interpolation

Second interpolation question. Bo Jacoby, your idea of using trigonometric interpolation is pretty cool; I hadn't even thought of it. I only have one question about it, if someone can clear it up. I remember from my class that a trig polynomial is

so in two dimensions (so that I can use the FFT), what is the form of the polynomial? Is it

analogous to how we generalize polynomial interpolation like bicubics? Thank you. 75.171.224.232 (talk) 01:03, 28 April 2012 (UTC)

Thanks for asking! Here are some tricks.
  1. The expression cos(2πx) + i sin(2πx) was (by Euler) identified as the complex exponential function e^{2πix}. This identification makes life easier. Forget all about cos and sin and stick to the exponential e^{2πix}.
  2. The identity e^{2πi} = 1 invites the (nonstandard) notation 1^x = e^{2πix}. Most people think that 1^x = 1, but that convention is useless; nobody writes 1^x meaning 1. When x is rational, e^{2πix} is algebraic. So forget about the transcendental numbers e and 2π and stick to the root of unity 1^x.
  3. For fixed positive N, there are only N different values of 1^{n/N}. You may think of n as an integer modulo N. The summation sign Σ_n means summation over N consecutive values of n. It doesn't matter if it is from n = 1 to N, or from n = 0 to N−1, or from n = −(N−1)/2 to (N−1)/2 for odd N, or from n = −N/2 to N/2−1 for even N.
  4. A trigonometric polynomial is f(x) = Σ_n a_n 1^{nx/N}/√N, where the amplitudes are a_n = Σ_x f(x) 1^{nx/N}/√N. Note that complex conjugating the amplitudes is nonstandard. The standard discrete Fourier transform has different procedures for transforming forwards and backwards(!).
  5. A two-dimensional trigonometric polynomial is f(x,y) = Σ_n Σ_m a_{nm} (1^{nx/N}/√N)(1^{my/M}/√M), where the amplitudes are a_{nm} = Σ_x Σ_y f(x,y) (1^{nx/N}/√N)(1^{my/M}/√M).
  6. For interpolation, first transform the function values to amplitudes. Then extend the array of amplitudes from (−N/2 ≤ n < N/2) to (−kN/2 ≤ n < kN/2) for some k > 1, with zeroes for the high frequencies (b_{N/2} = b_{−N/2} = a_{−N/2}/2; b_n = a_n for −N/2 < n < N/2; b_n = b_{−n} = 0 for N/2 < n < kN/2; b_{−kN/2} = 0). Multiply the amplitudes by √k. Then transform from amplitudes back to function values.
Bo Jacoby (talk) 12:01, 28 April 2012 (UTC).
For example, take x = (0,1,2,3,4,5,6,7,8) and f(x) = (2,3,4,6,5,2,1,4,2). The amplitudes are a_n = (9.67, −0.953+2i, −0.929−1.76i, −0.333+1.15i, 0.382+0.573i, 0.382−0.573i, −0.333−1.15i, −0.929+1.76i, −0.953−2i). Insert 9 zeroes and multiply by √2: b_n = (13.7, −1.35+2.83i, −1.31−2.49i, −0.471+1.63i, 0.54+0.81i, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0.54−0.81i, −0.471−1.63i, −1.31+2.49i, −1.35−2.83i). Transform again: g(x) = (2, 2.83, 3, 3.12, 4, 5.3, 6, 5.8, 5, 3.72, 2, 0.706, 1, 2.7, 4, 3.5, 2, 1.34). The even items reproduce the original data, g(2x) = f(x), and the odd items interpolate between them. Bo Jacoby (talk) 11:51, 3 May 2012 (UTC).
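The same zero-padding trick can be reproduced with numpy's FFT routines. Numpy's conventions differ from the symmetric convention above (no 1/√N factor, opposite sign in one direction), so the intermediate amplitudes will not match the numbers listed, but the interpolated values should. A sketch for the odd-length example above:

  import numpy as np

  f = np.array([2, 3, 4, 6, 5, 2, 1, 4, 2], dtype=float)
  N, k = len(f), 2                     # N is odd, so there is no Nyquist bin to split
  F = np.fft.fft(f)

  G = np.zeros(k * N, dtype=complex)   # zeroes at the new high frequencies
  G[:(N + 1)//2] = F[:(N + 1)//2]      # frequencies 0 .. 4
  G[-(N//2):]    = F[-(N//2):]         # frequencies -4 .. -1
  # (For even N the amplitude at ±N/2 would be split in half, as in item 6 above.)

  g = np.real(np.fft.ifft(G)) * k      # rescale so the original samples are reproduced
  print(np.allclose(g[::k], f))        # True: the even entries give back the data
  print(np.round(g, 2))                # the odd entries interpolate between them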

More Interpolation

Let's say I want to interpolate using a ridiculously large number of points (like a trillion points). I know their x-values and the corresponding y-values, and I want the interpolation function and its first derivative to be continuous. For this, cubic splines are perfect, but the problem is that the resulting system is far too large. A trillion points means 999,999,999,999 cubic polynomials with four unknowns in each, and the system will have something like (4 trillion)^2 entries. Even at single precision (4 bytes per entry), I would need an absurd amount of hard drive/RAM space. Granted, the system will be tridiagonal, so I can save memory with a sparse structure and use specialized algorithms like the Thomas algorithm to solve it, but it is still too large and will take too much time and memory.

Is there any way (any cool trick anyone knows of) where I can solve only one cubic polynomial in each interval (from the n-th to the (n+1)-st point) as I need it? I would much rather solve a bunch of small systems on demand than solve them all in preprocessing. This is why I asked the cubic spline question above. I figured a cubic on any one interval can be solved independently if I give it four constraints. It doesn't have to be cubic. I would love to hear your thoughts on any of this, or any cool idea anyone knows of. How can I compute an interpolating function on one cell only, such that the function and its derivative/gradient would be continuous if I computed and stitched all of them together? Please remember that eventually I will be doing this in at least three dimensions, so the problem will only get worse much more quickly thanks to the curse... of dimensionality. 75.171.224.232 (talk) 01:19, 28 April 2012 (UTC)
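One reading of this: if derivative estimates are stored (or computed on the fly) alongside the values, then each query only ever touches the one interval containing it, and nothing global has to be solved. A sketch assuming sorted abscissae x, values f, and derivative estimates df (e.g. from finite differences); all names are illustrative:

  import numpy as np

  def eval_on_demand(xq, x, f, df):
      """Evaluate a piecewise cubic Hermite interpolant at xq, touching only
      the one interval [x[i], x[i+1]] that contains xq."""
      i = min(max(np.searchsorted(x, xq) - 1, 0), len(x) - 2)
      x0, x1 = x[i], x[i + 1]
      h = x1 - x0
      t = (xq - x0) / h
      h00 = 2*t**3 - 3*t**2 + 1        # cubic Hermite basis functions
      h10 = t**3 - 2*t**2 + t
      h01 = -2*t**3 + 3*t**2
      h11 = t**3 - t**2
      return h00*f[i] + h*h10*df[i] + h01*f[i + 1] + h*h11*df[i + 1]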

I'm not convinced that I understand your question. No matter how many intervals you have, a cubic for each interval depends on the same number of local values. —Tamfang (talk) 17:48, 28 April 2012 (UTC)
I think the problem is that while the data points are known, the gradients are not; in fitting the splines you need to adjust adjacent patches to match gradients. My first thought is that you could try to drastically reduce the size of your dataset before fitting splines. There are lots of techniques for this; start with Dimension reduction. How accurately are the data points obtained? Will there be some error in their measurement? If so, beware of overfitting. There are quite a few Google hits for "interpolation large datasets" with some interesting techniques; Multi Level B-Spline might be promising.--Salix (talk): 19:01, 28 April 2012 (UTC)
To assign a first derivative to each sample point in a one-dimensional sequence, you fit a quadratic to it and its two neighbors; if the samples are equally spaced, this gives a tangent slope equal to the slope of a straight line between the two adjacent samples. In multiple dimensions I imagine it's similar. If the sample points do not form a grid, you'd probably have to start with Delaunay triangulation. —Tamfang (talk) 18:48, 1 May 2012 (UTC)
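A quick numerical check of that claim for equally spaced samples (the numbers are arbitrary):

  import numpy as np

  h = 0.5
  xs = np.array([1.0, 1.0 + h, 1.0 + 2*h])       # three equally spaced samples
  ys = np.array([2.0, 2.7, 2.9])                 # arbitrary values

  c = np.polyfit(xs, ys, 2)                      # quadratic through the three points
  slope_mid = np.polyval(np.polyder(c), xs[1])   # its derivative at the middle sample
  central   = (ys[2] - ys[0]) / (2*h)            # slope of the chord between the neighbors
  print(np.isclose(slope_mid, central))          # True
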
You can always compute a patch-wise spline and then blend the patches together. For example, if you have 200 points, you might do the following:
  1. Compute 3 splines:
    • One on the first 100 points, S1(x)
    • One on points 51-150, S2(x),
    • One on points 101-200, S3(x)
  2. Define weight functions:
    • w1(x) equals 1 on points 0-50, declines linearly to zero over points 51-100, and is zero otherwise
    • w2(x) grows linearly from zero to one over points 51-100, declines linearly back to zero over points 101-150, and is zero otherwise
    • w3(x) grows linearly from zero to one over points 101-150, equals 1 on points 151-200, and is zero otherwise
  3. Define a composite curve: S(x) = (w1(x)*S1(x) + w2(x)*S2(x) + w3(x)*S3(x)) / (w1(x) + w2(x) + w3(x));
It should be obvious how to extend this to arbitrarily many patches. This will be continuous. With just a little more effort, you can make the blending function smooth and ensure the composite curve is also completely smooth. You aren't guaranteed to have the same result as the full spline, but as long as the patches are large, the edge effects will be small and you get a good approximation of the full spline (assuming you aren't overfitting in the first place). Dragons flight (talk) 19:23, 1 May 2012 (UTC)
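A rough Python sketch of this blending scheme, with scipy's CubicSpline standing in for "a spline on one patch" and 0-based indices, so the patch boundaries are shifted slightly from the description above; the data are placeholders:

  import numpy as np
  from scipy.interpolate import CubicSpline

  x = np.arange(200.0)                           # placeholder data
  y = np.sin(x / 17.0) + 0.1 * np.cos(x / 3.0)

  S1 = CubicSpline(x[:100],   y[:100])           # points 0-99
  S2 = CubicSpline(x[50:150], y[50:150])         # points 50-149
  S3 = CubicSpline(x[100:],   y[100:])           # points 100-199

  def ramp(t, a, b):
      """0 before a, 1 after b, linear in between."""
      return np.clip((t - a) / (b - a), 0.0, 1.0)

  def S(t):
      w1 = 1.0 - ramp(t, 50, 99)                 # 1 on 0-50, fades out over 50-99
      w2 = ramp(t, 50, 99) * (1.0 - ramp(t, 100, 149))
      w3 = ramp(t, 100, 149)                     # fades in over 100-149, 1 afterwards
      # The weights sum to 1, and each spline is only evaluated where its weight
      # is nonzero (i.e. inside its own patch), so no renormalisation is needed here.
      out = np.zeros_like(t)
      for w, Sk in ((w1, S1), (w2, S2), (w3, S3)):
          m = w > 0
          out[m] += w[m] * Sk(t[m])
      return out

  t = np.linspace(0.0, 199.0, 1000)
  print(S(t)[:5])

As noted above, linear weights give a continuous composite; smoother (e.g. smoothstep) weights would make the blend itself C1 as well.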

Request for applications

Could anyone supply me with explicit information about applications of second-order ODEs? Firstly, I am interested in the case of linear homogeneous ODEs, i.e.

  ay'' + by' + cy = 0
where a, b and c are constants and a is non-zero. Secondly, I am interested in the non-homogeneous case:

  ay'' + by' + cy = ƒ(x)
where ƒ is a sum or difference of exponential and trigonometric functions. Fly by Night (talk) 21:38, 28 April 2012 (UTC)

Hooke's law gives an application of the homogeneous case (with a and c terms in your equation). The b term is a damping term (see damping). The f represents the application of a force external to the system. Many other mechanical systems have similar interpretations for their respective differential equations (e.g., RLC circuits). Sławomir Biały (talk) 01:15, 29 April 2012 (UTC)
The special case cy = f is Hooke's law, that force is proportional to displacement: more load makes a spring longer. The case by' = f is Aristotle saying that force is proportional to velocity: more horses draw a wagon faster. The case ay'' = f is Newton's law that force is proportional to acceleration: the falling apple. Bo Jacoby (talk) 04:10, 29 April 2012 (UTC).
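For a concrete feel for the mechanical case, here is a small sketch that integrates ay'' + by' + cy = f(t) numerically with scipy; the coefficients and the forcing are arbitrary illustrative choices:

  import numpy as np
  from scipy.integrate import solve_ivp

  a, b, c = 1.0, 0.3, 4.0                  # mass, damping, stiffness (arbitrary values)
  f = lambda t: np.cos(1.5 * t)            # external forcing: a trigonometric function

  def rhs(t, u):
      y, v = u                             # state u = (y, y')
      return [v, (f(t) - b*v - c*y) / a]   # rearranged from a*y'' + b*y' + c*y = f(t)

  sol = solve_ivp(rhs, (0.0, 40.0), [1.0, 0.0], max_step=0.05)
  print(sol.y[0, -1])                      # displacement of the oscillator at t = 40
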
Second-order fixed-coefficient differential equations sometimes come up in economics, with f being either zero or a non-zero constant. Two examples are given in Chiang, Alpha C., Fundamental Methods of Mathematical Economics, McGraw-Hill, 3rd edition, 1984, pp. 529-540. One of them equates supply and demand for a good, with demand being affected not only by the price P but also by the trend in the price given by dP/dt and d²P/dt². The other involves the economy-wide inflation rate and its first and second derivatives, as it relates to the unemployment rate. In both examples the first and second derivatives appear because of the assumptions about how price or inflation expectations are formed. Duoduoduo (talk) 17:34, 29 April 2012 (UTC)
The current and voltage response of an RLC circuit are described by second-order ODEs, with f(t) being an external signal (a current or voltage) applied to the circuit. --CiaPan (talk) 05:37, 30 April 2012 (UTC)