Wikipedia:Reference desk/Archives/Mathematics/2012 January 11

Welcome to the Wikipedia Mathematics Reference Desk Archives
The page you are currently viewing is an archive page. While you can leave answers for any questions shown below, please ask new questions on one of the current reference desk pages.


January 11

How soon would a dishwasher pay for itself in savings?

I was told that machine dishwashers are more efficient than washing by hand.

In that case, how much (in water, energy, and money) do we use on average to wash dishes by hand?

On the other... "hand," how efficient is a new dishwasher made this year, and how much more efficient is it than washing by hand? How much does such a dishwasher cost, and from what store?

Therefore, assuming regular usage, how soon would the machine dishwasher pay for itself in savings? --70.179.174.101 (talk) 00:19, 11 January 2012 (UTC)[reply]

A comparison of the article and its comments here gives some indication of the gap between the spin and the common experience. --Tagishsimon (talk) 00:58, 11 January 2012 (UTC)[reply]
Another factor is the amount of detergent used up. Hand dish-washing detergent seems cheaper to me, and I probably use less of it, since I apply it directly to the dishes, and only as needed. StuRat (talk) 16:50, 12 January 2012 (UTC)[reply]

Polynomials

Hello. I am trying to prove succinctly but rigorously that if a polynomial P satisfies P(x+c) - P(x) = k for all x, where c and k are constants, then P must be a polynomial of degree at most one. I already have a proof for an indeterminate degree n that involves sigma summation and binomial expansion, but it is very ugly. Can anybody provide a hint? Thanks. 24.92.85.35 (talk) 01:06, 11 January 2012 (UTC)[reply]

Assume that P has degree n, and say P(x) = a_n x^n + a_{n-1} x^{n-1} + ... + a_0. What is the coefficient of x^{n-1} in P(x+c) - P(x)? --COVIZAPIBETEFOKY (talk) 01:43, 11 January 2012 (UTC)[reply]
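To spell out where that hint leads (a sketch of my own, assuming c ≠ 0): only the leading term of P contributes to the x^{n-1} coefficient of the difference.

    % Binomial expansion of the leading term of P(x+c):
    %   a_n (x+c)^n = a_n x^n + n a_n c x^{n-1} + (lower-order terms).
    % Every other term of P contributes only degree <= n-2 to the difference, so
    \[
      P(x+c) - P(x) = n a_n c\, x^{n-1} + (\text{terms of degree} \le n-2).
    \]
    % If this equals the constant k with c \neq 0 and a_n \neq 0, then n \le 1.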
Take a derivative of both sides of your equation and conclude that P' is a periodic function (and therefore it is bounded). The only bounded polynomials are constant, so P' must be constant. Sławomir Biały (talk) 10:52, 11 January 2012 (UTC)[reply]
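Written out, the differentiation step is just one line (a sketch):

    \[
      \frac{d}{dx}\bigl(P(x+c) - P(x)\bigr) = P'(x+c) - P'(x) = \frac{d}{dx}\,k = 0,
    \]
    % so P'(x+c) = P'(x) for every x: P' is periodic with period c, hence
    % bounded on all of R by its maximum over one period.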
Sławomir, that's a neat little proof. It is intuitively obvious that the only bounded polynomials are constant, but how might one prove that? The sine function can be represented as an infinite power series. If you took the first googol-to-the-power-of-a-googol terms of that power series, you'd have a polynomial. It's tempting to think that this polynomial might be periodic over some large interval. Presumably you'd need an argument involving the convergence of power series? What would you do next? Fly by Night (talk) 19:53, 11 January 2012 (UTC)[reply]
You just consider x large enough, and the polynomial will be dominated by the largest power. n times the largest coefficient (or 2 if smaller) will be quite big enough and more. Dmcq (talk) 20:11, 11 January 2012 (UTC)[reply]
I'm aware of the method, but I was asking Sławomir to give explicit details about what he would do. Fly by Night (talk) 23:04, 11 January 2012 (UTC)[reply]
Actually, there's a simple proof that P' is constant that doesn't rely on this fact. First note that for each integer j (by the argument I just gave) P'(jc) = P'((j+1)c). By Rolle's theorem, this implies that P'' has a zero in (jc, (j+1)c). Thus P'' is a polynomial with infinitely many zeros, and is therefore the zero polynomial.
To answer your question about how I would prove that the only bounded polynomials are constant: by the squeeze theorem, if Q is bounded and n ≥ 1, then Q(x)/x^n → 0 as x → ∞. But if Q had degree n ≥ 1, then the limit of Q(x)/x^n would be the leading coefficient of Q, which is non-zero, a contradiction. Sławomir Biały (talk) 00:08, 12 January 2012 (UTC)[reply]
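For the record, the squeeze step written out (a sketch, with M a bound for |Q|):

    % If |Q(x)| <= M for all x, then for any n >= 1
    \[
      0 \le \left| \frac{Q(x)}{x^n} \right| \le \frac{M}{|x|^n}
      \to 0 \quad (x \to \infty),
    \]
    % whereas if Q has degree n >= 1 with leading coefficient a_n, then
    \[
      \lim_{x \to \infty} \frac{Q(x)}{x^n} = a_n \neq 0.
    \]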
It's obvious when you put it like that :o) Fly by Night (talk) 17:53, 12 January 2012 (UTC)[reply]

Thank you everybody, this was really helpful! Thank you especially, Sławomir, for such a clever proof. Just to be sure I understand you: there is no real requirement that the j in your argument be an integer, correct? It could in fact be any number? Thanks again! 24.92.85.35 (talk) 03:10, 12 January 2012 (UTC)[reply]

Right, j need not be an integer for the identity to hold, but it simplifies the argument a bit to assume this because it ensures that the intervals do not overlap. Sławomir Biały (talk) 10:06, 12 January 2012 (UTC)[reply]
Alternatively: prove that if P is a polynomial of degree n ≥ 1 and c ≠ 0, then P(x+c) - P(x) is a polynomial of degree n - 1. This is straightforward for the polynomial x^n and generalizes immediately to any polynomial. So if k is a non-zero constant, P has degree 1, and if k = 0, then P is a constant polynomial, and that's it. --pma 13:55, 13 January 2012 (UTC)[reply]
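The x^n case in pma's argument is one application of the binomial theorem (a sketch, assuming c ≠ 0):

    \[
      (x+c)^n - x^n = \binom{n}{1} c\, x^{n-1} + \binom{n}{2} c^2\, x^{n-2}
        + \cdots + c^n,
    \]
    % a polynomial of degree exactly n-1, since its leading coefficient
    % nc is non-zero.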

Matrix multiplication on integers and overflow

While I was reading the description of YUV, this occurred to me. Normally, multiplication by a square, nonsingular matrix is one-to-one and onto, but because RGB and YUV values are constrained to an interval, the transform can overflow. This led me to the following question:

For the matrix equation A·x = b, which values of x, with every component (x1, x2, x3, ...) in the interval [0, 1], produce values of b whose components (b1, b2, b3, ...) are also in [0, 1]? — Preceding unsigned comment added by 68.40.57.1 (talk) 05:35, 11 January 2012 (UTC)[reply]

All possible values of x for which Ax falls within a Cartesian product of intervals [0, 1], a hypercube, are found by inverting the matrix and applying it to that hypercube, which gives a distorted hypercube whose parts may fall outside the source. You'd then need to find the intersection of that and the source hypercube, which doesn't sound too nice. Dmcq (talk) 11:12, 11 January 2012 (UTC)[reply]
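A small numpy sketch of both halves of that answer; the 3×3 matrix here is a made-up stand-in, not an actual RGB-to-YUV matrix, and the function names are mine:

    import itertools
    import numpy as np

    # Stand-in nonsingular matrix (NOT a real RGB->YUV conversion matrix).
    A = np.array([[ 0.5,  0.3,  0.2],
                  [-0.2,  0.9,  0.3],
                  [ 0.1, -0.4,  1.1]])

    def maps_into_unit_cube(x, A):
        """Check whether b = A x has every component in [0, 1]."""
        b = A @ x
        return bool(np.all(b >= 0.0) and np.all(b <= 1.0))

    # Dmcq's construction: the set of valid x is the image of the unit cube
    # under the inverse of A, intersected with the unit cube itself.  That
    # image is a distorted cube (a parallelotope) spanned by the images of
    # the cube's 8 vertices.
    A_inv = np.linalg.inv(A)
    vertices = np.array(list(itertools.product([0.0, 1.0], repeat=3)))
    preimage_vertices = vertices @ A_inv.T  # one row per mapped vertex

    print(preimage_vertices)
    print(maps_into_unit_cube(np.array([0.2, 0.5, 0.4]), A))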

Rapid calculation of standard deviation in a time series

I came up with a rapid algorithm that calculates a running variance of the last N values of a noisy (chaotic) time series as each new data sample comes in, without relying on any loops. Basically it maintains two running sums:

  • S = the sum of the last N values of x
  • T = the sum of the last N values of x².

Each new data value x is added to S, and its square is added to T, while the value from N samples ago is subtracted from each. In this way the running sums require no loops. Based on this identity for variance:

    σ² = E[x²] − (E[x])² = T/N − (S/N)² = (N·T − S²)/N²

...the standard deviation for my time series (N weighted, not N−1 weighted) is simply:

    σ = √(N·T − S²) / N
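In code, the whole update is only a few lines; here is a Python sketch (the deque is just one way to remember the value from N samples ago, and the max(0, ...) guard anticipates the roundoff concern below):

    import math
    from collections import deque

    class SlidingStd:
        """O(1)-per-sample standard deviation of the last N values (N-weighted)."""

        def __init__(self, n):
            self.n = n
            self.window = deque()  # the last n samples
            self.s = 0.0           # running sum of x
            self.t = 0.0           # running sum of x**2

        def update(self, x):
            self.window.append(x)
            self.s += x
            self.t += x * x
            if len(self.window) > self.n:
                old = self.window.popleft()  # the value from N samples ago
                self.s -= old
                self.t -= old * old
            # Until the window fills, this is the std dev of the samples so far.
            n = len(self.window)
            # max(0, ...) guards against a slightly negative roundoff result.
            return math.sqrt(max(0.0, n * self.t - self.s * self.s)) / n

Usage is just a call per incoming sample, e.g. feeding each new x to SlidingStd(100).update(x).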
So far this is working OK for me. However, I am concerned about errors that can occur when the two terms in the numerator are many orders of magnitude larger than their difference (or worse, when roundoff makes the difference negative). This hasn't happened yet in my application, but the possibility is there.

So I've spent several hours looking for rapid calculation techniques for time series, and found nothing. I do find single-pass methods for calculating variance (see Standard deviation#Rapid calculation methods for example, based on Welford's classic paper of 1962), but for a fixed-length window over a time series, these would still require a loop every time the series gets a new data point.

Does anybody know of a loop-less rapid calculation of standard deviation of a time series that doesn't introduce the possibility of roundoff error? The only alternative I know of is exponential moving standard deviation, updated for each new sample x as:

    δ = x − μ
    μ ← μ + α·δ
    σ² ← (1 − α)·(σ² + α·δ²)

...but this has some undesirable settling time properties for me, so I'd prefer not to use it.
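In code, that exponential recurrence looks like this (a sketch of how I'd implement it; alpha is the smoothing factor):

    import math

    class ExpMovingStd:
        """Exponentially weighted moving standard deviation (no window buffer)."""

        def __init__(self, alpha):
            self.alpha = alpha  # smoothing factor in (0, 1)
            self.mean = 0.0
            self.var = 0.0

        def update(self, x):
            delta = x - self.mean
            self.mean += self.alpha * delta
            self.var = (1.0 - self.alpha) * (self.var + self.alpha * delta * delta)
            return math.sqrt(self.var)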

Anyone know of any other alternatives? ~Amatulić (talk) 21:37, 11 January 2012 (UTC)[reply]

Would this Algorithms_for_calculating_variance#Compute_running_.28continuous.29_variance work? --NorwegianBlue talk 22:20, 11 January 2012 (UTC)[reply]
No, that's Welford's algorithm, which I mentioned above. That section title is somewhat misleading. You could run that algorithm continuously on a time series, but your result would be the variance of all the data contained within the time series, not of the last N values, which is what I'm trying to calculate.
My problem can be restated like this:
Calculate the variance or standard deviation inside a fixed-length window that slides over an infinitely long data set, using a rapid calculation algorithm that doesn't require loops AND doesn't introduce potentially catastrophic roundoff errors.
I have solved the first part but not the last part. ~Amatulić (talk) 23:23, 11 January 2012 (UTC)[reply]
Try searching for "recursive calculation of variance". Does it help? --HappyCamper 23:37, 11 January 2012 (UTC)[reply]
You can avoid introducing roundoff errors in the additions and subtractions by doing the rounding before that stage. For instance, in computing terms: if you round each square to a float instead of a double before adding it to a double accumulator, and you don't have too wide a range of values, then the double will be the exact sum of the float values, and subtracting an old value later removes exactly what was added. Dmcq (talk) 12:54, 12 January 2012 (UTC)[reply]
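A sketch of what Dmcq describes, in Python with numpy.float32 standing in for a C float (the sums stay exact only while they fit comfortably within a double's 53-bit significand):

    import numpy as np

    def quantize(x):
        """Round a value to float32 precision, returned as a Python float."""
        return float(np.float32(x))

    # float64 accumulator of float32-rounded squares: with a limited dynamic
    # range of inputs, every add and subtract below is exact in float64.
    t = 0.0
    window = []

    for x in [1.25, 3.5, 2.75, 0.5]:
        sq = quantize(x * x)
        window.append(sq)
        t += sq

    t -= window.pop(0)  # removing an old square undoes its addition exactly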

HappyCamper: Thanks. My searches for recursive variance turned up mixed results: either a method that does what I'm already doing, or a description of the exponentially weighted variance that I described above, or a method like Welford's algorithm, which isn't a sliding-window algorithm.

Dmcq: Intuitively, rounding off a double to a float seems like it would introduce even more error, much as if one had started out with floats in the first place. After thinking about it, though, I see how errors in the low significant digits would get lopped off by rounding to floats, so that might work. What I'm doing now, just for safety, is to use max(0, N·T − S²) as my numerator. At least that prevents the possibility of a negative argument in the square root. ~Amatulić (talk) 20:33, 12 January 2012 (UTC)[reply]