Wikipedia:Reference desk/Archives/Mathematics/2010 May 3



May 3

Show that cos(2π/7) + cos(4π/7) + cos(6π/7) = −1/2

Solving the equation z^7 = 1 yields the solutions z = e^(2πik/7) for k = 0, 1, …, 6. Hence show that cos(2π/7) + cos(4π/7) + cos(6π/7) = −1/2. Show me where to begin, please.--115.178.29.142 (talk) 01:11, 3 May 2010 (UTC)

The sum of all seven roots is 0, so subtracting the root 1 from each side, you get that the sum of the remaining six roots is −1. The real parts of those six roots are the cosines. The roots come in conjugate pairs, and two roots in a pair have equal real parts. You're summing only three of them—one from each pair—and not the other three, so the sum is only half as much. Michael Hardy (talk) 01:15, 3 May 2010 (UTC)
Oh.... I guess I'd better mention: the six roots are just the 1st through 6th powers of one of the roots. Therefore you really are summing the six roots if you're summing the 1st through 6th powers of that one root. Michael Hardy (talk) 01:21, 3 May 2010 (UTC)
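For the record, here is the argument of the two posts above written out, with ω = e^(2πi/7):

    \sum_{k=0}^{6} \omega^k = \frac{\omega^7 - 1}{\omega - 1} = 0
    \quad\Longrightarrow\quad
    \sum_{k=1}^{6} \omega^k = -1.

Taking real parts, and using \cos(2\pi(7-k)/7) = \cos(2\pi k/7) to pair the six terms:

    2\left(\cos\tfrac{2\pi}{7} + \cos\tfrac{4\pi}{7} + \cos\tfrac{6\pi}{7}\right) = -1.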
...and by the way, notice the difference between this: $cos x$ (which typesets as italic "cos x", run together) and this: $\cos x$ (which typesets "cos" upright, properly spaced from the x).
Writing \cos x, with a backslash, has three effects: (1) "cos" doesn't get italicized along with x; (2) proper spacing appears between "cos" and "x"; and (3) when TeX is used in the normal way (as opposed to the way it's used on Wikipedia), no line-break will appear between "cos" and x. (In Wikipedia, no line-break will ever appear anyway....) Michael Hardy (talk) 01:18, 3 May 2010 (UTC)
In real TeX, no line breaks will ever appear inside $cos x$ either: a math formula can only be broken at the outermost brace level after an explicit penalty (e.g., \break), a discretionary break (e.g., \-), a relation sign (e.g., =), or a binary operation sign (e.g., +), the last three incurring a \hyphenpenalty/\exhyphenpenalty, \relpenalty, and \binoppenalty, respectively. Not that I would advocate using <math>cos x</math> (it's clearly wrong for the other reasons you gave), but let's get the facts straight.—Emil J. 14:36, 3 May 2010 (UTC)
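For reference, the plain TeX defaults EmilJ describes, as one might set them in a document (a minimal LaTeX sketch; the values shown are the standard defaults):

    % Penalties charged for breaking a line inside inline math:
    \relpenalty=500     % after a relation sign such as =
    \binoppenalty=700   % after a binary operation sign such as +
    % $a = b + c$ may therefore break after = or +;
    % $\cos x$ contains neither, so it offers no break point at all.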

Formal proof that (2^x-1)/(3x!+x^2-1) converges to 0

Prove that (2^x − 1)/(3x! + x^2 − 1) converges to 0 as x → ∞.
This is what I've done so far: […] but now I'm stuck —Preceding unsigned comment added by 115.178.29.142 (talk) 02:05, 3 May 2010 (UTC)

Are you allowed to use Stirling's formula? (Igny (talk) 02:37, 3 May 2010 (UTC))
First you want to get rid of the −1 to make things cleaner. For example you could say (2^x − 1)/(3x! + x^2 − 1) ≤ 2^x/x! for all x ≥ 0 (note that the constant factors don't really matter for any argument you're going to make). Then you want to make an argument for why a factorial grows faster than an exponential. In particular, when x increases by 1, 2^x increases by a factor of 2, while x! increases by a factor of x (which is much bigger than 2 when x gets large). For any specific choice of ε, you need to find a large enough x past which 2^x/x! has definitely decreased below ε. Rckrone (talk) 03:06, 3 May 2010 (UTC)
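Made precise, that ratio argument runs as follows (a sketch; the constant 2/3 is just 2^4/4!):

    \frac{2^x}{x!} = \frac{2^4}{4!} \prod_{k=5}^{x} \frac{2}{k}
    \le \frac{2}{3}\left(\frac{1}{2}\right)^{x-4}
    \qquad (x \ge 4 \text{ an integer}),

since each factor 2/k with k ≥ 5 is at most 1/2. The right-hand side drops below any given ε once x ≥ 4 + log₂(2/(3ε)).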

Begin by rearranging (divide numerator and denominator by x!):

    (2^x − 1)/(3x! + x^2 − 1) = (2^x/x! − 1/x!)/(3 + x^2/x! − 1/x!).

Then prove that each of the four fractions converges to 0 for x going to infinity. Bo Jacoby (talk) 06:09, 3 May 2010 (UTC).

The core of the problem is to get an estimate for 2^x/x!. If x is assumed an integer (usually the gamma function is used for non-integer x) then it should be easy to construct an inductive argument that x! ≥ C·3^x for some value of C. If x is not an integer then it gets back to whether you can use Stirling's formula.--RDBury (talk) 15:02, 3 May 2010 (UTC)
No need of Stirling's formula even for real x. Follow Bo Jacoby's directions and use e.g. the obvious bound x! ≥ 3^(x−3) for x ≥ 3.--pma 17:43, 3 May 2010 (UTC)
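Putting pma's bound together with the estimates above gives the whole proof in one line (for x ≥ 3):

    0 \le \frac{2^x - 1}{3\,x! + x^2 - 1} \le \frac{2^x}{x!}
    \le \frac{2^x}{3^{x-3}} = 27\left(\frac{2}{3}\right)^x \longrightarrow 0,

using 2^x − 1 ≤ 2^x and 3x! + x^2 − 1 ≥ x!.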

Paying a point for lower mortgage rate

My local bank offers a 5.125% 30-year fixed mortgage for 0 points and 5.000% for 1 point (1% of the principal paid as a fee to get the lower rate). To determine which is the fairer deal, I used the Excel functions RATE() and PMT() in the following way:

rate = 5.125%/12
duration = 30*12
payment = PMT(rate, duration, 100000)    <---- monthly payment on a $100k loan at 5.125% APR
guess = rate
fv = 0
type = 0
fairrate = RATE(duration, payment, 101000, fv, type, guess)*12

For this particular exercise, the payment per month at 5.125% is $544.49, and the "fair rate" is 5.037%. That is, owing $100k at 5.125% means the same payments as owing $101k at 5.037% APR. But because the bank offers 5% for a 1 point fee, it seems like a good deal, right? Is there a mistake in my reasoning? (Igny (talk) 03:14, 3 May 2010 (UTC))
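For anyone who wants to check this outside Excel, here is a minimal Python sketch of the same computation (pmt and rate_for_payment are illustrative stand-ins for Excel's PMT and RATE, assuming end-of-period payments and ignoring Excel's sign conventions):

    def pmt(rate, nper, pv):
        # Level payment that amortizes a loan of pv over nper periods
        # at the given periodic rate.
        return pv * rate / (1 - (1 + rate) ** -nper)

    def rate_for_payment(nper, payment, pv, lo=1e-9, hi=0.02):
        # Periodic rate at which `payment` exactly amortizes `pv`,
        # found by bisection (pmt is increasing in the rate).
        for _ in range(100):
            mid = (lo + hi) / 2
            if pmt(mid, nper, pv) < payment:
                lo = mid
            else:
                hi = mid
        return lo

    monthly = pmt(0.05125 / 12, 360, 100000)             # about $544.49
    fair = rate_for_payment(360, monthly, 101000) * 12   # about 5.037% APR
    print(monthly, fair)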

I don't know if the mathematics desk is the right place to ask. You have to take the tax deduction on the interest into account, and the point may not be deductible the same way. You have to think about potential changes in your tax bracket over the next 30 years, either because your income changes or because the tax code changes. And even if one deal is better from a linear perspective, the marginal utility of paying a chunk of cash now instead of spreading it across multiple years may not work out the same way. Finally, you're presumably getting a fixed mortgage instead of an ARM because you anticipate inflation levels to increase compared to now. But in that case, why would you want to pay valuable cash now instead of worthless cash later? 69.228.170.24 (talk) 08:07, 3 May 2010 (UTC)
No, mathematics is the right place to ask these questions. I did not come here for tax advice, or for the philosophical, legal, or other consequences of having two different loans. I believe that two loans with a certain set of parameters can be compared to each other, for example by looking at the present values of the payments. In my example, owing $100k at 5.125% has the same payments as owing $101k at 5.037%, so isn't owing $100k at 5.000% better if I pay $1k for the discount, thus adding $1k to the PV of my payments? I am making simplifying assumptions: I do not prepay the loan, the rates stay the same over the 30 years, and whatever other simplifying assumptions you can think of. After all, the banks do use some algorithm saying that for a 1 point fee I am getting a discounted rate of 5%, not 4.5% or 4%, but 5%. How do they figure it out? (Igny (talk) 05:59, 4 May 2010 (UTC))
If which loan you take affects how much you're paying in taxes and when, that has some value too which needs to be taken into account. Rckrone (talk) 17:47, 4 May 2010 (UTC)
But otherwise it seems like it would be a good deal if you have the money to pay. Compare it to being allowed to pay off 1% of the loan now and owe only $99,000 at 5.125%. That payment would be slightly higher than for $100,000 at 5%. Of course, presumably the ability to borrow money is worth more to you than 5.125% APR, in which case borrowing more might be favorable even at a premium. You should figure out what the cost of that additional $1000 is. Rckrone (talk) 17:57, 4 May 2010 (UTC)
I think that to properly compare the two loans for yourself, you do have to take tax issues and utility into account. Since your user page says you have a PhD in mathematics I'm sure you can figure out the PV of the payment stream from the bank's point of view (it is just Σ_{k=1}^{n} P·z^k, where P is the monthly payment, z is the discount coefficient 1/(1+i), and i is the monthly interest rate). That summation is in turn just the difference between two geometric series, unless I made an error someplace. 69.228.170.24 (talk) 23:16, 4 May 2010 (UTC)
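Worked out, that sum is the standard annuity present-value formula:

    \sum_{k=1}^{n} P z^k = P\,\frac{z(1-z^n)}{1-z} = P\,\frac{1-(1+i)^{-n}}{i}.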
I know this is a little late, but I wanted to throw in my advice as someone who has studied this stuff. No one has suggested that you subtract the common portion of the monthly payment out of both equations so that the math is easier. The 30-year loan (360 monthly payments) at 5.000% interest costs $536.82 per month, so the difference in monthly payment between owing $100k at 5.125% interest and at 5.000% interest is $7.67 per month.

Now you mathematician/PhD guys can look at this problem simplified. What is more valuable to the OP? Would he prefer to be given $1,000 cash today, or $7.67 per month for the next 360 months? (Trivially, $7.67 times 360 equals $2,761.20.) If the OP prefers $1,000 cash today, then he should take the 5.125% loan. If the OP prefers $7.67 per month for the next 360 months, then he should take the 5.000% loan.

If you want my advice, you should go with the loan which has the higher interest rate, because you avoid a sunk cost (the $1,000 loan origination fee to the bank), which makes refinancing at a lower rate in the future more "profitable" if interest rates go down, if your credit rating goes up, or if markets forecast low inflation expectations. Also built into the "utility" of a lower rate is the consumer's peace of mind at having secured a lower interest rate (which is always good, right?). And by giving customers a choice, the bank earns a few free dollars simply because some customers will make the "wrong choice" and pick the option which is slightly more profitable for the bank. 66.231.146.7 (talk) 12:57, 8 May 2010 (UTC)
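One way to make the "$1,000 today vs. $7.67 a month" comparison concrete is to find the discount rate at which the two are worth the same; a minimal sketch (annuity_pv and the bisection are illustrative, not from the thread). If your money can reliably earn more than the printed breakeven rate, the $1,000 today is worth more:

    def annuity_pv(rate, nper, payment):
        # Present value of `payment` per period for nper periods
        # at the given periodic discount rate.
        return payment * (1 - (1 + rate) ** -nper) / rate

    # annuity_pv is decreasing in the rate, so bisect for the rate at which
    # $7.67/month for 360 months is worth exactly the $1,000 paid up front.
    lo, hi = 1e-9, 0.02
    for _ in range(100):
        mid = (lo + hi) / 2
        if annuity_pv(mid, 360, 7.67) > 1000:
            lo = mid
        else:
            hi = mid
    print("breakeven APR:", 12 * lo)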

Thank you, 66. I think I figured out the solution to my question, and I made a mistake in the formulas above. Consider a loan of $100k at 5.125% APR fixed for 30 years. That is $544.49 per month. The formula for the "fair rate" above is close but not exact:

fair rate = RATE(30*12, -544.49, 100000/0.99)*12, approximately 5.0363% APR

To better understand what I mean, compare the following scenarios for a loan on a home valued at $125k. I could put 1 point either toward a rate discount or toward the downpayment.

Scenario                Downpayment   Discount fee   Loan       Monthly payment   Principal in 10y
5.125% APR, 0 points    $26,000       $0             $99,000    $539.04           $80,830
5.0363% APR, 1 point    $25,000       $1,000         $100,000   $539.04           $81,430
5.000% APR, 1 point     $25,000       $1,000         $100,000   $536.82           $81,342

In all 3 cases the payment at closing is the same. The first two have identical monthly payments; that is what I meant by getting a "fair rate" if you pay 1 point. It seems that the third scenario is a clear winner, since we save approximately $2.22 per month for 30 years while paying the same money at the closing.

I could be saving by taking the discounted offer, but only if I plan to keep the loan for the full 30 years. However, as 66. noted, I may lose later if I make prepayments, refinance, or sell the house early, because the principal balance, say, 10 years from now is higher in the 2nd and 3rd scenarios. (Igny (talk) 01:18, 9 May 2010 (UTC))
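The "Principal in 10y" column can be reproduced from the standard remaining-balance formula; a minimal sketch (remaining_balance is an illustrative helper; results land within a few dollars of the table because the payments above are rounded):

    def remaining_balance(pv, annual_rate, payment, months_elapsed):
        # Balance after months_elapsed payments: the debt accrues interest
        # at the periodic rate while each payment is credited with interest.
        i = annual_rate / 12
        g = (1 + i) ** months_elapsed
        return pv * g - payment * (g - 1) / i

    print(remaining_balance(99000, 0.05125, 539.04, 120))    # scenario 1
    print(remaining_balance(100000, 0.050363, 539.04, 120))  # scenario 2
    print(remaining_balance(100000, 0.05, 536.82, 120))      # scenario 3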

Uniform distribution

If we assume that a number on a computer screen ticks over every three seconds (say, percentage downloaded or something), and the person previously sitting at the computer walks out of the room, makes a cup of tea and walks back in, then the point along the 0-3 second cycle at which they walk in (X, say) is uniformly distributed on [0,3], right? So if walking in within 0.1 seconds of the number ticking over counts as seeing it tick over as soon as one walks in, then P(seeing a tick) = 0.1/3 = 1/30.

However, what if the length of the cycle is itself a random variable? What if, due to fluctuations in signal strength (or something), the time it takes for the number to tick over, Y, is itself uniformly distributed on [0.5,5.5]; what then is the distribution of the cycle one walks in on, and hence P(seeing a tick), defined as above? It Is Me Here t / c 11:45, 3 May 2010 (UTC)

The probability of walking in during a cycle of length l is proportional to l. Therefore (after some calculations), the result is 1/30. -- Meni Rosenfeld (talk) 13:32, 3 May 2010 (UTC)
What calculations, if I may? It Is Me Here t / c 14:10, 3 May 2010 (UTC)
In fact, there's a more general result: Let t be the mean time in seconds between two ticks. Then, trivially, the mean number of ticks per second is 1/t, and therefore the expected number of ticks observed during a randomly chosen s-second interval is s/t. This holds regardless of the distribution of interval lengths. Further, if the distribution is such that no more than one tick can occur within any s-second interval, then the probability of observing a tick during such an interval is also s/t. —Ilmari Karonen (talk) 15:07, 3 May 2010 (UTC)
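For the record, the calculation behind the 1/30 (a sketch, assuming the cycle one walks in on is length-biased): with f_Y(l) = 1/5 on [0.5, 5.5] and E[Y] = 3,

    f_{\text{observed}}(l) = \frac{l\,f_Y(l)}{E[Y]} = \frac{l}{15},
    \qquad 0.5 \le l \le 5.5,

and since within a cycle of length l ≥ 0.5 the chance of landing in the final 0.1 seconds is 0.1/l,

    P = \int_{0.5}^{5.5} \frac{0.1}{l}\cdot\frac{l}{15}\,dl
      = \frac{0.1 \times 5}{15} = \frac{1}{30},

agreeing with Ilmari Karonen's general s/t with s = 0.1 and t = 3.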

Ramsey Theory and TCS

How are results of Ramsey theory applied in the field of theoretical computer science? Thanks-Shahab (talk) 13:32, 3 May 2010 (UTC)

Um, try google? 69.228.170.24 (talk) 18:24, 5 May 2010 (UTC)

line and curve fitting algorithms

I'm interested in learning about curve fitting using something considerably more sophisticated than the traditional least squares method. Least-squares has two significant disadvantages, I think:

  • The distances it minimizes are always along lines perpendicular to the x-axis, meaning that, in effect, x coordinates are always assumed to be perfectly accurate.
  • By squaring the errors, outlying points are given disproportionate weight. The farther out a point is, the less weight I'd like to give it (not more, as squaring it does). Ideally I'd like the fit to embody an assumption of Gaussian error distribution.

I'm also interested in fitting fancier curves than simple lines or polynomials -- ideally I'd like to be able to fit parametric equations (x=f(t) and y=g(t)), as well.

I'm sure this is a well-studied problem, but I don't know where to start. Thanks for any pointers. —Steve Summit (talk) 18:53, 3 May 2010 (UTC)

  • For the problem of x values being assumed accurate, see Errors-in-variables models. But usually accounting for measurement errors is unnecessary, see this.
  • I think your intuition with regard to squaring the errors is wrong. In fact the least-squares model does embody an assumption of a Gaussian error distribution. It looks like what you really want is an error model which is a mixture of a Gaussian with a small, non-negligible chance of being an extreme outlier. Then the weight will increase at first but decrease past the point where the error could plausibly have come from the Gaussian part. This is less elegant computationally, but I'm sure something can be worked out.
For non-linear/polynomial curves, you need Nonparametric regression. Kernel regression is simple and effective, but the more general Local regression has some advantages. I don't know how much fitting a parametric equation has been studied, but I suspect that simple modifications can address it. -- Meni Rosenfeld (talk) 19:23, 3 May 2010 (UTC)
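One standard way to realize the "downweight the outliers" idea discussed above is a robust loss such as Huber's, fitted by iteratively reweighted least squares; a minimal Python sketch (huber_line_fit and its delta threshold are illustrative, not anything from the thread):

    import numpy as np

    def huber_line_fit(x, y, delta=1.0, iters=50):
        # Fit y ~ slope*x + intercept. Residuals larger than delta get
        # weight delta/|r| instead of 1, so outliers are damped rather
        # than amplified the way plain squaring amplifies them.
        A = np.column_stack([x, np.ones_like(x)])
        beta = np.linalg.lstsq(A, y, rcond=None)[0]   # ordinary LS start
        for _ in range(iters):
            r = y - A @ beta
            w = np.minimum(1.0, delta / np.maximum(np.abs(r), 1e-12))
            sw = np.sqrt(w)
            beta = np.linalg.lstsq(sw[:, None] * A, sw * y, rcond=None)[0]
        return beta   # [slope, intercept]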
Our kernel regression article is unsatisfying. It doesn't seem to address how to choose a bandwidth, which is typically done by cross-validation. All of Nonparametric Statistics by Larry Wasserman discusses these matters, but I'm neutral as to how good the book is. -- Meni Rosenfeld (talk) 19:32, 3 May 2010 (UTC)
Squaring the errors gives the maximum likelihood estimate under an assumption of a Gaussian distribution. Outliers are given higher weight precisely because they are unlikely to occur under a Gaussian distribution. The first level of improvement is probably a technique like total least squares, which considers errors in both x and y. You might also consider other general techniques like generalized least squares. There are also various iterative procedures for deciding whether an outlier is so improbable that you are better off assuming it is totally erroneous and excluding it, but I don't see any articles about that. Dragons flight (talk) 19:27, 3 May 2010 (UTC)
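To illustrate the total least squares idea (errors in both coordinates), here is a minimal sketch fitting a line through the first principal direction of the centered data; the data are made up for the example:

    import numpy as np

    rng = np.random.default_rng(0)
    x = np.linspace(0, 10, 50) + rng.normal(0, 0.3, size=50)  # noise in x too
    y = 2.0 * x + 1.0 + rng.normal(0, 0.3, size=50)

    # Total least squares: the fitted line passes through the centroid
    # along the singular direction of largest variance, which minimizes
    # the sum of squared *perpendicular* distances to the line.
    pts = np.column_stack([x, y])
    centroid = pts.mean(axis=0)
    _, _, vt = np.linalg.svd(pts - centroid)
    dx, dy = vt[0]
    slope = dy / dx
    intercept = centroid[1] - slope * centroid[0]
    print(slope, intercept)   # close to the true 2.0 and 1.0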

There is a very very extensive literature on this. You might start with some of the Wikipedia articles mentioned above.

You say both that you want to assume a Gaussian error distribution, and that you don't want to use least squares. But least squares is in some senses optimal for a Gaussian error distribution, and in fact the reason it's called "Gaussian" is that Gauss showed that it's the error distribution for which least squares is optimal in at least one sense. I see someone's mentioned above that least squares coincides with maximum likelihood when the errors have a Gaussian distribution. Michael Hardy (talk) 01:47, 4 May 2010 (UTC)

"The farther out a point is, the less weight I'd like to give it (not more, as squaring it does)." That is easily done. Just pick one point to be credible and discard all the others. (That is what many people do, I think, but good statisticians do not throw valid data away.) Bo Jacoby (talk) 14:00, 5 May 2010 (UTC).

Thanks for all the pointers, and for correcting my misapprehension about outliers. —Steve Summit (talk) 13:21, 7 May 2010 (UTC)