Wikipedia:Reference desk/Mathematics: Difference between revisions

From Wikipedia, the free encyclopedia
There are also more minor prizes in addition to the jackpot - these would affect the expected value a lot. Very good point that the expected value of a ticket must be 50p on average for the Lotto - is it the same for Euromillions? And are these the only UK lotteries that have rollovers? And what is the normal prize fund - I think, maybe I'm wrong, that the lottery TV rollover adverts only quote the forecast jackpot size, rather than the total prize fund. I'm not sure if extra ticket sales would be reflected in a bigger jackpot for that draw or if the extra sales money goes to make the prize-money in the next draw. And do the smaller prizes vary in value, or are they fixed? Sorry, so many questions. [[Special:Contributions/80.0.121.236|80.0.121.236]] ([[User talk:80.0.121.236|talk]]) 12:13, 8 February 2008 (UTC)
:I don't know about anything other than Lotto. With that, the prize fund for any given week is 50% of the total ticket sales for that week, plus any rolled over jackpots. Extra ticket sales due to there being a rollover would be included, but would actually reduce the expected net gain, since the effect of there being more tickets out there is more significant than the extra prize money. I haven't really watched the lottery for a while, but I know they used to give both the total prize fund and the jackpot fund before the draw - I would assume both numbers are available somewhere in advance. The only fixed prize is £10 for 3 numbers, all the others are done in terms of percentage - I think it's x% split between all the people that get 4 balls, y% split between all the people that get 5 balls, etc., with the x, y... being fixed (and published on the official website somewhere, as I recall). --[[User:Tango|Tango]] ([[User talk:Tango|talk]]) 14:03, 8 February 2008 (UTC)
::"Extra ticket sales due to there being a rollover would be included, but would actually reduce the expected net gain, since the effect of there being more tickets out there is more significant than the extra prize money." Does this mean it's impossible in practice to get an expected value greater than the ticket price? I'm curious to see any statistics that demonstrate that rollovers decrease the expected value due to the increase in ticket sales. Does anyone know where I can find the stats to calculate the expected value - I need a) the total prize fund, and b) the number of tickets sold - both figures now seem to be kept secret. [[Special:Contributions/80.3.47.25|80.3.47.25]] ([[User talk:80.3.47.25|talk]]) 23:23, 8 February 2008 (UTC)
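To make the dilution effect concrete, here is a toy model (my own sketch, using only the 50%-of-sales figure quoted above; the real Lotto prize structure is more complicated):

```python
def expected_net_gain(ticket_price, tickets_sold, rollover=0.0):
    """Expected winnings minus ticket price, assuming the prize fund
    is 50% of sales plus any rollover, shared evenly over all tickets
    (a gross simplification of the real prize structure)."""
    prize_fund = 0.5 * ticket_price * tickets_sold + rollover
    return prize_fund / tickets_sold - ticket_price
```

Under this model the expectation without a rollover is always minus half the ticket price; a rollover adds rollover/tickets_sold, a bonus that shrinks as extra tickets are sold.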


== <math>\lambda</math> calculus ==

Revision as of 23:23, 8 February 2008

Welcome to the mathematics section
of the Wikipedia reference desk.
Select a section:
Want a faster answer?

Main page: Help searching Wikipedia


How can I get my question answered?

  • Select the section of the desk that best fits the general topic of your question (see the navigation column to the right).
  • Post your question to only one section, providing a short header that gives the topic of your question.
  • Type '~~~~' (that is, four tilde characters) at the end – this signs and dates your contribution so we know who wrote what and when.
  • Don't post personal contact information – it will be removed. Any answers will be provided here.
  • Please be as specific as possible, and include all relevant context – the usefulness of answers may depend on the context.
  • Note:
    • We don't answer (and may remove) questions that require medical diagnosis or legal advice.
    • We don't answer requests for opinions, predictions or debate.
    • We don't do your homework for you, though we'll help you past the stuck point.
    • We don't conduct original research or provide a free source of ideas, but we'll help you find information you need.



How do I answer a question?

Main page: Wikipedia:Reference desk/Guidelines

  • The best answers address the question directly, and back up facts with wikilinks and links to sources. Do not edit others' comments and do not give any medical or legal advice.


January 30

grid coordinates from sequence number

This is a very elementary question, but I am no good at math.

I am writing a program and I need a function to determine the coordinates of an element in a table given its serial position (counting left to right, top to bottom) and the number of rows and columns. To visualise, I might have a grid like this.

1234
5678

I might want to look up the row and column of 6, which would be column 2, row 2. Element 4 would have column 4, row 1. I have a feeling I should use mod somehow, but I can't figure out how. Thanks! -- 71.86.121.200 (talk) 02:21, 30 January 2008 (UTC)[reply]

Numbering from 0 would make the relationship more obvious.
0123
4567
The number 6 is located in row 1, column 2 (the left and top are row/column 0). The table is 4 cells wide. The relationship between those numbers is (serial=row*width+column). To reverse that, row=serial/width (rounded down) and column=serial-row*width, a.k.a. serial mod width. If you insist on numbering from 1, you'll have to subtract 1 from serial first and then add 1 to the results afterward. --tcsetattr (talk / contribs) 03:06, 30 January 2008 (UTC)[reply]
Thanks! That looks right. Plugging that equation into my calculator (I told you I'm not good at math ;-) I see that row=(serial-column)/width, which eliminates the need to round down. -- 71.86.121.200 (talk) 03:54, 30 January 2008 (UTC)[reply]
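tcsetattr's relations translate directly into code; divmod does the divide-and-remainder in one step (the function names are my own):

```python
def coords(serial, width):
    """0-based: cell `serial` (counting left to right, top to bottom)
    sits at row serial // width, column serial % width."""
    return serial // width, serial % width

def coords_1based(serial, width):
    """1-based variant: subtract 1 from serial first, then add 1 back
    to both results, as described above."""
    row, col = divmod(serial - 1, width)
    return row + 1, col + 1
```

For the original example, coords_1based(6, 4) gives (2, 2) and coords_1based(4, 4) gives (1, 4).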

Finding the correct sequence for painting

A PVC figure like this one requires manual painting.

http://www.entertainmentearth.com/images/%5CAUTOIMAGES%5CORG40601.jpg

Suppose it has 3 sections that require painting: A, B and C. And suppose each section has a painting failure rate. If a painting failure occurs, the whole thing must be discarded, resulting in a loss of effort.

So the question is, in what sequence should the painting occur? My gut feeling is that the section with the highest failure rate should be painted first, and then the second highest, and so on until the section with the lowest failure rate is painted last.

Now if each section requires a different amount of effort, then this changes the problem. So what algorithm should be followed to find the sequence that minimizes the expected loss of effort?

For a figure with only 3 sections, the search is trivial and brute force can be used. But if a figure has 17 painting sections, each with an effort cost and a failure rate, then an efficient way of finding the optimum sequence is required.

Example:

sectionA {effort=5, failure=0.2}
sectionB {effort=1, failure=0.8}
sectionC {effort=20, failure=0.1}

Is this type of problem a hard problem to solve?

202.168.50.40 (talk) 02:30, 30 January 2008 (UTC)[reply]

Does it become clear only after the whole effort of painting one section has been expended that it resulted in failure, or is failure typically, on average, detected more or less halfway through?  --Lambiam 07:59, 30 January 2008 (UTC)[reply]
Never mind, I don't think that makes a difference.  --Lambiam 08:26, 30 January 2008 (UTC)[reply]
You can simply work out the average amount of wasted effort. With your numbers: 5*0.2, 1*0.8, 20*0.1 = 1, 0.8, 2. Then you would do it in the order of LEAST wasted to MOST wasted (B, A, C), as you want to waste minimal effort on something that fails.--58.111.143.164 (talk) 08:32, 30 January 2008 (UTC)[reply]
For a thorough analysis you can use a Decision tree to calculate the Expected value. First choose the order you wish to use, say ABC.
Start -   A passes  - B passes  - C passes (effort 26, prob = .8 * .2 * .9)
      \            \            \ C fails  (effort 26 + E, prob = .8 * .2 * .1)
        \            \ B fails             (effort 6 +E, prob = .8 * .8)
          A fails                          (effort 5+E, prob = .2)
where E is the expected effort. This will give you a formula E=26*.8*.2*.9+(26+E)*.8*.2*.1+(6+E)*.8*.8+(5+E)*.2. You can solve this for E. Repeat for each combination and voilà, the order with the least expected effort is the one. If you have 17 steps then this will be too much to do by hand, but a computer should be able to cope. --Salix alba (talk) 09:18, 30 January 2008 (UTC)[reply]
If you have 17 steps, then there are 17! = 355687428096000 possible orderings of the sections. In general, the complexity of this method grows as the factorial function, which is going to become infeasible very quickly. I think the "brute force" method mentioned in the original post is what you're describing, and 202.168.50.40 wants a more efficient algorithm. —Bkell (talk) 12:18, 30 January 2008 (UTC)[reply]
Highest failure rate first would fail for Job1 having cost 1 and failure rate 0.1, and Job2 having cost 100 and failure rate 0.2: because the effort put into Job1 is so much smaller, it's less of a loss. Anon; your method would fail for Job1 having cost 20 and failure rate 0.8 while Job2 has cost 20 and failure rate 0.1. Here you would clearly want to do the unlikely job first, but this has a higher amount of wasted cycles. I think perhaps it's possible to solve this by comparing elements pairwise. I.e. the sequence A.B has an expected wastage of 5*.2+(5+1)*.8=5.8, while B.A has 1*.8+(1+5)*.2=2, so B should go before A. Then compare A with C: 5*.2+(5+20)*0.1=3.5 or 20*0.1+(5+20)*0.2=7, so A should go before C. This sorting could in theory be done in O(n log n) time. Taemyr (talk) 13:34, 30 January 2008 (UTC)[reply]
I believe I can provide a reasonably good argument that, to minimize the expected waste of effort, the pieces should be painted in descending order of failure rate per effort. To see why, consider dividing each task into arbitrarily many small subtasks, each requiring the same amount of effort and, for subtasks of the same original task, having the same probability of failure. Since the effort per subtask is constant, by the same argument as in the constant-effort case above, they should be done in descending order of probability of failure — but this is achieved exactly when the original tasks are ordered by decreasing probability of failure per effort.
This proof sketch implicitly assumes that failures are detected as soon as they happen and that the failure rate is constant throughout each task, but as Lambiam mentions above, I don't think these assumptions should actually make any difference. It should be possible to construct a more rigorous proof by considering the integral of failure rate over effort, but I'm not going to start doing that just now. —Ilmari Karonen (talk) 07:25, 31 January 2008 (UTC)[reply]
Hmm. I had already given a counterexample to this approach. There are two reasons that your argument fails. The easiest follows from the assumption that all work devoted to a task is lost when a failure occurs. This means that the task cannot be divided into smaller tasks. Secondly, the probability of failure during one of the smaller tasks would not be independent of the number of tasks the original is divided into. In my example above, if the job with cost 100 and probability of failure 0.2 were divided into 100 tasks with a constant chance of failure, such that the overall chance of failure across these 100 tasks was 0.2, the probability of failure in any single one of them would be less than 0.1. So these tasks should go after the single task that we divided the task with cost 1 and probability 0.1 into. Taemyr (talk) 06:51, 9 February 2008 (UTC)[reply]
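The brute-force approach discussed above (evaluate the decision-tree expectation for every ordering, then take the minimum) can be sketched as follows. The closed form for E comes from solving the restart equation that Salix alba writes out; the function and variable names are my own:

```python
from itertools import permutations

def expected_effort(order, effort, fail):
    """Expected total effort to get every section painted, restarting
    from scratch whenever a section fails.  Solving the restart
    equation E = sum_k P(reach k)*f_k*(C_k + E) + P(all pass)*C_n
    for E gives E = (sum_k P(reach k)*f_k*C_k + P(all pass)*C_n) / P(all pass)."""
    num = 0.0
    reach = 1.0   # probability this attempt reaches the current section
    spent = 0.0   # cumulative effort spent so far in this attempt
    for s in order:
        spent += effort[s]
        num += reach * fail[s] * spent   # fail here: lose `spent`, restart
        reach *= 1.0 - fail[s]
    num += reach * spent                 # all sections pass
    return num / reach

effort = {'A': 5, 'B': 1, 'C': 20}
fail = {'A': 0.2, 'B': 0.8, 'C': 0.1}

# Brute force over all orderings (feasible for small n only;
# 17 sections would need 17! evaluations, as noted above).
best = min(permutations(effort), key=lambda o: expected_effort(o, effort, fail))
```

For the example numbers, the order ABC gives E = 62.5 (matching Salix alba's equation), and the minimum is attained at B, A, C, agreeing with Taemyr's pairwise comparison.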

Wow! I'm really impressed! I wonder how many anime otaku have actually considered the mathematics of painting anime PVC figures before they start painting their garage kits. I guess mathematics applies everywhere, even where you would have least expected it. Thanks. Oh, by the way, here is another nice figure http://www.tmpanime.com/images/medium/4560228201475_MED.jpg 122.107.141.142 (talk) 10:40, 31 January 2008 (UTC)[reply]

Polynomial Fitting

I know that you can put a polynomial through any finite set of points, no two of which are directly vertical. Can you do the same thing with a power series and an infinite set of points? Does it matter if there are accumulation points? Black Carrot (talk) 04:18, 30 January 2008 (UTC)[reply]

If you take the infinite set of points to be *every* point, then specifying the value at those points is simply specifying the function, and some functions do not have expressions as power series. If you take an infinite set of points including a limit point, then a function continuous on those points will have to have its value on the limit point to be the limit of the values converging to it. Convergent power series expansions tend to be continuous, so this gives some restrictions on accumulation points. If you take the points to be on the real line, then accumulation points and arbitrary values do not mix.
If you take an infinite set of points with no accumulation point, then I believe a variation on Weierstrass factorization provides a function with prescribed values on the points, and that has a convergent power series expansion. The discussion of this is in the section of (Rudin 1987, pp. 304–305) called "An interpolation problem", and the result itself is theorem 15.13.
  • Rudin, Walter (1987), Real and complex analysis (3rd ed.), New York: McGraw-Hill Book Co., ISBN 978-0-07-054234-1, MR 0924157
The theorem even says you can prescribe finitely many values of the derivative at each point as well. JackSchmidt (talk) 05:51, 30 January 2008 (UTC)[reply]
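For the finite case mentioned at the top of the thread, the claim is constructive: the Lagrange interpolation formula produces a polynomial through any finite set of points with distinct x-coordinates. A sketch of that (my own code; the infinite/power-series case discussed above is genuinely harder and not covered by this):

```python
def lagrange(points):
    """Return the unique polynomial of degree < n through n points
    with distinct x-coordinates, as an evaluatable function."""
    def p(x):
        total = 0.0
        for i, (xi, yi) in enumerate(points):
            term = yi
            # basis polynomial: 1 at xi, 0 at every other xj
            for j, (xj, _) in enumerate(points):
                if j != i:
                    term *= (x - xj) / (xi - xj)
            total += term
        return total
    return p

p = lagrange([(0, 1), (1, 3), (2, 7)])   # fits y = 1 + x + x^2
```

Evaluating p at the given points reproduces the prescribed values exactly (up to floating-point error), and p(3) = 13 as 1 + 3 + 9 predicts.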

integration and undesirable points

The purpose behind this post is an attempt to expand the integration space to include functions with undesirable points where the function could be undefined. I need help to understand whether this post makes sense. The main question here: could this way be more general than using measure theory to expand integration?

Assume the function F(x),

F:[a,b]→R 

We will put the interval [a,b] as a combination (union) of subsets, UGk,

UGk = GN U GQ U GQ̀ ... etc. (U here stands for the combination/union symbol), where GN, GQ, GQ̀ stand for the natural set, rational set, irrational set, etc.


Let's define the subsets

өi = {gk : F(xi) ≥ gk ≥ 0},

xi ∈ [a,b].

Let (pi) be a partitioning of Gk in [a,b] and (Pөi) a partitioning of (өi),

(Mөi = sup өi) and (mөi = inf өi),

(UFөi,pi) = Σi Mөi(xi − xi−1), the upper Darboux sum,

(LFөi,pi) = Σi mөi(xi − xi−1), the lower Darboux sum,

(UFөi,pi,Pөi) = inf{UFөi : pi, Pөi, with (Pөi) a partitioning of (өi) and (pi) a partitioning of (Gk)},

(LFөi,pi,Pөi) = sup{LFөi : pi, Pөi, with (Pөi) a partitioning of (өi) and (pi) a partitioning of (Gk)}.

Now we put the integration in the form

∫Fөi over Gk = {0, UFөi} = {0, LFөi} = the subsets sөi,

∫F over [a,b] = U sөi.

For example:

f(x) : [0,1] → R,

f(x) = x, x ∈ the irrational numbers Q̀, within [0,1],

f(x) = 1, x ∈ the rational numbers Q, within [0,1],

∫F over [0,1] = {0,1/2}Q̀ U {0,1}Q = {0,1/2}R U {1/2,1}Q (Q̀, Q and R are suffixes here and U stands for the combination/union symbol).


This way enables us to exclude the undesired points from the integration, where f(x) may be undefined. 88.116.163.226 (talk) 11:02, 30 January 2008 (UTC)husseinshimaljasim[reply]

What you are talking about seems to be similar to the difference between Riemann integration and Lebesgue integration. In Lebesgue integration, if two functions are identical everywhere except a set of measure zero (very roughly, a set of points that isn't too big), then their integrals will be equal. Thus, for example, a function which is continuous everywhere except x=0, where it is infinite, will have the same integral as the same function, but where at x=0 it takes the value 0, or similar. Since the rationals have measure zero over the reals, you could even have a function which is poorly-behaved over an infinite set of points (as long as it's a "small" infinite), and still be integrable. Confusing Manifestation(Say hi!) 04:37, 31 January 2008 (UTC)[reply]

Well, I don't think so. In Lebesgue integration, the above example would be μ(0,1/2) in the Q̀ set + μ(0,1) in the Q set = 1/2 + 0 = 1/2. I am suggesting keeping the integration in the form of a combination of subsets, regardless of whether they are countable or not. Wouldn't this be more general? 210.5.236.35 (talk) 14:36, 31 January 2008 (UTC)husseinshimaljasim[reply]

equation of natural numbers.

If N1, N2, s and k represent 4 natural numbers, then why does the equation (N1)^s + 2 = (N2)^k, where N1, N2, s and k ∈ (N-SET), have only one solution in N-SET, namely N1=2, s=1, N2=2, k=2? I guess. 88.116.163.226 (talk) 11:19, 30 January 2008 (UTC)husseinshimaljasim[reply]

I don't think 2^1+2=2^2 is the only solution. Isn't 5^2+2=3^3 also a solution ? Unless maybe "N-SET" has some special meaning ?? Gandalf61 (talk) 11:29, 30 January 2008 (UTC)[reply]
There appear to be lots of trivial solutions, such as s = k = 1 with N2 = N1 + 2. —Bkell (talk) 12:09, 30 January 2008 (UTC)[reply]

What was I thinking!? I must be needing some sleep. 210.5.236.35 (talk) 15:56, 30 January 2008 (UTC)husseinshimaljasim[reply]
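A small brute-force search (my own code; the bounds are arbitrary) turns up both the non-trivial solution above and the trivial s = k = 1 family:

```python
# All (N1, s, N2, k) with N1^s + 2 = N2^k in a small search box.
solutions = [
    (n1, s, n2, k)
    for n1 in range(1, 30)
    for s in range(1, 6)
    for n2 in range(1, 30)
    for k in range(1, 6)
    if n1 ** s + 2 == n2 ** k
]
```

Gandalf61's 5^2 + 2 = 3^3 appears as (5, 2, 3, 3), the original 2^1 + 2 = 2^2 as (2, 1, 2, 2), and every (n, 1, n+2, 1) is a trivial solution.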

Capital-sigma notation

From the article Summation: Mathematical notation has a special representation for compactly representing summation of many similar terms: the summation symbol, a large upright capital Sigma. This is defined thus: ∑_{i=m}^{n} a_i = a_m + a_{m+1} + ⋯ + a_{n−1} + a_n.

The subscript gives the symbol for an index variable, i. Here, i represents the index of summation; m is the lower bound of summation, and n is the upper bound of summation. Here i = m under the summation symbol means that the index i starts out equal to m. Successive values of i are found by adding 1 to the previous value of i, stopping when i = n.

An example of the above would be? --Obsolete.fax (talk) 13:40, 30 January 2008 (UTC)[reply]
Isn't the equation an example in and of itself? I suppose you could have, say - is that what you mean? -mattbuck 13:50, 30 January 2008 (UTC)[reply]
By definition, "example" is something specific, while the equation above is quite general. I think the OP is looking for something like:
-- Meni Rosenfeld (talk) 13:57, 30 January 2008 (UTC)[reply]
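The definition reads off directly in code: the index runs from the lower bound to the upper bound inclusive, stepping by 1. A tiny illustration (my own, for ∑ of i itself from m to n):

```python
# Σ_{i=m}^{n} i: the index i starts at the lower bound m and
# increases by 1 until it reaches the upper bound n (inclusive).
m, n = 1, 100
total = sum(i for i in range(m, n + 1))   # → 5050
```

With m = 1 and n = 100 this is the classic 1 + 2 + ⋯ + 100 = 100·101/2 = 5050.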

Spearman correlation coefficient.

I have been reading on how to calculate the correlation coefficient using Spearman's formula, but I got confused when ranking scores of this nature. According to the ranking procedure, the three scores of 64 in X occupy positions 9th, 10th and 11th respectively. Now is it true that the rank of each of these numbers is 10? The data:

X: 22 24 30 40 45 50 50 52 64 64 64 72 78 78 84 90
Y: 36 24 25 20 48 44 40 56 62 68 56 32 78 68 68 58

—Preceding unsigned comment added by Nkomali (talkcontribs) 14:31, 30 January 2008 (UTC)[reply]

Yes. See Spearman's rank correlation coefficient. If several values are tied, the rank of each of them is the average of the positions they occupy. -- Meni Rosenfeld (talk) 16:16, 30 January 2008 (UTC)[reply]
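The tie-averaging rule can be sketched as follows (my own helper, standard library only). For the X data above, the three 64s occupy positions 9-11 and all get rank 10, and the two 50s share rank 6.5:

```python
def average_ranks(xs):
    """Ranks 1..n, with tied values assigned the mean of the
    positions they occupy, as in Spearman's rank correlation."""
    order = sorted(range(len(xs)), key=lambda i: xs[i])
    ranks = [0.0] * len(xs)
    i = 0
    while i < len(order):
        j = i
        # extend j over the run of equal values starting at i
        while j + 1 < len(order) and xs[order[j + 1]] == xs[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1   # mean of the 1-based positions i+1 .. j+1
        for k in range(i, j + 1):
            ranks[order[k]] = avg
        i = j + 1
    return ranks

X = [22, 24, 30, 40, 45, 50, 50, 52, 64, 64, 64, 72, 78, 78, 84, 90]
ranks = average_ranks(X)
```

(For real work, scipy.stats.rankdata implements the same averaging rule.)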

Dual Curve

Hello,

I've been looking at the article on dual curves, but it wasn't very informative.

Am I right in saying that this is the construction for the dual curve:

For each point on the curve, draw the tangent. Then take the perpendicular to that tangent that passes through the origin. Labelling the point at the intersection of those two lines P, take the point H on that perpendicular line at distance 1/OP from the origin. The set of all such points H forms the dual curve.

If that's the case, doesn't that give rise to two different curves: one when we choose H on the same side of the origin as P, and one when H is on the other side?

Finally, why does this rather unlikely combination of constructions lead to a curve of any interest, seeing as the position of the origin affects the resulting curve?

Thanks. --Xedi (talk) 23:31, 30 January 2008 (UTC)[reply]

I can't comment on your construction, but Duality_(projective_geometry)#Points_and_lines_in_the_plane might help clarify duality's goal. As it happens, given certain conditions, absolutely everything that can be proven in geometry has a corresponding dual proof that exchanges lines with points. For instance, every pair of points determines a line that crosses them, and every pair of lines (unless they're parallel) determines a point of intersection. Three lines determine a triangle, as do three points, etc. The dual of a curve is the corresponding curve with similar properties, where points have been exchanged with tangents. Black Carrot (talk) 00:43, 31 January 2008 (UTC)[reply]
Xedi does pick up an important point, which is not really represented in the article. We should really be working in the projective plane, so points and lines are represented by triples (x,y,z) subject to the equivalence relation (ax, ay, az) ~ (x,y,z). The ambiguity happens when we project from the projective plane into R². Yes, you can get different curves in R², but they will be projectively equivalent. --Salix alba (talk) 08:44, 31 January 2008 (UTC)[reply]
Thanks ! I'll have to investigate projective geometry a bit further. -- Xedi (talk) 17:28, 31 January 2008 (UTC)[reply]
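Xedi's construction can also be checked numerically. The sketch below (my own naming) computes H from a point and a tangent direction, assuming the tangent does not pass through the origin; for the unit circle centred at the origin, H coincides with the original point, so that circle is self-dual under this construction:

```python
import math

def dual_point(pt, tangent_dir):
    """Foot of the perpendicular from the origin O to the tangent line,
    rescaled along the same ray to distance 1/|OP| from the origin.
    Undefined (division by zero) if the tangent passes through O."""
    px, py = pt
    dx, dy = tangent_dir
    # foot of the perpendicular from O onto the line pt + s*(dx, dy)
    s = -(px * dx + py * dy) / (dx * dx + dy * dy)
    fx, fy = px + s * dx, py + s * dy   # this is P
    r2 = fx * fx + fy * fy              # |OP|^2
    return fx / r2, fy / r2             # |OH| = |OP| / |OP|^2 = 1/|OP|

t = 0.3  # a sample point on the unit circle, with its tangent direction
h = dual_point((math.cos(t), math.sin(t)), (-math.sin(t), math.cos(t)))
```

Here h comes out equal to (cos t, sin t), the sample point itself.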


January 31

∫ operator

I have looked at the page on the operator ∫, but I don't really understand it. Would someone please explain the function of ∫ in layman's terms? I think it means integral or something, but I don't know. Zrs 12 (talk) 02:43, 31 January 2008 (UTC)[reply]

Take a look at integral, but the easiest way to describe it might be "area under the curve". Or to give you a better idea of some of its uses, maybe it would be helpful to know that position is the integral of velocity, and velocity is the integral of acceleration? - Rainwarrior (talk) 04:07, 31 January 2008 (UTC)[reply]
I didn't find a page about the operator itself, could you provide a link? Anyway, the symbol is used for two related, but conceptually different things. The first is for the antiderivative, and the second is for the definite integral. I'll try to explain the second first:
The definite integral can be thought of as a summation process, resulting in the area under a curve, as stated by Rainwarrior. You have limits for integration, denoted by little symbols (say a and b) below and above the ∫. You divide the x-axis in the interval a..b into tiny pieces, and calculate the sum of the products of each tiny x-axis piece multiplied by the corresponding height of the curve. As you reduce the length of each piece of the x-axis, the sum will converge into a well-defined number, provided the function is "well-behaved". This gives you your area, which may have a physical interpretation as stated by Rainwarrior.
The antiderivative of a function f(x) is written as ∫f(x)dx, without the little symbols below and above the ∫. The reason why the same symbol is used, lies in the fundamental theorem of calculus - once you know the antiderivative of a function, say that ∫f(x)dx = F(x) + C, where C is a constant term, you can calculate the area under the curve by simple subtraction - the integral from a to b is F(b)-F(a). However, not all functions have closed-form antiderivatives. --NorwegianBlue talk 16:25, 31 January 2008 (UTC)[reply]
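The summation process described above can be sketched directly in code (my own toy example, using midpoint rectangles): for ∫ of x² from 0 to 1, the sum of width times height converges to the exact value 1/3 as the pieces shrink.

```python
def riemann(f, a, b, n):
    """Midpoint Riemann sum: chop [a, b] into n tiny pieces and add up
    width * height; converges to the definite integral as n grows."""
    dx = (b - a) / n
    return sum(f(a + (i + 0.5) * dx) * dx for i in range(n))

approx = riemann(lambda x: x * x, 0.0, 1.0, 10_000)   # close to 1/3
```

With 10,000 pieces the approximation already agrees with 1/3 to better than six decimal places.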
Note that area under the curve only really makes sense when you're talking about a curve defined on and into the real numbers. It's a good way to start thinking about integration, but be aware that integration can be generalised to many different spaces. -mattbuck 16:52, 31 January 2008 (UTC)[reply]
Thank you all for your input. I went to this [[1]] link and found it was fairly helpful as it provided graphs (there is also a link on this page to derivatives). However, I have one more question. What is the purpose for defining these? Why would anyone need to calculate the area under the curve or get a derivative of a function? Zrs 12 (talk) 20:27, 31 January 2008 (UTC)[reply]
To answer the side question... the integration symbol is a Unicode character, U+222B, which in UTF-8 is e2 88 ab, which leads to http://en.wikipedia.org/wiki/%E2%88%AB being the article about the symbol itself. (I suppose you could make a proper link by putting that single character inside double brackets, but I have bad luck inserting fancy characters into wikipedia articles) --tcsetattr (talk / contribs) 21:28, 31 January 2008 (UTC)[reply]
In response to User:Zrs 12 - Why would anyone need to calculate the area under the curve or get a derivative of a function? - the answer is Calculus and, in particular, Calculus#Applications. Just about every field in physics, and many other sciences, has a set of differential equations that describe some kind of behaviour, and without the ability to differentiate and integrate functions those equations can't be solved. Some examples are Maxwell's equations, which underpin electromagnetism and hence make your computer work, Newton's law of universal gravitation, which tells you how things fall down and how planets orbit each other, the Einstein field equations of General relativity, which do the same thing but better, and everything in Fluid dynamics#Equations of fluid dynamics and aerodynamics, which let ships sail and planes fly. Admittedly these are all more complicated than just an "area under the curve", but they all come from differentiation and integration.
For a simpler example, like User:Rainwarrior said above, differentiation can take you from distance to velocity, and from velocity to acceleration, and integration can take you in the opposite direction. Confusing Manifestation(Say hi!) 22:28, 31 January 2008 (UTC)[reply]
Integration is also important in probability theory, and thus in statistics and hence all the social sciences. Algebraist 14:52, 1 February 2008 (UTC)[reply]
The page integral should be helpful. Ftbhrygvn (talk) 05:37, 5 February 2008 (UTC)[reply]

Determinants

I know that the determinant is a multiplicative map, meaning that det(AB)=det(A)det(B). I also know that det(A+B) is not necessarily the same as det(A)+det(B). Is there a special case where det(A+B)=det(A)+det(B)? What are the restrictions on matrices A and B in this case? A Real Kaiser (talk) 04:47, 31 January 2008 (UTC)[reply]

This is certainly not a general case, but if A and B share a nontrivial eigenspace with eigenvalue 0, then we have det(A+B)=0=0+0=det(A)+det(B). For example, this holds if A and B represent linear transformations with ker A ∩ ker B ≠ {0}.
Also, if A = -B and the dimension of your matrices is odd, then det(A) = -det(B) and you have det(A+B)=det(0)=0=det(A)+det(B). Probably these are not the kind of examples you were looking for. Tesseran (talk) 09:40, 31 January 2008 (UTC)[reply]
det(A) and det(B) are essentially the volumes of parallelepipeds with vectors corresponding to rows or columns of the matrix. So you need something such that the parallelepiped of the vector sums has volume equal to the sum of the volumes of the original parallelepipeds. You can probably construct some with good enough use of algebra, but there doesn't seem to be an easy answer.--Fangz (talk) 16:06, 31 January 2008 (UTC)[reply]

What about if the matrices A and B are (real) orthogonal matrices (of any dimension nxn)? Actually, I am trying to prove that if A and B are real orthogonal matrices and if det(A)=-det(B), then A+B is singular. We can also use the fact that |det(A)|=|det(B)|=1. I was thinking about showing that det(A+B)=0 which would be really easy if det(A+B)=det(A)+det(B)=det(A)-det(A)=0. If the determinant is not additive in this case, then how can I prove the above proposition? A Real Kaiser (talk) 17:17, 31 January 2008 (UTC)[reply]

The fact that |det(A)| = |det(B)| = 1 shouldn't matter: if det(A+B) = 0 then det(λA+λB) = λ^n det(A+B) = 0 for all scalars λ. —Ilmari Karonen (talk) 20:25, 31 January 2008 (UTC)[reply]
I don't think determinants are a good way of solving this problem. Remember that an orthogonal matrix has determinant +1 for a rotation, and -1 for a non-rotation (e.g. a reflection).--Fangz (talk) 01:01, 1 February 2008 (UTC)[reply]
While considering the geometric interpretation is often a good trick for linear algebra problems (at least as motivation of a proof, if not for the proof itself), what is the geometric interpretation of addition of matrices? I may be missing something obvious, but I can't see one... --Tango (talk) 22:28, 1 February 2008 (UTC)[reply]
What we want is, given arbitrary orthogonal operators A and B (A a rotation, B a non-rotation), there exists a vector v s.t. A(v)=-B(v). In other words, we want a v s.t. v=-A⁻¹Bv. -A⁻¹B is just an arbitrary rotation/non-rotation (depending on dimension), so we want all rotations (n odd) / det -1 orthogonal maps (n even) to have an eigenvalue 1. The rest is fairly easy (using the fact that orthogonal matrices are diagonalisable over C). Algebraist 23:03, 1 February 2008 (UTC)[reply]
I just wrote that as I thought it, and realised only at the end that it was in no way geometric. Oops. Looks like I'm incurably tainted with algebra. Algebraist 23:04, 1 February 2008 (UTC)[reply]
I'm not following you - I don't see how that proves the required theorem. For a start, the theorem says det(A)=-det(B), so one is a rotation and one a rotation+reflection, secondly, the theorem says it's over the reals, and you've explicitly mentioned doing it over C (in the real case, an eigenvector with eigenvalue one would correspond to an axis of rotation, which I seem to remember reading only exists for n=3, although it could be that there's more than one for n>3, I'm pretty sure there are none for n=2). Lastly, Av=Bv => A-B is singular, not A+B (this error compensates for the det=-1 vs det=1 error for odd numbers of dimensions, but n odd was not given in the theorem). --Tango (talk) 23:20, 1 February 2008 (UTC)[reply]
Sorry, I got slightly very confused in my typing. The argument works, I just typed it erroneously. And the fact that I'm using C is not a problem; I can give the details if you want. Algebraist 23:46, 1 February 2008 (UTC)[reply]
I'll take your word for it. The question that I'd like an answer to (because it's really bugging me now) is what the geometric interpretation of addition of matrices is. Matrix multiplication is just composition of linear maps, but what is addition? Any ideas? --Tango (talk) 00:15, 2 February 2008 (UTC)[reply]
Well, I suppose you'd have to base it on the geometric interpretation of vector addition, but it's not very nice in any case. Indeed I arrived at my proof by the thought process 'Matrix addition is horrible. Let's get multiplication involved.' Algebraist 00:19, 2 February 2008 (UTC)[reply]
Yeah, I guess so, but I still can't really see it. You're adding a vector to itself, but transforming each copy in some way... I can't see what significance that has. Getting rid of the addition by moving one term onto the other side of the equals sign is a good trick, though. (Of course, it only works if the sum is singular, otherwise you just get a different addition.) --Tango (talk) 01:08, 2 February 2008 (UTC)[reply]
In response to Tango's question: matrix addition is just pointwise addition of linear maps. Let f and g be linear maps from a vector space V to a vector space W. Then we can define the linear map f + g from V to W by (f + g)(x) = f(x) + g(x). The coordinate representation of this is just matrix addition. I would hardly call it 'horrible'. -- Fropuff (talk) 00:42, 2 February 2008 (UTC)[reply]
That's just a restatement of the definition of matrix addition, really. Restated in the same terms, the question becomes: What is the geometric interpretation of pointwise addition of linear maps? A linear map is basically a transformation (stretching, squashing, twisting, rotating, reflecting, whatever) - what does it mean to add two transformations together? --Tango (talk) 01:08, 2 February 2008 (UTC)[reply]

(undent) Would you find it easier to geometrically interpret the mean of two vectors? The sum, after all, is simply twice the mean (and, in particular, the sum of two vectors is zero if and only if their mean is also zero). In terms of geometric operations, to apply the mean of two transformations to a figure, just apply each of the original transformations to the figure and then average them together — i.e. map each point in the original figure to the point halfway between its images under each original transformation. —Ilmari Karonen (talk) 01:34, 2 February 2008 (UTC)[reply]

Thank you! That's an excellent way to look at it. --Tango (talk) 13:46, 2 February 2008 (UTC)[reply]
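Ilmari's averaging picture can be checked numerically. A minimal sketch in plain Python (the example matrices and helper names are my own, not from the thread): the sum of two linear maps acts pointwise, so the mean of A and B sends each point halfway between its two images.

```python
def mat_vec(M, v):
    """Apply a 2x2 matrix (list of rows) to a 2-vector."""
    return [M[0][0]*v[0] + M[0][1]*v[1],
            M[1][0]*v[0] + M[1][1]*v[1]]

def mat_add(A, B):
    """Entrywise matrix addition."""
    return [[A[i][j] + B[i][j] for j in range(2)] for i in range(2)]

A = [[0, -1], [1, 0]]   # rotation by 90 degrees
B = [[2, 0], [0, 2]]    # scaling by 2

v = [1.0, 0.0]
lhs = mat_vec(mat_add(A, B), v)                              # (A+B)v
rhs = [a + b for a, b in zip(mat_vec(A, v), mat_vec(B, v))]  # Av + Bv
midpoint = [x / 2 for x in lhs]   # image of v under the mean of A and B
print(lhs, rhs, midpoint)         # → [2.0, 1.0] [2.0, 1.0] [1.0, 0.5]
```

The midpoint [1.0, 0.5] is exactly halfway between Av = [0, 1] and Bv = [2, 0], as described above.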

Predicate logic

Express the following statement as a well formed formula in predicate logic:

"There is a natural number which is a divisor of no natural number except itself."

Here's what I've put:

∃x ∀y (x∣y ↔ y = x)

I just wanted to see if it was valid. Damien Karras (talk) 20:43, 31 January 2008 (UTC)[reply]

(ec) It strikes me that you're assuming that every natural number is a divisor of itself. That's true, of course (at least, if you adopt the convention that zero divides zero, which I do), but it's not a purely logical truth. --Trovatore (talk) 20:50, 31 January 2008 (UTC)[reply]
(off-topic) Are there really people who have 0 not dividing 0? How bizarre. Algebraist 21:09, 31 January 2008 (UTC)[reply]
I don't think so. We have
,
which is a faithful translation. There may be a hidden assumption underlying the way the phrase is formulated (i.e. the choice of the word 'except'), but it doesn't appear in the formula. Morana (talk) 21:36, 31 January 2008 (UTC)[reply]
I don't agree that your last equivalent is a faithful translation. "No natural number except itself" does not imply that "itself" is included; it simply fails to exclude it. --Trovatore (talk) 21:57, 31 January 2008 (UTC)[reply]
I don't know, your interpretation seems unusual. I tried reformulating the given sentence in all the ways I could think of, but I cannot possibly understand it like you do. Can you justify this interpretation? This is more an issue of grammar than logic. In any case this is an interesting illustration of the many ambiguities of natural language. Morana (talk) 00:13, 1 February 2008 (UTC)[reply]
Seems straightforward to me. When I say "no natural number" unmodified, that means none at all. If I say "except" I'm weakening that claim. Just weakening it, not making any new claim.
Example: If I say "all eighteen-year-olds in Elbonia enter the military, except for the women", I am not wrong if it turns out that some women actually do join the military. --Trovatore (talk) 06:28, 1 February 2008 (UTC)[reply]
Did you notice the negation? I can't run a poll right now, but I argue that if you say "no eighteen-year-olds enter the military, except for the women", any normal (i.e. non-mathematician) person would think that you just claimed that women enter the military.
Do you have any kids? If you do, tell them "you cannot take any candy, except for the blue ones", and tell me if they didn't jump on the blue ones. Morana (talk) 09:11, 1 February 2008 (UTC)[reply]
At the very least the formulation in natural language is ambiguous. If I say "No one except idiots will buy that", I don't claim that idiots will buy that.  --Lambiam 10:56, 1 February 2008 (UTC)

I agree with Trovatore's interpretation. From "no eighteen-year-olds enter the military, except for the women" a normal person would draw the inference that there exist women entering, since otherwise you could have made a stronger statement in a simpler form. However, a valid interpretation would go: I know that no guys enter the military, but I don't know about the women. "you cannot take any candy, except for the blue ones" means that there is no rule saying the kids can't take the blue candy, so of course they take it. I write the statement as ∃x ∀y (x∣y → y = x), using Trovatore's interpretation of except. Taemyr (talk) 10:59, 1 February 2008 (UTC)[reply]
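The two readings of "except" can be compared by brute force over a finite range of naturals; a Python sketch (names and ranges are my own, using the thread's convention that 0 divides 0):

```python
def divides(x, y):
    # convention from the thread: 0 divides 0 (and nothing else)
    return y % x == 0 if x != 0 else y == 0

candidates = range(25)
ys = range(100)   # test divisibility well beyond the candidate range

# Trovatore/Taemyr reading: "except" only weakens the universal claim
weak = [x for x in candidates
        if all(y == x for y in ys if divides(x, y))]

# Morana's reading: x must additionally divide itself
strong = [x for x in weak if divides(x, x)]

print(weak, strong)   # → [0] [0]
```

For the naturals the two readings happen to agree - the only witness is 0 - which fits Trovatore's point that "every natural number divides itself" is true but not a purely logical truth.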

Regression analysis question

Given that X (X-bar) = 3, Sx = 3.2, Y (Y-bar) = 2, and S1.7 = 3.2, and r = -0.7, is the least squares regression line Ŷ (y-hat) = 3.12-0.37x, or something else? I think I'm right (it seems right) but I want to check. 128.227.206.220 (talk) 22:22, 31 January 2008 (UTC)[reply]

It's not exactly right, as Y ≠ 3.12 - 0.37X. Further, X = 3, Sx = 3.2 gives the number of observations as the non-integral 3.2/3, and what is the meaning of S1.7? Nor does mixing x, X, y and Y help. At least one of us is deeply confused.…86.132.166.138 (talk) 11:58, 1 February 2008 (UTC)[reply]
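For what it's worth, the standard summary-statistic formulas for the least-squares line can be checked directly. A sketch assuming the garbled "S1.7 = 3.2" in the question was meant to read S_y = 1.7 (my guess, chosen because it reproduces the asker's line):

```python
x_bar, s_x = 3.0, 3.2
y_bar, s_y = 2.0, 1.7   # assuming S_y = 1.7 was intended
r = -0.7

slope = r * s_y / s_x               # b = r * S_y / S_x
intercept = y_bar - slope * x_bar   # a = y_bar - b * x_bar
print(round(intercept, 2), round(slope, 2))   # → 3.12 -0.37
```

Under that assumption the asker's ŷ = 3.12 − 0.37x comes out right.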

Infinite set containing itself

Is it possible for an infinite set to contain itself? For example, would {{{...{{{}}}...}}} be considered a valid set?

As an unrelated question, how many sets are there? I know it's an infinite number, but which infinite number? — Daniel 23:52, 31 January 2008 (UTC)[reply]

In standard set theory (the study of the von Neumann universe) it is not possible for a set to be an element of itself. There are alternative set theories in which it is -- see anti-foundation axiom. There is a proper class of sets (that is, there are more sets than can be measured by any transfinite cardinal number). --Trovatore (talk) 00:09, 1 February 2008 (UTC)[reply]
It is the possibility of a set containing itself that led to Russell's paradox, that in turn led to the formal axioms of set theory. Confusing Manifestation(Say hi!) 05:38, 1 February 2008 (UTC)[reply]
No, I don't really agree with you there. The Russell paradox was the fruit of conflating the notion of intensional class with the notion of extensional set. It's probably true that historically the antinomies led to the axiomatizations, but it is quite possible to have a non-axiomatic conception of set that avoids them. That non-axiomatic conception also avoids sets that are their own elements, but that isn't the reason it avoids the antinomies. --Trovatore (talk) 06:13, 1 February 2008 (UTC)[reply]
Well, the set of all sets has cardinality greater than that of the real numbers, 2^ℵ₀; in fact it would be greater than that of the power set on the real numbers, 2^(2^ℵ₀). I suppose then you could take sets which are in the power set of the power set of the real numbers, so you get 2^(2^(2^ℵ₀)), etc etc etc. If you assume the continuum hypothesis is true, and that 2^ℵₙ = ℵₙ₊₁ in general, then the cardinality of the set of all sets would I suppose be ℵ_ℵ₀, though I expect it would be larger. -mattbuck 11:54, 2 February 2008 (UTC)[reply]
See Trovatore's answer: in conventional set theory (eg ZF), the class of all sets is not a set, and thus does not have a cardinality at all. Algebraist 14:28, 2 February 2008 (UTC)[reply]
Oh, and ℵ_ℵ₀ doesn't make much sense. I suspect you mean ℵ_ω. Also, you're assuming (some cases of) the generalised continuum hypothesis, not just CH itself. Algebraist 15:18, 2 February 2008 (UTC)[reply]
Mmm, actually ℵ_ℵₙ is a pretty common notation, though it's true I haven't seen it very often for n=0. You could write ℵ_ωₙ, but for whatever reason, people usually don't, in my experience. --Trovatore (talk) 01:12, 5 February 2008 (UTC)[reply]
To the OP: by the way, your notation {{{...{{{}}}...}}} is somewhat ambiguous. It could mean a set A whose only element is A, or it could mean (for example) a set A such that A = {{A}} but A ≠ {A}. Both these (and much more) can exist in the absence of the axiom of foundation. Algebraist 15:23, 2 February 2008 (UTC)[reply]
I don't know anything about foundations (of mathematics), but is it really possible to have A = {{A}} but A ≠ {A}? In what sense can they be different? -- BenRG (talk) 14:02, 3 February 2008 (UTC)[reply]
The usual sense - that one has an element not in the other. I don't know enough about ZF sans foundation to support Algebraist's claim, but I can say that the scenario is not immediately inconsistent. A is different from {A} because A's unique element, {A}, is different from {A}'s unique element, A (yep, sounds circular. But again, I am only showing internal consistency here). -- Meni Rosenfeld (talk) 14:19, 3 February 2008 (UTC)[reply]
The most common anti-foundation axiom, due to Aczel ("every directed graph has a unique decoration") implies that if A={{A}}, then A={A}. Otherwise you would have two different decorations of the graph that has a top element and then just descends infinitely in a straight line.
However, a more ontologically expansive axiom, due to Boffa ("every injective decoration of a transitive subgraph of an extensional graph extends to an injective decoration of the whole graph", or something like that) implies that there is a pair A={B}, B={A}, but where A and B are different. This axiom is consistent with ZF. I did some work on this a long time ago but unfortunately never published it. --Trovatore (talk) 22:22, 3 February 2008 (UTC)[reply]
Um, of course I mean it's consistent with ZF minus the axiom of foundation. Obviously it contradicts the axiom of foundation. --Trovatore (talk) 18:35, 4 February 2008 (UTC)[reply]


February 1

The units of an interest rate

Interest is calculated as A = P(1 + r)^t, where A is the new amount of money, P is the principal, r is the rate, and t is the time elapsed. Since A and P are both measured in the same units, the term (1 + r)^t must be dimensionless. And since t is an amount of time, (1 + r) must be raised to the power of a time. So a 2% annual interest rate would be expressed as 1.02^(1/year). This is a rather strange unit. How do you refer to this kind of unit, if at all? Rannovania (talkcontribs) 01:24, 1 February 2008 (UTC)[reply]

You are confused. t is not a measure of time but a count of terms. If t were a measure of time then it would be in units of seconds. It is a count of how many terms there are, and a count of terms is a dimensionless value. 202.168.50.40 (talk) 03:14, 1 February 2008 (UTC)[reply]

It's probably better to think of the formula as A = P(1 + r)^(t/t₀), where t₀ is the unit of time used for expressing the interest rate. Then with a 2% annual interest rate (compound) the formula comes out as A = P × 1.02^t, where t is the time in years. HTH, Robinh (talk) 08:23, 1 February 2008 (UTC)[reply]
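Robinh's suggestion - divide the elapsed time by the unit of time used for expressing the rate, so the exponent is dimensionless - can be sketched in code. A toy illustration (the unit tracking via strings is my own device):

```python
def compound(principal, rate_per_unit, t, t_unit, rate_unit):
    """Grow principal at rate_per_unit per rate_unit for a time t in t_unit."""
    # the exponent t/t0 only makes sense once both are in the same unit
    assert t_unit == rate_unit, "convert t to the rate's time unit first"
    return principal * (1 + rate_per_unit) ** t

print(compound(100.0, 0.02, 3, "year", "year"))  # 100 * 1.02**3 ≈ 106.12
```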

Quadratic inequalities

I know you are supposed to solve quadratic inequalities like you do quadratic equations, except that at the end, when you get the two solutions, you use an inequality sign instead of the equals sign. My question is, when do you have a solution like 1 < x < 2 and when do you have a solution like x < 1 OR x > 2? —Preceding unsigned comment added by 165.21.155.94 (talk) 05:06, 1 February 2008 (UTC)[reply]

That's a good question, and the very rough answer is that you do it either graphically, or with a couple of test points. So if you have, say, (x-a)(x-b) > 0, then you use the fact that it's concave up to show that the inequality holds on the left branch (x < a, say) and the right branch (x > b) but not on the little in-between interval. For high school mathematics that's about as far as you'd need to go, but for something more rigorous you'd look at the continuity of the function to ground your argument. Confusing Manifestation(Say hi!) 05:37, 1 February 2008 (UTC)[reply]
Take ax² + bx + c > 0 as your inequality, with solutions x₁ ≤ x₂ of the corresponding equation.
If a > 0, then x < x₁ or x > x₂.
If a < 0, then x₁ < x < x₂.
Morana (talk) 05:41, 1 February 2008 (UTC)[reply]
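The case split on the sign of the leading coefficient can be mechanised; a Python sketch (the function name is mine) for a x² + b x + c > 0 when there are two distinct real roots:

```python
import math

def solve_quadratic_gt_zero(a, b, c):
    """Describe the solution set of a*x**2 + b*x + c > 0."""
    disc = b*b - 4*a*c
    assert a != 0 and disc > 0, "sketch handles two distinct real roots only"
    r1 = (-b - math.sqrt(disc)) / (2*a)
    r2 = (-b + math.sqrt(disc)) / (2*a)
    x1, x2 = min(r1, r2), max(r1, r2)
    if a > 0:
        return f"x < {x1} or x > {x2}"    # parabola opens upward
    return f"{x1} < x < {x2}"             # parabola opens downward

print(solve_quadratic_gt_zero(1, -3, 2))   # (x-1)(x-2) > 0  → x < 1.0 or x > 2.0
print(solve_quadratic_gt_zero(-1, 3, -2))  # -(x-1)(x-2) > 0 → 1.0 < x < 2.0
```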

sin^2 x

Is it (sin x)^2 or sin (x^2) or something else? —Preceding unsigned comment added by 165.21.155.91 (talk) 05:11, 1 February 2008 (UTC)[reply]

sin²x means (sin x)². Strad (talk) 05:12, 1 February 2008 (UTC)[reply]

Thanks! —Preceding unsigned comment added by 165.21.155.91 (talk) 05:15, 1 February 2008 (UTC)[reply]

But be warned that sin⁻¹ x doesn't mean (sin x)⁻¹, it means arcsin x. -- BenRG (talk) 14:32, 1 February 2008 (UTC)[reply]
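A quick numeric sanity check of the notation (plain Python):

```python
import math

x = 0.5
sin_squared = math.sin(x) ** 2   # sin^2 x means (sin x)^2 ≈ 0.2298
# sin^-1 means the inverse function arcsin, not 1/sin:
assert abs(math.asin(math.sin(x)) - x) < 1e-12    # arcsin undoes sin here
assert abs(math.asin(x) - 1 / math.sin(x)) > 0.1  # arcsin x ≠ (sin x)^-1
print(sin_squared)
```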

e to the power of

Given y-1=4, solving for y, you would say you "add one to both sides" or something similar. Same thing with subtraction, multiplication and division. But what about ln(y) = 4? Is there a word that describes "e to the power of both sides" as an action? My calculus teacher says "exponentiate", but admits he made that word up. —Preceding unsigned comment added by 70.240.254.243 (talk) 21:37, 1 February 2008 (UTC)[reply]

I would say "take exponentials of each side" or something similar. The inverse is "take logs of each side". --Tango (talk) 21:46, 1 February 2008 (UTC)[reply]
I'd be willing to bet that wikt:exponentiate was not made up by the teacher. --LarryMac | Talk 21:50, 1 February 2008 (UTC)[reply]
"Exponentiate both sides" is exactly what I'd say. I'd consider it standard usage. --Anonymous, 00:13 UTC, February 2, 2008.

The word is antilogarithm. Bo Jacoby (talk) 23:38, 1 February 2008 (UTC).[reply]

Technically, maybe, but I've never heard anyone use the word (and I'm currently studying maths at Uni). --Tango (talk) 23:43, 1 February 2008 (UTC)[reply]
And it's a noun, so if you do use it you have to say "take the antilogarithm". --Anonymous, 00:13 UTC, February 2, 2008.
Our article on Exponentiation uses the verb "exponentiated". hydnjo talk 02:21, 2 February 2008 (UTC)[reply]
Both "to exponentiate" and "to take the antilog" are operations with an unspecified base. The number e plays a special role, in particular when the argument is complex. I don't know another name for the operation exp than exponential function. I'd abbreviate that in the obvious way when speaking out loud, like when tutoring, as "now take the exps of both sides" and expect to be understood.  --Lambiam 03:09, 2 February 2008 (UTC)[reply]
In higher level maths, the base is always e unless explicitly stated otherwise. I don't think you need to worry about the base. --Tango (talk) 13:45, 2 February 2008 (UTC)[reply]
To use exponentiate I think you'd have to say "exponentiate e to both sides", because exponentiating each side by e I believe would look like "ln(y)^e=4^e"
"take exponentials of each side" can also make sense though, because it’s commonly viewed as a function rather than an operation: "exp(ln(y))=exp(4)" which is "e^ln(y)=e^4". GromXXVII (talk) 12:20, 2 February 2008 (UTC)[reply]
That would be raising to the power e, not exponentiating, I don't think there is any ambiguity there. --Tango (talk) 13:45, 2 February 2008 (UTC)[reply]

Replying to Tango on antilogarithm. In the days when everything was done with published tables rather than calculators (not so long ago) there were tables of antilogarithms as well as logarithms, and the phrases "take the log of both sides" and "take the antilog of both sides" were common currency. Both noun and verb forms were used. Terminology may well have changed now, I don't know. If you really want to know in which geological era this was practiced, you will find that my age is displayed on my userpage. SpinningSpark 14:56, 2 February 2008 (UTC)[reply]

Using log tables to do arithmetic is very different to using logs/exponentials in higher maths. (If nothing else, one uses base 10, the other base e, and one uses numbers, the other algebraic expressions.) I wouldn't expect to use the same terminology in both. --Tango (talk) 17:17, 2 February 2008 (UTC)[reply]
Please do not be condescending. I know perfectly well the difference between arithmetic, pure, and applied mathematics. You are the one with a gap in your knowledge, which you highlighted yourself, I was merely politely giving you some information. You are wrong on just about every count. My mathematical tables (Knott, 1965, W&R Chambers) include both base 10 and base e. Likewise my slide-rule (a pre-computer analog calculating device) has scales for both log base 10 and log base e. In engineering calculations I have often (possibly more often) used base e for convenience. It is certainly not the case that mathematicians have a monopoly on base e while the rest of us grind out calculations in base 10. On the other hand there is no reason at all that algebraic expressions should not use log base 10 where appropriate; in engineering, many, in fact, do so. SpinningSpark 19:17, 2 February 2008 (UTC)[reply]
Ok, forget the difference in the bases then - the more important difference is that log tables are about the logs of numbers, not expressions. Exponentiating something generally refers to an expression (eg. "x"), and that is very different. As for engineers using base 10 - not when they're doing higher maths, they don't. Sure, decibels and things are defined in terms of base 10 logs, but that's not mathematics, that's just a convention. Base 10 might appear in the odd formula involving such units, but any time exponentials come up in relation to actual maths (eg. solving ODEs), they are base e. Oh, and don't complain about me being condescending and then do it yourself - I know what a slide rule is... I even own one, although I've no idea where I put it. --Tango (talk) 21:57, 2 February 2008 (UTC)[reply]

Just as you talk about the natural logarithm and the base ten logarithm, you may talk about the natural antilogarithm and the base ten antilogarithm. If ln(y) = 4, then take the natural antilogarithm on both sides and get y=antiln(4)=ln−1(4)=exp(4)=e4. If sin(y)=a, then take the arcus-sine on both sides and get y=arcsin(a)=sin−1(a). Bo Jacoby (talk) 21:11, 3 February 2008 (UTC).[reply]
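Whatever you call the operation, it is just applying exp to both sides; a quick Python check:

```python
import math

# given ln(y) = 4, "exponentiate" / "take the natural antilog" of both sides
y = math.exp(4)                        # y = e^4 ≈ 54.598
assert abs(math.log(y) - 4) < 1e-12    # taking logs undoes it
print(y)
```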

February 2

February 2

Isn't there an interesting question out there about February 2?

Anyone? (extending and editing my previous remarks about February 2 ...hydnjo talk 06:47, 4 February 2008 (UTC))[reply]

How about 2+2=2*2=2^2=2^^2=4 or something. --hydnjo talk 07:18, 4 February 2008 (UTC)[reply]
Oh, ok then, - guess that'll have to do :-( --hydnjo talk 07:18, 4 February 2008 (UTC)[reply]
Well done. 87.102.90.249 (talk) 12:54, 4 February 2008 (UTC)[reply]


Feb 3

Distance

OK, so let's say I have points A, B, C on a plane forming the vertices of a triangle, with AB = 1 and BC = 1, both lines parallel to the axes. From the Pythagorean theorem, I know that AC = √2. However, let's say that in order to get from A to C, I instead started parallel to BC, walked, say, 1/4 of the horizontal distance, then made a 90 degree turn and moved forwards 1/4 of the vertical distance, then turned 90 degrees again, etc. The distance of that path will be equal to AB+BC, because each of those little fragments in one direction corresponds to a fragment of the line to which it is parallel. Using that logic, I could move, say, 1/32 of the distance before turning 90 degrees, etc., and the total distance would still be equal to AB+BC. In fact, any distance for each fragment will give AB+BC for the total distance! If I keep on making each increment smaller and smaller, the total distance from A to C will always be 2. But, at the end of that sequence is the straight line, AC, which I know is not 2, but √2... An explanation? 70.156.60.236 (talk) 04:44, 3 February 2008 (UTC)[reply]

Wow, funny you should talk about that. We recently had a discussion about exactly this in my analysis class, and my teacher gave the same example. If you have a right triangle with one leg A units long and the other leg B units long, then if you travel along the legs the distance traveled is A+B, and it will always be A+B, unless you take a straight line along the hypotenuse, in which case the distance will be √(A²+B²). So at which point does the length change from A+B to √(A²+B²)? They are certainly not always equal. The answer has something to do with uniform convergence. I don't know how familiar you are with analysis, but for a sequence of functions, if you don't have uniform convergence (as opposed to just convergence, which we shall now call pointwise convergence), then some of the most "obvious" results which we take for granted fall apart. For example, in calculus, if we do term-by-term integration or term-by-term differentiation of an infinite power series, we can only do that because the series converges uniformly. I will give you another example. Consider the sequence fₙ(x) = xⁿ on the interval [0, 1]. This is a sequence of functions and all of them are continuous on the given interval. In fact, they are continuous everywhere. But if you take the limit as n → ∞, the limiting function becomes f(x) = 0 for 0 ≤ x < 1 and f(1) = 1.

So, what happens? Each function in the sequence is continuous, but the limit breaks up. The reason is uniform convergence. This sequence does NOT converge uniformly, and that is why, even though each term is continuous, the limiting function is not guaranteed to be continuous. A Real Kaiser (talk) 05:27, 3 February 2008 (UTC)[reply]
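The standard example fitting this description, fₙ(x) = xⁿ on [0, 1], can be checked numerically; a sketch (my own code):

```python
def f(n, x):
    return x ** n

# pointwise: for each fixed x, f(n, x) approaches 0 (or 1 at x = 1)
for x in [0.0, 0.5, 0.9, 1.0]:
    limit = 1.0 if x == 1.0 else 0.0
    assert abs(f(200, x) - limit) < 1e-6

# but not uniformly: the sup-distance from the limit function stays near 1,
# because points just below x = 1 lag behind for every fixed n
xs = [k / 10000 for k in range(10001)]
sup_err = max(abs(f(200, x) - (1.0 if x == 1.0 else 0.0)) for x in xs)
print(sup_err)   # close to 1, no matter how large n is
```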
While I agree with your description of uniform convergence, I struggle to see how it applies to this question. I can't see a sequence of functions, I can only see a sequence of numbers {2·n/n}, which ought to converge to 2, not sqrt(2) - there's no "x" or similar in there for the convergence to be uniform or not with respect to... I should probably have paid more attention in my Analysis in Many Variables lectures, but maybe you can point out what I'm missing? --Tango (talk) 16:36, 3 February 2008 (UTC)[reply]
Everything that you computed is right; you just need to go the last step and also believe it. You are currently confused by an assumption that you took for granted, but it is false: the limit of the lengths of a sequence of curves is not equal to the length of the limit of that sequence. If you made the analysis in the hope of finding the length of AC, then the analysis is wrong, because of this incorrect assumption. The length of a curve is defined by integrating along the curve in the direction of the curve, but the direction of the curve changes when you switch from the staircase-like approximations to the straight-line limit. Another view: look at the definition in Arc length. Your approximating staircases have points which are not on the straight line, but the definition requires all points to be on the line.
btw.: Very good question. Clever people have reasoned about what would happen if length were defined by a metric that works like your analysis does. The result is called Manhattan distance. Thorbadil (talk) 18:32, 3 February 2008 (UTC)[reply]
It might provide perspective to imagine other examples. Say, for instance, that instead of walking parallel to the axes you walk in curlicues that get smaller and smaller, or back and forth along the line itself, or in a hairpin-turn zigzag that covers dozens of times the length of the line while never going far from it or backtracking. In general, you can get these paths to be of whatever length you want while never getting more than an arbitrarily small distance from the diagonal line. With one exception: you can never produce a path with less length than the actual diagonal distance, and you can never achieve that minimum except with a perfectly straight line. So, it seems reasonable to think of these other paths as extremely intricate detours. Another bit of perspective might be gained by studying more pathological examples, like the Koch curve, Peano curve, and Weierstrass function. Black Carrot (talk) 20:09, 3 February 2008 (UTC)[reply]
By the way, I just noticed that the introduction of Koch curve makes the exact assumption that started this. It's important to notice the difference: The Koch curve is the actual limit curve, and any description of its length has to keep track of that. It is the set of all points that remain part of the simpler curves from some step onwards, and the limit points thereof. You can justify that it has infinite length by letting curves of known length hug it more and more tightly. They must have length less than the Koch curve, since they're smoothed-out versions of it, but can be made arbitrarily long. These approximation curves can be most easily taken as the curves that defined the Koch curve in the first place, since they certainly hug it tightly. The curve you mention works in the opposite way. Each point of the diagonal line is a point that was fixed at some point in the process (where the stepped line meets it) and the limit points thereof. However, there's no reason to think the stepped line is shorter than the diagonal. Since it's rougher, it could only be longer, meaning that you could prove the length of the diagonal at most 2, but not at least 2. Black Carrot (talk) 20:20, 3 February 2008 (UTC)[reply]
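The numbers in this thread are easy to check; a Python sketch (my own construction) for staircases from A = (0, 1) to C = (1, 0):

```python
import math

def staircase_length(n):
    # n horizontal runs and n vertical drops, each of length 1/n
    return n * (1 / n) + n * (1 / n)

for n in [1, 4, 32, 10**6]:
    assert abs(staircase_length(n) - 2.0) < 1e-9   # always AB + BC = 2

def max_dist(n):
    # outer corners of the staircase, and their distance to the line x + y = 1
    corners = [(k / n, 1 - (k - 1) / n) for k in range(1, n + 1)]
    return max(abs(x + y - 1) / math.sqrt(2) for x, y in corners)

print(max_dist(4), max_dist(1024))  # shrinks toward 0 while length stays 2
```

So the staircases converge to the diagonal in position, but their lengths converge to 2, not √2: arc length is not continuous under this kind of limit.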


February 4

HEY IT`S ME THE PHYSICS MAGAZINE GUY THIS TIME WITH AN ALGEBRA QUESTION

From an Algebra Magazine. 1. Find two consecutive integers whose product is 156. Find two consecutive even integers whose product is 288. Find two consecutive odd integers whose product is 783.

I also have some word problems. Michelle is 5 years younger than Cindy. Janet is years less than twice Cindy's age. Kenny is 10 years old. The ratio of Janet's age to Cindy's age equals the ratio of Michelle's age to Kenny's age. How old are Cindy, Michelle and Janet?

A rectangular poster has an area of 190 square inches. The height of the poster is 1 inch less than twice its width. Find the dimensions of the poster.

Daryl earns twice as much per hour as Andy, and John earns 6 dollars more per hour than Andy. June earns 16 dollars per hour. The ratio of John's hourly earnings to Andy's hourly earnings is the same as the ratio of Daryl's hourly earnings to June's hourly earnings. How much does Daryl earn per hour?

A small rocket is launched upward from ground level. The height of the rocket from the ground is given by the quadratic equation h = −16t² + 144t, where h is the height of the rocket in feet and t is the number of seconds since the rocket was launched. How many seconds will it take for the rocket to return to the ground?

—Preceding unsigned comment added by Yeats30 (talkcontribs) 00:13, 4 February 2008 (UTC)[reply]

Try taking the square root of the numbers. --Salix alba (talk) 00:33, 4 February 2008 (UTC)[reply]
Well I'd strongly encourage you to try them yourself - for example with no. 1, make the first number x and the second number y; then x*y = 156 (from "the product is 156") and y = x + 1 (from "the numbers are consecutive"), and you can solve them like any pair of simultaneous equations. The others all work similarly - decide what the variables are and make equations with them, then try to solve those equations. So for the second question you might represent Michelle, Cindy, Janet & Kenny's ages by the letters m, c, j & k respectively. Then "Michelle is 5 years younger than Cindy" becomes m = c - 5, and so on and so forth. It's not that hard once you get equations. Trimethylxanthine (talk) 00:47, 4 February 2008 (UTC)[reply]
While solving simultaneous equations is a good general method, in these cases, I would expect it to be easier to just take the square root and use trial and error with nearby integers. --Tango (talk) 17:58, 4 February 2008 (UTC)[reply]
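The square-root hint can be turned into a few lines of Python (the function name is mine): consecutive factors must straddle the square root of the product, so start there and check the neighbours.

```python
import math

def consecutive_with_product(product, step=1):
    """Find n with n * (n + step) == product (positive solution only)."""
    guess = math.isqrt(product)
    for n in range(guess - step - 1, guess + 2):
        if n * (n + step) == product:
            return n, n + step
    return None

print(consecutive_with_product(156))          # consecutive integers → (12, 13)
print(consecutive_with_product(288, step=2))  # consecutive evens    → (16, 18)
print(consecutive_with_product(783, step=2))  # consecutive odds     → (27, 29)
```

Note the negative solutions (e.g. −13 × −12 = 156) also exist; the sketch only returns the positive pair.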
True true, but that can't be applied to the word questions? Or am I missing something?? Trimethylxanthine (talk) —Preceding comment was added at 00:27, 5 February 2008 (UTC)[reply]
They're only word questions because that's the way they're formulated. It would be perfectly easy to formulate them all as a system of equations. -mattbuck 00:36, 5 February 2008 (UTC)[reply]
I meant taking square roots for them, which is why I added in. Trimethylxanthine (talk) 06:29, 6 February 2008 (UTC)[reply]
These are homework questions. See the science desk. Ignore him. --98.217.18.109 (talk) 23:32, 7 February 2008 (UTC)[reply]

Complex Base

It's possible to write the set of complex numbers in decimal-like form using a complex base, like 2i, and the appropriate number of digits, in that case 0, 1, 2, and 3. Given a base, how do you determine whether it will produce a consistent number system in that way? Black Carrot (talk) 00:52, 4 February 2008 (UTC)[reply]

Yeah, it is. See quater-imaginary baseKieff | Talk 01:17, 4 February 2008 (UTC)[reply]

I'm not sure you actually read my question. Black Carrot (talk) 07:34, 4 February 2008 (UTC)[reply]

Sorry, you're right. I read it as two questions ("is it possible, if then how to determine"). My bad. :( — Kieff | Talk 08:24, 4 February 2008 (UTC)[reply]

Not sure if I understood - but if the base is a+ib where a and b are integers then the new number system will be workable - ? (it's just solving linear equations of order 2)87.102.90.249 (talk) 12:57, 4 February 2008 (UTC) Don't ask me how to find out how many digits to use...87.102.90.249 (talk) 13:11, 4 February 2008 (UTC)[reply]

How do you define "consistency" of a number system? Is it that you can get all integral numbers as a sum of powers of the base weighted with digits? Do you also require uniqueness?  --Lambiam 13:14, 4 February 2008 (UTC)[reply]
I can't help wondering that since real integer number bases produce unique results then complex number bases (using integers) will as well (eg base 4+2i using the numerals 0,1,2,3,1+i,2+i,3+i probably these numbers should go up to 19..) since there is no distinction algebraically between real and complex numbers.. No proof or refutation I can provide right now.87.102.90.249 (talk) 14:11, 4 February 2008 (UTC)I was wrong.87.102.90.249 (talk) 15:02, 4 February 2008 (UTC)[reply]
Well, decimal expansions of real numbers aren't unique (see 0.999...), so chances are a similar approach to complex numbers won't be either. --Tango (talk) 17:55, 4 February 2008 (UTC)[reply]
Yes, but representations in non-integer bases can fail to be unique in more interesting ways.
For example, in base 3/2, 2 does not have a finite representation, but it does have at least two quite different non-terminating representations. A greedy algorithm gives 2=10.01000001001001... whereas we also have 2=0.111111....
And if we use the golden ratio as a base in the ordinary way, then 1=0.11, 2=1.11=10.01 etc., because the base satisfies x² = x + 1. (Note that the golden ratio base avoids this non-uniqueness by introducing the additional restriction that the digit string "11" is not allowed in a representation). Gandalf61 (talk) 11:20, 5 February 2008 (UTC)[reply]

Here's an attempt at an answer. Let b be the base and say |b| > 1. Because we can always move the "decimal" point around (i.e. scale by powers of the base) it suffices to consider points inside a shape of our choice which contains a neighborhood of the origin. The fundamental reason why base 2i with digits 0,1,2,3 works is that there exists such a shape which can be covered by four copies of itself scaled down by 2i and translated by 0,1,2,3: namely the rectangle defined by and . In general, since the unit disc (for example) can be covered by finitely many scaled copies of itself regardless of the scaling factor, any base b with |b| > 1 can be used, but the minimum number of digits for arbitrary b looks like it might be a difficult geometric covering problem. We can get a lower bound of ⌈|b|²⌉ from area alone, though. -- BenRG (talk) 18:30, 4 February 2008 (UTC)[reply]

That makes sense. It looks like the same argument would produce a system with base bi for b the square root of any natural number. Black Carrot (talk) 02:51, 5 February 2008 (UTC)[reply]
Yes, I think that lower bound is tight for pure imaginary bases. It's not tight in general because I'm pretty sure that base 1 + ε (ε real) won't work with fewer than 4 digits. The disk covering problem gives an upper bound of n+1 digits for |b| ≤ 1/r(n). (The +1 is because we need a zero digit, which gives us the additional restriction that one of the small disks must be contained in the large disk; otherwise the large disk can't contain a neighborhood of the origin.) Among other things this implies an asymptotic upper bound proportional to |b|² digits, though I guess that was pretty obvious to begin with (as was the lower bound).
I just noticed the article Complex base systems, which contains a link to a recent arXiv article on exactly this subject. It seems to only consider "proper" systems, meaning those in which almost every complex number has only one representation (which of course implies that |b|2 is an integer). Actually, on the basis of the IFS fractals in that paper, I'm starting to think that that lower bound is also an upper bound except on parts of the real line. That would be disappointingly boring. I was hoping that the boundaries between minimum-digit regions would be fractal. -- BenRG (talk) 21:53, 5 February 2008 (UTC)[reply]
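The base-2i system discussed above (Knuth's "quater-imaginary" base, digits 0,1,2,3) can be checked concretely. A minimal Python sketch, using the fact that the even positions of a base-2i expansion of a real integer form an ordinary base −4 expansion, since (2i)² = −4 (function names are mine, not from the thread):

```python
def to_negaquaternary(n):
    """Digits of integer n in base -4, least significant first, digits 0..3."""
    digits = []
    while n != 0:
        n, r = divmod(n, -4)
        if r < 0:              # force the remainder into 0..3
            n, r = n + 1, r + 4
        digits.append(r)
    return digits or [0]

def to_base_2i(n):
    """Base-2i digits of a real integer n, least significant first.

    Real integers occupy only the even positions, since (2i)^2 = -4;
    the odd positions are zero."""
    digits = []
    for d in to_negaquaternary(n):
        digits.extend([d, 0])
    return digits[:-1] if len(digits) > 1 else digits

def evaluate(digits, base):
    """Evaluate a little-endian digit list in the given complex base."""
    return sum(d * base ** k for k, d in enumerate(digits))

# 4 in base 2i is 10300: 1*(2i)^4 + 3*(2i)^2 = 16 - 12 = 4
print(''.join(str(d) for d in reversed(to_base_2i(4))))
```

Round-tripping `evaluate(to_base_2i(n), 2j)` over a range of integers confirms the representation; Gaussian integers with odd imaginary part additionally need the fractional digit ".2", which this sketch does not handle.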

Solution of Euclid

I visited the Clay Mathematics Institute web site and I read this: Euclid gave the complete solution for that equation, x^2+y^2=z^2. My question is, how could we give such a solution? By picking up arbitrary values? Or by finding operators, like we put y=(a+s), x=(b+s), then z=(b+a+2s), where a, b are constants and s is a variable, and then we fix a and b and start choosing different values of s? Is that how we find the solutions, or is the general way different? 209.8.244.39 (talk) 12:01, 4 February 2008 (UTC)[reply]

You may want to take a look at Pythagorean triple. If you then have a more specific question we'll be happy to help. -- Meni Rosenfeld (talk) 12:06, 4 February 2008 (UTC). Thank you very much. I read it, but why can we not apply the same method to Hilbert's tenth problem? 88.116.163.226 (talk) 13:22, 4 February 2008 (UTC)[reply]
Because Hilbert's tenth problem is about solving arbitrary Diophantine equations, while the Pythagorean equation is just one simple example. The analysis that works for it needn't work in the general case. -- Meni Rosenfeld (talk) 15:33, 4 February 2008 (UTC)[reply]
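Euclid's parametrisation mentioned in Pythagorean triple can be stated concretely: for integers m > n > 0, the triple x = m² − n², y = 2mn, z = m² + n² always satisfies x² + y² = z², and every primitive triple arises this way. A quick Python illustration:

```python
def euclid_triple(m, n):
    """Euclid's formula: a Pythagorean triple from integers m > n > 0."""
    return m*m - n*n, 2*m*n, m*m + n*n

# Generate a few triples and verify x^2 + y^2 = z^2 for each.
for m in range(2, 5):
    for n in range(1, m):
        x, y, z = euclid_triple(m, n)
        assert x*x + y*y == z*z
        print((x, y, z))
```

For example (m, n) = (2, 1) gives the familiar (3, 4, 5).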

how to pose and solve problems involving squares and square roots


Your question is somewhat vague, but perhaps our article on quadratic equations might help? —Ilmari Karonen (talk) 14:15, 4 February 2008 (UTC)[reply]

How to find generator of a group

The article Digital Signature Algorithm contains the line: Choose g, a number whose multiplicative order modulo p is q. This may be done by setting g = h^((p−1)/q) for some arbitrary h (1 < h < p−1), and trying again if the result comes out as 1. Here p−1 is a multiple of q. I am translating this as follows: if q | (p−1) then g = h^((p−1)/q) generates the multiplicative subgroup of (ℤ/pℤ)* of order q, provided it is not 1. Can someone give me the proof of this fact? Thanks.--Shahab (talk) 15:34, 4 February 2008 (UTC)[reply]

Is q assumed to be prime? If so, it's fairly straightforward: You have g^q = h^(p−1) ≡ 1 (mod p), so the order of g must divide q, and is thus either 1 (if g=1) or q. -- Meni Rosenfeld (talk) 15:40, 4 February 2008 (UTC)[reply]
q is prime. Thanks. I hadn't thought of Fermat's result. —Preceding unsigned comment added by Shahab (talkcontribs) 15:54, 4 February 2008 (UTC)[reply]
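A small numeric sketch of the same argument, with toy parameters (p = 23, q = 11, so q divides p − 1 = 22; real DSA uses much larger primes):

```python
p, q = 23, 11                # q is prime and divides p - 1 = 22

# Try arbitrary h until h^((p-1)/q) mod p is not 1.
for h in range(2, p - 1):
    g = pow(h, (p - 1) // q, p)
    if g != 1:
        break

# Fermat: g^q = h^(p-1) = 1 (mod p), so the order of g divides the
# prime q; since g != 1 the order is exactly q.
assert pow(g, q, p) == 1
order = next(k for k in range(1, q + 1) if pow(g, k, p) == 1)
print(g, order)
```

With these parameters h = 2 already works, giving g = 4, whose multiplicative order modulo 23 is indeed 11.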

Top Ten List

Recently, I've done a poll on an online forum asking other users what their Top Ten favorite The Land Before Time characters are (the list is in a hierarchy, meaning #10 on their list is their 10th most favorite, and #1 is their favorite character. The positions are not equal). I've received about 30 replies. Now, I want to calculate the Top Ten favorite characters overall. What would be the best way to calculate the popularity of each individual character? I'll add that I have ample time to go through each poll and check each character one-by-one, if need be. --Ye Olde Luke (talk) 23:57, 4 February 2008 (UTC)[reply]

This sounds like Preferential voting, but I've no idea how they add it up sorry - the article might help just a little :) Trimethylxanthine (talk) 00:40, 5 February 2008 (UTC)[reply]
Well, there are many, many ways you could do it.
1 - Whoever has most #1 appearances is #1, person with 2nd most is #2, etc. Once all #1s are exhausted, rank by #2s.
2 - Assign values to the positions, (eg 1 for 1st, up to 10 for 10th), then rank by lowest score, divided by the number of appearances.
3 - Assign values as above, but rank by number of appearances first, then score.
4 - Rank by # of appearances, then by # of 1sts, etc.
There are many more ways you could do it, but those are the ones which seem obvious to me. -mattbuck 00:41, 5 February 2008 (UTC)[reply]

I like the 2nd and 3rd options. I might do one of those.

One other thing. Since the main characters appear in all 13 movies, while most guest stars only appear in one, the recurring characters have a bit of an advantage, seeing as more people are likely to have seen them. Should this be factored in? --Ye Olde Luke (talk) 00:52, 5 February 2008 (UTC)[reply]

You could divide the scores by the # of movies they appeared in I guess, or just create a separate list for guest stars. -mattbuck 00:58, 5 February 2008 (UTC)[reply]

Wait, there is one problem with your #2 solution. You're assuming there are only ten characters to choose from. Since there are many, many characters, I can't rate the lowest score as the most popular. Some character who's #1 on a single person's list would beat a character who is #2 on five different lists, even though the latter character is obviously more popular. --Ye Olde Luke (talk) 00:58, 5 February 2008 (UTC)[reply]

You have a point. I went to a lecture once on voting systems. Nothing is perfect, no matter how you score it. You could rank #1 as 10pts, #2 as 9, ranking by highest score, and this would fix it, but likely throw up further problems. -mattbuck 01:03, 5 February 2008 (UTC)[reply]
At least you got me thinking in the right direction. I can probably work out any bugs from here. Thanks!
Say, are you the user that I introduced to this place? His name was similar to yours. --Ye Olde Luke (talk) 01:07, 5 February 2008 (UTC)[reply]
Seems so! -mattbuck 09:26, 5 February 2008 (UTC)[reply]
Incidentally, the 'no perfect voting system' result is Arrow's impossibility theorem. AndrewWTaylor (talk) 11:27, 5 February 2008 (UTC)[reply]
Most individual rankers have a clear notion of which items are their #1, #2 and perhaps #3 favourites, but by the time they get to #8 or so, there is generally not much perceived difference between one item and the next. A method to obtain an aggregate ranking that takes this into account is to assign a value of 1/1 to #1, 1/2 to #2, 1/3 to #3, and so on. The item with the highest total value is #1 in the aggregate ranking, and so on. Example: three items A, B and C and three rankers, giving input 1:A,2:B,3:C, 1:B,2:A,3:C, 1:B,2:C,3:A. Then A gets 1/1 + 1/2 + 1/3 = 11/6, B gets 1/2 + 1/1 + 1/1 = 15/6, and C gets 1/3 + 1/3 + 1/2 = 7/6. B is highest, then A, then C, giving aggregate ranking 1:B,2:A,3:C.  --Lambiam 02:17, 5 February 2008 (UTC)[reply]
Oh, all right. I'll do that. Thanks! --Ye Olde Luke (talk) 02:28, 5 February 2008 (UTC)[reply]
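Lambiam's 1/rank scheme is easy to mechanise. A short Python sketch reproducing the worked example above (three ballots over A, B, C), using exact fractions so the totals 11/6, 15/6 and 7/6 come out exactly:

```python
from fractions import Fraction

def aggregate(ballots):
    """Score each item 1/1, 1/2, 1/3, ... by rank; highest total wins."""
    scores = {}
    for ballot in ballots:
        for rank, item in enumerate(ballot, start=1):
            scores[item] = scores.get(item, Fraction(0)) + Fraction(1, rank)
    ranking = sorted(scores, key=scores.get, reverse=True)
    return ranking, scores

ballots = [['A', 'B', 'C'], ['B', 'A', 'C'], ['B', 'C', 'A']]
ranking, scores = aggregate(ballots)
print(ranking, scores)    # B first (15/6), then A (11/6), then C (7/6)
```

The same function works unchanged for 30 ballots of ten characters each; characters not on a ballot simply score nothing from it.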

February 5

Top x% contribute y%

I know there's a saying something along the lines of "90% of the world's wealth is held by 10% of its population", or possibly the US's wealth, or something else. Is there a name for a statistic that, for example, given X_1, X_2, ..., X_n from some random distribution, will tell you what percentage of the total comes from the m largest values? Confusing Manifestation(Say hi!) 01:30, 5 February 2008 (UTC)[reply]

Try Pareto principle. AndrewWTaylor (talk) 11:24, 5 February 2008 (UTC)[reply]
I don't know the answer to your question, but I wanted to mention that for any integrable distribution on an interval there will always be exactly one n with the property that the bottom n% of the interval contains (100−n)% of the total. (This is because h(x) = F(x) + (x−a)/(b−a) − 1, where F(x) is the fraction of the total lying below x, is continuous and monotonic on [a,b] and negative at x=a and positive at x=b; therefore a unique root x₀ exists, and n = 100·(x₀−a)/(b−a).) So n is a well-defined and potentially interesting measure of the degree to which the distribution is biased toward one end or the other, and I wouldn't be too surprised if it did have a name. -- BenRG (talk) 14:52, 5 February 2008 (UTC)[reply]
The Lorenz curve will answer such issues on income distribution. Pallida  Mors 17:21, 5 February 2008 (UTC)[reply]

Σ notation

Is there any way, using the summation operator (Σ), that one can cause it to skip at regular intervals, such as the sequence 1³ + 3³ + 5³ + 7³ + … ? If so, how would one write this? Zrs 12 (talk) 02:43, 5 February 2008 (UTC)[reply]

There are two ways to do this. Way #1 involves writing the terms of summation as expressions in the integers, so in this case, t_n = (2n + 1)^3, giving:

∑_{i=1}^{n} (2i + 1)³

The other way is a little bit more versatile, and basically involves writing specific conditions in the operator. For example (my poor LaTeX notwithstanding),

∑_{1 ≤ i ≤ n, i odd} i³

Note that in the first example, the terms go from 1 to 2n+1, whereas in the second, they go from 1 to n (assuming n is odd), so always keep an eye on that. For more complicated syntax, you can also define a set, S, say, that contains all of the indices you want to sum over, and you can then write

∑_{i ∈ S} x_i

where the x_i are the terms. Confusing Manifestation(Say hi!) 04:26, 5 February 2008 (UTC)[reply]
You mean ∑_{i=0}^{n} (2i + 1)³. --wj32 t/c 08:56, 5 February 2008 (UTC)[reply]
More likely ∑_{i=0}^{n−1} (2i + 1)³ or ∑_{i=1}^{n} (2i − 1)³, if you want to preserve the number of terms. — Lomn 14:50, 5 February 2008 (UTC)[reply]
You can get scriptsize multiline subscripts to a big Σ by using \begin{smallmatrix}...\end{smallmatrix}:
\sum_{\begin{smallmatrix}1 \le i \le n\\ i\ \mathrm{odd}\end{smallmatrix}} i^3 --Lambiam 16:05, 5 February 2008 (UTC)[reply]

One can also write

∑_{odd i} i³

which is perhaps not all that widespread, but nonetheless it will be understood and has a certain advantage of simplicity. Michael Hardy (talk) 22:53, 5 February 2008 (UTC)[reply]
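All of these notations describe the same sum; for instance, the sum of the first n odd cubes can be written any of the three ways from ConMan's answer and checked numerically (a throwaway Python sanity check):

```python
n = 7

# Way #1: index over the integers, summand (2i+1)^3.
way1 = sum((2*i + 1)**3 for i in range(n))

# Way #2: a condition on the index ("i odd"), summing i^3 over odd i.
way2 = sum(i**3 for i in range(1, 2*n) if i % 2 == 1)

# Way #3: an explicit index set S of the terms to include.
S = {2*i + 1 for i in range(n)}
way3 = sum(i**3 for i in S)

assert way1 == way2 == way3
print(way1)
```

For n = 7 all three give 1³ + 3³ + … + 13³ = 4753, which also matches the closed form n²(2n² − 1).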

HELP!!! I don't understand prime numbers.

Could anyone please explain prime numbers to me? I am totally confused. I've learnt that a prime is a number having no factors except itself and one; in mathematics, a prime number (or a prime) is a natural number which has exactly two distinct natural number divisors: 1 and itself. Here are some prime numbers: 2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37 etc... I have no idea what "natural number divisors" means. Please can someone explain it to me???

367 373 379 383 389 397 401 409 —Preceding unsigned comment added by 220.239.2.52 (talk) 22:24, 5 February 2008 (UTC)[reply]

A natural number divisor of a number is a whole number greater than zero (1, 2, 3, 4 etc.) that divides that number without a remainder. For instance, 2 is an NND of 4 (2 fits into 4 twice). 7 is an NND of 21 (7 fits into 21 three times). Prime numbers are numbers that have only the number 1 and themselves as NNDs. Take 11 for instance: can you think of a number apart from 1 and 11 that divides it without leaving a remainder? There isn't one, and that's why it's prime. Damien Karras (talk) 22:34, 5 February 2008 (UTC)[reply]
You quoted text from prime number which just has a more formal way of saying "a number having no factors except itself and one", without risk of ambiguity. Your own formulation makes it a little unclear whether 1 is a prime. It isn't, which is made explicit by demanding two distinct divisors. And in some contexts a number is said to have negative factors, for example 5 having the factors 5, 1, −1, −5, since 5 = (−1) × (−5). This ambiguity is avoided by demanding natural divisors (natural numbers are by definition positive). By the way, editors of prime number don't agree on the best way to formulate the definition and it has changed many times. PrimeHunter (talk) 22:50, 5 February 2008 (UTC)[reply]
Quibble: Natural numbers are by definition nonnegative. There are two distinct usages that differ on the question of whether zero is a natural number. Some of the sillier religious wars in WP math articles are fought over this difference in usage. --Trovatore (talk) 23:05, 5 February 2008 (UTC)[reply]
Ah, but when you study abstract algebra, -5 is considered prime in certain contexts. The definition of prime really does depend on context (the term is used for things other than integers, as well). --Tango (talk) 14:08, 6 February 2008 (UTC)[reply]
Of course, in the ring of the integers, -5 is prime. That has nothing to do with my point, which was about zero, not about the negatives. --Trovatore (talk) 18:31, 6 February 2008 (UTC)[reply]
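The "no divisors other than 1 and itself" definition translates directly into code. A minimal trial-division primality check in Python (fine for small numbers, far too slow for cryptographic sizes):

```python
def is_prime(n):
    """True if the natural number n has exactly two natural number
    divisors, 1 and itself (so 0 and 1 are not prime)."""
    if n < 2:
        return False
    d = 2
    while d * d <= n:       # a composite n has a divisor <= sqrt(n)
        if n % d == 0:      # d divides n with no remainder
            return False
        d += 1
    return True

print([n for n in range(2, 40) if is_prime(n)])
```

This reproduces the questioner's list 2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37, and also confirms the later run 367, 373, 379, 383, 389, 397, 401, 409.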

Predicate logic, bis

Are the statements

1. ∀x (P(x) → Q(x))

and

2. (∀x P(x)) → (∀x Q(x))

logically equivalent?

I do not know, but was hoping 1. would "distribute", allowing me to write:

(∀x ¬P(x)) ∨ (∀x Q(x))

I could then write 2. as:

¬(∀x P(x)) ∨ (∀x Q(x))

Which are not logically equivalent? Damien Karras (talk) 22:53, 5 February 2008 (UTC)[reply]

They are not logically equivalent; 1 implies 2 but not vice versa. Note that 2 is true if ∀x P(x) is false; that is, it's sufficient that there be at least one thing that does not have P. But that is not sufficient to make 1 true. --Trovatore (talk) 22:59, 5 February 2008 (UTC)[reply]
Thanks! I therefore assume that quantifiers can distribute? Is there a formal definition? Damien Karras (talk) 06:24, 6 February 2008 (UTC)[reply]
It wouldn't be a definition; it would be a theorem. You can prove that 1 implies 2, using the definitions of and and previously proved theorems, thus establishing "1 implies 2" as a theorem. But there are lots and lots of things like that which you can prove. I don't know that this particular one has a name. —Bkell (talk) 12:18, 6 February 2008 (UTC)[reply]
Okay, what I mean to ask: Is ∀x (P(x) ∨ Q(x)) equivalent to (∀x P(x)) ∨ (∀x Q(x))? Damien Karras (talk) 18:27, 6 February 2008 (UTC)[reply]
No. Come on, translate it into words, and you should be able to figure out why. --Trovatore (talk) 18:34, 6 February 2008 (UTC)[reply]
With that in mind I can create a countermodel where P(x) : x is positive and Q(x) : x is negative and the universe is all nonzero integers, which makes (∀x P(x)) ∨ (∀x Q(x)) false and ∀x (P(x) ∨ Q(x)) true. Putting it into words does nothing for me. :( But I get the idea... Damien Karras (talk) 18:58, 6 February 2008 (UTC)[reply]
The cases of quantifiers distributing over logical connectives can be derived from two basic ones:

∀x (φ(x) ∧ ψ(x)) ≡ (∀x φ(x)) ∧ (∀x ψ(x))
∀x P ≡ P

In the last equivalence, proposition P is required to be independent of x.
For example, you can derive:

∀x (P ∨ ψ(x)) ≡ P ∨ (∀x ψ(x))
 --Lambiam 19:31, 6 February 2008 (UTC)[reply]
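Over a finite universe these (non-)equivalences can be brute-forced. A small Python check using a finite slice of Damien's countermodel, the nonzero integers from −3 to 3, with P(x): x is positive and Q(x): x is negative:

```python
U = [x for x in range(-3, 4) if x != 0]   # nonzero integers (finite slice)
P = lambda x: x > 0
Q = lambda x: x < 0

# The universal quantifier distributes over "and" ...
assert all(P(x) and Q(x) for x in U) == (all(P(x) for x in U) and all(Q(x) for x in U))

# ... but not over "or": in the countermodel the left side is true
# (every nonzero integer is positive or negative) while the right side
# is false (not all are positive, and not all are negative).
left = all(P(x) or Q(x) for x in U)
right = all(P(x) for x in U) or all(Q(x) for x in U)
print(left, right)
```

Of course a finite check only refutes equivalences, it cannot prove one for all models; here it just exhibits the countermodel explicitly.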

February 6

Trig Proof

I'm stuck on the following proof. Could anyone help me out? --Sturgeonman (talk) 00:58, 6 February 2008 (UTC)[reply]

Got it. Never mind. -- Sturgeonman (talk) 01:34, 6 February 2008 (UTC)[reply]

BTW, you should put a backslash before common mathematical functions like \cos and \sin in LaTeX. It makes it look much better:
Keenan Pepper 06:04, 6 February 2008 (UTC)[reply]

Two unrelated calculus questions

1) I'm being asked to evaluate the indefinite integral ∫sin²(2x) dx and for some reason, I don't know how to proceed ... all my calculations have been fruitless and led to culs-de-sac. Could anyone at least give me a hint in the right direction? (I don't think the [First] Fundamental Theorem of Calculus comes into play, since that's for definite integrals, if I'm not mistaken.)

2) For computing a Riemann sum using the midpoint method, my teacher recommends staying away from the formula and using a simple, straightforward nonformulaic approach. It's something like taking the midpoint of each interval and adding f(midpoint_1) + f(midpoint_2) + ... + f(midpoint_{n−1}) + f(midpoint_n). Will this give me the right answer, or do I have to multiply through by ∆x/2 or something like that, like when finding a trapezoidal sum?

Thanks! —anon —Preceding unsigned comment added by 141.155.57.220 (talk) 05:12, 6 February 2008 (UTC)[reply]

There is a version of the FTOC that applies to indefinite integrals, but it's true that in this case you need something a little more - and that something a little more is a trick called a double-angle formula. Put simply, do you know an identity that lets you write cos 2x in terms of sin x? And, having done so, can you rearrange that into something you can use in your first integral?
As for your second question, try thinking about it conceptually - a Riemann sum is essentially an approximation of the area of a bunch of rectangles. The area of a rectangle is the product of the two side lengths, so there's a reasonable chance you should be multiplying something by something to get your results. Confusing Manifestation(Say hi!) 05:25, 6 February 2008 (UTC)[reply]
The problem is not the 2x term, which can be dealt with by u-substitution. It's the squaring of the sine. There are power reduction formulae at the trig identities article. Black Carrot (talk) 09:08, 6 February 2008 (UTC)[reply]
I think you misunderstood ConMan's post. Note that he specified "cos 2x", which can be expanded using a double-angle formula: cos 2x = 1 − 2 sin²x. This formula is memorized more commonly than the explicit power-reduction formula for sin²x itself, and is thus a good starting point for the calculation. -- Meni Rosenfeld (talk) 23:09, 8 February 2008 (UTC)[reply]
(1) ∫ sin²(2x) dx = ∫ (1 − cos 4x)/2 dx = x/2 − (sin 4x)/8 + C.
HTH. CiaPan (talk) 10:02, 6 February 2008 (UTC)[reply]
(2) Of course you should multiply each f(midpoint_i) by Δx, which is the n-th part of the integration interval. Putting all Δx-es outside the parens makes the integral approximation = IntervalLength/n × sum of f(midpoint_i). --CiaPan (talk) 09:57, 6 February 2008 (UTC)[reply]
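Both parts can be sanity-checked at once in Python: the power-reduction identity sin²(2x) = (1 − cos 4x)/2 gives the antiderivative F(x) = x/2 − (sin 4x)/8, and a midpoint Riemann sum (each f(midpoint) multiplied by Δx, as CiaPan says) should approach F(b) − F(a):

```python
import math

def f(x):
    return math.sin(2*x)**2

def F(x):
    # Antiderivative from the power reduction sin^2(2x) = (1 - cos 4x)/2.
    return x/2 - math.sin(4*x)/8

a, b, n = 0.0, 1.0, 10_000
dx = (b - a) / n
# Midpoint rule: sum f at each subinterval midpoint, times dx.
midpoint_sum = sum(f(a + (k + 0.5)*dx) for k in range(n)) * dx

print(midpoint_sum, F(b) - F(a))
```

The two numbers agree to many decimal places (the midpoint rule's error shrinks like Δx²); without the factor of dx the sum would be off by a factor of n/(b − a).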

February 7

Singular Value Decomposition

Suppose that I want to decompose A = [[1, 0], [0, −2]] using singular value decomposition. So I define B = AᵀA = [[1, 0], [0, 4]]. Since B is a diagonal matrix, the eigenvalues of B are simply the diagonal entries, so I label the singular values of A as σ₁ = 2, σ₂ = 1. So the right and left eigenvectors are just e₁ = (1, 0)ᵀ and e₂ = (0, 1)ᵀ and their transposes. So the decomposition comes out to:

A = UΣVᵀ = I · [[1, 0], [0, 2]] · I = [[1, 0], [0, 2]]

which is obviously not true because of the minus sign in front of 2 in our original matrix A. So the question is, how are these eigenvectors chosen? I know that this decomposition is not unique. Since there is an infinite number of left and right singular vectors to choose from, how do we choose the eigenvectors? I thought the standard was to choose some unit vector, but that still seems not to be enough. A Real Kaiser (talk) 04:41, 7 February 2008 (UTC)[reply]

Let's see: we want A = UΣV*. In the case of distinct singular values, V is determined uniquely up to multiplying by some diagonal matrix of unimodular values on the right. For square matrices and a fixed B and V, U is uniquely determined. In this case, once you had chosen to use the identity on the right, you would have been forced to choose the diagonal matrix with 1 and −1 on the left. You must always use unit eigenvectors to make U and V unitary, and I believe the procedure is to choose V and then find U. This is in Horn and Johnson, Matrix Analysis, Theorem 7.3.5. (Sorry, I know that this could use some wikilinks) 134.173.92.17 (talk) 05:58, 7 February 2008 (UTC)[reply]
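This is easy to see numerically with NumPy (assuming NumPy is available): for the diagonal matrix with entries 1 and −2, the SVD returns the singular values 2 and 1, and the minus sign is absorbed into one of the orthogonal factors rather than into Σ:

```python
import numpy as np

A = np.array([[1.0, 0.0],
              [0.0, -2.0]])
U, s, Vt = np.linalg.svd(A)

print(s)    # singular values, sorted descending
print(U)    # one of U, Vt carries a -1 entry to absorb the sign
# The factors always multiply back to A, with Sigma nonnegative.
print(U @ np.diag(s) @ Vt)
```

The exact sign placement between U and Vᵀ is an implementation convention (that residual freedom is the non-uniqueness the question is about), but the reconstruction U Σ Vᵀ = A always holds.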

Real Analysis

On a completely different note, let Ω be a set and let F be a σ-algebra on Ω. Let μ be any measure on F. In my real analysis class, our teacher told us that if (E_n) is a nested decreasing sequence in F (meaning E_n ⊆ E_{n+1}) then we have that

μ(⋂_{n=1}^∞ E_n) = lim_{n→∞} μ(E_n).

The question is, is this still true if μ(E₁) = ∞? If not, can anyone provide a counterexample, because I think that this would be false if the measure of E₁ is infinite to begin with. A Real Kaiser (talk) 05:00, 7 February 2008 (UTC)[reply]

Example 1.20 (c) in Rudin's Real and Complex Analysis, Third Edition: "Let μ be the counting measure on the set {1, 2, 3, …}, and let E_n = {n, n+1, n+2, …}. Then ⋂_n E_n = ∅ but μ(E_n) = ∞ for all n". —Preceding unsigned comment added by 134.173.92.17 (talk) 05:42, 7 February 2008 (UTC)[reply]
Just a small correction: if it's a nested decreasing sequence, then you mean E_n ⊇ E_{n+1}. —Bkell (talk) 06:14, 7 February 2008 (UTC)[reply]
That is of course what I meant. Thanks a lot. I actually have the book and this is exactly what I was looking for. Awesome!

A Real Kaiser (talk) 06:36, 7 February 2008 (UTC)[reply]

Points on a plane

If I gave a number of random points and asked for them to be expressed by an equation (i.e. the points must be able to be crossed by a shape that can be defined by an equation: a line, a parabola, the edge of a square, etc.), what is the maximum number of points for which there is definitely an equation to express them? I know that it is possible to use a circle to express 3. Is it possible to express 4? Thanks ahead of time. 99.226.39.245 (talk) 05:20, 7 February 2008 (UTC)[reply]

Well, for any finite number of points, you can always find a polynomial that goes through all of those points. Just like how you need a straight line to go through two points, you need a parabola to go through three points. So, if you give me any arbitrary n points (that are distinct, of course), I can always find at least one polynomial of degree at most n−1 that goes through all of those points. A Real Kaiser (talk) 06:38, 7 February 2008 (UTC)[reply]
A Real Kaiser is correct, but the resulting polynomial may be very ugly. --Gerry Ashton (talk) 07:45, 7 February 2008 (UTC)[reply]
It would be best to read the article Polynomial interpolation (and probably Interpolation) for the actual equation. --Martynas Patasius (talk) 14:05, 7 February 2008 (UTC)[reply]
That article only talks about polynomials in one variable (ie. y=P(x)), which makes it impossible to have two points with the same x coordinate. Certainly in some cases it is possible to use a polynomial in both x and y to go through points in the same vertical line (eg. given points (1,0),(0,1) and (0,-1) the polynomial x^2+y^2=1 goes through them), is that always possible? --Tango (talk) 17:30, 7 February 2008 (UTC)[reply]
Sure. If you want to go through (x₁, y₁), …, (x_n, y_n), say, just use the equation (x − x₁)(x − x₂)⋯(x − x_n) = 0 (say). It's not very edifying, but it's a polynomial equation, as desired. Algebraist 18:22, 7 February 2008 (UTC)[reply]
Or you could use ((x − x₁)² + (y − y₁)²)·((x − x₂)² + (y − y₂)²)⋯((x − x_n)² + (y − y_n)²) = 0, which gives you exactly the points you want. Algebraist 18:29, 7 February 2008 (UTC)[reply]
A more interesting version of your question is whether, given finitely many points in the plane, there exists an algebraic variety containing all of them. I strongly suspected that there is, and in fact can prove it: just transform co-ordinates by rotating the axes slightly until no two points share the same x co-ordinate and use the Kaiser's idea. Algebraist 18:34, 7 February 2008 (UTC)[reply]
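Once no two points share an x coordinate, "the Kaiser's idea" is just Lagrange interpolation. A small Python sketch that builds the degree ≤ n−1 interpolating polynomial through n arbitrary points as a callable:

```python
def lagrange(points):
    """Interpolating polynomial through the given (x, y) points
    (distinct x values), via the Lagrange basis."""
    def p(x):
        total = 0.0
        for i, (xi, yi) in enumerate(points):
            # Basis polynomial: 1 at xi, 0 at every other xj.
            term = yi
            for j, (xj, _) in enumerate(points):
                if i != j:
                    term *= (x - xj) / (xi - xj)
            total += term
        return total
    return p

pts = [(0, 1), (1, 3), (2, 2), (4, 5)]   # 4 points -> degree <= 3
p = lagrange(pts)
print([p(x) for x, _ in pts])            # recovers the y values
```

This is O(n²) per evaluation and numerically naive, but it makes the existence claim in the thread concrete.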

Fitting regular polygons inside each other

Is there any existing proof out there that it's impossible to fit a regular pentagon EXACTLY (all corners must touch a side) inside a square? Or a regular pentagon inside a regular hexagon?

What about a general test to determine if a polygon with X equal sides can fit EXACTLY inside another polygon with (X+1) or (X−1) equal sides?

Oh yeah, is there any special name for polygons with such a relationship? --Kvasir (talk) 18:40, 7 February 2008 (UTC)[reply]

My initial thought for the first one (the others are probably similar) is that it can't be done because the circle circumscribing the pentagon would intersect the square 5 times. Assuming the square and pentagon are concentric (I doubt it will work in the non-concentric case, but can't prove it yet), the circle would intersect the square an equal number of times on each side, by symmetry, so that's 0, 4 or 8 intersection points. So you have to go with 8 (the only one bigger than 5), and those 8 points will have a rotational symmetry of order 4, so I think that would end up requiring the pentagon to have a 4-symmetry, which it doesn't. So basically, you end up needing an n-gon to have a rotational symmetry of order n-1, which is only the case for n=2, which isn't really a polygon. I'm not 100% sure on that last bit, and I can't think of how to approach non-concentric polygons, but I think that's almost an answer to your question. --Tango (talk) 21:19, 7 February 2008 (UTC)[reply]
I think you can fit a pentagon in a hexagon by putting a vertex of one in a vertex of the other. Black Carrot (talk) 22:00, 7 February 2008 (UTC)[reply]
OK, how do you prove that such a hexagon exists, Black Carrot? Or any other polygon with n ≥ 5?
I should rephrase the question. The polygons need not be concentric. I should illustrate it better as I have already tried it both ways:
A: This is to fit the largest possible (n+1)-gon inside n-gon. Start with a circle (n=1) of a given area, then insert a triangle (n=3, n=2 doesn't form a regular polygon), then a square (n=4) inside the triangle. But there is no way to fit a regular pentagon (n=5) inside the square where all 5 vertices touch the square. Has anyone proven that it's impossible? Has anyone proven that the fitted (n+1)-gon has the maximum area when it is fitted inside the n-gon this way?
B: This is to enclose an n-gon with the smallest possible (n+1)-gon. Start with a circle of a given area, then enclose it with a triangle where the circle intersect the triangle exactly 3 times. Then the triangle is fitted inside a square; and the square is fitted inside a pentagon. I can only get this far, I could not get the pentagon to fit inside any hexagon where all 5 vertices end up on the sides of hexagon. Similarly, has anyone proven that it's impossible to fit a regular pentagon inside a regular hexagon? Has anyone proven that the fitted (n+1)-gon has the minimum area?
A related part of the question would be a generic formula to get the area of n-gon that fits exactly inside (n-1)-gon or (n+1)-gon of a given area, if it exists. --Kvasir (talk) 22:12, 7 February 2008 (UTC)[reply]
Here's a simple proof that you can't fit a regular pentagon inside a square in the way you describe. Suppose a regular pentagon fits inside a rectangle (which might be a square) so that all five vertices of the pentagon touch a side of the rectangle. Then one side of the rectangle must touch two vertices of the pentagon, so one of the sides of the pentagon must be part of one of the sides of the rectangle. Then obviously the other three vertices of the pentagon must touch the other three sides of the rectangle. Now consider the distance between parallel sides of the rectangle. One set of parallel sides is separated by the distance from a vertex of the pentagon to the midpoint of the opposite side of the pentagon, and the other set of parallel sides of the rectangle is separated by the distance between two nonadjacent vertices of the pentagon. Plainly these distances are not equal, so the rectangle is not a square. (This would be a lot easier to describe with a picture, but hopefully this is clear.) —Bkell (talk) 06:01, 8 February 2008 (UTC)[reply]
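Bkell's two distances can be computed explicitly. For a regular pentagon with unit side resting on one side, the bounding rectangle's height is apothem + circumradius, while its width is the diagonal (the distance between nonadjacent vertices, which is the golden ratio φ times the side). A quick Python check that the two differ:

```python
import math

s = 1.0                                  # side length of the regular pentagon
R = s / (2 * math.sin(math.pi / 5))      # circumradius
a = s / (2 * math.tan(math.pi / 5))      # apothem
height = a + R                           # vertex to opposite side
width = 2 * s * math.cos(math.pi / 5)    # diagonal between nonadjacent
                                         # vertices; equals phi * s
print(width, height)
```

Numerically the width is about 1.618s and the height about 1.539s, so the bounding rectangle is never a square, completing the argument.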
I was imagining it wrong. I tried to draw it, and it didn't work. Incidentally, part of what Bkell suggested works for all polygons. To fit an n+1gon inside an ngon, they must share a side. That is, a side of the inner one must be part of a side of the outer one. Imagine this side at the bottom, flat to the ground. Both shapes have a vertical axis of symmetry. If the inner one is touching the outer one correctly on both sides, their axes of symmetry must match, so the inner one must be centered on the bottom of the outer one. That means there's only one diagram to work out for each n, which isn't bad. (BTW, this isn't true for a triangle on the inside, since it doesn't have two points to compare to each other, and it isn't true for a square on the inside if the outer shape has a flat top. Not that a pentagon does, but still.) Black Carrot (talk) 06:30, 8 February 2008 (UTC)[reply]
A similar argument applies if the two polygons share a vertex, call it A. Imagine A at the bottom. For each pair of symmetric vertices of the inner one, there's a circle centered at A and going through both of them that can only (I think) intersect the outer polygon in two points, which will be symmetric with each other. If the inner polygon hits both these points, its axis of symmetry must be vertical. Black Carrot (talk) 06:38, 8 February 2008 (UTC)[reply]
That last one works best if you choose the two vertices adjacent to A. Black Carrot (talk) 07:14, 8 February 2008 (UTC)[reply]
Ok this is what I should've uploaded at the start. Yes, this does illustrate why (n+1)-gon can't fit inside an n-gon except for a square inside a triangle as Bkell explained. What about the other way? fitting an n-gon inside an (n+1)-gon? As you can see I've gotten as far as a pentagon. Is there a proof that the pentagon is the smallest there is to enclose the square? How can one disprove or prove that a hexagon in this series exist? I'm surprised no Chinese or Greek mathematician has pondered over this before. --Kvasir (talk) 15:59, 8 February 2008 (UTC)[reply]
Bkell's argument doesn't prove it can't be done for an n+1gon in an ngon (other than pentagon in square), it just shows that there's only one diagram to consider. You'd still have to work out the measurements. Black Carrot (talk) 19:03, 8 February 2008 (UTC)[reply]

February 8

Ever any lotteries with an expected value greater than the ticket price?

I live in the UK and I like to think I'm rational. While it could be fun to gamble, I am only prepared to gamble where the expected value of the lottery is greater than the ticket price. Does this ever happen with any lottery available in the UK please? Unfortunately it seems to be impossible to get hold of the statistics to calculate this (and possibly the lotteries are set up to make this impossible - I'm not sure). I've no idea how many "rollovers" it would take. 80.0.102.226 (talk) 00:02, 8 February 2008 (UTC)[reply]

Yes, it occasionally happens. To make it simple, let's consider just the main (UK) Lotto game, and just the jackpot. The probability of a given ticket winning the jackpot is in fact 1 in 13,983,816. So if the predicted jackpot is in the region of £14 million or above, it makes mathematical sense to buy a ticket (the National Lottery website gives the predicted jackpot for the next draw - it has to be a double rollover to get to that kind of figure). Incidentally, always buy your ticket on Friday afternoon or Saturday - if you buy it any earlier in the week, you're more likely to be dead by Saturday evening than to win the jackpot! AndyofKent (talk) 02:51, 8 February 2008 (UTC)[reply]
But any calculation of payout would have to include the possibility that more than one person wins the jackpot, thereby splitting the amount of prize money. If you assume that each person chooses their lottery numbers independently and randomly, then presumably you could estimate this as something like (# of people buying tickets) / (# of combinations), but as many people have "methods" of selecting lottery numbers, if you can find a method of your own that selects a less-popular combination, you will greatly increase your expected payout. Given that the average lottery is designed to make more money than it pays out in jackpots, it would require fairly specific circumstances to have such an occurrence. Confusing Manifestation(Say hi!) 03:52, 8 February 2008 (UTC)[reply]
Plus, even when this does happen, the expected utility of buying a lottery ticket is still negative. —Keenan Pepper 04:45, 8 February 2008 (UTC)[reply]
If the utility is negative, how come millions of people buy tickets? 80.0.121.236 (talk) 12:23, 8 February 2008 (UTC)[reply]
The utility is only negative if you base utility solely on net worth. By that argument it would be even more irrational to go to the movies, where the ticket price is higher and the chance of winning the jackpot essentially nil. For many people the attraction of a movie is that it lets them forget their own lives for a while and vicariously live the life of someone else who has amazing adventures and does great things. I think the people who buy lottery tickets do it for the same reason. Instead of £5 for a few hours' fantasy they spend £1 for a week's fantasy. It's a cost-effective way of satisfying a basic human need, and I think it's perfectly rational, though also somehow sad. -- BenRG (talk) 12:57, 8 February 2008 (UTC)[reply]

If you ignore rollovers, the expected value of a £1 ticket is 50p, simply because that's the amount that goes into the prize fund (the rest goes to good causes, admin, tax and, of course, profit), and all tickets have an equal chance of winning. So you expect to lose half your money. For the expected net gain to be positive, you would need enough rollovers for the total prize fund to be double what it would normally be - that's a pretty easy calculation to check, I think, as they give estimated prize funds and jackpots, so it's just basic arithmetic once you've found the right numbers. --Tango (talk) 11:32, 8 February 2008 (UTC)[reply]
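(Aside: the break-even point described above is one line of arithmetic. A toy model, with my own naming, ignoring prize splitting:)

```python
def ev_per_pound(ticket_sales, rollover=0.0, prize_fraction=0.5):
    """Expected gross return on a £1 ticket: every ticket has the same
    chance, so EV = (prize fund) / (tickets sold), where the fund is
    prize_fraction of this week's sales plus any rolled-over money."""
    return (prize_fraction * ticket_sales + rollover) / ticket_sales

print(ev_per_pound(30e6))        # 0.5 -- lose half your stake on average
print(ev_per_pound(30e6, 15e6))  # 1.0 -- a rollover doubling the fund is break-even
```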

There are also more minor prizes in addition to the jackpot - these would affect the expected value a lot. Very good point that the expected value of a ticket must be 50p on average for the Lotto - is it the same for Euromillions? And are these the only UK lotteries that have rollovers? And what is the normal prize fund - I think, maybe I'm wrong, that the lottery TV rollover adverts only quote the forecast jackpot size, rather than the total prize fund. I'm not sure if extra ticket sales would be reflected in a bigger jackpot for that draw or if the extra sales money goes to make the prize-money in the next draw. And do the smaller prizes vary in value, or are they fixed? Sorry, so many questions. 80.0.121.236 (talk) 12:13, 8 February 2008 (UTC)[reply]

I don't know about anything other than Lotto. With that, the prize fund for any given week is 50% of the total ticket sales for that week, plus any rolled over jackpots. Extra ticket sales due to there being a rollover would be included, but would actually reduce the expected net gain, since the effect of there being more tickets out there is more significant than the extra prize money. I haven't really watched the lottery for a while, but I know they used to give both the total prize fund and the jackpot fund before the draw - I would assume both numbers are available somewhere in advance. The only fixed prize is £10 for 3 numbers, all the others are done in terms of percentage - I think it's x% split between all the people that get 4 balls, y% split between all the people that get 5 balls, etc., with the x, y... being fixed (and published on the official website somewhere, as I recall). --Tango (talk) 14:03, 8 February 2008 (UTC)[reply]
"Extra ticket sales due to there being a rollover would be included, but would actually reduce the expected net gain, since the effect of there being more tickets out there is more significant than the extra prize money." Does this mean it's impossible in practice to get an expected value greater than the ticket price? I'm curious to see any statistics that demonstrate that rollovers decrease the expected value due to the increase in ticket sales. Does anyone know where I can find the stats to calculate the expected value - I need a) total prize fund, and b) number of ticket sales - both figures now seem to be kept secret. 80.3.47.25 (talk) 23:23, 8 February 2008 (UTC)[reply]
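(Aside: the dilution effect can be seen by extending the same toy model so the rollover also attracts new tickets; the figures below are invented purely for illustration.)

```python
def ev_with_extra_sales(base_sales, rollover, extra_sales):
    """Expected gross return per £1 ticket when a rollover of `rollover`
    pounds also attracts `extra_sales` pounds of extra tickets; the prize
    fund is half of all this week's sales plus the rolled-over amount."""
    sales = base_sales + extra_sales
    return (0.5 * sales + rollover) / sales

print(ev_with_extra_sales(30e6, 15e6, 0))     # 1.0 -- break-even if sales are unchanged
print(ev_with_extra_sales(30e6, 15e6, 30e6))  # 0.75 -- doubled sales dilute the rollover
```

In this model the EV exceeds £1 whenever the rollover exceeds half of total sales, so a positive expectation is not impossible in principle; it just requires the rollover to outpace the sales surge.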

Lambda calculus

I was reading the article on lambda calculus, and it appeared to me that it is used almost exclusively for programming. Is this true, or does it have real mathematical applications? Thanks, Zrs 12 (talk) 03:27, 8 February 2008 (UTC)[reply]

Definitely has "real mathematics" applications. The lambda calculus predates electronic computers, and was used by its inventor Alonzo Church and his students to derive important results in mathematical logic in the 1930s. Although our lambda calculus article says it "can be thought of as an idealized, minimalistic programming language", this is a reversal of the actual historical development - programming languages such as Lisp were designed around the lambda calculus, which already existed. So it would be more accurate to say that programming languages are implementations of the lambda calculus. Gandalf61 (talk) 07:49, 8 February 2008 (UTC)[reply]
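(To make the point above concrete: the lambda calculus needs nothing but function abstraction and application. Here is Church's encoding of the natural numbers sketched in Python; the decoder name is mine.)

```python
# Church numerals: numbers as pure functions, exactly as in the untyped
# lambda calculus -- the numeral n means "apply f to x, n times".
zero = lambda f: lambda x: x
succ = lambda n: lambda f: lambda x: f(n(f)(x))
add = lambda m: lambda n: lambda f: lambda x: m(f)(n(f)(x))

def to_int(n):
    """Decode a Church numeral by counting how many times f gets applied."""
    return n(lambda k: k + 1)(0)

three = succ(succ(succ(zero)))
print(to_int(add(three)(three)))  # 6
```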
I think you're engaging in a bit of historical revisionism there. Lisp definitely was not designed around the lambda calculus — I'm not sure McCarthy had even heard of the lambda calculus when he started working on Lisp. The driving idea behind Lisp was symbolic computation. At the time electronic computers were seen as replacements for human computers, and the idea that programs might work with words instead of numbers was new. All of Lisp's other innovations were accidental byproducts. The lambda keyword was introduced to the language very early on, but Lisp did not at the time contain anything resembling Church's lambda calculus. It used dynamic scoping, not by choice but because nobody involved understood scoping. It had destructive update. It had statement labels and goto. It had car, cdr, ctr and cpr. The funarg problem was noticed by an outsider. McCarthy saw it at the time as a bug, not the fundamental design flaw it actually was, and Steve Russell fixed it with a trick called the "funarg device", now known by the less embarrassing name of "lexical closures". Lexical closures are a great implementation trick, but unfortunately they've permanently contaminated the semantics of the language because of their interaction with destructive update of bindings. Church understood the trickiness of lexical scoping, but McCarthy's team doesn't seem to have benefited from that. I don't think Church's work had any influence beyond the name lambda, which has pointlessly frightened generations of students. They should have called it function. It wasn't until lexical scoping finally became standard in Lisp that you could reasonably claim that it even contained the lambda calculus as a sublanguage, and it's still a major hassle to program in that style in Common Lisp; you have to write #' and funcall all over the place and you can't rely on tail recursion. The language was pretty clearly not designed for that kind of programming. 
Even Scheme's embedded lambda calculus isn't very faithful to the original, with its eager evaluation order and lack of automatic currying (and logical inconsistency, but that's a different story). I'm sorry to rag on Lisp so much; I actually like it a lot, but it gets romanticized these days in a way that has little bearing on reality. Historically speaking, Lisp is very thoroughly in the Perl school of "design". -- BenRG (talk) 19:17, 8 February 2008 (UTC)[reply]

Why does mathematics use the idea of 'truth'?

I don't get why mathematics uses the idea of statements being 'true' or 'false' rather than just being 'consistent' or 'inconsistent' or 'unattainable' compared with the system in question.

What's truth got to do with it? What's truth but a second-hand emotion? —Preceding unsigned comment added by 212.51.122.27 (talk) 17:28, 8 February 2008 (UTC)[reply]

Well, to a Platonist, mathematical objects exist just as other objects do, and mathematical statements are statements about these objects which are true or false just as statements about regular objects are. From the point of view of other philosophical positions (such as formalism), 'true' is a defined term like any other which is used because it is useful. Results such as Gödel's completeness theorem tell us that the semantic notion 'true in every model of a theory T' coincides with the syntactic notion 'provable in T', and so to a certain extent we can equate truth with provability (is this what you mean by being 'consistent' or 'unconsistent'?). I have no idea what you mean when you call truth a second-hand emotion. Algebraist 17:46, 8 February 2008 (UTC)[reply]
What's Love Got to Do with It (song) --LarryMac | Talk 17:49, 8 February 2008 (UTC)[reply]
I think "true/false" and "consistent/inconsistent" are different. "True" means "provable" (these kinds of statements will always be of the form "A=>B", it's generally meaningless to say simply "B is true" in maths, although we don't always explicitly state the assumptions). "False" means "the negation is provable". "Consistent" means "does not contradict", it's not necessarily true, but there is nothing stopping you assuming it's true (for example, the axiom of choice is consistent with the ZF axioms, you can't use them to prove it or its negation [ie. it's independent], so it's not "true" or "false", but you don't get a contradiction if you simply assume it's true). "Inconsistent" means you do get such a contradiction - A is inconsistent with B if A=>C and B=>not C for some C. That doesn't say anything about the truth of A, B or C, just their relationship with each other, a priori, either A or B can be true (given appropriate assumptions), just not both. --Tango (talk) 18:05, 8 February 2008 (UTC)[reply]
No, "true" does not mean "provable". The most important take-home message from the Gödel incompleteness theorems is that you cannot identify provability and truth (at least without losing properties of truth that we would like it to have, such as excluded middle).
Pre-Gödel formalists tried to identify provability with truth. Gödel showed that that sort of formalism was non-viable. Post-Gödel formalism simply does away with the notion of truth. The only major school that still identifies provability with truth is intuitionism, but it gets away with it by declining to fix a single formal notion of provability. --Trovatore (talk) 18:15, 8 February 2008 (UTC)[reply]
While you are of course correct, I feel I ought to defend my post by pointing out that modern formalists still admit truth (as a defined term) in the sense of true-in-a-model, just not in the sense of true-in-a-theory. Algebraist 18:31, 8 February 2008 (UTC)[reply]
I think this is getting a bit off track. Head-in-the-skies philosophy is important, but keep in mind, there is such a thing as being actually true. It's hard to dispute a winning Nim strategy, or the fact that every square is a sum of consecutive odd numbers. One artificial construction may be as good as another in some sense, but that doesn't change what's real. (This is not, BTW, Platonism. I'm not saying circles "exist", I'm saying that what exists exists.) Both those examples are discrete math with small numbers, which is the area of math most easily accepted as not made up. a(b+c) = ab+ac, no matter who you are. Black Carrot (talk) 18:40, 8 February 2008 (UTC)[reply]
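(The second example above, that every square is the sum of consecutive odd numbers starting from 1, is a one-liner to verify mechanically:)

```python
# n^2 = 1 + 3 + 5 + ... + (2n - 1): adding the n-th odd number to an
# (n-1) x (n-1) square fills out the L-shaped border of an n x n square.
for n in range(1, 1000):
    assert sum(range(1, 2 * n, 2)) == n ** 2
print("holds for n = 1 .. 999")
```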
Small detail - the name of the link references the scrollover text of the image. Black Carrot (talk) —Preceding comment was added at 18:41, 8 February 2008 (UTC)[reply]
BC, you make a good point, but it's surprisingly hard to draw the line. Where do you put Goldbach's conjecture, for example? The probabilistic evidence for it is extraordinarily convincing, yet it's entirely possible that it will never be proved from any "foundationally relevant" theory. That's a case where, if there were a counterexample, it could be checked by computer. If you're willing to take that as part of "reality", then what about the twin prime conjecture, where there's no such thing as checking an example or counterexample by computer? Before you know it you're having Woodin cardinals for breakfast and expressing your opinion on the truth or falsity of the continuum hypothesis before lunch. --Trovatore (talk) 18:45, 8 February 2008 (UTC)[reply]
Oh come on, it's not a slippery slope. There are obvious limits. I'm not saying the line is easy to draw, or even that there could ever be a single well-defined boundary, just that it's silly to paint everything with the same brush. Some things are confusing and uncertain, sure, but most of them are things nobody's heard of and nobody cares about. Number theory is uncertain, but arithmetic isn't. Black Carrot (talk) 18:58, 8 February 2008 (UTC)[reply]
By the way, you can't use probabilistic evidence for the Goldbach conjecture, no matter how you slice it. Within the range of numbers the vast bulk of humanity actually uses, it's been proven beyond reproach. Beyond that, it hasn't been shown probable at all. Black Carrot (talk) 19:00, 8 February 2008 (UTC)[reply]
No, that isn't true; the probabilistic argument is not simply the absence of a counterexample within numbers that can be checked. It's the fact that the bigger the numbers get, the harder it is for there to be a counterexample, because you have more primes available from which to make sums. This argument is so convincing that in my opinion the truth of the Goldbach conjecture is settled, to the point that a formal proof would not change its epistemic status much -- after all, the theory from which you prove it could also turn out to be inconsistent.
What I'm saying about the "slippery slope" is that it's extremely hard to do anything in mathematics without accepting, in one way or another, the real existence of (at least some) abstract objects. And once you've done that, the case against even-more-obviously-abstract objects is harder to make. --Trovatore (talk) 19:17, 8 February 2008 (UTC)[reply]
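(The heuristic described above is easy to see numerically: the number of prime pairs summing to n grows rapidly with n, which is what makes a large counterexample so implausible. A quick sketch; the function names are mine.)

```python
def prime_sieve(limit):
    """Sieve of Eratosthenes: is_prime[i] says whether i is prime."""
    is_prime = [False, False] + [True] * (limit - 1)
    for p in range(2, int(limit ** 0.5) + 1):
        if is_prime[p]:
            for m in range(p * p, limit + 1, p):
                is_prime[m] = False
    return is_prime

def goldbach_count(n, is_prime):
    """Number of unordered prime pairs (p, n - p) with p <= n // 2."""
    return sum(1 for p in range(2, n // 2 + 1) if is_prime[p] and is_prime[n - p])

is_prime = prime_sieve(100_000)
for n in (100, 1_000, 10_000, 100_000):
    print(n, goldbach_count(n, is_prime))  # the counts grow steadily with n
```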
I don't find that argument convincing, though. As certain as it may be that almost all of them are sums of pairs of primes, even that almost all of them are sums of pairs of primes in many ways, that doesn't rule out, or even make less likely, that one or a few somewhere up the line won't be. It's not a matter of how many of them there are, it's how they fit together, and that's what's hard to prove.
Yes, it's hard to draw a line, but that's true of anything. In any area of study, there are some things that are too convincing for argument and a whole lot of fringe ideas that are just silly, and everything in between. I'm just saying that you don't have to ignore one end of the spectrum in favor of the other. Black Carrot (talk) 19:27, 8 February 2008 (UTC)[reply]
I think it does make it less likely. So much less likely that, as I say, the question is effectively settled. A counterexample to Goldbach would require an unbelievably massive "conspiracy" of prime numbers "avoiding" summing to the counterexample. A similar conspiracy resulting in a proof of 0=1 from Peano arithmetic is just as believable.
I think there is no "silly" end to this spectrum, because full-on hardcore realism about large cardinals is just fine, and it allows you to decline to draw an arbitrary line somewhere in the middle. --Trovatore (talk) 19:33, 8 February 2008 (UTC)[reply]
I have no problem with the prime numbers conspiring. They've done it before, and they'll do it again, they're fickle like that. There's nothing fundamental to my view of the world that says they're well-behaved. You might view them differently. Peano arithmetic, though, is abstracted barely an inch and a half from its roots in bean-counting. If there's something wrong with that, the universe is broken. The only claim it makes that isn't obviously true is that there are infinitely many whole numbers. If a paradox comes out of that, fine. Maybe there's a cap on how far you can push the pattern, but for sufficiently small numbers it's impossible for it to be wrong.
Every crackpot has a reason. I think most people would agree that a system with more patches than an old sweater, that reasons glibly about imaginary objects nobody could have conceived of a hundred years ago, is a bit fringy. I'm not saying it can't be justified philosophically, in the same way that you can justify any crime by a sufficiently extreme plea to moral relativism, I'm saying that that's what it takes to justify it. It doesn't take mental gymnastics to find a foundation for arithmetic. Black Carrot (talk) 20:07, 8 February 2008 (UTC)[reply]
There are no "patches" in set theory. That's a misunderstanding of the antinomies. The antinomies did not come from an informal notion of set, but from the wrong informal notion (the conflation of the extensional and intensional concepts). --Trovatore (talk) 20:15, 8 February 2008 (UTC)[reply]
Fair enough. I'm sorry, bad example. I think you should reread the discussion that prompted my first comment, though, and compare it to our discussion since then. I don't think you actually disagree with what I was trying to say. I was only arguing for a more balanced viewpoint. Especially in front of the OP, who needs it. Black Carrot (talk) 21:15, 8 February 2008 (UTC)[reply]
There is a big difference between finding a counterexample for something that all evidence points towards being true and finding a counterexample for something that's been proven true. One is very unlikely, the other is impossible, that being what "proven" means. By the standard definitions of 0, 1 and equals, $0 \ne 1$. The only way you could change that would be to change the definitions, and that's moving the goalposts. --Tango (talk) 21:26, 8 February 2008 (UTC)[reply]
No, sorry, Tango, you're laboring under a common misconception here. Just because something has been proved does not imply that we know apodeictically that it is true. Possibly the assumptions we used to prove it were false. In the case of sufficiently simple arithmetic statements like Goldbach, for the assumptions to prove such a statement true even if it were actually false, the assumptions would have to be more than false--they'd actually have to be mutually inconsistent. But we do not know beyond all doubt that that doesn't happen. --Trovatore (talk) 21:30, 8 February 2008 (UTC)[reply]
Word. Could you point me to a good source on the words extensional, intensional, and antinomy? Our articles on them <s>suck</s> could be more detailed. Black Carrot (talk) 21:35, 8 February 2008 (UTC)[reply]
I'm afraid I'm caught short here -- can't think of a good source at the moment. --Trovatore (talk) 21:38, 8 February 2008 (UTC)[reply]
But mathematical theorems always take the form "A=>B"; the assumptions are, by definition, true, because they are assumed to be. If those assumptions don't actually hold for what we intuitively think of as numbers, then the theorem isn't very useful, but it's still true. The statement explicitly said "Peano arithmetic" - that Peano arithmetic implies $0 \ne 1$ is not disputable. That Peano arithmetic is an accurate description of what we intuitively know about arithmetic is possibly disputable, but that's another matter. --Tango (talk) 22:38, 8 February 2008 (UTC)[reply]
No, that's the shallow sort of formalism, and is quite demonstrably wrong. Peano arithmetic does not define what is true about the naturals; it's a collection of statements that we believe are true about the naturals, and from which we can derive others that -- assuming we were correct in the first place -- must also be true. But maybe we were wrong in the first place.
I agree that Peano arithmetic implies 0≠1, but that isn't the question -- the question is whether it implies 0=1. It does not follow, merely because it implies the first, that it does not imply the second. --Trovatore (talk) 22:53, 8 February 2008 (UTC)[reply]
Has Peano arithmetic not been proven consistent? If not, then I need to do some more reading on the subject before continuing this discussion. --Tango (talk) 23:10, 8 February 2008 (UTC)[reply]
Ok, after a quick bit of reading, it seems it depends on what you actually mean by "proven consistent"... I think this is all getting a bit too deep for this time of night... (it's 2313hrs here). --Tango (talk) 23:13, 8 February 2008 (UTC)[reply]
Depends on what you mean by "proven". Sure, it's been proved, but not without assumptions. Maybe those assumptions are wrong.
Gödel's second incompleteness theorem is taken by some to eliminate all hope of proving that PA is consistent "by finitistic methods", as Hilbert wanted. However there has never been a good definition of "finitistic", so this is a little hard to pin down absolutely. As I recall Gödel himself disclaimed this interpretation of his theorem. From the other side of the question, Gentzen proved that PA is consistent by analyzing possible proofs of a contradiction and performing induction -- but it was transfinite induction up to a certain infinite ordinal number. Taking the two results together, the status of Hilbert's second problem is particularly muddled -- has it been resolved positively, resolved negatively, or is it as yet unresolved? --Trovatore (talk) 23:16, 8 February 2008 (UTC)[reply]

Power reduction writ large

Does anyone know of a relative of the addition theorem on circles rather than spheres? In particular, I'd like to have the coefficients $a_k$ in the expansion

$\cos^n\theta = \sum_{k=0}^{n} a_k \cos k\theta.$

The extension to $\cos^n(\theta - \phi)$ would then be easy — apply the sum formula to each term and get a different sum of cosines and a sum of sines. Of course, one way to approach this would be to apply the power reduction rules repeatedly, but I'm not sure how (or if it's even always possible) to write $\cos^n\theta$ as a sum of only first-power sines and cosines. --Tardis (talk) 20:46, 8 February 2008 (UTC)[reply]
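For what it's worth, if the expansion sought is of $\cos^n\theta$ in terms of $\cos k\theta$, the coefficients fall out of writing $\cos t = (e^{it} + e^{-it})/2$ and expanding by the binomial theorem, giving $\cos^n t = 2^{-n} \sum_{j=0}^{n} \binom{n}{j} \cos((n-2j)t)$. A numeric sanity check of that identity (my own code, assuming that reading of the question):

```python
import math

def cos_power_coeffs(n):
    """Coefficients a_k with cos(t)**n == sum_k a_k * cos(k*t), from
    cos t = (e^{it} + e^{-it}) / 2 and the binomial theorem:
    cos^n t = 2**-n * sum_j C(n, j) * cos((n - 2j) t)."""
    coeffs = {}
    for j in range(n + 1):
        k = abs(n - 2 * j)  # cos is even, so fold -k onto +k
        coeffs[k] = coeffs.get(k, 0.0) + math.comb(n, j) / 2 ** n
    return coeffs

# Spot check against the classical cos^5 t = (10 cos t + 5 cos 3t + cos 5t) / 16
print(cos_power_coeffs(5))  # {5: 0.0625, 3: 0.3125, 1: 0.625}
t = 0.3
lhs = math.cos(t) ** 5
rhs = sum(a * math.cos(k * t) for k, a in cos_power_coeffs(5).items())
print(abs(lhs - rhs) < 1e-12)  # True
```

Powers of $\sin\theta$ reduce the same way via $\sin t = (e^{it} - e^{-it})/2i$, which is why repeated power reduction always terminates in first-power terms.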