# Wikipedia:Reference desk/Archives/Mathematics/2008 January 30

Mathematics desk
Welcome to the Wikipedia Mathematics Reference Desk Archives
The page you are currently viewing is an archive page. While you can leave answers for any questions shown below, please ask new questions on one of the current reference desk pages.

# January 30

## grid coordinates from sequence number

This is a very elementary question, but I am no good at math.

I am writing a program and I need a function to determine the coordinates of an element in a table given its serial position (counting left to right, top to bottom) and the number of rows and columns. To visualise, I might have a grid like this:

1234
5678


I might want to look up the row and column of 6, which would be column 2, row 2. Element 4 would have column 4, row 1. I have a feeling I should use mod somehow, but I can't figure out how. Thanks! -- 71.86.121.200 (talk) 02:21, 30 January 2008 (UTC)

Numbering from 0 would make the relationship more obvious.
0123
4567

The number 6 is located in row 1, column 2 (the left and top are row/column 0). The table is 4 cells wide. The relationship between those numbers is (serial=row*width+column). To reverse that, row=serial/width (rounded down) and column=serial-row*width, a.k.a. serial mod width. If you insist on numbering from 1, you'll have to subtract 1 from serial first and then add 1 to the results afterward. --tcsetattr (talk / contribs) 03:06, 30 January 2008 (UTC)
Thanks! That looks right. Plugging that equation into my calculator (I told you I'm not good at math ;-) I see that row=(serial-column)/width, which eliminates the need to round down. -- 71.86.121.200 (talk) 03:54, 30 January 2008 (UTC)
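Both formulas can be checked with a short function; a minimal sketch in Python (the questioner's language isn't stated), using 1-based serial numbers, rows and columns as in the question:

```python
def grid_position(serial, width):
    """Return (row, column), both 1-based, for a cell's 1-based serial
    number in a grid with the given number of columns."""
    serial -= 1                  # work with 0-based numbering
    row = serial // width        # integer division rounds down
    column = serial % width      # serial mod width
    return row + 1, column + 1   # convert back to 1-based
```

In the 4-wide grid above, `grid_position(6, 4)` gives row 2, column 2, and `grid_position(4, 4)` gives row 1, column 4, matching the examples in the question.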

## Finding the correct sequence for painting

A PVC figure like this one requires manual painting.

http://www.entertainmentearth.com/images/%5CAUTOIMAGES%5CORG40601.jpg

Suppose it has 3 sections that require painting: A, B and C. And suppose each section has a painting failure rate. If a painting failure occurs, the whole thing would need to be discarded, resulting in a loss of effort.

So the question is: in what sequence should the painting occur? My gut feeling is that the section with the highest failure rate should be painted first, then the second highest, and so on, until the section with the lowest failure rate is painted last.

Now if each section requires a different amount of effort, then this changes the problem. So what algorithm should be followed to find the sequence that minimizes the expected loss of effort?

For a figure with only 3 sections the search is trivial and brute force can be used. But if a figure has 17 painting sections, each section with an effort cost and a failure rate, then a method for finding the optimum sequence is required.

Example:

sectionA {effort=5, failure=0.2}
sectionB {effort=1, failure=0.8}
sectionC {effort=20, failure=0.1}

Is this type of problem a hard problem to solve?

202.168.50.40 (talk) 02:30, 30 January 2008 (UTC)

Does it become clear only after the whole effort of painting one section has been expended that it resulted in failure, or is that typically detected, on average, more or less halfway through?  --Lambiam 07:59, 30 January 2008 (UTC)
Never mind, I don't think that makes a difference.  --Lambiam 08:26, 30 January 2008 (UTC)
You can simply work out the average amount of wasted effort. With your numbers: 5*0.2, 1*0.8, 20*0.1 = 1, 0.8, 2. Then you would do it in the order of LEAST wasted to MOST wasted (B, A, C), as you want to put minimal effort into something that may fail.--58.111.143.164 (talk) 08:32, 30 January 2008 (UTC)
For a thorough analysis you can use a Decision tree to calculate the Expected value. First choose the order you wish to use, say ABC.
Start - A passes - B passes - C passes  (effort 26,     prob = .8 * .2 * .9)
    |        |          \___ C fails    (effort 26 + E, prob = .8 * .2 * .1)
    |        \___ B fails               (effort 6 + E,  prob = .8 * .8)
    \___ A fails                        (effort 5 + E,  prob = .2)

where E is the expected effort. This will give you the formula E = 26*.8*.2*.9 + (26+E)*.8*.2*.1 + (6+E)*.8*.8 + (5+E)*.2. You can solve this for E. Repeat for each ordering and voilà, the order with the least expected effort is the one. If you have 17 steps then this will be too much to do by hand, but a computer should be able to cope. --Salix alba (talk) 09:18, 30 January 2008 (UTC)
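The recurrence can also be solved without writing out the whole tree: if C is the expected effort of a single attempt and P the probability that all sections pass, then E = C + (1 − P)·E, so E = C/P. A sketch in Python (the section numbers are the ones from the question):

```python
def expected_effort(sections):
    """Expected total effort until a figure is fully painted, where a
    failure discards the piece and restarts from scratch.
    sections: list of (effort, failure_probability) in painting order."""
    reach = 1.0          # probability the attempt reaches this section
    attempt_cost = 0.0   # expected effort spent in one attempt
    spent = 0.0          # effort invested so far within the attempt
    for effort, fail in sections:
        spent += effort
        attempt_cost += reach * fail * spent   # attempt fails here
        reach *= 1.0 - fail
    attempt_cost += reach * spent              # attempt succeeds
    return attempt_cost / reach                # E = C / P

A, B, C = (5, 0.2), (1, 0.8), (20, 0.1)
```

For the order ABC this gives E = 62.5, the same value obtained by solving the equation above for E.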
If you have 17 steps, then there are 17! = 355687428096000 possible orderings of the sections. In general, the complexity of this method grows as the factorial function, which is going to become infeasible very quickly. I think the "brute force" method mentioned in the original post is what you're describing, and 202.168.50.40 wants a more efficient algorithm. —Bkell (talk) 12:18, 30 January 2008 (UTC)
Highest failure rate first would fail for Job1 having cost 1 and failure rate 0.1, and Job2 having cost 100 and failure rate 0.2. Because the effort put into Job1 is so much smaller, it's less of a loss. Anon, your method would fail for Job1 having cost 20 and failure rate 0.8 while Job2 has cost 20 and failure rate 0.1. Here you would clearly want to do the job that is unlikely to succeed first, but it has the higher amount of wasted cycles. I think perhaps it's possible to solve this by comparing elements pairwise. I.e. the sequence A,B has an expected wastage of 5*.2+(5+1)*.8 = 5.8, while B,A has 1*.8+(1+5)*.2 = 2, so B should go before A. Then compare A with C: 5*.2+(5+20)*0.1 = 3.5 or 20*0.1+(5+20)*0.2 = 7, so A should go before C. This sorting could in theory be done in O(n*log n) time. Taemyr (talk) 13:34, 30 January 2008 (UTC)
I believe I can provide a reasonably good argument that, to minimize the expected waste of effort, the pieces should be painted in descending order of failure rate per effort. To see why, consider dividing each task into arbitrarily many small subtasks, each requiring the same amount of effort and, for subtasks of the same original task, having the same probability of failure. Since the effort per subtask is constant, by the same argument as in the constant-effort case above, they should be done in descending order of probability of failure — but this is achieved exactly when the original tasks are ordered by decreasing probability of failure per effort.
This proof sketch implicitly assumes that failures are detected as soon as they happen and that the failure rate is constant throughout each task, but as Lambiam mentions above, I don't think these assumptions should actually make any difference. It should be possible to construct a more rigorous proof by considering the integral of failure rate over effort, but I'm not going to start doing that just now. —Ilmari Karonen (talk) 07:25, 31 January 2008 (UTC)
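For the three-section example, the descending failure-rate-per-effort rule can be checked directly against brute force; a self-contained sketch (the discard-and-restart model and the closed form E = C/P are assumptions matching the recurrence discussed above):

```python
from itertools import permutations

def expected_effort(sections):
    # E = (expected cost of one attempt) / P(attempt succeeds),
    # under the discard-and-restart-on-failure model.
    reach, attempt_cost, spent = 1.0, 0.0, 0.0
    for effort, fail in sections:
        spent += effort
        attempt_cost += reach * fail * spent
        reach *= 1.0 - fail
    return (attempt_cost + reach * spent) / reach

sections = {"A": (5, 0.2), "B": (1, 0.8), "C": (20, 0.1)}

# Brute force: try all n! orderings (feasible only for small n).
brute = min(permutations(sections),
            key=lambda p: expected_effort([sections[s] for s in p]))

# Proposed rule: descending failure rate per unit of effort.
ranked = sorted(sections,
                key=lambda s: sections[s][1] / sections[s][0],
                reverse=True)
```

Both orderings come out as B, A, C, agreeing with Taemyr's pairwise comparison above.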

Wow! I'm really impressed! I wonder how many anime otaku have actually considered the mathematics of painting anime PVC figures before they start painting their garage kits. I guess mathematics applies everywhere, even where you would have least expected it. Thanks. Oh, by the way, here is another nice figure http://www.tmpanime.com/images/medium/4560228201475_MED.jpg 122.107.141.142 (talk) 10:40, 31 January 2008 (UTC)

## Polynomial Fitting

I know that you can put a polynomial through any finite set of points, no two of which are directly vertical. Can you do the same thing with a power series and an infinite set of points? Does it matter if there are accumulation points? Black Carrot (talk) 04:18, 30 January 2008 (UTC)

If you take the infinite set of points to be *every* point, then specifying the value at those points is simply specifying the function, and some functions do not have expressions as power series. If you take an infinite set of points including a limit point, then a function continuous on those points will have to take, as its value at the limit point, the limit of the values at points converging to it. Convergent power series expansions tend to be continuous, so this gives some restrictions on accumulation points. If you take the points to be on the real line, then accumulation points and arbitrary values do not mix.
If you take an infinite set of points with no accumulation point, then I believe a variation on Weierstrass factorization provides a function with prescribed values on the points, and that has a convergent power series expansion. The discussion of this is in the section of (Rudin 1987, pp. 304–305) called "An interpolation problem", and the result itself is theorem 15.13.
The theorem even says you can prescribe finitely many values of the derivative at each point as well. JackSchmidt (talk) 05:51, 30 January 2008 (UTC)
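For the finite case mentioned in the question, the interpolating polynomial can be built explicitly with Lagrange's formula; a sketch in Python (exact arithmetic via the standard `fractions` module; the sample points are arbitrary illustrations):

```python
from fractions import Fraction

def lagrange_eval(points, x):
    """Evaluate at x the unique polynomial of degree < len(points)
    passing through the given (x_i, y_i) pairs (distinct x_i)."""
    x = Fraction(x)
    total = Fraction(0)
    for i, (xi, yi) in enumerate(points):
        term = Fraction(yi)
        for j, (xj, _) in enumerate(points):
            if j != i:
                term *= (x - xj) / (xi - xj)   # Lagrange basis factor
        total += term
    return total
```

Through the points (0, 1), (1, 3), (2, 9) this recovers the polynomial 2x² + 1.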

## integration and undesirable points

The purpose behind this post is an attempt to expand the integration space to include functions with undesirable points where the function could be undefined. I need help to understand whether this post makes sense. The main question here: could this way be more general than using measure theory to expand the integration?

Assume the function F(x),

F:[a,b]→R


We will write the interval [a,b] as a union of subsets, ∪Gk,

∪Gk = GN ∪ GQ ∪ GQ′ ∪ … etc. (∪ here stands for the union symbol), where the Gk stand for the natural set, the rational set, the irrational set, etc.

Lets define the subsets,

θi = {gk : F(xi) ≥ gk ≥ 0},

xi ∈ [a,b].

Let (pi) be a partitioning of Gk in [a,b] and (pθi) a partitioning of (θi),

(Mθi = sup θi) and (mθi = inf θi),

(UFθi,pi) = Σi Mθi(xi − xi−1), the upper Darboux sum,

(LFθi,pi) = Σi mθi(xi − xi−1), the lower Darboux sum,

(UFθi,pi,Pθi) = inf{UFθi : (Pθi) a partitioning of (θi), (pi) a partitioning of (Gk)},

(LFθi,pi,Pθi) = sup{LFθi : (Pθi) a partitioning of (θi), (pi) a partitioning of (Gk)}.

Now we put the integration in the form

∫Fθi over Gk = {0, UFθi} = {0, LFθi} = the subsets sθi,

∫F over [a,b] = ∪ sθi.

For example:

f : [0,1] → R,

f(x) = x, x ∈ the irrational numbers Q′, within [0,1],

f(x) = 1, x ∈ the rational numbers Q, within [0,1],

∫F over [0,1] = {0,1/2}Q′ ∪ {0,1}Q = {0,1/2}R ∪ {1/2,1}Q (Q′, Q and R are suffixes here and ∪ stands for the union symbol).

This way enables us to exclude the undesired points from the integration, where f(x) may be undefined. 88.116.163.226 (talk) 11:02, 30 January 2008 (UTC) husseinshimaljasim

What you are talking about seems to be similar to the difference between Riemann integration and Lebesgue integration. In Lebesgue integration, if two functions are identical everywhere except on a set of measure zero (very roughly, a set of points that isn't too big), then their integrals will be equal. Thus, for example, a function which is continuous everywhere except at x=0, where it is infinite, will have the same integral as the function that instead takes the value 0 at x=0, or similar. Since the rationals have measure zero in the reals, you could even have a function which is poorly behaved over an infinite set of points (as long as it's a "small" infinite) and still be integrable. Confusing Manifestation(Say hi!) 04:37, 31 January 2008 (UTC)
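For the example given earlier in this thread (f(x) = x on the irrationals in [0,1], f(x) = 1 on the rationals), the Lebesgue theory gives a single number, since the rationals form a set of measure zero; a sketch of the computation:

```latex
\int_{[0,1]} f \, d\mu
  = \int_{[0,1]\setminus\mathbb{Q}} x \, d\mu
  + \int_{[0,1]\cap\mathbb{Q}} 1 \, d\mu
  = \int_0^1 x \, dx + 0
  = \frac{1}{2}.
```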

Well, I don't think so. In Lebesgue integration the above example would be μ(0,1/2) in the Q′ set + μ(0,1) in the Q set = 1/2 + 0 = 1/2. I am suggesting to keep the integration in the form of a union of subsets, regardless of whether they are countable or not. Wouldn't this be more general? 210.5.236.35 (talk) 14:36, 31 January 2008 (UTC) husseinshimaljasim

## equation of natural numbers.

If N1, N2, s and k represent 4 natural numbers, then why does the equation (N1)^s + 2 = (N2)^k, where N1, N2, s and k ∈ N, have only one solution in N, namely N1=2, s=1, N2=2, k=2? I guess. 88.116.163.226 (talk) 11:19, 30 January 2008 (UTC) husseinshimaljasim

I don't think ${\displaystyle 2^{1}+2=2^{2}}$ is the only solution. Isn't ${\displaystyle 5^{2}+2=3^{3}}$ also a solution ? Unless maybe "N-SET" has some special meaning ?? Gandalf61 (talk) 11:29, 30 January 2008 (UTC)
There appear to be lots of trivial solutions, such as ${\displaystyle 40^{1}+2=42^{1}}$. —Bkell (talk) 12:09, 30 January 2008 (UTC)
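A brute-force search over a small range (bounds arbitrary; the trivial s = k = 1 family noted above is skipped by requiring both exponents to be at least 2) confirms that the solution is not unique even with nontrivial exponents:

```python
# Find n1**s + 2 == n2**k with all four numbers in a small range;
# exponents are restricted to >= 2 to skip the trivial s = k = 1 family.
solutions = []
for n1 in range(1, 60):
    for s in range(2, 8):
        target = n1 ** s + 2
        for n2 in range(2, 60):
            for k in range(2, 8):
                if n2 ** k == target:
                    solutions.append((n1, s, n2, k))
```

This finds (5, 2, 3, 3), i.e. 5² + 2 = 3³.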

What was I thinking!? I must be needing some sleep. 210.5.236.35 (talk) 15:56, 30 January 2008 (UTC) husseinshimaljasim

## Capital-sigma notation

From the article Summation: Mathematical notation has a special representation for compactly representing summation of many similar terms: the summation symbol, a large upright capital Sigma. This is defined thus:

${\displaystyle \sum _{i=m}^{n}x_{i}=x_{m}+x_{m+1}+x_{m+2}+\cdots +x_{n-1}+x_{n}.}$

The subscript gives the symbol for an index variable, i. Here, i represents the index of summation; m is the lower bound of summation, and n is the upper bound of summation. Here i = m under the summation symbol means that the index i starts out equal to m. Successive values of i are found by adding 1 to the previous value of i, stopping when i = n.

An example of the above would be? --Obsolete.fax (talk) 13:40, 30 January 2008 (UTC)
Isn't the equation an example in and of itself? I suppose you could have, say, ${\displaystyle n=\sum _{i=1}^{n}1}$ - is that what you mean? -mattbuck 13:50, 30 January 2008 (UTC)
By definition, "example" is something specific, while the equation above is quite general. I think the OP is looking for something like:
${\displaystyle \sum _{k=2}^{6}k^{2}=2^{2}+3^{2}+4^{2}+5^{2}+6^{2}=90}$
${\displaystyle \sum _{i=4}^{7}{\frac {1}{i}}={\frac {1}{4}}+{\frac {1}{5}}+{\frac {1}{6}}+{\frac {1}{7}}={\frac {319}{420}}}$
-- Meni Rosenfeld (talk) 13:57, 30 January 2008 (UTC)
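The two sums transcribe directly into code; a sketch in Python (note that `range`'s upper bound is exclusive, so the sigma's upper bound n becomes n + 1):

```python
from fractions import Fraction

# sum of k**2 for k = 2 .. 6
squares = sum(k ** 2 for k in range(2, 7))        # 4 + 9 + 16 + 25 + 36

# sum of 1/i for i = 4 .. 7, kept exact with rational arithmetic
harmonic = sum(Fraction(1, i) for i in range(4, 8))
```

`squares` comes out as 90 and `harmonic` as 319/420, matching the hand computations above.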

## Spearman correlation coefficient.

I have been reading on how to calculate the correlation coefficient using Spearman's formula, but I got confused when ranking scores of this nature. According to the ranking procedure, the positions of the ranks of 64, 64 and 64 of X are 9th, 10th and 11th respectively. Now is it true that the rank of each of the numbers above is 10?

| X | Y |
|---|---|
| 22 | 36 |
| 24 | 24 |
| 30 | 25 |
| 40 | 20 |
| 45 | 48 |
| 50 | 44 |
| 50 | 40 |
| 52 | 56 |
| 64 | 62 |
| 64 | 68 |
| 64 | 56 |
| 72 | 32 |
| 78 | 78 |
| 78 | 68 |
| 84 | 68 |
| 90 | 58 |

—Preceding unsigned comment added by Nkomali (talkcontribs) 14:31, 30 January 2008 (UTC)

Yes. See Spearman's rank correlation coefficient. If several values are tied, the rank of each of them is the average of the positions they occupy. -- Meni Rosenfeld (talk) 16:16, 30 January 2008 (UTC)
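The averaging rule is straightforward to code; a sketch in Python (`tied_ranks` is a hypothetical helper written for this answer, not a library function):

```python
def tied_ranks(values):
    """Rank values from 1 to n; tied values all receive the average of
    the positions they jointly occupy."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    ranks = [0.0] * len(values)
    i = 0
    while i < len(order):
        j = i
        # extend j over the run of values tied with values[order[i]]
        while j + 1 < len(order) and values[order[j + 1]] == values[order[i]]:
            j += 1
        average = (i + j) / 2 + 1   # positions i+1 .. j+1, averaged
        for k in range(i, j + 1):
            ranks[order[k]] = average
        i = j + 1
    return ranks
```

For the X scores in the question, the three 64s (occupying the 9th, 10th and 11th positions) each get rank 10, and the two 50s each get rank 6.5.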

## Dual Curve

Hello,

I've been looking at the article on dual curves, but it wasn't very informative.

Am I right when I say this is the construction for the dual curve :

For each point on the curve, draw the tangent. Then take the perpendicular to that tangent that passes through the origin. Labelling the point at the intersection of those two lines P, take the point H that is at distance 1/OP from the origin, that lies on the perpendicular line. The set of all such points H forms the dual curve.

If that's the case, doesn't that give rise to two different curves: one when we choose H to be on the same side of the origin as P, and one when H is on the other side?

Finally, why does this rather unlikely combination of constructions lead to a curve of any interest? Seeing as the position of the origin matters for the resulting curve...

Thanks. --Xedi (talk) 23:31, 30 January 2008 (UTC)

I can't comment on your construction, but Duality_(projective_geometry)#Points_and_lines_in_the_plane might help clarify duality's goal. As it happens, given certain conditions, absolutely everything that can be proven in geometry has a corresponding dual proof that exchanges lines with points. For instance, every pair of points determines a line that crosses them, and every pair of lines (unless they're parallel) determines a point of intersection. Three lines determine a triangle, as do three points, etc. The dual of a curve is the corresponding curve with similar properties, where points have been exchanged with tangents. Black Carrot (talk) 00:43, 31 January 2008 (UTC)
Xedi does pick up an important point, which is not really represented in the article. We should really be working in the projective plane, so points and lines are represented by triples (x,y,z) subject to the equivalence relation (ax, ay, az) ~ (x,y,z). The ambiguity happens when we project from the projective plane into R2. Yes, you can get different curves in R2, but they will be projectively equivalent. --Salix alba (talk) 08:44, 31 January 2008 (UTC)
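To make the construction above concrete: taking H on line OP at distance 1/OP is polar reciprocation in the unit circle, under which the tangent line ux + vy = 1 corresponds to the dual point (u, v). A worked example (the parabola y = x², chosen here for illustration):

```latex
% tangent to y = x^2 at (t, t^2), rewritten in the form ux + vy = 1:
y = 2tx - t^2
  \;\Longleftrightarrow\;
  \frac{2}{t}\,x - \frac{1}{t^2}\,y = 1 \qquad (t \neq 0),
% so the dual point is (u, v) = (2/t, -1/t^2); eliminating t = 2/u gives
v = -\frac{u^2}{4},
% i.e. the dual of this parabola is again a parabola.
```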
Thanks! I'll have to investigate projective geometry a bit further. -- Xedi (talk) 17:28, 31 January 2008 (UTC)