Wikipedia:Reference desk/Mathematics

From Wikipedia, the free encyclopedia

The Wikipedia Reference Desk covering the topic of mathematics.

Welcome to the mathematics reference desk.
Want a faster answer?

Main page: Help searching Wikipedia

How can I get my question answered?

  • Provide a short header that gives the general topic of the question.
  • Type ~~~~ (four tildes) at the end – this signs and dates your contribution so we know who wrote what and when.
  • Post your question to only one desk.
  • Don't post personal contact information – it will be removed. We'll answer here within a few days.
  • Note:
    • We don't answer (and may remove) questions that require medical diagnosis or legal advice.
    • We don't answer requests for opinions, predictions or debate.
    • We don't do your homework for you, though we’ll help you past the stuck point.

How do I answer a question?

Main page: Wikipedia:Reference desk/Guidelines

  • The best answers address the question directly, and back up facts with wikilinks and links to sources. Do not edit others' comments and do not give any medical or legal advice.
See also:
Help desk
Village pump
Help manual

June 28

Everywhere discontinuously differentiable function

Can someone please give, if possible, an example of a (real-valued) function that is everywhere differentiable, but whose derivative is everywhere discontinuous? If such a function can't exist, why?--Jasper Deng (talk) 10:40, 28 June 2015 (UTC)

Apparently not... (talk) 14:03, 28 June 2015 (UTC)
If you want some fun "counterexamples", get hold of Counterexamples in Analysis by Gelbaum and Olmsted. It doesn't give this particular one, but many others. YohanN7 (talk) 10:12, 1 July 2015 (UTC)
The set of discontinuities of a derivative is a set of the first category. In particular, by the Baire category theorem, its complement is dense, so in particular non-empty: the derivative of an everywhere differentiable function must be continuous somewhere. For proof, see John Oxtoby, "Measure and category". Sławomir Biały (talk) 11:03, 1 July 2015 (UTC)
For a set in [0, 1] of category I and measure 1, and a set in [0, 1] of measure 0 and category II, see the above-mentioned book. YohanN7 (talk) 11:49, 1 July 2015 (UTC)

Almost conjugate matrices

We know that there is a nonsingular matrix A with AX=YA iff X and Y are conjugate, iff they have the same Jordan normal form (at least over a complete field). What about when AX=YA where A is singular but still nonzero? The set {A : AX=YA} is a vector space whose dimension can range from 0 to n² depending on X and Y. If X and Y are diagonalizable and have no eigenvalues in common, then the dimension is 0; but what can be said about the dimension if they do have eigenvalues in common? And what about the nondiagonalizable case? I was mainly interested in 2×2 rotation matrices X = R(α), Y = R(β). In this case the dimension is 4 if α=β=0 or α=β=π; 2 if α=±β; and 0 otherwise. But I thought the more general question might be interesting as well. --RDBury (talk) 20:41, 28 June 2015 (UTC)
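One way to explore these dimensions numerically is to vectorize the equation: AX = YA is the linear system (Xᵀ ⊗ I − I ⊗ Y) vec(A) = 0, so the dimension of {A : AX = YA} is n² minus the rank of that Kronecker matrix. A rough NumPy sketch (the function names are my own):

```python
import numpy as np

def intertwiner_dim(X, Y, tol=1e-9):
    """Dimension of the solution space {A : AX = YA}.

    Vectorizing, AX = YA becomes (X^T kron I - I kron Y) vec(A) = 0,
    so the dimension is n^2 minus the rank of that n^2 x n^2 matrix.
    """
    n = X.shape[0]
    M = np.kron(X.T, np.eye(n)) - np.kron(np.eye(n), Y)
    return n * n - np.linalg.matrix_rank(M, tol=tol)

def R(t):
    """2x2 rotation matrix through angle t."""
    return np.array([[np.cos(t), -np.sin(t)],
                     [np.sin(t),  np.cos(t)]])
```

This reproduces the 2×2 rotation cases stated above: dimension 4 for α = β = 0 or α = β = π, 2 for α = ±β, and 0 otherwise.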

June 29

Logic puzzle

I can't search for answers to this, as I don't know what to call it.

Imagine a sparsely filled in grid with one location being the prize cell. X knows the x coordinate. Y knows the y coordinate. (1) Y says "I do not know where the prize is. And I know you do not know where the prize is". (2) X thinks and then says "I did not know where the prize was, but now I do". (3) Y then says "Now so do I".

How do you solve this? 1a permits you to eliminate all y coords with only one cell in them - else Y would know which cell. 1b obviously eliminates all x coords with only one cell in them - else X would know which cell. Additionally I think it eliminates all y coords containing a cell that is solo in an x coord. But then I run out of steam.

In the example below

* * * A * * * * B *
C * * * * * * * * *
* * * D * E * * * F
* * * * * * G * * *
* * * * H * * * J *
* * K * * * * * * *
* L * * * * * M N *
* * * P * Q * R * *

1a eliminates C, G, K; 1b eliminates F, H, L and thus DEF, HJ, LMN

That leaves ABPQR. What next? (BTW, I know the answer, I just can't work out why it is the answer.) -- SGBailey (talk) 07:48, 29 June 2015 (UTC)

For the first clue, Y does not know where the prize is, so Y must know a y-coordinate which matches more than one letter. Y also knows that X does not know where the prize is, which means that for each letter in the row Y has been given, there is another letter in the same column (otherwise, if that were the correct answer, then X would know it). So 1a eliminates C, DEF (since if the answer were F, then X could know it, and Y knows that X cannot know it), G, HJ (ditto for H), K, LMN. Which leaves just AB and PQR. I'll let you finish off working it out, but bear in mind you cannot get the answer until you apply the final clue (i.e. you can only know the answer after both X and Y know it). MChesterMC (talk) 08:34, 29 June 2015 (UTC)
You have just repeated the analysis that I presented in the question, leaving as an exercise the part of the solution that I haven't worked out how to do. I'm afraid I didn't find that helpful. -- SGBailey (talk) 09:32, 29 June 2015 (UTC)
Can you explain this logic further... Suppose Y knows it is in the DEF row, but doesn't know which one it could be. When X says he doesn't know which it is, that eliminates F. You state that it also eliminates D and E because it cannot be F. Why? If it were D, Y wouldn't know if it was D or E and X wouldn't know if it was A, D, or P. If it was E, Y wouldn't know if it was D or E and X wouldn't know if it was E or Q. (talk) 19:11, 29 June 2015 (UTC)
(2) tells us that after excluding all those that (1) excludes, X should know where it is. So, of ABPQR it couldn't be A or P, which share an x coordinate. Then (3) tells us that once we exclude all those that (2) excludes, Y knows where it is. Therefore, of those allowed by (2), BQR, it must additionally have a unique y coordinate. That leaves B. (talk) 11:13, 29 June 2015 (UTC)
Thank you. A "very involved" solution. Do these kind of problems have a name? -- SGBailey (talk) 12:45, 29 June 2015 (UTC)
You just want a recursive solution. The solution takes the board and eliminates all rows and columns with a single solution. Then, recursively solve that board. To make it stop, do not make a recursive attempt if no columns or rows were removed. (talk) 18:44, 29 June 2015 (UTC)
That's not enough to solve it. That only gets you down to ABDEMNPQR. You also need that Y knows the answer once he knows that X knows, to solve it. StuRat (talk) 19:12, 29 June 2015 (UTC)
Since each person's strategy is based on assuming the other player is rational (and quite intelligent, in this case), this gets into game theory. StuRat (talk) 19:17, 29 June 2015 (UTC)
I really don't think game theory applies. Game theory is when people actually have a choice of what to do. Here the actors don't have a choice, they're merely reporting everything they know, and they're assumed to be perfect deducers. -- Meni Rosenfeld (talk) 21:25, 29 June 2015 (UTC)
Doesn't the fact that the conclusions they draw are dependent on the actions of others (the accuracy and honesty of their statements, in this case) qualify ? StuRat (talk) 22:38, 29 June 2015 (UTC)
I don't think so. "Dependence" on the other's accuracy and honesty is only meaningful if there is a counterfactual possibility that the other party is not accurate or honest. If each person had a probability of being incorrect, and a choice of whether to be honest or not, and a payoff table that depends on both of their actions, then it would be game theory. But here there is no choice - the people are merely following the script of the problem statement.
You could try to force this problem into a game-theoretic formulation, but game-theoretic analysis generally starts with a description of the game, and then deduces the best actions; here we are not given a description of the game, we are just given the actions that actually take place. Formulating it as game theory would take you further from a solution, not closer - once you did this, you would still have to solve the problem using normal logic as we did here. The tools of game theory simply don't help. It's like being asked to calculate 3*4, and trying to solve it by phrasing it as a decision theory problem where the player gets the most utility by choosing the correct answer for 3*4. You'd still need to calculate 3*4 to find his best action. -- Meni Rosenfeld (talk) 10:10, 30 June 2015 (UTC)
I don't know of a name for it, but a problem of this kind called the "Cheryl birthday problem" caused quite a bit of hype a short time ago. -- Meni Rosenfeld (talk) 21:25, 29 June 2015 (UTC)
Broadly, it's a type of logic puzzle. The "Cheryl birthday" problem mentioned by Meni has an article, Cheryl's_Birthday, and it is indeed very similar. Another logic puzzle that hinges on people reporting what they know is this one [1]. SemanticMantis (talk) 21:50, 29 June 2015 (UTC)

Thank you Semantic, I have to admit defeat. Do you have the solution? Widneymanor (talk) 07:47, 1 July 2015 (UTC)
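Returning to the original grid: the three statements can be run mechanically as successive filters over the set of cells. A small Python sketch (the coordinates below transcribe the example grid, with columns and rows numbered from 1):

```python
# letter -> (x, y), transcribed from the example grid (x = column, y = row)
cells = {
    'A': (4, 1), 'B': (9, 1), 'C': (1, 2),
    'D': (4, 3), 'E': (6, 3), 'F': (10, 3),
    'G': (7, 4), 'H': (5, 5), 'J': (9, 5),
    'K': (3, 6), 'L': (2, 7), 'M': (8, 7), 'N': (9, 7),
    'P': (4, 8), 'Q': (6, 8), 'R': (8, 8),
}

def row_mates(live, c):
    return [d for d in live if cells[d][1] == cells[c][1]]

def col_mates(live, c):
    return [d for d in live if cells[d][0] == cells[c][0]]

live = set(cells)
# (1) Y doesn't know (Y's row has >= 2 cells), and Y knows X doesn't
#     (every cell in Y's row lies in a column with >= 2 cells)
live = {c for c in live if len(row_mates(live, c)) >= 2
        and all(len(col_mates(live, d)) >= 2 for d in row_mates(live, c))}
# (2) X now knows: X's column contains exactly one surviving cell.
#     (X's "I did not know" adds nothing here: every surviving cell's
#     column already held >= 2 cells in the original grid.)
live = {c for c in live if len(col_mates(live, c)) == 1}
# (3) Y now knows: Y's row contains exactly one surviving cell
live = {c for c in live if len(row_mates(live, c)) == 1}
print(sorted(live))  # → ['B']
```

After step (1) the survivors are exactly A, B, P, Q, R as worked out above; step (2) drops A and P, and step (3) leaves B.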

"Unseriesable " Number?[edit]

I would like to know if there is a definition of a real "unseriesable" number,
meaning a number that is not the limit of any series.
23:15, 29 June 2015 (UTC) — Preceding unsigned comment added by Exx8 (talkcontribs)

Every real number is the sum of some infinite series. Is that what you were asking? I'm not quite sure. --Trovatore (talk) 23:32, 29 June 2015 (UTC)
For any real number V there is a trivial sequence v whose limit is V: \lim_{i\to\infty}v_i=V, namely the constant sequence (v_i)_{i\in\Bbb N} defined by v_i = V. If we take the sequence of differences
d_i=\begin{cases}v_i & \text{for }i=1\\v_i-v_{i-1} & \text{for }i>1\end{cases}\ \ =\ \begin{cases}V & \text{for }i=1\\0 & \text{for }i>1\end{cases}
then the sum of the series d is clearly V: \sum_{i=1}^\infty d_i=\lim_{n\to\infty}v_n=V.
Of course there are also other sequences convergent to V, for example a sequence of longer and longer decimal approximations, and each such sequence defines a series summing up to V.
For \pi it might be a sequence
v = (3, 3.1, 3.14, 3.141, 3.1415, 3.14159,\dots)
and a corresponding series
d = (3, 0.1, 0.04, 0.001, 0.0005, 0.00009,\dots)
--CiaPan (talk) 05:12, 30 June 2015 (UTC)
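The decimal-truncation construction above is easy to check numerically; a small sketch (the number of digits used is an arbitrary choice):

```python
from math import pi

digits = "3.14159265358979"
# successive decimal truncations v = (3, 3.1, 3.14, ...) of pi
v = [float(digits[:k]) for k in range(1, len(digits) + 1)
     if not digits[:k].endswith('.')]
# term-by-term differences: d_1 = v_1, d_i = v_i - v_{i-1} for i > 1
d = [v[0]] + [v[i] - v[i - 1] for i in range(1, len(v))]
# partial sums of the series d telescope back to the truncations v,
# which converge to pi
partial = []
s = 0.0
for term in d:
    s += term
    partial.append(s)
```

Each partial sum of d recovers the corresponding truncation, so the series sums to the chosen real number.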

June 30

Usefulness vs. naturalness

There is one special mathematical equation that is defined as such because it is useful, rather than because it is natural. This is that:

 0^0 = 1

Are there any other equations of this kind?? Georgia guy (talk) 15:14, 30 June 2015 (UTC)

Sort of similar to 0! = 1. See Factorial#Definition and empty product. I get what you mean, but I don't think the useful/natural distinction is very compatible with the axiomatic nature of math. Do you think the irrational numbers are "unnatural"? What about the axiom of choice? Perhaps a better distinction would be "definition by convention" as opposed to "properties that directly follow from axioms and inference". It is by convention that we define the empty product to be 1, but it is also useful, and in some sense natural (what other choice could be more natural?) SemanticMantis (talk) 16:03, 30 June 2015 (UTC)
0! = 1 is actually very natural. It is consistent with the combination of two facts: 1! is 1, and n! is related to (n+1)! simply by dividing by n+1; since 1 divided by 1 is 1, it's not an indeterminate form. Georgia guy (talk) 16:09, 30 June 2015 (UTC)
They both come down to the empty product though... SemanticMantis (talk) 18:47, 30 June 2015 (UTC)
Oh boy here we go again. Anyway, as I see it: whether  0^0 = 1 is natural depends on what sort of exponentiation you're thinking of. If you're thinking of repeated multiplication, that is natural-number-to-natural-number or real-number-to-natural-number or complex-number-to-natural-number exponentiation, then  0^0 = 1 is quite natural. That's because it's an empty product.
However, real-number-to-real-number exponentiation is a conceptually different operation. It cannot be viewed as repeated multiplication. In that context, the argument for the naturalness of  0^0 = 1 loses its force. (So, by the way, do most of the arguments for usefulness.) --Trovatore (talk) 16:27, 30 June 2015 (UTC)
You're implying that 0 is a natural number here, aren't you?? Georgia guy (talk) 16:42, 30 June 2015 (UTC)
Oh, there are levels to that question :-). But in the sense I think you mean it, yes, like most set theorists and C programmers, I start counting with 0. It's the natural choice :-) --Trovatore (talk) 17:30, 30 June 2015 (UTC)
You mean, to you June is the fifth month of the year?? (More generally, January is the zeroth and February is the first.) Georgia guy (talk) 17:45, 30 June 2015 (UTC)
No no; I'm still speaking English. No one would understand me if I adopted that convention. But my loop indices start with 0, and the base case of most inductions/recursions is naturally indexed by 0. --Trovatore (talk) 18:06, 30 June 2015 (UTC)
An exponentiation V^A for cardinal numbers A and V represents the cardinality of the set of functions from a domain of cardinality A ('A' for 'arguments') into a codomain of cardinality V ('V' for 'values'). For example, there are 10^2 functions from a two-element set into a ten-element set (those functions may be represented e.g. as 2-digit decimal strings '00', '01', ..., '99'). Then 0^0 is the number of functions from an empty set (whose cardinality is zero, |\{\}|=0) into itself, and there is exactly one such function: the empty function.
Hence naturally  0^0 = 1 .--CiaPan (talk) 08:40, 1 July 2015 (UTC)
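The counting argument above can be spelled out as a brute-force enumeration; a small sketch (the helper name is mine):

```python
from itertools import product

def count_functions(domain_size, codomain_size):
    """Count functions from a domain_size-element set into a
    codomain_size-element set by listing every assignment table:
    one choice of value for each element of the domain."""
    return sum(1 for _ in product(range(codomain_size), repeat=domain_size))
```

Here count_functions(2, 10) is 100 = 10^2, matching the two-digit strings above, and count_functions(0, 0) is 1: the single empty assignment, i.e. the empty function, so on this reading 0^0 = 1.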

Very fast growing function

Define g(n) to be the factorial of n taken n times (again with the convention that g(0) = 1 due to the empty product). For example, g(3) = 3!!! = 6!! = 720! and g(5) = 5!!!!! = 120!!!! etc. (the factorial signs mean iterated factorials, not the double factorial or any related kind). Has anyone investigated this function before? In particular, is there an analytic continuation of it? Obviously it's an extremely fast-growing function; I might even conjecture that it's faster than tetration.

I came up with this when considering the generating function of the sequence of reciprocals, f(x) = \sum_{k = 0}^\infty \frac{x^k}{g(k)}. This surely has to be an entire function as I can't come up with any singularities for it and it converges absolutely and extremely quickly. My question for it is, is it elementary? More generally, how do we show whether a given function is elementary or not?--Jasper Deng (talk) 18:15, 30 June 2015 (UTC)

I'm not sure about the "faster than tetration" thing (though it might depend on what you mean exactly). It's a well-known riddle to show that a tower of powers of 9's is bigger than applying ! to 9 the same number of times. Of course x! > 9^x, but 9^9 > 9!, and since the argument of the function is much more important than the choice of function, the gap is enough to make sure the factorials never catch up. This result should generalize easily. -- Meni Rosenfeld (talk) 20:26, 30 June 2015 (UTC)
By "faster than tetration" I'm considering the question of \lim_{n\to\infty} \frac{g(n)}{^na} for any fixed value of a (not ^nn, which I think grows faster than g). If g grows faster, then this limit increases without bound. For example, I am pretty sure that g(15) is greater than ^{15}9.--Jasper Deng (talk) 21:01, 30 June 2015 (UTC)

Also, generalizing g as follows provides a functional equation, just like for the factorial function. Let h(m, n) be defined as: 1 if m and n are both 0, m if n = 0 and m positive, h(m!, n - 1) if n > 0. Then g(n) = h(n, n).--Jasper Deng (talk) 21:48, 30 June 2015 (UTC)
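That functional equation is directly computable for tiny arguments, since Python has arbitrary-precision integers; a sketch of h and g exactly as defined above (anything beyond g(3) = 720! is hopeless to evaluate exactly):

```python
from math import factorial

def h(m, n):
    """h(m, 0) = 1 if m = 0, and m if m > 0; h(m, n) = h(m!, n - 1) for n > 0."""
    if n == 0:
        return 1 if m == 0 else m
    return h(factorial(m), n - 1)

def g(n):
    """g(n) = factorial applied n times to n, i.e. g(n) = h(n, n)."""
    return h(n, n)
```

This gives g(0) = g(1) = 1, g(2) = 2, and g(3) = 720!, a number of roughly 1750 digits; g(4) already involves the factorial of 24!, far beyond exact computation.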

Here's more information about extending g(n) to g(x). Since 1 and 2 are fixed points of the factorial function, I believe that g'(1) = 1 - \gamma \approx 0.422784 which is the derivative of \Gamma(x+1) at 1, and g'(2) = (3 - 2\gamma)^2 \approx 3.406124 which is the square of the derivative of \Gamma(x+1) at 2. It should be possible to calculate g(x) for half-integers by using the functional square root of \Gamma(x+1). Using the formula on this page I get g(1.5) ≈ 1.253. Getting more digits is hard because of huge cancellations in the formula, but it might be possible using multiprecision. Egnau (talk) 14:23, 2 July 2015 (UTC)

Distance matrix

I have a set of strings and I want to compute the edit distance between all pairs in the set. Is there a more efficient way to create the matrix of distances than computing the Levenshtein distance for each pair individually? (besides the obvious distance(i,j) = distance(j,i)). (talk) 23:08, 30 June 2015 (UTC)

Assuming that by "edit distance" you are referring to "Levenshtein edit distance", it is important to note that the order of the strings in a comparison does not matter: Levenshtein(A,B) = Levenshtein(B,A). So, for strings A, B, C, D..., the worst you could do is compare A to everything, then compare B to everything but A, then compare C to everything but A and B... The issue is trying to compare one string to more than one string. Doing so would not be faster using the standard Wagner-Fischer dynamic programming solution - which is what most people use. You *could* compare one string to N strings, but you would still make 3N comparisons per cell of the matrix. There is no benefit. To get an improvement, you need to sort the strings. Then, you have to get the longest common initial substring. If you have N strings that begin with "AHEKKDEG", you can compare a string to "AHEKKDEG" first. From there, you compare to the rest of the strings for each one. Then, you have a mostly complete matrix to work with from that point on. I personally wouldn't do it that way - too much work. What I would do...
  1. Sort the strings
  2. Compare string A to B (assuming that you call the first one A and the second one B after sorting).
  3. Get distance A-B and B-A from that.
  4. Replace the top of the matrix, currently holding B, with C - but note how many columns are the same as B (a lot since I sorted the strings first).
  5. Start my calculation of A-C and C-A from the first column of difference between C and B.
  6. Repeat for every string left - reusing as many precalculated columns as possible for each new string.
There is a problem here... Levenshtein functions are usually as optimized as they can possibly be. If you are writing your own, it will likely NOT be optimized. You need to look at the header file that includes your Levenshtein function and really understand the code so you can write nearly the same code. If you are using a scripting language, such as PHP, you simply cannot write code as efficient because the built-in function will be compiled, not scripted. Hopefully that is helpful. (talk) 17:44, 2 July 2015 (UTC)
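For reference, here is a baseline all-pairs computation that exploits only the symmetry d(i,j) = d(j,i); the sorted-prefix column-reuse refinement sketched above is omitted for brevity. A minimal Python version of the Wagner-Fischer dynamic program:

```python
def levenshtein(a, b):
    """Classic Wagner-Fischer dynamic program, kept to one row at a time."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                  # deletion
                           cur[j - 1] + 1,               # insertion
                           prev[j - 1] + (ca != cb)))    # substitution
        prev = cur
    return prev[-1]

def distance_matrix(strings):
    """All-pairs Levenshtein matrix, computing each unordered pair once."""
    n = len(strings)
    D = [[0] * n for _ in range(n)]
    for i in range(n):
        for j in range(i + 1, n):
            D[i][j] = D[j][i] = levenshtein(strings[i], strings[j])
    return D
```

This halves the work versus the naive n² loop; the prefix-reuse trick can then be layered on top by sorting the strings first.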

Least Symmetric Triangle?

Let T be the set of all triangles in the x-y plane with one vertex at (0,0), one vertex at (1,0) and one at (x,y), where y is positive and x ≥ 0.5. Let L be the set of lines in the x-y plane. Let t in T and l in L be chosen. S(t,l) is the percentage symmetry that t has in l, defined as the percentage of t whose mirror image across l is also in t. (So if you chose an isosceles triangle ABC where AB = AC and l is the line through A and the midpoint of BC, the value of S(ABC, line(A->midpoint BC)) would be 1.) Any other line would of course be inferior unless the triangle were equilateral.

Let MS(t) = maximum over all l in L of S(t,l) (so MS(t) is 1 for any isosceles triangle t). What triangle t in T has the smallest MS(t), and what is that MS(t)? Naraht (talk) 23:21, 30 June 2015 (UTC)

I have a hard time following your question, since "percentage symmetry" and "mirror across l" are both ill-defined to me.--Jasper Deng (talk) 23:47, 30 June 2015 (UTC)
The percentage symmetry for a triangle for a given l is the percentage of the triangle mirrored in l that overlaps the original triangle. If l doesn't intersect the triangle then the value will be zero. Naraht (talk) 00:03, 1 July 2015 (UTC)
That's still a bit ill-defined (it would be helpful if you could link an article on defining those terms). What does it mean for a triangle to overlap l? Are you talking about the ratio of area in t on one side of l to the area on the other side of l? In that case, the answer to your original question can be found by computing S as a function of the parameters that define l (namely its slope and y-intercept, or equivalently, two points along it) and the parameters that define t (the "free" point) and solving for \nabla S = \mathbf{0} for l that crosses t; any l that does not intersect t need not be considered. On obtaining the maximum of S for a given t, it then provides an expression for MS(t), which can be similarly solved as a function of the triangle's parameters.--Jasper Deng (talk) 00:25, 1 July 2015 (UTC)
I think it's fairly clear what the OP is asking for. If T is a triangle and l is a line, let Tl be the reflection of T through l and define S(T, l) as Area(T∩Tl)/Area(T). Now define S(T) as the maximum over all lines l of S(T, l). The value of S(T) for a given triangle is well defined and there is some l for which S(T, l) = S(T). To see this, it's clear that you only need to consider lines which intersect T, so if l is parametrized as l(t, r) = {(x,y): x cos t + y sin t = r}, the set of pairs (t, r) with 0≤t≤π and l(t, r) ∩ T ≠ ∅ is bounded and therefore compact. Then the question is, what is the minimum possible value of S(T) over all triangles T. We know 0 is a lower bound for S(T), so there is a greatest lower bound, S say. But it's not clear that there is a triangle T that achieves it. It's conceivable that there is a sequence of triangles that get longer and thinner where S(T) approaches S as a limit. In practical terms, finding the area of intersection of two triangles is a bit tricky, so S(T, l) would probably have a complicated formula and not be differentiable. So unless there is some simple way of finding l for a given T, finding S(T) for a given T would involve some kind of optimization algorithm. On top of that, you want to minimize over T, so it sounds like a difficult problem. Not that it couldn't be done with some effort and a computer. Maybe a good first step would be to find a better lower bound for S than 0.--RDBury (talk) 14:36, 1 July 2015 (UTC)
OP here, thank you. That's exactly what I meant: Area(T∩Tl)/Area(T). The definition of which triangles can be chosen is because a triangle has the same S(T) after both translation and expansion/contraction. For starters (to see if this can be attacked reasonably), how would S(T) be calculated for a 3-4-5 right triangle? Naraht (talk) 18:08, 1 July 2015 (UTC)
We can define a unique placement for each triangle; the OP presented one attempt, here is another one: let a ≥ b ≥ c denote the triangle's side lengths; place vertex C at (0,0) and B on the positive X axis; then A is in a blue area bounded by the X axis, the line x = a/2 and the circle arc.
Seeking asymmetric triangle 1.png
If we want the reflected triangle to overlap the original one, the reflection axis must intersect the triangle. A triangle is a convex figure, so the reflection axis has to meet the triangle's boundary at two different points. We can define a uniform coordinate along the triangle's circumference, so that each axis is described by two real numbers – the coordinates of the intersection points. Then the common area is a {\Bbb R}^2\to \Bbb R function, continuous and piecewise differentiable (the domain pieces will depend on vertices of Tl passing the edges of T as l changes). However the number of pieces and their boundaries may depend on the a:b:c ratios, and it may be difficult to give a general algebraic description. --CiaPan (talk) 19:22, 1 July 2015 (UTC)
My understanding of the problem:
  • for any triangle t with fixed area and an arbitrarily chosen reflection axis L, let t(L) be the image of t under the reflection;
  • let A(t,L) be the area of the intersection t\cap t(L), and A(t) be the maximum possible area: A(t)=\max_L\{ A(t,L)\}; then:
  • what triangle t minimizes A(t)?
Am I right? Is that what you mean? --CiaPan (talk) 18:05, 1 July 2015 (UTC)
More or less. I was using two fixed points to try to cut down on the fact that similar triangles would have the same answer, but using a fixed area works as well. (Rotations also give the same answer.) Naraht (talk) 18:13, 1 July 2015 (UTC)
Here is a partial result: For any triangle T there is a line l so that if Tl is the reflection of T through l then Area(T∩Tl) > Area(T)/φ, where φ is the golden ratio. Proof: Let the triangle have sides a, b, c with a≤b≤c. Then a+b>c. Suppose b/c ≥ a/b. Let A be the vertex opposite a and let l be the bisector of A. Then both T and Tl contain the isosceles triangle with vertex A and equal sides b. The area of this triangle is (b²/2) sin A, so Area(T∩Tl)/Area(T) ≥ (b²/2 sin A)/(bc/2 sin A) = b/c. Now c/b - 1 = (c-b)/b < a/b ≤ b/c. Putting x = c/b we get x>0 and x-1 < 1/x, so x²-x-1 < 0 and x < φ. ∴ b/c > 1/φ. Now suppose b/c ≤ a/b. Let C be the vertex opposite c and let l be the bisector of C. As before, both T and Tl contain the isosceles triangle with vertex C and equal sides a. Computing area as before, Area(T∩Tl)/Area(T) ≥ a/b. Now b/(a+b) < b/c ≤ a/b, so a/b + 1 = (a+b)/b > b/a. Putting x = b/a we get 1/x + 1 > x and x>0, so x²-x-1 < 0 and x < φ. ∴ a/b > 1/φ.
Judging from the constructions here it seems to me that attention should be focused on long thin triangles where the ratio of the shorter sides is around φ.--RDBury (talk) 08:06, 2 July 2015 (UTC)
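To Naraht's 3-4-5 question above: one crude numerical approach is Monte Carlo, reflecting sample points across a grid of candidate lines. A sketch (the placement, sample count and grid resolutions are arbitrary choices; this only estimates S(T), it proves nothing):

```python
import numpy as np

rng = np.random.default_rng(0)

# one placement of the 3-4-5 right triangle (vertices counterclockwise)
A, B, C = np.array([0., 0.]), np.array([4., 0.]), np.array([0., 3.])

def inside(P):
    """True for each row of the (n, 2) array P lying inside triangle ABC."""
    def cross(U, V):
        return (V[0] - U[0]) * (P[:, 1] - U[1]) - (V[1] - U[1]) * (P[:, 0] - U[0])
    return (cross(A, B) >= 0) & (cross(B, C) >= 0) & (cross(C, A) >= 0)

# uniform samples inside the triangle via barycentric coordinates
u, v = rng.random(5000), rng.random(5000)
flip = u + v > 1
u[flip], v[flip] = 1 - u[flip], 1 - v[flip]
P = A + np.outer(u, B - A) + np.outer(v, C - A)

def S(theta, r):
    """Estimate Area(T ∩ Tl)/Area(T) for the line x cos(theta) + y sin(theta) = r,
    by reflecting the samples across the line and counting how many stay in T."""
    n = np.array([np.cos(theta), np.sin(theta)])
    Q = P - 2.0 * np.outer(P @ n - r, n)
    return inside(Q).mean()

best = max(S(t, r)
           for t in np.linspace(0.0, np.pi, 180, endpoint=False)
           for r in np.linspace(-4.0, 5.0, 120))
```

By the golden-ratio bound above, best should come out above 1/φ ≈ 0.618; the angle-bisector construction in fact guarantees at least b/c = 0.8 for sides 3-4-5, up to grid spacing and sampling noise.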

July 1

Tennis problem

In tennis, players have a "first serve" action, which is faster and more difficult for the opponent to return but more likely to go "out", and a "second serve" action, which is slower and easier to return but more likely to be "in". A player's statistics include percentages of first and second serves "in" (call these respectively p1 and p2), and percentages of points won on first and second serves that are "in" (call these respectively w1 and w2). For example, if p1 = 0.6 and w1 = 0.8, it means that the player hits 60% of his or her first serves "in", and of the 60% that are "in", 80% result in the server winning the point.

In order for the p's and w's to "make sense", the overall probability of winning a point using a "first serve" action followed (if necessary) by a "second serve" action must be greater than the probability using any other combination of "first serve" and "second serve" actions (otherwise the player would be more successful using that other combination). In other words:

p1*w1 + (1 - p1)*p2*w2 > p1*w1 + (1 - p1)*p1*w1
p1*w1 + (1 - p1)*p2*w2 > p2*w2 + (1 - p2)*p1*w1
p1*w1 + (1 - p1)*p2*w2 > p2*w2 + (1 - p2)*p2*w2

I believe (correct me if I'm wrong) that this is equivalent to

p2*w2 - p1*w1 > 0
p2*w2 - p1*w1 < p2*w2*(p2 - p1)

While the interpretation of the first of these conditions is straightforward (total probability of winning point on a second serve must be greater than total probability of winning on a first serve), I cannot formulate an interpretation of the second one. Can anyone see how to describe or interpret the second condition in a way that can be more easily visualised? (talk) 17:34, 1 July 2015 (UTC)

The left-hand side is the marginal increase in the probability of winning on a second serve, compared to a first serve. The right-hand side is the probability of winning on a second serve, times the marginal increase in the probability that the second serve is in:
dW2 < Prob(W2)*dp2
I believe this constrains the largest possible increase in winning on a second serve to a function of how much more likely your second serve is to actually be in. You can also divide both sides by p2*w2 and get:
1 - (Prob(W1) / Prob(W2)) < dp2
So 1 minus the ratio of the winning probabilities on first and second serve (which is less than one, from the first constraint) is less than the marginal probability of getting your second serve in, compared to the first serve.

I don't think this quite gets you there, but hopefully it helps. OldTimeNESter (talk) 18:44, 1 July 2015 (UTC)
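The claimed equivalence can at least be sanity-checked numerically by comparing the four serve orderings directly. A brute-force sketch (the function names are mine; I sample only p1 ≤ p2, i.e. a second serve at least as likely to go in, which is implicit in the setup):

```python
import random

random.seed(1)

def win_prob(pa, wa, pb, wb):
    """P(point won) using serve action a first, then action b if the first is out."""
    return pa * wa + (1 - pa) * pb * wb

mismatches = 0
for _ in range(20000):
    p1, p2 = sorted(random.random() for _ in range(2))  # second serve in more often
    w1, w2 = random.random(), random.random()
    first_then_second = win_prob(p1, w1, p2, w2)
    # the three original inequalities: first-then-second beats the alternatives
    is_best = first_then_second > max(win_prob(p1, w1, p1, w1),
                                      win_prob(p2, w2, p1, w1),
                                      win_prob(p2, w2, p2, w2))
    # the two-condition reformulation from the question
    claimed = 0 < p2 * w2 - p1 * w1 < p2 * w2 * (p2 - p1)
    mismatches += (is_best != claimed)
```

With p1 ≤ p2 the two formulations agree on every sample, supporting the questioner's algebra; note the middle inequality (beating serve-2-then-serve-1) reduces to w1 > w2, which the second condition implies whenever p2 ≥ p1.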

July 2

July 3