
Wikipedia:Reference desk/Mathematics: Difference between revisions

From Wikipedia, the free encyclopedia
Scsbot (talk | contribs)
edited by robot: adding date header(s)


= July 11 =

== Simple Probability ==

Suppose that there have been 20 no-hitters in baseball history. What is the probability that there's been at least one on each day of the week?

The case for 7 no-hitters is simple enough, but I'm having trouble extending this to 8+ days. Help would be greatly appreciated. [[Special:Contributions/74.15.137.192|74.15.137.192]] ([[User talk:74.15.137.192|talk]]) 00:33, 11 July 2010 (UTC)
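Assuming the 20 no-hitter dates are independent and uniformly distributed over the seven days of the week (a simplifying assumption; real baseball schedules are not uniform), inclusion-exclusion over the set of "missed" days gives the answer directly. A sketch:

```python
from math import comb

def p_all_days_covered(n_events, days=7):
    # Probability that n_events independent, uniform draws from `days`
    # categories hit every category at least once (inclusion-exclusion:
    # subtract arrangements missing at least k fixed days).
    return sum((-1)**k * comb(days, k) * ((days - k) / days)**n_events
               for k in range(days + 1))

print(p_all_days_covered(7))   # sanity check: equals 7!/7^7
print(p_all_days_covered(20))
```

For 7 events this reduces to the simple case mentioned above (all days distinct, 7!/7^7); the same formula extends to any number of events.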

Revision as of 00:33, 11 July 2010

Welcome to the mathematics section
of the Wikipedia reference desk.
Want a faster answer?

Main page: Help searching Wikipedia

   

How can I get my question answered?

  • Select the section of the desk that best fits the general topic of your question (see the navigation column to the right).
  • Post your question to only one section, providing a short header that gives the topic of your question.
  • Type '~~~~' (that is, four tilde characters) at the end – this signs and dates your contribution so we know who wrote what and when.
  • Don't post personal contact information – it will be removed. Any answers will be provided here.
  • Please be as specific as possible, and include all relevant context – the usefulness of answers may depend on the context.
  • Note:
    • We don't answer (and may remove) questions that require medical diagnosis or legal advice.
    • We don't answer requests for opinions, predictions or debate.
    • We don't do your homework for you, though we'll help you past the stuck point.
    • We don't conduct original research or provide a free source of ideas, but we'll help you find information you need.



How do I answer a question?

Main page: Wikipedia:Reference desk/Guidelines

  • The best answers address the question directly, and back up facts with wikilinks and links to sources. Do not edit others' comments and do not give any medical or legal advice.


July 3

Notation for a non-derivative with prime (')

Let's say and .
What if, in certain cases, you want to identify and, therefore, , with respect to ?: . But what about ? Is there a standard/established modification for the non-derivative , maybe or ? seems awkward. ~Kaimbridge~ (talk) 18:06, 3 July 2010 (UTC)[reply]

It would just be . You just have to tell from context that it isn't a derivative. If the context isn't clear enough, then don't use primes to distinguish between functions. Use or something. --Tango (talk) 18:22, 3 July 2010 (UTC)[reply]


July 4

What, no questions today? Is everyone taking a holiday? —Preceding unsigned comment added by 122.107.192.187 (talk) 13:02, 4 July 2010 (UTC)[reply]

I've done an IP lookup on the last 4 anons who posted questions here. Two are from Australia, one from the UK and one from Germany. This suggests that a lot fewer people ascribe special significance to the 4th of July than you'd think. Also, the day is still young, of course. -- Meni Rosenfeld (talk) 14:23, 4 July 2010 (UTC)[reply]

Second Order Differential Equations

Hi. I'm currently working through a course on DEs and now have some SODEs to solve, two examples being:

subject to y(0)=y'(0)=0 and y(t), y'(t) continuous at

and

subject to y being bounded as , with being the Dirac delta function.

My problems are as follows. For each SODE I can find the complementary function and with the first one, I can find the particular integral in the cases , and but I don't know how to ensure that the solution is continuous at pi and 2pi. Then, for the second one, I haven't got a clue what to try for the particular integral. Am I supposed to do it in the same fashion as I did for the Heaviside step function and consider what happens for x=a and x≠a separately? Thanks 92.11.130.6 (talk) 15:09, 4 July 2010 (UTC)[reply]

Partial Derivatives

A quick one on PDs. If we set and subject to u(x,y)=v(s,t), I have to find and in terms of x, y, and .

Via the chain rule I get that = and = . Is this correct? I then have to find . Do I do this by finding ? It's quite messy when I try it and so I doubt I'm going about this the right way. Thanks 92.11.130.6 (talk) 19:21, 4 July 2010 (UTC)[reply]

Hint: y+ix=e−s+it. Bo Jacoby (talk) 07:04, 5 July 2010 (UTC).[reply]
Thanks, I hadn't actually spotted that and it's quite clever, but the final part of the question is about finding a partial differential equation for v, so I'm not allowed to know what the function is yet. 92.11.130.6 (talk) 10:04, 5 July 2010 (UTC)[reply]

Never mind, think I've got this one now. 92.11.130.6 (talk) 14:37, 5 July 2010 (UTC)[reply]
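For archive readers: the specific formulas in this thread did not survive, but the chain-rule computation being described, for v(s,t) = u(x(s,t), y(s,t)), has the general form

<math>\frac{\partial v}{\partial s}=\frac{\partial u}{\partial x}\frac{\partial x}{\partial s}+\frac{\partial u}{\partial y}\frac{\partial y}{\partial s},\qquad \frac{\partial v}{\partial t}=\frac{\partial u}{\partial x}\frac{\partial x}{\partial t}+\frac{\partial u}{\partial y}\frac{\partial y}{\partial t}.</math>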


July 5

Clifford Algebra Question

Is there a way to perform the map

with "standard" functions acting upon the bivector on the left-hand side, where all the capital letters denote unknown, real-valued constants? As it happens, the metric signature is . By "standard functions", I mean all the derived Clifford products (wedge with some other element, commutator, exp etc.) acting upon the bivector on the left.--Leon (talk) 09:45, 5 July 2010 (UTC)[reply]

Integration

?––Wikinv (talk) 23:24, 5 July 2010 (UTC)[reply]

Make the substitution and then use partial fraction decomposition.-Looking for Wisdom and Insight! (talk) 23:48, 5 July 2010 (UTC)[reply]

After doing the substitution, rather than immediately differentiating both sides, square both sides first so that from

And also, you get

Michael Hardy (talk) 17:58, 6 July 2010 (UTC)[reply]

Or let . Depending on the set you're integrating over, that'd turn the integral into , which is on any integration table.—msh210 18:24, 9 July 2010 (UTC)[reply]

Indeed. The integrals of 1/sin and 1/cos are the most difficult ones in first-year calculus, but they're in all the tables so no one has to do them from scratch. Michael Hardy (talk) 19:15, 10 July 2010 (UTC)[reply]
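For reference, one common form of the table entries in question is

<math>\int \frac{dx}{\sin x} = \ln\left|\tan\frac{x}{2}\right| + C, \qquad \int \frac{dx}{\cos x} = \ln\left|\sec x + \tan x\right| + C.</math>

Equivalent variants (e.g. using <math>\ln\left|\csc x - \cot x\right|</math> for the first) appear in different tables.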

July 6

6-sided dice different from the standard

So I hear there is a way to rearrange the numbers on a pair of 6-sided dice so that they are different from the standard dice but have the same probabilities and outcomes when you roll both and add the outcome.... What are the numbers on each face of the 2 dice? —Preceding unsigned comment added by KatRosenbaum (talkcontribs) 04:34, 6 July 2010 (UTC)[reply]

If you're required to use the same collection of numbers (1,1,2,2,3,3,4,4,5,5,6,6) then I think it's impossible. Getting the right probability for 2 and 12 (1/36 each) requires the 1s and 6s to be on opposite dice. Then for getting the right probability for 4 (3/36), the chance of getting a 1 and 3 combo is always 2/36 since wherever each 3 is, there's exactly one 1 on the opposite die. To get the final combination that leads to a 4 requires the 2s to be on opposite dice. Similarly the 5s must be on opposite dice. Then to get the right probability for 6 (5/36) the 3s must be on opposite dice, and similarly for the 4s.
If you can use a different collection of numbers then it would be pretty easy. An obvious way would be to put 0,1,2,3,4,5 on one die and 2,3,4,5,6,7 on the other. Rckrone (talk) 06:31, 6 July 2010 (UTC)[reply]
See Sicherman dice. --Matthew Auger (talk) 07:42, 6 July 2010 (UTC)[reply]

There was an article in the Monthly a few years ago titled Dice With Fair Sums. Try that on Google Scholar. Michael Hardy (talk) 20:47, 6 July 2010 (UTC)[reply]

...April 1988, pages 316 through 328. Michael Hardy (talk) 20:49, 6 July 2010 (UTC)[reply]
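The Sicherman pair mentioned above is easy to verify by brute force: the two relabelled dice produce exactly the same sum distribution as a standard pair. A quick check:

```python
from collections import Counter
from itertools import product

def sum_distribution(die_a, die_b):
    # Count how many of the 36 ordered rolls give each sum.
    return Counter(a + b for a, b in product(die_a, die_b))

standard = sum_distribution([1, 2, 3, 4, 5, 6], [1, 2, 3, 4, 5, 6])
sicherman = sum_distribution([1, 2, 2, 3, 3, 4], [1, 3, 4, 5, 6, 8])
print(standard == sicherman)  # True
```

Note that the Sicherman dice use a different multiset of face values, consistent with Rckrone's argument that the original multiset forces the standard labelling.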

Equating an exponential function to a linear function


Solve for y
How do you solve an equation wherein an exponential function is equated to a linear function ?––220.253.104.38 (talk) 07:10, 6 July 2010 (UTC)[reply]

There's not an elementary way to solve for y from such an equation, however in this particular case there are some obvious values of y that should jump out that you might guess and check. Rckrone (talk) 07:31, 6 July 2010 (UTC)[reply]
Ah, yes, y=0 and y=1 are solutions. Are there any others?––220.253.104.38 (talk) 07:33, 6 July 2010 (UTC)[reply]
You might want to try a bit of graph sketching. -- The Anome (talk) 07:55, 6 July 2010 (UTC)[reply]
And convex functions, also. --78.13.141.245 (talk) 07:37, 7 July 2010 (UTC)[reply]
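Taking 2^y = y + 1 as a hypothetical stand-in (the OP's exact formula was lost in this archive, but this one is consistent with the solutions y = 0 and y = 1 found above), the convexity point can be checked numerically: a strictly convex function can agree with a straight line at most twice.

```python
# Hypothetical stand-in equation: 2**y == y + 1 (consistent with the
# solutions y = 0 and y = 1 mentioned above; the original was lost).
f = lambda y: 2**y - (y + 1)
print(f(0), f(1))  # 0 0
# Strict convexity of 2**y means f has at most two zeros; sampling shows
# f < 0 strictly between the roots and f > 0 outside them.
print(all(f(y / 10) < 0 for y in range(1, 10)))  # True
print(f(-1) > 0 and f(2) > 0)                    # True
```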

Sum of exponentials


solve for x
How do you solve an equation involving the sum of multiple exponentials?––220.253.104.38 (talk) 07:54, 6 July 2010 (UTC)[reply]

This looks very similar to the problem above. This leads me to think this might be a homework assignment: please read the comments at the top of this talk page. -- The Anome (talk) 07:57, 6 July 2010 (UTC)[reply]
No, it doesn't look similar to the one above at all. It's much simpler than the one above. Michael Hardy (talk) 17:55, 6 July 2010 (UTC)[reply]
This is a different type of question from the one above and can be solved in a couple of ways. For starters try thinking of 2^x as a unit rather than x itself. Forgot to ask: first of all, have you come across complex numbers? Dmcq (talk) 08:03, 6 July 2010 (UTC)[reply]
And if you haven't come across complex numbers, or perhaps even if you have, are you certain about the signs in the equation? Dmcq (talk) 08:09, 6 July 2010 (UTC)[reply]
should be –3, not +3, oops.––220.253.104.38 (talk) 08:15, 6 July 2010 (UTC)[reply]
--220.253.104.38 (talk) 08:17, 6 July 2010 (UTC)[reply]
Actually the thinking of the previous section might be more use to you now you have that minus sign in there. Otherwise, if you want to go along this line, doing something like z = 2^x is how the chunking is normally done. Dmcq (talk) 08:28, 6 July 2010 (UTC)[reply]
Guess and check is so dodgy, I much prefer to solve it algebraically. Anyway, I've solved it now, thanks for your help.––220.253.104.38 (talk) 08:33, 6 July 2010 (UTC)[reply]

Substitution: Let <math>u = 2^x</math>.

Then

So

becomes

Multiply both sides by u:

That's an ordinary quadratic equation. Once you've solved it for u, you've got the value of 2^x. Take its base-2 logarithm to get the value of x. Michael Hardy (talk) 17:54, 6 July 2010 (UTC)[reply]
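A sketch of the substitution idea with illustrative coefficients (the OP's exact equation did not survive in this archive): any equation of the form a·4^x + b·2^x + c = 0 becomes a quadratic in u = 2^x, since 4^x = (2^x)^2.

```python
import math

# Illustrative coefficients, not the OP's exact equation:
#   4**x - 3*2**x + 2 == 0   becomes   u**2 - 3*u + 2 == 0  with u = 2**x.
a, b, c = 1, -3, 2
disc = b * b - 4 * a * c
roots = [(-b + s * math.sqrt(disc)) / (2 * a) for s in (1, -1)]
# Only positive values of u correspond to a real x = log2(u).
xs = [math.log2(u) for u in roots if u > 0]
print(sorted(xs))  # [0.0, 1.0]
```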

Connections and Differential Forms

Let be an -dimensional, real differentiable manifold and let be a smooth connection on . So, for a pair of vector fields on , say and , we have a new vector field . Now let be a differential n-form on . I've never quite understood how to understand the expression . I've seen expressions like

And this makes sense from a formal point of view. But thinking about it; it doesn't make any real sense to me. For example; is a differential form, and not a vector field. But is a connection, i.e. it takes a pair of vector fields and gives another vector field. (It's a type of covariant derivative.) Out of the last displayed line, the only term I don't understand is . How can I make sense of it? •• Fly by Night (talk) 20:26, 6 July 2010 (UTC)[reply]

Using the moment generating function to find E(y)

The p.d.f. is . When I find the moment generating functions, I set the two cases where one is when y is positive and the other where y is negative. When y is negative, and t < 1 for the integral to converge. When y is positive, and t > 1 for the integral to converge. Now, since E(y) is found by setting t = 0, I use the first m.g.f. to calculate the E(y) since that is when t < 1. I get the answer of -1/2 for E(Y).

However, when I try to integrate y * p.d.f. over the ranges from negative infinity to infinity, I get 0. I'm wondering where did I go wrong here. 142.244.151.247 (talk) 23:17, 6 July 2010 (UTC)[reply]

The moment generating function is an integral over y, not a function of y. There are not two cases. Bo Jacoby (talk) 06:14, 7 July 2010 (UTC).[reply]
The integral of ½e^{|y|} from −∞ to ∞ is infinite, whereas the integral of a pdf over its support should be 1. Qwfp (talk) 07:37, 7 July 2010 (UTC)[reply]
The OP probably meant to say ½e^{−|y|}. Bo Jacoby (talk) 07:55, 7 July 2010 (UTC).[reply]
Yeah, I meant to say , sorry about that. What do you exactly mean by integral over y, not function of y? 142.244.148.125 (talk) 14:24, 7 July 2010 (UTC)[reply]
In that case your mgf is wrong, and once you have the right one you need to differentiate it with respect to t before setting t to zero to find E(y), as 'moment generating function' explains. Qwfp (talk) 14:49, 7 July 2010 (UTC)[reply]
I know the mgf is wrong, but I'm wondering shouldn't there be two cases for mgf when y is positive and when y is negative here, since there's an absolute value involved here. 142.244.148.125 (talk) 14:58, 7 July 2010 (UTC)[reply]
The mgf m(t) is for any given t defined by which does not depend on y (that's a bound variable), only on t, so there cannot be any cases depending on whether y is positive or negative. When you evaluate the integral properly, you will actually end up with three cases for t ≤ −1, −1 < t < 1, and t ≥ 1.—Emil J. 14:59, 7 July 2010 (UTC)[reply]
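To spell out EmilJ's point for the (corrected) density ½e^{−|y|}: the two cases live inside the integral, not in the mgf itself. Splitting the integral at y = 0,

<math>M(t) = \int_{-\infty}^{\infty} e^{ty}\,\tfrac12 e^{-|y|}\,dy = \tfrac12\int_{-\infty}^{0} e^{(t+1)y}\,dy + \tfrac12\int_{0}^{\infty} e^{(t-1)y}\,dy = \frac{1}{2(1+t)} + \frac{1}{2(1-t)} = \frac{1}{1-t^2}, \qquad |t| < 1.</math>

Then <math>M'(t) = \frac{2t}{(1-t^2)^2}</math>, so <math>\operatorname{E}(Y) = M'(0) = 0</math>, agreeing with the direct integration of y times the pdf (which is 0 by symmetry).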

July 7

Equation to solve

? How does one solve?



apparently this is wrong ––203.22.23.9 (talk) 03:31, 7 July 2010 (UTC)[reply]

y is not a constant and so is not equal to . Bo Jacoby (talk) 06:01, 7 July 2010 (UTC).[reply]
Then how is this type of problem solved? I've never encountered one like it before.--203.22.23.9 —Preceding unsigned comment added by 220.253.104.38 (talk) 06:16, 7 July 2010 (UTC)[reply]
Substitute
This solution is of the form y=f(x). It is more general to state that
is constant. Bo Jacoby (talk) 06:38, 7 July 2010 (UTC).[reply]

Rank and stuff

Some notes I was reading gave the following two theorems without proof: "Suppose that the matrix U is in row-echelon form. The row vectors containing leading 1's (so the non-zero row vectors) will form a basis for the row space of U. The column vectors that contain the leading 1's from the row vectors will form a basis for the column space of U." "Suppose that A and B are two row equivalent matrices (so we got from one to the other by row operations) then a set of column vectors from A will be a basis for the column space of A if and only if the corresponding columns from B will form a basis for the column space of B." Would someone be so kind as to provide me with a simple proof of these, if they exist? Much thanks. 74.15.137.192 (talk) 06:02, 7 July 2010 (UTC)[reply]

  • The first thm, rows: they obviously span the space, it thus suffices to show independence. Consider a nontrivial linear combination of the rows. If i is the top-most row which appears with a nonzero coefficient in the combination, and j is the column containing the leading 1 from row i, then only row i will contribute to the jth entry of the linear combination, making it nonzero.
  • The first thm, columns: the selected columns are independent by much the same argument as above. Since their number is the number of nonzero rows of the matrix, which is an upper bound on the dimension of the column space, they also have to span the whole space.
  • As for the second theorem, the obvious strategy is to prove it by induction on the number of operations, hence it suffices to show it for the case where B is obtained by a single row operation from A. That should be straightforward enough, but I didn't think about the details.—Emil J. 11:46, 7 July 2010 (UTC)[reply]
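The key fact behind the second theorem (row operations preserve linear dependence relations among corresponding columns) can be seen concretely with a small example; the reduction routine and the matrix below are illustrative, not part of the OP's notes.

```python
from fractions import Fraction

def rref(matrix):
    """Reduce a matrix (a list of rows) to reduced row-echelon form."""
    m = [[Fraction(x) for x in row] for row in matrix]
    rows, cols = len(m), len(m[0])
    pivot_row = 0
    for col in range(cols):
        # Find a row at or below pivot_row with a nonzero entry here.
        pr = next((r for r in range(pivot_row, rows) if m[r][col] != 0), None)
        if pr is None:
            continue
        m[pivot_row], m[pr] = m[pr], m[pivot_row]
        pivot = m[pivot_row][col]
        m[pivot_row] = [x / pivot for x in m[pivot_row]]
        for r in range(rows):
            if r != pivot_row and m[r][col] != 0:
                factor = m[r][col]
                m[r] = [a - factor * b for a, b in zip(m[r], m[pivot_row])]
        pivot_row += 1
    return m

A = [[1, 2, 3], [2, 4, 7], [1, 2, 4]]
B = rref(A)
# In A, column 1 equals 2 * column 0; the same relation holds in rref(A),
# illustrating that row operations preserve dependence relations among
# corresponding columns.
print([row[1] == 2 * row[0] for row in B])  # [True, True, True]
```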

Quine's NF Set theory

Does the set exist, for every relation R expressible by a stratifiable formula whose free variables are x and y? Eliko (talk) 16:30, 7 July 2010 (UTC)[reply]

Yes, you can define it as , and the latter formula is stratifiable (x, y, u can be assigned the same type, one less than the type of z).—Emil J. 16:59, 7 July 2010 (UTC)[reply]
Now I realized that you didn't specify whether R is stratifiable in such a way that x and y receive the same type, which I used in the above argument. If this assumption is violated, then your set won't exist in general: for example, does not exist, since otherwise we could construct the paradoxical Russell set by using (this can be easily written as a stratified formula).—Emil J. 17:08, 7 July 2010 (UTC)[reply]
Oh, yes, I've really meant to ask about x and y having the same type. Thanks. Eliko (talk) 07:26, 8 July 2010 (UTC)[reply]

Calculating asymmetric confidence intervals for truncated rating scale data

Hi: (For reference, this question is related to calculating confidence intervals for data found at www.boardgamegeek.com and is almost completely for curiosity's sake. It's not to cure cancer or anything really important, so consider that in how much time you take to answer me.  :-) ).

Situation: People rate games on a Likert-like scale from 1 to 10 (1 being hate it, 10 being love it). They can assign fractional ratings (e.g. 7.5, 3.225), so the data are technically continuous, although few people will assign ratings other than whole numbers. However, while continuous, the data are clearly bounded; 10 is the highest value and 1 is the lowest.

I do not have access to these data directly, however. What I do have access to is:

  • The mean of the ratings (calculated as if the data were continuous and non-bounded).
  • The standard deviation of the ratings (again, calculated as if the data were continuous and non-bounded).
  • The number of raters

Is it possible to calculate, from this information, an asymmetric 95% confidence interval for the mean of the ratings? That is, an interval that will not exceed 10 or go lower than 1? I suspect that this might involve recourse to the truncated normal distribution in some way, but I'm not sure how. It might also involve a transformation of the mean (e.g. logit?) but again, I'm not clear if that would be appropriate. Just to make clear, what I want is a confidence interval that is bounded by 1 and 10; I know how to calculate the confidence interval from the data I have without that restriction.

Thanks. EDIT: Sorry, didn't sign it. Skalchemist (talk) 17:47, 7 July 2010 (UTC)[reply]

Can you give an example of the data? My initial suspicion is that the mean will be sufficiently removed from the boundaries, and the standard error sufficiently small, that the symmetric confidence interval will already be within the bounds. -- Meni Rosenfeld (talk) 05:13, 8 July 2010 (UTC)[reply]
I think your idea of using a logit transform should work. Use the delta method to work out the standard error of logit(mean/10), calculate a symmetric confidence interval on the logit scale under the assumption of a normal distribution, back-transform the interval to give an asymmetric confidence interval on your original scale. Qwfp (talk) 11:22, 8 July 2010 (UTC)[reply]
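A sketch of that suggestion in code. The mapping p = (mean − 1)/9 (so that the 1–10 scale lands in (0, 1) rather than using mean/10 directly), the 1.96 normal quantile, and the example inputs are all assumptions of this sketch, not prescribed by anyone above:

```python
import math

def bounded_ci(mean, sd, n, lo=1.0, hi=10.0, z=1.96):
    # Delta-method confidence interval on the logit scale, then
    # back-transformed so the interval stays inside (lo, hi).
    p = (mean - lo) / (hi - lo)
    theta = math.log(p / (1 - p))                   # logit(p)
    se_mean = sd / math.sqrt(n)                     # naive SE of the mean
    se_theta = se_mean / ((hi - lo) * p * (1 - p))  # delta method
    lo_t, hi_t = theta - z * se_theta, theta + z * se_theta
    inv = lambda t: lo + (hi - lo) / (1 + math.exp(-t))  # back-transform
    return inv(lo_t), inv(hi_t)

# Hypothetical game: mean rating 7.2, sd 1.5, 50 raters.
print(bounded_ci(mean=7.2, sd=1.5, n=50))
```

The back-transformed interval is asymmetric about the mean and can never leave (1, 10), which is the property asked for.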

LIFO v. FIFO

You have an incoming unending stream of tasks to do. The amount of time it takes to do each task varies. Which tactic would result in the least average delay in the completing of a task - last in first out, or first in first out? Is there a tactic that has less average delay than either of these? Not a homework question. Thanks. 92.15.27.146 (talk) 20:11, 7 July 2010 (UTC)[reply]

How do you define the average delay? Based only on completed tasks? Or also taking into account the backlog? Do you know ahead of time how long the task will take? How does your work output compare to volume of incoming tasks, and what is the nature of incoming tasks -- uniform or random, with what distribution? And what is the distribution of time required for tasks? Some assumptions should be made, at least. 198.161.238.19 (talk) 21:32, 7 July 2010 (UTC)[reply]
That said I think the answer is to pick the easiest task first at any given time. 198.161.238.19 (talk) 21:57, 7 July 2010 (UTC)[reply]
Scheduling (computing) discusses all this Dmcq (talk) 23:35, 7 July 2010 (UTC)[reply]
Maybe someone could translate that into the best practice for a human office worker. 92.29.125.22 (talk) 08:37, 8 July 2010 (UTC)[reply]

"How do you define the average delay? Based only on completed tasks?" Yes. "Or also taking into account the backlog?" Every task will be eventually completed. "Do you know ahead of time how long the task will take?" Yes can be assumed. "How does your work output compare to volume of incoming tasks" In the long-run it should be the same. "What is the nature of incoming tasks -- uniform or random, with what distribution?" You have a big in-tray that always has things in it. "And what is the distribution of time required for tasks?" Random, gaussian, or random, exponential.

The scenario is that you are a busy office administrator with a pile of material in your in-tray to work through. The in-tray keeps getting more material put on it. Thanks 92.29.125.22 (talk) 08:33, 8 July 2010 (UTC)[reply]

If you want minimum average delay of completed tasks just complete one small task with no delay and then go on holiday. Then your average delay of completed tasks is zero. Your manager should not request minimum average delay of completed tasks. Bo Jacoby (talk) 09:29, 8 July 2010 (UTC).[reply]
Indeed. If you want to minimize average delay, do the quickest tasks first. This may, however, delay hard tasks arbitrarily long (depending on the distribution of incoming tasks). Also, if you talk about an actual human doing the work, it's probably easier to keep up with strict LIFO. On the third hand, what works well for me is interest-based. Just pick the most interesting tasks, and put the rest in a stack. Occasionally that stack will grow so high that it falls over. Whatever lands on the floor goes into recycling. If it would have been important, someone would have reminded me about it earlier ;-). --Stephan Schulz (talk) 09:38, 8 July 2010 (UTC)[reply]
Agree with Bo Jacoby and Stephan Schulz. If you know the duration of each task before you start it, and if your goal is to minimise the mean delay per task (with each task given equal weight), and if you must always start a new task as soon as you finish the previous one, then your best theoretical strategy is to always do the shortest task in your in-tray first - the one that will take the least time.
To see why, imagine you put your in-tray tasks in a possible sequence of execution, and calculate the total delay for all tasks if executed in this sequence. Then imagine taking two tasks A and B that are executed one after the other - A then B - in this initial sequence, and swapping them over to make a new sequence in which you do B then A. Comparing the new sequence to the old one, you have increased the delay of task A by the duration of task B - but you have also decreased the delay of task B by the duration of task A. The delays for all other tasks apart from A and B are the same in the new sequence as in the old. So the new sequence will have a lower total delay than the old sequence if and only if the duration of task B is less than the duration of task A. Therefore a sequence with minimum total delay is one in which tasks are executed in order of increasing duration.
If you are allowed to wait between tasks, then there are circumstances in which it is more favourable to wait for a new task to arrive, rather than starting the shortest task in your in-tray. Specifically, if the expected time until the arrival of the next task is E(t), the expected duration of the next task is E(d), you have n tasks in your in-tray and the duration of the shortest task is a, then you should wait for the next task if
For example, if a 5-minute task arrives in your empty in-tray but you expect a 2-minute task to arrive in 1 minute, then you should wait for the 2-minute task to arrive, and then execute it first. By doing this, you have a total delay over both tasks of 3 minutes, whereas if you start the 5-minute task immediately, the total delay is 4 minutes. Gandalf61 (talk) 09:49, 8 July 2010 (UTC)[reply]
If the tasks are unending it indicates that there is more work than time to do them in and the average time to complete a task will tend to infinity. The first thing that should be done in such a situation with any incoming task is triage. Does it have to be done immediately? Is it a task which needs doing but maybe not just this instant? Or is it one which is a 'would like'? Normally the first two won't overload - if they do then more people are needed. For the 'would like' you may be able to do some but you should make it clear for the ones that obviously are too low priority or have started slipping down your list that they will not be done. If someone in authority complains point out the jobs and their priorities and time to complete according to you and let them figure out the schedule so they are completed. Dmcq (talk) 10:09, 8 July 2010 (UTC)[reply]
"If the tasks are unending it indicates that there is more work than time to do them in" - not if you have an average of forty hours work a week to do in 45 hours, for example. 92.29.125.22 (talk) 11:17, 8 July 2010 (UTC)[reply]
If you do 40 hours work in 45 hours then you have 5 free. The tasks will have ended for those 5 hours. This is about right, there should be leeway or the system is overloaded. In a computer you would aim to load it no more than 80%, i.e. not to work more than 32 hours on average in a 40 hour week. Dmcq (talk) 11:49, 8 July 2010 (UTC)[reply]

A problem is that if you have a very important task that requires a month of work, then it's never going to get done under the "do the quickest thing first" tactic. 92.29.125.22 (talk) 11:20, 8 July 2010 (UTC)[reply]

You mean you go back to the beginning of the long task after getting interrupted? That would mean it took a very long time! However if there is more time available than work it will eventually be completed if the work isn't lost like that. Personally I count any quick interrupt as 15 minutes and changing tasks and then coming back again, yes that is extremely expensive. 11:49, 8 July 2010 (UTC)
So some tasks are more important than others ? I don't think you mentioned that before. Are there any other details you want to share ? Do some tasks maybe have deadlines ? Can work on a low priority task be interrupted by a higher priority task ? You will get better responses if you give the whole picture instead of drip-feeding key information. Gandalf61 (talk) 11:48, 8 July 2010 (UTC)[reply]
Have you never done office-type work? 92.28.250.159 (talk) 12:54, 8 July 2010 (UTC)[reply]
Your original question never mentioned office work. It could have been office work, a computer, a factory, a mail order dept, a harried mother, whatever. Even now you've not mentioned if it is from the point of view of a manager, a typist, a project planner or any of the others that might work in an office. Anyway, for office work the relevant article is Time management. Dmcq (talk) 13:24, 8 July 2010 (UTC)[reply]
The OP did later say "The scenario is that you are a busy office administrator". -- Meni Rosenfeld (talk) 13:56, 8 July 2010 (UTC)[reply]
Reply to 92.28.250.159 - is that a rhetorical question ? As it happens, yes I have. Not sure how that helps you though. Your question has just gone right to the bottom of my personal in-tray, filed under "didn't ask nicely". Gandalf61 (talk) 13:29, 8 July 2010 (UTC)[reply]
For real office work, the criteria for performance are considerably more complicated than "least average delay". Finding the optimal solution requires knowing very particular details, and the solution can be very complicated. It's usually not worth it to try to construct a mathematical model for it (of course, you can gain some insight by modeling spherical cows). If what you really want to know is "how to be a productive office worker", this is way beyond the scope of a RefDesk question - there are entire books devoted to it (I can recommend Getting Things Done, though it focuses on issues other than you've asked here). -- Meni Rosenfeld (talk) 13:56, 8 July 2010 (UTC)[reply]
I'm interested in it from an operations research point of view. I've read GTD, and like most money-spinning self-help books it was a page of insight padded out to fill a book. You can see all you need to know in this diagram here http://www.mindz.com/plazas/Getting_Things_Done_%28GTD%29/blogs/GTD_Weblog/2008/10/4/GTD_Workfkow and other diagrams found by searching for "GTD" in Google images without having to read the book. 92.24.188.89 (talk) 18:10, 8 July 2010 (UTC)[reply]
Then I've got to agree you've got to make your question much more specific to get any sensible answer here. It all depends on what you're doing. Looking at the references here, one other area you might find useful is Queuing theory, which will tell you why shops try and have a shared queue to a number of tills. Also I'd warn that some managers get hung up on technical fixes and mechanisms whereas keeping the environment quiet and giving people a good bit of space can be more important, you need to be very careful indeed to not interrupt people more than necessary, the ping of an instant messenger is the ping of lost money unless they absolutely must respond immediately. It is almost always better to schedule looking at new messages. Dmcq (talk) 21:56, 8 July 2010 (UTC)[reply]
I like the 'considerably more complicated'. I once had to go around training managers in project planning using a package where originally we'd got them out for a day to trial a number with a few examples and see which one they liked the best. Then we were told by the head company that they had now themselves decided on a package, and of course it was the one that scored lowest in our trials. I wish we'd never even bothered asking our managers in the first place. Dmcq (talk) 14:26, 8 July 2010 (UTC)[reply]
Oh, c'mon, you don't need to make it more specific to get an answer. All things unspecified are assumed unknown. We have a set of processes that need to complete, each requiring a certain amount of time Xi from our linear processor. They come into the processing array at a certain time Ti from the beginning. We can define an average processing time for n processes as related by the probability P(s) that total processing time since entering the array is s for a given process, such that the average processing time is the integral of sP(s)ds.
That said, I won't use this notation anymore, because I already have the answer: once the array starts bunching up and having delays, a FIFO approach has smaller average processing time. Think about it this way: in LIFO, if the first process we start running doesn't complete before the next 50 come in, then the processing time for that first process is going to be enormous. In FIFO, that processing time is small, but it adds a constant processing time to all the processes following in waiting for that first process, thus the buildup proceeds linearly. The point is that LIFO is heavily skewed while FIFO is less so, even though the total time to complete all processes is the same. The result is an average processing time closer to the ideal (no processes overlapping) for FIFO. SamuelRiv (talk) 08:59, 9 July 2010 (UTC)[reply]
I believe it doesn't matter in the least whether LIFO or FIFO is used, the best you can do to cut down the average waiting time per task is to do the quick ones first. This includes where you can interrupt a task and the interrupting takes zero time as in the reasoning above. LIFO is unlikely to keep customers happy though and often one has priorities. You'll be working the same amount of time overall unless your scheme aggravates the customers, in which case you'll have less work, so on that basis LIFO may cut down average waiting time :) Little's law is I believe the appropriate rule. If you don't sort the tasks by length you're going to have the same average number of tasks hanging around whether LIFO or FIFO is used. Dmcq (talk) 09:47, 9 July 2010 (UTC)[reply]
I've just been thinking about interrupting and I think a bit more thought is required. I think what I said is right for straightforward LIFO with preemption but I've been known to be wrong before. It certainly wouldn't be right if you do round-robin scheduling spending a little time on each task, or just switching back as each person with a task complains, which seems to be what lawyers do. On the other hand preempting to do a short task can certainly cut down the average time per task. Dmcq (talk) 10:04, 9 July 2010 (UTC)[reply]
I was wrong, LIFO with preempting will normally have a longer average time as SamuelRiv says though without preempting it will be the same as FIFO. The best way to cut the time if you start preempting is to look at the expected remaining times of the tasks and do the one with the shortest remaining time first. Dmcq (talk) 15:08, 9 July 2010 (UTC)[reply]
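Dmcq's "do the quick ones first" point is easy to check numerically. The sketch below (not from the thread; the task lengths are made up) computes the average completion time of a batch of tasks that all arrive at once and run without preemption, and confirms that the shortest-job-first order is optimal among all orders:

```python
from itertools import permutations

def avg_completion(lengths):
    """Average completion time when a unit-speed server runs the tasks
    in the given order, all arriving at t=0, with no preemption."""
    t, total = 0, 0
    for length in lengths:
        t += length          # this task finishes at time t
        total += t
    return total / len(lengths)

tasks = [5, 1, 3, 2, 4]
sjf = avg_completion(sorted(tasks))                      # shortest job first
best = min(avg_completion(p) for p in permutations(tasks))
assert sjf == best           # SJF minimises the average completion time
```

Note that the total busy time is the same in every order, as Dmcq says; only the average per-task time changes.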

Poisson distribution

Flaws on a used computer tape have a Poisson distribution with an average of 1 flaw per 1200 feet. Find the probability of (a) exactly 4 flaws in 4800 feet, and (b) at most 2 flaws in 3000 feet.

I have my teacher's answer but I am getting a different one. Actually, she only has one answer and both answers I got differ from her one answer.

(a) My reasoning is if 1200 feet is a Poisson(1), then I can consider 4800 feet as the sum of 4 independent Poisson(1)s which would be a Poisson(4). Then P(X = 4) = e^(−4)·4^4/4!. Does this make sense?

(b) This is a bit different. Can I assume that this is a Poisson(2.5)? I am not sure. I did that and got an answer but I am not sure if that makes sense here. By looking at MGFs, I know that 1/2 a Poisson(1) is not a Poisson(1/2) so it doesn't make sense to think of this as the sum of 2 independent Poisson(1)s and a Poisson(1/2).

Any help would be appreciated. Thanks StatisticsMan (talk) 23:30, 7 July 2010 (UTC)[reply]

Did the original question use the precise words "Poisson distribution"? The correct way to refer to what it meant is Poisson process. A Poisson process is described by a single rate parameter λ, and the number of events in an interval of length t follows a Poisson distribution with mean λt, even for fractional t.
Think of it this way: Is the distribution in the first 600 feet the same as in the next 600 feet? Are they independent? If so, the number of errors in the first 1200 feet, which is distributed Poisson(1), is also the sum of two independent copies of some distribution. This distribution is Poisson(1/2). -- Meni Rosenfeld (talk) 05:11, 8 July 2010 (UTC)[reply]
L feet of tape has on average λ = L/1200 flaws. In case (a) λ = 4800/1200 = 4 and in case (b) λ = 3000/1200 = 2.5. Case (a): The probability that X=4 is e^(−λ)λ^4/4!. (b): 'At most 2 flaws' means that X=0 or X=1 or X=2, and the probability for that to happen is e^(−λ)(λ^0/0! + λ^1/1! + λ^2/2!). Bo Jacoby (talk) 07:20, 8 July 2010 (UTC).[reply]
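For what it's worth, both numbers are quick to evaluate from Bo Jacoby's formulas (a minimal sketch; the helper name is mine):

```python
from math import exp, factorial

def poisson_pmf(k, lam):
    """P(X = k) for X ~ Poisson(lam)."""
    return exp(-lam) * lam**k / factorial(k)

p_a = poisson_pmf(4, 4800 / 1200)                         # (a) lambda = 4
p_b = sum(poisson_pmf(k, 3000 / 1200) for k in range(3))  # (b) lambda = 2.5
```

which gives p_a ≈ 0.195 and p_b ≈ 0.544.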

StatisticsMan's reasoning is correct, except where he says "it doesn't make sense to think of this as the sum of 2 independent Poisson(1)s and a Poisson(1/2)". In fact it does make sense to view it that way. It is correct that if X ~ Poisson(1) then X/2 is NOT distributed as Poisson(1/2), but that's not relevant here. The number of flaws in the first 600 feet is NOT half the number of flaws in the first 1200 feet (but the EXPECTED number of flaws in the first 600 feet is indeed half of the EXPECTED number of flaws in the first 1200 feet). Michael Hardy (talk) 20:03, 8 July 2010 (UTC)[reply]

Okay, thanks all. This makes more sense. We actually barely talked about Poisson processes in this class, though I have had a lot more experience with them before and I had forgotten this basic idea. StatisticsMan (talk) 00:43, 9 July 2010 (UTC)[reply]

July 8

basic probability

I have two normally-distributed random variables X and Y. X has mean 100 and s.d. 10. Y has mean 105 and s.d. 20. Is there a clean way to find the Prob[Y < X]? I just see a real messy integral. Thanks. 71.141.88.179 (talk) 01:28, 8 July 2010 (UTC)[reply]

I assume they're independent, otherwise, I don't know. But, an easy way to do it is to consider the random variable Y - X. Perhaps you have seen that a linear combination of independent normals is normal. So, Y - X would be normal and the mean would be the mean of Y minus the mean of X and the variance would be Variance Y + Variance X. Then, you just need to do one integral from -infinity to 0 of that one random variable. StatisticsMan (talk) 01:43, 8 July 2010 (UTC)[reply]
Oh yes, of course. Then I just use an approximation to see where the cdf crosses zero. I just got discouraged when I thought of blasting it out directly. Thanks. 71.141.88.179 (talk) 03:23, 8 July 2010 (UTC)[reply]
No. You need to subtract the variables, which is completely different from subtracting the cdfs. You get a new random variable which is distributed normally with mean 5 and s.d. √(10² + 20²) = √500 ≈ 22.36. So you need to use the standard techniques to find the probability that a normal variable is less than some value.
Again, all of this is under the unstated assumption that X and Y are independent. -- Meni Rosenfeld (talk) 04:58, 8 July 2010 (UTC)[reply]
Yes, independent variables. Yes I understood about subtracting the variables. Thanks. 71.141.88.179 (talk) 06:17, 8 July 2010 (UTC)[reply]
I like the notation (105±20)−(100±10) = (105−100)±√(20²+10²) = 5±22.4. This is < 0 when (5±22.4)/22.4 < 0 or ±1 < −0.224. Here ±1 has mean 0 and s.d. 1. Bo Jacoby (talk) 07:51, 8 July 2010 (UTC).[reply]
Ah, I get it now. I thought you meant something else, sorry. -- Meni Rosenfeld (talk) 08:09, 8 July 2010 (UTC)[reply]
If you misunderstood I might improve my communication? Bo Jacoby (talk) 09:37, 8 July 2010 (UTC).[reply]
I was responding to the OP, as can be understood from my level of Indentation. -- Meni Rosenfeld (talk) 13:37, 8 July 2010 (UTC)[reply]

No one else has tried being explicit yet, so I will. If X and Y are independent then

X − Y ~ N(100 − 105, 10² + 20²) = N(−5, 500).

So the mean of X − Y is −5 and the SD is √500 = 10√5 ≈ 22.36. So you want the probability that that random variable is less than 0. Michael Hardy (talk) 19:57, 8 July 2010 (UTC)[reply]

....oh. I see that Meni Rosenfeld said that already. Michael Hardy (talk) 19:58, 8 July 2010 (UTC)[reply]
And, I, as the first responder, said that, with the exception that I didn't fill in the numbers, which is pretty trivial. StatisticsMan (talk) 00:42, 9 July 2010 (UTC)[reply]
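Numerically (a sketch using only the standard library; the function name is mine), the answer comes out around 0.41:

```python
from math import erf, sqrt

def normal_cdf(x, mu=0.0, sigma=1.0):
    """P(N(mu, sigma^2) <= x), via the error function."""
    return 0.5 * (1.0 + erf((x - mu) / (sigma * sqrt(2.0))))

# Y - X ~ N(105 - 100, 20^2 + 10^2) = N(5, 500) for independent X, Y
p = normal_cdf(0.0, mu=5.0, sigma=sqrt(500.0))   # P(Y < X) = P(Y - X < 0)
```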

Kernel and Range of an Operator

So an exercise regarding operators on a Banach space says to find the kernel and range of the linear operator K defined by

(Kf)(x) = ∫ sin(π(x − y)) f(y) dy.

My problem is that the solution the teacher wrote up says to just split the integral (using the sine subtraction formula) to

(Kf)(x) = sin(πx) ∫ cos(πy) f(y) dy − cos(πx) ∫ sin(πy) f(y) dy.
I have no problem with this. This makes sense and I agree. Then he says that looking at this you know that the range is the span (linear combination) of sin(πx) and cos(πx). This is also good. But then he says that the kernel contains all functions f which satisfy

∫ cos(πy) f(y) dy = 0

and

∫ sin(πy) f(y) dy = 0
simultaneously. I have two issues with this. First, how do we know that this is all that is in the kernel? What if we have a function where

sin(πx) ∫ cos(πy) f(y) dy − cos(πx) ∫ sin(πy) f(y) dy = 0 for all x, without each integral being zero,
and second, can we do a little better and figure out a better description of such functions. To me this seems like, "all functions with Kf=0 are functions in the kernel of K". Well duh, doesn't seem very enlightening to me. Thanks!-Looking for Wisdom and Insight! (talk) 05:25, 8 July 2010 (UTC)[reply]

The kernel of K is functions of x that are mapped to the zero function of x. K(f)(x) = a sin(πx) + b cos(πx) where a and b are constants. Notice that a and b don't depend on x. For K(f)(x) to be identically zero for all x, we need both a=0 and b=0 since sin(πx) and cos(πx) are linearly independent. Rckrone (talk) 07:54, 8 July 2010 (UTC)[reply]

Quoting:

What if we have a function where sin(πx) ∫ cos(πy) f(y) dy − cos(πx) ∫ sin(πy) f(y) dy = 0 for all x?

That doesn't happen unless both integrals are zero. The two integrals DO NOT depend on x, and the identity needs to be construed as holding for ALL values of x. Michael Hardy (talk) 19:19, 10 July 2010 (UTC)[reply]

Non-Hamiltonian k-critical graph

Does anyone know of an example of a k-critical graph (that is, a graph with chromatic number k such that all proper subgraphs have chromatic number less than k) that is not Hamiltonian? I found a reference in Bollobás (Extremal Graph Theory, Theorem 2.1) to a theorem of Dirac: "If G is k-critical then either G is Hamiltonian or its circumference is at least 2k − 2." Reading between the lines, this would seem to imply that some non-Hamiltonian k-critical graphs are known; otherwise I would expect to see that theorem followed by at least a conjecture that all k-critical graphs are Hamiltonian. —Bkell (talk) 07:21, 8 July 2010 (UTC)[reply]

Rational numbers with integer sums

Let (x,y,z) be an ordered triple of positive rational numbers such that x + 1/y, y + 1/z, and z + 1/x are all integers. Find all possible ordered triples (x,y,z).

The only ones I've been able to find are (1,1,1), (1/2, 2/3, 3), and (1/3, 3/2, 2) (as well as their cyclic permutations), but I haven't been able to prove that there are no more. --138.110.206.101 (talk) 12:22, 8 July 2010 (UTC)[reply]

There is also (1, 1/2, 2) and its cyclic permutations, but then there are no more. Is this homework? Here is an outline. Write x = a/u, y = b/v, z = c/w, with gcd(a,u) = gcd(b,v) = gcd(c,w) = 1. Since x + 1/y is an integer, ub divides ab + uv, and using coprimeness of u and b, this gives easily u = b. Symmetrically, v = c and w = a, i.e., x = a/b, y = b/c, z = c/a, and a, b, c are pairwise coprime. The assumption on x + 1/y etc. being integers means that a + b + c is divisible by lcm(a,b,c). Assuming WLOG c ≥ a, b, we have a + b + c < 3c (unless a = b = c = 1 = x = y = z), thus lcm(a,b,c) is c or 2c, which implies that a and b divide 2. This, together with c | a + b, gives finitely many choices which can be checked to give the triples written above.—Emil J. 13:04, 8 July 2010 (UTC)[reply]

(ec). Let a=x+1/y, b=y+1/z, and c=z+1/x. Solve for x.

x = a − 1/y,  y = b − 1/z,  z = c − 1/x.
x = a − 1/(b − 1/(c − 1/x))
x(b − 1/(c − 1/x)) = a(b − 1/(c − 1/x)) − 1
bx − x/(c − 1/x) = ab − a/(c − 1/x) − 1
bx(c − 1/x) − x = ab(c − 1/x) − a − (c − 1/x)
bcx − b − x = abc − ab/x − a − c + 1/x
bcx² − bx − x² = abcx − ab − ax − cx + 1
(bc − 1)x² + (a + c − b − abc)x + ab − 1 = 0

y,z also satisfy quadratic equations.

(ca − 1)y² + (b + a − c − abc)y + bc − 1 = 0
(ab − 1)z² + (c + b − a − abc)z + ca − 1 = 0

Then the problem is reduced to finding integers a, b, c such that the quadratic equations have rational solutions. Bo Jacoby (talk) 13:55, 8 July 2010 (UTC).[reply]
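EmilJ's bound (a, b ∈ {1, 2} and c | a + b) keeps all numerators and denominators small, so a brute-force search over small fractions can confirm the list. A sketch (the search bound of 6 is mine, chosen generously to cover that bound):

```python
from fractions import Fraction
from itertools import product

# All rationals p/q with 1 <= p, q <= 6 (duplicates collapse in the set)
cands = {Fraction(p, q) for p in range(1, 7) for q in range(1, 7)}

def is_int(f):
    return f.denominator == 1

solutions = sorted((x, y, z)
                   for x, y, z in product(cands, repeat=3)
                   if is_int(x + 1/y) and is_int(y + 1/z) and is_int(z + 1/x))
```

This finds exactly ten triples: (1,1,1) plus the cyclic permutations of (1, 1/2, 2), (1/2, 2/3, 3) and (1/3, 3/2, 2).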

probability

We have a scratch-off card with 25 squares, and 5 have pots of gold underneath. If you scratch off exactly 5 squares revealing 5 pots of gold, you win. What is the probability of winning? —Preceding unsigned comment added by 129.120.185.7 (talk) 15:35, 8 July 2010 (UTC)[reply]

Well, the chance of hitting a pot of gold on the first scratch is 5/25. If you do that successfully, then the chance of hitting a pot of gold on the second scratch is 4/24 since there are 4 pots left under 24 possible squares. So the chance of hitting on the first 2 is (5/25)*(4/24). And then continue that way down the line for 3, 4 and 5. Rckrone (talk) 16:04, 8 July 2010 (UTC)[reply]
See binomial coefficient. The probability of winning is 1/(25 choose 5) = 1/53130 ≈ 0.0000188218. Bo Jacoby (talk) 16:39, 8 July 2010 (UTC).[reply]
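The two answers agree, which is easy to verify (a trivial check; `math.comb` needs Python 3.8+):

```python
from math import comb

p_binomial = 1 / comb(25, 5)        # one winning 5-subset out of C(25,5)

p_sequential = 1.0                  # Rckrone's step-by-step product
for i in range(5):
    p_sequential *= (5 - i) / (25 - i)

assert abs(p_binomial - p_sequential) < 1e-18
```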

Probabilistic proof of the squared triangular number identity.

Hi all. I'm wondering how to put as simple as possible the following probabilistic proof of the famous squared triangular number identity,

1³ + 2³ + ⋯ + n³ = (1 + 2 + ⋯ + n)²,

or also, if there is a ready reference (in which case I would add it to the other quoted proofs in the wiki article). The idea is to normalize by n⁴ and see the LHS and RHS as probabilities of two suitable equiprobable events. Precisely: choose randomly 4 numbers X, Y, Z, W between 1 and n, uniformly and independently. How would you justify in the quickest way that {X ≤ Y and Z ≤ W} has the same probability as {max(X,Y,Z) ≤ W}? As a first step, one could start noticing that the first event is the same as ... Thank you for your suggestions.--pma 18:42, 8 July 2010 (UTC)[reply]

That first step seems kind of problematic since the distribution of max(Z,W) is not independent of whether Z ≤ W because of the asymmetry involved in including the cases where Z = W, and so {X ≤ Y} does not have the same probability as {max(X,Y) ≤ max(Z,W)}. Rckrone (talk) 05:48, 10 July 2010 (UTC)[reply]
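Whatever the best phrasing of the events, the counting version of the identity is easy to verify by brute force over {1,…,n}⁴: the quadruples with max(x,y,z) ≤ w are counted by the sum of cubes, and those with x ≤ y and z ≤ w by the square of the sum. A sketch (my own choice of events, not necessarily pma's):

```python
from itertools import product

def counts(n):
    rng = range(1, n + 1)
    quads = list(product(rng, repeat=4))
    a = sum(1 for x, y, z, w in quads if max(x, y, z) <= w)   # sum of cubes
    b = sum(1 for x, y, z, w in quads if x <= y and z <= w)   # (sum)^2
    return a, b, sum(k**3 for k in rng), sum(rng)**2

# All four counts coincide for each small n
assert all(len(set(counts(n))) == 1 for n in range(1, 7))
```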

Tweaked Konigsberg

The bridges of Konigsberg

Hey all. I'm afraid I'd have no idea how to describe the problem put in front of me in a standard way, but then, you lot are better than google anyways :)

It's like the Bridges of Konigsberg problem - so I'm guessing topologists are the people to ask - except with a load more islands and bridges (100 islands and 250 bridges, to be precise), plus two restrictions:

  • Instead of being focused around the bridges, it is focused around the islands: you can visit each only once.
  • Instead of wanting an answer of "it's possible" or "it's impossible" (it's definitely impossible to visit every island - some islands have only one bridge, some have none), we want the route that visits the most islands before being forced to revisit an island.

Are there any algorithms / methods that would help with this? Thanks in advance, - Jarry1250 [Humorous? Discuss.] 20:53, 8 July 2010 (UTC)[reply]

This is the longest path problem. Algebraist 20:57, 8 July 2010 (UTC)[reply]
With distance 1 between each point? Also, I am not given the starting point or ending point, though I guess I could arbitrarily choose them. - Jarry1250 [Humorous? Discuss.] 21:31, 8 July 2010 (UTC)[reply]
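Since longest path is NP-hard, 100 islands and 250 bridges will likely need heuristics or clever pruning, but for small instances a plain depth-first search works. A minimal sketch (the graph and names are made up; distances are irrelevant since only the number of islands visited counts):

```python
def longest_path(adj):
    """Longest simple path, as a list of vertices, in a graph given as
    {vertex: set_of_neighbours}. Exponential time: small graphs only."""
    best = []

    def dfs(v, path, seen):
        nonlocal best
        if len(path) > len(best):
            best = path[:]
        for u in adj[v]:
            if u not in seen:
                seen.add(u)
                path.append(u)
                dfs(u, path, seen)
                path.pop()
                seen.remove(u)

    for start in adj:
        dfs(start, [start], {start})
    return best

# Toy example: a square of islands 0-1-2-3 with island 4 hanging off island 3
adj = {0: {1, 3}, 1: {0, 2}, 2: {1, 3}, 3: {0, 2, 4}, 4: {3}}
```

Here longest_path(adj) visits all 5 islands, e.g. 2-1-0-3-4.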

July 9

Ackermann function

The article gives A(1,n), A(2,n), A(3,n), and A(4,n), but what's A(m,n), without recursion? --138.110.206.101 (talk) 01:05, 9 July 2010 (UTC)[reply]

Didn't you notice the way it grew rapidly? It grows too rapidly to be representable using the functions and operations one normally comes across. Dmcq (talk) 01:16, 9 July 2010 (UTC)[reply]
The function is defined recursively. You can't change that. If there was a closed form expression for it, the article would give it (and the function wouldn't be very interesting, since it couldn't grow anywhere near as fast). --Tango (talk) 21:07, 9 July 2010 (UTC)[reply]
The fact that it's defined recursively doesn't mean that there isn't a closed form for it (e.g. Fibonacci numbers). --138.110.206.101 (talk) 21:32, 10 July 2010 (UTC)[reply]
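To make the contrast concrete: each row m has its own closed form (for small m), but there is no closed form in both arguments. A memoised sketch of the usual Ackermann–Péter definition:

```python
import sys
from functools import lru_cache

sys.setrecursionlimit(100000)   # the recursion gets deep even for small inputs

@lru_cache(maxsize=None)
def ack(m, n):
    if m == 0:
        return n + 1
    if n == 0:
        return ack(m - 1, 1)
    return ack(m - 1, ack(m, n - 1))

# Row-by-row closed forms, as given in the article:
assert all(ack(1, n) == n + 2 for n in range(20))
assert all(ack(2, n) == 2 * n + 3 for n in range(20))
assert all(ack(3, n) == 2 ** (n + 3) - 3 for n in range(6))
```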

Looking for a more intuitive way for expressing some property of some sets (and of a formula associated with them).

  • Let n be a natural number.
  • For every i ≤ n, let Si be a set.
  • Let φ(x1,…,xn) be a well-formed formula (with the free variables as indicated).

Is there a more intuitive/common (or a shorter) way, for expressing the following property of φ?

    • For every natural i ≤ n and for every natural k ≤ n, every x1,…,xn satisfying φ(x1,…,xn) satisfy that xi ∈ Si is equivalent to xk ∈ Sk.

If you don't know of a more intuitive/common (or a shorter) way for expressing that property (n being general), then how about the simplest case of n=2 ? HOOTmag (talk) 10:43, 9 July 2010 (UTC)[reply]

Well, I would find it more intuitive stated as follows: if φ(x1,...,xn), then either xi ∈ Si for every i, or xi ∉ Si for every i. YMMV.—Emil J. 11:48, 9 July 2010 (UTC)[reply]
Ok, thank you, but it's still a formal formulation, while I'm looking for a more intuitive expression.
Let's take the simplest case, where n=2. Do you know of an intuitive way for expressing the following property: every x,y satisfying φ(x,y) satisfy that x ∈ X is equivalent to y ∈ Y.
HOOTmag (talk) 13:03, 9 July 2010 (UTC)[reply]
n collectors bring samples of their coin collection to a bank. Each sample contains one each of some number of different types of coins. Some collectors may share coin types. The collectors each hand a clerk one coin from their collection. The clerk is new and mixes them all up. He needs to get the right type of coin back to the right collector, so he guesses which coins go where and computes phi, which tells him if he's entirely right or entirely wrong. Presumably he has another method of determining if he's entirely wrong--perhaps he remembered the second collector handed him a penny. Types of coins are used instead of coins themselves because coins are unique and not shareable among collectors, while types of coins are sharable. This is simpler if the Si are disjoint. Hopefully this is something like what you're after; I prefer Emil's formulation, personally. This would probably need to be reworked to fit whatever your application is, as well, since it's pretty far removed from the formal statements above. 67.158.43.41 (talk) 15:30, 9 July 2010 (UTC)[reply]
I do not know the answer but as a non-mathematician and following the Emil's formulation I would look for something like all-or-none... --CiaPan (talk) 13:24, 9 July 2010 (UTC)[reply]
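In case it helps to pin the property down, here is CiaPan's "all-or-none" reading as a brute-force check over a finite domain (a sketch; the names are mine):

```python
from itertools import product

def all_or_none(phi, sets, domain):
    """True iff whenever phi(x1,...,xn) holds, the memberships
    xi in Si are either all true or all false."""
    for xs in product(domain, repeat=len(sets)):
        if phi(*xs):
            flags = [x in s for x, s in zip(xs, sets)]
            if any(flags) and not all(flags):
                return False
    return True

evens = {0, 2, 4}
# n = 2: phi(x, y) = "x + y is even" ties x's parity to y's parity
assert all_or_none(lambda x, y: (x + y) % 2 == 0, [evens, evens], range(5))
# but an always-true phi does not have the property:
assert not all_or_none(lambda x, y: True, [evens, evens], range(5))
```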

Differential Forms

Let Ω^k(M) denote the space of differential k-forms over a smooth manifold M. Furthermore, let dΩ^(k−1)(M) denote the space of exact differential k-forms over M. I would like to understand the quotient space:

Q = Ω^k(M) / dΩ^(k−1)(M).

For example, is Q connected? How could I calculate the homology groups of Q? What about the homotopy groups of Q? Even if you can't answer these questions (which I can't either); any suggestions would be helpful. Thanks in advance. •• Fly by Night (talk) 19:13, 9 July 2010 (UTC)[reply]

speed cameras versus timestamped photos at 2 points.

The argument was made that speed cameras are fallible. So the city decided to put cameras at two ends of a road with timestamps (assume their clocks are properly synchronized).

If you were photographed at point A at 12:00:00 noon, and point B one minute later, at 12:01:00, where point B is 2 miles away, then it would seem like you were going two miles a minute, or 120 miles per hour. You would like to claim that you were going less than that. Is there any way, mathematically, that you could make the argument that being at A at noon and two miles down one minute later does not mean you had to go at 120 miles per hour or faster at any point along the trip? That you could go at various speeds, slowing down, accelerating, weaving and bobbing, and so end up at the specified point, but at no point in your stopping, going, accelerating, decelerating, backing into reverse, whatever else, would you have been going at 120 miles per hour or faster?

How would that mathematical argument look? Conversely what is the mathematical argument that you must have reached or surpassed 120 miles per hour at least at one point during the one minute interval? This is not homework. 84.153.230.67 (talk) 20:31, 9 July 2010 (UTC)[reply]

A distance-time graph makes it obvious to me that a constant speed between the points had to be 120 mph. Any other pattern will have to have had greater speeds somewhere. The UK approach is to put a limit on average speed between specified points, which is just distance over time regardless of any variation over the stretch.→86.160.105.64 (talk) 20:53, 9 July 2010 (UTC)[reply]
(edit conflict) To answer your first question: No. There is no way that you can be in one place at one time and then two miles away one minute later without travelling at at least two miles per minute. There's no need for a mathematical argument; it's just common sense. Let's assume you go in a straight line. Imagine your friend drives at a constant 120 mph and you decide to accelerate and decelerate. If you set off faster than your friend then you have already gone faster than 120 mph. If you travel slower than 120 mph at any point then your friend will go into the lead. If you want to catch him up then you will need to travel faster than he is, i.e. drive faster than 120 mph. If you bob and weave then you increase the distance you must cover, so you will have to travel even faster to get to the same point at the same time as your friend. Remember: distance = speed × time.
As for the second question: it is possible for you to travel above 120 mph, but still be clocked as travelling at 120 mph. Assume the road has a width of w miles. You drive diagonally from (0,0) to (w,2), and it takes you one minute. Using Pythagoras' Theorem you have travelled √(w² + 2²) = √(w² + 4) miles in one minute, i.e. an average speed of 60√(w² + 4) mph > 120 mph.
•• Fly by Night (talk) 21:06, 9 July 2010 (UTC)[reply]
The argument that proves you must have been going at 120mph at some point between A and B is the Mean value theorem. --Tango (talk) 21:12, 9 July 2010 (UTC)[reply]
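A numerical illustration of the MVT point (the profile is invented for the example): any smooth position function with x(0) = 0 and x(1) = 2 (miles, minutes) averages 2 mi/min, so its speed must reach 120 mph somewhere:

```python
from math import sin, cos, pi

def x(t):      # position in miles at time t minutes; x(0) = 0, x(1) = 2
    return 2 * t + 0.1 * sin(2 * pi * t)

def speed(t):  # derivative of x, in miles per minute
    return 2 + 0.2 * pi * cos(2 * pi * t)

max_mph = max(speed(i / 1000) for i in range(1001)) * 60
assert max_mph >= 120          # somewhere the car does at least 120 mph
```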
I'll have to admit I wondered a while ago about having two similar cars and putting the same false numbers on both, or even just copying one, and arranging them to pass two cameras like that where it would be obviously impossible for them to go at speed. I suppose it would count as wasting time at best and they would take a dim view of it. :) Dmcq (talk) 21:32, 9 July 2010 (UTC)[reply]
Driving with false plates isn't exactly legal... --Tango (talk) 22:25, 9 July 2010 (UTC)[reply]
But who has not done it at least once? PST 10:35, 10 July 2010 (UTC)[reply]
Uncountable millions of people.→81.147.2.107 (talk) 14:03, 10 July 2010 (UTC)[reply]
You should get out more and meet some people. I recall the days when my grandpa used to take me out in his car, but I never quite understood why he had so many car plates with different numbers in his garage ... ;) PST 14:48, 10 July 2010 (UTC)[reply]
But now you're all grown up you do understand why your grandfather found it necessary? What was it - Resistance, gun-running, drugs, ...?→81.147.2.107 (talk) 23:35, 10 July 2010 (UTC)[reply]
I should clarify: the MVT proves that you must have been going at exactly 120mph at some point. A straight line being the shortest distance between two points (Pythag. proves that) is all you need to prove that you must have been going at least 120mph at some point. --Tango (talk) 22:28, 9 July 2010 (UTC)[reply]
How does a straight line being the shortest distance between two points prove anything here? At least the MVT requires differentiability. What if the car travels your straight line but the graph of its displacement versus time is continuous everywhere but differentiable only almost everywhere such that the car had velocity 0 at all points where its velocity is defined. What does Pythag. have to say about that? -- 124.157.197.248 (talk) 14:17, 10 July 2010 (UTC)[reply]
It means the ride must have had infinite acceleration at some points. Very jerky, it wouldn't do your neck any good. Dmcq (talk) 14:28, 10 July 2010 (UTC)[reply]

July 10

Online virtual Turing machine

I'm working on Turing machine programs, but after a while it gets tedious to go through all of the steps manually. Is there someplace online which will allow me to enter a Turing machine program and the input value, and will then simulate the Turing machine? --138.110.206.101 (talk) 21:24, 10 July 2010 (UTC)[reply]
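If nothing online fits, a minimal simulator is only a screenful of code. A sketch (the conventions are mine: rules map (state, symbol) to (new state, written symbol, head move), and the machine halts when no rule applies):

```python
def run_tm(rules, tape, state="start", blank="_", max_steps=10000):
    """Run a one-tape Turing machine and return the final tape contents."""
    cells = dict(enumerate(tape))
    pos = 0
    for _ in range(max_steps):
        key = (state, cells.get(pos, blank))
        if key not in rules:
            break                     # no applicable rule: halt
        state, cells[pos], move = rules[key]
        pos += move
    if not cells:
        return ""
    lo, hi = min(cells), max(cells)
    return "".join(cells.get(i, blank) for i in range(lo, hi + 1)).strip(blank)

# Example program: unary successor (walk right, append a 1, halt)
rules = {("start", "1"): ("start", "1", +1),
         ("start", "_"): ("done", "1", +1)}
```

For example run_tm(rules, "111") returns "1111".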

Cosets

Hi. I am currently battling my way through a book on Group Theory and am having a bit of trouble with cosets. We have already stated that for a group G with a subgroup H and an index set I with the element g_i being an element of the ith coset, G = ∪_{i∈I} g_iH. We are now attempting to prove that this is a decomposition of G into distinct left cosets. The first observation made is that if g_iH = g_kH then we must have Hg_i^(−1) = Hg_k^(−1) and hence g_k^(−1)g_i ∈ H, which is impossible unless i=k. How exactly have we established Hg_i^(−1) = Hg_k^(−1)? Is it simply by saying that because we know g_iH = g_kH, then g_i^(−1)g_k and g_k^(−1)g_i must be in H also and so we can form the respective left cosets and set them equal to each other? Or have we performed some inversion such as (g_iH)^(−1) = H^(−1)g_i^(−1) = Hg_i^(−1), where H is its own inverse, of sorts? That suggestion may be complete and utter rubbish; just remember, I'm new to this. Then it says i=k; is this just using that two cosets are either identical or have no element in common? I have more questions but I don't want to make this too long and scare off any potential readers so I'll save them until I have a response! Thanks asyndeton talk 21:53, 10 July 2010 (UTC)[reply]

You are right (informally) that you invert both sides of g_iH = g_kH to get Hg_i^(−1) = Hg_k^(−1). To prove this, take x ∈ Hg_i^(−1), then x = hg_i^(−1) for some h ∈ H. Then g_i ∈ g_kH, so ... see where this is going? Finish that argument (to show x ∈ Hg_k^(−1)), and you'll have shown Hg_i^(−1) ⊆ Hg_k^(−1). The converse inclusion is exactly the same, with i and k swapped. Staecker (talk) 23:13, 10 July 2010 (UTC)[reply]
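The claim that the distinct left cosets partition G is also easy to check by machine on a small example. A sketch with G = S3 represented as permutation tuples and H generated by a transposition (the names are mine):

```python
from itertools import permutations

def compose(p, q):
    """Composition of permutations as tuples: (p o q)(i) = p[q[i]]."""
    return tuple(p[i] for i in q)

G = set(permutations(range(3)))       # S3, six elements
H = {(0, 1, 2), (1, 0, 2)}            # subgroup {identity, swap 0 and 1}

cosets = {frozenset(compose(g, h) for h in H) for g in G}

# Distinct left cosets are disjoint and cover G:
assert sum(len(c) for c in cosets) == len(G)
assert set().union(*cosets) == G
```

Here there are |G|/|H| = 3 distinct cosets, each of size |H| = 2.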

1+1=2

I was surprisingly asked this question in an Arabic forum:

When can 1+1=2?

I answered: Impossible mathematically, but might be possible if it were a philosophical puzzle.

I knew then that the one who asked me had a Ph.D. in Maths, and he said it was possible in Mathematics as an exceptional case related to Relativity theory. Does this make sense? Help me please! --Email4mobile (talk) 22:08, 10 July 2010 (UTC)[reply]

You probably mean: when can't 1+1=2? i.e. when can 1+1 ≠ 2, right?
Since you're looking for an answer in Relativity theory, I think it's really rather simple and trivial (for anybody who has learnt that scientific discipline): let's assume that - when travelling at a speed of 1 mile per minute - you meet another vehicle travelling in the opposite direction at a speed of 1 mile per minute; then you won't see it travel at a speed of 2 miles per minute, although this is what should be expected intuitively.
The reasoning behind this phenomenon is that the mathematical operation required here is not a simple "addition" (+), so it's not really a case of 1+1 ≠ 2...
HOOTmag (talk) 22:30, 10 July 2010 (UTC)[reply]
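For concreteness, the operation HOOTmag alludes to is the relativistic velocity-addition formula w = (u + v)/(1 + uv/c²), which only reduces to plain addition when the speeds are tiny compared to c:

```python
def add_velocities(u, v, c=1.0):
    """Relativistic velocity addition (speeds in units of c by default)."""
    return (u + v) / (1.0 + u * v / c**2)

assert abs(add_velocities(0.5, 0.5) - 0.8) < 1e-12      # not 1.0
assert add_velocities(1.0, 1.0) == 1.0                  # c is the limit
assert abs(add_velocities(1e-6, 1e-6) - 2e-6) < 1e-15   # ~ordinary addition
```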

Girls, buses, cats AGAIN.

Hi all,

On a late-night gambling game show, the old chestnut was posed:

Five girls are traveling by tourist bus. Each girl has 5 baskets, each basket has 5 cats, and each cat has 5 kittens. The bus stops and 3 girls get off. How many legs are on the bus?

I figured, great, I'll use some of my free cellphone money that keeps piling up to send the R7.50 SMS to try my luck. It's obvious: Each basket has 5 cats +5*5 kittens = 30 cats = 120 legs, so each girl has 120*5 = 600 legs + 2 of her own = 602; 2 girls mean 1204 legs, duh.

Someone guesses that before they call me, and it's wrong. Oh, right, sneaky! The girls get off, leaving their cats behind, so 3004 legs left. Wash, rinse, repeat. Oh, right, the driver! 3006 then. Someone guesses that too. Also wrong. Same with 1206 (assuming a driver, but the girls take their livestock with them.)

The answer they finally gave was 2708. What? I can't figure out how that came to be. Any ideas? --Slashme (talk) 23:50, 10 July 2010 (UTC)[reply]

err... I hate to point it out, but there aren't any legs on the bus. Busses have wheels. --Ludwigs2 23:57, 10 July 2010 (UTC)[reply]


No idea. Well, a couple. Maybe there were 751 boys on the bus? Or, a shipment of prosthetic legs in the cargo hold? By the way, was this the 3:42 for St. Ives? I hate it when I have to take that bus.... --Trovatore (talk) 00:02, 11 July 2010 (UTC)[reply]
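For what it's worth, the candidate totals are easy to enumerate, and none of the natural readings yields 2708 (a sketch; kittens count as cats with 4 legs each, and the bus itself contributes no legs):

```python
GIRL_LEGS, CAT_LEGS, DRIVER_LEGS = 2, 4, 2

def legs_on_bus(girls_left_on_bus, stuff_stays_on_bus, driver):
    animals_per_girl = 5 * 5 + 5 * 5 * 5           # 25 cats + 125 kittens
    girls_whose_stuff_is_aboard = 5 if stuff_stays_on_bus else girls_left_on_bus
    return (girls_left_on_bus * GIRL_LEGS
            + girls_whose_stuff_is_aboard * animals_per_girl * CAT_LEGS
            + (DRIVER_LEGS if driver else 0))

candidates = {legs_on_bus(2, stuff, drv)
              for stuff in (False, True) for drv in (False, True)}
assert candidates == {1204, 1206, 3004, 3006}      # 2708 is not among them
```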

July 11

Simple Probability

Suppose that there have been 20 no-hitters in baseball history. What is the probability that there's been at least one on each day of the week?

The case for 7 no-hitters is simple enough, but I'm having trouble extending this to 8+ days. Help would be greatly appreciated. 74.15.137.192 (talk) 00:33, 11 July 2010 (UTC)[reply]
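The standard tool here is inclusion-exclusion over the set of missed days: assuming each no-hitter independently lands on a uniformly random day of the week, P(all 7 days covered in n = 20) = Σₖ (−1)ᵏ C(7,k)(7−k)²⁰ / 7²⁰, which comes out to about 0.704. A sketch that cross-checks the sum against a surjection-counting recursion:

```python
from math import comb

def p_all_days(n, d=7):
    """P(every one of d days is hit at least once in n uniform draws)."""
    return sum((-1)**k * comb(d, k) * (d - k)**n for k in range(d + 1)) / d**n

def surjections(n, k):
    """Number of onto maps from an n-set to a k-set (recursive check)."""
    if n == 0:
        return 1 if k == 0 else 0
    if k == 0:
        return 0
    return k * (surjections(n - 1, k - 1) + surjections(n - 1, k))

# Cross-check the two methods on a smaller case, then answer the question
assert abs(p_all_days(12) - surjections(12, 7) / 7**12) < 1e-15
p = p_all_days(20)
```

For n = 7 this reduces to 7!/7⁷, the case the OP already solved.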