Wikipedia:Reference desk/Mathematics
Revision as of 05:48, 1 October 2013

Welcome to the mathematics section
of the Wikipedia reference desk.
Want a faster answer?

Main page: Help searching Wikipedia

   

How can I get my question answered?

  • Select the section of the desk that best fits the general topic of your question (see the navigation column to the right).
  • Post your question to only one section, providing a short header that gives the topic of your question.
  • Type '~~~~' (that is, four tilde characters) at the end – this signs and dates your contribution so we know who wrote what and when.
  • Don't post personal contact information – it will be removed. Any answers will be provided here.
  • Please be as specific as possible, and include all relevant context – the usefulness of answers may depend on the context.
  • Note:
    • We don't answer (and may remove) questions that require medical diagnosis or legal advice.
    • We don't answer requests for opinions, predictions or debate.
    • We don't do your homework for you, though we'll help you past the stuck point.
    • We don't conduct original research or provide a free source of ideas, but we'll help you find information you need.



How do I answer a question?

Main page: Wikipedia:Reference desk/Guidelines

  • The best answers address the question directly, and back up facts with wikilinks and links to sources. Do not edit others' comments and do not give any medical or legal advice.


September 24

Diagram

What you can or cannot conclude from a diagram. — Preceding unsigned comment added by Mike Boissonnault (talkcontribs) 00:50, 24 September 2013 (UTC)[reply]

That would rather depend on the diagram, wouldn't it ? Care to give us an example ? StuRat (talk) 13:09, 24 September 2013 (UTC)[reply]
In formal mathematics, a diagram may be used to illustrate a proof, but no conclusions should be drawn from the diagram as it may not be completely general or sufficiently accurate. For example, you cannot draw a completely general triangle, as the specific triangle in your diagram will be either a right triangle, an acute triangle or an obtuse triangle - it cannot be all three at the same time. The missing square puzzle shows the dangers of relying on diagrams - the puzzle relies on an apparently straight line which is in fact not quite straight. In less formal contexts, proofs without words and diagrammatic notations such as Feynman diagrams, Penrose diagrams and Penrose graphical notation can all be useful. Gandalf61 (talk) 14:14, 24 September 2013 (UTC)[reply]
  • In case you are talking in the category sense, our article Diagram (category theory) would be a start; but the question would be hard to answer without a little more specificity.
  • In the generic sense of geometric diagrams/pictures, I would say that you shouldn't conclude anything from them, but that they can be useful for inspiration/insight, in the same way that working out specific example cases can be of value but does not constitute a proof. Sometimes a diagram can be misleading, so you should be cautious about overusing them (sort of like how observing that 3, 5, 7, 11, and 13 are prime might make you think that all nonsquare odd numbers are prime).
  • A more formal use of diagrams in logic can be found at our article Method of analytic tableaux.
  • Visual programming language gives examples of programming languages that interpret flowcharts, and other such.
  • There is an interactive proof system Diamond that uses diagrams to construct concrete cases, via interaction with the user, then generalizes and checks the proof. See here: [1], [2].
  • Peirce did some work with this. See our articles: Logical graph and Existential graph. For more, you might consult [3].
  • For more information on reasoning and diagrams, etc., have a look at: Diagrammatic reasoning, Mathematical diagram, [4], [5], and [6]. (The attached PDFs were free and online; I'm sure there are many more comprehensive sources elsewhere if you have an interest.)
  • You may also want to take a look at Graph Theory (a branch of mathematics whose object of study is related to how connections can be modeled with a diagram- that's a bit glib, though). And, in a more general context, Mathematical visualization. Phoenixia1177 (talk) 05:24, 27 September 2013 (UTC)[reply]

True or False ?

Would anyone be so kind as to verify the following equality, either confirming it or infirming it ?

where

Thank you !79.113.233.194 (talk) 01:46, 24 September 2013 (UTC)[reply]

Sideways discussions
The following discussion has been closed. Please do not modify it.
The equation is meaningless as written, as the factorial function only applies to non-negative integers, so it can't be used on a variable of integration. You could perhaps replace the factorial with the gamma function, writing Γ(x+1) in place of x!. AndrewWTaylor (talk) 13:22, 24 September 2013 (UTC)[reply]
I think Arfken's mathematical physics text suggests the reverse: use x! as a synonym for Γ(x+1).76.218.104.120 (talk) 22:48, 24 September 2013 (UTC)[reply]
Satisfied ? — 79.113.229.58 (talk) 13:41, 24 September 2013 (UTC)[reply]
Does everyone else see a red bold message about "Failed to parse (unknown error)" above (in two places)? (I don't know how it could be just me but no one else is commenting about it or fixing it...) RJFJR (talk) 13:44, 24 September 2013 (UTC)[reply]
I had the same problem when I was writing my comment - I seemed to get the errors every other time I saved.. AndrewWTaylor (talk) 13:48, 24 September 2013 (UTC)[reply]
I've been having this problem for several weeks now, I think... — 79.113.229.58 (talk) 13:57, 24 September 2013 (UTC)[reply]
Unresolved – See link

First "block" of 100 non-primes...

I'm not looking for the first prime gap >= 100, I'm looking for the first instance where every number from k*100 to (k+1)*100-1 is composite. For example, the first "block" of 10 that is all composite is 200-209.Naraht (talk) 15:55, 24 September 2013 (UTC)[reply]

In PARI/GP:
forstep(n=0,10^7,100,if(nextprime(n)-n>100,print1(n", ")))
1671800, 2637800, 3117300, 3933600, 4640600, 4652400, 5178200, 5518700, 5837400, 5845200, 6012900, 6085000, 6333800, 6376200, 6789800, 6958700, 7129900, 7565200, 7803500, 7826900, 8027700, 8367400, 8421300, 8905200, 9549000, 9708000,
time = 516 ms.
An OEIS search shows this is oeis:A136295. PrimeHunter (talk) 16:09, 24 September 2013 (UTC)[reply]
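
For readers without PARI/GP, the same search can be reproduced with an ordinary sieve of Eratosthenes; a minimal C++ sketch (the 10^7 limit mirrors the run above, and the code itself is only illustrative):

#include <cstdio>
#include <vector>

int main() {
    const int limit = 10000000;                       // same search range as the PARI/GP run above
    std::vector<bool> composite(limit + 100, false);  // sieve of Eratosthenes
    composite[0] = composite[1] = true;
    for (long long i = 2; i * i < (long long)composite.size(); ++i)
        if (!composite[i])
            for (long long j = i * i; j < (long long)composite.size(); j += i)
                composite[j] = true;

    for (int n = 0; n <= limit; n += 100) {           // examine each block n .. n+99
        bool allComposite = true;
        for (int k = n; k < n + 100; ++k)
            if (!composite[k]) { allComposite = false; break; }
        if (allComposite) { std::printf("%d\n", n); break; }   // first block in the list above: 1671800
    }
    return 0;
}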
Whoosh... Why did I expect you would be the one to answer? :) Interestingly enough, I can't find a link from this oeis link to one for "millennia", but I'm guessing it would start at a *much* higher number.Naraht (talk) 17:30, 24 September 2013 (UTC)[reply]
Based on a quick manual inspection of http://www.trnicely.net/gaps/gaplist.html#MainTable and http://www.trnicely.net/gaps/gaplist.html#SuppTable, the first case of 1000 appears to be a gap of 1122 after 31068473876462989 in this entry:
1122 CFC Be.Nyman 2001 29.55 17 31068473876462989
PrimeHunter (talk) 19:32, 24 September 2013 (UTC)[reply]
So all numbers between 31068473876463000 and 31068473876464000 are prime. Thank you!Naraht (talk) 20:05, 24 September 2013 (UTC)[reply]
You mean they are all composite. PrimeHunter (talk) 11:49, 25 September 2013 (UTC)[reply]
Yes, duh!Naraht (talk) 17:24, 27 September 2013 (UTC)[reply]


September 26

Computing the carry in a multiplication operation

For my own amusement, I work on a C++ program that is intended to solve a particular number theoretic problem. In that program, I need to multiply some numbers. Since the intermediate numbers encountered in the computations can be large I am creating my own functions for the basic arithmetic operations (such as addition, subtraction, multiplication). I chose to represent the numbers I am working with as strings. I already have a working addition and a subtraction function and now want to create a multiplication function. If I want to compute something like 543865864 × 343246, the function would iterate through the digits of the multiplier, convert the digit to an unsigned short and then multiply with each digit of the multiplicand (with proper carry). And here my problem starts:

Suppose I have two multi-digit numbers and the multiplication of two digits results in a carry (like 8 × 9, where the carry would be 7 (since 8 × 9 = 72)). How can I compute the value of that carry, so that I can store it into a variable? (Of course, it is easy to see for us that the carry is 7, but the program has no way of knowing that the value that is to be assigned to my carry variable should be 7).

In the end I want to have two variables (say unsigned shorts); one that contains 2 and one that contains 7. -- Toshio Yamaguchi 12:03, 26 September 2013 (UTC)[reply]

There are lots of packages that'll do that sort of thing very rapidly. But if you want to do it yourself, the operations you probably want are integer division and the modulo operation.
product = in1 * in2;   /* e.g. 8 * 9 = 72 */
tens = product / 10;   /* integer division gives the carry digit: 7 */
units = product % 10;  /* the remainder gives the units digit: 2 */
is a low level way of writing it all. Dmcq (talk) 12:15, 26 September 2013 (UTC)[reply]
If I do 8 × 9 = 72 and then 72 / 10, I get 7.2 but I would prefer an algorithm that directly yields 7 (without the decimal part). If I do 72 % 10, I get 2, which is not what I want. -- Toshio Yamaguchi 12:26, 26 September 2013 (UTC)[reply]
Facepalm
If I do
int a = 72;
int b = 10;
int c = a / b;
then c would be truncated after the 7 due to the typecast, of course. -- Toshio Yamaguchi 12:31, 26 September 2013 (UTC)[reply]
Actually it is not the assignment that truncates it but the fact that you're dividing one integer by another. This can catch people out who do some adding up using an integer variable and then divide by the number of items to give the mean and assign to a float. You need to do something like (float)sum/count or sum/12.0 Dmcq (talk) 12:42, 26 September 2013 (UTC)[reply]
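A minimal illustration of that pitfall (the numbers are made up for the example):

#include <iostream>

int main() {
    int sum = 0;
    const int count = 3;
    const int items[count] = {3, 4, 4};     // adds up to 11
    for (int i = 0; i < count; ++i) sum += items[i];

    float wrongMean = sum / count;          // integer division happens first: 11 / 3 == 3
    float rightMean = (float)sum / count;   // promote before dividing: 3.666...

    std::cout << wrongMean << " vs " << rightMean << "\n";
    return 0;
}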
You might find our page on Binary multiplier useful. You can multiply binary numbers of arbitrary size using addition and shifts. The Chinese remainder theorem can also be used to store very large numbers: our article mentions that this is used in RSA encryption, which calculates the product of two very large primes. OldTimeNESter (talk) 12:40, 26 September 2013 (UTC)[reply]
Thanks for the suggestion. In my algorithm I currently plan to multiply the numbers digit wise (using long multiplication), which I believe will be quite fast, since that way I never have to compute a product larger than 81. -- Toshio Yamaguchi 12:57, 26 September 2013 (UTC)[reply]
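For what it's worth, a minimal sketch of that digit-wise approach on decimal strings (the function name and the per-column accumulator are illustrative choices, not code from this thread); each column keeps its value modulo 10 and passes the integer quotient upward as the carry, as discussed above:

#include <algorithm>
#include <cstddef>
#include <string>
#include <vector>

// Multiply two non-negative decimal numbers given as strings, e.g. "543865864" and "343246".
std::string multiplyDecimalStrings(const std::string& a, const std::string& b) {
    // col[k] accumulates all digit products that belong to the 10^k column.
    std::vector<unsigned long long> col(a.size() + b.size(), 0);
    for (std::size_t i = 0; i < a.size(); ++i)
        for (std::size_t j = 0; j < b.size(); ++j)
            col[i + j] += (unsigned long long)(a[a.size() - 1 - i] - '0') *
                          (unsigned long long)(b[b.size() - 1 - j] - '0');

    // One carry pass: the units (cur % 10) stay in the column, the carry (cur / 10) moves up.
    std::string out;
    unsigned long long carry = 0;
    for (std::size_t k = 0; k < col.size(); ++k) {
        unsigned long long cur = col[k] + carry;
        out.push_back(char('0' + cur % 10));
        carry = cur / 10;
    }
    while (carry > 0) { out.push_back(char('0' + carry % 10)); carry /= 10; }

    // Drop leading zeros (at the back, since out is least-significant-first) and reverse.
    while (out.size() > 1 && out.back() == '0') out.pop_back();
    std::reverse(out.begin(), out.end());
    return out;
}

Calling it as multiplyDecimalStrings("543865864", "343246") reproduces long multiplication without ever forming a single-digit product larger than 81, in the spirit of the plan above.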


September 27

How fast can humans accelerate without dying from G-force

Hi refdesk peeps,

In conversation with friends, we wondered just how fast a person could theoretically travel to planet Mars. I did a little bit of research, and learned that it's unlikely anyone could live through more than about 9g for more than a few minutes - that probably needs to be way lower, but for the purposes of my calc, let's say 9g perhaps. But I don't know the rate of acceleration that would result in 9g.

We're ignoring fuel requirements and propulsion, and all that stuff. And being super-lenient on survival. I'd just like to know, if you accelerated as fast as possible for 'a while', and then decelerated, how long would it take to travel the roughly 200 million km to the red planet.

Would the resulting high-speed actually be a problem, or is it just G that'd be the killer?

This isn't homework - it's just a silly pub-conversation. Thanks in anticipation of any replies, — Preceding unsigned comment added by 88.104.6.199 (talk) 04:03, 27 September 2013 (UTC)[reply]

For a pub, a back-of-the-beer-coaster estimate is appropriate. Distance in terms of acceleration and deceleration times would be d = (1/2)a·t1^2 + (1/2)a·t2^2. If we set t1 = t2 = T/2, then d = a·T^2/4 for total time T = 2·sqrt(d/a). Putting in the numbers (in meters) d = 2×10^11 and a = 9.8n, where n is the number of gees, the total time is T ≈ 2.8×10^5/sqrt(n) seconds. Of course the highest velocity would be v = a·T/2 ≈ 1.4×10^6·sqrt(n) m/s, or a half of a percent of the speed of light at one gee. It would be quite a trip :-) --Mark viking (talk) 04:32, 27 September 2013 (UTC)
Thanks very much, that's great :-) So if I assumed we could somehow survive 9g for a bit, I make it 2.8*10^5 / 3 = 93,333s - around 26 hours. I hope I got that last bit right. At a more 'realistic' (for want of a better word) 3g, about 45 hours. (Kinda surprisingly not massively different). Now we just need incredibly OP weightless propulsion, a ship that can handle it. Cool, thanks again. — Preceding unsigned comment added by 88.104.6.199 (talk) 06:08, 27 September 2013 (UTC)[reply]
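A small sketch that reproduces those back-of-the-coaster numbers (the 2×10^11 m distance and 9.8 m/s^2 per gee are the rough figures used above):

#include <cmath>
#include <cstdio>

int main() {
    const double distance = 2.0e11;   // roughly 200 million km to Mars, in metres
    const double g = 9.8;             // one gee, in m/s^2

    // Accelerate for the first half of the trip, decelerate for the second half:
    // d = a*(T/2)^2, so T = 2*sqrt(d/a) and the peak speed is a*T/2.
    for (double gees : {1.0, 3.0, 9.0}) {
        double a = gees * g;
        double T = 2.0 * std::sqrt(distance / a);
        double vmax = a * T / 2.0;
        std::printf("%2.0f g: %.0f s (about %.0f hours), peak speed %.2e m/s\n",
                    gees, T, T / 3600.0, vmax);
    }
    return 0;
}

At 9 g it prints roughly 26 hours, matching the estimate above.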
At these speeds, particle shielding would become a problem. Solar wind is mostly a particle stream at 0.002c [citation needed], and at 1% c, one would go through five times the number of particles per second than at 0% c. These particles would have more kinetic energy, too. But I digress; it should probably be asked at RD/S if somebody wants to know more about that. - ¡Ouch! (hurt me / more pain) 12:34, 27 September 2013 (UTC)[reply]
Note that how many g's you can take very much depends on the duration. At 9g, for example, your blood probably won't circulate, so you have only seconds or minutes before you pass out, then die. A flight suit that contracts and relaxes to help pump your blood might help somewhat, but we're still only talking minutes, not hours. As for what acceleration a human could withstand for hours, we might be down to 1.1 or 1.2 g's, to avoid permanent damage, similar to bed sores. StuRat (talk) 16:30, 27 September 2013 (UTC)[reply]
Also note that the problem isn't caused by the acceleration itself, but rather by the fact that the body is being accelerated by contact forces. The contact force ends up being transferred into your body, the body deforms, and stresses build up in it as a result. If you could apply a force uniformly to your body, you would not feel the acceleration. Gravity is one example: if you were accelerated at 100 g by a uniform gravitational field, you would feel weightless.
To accelerate a person safely at 100 g, you could consider placing a large number of small magnetic pellets in the body of the person. The person could then be left to float in a strong magnetic field in the accelerating spacecraft. You could even try to regulate this system so that the magnetic field contributes one g less than the spacecraft's acceleration. The contact forces will then yield the remaining 1 g, making the person feel like he's on Earth. Count Iblis (talk) 17:35, 27 September 2013 (UTC)
Liquid immersion would increase the tolerable acceleration to around 15 to 20 g's. Above that one needs a breathable liquid with suitable density (close to the density of water). Ssscienccce (talk) 17:06, 28 September 2013 (UTC)[reply]

Viability of a certain hash function?

Suppose we have two large prime numbers (say 128 bits each) "BASE" and "MODULUS" which are publicly known, and then some secret number of arbitrary length "EXPONENT". How secure is the result ((BASE ^ EXPONENT) % MODULUS)? 70.112.97.77 (talk) 09:46, 27 September 2013 (UTC)[reply]
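
Incidentally, the hash itself is cheap to compute with square-and-multiply; a minimal C++ sketch with toy-sized values (the function name and numbers are illustrative, and genuine 128-bit primes would need a big-integer type):

#include <cstdint>
#include <iostream>

// Square-and-multiply: computes (base^exponent) % modulus without huge intermediates.
// Demo sizes only (modulus below 2^32 so the products fit in 64 bits).
std::uint64_t powMod(std::uint64_t base, std::uint64_t exponent, std::uint64_t modulus) {
    std::uint64_t result = 1 % modulus;
    base %= modulus;
    while (exponent > 0) {
        if (exponent & 1)
            result = result * base % modulus;  // fold in the current bit of the exponent
        base = base * base % modulus;          // square for the next bit
        exponent >>= 1;
    }
    return result;
}

int main() {
    // Toy stand-ins for BASE and MODULUS (both prime) and a secret EXPONENT.
    std::cout << powMod(101, 123456789, 1000000007ULL) << "\n";
    return 0;
}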

I would think not very secure, because of the following:
With the base and the modulus you can compute ord_MODULUS(BASE), and then finding the exponent only requires checking multiples of ord_MODULUS(BASE), which I believe could be done relatively quickly. -- Toshio Yamaguchi 12:39, 27 September 2013 (UTC)[reply]
Okay, thanks! Your explanation eventually led me to the page on Diffie-Hellman (a very similar problem, apparently), which points out:
"The protocol is considered secure against eavesdroppers if G and g are chosen properly.[...]The order of G should be prime or have a large prime factor to prevent use of the Pohlig–Hellman algorithm to obtain a or b. [...] g is then sometimes chosen to generate the order q subgroup of G, rather than G, so that the Legendre symbol of g^a never reveals the low order bit of a."
So the above implies that, considering the group "G" of MODULUS(BASE), as long as I can (1) verify that there is some integer "E" such that BASE^E ≡ 1 (mod MODULUS) (thus proving that the order of G is prime) and then (2) find (and replace BASE with) the generator "D" of a subgroup of G (in order to protect the lower order bits of EXPONENT), then my algorithm (ie: D ^ EXPONENT % MODULUS) should be secure as well. Does that sound correct? (Sorry, my number theoretic abilities are somewhat lacking - I'm more of a programmer than mathematician.) Solving requirement #1 seems pretty straightforward, but I'm a little confused how to go about solving for #2 (or what it even means, really!). 70.112.97.77 (talk) 15:39, 27 September 2013 (UTC)[reply]
I think what you are describing is related to the discrete logarithm problem. If this is correct then your method is moderately secure, as there is no known algorithm for solving the problem in a time that is a polynomial function of the number of digits in the modulus. However, there are algorithms that can solve the problem in a time that is proportional to the square root of the modulus, so a 128 bit modulus is not completely unbreakable - see discrete logarithm records. A 256 bit modulus would be better. Gandalf61 (talk) 16:01, 27 September 2013 (UTC)[reply]
Interesting, but then the article you linked suggests that even a 1024 bit modulus wouldn't necessarily be sufficient. I was hoping for quite a bit more security, something requiring, say, the lifetime of the universe using all of the computing power in the world to break! Any idea how many bits would be required to achieve that? 70.112.97.77 (talk) 16:40, 27 September 2013 (UTC)[reply]
That would depend very much on what you mean by all the computing power in the world: the computing power that exists today, or the power that one would expect in the future based on Moore's law. Assume that processing power doubles every 18 months, and we have a key length that would take 10^14 years to break (the estimated lifetime of the Stelliferous Era) with today's computing power: 10^14 ≈ 2^46.5 and 46.5 × 1.5 = 69.75, so in 70 years' time we will have enough computing power to break that same encryption in less than a year. Ssscienccce (talk) 17:33, 28 September 2013 (UTC)[reply]
Good point. Well that settles it, I guess I just need to use googolplex-bit numbers! Time to optimize that Miller-Rabin code... 70.112.97.77 (talk) 01:11, 29 September 2013 (UTC)[reply]
See also RSA (algorithm). Duoduoduo (talk) 16:31, 27 September 2013 (UTC)[reply]

Determining a coefficient

I know that the coefficient of x^k in (1 + x + x^2 + ... + x^k)^n is n+k-1 C k. Is there a similarly easy expression for the coefficient of x^k in (1 + x + x^2 + ... + x^r)^n, where r<k and nr>=k? 31.54.112.70 (talk) 11:51, 27 September 2013 (UTC)[reply]

It is:

sum_{p=0}^{k/(r+1)} (-1)^p (n C p) (n+k-p(r+1)-1 C k-p(r+1))
Count Iblis (talk) 14:09, 27 September 2013 (UTC)[reply]

Thanks. Is the upper limit of the summation the largest integer not exceeding k/(r+1)? 31.54.112.70 (talk) 15:13, 27 September 2013 (UTC)[reply]
Yes, you can also say that the summation stops when p exceeds k/(r+1). Count Iblis (talk) 15:27, 27 September 2013 (UTC)[reply]
I believe the floor notation could be used to state it explicitly. lulzmango (talk) 07:58, 28 September 2013 (UTC)[reply]
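
One quick way to check the formula and the stopping point of the summation discussed above is to compare against brute-force expansion of (1 + x + ... + x^r)^n; a minimal sketch (helper names are illustrative and only small n, r, k are handled):

#include <cstdint>
#include <iostream>
#include <vector>

// Coefficient of x^k in (1 + x + ... + x^r)^n, by direct polynomial expansion.
std::uint64_t coeffByExpansion(int n, int r, int k) {
    std::vector<std::uint64_t> poly(1, 1);           // start with the polynomial "1"
    for (int step = 0; step < n; ++step) {
        std::vector<std::uint64_t> next(poly.size() + r, 0);
        for (std::size_t i = 0; i < poly.size(); ++i)
            for (int j = 0; j <= r; ++j)
                next[i + j] += poly[i];              // multiply by (1 + x + ... + x^r)
        poly = next;
    }
    return k < (int)poly.size() ? poly[k] : 0;
}

std::uint64_t binom(std::int64_t m, std::int64_t j) {   // m choose j, small cases only
    if (j < 0 || j > m) return 0;
    std::uint64_t b = 1;
    for (std::int64_t i = 1; i <= j; ++i) b = b * (m - j + i) / i;
    return b;
}

// Same coefficient via the inclusion-exclusion sum quoted above.
std::int64_t coeffByFormula(int n, int r, int k) {
    std::int64_t total = 0;
    for (int p = 0; p * (r + 1) <= k; ++p) {             // sum stops once p exceeds k/(r+1)
        std::int64_t sign = (p % 2 == 0) ? 1 : -1;
        total += sign * (std::int64_t)binom(n, p)
                      * (std::int64_t)binom(n + k - p * (r + 1) - 1, k - p * (r + 1));
    }
    return total;
}

int main() {
    int n = 5, r = 3, k = 7;                              // arbitrary small test case
    std::cout << coeffByExpansion(n, r, k) << " " << coeffByFormula(n, r, k) << "\n";
    return 0;
}

Both routes should print the same value for any small n, r, k with r < k <= n*r.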

September 28

TVS's and Dual Space Relation

I know that, in general, the dual space of a topological vector space (more specific context = Banach spaces) is not isomorphic to its dual space. Are there any results that measure the degree to which they fail to be isomorphic (in any sense)? More interestingly, any results that link this to obstructions derived from the original space? Phoenixia1177 (talk) 06:05, 28 September 2013 (UTC)[reply]

More specifically, given a category of topological v. spaces over the complex numbers, is C ever an injective object; and if not, can the failure of Hom(-,C) to be an exact functor be used to characterize the relation between A, A*, and A** in that category? That A->A*->A** gives a natural isomorphism in the finite algebraic case, and so the functor is exact, seems like it should have some extension to the tvs case, and failures there. Thank you for any help:-)Phoenixia1177 (talk) 07:14, 28 September 2013 (UTC)[reply]

Reflexive space and Stereotype space might help.John Z (talk) 18:46, 28 September 2013 (UTC)[reply]

Looking for a word that means 'constantize'

Hi all, what words exist to describe taking a multi-variate function and approximating some of the arguments into a single argument? Possibly even removing an argument or a set of arguments entirely and replacing them with a constant value that is the expected average for the removed argument(s). A few other explanations of the idea:

In realtime computer graphics there is a great term - "Baking" - (also known as "lightmaps") which means exactly what I'm thinking of in the context of shadows and ambient occlusion. Instead of rendering the lighting effects in realtime, the effects are pre-rendered with average lighting conditions and those pre-rendered images are then blitted onto the static texture images, and not computed in realtime. This is valuable to reduce the computation requirements during realtime rendering. The term baking is great, but I've never seen it used outside of that context and I'm sure there is a good generic way to describe it.

Considering the idea of Taylor polynomials, you could say what I'm interested in is a name for the 1st term of the expansion about a point a (equal to f(a)), but a slightly more generic word to describe finding something like this. I know of no verb form for the word 'constant', but anything similar is what I'm after.

There is also a really solid interpretation of this idea in probability theory. It could be explained as taking a conditional probability P(A|B) and attempting to remove the condition to find P(A). Taking the posterior probability and determining the prior probability.

You could also describe this word as what it is called to take a binary function and attempt to combine the arguments or approximate one of the arguments as a constant somehow such as to force it to be nullary.

An example of how this comes up for me is if, given a dataset, you are interested in determining who-knows-what based on the data. Instead of looking at every bit of data in the dataset, one can simply take the average value (effectively losing data) and then use that average to make any determination. This process of ignoring the rest of the data, and 'compressing' it into a single argument, is exactly what I'm interested in.

Thank you for any insight regarding this. I encounter this problem on a regular basis and it kills me when I can't express myself. There has to be some kind of generic term for this, right? Thanks.

-- lulzmango (talk) 07:02, 28 September 2013 (UTC)[reply]

Sorts of words you'll get for that are instance, instantiate or instantiation, currying, slicing, fixing, or constraining. Dmcq (talk) 09:59, 28 September 2013 (UTC)[reply]
Thank you, Dmcq. Most of those words are in the right vain. (currying is not relevant though, and I'm not sure what 'slicing' means here) --lulzmango (talk) 02:13, 30 September 2013 (UTC)[reply]
Slicing is applied more to matrices and there means to select entries in a particular row or column. For instance if you have figures about food and its nutrition then particular slices might be for cucumbers or salt. A selection would be another word. By the way, vein not vain. Dmcq (talk) 07:51, 30 September 2013 (UTC)[reply]
My guess is that you are talking about texture baking. In physics this would be a kind of perturbative approximation. In perturbation theory, one calculates a solution to a problem and uses it to find an approximate solution to a slightly different problem. In the computer graphics case, you solve one scene and use that solution to approximately solve a new scene in which say a new object is introduced. The scene affects the lighting of the new object, but we assume that the object has only a small, nay negligible, effect on all the other scene elements, so that they stay unchanged. This sort of technique is also used to incorporate expensive radiosity effects into raytraced scenes. A radiosity solution was first computed, then baked into the textures for the purposes of raytracing. It allows the incorporation of diffuse reflections into a render without having to recompute the radiosity solution for every scene. --Mark viking (talk) 12:18, 28 September 2013 (UTC)[reply]
Thank you very much, Mark. Perturbation theory is new to me and I'm finding it very interesting. --lulzmango (talk) 02:13, 30 September 2013 (UTC)[reply]
I would call this a restriction of the multivariate function to a lower-dimensional set, although there is also a reparametrization involved. This verbiage applies when replacing an argument with any known expression: for instance, taking y = y0 in f(x, y) yields g(x) = f(x, y0), where y0 is the chosen constant. Sometimes one just writes f(x) since the change of argument count (or, non-rigorously, even renaming an argument!) implies the reparametrization. --Tardis (talk) 03:28, 1 October 2013 (UTC)[reply]
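In programming terms, fixing or baking in an argument at a representative constant is just partial application; a minimal C++ sketch (the function and the average value are invented for illustration):

#include <iostream>

// A binary function: travel time as a function of distance and speed.
double travelTime(double distance, double speed) { return distance / speed; }

int main() {
    const double averageSpeed = 60.0;   // made-up "baked-in" average for the second argument

    // Fixing the speed turns the binary function into a unary one.
    auto timeAtAverageSpeed = [averageSpeed](double distance) {
        return travelTime(distance, averageSpeed);
    };

    std::cout << timeAtAverageSpeed(300.0) << "\n";   // 5: only distance varies now
    return 0;
}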

September 29

Importance of Zero in Numbers

What is the importance of zero in numbers — Preceding unsigned comment added by 92.99.40.158 (talk) 15:28, 29 September 2013 (UTC)[reply]

Makes math a lot easier, due to its importance in positional notation... If you don't believe me, try multiplying two numbers using Roman numerals... :-) — 79.113.244.214 (talk) 16:15, 29 September 2013 (UTC)[reply]
Please see our article Zero, specifically the section Zero#In mathematics. -- ToE 23:59, 30 September 2013 (UTC)[reply]

September 30

"Disproof" (not really) of Goldbach's conjecture

(Background. My previous try here on Wikipedia Reference Desk a few weeks ago, using a statistical/probabilistic/heuristic proof to prove Goldbach's conjecture, was shot down by it being pointed out that the "proof" did not preclude the - admittedly small - possibility of there being some 2N for which there is not even one pair of primes such that P1+P2=2N.)

So, I decided to see just what is the possibility of such a 2N for which not even one pair of primes P1+P2=2N.

SPOILER ALERT: The results are not at all what you might expect.

Obviously, 2N-3 must not be prime. The probability of that is 1-1/ln(2N-3).

Also, 2N-5 must not be prime. The probability of that is 1-1/ln(2N-5).

And so on, down to N+a not being prime, N-a being the largest prime .LE. N and "a" being an integer .GE. 0. The probability of that is 1-1/ln(N+a).

We note that all of these probabilities are smaller than 1-1/ln(2N) and greater than 1-1/ln(N).

For 3, 5, ... , 19, 23, 29, ... , N-a, there are a total of ln(N) cases, ALL of which must hold for 2N to have no pair of primes such that P1+P2=2N.

That is, the probability of 2N for which not even one pair of primes P1+P2=2N, is .LE. (1-1/ln(2N))^ln(N) and .GE. (1-1/ln(N))^ln(N).

These two results - "(1-1/ln(2N))^ln(N)" and "(1-1/ln(N))^ln(N)" - are very much like the expression for e, which is (1+1/n)^n as n grows without bound.

We can substitute -m for n in the expression for e, to get (1-1/m)^(-m), or 1/((1-1/m)^m).

The minus sign inverts the results, so in the limit we have (approximately) 1/e as the probability of 2N for which no pair of primes P1+P2=2N.

A 37% (thirty-seven percent) probability!!?? That's not so small.

And what's worse, it appears that this whole process can repeat itself for 4N, 8N, 16N, etc.

(Curiouser and curiouser, said Alice.)

Yet Goldbach's comet and numeric calculations, so far as they go, completely contradict this probability of 1/e.

Some possible suggestions (whether correct or not) to solve this paradox:

1. There is an error in my logic/mathematics. (What is the error?)

2. The primes have a distribution law that has not been taken into account. (What is this law?)

3. When actual numbers are plugged in, theories about primes suffer quantum collapse. (OK, just checking if you read this far.)

4. Something else. (Please state your suggestion.) Bh12 (talk) 10:58, 30 September 2013 (UTC)[reply]

I don't understand why you say "there are a total of ln(N) cases". Since there are approximately N/ln(N) primes less than N, I would have expected the number of cases (and hence your exponents too) to be N/ln(N), not ln(N). Gandalf61 (talk) 11:37, 30 September 2013 (UTC)[reply]

I said there are a total of ln(N) cases precisely because there are a total of ln(N) primes (3,5,...,19,23,29,...,N-a) that must have a non-prime as their complement to 2N, if 2N cannot be a sum of two primes.Bh12 (talk) 12:12, 30 September 2013 (UTC)[reply]

So take N = 100 (so 2N = 200). There are 24 primes between 3 and 100. So I would have thought you had 24 cases. But ln(N) = 4.6, so you say there should only be 4 or 5 cases. Which 4 or 5 primes from the 25 available primes do you pick ? And how do you pick them ? Gandalf61 (talk) 12:19, 30 September 2013 (UTC)[reply]

Yes, you are correct that the number of primes 3,5,...,N-a is N/ln(N), and not ln(N) as I had previously said. This changes the expression for the probability of 2N not being the sum of two primes to (1-1/ln(2N))^(N/ln(N)), roughly (1/e)^(N/(ln(N)·ln(2N))), which not only destroys the "disproof," it also shows that finding a 2N that is not the sum of two primes becomes exponentially more unlikely (yet still not impossible) as 2N increases.Bh12 (talk) 13:22, 30 September 2013 (UTC)[reply]
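
To see how quickly the corrected estimate shrinks, its logarithm can be tabulated directly; a small sketch (it merely evaluates the heuristic from this thread and proves nothing about Goldbach):

#include <cmath>
#include <cstdio>

int main() {
    // Heuristic from the discussion: Pr[2N is not a sum of two primes] ~ (1 - 1/ln(2N))^(N/ln N),
    // whose natural logarithm is (N/ln N) * ln(1 - 1/ln(2N)).
    for (double N : {1.0e2, 1.0e4, 1.0e6, 1.0e8}) {
        double logP = (N / std::log(N)) * std::log(1.0 - 1.0 / std::log(2.0 * N));
        std::printf("N = %.0e: ln(probability) is about %.3g\n", N, logP);
    }
    return 0;
}

Already at N = 10^6 the log-probability is thousands below zero, which is the "exponentially more unlikely" behaviour noted above.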

Confusion about Diffie-Hellman

According to the article on Diffie-Hellman, for the algorithm to be secure: "The order of G should be prime or have a large prime factor to prevent use of the Pohlig–Hellman algorithm to obtain a or b". But since "p" is supposed to be prime and given that the generator "g" of group G is the primitive root mod "p", then the order of G is always going to be p-1 (which itself cannot be prime, obviously!). So should the wording in the article instead be: "The order of G should have a large prime factor to prevent use of the Pohlig–Hellman algorithm to obtain a or b" or am I just missing something here? Thanks! 70.112.97.77 (talk) 14:39, 30 September 2013 (UTC)[reply]

The article says very little about the choice of group, which does not (as far as I know) have to be the multiplicative group of integers modulo a prime, only a group in which the discrete logarithm is hard. Nevertheless, the alternative of the order of the group being prime is superfluous, and I have made your suggested change. In the specific case of this type of group, your observation that the order cannot be prime is correct (except, of course, in the trivial case of p = 3). — Quondum 16:10, 30 September 2013 (UTC)[reply]
Great, thanks for the clarification and correcting the page in question. Cheers! 70.112.97.77 (talk) 18:33, 30 September 2013 (UTC)[reply]
Resolved

Polynomial divisibility

Is there an algorithm that can determine whether a polynomial is divisible (with no remainder) by another polynomial that is faster than the standard polynomial long division? 68.0.144.214 (talk) 21:16, 30 September 2013 (UTC)[reply]

Not quite... But for instance if you happen to know the roots of one of them, then you can check and see if the other one has them as well. And if it doesn't, then they're not divisible. — 79.113.232.83 (talk) 22:18, 30 September 2013 (UTC)[reply]
But precisely that computation can be done without explicitly knowing the roots: you just reduce the polynomial modulo that other polynomial (that other polynomial is, of course, zero modulo itself), and that is much faster than division, because you don't need to keep track of the quotient. Count Iblis (talk) 22:30, 30 September 2013 (UTC)[reply]
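A minimal floating-point sketch of that remainder-only reduction (the function name and tolerance are illustrative; with exact integer or rational coefficients no tolerance would be needed):

#include <cmath>
#include <cstddef>
#include <vector>

// Coefficients are stored highest degree first, e.g. x^2 - 1 is {1, 0, -1}.
// Reduce num modulo den, keeping only the running remainder, then test whether it is ~0.
bool isDivisible(std::vector<double> num, const std::vector<double>& den, double tol = 1e-9) {
    const std::size_t n = num.size(), m = den.size();
    if (m == 0 || den[0] == 0.0 || n < m) return false;
    for (std::size_t i = 0; i + m <= n; ++i) {
        double factor = num[i] / den[0];      // quotient coefficient, used once and never stored
        for (std::size_t j = 0; j < m; ++j)
            num[i + j] -= factor * den[j];    // cancel the current leading term
    }
    for (std::size_t i = n - m + 1; i < n; ++i)   // whatever is left is the remainder
        if (std::fabs(num[i]) > tol) return false;
    return true;
}

For example, isDivisible({1, 0, -1}, {1, -1}) (that is, x^2 - 1 divided by x - 1) returns true, while isDivisible({1, 0, 1}, {1, -1}) returns false.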
Is the question about polynomials with integer coefficients? --catslash (talk) 22:55, 30 September 2013 (UTC)[reply]
Instead of using standard polynomial long division, which would have complexity O(M(N-M)) for numerator and denominator polynomials of degrees N and M respectively, you can use the Fast Fourier Transform to deconvolve the two polynomials and achieve a computational complexity of O(N log N) (see this page describing how to use FFTs for polynomial multiplication; the modifications for division are straightforward if you are familiar with convolution and the Discrete Fourier Transform). In practice, you will have to worry about problem conditioning and a few edge cases, but this will hopefully get you started. Note: There may well be a method to test for divisibility without performing the division at all, but I can't think of any except for special cases, e.g. when the roots of one of the polynomials are known (as mentioned above). Abecedare (talk) 05:48, 1 October 2013 (UTC)[reply]

October 1

Calculus chain rule problem

I'm currently working my way through Professor Kleitman's Calculus for Beginners. Unfortunately, there are no solutions provided for the example problems, and I'm having some trouble arriving at the correct answers in some cases, so I would like some assistance in figuring out where I've gone wrong.

Given the functions f(s) = (s^2 + 1)/s and g(x) = x^2 - 1, I am asked to use the chain rule to determine the derivative of f(g(x)).
I start by determining the derivative of g(x):

g'(x) = 2x
Then I let g(x) = s and substitute it into f(x):

f(s) = (s^2 + 1)/s
I then rearrange f(s):

f(s) = (s^2 + 1) · (1/s)
Apply the product rule to get:

Apply the reciprocal rule:

(1/s)' = -s'/s^2
Now applying the chain rule and substituting x^2 - 1 for s and 2x for s':

To check if this derivative is correct, Professor Kleitman provided an applet. To input the substituted expression: I used the notation ((((x^2)-1)^2)+1)/((x^2)-1). I checked this manually to make sure that the applet returns the correct value for f(x) and it does. The values the applet returns for f'(x) only match those returned by my derivative for x=-1,1,0. What's wrong with my working above? 202.155.85.18 (talk) 02:32, 1 October 2013 (UTC)[reply]

The first error I see is the s' at the end of the first set of equations. Since f is being differentiated with respect to s there, no s' should appear in f'(s). The chain rule doesn't involve substituting 2x for s', but rather multiplying f'(s)·g'(x) (where the first derivative is with respect to s, of course!). Meanwhile, instead of using the product and reciprocal rules, I would write f(s) = s + s^(-1) and just use the power rule twice. --Tardis (talk) 03:07, 1 October 2013 (UTC)[reply]
Sorry, I don't understand that at all. When I apply the reciprocal rule to the inverse of s, the result has s' as the numerator. Given that we let g(x) = s surely g'(x)=s'? I can't use the power rule at this stage as I haven't gotten that far in the course yet. 202.155.85.18 (talk) 03:38, 1 October 2013 (UTC)[reply]
Actually, rereading I can see what you mean. Instead of s' that should just be ds/ds, i.e. 1, correct? Then the expression for f'(s) reduces to 1 - 1/s^2, and substituting x^2 - 1 for s and applying the chain rule gives 2x(1 - 1/(x^2 - 1)^2), correct? 202.155.85.18 (talk) 04:07, 1 October 2013 (UTC)[reply]

Ok, that seems to have fixed my working so the final expression becomes . This matches the derivative that I get when I directly substitute g(x) into f(x) (i.e. skipping the chain rule and just using the product and reciprocal rules). The output that I get from this expression is still slightly different to the output from the applet though e.g. for x=-5 the applet gives -9.98 whereas I get -10.02 manually. Is this just an artifact from how the applet works out derivatives? 202.155.85.18 (talk) 04:58, 1 October 2013 (UTC)[reply]
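
One way to see which value to trust is a quick central-difference check against the chain-rule result 2x(1 - 1/(x^2 - 1)^2); a small sketch, assuming f(s) = (s^2 + 1)/s and g(x) = x^2 - 1 as in the problem:

#include <cmath>
#include <cstdio>

// f(g(x)) written out directly, with f(s) = (s^2 + 1)/s and g(x) = x^2 - 1.
double fog(double x) {
    double s = x * x - 1.0;
    return (s * s + 1.0) / s;
}

int main() {
    const double x = -5.0, h = 1.0e-6;
    double numeric   = (fog(x + h) - fog(x - h)) / (2.0 * h);              // central difference
    double chainRule = 2.0 * x * (1.0 - 1.0 / std::pow(x * x - 1.0, 2));   // f'(g(x)) * g'(x)
    std::printf("numeric %.4f, chain rule %.4f\n", numeric, chainRule);
    return 0;
}

Both print values near -9.98 at x = -5, which is what the applet reports.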