Wikipedia:Reference desk/Mathematics: Difference between revisions

From Wikipedia, the free encyclopedia

Welcome to the mathematics section
of the Wikipedia reference desk.
Want a faster answer?

Main page: Help searching Wikipedia

   

How can I get my question answered?

  • Select the section of the desk that best fits the general topic of your question (see the navigation column to the right).
  • Post your question to only one section, providing a short header that gives the topic of your question.
  • Type '~~~~' (that is, four tilde characters) at the end – this signs and dates your contribution so we know who wrote what and when.
  • Don't post personal contact information – it will be removed. Any answers will be provided here.
  • Please be as specific as possible, and include all relevant context – the usefulness of answers may depend on the context.
  • Note:
    • We don't answer (and may remove) questions that require medical diagnosis or legal advice.
    • We don't answer requests for opinions, predictions or debate.
    • We don't do your homework for you, though we'll help you past the stuck point.
    • We don't conduct original research or provide a free source of ideas, but we'll help you find information you need.



How do I answer a question?

Main page: Wikipedia:Reference desk/Guidelines

  • The best answers address the question directly, and back up facts with wikilinks and links to sources. Do not edit others' comments and do not give any medical or legal advice.

June 26

Generalizing to the continuous case, take II

A unistochastic matrix can be constructed from a unitary matrix by taking the norm-squared of each element of the unitary matrix.

Can this process be generalized to unitary operators on general Hilbert spaces, to create "unistochastic operators"? For convenience, call the map from a unitary matrix U to the corresponding unistochastic matrix M, i.e. M(U)_ij = |U_ij|^2.

The motivation is this: As a unistochastic matrix is a stochastic matrix, it can be interpreted as the transition matrix of a Markov chain. Markov chains with unistochastic transition matrices have unique properties. I would like to see how these properties are preserved or changed when generalizing to general Markov processes (as opposed to finite-state).
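For a concrete finite-dimensional picture of the construction described above, here is a minimal Python/NumPy sketch (the function names and the random-unitary recipe are illustrative choices, not anything from the question): it builds a unistochastic matrix as the entrywise squared modulus of a unitary matrix and checks that every row and column sums to 1.

  import numpy as np

  def random_unitary(n, seed=0):
      # Unitary Q from the QR decomposition of a random complex Gaussian matrix
      rng = np.random.default_rng(seed)
      z = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
      q, r = np.linalg.qr(z)
      return q * (np.diagonal(r) / np.abs(np.diagonal(r)))  # fix column phases

  def unistochastic(u):
      # Entrywise squared modulus of a unitary matrix gives a unistochastic matrix
      return np.abs(u) ** 2

  u = random_unitary(4)
  b = unistochastic(u)
  print(np.allclose(b.sum(axis=0), 1), np.allclose(b.sum(axis=1), 1))  # True True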

This is what I've tried:

A unitary matrix can be written in the form U = e^(iH) where H is a Hermitian matrix, or self-adjoint operator in general. An example of a self-adjoint operator on the space of functions is the second-derivative operator −d²/dx².

If we approximate d²f/dx² with (f(x+h) − 2f(x) + f(x−h))/h² (see finite difference)

then

This sort of looks like an infinite matrix multiplication. My guess is that

but I have no idea how to evaluate this, and I could be completely barking up the wrong tree anyway. PeterPresent (talk) 09:17, 26 June 2018 (UTC)[reply]

I'm having a bit of trouble with the example you're giving, and perhaps that stems from an error in the Self-adjoint operator#A simple example section in our article. First, I think you need to qualify the 'space of functions' a bit to get a Hilbert space; say C^∞ functions on [0, 1], or square-integrable C^∞ functions on (−∞, ∞), for example. Otherwise the inner product, which I'm assuming to be ⟨f, g⟩ = ∫ f(x) g(x)* dx,
isn't defined. Second, that the operator is self-adjoint relies on integration by parts and the assumption that the value of the function is 0 at the endpoints. But for the space to be closed under the action of the operator you'd need to restrict to a subspace where all derivatives, or all even derivatives at least, are 0 at the endpoints. (Later on in the article they talk about a multiple of the first derivative being an operator as well.) You could take the subspace of odd periodic C^∞ functions on [0, 1], for example. In any case, the article seems to gloss over this point so it's not clear what space the operator is supposed to be acting on. Again I'm way out of my comfort zone here so maybe someone who deals with Hilbert space operators on a regular basis can chime in on this point.
Not sure about the overall question of whether the operator U could be turned into some kind of stochastic matrix. I suppose the result would be a stochastic process of some kind, and it seems to me that it would be similar to Brownian motion, since d²/dx² appears in the diffusion equation. --RDBury (talk) 02:54, 28 June 2018 (UTC)[reply]
Thanks for your response. There is a dearth of information online about this. I think one of the reasons this is difficult to define is that M is not invariant under a change of basis, i.e. M(V U V⁻¹) ≠ V M(U) V⁻¹ in general. PeterPresent (talk) 15:57, 29 June 2018 (UTC)[reply]

Confusion about prime-generating Diophantine equation

The formula for primes article gives a Diophantine equation for generating primes, describing it as "a polynomial inequality in 26 variables, and the set of prime numbers is identical to the set of positive values taken on by the left-hand side as the variables a, b, …, z range over the nonnegative integers", the equation in question being:

But how can the left hand side ever be positive considering that (k+2) is being multiplied by one minus all of those terms? It just doesn't make sense! Earl of Arundel (talk) 17:52, 26 June 2018 (UTC)[reply]

It could if the terms being squared are all 0, which looking at the article right above this statement is what's really being sought. Writing it like this is apparently just a way to make it be expressed as an inequality. –Deacon Vorbis (carbon • videos) 18:06, 26 June 2018 (UTC)[reply]
Yes, that is just another way of saying that all of the square terms are 0. For example, the first equation in the set is:

wz + h + j − q = 0

So when that is zero, the first square is 0. Bubba73 You talkin' to me? 18:12, 26 June 2018 (UTC)[reply]

I still don't see how this formula could be used to generate primes. The equation N² + N + 41 produces primes by plugging in values for N on the interval [0, 39]. How could the Diophantine equation be rearranged to work in that sort of manner? Earl of Arundel (talk) 19:23, 26 June 2018 (UTC)[reply]
It is not a practical way of generating primes because it is so hard to find a set of the 26 variables that will satisfy the equation. Bubba73 You talkin' to me? 19:59, 26 June 2018 (UTC)[reply]
I was afraid you were going to say that. Well, at least I learned something new today. :p Thanks for the prompt responses everyone. Cheers! Earl of Arundel (talk) 20:20, 26 June 2018 (UTC)[reply]
You might like the generating primes article. Bubba73 You talkin' to me? 20:27, 26 June 2018 (UTC)[reply]
Yes, sieving might be the answer although for extremely large primes it seems like it might not be very practical. I'm inclined to think randomly selecting primes of a certain form and then subjecting them to the Miller-Rabin test or what have you would be more efficient. Earl of Arundel (talk) 20:44, 26 June 2018 (UTC)[reply]
It depends on how many primes you need and how large they are. There is PrimeGrid which looks for large primes of certain forms. Bubba73 You talkin' to me? 02:24, 27 June 2018 (UTC)[reply]
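To make the contrast above concrete, here is a short Python sketch (the Miller–Rabin implementation and the bit size are illustrative choices, not anything prescribed in the thread): Euler's polynomial N² + N + 41 is prime for every N in [0, 39], and random candidates of a given size can be screened with a probabilistic primality test, as suggested.

  import random

  def is_probable_prime(n, rounds=20):
      # Miller-Rabin probabilistic primality test (plenty for illustration)
      if n < 2:
          return False
      for p in (2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37):
          if n % p == 0:
              return n == p
      d, r = n - 1, 0
      while d % 2 == 0:
          d //= 2
          r += 1
      for _ in range(rounds):
          a = random.randrange(2, n - 1)
          x = pow(a, d, n)
          if x in (1, n - 1):
              continue
          for _ in range(r - 1):
              x = pow(x, 2, n)
              if x == n - 1:
                  break
          else:
              return False
      return True

  # Euler's polynomial N^2 + N + 41 is prime for every N from 0 to 39
  print(all(is_probable_prime(N * N + N + 41) for N in range(40)))  # True

  def random_prime(bits=128):
      # "Randomly selecting primes": try random odd candidates of the given size
      while True:
          candidate = random.getrandbits(bits) | (1 << (bits - 1)) | 1
          if is_probable_prime(candidate):
              return candidate

  print(random_prime())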

June 27

Boltzmann Distribution

I'm having a lot of trouble integrating the Boltzmann distribution with respect to velocity. I haven't done calculus for a lot of years, so I'm probably making some basic error. Any help would be appreciated. First off, the form of the function is (since I'm integrating by parts, I'm using x to represent velocity, otherwise it gets confusing once I start substituting the parts for u and v):

Then I tried integrating by parts following the formula:

Now to work out the integral of I will need to do integration by parts again.


Now I can substitute everything back in to get this hideous integral:

Now I know it's wrong because 1) it gives crazy results when I try to determine the fractions of gas particles between particular speeds and 2) because when I plot it next to the original function it doesn't look anything like the integral of it (in fact the curve is almost the same). Clearly I'm doing something really dumb in performing this integral but I have no idea what. I've googled around a bit and came up with some references to reduction formulae which just left me more confused than when I started. 61.247.39.121 (talk) 09:18, 27 June 2018 (UTC)[reply]

Ok, I can see that I've been integrating wrong. Should have been rather than . I fixed that and did the integration again. Obviously got a completely different formula but it still doesn't look right nor return values that make sense. 61.247.39.121 (talk) 09:36, 27 June 2018 (UTC)[reply]
Scratch that. I can now see that this should be treated as a Gaussian integral, so is rewritten to and the integral is . Now I run into an even weirder problem where the two terms of the integration by parts end up with the same value and subtract to equal zero. 61.247.39.121 (talk) 10:30, 27 June 2018 (UTC)[reply]
First off, can you say how you reached the conclusion that the function f you specified describes the Boltzmann distribution? Because I don't think that's the case, and I think the actual function for the Boltzmann distribution is much easier to integrate (it is of the form a·e^(−bx), and can be solved simply with substitution).
But since this is RD/math and not RD/science, I'll ignore that and talk a bit about integrating the function you specified, which is not as easy.
You should first clarify what it is you are actually trying to do - find the indefinite integral (equivalently, a function whose derivative is f), or the definite integral in some interval (such as ). Because you didn't specify an integration interval at first; but in your latest edit said you need to treat an expression as a Gaussian integral and reduce it to some number. The integral only reduces to a number if it's definite, otherwise you get a function, not a number. And not an elementary function, either.
Here we should discuss your original mistake of using ∫ e^(g(x)) dx = e^(g(x))/g'(x). You were probably tempted by ∫ e^(ax) dx = e^(ax)/a, but this does not work when there is a function in the exponent.
It's simpler with derivatives - the derivative of e^(ax) is a·e^(ax), but the derivative of e^(g(x)) is not just e^(g(x)), it's g'(x)·e^(g(x)) because of the chain rule.
Integrals are inherently harder to calculate than derivatives, and here we're not so lucky. There's no general way to simplify ∫ e^(g(x)) dx, and sometimes we have to invent new functions just to have a concise way to describe such integrals. One such function is the error function, defined as erf(x) = (2/√π) ∫₀^x e^(−t²) dt, and useful for integrals like the one you specified.
The analogous rule in integration to the chain rule is the substitution rule - ∫ f(g(x)) g'(x) dx = F(g(x)), where F is an antiderivative of f - from which we can get ∫ g'(x) e^(g(x)) dx = e^(g(x)). For example, ∫ (−2x) e^(−x²) dx = e^(−x²). But if we don't have that g'(x) term - if we have ∫ e^(−x²) dx or ∫ x² e^(−x²) dx - this is much harder and requires special functions.
I'm not actually sure what is the best way to try evaluating ∫ x² e^(−x²) dx, but my silicon overlord tells me it's equal to (√π/4)·erf(x) − (x/2)·e^(−x²). This can be verified manually. Your integral is the same, up to constants.
To find a definite integral, simply subtract values at the endpoints. For your function, turns out to be , rather than 1 as we would expect, another indication that the expression is not what you were looking for.
But as I said, these kinds of integrals are not easy, and I suggest practicing first by solving much easier problems before tackling this one. -- Meni Rosenfeld (talk) 12:40, 27 June 2018 (UTC)[reply]
ETA: Thanks to Ruslik0's pointers, I now understand better the derivation of your formula (I didn't consider 3-dimensionality, while at the same time I assumed the distribution was based on energy rather than velocity). Anyway, the error in your formula is smaller than I thought: the π in the denominator of the exponent is superfluous; the exponent should be −Mx²/(2RT). -- Meni Rosenfeld (talk) 21:56, 27 June 2018 (UTC)[reply]
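For anyone who wants to sanity-check an antiderivative like the one above, a minimal Python/SciPy sketch (the names and endpoints are just illustrative) comparing (√π/4)·erf(x) − (x/2)·e^(−x²) against direct numerical integration of x²·e^(−x²):

  import numpy as np
  from scipy.special import erf
  from scipy.integrate import quad

  def antiderivative(x):
      # Candidate antiderivative of x^2 * exp(-x^2)
      return np.sqrt(np.pi) / 4 * erf(x) - x / 2 * np.exp(-x ** 2)

  integrand = lambda t: t ** 2 * np.exp(-t ** 2)

  for upper in (0.5, 1.0, 2.0, 5.0):
      numeric, _ = quad(integrand, 0.0, upper)
      print(upper, numeric, antiderivative(upper) - antiderivative(0.0))  # the two agree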
∫ x² e^(−ax²) dx can be done by parts, using u = x and v' = x·e^(−ax²). --Wrongfilter (talk) 13:09, 27 June 2018 (UTC)[reply]
Right, now that you mention it it seems obvious. I'm a bit out of practice myself :) -- Meni Rosenfeld (talk) 13:32, 27 June 2018 (UTC)[reply]
Maxwell distributions are usually integrated by using differentiation by a parameter. Let's first introduce a new variable a = M/(2RT);
then ∫ x² e^(−ax²) dx = −(∂/∂a) ∫ e^(−ax²) dx.
Ruslik_Zero 19:41, 27 June 2018 (UTC)[reply]
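A SymPy check of the "differentiation by a parameter" identity, written here for the definite integral over (0, ∞) purely as an illustration (variable names are my own choices):

  import sympy as sp

  x, a = sp.symbols('x a', positive=True)
  gauss = sp.integrate(sp.exp(-a * x ** 2), (x, 0, sp.oo))            # sqrt(pi)/(2*sqrt(a))
  moment = sp.integrate(x ** 2 * sp.exp(-a * x ** 2), (x, 0, sp.oo))  # sqrt(pi)/(4*a**(3/2))
  # Differentiating the Gaussian integral with respect to the parameter a
  # pulls down a factor of -x^2, so moment == -d(gauss)/da:
  print(sp.simplify(moment + sp.diff(gauss, a)))  # 0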
I wrongly said that this was the Boltzmann distribution, when it is in fact the Maxwell-Boltzmann distribution. Sorry for the confusion. And yes, I entered in a superfluous pi. Using the fixed version of the expression that you gave above, and integrating by parts as per Wrongfilter,
so...
then this is the step I'm least sure about...
then...
and putting it all together...
This can be simplified to...
Still doesn't seem right though...any ideas where the error is? 61.247.39.121 (talk) 23:05, 27 June 2018 (UTC)[reply]
Ok, I can see the u' is wrong. I've doubled it for no reason. Fixing that error makes the second term in the final result . The result is still not right though. 61.247.39.121 (talk) 23:40, 27 June 2018 (UTC)[reply]
As you feared, the following step is incorrect:
Instead, it should be .
Try figuring out why that is the case - my comments above about the substitution rule might help.
Additionally, it appears you are still being very careless about whether you are calculating definite or indefinite integrals. It seems like you want the indefinite integral, but then you switch to definite when it suits you. In particular, when doing the Gaussian integral - the indefinite integral is a function of x which can be expressed with the error function. It is not just a number. If you want the definite integral, use the appropriate notation throughout.
I haven't checked for other errors. -- Meni Rosenfeld (talk) 00:23, 28 June 2018 (UTC)[reply]
I'm trying to get the indefinite integral first. Then I should be able to use that function to substitute in values to determine the definite integral at two different x values and subtract one from the other to get the area under the curve between those two points, right? 202.155.85.18 (talk) 01:21, 28 June 2018 (UTC)[reply]
The expression I'm looking for seems to be the cumulative distribution function given in the article Maxwell-Boltzmann distribution: F(v) = erf(v/(√2·a)) − √(2/π)·(v/a)·e^(−v²/(2a²)), where a = √(kT/m) (just with the difference that I'm using the molar mass of the gas and the gas constant, rather than the mass of one particle and the Boltzmann constant). It allows me to get the correct answers for the fraction of gas particles within different speed ranges. And intuitively, it follows that the CDF must be the integral of the probability distribution function. It also validates what you pointed out above about the error function playing a role in the integral. But not only can I still not figure out how to integrate the Maxwell-Boltzmann distribution to get its CDF, I can't differentiate the CDF to get the Maxwell-Boltzmann distribution either.
It's not really critical, but another annoying thing about having the error function in the integral is that I cannot enter it into graphing software to directly compare the plot of the integral to the plot of the original function. I had a look at the article on the error function, and I don't really understand how it works at all. Why does the definition of erf(x) include the integral of e^(−t²) wrt t? What is t? 202.155.85.18 (talk) 04:43, 28 June 2018 (UTC)[reply]
First, I must renew my observation that it seems you are biting off more than you can chew, and my suggestion to ease into this with easier exercises first. Get some muscle memory for both differentiation and integration - product rule, chain rule, substitution, integration by parts, fundamental theorem of calculus, derivatives of integrals, etc. This process will go through integrating x² e^(−x²), which has the same form as your function but lacks all those pesky physical constants that make things more cumbersome. Math is a pyramid - you can't build up without first laying a solid foundation.
In order to differentiate the above expression for the CDF, you will have to use the definition of erf (from which you can find its derivative), the chain rule and the product rule.
As I mentioned, the only way to have a simplified expression for ∫ e^(−x²) dx is to invent a new function for this purpose. This is where erf comes in. It is defined in terms of this integral. In the definition stated in the WP article, t is just an integration variable. The value of erf at point x is given by some definite integral on an interval that ends on x. Since x is already taken, we need a new variable to use for the function we are integrating, so we use t. The fundamental theorem of calculus then guarantees that erf'(x) = (2/√π)·e^(−x²).
It's not true you can't enter erf into graphing software. That depends entirely on the software. I recommend the online Wolfram Alpha, aka poor man's Mathematica. For example, http://www.wolframalpha.com/input/?i=erf(x),+exp(-x%5E2). -- Meni Rosenfeld (talk) 11:04, 28 June 2018 (UTC)[reply]
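If it helps, here is a minimal Python/SciPy sketch of that CDF written in the molar-mass/gas-constant form, checked against direct numerical integration of the speed distribution (nitrogen at 300 K and the 300-600 m/s range are picked purely as example numbers):

  import numpy as np
  from scipy.special import erf
  from scipy.integrate import quad

  R = 8.314    # gas constant, J/(mol K)
  M = 0.028    # molar mass, kg/mol (roughly nitrogen, as an example)
  T = 300.0    # temperature, K

  def pdf(v):
      # Maxwell-Boltzmann speed distribution with molar mass and gas constant
      return 4 * np.pi * (M / (2 * np.pi * R * T)) ** 1.5 * v ** 2 * np.exp(-M * v ** 2 / (2 * R * T))

  def cdf(v):
      # Cumulative distribution: erf term minus the boundary term
      a = np.sqrt(M / (2 * R * T))
      return erf(a * v) - 2 / np.sqrt(np.pi) * a * v * np.exp(-(a * v) ** 2)

  # Fraction of molecules with speeds between 300 and 600 m/s, two ways
  print(cdf(600) - cdf(300))
  print(quad(pdf, 300, 600)[0])  # should agree with the line above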

You will have a more pleasant life by using exponents instead of fraction bars. Your formula can be written

Privately I let the minus sign signify the number "minus one". . This convention makes my life even more pleasant than yours. Bo Jacoby (talk) 21:30, 29 June 2018 (UTC).[reply]

June 28

Differential equation

I have constructed the following system of differential equations.

, .

I'm primarily interested in the most general form of a solution for . I'm pretty sure that it's a large family of solutions, and Mathematica can't seem to help me. A form for would be nice too; again, I don't anticipate anything other than a very general expression.--Leon (talk) 13:11, 28 June 2018 (UTC)[reply]

  • From the second equation, . Let us denote (this could be any function that is suitably differentiable). Then . The first equation becomes (assuming everything needed is nonzero at all relevant points) . Then where B can be any function (again, possibly subject to smoothness conditions).
Notice that in your original question, the first equation is an obfuscation of a simple differential equation of the kind ; so the solution to that first equation is rather simple, namely that . TigraanClick here to contact me 14:04, 28 June 2018 (UTC)[reply]
Thanks. However, I fear that I made a small mistake.
, is what I want to solve.
I think that , with similar results for the other full derivatives. Is there a way of doing this? I'm primarily interested in the general form of , much as before.--Leon (talk) 10:21, 29 June 2018 (UTC)[reply]
The derivative is meaningless without some way of specifying the dependence between x and v.--Jasper Deng (talk) 15:34, 29 June 2018 (UTC)[reply]
It is a function of and . Does that help?--Leon (talk) 16:09, 29 June 2018 (UTC)[reply]
Put it another way, is a general function, and I want a general procedure to move from this to and .--Leon (talk) 19:21, 29 June 2018 (UTC)[reply]
Then there is unlikely to be a general closed-form expression as the resulting differential equation is highly nonlinear, and the existence of , needed to expand the second equation's left hand side, is extremely dependent on the location of the roots of .--Jasper Deng (talk) 19:31, 29 June 2018 (UTC)[reply]
Okay, here's another system that might help me.
What about , ? Is this "solvable" in some sense?--Leon (talk) 19:57, 29 June 2018 (UTC)[reply]
Perhaps it will help if I give some context: suppose I have a phase portrait for an autonomous mechanical system. and are the phase space coordinates, and the trajectory that starts at is entirely determined by the function . How would I even set up the problem for finding the time between two points on a trajectory?
The idea of my function is as follows. By differentiating energy with respect to velocity , I get momentum . I wanted something similar such that differentiating time with respect to would give position . Can this be done?--Leon (talk) 22:53, 29 June 2018 (UTC)[reply]
Okay, maybe you want to look at the material derivative, which is the correct way to use the total derivative with respect to time.--Jasper Deng (talk) 02:02, 30 June 2018 (UTC)[reply]

June 30

Facebook page likes

My apologies, it's been about 20 years since I took algebra and I was wondering if someone would be kind enough to do the math for me? I run a Facebook page that has 1,500 likes and is growing at 1.8% per week. My competitor's Facebook page has 2,900 likes and is growing at 0.2% per week. Assuming these growth rates stay the same, how long will it take my Facebook page to surpass my competitor's page in likes? A Quest For Knowledge (talk) 12:45, 30 June 2018 (UTC)[reply]

This is a Geometric progression, whose sum can be found with the formula a(1 − rⁿ)/(1 − r), and you want to find the smallest n where your progression is larger than their progression. You have a = 1500 and r = 1.018, and your competitor has a = 2900 and r = 1.002. Plugging these numbers into a calculator gives an answer of 76 weeks (75 weeks and 1 day to be precise). IffyChat -- 14:48, 30 June 2018 (UTC)[reply]
Um, actually, what you want here is just the progression/sequence, not the series/sum. There's no need for the summation formula. You want the solution to a₁·r₁ⁿ = a₂·r₂ⁿ, meaning n = log(a₂/a₁) / log(r₁/r₂). In our case we get n = log(2900/1500) / log(1.018/1.002) ≈ 41.6 weeks. -- Meni Rosenfeld (talk) 20:24, 30 June 2018 (UTC)[reply]
If you assume continuous growth you get n = ln(a₂/a₁)/(0.018 − 0.002), slightly simpler. I get 41.2 weeks with this, which is probably close enough, and the like count would be 3150. One thing to keep in mind with this kind of problem is that logistic growth is often a more accurate model than exponential growth long term in real life. --RDBury (talk) 21:08, 30 June 2018 (UTC)[reply]
Yep, you're both right, I edited my comment for accuracy. IffyChat -- 12:16, 1 July 2018 (UTC)[reply]
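For reference, both calculations above in a few lines of Python (the inputs are just the numbers from the question):

  import math

  a1, r1 = 1500, 1.018   # your page: current likes, weekly growth factor
  a2, r2 = 2900, 1.002   # competitor's page

  # Week-by-week (discrete) crossover: solve a1*r1^n = a2*r2^n for n
  n_discrete = math.log(a2 / a1) / math.log(r1 / r2)

  # Continuous-growth approximation: a1*e^(0.018*t) = a2*e^(0.002*t)
  n_continuous = math.log(a2 / a1) / (0.018 - 0.002)

  print(round(n_discrete, 1), round(n_continuous, 1))   # about 41.6 and 41.2 weeks
  print(round(a1 * r1 ** math.ceil(n_discrete)))        # likes at the first whole week past the crossover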

Is the following proposition used somewhere as an axiom (for defining identity)?

HOTmag (talk) 20:46, 30 June 2018 (UTC)[reply]

See identity of indiscernibles. --Trovatore (talk) 20:32, 2 July 2018 (UTC)[reply]
Which to me seems incompatible with the Catholic belief in Transubstantiation. ;-) Dmcq (talk) 21:34, 2 July 2018 (UTC)[reply]
I know you're sort of joking, but really it doesn't seem to me to have much to do with transubstantiation. The unblessed wafer and the transubstantiated host do differ in at least one property, namely that of having been blessed, so they don't have to be identical. --Trovatore (talk) 22:32, 2 July 2018 (UTC)[reply]

"Normally distributed and uncorrelated does not imply independent" in practice

The examples in our article Normally distributed and uncorrelated does not imply independent are contrived ones that don't seem to me similar to any joint distributions that could occur by accident (i.e. without someone cherry-picking the definition of a variable -- if not invoking an otherwise-useless synthetic variable -- to create an example). Are there any real-world examples of data sets where the null hypotheses of normally distributed and uncorrelated variables cannot be rejected, but the stronger null hypothesis of independence between the variables can, where the variable definitions served some applied purpose other than demonstrating the possibility of dependence without correlation? NeonMerlin 23:40, 30 June 2018 (UTC)[reply]

July 1

"If you disagree, you're probably both wrong in the same direction" in ensemble learning

I've tried to ask this question before, but I don't think I worded it clearly, so I'm trying again. What ensemble learning models, if any, could make inferences that would translate in English to ones like the following?

  • "Model A says you probably voted for Donald Trump, and Model B says you voted for Hillary Clinton. But if you were a Trump voter or a Clinton voter, then the training data says both models would almost certainly agree about that; and most of the voters whom A and B disagree about in our training data, actually voted for Gary Johnson."
  • "Estimator A says X is 50 ± 2. Estimator B says X is 60 ± 3. But when their estimates are incompatible, they're usually both too low, and in this case the ensemble estimate is 75 ± 10."

NeonMerlin 00:17, 1 July 2018 (UTC)[reply]

Good question. I'm not too sure of the answer.
You could assign a prior probability P(X) to each of models A and B, where X denotes A or B.
Then update the probabilities using Bayes's theorem
Then calculate a probability that you voted for Johnson (J), say, weighting the prediction of each model with the probability of each model: P(J) = P(J|A)·P(A) + P(J|B)·P(B). This could be regarded as an "ensemble model".
Now suppose, for example, and . Neither model predicts voting for Johnson. But it's possible the ensemble model does... maybe. I don't know. I'm not sure how the ensemble model would converge as you gather more data points; it's worth investigating at some point. I apologise if my answer is useless. PeterPresent (talk) 06:03, 2 July 2018 (UTC)[reply]
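To make that weighting concrete, a tiny Python sketch of the P(J) = P(J|A)·P(A) + P(J|B)·P(B) combination (all probabilities here are invented purely for illustration):

  # Per-model predicted probabilities over candidates for one voter (made-up numbers)
  p_given_A = {"Trump": 0.55, "Clinton": 0.05, "Johnson": 0.40}
  p_given_B = {"Trump": 0.05, "Clinton": 0.50, "Johnson": 0.45}

  # Posterior weights P(A), P(B) for the two models, e.g. after a Bayesian update
  p_A, p_B = 0.5, 0.5

  # Ensemble probability for each candidate: P(c) = P(c|A)*P(A) + P(c|B)*P(B)
  ensemble = {c: p_given_A[c] * p_A + p_given_B[c] * p_B for c in p_given_A}
  print(ensemble)
  # Model A alone favours Trump and model B alone favours Clinton,
  # but the combined distribution puts Johnson on top (0.425).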

July 3

Fermat–Catalan conjecture and abc conjecture

Does the Fermat–Catalan conjecture follow from the abc conjecture? The Fermat–Catalan conjecture article said so, but an editor took that out. The abc conjecture article also says it, with a reference that I can't check. The editor has taken it out of the Fermat–Catalan conjecture article again. What is correct? Bubba73 You talkin' to me? 02:22, 3 July 2018 (UTC)[reply]