
Wikipedia:Reference desk/Mathematics


Revision as of 04:54, 6 February 2024

Welcome to the mathematics section of the Wikipedia reference desk.
Want a faster answer?

Main page: Help searching Wikipedia

How can I get my question answered?

  • Select the section of the desk that best fits the general topic of your question (see the navigation column to the right).
  • Post your question to only one section, providing a short header that gives the topic of your question.
  • Type '~~~~' (that is, four tilde characters) at the end – this signs and dates your contribution so we know who wrote what and when.
  • Don't post personal contact information – it will be removed. Any answers will be provided here.
  • Please be as specific as possible, and include all relevant context – the usefulness of answers may depend on the context.
  • Note:
    • We don't answer (and may remove) questions that require medical diagnosis or legal advice.
    • We don't answer requests for opinions, predictions or debate.
    • We don't do your homework for you, though we'll help you past the stuck point.
    • We don't conduct original research or provide a free source of ideas, but we'll help you find information you need.



How do I answer a question?

Main page: Wikipedia:Reference desk/Guidelines

  • The best answers address the question directly, and back up facts with wikilinks and links to sources. Do not edit others' comments and do not give any medical or legal advice.


January 25

Testing a null hypothesis for tossing N coins

A question from Twitter [1], adapting the language slightly.

Suppose we have N coins, and we have a null hypothesis that gives coin $i$ a probability $p_i$ of coming up heads and $1 - p_i$ of tails.

We toss the N coins. From the sequence of results, can we calculate a confidence level for our null hypothesis?

If we were tossing the same coin N times, we could calculate a Binomial distribution for the total number of heads, and read off the probability of being nearer the expectation value than our observed number.

Can we do anything similar (but based on the actual sequence observed, rather than the total number) if the probabilities $p_i$ are not all the same? Jheald (talk) 01:39, 25 January 2024 (UTC)[reply]

Let the random variable $X_i$ represent the outcome of the toss of coin $i$: $X_i = 1$ if it comes up heads and $X_i = 0$ if tails. Define the random variable $S$ by $S = \sum_{i=1}^{N} X_i.$
The expected value and variance of $S$ under the null hypothesis equal $\mu = \sum_i p_i$ and $\sigma^2 = \sum_i p_i(1 - p_i).$
For a sample obtained by a large number $N$ of independent tosses, the sample mean $S/N$ should then be approximately normally distributed with mean $\mu/N$ and variance $\sigma^2/N^2.$ Using a Monte Carlo method, a good approximation of the expected distribution under the null hypothesis for smaller values of $N$ may be obtained, giving a test with more power.
There may be better approaches, for example by letting the aggregate random variable be a weighted sum of the $X_i$ with weights depending on the probabilities $p_i$. I have not attempted to investigate this.  --Lambiam 12:44, 25 January 2024 (UTC)[reply]
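A minimal sketch of the test just described (an editorial illustration, not part of the original reply): the head-probabilities and observed toss are made-up example values, and with so few coins the normal approximation is rough; for small N the Monte Carlo route mentioned above is preferable.

    from math import sqrt, erf

    p = [0.6, 0.7, 0.55, 0.8, 0.65, 0.5, 0.75, 0.6]   # null-hypothesis head-probabilities (examples)
    observed = [1, 1, 0, 1, 1, 1, 1, 0]               # 1 = heads

    S = sum(observed)                                 # test statistic: number of heads
    mu = sum(p)                                       # E[S] under the null hypothesis
    sigma = sqrt(sum(pi * (1 - pi) for pi in p))      # sqrt(Var[S])

    z = (S - mu) / sigma
    Phi = lambda x: 0.5 * (1 + erf(x / sqrt(2)))      # standard normal CDF
    p_value = 2 * (1 - Phi(abs(z)))                   # two-sided normal-approximation p-value
    print(f"S={S}, mu={mu:.2f}, sigma={sigma:.2f}, z={z:.2f}, p~{p_value:.3f}")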
Thanks. I think that makes a lot of sense, and I like how it falls back to exactly the test with the binomial distribution that one would do if the $p_i$ are all the same. I've fed it back to the original twitter thread here: [2].
One interesting wrinkle is using the number of observed events where the higher-probability outcome occurred as the test statistic $S$, rather than just the number of heads -- as presumably this should have a slightly smaller variance?
Thank you so much again for thinking about this -- this answer makes a lot of sense to me. Jheald (talk) 14:21, 26 January 2024 (UTC)[reply]
Here is another approach. Let us identify heads with $1$ and tails with $0,$ so that each toss corresponds with a vertex of the solid unit hypercube $[0,1]^N.$ The expected value under the null hypothesis of the arithmetical average, taken coordinate-wise, of a number of tosses, is the point $(p_1, \ldots, p_N).$ For a large sample, the probability distribution of this average approximates a multivariate normal distribution in which all components are independent. The iso-density hypersurfaces of this distribution approximate a hyperellipsoid whose axes are aligned with the hypercube; it can be turned into a hypersphere by scaling the $i$-th coordinate up by a factor of $1/\sqrt{p_i(1-p_i)}.$
Then I expect that a good test statistic is given by the Euclidean distance between the point corresponding to the expected value and the observed arithmetical average of the sample, after scaling. I am confident that the distribution of this statistic is well known and not difficult to find, but I'd have to look it up. As before, this can be replaced by a Monte Carlo approach.  --Lambiam 13:57, 25 January 2024 (UTC)[reply]
@Lambiam: Not quite so sure about this one. When you talk of the "arithmetical average" it sounds like you're thinking of the case where you can perform multiple tosses of individual coin $i$, and also have enough data to look at conditional dependence, or covariance, between toss $i$ and toss $i'$, whereas my question was more motivated by what you can say when each coin is tossed only once (and whether one can find evidence of skill in such a situation).
I also get nervous about bleaching distributions by applying scalings, and when that does or doesn't make sense; so I would need to think a bit more about that bit. Jheald (talk) 14:21, 26 January 2024 (UTC)[reply]
I did not realize there was just a single toss. (I use "toss" to mean a joint toss of all coins.) Then the "arithmetical average" is just the outcome of that one toss. If $x = (x_1, \ldots, x_N)$ is the outcome of a toss, you might try the log of the likelihood under the null hypothesis, $L(x) = \sum_{i=1}^{N} \bigl( x_i \log p_i + (1 - x_i) \log (1 - p_i) \bigr).$
The distribution of $L$ under the null hypothesis can be approximated à la Monte Carlo. This won't do anything for coins that are null-hypothesized to be fair, but, clearly, these won't give one any usable information in just one single toss. Unless the $p_i$ tend to be somewhat extreme, $N$ needs to be very large for one to be able to make any plausible determination, no matter how well-crafted and powerful the test statistic.  --Lambiam 15:02, 26 January 2024 (UTC)[reply]
@Lambiam: Interestingly, FWIW this in fact was also my first knee-jerk response [3]: do the probabilities appear to be well-calibrated, in the sense that they appear to be able to code the results actually obtained with about the expected message-length.
But then I spotted, as you note above, that this approach isn't capable of telling us anything if every $p_i = \tfrac12$. Whereas your first approach can, even in that case.
So just the observed message-length coded according to the $p_i$ can't be the whole story. Jheald (talk) 17:49, 26 January 2024 (UTC)[reply]
As also noted below, there is the issue of what it is you are testing against, the set of alternative hypotheses. This is already obvious in the distinction between one-sided and two-sided tests. If someone is suspected of faking their experimental data, sleuths might choose to examine if the reported data is too good to be believable. If every coin is supposed to be fair, and the report says exactly $N/2$ heads were observed and an equal number of tails, this might be interpreted as confirming the suspicion.  --Lambiam 19:17, 26 January 2024 (UTC)[reply]
You need some sort of alternatives to test a hypothesis. If the actual probabilities are limited to p=0.5 and p=0.5001 it would take a lot of testing to distinguish the two, and early results indicating p=0.3 don't provide much support for either actual probability. NadVolum (talk) 23:37, 25 January 2024 (UTC)[reply]
@NadVolum: No. Certainly in statistics you can compare the weight of evidence for different hypotheses. But that is not the only thing you can do. You can also ask questions such as : is my data reasonably consistent with what I would expect to see under Hypothesis H0 ? Which is what Lambiam does above.
On twitter Adam Kucharski took a slightly different approach [4], using the additional information that in the null hypothesis the probabilities could be seen as coming from an urn problem, so $p_i = r_i/(r_i + w_i),$ where $r_i$ and $w_i$ are the number of red balls and the number of white balls in the urn at stage $i$ respectively.
Kucharski suggested looking for evidence of deviations from H0 by considering the alternative model $p_i = A\,r_i/(A\,r_i + w_i),$ where the skill factor A allows some deviation from H0 (thus introducing the alternatives you were wishing for), and looking at what sort of distribution you get for A. For a given set [5] of quite limited data, he was able to calculate an estimate of A = 1.3 with a 95% confidence limit of 0.5 to 1.9 -- which he summarised as "a bit better [than random], but can't be sure". Jheald (talk) 14:21, 26 January 2024 (UTC)[reply]
That's assuming a uniform prior distribution and applying Bayes Theorem to test against that. A very good way of doing things but even the uniform prior to assume can sometimes be a bit contentious, see Bertrand paradox (probability). NadVolum (talk) 14:37, 26 January 2024 (UTC)[reply]
Indeed. Always need to think about the effect of priors (sometimes implicit) and whether they are reasonable. If one didn't mind a little more complexity, something might be said for tweaking his $p_i = A\,r_i/(A\,r_i + w_i)$ to be $p_i = 2^A r_i/(2^A r_i + w_i),$ so that a flat prior on A would correspond to treating equally a-priori a down-weighting or an up-weighting of the number of red balls by a factor of 2; with $p_i$ remaining well-defined over the full range of A from -inf to +inf. Jheald (talk) 14:52, 26 January 2024 (UTC)[reply]
You can always calculate a p-value by brute force. Enumerate all of the $2^N$ possible outcomes and calculate the probability for each one by multiplying the appropriate factors of $p_i$ and $1 - p_i$. Sort the list in order of increasing probability, from least probable to most probable (under the specified null hypothesis). When you toss the coins, look up your observed outcome in the table, and take the sum of the probabilities of all the entries up to that one. That sum is the p-value by definition. Then the math problem is to find a simpler way to compute the same cumulative sum, either exactly, or given some suitable approximation. --Amble (talk) 18:20, 26 January 2024 (UTC)[reply]
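A direct sketch of this brute-force procedure, added for concreteness (the p_i values below are arbitrary placeholders, and the approach is only feasible for small N):

    from itertools import product
    from math import prod

    p = [0.5, 0.7, 0.3, 0.9, 0.5]          # null-hypothesis head-probabilities (examples)

    def prob(outcome):                      # outcome: tuple of 0/1, 1 = heads
        return prod(pi if o else 1 - pi for pi, o in zip(p, outcome))

    cum, pvalue = 0.0, {}
    for out in sorted(product((0, 1), repeat=len(p)), key=prob):
        cum += prob(out)                    # running sum, least to most probable
        pvalue[out] = cum                   # P(an outcome at most this likely)

    print(pvalue[(1, 1, 0, 1, 1)])          # p-value of one observed outcome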
The statistic $L$ I introduced above is the logarithm of this probability. If $N$ is too large to make this brute-force approach feasible (you don't want to sort $2^{60}$ values), and you have decided in advance on the significance level (if only to avoid being seduced to commit post hoc analysis), you can simply estimate $\Pr(L(X) \le L(x)),$ in which $x$ is the actual toss outcome and $X$ is a random variable representing a toss under the null hypothesis, as follows. Generate a large number of tosses using a good pseudo-random number generator and count the fraction for which the computed value of $L(x)$ is at least $L(X).$ If that fraction is too low, given the chosen significance level, the null hypothesis can be rejected.  --Lambiam 18:55, 26 January 2024 (UTC)[reply]
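The same idea as a Monte Carlo sketch for larger N, following the recipe just given (again with placeholder p_i; the standard library generator is fine for illustration, though a high-quality generator is advisable in earnest use):

    import random
    from math import log

    random.seed(1)
    p = [random.uniform(0.1, 0.9) for _ in range(60)]   # N = 60 coins; placeholder probabilities

    def loglik(outcome):                                # the statistic L(x) discussed above
        return sum(log(pi if o else 1 - pi) for pi, o in zip(p, outcome))

    def toss():                                         # one joint toss under the null hypothesis
        return tuple(1 if random.random() < pi else 0 for pi in p)

    x = toss()                                          # stand-in for the actually observed data
    Lx = loglik(x)
    trials = 100_000
    frac = sum(loglik(toss()) <= Lx for _ in range(trials)) / trials
    print(f"estimated p-value: {frac:.4f}")             # reject if below the chosen significance level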
(ec) @Amble: Thanks. Useful perspective.
Equivalently, if we take the logarithm of those probabilities, that essentially takes us to the "message length" discussion above. And we can apply either a one-sided test (what are the chances of an outcome less likely than this / a message-length longer than this), or a two-sided test (what are the chances of an outcome further from the typical, either way, than this).
So yes, that's a very useful observation.
But, as per the "message length" discussion above, it wouldn't give us any help in a case where every $p_i = \tfrac12$ -- whereas we should be able to detect deviation in such a case, if 'heads' (or red balls) are coming up more often than expected... Jheald (talk) 18:58, 26 January 2024 (UTC)[reply]
If your null model is that every $p_i = \tfrac12$, then (under that model) every possible outcome is equally likely, and equally consistent with the model. But that's just the probability (or log-likelihood, p-value, etc.) relative to the null model alone, without considering any specific alternatives. I think that's the original question here, but there are other questions we may want to ask. We're used to thinking of certain types of plausible alternative models like "all the coins are biased and really have some common $p \neq \tfrac12$." Different outcomes will have different p-values under that alternative model, so the comparison can give you information. And you can construct a random variable like "total number of tails" that is very sensitive to this kind of bias. But it will be completely insensitive to other kinds of alternative models like "even coins always give heads, odd coins always give tails". So a choice between two models given the data is different from a test of the null model that's intended to be independent of any assumption about the possible alternative models. --Amble (talk) 19:39, 26 January 2024 (UTC)[reply]
An interesting case to consider is one where the null model is independent, unbiased coins (all $p_i = \tfrac12$) and the alternative model is that a human is making up a sequence of H and T while trying to make them look as random as possible. --Amble (talk) 19:54, 26 January 2024 (UTC)[reply]
This is very similar to the case of suspicious data above where the outcome is exactly evenly divided between heads and tails. A fraudulent scientist just needs to use a good random generator. The likelihood of alarm being triggered is then just the same as that of false alarm for a scientist who laboriously tosses and records truly fair coins. And if they know the fraud test to be applied, they can keep generating data sets until one passes the test.  --Lambiam 21:30, 26 January 2024 (UTC)[reply]
I mean a task where the human has to make up the sequence themselves, without being able to use an RNG, as in a random item generation test. [6] --Amble (talk) 21:46, 26 January 2024 (UTC)[reply]

January 28

Riemann Hypothesis

Consider the non-trivial zeros of the Riemann zeta function, denoted by ρ=β+iγ, where β and γ are real and imaginary parts, respectively. The Riemann Hypothesis posits that all non-trivial zeros lie on the critical line β=21​.

Now, the question is: Assuming the Riemann Hypothesis is true, prove that there are infinitely many prime numbers. Harvici (talk) 13:24, 28 January 2024 (UTC)[reply]

I think you mean β=1/2, not β=21. The infinity of primes doesn't depend on the Riemann Hypothesis. See Euclid's theorem. --Amble (talk) 17:03, 28 January 2024 (UTC)[reply]
Yeah, it's β=1/2, and the infinity of primes doesn't depend on it, but many papers and researchers show that the existence of infinitely many zeros on the critical line implies the existence of infinitely many prime numbers. Harvici (talk) 05:39, 29 January 2024 (UTC)[reply]
OK, but it has been proven that there are infinitely many zeros on the critical line, independent of the Riemann hypothesis being true. See Riemann_hypothesis#Zeros_on_the_critical_line. There are interesting connections between the primes and the zeta function whether or not the hypothesis is true. --Amble (talk) 17:06, 29 January 2024 (UTC)[reply]
Euler's proof in that article may be what you're thinking of, though it doesn't require any of the extra machinery of the Riemann Hypothesis. A particularly nice bit there is that one can then show the divergence of the sum of the reciprocals of the primes. NadVolum (talk) 17:18, 28 January 2024 (UTC)[reply]
Right, and there's also a section "Stronger results" that uses the prime number theorem in one case. So you can do it that way if you want, but it's a lot of extra work if all you want is the infinity of primes. --Amble (talk) 17:32, 28 January 2024 (UTC)[reply]

Inverse problem about scalar multiplication on koblitz elliptic curves (or more exactly the secp256k1)

My problem is: given Q = nP, find the point P, given a 257-bit integer n and the point Q. This is possible on other curves, but Koblitz curves have extra structure and can't be converted to Montgomery or Edwards curves.
P is a fixed point, while n (and thus Q) can vary, leading to several examples using the same unknown P.

So is such an operation possible on the secp256k1 curve given a 257-bit integer n? If yes, how does one actually compute it in full? 2A01:E0A:401:A7C0:D470:71EF:EDBF:3354 (talk) 13:47, 28 January 2024 (UTC)[reply]

The problem you're describing is commonly referred to as the "elliptic curve discrete logarithm problem" (ECDLP), where the goal is to find the point P on the elliptic curve. This problem is assumed to be hard, and the security of many elliptic curve-based cryptographic systems relies on the difficulty of solving the ECDLP.(Here is some work done by the University of Auckland.) Harvici (talk) 13:50, 31 January 2024 (UTC)[reply]
Wrong, ECDLP is finding $n$ given $P$ and $Q$. Here the problem is to find $P$ given $n$ and $Q$. This problem is not as hard as ECDLP. If you know the order $m$ of the curve, then you can find $n' = n^{-1} \bmod m,$ so that $n'n \equiv 1 \pmod{m},$ and then compute $P = n'Q$. --2A0D:6FC2:4C20:B000:C847:374F:66AE:340A (talk) 16:55, 2 February 2024 (UTC)[reply]
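A self-contained sketch of this inversion trick (an editorial illustration, not code from the thread): toy affine-coordinate arithmetic for secp256k1 in Python 3.8+, assuming gcd(n, N) = 1 where N is the group order. A 257-bit n is simply reduced mod N first. A real application would use a vetted library rather than this.

    P_FIELD = 2**256 - 2**32 - 977        # base field prime of secp256k1
    N_ORDER = 0xFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFEBAAEDCE6AF48A03BBFD25E8CD0364141

    def ec_add(A, B):
        """Affine point addition on y^2 = x^3 + 7; None is the point at infinity."""
        if A is None: return B
        if B is None: return A
        (x1, y1), (x2, y2) = A, B
        if x1 == x2 and (y1 + y2) % P_FIELD == 0:
            return None                   # P + (-P) = infinity
        if A == B:
            lam = 3 * x1 * x1 * pow(2 * y1, -1, P_FIELD) % P_FIELD
        else:
            lam = (y2 - y1) * pow(x2 - x1, -1, P_FIELD) % P_FIELD
        x3 = (lam * lam - x1 - x2) % P_FIELD
        return (x3, (lam * (x1 - x3) - y1) % P_FIELD)

    def ec_mul(k, A):
        """Double-and-add scalar multiplication k*A."""
        R = None
        while k:
            if k & 1:
                R = ec_add(R, A)
            A = ec_add(A, A)
            k >>= 1
        return R

    def recover_P(n, Q):
        """Given Q = n*P and a (possibly 257-bit) n, return P = (n^-1 mod N)*Q."""
        k = pow(n % N_ORDER, -1, N_ORDER)     # k*n = 1 (mod N_ORDER)
        return ec_mul(k, Q)

    # Round-trip demo using the standard generator as the "unknown" P:
    G = (0x79BE667EF9DCBBAC55A06295CE870B07029BFCDB2DCE28D959F2815B16F81798,
         0x483ADA7726A3C4655DA4FBFC0E1108A8FD17B448A68554199C47D08FFB10D4B8)
    n = (1 << 256) + 12345                    # a 257-bit scalar
    Q = ec_mul(n % N_ORDER, G)
    assert recover_P(n, Q) == G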

January 29

List of factorizations of 2^n-1

Is there a list of factorizations available of 2^n-1 for n up to about 175? Bubba73 You talkin' to me? 04:00, 29 January 2024 (UTC)[reply]

Funny enough, I found such a list embedded in somebody's Python file on Github. Here ya go. GalacticShoe (talk) 04:38, 29 January 2024 (UTC)[reply]
Thanks a million! It is missing a few, starting at n=193, but I believe that is sufficient for my needs. Bubba73 You talkin' to me? 04:47, 29 January 2024 (UTC)[reply]
For future reference, the missing values of n (up to 256) are 193, 211, 227, 229, 251, 253; attempting to factor them with WolframAlpha shows that indeed the composite numbers that need to be factored are very large. GalacticShoe (talk) 05:18, 29 January 2024 (UTC)[reply]
Yes, apparently they were using their own program to factor them, until they got to a certain size, then they switched to using Wolfram Alpha, which did most of the rest, except for the ones that took too long.
I also found this page with a list up to 263: [7]. It has a dead link for "more data", but you can find it on the Wayback Machine: [8]. This page has some very extensive data products such as [9] with an explanation of the format [10]. For 193 it lists:
     M( 193 )C: 13821503
     M( 193 )C: 61654440233248340616559
     M( 193 )D
The text on the page notes that "the largest prime factor is almost always implied, as some of them are _very_ large". --Amble (talk) 17:31, 29 January 2024 (UTC)[reply]
Thanks, I'll look at that. Bubba73 You talkin' to me? 00:37, 30 January 2024 (UTC)[reply]
See the Cunningham table: [11], also this list 210.244.72.152 (talk) 03:00, 3 February 2024 (UTC)[reply]

Followup remark: I implemented this in a program, to look up these factorizations rather than factor them each time. I did a test and it was 62x faster, so it can do in a minute what would take an hour, etc. Bubba73 You talkin' to me? 10:39, 1 February 2024 (UTC)[reply]
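A sketch of that lookup idea (the two table entries below are just an illustrative stub; a real table would be loaded from one of the lists linked above, with sympy's factorint as a slow fallback for misses):

    from sympy import factorint

    KNOWN = {                       # n -> {prime: exponent} for 2^n - 1
        11: {23: 1, 89: 1},
        12: {3: 2, 5: 1, 7: 1, 13: 1},
    }

    def mersenne_factors(n):
        if n in KNOWN:
            return KNOWN[n]         # fast path: table lookup
        return factorint(2**n - 1)  # slow path: factor it ourselves

    print(mersenne_factors(11))     # table hit: {23: 1, 89: 1}
    print(mersenne_factors(13))     # miss: sympy factors 8191 (prime)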

@Bubba73: http://factordb.com has a database with billions of known prime factors of various numbers. It accepts expressions like 2^253-1. You can also enter 2^n-1 and get a list of factorizations. Composite factors are blue. 2^1277-1 is the smallest without a known factor so the whole number is listed in blue. I don't know whether there are smaller numbers with a partial but not full factorization. PrimeHunter (talk) 17:59, 1 February 2024 (UTC)[reply]
Thanks, I didn't know about that. I needed a table of complete factorizations for four types of numbers. I got what I need with my own program on three of them, but 2^n-1 bogged down about n=139. But I got it up to n=192 from the source somewhere above, which I think will be sufficient. But if it turns out that I need more, I can use this. Bubba73 You talkin' to me? 20:53, 1 February 2024 (UTC)[reply]

@PrimeHunter: At factordb.com, can you enter a series and loop on that? That is, if $S_n$ is the sum of a series up to $n$, can you get the factors of $S_n$, for n = 1 to x? Bubba73 You talkin' to me? 05:31, 3 February 2024 (UTC)[reply]

@Bubba73: I don't think so. PrimeHunter (talk) 10:41, 3 February 2024 (UTC)[reply]

January 30

How to disprove the claim that interest in banking will cause harm to at least one citizen?

A similar claim was used by one interest free banking supporter to denounce interest in banking:

  1. In a new country, there are 10 rulers and 10 subjects.
  2. The country's bank has 1,000,000 ¤
  3. Each subject has to buy a house, so each subject borrows 100,000 ¤ at an interest rate of 10%.
  4. After one month, each subject has to pay the rulers 110,000 ¤, so the rulers will get 1,100,000 ¤, thus earning 100,000 ¤.
  5. Given that the 100,000 ¤ of interest that the subjects have to pay does not exist in the monetary system, they will have to borrow money from each other to pay it; thus their debt never ends and at least one of them won't be able to pay, as in the game of musical chairs.

How to disprove the claim that interest in banking will cause harm to at least one citizen?

Thanks. 2A10:8012:3:DE50:BDE7:2606:BEB6:CFDA (talk) 13:27, 30 January 2024 (UTC)[reply]

See Leverage (finance). They loan out more than they get. NadVolum (talk) 16:38, 30 January 2024 (UTC)[reply]
Is this a mathematical question? What is true in one contrived situation may be false in another situation. To make the question even more contrived, consider a frigid country with just two people, one a king who runs a real estate company and a bank, the other a peon. The homeless peon rents a hut from Regal Estates Ltd. for shelter, for 100,000 ¤ per month, to be paid in advance. For this he takes a loan from the Royal Bank. At the end of the month he needs to repay the bank 110,000 ¤ but he doesn't have any money. Does this prove that charging rent for shelter will cause harm to at least one citizen? No, it doesn't. Does this prove anything at all, except that life is sometimes unfair for fictional peons? No, so there is nothing to be disproved.  --Lambiam 17:07, 30 January 2024 (UTC)[reply]
User:Lambiam, I don't get your argument. How could the peon pay the interest debt? And do you accept the stance that at least one of the 10 subjects will be unfortunate? 2A10:8012:3:DE50:EDB4:DDED:F594:D4D5 (talk) 22:02, 30 January 2024 (UTC)[reply]
I said he doesn't have any money, so there is no way for him to pay the amount due. Your scenario is not very detailed but appears to assume an invariant total amount of money in the economy, so if you equate fortune with the amount of money one possesses, it is a zero-sum game. Either no one gains (but it appears that the rich are getting richer) or someone loses.  --Lambiam 22:40, 30 January 2024 (UTC)[reply]
Not economically aware so this may be a silly question on my part, but doesn't this normally happen in conjunction with the country printing out more money? GalacticShoe (talk) 17:11, 30 January 2024 (UTC)[reply]
There should be a reason to distribute the money fairly; how would that be done in such a situation? Also, the rulers could delete the debt.
Well I mean, yes, in this particular situation the rulers could delete the debt, and in fact it would make sense to, given that there are only 10 people. But as other people have mentioned this is sort of a contrived scenario. We're to assume that for some reason the rulers arbitrarily decide to have interest and increase the amount owed despite there being no incentive for it, which is oversimplified from the real world. That being said, if the rulers wanted for some strange reason to have interest but also print out money for people to pay off their debt, they could print out exactly enough money to cover each person's debt to make it fair. GalacticShoe (talk) 02:13, 31 January 2024 (UTC)[reply]
The rulers' incentive is to become richer in a "fair" way, which in their understanding is banking interest. The rulers may say to the subjects: "we demand this interest money, so you should give us a reason to print it and give it to you". 2A10:8012:3:DE50:EDB4:DDED:F594:D4D5 (talk) 06:59, 31 January 2024 (UTC)[reply]
If you assume a ruler who is interested in extorting money from the populace then this might make more sense, but you still only have 10 people who are put into arbitrary debt over some amount of money that has no intrinsic value, so that they are supposedly motivated to work to generate more money despite the fact that said money is not backed by anything. This might work as a simplification for an actual economy if you had more people, a more complicated system, a money system actually backed by some real value, etc. etc. but in this case it really sounds like the 10 people have no incentive to do anything of the sort. GalacticShoe (talk) 07:25, 31 January 2024 (UTC)[reply]
Well, I think that we should be able to create some general yet minimal model of the current common interest based banking of humanity. Perhaps the two people example, ruler (king) and subject (peon) is better, but then the peon has no other person to borrow money from unlike the ten subjects borrowing one from each other. 2A10:8012:3:DE50:EDB4:DDED:F594:D4D5 (talk) 07:49, 31 January 2024 (UTC)[reply]
Why do you say that the 100,000 does not exist in the monetary system? Where did it go? Surely the purchasers bought the houses from someone, and those sellers now have the money. Also: How do you determine whether it's better to be homeless, or housed but in debt? If it's a nonrecourse debt system, then the loans allowed the subjects to be housed for a month and then bankrupt, instead of homeless the entire time and equally bankrupt. --Amble (talk) 17:55, 30 January 2024 (UTC)[reply]
I didn't say it went anywhere or that they should consider homelessness. What if the rulers themselves built the houses? 2A10:8012:3:DE50:EDB4:DDED:F594:D4D5 (talk) 22:02, 30 January 2024 (UTC)[reply]
If the rulers sold the houses, then they have the money; it still exists in the system; and the premise of point 5 is false. The point about homelessness is that the argument is incomplete. Even if I take all the points as given, it needs to establish what is a harm, or how to balance costs and benefits. --Amble (talk) 22:34, 30 January 2024 (UTC)[reply]
When the rulers sold the houses, the money did not exist in the monetary system; the rulers would have to print it or invent it. Harm in this case would be homelessness or infinite debt. 2A10:8012:3:DE50:EDB4:DDED:F594:D4D5 (talk) 22:51, 30 January 2024 (UTC)[reply]
Sure it exists. It starts in the bank, the bank lends it to the subjects, the subjects pay it to the sellers (who are not specified in the argument but may be the rulers). At every point it exists in the system. So point 5 has a false premise. And the argument itself needs to establish what constitutes a harm and what caused the harm. These are not things that can be waved away or taken for granted, they are serious work that the argument must do but hasn't attempted. --Amble (talk) 23:09, 30 January 2024 (UTC)[reply]
The argument claims that only 1,000,000 exists, nothing more. The sellers are indeed the rulers. I am sure that it is clear to you that generally, debt is a risk and may cause harm. Anyway, the rulers may say to the subjects "we demand this interest money and you should give us a reason to print it and give it to you". 2A10:8012:3:DE50:EDB4:DDED:F594:D4D5 (talk) 06:59, 31 January 2024 (UTC)[reply]
The concepts of credit, interest, and inflation in the sense of modern economies do not apply in your scenario of a static, zero-sum system with fixed capital.
Interest is a hedge against inflation and defaults. If someone defaults on a loan -- say in your government bank (more like royal coffers) scenario -- then that's a loss to the coffers. If the money supply is fixed and the economy is zero-sum, then in your scenario the amount of interest received would probably have to equal the amount of loans lost on default. There's no inflation.
In reality, your time, labor, land, assets, and credit thereof all have measurable and changing monetary value. You add money to the economy by working, creating, repairing, teaching, nurturing, helping, negotiating, etc. Every mutually agreeable transaction is a positive-sum addition to the economy. (Economics questions may be better addressed to Wikipedia:Reference desk/Science in future, btw.) SamuelRiv (talk) 19:24, 30 January 2024 (UTC)[reply]
I talk only about interest, not about inflation. Why isn't interest relevant if the subjects borrow at interest? 2A10:8012:3:DE50:EDB4:DDED:F594:D4D5 (talk) 22:02, 30 January 2024 (UTC)[reply]
From what I'm gathering, if there's no value entering or leaving the system, then asking for interest doesn't make a lot of sense. If you only have a million ¤, and someone demands 1.1 million ¤, then where's that additional 0.1 million ¤ supposed to come from? If you just add that amount of ¤ arbitrarily into the economy, then the amount of value and worth is still the same, only now each ¤ is worth less. As mentioned earlier, even if someone outright demands that these 10 people generate enough "value" to cover that 0.1 million ¤, there's no real incentive to do so with this particularly small system. If you force them to do so by suggesting punitive measures for their "debt", then that's less about interest and more about the system itself forcing people to labor for value. GalacticShoe (talk) 07:32, 31 January 2024 (UTC)[reply]
The concepts of credit, interest, inflation - and leverage and crashes too - have been around since at least the Romans [12]. You can even see quantitative easing at work there! NadVolum (talk) 13:43, 31 January 2024 (UTC)[reply]
Let me try explaining again: the IP assumes a closed zero-sum system. They use this as a model to argue that concepts in economics essentially don't work. But real economic systems are not closed and not zero-sum.
Where does the missing 100k in value come from? From the loans themselves, and then from people doing work to pay the loans. And btw, in a real economy, if the money supply doesn't expand with an expanding economy, the money deflates. (This is what you see with a gold standard, if not enough gold is continuously mined.) SamuelRiv (talk) 14:02, 31 January 2024 (UTC)[reply]
User:SamuelRiv, the model only regards interest, not "concepts in economics". A real economy can be closed; imagine a planet with a single global state, for example; "zero sum" can be a vague term here. Anyway, the missing 100k would have to be printed as payment for some work, wouldn't it? 2A10:8012:3:DE50:B96F:CC69:BBE8:B926 (talk) 22:02, 1 February 2024 (UTC)[reply]
I propose we close this protracted unfruitful discussion. It does not concern a defined mathematical problem; indeed, the issue appears to be lacking in definition in any sense of the term and it is unclear what, if anything, might lead to some form of closure.  --Lambiam 14:07, 31 January 2024 (UTC)[reply]

January 31

Where would this cone be cut?

If a cone with height H and width W is stood up vertically, how would I determine where along the height to cut it so that the two pieces produced as a result are of equal volume (originally said “area” by accident) to each other? Primal Groudon (talk) 19:45, 31 January 2024 (UTC)[reply]

That's gonna depend on whether or not you want to include the circular base as part of the surface area. The non-base portion of a cone of radius $r$ and height $h$ has area $\pi r \sqrt{r^2 + h^2}$. The base naturally has area $\pi r^2$. The nice thing is that if you cut along the height, the leftover conic piece with radius $r'$ and height $h'$ still has the same slope $h/r$. So if you consider a cone with no bottom, halving the lateral area means $\pi r' \sqrt{r'^2 + h'^2} = \tfrac12 \pi r \sqrt{r^2 + h^2},$ and writing $r' = tr,$ $h' = th$ this reduces to $t^2 = \tfrac12.$
In other words, you cut a conic piece with height $1/\sqrt{2}$ that of the original in order to get two pieces of equal area, assuming the cone's base does not matter. If it does matter though, then cutting off the top yields a conic piece with no base whose area must equal half the total area: $\pi r' \sqrt{r'^2 + h'^2} = \tfrac12 \left( \pi r \sqrt{r^2 + h^2} + \pi r^2 \right),$ giving $t = \sqrt{\tfrac12 \left( 1 + \tfrac{r}{\sqrt{r^2 + h^2}} \right)}.$
This time the expression is a lot messier, and it does depend on slope, although again you cut a conic piece with height a constant (up to the slope of the cone) factor times that of the original in order to get two pieces of equal area (which makes sense, since the height you cut at should scale perfectly with the cone itself.) GalacticShoe (talk) 20:32, 31 January 2024 (UTC)[reply]
For halving the lateral surface area, this cut at a fraction $1/\sqrt{2}$ of the height from the apex works not only for cones with circular bases, but for any solid formed by connecting a fixed apex by straight line segments to a piecewise smooth Jordan curve (a non-self-intersecting continuous loop) in a base plane. This includes pyramids, for which that loop is a polygon. The surface obtained by extending the line segments to whole lines is a conical surface.  --Lambiam 21:12, 31 January 2024 (UTC)[reply]
Sorry, I meant volume, but accidentally said area. Primal Groudon (talk) 21:23, 31 January 2024 (UTC)[reply]
Some additional clarification: I'm assuming that the cut is meant to be parallel to the base, otherwise there are many possible cuts that divide it into two equal volumes. If so, then one of the two pieces is similar to the original cone. The volumes of two similar cones are proportional to the cube of one of their dimensions, say height. So you'd want the height of the smaller cone to be $1/\sqrt[3]{2}$ times the height of the original cone. Note that this would work for any cone, not just circular ones. --RDBury (talk) 22:17, 31 January 2024 (UTC)[reply]
For halving the volume cut it at the height times the cube root of a half from the apex. That gives a cone similar to the original and the volume ratio is as the cube of the heights. NadVolum (talk) 22:24, 31 January 2024 (UTC)[reply]
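For anyone who wants to check the volume cut symbolically, here is a small sympy verification (an editorial sketch; the cone is parameterized with its apex at x = 0 and base at x = H):

    import sympy as sp

    x, H, R, c = sp.symbols('x H R c', positive=True)
    radius = R * x / H                                     # cross-section radius grows linearly from the apex
    volume = sp.integrate(sp.pi * radius**2, (x, 0, H))    # whole cone: pi*R^2*H/3
    apex_piece = sp.integrate(sp.pi * radius**2, (x, 0, c * H))
    print(sp.solve(sp.Eq(apex_piece, volume / 2), c))      # [2**(2/3)/2], i.e. 1/cbrt(2) ~ 0.794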

February 1

Fair 20 sided die by corner...

For a 20-sided die, the average of the face values is 10.5. Let each vertex have the value that is the sum of the faces around that corner; since each corner touches 5 faces, the average corner value is 52.5. Is it possible to set up a 20-sided die so that half of the corner values sum to 52 and half sum to 53? (This could be equivalently stated by giving values to the 20 vertices of a dodecahedron and giving the faces the value that is the sum of their corners.) Naraht (talk) 04:21, 1 February 2024 (UTC)[reply]

I don't have a script written up for it, but there is a note I'd like to make that might help find out whether this is the case. In an icosahedron (which I presume you mean by the 20-sided polyhedron), because faces are triangular, if two faces are adjacent, switching their values only affects the values of their two non-adjacent vertices. This means that you could probably simulate some kind of value "flow" between these kinds of vertices in an attempt to equalize their values. GalacticShoe (talk) 06:45, 1 February 2024 (UTC)[reply]
Note that the polyhedron formed from connecting each vertex to the 5 other vertices which are "two triangles away" (i.e. the vertices you can flow from/to by switching face values) is none other than the great icosahedron. GalacticShoe (talk) 06:59, 1 February 2024 (UTC)[reply]
The small stellated dodecahedron has the same edges. —Tamfang (talk) 00:45, 3 February 2024 (UTC)[reply]
A plausible constraint one might consider in an attempt to reduce the search space is to require the sum of the vertex values of each of the 6 pairs of diametrically opposite vertices to equal 105. (This is only – potentially – helpful if there is a solution.)  --Lambiam 11:11, 1 February 2024 (UTC)[reply]
Even more restrictive: require the sum of the face values of each of the 10 pairs of diametrically opposite faces to equal 21. This implies the restriction on sums of vertex values.  --Lambiam 18:34, 1 February 2024 (UTC)[reply]
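Along the lines of the search discussed above, here is a rough hill-climbing sketch (an editorial illustration; it may turn up a labeling if one exists, but cannot prove impossibility). The incidence structure is built from the standard (0, ±1, ±φ) icosahedron coordinates:

    import itertools, random

    PHI = (1 + 5 ** 0.5) / 2
    base = [(0.0, 1.0, PHI), (1.0, PHI, 0.0), (PHI, 0.0, 1.0)]
    verts = [v for b in base
             for v in itertools.product(*[(c, -c) if c else (0.0,) for c in b])]

    def d2(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))

    # With these coordinates edges have squared length exactly 4;
    # the 20 faces are the triples of mutually adjacent vertices.
    faces = [t for t in itertools.combinations(range(12), 3)
             if all(abs(d2(verts[i], verts[j]) - 4) < 1e-9
                    for i, j in itertools.combinations(t, 2))]
    assert len(verts) == 12 and len(faces) == 20

    TARGET = [52] * 6 + [53] * 6            # desired multiset of corner sums

    def cost(labels):
        sums = [0] * 12
        for f, lab in zip(faces, labels):
            for v in f:
                sums[v] += lab
        return sum(abs(a - b) for a, b in zip(sorted(sums), TARGET))

    random.seed(0)
    labels = list(range(1, 21))
    random.shuffle(labels)
    best = cost(labels)
    for _ in range(200_000):
        if best == 0:
            break
        i, j = random.sample(range(20), 2)
        labels[i], labels[j] = labels[j], labels[i]      # try swapping two face values
        c = cost(labels)
        if c <= best:
            best = c
        else:
            labels[i], labels[j] = labels[j], labels[i]  # revert a worsening swap
    print(best, labels if best == 0 else "(no exact labeling found this run)")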
     /\  /\  /\  /\  /\
    / 1\/20\/ 2\/10\/19\
   ---------------------
    \ 5/\11/\15/\12/\ 9/\
     \/16\/ 4\/13\/ 3\/18\
      ---------------------
      \ 8/\14/\ 7/\17/\ 6/
       \/  \/  \/  \/  \/
I looked at a related(?) question: [13] —Tamfang (talk) 20:26, 5 February 2024 (UTC)[reply]

Powers of two

Is it the case that all powers of two will eventually contain a long sequence of zeros at the end? Some calculator software indeed shows this. --40bus (talk) 16:27, 1 February 2024 (UTC)[reply]

No. Numbers ending in 0 are always multiples of 5. All powers of 2 end in 2, 4, 6, or 8. The cycle is 2-4-8-6-2-4-8-6-2-4-8-6-2-4-8-6... Georgia guy (talk) 17:07, 1 February 2024 (UTC)[reply]
@40bus: It sounds like a rounding problem. 2^64 = 18446744073709551616. This may for example be rounded to 1.8446744×10^19, maybe written 1.8446744e+19 if it's written at all and not hidden internally in the software. This rounded value is equal to 18446744000000000000. PrimeHunter (talk) 17:29, 1 February 2024 (UTC)[reply]
It depends though; Python computes all digits for whole numbers, and gives 2^1000 as 1071...(about 300 digits)...9376. --RDBury (talk) 17:46, 1 February 2024 (UTC)[reply]
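A tiny illustration of both behaviours (exact integer arithmetic versus a rounded display; the '%.8g' format here is just a stand-in for an 8-digit calculator readout):

    n = 2 ** 64
    print(n)             # 18446744073709551616 -- exact; the last digit cycles through 2, 4, 8, 6
    print('%.8g' % n)    # 1.8446744e+19 -- rounded to 8 significant digits; written out
                         # in full this is 18446744000000000000, hence the trailing zeros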
In binary, which is the base-2 numeral system, powers of two have a specific pattern.
For example:
2^0=1 (binary: 1)
2^1=2 (binary: 10)
2^2=4 (binary: 100)
2^3=8 (binary: 1000)
2^4=16 (binary: 10000)
So some calculator software may take advantage of this to provide faster computation for powers of two. Harvici (talk) 03:05, 2 February 2024 (UTC)[reply]

February 2

Draw a segment of a cubic function exactly using a cubic Bezier curve

http://en.wikipedia.org/wiki/B%C3%A9zier_curve#Second-order_curve_is_a_parabolic_segment describes how a segment of a parabola or quadratic curve on a Cartesian plane can be drawn exactly with a quadratic Bezier curve.

Is it possible to draw a segment of a cubic curve exactly with one (preferably) or more cubic Bezier curves?

I.e. given f(x) = ax³ + bx² + cx + d, and endpoints (p, f(p)) and (q, f(q)), what are the coordinates of the two control points?

Thanks, cmɢʟeeτaʟκ 15:18, 2 February 2024 (UTC)[reply]

With a cubic Bezier curve $B(t) = (1-t)^3 P_0 + 3(1-t)^2 t\, P_1 + 3(1-t) t^2 P_2 + t^3 P_3,$ in order for you to be able to represent $f$, you need to have it so that $B_y(t) = f(B_x(t))$ for all $t$. I don't specify a domain on $t$ here, but regardless of the domain we can just assume that in order to do so, we need for the coefficients of $B_y(t)$ to match those of $f(B_x(t))$ as polynomials in $t$. We have to solve for four unknowns here, which are the coordinates of $P_1 = (x_1, y_1)$ and $P_2 = (x_2, y_2)$, since $P_0 = (p, f(p))$ and $P_3 = (q, f(q))$.
If you expand $B_x(t)$ into a third-order polynomial, you can notice that $f(B_x(t))$ is a ninth-order polynomial in $t$, while $B_y(t)$ is third-order. The coefficient of the ninth-order term is just that of the third-order term of $B_x(t)$ cubed, which means that said third-order term must be $0$ in order to match coefficients. In other words, $B_x(t)$ is at most quadratic. When this is the case, $f(B_x(t))$ is still sixth-order, with the sixth-order term being the second-order term of $B_x(t)$ cubed. Again, to match coefficients, this means that the second-order term is also $0$, and $B_x(t)$ is at most linear. This time, coefficient matching works, and you can get that $x_1 = \tfrac{2p+q}{3}$ and $x_2 = \tfrac{p+2q}{3}$. In other words, the control points would be equidistantly spaced along the x-axis, which isn't particularly surprising. $B_x(t)$ is naturally equal to $p + (q-p)t$.
The next part is finding the values of $y_1$ and $y_2$. I don't know of any tricks here, I just found the values of $y_1$ and $y_2$ manually. After expanding $B_y(t)$ and $f(p + (q-p)t)$, through direct comparison I found the values $y_1 = f(p) + \tfrac{q-p}{3} f'(p)$ and $y_2 = f(q) - \tfrac{q-p}{3} f'(q).$
Or in other words, the two control points are:
$P_1 = \left( \tfrac{2p+q}{3},\; f(p) + \tfrac{q-p}{3} f'(p) \right)$, $P_2 = \left( \tfrac{p+2q}{3},\; f(q) - \tfrac{q-p}{3} f'(q) \right)$
GalacticShoe (talk) 00:49, 3 February 2024 (UTC)[reply]
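A quick numerical check of these control-point formulas (an editorial sketch with arbitrary test coefficients, not part of the original answer):

    a, b, c, d = 2.0, -3.0, 0.5, 1.0                  # example cubic f(x) = ax^3 + bx^2 + cx + d
    P, Q = -1.0, 2.0                                  # example endpoints

    f  = lambda x: a * x**3 + b * x**2 + c * x + d
    df = lambda x: 3 * a * x**2 + 2 * b * x + c       # f'(x)

    # Endpoints and control points from the formulas above:
    x0, y0 = P, f(P)
    x1, y1 = (2 * P + Q) / 3, f(P) + (Q - P) / 3 * df(P)
    x2, y2 = (P + 2 * Q) / 3, f(Q) - (Q - P) / 3 * df(Q)
    x3, y3 = Q, f(Q)

    def bezier(t, p0, p1, p2, p3):
        s = 1 - t
        return s**3 * p0 + 3 * s**2 * t * p1 + 3 * s * t**2 * p2 + t**3 * p3

    # The Bezier curve should trace y = f(x) exactly:
    for t in [0.0, 0.25, 0.5, 0.75, 1.0]:
        x = bezier(t, x0, x1, x2, x3)
        y = bezier(t, y0, y1, y2, y3)
        assert abs(y - f(x)) < 1e-9, (t, x, y, f(x))
    print("control points reproduce the cubic exactly")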
[Figure: graphs showing the relationship between the roots, and turning, stationary and inflection points of a cubic polynomial, and its first and second derivatives]
Thank you so much, @GalacticShoe: that's exactly what I needed. Cheers, cmɢʟeeτaʟκ 07:56, 3 February 2024 (UTC)[reply]
Glad I could be of help :) GalacticShoe (talk) 08:00, 3 February 2024 (UTC)[reply]
Here's a somewhat different take on the problem giving an equivalent result: Write
$F(t) = A(1-t)^3 + 3B(1-t)^2 t + 3C(1-t)t^2 + Dt^3.$
Then:
$F'(t) = 3\left[ (B-A)(1-t)^2 + 2(C-B)(1-t)t + (D-C)t^2 \right]$
$F''(t) = 6\left[ (A - 2B + C)(1-t) + (B - 2C + D)t \right]$
$F'''(t) = 6(D - 3C + 3B - A)$
Evaluate at t=0 to get
$F(0) = A, \quad F'(0) = 3(B - A), \quad F''(0) = 6(A - 2B + C), \quad F'''(0) = 6(D - 3C + 3B - A).$
You can easily solve for A, B, C, D in terms of F and its derivatives at 0:
$A = F(0)$
$B = F(0) + \tfrac13 F'(0)$
$C = F(0) + \tfrac23 F'(0) + \tfrac16 F''(0)$
$D = F(0) + F'(0) + \tfrac12 F''(0) + \tfrac16 F'''(0)$
Note that the last expression is the third order Maclaurin approximation for F(1), and this is exact since F is a cubic polynomial. If
$(x(t), y(t))$
is any cubic parametric curve then these formulas generate the endpoints and control points from t=0 to t=1. In particular, if the curve is y=f(x) from x=P to x=Q, where f is a cubic polynomial, take
$x(t) = P + (Q-P)t, \qquad y(t) = f(P + (Q-P)t).$
Then:
$y(0) = f(P), \quad y'(0) = (Q-P)f'(P), \quad y''(0) = (Q-P)^2 f''(P), \quad y'''(0) = (Q-P)^3 f'''(P).$
This gives the endpoints and control points as:
$\left( P,\; f(P) \right)$
$\left( \tfrac{2P+Q}{3},\; f(P) + \tfrac{Q-P}{3} f'(P) \right)$
$\left( \tfrac{P+2Q}{3},\; f(P) + \tfrac{2(Q-P)}{3} f'(P) + \tfrac{(Q-P)^2}{6} f''(P) \right)$
$\left( Q,\; f(P) + (Q-P) f'(P) + \tfrac{(Q-P)^2}{2} f''(P) + \tfrac{(Q-P)^3}{6} f'''(P) \right)$
The expression for the y-coordinate at Q is the Taylor series approximation of f(Q), and it's exact since f is a cubic polynomial. I don't think it would be hard to generalize these formulas to nth order Bezier curves, and you might say that the expressions for the y coordinates are actually generalizations of Taylor approximations, since the last one is, in fact, the actual Taylor approximation. Since the formulas use derivatives instead of coefficients, you can use them to generate accurate Bezier approximations for any parametric curve of sufficient smoothness. --RDBury (talk) 11:08, 4 February 2024 (UTC)[reply]
Thank you very much for the general solution, @RDBury: cmɢʟeeτaʟκ 16:37, 4 February 2024 (UTC)[reply]

February 3

About lucky numbers

If x is a lucky number and n is a natural number, must there be infinitely many lucky numbers ≡ x (mod n)? Is there an analog of Dirichlet's theorem on arithmetic progressions for the lucky numbers rather than the prime numbers? 210.244.72.152 (talk) 02:57, 3 February 2024 (UTC)[reply]

Funny enough, while looking for information on lucky number congruences online, I found a Math StackExchange answer that mentioned this very question. Unfortunately, as it stands, it appears to be as yet unsolved. GalacticShoe (talk) 05:48, 3 February 2024 (UTC)[reply]
Note that the author of this answer (as mentioned within the answer itself) has a paper (archived here) where it is proven that each iteration of the lucky sieve removes certain sets of congruences. Although this is not enough to show that some congruences are never touched and thus infinite, it does at least show that there is some method to the lucky sieve madness. GalacticShoe (talk) 05:53, 3 February 2024 (UTC)[reply]
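For experimenting with the question empirically (a tally, not a proof), here is a quick sketch of the lucky sieve and a residue count:

    from collections import Counter

    def lucky_numbers(limit):
        """Lucky numbers below limit, by the standard sieve."""
        nums = list(range(1, limit, 2))            # first pass keeps the odd numbers
        i = 1
        while i < len(nums) and nums[i] <= len(nums):
            step = nums[i]
            del nums[step - 1::step]               # delete every step-th survivor
            i += 1
        return nums

    lucky = lucky_numbers(100000)
    n = 8
    print(Counter(l % n for l in lucky))           # how the residue classes mod n fill out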

Always nice...

When you try to put together a question here, spend a minute or two trying to describe it, write out the values that you've figured out in the sequence, and then find it in an OEIS search. (A006561: Number of intersections of diagonals in the interior of a regular n-gon.) Sort of surprised that the answer is as clean as it is for odd numbers, but as dirty as it is for even. Naraht (talk) 03:19, 3 February 2024 (UTC)[reply]
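As a sketch of that workflow, here is a brute-force computation of the first few terms (an editorial illustration using floating point with rounding for deduplication; fine for small n, not a rigorous count):

    from itertools import combinations
    from math import cos, sin, pi

    def interior_intersections(n):
        pts = [(cos(2 * pi * k / n), sin(2 * pi * k / n)) for k in range(n)]
        diags = [(a, b) for a, b in combinations(range(n), 2)
                 if (b - a) % n not in (1, n - 1)]          # skip the polygon's edges
        found = set()
        for (a, b), (c, d) in combinations(diags, 2):
            if len({a, b, c, d}) < 4:
                continue                                    # shared endpoint: meets at a vertex
            (x1, y1), (x2, y2) = pts[a], pts[b]
            (x3, y3), (x4, y4) = pts[c], pts[d]
            den = (x1 - x2) * (y3 - y4) - (y1 - y2) * (x3 - x4)
            if abs(den) < 1e-12:
                continue                                    # parallel diagonals
            t = ((x1 - x3) * (y3 - y4) - (y1 - y3) * (x3 - x4)) / den
            u = ((x1 - x3) * (y1 - y2) - (y1 - y3) * (x1 - x2)) / den
            if 1e-9 < t < 1 - 1e-9 and 1e-9 < u < 1 - 1e-9: # strictly interior crossing
                pnt = (x1 + t * (x2 - x1), y1 + t * (y2 - y1))
                found.add((round(pnt[0], 6), round(pnt[1], 6)))
        return len(found)

    print([interior_intersections(n) for n in range(3, 10)])  # 0, 1, 5, 13, 35, 49, 126 (A006561)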

Agreed, although conversely I find it rather disappointing when I generate the first few terms and there is no matching sequence. On the one hand, uncharted waters are cool, but on the other hand, they're kinda scary too. GalacticShoe (talk) 05:17, 3 February 2024 (UTC)[reply]
If it is an interesting sequence, submit it to OEIS. Bubba73 You talkin' to me? 05:47, 3 February 2024 (UTC)[reply]
Good idea, although unfortunately I highly doubt that any of the sequences I come up with are interesting or particularly novel. GalacticShoe (talk) 05:56, 3 February 2024 (UTC)[reply]
I think they just need to be good enough to start talking to somebody else about. They've lots of room. NadVolum (talk) 12:26, 4 February 2024 (UTC)[reply]


February 6

Is this sequence of interest? Could you add it to OEIS?

"(sequence A246521 in the OEIS): List of free polyominoes in binary coding, ordered by number of bits, then value of the binary code." It is a system to sort the polyominoes.

It also has a related sequence: "(sequence A335573 in the OEIS): a(n) is the number of fixed polyominoes corresponding to the free polyomino represented by A246521(n)."

"(sequence A152389 in the OEIS): Number of steps in Conway's Game of Life for a row of n cells to stabilize."

Thus we can make a new sequence: Number of steps in Conway's Game of Life for the free polyomino represented by A246521(n) to stabilize:

a(1) through a(22) are 0, 1, 1, 1, 0, 3, 0, 9, 2, 2, 4, 3, 2, 9, 3, 5, 4, 10, 1103, 3, 8, 6

125.230.24.97 (talk) 04:53, 6 February 2024 (UTC)[reply]
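For anyone who wants to experiment with the proposed sequence, here is a rough sketch (an illustration, not a definitive implementation: it detects still lifes and oscillators by exact-state recurrence, and would hit the step cap if gliders escape):

    from collections import Counter
    from itertools import product

    def life_step(cells):
        """One Game of Life step on a set of live (x, y) cells."""
        counts = Counter((x + dx, y + dy) for x, y in cells
                         for dx, dy in product((-1, 0, 1), repeat=2)
                         if (dx, dy) != (0, 0))
        return {c for c, k in counts.items() if k == 3 or (k == 2 and c in cells)}

    def steps_to_stabilize(cells, max_steps=10000):
        seen = {frozenset(cells): 0}
        for step in range(1, max_steps + 1):
            cells = life_step(cells)
            key = frozenset(cells)
            if key in seen:
                return seen[key]     # steps taken before entering the cycle
            seen[key] = step
        return None                  # no recurrence found within max_steps

    row = {(x, 0) for x in range(5)}  # a row of 5 cells, cf. A152389
    print(steps_to_stabilize(row))

Seeding cells with a polyomino's squares instead of a row gives candidate values for the proposed sequence.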