Wikipedia:Reference desk/Mathematics

From Wikipedia, the free encyclopedia
Welcome to the mathematics section of the Wikipedia reference desk.
Want a faster answer?

Main page: Help searching Wikipedia

   

How can I get my question answered?

  • Select the section of the desk that best fits the general topic of your question (see the navigation column to the right).
  • Post your question to only one section, providing a short header that gives the topic of your question.
  • Type '~~~~' (that is, four tilde characters) at the end – this signs and dates your contribution so we know who wrote what and when.
  • Don't post personal contact information – it will be removed. Any answers will be provided here.
  • Please be as specific as possible, and include all relevant context – the usefulness of answers may depend on the context.
  • Note:
    • We don't answer (and may remove) questions that require medical diagnosis or legal advice.
    • We don't answer requests for opinions, predictions or debate.
    • We don't do your homework for you, though we'll help you past the stuck point.
    • We don't conduct original research or provide a free source of ideas, but we'll help you find information you need.



How do I answer a question?

Main page: Wikipedia:Reference desk/Guidelines

  • The best answers address the question directly, and back up facts with wikilinks and links to sources. Do not edit others' comments and do not give any medical or legal advice.


September 18

Minimal set of prime-strings in base 10.

(Inspired by the question of Sep 15)

The set OEIS A071062 is "2, 3, 5, 7, 11, 19, 41, 61, 89, 409, 449, 499, 881, 991, 6469, 6949, 9001, 9049, 9649, 9949, 60649, 666649, 946669, 60000049, 66000049, 66600049"

"Any prime number contains in its digits at least one of the terms of this sequence and there is no smaller set. There are 26 terms in the sequence."

Now the first few Prime numbers A000040 are "2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37, 41, 43, 47, 53, 59, 61, 67, 71, 73, 79, 83, 89, 97, 101, 103, 107, 109, 113, 127, 131, 137, 139, 149, 151, 157, 163, 167, 173, 179, 181, 191, 193, 197, 199, 211, 223, 227, 229, 233, 239, 241, 251, 257, 263, 269, 271"

Obviously primes 2, 3, 5, 7, 11 are in the set. 13 has 3. 17 has 7. 19 is in the set. 23 has 2 and 3. 29 has 2. 31 and 37 have 3. Etc., and when you get to 41 you have to add it to the set, and ditto for 61 and 89. But I can't work out why 101 isn't in the set: there is no 1 nor 10 in it. Are they really omitting digits and counting 11 as valid? The same question for 109? I find OEIS's terse descriptions very hard to follow.

If they are happy to omit digits (base 10), do they require the digits in the same order? Confusing. -- SGBailey (talk) 07:39, 18 September 2023 (UTC)[reply]

Yes, it is the difference between subsequence and substring also referred to earlier. The sequence "101" contains the subsequence "11".  --Lambiam 11:59, 18 September 2023 (UTC)[reply]
Perhaps it's worth noting that the name is "Minimal set of prime-strings in base 10", requiring the entries to be primes themselves. Otherwise the sequence 1, 2, 3, 4, 5, 6, 7, 8, 9 would work and that only has 9 terms. --RDBury (talk) 18:05, 18 September 2023 (UTC)[reply]
There are 77 elements in the minimal set of “primes > 10” strings in base 10: {11, 13, 17, 19, 23, 29, 31, 37, 41, 43, 47, 53, 59, 61, 67, 71, 73, 79, 83, 89, 97, 227, 251, 257, 277, 281, 349, 409, 449, 499, 521, 557, 577, 587, 727, 757, 787, 821, 827, 857, 877, 881, 887, 991, 2087, 2221, 5051, 5081, 5501, 5581, 5801, 5851, 6469, 6949, 8501, 9001, 9049, 9221, 9551, 9649, 9851, 9949, 20021, 20201, 50207, 60649, 80051, 666649, 946669, 5200007, 22000001, 60000049, 66000049, 66600049, 80555551, 555555555551, 5000000000000000000000000000027}, e.g. 857 is a minimal prime in decimal (base b = 10) because there is no prime > 10 among the shorter subsequences of the digits: 8, 5, 7, 85, 87, 57. The subsequence does not have to consist of consecutive digits, so 149 is not a minimal prime in decimal (because 19 is prime and 19 > 10). But it does have to be in the same order; so, for example, 991 is still a minimal prime in decimal even though a subset of the digits can form the shorter prime 19 > 10 by changing the order. 220.138.46.130 (talk) 01:06, 19 September 2023 (UTC)[reply]
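In other words, the test is over digit subsequences (order kept, digits optionally skipped), not substrings. A short Python sketch of the check (the helper name is mine; a subsequence with a leading zero is just read as the smaller number):

  from itertools import combinations
  from sympy import isprime

  def has_prime_proper_subsequence(p):
      # Subsequences keep digit order but may skip digits.
      s = str(p)
      return any(isprime(int("".join(s[i] for i in idx)))
                 for r in range(1, len(s))
                 for idx in combinations(range(len(s)), r))

  print(has_prime_proper_subsequence(101))  # True: 101 contains 11
  print(has_prime_proper_subsequence(109))  # True: 109 contains 19
  print(has_prime_proper_subsequence(881))  # False: 881 is in the minimal set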

September 19

Primes "in the" primes, using triangular addition

The famous cannonball problem in number theory, which has been solved using much more complicated tools from mathematical analysis, states that the sum of the first N perfect squares is never again a perfect square except in finitely many cases (namely, when N=1 and N=24). However, if the "perfect squares" are replaced by the "prime numbers", the "analogous" problem seems not to be true. I have (partly) constructed a sequence of the sums of the first N prime numbers; the Nth term in this sequence can be considered as a function of the index N, or g(N). One has g(N) > the Nth prime for all N larger than 1, and g(N) is a monotonically increasing function. There are infinitely many even terms and also infinitely many odd terms sandwiched between them in the sequence g(N) (this can be proved easily by noting that 2 is the only even prime number). In the half-sequence of g(N) with the odd terms, one might look for primes rather than odd composite numbers. In fact, g(N) is a prime number for N = 1, 2, 4, 6, 12, 14, 60, 64, 96, … (all prime g(N) values after 2 result from the addition of all prime numbers from 2 up to the Nth prime, respectively). The prime numbers in the sequence tend to be rarer as g(N) increases toward infinity; note the curious "large jump" from N=14 to N=60 (this distribution can be understood from the fact that the larger the natural number, the lower the probability of it being a prime). Claim: "The sequence g(N) contains infinitely many prime numbers, despite their almost-zero natural density among all the terms of g(N)". Is this a known result, or is it still an unproven hypothesis in number theory, much like the strong Goldbach conjecture? A trivial but unusual coincidence: the prime 281 = g(14) and the value g(60) = 7699, the sum of all primes up to 281, both occur in the sequence of prime g(N) values. 2402:800:63BC:DB8D:D90A:C8AA:2A80:2CF7 (talk) 13:43, 19 September 2023 (UTC)[reply]
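A quick sketch (my own check, written in Python with sympy) reproduces the indices at which g(N) is prime, including the jump from N = 14 to N = 60:

  from sympy import isprime, prime

  g, hits = 0, []
  for N in range(1, 101):
      g += prime(N)  # prime(N) is the Nth prime
      if isprime(g):
          hits.append((N, g))
  print(hits)
  # [(1, 2), (2, 5), (4, 17), (6, 41), (12, 197), (14, 281), (60, 7699), (64, 8893), ...]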

Based on what I've seen, questions involving the infinitude of a set of primes fulfilling any criteria beyond "they are prime" are, most of the time, not solved, unless there is some very obvious reason why such a set would be finite or empty. I would certainly bet that this is one of those unsolved questions. In any case, following the OEIS sequence for the prime values of g(N) reveals a 2018 paper by Romeo Meštrović on the subject, which would indicate that the problem indeed does not have a resolution. GalacticShoe (talk) 14:28, 19 September 2023 (UTC)[reply]

September 20

Random integer factors

What is the probability that one integer chosen at random is a factor of another integer chosen at random? Jack of Oz [pleasantries] 21:59, 20 September 2023 (UTC)[reply]

It depends on how you're choosing the integers. Since there are infinitely many integers, there's no way to uniformly choose an arbitrary integer, so you'll have to specify a probability distribution first. Let's just assume a generic distribution for now.
You only have the scenarios that X divides Y and X < Y, X equals Y, and Y divides X and X > Y. Note that X | Y for X < Y if and only if Y = kX for some integer k ≥ 2. This means that you can write out the probability as:
P = Σ_{n≥1} Σ_{k≥2} P(X=n)·P(Y=kn) + Σ_{n≥1} P(X=n)·P(Y=n) + Σ_{n≥1} Σ_{k≥2} P(Y=n)·P(X=kn)
If we assume, rather reasonably, that you use the same probabilities for X as you do for Y, then the first and third terms are equal, so you get:
P = 2 Σ_{n≥1} Σ_{k≥2} P(X=n)·P(Y=kn) + Σ_{n≥1} P(X=n)·P(Y=n)
GalacticShoe (talk) 23:10, 20 September 2023 (UTC)[reply]
Unfortunately, the aforementioned equation doesn't lend itself very well to closed-form formulas. As an example, using a geometric distribution with P(n) = (1−p)·p^(n−1) yields a probability of 2(1−p)² Σ_{n≥1} p^(3n−2)/(1−p^n) + (1−p)/(1+p), which doesn't seem to have a nice closed-form equivalent. Probably purely by coincidence though, the sum for any given p looks close to √(1−p²) (which yields a quarter circle.) When p = 1/2, the sum is equivalent to 1/3 + Σ_{n≥1} 2^(1−3n)/(1−2^(−n)), which converges rapidly to a value around 0.880. Compare that to √3/2 ≈ 0.866. In any case, the Poisson distribution does not fare any better; it doesn't seem that there is a closed-form expression for the inner sum, let alone the whole probability. At the end of it all though, there is one thing that does seem to be pretty consistent, and that is that as these distributions "approach" uniformity, the probability tends towards 0. I imagine that, in the same way that the probability of any two positive integers being coprime is 6/π², one can make an analogous notion that the probability of one positive integer being a factor of another is 0. GalacticShoe (talk) 02:16, 21 September 2023 (UTC)[reply]
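These figures are straightforward to check numerically; below is a minimal Python sketch (my own illustration, with arbitrary truncation bounds) of the truncated double sum for the geometric distribution:

  from math import sqrt

  # Truncated evaluation of P(X | Y or Y | X) for P(n) = (1-p) * p**(n-1), n >= 1.
  def divisibility_probability(p, n_max=200, k_max=2000):
      def P(n):
          return (1 - p) * p ** (n - 1)
      equal = sum(P(n) ** 2 for n in range(1, n_max + 1))
      cross = sum(P(n) * P(k * n)
                  for n in range(1, n_max + 1)
                  for k in range(2, k_max + 1))
      return 2 * cross + equal

  print(divisibility_probability(0.5))  # ~0.8800
  print(sqrt(1 - 0.5 ** 2))             # ~0.8660, the quarter-circle comparison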
Another distribution is the discrete uniform distribution on the range {1, …, n}. One can study how the divisor probability behaves as n → ∞. Given n, the probability equals (2·D(n) − n)/n², in which D(n) = Σ_{i=1}^{n} ⌊n/i⌋ (sequence A006218 in the OEIS) counts the number of ordered pairs (i, j) in the range {1, …, n} such that i divides j evenly. It is known that D(n) = n·ln n + (2γ−1)·n + O(√n), so the probability tends to 0 as n → ∞.  --Lambiam 18:27, 21 September 2023 (UTC)[reply]
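The uniform-distribution probability is also easy to tabulate (a sketch; the helper name D is mine):

  # Exact divisor probability for the uniform distribution on {1, ..., n}:
  # (2*D(n) - n) / n**2, where D(n) = sum of floor(n/i) counts the ordered
  # pairs (i, j) with 1 <= i, j <= n and i dividing j.
  def D(n):
      return sum(n // i for i in range(1, n + 1))

  for n in (10, 100, 1000, 10000):
      print(n, (2 * D(n) - n) / n ** 2)  # decays roughly like 2*ln(n)/n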
Why not look at it this way: the larger integer never divides the smaller integer. The smaller integer, call it n, divides every nth integer, so it has a 1/n chance of dividing the larger integer. Bubba73 You talkin' to me? 20:57, 21 September 2023 (UTC)[reply]
That makes a lot more sense to my post-mathematical mind than the previous explanations. Thanks. -- Jack of Oz [pleasantries] 21:40, 21 September 2023 (UTC)[reply]
It's just that that 1/n chance changes depending on what the smaller integer n is, and that's where you start getting into the probabilistic weeds. In general, if you already know what one of the integers is, then you don't have to go through all the rigmarole from earlier. GalacticShoe (talk) 22:49, 21 September 2023 (UTC)[reply]
Why is 1/n relevant? Apart from one case out of an infinity of cases, n is always going to be greater than 1. -- Jack of Oz [pleasantries] 23:46, 21 September 2023 (UTC)[reply]
1/n is the probability that, if you "randomly select an integer", n will divide that integer. As mentioned earlier there's no actual way to "randomly select an integer" in a way that makes every integer equally likely to be selected, but you can think of it as being that the "density" of integers that n divides is 1/n.
For example, the density of numbers that 2 divides, i.e. the density of the even numbers, is 1/2. That isn't to say that there are half as many even integers as there are integers overall (rather famously, there are an equal number of even integers as there are integers), but in an intuitive way -- and a way that can be formalized, although I won't get into it -- there are clearly half as many even integers in any reasonable sample of integers as there are integers total. This of course is the case for all n.
The major problem is that the statement "the probability of n dividing an arbitrary integer is 1/n" is only helpful if you fix n. So if you say "what's the probability that an integer A chosen at random divides another integer B chosen at random", then it's going to change with whatever the value of A is. And unfortunately, unlike the density argument, there's no intuitive way of determining probabilities of how A will be chosen. In other words, earlier, we didn't specify how we would choose B, but we made the reasonable assumption that the probability of choosing B to be a multiple of A would be 1/A. Here, we're choosing A to be some arbitrary integer with no constraints, and there's no simple assumption or heuristic you can use here. That's where the probability distribution from earlier comes in. The probability distribution is a function that takes in the integer n and spits out a probability, P(n), that n is chosen.
This is why it gets so complicated. When we jump from "what's the probability that a fixed integer divides some random integer we choose" to "what's the probability that some random integer A divides another random integer we choose", then we have to add in a brand new function to choose a value for A, and that confounds calculation.
Also, I'm just realizing now that I interpreted the original question as being "what's the probability that A divides the other random integer B or B divides A." I'm not sure if that's what the original question meant or not, but in any case it's largely a matter of semantics, and it doesn't actually change that much about the calculation, so let's not worry too much about it. GalacticShoe (talk) 00:56, 22 September 2023 (UTC)[reply]
Oh, yea. If you are talking about two arbitrary integers, that may not have a definite answer. On the other hand, the chance that two arbitrary integers have a common factor > 1 is relatively easy to calculate, so maybe something similar could be done. Bubba73 You talkin' to me? 23:49, 21 September 2023 (UTC)[reply]
Yeah for sure, based on my earlier examination of what happens when integers are selected with the geometric distribution and with the Poisson distribution, and based on Lambiam's examination of the discrete uniform distribution, I'm pretty sure the probability tends to 0, which makes sense; as n increases, the probability of selecting a multiple of n tends to 0, and as distributions "flatten out", they allow for greater relative probabilities of selecting larger n. GalacticShoe (talk) 01:00, 22 September 2023 (UTC)[reply]
Yes, thinking about it with all integers equally likely, I think the probability would be infinitesimally small, i.e. zero. Bubba73 You talkin' to me? 01:26, 22 September 2023 (UTC)[reply]
But that's not the same as saying it's impossible, is it - because it's clearly possible. Just so unlikely as to be zero for all practical purposes. Correct? -- Jack of Oz [pleasantries] 13:17, 22 September 2023 (UTC)[reply]
It all cruciously depends on what it means to choose an integer "at random". There is no way to do that in such a way that every number has an equal chance of being chosen, since you cannot divide 1 (100%) into infinitely many equal parts that sum up to 1. If all numbers are possible choices, some have necessarily a larger chance of being chosen than some others; it just ain't fair. So to answer the question, we need to come up with another interpretation. We examined a variety of methods to make the selection less and less unfair, and for each the probability gets arbitrarily close to 0 without ever getting there. Since total fairness is impossible, it is not correct to assert that 0 is the answer to the original question. Under some deviously non-uniform ways of reducing the unfairness the answer may even be some value greater than 0.  --Lambiam 16:36, 22 September 2023 (UTC)[reply]
I love the word "cruciously". May I borrow it? -- Jack of Oz [pleasantries] 22:01, 22 September 2023 (UTC)[reply]
Sure. Cruciously means crucially and curiously. Curiously is the same as strangely. It's like the word slithe – there are two meanings packed up into one word.  --Lambiam 06:19, 23 September 2023 (UTC)[reply]
Adding onto Lambiam's answer, I want to note that in asking this question you've gotten to something at the very heart of probability itself. In most undergrad or non-math major courses on probability, there are two scenarios which you can encounter: either you have a finite/discrete number of outcomes, which you can assign probabilities to (for example, there are 6 numbers that you can roll on a die), or you have an infinite/continuous number of outcomes (for example, the amount of time between events occurring.) Naturally, for the former, it's very easy to think of assigning a probability to a single event. You clearly have a very tangible 1/6 probability of rolling any given number on a die. For the latter though, single events rarely ever have a nonzero probability. You cannot say with any reasonable probability that some event will happen, say, exactly 6.527... seconds into the future. Yet, somehow, the event does have to eventually occur, even if no specific time has a nonzero probability. In some ways this does seem paradoxical, but, like you said, for any given value it's clearly possible, just so unlikely as to be zero for all practical purposes.
Mathematicians circumvent this paradox by assigning probabilities to ranges of possible values rather than individual values. For example, if an event is guaranteed to happen at some uniformly random time between 0 and 1 second, then even if no individual time has a nonzero probability, the probability of the event happening between, say, 0.5 and 0.8 seconds is 30%. I'm pretty sure that the reason you can assign "uniform" probabilities to ranges of time (more generally, subsets of real numbers), is because it makes sense for a range of time to have a certain density in a larger range. As mentioned earlier, the range from 0.5 to 0.8 seconds is 30% of the range from 0 to 1 seconds. You can't do the same for finite sets of integers. The density of a finite set of integers within the whole set of integers is always going to be 0, so there is no analog for uniform probability there.
Now, you can narrow down to a specific time/real number by taking successively smaller ranges (with successively smaller probabilities.) You may be able to take infinite sets of integers with a certain density and keep taking subsets until you narrow down to a certain integer, but I struggle to imagine what that would look like. GalacticShoe (talk) 23:36, 22 September 2023 (UTC)[reply]
Very interesting. So, if I modify my original question to positive integers in the range 1 to 1,000,000, what's the probability that one's a factor of the other? -- Jack of Oz [pleasantries] 01:41, 23 September 2023 (UTC)[reply]
Assuming you draw the two numbers so that every number from 1 to 1,000,000 has equal probability, the probability that one's a factor of the other would be (2 × 13,970,034 − 1,000,000)/1,000,000², or 26,940,068/10^12, or about 2.694 × 10^−5. GalacticShoe (talk) 02:28, 23 September 2023 (UTC)[reply]
And if the two numbers are always different, I get 2.594009394 × 10^−5. Bubba73 You talkin' to me? 02:47, 23 September 2023 (UTC)[reply]
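Both figures can be reproduced from the divisor-count sum D(n) described above (a sketch; the full loop over 10^6 terms takes a moment):

  n = 10 ** 6
  D = sum(n // i for i in range(1, n + 1))
  print((2 * D - n) / n ** 2)             # ~2.694e-05, equal choices allowed
  print((2 * D - 2 * n) / (n * (n - 1)))  # ~2.594e-05, the two numbers distinct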
Excellent. Thanks, all. -- Jack of Oz [pleasantries] 23:27, 24 September 2023 (UTC)[reply]
Resolved
Here is a possible interpretation which gives the answer zero. Let f(n) be the probability if both integers are chosen uniformly from 1 to n for a specific finite n. Then f(n) tends to zero as n tends to infinity. PrimeHunter (talk) 23:09, 22 September 2023 (UTC)[reply]
This is the discrete uniform distribution I examined above.  --Lambiam 06:48, 23 September 2023 (UTC)[reply]
Now I understand why Lambiam was working with probability distributions: you can't come up with much of an answer if you choose integers at random from all integers with them equally weighted. I suggest that a probability distribution in which the probability of n being chosen is proportional to 1/n might work. This type of distribution is used in analysis of Benford's law. Bubba73 You talkin' to me? 23:26, 22 September 2023 (UTC)[reply]
The sum of 1/n tends to infinity (even over primes), so probability proportional to 1/n is not possible. It would still tend to infinity with any positive proportionality factor. If we want a decreasing probability for positive integers then the simplest solution may seem drastic: let n have probability 1/2^n. 1/2 + 1/4 + 1/8 + ... = 1. PrimeHunter (talk) 00:38, 23 September 2023 (UTC)[reply]
That is a very unfair distribution. With this distribution, the probability is approximately 0.60669515241529176378.  --Lambiam 06:45, 23 September 2023 (UTC)[reply]
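Reading the event one-directionally as "X divides Y" (my assumption; it is the reading that reproduces the quoted digits), the probability collapses to a fast-converging sum, since conditioning on X = n the chance that Y is a multiple of n is Σ_{k≥1} 2^(−kn) = 1/(2^n − 1). A one-line check in Python:

  # P(X divides Y) with P(n) = 2**-n equals the sum over n of 1/(2**n * (2**n - 1)).
  print(sum(1 / (2 ** n * (2 ** n - 1)) for n in range(1, 60)))
  # 0.6066951524152918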

September 21

Vocabulary Question

What exactly are the differences between the words "perpendicular", "normal", and "orthogonal"? I more or less use these words interchangeably, but I assume that they have precise definitions. PuzzledvegetableIs it teatime already? 00:57, 21 September 2023 (UTC)[reply]

You can read perpendicular, Orthogonality, and Normal (geometry). They are pretty much the same, except that I think "normal" usually refers to vectors. Bubba73 You talkin' to me? 01:19, 21 September 2023 (UTC)[reply]
Perpendicular has to do with a string hanging straight down in the direction of gravity (perpendiculum = plumb line or plumb bob), typically at a right angle to the ground. Normal has to do with a right-angled stick (norma = carpenter's square). Orthogonal literally = right angled in Ancient Greek (came to English via French via Latin); the Latin analog is "rectangular". –jacobolus (t) 01:21, 21 September 2023 (UTC)[reply]
Each of these terms is used with several related but different meanings; the uses overlap, and which term is preferred in which context may be mostly a matter of tradition. One difference is that "normal" is often used as a noun ("the normal") and then refers to a one-dimensional object, while "orthogonal" is mainly used as an adjective and can apply to a pair of subspaces of a metric space that can have any dimensions. (Given an m+n-dimensional space, an m-dimensional subspace can be orthogonal to an n-dimensional subspace.) The geometric term "perpendicular" is mainly used in traditional Euclidean geometry.  --Lambiam 17:21, 21 September 2023 (UTC)[reply]

Curious about UTC-Time zone

I am wondering whom I can contact to learn more about UTC-LOCK, and whether it is related to COVID emergency broadcasts in the ISM band specific to a geographic location.


Thanks Shireesh Avenger (talk) 03:03, 21 September 2023 (UTC)[reply]

I have no idea what "UTC-LOCK" refers to, but I am pretty confident it is not a mathematical topic. We have articles on UTC and Time zone, neither of which contains the word "LOCK".  --Lambiam 17:05, 21 September 2023 (UTC)[reply]
A quick Google search for "UTC-LOCK" doesn't yield anything particularly useful. Rusty4321 talk contributions 23:56, 22 September 2023 (UTC)[reply]


September 23

Number of indistinguishable necklaces

If we wish to count the number of indistinguishable necklaces of length n with k different colours, where only rotation is permitted, then what will be the count? The article Necklace gives the count as (1/n) Σ_{i=1}^{n} k^gcd(i,n), but I do not understand why they start i from 1 and not 0. 43.224.172.66 (talk) 03:03, 23 September 2023 (UTC)[reply]

Some mathematicians have a blind spot when it comes to including 0 as a natural number. To the rest, there is precisely one necklace with zero beads, regardless of the available colours. The given formula breaks down when n = 0, so it is somewhat convenient to ignore this case.  --Lambiam 07:42, 23 September 2023 (UTC)[reply]
In this case, something something Pólya's enumeration theorem, something something Burnside's lemma. Basically, the number of inequivalent necklaces under rotation is the average number of necklaces fixed under the n rotations. For rotations i = 1, …, n−1, there are gcd(i, n) groups of positions that must be of equal color when rotating, so assigning a color to each group yields k^gcd(i,n) necklaces fixed under that rotation. For the last rotation, the identity rotation, clearly every necklace is fixed, so there are k^n necklaces there. gcd(n, n) = n (which makes sense given that rotating n times is equivalent to the identity rotation), so in reality the sum can either be from i = 1 to n or from i = 0 to n−1. There just have to be n total summands because there are n rotations. And of course the final division by n is because we are taking the average. GalacticShoe (talk) 11:34, 23 September 2023 (UTC)[reply]
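A brute-force comparison (a quick sketch with illustrative names) confirms the formula for small cases:

  from math import gcd
  from itertools import product

  def necklaces_formula(n, k):
      # (1/n) * sum over i = 1..n of k**gcd(i, n)
      return sum(k ** gcd(i, n) for i in range(1, n + 1)) // n

  def necklaces_brute(n, k):
      # Canonical form: the lexicographically least rotation of the word.
      def canon(w):
          return min(w[i:] + w[:i] for i in range(n))
      return len({canon(w) for w in product(range(k), repeat=n)})

  print(necklaces_formula(6, 2), necklaces_brute(6, 2))  # 14 14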

Are there infinitely many primes whose first digit and last digit are both 7?

Are there infinitely many primes whose first digit and last digit are both 7? Generally, given a base b and two digits d1 and d2, where d1 is not 0 and d2 is coprime to b, are there infinitely many primes whose first digit is d1 and last digit is d2 in base b? 61.224.131.153 (talk) 03:27, 23 September 2023 (UTC)[reply]

For base 2, the answer is affirmative. I suppose it is open for all other cases.  --Lambiam 07:45, 23 September 2023 (UTC)[reply]
For base 3, there are only four types of primes (excluding the prime 3):
  1. begin with 1 and end with 1
  2. begin with 1 and end with 2
  3. begin with 2 and end with 1
  4. begin with 2 and end with 2
None of these four types are known to be infinite? Is there a possibility only one or two of them are infinite? 61.224.131.153 (talk) 07:54, 23 September 2023 (UTC)[reply]
There are trivially infinitely many primes with first base-3 digit 1. For all n ≥ 1, by Bertrand's postulate, there is at least one prime between 3^n and 2·3^n, which must necessarily start with base-3 digit 1. GalacticShoe (talk) 11:50, 23 September 2023 (UTC)[reply]
In general, there are infinitely many primes starting with any integer m in any base b. Based on Bertrand's postulate#Better results, for sufficiently large n, there is always a prime between m·b^n and (m+1)·b^n, which must necessarily start with m. Similarly, for any d coprime to the base b, there are infinitely many primes ending in d, courtesy of Dirichlet's theorem. Proving that there are infinitely many primes both starting and ending with certain numbers would probably require some extensive work, possibly combining the two theorems. GalacticShoe (talk) 12:23, 23 September 2023 (UTC)[reply]
Dirichlet's theorem on arithmetic progressions implies that there are infinitely many primes ending with any particular string of digits in any particular base, if there are any. For example, consider the arithmetic progression 100d + 77. 100.36.106.199 (talk) 12:33, 24 September 2023 (UTC)[reply]
..., if there are at least two.  --Lambiam 13:20, 24 September 2023 (UTC)[reply]
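Empirically the supply looks plentiful; here is a quick sketch that counts the primes below 10^6 whose decimal expansion starts and ends with 7 (the bound is arbitrary, and no finite count settles the question):

  from sympy import isprime

  count = sum(1 for n in range(7, 10 ** 6)
              if n % 10 == 7 and str(n)[0] == "7" and isprime(n))
  print(count)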
For base 10, there are 36 types of primes (not counting the primes 2 and 5); in general, for base b, there are A062955(b) types of primes (not counting the primes dividing b). None of these 36 types of primes is known to be infinite? 61.224.131.153 (talk) 08:09, 23 September 2023 (UTC)[reply]
As far as we know, the answer is that (apart from base 2) none of these classes is known to be infinite.  --Lambiam 15:41, 23 September 2023 (UTC)[reply]

Are these true?

  1. 0^0 = 1 (exponentiation)
  2. 0! = 1 (factorial)
  3. eulerphi(0) = 0 (totient)
  4. Phi(0,x) = 1 (cyclotomic polynomial)
  5. the sum of all elements in the empty set is 0
  6. the product of all elements in the empty set is 1
  7. the intersection of all elements in the empty set is the set containing all things
  8. the union of all elements in the empty set is the empty set
  9. the sup of all elements in the empty set is -infinity
  10. the inf of all elements in the empty set is +infinity
  11. the gcd of all elements in the empty set is 0
  12. the lcm of all elements in the empty set is 1
  13. the “smallest prime factor” of 1 is +infinity (and 1 is p-rough number of all p)
  14. the “largest prime factor” of 1 is -infinity (and 1 is p-smooth number of all p)
  15. 1+2+3+4+… = -1/12 (and thus sigma(0) = -1/12 (sum of divisors))
  16. 1+2+4+8+… = -1

I think that all of them are true. 61.224.131.153 (talk) 03:42, 23 September 2023 (UTC)[reply]
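For what it's worth, several of these conventions (the empty sum and product, the gcd and lcm of no arguments, and the sup and inf of the empty set) are baked into Python's standard library; a quick illustration:

  import math

  print(sum([]))        # 0: the empty sum
  print(math.prod([]))  # 1: the empty product (Python 3.8+)
  print(math.gcd())     # 0: gcd of no arguments (Python 3.9+)
  print(math.lcm())     # 1: lcm of no arguments (Python 3.9+)
  print(max([], default=float("-inf")))  # sup of the empty set: -inf
  print(min([], default=float("inf")))   # inf of the empty set: +inf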

It depends on definitions, and in many cases the expressions in question are simply undefined or dependent on context. I don't want to get lost in the weeds on every one of these, but it's certainly the case that 0! = 1. The union over the empty set is taken to be the empty set. There is no set containing all things; assuming there is one would lead to certain contradictions in set theory, see for example Russell's paradox. To avoid this, simply leave the intersection over the empty set as undefined; only define intersections over non-empty sets. Infima and suprema aren't guaranteed to exist even for non-empty sets, so I would leave them undefined, at least over the reals. For the last two, yes, there are certain summation methods that yield finite expressions for these sums, but they aren't sums in the ordinary sense. Normally it's better to simply think of them as divergent series and to take σ(0) to be an undefined expression. --RDBury (talk) 05:14, 23 September 2023 (UTC)[reply]
Both 0^0 and 0! are empty products, and the empty product is 1, right? 61.224.131.153 (talk) 07:54, 23 September 2023 (UTC)[reply]
No, they are not both empty products. One could easily define 0^n as 42·0·0·…·0 with n zeroes, and that is zero if n is 1 or more but 42 when n is zero. 1! = 0!·1 and 1! = 1, and from that we get 0! = 1. Anyway, 0^0 is mostly defined as 1 as described in the article linked to just below, but when you're dealing with reals rather than integers things become more complicated. NadVolum (talk) 12:12, 23 September 2023 (UTC)[reply]
Well, a^0 is an empty product for any complex number a including a = 0; also, in any field with additive identity 0 and multiplicative identity 1, a^0 = 1 for every a in the field, including a = 0. 220.132.230.56 (talk) 23:02, 23 September 2023 (UTC)[reply]
True, but 0^x is 0 for any x not equal to 0, so no matter how you define it, some limit is going to break. You define it ultimately using whatever definition is most useful to you. If that definition is based on the empty product, then so be it. GalacticShoe (talk) 23:13, 23 September 2023 (UTC)[reply]
No, 0^x is undefined for negative x and complex x. 220.132.230.56 (talk) 02:38, 24 September 2023 (UTC)[reply]
According to your definition, and I think it is the best way. But some consider 1+0i to be the same as 1 and would have 0^(1+0i) be 1. Personally I treat 1 as different from 1+0i or 1.0. NadVolum (talk) 10:53, 24 September 2023 (UTC)[reply]
All the more reason to leave 0^0 as whatever is most convenient. Not every definition of it is going to be consistent with each other. Choose whatever suits you. GalacticShoe (talk) 15:59, 24 September 2023 (UTC)[reply]
For the first question, see our article Zero to the power of zero.  --Lambiam 08:03, 23 September 2023 (UTC)[reply]

Why is the largest number at which a scientific calculator does not overflow 10^100, not a power of 2?

Why is the largest number at which a scientific calculator does not overflow 10^100 (a googol), not a power of 2? Computers have only two states, "on" and "off", and the largest numbers that do not overflow the single-precision and double-precision floating-point formats are also powers of 2 (roughly 2^127 and 2^1023, respectively), so why is the overflow threshold of a scientific calculator a power of 10 instead of a power of 2? 61.224.131.153 (talk) 04:30, 23 September 2023 (UTC)[reply]

who says it is? --jpgordon𝄢𝄆𝄐𝄇 04:33, 23 September 2023 (UTC)[reply]
It depends on the calculator, there is no standard "overflow size" unless the makers decide to follow IEEE guidelines. (See IEEE 754.) I was always under the impression that a lot of calculators worked in base 10 natively; that was certainly the case for the old mechanical models. --RDBury (talk) 04:42, 23 September 2023 (UTC)[reply]
My HP 20s and HP 35s go higher than 10^100 - I'm not sure how high they will go. I think that since calculators are designed for decimal input and output, and calculation speed is not important, they probably use binary-coded decimal to encode one decimal digit with four (or eight) bits. Bubba73 You talkin' to me? 06:25, 23 September 2023 (UTC)[reply]
But using binary-coded decimal loses 3/8 of the bit combinations and is less economical; why not use the double-precision floating-point format directly? 61.224.131.153 (talk) 07:56, 23 September 2023 (UTC)[reply]
Speed and extra bits of precision are not problems for a calculator. Working in decimal gives fewer funnies that people can complain about, like the fact that dividing by ten doesn't work exactly in binary but does work out okay when done by hand. Silly complaints using up their money are the last thing a manufacturer wants. NadVolum (talk) 11:58, 23 September 2023 (UTC)[reply]
That's right. People like things to be right in decimal. I think there was an early HP calculator that gave 2^2 = 3.999. Bubba73 You talkin' to me? 17:09, 23 September 2023 (UTC)[reply]
Ha ha, that's a good one! ;-) Yes there wasn't much space for programs in the early calculators. Chips have got so much cheaper since then. NadVolum (talk) 18:05, 23 September 2023 (UTC)[reply]
The HP-35 got all of its functions into 767 instructions (7670 bits). It doesn't have a "square" key, so it must have used logs and exponentials to calculate x^y, and had a rounding error, or did not use enough accuracy. Bubba73 You talkin' to me? 01:44, 24 September 2023 (UTC)[reply]
Dividing by ten doesn't work exactly in binary, but dividing by six also doesn't work exactly in decimal, right? 220.132.230.56 (talk) 22:57, 23 September 2023 (UTC)[reply]
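Each base has its own blind spots, and the trade-off is easy to demonstrate (a sketch using Python's decimal module for the decimal side):

  from decimal import Decimal, getcontext

  print(0.1 + 0.2 == 0.3)  # False: 1/10 and 1/5 are not exact in binary
  print(Decimal("0.1") + Decimal("0.2") == Decimal("0.3"))  # True in decimal
  getcontext().prec = 10
  print(Decimal(1) / Decimal(6))  # 0.1666666667: 1/6 is not exact in decimal either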
Clearly the Babylonians got it right by working in base sixty. AndyTheGrump (talk) 01:48, 24 September 2023 (UTC)[reply]
"What Every Computer Scientist Should Know About Floating-Point Arithmetic". I would think a modern cheap scientific calculator would be decimal IEEE 854-1987 (now part of 754) or if they had a general purpose processor computer algebra system with arbitrary precision? fiveby(zero) 12:33, 23 September 2023 (UTC)[reply]
Depending on the calculator, it may be a limitation of the format chosen for displaying the output, not of the internal format used for performing the actual calculations. If there are only two digits available on the display for denoting the exponent in scientific notation, +9.999999999E+99 is the largest number that can be displayed.  --Lambiam 08:22, 23 September 2023 (UTC)[reply]
that's actually a good answer to an odd question. I imagine the internal overflow value is something a bit greater than 2^332. --jpgordon𝄢𝄆𝄐𝄇 17:11, 23 September 2023 (UTC)[reply]
According to the Owner's Handbook for my (c. 1984) HP-15C,

Regardless of the display format, the HP-15C always internally holds each number as a 10-digit mantissa and a two-digit exponent of 10

which presumably explains why

When the result of a calculation ... is a number with a magnitude greater than 9.999999999×1099, ... the overflow flag ... is set.

Mitch Ames (talk) 09:24, 25 September 2023 (UTC)[reply]
However, computers (and calculators) can only have "on" and "off" states, and thus they can only use base 2 to store numbers. 118.170.54.98 (talk) 09:50, 25 September 2023 (UTC)[reply]
They don't have to use binary - they can use binary-coded decimal; according to [1] the HP-15C did. Mitch Ames (talk) 11:05, 25 September 2023 (UTC)[reply]
Better than binary-coded decimal, there are the IEEE 754 decimal formats. — Vincent Lefèvre (talk) 11:15, 25 September 2023 (UTC)[reply]
But double-precision floating-point format is really the best. 220.132.230.56 (talk) 12:27, 25 September 2023 (UTC)[reply]
Although bcd calculations are slower than binary calculations, you save time by not doing decimal to binary and back to decimal conversions before and after every calculation (because calculators update the display after every calculation, while computers only print the final result). Also you save memory by leaving out the binary to decimal and vice versa code. 212.178.135.35 (talk) 13:17, 25 September 2023 (UTC) Martin.[reply]

Log and exponential functions

I think this is something I should be able to figure out myself, so I don't want to look up the answer. But, given the definition ln x = ∫_1^x dt/t, or alternatively defining exp as the unique solution (with appropriate constants) of x′ = x, I'd like to prove the basic properties ln(ab) = ln a + ln b, and so on. "Prove" means at the level of "intro to real analysis" classes where they try to teach you to be fussy about proofs. I did take one of those classes years ago, but this didn't come up. I was just thinking about it recently: is it difficult? What is the basic direction? No complete answers please. Thanks. 2601:644:8501:AAF0:0:0:0:86EA (talk) 21:28, 23 September 2023 (UTC)[reply]

Rgdboer's answer for ln(ab) = ln a + ln b is excellent, but it may go further than hinting. Hopefully not detracting from his answer, summarizing it as a hint I would say "split the integral." GalacticShoe (talk) 00:36, 24 September 2023 (UTC)[reply]
Gregoire de Saint-Vincent noted in 1647 that rectangular hyperbolas are stable under squeeze mapping, so planar areas are preserved not only in the whole plane, but also under the hyperbola. Thus the hyperbolic logarithm was developed a century before Euler's exponential functions a^x.
The correct statement is ln x = ∫_1^x dt/t, where t is a bound variable.
Here the case a>1 and b>1 is considered first, so the areas extend to the right of x=1 and the integral is seen as the area over [1,x] and under the hyperbola xy=1. Using b as squeeze parameter, the area over [1,a] equals the area over [b,ab]:
∫_1^a dt/t = ∫_b^{ab} dt/t.
Then ln(ab) = ∫_1^{ab} dt/t = ∫_1^b dt/t + ∫_b^{ab} dt/t = ln b + ln a.
The cases where one or both of a, b are in the unit interval are similar when signed areas are noted.— Rgdboer (talk) 00:13, 24 September 2023 (UTC)[reply]
Being "fussy about proofs" applies to the basics, defining limits, defining derivatives, defining integrals, the fundamental theorem of calculus, intermediate value theorem, mean value theorem, etc. Once you you've proved them rigorously you can apply them just as you would in freshman calculus. One of the basics is the chain rule and (via FTC) substitution in integration. Another is that you can break up the interval of integration: So the solution give above is rigorous, being an application of facts already proven. (I would have left it at that as enough of a hint. But Wikipedia is not "spoiler free". Maybe if you said it was a homework problem the answer would be more just a hint since we have a "we won't do your homework for you" policy.)
Btw, you can define exp(x) as lim_{n→∞} (1 + x/n)^n. It's a somewhat more practical definition because it gives actual approximate values instead of depending on a "unique solution". (To use the unique-solution definition rigorously you'd need to prove that there is a solution and that there is no more than one solution. Actually there is more than one solution to x′ = x; you probably meant the initial value problem x′ = x, x(0) = 1. There are theorems which prove the existence and uniqueness of solutions to initial value problems, but the conditions are fiddly. Your intro to analysis course probably didn't cover most, or even any, of them. I'll dig up my copy of Protter and Morrey to see what they did.) You can also define exp(t) using its series, and in fact that's what is usually done. In that case it's much easier to prove that x = exp(t) satisfies x′ = x. Of course when there are competing definitions you're required to prove that they define the same thing, so there's another challenge for you. --RDBury (talk) 00:58, 24 September 2023 (UTC)[reply]
PS. Protter and Morrey define exp as the inverse of ln, the general properties of inverse functions having been covered already. They also give a theorem proving there is a unique solution to an initial value problem under certain conditions, and the proof is an application of facts about contraction mappings in metric spaces. This seems to me to be overkill for the current question. I'll see what else I can find. --RDBury (talk) 01:33, 24 September 2023 (UTC)[reply]

Thanks all. I did think about separating the integral into the a part and the b-a part. I'll see what more I can do. The underlying idea is to use exercises like this to get back into "shape" math-wise, if that makes any sense. Yes I'm used to exp being defined as the inverse of log. It just seems ok to also do it the other way. exp' = exp having a unique solution follows from the Picard-Lindelof theorem of course. 2601:644:8501:AAF0:0:0:0:86EA (talk) 20:46, 24 September 2023 (UTC)[reply]
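As a numerical sanity check on the split-the-integral direction (arithmetic only, not a proof), a crude midpoint-rule sketch exhibits the additivity:

  # Midpoint-rule approximation of ln(x) = integral from 1 to x of dt/t, for x >= 1.
  def log_integral(x, steps=100000):
      h = (x - 1) / steps
      return sum(h / (1 + (i + 0.5) * h) for i in range(steps))

  a, b = 2.0, 3.0
  print(log_integral(a * b))                # ~1.791759
  print(log_integral(a) + log_integral(b))  # ~1.791759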

September 24

at least one identical vertex, one identical side...

Let W be the set of convex polygons. Let R be the subset of W consisting of those convex polygons with rotational symmetry. Let F be the subset of W where f is in F when for every vertex v in f there is another vertex u with the same measure as v *and* for every edge e in f there is another edge d with the same measure as e. R is a subset of F, but is R equal to F? Naraht (talk) 01:07, 24 September 2023 (UTC)[reply]

I assume that the measure of a vertex is the interior angle at that vertex and the measure of an edge is the same as its length. The pentagon with vertices is in F but does not possess rotational symmetry.  --Lambiam 07:00, 24 September 2023 (UTC)[reply]
Lambiam: Yes, your assumption is correct, and thank you for the counterexample! Naraht (talk) 02:35, 25 September 2023 (UTC)[reply]

Smallest "prime" for which Fermat's last theorem was NOT proved

Hello. I'm French and I asked this question [2] in our Oracle. Now I ask the same question in your Reference Desk.
I've just reread the article on Fermat's Last Theorem. Before Andrew Wiles' proof, this theorem had been proved for almost all integers > 2. It had been proved for infinite families of primes... but not all. Hence several questions:
Q1) Do we know the smallest prime number for which this theorem was not proved?
Q2) Is the sequence of these very special primes (or part of it) available anywhere on the Internet?
Q3) Could I get some information about this subject on the OEIS site? Thanks. Jojodesbatignoles (talk) 11:49, 24 September 2023 (UTC)[reply]

Fermat's last theorem is extremely difficult to prove for the primes p satisfying these three conditions simultaneously:
  1. All of 2p+1, 4p+1, 8p+1, 10p+1, 14p+1, 16p+1 are composite (sequence A152625 in the OEIS)
  2. p is Bernoulli irregular prime (sequence A000928 in the OEIS)
  3. p is Euler irregular prime (sequence A120337 in the OEIS)
The first few such primes p are 263, 311, 379, 461, 463, 541, 751, 773, 887, 971, …
—— 223.141.74.3 (talk) 01:47, 25 September 2023 (UTC)[reply]
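Only the first of the three conditions is cheap to test; the following sketch checks condition 1 alone (so its output should be compared with A152625; conditions 2 and 3 need Bernoulli and Euler number machinery not shown here):

  from sympy import isprime, primerange

  def condition1(p):
      # all of 2p+1, 4p+1, 8p+1, 10p+1, 14p+1, 16p+1 composite
      return all(not isprime(m * p + 1) for m in (2, 4, 8, 10, 14, 16))

  print([p for p in primerange(2, 1000) if condition1(p)])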

Translated with the help of www.DeepL.com/Translator (free version) Jojodesbatignoles (talk) 11:49, 24 September 2023 (UTC)[reply]

In July 1993 (so, before Wiles' final, correct proof was published in May 1995) this paper was published, noting that FLT was true for all primes below four million. So, at the very least, the answer to Q1 is greater than that. Double sharp (talk) 12:27, 24 September 2023 (UTC)[reply]
For the history of proofs of FLT (of course, not counting p = 1 and p = 2):
  1. p = 4
  2. p = 3
  3. p = 5
  4. p = 14
  5. p = 7
  6. p is Sophie Germain prime (i.e. 2p+1 is prime)
  7. p is Bernoulli regular prime
  8. p = 59, 67, 74
  9. p = 37 (thus complete all p <= 100)
  10. p <= 256 (in fact only complete all p <= 216, but all primes between 216 and 256 are either Bernoulli regular prime or Sophie Germain prime)
  11. p is Euler regular prime
  12. p <= 618 (p = 619 is extremely difficult to prove, and 619 is both Bernoulli irregular and Euler irregular, but note that 619*4+1 is prime)
  13. primes p such that at least one of 2p+1, 4p+1, 8p+1, 10p+1, 14p+1, 16p+1 is prime
  14. p <= 125000 (125003 is Bernoulli irregular, but 2*125003+1 is prime, thus the smallest prime such that FLT was not proved at that time should not be 125003, but I don’t know what is the smallest prime p > 125000 which is both Bernoulli irregular and Euler irregular, and none of 2p+1, 4p+1, 8p+1, 10p+1, 14p+1, 16p+1 is prime)
  15. primes p such that 2^(p-1) != 1 mod p^2
  16. primes p such that 3^(p-1) != 1 mod p^2
  17. primes p such that F_{p-(p|5)} != 0 mod p^2
  18. primes p not dividing the numerator of the Bernoulli number B(p-3)
  19. primes p such that q^(p-1) != 1 mod p^2 for at least one of q = 5, 7, 11, 13, 17, 19
  20. primes p such that q^(p-1) != 1 mod p^2 for some prime q <= 89
  21. primes p such that q^(p-1) != 1 mod p^2 for some prime q <= 113
  22. all p
223.141.74.3 (talk) 01:59, 25 September 2023 (UTC)[reply]
It seems that there are no primes satisfying none of conditions 1 to 21, but currently there is still no proof. 223.141.74.3 (talk) 02:16, 25 September 2023 (UTC)[reply]
The primes that were, at some point in history, "the smallest prime number for which this theorem was not proved" are 3, 5, 7, 11 (when p=7 was proved), 13 (when Sophie Germain primes were proved), 37 (when Bernoulli regular primes were proved), 101 (when p<=100 were proved), 257 (when p<=216 were proved), 263 (when Euler regular primes were proved), 619 (when p<=618 were proved), 751 (when primes p such that at least one of 2p+1, 4p+1, 8p+1, 10p+1, 14p+1, 16p+1 is prime were proved), >125000 (when p<=125000 were proved), >10^14 (when non-Wieferich primes were proved), … 223.141.15.192 (talk) 04:49, 25 September 2023 (UTC)[reply]

September 25

Are there infinitely many primes of the form 1000…0007, 333…3331, 7111…111, or 3444…4447 in base 10?

Are there infinitely many primes of the form 1000…0007, 333…3331, 7111…111, or 3444…4447 in base 10? (i.e., are sequences A088274, A055557, A056719, and A102969 in the OEIS infinite?) Are there infinitely many primes of the form 3·2^n+1, 3·2^n−1, 2^n+3, or 2^n−3 (these are primes of the form 11000…0001, 10111…111, 1000…00011, 111…11101, respectively, in base 2)? In general, given a base b, a digit d, and two strings x and y (which may be empty), are there infinitely many primes of the form xddd…dddy in base b? I know that, as with the Bunyakovsky conjecture and Schinzel's hypothesis H, some conditions (divisibility by small primes, or an algebraic factorization) are necessary, e.g.

  1. There are no primes of the form 9444…4449 in base 10, since all such numbers are divisible by 3, 7, 11, or 13.
  2. There are no primes of the form 2777…777 in base 9 (not counting the prime 2), since all such numbers are divisible by either 2 or 5.
  3. There are no primes of the form 3888…888 in base 9 (not counting the prime 3), since all such numbers have a difference-of-two-squares factorization.
  4. There are no primes of the form 1000…0001 in base 8, since all such numbers have a sum of two cubes factorization.
  5. There are no primes of the form BBB…BBB9B in base 12, since all such numbers are either divisible by 13 or have a difference of two squares factorization.
  6. There are no primes of the form 8DDD…DDD in base 14, since all such numbers are either divisible by 5 or have a difference of two squares factorization.

And thus I made a conjecture like Bunyakovsky conjecture and Schinzel's hypothesis H:

If b >= 2, d is a digit in base b, and x and y are two strings (which may be empty) of base-b digits, then there are infinitely many primes of the form xddd…dddy in base b, unless it can be proved that there are no primes or only finitely many primes of this form, by divisibility by small primes or by algebraic factorizations, or a combination of them.

This conjecture says that the conditions above are not only necessary but also sufficient. Can someone prove or disprove it? (I think this conjecture may be more difficult to prove than Schinzel's hypothesis H, but I really don't know why it is not as famous as Schinzel's hypothesis H and does not have an official name.) 118.170.53.159 (talk) 02:33, 25 September 2023 (UTC)[reply]
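For the first family at least, the search is easy to run. A sketch using sympy's isprime, which at these sizes is a Baillie-PSW probable-prime test, so larger hits are probable primes only:

  from sympy import isprime

  # Exponents n for which 10**n + 7, i.e. 1000...0007, is (probably) prime.
  print([n for n in range(1, 301) if isprime(10 ** n + 7)])
  # small cases: 1, 2, 4, 8, 9, ... (compare sequence A088274)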

Alternate calendar algorithms

Do other calendars (excluding the Gregorian and Julian calendars) have anything similar to the Doomsday algorithm, where I can plug the dates into some equation and find out the day of the week? Primal Groudon (talk) 04:23, 25 September 2023 (UTC)[reply]
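One general observation that may help: most weekday tricks reduce to converting the date to a running day count from a fixed epoch and taking it modulo 7, so any calendar with a date-to-day-count conversion (the Julian day number is the usual common currency; Dershowitz and Reingold's Calendrical Calculations gives such conversions for many calendars) admits a Doomsday-style rule. A minimal sketch using the proleptic Gregorian day count built into Python:

  from datetime import date

  # date.toordinal() counts days from 0001-01-01, which is a Monday
  # in the proleptic Gregorian calendar.
  def weekday_name(y, m, d):
      names = ["Monday", "Tuesday", "Wednesday", "Thursday",
               "Friday", "Saturday", "Sunday"]
      return names[(date(y, m, d).toordinal() - 1) % 7]

  print(weekday_name(2023, 9, 25))  # Monday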