Wikipedia:Reference desk/Mathematics

From Wikipedia, the free encyclopedia


Welcome to the mathematics section of the Wikipedia reference desk.

Want a faster answer?

Main page: Help searching Wikipedia

How can I get my question answered?

  • Select the section of the desk that best fits the general topic of your question (see the navigation column to the right).
  • Post your question to only one section, providing a short header that gives the topic of your question.
  • Type '~~~~' (that is, four tilde characters) at the end – this signs and dates your contribution so we know who wrote what and when.
  • Don't post personal contact information – it will be removed. Any answers will be provided here.
  • Please be as specific as possible, and include all relevant context – the usefulness of answers may depend on the context.
  • Note:
    • We don't answer (and may remove) questions that require medical diagnosis or legal advice.
    • We don't answer requests for opinions, predictions or debate.
    • We don't do your homework for you, though we'll help you past the stuck point.
    • We don't conduct original research or provide a free source of ideas, but we'll help you find information you need.



How do I answer a question?

Main page: Wikipedia:Reference desk/Guidelines

  • The best answers address the question directly, and back up facts with wikilinks and links to sources. Do not edit others' comments and do not give any medical or legal advice.


January 18

Expected proportion of completed files?

While waiting around for files in some large torrents to complete, I observed that, without specific files being assigned higher priority, the percentage of files complete is usually much lower than the percentage of the torrent complete, for obvious reasons. (For example, if you have 30% of the entire torrent complete, it's likely that you have less than 5% of the files complete, because that 30% is spread out between files, so that you have a little bit of many mostly incomplete files, and only a very small number of fully completed files where the pieces just happened to line up.) I began to wonder exactly how many files you should expect to be complete, which got me to the following generalized question:

Suppose a torrent consists of n equally-sized files, each consisting of exactly p identically-sized pieces (with no pieces being shared between files), and pieces are downloaded at random and with equal priority. (So the total torrent will consist of np pieces.) If x is the fraction/percentage of the torrent that is complete (that is, the number of complete pieces divided by the total number of pieces), then what fraction/percentage of the files should we expect to be complete?

(Obviously, the above makes a few simplifying assumptions, as, in real life: not all files of a torrent will be equally sized; users can configure clients to prioritize some pieces higher than others; pieces are downloaded based upon availability, which is not fully random; and pieces at the front or tail end of files will usually be shared between files, containing bytes from both.)

Anybody know how to solve the above problem? Do we have an article on this generalized problem, since I could see it applying to other situations besides torrents? (A similar problem: suppose a bucket is filled with 100 balls of 10 different colors, so that there is one ball of each color; after picking x balls without replacement, what is the expected number of colors for which you have every ball of that color?) Thanks.

SeekingAnswers (reply) 01:48, 18 January 2014 (UTC)[reply]

You could get a fair approximation by computing the probability of each file being completed as if it were independent of the rest. So, say a given file has 10 chunks, and each chunk has a 60% chance of being completed. The chance of all 10 being completed is then P(file 1 complete) = 0.6^10. You would calculate all of the P values this way, then combine them to get the total probability (you know how to do this?). This would be quite straightforward, but you'd want to use a program to calculate it quickly.
Of course, the files are not really independent of each other, since, if more chunks of one file are completed, this would mean fewer chunks are available to spread among the other files. So, to do it properly, you'd have to look at every possible distribution of completed chunks, calculate how many files are completed that way, then combine those together. Of course, this approach would quickly become impractical, even using a computer to do the math.
Another thought: can it download two chunks of the same file at once? If not, then this complicates things a bit further, as the completion of each chunk is now dependent on the completion of the other chunks in the same file. StuRat (talk) 02:04, 18 January 2014 (UTC)[reply]
Suppose that X_i is the random variable that is equal to 0 if piece i (of np, numbered from 1) is incomplete and 1 if piece i is complete. Fix c, the fraction of pieces complete; then X_i is 0 with probability 1 − c and 1 with probability c. Define Y_j to be the random variable that is equal to 0 if file j (of n) is incomplete and 1 if file j is complete. Since the X_i are independent random variables, Y_j is 1 with probability c^p and 0 otherwise, so its expected value is c^p. Since the Y_j are independent random variables, the expected number of complete files is nc^p. Ozob (talk) 01:14, 20 January 2014 (UTC)[reply]
As StuRat explained, given that the fraction complete is c, the X_i are not independent - if one piece is known to be completed, it reduces the chance of all other pieces being completed.
E.g., if there are 2 files with 2 pieces each, and you know that 50% of the pieces have been completed, then the average number of completed files is 1/3, not 1/2.
However, it should be a good enough approximation. -- Meni Rosenfeld (talk) 14:06, 20 January 2014 (UTC)[reply]
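A quick Monte Carlo check of both the nc^p approximation and Meni's exact 1/3 (a sketch in Python; the parameters are arbitrary):

    import random

    def expected_complete_files(n, p, completed, trials=100_000):
        """Pick 'completed' of the n*p pieces uniformly at random
        (without replacement) and count fully completed files."""
        pieces = list(range(n * p))
        total = 0
        for _ in range(trials):
            done = set(random.sample(pieces, completed))
            total += sum(all(f * p + i in done for i in range(p))
                         for f in range(n))
        return total / trials

    # Meni's example: n=2 files, p=2 pieces each, half the pieces done.
    print(expected_complete_files(2, 2, 2))   # ~0.333, versus n*c^p = 0.5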

Add up positive numbers to get negative number

Can someone say something about this: http://www.slate.com/blogs/bad_astronomy/2014/01/17/infinite_series_when_the_sum_of_all_positive_integers_is_a_small_negative.html How can adding up an infinite number of positive numbers get you a negative number? 220.239.51.150 (talk) 03:40, 18 January 2014 (UTC)[reply]

The series is actually divergent and doesn't have a sum at all in the usual sense. When you apply formulas to a divergent series, even formulas that work perfectly well for a convergent series, weird things can happen. Looie496 (talk) 04:20, 18 January 2014 (UTC)[reply]
The appropriate article would seem to be Cesàro summation. Rojomoke (talk) 04:55, 18 January 2014 (UTC)[reply]
Actually, we have an article on this very series: 1 + 2 + 3 + 4 + ⋯ --Mark viking (talk) 05:38, 18 January 2014 (UTC)[reply]

The article in the URL link specifically says that 1 + 2 + 3 + 4 + ⋯ = −1/12.

220.239.51.150 (talk) 12:28, 18 January 2014 (UTC)[reply]

The linked article (Ramanujan summation) gives that value for the Ramanujan sum, but it also says: "Ramanujan summation essentially is a property of the partial sums, rather than a property of the entire sum, as that doesn't exist", and Ramanujan himself also wrote: "If I tell you this you will at once point out to me the lunatic asylum as my goal". Sometimes, though, counter-intuitive working can give useful real results. Dbfirs 14:31, 18 January 2014 (UTC)[reply]
Even for partial sums, there is no way you can add up a finite number of positive numbers to get a negative number. Ohanian (talk) 07:05, 19 January 2014 (UTC)[reply]

This is clearly false, and here is the proof. You will pay me one dollar on the first day, two dollars on the second day, three dollars on the third day, and so on and so forth for ALL ETERNITY. But in reality, you don't owe me anything; instead, I owe you eight cents and one third of a cent!!! Does that make any sense to you??? Ohanian (talk) 07:15, 19 January 2014 (UTC)[reply]

The flaw in that argument is that you haven't gone all the way to "all eternity" - because you can't. You've stopped at some undefined point well before then. The trick depends on actually going ALL the way. Get back to us when you get there.  :) -- Jack of Oz [pleasantries] 07:58, 19 January 2014 (UTC)[reply]
Well, there are lots of operations you can do on a completed infinite collection. This particular one, though, I don't think really makes all that much sense except in very specific contexts. The sum in the sense of measure theory, for example, would be ∞, and that's what I would generally consider the default answer unless there is some contextual information that makes the Ramanujan sum seem appropriate. --Trovatore (talk) 08:19, 19 January 2014 (UTC)[reply]
I thought this "sum" was obtained somehow using analytic continuation. I'm pretty sure, as well, that it appears in a fundamental rôle in String Theory by Barton Zweibach. Don't have it available a t m. YohanN7 (talk) 08:39, 19 January 2014 (UTC)[reply]
Indeed, using the analytic continuation of the Riemann zeta function (in particular the functional equation) you can conclude $\zeta(-1) = -\tfrac{1}{12}$. Since the zeta function is defined as $\zeta(s) = \sum_{n=1}^{\infty} n^{-s}$ for $\operatorname{Re}(s) > 1$, you can (somewhat dubiously) declare $1 + 2 + 3 + \cdots = -\tfrac{1}{12}$.
Gutworth (talk) 20:04, 19 January 2014 (UTC)[reply]
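For anyone who wants to see the analytically continued value numerically, a one-line check with the third-party mpmath library (assuming it is installed):

    from mpmath import zeta

    print(zeta(-1))   # -0.0833333333333333, i.e. -1/12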
See also 1 + 2 + 4 + 8 + ⋯. Bo Jacoby (talk) 20:59, 20 January 2014 (UTC).[reply]
I think the previous remarks handle this pretty well, but I'll add that this basically comes down to abuse of notation. The popular press somehow got hold of this (old) notion recently, and most of the press gets it at least partially wrong or sloppy. To be fair to the press, getting it right in detail requires the equivalent of a math degree :) Suffice it to say, the infinite sum has no limit, even under the common rules which do allow some infinite sums to converge to finite values. The sum can behave as claimed--but only under a rather esoteric and loose sense of what it means to write that equals sign.
I'd be happier if 1/2 + 1/4 + 1/8 + 1/16 + ⋯ = 1 were being passed around as a "cool math fact" -- it's a concept many people haven't heard of, and it can be explained pretty clearly in a short piece! SemanticMantis (talk) 00:27, 21 January 2014 (UTC)[reply]

Ramanujan summation: Independent of f, or not?

So our article on Ramanujan summation, which is apparently what's used to assign a value to 1+2+3+..., strikes me as not the clearest math article I've ever seen on Wikipedia, but the idea filters through to me that one takes certain formulas for sums of the form Σf(n), where I'm being deliberately vague about the limits of summation, and then applies them outside the region where they can be proved to be correct in the sense of the ordinary absolutely convergent infinite sum, and then one declares that to be the answer. Yes?

If I've followed it that far, then to me, the obvious question is, do I get the same answer if I use a different function g, but one that agrees with f on the values that I'm summing? Maybe, if some regularity condition is imposed on g, say that it be an entire function or something? If so, then I can agree that this procedure is genuinely isolating a property of the sum itself. But if not, then it seems to be a property of a particular representation of the sum, and therefore not properly a sum of 1+2+3+... at all. Does anyone know? --Trovatore (talk) 00:42, 21 January 2014 (UTC)[reply]

I don't know, but this quote from the article hints that you might be right on the claim that "classical" R_sum might be ill-defined/multi-valued in some cases:
"More recently, the use of C(1) has been proposed as Ramanujan's summation, since then it can be assured that one series \scriptstyle \sum_{k=1}^{\infty}f(k) admits one and only one Ramanujan's summation"
The phrasing certainly implies to me that either the C(0) approach may fail to define an R_sum of certain series, or the R_sum may not be unique. SemanticMantis (talk) 15:35, 21 January 2014 (UTC)[reply]

January 19

BODMAS

The BODMAS method is very confusing for me. Can anyone explain it, please? — Preceding unsigned comment added by G.gadhadharan (talk · contribs) 15:49, 19 January 2014 (UTC)[reply]

Did you see order of operations ? Did that not cover it ? StuRat (talk) 16:11, 19 January 2014 (UTC)[reply]

What part of it do you want explained? — Preceding unsigned comment added by Bliever1 (talk · contribs) 00:28, 20 January 2014 (UTC)[reply]

BODMAS is the easiest thing in the world.

Brackets, Orders, Division, Multiplication, Addition, Subtraction.

This is how you do it.

Step 1. Get an expression

Step 2. Eliminate the brackets by calculating the subexpression within the bracket using BODMAS (recursively)

Step 3. At this step there are no more brackets, so you eliminate the "Orders" (exponents and roots) by calculating the subexpression of each "Order" using BODMAS (recursively)

Step 4. Calculate the Divisions by calculating the numerator divided by the denominator. You may need to calculate the expression of numerator using BODMAS (recursively). You may also need to calculate the expression of the denominator using BODMAS (recursively). If there are multiple divisions then the direction of the division goes from LEFT to RIGHT

Step 5. Calculate the multiplications going from LEFT to RIGHT. Because everything at this stage is multiplication the direction is irrelevant, but I go from LEFT to RIGHT.

Step 6. Calculate the additions and subtractions going from Left to Right. Treat subtraction as addition of negative numbers.

Step 7. The end. That's all you need to do.

Oh I forgot, you need to understand the concept of recursion.
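
A small worked example of the steps above (an illustration added here, not from the original post): evaluate 2 + 3 × (4 − 1)^2. Brackets first: (4 − 1) = 3, leaving 2 + 3 × 3^2. Orders next: 3^2 = 9, leaving 2 + 3 × 9. Then Multiplication: 3 × 9 = 27, leaving 2 + 27. Finally Addition: 29. Division doesn't appear, so that step is skipped.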

What my teacher did not teach me

My maths teacher did not teach me that

  • Multiplication and Division have the same level of priority
  • Addition and Subtraction have the same level of priority

Multiplication and Division

To understand why Division has the same priority as Multiplication, you need to convert Division into multiplication by a reciprocal:

1 × 2 ÷ 3 × 4 ÷ 5

becomes

1 × 2 × (1/3) × 4 × (1/5)

Now you can see division as multiplication. Just multiply up the numbers from LEFT to RIGHT. You can do it on a calculator by tapping the following keys: "1" * "2" / "3" * "4" / "5" =

Another example:

1 ÷ 2 ÷ 3 × 4 ÷ 5

becomes

1 × (1/2) × (1/3) × 4 × (1/5)

Tap the following keys on the calculator: "1" / "2" / "3" * "4" / "5" =


Addition and Subtraction

1 − 2 + 3 − 4

becomes

1 + (−2) + 3 + (−4)

Again, just add up the numbers from LEFT to RIGHT


220.239.51.150 (talk) 09:39, 20 January 2014 (UTC)[reply]
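
The recipe above is exactly a recursive-descent evaluator, and it fits in a few dozen lines. Here is a minimal sketch in Python (my own illustration; it handles only non-negative numbers and the operators + - * / ^ ( ), with no unary minus):

    import re

    def tokenize(expr):
        """Split an expression into numbers, operators and brackets."""
        return re.findall(r"\d+(?:\.\d+)?|[-+*/^()]", expr)

    def parse(tokens):
        pos = 0

        def peek():
            return tokens[pos] if pos < len(tokens) else None

        def take():
            nonlocal pos
            tok = tokens[pos]
            pos += 1
            return tok

        def base():
            # Brackets: evaluate the bracketed subexpression recursively.
            if peek() == "(":
                take()                  # consume "("
                value = addsub()
                take()                  # consume ")"
                return value
            return float(take())

        def power():
            # Orders (exponents) bind tighter than * and /.
            value = base()
            if peek() == "^":
                take()
                value = value ** power()
            return value

        def muldiv():
            # Multiplication and division share one level, LEFT to RIGHT.
            value = power()
            while peek() in ("*", "/"):
                if take() == "*":
                    value *= power()
                else:
                    value /= power()
            return value

        def addsub():
            # Addition and subtraction share one level, LEFT to RIGHT.
            value = muldiv()
            while peek() in ("+", "-"):
                if take() == "+":
                    value += muldiv()
                else:
                    value -= muldiv()
            return value

        return addsub()

    print(parse(tokenize("1*2/3*4/5")))    # 0.5333... = 8/15
    print(parse(tokenize("2+3*(4-1)^2")))  # 29.0

Note how the two "same level" facts from the post fall out of the structure: * and / live in one loop, + and - in another.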

For reference, our article is at Order_of_operations. Note that this is a convention that we often follow. There is nothing else that makes it the "right" interpretation. So, compared to the rest of mathematics, this is one part that is fairly arbitrary! Other types of notation also work just fine, see for example reverse polish notation. SemanticMantis (talk) 21:58, 20 January 2014 (UTC)[reply]

January 20

Algebraic factorisation

This expression arose in some probability calculations I was doing on a square grid:

p^4 (1 + 4q + 10q^2 + 20q^3) + q^4 (1 + 4p + 10p^2 + 20p^3)

For p+q=1 it should have value 1 (checked with some trial values), suggesting that some power of (p+q) is a factor. Is it possible to factorise it in these terms?→31.54.113.130 (talk) 22:11, 20 January 2014 (UTC).[reply]

p+q is not a factor. Setting q=-p does not give zero. Sławomir Biały (talk) 23:30, 20 January 2014 (UTC)[reply]
The fact that the expression equals 1 when p + q = 1 means that (p + q − 1) is a factor of the expression minus 1. That factorization, (expression − 1) = (p + q − 1)·Q(p, q), is of no help whatsoever in factoring the expression itself (without the −1 term). In fact there is no multinomial in p and q with integer or rational coefficients which is a factor of your original expression (other than 1 and the expression itself). --catslash (talk) 00:25, 21 January 2014 (UTC)[reply]
Thanks, I wasn't thinking straight. I now realise that if (p+q) is taken out as a factor of some of the terms, it can then be set to 1 and vanish, whereupon the expression shortens and can be simplified in a like manner until eventually it does become a power of (p+q) and is thus seen to have value 1. This is less tedious than substituting for one of p and q as 1 minus the other and then expanding all of the powers. →31.54.113.130 (talk) 19:53, 21 January 2014 (UTC)[reply]
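Both observations above (the value 1 on p + q = 1, and the (p + q − 1) factor of the expression minus 1) are easy to verify with a computer algebra system; a sketch in Python using sympy (assuming it is installed):

    from sympy import symbols, expand, factor

    p, q = symbols('p q')
    E = p**4*(1 + 4*q + 10*q**2 + 20*q**3) + q**4*(1 + 4*p + 10*p**2 + 20*p**3)

    # Substituting q = 1 - p collapses the whole expression to 1:
    print(expand(E.subs(q, 1 - p)))   # 1

    # E itself does not factor over the rationals, but E - 1 does:
    print(factor(E - 1))              # a (p + q - 1) factor appears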

January 21

Lottery and math question.

First, let's imagine http://en.wikipedia.org/wiki/Mega-Sena.
Now, let's imagine a guy buys one ticket per month. That is 12 tickets per year.
Anyway, imagine that, instead of doing that, he bought all 12 tickets in the first month of the year (and didn't buy tickets for the rest of the year); would this guy have a better chance of winning than with the way he used to bet? 201.78.152.70 (talk) 11:44, 21 January 2014 (UTC)[reply]

The short answer - no. Only buying more tickets per draw would improve his chances. Start with Probability_theory and the related articles to see why. 196.214.78.114 (talk) 12:49, 21 January 2014 (UTC)[reply]

I think it does. A guy should buy 12 tickets in January only. Then when the results for January's draw are out, the guy must hide in a mine shaft. Then he communicates with his trusted friend and asks if ticket #1 has won. If his friend says "no it has not", then the probability of ticket #2 to ticket #12 winning has increased!!! Next repeat by asking his trusted friend if ticket #2 has won. When his friend says "no ticket #2 has not won", then the probability of ticket #3 to ticket #12 winning has increased!!! Keep doing so until only ticket #12 is left. You will find now that the probability of ticket #12 winning is higher than it was before the lottery result was announced.

I know you will not believe me. So try this thought experiment. Imagine a lottery which consists of just 13 different numbers. Calculate your chances of winning in two different ways.

(First way) Buy 12 tickets in January

(Second way) Buy 1 ticket a month from January to December

Next, imagine a lottery which consists of just 14 different numbers. And keep increasing the number until you hit X different numbers, where X is the total number of ways you can choose a lottery ticket.

Ohanian (talk) 15:22, 21 January 2014 (UTC)[reply]

Doing it the second way, you could (theoretically) win every time. You can only win once the first way... AndyTheGrump (talk) 16:03, 21 January 2014 (UTC)[reply]
I'm not going to write out the formulas, but the probability is actually slightly lower using the second method. However, the difference is on the order of p^2, where p is the probability of winning, so it is totally meaningless. Also, as Andy said, it is compensated by the possibility of winning more than once. Looie496 (talk) 16:17, 21 January 2014 (UTC)[reply]
Thanks for the info.
"it is compensated by the possibility of winning more than once." Op here, well when someone win the lottery, they usually dont think "hey, winning again would be really cool", most would never bet on lottery again. So this compensation is not really one.201.78.152.70 (talk) 17:16, 21 January 2014 (UTC)[reply]
Looie and Ohanian are correct, if you only consider the chances of winning, and the lotteries are fairly "normal" -- but to make any precise statements or formalize the chances, you'd have to make assumptions about the lotteries. E.g. if lotteries occur once a month, with the same number of tickets sold, and the same jackpot, the analysis becomes much easier. Without assumptions, we can't say much of anything. What if e.g. the December lottery only sells one ticket? SemanticMantis (talk) 18:00, 21 January 2014 (UTC)[reply]
I was assuming that any realistic winning prize would be considered good enough and the guy would not care about betting again. — Preceding unsigned comment added by 201.78.155.202 (talk) 12:01, 22 January 2014 (UTC)[reply]
Assuming that there is no significant variation in procedure between drawings, then yes, buying all your tickets for one draw is the best strategy to achieve the goal of "win once", though for most lotteries the increase in success is quite minimal. If you are considering this problem purely out of interest, though, then the case of a monthly raffle may be of interest, specifically one where the winner is decided by drawing the winning ticket from the pool of purchased tickets; it is interesting since, then, the "buy all tickets for one draw" strategy can end up being better only in the event that not everyone follows it. For example, with two drawings and two players, each able to buy two tickets, and able to use strategies A = "buy 2 for the first drawing" and B = "buy 1 ticket for each": if both use A, they each have a 1/2 chance of winning; if both use B, they each have a 3/4 chance of winning at least one; if player 1 uses A and player 2 uses B, then player 1 has a 2/3 chance of success and player 2 is guaranteed to win one. Again, if you are interested in the mathematics of it, it might be fun to consider the general outcome of m players, n drawings, k tickets total per player, and allowing any distribution of tickets to drawings; specifically, the problem of what generates the highest expected number of players getting "at least one win". Sorry for the long rant, just food for thought. :-) Phoenixia1177 (talk) 07:45, 23 January 2014 (UTC)[reply]
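To put rough numbers on Looie's point, a sketch in Python for a Mega-Sena-sized lottery (my own illustration; it assumes one uniformly random winning combination per draw and 12 distinct tickets in the single-draw strategy):

    from fractions import Fraction
    from math import comb

    N = comb(60, 6)                    # possible Mega-Sena tickets: 50,063,860
    p = Fraction(1, N)                 # chance that one ticket wins one draw

    one_draw   = 12 * p                # 12 distinct tickets, single draw
    many_draws = 1 - (1 - p) ** 12     # 1 ticket in each of 12 draws

    print(float(one_draw))               # ~2.397e-07
    print(float(many_draws))             # very slightly smaller
    print(float(one_draw - many_draws))  # difference is of order p^2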

probability

Please help me to find the answer to the following question. In a particular town the probability that an adult owns a car is 0.7, while the probability that an adult owns a car and is employed is 0.6. If a randomly selected adult is found to own a car, what is the probability that he is also employed? Thank you. 175.157.92.171 (talk) 16:10, 21 January 2014 (UTC)[reply]

Oh. I love mental arrhythmic homework questions. I have never driven 7 tenths of an automobile so can't answer you. --Aspro (talk) 20:20, 21 January 2014 (UTC)[reply]
Would you believe 6/7?→31.54.113.130 (talk) 20:24, 21 January 2014 (UTC)[reply]
I think you are wrong. My calculations show it is 0.6/0.7. 220.239.51.150 (talk) 04:11, 22 January 2014 (UTC)[reply]
Please see our article Conditional probability. The relevant formula here is:

$P(B \mid A) = \frac{P(A \cap B)}{P(A)}$

but math should never be about memorizing formulae. Read the article and study the illustrations until you fully understand the concept. Then, assuming that you understand the mathematical symbols, the formula will simply be the most concise way of expressing the concept. In the future you will be able to recreate the formula on your own after drawing your own diagram. If you have to look up the formula (or rely on memorizing it), then you don't understand the concept. -- ToE 05:28, 22 January 2014 (UTC)[reply]
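Applied to the numbers in the question, the formula gives 0.6/0.7; exact arithmetic, for example with Python's fractions module:

    from fractions import Fraction

    # P(employed | owns car) = P(owns car and employed) / P(owns car)
    print(Fraction(6, 10) / Fraction(7, 10))   # 6/7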

Thank you very very much. It's not for me; it's for my son. 175.157.57.250 (talk) 11:04, 22 January 2014 (UTC)[reply]

Best divisor approximation

Working on a recreational mathematical problem and came to think of this problem:

Given an integer n with a known prime factorization, is there an efficient algorithm for finding the divisor d of n which best approximates some positive real number x, i.e., such that $|d - x|$ is minimized?

Another variant could be to find the largest divisor $d \le x$, i.e., the divisor for which $x - d \ge 0$ is minimized.

An obvious method is to iterate through an unsorted list of all divisors generated from the prime factorization (easy to implement) and pick the one that comes closest. However, I would like this to work for highly composite numbers, such as primorials, where the number of divisors is overwhelmingly large.

An alternative idea I came to think of was a binary search, but this requires that I can generate the kth ordered divisor relatively efficiently, and I am not aware of such a method. I did notice, though, an interesting post from Numerical Recipes in 2009, devising a method for generating an ordered set of divisors of n without having to precalculate all of them and then sort them (although that appears to be the most efficient method unless you are bounded by memory). At least then you know when to stop the iteration, but the method is less efficient than simply finding all unordered divisors. The method uses some rather complicated data structures, and it appears to me that it can be adapted in a way such that a more directed search for a divisor close to or bounded by some number can be done, thus cutting down the number of computations significantly. I probably just need to read it 5-10 more times, and maybe I will understand :-).

A third idea I have had is to consider $\log d = \sum_i m_i \log p_i$, where $n = \prod_i p_i^{a_i}$ and $0 \le m_i \le a_i$, thus converting the bound variant of the problem into the knapsack problem, in general an NP-hard problem. The floating point values can be converted into integers by finding a minimum difference between the logs of the prime factors, considering the possible buildup of rounding errors from the known number of terms, multiplying by a suitable multiple of the smallest difference, and flooring it. Given that the problem seems to be equivalent to an NP-hard problem, I guess this means there are no simple ways to do this? Or can some properties of the prime factors be used to make it an 'easier problem'?

--Slaunger (talk) 20:17, 21 January 2014 (UTC)[reply]

This problem isn't equivalent to the knapsack problem, but NP-completeness can be used to eliminate possible approaches which are ultimately dead ends. For example, from my reading of the description of the algorithm in the blog, it could be adapted to produce an ordered list of all possible sums for an instance of the knapsack problem. If this could be combined with a binary search to produce an efficient algorithm then it would prove P=NP, which is unlikely. Having an algorithm that produces a sorted list of values is different from having an algorithm that predicts the next value from the current value. In the first case the algorithm may be storing results from the computation of all the previous values. So to use the algorithm to predict a next value you may have to, in effect, compute all the previous values again, making this approach very inefficient for what you're trying to do. You would run into the same kind of issue trying to do a binary search.
One quality of the divisor problem that you don't get with the general knapsack problem is nondegeneracy. For example you may have knapsack items of weights 3, 5, 11. These can be added in different ways to get the same weight, e.g. 11+11=3+3+3+3+5+5. But every combination of log p, for p prime, is unique by the fundamental theorem of arithmetic. I doubt the nondegenerate knapsack problem is any easier than the general knapsack problem, but there are several problems where a nondegeneracy assumption simplifies the solution.
A variation of the kind of problem you're talking about involves smooth numbers, i.e. numbers whose prime factors are all less than a given bound B. You might ask, given n and B, what is the smallest B-smooth number greater than n? --RDBury (talk) 19:23, 22 January 2014 (UTC)[reply]
It is a good point you have about the nondegeneracy. Actually the sets of $\log p_i$ are just about as perfect for getting a unique solution as can be. I do not quite understand, though, why you state in the beginning that the upper bounded version of my question is not equivalent to the knapsack problem? In what way is it not equivalent? One thing I came to think about, where the NP character of the problem maybe can be circumvented, is the special case where $x = \sqrt{n}$. There we know that if n has an odd number of divisors, the best approximant will be the middle one, i.e., $d_{(\tau(n)+1)/2}$ in the ordered set of divisors $d_1 < d_2 < \cdots < d_{\tau(n)}$, and if n has an even number of divisors, it will be one of $d_{\tau(n)/2}$ or $d_{\tau(n)/2+1}$. Could there be an efficient algorithm for just finding that single divisor among the ordered divisors? --Slaunger (talk) 21:45, 22 January 2014 (UTC)[reply]
It's not equivalent because in the divisor problem you're restricted to log p for weights, while in the general knapsack problem there is no such restriction. So if you can solve the knapsack problem efficiently you can solve the divisor problem, but the reverse does not necessarily hold. The binary search depends on finding the kth smallest divisor $d_k$ for a given k. Suppose n is the product of the first 1000 primes; then the number of divisors is $2^{1000}$. In order to find the middle divisor, $d_{2^{999}}$, with the algorithm given in the blog you would need to start from 1 and let it run until you got $2^{999}$ values, which would take a very long time. So being able to generate the divisors in order is not the same as having the entire list at once where you can easily jump to the middle; there may be too many divisors to even store them all at once, much less do a binary search on them. There may be a way to solve the divisor problem that uses some special properties of primes rather than approaching it as an instance of the knapsack problem, but I highly doubt it; primes don't have much in the way of such properties to work with. --RDBury (talk) 22:49, 22 January 2014 (UTC)[reply]
Ah, OK, I get it. You are right! Thanks. --Slaunger (talk) 23:23, 22 January 2014 (UTC)[reply]
Another simplifying aspect of this problem when converted into a knapsack problem is the fact that we can assume weight equals value. --Slaunger (talk) 09:26, 23 January 2014 (UTC)[reply]
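Since weight equals value here, one standard subset-sum workaround worth noting (my own suggestion, not from the thread) is meet-in-the-middle: split the primes into two halves, enumerate each half's divisors (2^(k/2) of them instead of 2^k, for k distinct prime powers), sort one list, and binary-search it. A sketch in Python:

    from bisect import bisect_left
    from itertools import product
    from math import prod

    def half_divisors(items):
        # items: list of (prime, exponent); all divisors of their product
        ranges = [[p**e for e in range(a + 1)] for p, a in items]
        return [prod(c) for c in product(*ranges)]

    def best_divisor(factorization, x):
        """Divisor of n (given as {prime: exponent}) closest to x."""
        items = sorted(factorization.items())
        A = half_divisors(items[::2])           # one half of the primes
        B = sorted(half_divisors(items[1::2]))  # the other half, sorted
        best = None
        for a in A:
            # divisor closest to x among a*B: binary-search x/a in B
            i = bisect_left(B, x / a)
            for j in (i - 1, i):
                if 0 <= j < len(B):
                    d = a * B[j]
                    if best is None or abs(d - x) < abs(best - x):
                        best = d
        return best

    # divisor of 30030 = 2*3*5*7*11*13 closest to 170:
    print(best_divisor({2: 1, 3: 1, 5: 1, 7: 1, 11: 1, 13: 1}, 170))  # 165

This is still exponential, so it won't touch the 1000-prime examples discussed above, but it extends the brute-force range considerably.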
(ec) Regarding the ordering, I just played around a bit with brute-forced ordered lists of the divisors of the first few primorials. I came to think about representing the primorials in a prime code binary representation inspired by the Fibonacci code, so the least significant digit is the multiplicity of the lowest prime, 2, in the prime factorization, the next digit is the multiplicity of the next prime, 3, etc. Thus, $p_1\# = 2$ has prime code 1, $p_2\# = 6$ has prime code 11, $p_3\# = 30$ has prime code 111, and $p_4\# = 210$ has prime code 1111. Now, any divisor of $p_n\#$ is found by either including a given prime of the primorial in the product or not. Thus, we can represent any divisor of $p_n\#$ by an n-digit binary prime code. For instance, $p_3\# = 30$, represented by the three-digit binary prime code 111, has the divisor 15 represented by the binary prime code 110. Now it is interesting to list the divisors for the primorials in ascending order, written in their binary prime code form:
n   p_n#   Ordered divisors in binary prime code form
0      1   0
1      2   0 1
2      6   00 01 10 11
3     30   000 001 010 100 011 101 110 111
4    210   0000 0001 0010 0100 0011 1000 0101 1001 0110 1010 0111 1100 1011 1101 1110 1111
5   2310   00000 00001 00010 00100 00011 01000 00101 10000 01001 00110 01010 10001 00111 10010 01100 01011 10100 10011 01101 11000 01110 10101 11001 10110 01111 11010 10111 11100 11011 11101 11110 11111
As can be seen, there is a certain kind of order in this progression. For the first three primorials the divisors are ordered in accordance with their binary prime codes. For the fourth we begin to see some disorder, as the divisor 5 (100) comes before 6 (011), but the underlying ordering of the lower bits is retained. Since it is easy to calculate the divisor with a given binary prime code, I was wondering if this first rough sorting can be of any help to reduce the computational complexity, or at least build up the solution recursively using dynamic programming (or maybe this is just the 0-1 knapsack problem restated in another form)? --Slaunger (talk) 23:03, 22 January 2014 (UTC)[reply]

Lenstra–Lenstra–Lovász lattice basis reduction algorithm. Count Iblis (talk) 22:12, 22 January 2014 (UTC)[reply]

Interesting (and complicated). Thanks, I'll look into that. Lattice reduction is a new concept for me. A whole new world opens... :D --Slaunger (talk) 23:22, 22 January 2014 (UTC)[reply]
OK, after trying to understand, I do not get it. How can this problem be mapped onto a lattice reduction problem? How do I choose the initial vectors and the bases? --Slaunger (talk) 23:38, 22 January 2014 (UTC)[reply]
I don't really see how LLL helps, but yes it's an important algorithm, along with PSLQ and other integer relation algorithms. For generating divisors in order given the factorization, I think you can do it straightforwardly, similarly to these schemes for enumerating (say) 5-smooth numbers (a standard programming exercise). Instead of having an "unlimited" supply of 2's, 3's, and 5's, you'd have some multisets giving the prime factorization of your number, and during the enumeration you'd keep track of which factors you'd already used. I'm getting sleepy but can explain it further tomorrow if this is too confusing. 50.0.121.102 (talk) 07:50, 23 January 2014 (UTC)[reply]
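A minimal sketch (my own) of that heap-based enumeration, generating the divisors in increasing order from the factorization without sorting the full list first:

    import heapq

    def divisors_in_order(factorization):
        """Yield divisors of n in increasing order; n is given as
        {prime: exponent}.  Pop the smallest candidate, then push its
        multiples by each prime whose exponent is not yet used up."""
        primes = sorted(factorization)
        start = (0,) * len(primes)
        heap = [(1, start)]
        seen = {start}
        while heap:
            d, exps = heapq.heappop(heap)
            yield d
            for i, p in enumerate(primes):
                if exps[i] < factorization[p]:
                    nxt = exps[:i] + (exps[i] + 1,) + exps[i + 1:]
                    if nxt not in seen:
                        seen.add(nxt)
                        heapq.heappush(heap, (d * p, nxt))

    # divisors of 360 = 2^3 * 3^2 * 5, all 24 in ascending order:
    print(list(divisors_in_order({2: 3, 3: 2, 5: 1})))

As the next comment notes, the heap and the seen-set still grow exponentially for primorial-like n, so this only pushes the practical limit out; it doesn't remove it.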
I'm realizing you're talking about primorials with a LOT of factors, so the above method won't help that much. In fact, taking approximations of logarithms of both sides of the equation (log base b rounded to n places), you've got a subset-sum problem, which is NP-complete, though the particular instance may not be hard (for one thing, all the numbers are positive). Hmm. The subset-sum article mentions some heuristic and approximate algorithms that might help. 50.0.121.102 (talk) 18:41, 23 January 2014 (UTC)[reply]
The LLL method can be used in a heuristic way to generate possible solutions. The number n is given as:

$n = \prod_{i=1}^{m} p_i^{a_i}$

Let's define m+1 vectors as follows. For $1 \le i \le m$, put:

where Z and W are large normalization factors that can be determined by trial and error. The $(m+1)$-th vector is then taken to be:

The number s will be of order 1 and is also determined by trial and error. The last components of the vectors will steer the reduced vectors to be within the constraints of the problem; the component before that favors optimal solutions to the problem. Of course, the outputs will not all be "legal solutions", but you can tune the parameters (or add extra components with more parameters) so that the outputs are steered more in the right direction. Count Iblis (talk) 12:43, 23 January 2014 (UTC)[reply]

January 22

Cigarettes smoked per day

Need to know how many cigarettes were smoked per day, on average, in each year from 2008 to 2014. Partial information accepted. worldometers is a source. This question applies worldwide. --78.156.109.166 (talk) 20:29, 22 January 2014 (UTC)[reply]

Not really a question for the Maths reference desk, but this [2] chart from the same source that worldometers cites gives annual data for 2000 and 2009 (5711 billion per year for 2000, 5884 billion per year for 2009). Any data for 2014 would have to be extrapolation. AndyTheGrump (talk) 20:33, 22 January 2014 (UTC)[reply]
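Converting those annual totals to the per-day averages the question asks about is just division (illustrative arithmetic in Python):

    # cited annual totals divided by 365 days
    print(5711e9 / 365)   # about 15.6 billion cigarettes per day in 2000
    print(5884e9 / 365)   # about 16.1 billion cigarettes per day in 2009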
Perhaps one should gather data on how many kilograms of tobacco are produced each year. Then estimate how many cigarettes are smoked each year from that data. 202.177.218.59 (talk) 23:52, 22 January 2014 (UTC)[reply]
To do that one would also need to know the proportion of tobacco that is consumed as cigarettes, rather than cigars, snuff, snus, chewing tobacco, dipping tobacco etc. Equisetum (talk | contributions) 14:07, 23 January 2014 (UTC)[reply]

January 23

Zeta Function and floor functions

The article on floor and ceiling functions says that it is easy to prove, using integration by parts, that if $\varphi(x)$ is any function with a continuous derivative on the closed interval $[a, b]$, then

$\sum_{a < n \le b} \varphi(n) = \int_a^b \varphi(x)\,dx + \int_a^b \left(\{x\} - \tfrac{1}{2}\right)\varphi'(x)\,dx + \left(\{a\} - \tfrac{1}{2}\right)\varphi(a) - \left(\{b\} - \tfrac{1}{2}\right)\varphi(b).$

How is this?

It then also goes on to say

$\zeta(s) = s\int_1^\infty \frac{\frac{1}{2} - \{x\}}{x^{s+1}}\,dx + \frac{1}{s-1} + \frac{1}{2}.$

And also, for $s = \sigma + it$ in the critical strip (i.e. $0 < \sigma < 1$),

$\zeta(s) = s\int_{-\infty}^{\infty} e^{-\sigma\omega}\left(\lfloor e^\omega\rfloor - e^\omega\right)e^{-it\omega}\,d\omega.$

Can anyone prove these statements? — Preceding unsigned comment added by 31.48.36.7 (talk) 00:13, 23 January 2014 (UTC)[reply]

The article cites [3] which explains it in more detail. --RDBury (talk) 10:23, 23 January 2014 (UTC)[reply]
Can anyone prove these statements? — No... Not anyone... :-) — 79.113.237.228 (talk) 13:19, 23 January 2014 (UTC)[reply]

Cheers, but that article starts by stating the first equation without explanation. I can sort of see some of the logic behind it, but I still can't prove it. — Preceding unsigned comment added by 31.48.36.7 (talk) 17:19, 23 January 2014 (UTC)[reply]