Wikipedia:Reference desk/Archives/Mathematics/2011 May 4
May 4
Proof of matrix exponential identities
How do you prove that e^{(a+b)X} = e^{aX} e^{bX} for a matrix X? I don't see a proof on Matrix exponential. Widener (talk) 02:11, 4 May 2011 (UTC)
- You can't because it's not always true. See section 1.2 of the article for further details. — Fly by Night (talk) 02:34, 4 May 2011 (UTC)
- It is always true. I wrote e^{(a+b)X} = e^{aX} e^{bX}, not e^{X+Y} = e^{X} e^{Y}. Widener (talk) 02:54, 4 May 2011 (UTC)
OK, let's have a go at it:

e^{(a+b)X} = Σ_{j=0}^∞ (a+b)^j X^j / j!                                     (line 1)
           = Σ_{j=0}^∞ Σ_{k=0}^j [a^{j-k} b^k / ((j-k)! k!)] X^j            (line 2)
           = Σ_{j=0}^∞ Σ_{k=0}^j [a^{j-k} X^{j-k} / (j-k)!] [b^k X^k / k!]  (line 3)
           = Σ_{k=0}^∞ Σ_{j=k}^∞ [a^{j-k} X^{j-k} / (j-k)!] [b^k X^k / k!]  (line 4)
           = Σ_{k=0}^∞ e^{aX} [b^k X^k / k!] = e^{aX} e^{bX}                (line 5)

Line 2 is the binomial theorem, and line 3 uses X^j = X^{j-k} X^k. The "problematic step", from line 3 to line 4, relies on absolute convergence. Everything commutes with everything since powers of X commute with each other. If we had aX + bY instead of aX + bX, then we wouldn't have that commutativity. Michael Hardy (talk) 03:47, 4 May 2011 (UTC)
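A quick numerical sanity check of e^{(a+b)X} = e^{aX} e^{bX} (a self-contained Python sketch; the 2×2 matrix X and the scalars a, b below are arbitrary choices, and the exponential series is simply truncated):

```python
import math

def matmul(A, B):
    """Multiply two 2x2 matrices given as nested lists."""
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def scale(c, X):
    return [[c * X[i][j] for j in range(2)] for i in range(2)]

def expm(X, terms=30):
    """Truncated power series e^X = sum_n X^n / n! for a 2x2 matrix."""
    result = [[1.0, 0.0], [0.0, 1.0]]   # the n = 0 term: the identity
    power = [[1.0, 0.0], [0.0, 1.0]]    # running X^n
    for n in range(1, terms):
        power = matmul(power, X)
        result = [[result[i][j] + power[i][j] / math.factorial(n)
                   for j in range(2)] for i in range(2)]
    return result

a, b = 0.3, 0.5
X = [[0.0, 1.0], [1.0, 0.0]]            # any matrix works; powers of X commute
lhs = expm(scale(a + b, X))
rhs = matmul(expm(scale(a, X)), expm(scale(b, X)))
assert all(abs(lhs[i][j] - rhs[i][j]) < 1e-9 for i in range(2) for j in range(2))
```

Repeating the experiment with two matrices that do not commute in place of aX and bX makes the assertion fail, which is exactly the caveat discussed in this thread.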
- How did you get from line 3 to line 4? I don't know what you've done to the limits of the sums. Widener (talk) 04:54, 4 May 2011 (UTC)
- In Σ_{j=0}^∞ Σ_{k=0}^j f(j,k) you are summing over all ordered pairs (j,k) with k ≤ j. This remains the same if you rewrite it as Σ_{k=0}^∞ Σ_{j=k}^∞ f(j,k). Because the sums are infinite you need absolute convergence to ensure that changing the order of summation will not change the result. Interchanging summations is a powerful technique that should be kept in mind whenever you are dealing with double sums.
- By the way, I think Michael's derivation requires only the obvious minor modifications to deal with the case of e^{X+Y} = e^{X} e^{Y} where X and Y commute, without requiring that Y be a scalar multiple of X. -- Meni Rosenfeld (talk) 07:23, 4 May 2011 (UTC)
- You can visualize it like this:
f[0,0] f[1,0] f[2,0] f[3,0] f[4,0] f[5,0] f[6,0] f[7,0] f[8,0] f[9,0]
--------
f[0,1] | f[1,1] f[2,1] f[3,1] f[4,1] f[5,1] f[6,1] f[7,1] f[8,1] f[9,1]
       ----------
f[0,2] f[1,2] | f[2,2] f[3,2] f[4,2] f[5,2] f[6,2] f[7,2] f[8,2] f[9,2]
              ----------
f[0,3] f[1,3] f[2,3] | f[3,3] f[4,3] f[5,3] f[6,3] f[7,3] f[8,3] f[9,3]
                     ----------
f[0,4] f[1,4] f[2,4] f[3,4] | f[4,4] f[5,4] f[6,4] f[7,4] f[8,4] f[9,4]
                            ----------
f[0,5] f[1,5] f[2,5] f[3,5] f[4,5] | f[5,5] f[6,5] f[7,5] f[8,5] f[9,5]
                                   ----------
f[0,6] f[1,6] f[2,6] f[3,6] f[4,6] f[5,6] | f[6,6] f[7,6] f[8,6] f[9,6]
                                          ----------
f[0,7] f[1,7] f[2,7] f[3,7] f[4,7] f[5,7] f[6,7] | f[7,7] f[8,7] f[9,7]
                                                 ----------
f[0,8] f[1,8] f[2,8] f[3,8] f[4,8] f[5,8] f[6,8] f[7,8] | f[8,8] f[9,8]
                                                        ----------
f[0,9] f[1,9] f[2,9] f[3,9] f[4,9] f[5,9] f[6,9] f[7,9] f[8,9] | f[9,9]
- You can verify that for both ways of writing the sum, you are summing the same terms - those above the line. -- Meni Rosenfeld (talk) 09:23, 4 May 2011 (UTC)
The double sum on line 3 says j and k
both run from 0 to ∞ but k never gets bigger than j.
The double sum on line 4 says j and k
both run from 0 to ∞ but j stays at least as big as k.
What makes the step "problematic" is that rearranging infinite sums can change the value of the sum unless it converges absolutely. Michael Hardy (talk) 19:06, 4 May 2011 (UTC)
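The interchange between line 3 and line 4 can also be checked numerically with truncated sums (a Python sketch with X = 1 and arbitrary a, b; f(j,k) is the term summed over the triangle k ≤ j):

```python
import math

a, b, N = 0.3, 0.5, 40   # arbitrary scalars and an arbitrary cutoff

def f(j, k):
    # the (j, k) term of the double sum, with X = 1
    return a**(j - k) * b**k / (math.factorial(j - k) * math.factorial(k))

# line 3: outer sum over j, inner k runs 0..j
s3 = sum(f(j, k) for j in range(N) for k in range(j + 1))
# line 4: outer sum over k, inner j runs k..N-1 (same triangle, other order)
s4 = sum(f(j, k) for k in range(N) for j in range(k, N))

assert abs(s3 - s4) < 1e-12               # same terms, same total
assert abs(s3 - math.exp(a + b)) < 1e-9   # both converge to e^(a+b)
```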
- The proof follows directly from the Baker–Campbell–Hausdorff formula. Let X be a complex, n × n matrix, and w and z be two complex numbers. It is clear that zX and wX commute. Let us relabel A = wX and B = zX. It is clear that the Lie bracket vanishes, i.e. [A,B] = AB − BA = 0. Thus all but the linear terms in the explicit Baker–Campbell–Hausdorff formula vanish. In particular log(exp(A)⋅exp(B)) = A + B, and the result follows. Moreover, the result holds for any two commuting elements of the Lie algebra, over a field of characteristic zero, of a simply-connected Lie group. (Explicitly in terms of matrices, take a look at this subsection) — Fly by Night (talk) 23:51, 5 May 2011 (UTC)
PRNG based on Riemann Zeta Function
After reading Cryptonomicon, I am curious if there really is a PRNG based on the Zeta function. I googled but couldn't find anything. Does anyone know of any resource I can look into? Any good ideas on how the Zeta function could be used to produce pseudo-random numbers? Like using the digits of the (imaginary part of the) zeros or something (maybe a binary expansion of a zero and then XORing with plaintext binary)? Which leads me to another question: how are the (binary or decimal) digits of Riemann Zeta function zeros distributed? Are they uniform? Do we know anything about them? Also, do we know if they are rational or irrational (in base 2, I guess)? Because if the binary expansion is nonterminating, then we have a stream cipher of arbitrary length. The key could be which zero it is, and also maybe how many decimal places to truncate at the beginning before starting encryption, so that we can ignore, say, the first 10,000 digits and start XORing with the 10,001st digit after the decimal point. Thanks!-Looking for Wisdom and Insight! (talk) 06:00, 4 May 2011 (UTC)
- Cryptonomicon is a novel, and the author gets annoyed with readers who expect all the stuff in it to be mathematically or historically accurate. There isn't a (known) cryptosystem like that thing with the zeta function. The author made it up as a plot device. 69.111.194.167 (talk) 10:39, 4 May 2011 (UTC)
- As an aside why would the base have anything to do with whether a number is rational or not?-Shahab (talk) 12:10, 4 May 2011 (UTC)
I know that it's a fictional source, but there are some things in it that are based in reality, such as places, battles and people. If I ask about Admiral Yamamoto, are you going to say that it's foolish for me to expect Admiral Yamamoto to be a real person just because he is mentioned in a fictional novel? I don't know about this scheme and that's why I asked. A cryptographic scheme based on the Riemann zeta function is not that far-fetched or unlikely anyway. It seems rather plausible (in fact a good idea, I think). There are plenty of technical things in that novel which are "real", so why did it irk people when I asked about this scheme? Just a simple "no, I have never seen one" would have been fine...and then wait to see what the consensus is, if others have never seen one or not. As for Shahab, I was thinking of the termination and periodicity of the expansion of the zeros in binary and decimal. Yes, I could have said it better. I thought of non-repeating and non-terminating and said irrational.
On a completely different note, why is this reference desk becoming so snappy? I have been visiting here for a long time and I don't know much about any one topic, so usually I just ask questions or read others' questions and responses hoping to learn something. And I have noticed that people just have a short temper all of a sudden. What's up with the condescending answers, as if you are talking to a naive three-year-old or something? Why all this suspicion that everyone is asking you to do their HW? Does no one here have good faith anymore? What happened to the spirit of learning and teaching those who want to learn, spreading knowledge, and perhaps having an epiphany yourself as you explain something to another? The people who answer here, do they feel abused or taken advantage of? Most of us inquirers don't have that intention. And this is a voluntary service, I thought. If it makes you that upset, then just don't answer. If I know something, I don't have a problem answering. If you think you are being abused, then either say so politely (to let the other person know that you are not stupid or gullible and you know that he is trying to manipulate you) or just stay quiet and not answer at all. I am a big fan of this desk and for many years I have been visiting here almost daily. And now I am truly sorry to say that some of the responses (their tone and what they suggest, not the technical matter) just seem ridiculous.
I ask again (in good faith) if anyone has ever heard of an encryption scheme based on the Riemann Zeta Function. Or if someone can please point me to maybe some studies (statistical analysis?) on the decimal/binary expansions of the non-trivial zeros of the Zeta function: how are the digits distributed, what is a fast way to generate the decimal/binary expansion of a zero, or is there some explicit way to generate the digits (like the HEX expansion for Pi)? Or do I have to generate the first 20000 decimal places of a zero before I can get to the 20001st? Do we know anything about the zeros being irrational? Are they completely devoid of patterns in the long run (as far as we know), or do we see cyclic behavior or degeneration of some kind far out? Papers, books, hyperlinks, PDFs, anything would be nice. Thanks!-Looking for Wisdom and Insight! (talk) 00:01, 5 May 2011 (UTC)
- I agree, it seems like a reasonable question, especially given that the playing-card-based cipher in the novel is real (Solitaire).
- A Google search for "zeta function cryptography" turned up a letter from Neal Stephenson that says that the zeta-function cryptosystem in the novel was pure fiction, but there has been real cryptographic research involving zeta functions. As far as I can tell, these are zeta functions of elliptic curves and not the Riemann zeta function.
- It is possible to use the digits of mathematical constants as pseudorandom bits. The problem is that computing digits of π or ζ(3) is orders of magnitude slower than other known methods of generating pseudorandom bits, and I don't think there's any theoretical reason to believe that it's any more secure. As an example, Salsa20 can generate several billion bits per second on a commodity x86 processor, while generating a similar number of digits of π would take hours or days, I think (considering that you'd need to start from the 2^128th digit, or so, to make it secure). -- BenRG (talk) 04:06, 5 May 2011 (UTC)
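The OP's XOR idea is easy to sketch (Python; a toy illustration, not a secure cipher — the hard-coded digit string is just the start of the imaginary part of the first nontrivial zero, 14.134725…, and in practice you would compute zeros to arbitrary precision, e.g. with mpmath's zetazero):

```python
def keystream(digits, skip=0):
    """Turn a string of decimal digits into key bytes, two digits per byte."""
    digits = digits[skip:]
    for i in range(0, len(digits) - 1, 2):
        yield int(digits[i:i + 2]) % 256

def xor_cipher(data, digits, skip=0):
    """XOR each plaintext byte with the next key byte; the same call decrypts.
    The keystream silently truncates: data longer than the digit supply is
    left unencrypted past the end (fine for a toy)."""
    return bytes(b ^ k for b, k in zip(data, keystream(digits, skip)))

zero_digits = "14134725141734693790"  # leading digits of the first zeta zero
msg = b"attack"
ct = xor_cipher(msg, zero_digits)
assert ct != msg
assert xor_cipher(ct, zero_digits) == msg   # XOR is its own inverse
```

The `skip` parameter plays the role of the OP's "ignore the first 10,000 digits" offset.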
- π, ζ(3), etc. are not random or pseudorandom at all (they are fixed constants). The zeta function is important in number theory and no doubt shows up in algorithms for integer factoring and discrete logarithms, which are the basis of public-key cryptography. But it's a big leap from that to say that systems like RSA and Diffie-Hellman are based on the zeta function. Note that the Blum-Blum-Shub pseudorandom generator is theoretically important and sometimes even used in practice, despite being quite slow, because of its provable reducibility to factoring. 69.111.194.167 (talk) 10:36, 5 May 2011 (UTC)
- The idea of a π-based PRNG is that you generate digits starting at a large offset in the binary expansion (2^128 or so), the exact position being derived from the seed. Any PRNG can be reinterpreted as the generation of the digits of some real number (a rational number, in fact, if the period is finite). -- BenRG (talk) 21:28, 5 May 2011 (UTC)
- And you are going to figure out the 2^128th digit of π exactly how? The Bailey–Borwein–Plouffe formula might get you to the 2^40th or so if you've got enough supercomputer time, but that's useless for cryptography. 69.111.194.167 (talk) 02:46, 6 May 2011 (UTC)
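As an aside, the Bailey–Borwein–Plouffe digit extraction mentioned here is short enough to sketch in Python (illustrative only; float round-off limits this naive version to modest n, and even careful implementations do work roughly linear in n, which is why 2^128 is out of reach):

```python
def pi_hex_digit(n):
    """n-th hexadecimal digit of pi after the point (n >= 1), via the BBP
    formula pi = sum_k 16^-k (4/(8k+1) - 2/(8k+4) - 1/(8k+5) - 1/(8k+6))."""
    def S(j):
        # fractional part of sum_k 16^(n-1-k) / (8k + j)
        s = 0.0
        for k in range(n):                      # "head": modular exponentiation
            s = (s + pow(16, n - 1 - k, 8 * k + j) / (8 * k + j)) % 1.0
        k, term = n, 16.0 ** -1
        while term > 1e-17:                     # rapidly vanishing "tail"
            term = 16.0 ** (n - 1 - k) / (8 * k + j)
            s += term
            k += 1
        return s
    x = (4 * S(1) - 2 * S(4) - S(5) - S(6)) % 1.0
    return int(x * 16)

# pi = 3.243F6A88... in hexadecimal
assert [pi_hex_digit(n) for n in range(1, 7)] == [2, 4, 3, 0xF, 6, 0xA]
```

The point of the modular exponentiation in the "head" is that digit n is reached without ever computing digits 1..n-1 to full precision, but each digit still costs O(n) arithmetic.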
- I think RD/Ma has been calmer lately than it's been at some times. No need to freak out if occasionally an editor is a little tense. If you think someone is out of line just tell him so on a case-by-case basis. -- Meni Rosenfeld (talk) 20:42, 8 May 2011 (UTC)
Visually Displaying Divergence from a Planned Sequence
I work in a warehouse where we load and despatch about 300 lorries ("loads") to shops. Each day, the warehouse is issued with a planned despatch schedule, where each load is allocated a number (1,2,3,4...etc) that corresponds to the position in the schedule for despatch (i.e. first load of the day is #1, second is #2 etc). Each day, the warehouse despatches many loads outside of that planned sequence.
So, I effectively have two lists of numbers: the first list is the planned despatch numbers (1,2,3,4...etc); the second list is the actual sequence in which the loads were despatched (1,4,2,3...etc).
I want to easily show my superiors (=simple people) where the warehouse deviates from the load schedule, i.e. the difference between plan and actual. What is the best way to visually display the divergence from the planned sequence? (I have access to Microsoft Excel 2007 or Minitab, if that makes a difference.)
Many thanks for your time. --sparkl!sm hey! 15:12, 4 May 2011 (UTC)
- It looks like you want "distance" from the planned schedule. I suggest using Levenshtein distance as a start. It counts how many "edits" have to be made to change one sequence into another. In a perfect schedule, you have a distance of 0 - no changes. You can then have a distance from the planned schedule per week. It is a single integer that is easy to show on a chart, week by week. -- kainaw™ 15:18, 4 May 2011 (UTC)
- Thanks, that is a really interesting way of looking at it, and could easily be used to track our daily divergence and even compare our different warehouses. Great response! It isn't quite what I had in mind though, because I want to show how the schedule changes at different points during the day (due to many external factors). There are times of the day where the schedule is followed exactly, and there are times where it appears no schedule is being followed at all, and it is this activity that I want to display. Thanks again! --sparkl!sm hey! 15:31, 4 May 2011 (UTC)
- It gets rather complicated. As you look at different distance algorithms, you start asking more and more questions. For example, assume the original sequence is 1234. What value of distance do you place on a deletion: 134? What about an insertion: 12534? What about a neighbor inversion: 1324? What about a non-neighbor inversion: 1432? Levenshtein distance is usually calculated with the Wagner-Fischer algorithm. If you look at Smith-Waterman and Needleman-Wunsch distances, you will see how you can manipulate the Wagner-Fischer algorithm to produce a lot of different values. I did a writeup on it a while back here. Because it is a matrix solution to the distance problem, you can calculate the matrix solution step-by-step in real time as new data arrives, showing the change in distance with each new bit of information. Again, 0 would be a perfect day. The larger the number, the worse the distance. -- kainaw™ 16:48, 4 May 2011 (UTC)
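For concreteness, the Wagner-Fischer dynamic program mentioned above can be sketched in Python (unit costs throughout, i.e. plain Levenshtein distance; adjusting the three costs gives the weighted variants):

```python
def levenshtein(planned, actual):
    """Wagner-Fischer: edit distance with unit insert/delete/substitute costs."""
    m, n = len(planned), len(actual)
    # d[i][j] = distance between planned[:i] and actual[:j]
    d = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(m + 1):
        d[i][0] = i                     # delete everything
    for j in range(n + 1):
        d[0][j] = j                     # insert everything
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            cost = 0 if planned[i - 1] == actual[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,         # deletion
                          d[i][j - 1] + 1,         # insertion
                          d[i - 1][j - 1] + cost)  # substitution (or match)
    return d[m][n]

assert levenshtein([1, 2, 3, 4], [1, 3, 4]) == 1        # one deletion
assert levenshtein([1, 2, 3, 4], [1, 2, 5, 3, 4]) == 1  # one insertion
assert levenshtein([1, 2, 3, 4], [1, 3, 2, 4]) == 2     # neighbor inversion
assert levenshtein([1, 2, 3, 4], [1, 2, 3, 4]) == 0     # a perfect day
```

Feeding in the day's planned and actual load lists gives the single per-day number described above; since only the previous row of d is ever consulted, the table can also be extended incrementally as new despatches arrive.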
- Hmm, I see your point. Insertions and deletions are not really of much interest at this point, just changes to order. In terms of Levenshtein distance, I understand the theory, but how could we take into account the different lengths of the schedule, i.e. a different number of loads? Thanks --sparkl!sm hey! 06:35, 5 May 2011 (UTC)
- If, each time you have a deletion, rather than deleting it you place it at the end of the current list, you will be able to use your distance tools and have a plausible measure. If you delete the first load of the day it is presumably more noticeable than if you delete the last load of the day. For insertions, add the new load to the end of the original list and initially to the end of the modified list, but before all the deleted loads. Then if it is an extra load at the end of the day, no extra distance will be involved. You can then modify the new list to put the insertion where you want it and get a measure of that too. -- SGBailey (talk) 09:08, 5 May 2011 (UTC)
- Do you care about timings or only about sequence? For example, suppose an earlier-than-planned shipment delays all subsequent shipments, but they still happen in the correct order. Do the subsequent shipments all get a bad score because they're late, or do they get an OK score because they are in sequence? 109.153.234.58 (talk) 17:42, 4 May 2011 (UTC)
- Timings are obviously important to us, but not for this exercise - we are only interested in displaying changes to the sequence at this point. --sparkl!sm hey! 06:29, 5 May 2011 (UTC)
- Your superiors probably want to see the graph y=f(x) where y is the second list (1,4,2,3...) and x is the first list (1,2,3,4,...). When the actual sequence deviates from the schedule, then the graph deviates from the straight line. Bo Jacoby (talk) 05:22, 5 May 2011 (UTC).
- For a numerical measure, a very simple method is just to add the (absolute) differences between the expected next sequence number and the actual sequence number. For example, consider 2,3,1,4,5. At the start we have 2 where we expect 1, so that's a score of 1. Then 3 where we expect 3, which is a score of 0. 1 when we expect 4 is a score of 3, then, in exactly the same way, further scores of 2 and 0 for items 4 and 5. I think one could scratch around and find some anomalies in this method (e.g. the exact reverse sequence is nowhere near the worst score), but it has the advantage of being very simple to understand and implement. 86.179.119.184 (talk) 11:30, 5 May 2011 (UTC)
- For statistical reasons, I'd suggest using the square root of the sum of the squares of the differences between the expected and actual shipments. The general theory is no different from using absolute values. -- kainaw™ 12:27, 5 May 2011 (UTC)
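Both proposed measures fit in a few lines (a Python sketch; I'm reading "expected next sequence number" as one more than the previously despatched load, which is what the worked example above implies):

```python
import math

def divergence_scores(actual):
    """Per-load |expected - actual| scores; expected = previous actual + 1."""
    expected = 1
    scores = []
    for load in actual:
        scores.append(abs(load - expected))
        expected = load + 1
    return scores

def total_divergence(actual):
    """Sum of absolute differences (86.179.119.184's measure)."""
    return sum(divergence_scores(actual))

def rms_divergence(actual):
    """Square root of the sum of squared differences (kainaw's variant)."""
    return math.sqrt(sum(s * s for s in divergence_scores(actual)))

assert divergence_scores([2, 3, 1, 4, 5]) == [1, 0, 3, 2, 0]  # worked example
assert total_divergence([2, 3, 1, 4, 5]) == 6
assert total_divergence([1, 2, 3, 4, 5]) == 0   # schedule followed exactly
```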
These are all terrific answers folks. We have so far used Bo Jacoby's chart of y=f(x), which shows the divergence in a beautifully simple way, and longer-term I think we will use Kainaw's suggestion of plotting distance of some sort (possibly on an hour-by-hour basis) as a comparison measure for all of our warehouses. 86.179.119.184's idea is also a cracker, and we'll have a go at that too. Problem solved, thanks guys! Now, we just need to make sure your groceries are delivered on time.... --sparkl!sm hey! 12:44, 5 May 2011 (UTC)
- I second Bo's suggestion, and would put it in bar graph form, with one color used for the bits that go over the expected x=y line, and another color for the bits that fall short of that line. One hidden psychological advantage of Bo's method is that any upward pointing graph subconsciously indicates that progress is being made (and I suppose it is, in that deliveries are being made). StuRat (talk) 21:45, 6 May 2011 (UTC)
Group theory: generators and relations within the commutator subgroup
I'm trying to understand a paper, so I'm taking this out of context a bit. Let F be the free group on generators a and b, G the commutator subgroup of F, and H the normal closure in F of {[a^2,b], [a,b^2]}. I'd like to identify the isomorphism class of G/H (pretty sure it's C2). My idea is to use the generators of H to reduce the generators for G/H to a finite list, then use the generators of H again to find relations. In general H could be the normal closure of any subset of G and I know you can get word problem issues if you try to find a general solution, but I'd like to find a technique that at least works some of the time. So is this more or less the right approach or are there more sophisticated methods I'm not aware of? My knowledge of free groups was always pretty sketchy and I've forgotten much of it.--RDBury (talk) 17:30, 4 May 2011 (UTC)
- Correction: It looks like G/H is actually infinite cyclic.--RDBury (talk) 17:51, 4 May 2011 (UTC)
- The general situation you're describing boils down to finding the commutator subgroup of F/H, since there's an easy isomorphism between [F,F]/H (this will always be defined, since you're requiring that H be normally generated by a subset of G) and [F/H, F/H]. Since the isomorphism problem, given arbitrary presentations, is hard, you're right: it doesn't seem like there'll be much hope for a general method of attack here. What is your proposed generator for G/H? I'm not great at messing around with presentations... Icthyos (talk) 09:47, 5 May 2011 (UTC)
- I'm starting with the conjugates [a,b]^{a^i b^j} as a set of generators for G. In the specific case, a^2 and b^2 are in the center of F/H, and it's pretty easy to expand the expressions to show [a,b]^{a^2} = [a,b]^{b^2} = [a,b] (mod H), which reduces the number of generators to 4. Expanding also gives [a,b]^a [a,b] = [a,b]^b [a,b] = 1 (mod H), which reduces the number of generators to 1. If you map F to the infinite dihedral group, presented as <a, b : a^2 = b^2 = 1>, then H is in the kernel and [a,b] maps to (ab)^2, which has infinite order, proving that [a,b] has infinite order in G/H. It's a pretty basic approach, no Sylow theory or character tables, just grinding. I just wanted to know there wasn't an easier, more sophisticated way before I start on harder examples.--RDBury (talk) 15:50, 5 May 2011 (UTC)
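RDBury's last step can be verified concretely with 2×2 integer matrices (a Python sketch; the matrices below are one standard representation of the infinite dihedral group, realizing the two generating reflections of Z as x -> -x and x -> -x + 1):

```python
def mm(P, Q):
    """2x2 integer matrix product."""
    return [[P[0][0]*Q[0][0] + P[0][1]*Q[1][0], P[0][0]*Q[0][1] + P[0][1]*Q[1][1]],
            [P[1][0]*Q[0][0] + P[1][1]*Q[1][0], P[1][0]*Q[0][1] + P[1][1]*Q[1][1]]]

I = [[1, 0], [0, 1]]
A = [[-1, 0], [0, 1]]   # image of a: the reflection x -> -x
B = [[-1, 1], [0, 1]]   # image of b: the reflection x -> -x + 1

# a^2 = b^2 = 1 in the image, so both relators [a^2,b], [a,b^2] map to 1,
# i.e. H lands in the kernel
assert mm(A, A) == I and mm(B, B) == I

# [a,b] = a^-1 b^-1 a b = abab here, since the images are involutions
comm = mm(mm(A, B), mm(A, B))
assert comm == [[1, -2], [0, 1]]   # the translation x -> x - 2, i.e. (ab)^2

# powers of comm never return to the identity: it has infinite order
P = I
for n in range(1, 10):
    P = mm(P, comm)
    assert P != I
```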