Wikipedia:Reference desk/Archives/Mathematics/2010 October 11

Mathematics desk
Welcome to the Wikipedia Mathematics Reference Desk Archives
The page you are currently viewing is an archive page. While you can leave answers for any questions shown below, please ask new questions on one of the current reference desk pages.


October 11[edit]

Markov Chains[edit]

Hello everyone, I have a question here which I have been pondering for a while but can't figure out how to do in the general case. We are studying first-step analysis in a Markov chain class and I am trying to figure out how to do this. Let P be a transition probability matrix for a Markov chain and let mi(j) be the expected number of visits to state j, starting at i, before the next visit to state i. Let mi denote the row vector (mi(1), mi(2), ...). If the chain is irreducible and recurrent, show that mi satisfies miP = mi and that 0 < mi(j) < ∞ for all j in the state space. Note that by this definition mi(i) = 1. I suppose this would help us solve for such expectations by simply solving a left eigenvalue/eigenvector problem (with the constraint that the i-th component is 1, to have a unique solution). I can do specific examples and I have done numerical examples to convince myself that it is true, but I don't know where to begin for a general case. Any help would be appreciated. Thanks! -Looking for Wisdom and Insight! (talk) 05:53, 11 October 2010 (UTC)[reply]

It is hard to provide really helpful answers to questions like this because we don't know which useful helper properties your text has already established. Here are some hints that point toward a proof from first principles:
  1. Let Pi be the matrix whose ith column is that of P (that is, the probabilities of transitions into state i) and whose other columns are 0. Then P−Pi is the transition matrix of the chain killed upon entering i.
  2. Compute each mi(j) separately following the hint I gave 174.29.63.159 yesterday (see above). Summing over the steps of an excursion shows that mi = ei + mi(P−Pi), so mi is exactly the ith row of the inverse of the matrix I−P+Pi.
  3. Therefore mi(I−P+Pi) = ei, and the rest is just algebra, remembering that by recurrence an excursion from i enters i exactly once, so the single nonzero entry of miPi equals 1.
It may be helpful while thinking through the matter to fix i=1 (which you can do without loss of generality). –Henning Makholm (talk) 14:43, 11 October 2010 (UTC)[reply]
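A quick numerical check of these hints on a small example (the 3-state chain below is arbitrary; the convention here is that Pi keeps the ith column of P, the probabilities into state i):

```python
# Check on an arbitrary irreducible 3-state chain that
#   mi = ei (I - P + Pi)^{-1}   satisfies   mi(i) = 1  and  mi P = mi,
# where Pi keeps only the i-th column of P (transitions into state i).
import numpy as np

P = np.array([[0.2, 0.5, 0.3],
              [0.4, 0.1, 0.5],
              [0.3, 0.3, 0.4]])
i = 0

Pi = np.zeros_like(P)
Pi[:, i] = P[:, i]                 # probabilities of moving into state i

mi = np.linalg.inv(np.eye(3) - P + Pi)[i]   # i-th row of the inverse

assert np.isclose(mi[i], 1.0)      # mi(i) = 1 by the chosen normalization
assert np.allclose(mi @ P, mi)     # invariance: mi P = mi
assert (mi > 0).all()              # 0 < mi(j) < infinity
print(mi)
```

Normalizing mi by its sum recovers the stationary distribution, which is one way to see why the left-eigenvector characterization in the question works.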

Understanding the Double Dual[edit]

Resolved

Dual_space#Injection_into_the_double-dual mentions that for an infinite-dimensional vector space V, V** and V are not isomorphic, but I'm having trouble understanding why. Using the notation in the article, if I have the vector space V = K^∞ (the space of vectors (a1,a2,...) with each ai in K and all but a finite number equal to zero), then V* = K^N (the space of all vectors (a1,a2,...) with each ai in K). But then what does the double dual V** look like? What is an example of an element of V** that is not in the image of the natural injection Ψ:V → V**?

Consider the subset of V* that consists of all the projection functions (which just return one of the components of their argument), together with the function that sums all of the components. This set is linearly independent (because no finite linear combination of projection functions adds up to the sum function). Extend it to a basis for V* (you may need the axiom of choice for this). Now consider the element of V** that expresses its argument in that basis and gives you the coefficient of the sum function. –Henning Makholm (talk) 17:49, 11 October 2010 (UTC)[reply]

In particular I'm trying to answer the following question: if I have some subspace W of V* from above, does the set W0∩Ψ(V) uniquely determine W? Here W0 ⊂ V** denotes the annihilator of W, and Ψ(V) is the set of elements of V** that can be represented as vectors with only finitely many non-zero terms. If anybody could help me understand this better I would appreciate it. Rckrone (talk) 17:26, 11 October 2010 (UTC)[reply]

I don't think your W0∩Ψ(V) can distinguish between the entire V* and the subspace spanned by the projection functions (that is, the elements of V* with finitely many nonzero coefficients). You get only the zero vector in both cases, because no nonzero element of Ψ(V) annihilates every projection function. –Henning Makholm (talk) 18:07, 11 October 2010 (UTC)[reply]
Yeah, you're right. Thanks a lot for the explanation. It's a lot clearer now. Rckrone (talk) 18:14, 11 October 2010 (UTC)[reply]

As a concrete example, think of the set of all trigonometric polynomials with period 2π with complex coefficients, with inner product given by

⟨f, g⟩ = (1/2π) ∫0^{2π} f(x) g(x)* dx

(where * denotes complex conjugation).
Each trigonometric polynomial has only finitely many terms. But the double dual of this space is a space of Fourier series, and generally such a series has infinitely many terms. Michael Hardy (talk) 23:42, 11 October 2010 (UTC)[reply]
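What makes the Fourier-coefficient picture work is that the exponentials e^{inx} are orthonormal under this inner product (assuming the usual 1/2π normalization); this is easy to check numerically:

```python
# Numerically check orthonormality of e^{inx} under the (assumed) inner
# product <f, g> = (1/2pi) * integral over [0, 2pi] of f(x) * conj(g(x)) dx.
import numpy as np

x = np.linspace(0.0, 2*np.pi, 20000, endpoint=False)

def ip(f, g):
    # On a uniform grid over one full period, the mean approximates
    # (1/2pi) times the integral.
    return np.mean(f * np.conj(g))

e = lambda n: np.exp(1j * n * x)

assert abs(ip(e(3), e(3)) - 1) < 1e-9    # <e_n, e_n> = 1
assert abs(ip(e(3), e(5))) < 1e-9        # <e_m, e_n> = 0 for m != n
```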

Sorry, I should have specified in my question, but I was talking about the algebraic dual, rather than the continuous dual. I think in the algebraic case, the double dual can't contain any infinite series of (the images of) the basis elements of the original space since there will be some functional in the dual space for which the "inner product" diverges. There's probably a more precise way to say that. Rckrone (talk) 02:10, 12 October 2010 (UTC)[reply]


But note that there is a further issue here. In the notation of the linked article, the canonical inclusion Ψ of V in the bi-dual V** of the F-vector space V is linear and injective, and it is surjective if and only if V is finite-dimensional; Henning Makholm has shown a proof. Saying that V and V** are not isomorphic is a stronger statement (between isomorphic vector spaces of infinite dimension there are of course injective linear maps that are not surjective).
There was an interesting discussion here at RD/M on September 25, 2009 on the case of countable dimension. For future reference (this one!), that time I gave an example of a family, of the cardinality of the continuum, of subsets of N whose characteristic functions are linearly independent in K^N (whatever the field K is). (I did not happen to think about uncountable dimensions.) --pma 20:03, 13 October 2010 (UTC)[reply]
Assuming the generalized continuum hypothesis, your example generalizes nicely to uncountable dimensions. Let V be any vector space of infinite dimension. I claim that the algebraic dual V* has strictly larger dimension than V. Proof. Choose a basis B for V. Any subset of B naturally corresponds to an element of V* (its characteristic function on the basis). A chain in P(B) corresponds to a linearly independent set in V*. The following lemma guarantees that there are very long chains.
Lemma. Assume GCH, and let A be any infinite set. Then P(A) contains a chain of cardinality 2^|A|.
  • Without loss of generality, assume that A is an initial ordinal.
  • Let B = P(A) with the lexicographic ordering -- that is, X < Y iff a ∈ Y, where a is the least element on which the two sets differ. (Intuition for B: B is the set of binary fractions between 0 and 1.)
  • Let D = {X ∈ B : X is bounded in A}. (Intuition: dyadic rationals are countable yet dense.)
  • Because every bounded X is a subset of some α < A, we have |D| ≤ Σα<A 2^|α| = |A|. (GCH used for 2^|α| ≤ |A|.)
  • Define an order-embedding Φ: B → P(D) by Φ(X) = {Z ∈ D : Z ≤ X}. (Intuition: Dedekind cuts.) It is trivial that this mapping is monotonic, and fairly easy to see that it is injective.
Thus, the image of Φ is a chain in P(D) of cardinality the same as B, namely 2^|A|; since |D| = |A|, the same holds for P(A). –Henning Makholm (talk) 21:55, 16 October 2010 (UTC)[reply]
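A finite toy version of this construction can be run by hand (with A = {0,1,2} every subset counts as "bounded", so D = B); the Dedekind-cut map already produces a chain of 2^|A| = 8 nested sets:

```python
# Toy finite version of the chain construction: lexicographically order the
# subsets of A = {0,1,2}, then map each X to its Dedekind cut {Z : Z <= X}.
# The cuts form a chain of 2^|A| distinct nested sets under inclusion.
from itertools import combinations

A = [0, 1, 2]
B = [frozenset(c) for r in range(len(A) + 1) for c in combinations(A, r)]
D = B   # in the finite case every subset is "bounded"

def lex_le(X, Y):
    # X <= Y iff the least element on which the two sets differ belongs to Y.
    return X == Y or min(X ^ Y) in Y

cuts = [frozenset(Z for Z in D if lex_le(Z, X)) for X in B]  # Dedekind cuts

chain = sorted(cuts, key=len)
assert all(chain[k] < chain[k + 1] for k in range(len(chain) - 1))  # nested
assert len(set(cuts)) == 2 ** len(A)                                # 8 cuts
```

Of course, in the finite case the chain is no longer than |B| itself; the point of the lemma is that for infinite A the cuts live in P(D) with |D| = |A|, while the chain keeps cardinality 2^|A|.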
Here's a proof that works without GCH. Let the scalar field be K and the dimension of V be κ. I use the notation K^A for the set of all functions A → K, as a vector space over K with pointwise operations; thus V* ≅ K^κ.
  1. When |K| ≤ κ: Then |V| = κ. (There are at most κ·|K| = κ scalar multiples of base vectors, and the number of finite sums of such vectors is κ.) On the other hand, K^κ contains at least 2^κ elements. Since V and K^κ then have different cardinality, they cannot be isomorphic. In particular, K^κ must have dimension > κ.
  2. For arbitrary K: Let F be the smallest subfield of K, that is, F = Q if K has characteristic 0 and F = Fp for characteristic p. In either case, |F| ≤ κ, so F satisfies the condition in case 1. Let M be a basis for F^κ over F. By case 1, |M| > κ. But each member of F^κ is also a member of K^κ, and M is still linearly independent over K. (Namely, suppose some finite set of vectors in M satisfies a K-linear relation with coefficients c1, ..., cn. Then the ck's are a solution to a homogeneous system of linear equations with coefficients in F -- one equation per coordinate -- and this system has only the trivial solution in F. Then n of the equations must be linearly independent over F, which means that their determinant is nonzero whether evaluated in F or in K. Therefore the ck's must all be zero, too.) So the dimension of K^κ over K is at least |M| > κ, Q.E.D.
Henning Makholm (talk) 03:11, 19 October 2010 (UTC)[reply]
excellent! pma 08:37, 20 October 2010 (UTC)[reply]

Reverse Fourier[edit]

I'm into burning new digital waveform patches into my homebrewed synthesizer... Here's the deal: I know the absolute values of each overtone, but not their relative phases. So for each set of overtones there's a practically infinite number of possible waveforms. Being close to the original waveform is not really important (there's plenty of processing downstream, so the starting patch will be mushed anyway). I suspect the only meaningful criterion for setting relative phases is maximum usage of the available bits (bus width less processing headroom). The waveform, say, for 16-bit word size, must touch either the +32767 or the -32767 rail, and must be centered around precisely 0 (if it's asymmetrical it shouldn't touch both rails). That's quite obvious.

But the second objective is to maximize (or optimize?) the energy carried by the wave. A pure sinewave spanning 1 volt peak to peak has an RMS content of 0.5 V / 1.41421 ≈ 354 mV. But a fixed set of harmonic overtones fitted into the same peak-peak range may be 400 or maybe 200 millivolts RMS - depending on the set of phases. So my question is - is there a simple empirical algorithm for maximizing the form factor of the synthesized wave for a given peak-peak limit?

TIA, East of Borschov 18:56, 11 October 2010 (UTC)[reply]

Are you sure that is what you want? Even though the waveform you synthesize fits within ±32767, there's no guarantee that this will also be the case after downstream processing -- even passive filtering could create peaks higher than your original ones. So if you're too clever choosing phases that will knapsack a lot of energy into your sample space, you're probably just setting yourself up for some clipping distortion further down the line. –Henning Makholm (talk) 22:01, 11 October 2010 (UTC)[reply]
"Downstream" is mostly analog. Digital processing is only for frequency modulation, shaping and mixing noise, and delay effects - so a fixed headroom of two bits is more than enough. Perhaps I chose the wrong word: the target is not really maximizing but normalizing different voices to roughly the same (not necessarily high) energy. East of Borschov 02:49, 12 October 2010 (UTC)
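The phase dependence of RMS at a fixed peak is easy to see numerically. One classic heuristic for packing more energy under a given peak is Schroeder's phase formula for multisines; the flat overtone magnitudes below are just an example, not anything from the thread:

```python
# With fixed harmonic magnitudes, the RMS obtained after scaling the wave to
# just touch the rails depends on the relative phases.  Schroeder phases,
# phi_n = -pi*n*(n-1)/N, are a classic low-crest-factor heuristic.
import numpy as np

N = 8
mags = np.ones(N)                       # example: flat overtone magnitudes
t = np.linspace(0.0, 1.0, 4096, endpoint=False)

def waveform(phases):
    w = sum(m * np.sin(2*np.pi*(n+1)*t + p)
            for n, (m, p) in enumerate(zip(mags, phases)))
    return w / np.max(np.abs(w))        # scale the peak to the rail (= 1.0)

def rms(w):
    return np.sqrt(np.mean(w ** 2))

zero = waveform(np.zeros(N))
schroeder = waveform(np.array([-np.pi*n*(n-1)/N for n in range(1, N+1)]))

assert rms(schroeder) > rms(zero)       # more energy for the same peak
print(rms(zero), rms(schroeder))
```

For non-flat magnitude sets, a simple random search over phases (keeping the set with the lowest crest factor) is another practical option.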

Map from free abelian group onto the integers[edit]

I'm currently reading through a proof, and at one point it asserts that given any element in the free abelian group of rank n, it is possible to find a homomorphism onto the integers whose kernel contains this element. I have no problem if the chosen element has at least one zero coefficient, for example (1,1,1,0) in Z^4 (in that case, the map could send the first three basis elements to zero, and the fourth to 1), however I cannot see how to show the existence of such a map if the chosen element has no zero coefficients, for example (1,1,1,1) in Z^4.

It just seems like by permitting such an element to be in the kernel, there will be (non-inverse) elements in the free abelian group which sum to something in this kernel, but whose images cannot sum to zero in the integers, so the map would fail to preserve the structure. Is there something that I'm missing? Thanks, Icthyos (talk) 20:33, 11 October 2010 (UTC)[reply]

The map that sends (a,b,c,d) to a−b is a homomorphism onto the integers with (1,1,1,1) in the kernel. Algebraist 20:36, 11 October 2010 (UTC)[reply]
Ah, of course - thanks. Is there a general way to spot such a map? I tried to extend what you did - say we are in Z^n and we want the kernel to contain (a1, ..., an). Then the map sending (x1, ..., xn) to a2x1 − a1x2 has such a kernel, but obviously isn't onto unless a1 and a2 are co-prime. For instance, I can't see how to construct a map from Z^4 whose kernel contains (2,4,8,16), since none of the coefficients are pairwise co-prime. Icthyos (talk) 21:07, 11 October 2010 (UTC)[reply]
Just cancel the common factors: 2a−b. Algebraist 21:12, 11 October 2010 (UTC)[reply]
Right! Thanks for the help, it's something I'd never thought about before. Brain couldn't make sense of it! Icthyos (talk) 22:20, 11 October 2010 (UTC)[reply]
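The "cancel the common factors" recipe works in general: pick two coordinates (or a single zero coordinate), divide by their gcd, and cross them. A sketch (the function name is my own):

```python
# Given a nonzero v in Z^n (n >= 2), build a surjective homomorphism
# Z^n -> Z whose kernel contains v, by cancelling common factors:
# the map x -> sum(c_k * x_k) is onto iff gcd of the c_k is 1.
from math import gcd

def onto_map_killing(v):
    """Return coefficients c with sum(c_k * v_k) == 0 and gcd(c) == 1."""
    n = len(v)
    for k, a in enumerate(v):
        if a == 0:                      # a zero coordinate: send e_k to 1
            return [1 if j == k else 0 for j in range(n)]
    # all coordinates nonzero: use the first two, cancelling their gcd
    a, b = v[0], v[1]
    g = gcd(a, b)
    c = [0] * n
    c[0], c[1] = b // g, -(a // g)      # (b/g)*a - (a/g)*b == 0
    return c

c = onto_map_killing([2, 4, 8, 16])
print(c)                                # [2, -1, 0, 0]
assert sum(ci * vi for ci, vi in zip(c, [2, 4, 8, 16])) == 0
```

For (2,4,8,16) this recovers Algebraist's map 2a−b; since b/g and a/g are co-prime by construction, the map is always onto.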