Wikipedia:Reference desk/Archives/Mathematics/2007 August 12
Welcome to the Wikipedia Mathematics Reference Desk Archives
The page you are currently viewing is an archive page. While you can leave answers for any questions shown below, please ask new questions on one of the current reference desk pages.
August 12
Wheel theory
I like this guy's ideas, but I don't know much abstract algebra yet. I'd appreciate a more experienced opinion. Is this wheel thing as clever as it looks, or am I missing some flaw? Black Carrot 04:04, 12 August 2007 (UTC)
- It's just another algebraic structure, just like groups, rings, fields, etc. etc. Don't forget that simple things like x - x aren't always what you expect them to be in a wheel, so there may be difficulties using it for computation.
- Why are you interested in this, anyway? (The preceding unsigned comment added by 203.49.243.115)
- While it is a fairly nice way of handling division by zero with more mathematical sophistication than other attempts, is it a particularly useful structure? Compare with other common extensions, the real projective line and the Riemann sphere, which extend the real and complex numbers by adding infinity in very different ways. Both of these have proved to be very useful devices and the source of much study. I'm not convinced that wheels will turn out to be that useful. --Salix alba (talk) 16:20, 12 August 2007 (UTC)
- In practice it does seem to behave similarly to NaN. --Salix alba (talk) 16:24, 12 August 2007 (UTC)
That's exactly what I'm asking, and why I directed my question to the more mathematically worldly of our editors. As far as I can tell, it makes arithmetic and basic algebra no less convenient, but I can't really go beyond that into, for instance, complex analysis. Does it tie itself in knots? Does it contradict itself under certain bizarre conditions? Does it really make (as is his stated justification for developing it) "partial" functions "total" in a useful way? Black Carrot 16:58, 12 August 2007 (UTC)
- I don't see how this is useful, and I'm also not convinced it is notable – I found exactly one paper citing this.[1] What is special about it is that it generalizes the method of adding two extra elements (NaN and INF – not three elements NaN, +INF and -INF) to a field, to a method that can be used to extend any commutative ring. However, in doing so, we lose the ring properties. The algebra of wheels appears rather complicated and not easy to reason with. I'd have liked to see a few examples of how this might be used to tackle some problems in a simpler way than before, but I haven't seen any such example. If we stick to fields, then this can be done in a much simpler way. --Lambiam 00:17, 13 August 2007 (UTC)
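The NaN comparison above can be made concrete with IEEE-754 floating point, which behaves much like a wheel's extra elements: an unsigned infinity-like element absorbs some operations, and undefined combinations collapse to a NaN-like bottom element. A minimal Python sketch of the analogy (this illustrates IEEE semantics, not the wheel axioms themselves):

```python
import math

inf = float('inf')   # stands in for the wheel's /0 (though IEEE infinity is signed)
nan = float('nan')   # stands in for the absorbing bottom element, like NaN

# Undefined combinations collapse to NaN, the absorbing element:
print(inf - inf)     # nan
print(0.0 * inf)     # nan

# As noted above, x - x is not always what you expect once the
# extra elements are admitted:
def minus_self(x):
    return x - x

print(minus_self(3.0))   # 0.0
print(minus_self(inf))   # nan, not 0
```

This is why computation in such a structure needs care: familiar identities like x - x = 0 only hold on the "ordinary" elements.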
Linear Transformation Problems
Hello. I have the following linear algebra problems:
1. Let V be a finite dimensional vector space and let T be a linear operator on V. Suppose that rank(T^2)=rank(T). Prove that the range and the null space of T have only the zero vector in common.
I can show that range T = range T^2 and null space (T) = null space (T^2).
2. Let p, m and n be positive integers and F a field. Let V be the space of mxn matrices over F and W the space of pxn matrices over F. Let B be a fixed pxm matrix and let T be the linear transformation from V into W defined by T(A) = BA. Prove that if T is invertible, then p=m and B is invertible.
Cheers.--Shahab 12:50, 12 August 2007 (UTC)
- Assume that v is in both range(T) and N.S.(T). Write down what this means exactly. Can you use this, as well as your result that N.S.(T) = N.S.(T^2), to show that v = 0?
- What are the dimensions of V and W? What does the existence of an invertible linear transformation between them, T, say about them? Now, if B is not invertible, then there is some nonzero vector v such that Bv = 0. Can you use this to contradict the fact that T is invertible?
- -- Meni Rosenfeld (talk) 14:41, 12 August 2007 (UTC)
- Thanks for the hints. I think I can solve the 1st problem:
- Let T(w)=v. Then T^2(w)=T(v)=0 & so w is in N.S.(T^2)=N.S.(T). It follows that v=T(w)=0.
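This argument can be traced on a concrete operator. Taking T = diag(1, 0) on R^2 (my own illustrative choice), T^2 = T, so rank(T^2) = rank(T); the range is the x-axis and the null space the y-axis, meeting only at the zero vector:

```python
def apply(T, v):
    """Apply a 2x2 matrix T (tuple of rows) to a vector v."""
    return (T[0][0]*v[0] + T[0][1]*v[1],
            T[1][0]*v[0] + T[1][1]*v[1])

T = ((1, 0), (0, 0))   # projection onto the x-axis; T^2 = T, so rank(T^2) = rank(T)

# Trace the proof: take v in the range, v = T(w). If also T(v) = 0,
# then w is in N.S.(T^2) = N.S.(T), hence v = T(w) = 0.
for w in [(3, 5), (-2, 7), (0, 4)]:
    v = apply(T, w)                # v is in the range of T
    if apply(T, v) == (0, 0):      # v is also in the null space of T...
        assert v == (0, 0)         # ...so the proof forces v = 0
```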
- But I think I will have to further trouble you for the 2nd problem: V and W have dimensions mn and pn. Since T is invertible so mn=pn (although the theorem from which this follows appears later on in my book). I don't know what to do further. Can you be a little more explicit?--Shahab 17:48, 12 August 2007 (UTC)
- Dimly I perceive the truth. We need a nonzero matrix A which transforms into 0 to establish the contradiction. How can we build such a matrix using a nonzero v where Bv is 0? A little hint would suffice.--Shahab 18:30, 12 August 2007 (UTC)
- Since the article Linear Transformation redirects to Linear Map, I hope you don't mind my entering this discussion.
- How does T become a Linear operator (article redirects to Linear Map again) on V?
- V in this case is a vector which can be defined any variable right?
- --Savedthat 17:33, 12 August 2007 (UTC)
- It seems your question is regarding the first problem above. To ask how T "becomes" a linear operator is beyond the scope of our discussion. The result says that if T is a linear operator such that [the other conditions hold], then [the result holds]. This will be true for any T satisfying the conditions, no matter how it is found: perhaps I thought it up while lying on the beach; perhaps it arises in considerations of beta decay in nuclear physics; or perhaps God engraved it on stone tablets. Perhaps it has never been considered by any human being at any point in history. The theorem still applies to it. It's kind of like watching a movie, and asking "where did that queen come from?" We assume that she was born, had a childhood, grew up, and assumed the throne, but we don't see any of that in the movie. It's outside the 90-minute scope of the film.
- Many theorems in mathematics are of this form: either "If T is a linear operator such that ..., then ..." or sometimes "Let T be a linear operator such that .... Then ..." The theorem doesn't address at all where such a T might come from; that's something for people using the theorem to worry about. (Indeed, there are many famous, deep, and important theorems that start with something like "If T is a linear operator satisfying [CONDITION 1], [CONDITION 2], and [CONDITION 3], then [RESULT] is true." -- and someone later proves that NO such T can exist! This just shows that worrying about where we can find such T is a very different question from proving a theorem about T.) Tesseran 19:00, 12 August 2007 (UTC)
- Let me rephrase, what does it mean in the above context where T is a linear operator on V? In the below explanation, T would equal V! --Savedthat 05:16, 13 August 2007 (UTC)
- Your question is not really understandable. T is a linear operator and V is a vector space. That is given. They are different kinds of things. If you take a train, say the Arkhangelsk Express, from Vladivostok to Arkhangelsk, the train is not Vladivostok, and you would not ask "How does the Arkhangelsk Express become a train?". Likewise, if you have a linear operator, say T, from V to W, the linear operator is not V, and you would not ask "How does T become a linear operator?" --Lambiam 16:46, 13 August 2007 (UTC)
- Good job on #1, this is exactly what I had in mind. As for 2 - I'll give another hint. Suppose you have a matrix A and two column vectors u and v. Suppose that you know Au and Av, and you construct the matrix (u | v) whose columns are u and v. What is A(u | v)? You may have a theorem for this (and its generalization, which you can use to finish the proof); otherwise, it isn't hard to find this directly. -- Meni Rosenfeld (talk) 13:23, 13 August 2007 (UTC)
- Thanks for all the help. I first proved the generalization B(v_1 | ... | v_n) = (Bv_1 | ... | Bv_n) and then chose A = (v | v | ... | v), where Bv = 0. Cheers--Shahab 06:48, 14 August 2007 (UTC)
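The identity above, that B applied to a matrix acts column by column, is easy to verify numerically. In the sketch below (with a singular B of my own choosing), v = (1, -1) satisfies Bv = 0, so the nonzero matrix A whose columns are all v gives T(A) = BA = 0, contradicting invertibility of T:

```python
def matmul(B, A):
    """Multiply matrices given as lists of rows."""
    return [[sum(B[i][k] * A[k][j] for k in range(len(A)))
             for j in range(len(A[0]))]
            for i in range(len(B))]

B = [[1, 1],
     [2, 2]]          # singular: B v = 0 for v = (1, -1)
v = [1, -1]

# A = (v | v): every column is the null vector v
A = [[v[0], v[0]],
     [v[1], v[1]]]

print(matmul(B, A))   # [[0, 0], [0, 0]] -- a nonzero A with T(A) = BA = 0
```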
- Nice. I was thinking of a different choice of A myself, but this works just as well :) -- Meni Rosenfeld (talk) 19:20, 14 August 2007 (UTC)
Vector space
I cannot understand the article Vector space. I was wondering if someone can give me an example question to solve? Thanks. --Savedthat 17:27, 12 August 2007 (UTC)
- Hi. Please see this page to learn about vector spaces with examples. A linear operator is a linear transformation from a vector space into itself. Cheers--Shahab 18:01, 12 August 2007 (UTC)
- Consider picking up a book on linear algebra as well. —Cronholm144 18:18, 12 August 2007 (UTC)
- When I learned in school the teacher described it as anything that fulfills the ten criteria of a vector space. As you can imagine some students didn't like that very much. One student screamed, "just draw me a vector space on the board!" - a statement which is close to nonsensical. Another student, who was closer to understanding it, said "you can't draw it on the board - it's like saying draw a picture of hate, or love, or anger". "Actually", said the teacher with a small grin, "I could draw a picture of love, but that wouldn't be appropriate." Jon513 19:51, 12 August 2007 (UTC)
Before I go and read this wikibook: http://en.wikibooks.org/wiki/Linear_Algebra/Vector_Spaces I was wondering what are the 10 criteria of a vector space? And a linear operator is a linear transformation from a vector space into itself - so T = V? Whoever wrote that wikibook thanks. What about Wikiversity, anyone willing to teach me Linear Algebra? --Savedthat 05:22, 13 August 2007 (UTC)
- The criteria for being a vector space (the exact number of which depends on their formulation) already appear in the article Vector space. I personally know the term "linear operator" as a synonym for "linear transformation", without the requirement that it be endomorphic. Anyway, if T is an endomorphism (what Shahab calls linear operator) then it doesn't mean that T = V but rather that T: V -> V, that is, T is a function from V to V (and also, of course, that it is linear). -- Meni Rosenfeld (talk) 13:30, 13 August 2007 (UTC)
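The axioms being discussed can be spot-checked numerically. The sketch below models R^2 with componentwise addition and scaling and tests a handful of the usual vector-space axioms on sample vectors (the axiom selection and the sample vectors are mine, for illustration only; passing checks on samples is evidence, not a proof):

```python
def add(u, v):
    """Componentwise addition in R^2."""
    return (u[0] + v[0], u[1] + v[1])

def scale(c, v):
    """Scalar multiplication in R^2."""
    return (c * v[0], c * v[1])

ZERO = (0, 0)
samples = [(1, 2), (-3, 5), (0, 7)]

for u in samples:
    for v in samples:
        assert add(u, v) == add(v, u)                 # commutativity of +
        assert add(u, ZERO) == u                      # additive identity
        assert add(u, scale(-1, u)) == ZERO           # additive inverses
        assert scale(2, add(u, v)) == add(scale(2, u), scale(2, v))  # distributivity
```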
Probability?
Is there a (hopefully simple, as I am pathetic at math) equation for figuring out the number of different combinations for a given set of numbers? I've been playing a game online and I only need one last thing completed to beat it but I can't because I need to click five stars that play notes in ascending order but my computer has no sound card, so it's simply a matter of clicking them the right way. But I want to know if there's like a billion possibilities before I go wasting hours and driving myself insane. I know it sounds stupid, but being a Completionist it will drive me mad until I finish it! Jihiro 20:46, 12 August 2007 (UTC)
- There are 120 possibilities: you have 5 notes and 5 slots to arrange them in. For the first slot you have 5 note options, times 4 options for the next slot (since you have used one note), times 3 options for the next, and so on: 5*4*3*2*1 yields 120 possibilities. This is a horrible response, wait around for KSmrq, Lambiam, or Meni to respond. —Cronholm144 20:55, 12 August 2007 (UTC)
- I suppose you could go through every possibility, but I recommend doing it systematically if at all (friend with a sound card?). —Cronholm144 21:04, 12 August 2007 (UTC)
- Your explanation isn't that bad... and I'm not one of the geniuses ;) but I'll do my best. The type of operation to calculate this is called a factorial, though if you needed only some of the notes it'd be a permutation. The point is, since you have, as said, 5 notes, you have 5 different possibilities for the first slot. Once you use up a note on the first slot, you only have four left over for the second slot, and so on. This gives you 5*4*3*2*1 = 120 possibilities.
- Also -- the best way to methodically do this, if there's no other way, would be to hold the first number steady (at 1, to start) and go through all the permutations of the last 4 numbers (there are 4*3*2*1 = 24 permutations). To do this... hold the second number steady (at 2, to start), and do all the permutations of the last 3 (there are 3 * 2 * 1 = 6 permutations). To do that... oh, you get the picture.
- So, your first 6 tries would be:
- 1 2 3 4 5
- 1 2 3 5 4
- 1 2 4 3 5
- 1 2 4 5 3
- 1 2 5 3 4
- 1 2 5 4 3
- Hope this helps! Gscshoyru 21:14, 12 August 2007 (UTC)
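The count and the systematic ordering described above can both be reproduced with Python's standard library; `itertools.permutations` emits exactly the lexicographic order listed:

```python
from itertools import permutations
from math import factorial

perms = list(permutations([1, 2, 3, 4, 5]))
print(len(perms), factorial(5))   # 120 120

# The first six tries, matching the list above:
for p in perms[:6]:
    print(*p)
```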
- Thank you both! Wow that was surprisingly simple! As for asking anyone for the combination, the stars change position each time the game is opened, so someone with a soundcard can't help. It's at www.jayisgames.com in case anyone is interested (the game is within the banner, how nifty!) Now to go test the 120 possibilities ^_^; Jihiro 21:19, 12 August 2007 (UTC)
- Oh wait, so the first six tries you listed is just one permutation while 12345 is only 1 possibility? There's going to be a lot more possibilities then =S 120 permutations times 6 possibilities per permutation, I think? Which would make 720 possibilities... Me oh my! Jihiro 21:40, 12 August 2007 (UTC)
- Never mind, I was just confusing myself further. After thinking about it a different way, I understand it. After writing out so many ways I noticed a pattern. Each possibility starting with 1 comes in sets of 6, and those sets in groups of 4 (i.e.:
- 12345 13245 14235 15234
- 12354 13254 14253 15243
- 12435 13425 14325 15324
- 12453 13452 14352 15342
- 12534 13524 14523 15423
- 12543 13542 14532 15432
- So, 5 unique starting numbers times the 4 groupings of them times the 6 possibilities in each group, 4*5*6=120! At least I seem to have come to the same conclusion, whether that's right or wrong ^_^; Jihiro 22:18, 12 August 2007 (UTC)
- That's right, you got it. I told you the first 6 permutations, not the first 1 permutation... that's where you may have been confused. Gscshoyru 23:11, 12 August 2007 (UTC)
- Consider a walk-through, cheat-code, or hint, online. Try for example www.gameFAQs.com Rfwoolf 01:07, 14 August 2007 (UTC)
Null space and kernel
Is there any difference between the null space and the kernel? If not, why two separate articles? deeptrivia (talk) 23:10, 12 August 2007 (UTC)
- I was wondering myself the other day why we had two articles, but the terms have somewhat different meanings, although they overlap on the class of linear transformations. Usually the term null space is used for an operator working on a vector space. The operator is not necessarily a homomorphism with respect to the algebra of vector spaces – it could be an affine transformation. So 0 is not necessarily an element of the null space. In contrast, the term kernel is reserved for homomorphisms and is relative to a neutral element of some essential operation, and the neutral element of the domain is always in the kernel. If you read the article, you see that the general case is even more general. --Lambiam 00:50, 13 August 2007 (UTC)
- Lambiam, what do you mean by "algebra of vector spaces"? Thanks. Tesseran 19:49, 13 August 2007 (UTC)
- I simply mean vector spaces viewed as algebraic structures, which means we include the scalars and the full complement of operations, including "distinguished elements" (which correspond to nullary operations), and all characterizing (non)identities, as structural elements. Homomorphisms preserve the structure, which means they respect and preserve all operations and identities. This implies specifically that the null vector in the domain space is taken to the null vector in the codomain space. --Lambiam 21:22, 13 August 2007 (UTC)
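The distinction Lambiam draws can be made concrete: for an affine map f(x) = Ax + b, the null set (solutions of f(x) = 0) need not contain 0, while the kernel of a linear map always does. A one-dimensional Python sketch (the particular maps are my own examples):

```python
def L(x):
    """Linear map L(x) = 2x; a homomorphism, so L(0) = 0."""
    return 2 * x

def f(x):
    """Affine map f(x) = 2x - 6; not linear, since f(0) = -6 != 0."""
    return 2 * x - 6

assert L(0) == 0       # 0 is always in the kernel of a linear map
assert f(0) != 0       # the affine map's null set need not contain 0
assert f(3) == 0       # here the null set is {3}, which misses 0
```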
Leray-Schauder indices
What are Leray-Schauder indices? deeptrivia (talk) 23:10, 12 August 2007 (UTC)
- The Leray-Schauder index is a generalization of the mapping degree to infinite dimensional Banach spaces. Very roughly, it is a way of "counting zeros" of a mapping. See Teschl's (free) nonlinear analysis book; it has a whole chapter on Leray-Schauder and fixed-point techniques. Phils 22:51, 13 August 2007 (UTC)
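In one dimension the mapping degree that Phils mentions reduces to something very simple: for a continuous f with f(a), f(b) != 0, the degree on [a, b] with respect to 0 is (sign f(b) - sign f(a))/2, and it counts the zeros of f weighted by the sign of the crossing. A toy Python illustration of this finite-dimensional special case (not the infinite-dimensional Leray-Schauder construction itself):

```python
def degree(f, a, b):
    """1-D Brouwer degree of f on [a, b] with respect to 0:
    the signed count of zero crossings of f."""
    sgn = lambda t: (t > 0) - (t < 0)
    assert f(a) != 0 and f(b) != 0, "degree needs f nonzero on the boundary"
    return (sgn(f(b)) - sgn(f(a))) // 2

f = lambda x: x**3 - x          # zeros at -1, 0, 1 with crossing signs +1, -1, +1
print(degree(f, -2, 2))         # 1  (the signed count: +1 - 1 + 1)
```

The Leray-Schauder theory extends this invariant to maps of the form identity-plus-compact on infinite-dimensional Banach spaces, where it underpins fixed-point arguments.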
Coercive mapping
What is a coercive mapping? deeptrivia (talk) 23:10, 12 August 2007 (UTC)
- The only use I know of for "coerce" (similarly, "finesse") is to shift number systems without a clutch. My linear algebra teacher described shoving things back and forth between matrices and linear transformations as "coercion", whenever we skipped the formalities. It's not technically allowed, but it usually gets the right answer. Black Carrot 03:00, 13 August 2007 (UTC)
- I fear that this answer is only approximate (and would appreciate it being verified), but what I remember is that a coercive mapping is a function f: V -> V for some vector space V such that <f(v), v>/||v|| -> infinity as ||v|| -> infinity. It is related to the concept of positive definiteness, but isn't identical (what with the limit and all, which reminds me of uniform convergence). See also MathWorld's article on a functional form of the definition, which reminded me of the norm-squared bit, and coercive function, which discusses self-adjoint coercive operators (I'm not sure if the notion applies to non-self-adjoint ones). --Tardis 15:57, 13 August 2007 (UTC)
- I think this makes sense in the context I saw this term. It says a particular mapping is coercive in the sense that <f(v), v>/||v|| -> infinity as ||v|| -> infinity.
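The defining limit can be watched numerically. For the map f(v) = 2v on R^2, the ratio <f(v), v>/||v|| equals 2||v||, which grows without bound, so f is coercive; a constant map is not. A rough Python check (the example maps are my own):

```python
import math

def ratio(f, v):
    """<f(v), v> / ||v|| for a 2-D vector v."""
    fv = f(v)
    inner = fv[0] * v[0] + fv[1] * v[1]
    return inner / math.hypot(v[0], v[1])

def double(v):       # coercive: ratio = 2||v|| -> infinity
    return (2 * v[0], 2 * v[1])

def const(v):        # not coercive: ratio stays bounded
    return (1.0, 0.0)

for n in [1.0, 10.0, 100.0, 1000.0]:
    v = (n, n)
    print(n, ratio(double, v), ratio(const, v))   # first ratio grows, second doesn't
```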
Dimensions of operators
How is the dimension of an operator defined? deeptrivia (talk) 23:10, 12 August 2007 (UTC)
- I can't think of anything in particular (rank maybe?), even though I know a bit of analysis. Can you provide context? Phils 22:39, 13 August 2007 (UTC)
Alternating symbol
What is the alternating symbol a(i,j,k)? deeptrivia (talk) 23:10, 12 August 2007 (UTC)
- Could it be another name for the Levi-Civita symbol? —Keenan Pepper 01:01, 13 August 2007 (UTC)
- I think you are right. Thanks a ton! deeptrivia (talk) 01:55, 13 August 2007 (UTC)
- Someone ought to make a redirect then.
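If the alternating symbol is indeed the Levi-Civita symbol, it is easy to compute directly: epsilon(i,j,k) is +1 for even permutations of (1,2,3), -1 for odd ones, and 0 when an index repeats. A short Python version (the product formula is specific to three indices in {1, 2, 3}):

```python
def levi_civita(i, j, k):
    """Levi-Civita (alternating) symbol epsilon_ijk for indices in {1, 2, 3}."""
    if len({i, j, k}) < 3:
        return 0                       # repeated index
    # Sign of the permutation taking (1, 2, 3) to (i, j, k):
    # the product of pairwise differences is +2 or -2, so halving gives +-1.
    return (j - i) * (k - i) * (k - j) // 2

print(levi_civita(1, 2, 3))   # 1   (even permutation)
print(levi_civita(2, 1, 3))   # -1  (odd permutation)
print(levi_civita(1, 1, 3))   # 0   (repeated index)
```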