
Wikipedia:Reference desk/Mathematics

From Wikipedia, the free encyclopedia

This is an old revision of this page, as edited by 82.131.188.130 (talk) at 01:18, 17 June 2006 (→‎yo peeps, are hash functions just wild guesses, or is some bounds proven?). The present address (URL) is a permanent link to this revision, which may differ significantly from the current revision.


How to ask a question
  • Search first. It's quicker, because you can find the answer in our online encyclopedia instead of waiting for a volunteer to respond. Search Wikipedia using the searchbox. A web search could help too. Common questions about Wikipedia itself, such as how to cite Wikipedia and who owns Wikipedia, are answered in Wikipedia:FAQ.
  • Sign your question. Type ~~~~ at its end.
  • Be specific. Explain your question in detail if necessary, addressing exactly what you'd like answered. For information that changes from country to country (or from state to state), such as legal, fiscal or institutional matters, please specify the jurisdiction you're interested in.
  • Include both a title and a question. The title (top box) should specify the topic of your question. The complete details should be in the bottom box.
  • Do your own homework. If you need help with a specific part or concept of your homework, feel free to ask, but please don't post entire homework questions and expect us to give you the answers.
  • Be patient. Questions are answered by other users, and a user who can answer may not be reading the page immediately. A complete answer to your question may be developed over a period of up to seven days.
  • Do not include your e-mail address. Questions aren't normally answered by e-mail. Be aware that the content on Wikipedia is extensively copied to many websites; making your e-mail address public here may make it very public throughout the Internet.
  • Edit your question for more discussion. Click the [edit] link on the right side of its header line. Please do not start multiple sections about the same topic.
  • Archived questions If you cannot find your question on the reference desks, please see the Archives.
  • Unanswered questions If you find that your question has been archived before being answered, you may copy your question from the Archives into a new section on the reference desk.
  • Do not request medical or legal advice.
    Ask a doctor or lawyer instead.
After reading the above, you may
ask a new question by clicking here.

Your question will be added at the bottom of the page.
How to answer a question
  • Be thorough. Please provide as much of the answer as you are able to.
  • Be concise, not terse. Please write in a clear and easily understood manner. Keep your answer within the scope of the question as stated.
  • Link to articles which may have further information relevant to the question.
  • Be polite to users, especially ones new to Wikipedia. A little fun is fine, but don't be rude.
  • The reference desk is not a soapbox. Please avoid debating about politics, religion, or other sensitive issues.

June 10

Sector of an oval

My stepdad (a landscaper) recently asked me how to find the area of... I guess the best way to name it is a sector of an oval. There are two radii (?) that meet at a right angle and one is 10 feet and the other 8 feet. They are connected by an arc that is ≈17 feet. How would I go about finding the area of this shape? Thanks. schyler 01:43, 10 June 2006 (UTC)[reply]

Try finding the area of the ellipse ("oval") and dividing by four because it is one quarter of the ellipse (if the two beams meet at a right angle). —Mets501talk 01:57, 10 June 2006 (UTC)[reply]
The area of this one, by the way, would be 80π/4 = 20π, about 63 square feet. —Mets501talk 02:00, 10 June 2006 (UTC)[reply]
This oval is probably a stretched circle, more commonly known as an ellipse. When a circle has radius 1 ft, its area is exactly π = 3.14159… ft², or approximately 355/113 ft². Stretching in a single direction multiplies the area by the same scale factor; so stretching to 10 ft one way and 8 ft the other multiplies the area by 80. Thus the full ellipse would have an area of approximately 251.3 ft². The two stretch directions at right angles to each other give the major and minor axes of the ellipse; these cut the ellipse into four quadrants of equal area. So far the calculations are elementary. However, if a sector is cut out by lines in two arbitrary directions, the area of the sector is somewhat more complicated to find. A conceptually simple approach is to "unstretch" the ellipse and sector lines back to a unit circle. The area of the circle sector is half the radian measure of the angle between the "unstretched" lines; scale that up to get the area of the ellipse sector. Unfortunately, stretching changes the angles between lines other than the axes, so we cannot simply measure the sector angle of the ellipse. --KSmrqT 03:02, 10 June 2006 (UTC)[reply]
These calculations assume, of course, that the oval really is a true ellipse. But I think if it were a true ellipse, the arc that connects those two radii should be about 14 feet 2 inches. So it doesn't seem to be an ellipse, and the "63 square feet" measure will be off by a bit. —Bkell (talk) 03:07, 10 June 2006 (UTC)[reply]

Mathematica to Wikipedia

I tried to translate the Mathematica .nb files to TeX and math-markup files. Wikipedia did not understand these. The only way was to translate to HTML. It consisted of mainly .gif images, which I had to translate to .png using GIMP. If I used small fonts they were unreadable. I think this is my only article, Collocation polynomial, so I am not interested in becoming a specialist in TeX or math markup. Now I am going on holiday. If someone has the time and possibility to clean up the article, please do. --penman 05:08, 10 June 2006 (UTC)[reply]

Are these copyrighted images and text that really shouldn't be copied into Wikipedia ? StuRat 15:41, 10 June 2006 (UTC)[reply]
Ugh. You don't write Mathematica code to demonstrate results in articles. I've said this before: you write text or TeX explaining what you're doing. Dysprosia 23:51, 12 June 2006 (UTC)[reply]
Mathematica has a TeX output command; it turns a "notebook" (.nb) into a TeX file that you can cut and paste the relevant parts out of. Also, individual lines can be converted to TeX with the TeXForm command and then cut and pasted. --GangofOne 02:15, 16 June 2006 (UTC)[reply]

Conic Sections

Yes, this is assignment work, but I have done most of the work. We are given the equation of the basic hyperbola x^2/a^2 + y^2/b^2 = 1, and are asked to prove that PF' - PF = 2a, where P(x,y) is a variable point on the hyperbola, and F' and F are the foci at (-c,0) and (c,0) respectively. I can prove this by taking the basic equation above, and manipulating it to show sqrt((x+c)^2+y^2) - sqrt((x-c)^2+y^2) = 2a. However, I find I need to substitute c^2-a^2 for b^2 in order to do this. In other words, I need to prove c^2 = a^2 + b^2. Looking around on the internet, because most people start with the difference of the distances (sqrt((x+c)^2+y^2) - sqrt((x-c)^2+y^2) = 2a) and use that to find x^2/a^2 + y^2/b^2 = 1, they simply define b as being sqrt(c^2-a^2). Obviously, since I am working from the base equation and using it to find the difference of the distances, it would not be right to just replace b^2 with c^2-a^2 without providing justification. Can it be done? Or is my method too complicated?

(Your hyperbola equation should be x^2/a^2 − y^2/b^2 = 1, with a minus sign instead of a plus.) Actually, you have completed the assignment. Why? Because you have shown that if you define the number c so that c^2 = a^2 + b^2, then the two points (±c, 0) act as foci. Without such a definition, how are the foci derived from the equation, eh? --KSmrqT 11:17, 10 June 2006 (UTC)[reply]

Bypassing cyclic redundancy check?

Hi there,

Working on a publication using a lot of information burned onto a DVD by a friend of mine, but I keep getting cyclic redundancy check errors. The article here is rather useful (but rather hefty)... what I need to know, though, is whether there's a way to say "just skip that bit and keep copying, please" to the computer. If it's just a little bit of data that the computer can't read, can't I just hop over that bit and see if the file's still basically okay later? If one pixel of one photo is FUBAR, that doesn't change much for me. --MattShepherd 12:20, 10 June 2006 (UTC)[reply]

The problem, however, is compression. If one bit is corrupted, then it could corrupt later bits when it is uncompressed. That's why they have CRCs. — Preceding unsigned comment added by Zemyla (talk • contribs)
The filesystem itself won't be compressed, although the files stored on it could be. If you're on a Unixish system, you should be able to obtain a disk image (which you can burn to another DVD or mount directly via the loopback device) using dd conv=sync,noerror. However, you may still end up losing entire disk blocks (a couple of kilobytes) for even minor scratches. It's the best solution I know of, though. —Ilmari Karonen (talk) 00:05, 11 June 2006 (UTC)[reply]
Actually, it should be possible to use dd on the individual files as well. No need to make a disk image (unless you want to). —Ilmari Karonen (talk) 00:08, 11 June 2006 (UTC)[reply]
I remember once having come across a program called dd_something, which skipped unreadable sectors, but googling now didn't retrieve it. I found this one, though: safecopy, which may be useful. --vibo56 talk 12:43, 11 June 2006 (UTC)[reply]
You're probably referring to dd_rescue. Not having tried it, I can't comment on what, if any, practical differences it has from dd conv=sync,noerror, though the page suggests that it may be somewhat faster in certain situations. —Ilmari Karonen (talk) 13:41, 11 June 2006 (UTC)[reply]
There's also sg_dd, a variant of dd using raw SCSI devices, which might be able to extract more data using its coe=3 setting — except that the feature is apparently not supported by CD/DVD drives. —Ilmari Karonen (talk) 13:50, 11 June 2006 (UTC)[reply]
I shall plunge into some of these things (although I am a (shudder) Windows user, I lack the brainpower to make even SUSE roll over and bark on command). Thanks all for the advice and the clarification about how redundancy checks work. --66.129.135.114 14:12, 12 June 2006 (UTC)[reply]
SuSE is eager to roll over and bark, but not so easy to teach to play dead. It's not all that hard, even for Windows folks. As for the dd_rescue program, it works. But your problem doesn't sound like the usual sector-dropout trouble on magnetic media. The DVD file system has layered CRC and other error correction, and if that fails, I don't think dd_rescue can help. It knows nothing about the structure of the file system, relying on the file system code to retry until it goes right or the operator gives up. There's a companion script for it which automates the bookkeeping for sector trials and saves the user a great deal of effort. ww 04:23, 16 June 2006 (UTC)[reply]

Is the axiom of choice necessary for constructing sequences recursively?

Suppose F is a set, R is a binary relation on F, and for each a ∈ F there is b ∈ F such that (a, b) ∈ R. I am interested in recursively constructing a sequence (a_i)_{i ≥ 0} such that for every non-negative integer i, (a_i, a_{i+1}) ∈ R. It is easy to show that finite sequences of this type with arbitrary length exist; however, I am having difficulty showing that an infinite sequence of this type exists. That is, of course, unless I use the axiom of choice, in which case the proof seems straightforward. My question is: is the possibility of this construction provable in ZF, or is the axiom of choice (or a weaker form) necessary? Is it still necessary if it is also known that F is countable? I strongly believe that the answers are, respectively, yes and no, but I just want to make sure. -- Meni Rosenfeld (talk) 17:37, 10 June 2006 (UTC)[reply]

What you need, in general, is the axiom of dependent choice. But you don't need any choice if there's a wellordering on F—just choose the least element of F that works, at each step. If F is countable, then there's an injection from F into the naturals, and from that it's easy to recover a wellordering. --Trovatore 18:39, 10 June 2006 (UTC)[reply]
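For reference, the axiom of dependent choice invoked here is standardly stated as follows (for a nonempty set F and binary relation R on F; this is the textbook formulation, not quoted from the thread):

```latex
\bigl(\forall a \in F\ \exists b \in F : (a,b) \in R\bigr)
\;\Longrightarrow\;
\exists (a_i)_{i \ge 0} \text{ in } F\ \forall i : (a_i, a_{i+1}) \in R
```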

Great, thanks! -- Meni Rosenfeld (talk) 19:18, 10 June 2006 (UTC)[reply]

regular distribution disappearing when applied to every test function: must it come from zero?

Hi,

let Ω be open and nonvoid in R^n

let f : Ω → C

by that I mean f is measurable, and on every compactum K ⊂ Ω it is integrable

suppose now that ∫_Ω f(x) w(x) dx = 0 for every w ∈ D(Ω)

(this means w is infinitely differentiable on all of R^n but it has a compact support in Ω)


show that f is almost everywhere zero


Now I have worked on this, and came up with the idea of convolving with an approximation of unity.

But then I got confused: what exactly to do with this open Ω? I have to respect the confines of my domain, right? Thanks,

Evilbu 19:56, 10 June 2006 (UTC)[reply]
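For reference, the "approximation of unity" (mollifier) mentioned above is usually taken to be the standard bump-function family; this is the textbook construction, not something quoted from the thread:

```latex
\varphi(x) =
\begin{cases}
C \exp\!\left(\dfrac{1}{|x|^2 - 1}\right) & |x| < 1,\\[4pt]
0 & |x| \ge 1,
\end{cases}
\qquad
\varphi_\varepsilon(x) = \varepsilon^{-n}\,\varphi(x/\varepsilon),
\qquad
\int_{\mathbb{R}^n} \varphi = 1,
```

with C chosen to normalize the integral. Then f ∗ φ_ε is smooth and converges to f in L¹ on compact subsets as ε → 0; the domain issue is handled by convolving only at points whose distance to the boundary of Ω exceeds ε, so the convolution never looks outside Ω.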

The basic idea is that if, say, f > 0 on a set with positive measure, then you can construct a w such that ∫_Ω f w > 0. (Cj67 01:31, 11 June 2006 (UTC))[reply]

Yes, I see what you mean, but if f were continuous and nonzero at some point p, it would be strictly positive or negative in an open ball around p, and then a proper w could quite easily be found. But what to do here, with just a set of positive measure? So many cases? Evilbu 08:52, 11 June 2006 (UTC)[reply]

Measurable sets can be approximated with unions of intervals. I am a bit concerned that this is a homework problem, so I don't want to go into too much detail. (Cj67 16:34, 11 June 2006 (UTC))[reply]

Well, I'll be honest: it's a proof from a syllabus that my fellow students and I dispute. The proof works by convolving, but seems to show little regard for necessary analytic subtleties (like discontinuity). Evilbu 17:00, 11 June 2006 (UTC)[reply]

If you post it on my talk page, I'll take a look. (Cj67 17:21, 11 June 2006 (UTC))[reply]

June 11

Help with MASM32

I need to create an array that can hold 10 million integer numbers and fill it with random numbers ranging from 1 million to 10 million (minus one). When it is filled I need to write the index and contents to a file. I know how to generate random numbers in MASM and how to write from memory to a file using debug, but I need to put them together in a MASM program. Anyone have a demo or example? ...IMHO (Talk) 00:52, 11 June 2006 (UTC)[reply]

It would be helpful if you rephrased the question to pinpoint the problem more exactly. Do you need help with the memory management/indexing, or with making your "random" numbers fall in that particular range, with writing from memory to a disk file from outside of debug, or with writing a self-contained MASM program? I see from your user page that you program in C. You might try to first write a C-program that does the job, with as few outside dependencies as possible, and then compile the C-program to assembly and study the output. --vibo56 talk 10:13, 11 June 2006 (UTC)[reply]
Yes, that is quite easy to do with C (or C++) with a few "for" loops and the rand() function (see here for help using that), and then using fstream to write to files (see here). Hope this helps. —Mets501 (talk) 13:57, 11 June 2006 (UTC)[reply]
With the range of pseudorandom numbers that IMHO needs, rand() will not be sufficient, since RAND_MAX typically is quite small (32767). You might of course combine the results of several calls to rand() by bit-shifting. If you do so, I would recommend checking the output with a tool such as ent, to make sure that the result still meets the basic requirements for pseudorandom numbers. If you want to write your own pseudorandom number generator, you can find a thorough treatment of the subject in D. E. Knuth, The Art of Computer Programming, Volume 2: Seminumerical Algorithms, Third Edition, Addison-Wesley, 1997. --vibo56 talk 15:00, 11 June 2006 (UTC)[reply]

Yes, this information helps. Thanks. However, my goal in part here is to learn (or relearn) MASM. Back in the late '60s and early '70s assembly language was quite straightforward (and can still be that straightforward using the command-line DEBUG command). Where I am having trouble currently is with INCLUDEs, Irvine32.inc in particular, so I am trying to avoid even the use of INCLUDEs and do this (if possible) using only a DEBUG script. Don't get me wrong: I have spent ALL of my programming career writing in high-level languages simply so that I could get far more work done, but now my goal is to go back through some of the programs I have written in a high-level language like Visual Basic and convert whatever I can to concise assembler or machine code, which might help bridge the gap between Windows and Linux, since a program written in C++ for Linux (source code) may otherwise have difficulty running after it is compiled under any version of Windows C++. What I need specifically is: 1.) to know how to create and expand a single-dimension integer array of the above size, so I need help with both the memory management and the indexing; 2.) although I can make random numbers fall into any range in Visual Basic, I'm not sure about doing this in assembler; 3.) I also need help in writing the array contents and index to a file, since even though I know how to write something at a particular location in memory to a file using DEBUG, and how to write an array to a file using Visual Basic, it has been a long, long time since I used assembler, way back in the early '70s. Your suggestion to try writing in C and then compiling it to study the output is a good and logical one, but my thinking is that by the time I get back into C far enough to write such a snippet of a program, I could have already learned how to do it using MASM. Even still, it is not an unreasonable or bad idea. Any code examples would lend to my effort and be appreciated. Thanks. 
...IMHO (Talk) 14:58, 11 June 2006 (UTC)[reply]

I followed your suggestion to look at the disassembled output of the following C++ code and was shocked to find that while the .exe file was only 155,000 bytes the disassembled listing is over 3 million bytes long.

#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    printf("RAND_MAX = %d\n", RAND_MAX);
    return 0;
}

I think I need to stick with the original plan. ...IMHO (Talk) 15:43, 11 June 2006 (UTC)[reply]

Wow! You must have disassembled the entire standard library! What I meant was to generate an assembly listing of your program, such as in this example. You will see that in the example, I have scaled down the size of the array by a factor of 100 compared to your original description of the problem. This is because the compiler was unable to generate sensible code for stack-allocated arrays of this size (the code compiled, but gave runtime stack-overflow errors).
To bridge the gap between Windows and Linux, I think that this is definitely not the way to go. If you are writing C or C++ and avoid platform-specific calls, your code should easily compile on both platforms. For platform-specific stuff, write an abstraction layer, and use makefiles to select the correct .c file for the platform. If you want GUI stuff, you can achieve portability by using a widget toolkit that supports both platforms, such as wxWidgets. I have no experience in porting Visual Basic to Linux, but I suppose you could do it using Wine. --vibo56 talk 17:53, 11 June 2006 (UTC)[reply]
Looks like I need to learn more about the VC++ disassembler. I was using it to create the executable file and then using another program to do a disassembly (or reassembly) of the executable. I'll study the VC++ disassembler help references at least long enough to recover some working knowledge of MASM, and then perhaps do the VB rewrites in VC++ if it looks like I can't improve the code. Thanks ...IMHO (Talk) 23:05, 11 June 2006 (UTC)[reply]
You don't need to use a disassembler. In Visual C++ 6.0, you'll find this under project settings: select the C/C++ tab, in the "Category" combo select "Listing files", and choose the appropriate one. The .asm file will be generated in the same directory as the .exe. Presumably it works similarly in more recent versions of VC++. --vibo56 talk 04:58, 12 June 2006 (UTC)[reply]
All of the menu items appear to be there but no .asm file can be found in either the main folder or in the debug folder. With the C++ version of the program now up and running as it is supposed to with all of the little details given attention (like appending type designators to literals) the next step is to take a look at that .asm file ...if only it will rear its ugly head. ...IMHO (Talk) 01:21, 13 June 2006 (UTC)[reply]
Strange. You could try calling the compiler (cl.exe) from the command line, when the current directory is the directory where your source file lives. The /Fa option forces generation of a listing, the /c option skips the linker, and you might need to use the /I option to specify the directory for your include files, if the INCLUDE environment variable is not set properly. On my system that would be:
E:\src\wikipedia\masm_test>cl /Fa /c /I "c:\Programfiler\Microsoft Visual Studio\VC98\Include" main.c
Microsoft (R) 32-bit C/C++ Optimizing Compiler Version 12.00.8804 for 80x86
Copyright (C) Microsoft Corp 1984-1998. All rights reserved.
 
main.c
 
E:\src\wikipedia\masm_test>dir *.asm
  Volume in drive E is ARBEID
  Volume Serial Number is 4293-94FF
  
 Directory of E:\src\wikipedia\masm_test
  
 13.06.2006  19:23             2 292 main.asm
which, as you can see, works fine. The problem may be related to the fact that you have the free version; maybe assembly generation is disabled? Would that be the case if it only compiles to .NET bytecode? If so, just about any other C compiler will have an option to generate an assembly listing; try using another compiler instead. --vibo56 talk 17:38, 13 June 2006 (UTC)[reply]
There must be something seriously wrong with my installation. Even after multiple reinstallations of Visual C++ v6 Introductory I keep getting command-line errors, like it can't find the include files, etc. I'll keep working on it. Thanks. ...IMHO (Talk) 00:01, 14 June 2006 (UTC)[reply]

Okay, finally got it! The thing that was messing up the command-line compile under VC++ v6 Introductory seems to have been a "using namespace std;" line (although oddly enough it has to be removed when the contents of an array variable are incremented, but is required when the same variable is only assigned a value). It looks like VC++ Express 2005 has the same settings function in the GUI, but I have not yet been able to figure out and follow the procedure to get it to work. Its command-line .asm instruction might also work now, but I do not have time right now to test it. Thanks for all of the detailed suggestions and for helping to make Wikipedia more than I ever dreamed it would be. ...IMHO (Talk) 21:35, 15 June 2006 (UTC)[reply]

I'd add a caution here re the random business. It is remarkably hard to generate random sequences deterministically. See hardware random number generator for some observations. If you have to do it in software, you might consider Blum Blum Shub, whose output is provably random in a strong sense if a certain problem is in fact computationally intractable. It's just slow in comparison to most other approaches. ISAAC and the Mersenne twister are other possibilities and rather faster. On a practical basis, you might consult the design of Schneier and Ferguson's Fortuna (see Practical Cryptography). The problem is one of entropy in the information-theory sense, and it may be that this doesn't apply to your use, in which case the techniques described by Knuth will likely be helpful. Anything which passes his various tests will likely be satisfactory for any non-security-related purpose. However, for security-related uses (eg, cryptography, etc) they won't be, as the entropy will be too low. Consider Schneier and Ferguson's comments on the issue in Practical Cryptography.
And with respect to using libraries, I suggest that you either roll your own routines or install a crypto library from such projects as OpenBSD or the equivalent in the Linux world. Peter Gutmann's cryptlib is in C and has such routines. There are several other crypto libraries, most in C. Check them very carefully against the claimed algorithm before you use them for any security-related purpose. Good luck. ww 04:56, 16 June 2006 (UTC)[reply]

computer

what is html???

HTML stands for HyperText Markup Language and is the language used to code web sites. —Mets501 (talk) 13:58, 11 June 2006 (UTC)[reply]
See HTML. You can use our search box on the left to find out other things. Conscious 15:47, 11 June 2006 (UTC)[reply]
This ain't math. -- Миборовский 05:18, 12 June 2006 (UTC)[reply]
The mathematics reference desk is also the place for questions about computers and computer science. -- Meni Rosenfeld (talk) 11:15, 12 June 2006 (UTC)[reply]
HTML has more to do with 1) language and 2) information science than computing. You can do everything with a computer: writing, drawing, publishing, searching the net, playing; computer science uses languages the same way we use them, with grammar, lexicon, good and bad words, orthographic correctors, the art of discourse. Our computer scientists are hegemonists the way some well-bred nations are. So Reference Desk/Language or /Science are good candidates for this question. --DLL 17:01, 12 June 2006 (UTC)[reply]
This logic certainly looks bizarre. Will the question "what is Visual Basic?" also belong to refdesk/language because VB is a language? At this rate, what question can belong on the computers\CS category? -- Meni Rosenfeld (talk) 19:17, 12 June 2006 (UTC)[reply]
Is it time for a separate computers reference desk yet? In the time since computer questions were directed to the mathematics desk, I don't recall seeing a single "hard" computer science (as in, theory of computation etc.) question that would remotely fit in with the math stuff. Now I personally don't mind much, since I do find many of the "How to install Linux?" questions interesting too, but it does get confusing. —Ilmari Karonen (talk) 23:35, 12 June 2006 (UTC)[reply]
On the other hand, there haven't been that many questions about computers, so I don't know if this is a large enough topic (in terms of number of questions) to deserve its own section. I don't know how are things at the other refdesks, but perhaps a general repartition can be useful - for example, separating humanities from social sciences, and adding a "technology" section for questions about computers, electronics etc. -- Meni Rosenfeld (talk) 16:05, 13 June 2006 (UTC)[reply]
The Math(s) desk seems manageable, about 6 topics/day recently. The other reference desks, except language, handle over 15 topics most days. If desks were split I think Science, Misc., and Humanities would be first. (How do you split Misc? :-). Walt 17:05, 13 June 2006 (UTC)[reply]
Not that long ago the Microsoft public newsgroup WindowsXP subject only got maybe 50 to 100 questions per week. That is about how many hits it gets every hour nowadays, so you are lucky if you ever get anyone to reply to a question. Maybe one of the reasons there are not that many computer questions here is because there is no computer desk. ...IMHO (Talk) 23:44, 13 June 2006 (UTC)[reply]

June 12

no idea how to do this thing i dont know wat to call

How would I do these problems? I have an exam tomorrow and would love an answer soon:

b. x/(2x+7)=(x-5)/(x+1) and c. [(x-1)/(x+1)]-[2x/(x-1)]=-1

I have no idea how to approach these problems. The directions say: Solve each equation. --Boyofsteel999 01:09, 12 June 2006 (UTC)[reply]

OK, to solve these equations involving rational expressions, you generally go through these two steps:

1. Multiply by the terms in the denominator (i.e. the bottom). For example, with the equation x/(2x+7) = (x-5)/(x+1), you multiply by the (2x+7) and (x+1) terms, giving you x(x+1) = (x-5)(2x+7).

2. Solve the problem as you would any kind of quadratic equation - gather it into a normal quadratic form, and either factorise or use the quadratic formula. In this case, we first get x^2 + x = 2x^2 - 3x - 35, which reduces to x^2 - 4x - 35 = 0, and the solutions then follow from the quadratic formula.

Technically there's a third step - make sure that the solutions you get are not going to make the denominators zero - but a. this shouldn't happen anyway, and b. once you get to complex analysis you treat these solutions that aren't really solutions as solutions that just aren't explained clearly. Confusing Manifestation 02:15, 12 June 2006 (UTC)[reply]

I'm sorry, but I think your solution to the quadratic equation is wrong. x^2 - 4x - 35 = 0 has solutions x = 2 ± √39. – b_jonas 08:39, 12 June 2006 (UTC)[reply]
Whoops, sorry. I was doing the calculation in my head and was so worried about getting the factor in the square root right that I screwed up the other term. 144.139.141.137 13:52, 12 June 2006 (UTC)[reply]

projective limit of the finite cyclic groups

The inverse limit of the cyclic groups Z/p^nZ for p a prime gives you the group of p-adic integers (a group about which I know little). It seems to me that the collection of all finite cyclic groups also forms a direct system of groups over the directed set of natural numbers ordered (dually) by divisibility. Thus shouldn't there be an inverse limit of this system as well? What is it? Probably it's just Z, right? -lethe talk + 06:10, 12 June 2006 (UTC)[reply]

I don't think so, because I think I can construct an element of the direct product that satisfies the conditions for being an element of the inverse limit, but which obviously doesn't correspond to an integer. It corresponds to 0 modulo every odd number, n modulo 2n for all odd n, 1 modulo 2^n for all n, and the following for multiples of 4:
4 8 12 16 20 24 28 32 36 40 44 48 ...
1 1 9 1 5 9 21 1 9 25 33 33 ...
Oddly enough, this sequence doesn't appear in Sloane's, but it seems easy enough to keep generating new terms with the Chinese remainder theorem. —Keenan Pepper 01:58, 13 June 2006 (UTC)[reply]
So then I guess the resulting group will be kind of elaborate, if it contains weird sequences like this. Another way to see that the result will not be Z: the p-adic integers are uncountable, and my limit group has a surjection to every group of p-adic integers, and so must be uncountable as well. Would you expand a little on how you came up with that sequence? -lethe talk + 17:53, 13 June 2006 (UTC)[reply]
Well, if you know the sequences for all powers of primes (that is, if you know all the p-adic numbers it corresponds to), then you know the whole sequence by the CRT. I tried to think of a simple example that obviously didn't correspond to an integer, so I made it 1 mod 2 but 0 mod all the other primes. Then you can choose whether it's 1 or 3 mod 4, and so on, but it works if you just make it 1 for all powers of 2.
Do you think this group is isomorphic to the direct product of all the groups of p-adic integers? —Keenan Pepper 22:33, 13 June 2006 (UTC)[reply]
My intuition is no. A direct product of the p-adic groups won't have numbers from the various composite cyclic groups like Z/6Z. Well, we could probably take the direct product over all naturals, instead of just primes. But then this would be too big; it would have independent factors for Z_2 and Z_4 (here I denote cyclic groups with fractions and p-adic groups with subscripts). Maybe we could take the direct product over all numbers which are not perfect powers. -lethe talk + 03:34, 14 June 2006 (UTC)[reply]

the algebraic structure of sentential logic

In sentential logic, it seems to me that the set of well-formed formulas (wffs) may profitably be thought of as a set with an algebraic structure. One has a set of sentence variables, and one may perform various operations on the sentence variables; usually disjunction, conjunction, negation, and implication. The set is then some sort of free algebraic structure in these operations on the set of sentence symbols. Another algebraic structure with these operations is the set {0,1} (with the obvious definitions of the operations), and a truth assignment may then be defined as a homomorphism of this kind of structure from the free structure of wffs to {0,1}, which is used to determine an equivalence relation called tautology. The Lindenbaum algebra is the quotient of this free structure by this equivalence relation and is a Boolean algebra.

This description in terms of algebraic language differs in flavor a bit from the way I was taught mathematical logic (from Enderton), and I have some questions. It seems that this algebraic structure is completely free; it doesn't satisfy any axioms. So I guess it's not a very interesting structure. Is this a standard construction? Does it have a name? I've been using the name "free pre-Boolean algebra", so that a truth assignment is pre-Boolean algebra homomorphism.

I like the algebraic description here, one reason being that it gives a concise way of defining semantic entailment. On the other hand, I don't see any nice algebraic way of describing syntactical entailment. Is there one? Can I describe modus ponens as an algebraic operation in this structure? -lethe talk + 06:10, 12 June 2006 (UTC)[reply]

Isn't this what is known as a term algebra? I think you also get an initial algebra by the construction. Valuations like truth assignment are then catamorphisms. An immediate advantage of the algebraic view over the prevalent view as strings over an alphabet (as in the article Formal language) is that you get a structural view in which you don't need to apply handwaving about parentheses and ambiguities, or put them in explicitly all the time (as in the article Formula (mathematical logic)). --LambiamTalk 09:35, 12 June 2006 (UTC)[reply]
Yeah, term algebra sounds like exactly what I'm describing. It's a term algebra with a signature of 1 unary and 3 binary operations (depending on my choice of logical symbols). As for initial algebras and F-algebras, I couldn't make sense of those articles. -lethe talk + 14:12, 12 June 2006 (UTC)[reply]

Combining Cubes...

First I would like to state that this is not for homework purposes, merely some discrepancy with a textbook we discovered recently. We've experimented with combining cubes in a variety of completely different formations (that is, excluding replicas via reflection or any other direct transformation), and the particular number we decided to do on that occasion was 4. However, while we could find only seven different combinations, the textbook insisted that there were eight. Can anyone help me either confirm the textbook's answer or our own, and if possible state a brief "proof" of why a certain answer is so?

Also, the reason that we were experimenting was because we were trying to develop a general algebraic process to find the number of combinations as a function of the number of identical cubes. If anyone can possibly give any directions towards this search at all it would also be helpful and greatly appreciated. LCS

These types of shapes are called polycubes. Six of the 4-cube polycubes are present in the component pieces of the Soma cube; the other two are the 1x1x4 "line" and the 1x2x2 "square". Notice that two of the soma cube pieces - the "left screw tetracube" and the "right screw tetracube" - are mirror-images of one another. If you count these pieces as the same, you get a total of 7 polycubes; if you count them as different, you get 8. I suspect you and your book are using different conventions on the transformations that are used to identify "identical" polycubes with one another - you are allowing reflections; the book is not. I imagine that the general problem of counting the number of different polycubes of n cubes is very hard. See [1] for values from n=1 to 13. Gandalf61 11:56, 12 June 2006 (UTC)[reply]
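Both conventions can be confirmed by brute force. Here is an illustrative sketch (my own construction): grow polycubes one cell at a time and canonicalise each shape under either the 24 rotations or all 48 rotations plus reflections; the two counts for n = 4 come out as 8 and 7 respectively.

```python
from itertools import permutations, product

EVEN = {(0, 1, 2), (1, 2, 0), (2, 0, 1)}  # even permutations of the axes

def transforms(include_reflections):
    """Signed axis permutations: the 24 rotations, or all 48 with reflections."""
    mats = []
    for perm in permutations(range(3)):
        for signs in product((1, -1), repeat=3):
            det = (1 if perm in EVEN else -1) * signs[0] * signs[1] * signs[2]
            if include_reflections or det == 1:
                mats.append((perm, signs))
    return mats

def canonical(cells, mats):
    """Least representative of a shape over all transforms and translations."""
    best = None
    for perm, signs in mats:
        pts = [tuple(signs[i] * c[perm[i]] for i in range(3)) for c in cells]
        lo = [min(p[i] for p in pts) for i in range(3)]
        norm = tuple(sorted(tuple(p[i] - lo[i] for i in range(3)) for p in pts))
        if best is None or norm < best:
            best = norm
    return best

def count_polycubes(n, include_reflections=False):
    """Count n-cell polycubes, distinct up to the chosen transform group."""
    mats = transforms(include_reflections)
    shapes = {canonical([(0, 0, 0)], mats)}
    for _ in range(n - 1):
        grown = set()
        for s in shapes:
            for (x, y, z) in s:
                for d in ((1,0,0), (-1,0,0), (0,1,0), (0,-1,0), (0,0,1), (0,0,-1)):
                    c = (x + d[0], y + d[1], z + d[2])
                    if c not in s:
                        grown.add(canonical(list(s) + [c], mats))
        shapes = grown
    return len(shapes)
```

The gap between the two counts comes exactly from the left/right screw tetracubes, which are related by a reflection but by no rotation.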


These moths have been bugging me

I saw a question on the science page that bugged me and made me think of the following question:

Say you have two moths flying toward each other, each carrying a light source. They are attracted to light 10 meters away, and their parallel paths are separated by 5 m. Would they crash into each other? ...Sounds like this would make a good textbook calculus question. Anyone have an answer? XM 16:50, 12 June 2006 (UTC)[reply]

It depends on what "attracted" means, e.g., is it like a force pulling them, or do they try to aim themselves at the light? (Cj67 18:49, 12 June 2006 (UTC))[reply]

Once they detect the light, they are pulled towards the light at the same speed they are traveling--(XM) but too lazy to sign in.

That's not what the article says. It says that, of the two prevailing theories, the one that relates to this suggests they maintain a "constant angle" to the light source. If they were pulled towards the light, at the same speed they were originally travelling, starting when they got within ten meters of each other, they would fly straight, make a sudden, sharp turn, and fly straight into each other, without ever changing speed. Black Carrot 22:08, 12 June 2006 (UTC)[reply]
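To make the thought experiment concrete, here is a toy simulation under one possible reading of "attracted" (the modelling assumptions are entirely mine, not from the moth article): each moth keeps its 1 m/s speed but steers straight at the other's light once within 10 m. As Black Carrot describes, each flies straight, turns sharply, and then they collide head-on near the symmetric midpoint.

```python
import math

def simulate(dt=0.001, max_steps=50_000):
    """Two 'moths' at 1 m/s on opposed parallel paths 5 m apart; once within
    10 m each steers straight at the other, keeping its speed constant."""
    p1, p2 = [0.0, 0.0], [20.0, 5.0]
    v1, v2 = [1.0, 0.0], [-1.0, 0.0]
    for _ in range(max_steps):
        dx, dy = p2[0] - p1[0], p2[1] - p1[1]
        d = math.hypot(dx, dy)
        if d < 0.01:                 # close enough to call it a collision
            break
        if d <= 10.0:                # detection range: aim at the other's light
            v1 = [dx / d, dy / d]
            v2 = [-dx / d, -dy / d]
        p1 = [p1[0] + v1[0] * dt, p1[1] + v1[1] * dt]
        p2 = [p2[0] + v2[0] * dt, p2[1] + v2[1] * dt]
    return d, p1, p2
```

By symmetry the setup is invariant under a half-turn about (10, 2.5), so under this reading the collision point is forced to be there.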

Java JPanel

I am making an applet. At a certain point in this applet I clear a JPanel with removeAll(); after I have done this it seems as though it is impossible to add anything to this JPanel again.

Is it possible to add to a JPanel after having used removeAll()? Or how would I go about placing new content in the same JPanel while taking off the content that's already there? Thank you very much

--70.28.2.95 19:37, 12 June 2006 (UTC)[reply]

What do you mean, impossible? Did your program throw an exception or did it just not update? If it didn't update, force it to: invoke JPanel.validate(). Oskar 00:02, 13 June 2006 (UTC)[reply]

I tried using validate() but it still won't update; the JPanel stays blank. I tried invoking it both before and after adding to my JPanel but the results were the same --70.28.2.95 00:47, 13 June 2006 (UTC)[reply]

All right then, let's put on our thinking caps. A few things off the top of my head:
  1. If you have access to the parent container, use validate() on that.
  2. Also, use validate() on all components in the container.
  3. Make sure that the JPanel is actually showing on screen (since a JPanel with no components is invisible). Try giving it a border, or just paint the background red. You could also assign it a mouselistener that would print a message every time you clicked it so that you'd know it was there.
  4. Remove and readd the JPanel itself (but still making sure it's actually showing, as in previous point)
  5. Try calling invalidate() before you remove the components, or after you remove them but before you add the new ones, or both. Then call validate().
  6. Try calling updateUI(). I'm not sure it'll work, but it's worth a shot.
  7. Try calling doLayout(). You shouldn't really do this manually, but what the hell, we're getting desperate.
  8. The docs say that removeAll() does something with your layout manager, so assign a new layout manager to the panel before adding all the stuff.
  9. Not that it'd make a difference, but try using remove() on each component individually instead of removeAll(), maybe that'd help.
  10. If none of these work, let's get unorthodox: Try resizing your panel while it's running (i.e. if it's in a standard frame, resize the frame). Try writing some sort of code that removes the panel and then adds it back in. Try to think of anything that might make a panel update while you run it.
  11. Let's do some debugging: use getComponents() to get all the components and then System.out.println() all of them to make sure that you actually have added them.
Let me know if any of these helps, otherwise we'll have to come up with something else. Cheers Oskar 01:23, 13 June 2006 (UTC)[reply]

Validating the parent container and all the components seems to have done the trick. Thanks a lot for your time. --70.28.2.95 19:00, 13 June 2006 (UTC)[reply]

What are these two solids called?

Mystery solid 1
Mystery solid 2

For the WikiGeometers out there...I wonder if you could help me identify these two solids by name? I'd like to include them in articles which might be lacking in illustrative images...Thanks! --HappyCamper 21:42, 12 June 2006 (UTC)[reply]

One of those kinda looks like one of those freaky d100 dice Oskar 01:33, 13 June 2006 (UTC)[reply]
Makes me think of a Buckyball. --LambiamTalk 07:32, 13 June 2006 (UTC)[reply]

The one on the left is impossible to make unless you are talking about spherical geometry. As for the one on the right, I have no idea.Yanwen 00:21, 14 June 2006 (UTC)[reply]

The caption on the globes mentions something about "V 3 1" - this is some sort of parameterization I think, but does it help? --HappyCamper 01:00, 14 June 2006 (UTC)[reply]
The one on the left is reminiscent of a truncated icosahedron, but at first glance looks impossible, since three hexagons meet at some of the vertices. However, the edges are probably not uniform, nor the faces, meaning the object is certainly possible, it's just not a regular polyhedron. I'm not sure, but I believe that one on the left is derived as the dual of the one on the right. The right one looks like it might be a subdivision surface generated by an actual truncated icosahedron. In computer graphics, polygon models for spheres are sometimes generated by starting with a platonic or archimedean solid, and subdividing the faces into triangles in a symmetric way. The subdivision can be continued to any depth, allowing high resolution models without parametrization artifacts. If you look closely at the one on the right, you can see that most of the vertices connect six edges, but a few of them connect only five; I suspect that if you were able to count how many there are of each, you'd find twelve vertices that connect five edges - the same number of pentagonal faces can be found on a truncated icosahedron. Check out the dual of the truncated icosahedron to see what the first stage of the subdivision would look like. --Monguin61 18:57, 14 June 2006 (UTC)[reply]
These are actually closely related to, of all things, a certain class of viruses, which have exteriors with icosahedral symmetry. See here, for example (scroll down to "The theoretical basis..."). You can make a structure like the second one out of 20(a² + ab + b²) triangles, where a and b are integers, and at least one of a or b is non-zero. The corresponding dual always has 12 pentagons, and 10(a² + ab + b² − 1) hexagons. Chuck 20:42, 14 June 2006 (UTC)[reply]
Why is the one on the left impossible to make? It looks like a soccer ball to me. -ReuvenkT C E 22:19, 16 June 2006 (UTC)[reply]
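Chuck's counts are consistent with Euler's formula: a triangulation with T = a² + ab + b² has 20T triangular faces, 30T edges and 10T + 2 vertices, and the faces of the dual correspond to those vertices. A quick sketch that checks this:

```python
def triangulation_counts(a, b):
    """Face/edge/vertex counts of the icosahedral triangulation with
    T = a^2 + ab + b^2, plus the pentagon/hexagon counts of its dual."""
    T = a * a + a * b + b * b
    faces, edges, verts = 20 * T, 30 * T, 10 * T + 2
    assert verts - edges + faces == 2          # Euler's formula V - E + F = 2
    pentagons, hexagons = 12, 10 * (T - 1)
    assert pentagons + hexagons == verts       # dual faces = original vertices
    return T, faces, pentagons, hexagons
```

For example (a, b) = (1, 0) gives the plain icosahedron, and (1, 1) gives the subdivision whose dual is the truncated icosahedron (the soccer ball) with its 12 pentagons and 20 hexagons.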

June 13

A Very generic thank you

I would like to say thanks to the editors of this project. I have learned more in a few weeks of perusing the portals in the math section than I did all through my engineering curriculum. Absolutely fascinating stuff, and very well organized. In my mind, mathematics is learned best by first organizing the general ideas of math, then discovering how they are sometimes connected. This thorough organization has been hugely interesting to me, and, I suspect, to many others who just read but don't say anything. Anyway, sometimes simple thanks justifies long hours of effort, I've found. I hope it does for all of you. Denmen 02:33, 13 June 2006 (UTC)[reply]

Thank you. Well, our math portal is the fifth in a goolge search. What shall we do to be no 1 ? --DLL 19:27, 13 June 2006 (UTC)[reply]
Get Google spelt right? Oh, and thanks Denmen. Reward all the hardworking editors with bleeding fingers, a bad case of Wikiholicism, and more knowledge than either of us will ever have, with a mention wherever you go. Talk it up everywhere. Vote on featured article candidates, and find some articles to contribute to and supervise, ... ww 05:05, 16 June 2006 (UTC)[reply]

Measuring Heights

How can a person measure the height of a tall object such as a telephone pole, a tree, a tall building, etc? — Preceding unsigned comment added by Stockard (talkcontribs) 20:08, 12 June 2006 (UTC)[reply]

At a certain time of the day (when the sun is 45 degrees above the horizon), the position of the sun is such that the length of the shadow cast by the object is equal to the height of the object, so you can just measure the shadow. – Zntrip 04:54, 13 June 2006 (UTC)[reply]
Some options:
  1. Use optical interferometry with a laser and a corner reflector. To get the corner reflector on top of the object, use a micro air vehicle.
  2. Cut down the pole and the tree; measure them on the ground. Consult the architectural records of the building.
Methods like trigonometry may have been fine in ancient Egypt, but surely we can do better today! --KSmrqT 05:16, 13 June 2006 (UTC)[reply]
  • What could be better than trigonometry? You need nothing besides a length of measuring tape and the sun. Anything else requires a lot more work and makes the result only slightly more accurate, probably more accurate than you need. - Mgm|(talk) 09:16, 13 June 2006 (UTC)[reply]
    Perhaps you should acquaint yourself with modern methods of responding to homework questions before jumping to the defense of antiquated trigonometry approaches. --KSmrqT 10:54, 13 June 2006 (UTC)[reply]
There's also the barometer trick: measuring the atmospheric pressure at the top and the bottom of the building, and using the barometric formula. – b_jonas 11:15, 13 June 2006 (UTC)[reply]
There is also another 'barometer trick': drop the barometer from the top of the building, time how long it takes to hit the ground, and use Newton's laws of motion to calculate the height :) Madmath789 11:26, 13 June 2006 (UTC)[reply]
We may be thinking along the same lines. --KSmrqT 11:28, 13 June 2006 (UTC)[reply]
This question was also asked on the Miscellaneous Ref Desk. There are a couple additional methods presented there. --LarryMac 14:35, 13 June 2006 (UTC)[reply]
I tried this for a pine in my garden. Just measure its shadow and yours. You know your height, then: ph = yh * ps / ys. For a building amongst others, it is hard to get the full shadow on the ground. As for the accuracy ... --DLL 19:24, 13 June 2006 (UTC)[reply]
I find using trigonometry a lot easier than cutting down the tree/pole and using the laser. Why not use the easiest method that will give you a fairly accurate answer? Besides, this is posted on a mathematics reference desk so I think we should provide an answer that has to do with math, not physics. Yanwen 00:19, 14 June 2006 (UTC)[reply]
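For what it's worth, the shadow proportion and the barometer-drop variant above each reduce to a one-line formula (taking g ≈ 9.81 m/s² and ignoring air resistance):

```python
def height_from_shadows(your_height, your_shadow, object_shadow):
    """Similar triangles, as in DLL's ph = yh * ps / ys."""
    return your_height * object_shadow / your_shadow

def height_from_drop_time(t, g=9.81):
    """Free fall from rest: h = g * t**2 / 2 (the dropped-barometer method)."""
    return 0.5 * g * t * t
```

So a 1.8 m person with a 1.2 m shadow standing by an 8 m shadow infers a 12 m tree, and a barometer that takes 2 s to hit the ground fell about 19.6 m.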

Symbols

Can someone tell me what ΔQ% means?Groc 10:39, 13 June 2006 (UTC)[reply]

"Δ" is the uppercase version of the Greek letter delta. In mathematics and physics, "Δ" is often used as short-hand for "change in"; more specifically, "Δ" represents a macroscopic change, whereas a lower case delta, "δ", represents an infinitesimal change. So "ΔQ%" would mean "percentage change in Q" - which doesn't help you much unless you know what Q is. Q could represent heat energy, especially if you came across this term in a thermodynamics equation such as the first law of thermodynamics. Alternatively, Q is also used in physics to represent a quantity of electric charge, or the fusion energy gain factor in nuclear physics. Gandalf61 11:09, 13 June 2006 (UTC)[reply]

Least squares approximation question

I have a set of N points (xi,yi). I want to find out the radius and subtended angle of the circular arc that can best approximate those points and the least square error in this approximation. How can I do this? Thanks. deeptrivia (talk) 19:47, 13 June 2006 (UTC)[reply]

Don't know. But just to define the question more precisely for the benefit of those who may be better able to help, how are you defining your error in this case? Perpendicular distance? Distance parallel to one of the coordinate axes? Arbitrary username 19:55, 13 June 2006 (UTC)[reply]
Well, suppose the center of the arc is located at (x0,y0), and the radius of the arc is R. Then, the error I am looking at is Σ (√((xi − x0)² + (yi − y0)²) − R)², summed over the N points. Yes, I think this would be the same as the perpendicular distance of the points from the arc. deeptrivia (talk) 21:09, 13 June 2006 (UTC)[reply]
Defining di = √((xi − x0)² + (yi − y0)²), you may be looking more for minimizing something like √((1/N) Σ (di − R)²), which is the rms error in the perpendicular distances. Here is a possible approach, assuming N is at least 3. For general use this has to be made robust for handling degenerate cases, like collinearity of the points.
  1. First find three points that are more-or-less as far away from each other as possible, for example start with some point p0, find the point p1 the farthest away from p0, find the point p2 the farthest away from p1, and finally find p3 maximizing min(d(p1,p3), d(p2,p3)).
  2. Find the circle through p1, p2 and p3, giving an initial estimate of the centre of the circular arc.
  3. Given an estimate of the centre, compute an estimate of the radius as R = (1/N) Σ di, where di is as before.
  4. Given an estimate of the centre and an estimate of the radius, obtain an improved estimate of the centre by shifting it by the average of the "discrepancy vectors", where the i-th discrepancy vector is the difference between the vector from the estimated centre to point i and the vector with the same direction and length R. So it is as if each point is pulling on the centre with a force proportional to its perp distance to the circle.
  5. Repeat steps 3 and 4 until convergence.
LambiamTalk 11:53, 14 June 2006 (UTC)[reply]
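Steps 3 and 4 above can be sketched as a simple fixed-point iteration. For brevity this sketch starts from the centroid rather than the three-point circle of steps 1 and 2, so it is illustrative only and inherits the degenerate-case caveats:

```python
import math

def fit_circle(pts, iters=2000):
    """Iterate step 3 (radius = mean distance to the centre) and step 4
    (shift the centre by the average discrepancy vector)."""
    n = len(pts)
    cx = sum(x for x, _ in pts) / n     # centroid as the initial centre
    cy = sum(y for _, y in pts) / n
    for _ in range(iters):
        ds = [math.hypot(x - cx, y - cy) for x, y in pts]
        R = sum(ds) / n                 # step 3
        sx = sy = 0.0
        for (x, y), d in zip(pts, ds):  # step 4: average discrepancy vector
            if d > 0:
                sx += (x - cx) * (1 - R / d)
                sy += (y - cy) * (1 - R / d)
        cx += sx / n
        cy += sy / n
    return cx, cy, R
```

On points sampled exactly from an arc the true centre is a fixed point of the iteration, and for reasonably long arcs the iteration converges to it.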
For a serious study of possibilities try this paper by Chernov and Lesort. They note that short arcs can cause many algorithms to fail. --KSmrqT 12:49, 14 June 2006 (UTC)[reply]
Thanks for your responses. Just figured out that there's a readymade solution, which will do for a dumb engineering student. deeptrivia (talk) 18:42, 14 June 2006 (UTC)[reply]

Nifty Prime Finder Thingy

I've been fiddling around with primes for a bit, and I found a property that I've never heard of before. I'd like to know if it already exists, or if I'm the first to find it. Given a prime P, the product of all primes less than P is A. If a prime N<A can be found that is close to A (meaning A − N < P²), there is a corresponding prime number at A-N. Of course, this can't break any records, since it looks down for primes instead of up, and can only find new primes between P and P², and then only if there happens to be a prime known between A and A − P², but still. Anyone heard of it? Black Carrot 22:48, 13 June 2006 (UTC)[reply]

Take P = 5. Then A = 2 × 3 = 6. N = 2 is prime and satisfies N < A and A − N < P². And yet A − N is 4, which is not a prime number. You also need to require that N > P, which follows from A − P² ≥ P, which in turn follows from P ≥ 11. Then it is not difficult to prove this property. I am sorry to say that it is not terribly exciting, which may be why we haven't heard of it before. --LambiamTalk 00:20, 14 June 2006 (UTC)[reply]
Damn. And of course, I was thinking of larger numbers than that. How exactly would you prove it? Black Carrot 00:58, 14 June 2006 (UTC)[reply]
Let's define B = A - N. Since B < P², to establish that B is prime it suffices to prove that no prime Q < P is a divisor of B (assuming also B > 1, which excludes the edge case N = A − 1). (For if B is not prime, we can write it as B = D × E in which D and E are proper divisors, and since they cannot be both ≥ P at least one of the two is smaller than P, and then so is its least prime factor Q.) To prove now that no prime Q < P is a divisor of B, we show that the assumption that some prime Q < P divides B leads to a contradiction. So assume prime Q < P divides B. Q also divides A, since A is the product of a set of primes that includes Q. Then Q also divides A - B = A - (A - N) = N. Furthermore, Q < N (since Q < P < N), so Q is a proper divisor of N. But this contradicts the given fact that N is a prime number. --LambiamTalk 01:58, 14 June 2006 (UTC)[reply]
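The argument is easy to verify numerically. One small sketch (note the edge case N = A − 1, where A − N = 1 has no prime divisor but is not prime either, so it is excluded below):

```python
def is_prime(n):
    """Trial division, fine for these sizes."""
    if n < 2:
        return False
    d = 2
    while d * d <= n:
        if n % d == 0:
            return False
        d += 1
    return True

def counterexamples(P):
    """All primes N with P < N < A and A - N < P**2 for which A - N fails
    to be prime, where A is the product of the primes below P.
    The N = A - 1 case (giving A - N = 1) is excluded as the known edge case."""
    A = 1
    for q in range(2, P):
        if is_prime(q):
            A *= q
    bad = []
    for N in range(max(A - P * P, P) + 1, A):
        if is_prime(N):
            B = A - N
            if B != 1 and not is_prime(B):
                bad.append((N, B))
    return bad
```

For P = 11 (A = 210) and P = 13 (A = 2310) the list of counterexamples is empty, as the proof predicts.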

June 14

World cup betting

I don't think I want to bet, but I'm curious about the way the odds work. Currently at an online betting site, the odds for the first five teams are:

Brazil      4.1
England     8.4
Argentina   8.6
Germany     9.6
Italy       12

Let's say I were to place my bets so I placed $1 on Brazil and $.50 on each of the other four. Am I right in thinking I'm guaranteed to win something if any of the first five win? Likewise, would there be some way to pick my amounts so I'm guaranteed to win something if any of the first ten win? Finally, am I right in thinking that statistically, even doing this, my expected wins should be zero overall (if the odds are fair and accurate), as the pennies I win when the top teams win would be exactly balanced out by the dollars I'd lose when one of the underdogs won? (Obviously there are also fees to pay, I assume, but I'm not counting that.) — Asbestos | Talk (RFC) 14:01, 14 June 2006 (UTC)[reply]

For the given data, yes, you could bet such that you'd win if any of those 5 win (though you require a bankroll that grows far faster than the expected rewards, see Martingale (roulette system) for a discussion of a similar problem). However, you can't even out betting on the whole thing, because then the bookies wouldn't get a cut. Betting odds never add back to 1. — Lomn 14:32, 14 June 2006 (UTC)[reply]
Surely it's not the same as the roulette example. Here there are a limited number of teams, unlike in roulette, where you can lose an unlimited number of times in a row. You're not placing any more money when you lose, you bet it all at once. In the example above, I would have bet $3 and no more. I could halve all my bets, betting $1.50, and still expect to win if any of those five teams won, right? But I'm still wondering about my last question above, rephrased here a little more generally: If all the odds given are fair and accurate, and the bookie doesn't take a commission, am I right in thinking that no matter how I place my bets, how many bets I place, and how much I put on them, my expected earnings will always be exactly zero? — Asbestos | Talk (RFC) 15:21, 14 June 2006 (UTC)[reply]
To those interested in world cup winning chances from a statistical point of view, this page from the Norwegian Computing Centre might be of interest. All remaining matches are simulated, taking every little detail of the rules into account. --vibo56 talk 15:57, 14 June 2006 (UTC)[reply]
It's not the same, no, but it's similar in that the flaw lies in having a limited bankroll to accomplish a meaningful gain. However, please note the latter half of my point -- betting odds are not fair and will be tilted towards the house. You cannot find a real-world scenario where every possible outcome is unity or better for the player. To extend into the theoretical, a strictly fair system should allow you to find a unity point, but it won't be "no matter how you place your bets" -- it will be the particular pattern of betting that corresponds to the odds. — Lomn 19:30, 14 June 2006 (UTC)[reply]
Actually, I take that back, at least as stated. If you go with multiple iterations over time, a fair system allows you to distribute your money however you want and, over time, you'll average out to zero. However, for a one-shot event, you must match the odds to guarantee a lack of loss. Consider fair betting on a fair coin. If you flip the coin a lot, you can bet all your money on tails every time and will, on average, net zero. However, to guarantee a lack of loss on one flip, you must put half your money on heads and half on tails. — Lomn 19:38, 14 June 2006 (UTC)[reply]
I still don't see how the limited bankroll affects anything, because, as nothing is growing exponentially, and there is a hard limit on the number of teams, I can always halve or quarter my bets to fit my bankroll, no matter how small my bankroll is. If I had an unlimited bankroll, I still wouldn't be any better off. But thank you for answering my question in the end. I did try to be quite specific as to what I meant, by asking what the expected gain would be, and whether or not my expected gain would change depending on how I placed my bets. If I've understood right, in a fair system of this kind (were it fair), it should make absolutely no difference what bets are placed, how many or for how much, the expected gain will always be zero. — Asbestos | Talk (RFC) 20:30, 14 June 2006 (UTC)[reply]
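Both halves of the question can be checked directly: with the listed odds, the $3 spread of stakes profits no matter which of the five wins, and with exactly fair decimal odds (odds = 1/probability) the expected profit is zero for any choice of stakes. A quick sketch (the probabilities used in the fair-odds check are made up for illustration):

```python
def profit_if_wins(stakes, odds, winner):
    """Net profit when outcome `winner` hits: its payout minus everything staked."""
    return stakes[winner] * odds[winner] - sum(stakes)

def expected_profit(stakes, odds, probs):
    """Expected net profit given true win probabilities for each outcome."""
    return sum(s * o * p for s, o, p in zip(stakes, odds, probs)) - sum(stakes)

odds = [4.1, 8.4, 8.6, 9.6, 12.0]     # Brazil, England, Argentina, Germany, Italy
stakes = [1.0, 0.5, 0.5, 0.5, 0.5]    # the $3 spread from the question
```

With fair odds the expected payout of each $1 staked is exactly $1 regardless of where it goes, which is why the allocation of stakes cannot matter.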

lognormal/normal

If x is lognormally distributed, how is s = s0·exp(x) distributed?

I've seen the term "log-log-normal distribution" for this (or "loglognormal"), but wouldn't consider it standard. --LambiamTalk 16:23, 14 June 2006 (UTC)[reply]

Extending the number set

A first-year maths lecturer last year gave us a delightful little insight into where our progressively less intuitive number sets come from. We start with the positive integers, the normal, everyday counting numbers. But then we have no solution x for equations like 1 + x = 1. So we need another number, and enter stage left, zero. But we still have no solution for equations like 3 + x = 2. So we need more numbers, and lo, the negative integers give us the complete set of integers. But now we have no solution for equations like 2 * x = 1, and again, we need more numbers, so we get the rationals. Then equations like x * x = 2 yield the irrationals (giving us the set of reals) and if we expand our number set one more time to solve x * x = -1, here we are finally with the complex numbers.

But is that as far as we need to go? Are there any equations like this that we still can't solve, that lead us to extending our number set yet further? Is this where quaternions, octonions et al. become needed (I've not read much on them, I admit), or are they just useful extensions of the concept of complex numbers that have nifty results for physicists? I've played with complex numbers idly while thinking about this but I can't think of any problems left. Are complex numbers finally the end? -Maelin 15:26, 14 June 2006 (UTC)[reply]

In a way, they are "the end", depending on what you want to achieve. By the Fundamental theorem of algebra, any polynomial equation over the complex numbers has a complex solution. However, there are many larger fields that can be considered (the space of meromorphic functions, for example). All of these are infinite-dimensional, though. If you want a finite dimensional extension of the real or complex numbers, you have to give up some of the properties of a field: commutativity for the quaternions, associativity for the octonions. Kusma (討論) 15:34, 14 June 2006 (UTC)[reply]
But if you are willing to consider infinite numbers, there's a whole lot of different infinite cardinal numbers. --vibo56 talk 16:01, 14 June 2006 (UTC)[reply]
Not to mention ordinal numbers, hyperreal numbers and surreal numbers... Cardinals and ordinals are not extensions of the reals, only of the non-negative integers, so they demonstrate a different path one may take in his quest for extensions. Regarding your original question, indeed, as long as one is only interested in solving polynomial equations with one unknown, the complex numbers suffice. But if you want to solve an equation like ab - ba = 1, the complexes aren't up to the task - this is where non-commutative rings come in handy (no finite matrices over the reals will do, since the trace of ab - ba is zero, but operators such as d/dx and multiplication by x satisfy it). In short, there are enormously many ways of extending the elementary notions of "number" - it all depends on what features one wishes in the structure he investigates. -- Meni Rosenfeld (talk) 16:20, 14 June 2006 (UTC)[reply]
And of course, let's not forget the equation x + 1 = x, which is solvable in the real projective line and the extended real number line. -- Meni Rosenfeld (talk) 16:23, 14 June 2006 (UTC)[reply]
... or you can consider questions like "what if there were a solution to x² = 1 that was not 1 or -1?" - and you get the split-complex numbers. Gandalf61 16:14, 14 June 2006 (UTC)[reply]
Just one comment on this point: "Then equations like x * x = 2 yield the irrationals (giving us the set of reals)" Actually, real solutions to polynomials only give us some of the irrational numbers, namely the algebraic numbers. They don't give us transcendental numbers. Chuck 21:02, 14 June 2006 (UTC)[reply]
The progression of simple polynomials is an excellent way to motivate and introduce number systems. Both logically and historically this route has been important, culminating in the system of complex numbers and the fundamental theorem of algebra, which suggests we need go no farther. However, another motivation is geometry. A basic example is the circumference of a circle with unit diameter. Archimedes was able to provide lower and upper bounds for this length based on sequences of regular polygons, inscribed and circumscribed. However, the value itself, which is π, is not the solution of any polynomial equation with rational coefficients. The real line consists almost entirely of such values, required to form a geometric continuum.
Quaternions also emerge from geometry. Sir William Rowan Hamilton had worked with complex numbers both as algebraic objects and as ordered pairs suitable for plane geometry. Through his interest in mathematical physics he was naturally curious if there was a number system that could play the same role for space, meaning the 3-dimensional Euclidean space of physics at the time. For 15 years he tried unsuccessfully to create a system of triples instead of pairs. Habitually and unconsciously he assumed that multiplication was commutative, so that ab = ba. Then one evening as he and his wife were walking through Dublin to a meeting, the thought struck him — like a bolt of lightning — that if he let ij = k but ji = −k he would obtain a system of quadruples instead of triples, but otherwise the number system would work as he required. This was the famous invention/discovery of quaternions.
It was also the beginning of the crucial realization that we could devise number systems and algebras with great latitude in their rules. For example, a few years later Arthur Cayley explained how to calculate with matrices, whose multiplication is also non-commutative. William Kingdon Clifford built on earlier work of Hermann Grassmann to produce a family of arithmetic, or more properly algebraic, systems called Clifford algebras, suitable for geometry in any dimension. The examples of such inventions are too numerous to list.
Each system of numbers has its own motivations, its own uses. Sometimes these go far beyond the original impetus. For example, we now know that the structure of any Clifford algebra is based on matrices built from one of three fundamental systems: real numbers, complex numbers, or quaternions.
So, no, complex numbers are not the end. They are just a particularly scenic and historic stop on a tour of a beautiful country. --KSmrqT 05:46, 15 June 2006 (UTC)[reply]
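Hamilton's defining relations (i² = j² = k² = −1, ij = k, ji = −k) are easy to encode, and a few lines suffice to exhibit the non-commutativity described above. A minimal sketch:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Quaternion:
    w: float
    x: float
    y: float
    z: float

    def __mul__(self, o):
        # Hamilton's product, expanded from ij = k, ji = -k, i*i = j*j = k*k = -1
        return Quaternion(
            self.w*o.w - self.x*o.x - self.y*o.y - self.z*o.z,
            self.w*o.x + self.x*o.w + self.y*o.z - self.z*o.y,
            self.w*o.y - self.x*o.z + self.y*o.w + self.z*o.x,
            self.w*o.z + self.x*o.y - self.y*o.x + self.z*o.w,
        )

    def __neg__(self):
        return Quaternion(-self.w, -self.x, -self.y, -self.z)

i = Quaternion(0, 1, 0, 0)
j = Quaternion(0, 0, 1, 0)
k = Quaternion(0, 0, 0, 1)
```

The point of the lightning-bolt insight is visible in the tests: i*j and j*i are not equal, they differ by a sign.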

a question

Hello, I hope I am looking in the right section. What is archetypal systems analysis? Also, I found it as archetypal social systems analysis. Thank you very much for your time. --Marina s 19:12, 14 June 2006 (UTC)[reply]

I have no idea, but I tried googling for it. Almost all hits point to the same source: "Mitroff, I. L. (1983). Archetypal social systems analysis: On the deeper structure of human systems. Academy of Management Review, 8, 387-397". If you contact your library, you could get a copy of the paper. --vibo56 talk 16:54, 15 June 2006 (UTC)[reply]

Game Archive Browsing/Windows Shell Integration

Recently I had the idea of somehow creating a shell extension similar to Microsoft's .ZIP CompressedFolder extension that would enable users to browse through a game archive file. It would handle the game archive files almost the same way as .ZIP files are handled (with shell menu items and being able to open the file and browse through as though it were a folder). Is this possible? If so, how would I go about doing it? What language would be best for this project? I know C#, some C++, and Visual Basic.

Any help, comments, or input on this would be greatly appreciated.

--Kasimov 19:14, 14 June 2006 (UTC)[reply]

Well, if you don't know the format to the game archive, then you can't really do much, can you? Dysprosia 08:47, 15 June 2006 (UTC)[reply]

It's the Halo/Halo 2 .map format. --Kasimov 12:21, 15 June 2006 (UTC)[reply]

Again, if .map is not an open standard, or is a synonym for a non-open standard, you can't do much. Do you know anything about the map format? Are there libraries available for manipulating map files? Dysprosia 12:49, 15 June 2006 (UTC)[reply]

Alright, I don't know if this is enough information but here's some that could be helpful:

The file itself is divided into 4 major sections:

Header
BSP(s)
Raw Data
Tag Index and Meta

The header is uncompressed and is always 2048 bytes. However, the rest of the file is zip compressed with zLib.

Now, I figured since it's compressed with zlib that would make it easier to make a shell extension, right?

I hope that's enough information, because really it would be pointless for me to type here the entire structure of the file. If you're looking for a complete breakdown of it then visit these two pages: Page #1 Page #2.

Thanks --Kasimov 13:37, 15 June 2006 (UTC)[reply]

If there are no specific libraries, do a binary read past the header and other nonsense, and somehow feed the rest to zlib and uncompress. I don't know about shell extensions in Windows, but you could write wrap/unwrap programs and then use Explorer to do all the manipulating. I've never used zlib so I can't be more specific, but if you can use zlib, then there you go. Dysprosia 00:11, 16 June 2006 (UTC)[reply]
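Dysprosia's read-past-the-header suggestion can be sketched in Python. The 2048-byte header size comes from Kasimov's description above; real .map files may store several separately compressed chunks, so treat this as illustration only, not a working .map parser:

```python
import zlib

HEADER_SIZE = 2048  # per the description above: uncompressed, fixed-size header

def split_map(path):
    """Read past the fixed-size header and inflate the remainder with zlib.
    A sketch: if the archive really holds multiple zlib streams, you would
    need offsets from the header to split them first."""
    with open(path, "rb") as f:
        header = f.read(HEADER_SIZE)
        body = zlib.decompress(f.read())
    return header, body
```

If `zlib.decompress` raises an error partway through, the compressed region is probably chunked, and the header metadata would be needed to locate each stream.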

June 15

Multi Choice exam strategies

Last night a friend and I got into a dispute. In a multi-choice exam with 4 choices (e.g A/B/C/D), where the answers are randomly selected among the 4 possibilities (so for any one question, a random guess at the answer has a 0.25 chance of being correct), what is the best strategy if you have to guess at an answer?

She said that sticking with one letter (e.g. Always guess "A") gives a 0.25 chance of getting the right answer, BUT that choosing at random between two letters (e.g. random guess between "A" or "B") gives a 0.125 (1/8) chance of being right instead of 1/4. Her reasoning: First there is a 0.5 choice between A and B, and then a 0.25 chance of being right. 0.25*0.5=0.125.

I'm sure that's only correct if the real answer is always the same letter.

I think that regardless of whether you guess randomly among A–D, or between any two choices, or stick with just one, your chance of being right is 0.25 in all three cases. Because if you always choose A, on average the answer will be A 25% of the time. If you guess randomly between, say, A and B, on average each letter will be right 12.5% of the time, and 12.5 + 12.5 = 25% (while A is correct 25% of the time, by choosing between two letters the number of A's chosen has been halved; of course, this also applies to the choice of B, so the total proportion of right answers is still 25%). Extend the guess to among A, B, C and D, and we get 6.25 × 4 = 25%.

Who is correct here?--inksT 00:28, 15 June 2006 (UTC)[reply]

You are correct. Most people just choose to stick with one letter just because of the mindset, but it really makes no difference. —Mets501 (talk) 01:30, 15 June 2006 (UTC)[reply]
An interesting variant: let's suppose there's a trickster daemon (in the same meaning of the word as Laplace's or Maxwell's) that, whenever you try to make a random choice, changes it to the worst possible outcome (if the answer was A, it'll make you choose B). In that case, choosing randomly for each answer, even between two letters, will result in a 0 chance of being correct, while sticking with one letter (the daemon will make you choose the worst possible one) will result in a 0.25 chance of being correct, given enough questions. Of course, this only works if the daemon can't influence which answer was the right one, only the outcomes of your choices. --cesarb 02:41, 15 June 2006 (UTC)[reply]
In case that's unclear, cesarb is suggesting a daemon who can affect luck—whenever you make a random choice, Lady Luck tries as hard as she can to screw you over. If you pick randomly every time, you give Lady Luck lots of opportunities to mess with your choices, whereas if you pick A every time she can't do anything. Tesseran 03:39, 15 June 2006 (UTC)[reply]
I think I got that. Thanks all for the replies. I have since used Excel to verify this experimentally (generating 4000 "questions" and 4000 "guesses") and the odds are as I expected. She owes me an ice cream :)--inksT 04:05, 15 June 2006 (UTC)[reply]
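The Excel experiment mentioned above is easy to replicate; here is a quick Monte Carlo sketch (choices 0–3 standing in for A–D; the strategies and counts are illustrative, not from the thread):

```python
import random

def trial_fraction(strategy, questions=100_000):
    """Fraction of random 4-choice questions answered correctly by a
    guessing strategy (a function returning a choice in 0..3)."""
    correct = 0
    for _ in range(questions):
        answer = random.randrange(4)  # answers uniformly random, as stipulated
        if strategy() == answer:
            correct += 1
    return correct / questions

always_a = lambda: 0                      # stick with one letter
a_or_b = lambda: random.randrange(2)      # random between two letters
any_letter = lambda: random.randrange(4)  # random among all four

# all three hover around 0.25
print(trial_fraction(always_a), trial_fraction(a_or_b), trial_fraction(any_letter))
```

All three strategies converge on 0.25, confirming that the 0.5 × 0.25 reasoning double-counts the randomness.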
One unnoted flaw here is the assumption that the correct answers are evenly distributed, which is often not the case. My own anecdotal observations are that human-generated tests tend to avoid the first and last options being correct, so I would expect that in many real-world situations, picking "always B" is generally superior to rotating through the options. Also of interest is this PDF regarding multiple choice and "look random" generation, where (assuming you fill in what questions you are confident of first) strategies based on least-seen answers can raise an SAT score an average of 10-16 points over pure random guessing. — Lomn 15:55, 15 June 2006 (UTC)[reply]

foray into linux

I'm off to college this fall, to major in Computer Science. I thought it would probably be a good idea to get a laptop, so, hearing that ThinkPad hardware is well-supported by Linux, I bought a very nice ThinkPad. I want to dual-boot windows and linux.

I've looked into Linux in the past (and even tried to install Slackware, though I was unable to resize my Windows partition so I gave up) and I've decided on SUSE Linux, primarily because of the easy setup (especially in partitioning) and the focus on ease-of-use.

I have three questions:

1) Which desktop environment should I choose, KDE or Gnome? I've had a pretty good experience with KDE trying out Knoppix, but I want to know if I'm really missing out on good stuff in Gnome. Can someone give me a comparison feature-by-feature of what they like about each? Which is used in this video?

2) Will YaST automatically configure my boot menu to dual-boot with windows if it detects OEM Windows XP installed, or am I just going to be stuck with Linux until I can figure out LILO or GRUB?

3) When I upgrade to Vista this fall, will there be any problem getting it to stay in the Windows partition and keeping it from taking over the whole hard drive when it installs? Will I have to rewrite the boot settings, or will Vista do this for me? Or will it rewrite the entire record and take out Linux? In that case, how do I modify it from within Vista to allow access to Linux again?

--Froth 01:46, 15 June 2006 (UTC)[reply]

Kudos on buying a ThinkPad. Don't give those other closed-hardware people any of your money. I use GNOME because it "just works", and the panels are nifty, especially Workspace Switcher and Character Palette. The Nautilus file manager is a nice piece of software too, although I don't use it much. Not sure about number 2, but number 3 brings up the question of where you're going to store all your files. Linux doesn't like NTFS and Windows will have nothing to do with Ext2 or ReiserFS, so you'd better put your files on a FAT partition. —Keenan Pepper 02:22, 15 June 2006 (UTC)[reply]
For number 3, the Windows installer will usually overwrite the MBR. That means that in order to restore a bootloader that will run linux, you may have to boot off the install CD and re-run the bootloader installer after you install Windows. As for Keenan's suggestion that you need a FAT partition to swap files between Windows and Linux, it's not really true. Linux has read support for NTFS, so you can always access your Windows files while you're running linux (write support exists, but is pretty spotty in my experience). Windows can't see the linux partitions though. -lethe talk + 02:43, 15 June 2006 (UTC)[reply]
I'm not interested in swapping files between filesystems; I'm going to have one 60GB NTFS partition and let YaST play with the other 20GB. Also, how would I restore the MBR? Would it automatically be brought to my attention as a "repair" option or something instead of "install" or would I be better off getting a live-cd distro and using the copy of GRUB included? --Froth 15:00, 15 June 2006 (UTC)[reply]
You can backup your MBR from a command shell opened from knoppix by typing:
$dd if=/dev/hda of=mbr-copy.bin bs=512 count=1
I would recommend doing this with a usb disk/stick partitioned as FAT32 connected, with the current directory being on that removable drive, thus saving your MBR copy on a separate drive. To make sure that the copy is valid, do a hex dump, and verify that the last two bytes are aa55. (Magic signature at end of MBR).
While you're at it, I would also suggest saving a copy of your laptop's main partition on the usb disk, using partition image, before repartitioning. It's on the knoppix CD, and you can also get it here.
When reinstalling the MBR, you should keep in mind that the information about the main partitions on the disk is located at the end of the MBR. Therefore, if you have repartitioned the disk after making the MBR backup, and want to preserve the new partitioning, you would want to do
$dd if=mbr-copy.bin of=/dev/hda bs=446 count=1
Otherwise, it's
$dd if=mbr-copy.bin of=/dev/hda bs=512 count=1
NOTE: The last command will overwrite your partition table. Before restoring an MBR backup, it's a good idea to save the current setup (as described above), so that whatever you do is reversible. --vibo56 talk 17:34, 15 June 2006 (UTC)[reply]
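The aa55 signature check vibo56 describes can be scripted. A sketch using a dummy 512-byte file for illustration (with a real backup, point `tail` at your mbr-copy.bin instead):

```shell
# Build a dummy "MBR": 510 zero bytes followed by the boot signature bytes
# 55 AA (the 16-bit value 0xAA55 stored little-endian; octal escapes used
# because \x escapes aren't portable in printf).
dd if=/dev/zero of=mbr-test.bin bs=510 count=1 2>/dev/null
printf '\125\252' >> mbr-test.bin

# Dump the last two bytes as hex; "55aa" means the signature is present.
sig=$(tail -c 2 mbr-test.bin | od -An -tx1 | tr -d ' \n')
echo "$sig"
```

The same one-liner works on the backup file itself, or on `/dev/hda` directly (as root) to sanity-check the live MBR before and after restoring.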
If you can't decide between KDE and Gnome, I recommend getting an Ubuntu live CD (uses Gnome) and a Kubuntu live CD (uses KDE), and using each for a while. --Serie 22:09, 15 June 2006 (UTC)[reply]
Reinstalling the MBR shouldn't be too hard. I'm not familiar with the distro you're using, but generally, yeah, it should be there in the repair or installation methods. Most distros with high level GUI installers have this. You can do it yourself from the command-line as well. If your installer previously configured a GRUB bootloader, then all that's required is the command "grub-install /dev/hda". This will read the grub.conf file and set up a bootloader in your MBR as before. You can also backup your MBR as vibo suggests. -lethe talk + 01:18, 16 June 2006 (UTC)[reply]

taxonomy of real numbers..

taxonomy of real numbers

There are various ways of classifying the real numbers. One scheme starts with the integers, which are a subset of the rational numbers, which are in turn a subset of the real algebraic numbers. The rest (i.e. real numbers that are not algebraic numbers) are transcendental numbers. An alternative scheme divides the real numbers into positive numbers, negative numbers and zero. Gandalf61
This shows that ordering by type instead of size is sometimes better. If you order real numbers by absolute size, beginning with 0, then alternating positive and negative ones, you'll never see an integer in your life. --DLL 21:59, 15 June 2006 (UTC)[reply]

DOM Inspector

In Mozilla Firefox, is it possible to install the DOM inspector after you've already installed the browser earlier without it? I didn't install DOM inspector because I thought I wouldn't need it, but now it looks like I do, and would like to avoid completely reinstalling and losing bookmarks, extensions and history info in the process. - 131.211.210.12 11:59, 15 June 2006 (UTC)[reply]

You can download the installer again and install over it.. that's how updates used to work, and I assume the functionality is still there --Froth 15:05, 15 June 2006 (UTC)[reply]

I'm totally stumped!!!

Me and all my friends cannot get this one. It seems easy enough but there's always a part where we can't get any further..

2x = (x + 1)(ln 10)/(ln e)

What is x???? — Preceding unsigned comment added by Gelo3 (talkcontribs) 13:07, 2006 June 15 (UTC)

Please do not directly answer questions like this. Some people have a bad habit of mining the reference desks for homework answers. Stated clearly at the top of this page is the following:
  • Do your own homework. If you need help with a specific part or concept of your homework, feel free to ask, but please do not post entire homework questions and expect us to give you the answers.
The appropriate response to such a question is something like, "Show us what you have done, and explain why you get stuck." It is totally not appropriate to do the problem for someone and provide the answer. Not only is that unethical, it is educationally counterproductive. Everyone's cooperation in these matters is appreciated. --KSmrqT 14:13, 15 June 2006 (UTC)[reply]

This ISN'T homework, so PLEASE stop making assumptions. This was a question from a past exam paper I found on the internet for study. 220.239.228.252 14:43, 15 June 2006 (UTC)[reply]

Sorry, we have no way to verify that. Either way, the appropriate response is not to give the answer, but to find where understanding fails and help bridge the gap. It's a trivial problem, and the intrusion of logarithms is mostly an irrelevant distraction. The equation might as well be written
2x = (x + 1)c.
So, please, enlighten us. Show us concretely what you can do, and where you can't get any further. Then we can honestly help — with your understanding. That we're happy to do. --KSmrqT 15:16, 15 June 2006 (UTC)[reply]


Read this : http://en.wikipedia.org/wiki/Logarithm#Other_notations Evilbu 15:44, 15 June 2006 (UTC)[reply]

JPEG image strangeness

I am feeling very daft, as I can't seem to figure this one out and I hope that someone smarter (and more awake!) than me will be able to help.

I scanned in a photo using my scanner, it returned a 150kB file. All well and good. I open it (in Paintshop, if that makes a difference), find out it's sideways, rotate it 90°, save and close. Imagine my surprise when the same file is now suddenly 750kB! What in the world is going on - I just rotated the image, surely it contains the same amount of information?

My guess (after much reading through JPG) is that my scanner is sending me a JPG which is already compressed somewhat, but when Paintshop saves it at 'no compression' the filesize obviously increases. Does this make sense? Or do you suspect something else may be at work?

Thanks in advance! — QuantumEleven 14:09, 15 June 2006 (UTC)[reply]

It's unusual, but quite possible, for your scanner to be giving PSP a compressed jpeg. Rotate it in Paint and save as PNG, or rotate in PSP and save with a higher compression setting - be careful not to overdo it, or it'll ruin your image. --Froth 15:08, 15 June 2006 (UTC)[reply]
Because of the way JPEG works it's possible to rotate through multiples of 90° without decompressing and recompressing, thus not losing quality or increasing filesize. The library that does it is called jpegtran. See here for a list of applications which use this to provide lossless rotation. —Blotwell 18:18, 15 June 2006 (UTC)[reply]
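For the record, the lossless rotation Blotwell mentions is available from the command line via libjpeg's jpegtran tool (a usage sketch; filenames are placeholders):

```shell
# jpegtran transposes the DCT blocks directly, with no decode/re-encode,
# so the output keeps the original quality and roughly the original size.
# -copy all preserves EXIF and other markers from the scan.
jpegtran -rotate 90 -copy all scan.jpg > scan-rotated.jpg
```

This avoids both the quality loss and the file-size jump that come from re-saving in an image editor.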

a very basic but fundamental question: when is a tensor product zero?

Hello,

I am studying tensor products and I think it would really be useful to think about this problem in general

let M be a right R module, and N a left R module

Now let us assume nothing about the ring (commutativity, division ring,...)

now consider the tensor product M ⊗_R N and an element in it of the form m ⊗ n

Now when is m ⊗ n = 0?

I know just saying "at least one of them must be zero" is simply not true, at least when I am working with non-division rings... But then what is the criterion?

Could this be it : one of them must be "divisible" by an element in the ring R such that the other one, multiplied with it, gives zero?

Thanks,


Evilbu 14:16, 15 June 2006 (UTC)[reply]

I'm not sure what the full result is, but I'll say that all tensor products of all modules over a ring with zero divisors will have lots of such pairs. For example, if rs = 0 in your ring, then mr ⊗ sn = m ⊗ rsn = 0 for all m in M and all n in N. I can also say that for any torsion group G, the entire group G ⊗ Q = 0, where Q is the group of rationals. Thus every single tensored pair in that group is equal to zero. This is a consequence of the fact that the tensor product functor by a torsion group is not left-exact. -lethe talk + 15:15, 15 June 2006 (UTC)[reply]
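The torsion example can be made concrete. If g ∈ G has finite order n and q is any rational, then, using only the defining bilinearity relations of the tensor product over Z:

```latex
g \otimes q \;=\; g \otimes n\!\left(\frac{q}{n}\right)
            \;=\; (ng) \otimes \frac{q}{n}
            \;=\; 0 \otimes \frac{q}{n}
            \;=\; 0 .
```

Since the simple tensors generate G ⊗ Q, the whole group is zero — even though g and q can both be nonzero.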

Thanks. So what do you think? This criterion is not correct? I stress again that the ring (apart from having unity) can be as free as it pleases in all its weirdness, and so can the modules. Nobody seems to be comfortable answering this question. I studied constructing tensor products (with balanced products and all), which eventually meant taking a quotient (given by the relations) of a free abelian group. So an element of the big free abelian group gives zero in the quotient if it is a finite sum of elements given by those relations (I am talking about elements like (mr, n) − (m, rn)). There can really be millions of terms in a sum like that, so I don't see how a criterion could ever be proved this way?


Anyway, as always, I stress my gratitude for the kind, quick and to-the-point help I receive from this wonderful site. Evilbu 14:25, 16 June 2006 (UTC)[reply]

Yes, I think that's about as concise a description as you will find. The tensor product M ⊗_R N may be defined as the quotient of the free abelian group on M×N by the subgroup generated by terms (mr,n) – (m,rn); (m1 + m2,n) – (m1,n) – (m2,n); and (m,n1 + n2) – (m,n1) – (m,n2). Therefore any element of this subgroup maps to zero in the tensor product. This doesn't really answer your question though, because none of the elements of this subgroup can be represented by a single tensor product of two module elements. You wanted a criterion to tell you when two nonzero elements tensor to zero, which I don't know the answer to. For example, in general, given m1, m2, n1, and n2, we cannot assume that there are a, b such that a ⊗ b = m1 ⊗ n1 + m2 ⊗ n2. Thus we have no guarantee of nonzero elements which satisfy a ⊗ b = 0 (and indeed, in general, there may be no nonzero elements which satisfy it, for example if R is an integral domain). -lethe talk + 14:54, 16 June 2006 (UTC)[reply]

Arc vs. curvature?

If one holds a piece of string between two points on a sphere, the string would be tracing the arc/curvature/perimeter/circumference——i.e., "great circle"——segment between the two points, which would equal the central angle, Δσ. To find the distance between the two points you would multiply the central angle by the sphere's radius, as the radius equals the radius of the circumference. With an ellipsoid, however, the radius of the body and the radius of its circumference are different. You have two principal curvatures (north-south, east-west), and their corresponding radii, M and N. Curvature in a given geodetic direction, α, is given as

1/R(α) = cos²(α)/M + sin²(α)/N.

The corresponding radius of curvature ("in the normal section") is then given as

R(α) = MN / (N·cos²(α) + M·sin²(α)).

But, if you take a minuscule distance, Δ (i.e., ≈ 0), then

Δ/Δσ = √(M²·cos²(α) + N²·sin²(α)),

not R(α)! There was a stub for arc recently created. Would that second quantity, √(M²·cos²(α) + N²·sin²(α)), be the "radius of arc", thus the equation of arc would be Δ = √(M²·cos²(α) + N²·sin²(α))·Δσ? If you divide any north-south distance by Δσ it equals the average value of M within that segment, and a minuscule east-west distance divided by Δσ (since, except along the equator, east-west along a geodesic only exists at a single point——the transverse equator) equals N. So what is a minuscule distance, in a given geodetic direction, divided by Δσ, a radius of? Curvature? Arc? Perimeter? If I Google "arc" or "radius of arc" (or even "degree of arc"), all I find are simplistic spherical contexts, nothing elliptical, involving M and N! P=(
I understand basic, concrete geodetic theory (besides ellipticity, there is curvature shift towards the pole as the geodetic line grows, culminating in a complete shift to north-south for an antipodal distance, since north-south is the shortest path), so I know you can't simply take the spherical delineation, average all of the radii of curvature/arc along the segment and multiply by Δσ to get the true geodetic distance (though the difference does seem directly proportional to the polar shift involved——i.e., the smaller the distance, the closer this "parageodetic" distance is to the true geodetic one!). But I digress... P=)  ~Kaimbridge~17:20, 15 June 2006 (UTC)[reply]

Markov-like chain bridge?

I am currently making a program to generate a random name using United States Census data and Markov chains. However, I want to have a little more flexibility in the process. So I want to be able to make a bridge between a given beginning, some given middle letters, and a given ending, so I can generate a name like Mil???r?a. Currently, I am using a three-letter window. Does anyone know how to generate this bridge? --Zemylat 21:35, 15 June 2006 (UTC)[reply]
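One simple (if wasteful) way to get such a bridge is rejection sampling: generate names freely from the three-letter-window model and keep the first one that matches the pattern. A sketch with made-up training names standing in for the census data:

```python
import random

def train(names, order=2):
    """Build an order-2 Markov model ("three-letter window"): map each
    two-letter context to the letters observed after it, with multiplicity."""
    model = {}
    for name in names:
        padded = "^" * order + name + "$"   # ^ = start padding, $ = end marker
        for i in range(order, len(padded)):
            model.setdefault(padded[i - order:i], []).append(padded[i])
    return model

def generate(model, order=2):
    """Generate one name freely from the model."""
    ctx, out = "^" * order, []
    while True:
        ch = random.choice(model[ctx])
        if ch == "$":
            return "".join(out)
        out.append(ch)
        ctx = ctx[1:] + ch

def bridge(model, pattern, order=2, tries=20000):
    """Crudest possible bridge: rejection sampling. Keep the first freely
    generated name matching the pattern ('?' marks a free position)."""
    for _ in range(tries):
        name = generate(model, order)
        if len(name) == len(pattern) and all(
            p == "?" or p == c for p, c in zip(pattern, name)
        ):
            return name
    return None  # pattern may be unreachable under this model

# Toy data for illustration -- the real model would be trained on census names.
model = train(["milner", "milnor", "miller"])
```

A less wasteful approach is dynamic programming over positions and contexts, only extending prefixes from which the remaining fixed letters stay reachable; but for short name patterns, rejection sampling is usually fast enough.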

Word Puzzle

There is a certain word puzzle, that I have heard many times before. I have searched for said puzzle, but cannot seem to find it anywhere. I was wondering how the math works out this way in this puzzle:

Three men go to a motel and rent a room. The deskman charges them $30 for the room. The manager of the motel comes in and says that the deskman has charged them too much, that it should only be $25.

The manager then goes to the cash drawer and gets five $1.00 bills, and has the bellboy take the money back to the three men. On his way up to the room, the bellboy decides to give each of the men only one dollar apiece back and keep the other two dollars for himself.

Now that each one of the men has received one dollar back this means that they only paid $9.00 apiece for the room. So three times the $9.00 is 27.00 plus the $2.00 the bellboy kept comes to $29. Where is the other dollar?

Why does it come out to $29 and not $30? I always suspected that it was because you can't multiply the remaining money the men had to get the right amount, but I'm not sure... Just curious.

The multiplication is fine. You are tricked by the ungrammatical run-on sentence where it says "... is 27.00 plus the $2.00 ...". The 27 dollars is what the men paid. The 2 dollars is what the bellboy took, so it is a "negative" payment. So the sentence should have gone: "So three times the $9.00 is 27.00 minus the $2.00 the bellboy kept comes to $25 which is now in the manager's cash drawer." --LambiamTalk 23:14, 15 June 2006 (UTC)[reply]
Right. Of the $30 paid, $3 were returned, $27 kept. Of the $27 kept, $2 are in the bellboy's pocket, and $25 are in the cash register. Black Carrot 23:38, 15 June 2006 (UTC)[reply]
Missing dollar paradox (Igny 15:58, 16 June 2006 (UTC))[reply]

One-to-One Correspondence

I've been learning over the years, when I find myself in disagreement with nearly all experienced mathematicians in existence, to start with the assumption that I'm completely, shamefully, blasphemously wrong, no matter how it looks to me, and go from there. Because it pisses people off less, and because it's usually true. So, tell me how I'm wrong.

I don't get one-to-one correspondence as a way to measure infinite numbers. I understand (the nonrigorous version of) how it works, and I can see how it's a natural extension of normal counting, but it's not how I think, and it's not how I've ever seen infinite numbers. Take the alleged one-to-one correspondence of, for instance, natural numbers and their subset, even numbers. That doesn't make sense to me. Even numbers are sections of an extent. It doesn't make sense to rip them off the number line, jam them together like the vertebrae of a crash victim, and shove them back on, while not doing anything of the sort to the numbers they're being compared to. Here's how I'd compare them, and here's where I need correction. They are each prespecified, patterned, easily identifiable sections of an extent of number line. It is guaranteed that no natural number can exist that is more than one away from an even number, and no even number exists that is not a natural number. They are already, inherently and inextricably, in a particular correspondence with each other. So, take a number x. x can be anything we want, a positive number of some amount. Now, count the number of whole numbers up to (and including, if possible) x, and the number of even numbers up to and including x. Keep doing this as x grows, and let x pass each and every natural number in turn. How many natural numbers will there be as it grows? floor(x). How many even numbers? floor(x/2). What, then, is the ratio of whole numbers to even numbers? 2:1, not 1:1. I think the rules of limits back me up on this. This is just a long (and hopefully clear) way of saying what seems so obvious to me: that there are many many whole numbers, and exactly half of them aren't odd.

This last bit is assuming I didn't screw up above. I can understand how the lack of one-to-one correspondence is excellent reason to separate different infinite numbers, but why do people seem to think that the presence of it proves they're exactly the same? Any help is appreciated. Black Carrot 23:34, 15 June 2006 (UTC)[reply]
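The two pictures in the question — the 2:1 counting ratio and the one-to-one pairing — can be computed side by side; a sketch (not from the thread itself):

```python
# The counting ratio described above: up to any finite cutoff x,
# the naturals outnumber the evens two to one.
x = 10**6
naturals = sum(1 for k in range(1, x + 1))             # floor(x)
evens = sum(1 for k in range(1, x + 1) if k % 2 == 0)  # floor(x/2)
print(naturals / evens)  # -> 2.0 at this (and every even) finite cutoff

# The pairing behind "same cardinality": k -> 2k matches every natural
# with a distinct even number and misses no even number.
pairing = {k: 2 * k for k in range(1, 6)}
print(pairing)  # -> {1: 2, 2: 4, 3: 6, 4: 8, 5: 10}
```

Both computations are true at once; the disagreement below is over which one the word "size" should mean, and cardinality is defined via the second.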

Cardinality has nothing to do with the structure that a set might have, just the "number" of elements. It doesn't matter if the elements are numbers, people, functions, etc. I think this is the main mistake you are making, thinking about what the elements "are". Another point is that if you believe a lack of one-to-one means they are "different infinities", then what do you think of this: if there exists a map f from A to B that is onto but not one-to-one, then #A >= #B. By this, the number of evens is >= the number of naturals. One final point -- it is not that mathematicians "seem to think" that the naturals and evens are the same size, it is that they do in fact have the same cardinality (by the definition of cardinality). (Cj67 00:25, 16 June 2006 (UTC))[reply]
You seem to be using an inductive argument to show that the ratio of whole numbers to even whole numbers is 2:1. Keep in mind, though, that mathematical induction doesn't go all the way to infinity. An inductive proof generally shows that some property is true for any finite integer n; such a proof can't show that it's true if n is infinite. Your inductive statement is basically saying that of the first n whole numbers, half of them are even. This is true. And since it's true for some n, you can use an inductive argument to show that it's true for n+1, and so therefore it's true for any whole number n. But this argument doesn't show that the entire set of whole numbers has twice as many elements as the entire set of even whole numbers, because in that case n=∞. Your inductive argument doesn't apply, because you never show that your statement is true for n=∞−1, whatever "∞−1" means. This is a subtle point; I hope it makes sense. —Bkell (talk) 00:39, 16 June 2006 (UTC)[reply]
I don't know if this will help or not, but maybe you can see why your "limit" idea (which I think is a form of mathematical induction) doesn't work when you try to prove a property about an infinite set based on properties of its finite subsets. Think about this: If I take any finite set of whole numbers, then I can find a real number that is larger than every element in the set. This is true if the finite set has one element, or two elements, or 18,372,364,735,078 elements, or even no elements at all (see vacuous truth). It's true for every possible finite set of whole numbers, so in the "limit" it seems that it should apply to the entire set of whole numbers. But it doesn't. I can't find a real number that is larger than every whole number. In the same way, in the set of the first n whole numbers (where n is a finite integer), there are twice as many whole numbers as even whole numbers. But this doesn't mean that this is true for the entire set of whole numbers.
These comments might not convince you that there are the same number of whole numbers as even whole numbers, but I hope that they can help you see why your argument against the claim doesn't work. —Bkell (talk) 00:50, 16 June 2006 (UTC)[reply]
As others have mentioned, comparing cardinalities is a very coarse kind of comparison. It is purely set-theoretic, and ignores any other information that a structure may have. For example, it's true that there is a bijection between the naturals and the integers, but there is no order isomorphism. As ordered sets, they are quite different. It's true that there is a bijection between the real line and the Cartesian plane, but as topological spaces, they're quite different. Comparing the underlying sets is important, but it's not everything. There are other kinds of comparisons that are important for other kinds of spaces. -lethe talk + 01:25, 16 June 2006 (UTC)[reply]
Black Carrot, you need to tell your intuition to shut up, it doesn't know what it's talking about. Let me explain. We build intuition based on our experience, and extrapolate past events to present circumstances. This is not a bad thing; it has helped us survive as a species. Intuition is also valuable to a mathematician, but it must be used wisely, with caution.
In mathematics we have definitions, axioms, and rules of inference that allow us to create new kinds of objects and worlds with new properties. We can create negative numbers, where adding b to a can create a quantity less than a. We can create fractions, where we must add b repeatedly just to get from a to a+1. These creations are remarkably useful, but also remarkably counter-intuitive — until we re-train our intuition to conform to the new definitions.
Cardinal numbers, and especially infinite cardinals, are mathematical creations, just like negatives and fractions. They do not obey the old rules. One of the definitions of an infinite set is that it can be put in one-to-one correspondence with a proper subset of itself. Strange? Yes; but so is a number that can square to −1.
In the words of the ancient Greek playwright Aeschylus,
"Against a spike
Kick not, for fear it pain thee if thou strike." — Agamemnon [2]
If your intuition says there should be half as many even numbers as all, congratulations: your intuition works just fine for the numbers it was trained on. Just don't expect it to apply to infinite cardinals. If you think there should be a way of counting infinite sets that distinguishes between equal cardinal infinities, go play with some axioms and see if you can make it work. Will the results, if any, be intuitive? Hmmmm. --KSmrqT 05:22, 16 June 2006 (UTC)[reply]
Perhaps one thing should be emphasized: Cardinality is a very specific way of comparing sizes of sets. Saying that two sets have the same cardinality is not the same as saying they have the same size - The latter doesn't really mean anything by itself, and cardinality is one way to interpret it. For example, the sets [0, 1] and [0, 2] "have the same size" if you look at their cardinality, but have different sizes if you look at their Lebesgue measure. The great thing about cardinality is that it applies to any set whatsoever - But this is also its weakness, as it completely ignores the structure of the set. For example, your idea above basically assigns sizes to sets A of natural numbers according to the value of:
lim (n→∞) #{k ∈ A : k ≤ n} / n
Which is fine, but it isn't hard to see that it uses specific properties of natural numbers in a specific, and rather arbitrary, way. This means that it only applies to sets of natural numbers, and not even to all of them (for some, the limit doesn't exist). Also, I could just as well give other definitions which would make, say, the odd numbers be twice as numerous as the even numbers. Perhaps your definition looks more "intuitive", but that doesn't make it more correct. So it is okay to have definitions of size specific to a given structure we wish to investigate, with properties we wish to have; But that doesn't discredit cardinality, which does exactly what it was meant to do - Measure the size of any set, without being influenced by the structure of this set. -- Meni Rosenfeld (talk) 08:15, 16 June 2006 (UTC)[reply]


That's a pretty impressive response. I've read them carefully, and I'd like to list the main points, so you can tell me if I've left any out. I'll respond to them as I can.

1.1) It doesn't matter to "cardinality" what the elements actually are.
Yes. I said that.
1.2) One-to-one correspondence isn't the only way to measure "cardinality". There can be overlap and stuff.
Good point. I take back what I said about lack of it meaning something.
1.3) They must have the same size, because they have the same "cardinality". And by the definition of "cardinality", they must have the same "cardinality" if they show one-to-one correspondence.
WTF?
2.1) That looks like induction. If it is, it doesn't work, because induction can't handle infinity.
I can see the resemblance, but it wasn't meant to be induction. Nothing that formal. I was just trying to make my thoughts as clear as possible so they could be hacked to pieces more efficiently. Since you mention it, though, why wouldn't that work as an inductive proof? That article says, "Mathematical induction is a method of mathematical proof typically used to establish that a given statement is true of all natural numbers." So, change "let x pass each and every natural number in turn" to "let x move up to each and every natural number in turn". It doesn't make a difference as far as my point is concerned. Now, at what point does that move beyond the natural numbers and the capabilities of induction?
3.1) Infinite sets and finite sets behave differently.
Awesome. Anything more specific?
3.2) "I can't find a real number that is larger than every whole number."
Dandy. That's not what I'm doing, though. That's a totally different kind of property from what I'm looking at. At least, I'm almost certain it is.
4.1) "As others have mentioned, comparing cardinalities is a very coarse kind of comparison. It is purely set-theoretic, and ignores any other information that a structure may have."
I agree completely.
4.2) "For example, it's true that there is a bijection between the naturals and the integers, but there is no order isomorphism. As ordered sets, they are quite different."
I'm blushing.
4.3) "Comparing the underlying sets is important, but it's not everything. There are other kinds of comparisons that are important for other kinds of spaces."
All this agreement may turn my head.
5.1) Shut up.
Bite me.
5.2) You don't know what you're talking about.
Bite me twice.
5.3) I'm both thoughtful and sharp-tongued, and I can compare you to a baby playing with blocks! I must be brilliant and emotionally complex.
Great. Anything else?
5.4) You're uneducated, untrained, and probably superstitious. These things are "counterintuitive", a pretentious way of saying they don't make sense. To you. They make perfect sense to me, of course. You may have said this at length at the beginning of your post, but it's worth repeating. To get to the point, "cardinality" is something we made up. As such, it can do whatever we want. It's self-consistent, and it's also consistent with a lot of other stuff we made up, and even with some of the stuff we didn't. You can't make up anything, though, because "we" (meaning people who'd died before my parents were born) did it first. It doesn't matter if it's consistent with the things you want it to be consistent with, like actual numbers, only if it's consistent with the things we want it to be consistent with.
"You're a cowardly dumbass."
-Pretentious Greek quote
Hmmmm.
6.1) "Cardinality" is not the same as size. There are lots of ways of measuring and interpreting the size of an infinite set, and they all look at it in different ways. "Cardinality" is good for what it does, which is divvy up all sets, regardless of what they contain, into basic types.
See, that's what I kind of figured, but I couldn't find any other measures, and people seemed to think "cardinality" was all there was.
6.2) "[Your idea] uses specific properties of natural numbers in a specific, and rather arbitrary, way."
Really? Leaving them exactly where they belong on the number line is "arbitrary"?
6.3) "Also, I could just as well give other definitions which would make, say, the odd numbers be twice as numerous as the even numbers."
I'd like to see that.

BTW, please don't toss comments into the middle of my post. These discussions are a lot easier to follow if they stay chronological. I'd like to boil the answers down even further, to the things that seem most important.

A) You don't understand the philosophy of mathematical proof. (false)
B) The proof you gave, to use that word loosely, was slovenly. (Sorry.)
C) "Cardinality" isn't the same as size. It's a way of dividing up infinite sizes with a broad stroke, in a convenient way, a bit like Big O notation. There are other things that could divide them up more. (That's what I thought, and what I tried to say at the end of my first post. But again, people seem to keep using the word (which I never brought up, if you'll read my post carefully) as the be-all and end-all of infinite sizeness.)
D) The specific way you've shown of dividing them up may not be entirely valid. (Really? Are you sure?)

Using these responses and the language in them, I'd like to rephrase my original question in a clearer, more concise way: "Is the idea of one-to-one correspondence really the only valid way of showing infinite amounts are different from each other? Doesn't it seem like they could be separated out more than that? How about this way that makes sense to me; it seems just as common-sense as one-to-one correspondence, yet gives an answer that seems more right." As best I can tell from the responses, I was both right and wrong, which I see as a net success. All further comments and corrections are welcome. Black Carrot 19:42, 16 June 2006 (UTC)[reply]

Don't listen to all those people above. You're spot on: there are various ways to measure or quantify infinite sets depending on what you're trying to achieve, and all of them have drawbacks. The one based on bijection is pretty robust, but it's also, as you point out, pretty coarse. It's the one mathematicians associate with words like cardinality, number of elements, and size of set, but that doesn't mean it's the only thing you can do, no matter how many bigots tell you otherwise. What you're proposing is called natural density and, as was said, it has the problem that it's not defined for every subset of Z (because the limit need not exist). But it has perfectly good applications in number theory. —Blotwell 19:49, 16 June 2006 (UTC)[reply]
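Natural density is easy to experiment with numerically: truncate the defining limit at a finite n. A small Python sketch (the sets chosen below, evens and perfect squares, are illustrative examples and not from the thread):

```python
import math

def natural_density(pred, n):
    """Fraction of {1, ..., n} satisfying pred -- a finite
    truncation of the limit defining natural density."""
    return sum(1 for k in range(1, n + 1) if pred(k)) / n

# Evens have density 1/2; perfect squares have density 0 in the limit,
# even though both sets have the same cardinality as the naturals.
print(natural_density(lambda k: k % 2 == 0, 10**6))               # 0.5
print(natural_density(lambda k: math.isqrt(k) ** 2 == k, 10**6))  # 0.001
```

This also shows the "coarseness" contrast concretely: cardinality cannot tell the evens and the squares apart, while density can.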

June 16

2 = 1

This is entirely accurate.

Simplify both sides two different ways.

Divide both sides by (x-x).

Divide both sides by x.

Look what I got. Can anyone explain? Political Mind 01:32, 16 June 2006 (UTC)[reply]

x - x = 0. Division by zero. —Keenan Pepper 01:34, 16 June 2006 (UTC)[reply]

Brilliantly simple. So when I divide by (x − x), it is really a division by 0? Ok, thanks! Political Mind 01:37, 16 June 2006 (UTC)[reply]

Also, the first step is wrong. Did you mean instead of ? —Keenan Pepper 01:40, 16 June 2006 (UTC)[reply]

Thank you, will change. Political Mind 01:46, 16 June 2006 (UTC)[reply]
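Keenan Pepper's diagnosis is easy to make concrete. In one common version of this fallacy, every step before the division is the trivially true statement 0 = 0, and "dividing both sides by (x − x)" is a literal division by zero. A minimal Python sketch (the exact equations in the original post are not shown above, so this uses the standard version of the argument):

```python
# x*(x - x) == (x + x)*(x - x) holds for every x, because both sides are 0;
# the equation carries no information about x.
for x in [1, 2, 3.5, -7]:
    assert x * (x - x) == (x + x) * (x - x) == 0

# "Dividing both sides by (x - x)" is where the fallacy enters:
x = 3
try:
    x * (x - x) / (x - x)
except ZeroDivisionError:
    print("dividing by (x - x) is dividing by zero")
```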

Derivatives and Laplace transforms commuting

In a differential equations textbook I'm working with, there's an exercise where the student is asked to compute the Laplace transform of the function f(t)=t*sin(ωt). Doing it from the definition, by integrating t*sin(ωt)*e^(-st) from 0 to infinity is tedious, but works. The book offers a hint for a simpler method: begin with the formula L[cos(ωt)] = s/(s² + ω²), and just differentiate both sides with respect to ω. This works out nicely enough, as long as you assume that differentiation w.r.t. ω commutes with the Laplace transform operator, but that seems like a highly unobvious thing. Can someone help me see why it's valid to say that d/dω[L[f(ω,t)]]=L[d/dω[f(ω,t)]]? -GTBacchus(talk) 02:58, 16 June 2006 (UTC)[reply]

L[f(ω,t)] = ∫₀^∞ f(ω,t) e^(−st) dt; so long as the integral converges uniformly on some interval (which it does, for your f, for all ω and s, except at s = 0), you can interchange differentiation and integration (with respect to different variables, of course), so d/dω L[f(ω,t)] = ∫₀^∞ ∂f/∂ω(ω,t) e^(−st) dt = L[∂f/∂ω(ω,t)]. Hope that helps. --Tardis 03:26, 16 June 2006 (UTC)[reply]
Yes, that does help. I'd like to make sure I'm clear about uniform convergence of the integral. Do you mean that there's some region in the ωs-plane over which, for each ε there is a b such that, for every (ω,s), the integral from 0 to b is within ε of the integral from 0 to infinity? -GTBacchus(talk) 04:20, 16 June 2006 (UTC)[reply]
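As a sanity check on the interchange: differentiating L[cos(ωt)] = s/(s² + ω²) with respect to ω gives −2ωs/(s² + ω²)², and since d/dω[cos(ωt)] = −t·sin(ωt), the minus signs cancel, so L[t·sin(ωt)] = 2ωs/(s² + ω²)². One can compare this closed form against a direct numerical integration; a rough Python sketch (the cutoff and step count are arbitrary choices, and the e^(−sT) tail is negligible here):

```python
import math

def laplace_numeric(f, s, T=60.0, n=600000):
    """Trapezoidal approximation of the Laplace integral of f
    from 0 to T; for s = 1 the tail beyond T = 60 is ~e^(-60)."""
    h = T / n
    total = 0.5 * (f(0.0) + f(T) * math.exp(-s * T))
    for k in range(1, n):
        t = k * h
        total += f(t) * math.exp(-s * t)
    return total * h

w, s = 2.0, 1.0
numeric = laplace_numeric(lambda t: t * math.sin(w * t), s)
closed_form = 2 * w * s / (s**2 + w**2) ** 2
print(numeric, closed_form)  # both approximately 0.16
```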

Proof that lim x->(infinity) of Ln(x) = infinity

Can anyone give me a rigorous proof based on the formal definition of limit? Thank you very much ;)

Looks like a proof that can be found in any elementary calculus textbook. A good way to do it would probably be to use the definition of ln as an integral to show that ln x is bounded below by partial sums of the harmonic series, which diverges. -- Meni Rosenfeld (talk) 08:22, 16 June 2006 (UTC)[reply]
You can prove that lim x->(infinity) of f(x) = infinity from first principles provided that two conditions are satisfied: (1) function f is monotonically increasing, and (2) f has a pre-inverse (or right inverse), that is, some function g such that f(g(x)) = x for all x. The logarithm function satisfies both conditions. --LambiamTalk 10:11, 16 June 2006 (UTC)[reply]
That is the approach I would take, but you need to be a bit careful in formulating the conditions. For instance, the limit of the arc tangent as x goes to infinity is π/2.
To the original poster: Start by writing down what you want to prove, then use the formal definition of limit, then use the definition of the logarithm, and then you're almost done. How far did you get? -- Jitse Niesen (talk) 10:34, 16 June 2006 (UTC)[reply]
Recommended reading is George Pólya's 1945 book How to Solve It (ISBN 978-0-691-08097-0), whose guidelines are summarized in the cited Wikipedia article. Obvious questions are:
  • "What is your working definition of the Ln function?"
  • "What definition do you have for a function having a limit of infinity?"
Almost certainly you have seen related problems. Try to imitate them.
On an introspective note, a strange phenomenon in solving problems is that, often, the greater the struggle the sweeter the success. (Imagine how Wiles must have felt when he finally proved Fermat's last theorem!) Also, it seems that a struggle often indicates exactly where more insight is needed, so that after the dragon is slain, a post-mortem is especially revealing. Here follows some inspiration:
❝When asked what it was like to set about proving something, the mathematician likened proving a theorem to seeing the peak of a mountain and trying to climb to the top. One establishes a base camp and begins scaling the mountain's sheer face, encountering obstacles at every turn, often retracing one's steps and struggling every foot of the journey. Finally when the top is reached, one stands examining the peak, taking in the view of the surrounding countryside — and then noting the automobile road up the other side!❞ — Robert J. Kleinhenz
❝Since you are now studying geometry and trigonometry, I will give you a problem. A ship sails the ocean. It left Boston with a cargo of wool. It grosses 200 tons. It is bound for Le Havre. The mainmast is broken, the cabin boy is on deck, there are 12 passengers aboard, the wind is blowing East-North-East, the clock points to a quarter past three in the afternoon. It is the month of May. How old is the captain?❞ — Gustave Flaubert (as a young man, writing to his sister, Carolyn)
Aren't quotations fun? ;-) --KSmrqT 15:17, 16 June 2006 (UTC)[reply]
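Putting the hints above together, here is a sketch of the proof in the ε–N style, taking ln as the inverse of the exponential (one of the standard definitions; the hint-givers above left the details to the original poster):

```latex
\textbf{Claim.} $\lim_{x \to \infty} \ln x = \infty$.

\textbf{Proof sketch.} Let $M > 0$ be arbitrary, and set $N = e^{M}$.
Since $\ln$ is strictly increasing and $\ln(e^{M}) = M$, for every
$x > N$ we have
\[
  \ln x > \ln N = M .
\]
As $M$ was arbitrary, this is exactly the definition of
$\lim_{x \to \infty} \ln x = \infty$. $\blacksquare$
```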

Sharing swap space between XP and Linux - reprise

What I would like my GRUB to do is the following:

  • ask what OS to boot;
  • check what type of partition is FOO;
  • according to the OS chosen for booting, if FOO is "compatible" (swap for Linux, fat32 for Windows) then boot, otherwise quick-format it in a compatible way (same as above) and then boot.

All this comes from me wanting to use the same partition for the two OSes' paging space. Does anyone know if it can be done? Thanks in advance. Cthulhu.mythos 09:46, 16 June 2006 (UTC)[reply]

continuation of question on derived functors, that actually aren't functors

http://en.wikipedia.org/wiki/Wikipedia:Reference_desk_archive/Mathematics/May_2006#How_can_we_see_left_derived_functors_even_as_actual_functors.3F


Hi, some time ago I asked this, I have given the link.

I wanted to know, if you had a covariant functor F from the category of R-modules to the category Ab, how you could see the left derived functor L_n F as a functor from R-mod to Ab.

Now there were people proposing I go to derived category but it still doesn't clear things up.

This is my proposal to understand this for myself :

see L_n F as a functor from the category R-modpwr to R-mod

R-modpwr is the category of which the objects are pairs (M, C), with M a left R-module and C a positive complex over M

morphisms between (M, C) and (M′, C′) are module morphisms f : M → M′, along with a chain morphism α between the complexes (for which everything commutes): if ε : C₀ → M and ε′ : C′₀ → M′ are the augmentations, then ε′ ∘ α₀ = f ∘ ε.

So my module does really depend on the chosen complex over the module.

Is this the best approach? Or am I just way off with this. It seems be the only way I can understand it.

Evilbu 17:30, 16 June 2006 (UTC)[reply]
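For what it's worth, here is one way to typeset the category Evilbu seems to be describing; the augmentation names ε, ε′ are assumed for illustration, not taken from the post:

```latex
% Objects of R-mod^{pwr}: pairs (M, C) with M a left R-module and
% \varepsilon : C_0 \to M the augmentation of a positive complex over M.
%
% Morphisms (M, C) \to (M', C'): pairs (f, \alpha) with f : M \to M'
% a module map and \alpha : C \to C' a chain map such that the
% augmentation square commutes:
\[
\begin{array}{ccc}
C_0 & \xrightarrow{\ \varepsilon\ } & M \\
\downarrow{\scriptstyle \alpha_0} & & \downarrow{\scriptstyle f} \\
C'_0 & \xrightarrow{\ \varepsilon'\ } & M'
\end{array}
\qquad\text{i.e.}\qquad
\varepsilon' \circ \alpha_0 = f \circ \varepsilon .
\]
```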

Example of continuous not differentiable function

Please. I couldn't find one.

See Weierstrass functionMets501 (talk) 22:02, 16 June 2006 (UTC)[reply]
Or if you're simply looking for a function that's continuous at a point but not differentiable there, take the absolute value function at zero. —Keenan Pepper 22:10, 16 June 2006 (UTC)[reply]
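Keenan Pepper's absolute-value example is easy to see numerically: the one-sided difference quotients at 0 are exactly +1 and −1, so the limit defining the derivative does not exist. A small Python sketch:

```python
def diff_quotient(f, a, h):
    """(f(a + h) - f(a)) / h, the slope of a secant line at a."""
    return (f(a + h) - f(a)) / h

# For f(x) = |x| at a = 0: |h|/h is exactly +1 for h > 0 and -1 for h < 0,
# so the two one-sided limits disagree and f'(0) does not exist.
for h in [0.1, 0.01, 0.001]:
    assert diff_quotient(abs, 0, h) == 1.0
    assert diff_quotient(abs, 0, -h) == -1.0
print("one-sided slopes stay at +1 and -1; no derivative at 0")
```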

Polyomino tiling

What is the smallest simply connected polyomino that cannot tile the plane using translations, reflections and rotations? -- SGBailey 23:18, 16 June 2006 (UTC)[reply]

There are three heptominoes that satisfy this criterion: http://mathworld.wolfram.com/PolyominoTiling.htmlKeenan Pepper 23:39, 16 June 2006 (UTC)[reply]

yo peeps, are hash functions just wild guesses, or is some bounds proven?

Yo,

So it seems that since MD4, MD5, SHA, SHA1 etc are all "broken", with a recommendation not to use in new infrastructure implementations, they must not have been "proven" in the first place. What I mean is that there was a day when MD5, for example, was thought secure, let's say in 1995, and this meant that some smug researcher could say: "You know, if every computer at this university were networked and at my disposal, there still wouldn't exist a set of instructions I could fill their memory with such that if left plugged in for 72 months, the array would be guaranteed to churn out two distinct files with the same MD5 checksum by the end of that time. Maybe within 100+ years, but not in 72 months." (The 100+ years is meant to allude to brute-forcing without reducing the bitspace, whereas the 72 months alludes to the fact that MD5 is in fact "broken" and does not require a full brute forcing).

So, in fact, this researcher would have been wrong, because even without newer technology (using his 1995 university equipment), we can now construct a set of instructions (program) that, were he to run it on all the computers at his university, would produce the collision in 72 months instead of 100+ years. So what I mean is that a mathematical proof must not have existed in the first place that no such program could exist.

So, now, I am asking, is there any hash function today that isn't just wild conjecture, but actually PROVEN not to reduce to fewer than x instructions on, say, an i386 instruction set to break?

As far as I understand it: does any hash have a mathematical proof that no program exists (the turing computer cannot be programmed to) to produce collisions in fewer than 2^x operations, where x is guaranteed to be at least a certain number?

I understand that quantum computing can "break" cryptography, but only in the sense of using a different physics. No program will make the computer in front of me turn into a quantum computer, but surely there is a hash for which there is a proof that no program exists that will turn the computer in front of me into a speedy collision-producer ???? —The preceding unsigned comment was added by 82.131.188.130 (talkcontribs) .

I think any proof of this nature would first require a proof that P!=NP, which is worth a million dollars. —Keenan Pepper 00:17, 17 June 2006 (UTC)[reply]
I thought hash functions, like finding large prime factors, aren't questions of P vs NP, but just a dearth of algorithms. Finding large prime factors isn't hard because it's equivalent to other non-polynomial-time problems; it's hard because we're led to believe no good algorithm exists for it. It's just a "social" trick; there's no NP equivalence. —The preceding unsigned comment was added by 82.131.188.130 (talkcontribs) .
All hash functions run in polynomial time; if they took longer they'd be useless for practical purposes. Therefore, finding collisions is in NP. If you proved that finding collisions for a given hash function could not be done in polynomial time, that would prove P!=NP. —Keenan Pepper 00:43, 17 June 2006 (UTC)[reply]
I think it doesn't follow that if hash functions run in polynomial time, finding collisions also happens in polynomial time (although of course you could not verify your findings). You could just print two distinct files whose hashes match. For example, a weakness could allow you to do some hand-waving that's barely more complicated than printing the resulting file. My problem is that hashes are all just hand-waving, and mathematicians don't even assure me that if it takes my computer ten seconds to produce a hash of a certain file, there cannot exist a program that can produce a competing file with the same hash in five seconds. (Although of course the program could not also verify its product.) Since there's no math, it's all just hand-waving! I'm looking for some hash that has been mathematically proven to take a certain number of operations to reverse, on a Turing machine. (Obviously a quantum computer might be able to sidestep these numbers.) It seems like one big social prank.
Okay, I think I misinterpreted. All hashes run in polynomial time, you said, but do any of them guarantee that a competing file with the same hash cannot be produced in, say, constant time for that length of hash? (Of course constants are technically polynomials.) For example, here is an MD5 of a password I just chose:
47869165bfa3b3115426b0b235a2591e *-
(Not sure why that's what the output looks like; this is produced with the line "echo -n "secret" | md5sum" in Bash on Cygwin, but "secret" is in fact something I chose.) So, can MD5 the algorithm even mathematically assure me that on this architecture I'm typing on (i386, Windows 2000), for that many bits of MD5 hash, there doesn't exist a CONSTANT-time algorithm for producing a file to match it? I don't mean hand-waving social engineering, I mean mathematics.
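On the proven-bounds question: for an ideal n-bit hash, the only generic guarantees are about 2^n work for a preimage and about 2^(n/2) for a collision (the birthday bound), and even those are proven only in the idealized "random oracle" sense, not for MD5 specifically; that gap is exactly what the real MD5 attacks exploited. The birthday effect itself is easy to see on a deliberately truncated hash. A Python sketch (the 20-bit truncation is an arbitrary choice to keep the search fast):

```python
import hashlib

def find_collision(hex_digits=5):
    """Birthday-search a collision on MD5 truncated to hex_digits
    hex characters (4 bits each). For an ideal n-bit hash this takes
    about 2**(n/2) attempts -- here n = 20, so roughly 1000 tries."""
    seen = {}
    i = 0
    while True:
        msg = str(i).encode()
        h = hashlib.md5(msg).hexdigest()[:hex_digits]
        if h in seen:
            return seen[h], msg, h  # two distinct inputs, same truncated hash
        seen[h] = msg
        i += 1

m1, m2, h = find_collision()
print(len(h) * 4, "bit collision:", m1, m2, "->", h)
```

This of course says nothing about full 128-bit MD5; it only illustrates why 2^(n/2), not 2^n, is the generic collision budget that any security argument must start from.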

June 17