Wikipedia:Reference desk/Mathematics: Difference between revisions

:Under your assumptions, the identity you want simplifies to <math>(T_1(x))((T_2(x))(x))=(T_2(x))((T_1(x))(x))</math>, so you might want to use that as an additional assumption.—[[User:EmilJ|Emil]]&nbsp;[[User talk:EmilJ|J.]] 17:07, 11 November 2010 (UTC)
:: Thanks for taking the time to unravel the definitions -- and you were quite correct in working out what I meant -- and for the counterexample (pretty obvious now you point it out!). I'll go and think some more, but just in case it triggers any further suggestions, I'll share a little more about my specific problem domain. ''G'' is a set of labelled directed graphs, and the transformations <math>T_i</math> are rules which find matches of a pattern graph in an input graph and produce a ''graph edit'' as output. A graph edit is a consistent set of atomic edits on the input graph -- remove particular specified edges and vertices, relabel vertices, add edges and previously unseen vertices. You can also apply a graph edit to a different graph than that with which it was created, and the effect is that it simply ignores any atomic edits that don't apply to the given graph. I've got a method of calculating dependencies (with the meaning given above) between rules, and, assuming that the dependency graph is a DAG, I wanted to show that if I executed the rules in any topological order then the output graph would be the same. [[User:Matt Crypto|&mdash; Matt <small>Crypto</small>]] 19:11, 11 November 2010 (UTC)

Revision as of 19:11, 11 November 2010

Welcome to the mathematics section of the Wikipedia reference desk.
Want a faster answer?

Main page: Help searching Wikipedia


How can I get my question answered?

  • Select the section of the desk that best fits the general topic of your question (see the navigation column to the right).
  • Post your question to only one section, providing a short header that gives the topic of your question.
  • Type '~~~~' (that is, four tilde characters) at the end – this signs and dates your contribution so we know who wrote what and when.
  • Don't post personal contact information – it will be removed. Any answers will be provided here.
  • Please be as specific as possible, and include all relevant context – the usefulness of answers may depend on the context.
  • Note:
    • We don't answer (and may remove) questions that require medical diagnosis or legal advice.
    • We don't answer requests for opinions, predictions or debate.
    • We don't do your homework for you, though we'll help you past the stuck point.
    • We don't conduct original research or provide a free source of ideas, but we'll help you find information you need.



How do I answer a question?

Main page: Wikipedia:Reference desk/Guidelines

  • The best answers address the question directly, and back up facts with wikilinks and links to sources. Do not edit others' comments and do not give any medical or legal advice.


November 5

Predicting the college football season

Let's say that:

  • Oregon has a 59.2% chance to win every game before the BCS.
  • Boise State has a 67% chance.
  • Auburn has a 15% chance (noting they would have to play a conference championship).

In addition, either TCU or Utah may go undefeated, but not both, since they play each other. TCU has a 58% chance of beating Utah and a 99% chance of winning both of its other games, while Utah has a 42% chance of beating TCU and a 65.7% chance of winning all of its other games. That means, I'm guessing, that there is an 85% chance that either TCU or Utah will go undefeated.

Don't kill me over these percentages -- they come from a rankings website.

  • What are the chances that no team among the above goes undefeated? (I'm guessing 1.7%)
  • What are the chances that exactly one team goes undefeated (OR, Boise, Auburn or TCU/Utah)?
  • What are the chances that exactly two teams go undefeated?
  • What are the chances that exactly three teams go undefeated?
  • What are the chances that exactly four teams go undefeated? (I'm guessing 5.1%)

Thanks -- Mwalcoff (talk) 03:13, 5 November 2010 (UTC)[reply]

One key assumption that should be explicitly stated is that the above percentages, unless otherwise stated, are independent probabilities, e.g., the chance of Boise winning is unaffected by what Oregon does. This may be approximately true, but may not hold if, for example, Boise was hoping for a Boise/Oregon grudge match in the finals, and as such is demoralized by an Oregon defeat. Anyway, it makes the calculations easier, so we'll assume it. We'll also assume that when you ask for "three teams undefeated", you're referring to three teams from those listed - that is, that some team not listed being undefeated doesn't count toward the three.
The probability of an event not happening is one minus the probability of it happening. The probability of two (probabilistically) independent events both occurring is simply the product of the independent probabilities. The probability of either of two mutually exclusive events occurring is simply the sum of the probabilities that either occurs. This is all we need to work out the probabilities.
So TCU has a .58*.99=.574 chance of being undefeated, while Utah has a .42*.657=.276 chance of being undefeated, which means that there's a .574+.276=.85 chance of one of them going undefeated (since the two events are mutually exclusive). For four teams undefeated, it's .592*.67*.15*.85=0.051. For no teams undefeated, it's (1-.592)*(1-.67)*(1-.15)*(1-.85)=.017. Exactly one team defeated or undefeated is a little harder, but can be calculated with (prob. of only Oregon) + (prob. of only Boise) + ..., as the terms are mutually exclusive (you can't have both Boise and Auburn be the only undefeated team). One team undefeated is .592*(1-.67)*(1-.15)*(1-.85) + (1-.592)*.67*(1-.15)*(1-.85) + (1-.592)*(1-.67)*.15*(1-.85) + (1-.592)*(1-.67)*(1-.15)*.85 = 0.160, and one team defeated (three teams undefeated) is (1-.592)*.67*.15*.85 + .592*(1-.67)*.15*.85 + .592*.67*(1-.15)*.85 + .592*.67*.15*(1-.85) = 0.355. Exactly two teams undefeated is a little harder, but becomes easier if we realize that we've exhausted all other cases - if it isn't no, one, three or four teams undefeated, it has to be two, so the chance is 1-(.017+.160+.355+.051)=0.417, as each sub-case is mutually exclusive. -- 174.21.240.178 (talk) 16:47, 5 November 2010 (UTC)[reply]
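To sanity-check these figures, here is a minimal Python sketch that enumerates all 16 win/lose outcomes under the same independence assumption (the probabilities are those quoted above):

    from itertools import product

    # Independent undefeated probabilities: Oregon, Boise State, Auburn, TCU-or-Utah
    p = [0.592, 0.67, 0.15, 0.574 + 0.276]
    by_count = [0.0] * 5
    for outcome in product([0, 1], repeat=4):  # 1 = that slot goes undefeated
        prob = 1.0
        for undefeated, p_i in zip(outcome, p):
            prob *= p_i if undefeated else 1 - p_i
        by_count[sum(outcome)] += prob
    for k, prob in enumerate(by_count):
        print(k, f"{prob:.3f}")  # 0.017, 0.160, 0.417, 0.355, 0.051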

Calculus

How would one set up a calculus equation to solve this? If the average life expectancy of a person is some constant A, their current age is C, and the rate at which the life expectancy is extended is x, how would one set up a calculus equation to solve for the minimum value of x that would allow the person to live forever? Assuming that x is linear, I came up with something like t = (A-C) + (A-C)xdt, and then I would integrate and try to solve, but I don't think this is right. I think it would take the integral and not the derivative, but I am not even sure, as it has been quite a while since I have taken calculus. Can someone help me figure this out? I feel dumb for asking about this, as it should be a simple problem. 98.20.180.19 (talk) 09:15, 5 November 2010 (UTC)[reply]

x is linear... in what? Time? I have difficulty interpreting your equation meaningfully, but maybe the answer is in there. If x is just a constant, then the minimum is x=1: each year a person lives, the average life expectancy must increase by at least 1 year to keep up with their growing age. In general, the rate of change of average life expectancy is a function x(t), and you have an initial average life expectancy of L(C) = A, so at any time the average life expectancy is given by the function
<math>L(t) = A + \int_C^t x(s)\,ds</math>
from the fundamental theorem of calculus. For x(t) = c, this simply becomes L(t) = c(t-C) + A. Your own age is just t, so you want L(t) - t > 0 always. If c < 0, L(t) is eventually negative, so L(t) - t > 0 can't always hold. If c = 0, L(t) is a constant, which also won't work. If c >= 1, the inequality does hold for "reasonable" A and C. For 0 < c < 1, L(t) grows more slowly than t, so L(t) - t is eventually negative, hence my statement above. 67.158.43.41 (talk) 10:51, 5 November 2010 (UTC)[reply]
If the population grows the average life expectancy could decrease and still have the possibility of a person living forever. I'm assuming life expectancy is calculated by seeing how long all the people lived who died in the current year. You might want to do something more complex, but you can't stick in an infinity for a person who's going to live forever! Dmcq (talk) 12:57, 5 November 2010 (UTC)[reply]
Actuaries already do these types of calculations, and have a standard terminology and notation for these concepts - see our articles on life table and force of mortality. Gandalf61 (talk) 13:33, 5 November 2010 (UTC)[reply]
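The constant-rate case discussed above is easy to explore numerically. A small sketch, with illustrative numbers only (A = 80 and C = 30 are assumptions, not from the question), finds the first age, if any, at which expectancy fails to exceed age for a few rates c:

    A, C = 80.0, 30.0  # assumed life expectancy and current age, in years

    def L(t, c):
        # constant-rate case from above: L(t) = c*(t - C) + A
        return c * (t - C) + A

    for c in (0.5, 1.0, 1.5):
        # first year, if any, at which life expectancy no longer exceeds age
        death = next((t for t in range(int(C), 10_000) if L(t, c) - t <= 0), None)
        print(c, death)  # 0.5 -> 130, 1.0 -> None, 1.5 -> None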

Suggestions re learning LaTeX, please?

Hi all. I think I need to finally spend the time to learn LaTeX. My reasons for wanting to do so are:

  • to allow me to document my self-study in symbolic logic, set theory, foundations, especially,
  • to communicate effectively in online forums devoted to those topics,
  • to send documents I've created about these topics (aka "homework" or "examples") via e-mail to professors or tutors, as yet unknown, and (ideally) to allow them to electronically comment on, correct, and markup the same, and
  • to be able to ask (and occasionally answer, when I can help) questions about those topics here, on-wiki

I've read maybe two hours' worth of material about this so far, via the web, and our own articles, too, of course, and have naturally looked in the archives here. But I was hoping folks here could help me start out on the right foot by answering a few probably naive questions, as well:

  1. I like the idea of being able to generate PDF files easily, or better still(?), having a setup that uses PDF as a native file format for whatever I create using TeX, and I presume that means using the pdfTeX extension. If this is correct, is there any LaTeX/pdfTeX editor that's free, relatively "standard"/ubiquitous (easy cross-platform availability), and that can edit PDF files directly?
  2. About how much time will I need to reach a point of minimal/reasonable proficiency? A point at which I can focus more on content, on writing math/logic documents, than on learning LaTeX, that is?
  3. I'd like to be able to use Russell & Whitehead's Principia Mathematica notation easily, if possible, and yes, I know it's been largely superseded in modern practice. ;-)
  4. I'm using Ubuntu GNU/Debian/Linux, in case that matters.

All observations and suggestions will be most welcome. Many thanks,  – OhioStandard (talk) 16:04, 5 November 2010 (UTC)[reply]

I think it's difficult to give a general answer to this. In my case I wanted to write a mathematical paper, decided TeX and LaTeX would be the best way to do so, so downloaded the free TeXShop, bookmarked some documentation and started writing. I had a clear idea of the mathematics I wanted to write, it was just a case of finding the correct way to do it. As I did this on a paragraph by paragraph and equation by equation basis I slowly taught myself TeX. More recently I went through a similar process learning how to edit WP formulas, at the same time as learning Mediawiki syntax. Here my reference was WP itself and its help pages. In each case for me all I needed was an idea of what I wanted to do, access to documentation and examples, and a way to try out my ideas.
I notice from TeXShop there's a Comparison of TeX editors where you should be able to find a free editor. I don't know if you'll find one able to edit PDF files: PDF is designed for viewing only. Fast preview perhaps using PDF might be the best you can do.--JohnBlackburnewordsdeeds 17:16, 5 November 2010 (UTC)[reply]
I'd suggest that a better place to look for answers to your questions might be http://tex.stackexchange.com/. --Qwfp (talk) 17:39, 5 November 2010 (UTC)[reply]
You seem to want general/open-ended answers. Any plain text editor will do for a LaTeX editor, since the point is to have everything in ASCII. I don't know about specific editors for Linux, sorry, other than the usual standard text editors. I have used one called TeXworks on Windows which I've liked, and it's apparently cross-platform. PDF files are generated from LaTeX source, for instance with the pdflatex command available in some/most Linux distros. To directly edit a PDF file, you would need (I believe) to edit postscript code, which isn't at all the same thing as editing LaTeX code (postscript isn't really meant to be hand-edited, anyway). A PDF file is to LaTeX as a compiled binary is to C++ source code--the compilation process isn't really reversible.
I've never read Principia myself, but a glance at this page suggests LaTeX would have no trouble at all typesetting it.
For me, it didn't take long to be minimally proficient. As a brief example, "\prod_{i=1}^{N^2} \int_{R_i}\Psi\,dA" renders as <math>\prod_{i=1}^{N^2} \int_{R_i}\Psi\,dA</math>.
If you can pattern match what most of that is doing, you can do a surprisingly large amount in LaTeX right now if given a few examples to work from. Depending on your computer proficiency, a good afternoon working with it should get you started. After figuring out how to make your system compile your source, I'd find a math paper online in TeX format and just try to write something up working from it as an example.
Personally, I hand write most of my math. I just find it much more freeing. Then again, I've heard that writing in LaTeX can become quite natural. 67.158.43.41 (talk) 17:53, 5 November 2010 (UTC)[reply]
Bullshit. It takes 4 weeks to learn LaTeX. It is the biggest waste of 4 weeks I have ever spent -- or would have been, except for the results. 84.153.205.142 (talk) —Preceding undated comment added 18:33, 5 November 2010 (UTC).[reply]
"It takes 4 weeks to learn LaTeX"--it definitely did not take me this long to learn LaTeX enough to be reasonably proficient. I have a background in computer science, but still, a day was realistic for me to "get started". I'm curious, how long did it take others? (I know the comparisons are very flawed because the goal is unclear, but still.) 67.158.43.41 (talk) 19:03, 5 November 2010 (UTC)[reply]
As for Linux LaTeX-friendly editors/IDE: a good choice is Emacs+AUCTeX. I assume that vim should have a LaTeX mode too, if you prefer that.
As for Principia Mathematica, the inverted iota may be nontrivial to typeset, but otherwise I'm not aware of anything problematic.—Emil J. 18:50, 5 November 2010 (UTC)[reply]
Just because it's a good example, I googled "latex inverted iota" and got this page as the third result, which gives the commands "\usepackage{graphicx}" and "\newcommand{\riota}{\mathrm{\rotatebox[origin=c]{180}{$\iotaup$}}}" to define a handy macro "\riota" to display an upside-down iota. I've found the vast majority of my LaTeX questions are quickly answered similarly. 67.158.43.41 (talk) 18:58, 5 November 2010 (UTC)[reply]
Yes, this is a very good example, namely it is an example of the principle that 90% of macros that you can google on the web are crap. First, there is no such thing as \iotaup among standard LaTeX symbols, and the instructions didn't tell you which package to find that in, so the code will not even compile. Second, the \mathrm in the macro is completely pointless as the innards are put in a box, hence it should be omitted to reduce useless bloat. Third, the instructions didn't warn you that the graphicx package relies on PostScript specials to implement the rotation, hence it will not work in plain dvi viewers or interpreters.—Emil J. 19:20, 5 November 2010 (UTC)[reply]
Good points, though the package isn't hard to find with a further search, \iota may suffice (or may not; again, I haven't read the book), and the DVI issue may or may not appear depending on your setup. But certainly some issues take quite a bit of fiddling / searching / asking around about, and some online help sucks. Ultimately, though, I do have an upside-down iota on a page, regardless of anything else. 67.158.43.41 (talk) 21:52, 5 November 2010 (UTC)[reply]
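Putting those corrections together, a self-contained variant that avoids the non-standard \iotaup by rotating the plain \iota (the caveat about rotation not showing in plain DVI viewers still applies) might look like this:

    \documentclass{article}
    \usepackage{graphicx}% provides \rotatebox
    % Inverted iota for Principia-style definite descriptions.
    % Uses the standard \iota, so no extra symbol package is needed.
    \newcommand{\riota}{\mbox{\rotatebox[origin=c]{180}{$\iota$}}}
    \begin{document}
    The description operator: $(\riota x)\,\varphi(x)$.
    \end{document}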
I would strongly recommend against LaTeX. It is my opinion that every mathematical symbol or function that could be typeset in LaTeX could be more easily typeset in Unicode, using a well-designed word processor. The fundamental premises of TeX are invalid in comparison to modern word processors:
  • TeX claims to separate formatting from content. But this is evidently not the case, as you will notice that you need to learn formatting codes.
  • TeX intends to perfectly specify paper layout. In today's era of reflowable digital documents, why waste your time specifying pedantic details about pixel-spacings? Use a modern tool that reflows intelligently and properly.
  • TeX claims to be portable. But as EmilJ has correctly pointed out, one set of TeX code will not compile on a different computer unless all the same packages are installed; there is no standard set of TeX packages; many are mutually incompatible. In fact, even the tool is not standard: there is CWEB TeX, there is LaTeX, MiKTeX, MacTeX, and so forth. Realistically, there is minimal compatibility between these tools.
  • Formats like HTML now support mathematical symbols. ODF and .docx are free and open formats that provide all your pedantic typesetting needs, but are supported by reasonable free and commercial platforms like OpenOffice and Microsoft Word.
  • Every useful special mathematical symbol can be represented directly as a unicode character; TeX uses non-standard representations. Even Donald Knuth (inventor of TeX!) acknowledges this lack-of-standardization is a problem!
  • Open-source and free software word processors, such as OpenOffice.org, provide powerful equation editors as add-ons and plugins, if flat formatting and symbolic text are insufficient.
  • Most major academic journals, including many physics[1] [2] and mathematical journals, will no longer accept TeX source, and will require PDF or .doc files - so who do you intend to impress with your .tex source? And AMS requires you to install a nonstandard version of LaTeX! (Amazingly, AMS TeX is not even compatible with itself - there is a separate old-version archive for reproducing old documents!)
  • When you finally give up on TeX and decide to painstakingly export to a reasonable document format, converting to HTML will produce buggy, ugly, and non-standards-compliant documents.
TeX is really an antique - like the linotype, it was a huge improvement over previous typesetting technology, but is now a dinosaur whose clunky interface and primitive feature-set pale in comparison to the latest and greatest in mathematical typesetting and word processing. Invest your time learning a modern tool. Nimur (talk) 20:26, 5 November 2010 (UTC)[reply]
Oh, sorry, I just saw your link. It is a lot of fun even though I can never get it to find the symbol I want. Eric. 82.139.80.73 (talk) 13:49, 6 November 2010 (UTC)[reply]
I think texlive is the standard / most popular TeX distribution for Linux users, though I still have an old copy of tetex on my system. In my setup, I use vim to edit .tex files and the latex / pdftex commands on the command line to compile, with xdvi and xpdf to view the output. (Generally using dvi instead of pdf gives me a faster turn around time while composing, and I find xdvi a more solid piece of software than xpdf. YMMV.) I've heard that Emacs+AUCTeX is good, as Emil suggests. I assume any standard programming environment will come with at least syntax highlighting for TeX commands.
Two handy resources I have found include: For math and equation typesetting, amsldoc.pdf is an introductory manual for the amsmath latex packages. symbols-a4.pdf, a very long reference list for latex symbols -- look up the symbol you want to typeset here. The detexifier is cool but I haven't found it particularly useful. Eric. 82.139.80.73 (talk) 13:45, 6 November 2010 (UTC)[reply]

I want to thank everyone who replied so far. I'm not sure whose advice will "win" re what I eventually decide, but I'm very grateful for the discussion. Thank you!  – OhioStandard (talk) 09:24, 6 November 2010 (UTC)[reply]

Despite some arguments by nay-sayers above (Nimur: geology is only an infinitesimal fraction of "all of math and physics" and not even representative for math/physics research), doing without LaTeX is not practical in physics and math. When preparing large documents full of equations with large numbers of cross references etc., you want to actually edit the source code of the document, not the document itself. That source code has to be easily readable and editable.

Journals prefer to have your LaTeX source code for typesetting, they don't want to have some compiled PDF file because they typically need to change your document as it appears in print. Such changes can affect equation numbers that are cited in the text, the ordering of the references, etc. etc. Count Iblis (talk) 16:05, 6 November 2010 (UTC)[reply]

I found The Not So Short Introduction to LaTeX2e useful for learning LaTeX, and WP:Formula a good reference for writing formulas. -- Meni Rosenfeld (talk) 10:50, 7 November 2010 (UTC)[reply]

Sorry, but there are many misunderstandings in Nimur's comment. First, he confuses TeX and LaTeX. TeX specifies low-level formatting. LaTeX does not (although it allows you to use the low-level stuff if you want to tweak things). Secondly, while there are different implementations of the TeX translator, they are indeed compatible, and there is a rigorous test specification that ensures that. No, you do not need "all the same packages" to compile documents on different computers - you need the packages you actually used in the document on both machines. Unless you do fancy stuff, the packages you use are indeed at least de facto standard. This is no different from macros, styles, and fonts in WYSIWYG tools. Yes, compatibility of old and new TeX installations is not always 100%. But it is excellent in practice for a tool chain that has been around for 20-30 years. Try opening a Word for Windows document in Word 2010. At least in TeX you get the human-readable document source and can fix it. Because TeX source is plain text, it's easy to use a plethora of existing tools on the documents, like grep, subversion, sed, and diff. It's also easy to programmatically generate TeX/LaTeX. For me, the ability to type everything out in plain text, with no menus and drop downs and weird mousing, is also very valuable. Generating a document with even trivial stuff like sub- and superscript is much faster in LaTeX than in Pages or OpenOffice, or Keynote. There is a reason for the existence of LaTeXiT and equivalent tools on other platforms. And finally, the typesetting is still much better than with any other widely used tool. --Stephan Schulz (talk) 11:20, 7 November 2010 (UTC)[reply]

Thanks, everyone! I'm very often surprised at the extraordinarily high quality of the responses I've received when I've asked questions before on the various boards here, and this occasion has been another instance of the same. I'm sure your opinions, links, and advice will be very useful to me as I proceed, and will save me from many missteps, as well. My best appreciation to all of you who replied. Cheers,  – OhioStandard (talk) 05:23, 8 November 2010 (UTC)[reply]
The main mistake I made was not learning LaTeX properly. It took me 4 years to do theorems properly. /o\ Nevertheless, 4 years has made me wise to LaTeX and I can now do lots of intermediate stuff, including PGF/TikZ (intermediate, at least). I would browse the full amsmath and the Short Guide to Math - get to know how to do all the constructs properly, from day one! And the Not So Short Guide to LaTeX (?) is the best overall guide to LaTeX. x42bn6 Talk Mess 23:44, 8 November 2010 (UTC)[reply]

Can we draw every mathematical concept?

In the case of simple concepts, it is evident that we can. Two segments on the same line are a sum. Two segments on a right angle are the equivalent of a multiplication. But, how about more complex concepts? Quest09 (talk) 17:40, 5 November 2010 (UTC)[reply]

If you want to really blow your mind, imagine that you write, in your programming language of choice (say C++), a programming environment, call it Crayon, to compile x86 code from a TOTALLY VISUAL programming paradigm. Now you draw in Crayon, and get x86 code. But Crayon the application is really still just a C++ program.


So imagine that you proceed to use Crayon to draw the EXACT SAME THING you had written in C++ to produce the Crayon-the-C++-application you are now running. Compile, and VOILA: you are now running Crayon, written in Crayon. Now you use Crayon -- written in Crayon -- to sketch out a rudimentary version of an operating system. It will take you years, but when you're done sketching it and compiling it, you can burn it to a CD-ROM.


Then you take it to a fresh computer, you put that CD-ROM in, you boot an Operating system not written in any programming language but sketched in Crayon, and you start up a copy of Crayon. YOUR WHOLE COMPUTING ENVIRONMENT WAS MADE WITHOUT A SINGLE LINE OF CODE! Without a single programming keyword. Without a single +, =, *, opening or closing brace, or any other symbol. It was all purely sketched out visually.


Well, what you have now, is what I imagine the Greeks would be running these days if it weren't for the Peloponnesian War. 84.153.205.142 (talk) 17:56, 5 November 2010 (UTC)[reply]
You might be interested in befunge. 67.158.43.41 (talk) 18:04, 5 November 2010 (UTC)[reply]
Pshaw. In my mind I'm imagining beautifully rendered spheres and pyramids of various shades and colours, a totally immersive 3D abstract environment! Whose primitives just happen to translate to x86 code... 84.153.205.142 (talk) 18:06, 5 November 2010 (UTC)[reply]
Maybe some of the things built in minecraft are a little closer to what you had in mind (though not the same), e.g. this 16-bit adder. 67.158.43.41 (talk) 18:20, 5 November 2010 (UTC)[reply]
I'd say no. Draw me the axiom of choice or 15 dimensional Euclidean space, which some would argue can't fit in our universe. Of course, your question isn't specific enough to have a real definitive answer. 67.158.43.41 (talk) 18:04, 5 November 2010 (UTC)[reply]
I'd also say no. Mathematics, unlike most sciences, is largely concerned with abstract concepts. The examples given above are just a few, and not the most abstract or difficult ones. For example it's possible to consider not just finite but also infinite dimensions, or vector spaces over fields other than the real numbers, or non-Euclidean spaces instead of Euclidean ones. All these, although they have real applications, move further away from how we describe our universe and so are more difficult to represent within it. There are many more such examples, many even more abstract, which would similarly defy any attempt to draw them. --JohnBlackburnewordsdeeds 18:12, 5 November 2010 (UTC)[reply]
This is ultimately a problem of encoding and representation. I will not attempt to 'draw the axiom of choice', but one can easily draw a picture of things that don't exist as concrete individual physical objects, such as the real projective plane, hyperbolic plane, 4D Euclidean space, etc. Of course, understanding the drawing is a matter of convention, but then so is understanding a 'drawing of the concept of addition'. SemanticMantis (talk) 18:18, 5 November 2010 (UTC)[reply]
Isn't any written language just a more sophisticated, and more abstract "drawing" of your idea? Nimur (talk) 20:05, 5 November 2010 (UTC)[reply]
Yes Nimur, I was also thinking along similar lines. Still, there is something that separates a diagram of the hyperbolic plane from a formal definition, no? SemanticMantis (talk) 21:14, 5 November 2010 (UTC)[reply]
That's an interesting point. Both language and drawings just describe some idealized thing, so in that sense they're basically the same. 67.158.43.41 (talk) 22:05, 5 November 2010 (UTC)[reply]
You might also be interested in our article on Mathematical visualization. WikiDao(talk) 20:24, 5 November 2010 (UTC)[reply]

Different universe, different maths?

I understand that, if other universes exist in a Multiverse, they could have different physical laws. But must maths be the same in all of them? 92.29.112.206 (talk) 20:04, 5 November 2010 (UTC)[reply]

The answer depends on your philosophy. See Philosophy_of_math#Contemporary_schools_of_thought. In short, the Platonist view would say yes, all universes have the same math. In contrast, the Embodied mind theory (WP:OR not espoused by any professional mathematician I've known) would say mathematical constructs are constructs of the human mind, and cannot be examined in a universe without humans. The greater issues here are the ontology of mathematical objects, and the epistemology of mathematical claims. SemanticMantis (talk) 20:28, 5 November 2010 (UTC)[reply]
Could other systems of maths be imagined, or is there only one system of maths that is internally consistent? 92.29.112.206 (talk) 20:31, 5 November 2010 (UTC)[reply]
It depends in which sense you are asking. You may enjoy reading Hilbert's_program, which describes a famous (failed) attempt to make all of mathematics provably consistent. It is a common interpretation that Gödel's_incompleteness_theorems prove that such a task is impossible to accomplish. More to (what I think is) the spirit of your question, many alternative systems have proven useful. For example, 1+1=2 in conventional everyday (integer) terms, but 1+1=0 in the Finite_field of two elements. These statements are not contradictory, and both are useful in different contexts. Another good example of an 'alternative system' of math is Non-Euclidean_geometry. You may get better answers if you give us some indication of what types/levels of math you know/are comfortable using. SemanticMantis (talk) 20:55, 5 November 2010 (UTC)[reply]
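For a concrete view of that example, the two statements simply live in different structures; in code, ordinary integer addition versus addition mod 2:

    # 1 + 1 in the ordinary integers
    print(1 + 1)        # 2
    # 1 + 1 in GF(2), the finite field of two elements (arithmetic mod 2)
    print((1 + 1) % 2)  # 0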
My only maths qualification is GCE "O" level, if that means anything to you. It was the exam taken by the brighter British sixteen year olds. 92.15.2.255 (talk) 20:36, 7 November 2010 (UTC)[reply]
I researched this in depth, including e-mailing the likes of Noam Chomsky, and a leading philosopher (now at Princeton) called Peter Singer. The answer (even from just the linguist, Noam Chomsky) was resounding: other universes MUST have the same laws of logic and math. It is simply impossible for a mathematician-creature to explore primes and find that the billionth prime in his Universe has a different value from the billionth prime in ours; or to explore pi and find that it has a slightly different value. To me, this is a really stupid fact, because it means we know a lot about potential Universes about which we know nothing (to my mind, a contradiction). Namely, we know what their mathematicians would find to be true about a given system (such as our integers) should they choose to explore that system. But, even though I disagree with all the leading philosophers' ideas, and think it's really stupid, I think I owe it to you to tell you this. There is unanimous consent that math must work exactly the same (if they explore the same system). 84.153.222.232 (talk) 21:51, 5 November 2010 (UTC)[reply]
It depends on whether you believe that God made the integers; all else is the work of man. I find it quite hard to imagine a universe where counting is not possible, but quite easy to imagine one where the concept of real number was not developed.--Salix (talk): 22:13, 5 November 2010 (UTC)[reply]
84 above, now my IP has changed. The question is not about what is developed, but what would be found to be true once it is developed. Can you imagine that when pi is developed, they calculate it to some precision, but what they find is a slightly different value from ours (diverging after our millionth digit)? You cannot imagine it, any more than you can imagine a square having five corners yet remaining a square: you can convince yourself you are imagining it, but you are simply not thinking with rigor. When you imagine a universe whose pi diverges from ours after a million digits (regardless of when, how, and whether this is even calculated by anyone in that universe!) you are simply not imagining with rigor. Think more rigorously and you will find that the five-cornered object you imagined isn't really a square, or that it is a square but doesn't have five corners. Think more rigorously and you will find that the value of pi is exactly the same in any possible universe (again, regardless of who looks at that value). I think this is a really stupid fact, and don't agree with it, but it is a fact nonetheless. 93.186.31.236 (talk) 23:12, 5 November 2010 (UTC)[reply]
The idea that you can answer this question by trying to "rigorously imagine" other math seems to me to miss the point. The question is whether there is the possibility of different consistent math that we aren't capable of imagining. Rckrone (talk) 06:11, 6 November 2010 (UTC)[reply]
(same IP again). And it seems to me, with all respect, that you miss the point. Math is a part of TRUTH. It is a part of REALITY. When you say "whether there is the possibility of different consistent math that we aren't capable of imagining" it has NOTHING TO DO with the OP's question. His question is: physical laws can be different, can math laws be different the same way? The answer is "no". Let us say that an alternative universe DOES use math that is consistent and is something we aren't capable of imagining. No one can imagine it in our universe for 1000 years. But then a single person is born with an IQ of a million. That one person imagines the same consistent system and explores it. What he will find is EXACTLY THE SAME RESULTS as what the other universe has. That is because math works EXACTLY THE SAME in all possible universes. Whether it is ever discovered/invented is totally irrelevant, since it is a part of the truth or reality in a universe. Let me give you an example. We have no way of proving at the moment whether there are infinitely many twin primes. Nevertheless, either there are or there are not. It is impossible that in our Universe, there are infinite twin primes, whereas in a different Universe there is a largest one. What is possible, of course, is that we decide the answer is "undecidable" under our set of normal axioms, whereas another Universe starts, for intuitive reasons, with entirely different mathematics, and they have no problem proving that in their own axiomatic system there is a largest prime. But, if we were somehow, in a thousand years, with a single genius who has an IQ of a million, to explore that same axiomatic system, we would find the same results as them. (On the flip side, if a genius in their universe imagines the same set of axioms we are working, that genius would find the same truth: it is impossible that for us, there are infinite twin primes under ZFC, whereas when a genius in another Universe imagines the exact same ZFC system for an afternoon, he proves rigorously using it that there is a largest twin prime.) Math, and logic, simply work exactly the same in any possible Universe: it is not like the laws of physics at all in that respect. Please don't confuse yourself further by failing to differentiate between exploring a DIFFERENT system, one which we can't comprehend, and exploring the SAME system and finding different results. P.S.: Again, I don't agree with anything I've just said, I think it's really stupid, but it just so happens to be the universal consensus among scientists, mathematicians, and philosophers, that this is the case. I am just reporting the facts, even though I disagree with them. 84.153.222.232 (talk) 10:36, 6 November 2010 (UTC)[reply]
I'm not sure that anything you posted contradicts what I said. I never implied for instance that in another universe the same axioms would lead to different results. In fact the opposite, the idea that there is math we are incapable of imagining is to say that there may be axiomatic systems that are so unnatural to us that we would never or can't consider them (or would never have any reason to consider them). That said, the axioms that come naturally to us are closely bound to the physics of the universe we live in. For example in your original post when you're talking about the value of pi, how exactly are you defining the concept of pi? If you're defining it as some specified value, then obviously it has no other value. If you're defining it as the ratio of the circumference and diameter of a circle (which is how the concept arose for us humans) then that depends on the geometry of our particular universe. That ratio could certainly be different than the value we've labeled pi. I don't think it's possible to fully divorce physics from the resulting math. Rckrone (talk) 18:14, 6 November 2010 (UTC)[reply]
sorry, you just have to do more research. pi is simply a value that arises in different situations. In all situations where this value arises, regardless of the reasons such situations are explored, it will have the exact same value in this and any other possible Universe. It doesn't matter how, or even whether, the concept arises. Maybe there is a constant, like e, and pi, that is super commonly used in another possible Universe, since it arises very naturally from exploration of the physical laws that Universe follows. If in the next 10,000,000 years, anyone in this Universe is crazy enough to chance upon that constant for no good reason whatsoever, but purely theoretical crazywork, and they are a good mathematician, they will calculate the same value for it as it has in that other Universe. It doesn't matter that it is totally inapplicable here and obviously descriptive there. If there is a formula for it, that formula yields the same result. Basically, you are using semantics to imply that what we call "pi" could be "e" in another Universe, where the ratio of a slkwjegircle's circumference to diameter is e. But I say slkwjegircle because it is not a circle, a mathematically rigorous construct. If anyone ever explores an actual circle, instead of a slkwjegircle as found in that Universe, they will find the same value of pi that we have. To any precision. 84.153.207.135 (talk) 21:56, 6 November 2010 (UTC)[reply]
Yes, obviously it's semantic. That's the whole point. I don't disagree with the things you said in that post nor ever did I. What I am saying is that the physics of the universe has a strong influence on what we choose to study, or perhaps even what our brains are physically able to comprehend. Suppose the ratio of a slkwjegircle's circumference to its diameter is not e, but some value x that we don't care about. Then that value x will probably become an object of study for mathematicians in that other universe, and maybe it has some nice properties, but we don't know this or even care because what is a slkwjegircle anyway? Sure, if we studied slkwjegircles we would find the same results, but we don't. Maybe we can't. Rckrone (talk) 19:51, 7 November 2010 (UTC)[reply]
I think we'll just have to agree to agree then. 84.153.212.109 (talk) 11:53, 8 November 2010 (UTC)[reply]
There is the anthropic principle, which tries to answer the question "why is our universe like it is", with the answer "because if it wasn't we could not exist to describe it", as if the laws were just a little different carbon based life would never have evolved.--JohnBlackburnewordsdeeds 22:26, 5 November 2010 (UTC)[reply]

The stability of pi is mentioned above, but wouldn't the value of pi be different for a circle on the surface of a sphere? Our universe may have some analogous 'curve' that we are not aware of. 92.15.2.255 (talk) 20:36, 7 November 2010 (UTC)[reply]

π is the ratio between the circumference and diameter of a circle in Euclidean geometry, not in whatever universe we happen to live in. It remains the same even if we happen to live in a non-Euclidean universe (indeed, we live in a non-Euclidean universe, and we are aware of it!). Also, π has equivalent definitions which have nothing to do with geometry. -- Meni Rosenfeld (talk) 06:13, 8 November 2010 (UTC)[reply]
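One such non-geometric definition can even be evaluated without ever mentioning circles; a sketch of the Leibniz series <math>\pi = 4\left(1 - \tfrac{1}{3} + \tfrac{1}{5} - \cdots\right)</math>, slow to converge but purely arithmetic:

    # Leibniz series: pi = 4 * sum_{k>=0} (-1)^k / (2k + 1)
    approx = 4 * sum((-1) ** k / (2 * k + 1) for k in range(10**6))
    print(approx)  # 3.1415916..., converging slowly toward pi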
Yes, but I think the interesting part of this question is, could the actual logic we use to work such things out, turn out to be wrong in another universe? And if not, how do we know? A related question is, Is logic empirical?
Typically most math/science types will answer, no, that cannot happen. I am not sure how they can be quite so sure. My pragmatic, provisional answer is: We have no way of talking intelligently about such a possibility, if it is a possibility, because to do so we would have to use the logic that works here.
The protagonist of Heinlein's They faces a similar dilemma as he warns himself not to use the sorts of reasoning they have taught him. But what sort, then? --Trovatore (talk) 06:36, 8 November 2010 (UTC)[reply]

Things like Wave–particle duality and this quantum hocus-pocus suggest that there are counter-intuitive things in the universe, so different maths may be a possibility, particularly as the theorised ten dimensions are further explored. 92.15.3.137 (talk) 13:46, 8 November 2010 (UTC)[reply]

Am I misunderstanding your comment, or are you greatly underestimating the capability of mundane mathematics (which can easily deal with 10- or million- dimensional spaces)? -- Meni Rosenfeld (talk) 20:11, 8 November 2010 (UTC)[reply]
There's a difference between counter-intuitive physical results and "different maths". Fundamentally, even wacky physical theories like quantum and string theory are described using very conventional, if complicated and unintuitive, math. I don't see how their existence suggests the existence of another universe which has "different" math. In fact, that they are described using our regular math suggests the opposite, to me. 67.158.43.41 (talk) 04:28, 11 November 2010 (UTC)[reply]

Yes, you are misunderstanding my comment. 92.24.186.80 (talk) 20:30, 8 November 2010 (UTC)[reply]

Could there be a universe where 1+1=3? 92.15.3.20 (talk) 19:30, 11 November 2010 (UTC)[reply]

If '1', '+', '=', and '3' are defined the same way as in our universe, then no. If they are defined differently, then 1+1=3 by these definitions even in our universe. Math is the study of what follows from a set of definitions. Any self-consistent system imaginable is part of our math. Any inconsistent system is also part of math, but inconsistent systems are less interesting, so we already know everything there is to know about them. (If you want to follow up on this, note that paraconsistent logics look inconsistent, but there is a difference.) 74.14.109.89 (talk) 19:52, 11 November 2010 (UTC)[reply]

why don't I like databases?

Databases are the only aspect of computer science that I hate the very concept of. I refuse to even be a simple user. I will tell you my impressions and feelings and hope you will be able to tell me the real, mathematical reasons behind it, which I can't grasp. Basically, databases feel about as slow as the difference between RAM and a roundtrip ping to China: my feeling is that every time I do a query, I am waiting for a bubble sort on a huge data set to finish. Then I don't like SQL, to the point that I would rather build up a hugely complex datatype (hashes of arrays of arrays of hashes etc.) and fill it with constants in my source code, than use SQL. Even though it takes me longer, and each lookup takes more code to write (for example, manually iterating over my data structure to see what meets conditions). Nevertheless, I am very satisfied with the results. Why? Same with my personal information. I use Excel instead of a DB every day. Does anyone have any ideas about what, mathematically, could really be behind my aversion? Thank you. 93.186.31.239 (talk) 23:00, 5 November 2010 (UTC)[reply]

It's worth reading up on the Relational model, which does put the relational database on a pretty sound mathematical footing not far removed from first-order logic. It does require a different, data-oriented approach rather than the more procedural way of thinking which might be more natural for humans. The object-relational mapping shows that data structures are essentially the same as the set of relations in a database, but you might find that the Object-relational impedance mismatch explains some of your problems. Speed issues may be due to the fact that many databases strive to be ACID compliant, which is good for banks with billions of customer records, less good for the everyday hacker. Client-server ACID transactions are hard to do right, but quite an interesting intellectual challenge.--Salix (talk): 23:54, 5 November 2010 (UTC)[reply]
If a database is slow then the queries that are taking time need to be analysed to see what the problem is, sticking in indexes and using unique keys where necessary. The big question is: how much time will one waste on optimising the performance compared to just waiting for the result? One can hardly fault Google for speed, for instance; they have optimized for speed to the nth degree, but it is worth it for them. SQL databases put a lot of effort into optimising queries, and for most users, who don't want to get into the innards or don't have a feel for how it works, this is best. It is exactly the same reason we now use high-level languages rather than assembler: sometimes it can give awful code but overall the gain is huge. Dmcq (talk) 09:50, 6 November 2010 (UTC)[reply]
I share your aversion against traditional databases. Computer files suffer from backwards compatibility all the way back to the punched cards, which have now become rows in databases. In order to save paper the data were organized with several fields on each card. This is the original sin of database design, ending up with the ugly data hierarchy. In order to address a field in a database you must provide the database name, the name of the table within the database, the name of the column in the table, and the value of some key field to identify the row. This fourfold increase of complexity makes life a misery for database users. Bo Jacoby (talk) 12:41, 6 November 2010 (UTC).[reply]
I don't share your aversion to databases, but I do share your aversion to SQL, which reminds me of COBOL in its verbosity. Its inadequacies have led to many incompatible versions. I think the SQL#Criticisms of SQL are entirely justified. You may like to read The Third Manifesto. --Qwfp (talk) 13:41, 6 November 2010 (UTC)[reply]
Answering the query about what causes the poster's problem, where they waste time doing every minute thing themselves and probably putting in errors, compared to just using a well-debugged package: a lot of people are like this. They think they are safer if they drive a car themselves instead of being driven by a taxi or bus driver. They want to pilot the plane themselves; see [3] for how to combine your love of doing it in Excel with a lack of ability to trust somebody else's work. Dmcq (talk) 13:49, 6 November 2010 (UTC)[reply]
From your brief description, I'd say much of your dislike is based on misunderstanding. Running a database query is certainly slower than, say, a RAM lookup of a known array index; the latter requires a few processor cycles while the former may require file IO. Complex operations are what databases are designed to be quick at, though. They've been optimized (for decades, in the relational case) to be much more clever than the naive solution you'd most likely come up with implementing things yourself, for instance with the use of indexes and b-trees. So, the "mathematical reason" you dislike databases is, perhaps, that you choose poor use cases to extrapolate from. Also, SQL/relational databases are by no means the only show in town database-wise; an interesting one conceptually which I had to work with recently is RDF, which is queried using SPARQL.
Thanks for disambiguating my RDF link. 67.158.43.41 (talk) 21:45, 7 November 2010 (UTC)[reply]
That said, I have no great love for databases myself. Relational databases often become evil nests of bad design choices over years of use. Sometimes the best tool for the job is simply serialization of your data structures. As your structures get more complex, though, you'll almost certainly re-implement database features, possibly poorly, incorrectly, or slowly. Databases are often used for "big" data sets, say in the hundred megabyte or higher range (very approximate). Perhaps you simply haven't hit a good point to use them. 67.158.43.41 (talk) 15:55, 7 November 2010 (UTC)[reply]
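To illustrate the index point above concretely, here is a small sqlite3 sketch (the table and column names are made up for the example); the same query runs as a full scan first and via a B-tree index second:

    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE people (name TEXT, age INTEGER)")
    conn.executemany("INSERT INTO people VALUES (?, ?)",
                     (("person%d" % i, i % 100) for i in range(100000)))

    # Without an index, this query must examine every row.
    print(conn.execute("SELECT COUNT(*) FROM people WHERE age = 42").fetchone())

    conn.execute("CREATE INDEX idx_age ON people (age)")
    # With the index, SQLite walks a B-tree instead of scanning.
    print(conn.execute("SELECT COUNT(*) FROM people WHERE age = 42").fetchone())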


November 6

Showing that lim (1+1/n)^n = e as n → infinity



Find a series for <math>\ln(1+x)</math>.

Hence show that <math>\left(1+\tfrac{1}{n}\right)^n \to e</math> as n approaches infinity.

Since <math>\left(1+\tfrac{1}{n}\right)^n</math> is equal to <math>e^{n\ln(1+1/n)}</math>, if I can show that <math>n\ln\left(1+\tfrac{1}{n}\right)</math> converges to 1 as n approaches infinity, I can conclude that <math>e^{n\ln(1+1/n)}\to e^1</math> and hence <math>\left(1+\tfrac{1}{n}\right)^n\to e</math>, but how do I do this? --220.253.253.75 (talk) 00:58, 6 November 2010 (UTC)[reply]

Your series is wrong, firstly. We begin with

<math>\ln(1+x) = x - \frac{x^2}{2} + \frac{x^3}{3} - \cdots</math>

Now substitute x = 1/n to get:

<math>\ln\left(1+\frac{1}{n}\right) = \frac{1}{n} - \frac{1}{2n^2} + \frac{1}{3n^3} - \cdots</math>

Multiply through by n:

<math>n\ln\left(1+\frac{1}{n}\right) = 1 - \frac{1}{2n} + \frac{1}{3n^2} - \cdots \quad (*)</math>

Clearly as n goes to infinity, (*) goes to 1, so that the argument of the logarithm goes to e. —Anonymous DissidentTalk 11:31, 6 November 2010 (UTC)[reply]
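A quick numerical check of the conclusion (not a proof, just reassurance that the series argument is pointing the right way):

    import math

    for n in (10, 1000, 100000):
        print(n, n * math.log(1 + 1 / n), (1 + 1 / n) ** n)
    # n*ln(1+1/n) -> 1, and (1+1/n)**n -> e = 2.71828...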

Correction to my previous Refdesk question

Hello,

I posted this [4] a while back in August, and I thought I'd solved the problem then. But I've just had another think about it and I've realised that there's no guarantee K_m is going to be a subgroup of K[alpha]! Even if you take K_m=K_m[alpha] as K[alpha]/(X^m-c), it's not necessarily the same using the tower law on [K[alpha]:K_m[alpha]][K_m[alpha]:K] and then saying both m, n divide [K[alpha]:K], because there's no guarantee as far as I can see that [K_m[alpha]:K]=[K_m:K]: surely the fact we're introducing a new dependence relation on our powers of alpha is going to change things.

So could anyone tell me where I went wrong? At the time it seemed right but I've unconvinced myself now! Thank you, 62.3.246.201 (talk) 01:14, 6 November 2010 (UTC)[reply]

I just looked at your previous question, and hopefully I'm following along correctly:
We assume that is irreducible, so is a field. We wish to show that , where is any root of . What do we know about ? Eric. 82.139.80.73 (talk) 13:28, 6 November 2010 (UTC)[reply]
That it's a root of ? In which case we know is a constant multiple of the min poly for , so which then is contained in ? Hope that's right! Also I think I meant to write K() rather than square brackets previously, sorry! 131.111.185.68 (talk) 14:27, 6 November 2010 (UTC)[reply]
I think I get it now anyway, thank you! 62.3.246.201 (talk) 05:42, 7 November 2010 (UTC)[reply]

Series Summation

Hi. I have to sum the series using the substitution . I've given it a go but I only get as far as and now don't see how to proceed. Originally, I was hoping to use the sum of a geometric series but clearly the n on the denominator stops this from being feasible. Can someone suggest what to do next? Thanks. asyndeton talk 11:36, 6 November 2010 (UTC)[reply]

First, you can factor the Im out of the summation to get (because taking the imaginary part is linear). Second, a nice trick for summing things of the form is to take the derivative or integral (depending on whether k is negative or positive) with respect to z and work from there. Eric. 82.139.80.73 (talk) 13:17, 6 November 2010 (UTC)[reply]
A devilishly cunning trick. Cheers Eric. asyndeton talk 13:56, 6 November 2010 (UTC)[reply]
Where, by linear, you only mean additive or R-linear, not C-linear. – b_jonas 21:13, 7 November 2010 (UTC)[reply]
Right, I was thinking R-linear. "Additive" probably would have been clearer. I just didn't want to leave it at "factor the Im out" so that someone reading along wouldn't think "Im" was a number. Eric. 82.139.80.73 (talk) 18:09, 8 November 2010 (UTC)[reply]

Divisor

Hello, everybody! Suppose that p is a prime number of the form <math>8k+1</math>. Can we always find a natural number n such that <math>n^4+1</math> is divisible by p? How can we prove this? Thank you! --RaitisMath (talk) 18:33, 6 November 2010 (UTC)[reply]

The multiplicative group mod p is cyclic of order divisible by 8; it therefore has an element n of order 8. It follows that <math>n^4+1 \equiv 0 \pmod{p}</math>.--RDBury (talk) 04:26, 7 November 2010 (UTC)[reply]
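That argument is easy to confirm by brute force; this sketch finds, for each prime p = 8k+1 below 200, an n with n^4 ≡ -1 (mod p):

    def primes_8k_plus_1(limit):
        # primes of the form 8k+1: 17, 41, 73, 89, 97, 113, 137, 193, ...
        for p in range(17, limit, 8):
            if all(p % d for d in range(2, int(p**0.5) + 1)):
                yield p

    for p in primes_8k_plus_1(200):
        # smallest n with n^4 congruent to -1 mod p
        n = next(n for n in range(2, p) if pow(n, 4, p) == p - 1)
        print(p, n)  # e.g. 17 2, 41 3, ...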


November 7

Check my math please

Could someone check my math here? A random sample of wastewater shows a BOD of 959 mg/L. Total wastewater is 5.8 million gal/day. 959 x 3.875 (liters to gal) = 3.72gm/gal x 5.8 mgd = 21,576 kg/day. Right?

The same sample shows lead at 2.49 mcg/L so 2.49 x 3.875 x 5.8 mil = 55.96 grams.

Have I got all the zeros and decimals in the right places here? Thanks! —Preceding unsigned comment added by 148.66.156.170 (talk) 23:31, 7 November 2010 (UTC)[reply]

Google Calculator comes close to agreeing with you, with 959 mg/L * 5.8 million gal/day in kg/day = 21 055.2174 kg / day. The second calculation gives 54.668917 g / day. (The small discrepancy is because a US gallon is about 3.785 L, not the 3.875 in your working.) 67.158.43.41 (talk) 23:47, 7 November 2010 (UTC)[reply]
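Spelling the arithmetic out with the standard conversion factor reproduces those figures; a small sketch:

    L_PER_GAL = 3.78541   # litres per US gallon
    GAL_PER_DAY = 5.8e6   # total wastewater flow, gallons per day

    bod_mg_per_L = 959
    print(bod_mg_per_L * L_PER_GAL * GAL_PER_DAY / 1e6, "kg/day")  # ~21055

    lead_ug_per_L = 2.49
    print(lead_ug_per_L * L_PER_GAL * GAL_PER_DAY / 1e6, "g/day")  # ~54.67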

November 8

Fibonacci-like Series

Consider the series <math>\sum_{k=1}^\infty b^{-F_k}</math>, which is the number in base b that has ones in its expansion at indices corresponding with the Fibonacci numbers. Is this number algebraic? Similarly, is the number <math>\sum_{k=1}^\infty p^{F_k}</math> algebraic in the p-adic numbers? Black Carrot (talk) 01:34, 8 November 2010 (UTC)[reply]
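For concreteness, a sketch of the base-10 case (this assumes the reading that digit position i after the point is 1 exactly when i is a Fibonacci number):

    N = 40  # how many digits to print
    fibs, a, b = set(), 1, 2
    while a <= N:
        fibs.add(a)
        a, b = b, a + b
    print("0." + "".join("1" if i in fibs else "0" for i in range(1, N + 1)))
    # 0.1110100100001000000010000000000001000000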

The answer to the first question is "almost certainly not, but I very much doubt anyone has a proof". It should be the case that all algebraic irrationals are normal numbers to every base. Basically that's because almost all real numbers are normal, so if a number is not normal then there should be a reason it isn't, and algebraic irrationals don't really know from positional number representations so they shouldn't have such a reason. Given that there are only countably many algebraic irrationals, and the probability of any of them not being normal should be zero, the probability that there's even a single algebraic irrational that's not normal should also be zero.
But of course that's not a proof. --Trovatore (talk) 07:37, 8 November 2010 (UTC)[reply]
I don't believe in this one. Some rational numbers have a very good reason not to be normal, so why couldn't some algebraic irrational numbers also have a good reason? On the other hand, I do think that this particular sum is not algebraic. – b_jonas 11:18, 8 November 2010 (UTC)[reply]
All rational numbers have a very good reason not to be normal. They have repeating expansions in every base. No such reason is apparent for algebraic irrationals. There probably isn't one, and most likely all algebraic irrationals are in fact normal.
This is a case with a huge disconnect between what we basically know (in the sense of things that we'd say we know in, for example, physics) and what we can prove by the rules of mathematics. We basically know that all algebraic irrationals are normal to all bases. But I am unaware that anyone has managed to prove that even one algebraic irrational is normal to even one base. --Trovatore (talk) 18:38, 8 November 2010 (UTC)[reply]
So if a number is normal with respect to all digits but 8, but its decimal expansion has no 8s in it, you're saying it's probably transcendental? Michael Hardy (talk) 16:56, 10 November 2010 (UTC)[reply]
Yes. --Trovatore (talk) 18:53, 10 November 2010 (UTC)[reply]
You might be interested in the article Liouville number, though I don't believe it applies in this case. I also suspect your first number is not algebraic in any base, but do not have a proof. A plausible generalization might replace F_k with nearestint(e^k) for suitable real e, in light of Binet's formula. 67.158.43.41 (talk) 11:55, 8 November 2010 (UTC)[reply]
This is not an answer, but still. It is easy to see that the two related sums <math>\textstyle\sum_k b^{-F_{2k}}</math> and <math>\textstyle\sum_k b^{-F_{2k+1}}</math> both have approximation exponent ≥ <math>\varphi^2</math> > 2, hence they are both transcendental by Roth's theorem. This does not preclude their sum from being algebraic, but it seems quite unlikely.—Emil J. 15:15, 8 November 2010 (UTC)[reply]


At a first glance, my impression is that for any integer the number is indeed trascendental, and that there should be an elementary proof, arguing on the "base b support" of the number and of its powers. Here by "base b support" of a number x I mean the subset of corresponding to the non-zero digits of x, e.g. for , it is the set of all Fibonacci numbers (negated). Note that where is the number of representations of k as a sum of m Fibonacci numbers, with possible repetitions, and distinguishing the order of summands (in general this is not a base b representation, since the coefficients may exceed b-1). However I think it should be true that:
i) for any m, c(k,m) is bounded by a constant C(m);
ii) for any m, the set <math>S_{<m}</math> of all natural numbers that have a representation as a sum of less than m Fibonacci numbers has density 0 relative to the set <math>S_m</math> of all natural numbers that have a representation as a sum of exactly m Fibonacci numbers, meaning that the number of elements of <math>S_{<m}</math> less than x is little oh of the number of elements of <math>S_m</math> less than x, as <math>x\to+\infty</math>. Let <math>P(t)=a_mt^m+Q(t)</math> be a polynomial of degree m with integer coefficients and with <math>Q</math> of degree less than m.
As a consequence of (i), the base b support of <math>a_mx^m</math> should be only a slight perturbation of the set <math>S_m</math>, and by (ii) it could not be covered by the support of <math>Q(x)</math>, which would make it impossible for <math>x</math> to be a root of <math>P</math>. Though I think that the best proof is the one shown by EmilJ via Roth's theorem, I'd be glad to hear any comments or ideas on the above elementary approach. --pma 23:37, 8 November 2010 (UTC)[reply]
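If anyone wants to poke at the "support" idea numerically, here is a small Python sketch (my own construction; b = 10, m = 2 and the 400-digit cutoff are arbitrary choices):

<syntaxhighlight lang="python">
# Work with X = x * 10**L truncated to an integer, so the arithmetic is exact.
L = 400
fib = [1, 2]
while fib[-1] + fib[-2] <= L // 2:
    fib.append(fib[-1] + fib[-2])
X = sum(10**(L - f) for f in fib)        # truncation of x = sum 10**(-F_k)

def support(n, places):                  # decimal places holding nonzero digits
    s = str(n).zfill(places)
    return [i + 1 for i, d in enumerate(s) if d != '0']

print(support(X, L)[:10])          # the Fibonacci numbers 1, 2, 3, 5, 8, ...
print(support(X * X, 2 * L)[:15])  # sums F_i + F_j, i.e. the support of x^2
</syntaxhighlight>

In the printed range the digits of <math>x^2</math> are exactly the coefficients <math>c(k,2)</math>, since they are all smaller than 10 and no carrying occurs.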

Quick query - integral tending to 0 on [0,1]

Hi there,

Could anyone quickly suggest a method I might use to show that  ? I've spent a long time staring at it and the only possible method I could think of was to say , but this only gives us a bound of via arctan, which is not good enough unfortunately.

I am happy to do all the working myself if anyone could point me in the direction of an effective method! (Limit tests, basic measure theory etc. are fine with me, incidentally).

Thanks! 131.111.1.66 (talk) 15:01, 8 November 2010 (UTC)[reply]

For <math>x\neq 0</math>, this is just  which is obviously 0 (since ). For <math>x=0</math>, it's , which is rather not 0. --Tardis (talk) 15:09, 8 November 2010 (UTC)[reply]
I suspect that the OP didn't properly write what they actually want, since they talk about integrals, of which there are none in this expression.—Emil J. 15:18, 8 November 2010 (UTC)[reply]
Sorry! Corrected now, I'm an idiot :-P 131.111.1.66 (talk) 16:15, 8 November 2010 (UTC)[reply]
OK, trying again (and assuming a dx in the integral): dropping the cosine factor for simplicity, we have where and . As , the first term goes to zero because so which vanishes, the second goes to zero because on that interval so which vanishes for suitably small , and the third goes to zero because on that interval which vanishes (and the interval length is bounded). --Tardis (talk) 19:34, 8 November 2010 (UTC)[reply]

when will math end?

what is the most likely avenue for the demise of the entire field of mathematics as we know it today, and when is it most likely to take place? My impression is that it must be empirical evidence (not proof, of course =) ) that any axiomatic system is inconsistent, and it seems to me this is most likely to take place in the next 150 years. I would place a 90% chance on this happening. But this is just my impression. What are the facts and referenced ideas on this subject? Thank you. 84.153.236.235 (talk) 18:42, 8 November 2010 (UTC)[reply]

Take a look at Gödel's incompleteness theorems, and the references linked at our consistency article to learn about formal mathematical consistency. At the reference desk, we will not speculate about future developments in mathematics; that means we won't delve into guessing about "the demise" of mathematics (whatever that even means). Nimur (talk) 18:51, 8 November 2010 (UTC)[reply]
My guess would be that it ends because life has ended, and the timescale would be some thousands, maybe millions, of millions of years. Hopefully evolution and technical fixes would have made people better at it by then. Dmcq (talk) 19:33, 8 November 2010 (UTC)[reply]
Agreed with Dmcq, the demise of humanity is by far the most likely scenario for math ending. But I think it's very likely we will destroy ourselves this century.
A discovery like what you describe, if indeed it takes place, may change some views in philosophy of mathematics, but it will not change how mainstream math is practiced.
I'd say that the probability that humanity as we know it will continue, but math as we know it will end this millennium, is about 0.00000001% (though I am in no way qualified to make such estimates). -- Meni Rosenfeld (talk) 20:00, 8 November 2010 (UTC)[reply]
I'd nitpick on Dmcq's and Meni Rosenfeld's answer on account of the fact that the study of mathematics has only been able to pick up due to the luxury of humanity's ability to focus on the mental exercises of math, sciences, and the arts, instead of the constant need to survive a harsh environment. It would not take the complete demise of humanity to end math as we know it; only the end of modern civilizations. --COVIZAPIBETEFOKY (talk) 21:31, 8 November 2010 (UTC)[reply]
Depending on what you mean by "as we know it", the "field of mathematics as we know it today" could end abruptly if a strong AI were developed which ordinary humans couldn't keep up with, effectively ending humanity's direct participation in mathematical discovery. Math would certainly exist and continue after that, but not really "as we know it". Staecker (talk) 00:53, 9 November 2010 (UTC)[reply]
If humans can get along with each other and the planet well enough, I don't think math will ever end. I recall Max Born once saying "physics, as we know it, will end in 2 weeks", only to see the explosive development of new stuff after that. I think, as new theories are developed to answer current problems, new problems arise from the new theories at a rate that's faster than the rate at which current problems are being solved. Money is tight (talk) 14:26, 9 November 2010 (UTC)[reply]

Every time a theorem is proved, a part of mathematics dies. Count Iblis (talk) 23:59, 9 November 2010 (UTC)[reply]

November 9

Differential geometry textbook

Hello. I'm looking for a good introduction to differential geometry; can anyone suggest a particular author or text? Thanks in advance. —Anonymous DissidentTalk 12:27, 9 November 2010 (UTC)[reply]

I hear Lee's Introduction to Smooth Manifolds is pretty good, but I haven't looked at it yet. Money is tight (talk) 14:22, 9 November 2010 (UTC)[reply]
Spivak's Comprehensive Introduction to Differential Geometry is well-used and praised. [5] Depending on your background, you may find it more comprehensive than introductory :) I also recommend his book on introductory real analysis, which is unfortunately named `Calculus'. SemanticMantis (talk) 16:43, 9 November 2010 (UTC)[reply]
It depends on what your interests are and where you want to go. Very beautiful is Foundations of Differentiable Manifolds and Lie Groups, by F. Warner. --pma 20:14, 9 November 2010 (UTC)[reply]

are these the same thing

Toroidal coordinates Bispherical coordinates —Preceding unsigned comment added by 129.67.37.227 (talk) 18:33, 9 November 2010 (UTC)[reply]

No — they rotate their common bipolar coordinates base system around two different axes. --Tardis (talk) 20:57, 9 November 2010 (UTC)[reply]
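For reference, the standard parametrizations (from the two articles; a is the scale parameter, and <math>(\sigma,\tau)</math> are the bipolar coordinates being revolved through the angle <math>\phi</math>):

<math>\text{bispherical:}\quad x=\frac{a\sin\sigma\cos\phi}{\cosh\tau-\cos\sigma},\quad y=\frac{a\sin\sigma\sin\phi}{\cosh\tau-\cos\sigma},\quad z=\frac{a\sinh\tau}{\cosh\tau-\cos\sigma}</math>

<math>\text{toroidal:}\quad x=\frac{a\sinh\tau\cos\phi}{\cosh\tau-\cos\sigma},\quad y=\frac{a\sinh\tau\sin\phi}{\cosh\tau-\cos\sigma},\quad z=\frac{a\sin\sigma}{\cosh\tau-\cos\sigma}</math>

In the bispherical case the rotation axis passes through the two foci; in the toroidal case it is the perpendicular bisector of the segment joining them, which is why the surfaces of constant <math>\tau</math> are spheres in one system and tori in the other.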

Probability

If a number between 0 and 1 is chosen at random, then what is the probability that exactly 5 of the first 10 digits are less than 5?

My textbook says (1/2)^10, but I would think that you would have to take into account permutations (i.e. 10!/(5!5!) * (1/2)^10). Why am I wrong? And suppose the question asked to find the probability that at least 5 of the first 10 digits are less than 5? My first guess would be 10!/(5!5!) * (1/2)^5, but the actual number is absurd (i.e. >1). —Preceding unsigned comment added by 76.68.247.201 (talk) 20:52, 9 November 2010 (UTC)[reply]

Your textbook is wrong on the first point: it is <math>\binom{10}{5}2^{-10}</math> (see combinations (not permutations) for the notation). It's easier to see the pattern if you say "7 of them less than 4" instead: <math>\binom{10}{7}\left(\tfrac25\right)^7\left(\tfrac35\right)^3</math>. Just <math>2^{-10}</math> is the probability that, say, the first five are less than 5 and the next five are not (e.g., 0.4124165869...). But for the "at least" question, it's not enough to drop the power to 5; you have to consider each possibility that's sufficient (5, 6, …, 10). So it's <math>\sum_{k=5}^{10}\binom{10}{k}2^{-10}</math>. (Again, it's clearer if the probabilities aren't even: "at least 5 less than 4" is <math>\sum_{k=5}^{10}\binom{10}{k}\left(\tfrac25\right)^k\left(\tfrac35\right)^{10-k}</math>.) --Tardis (talk) 21:07, 9 November 2010 (UTC)[reply]
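A quick numerical check of both values, for anyone who wants to verify the arithmetic (a Python sketch):

<syntaxhighlight lang="python">
from math import comb

exactly_5 = comb(10, 5) / 2**10                               # C(10,5) / 2^10
at_least_5 = sum(comb(10, k) for k in range(5, 11)) / 2**10   # sum over 5..10

print(exactly_5)    # 0.24609375
print(at_least_5)   # 0.623046875
</syntaxhighlight>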
Okay, thanks. 76.68.247.201 (talk) 23:59, 9 November 2010 (UTC)[reply]

That exactly 5 of the first 10 digits are less than 5 is not a property of a real number, because 0.00000999999... = 0.0000100000... . Bo Jacoby (talk) 19:31, 10 November 2010 (UTC).[reply]

But the set of troublemakers is countable, hence of probability zero, so it does not matter.—Emil J. 19:47, 10 November 2010 (UTC)[reply]
Hah, made me laugh! It's finite, even, since most troublemakers have representations which differ only after the first 10 digits. 67.158.43.41 (talk) 02:38, 11 November 2010 (UTC)[reply]

Determining all odd primes for which 15 is a quadratic residue modulo p

Hello everyone,

Pretty much as the question says, what is the best way to find all odd primes p such that 15 is a quadratic residue modulo p? I have been playing around with Jacobi symbols, but I tend to get into about 10 different special cases: <math>\left(\frac{15}{p}\right)=\left(\frac{3}{p}\right)\left(\frac{5}{p}\right)=(-1)^{\frac{p-1}{2}}\left(\frac{p}{3}\right)\left(\frac{p}{5}\right)</math>, and then I seem to get into lots of different cases based on whether each of these terms is ±1, and even then for each of those cases there are some subcases and it seems to be a bit of a mess. Could anyone suggest a better way, or must it be done on a case-by-case basis?

I am happy using standard Legendre/Jacobi symbols, but outside of that I'd prefer not to use too much machinery (though a few modular arithmetic results like Gauss' lemma, FLT etc. are fine) - if that's possible, of course. Thank you for the help in advance! Otherlobby17 (talk) 20:54, 9 November 2010 (UTC)[reply]

The expression you wrote is exactly what you need to think about. If you can evaluate all three factors, you know whether or not 15 is a quadratic residue mod p. You need to think about cases, but there are more than 10-- do you see from your formula what the cases are?
If we didn't know about quadratic reciprocity, we might think there were infinitely many cases, since there are infinitely many odd primes. But, in fact, there is a pattern to the answer, and the cases you must consider reveal the shape of that pattern. If you can see the simplest way to state the answer, it's not that messy. (At least, it's a vast improvement on infinitely many cases!) 140.114.81.55 (talk) 02:25, 10 November 2010 (UTC)[reply]
I would guess the cases are just dependent on the values of p modulo 3, 4 and 5? The 3/5 come from the Legendre symbols, and the 4 from the (-1) factor? Otherlobby17 (talk) 03:10, 10 November 2010 (UTC)[reply]
Small typesetting thing: \left( and \right) can make your Legendre symbols prettier, on the off chance you didn't know it already. 67.158.43.41 (talk) 19:48, 10 November 2010 (UTC)[reply]
Yes, those are the cases. (And you can combine those three 'modulos' into one big 'modulo'.) Once you write out the whole answer, you can check a few p's to test it. (For example, 15 is clearly a square mod 11.) 140.114.81.55 (talk) 02:27, 11 November 2010 (UTC)[reply]
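If it helps to check the final answer against data, here is a brute-force Python sketch (my own; it uses Euler's criterion instead of reciprocity, and the bound 10,000 is arbitrary):

<syntaxhighlight lang="python">
def is_prime(n):
    d = 2
    while d * d <= n:
        if n % d == 0:
            return False
        d += 1
    return n >= 2

def is_qr(a, p):
    # Euler's criterion: for an odd prime p not dividing a,
    # a is a quadratic residue mod p iff a^((p-1)/2) = 1 (mod p).
    return pow(a, (p - 1) // 2, p) == 1

# Start at 7 to skip 2, and 3 and 5, which divide 15.
classes = sorted({p % 60 for p in range(7, 10000)
                  if is_prime(p) and is_qr(15, p)})
print(classes)   # the residue classes mod 60 that make 15 a square
</syntaxhighlight>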

November 10

Irrational power

Is there a more accurate way to calculate an irrational power of a number than to approximate it with a fraction (rational number) and take the [denominator]th root to the power of [numerator]? 24.92.78.167 (talk) 00:06, 11 November 2010 (UTC)[reply]

I'm not certain what you're saying but if you want lots more digits than a standard calculator gives then just look for a high precision one on the web and stick in the figures. The first I looked at said it did 50 figures which is way more accurate than even most mathematicians will know what to do with. Dmcq (talk) 01:46, 11 November 2010 (UTC)[reply]
Irrational exponents are often defined by first defining rational powers and then taking the irrational case to be the continuous extension of the rational case. That is, irrational powers are defined to be the limit of rational power approximations, as the approximations approach the irrational exponent. There's no way to get more accurate than the definition.... I'm also puzzled by what you might have meant. 67.158.43.41 (talk) 02:31, 11 November 2010 (UTC)[reply]

I mean this: to evaluate, say, <math>2^{\sqrt{2}}</math>, we can approximate <math>\sqrt{2}\approx 1.414=\tfrac{1414}{1000}</math>, and so <math>2^{\sqrt{2}}\approx\sqrt[1000]{2^{1414}}</math>. I know we can approximate these numbers closely enough for all practical purposes, but is there a way to find the exact value? Thanks 24.92.78.167 (talk) 02:40, 11 November 2010 (UTC)[reply]

Well, in special cases, sure. For example you can certainly get closer to <math>e^{\sqrt{2}}</math> directly than you could get by using rational approximations to <math>\sqrt{2}</math>. I don't see any obvious better way of getting <math>2^{\sqrt{2}}</math>, though. --Trovatore (talk) 02:42, 11 November 2010 (UTC)[reply]
(ec) Well, there's no way to write down a decimal expansion of the exact value. But the same is true of <math>\sqrt{2}</math>. For all irrational numbers, the decimal expansion goes on forever and never repeats. But, writing <math>2^{\sqrt{2}}</math> or <math>\sqrt{2}</math> is writing down the exact value, because we have defined exactly what these expressions mean. If you're more comfortable with <math>\sqrt{2}</math>, that's probably just because it's more familiar. One nice way to rewrite messy exponents is: <math>a^b = e^{\ln(a)\,b}</math>. (That's how calculators would compute the value for you, actually.) One thing I have to add for fun-- it is possible to write down the exact value of the decimal expansion of <math>\left(\sqrt{2}^{\sqrt{2}}\right)^{\sqrt{2}}</math>. 140.114.81.55 (talk) 02:51, 11 November 2010 (UTC)[reply]
According to some perspectives, <math>a^b = e^{\ln(a)\,b}</math> is how <math>a^b</math> is defined, if b is not rational. At least according to my calculus textbook, prior to the definition of ln x through the integral of 1/x, there wasn't any operation defined over all the real numbers which functioned like exponentiation, so taking something to an irrational power was nonsensical. The inverse of ln x had the same properties as exponents, assuming a particular irrational base (henceforth called e), and so was taken as the definition of exponentiation to irrational powers. Irrational powers of other bases followed from the basic properties of powers and logarithms. - 174.31.204.207 (talk) 16:34, 11 November 2010 (UTC)[reply]
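For concreteness, this is exactly what one can do to high precision with Python's standard decimal module (a sketch; the 50-digit precision is an arbitrary choice):

<syntaxhighlight lang="python">
from decimal import Decimal, getcontext

getcontext().prec = 50
root2 = Decimal(2).sqrt()             # 50-digit approximation of sqrt(2)
x = (root2 * Decimal(2).ln()).exp()   # 2^sqrt(2) = exp(sqrt(2) * ln 2)
print(x)   # 2.6651441426902251886502972498731...
</syntaxhighlight>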
You may be interested in computable numbers. What precisely do you mean by "find[ing] the exact value"? Writing the full decimal expansion? Having a way to compute the full decimal expansion to any desired precision? Something else? 67.158.43.41 (talk) 04:11, 11 November 2010 (UTC)[reply]

Direct sum

Let f:A->B be a map between projective modules. Is A the direct sum of ker(f) and im(f)? Money is tight (talk) 05:03, 11 November 2010 (UTC)[reply]

Provided <math>\operatorname{im}(f)</math> is projective, the short exact sequence <math>0\to\ker(f)\to A\to\operatorname{im}(f)\to 0</math> splits (so that A would be the direct sum of ker(f) and im(f)). However, B being projective does not necessarily imply <math>\operatorname{im}(f)</math> is projective, though I don't have an example on hand. I doubt A being projective makes any difference. Eric. 82.139.80.73 (talk) 05:37, 11 November 2010 (UTC)[reply]
You're right about how A doesn't make a difference. Take a projective B with a non-projective submodule K, and we have the composite f: F(K) -> K -> B, where F(K) is the free module on K generators. If the sequence splits, then K must be projective, because direct summands of projectives are projective.
My real question was the following: can every map ker(f)->C be extended to a map A->C? This is implied by my above question, but I bet it's false as well (I have very bad intuition in algebra). Money is tight (talk) 07:11, 11 November 2010 (UTC)[reply]
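In case a concrete instance of the above construction is useful (my example, so worth double-checking): over <math>R=\mathbb{Z}/4\mathbb{Z}</math>, take <math>B=R</math>, which is free, and <math>K=2R\cong\mathbb{Z}/2\mathbb{Z}</math>. Here K is not projective: if it were, the surjection <math>R\to R/2R\cong K</math> would split, exhibiting K as a direct summand of R, but R is indecomposable.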

growth versus "now"

Suppose I have a time-discrete function C(t+1) = gC(t). The parameters can be set up in such a way that x% of the function goes to "growth" and y% goes to "present benefit". Thus B(t) = (1-g)*C(t). Of course, growth has no utility while benefit does. At 100% growth, the function doubles.

Assuming the function is allowed to proceed indefinitely, is there a cutoff point for the growth allocation where the accumulated benefit as t goes to infinity for some high growth rate will actually be less than the accumulated benefit for some function with a low growth rate?

I notice that at 100% growth and 0% benefit, B(t) = 0 for all t, so NO benefit accumulation occurs, so clearly 100% growth is not an optimum point, and 99% growth will give you more benefit at t=infinity. Is 100% a singularity, or is there an optimum somewhere?

I suppose the proportion must stay the same throughout all times. How do things change if the proportions are variable, or if 100% growth corresponds to multiplying the current function value by 1.5 or 4 to get the next function value?

Does this problem have a name? John Riemann Soong (talk) 07:18, 11 November 2010 (UTC)[reply]

First you'll have to define what your target is. Any growth allocation <math>x<100\%</math> will give you an infinite accumulated benefit. You need a way to distinguish different outcomes.
For any reasonable such way which preserves the idea that you're only interested in the infinite run, you'll find that for <math>x_1<x_2<100\%</math>, <math>x_2</math> is preferable. So you can only find a supremum, not a maximum.
Usually with such problems you have some discount rate or (possibly unknown) stopping point which makes it better defined.
The problem is the same if you have <math>C(t+1)=(1+rx)\,C(t)</math> for some <math>r>0</math>.
Ramsey growth model might be relevant. -- Meni Rosenfeld (talk) 09:23, 11 November 2010 (UTC)[reply]
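A toy simulation of the tradeoff, in case it makes the supremum-but-no-maximum point vivid (my own sketch, under the doubling setup above; the 100-period horizon is exactly the kind of arbitrary stopping rule mentioned):

<syntaxhighlight lang="python">
def accumulated_benefit(x, periods):
    """Total benefit when a fraction x of output is reinvested as growth
    each period and the remaining 1 - x is consumed as benefit."""
    c, total = 1.0, 0.0
    for _ in range(periods):
        total += (1 - x) * c
        c *= 1 + x          # x = 1 (100% growth) doubles c each period
    return total

for x in (0.0, 0.5, 0.9, 0.99, 1.0):
    print(x, accumulated_benefit(x, 100))
</syntaxhighlight>

For any fixed horizon the best x lies strictly between 0 and 1, it creeps toward 1 as the horizon grows, and x = 1 itself always gives 0.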
If this was a finance problem, you would choose whichever values gave you the highest NPV. As this continues into the infinite future, the Gordon model might apply. Interesting question. 92.29.112.73 (talk) 13:51, 11 November 2010 (UTC)[reply]

Calculating the central angle in polyhedra

Hello, I asked a question on the science reference desk here: Wikipedia:Reference_desk/Science#Molecular_geometry regarding how to calculate the bond angle between a central atom and its nearest neighbours in polyhedral structural units. I was referred here since it is essentially a geometry question. The article Molecular_geometry states that the central angle for a tetrahedral unit is <math>\cos^{-1}(-1/3)=109.47</math> degrees. My question is how does one obtain this formula? I am looking for a reference that clearly and concisely describes how the central angle can be calculated in various polyhedra, not just a tetrahedron. For example, how would I calculate the central angle for a cuboctahedron? Or even the simpler octahedron or cubic cases? I'm certain this is described in some fundamental geometry text book or maybe an online resource somewhere but so far I have not been able to find a reference that describes the necessary procedure. Many thanks in advance. 88.219.44.165 (talk) 11:01, 11 November 2010 (UTC)[reply]

The first step is to find the coordinates of the two vertices, u and v, with the center at the origin. Then the angle is given by
<math>\theta = \cos^{-1}\left(\frac{u\cdot v}{\|u\|\,\|v\|}\right),</math>
where <math>u\cdot v</math> is the dot product and <math>\|\cdot\|</math> is the norm. For the cuboctahedron, taking vertices (1, 1, 0) and (1, 0, 1) you have
<math>\theta = \cos^{-1}\left(\frac{1\cdot 1+1\cdot 0+0\cdot 1}{\sqrt{2}\,\sqrt{2}}\right)=\cos^{-1}\left(\frac{1}{2}\right).</math>
So the angle is 60°. -- Meni Rosenfeld (talk) 13:58, 11 November 2010 (UTC)[reply]
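A general-purpose version of that recipe as a small Python helper (my own sketch; the vertex coordinates are the standard ones for each solid):

<syntaxhighlight lang="python">
from math import acos, degrees, sqrt

def central_angle(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    norm = lambda w: sqrt(sum(a * a for a in w))
    return degrees(acos(dot / (norm(u) * norm(v))))

print(central_angle((1, 1, 1), (1, -1, -1)))   # tetrahedron: 109.47...
print(central_angle((1, 0, 0), (0, 1, 0)))     # octahedron: 90.0
print(central_angle((1, 1, 0), (1, 0, 1)))     # cuboctahedron: 60.0
</syntaxhighlight>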

Commutativity property on "two-stage" transformations

I'm trying to prove a property about transformations on a set of objects G (labelled directed graphs in my particular case, but not necessarily relevant), and got stumped, so was hoping for help. One property of the transformations I'm looking at is that they are split into two stages; that is, transformations are functions <math>T\colon G\to H</math>, where <math>H</math> is then the set of functions <math>G\to G</math>. So for a given <math>g\in G</math>, you might compute <math>g'=e(g)</math>, where <math>e=T(g)</math>. I've been using <math>Tg</math> as shorthand for <math>(T(g))(g)</math>.

I've also got a notion of dependence or independence between two transformations. Given two transformations <math>T_1</math> and <math>T_2</math>, <math>T_2</math> is dependent on <math>T_1</math> if <math>T_2(T_1g)\neq T_2(g)</math> for some <math>g\in G</math>. If <math>T_2</math> is not dependent on <math>T_1</math> and vice versa, then they are independent.

The property I'd like to prove is this: if <math>T_1</math> and <math>T_2</math> are independent, then <math>T_1T_2g = T_2T_1g</math> for every <math>g\in G</math>; or, failing that, what additional conditions I might need on <math>T_1</math> and <math>T_2</math> to ensure that it is true.

Thanks in advance for any help! — Matt Crypto 16:40, 11 November 2010 (UTC)[reply]

Hmm. Unwinding the definitions, you are assuming that <math>T_1((T_2(x))(x))=T_1(x)</math> and <math>T_2((T_1(x))(x))=T_2(x)</math> for every <math>x\in G</math>, and you want to infer that <math>(T_1((T_2(x))(x)))((T_2(x))(x))=(T_2((T_1(x))(x)))((T_1(x))(x))</math>. Is that correct? If so, then the property does not hold in general: for a counterexample, take G = {1,2}, and define (T1(x))(y) = 1 and (T2(x))(y) = 2 for every x and y.
Under your assumptions, the identity you want simplifies to <math>(T_1(x))((T_2(x))(x))=(T_2(x))((T_1(x))(x))</math>, so you might want to use that as an additional assumption.—Emil J. 17:07, 11 November 2010 (UTC)[reply]
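For what it's worth, the counterexample is easy to machine-check. A Python sketch (my own encoding, with an "edit" represented as a function on G):

<syntaxhighlight lang="python">
G = {1, 2}
T1 = lambda g: (lambda y: 1)     # (T1(x))(y) = 1 for every x, y
T2 = lambda g: (lambda y: 2)     # (T2(x))(y) = 2 for every x, y

def run(T, g):                   # the shorthand Tg = (T(g))(g)
    return T(g)(g)

def same_edit(e1, e2):           # compare two edits pointwise on G
    return all(e1(y) == e2(y) for y in G)

for g in G:
    assert same_edit(T2(g), T2(run(T1, g)))   # T2 is not dependent on T1
    assert same_edit(T1(g), T1(run(T2, g)))   # T1 is not dependent on T2

print(run(T1, run(T2, 1)), run(T2, run(T1, 1)))   # prints "1 2": order matters
</syntaxhighlight>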
Thanks for taking the time to unravel the definitions -- and you were quite correct in working out what I meant -- and for the counterexample (pretty obvious now you point it out!). I'll go and think some more, but just in case it triggers any further suggestions, I'll share a little more about my specific problem domain. G is a set of labelled directed graphs, and the transformations <math>T_i</math> are rules which find matches of a pattern graph in an input graph and produce a graph edit as output. A graph edit is a consistent set of atomic edits on the input graph -- remove particular specified edges and vertices, relabel vertices, add edges and previously unseen vertices. You can also apply a graph edit to a different graph than that with which it was created, and the effect is that it simply ignores any atomic edits that don't apply to the given graph. I've got a method of calculating dependencies (with the meaning given above) between rules, and, assuming that the dependency graph is a DAG, I wanted to show that if I executed the rules in any topological order then the output graph would be the same. — Matt Crypto 19:11, 11 November 2010 (UTC)[reply]