Wikipedia:Reference desk/Mathematics: Difference between revisions

:Spivak's Comprehensive Introduction to Differential Geometry is well-used and praised. [http://www.amazon.com/Comprehensive-Introduction-Differential-Geometry-Vol/dp/0914098705] Depending on your background, you may find it more comprehensive than introductory :) I also recommend his book on introductory real analysis, which is unfortunately named `Calculus'. [[User:SemanticMantis|SemanticMantis]] ([[User talk:SemanticMantis|talk]]) 16:43, 9 November 2010 (UTC)

== are these the same thing ==

[[Toroidal coordinates]] [[Bispherical coordinates]]

Revision as of 18:33, 9 November 2010

Welcome to the mathematics section
of the Wikipedia reference desk.
Want a faster answer?

Main page: Help searching Wikipedia

   

How can I get my question answered?

  • Select the section of the desk that best fits the general topic of your question (see the navigation column to the right).
  • Post your question to only one section, providing a short header that gives the topic of your question.
  • Type '~~~~' (that is, four tilde characters) at the end – this signs and dates your contribution so we know who wrote what and when.
  • Don't post personal contact information – it will be removed. Any answers will be provided here.
  • Please be as specific as possible, and include all relevant context – the usefulness of answers may depend on the context.
  • Note:
    • We don't answer (and may remove) questions that require medical diagnosis or legal advice.
    • We don't answer requests for opinions, predictions or debate.
    • We don't do your homework for you, though we'll help you past the stuck point.
    • We don't conduct original research or provide a free source of ideas, but we'll help you find information you need.



How do I answer a question?

Main page: Wikipedia:Reference desk/Guidelines

  • The best answers address the question directly, and back up facts with wikilinks and links to sources. Do not edit others' comments and do not give any medical or legal advice.


November 3

Sums of random variables, and line integrals

The setting is: Let X and Y be independent random variables (on the reals), with density functions f and g. The standard result for the density of Z = X + Y is f_Z(z) = ∫ f(x) g(z − x) dx. I'm trying to derive this formula by using a line integral, but I'm getting a different result, so I must be doing something wrong.

This is my reasoning: The joint density of (X, Y) on R² is given by h(x, y) = f(x)g(y). To find f_Z(z) we should integrate h over the line x + y = z. We can parametrize this line by the function r(t) = (t, z − t). Note that r′(t) = (1, −1), so |r′(t)| = √2. Now if we calculate the line integral (as in Line_integral#Definition), we get ∫ h(r(t)) |r′(t)| dt = √2 ∫ f(t) g(z − t) dt. If we compare this with the textbook result above, then my result has a factor √2 that is not supposed to be there. Where did I go wrong? Arthena(talk) 10:03, 3 November 2010 (UTC)[reply]

The problem is in the assumption that the line integral gives the desired result. Let's say we want to find the probability that z ≤ Z ≤ z + dz. This is given by the integral of the joint density over the region z ≤ x + y ≤ z + dz. This region has a thickness of dz/√2, so the integral over it is equal to the line integral times dz/√2. However, by assuming that the line integral will give the density, you have implied that the probability will be the line integral times dz (the density times the interval length). So you'll need to multiply by a correction factor equal to the thickness of the line divided by the corresponding length of the interval of the derived variable.
Because this can be confusing, it's best to use the cumulative density wherever possible. -- Meni Rosenfeld (talk) 12:03, 3 November 2010 (UTC)[reply]
Thanks for the answer. Arthena(talk) 23:38, 3 November 2010 (UTC)[reply]
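As a numeric illustration of the answer (my own sketch, not from the thread; I picked Uniform(0,1) densities purely for simplicity), the naive line integral comes out exactly √2 times the textbook convolution density:

```python
import math

# Density of a Uniform(0, 1) random variable; X and Y both use it.
def f(x):
    return 1.0 if 0.0 <= x <= 1.0 else 0.0

def convolution_density(z, n=10000):
    # Textbook density of Z = X + Y: integral of f(t) f(z - t) dt.
    h = 2.0 / n
    return h * sum(f(t) * f(z - t) for t in (i * h - 0.5 for i in range(n)))

def line_integral(z, n=10000):
    # Line integral of f(x) f(y) over x + y = z with r(t) = (t, z - t),
    # so |r'(t)| = sqrt(2) and ds = sqrt(2) dt.
    h = 2.0 / n
    return h * sum(f(t) * f(z - t) * math.sqrt(2)
                   for t in (i * h - 0.5 for i in range(n)))

z = 1.0
density = convolution_density(z)  # triangle density of the sum; value 1 at z = 1
line = line_integral(z)
print(density, line, line / density)  # the ratio is the stray factor sqrt(2)
```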

Interpolation on the surface of a sphere

I want to interpolate along a great circle between two points on the surface of a sphere expressed in polar coordinates as (φs, λs) and (φf, λf), with the interpolated point also expressed in polar coordinates as (φi, λi).

I'm doing this in code, and the best I've come up with is to:

  1. Interpret each point as Euler angles (φ, λ, 0.0)
  2. Convert the Euler angles to quaternions and use spherical linear interpolation to get an interpolated quaternion Qi
  3. Convert Qi to Euler angles
  4. Interpret the Euler angles as the interpolated point.

My questions would be: is this complete nonsense, and is there a simpler, or more direct, way of going about it?

78.245.228.100 (talk) 11:46, 3 November 2010 (UTC)[reply]

For an alternative, how about simply converting your start and end points to Cartesian and taking a linear interpolation of the resulting vectors, converting back to spherical when you want? You could do this symbolically as well; who knows, maybe there are some significant simplifications (though I don't hold out much hope). Seems way simpler than trying to run through Euler angles and quaternions. 67.158.43.41 (talk) 12:38, 3 November 2010 (UTC)[reply]
If I convert to 3D Cartesian and interpolate linearly I not only deviate from the great circle, but I end up tunnelling through the sphere. 78.245.228.100 (talk) —Preceding undated comment added 12:44, 3 November 2010 (UTC).[reply]
Sorry, I had meant to say, "interpolate and normalize", though then a linear interpolation of the Cartesian vectors is a nonlinear interpolation on the surface. However, making the interpolation linear wasn't a requirement in the OP. If it should be, the correction term for the interpolation speed shouldn't be too hard to derive geometrically with a little calculus. You would have to special-case angles larger than 180 degrees as well. Hmm... that special case makes this not nearly as nice as I thought at first. However, after normalizing, you do not deviate from a great circle, since great circles lie in planes through the sphere's center, and any linear combination of vectors in such a plane lies in it too.
An alternative would be to use a linear map to send the start and end vectors to (1, 0, 0) and (0, 1, 0), with their normal (unit cross product) getting sent to (0, 0, 1). The vectors (cos t, sin t, 0) for t in [0, π/2] would correspond to the points of interpolation on a sphere. The inverse of the linear map so defined would convert back. I like this method better, since the inverse operation has a very simple form.
Edit: you would have to normalize in this case as well, or you could send the end vector to (cos φ, sin φ, 0), where φ is the angle between the start and end vectors, which is the inverse cosine of their dot product, or 2π minus that for interpolations longer than 180 degrees. The upper limit of t is then φ. The inverse is slightly more complicated in this case.
Further edit: if my algebra is right, the final interpolation is simply
p(t) = (sin(φ − t) s + sin(t) e) / sin(φ), for t in [0, φ],
where s is the start vector, e is the end vector, and t and φ are as above. The singularities of this conversion occur at φ = 0 or π, exactly where they should. Sorry for all the edits; I'm alternating this with doing something deeply tedious.... 67.158.43.41 (talk) 13:24, 3 November 2010 (UTC)[reply]
(edit conflict) To implement your interpolate/normalize with constant speed, write r(0) = s and r(1) = e, where r(t) is the result. Then define the unnormalized u(f) = (1 − f)s + f e and observe that the angle between u(f) and s satisfies tan(angle) = f sin ψ / (1 − f + f cos ψ), where cos ψ = s·e. Requiring the angle to be tψ and solving gives f(t) = tan(tψ) / (sin ψ + (1 − cos ψ) tan(tψ)) (I think this is the correct branch for all t). Then the algorithm is just to take u(f(t)) and normalize it. --Tardis (talk) 15:00, 3 November 2010 (UTC)[reply]
I can't find it in an article, but what I would do is:
* find the quaternion that rotates one vector to the other.
* convert it to axis-angle form (alternately you can derive the axis and angle directly, but this, I think, is clearer)
* interpolate the angle from 0 to the angle calculated
* generate a quaternion from the axis and interpolated angle, and use this to rotate the first of the vectors (the one rotated from in the first step)
The vector will move linearly along the great circle, i.e. at a constant speed along the surface, as the angle is interpolated linearly. The conversions to and from axis-angle are at Rotation representation (mathematics)#Euler axis/angle ↔ quaternion. I can't find anywhere that gives the rotation between two vectors as a quaternion, but it's a straightforward result. --JohnBlackburnewordsdeeds 13:40, 3 November 2010 (UTC)[reply]
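For reference, the constant-angular-speed interpolation described in this thread is the standard spherical linear interpolation ("slerp") applied to unit vectors; a minimal sketch (variable names are my own):

```python
import math

def slerp(s, e, t):
    """Constant-speed interpolation along the great circle from unit
    vector s to unit vector e, for a fraction t in [0, 1]."""
    dot = max(-1.0, min(1.0, sum(a * b for a, b in zip(s, e))))
    phi = math.acos(dot)          # angle between s and e
    if phi < 1e-12:               # degenerate case: the vectors coincide
        return s
    a = math.sin((1 - t) * phi) / math.sin(phi)
    b = math.sin(t * phi) / math.sin(phi)
    return tuple(a * x + b * y for x, y in zip(s, e))

p = slerp((1.0, 0.0, 0.0), (0.0, 1.0, 0.0), 0.5)
print(p)  # halfway along the quarter circle: (sqrt(2)/2, sqrt(2)/2, 0)
```

Converting the spherical (φ, λ) endpoints to Cartesian unit vectors first, then back after interpolating, gives the interpolated (φi, λi).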

Begging the question and differentiating e^x

Yesterday in class we were given a fairly simple assignment: show that d/dx e^x = e^x. This was supposed to illustrate something like the importance of rigorous proofs—the actual problem wasn't important and was in fact a problem from Calc 1. Oddly, a dispute arose over the problem nonetheless. One of my classmates claimed that you can show d/dx e^x = e^x by differentiating the Taylor series e^x = Σ_{n=0}^∞ x^n/n!. I contested that this was begging the question because to know the Taylor series you have to know the derivative. She said that e^x can be defined by its Taylor series, so this does not beg the question. Who is right? Thanks. —Preceding unsigned comment added by 24.92.78.167 (talk) 23:40, 3 November 2010 (UTC)[reply]

Hmm. It might be worth considering that you only need to know the derivative at x = 0 to construct your Taylor series (in this case the expansion point is 0, so it's a Maclaurin series). An alternative way to get to the derivative of e^x at x = 0 (like, say, the basic definition of e) would remove the circularity ... --86.130.152.0 (talk) 00:09, 4 November 2010 (UTC)[reply]
No, you need to know all derivatives (first, second, third, ...) of e^x evaluated at x = 0 to construct the Maclaurin series. However, I support 24.92's classmate: you certainly can take the Maclaurin series as the definition of e^x, and I think this approach is found in several complex analysis texts. There are other reasonable options. You could define e^x as the inverse of ln x; or you could define it as the unique function that is its own derivative and takes the value 1 at x = 0. (My calculus text offers a choice between these two presentations.) Your assignment is ambiguous without a specified starting point, though presumably most people in the class knew what was expected. (But it's always good to stop and think about the assumptions you're making.) The fact that all of these approaches to defining e^x are equivalent is a theorem, and your assignment was one step in the proof of that theorem. 140.114.81.55 (talk) 03:45, 4 November 2010 (UTC)[reply]
I'd agree with the OP in the sense that e^x's Maclaurin series was almost certainly a result of applying calculus to a different definition. That is, historically, the proof provided would very likely be circular. However, definitions can come out of thin air, and if your class is using the Maclaurin series definition the proof is correct for your purposes and not logically circular.
I really like the definition of the exponential function as the (unique) solution to the differential equation y′ = y with y(0) = 1, as mentioned above. Yet another definition could take e to be the base for which lim_{h→0} (e^h − 1)/h = 1, which gives your result directly from the definition of the derivative. 67.158.43.41 (talk) 05:30, 4 November 2010 (UTC)[reply]
Since I don't see it quoted above, I'd like to make reference to this article. Pallida  Mors 08:32, 4 November 2010 (UTC)[reply]
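The classmate's series argument can at least be checked mechanically: differentiating a partial sum of the Maclaurin series term by term gives back the next-shorter partial sum, and both converge to e^x. A small sketch of my own:

```python
import math

def partial_sum(x, N):
    # S_N(x) = sum_{n=0}^{N} x^n / n!, the truncated Maclaurin series
    return sum(x**n / math.factorial(n) for n in range(N + 1))

def term_by_term_derivative(x, N):
    # d/dx S_N(x) = sum_{n=1}^{N} n x^(n-1) / n!, which equals S_{N-1}(x)
    return sum(n * x**(n - 1) / math.factorial(n) for n in range(1, N + 1))

x, N = 1.5, 30
print(term_by_term_derivative(x, N) - partial_sum(x, N - 1))  # essentially 0
print(partial_sum(x, N), math.exp(x))                         # both ~ e^1.5
```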


November 4

graph automorphism

How do I show that for a simple graph G, Aut(G) is isomorphic to Aut(Ḡ), where Ḡ is the complement of G? I want to exhibit an isomorphism between them. But I can't seem to figure out: if I pick a permutation of the vertex set which preserves adjacency in G, what will be the corresponding permutation of the vertices which preserves adjacency in Ḡ? -Shahab (talk) 07:24, 4 November 2010 (UTC)[reply]

Isn't it just the identity isomorphism? 67.158.43.41 (talk) 08:10, 4 November 2010 (UTC)[reply]
Oops!! Thanks.-Shahab (talk) 08:49, 4 November 2010 (UTC)[reply]

I have two more questions now. Won't the automorphism group of the complete bipartite graph Km,n be Sm × Sn? I reason this out by taking the complement of Km,n, which is a disjoint union of Km and Kn, and trying to find its automorphism group. The automorphism groups of Km and Kn are clearly Sm and Sn (since adjacency is always preserved, no matter the permutation), and so to count all permutations I conclude that the automorphism group of Km,n is Sm × Sn. But the wikipedia article says that there are 2·m!·n! automorphisms if m = n. Why is that so?

Secondly, I also want to compute the automorphism group of the Petersen graph, which is the same as that of L(K5). The wikipedia article says that it is S5; the reason is not clear to me.

Thanks-Shahab (talk) 09:42, 4 November 2010 (UTC)[reply]

(Edit: ignore this; see next comment.) For your second question, a proof outline would be: (1) the inner "star" gets sent to itself in any automorphism (this is the hard part); (2) determining where the inner star gets sent determines where the outside vertices get sent, completely determining the permutation; (3) there are 5! = 120 permutations of the inner star, where each generates an automorphism, and these permutations are easily seen to be isomorphic to S_5. 67.158.43.41 (talk) 09:52, 4 November 2010 (UTC)[reply]
Both parts of your argument are false. There are automorphisms which (for example) swap the inner star with the outer pentagon, and the inner star (being a five-cycle) only has ten automorphisms in any case. A better way is to consider S5 acting on the vertices of K5 in the obvious fashion, which induces an action on the line graph of K5, which induces an action on the Petersen graph. Algebraist 10:02, 4 November 2010 (UTC)[reply]
Thank you for pointing that out. I should be more careful in the future. I saw 5 vertices and 5! and shoved the ideas together, assuming the steps in the proof would necessarily follow. 67.158.43.41 (talk) 10:44, 4 November 2010 (UTC)[reply]
If m=n, then there are automorphisms of Km,m which swap the left side with the right side. Thus the automorphism group is a semidirect product of Sm x Sm by Z2. Algebraist 10:02, 4 November 2010 (UTC)[reply]
Thanks for the reply. For the first question, instead of thinking about semidirect products (the part about Z2 confuses me), can I just say: given any 2m symbols we can obtain a required automorphism by permuting the first m in m! ways and the second m in m! more ways, hence we have m!·m! permutations. Moreover, for each such permutation f we may also obtain another permutation by mapping 1,...,m to f(m+1),...,f(2m), and mapping m+1,...,2m to f(1),...,f(m). This swaps the two complete graph vertex sets, and as this is possible for each of our previous permutations the total is 2·m!·m!. The second question is still not clear to me, unfortunately. I am not understanding how and what group action is induced. I'd prefer an intuitive counting argument, if that's possible. That the complement of the Petersen graph is L(K5) is clear to me, and we hence need only to show Aut(L(K5)) = S5. -Shahab (talk) 10:52, 4 November 2010 (UTC)[reply]
Here's how you may visualize the 5! automorphisms of the Petersen graph (if you like this algebraic way of thinking). You say you know that the Petersen graph is isomorphic to the complement of the line graph of K5. Consider how the line graph is defined: the vertices of the line graph L(G) are the edges {x,y} of graph G, and two edges are connected iff they aren't disjoint. Take a permutation σ of the vertices of K5 (there are 5! of these). This acts naturally on the edges of K5, namely {x,y}σ is defined as {xσ, yσ} (here we use that we have the complete graph so we know the latter is also an edge). Now you can regard this permutation of the edges of K5 as a permutation of the vertices of the line graph L(K5). Prove that this permutation is a graph automorphism of the line graph. Also prove that no two of these permutations are equal.
(Incidentally, you can also generalize this to finding the n! automorphisms of the Kneser graph KG(n,k), where KG(n,2) is isomorphic to the complement of L(Kn).)
This does not give an easy way to prove that these are the only automorphisms of the graph, so you'll have to find some specialized argument for that. – b_jonas 20:26, 4 November 2010 (UTC)[reply]
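These counts are small enough to verify by brute force (an illustration of my own; the helper names are made up). The same check also confirms the earlier point that a graph and its complement have exactly the same automorphisms:

```python
from itertools import permutations

def automorphisms(n, edges):
    # all permutations of {0..n-1} that map the edge set onto itself
    es = {frozenset(e) for e in edges}
    return [p for p in permutations(range(n))
            if {frozenset((p[a], p[b])) for a, b in es} == es]

def complement(n, edges):
    es = {frozenset(e) for e in edges}
    return [(a, b) for a in range(n) for b in range(a + 1, n)
            if frozenset((a, b)) not in es]

k23 = [(a, b) for a in (0, 1) for b in (2, 3, 4)]     # K_{2,3} on {0..4}
k33 = [(a, b) for a in (0, 1, 2) for b in (3, 4, 5)]  # K_{3,3} on {0..5}

print(len(automorphisms(5, k23)))  # 2! * 3! = 12 (unequal parts can't swap)
print(len(automorphisms(6, k33)))  # 2 * 3! * 3! = 72 (equal parts can swap)
print(automorphisms(5, k23) == automorphisms(5, complement(5, k23)))  # True
```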

Two questions

ok I have two questions: speaking of the prime-counting function π(x), there are the approximations π(x) ≈ x/ln(x) and π(x) ≈ Li(x). Which is more accurate? Also, how can it be proven that ? Thanks. —Preceding unsigned comment added by 24.92.78.167 (talk) 21:56, 4 November 2010 (UTC)[reply]

  1. The latter
  2. assuming x>0, substitute n=m/x in the left hand side
Bo Jacoby (talk) 22:56, 4 November 2010 (UTC).[reply]
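"The latter" is easy to see numerically. A quick sketch of my own, computing π(10^6) with a sieve and the logarithmic integral with a crude trapezoid rule from 2:

```python
import math

def prime_count(x):
    # pi(x) via a simple sieve of Eratosthenes
    sieve = bytearray([1]) * (x + 1)
    sieve[0:2] = b"\x00\x00"
    for p in range(2, math.isqrt(x) + 1):
        if sieve[p]:
            sieve[p * p :: p] = bytearray(len(range(p * p, x + 1, p)))
    return sum(sieve)

def li(x, steps=200000):
    # logarithmic integral of 1/ln(t) from 2 to x, trapezoid rule
    h = (x - 2.0) / steps
    total = 0.5 * (1.0 / math.log(2.0) + 1.0 / math.log(x))
    total += sum(1.0 / math.log(2.0 + i * h) for i in range(1, steps))
    return h * total

x = 10**6
pi_x = prime_count(x)                # 78498
print(pi_x, x / math.log(x), li(x))  # li(x) is the far better approximation
```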


November 5

Predicting the college football season

Let's say that:

  • Oregon has a 59.2% chance to win every game before the BCS.
  • Boise State has a 67% chance.
  • Auburn has a 15% chance (noting they would have to play a conference championship).

In addition, either TCU or Utah may go undefeated, but not both, since they play each other. TCU has a 58% chance of beating Utah and a 99% chance of winning both of its other games, while Utah has a 42% chance of beating TCU and a 65.7% chance of winning all of its other games. That means, I'm guessing, that there is an 85% chance that either TCU or Utah will go undefeated.

Don't kill me over these percentages -- they come from a rankings website.

  • What are the chances that no team among the above goes undefeated? (I'm guessing 1.7%)
  • What are the chances that exactly one team goes undefeated (OR, Boise, Auburn or TCU/Utah)?
  • What are the chances that exactly two teams go undefeated?
  • What are the chances that exactly three teams go undefeated?
  • What are the chances that exactly four teams go undefeated? (I'm guessing 5.1%)

Thanks -- Mwalcoff (talk) 03:13, 5 November 2010 (UTC)[reply]

One key assumption that should be explicitly stated is that the above percentages, unless otherwise stated, are independent probabilities, e.g., the chance of Boise winning is unaffected by what Oregon does. This may be approximately true, but may not hold if, for example, Boise was hoping for a Boise/Oregon grudge match in the finals, and as such is demoralized by an Oregon defeat. Anyway, it makes the calculations easier, so we'll assume it. Will also assume that when you ask for "three teams undefeated", you're referring to three teams from those listed - that is, that some team not listed being undefeated doesn't count toward the three.
The probability of an event not happening is one minus the probability of it happening. The probability of two (probabilistically) independent events both occurring is simply the product of the independent probabilities. The probability of either of two mutually exclusive events occurring is simply the sum of the probabilities that either occurs. This is all we need to work out the probabilities.
So TCU has a .58*.99=.574 chance of being undefeated, while Utah has a .42*.657=.276 chance of being undefeated, which means that there's a .574+.276=.85 chance of either winning (since the two events are mutually exclusive). For four teams undefeated, it's .592*.67*.15*.85=0.051. For no teams undefeated, it's (1-.592)*(1-.67)*(1-.15)*(1-.85)=.017. Exactly one team undefeated is a little harder, but can be calculated with (prob. of only Oregon) + (prob. of only Boise) + ..., as the terms are mutually exclusive (you can't have both Boise and Auburn be the only undefeated team). One team undefeated is .592*(1-.67)*(1-.15)*(1-.85) + (1-.592)*.67*(1-.15)*(1-.85) + (1-.592)*(1-.67)*.15*(1-.85) + (1-.592)*(1-.67)*(1-.15)*.85 = 0.161, and one team defeated (three teams undefeated) is (1-.592)*.67*.15*.85 + .592*(1-.67)*.15*.85 + .592*.67*(1-.15)*.85 + .592*.67*.15*(1-.85) = 0.355. Exactly two teams undefeated is a little harder, but becomes easier if we realize that we've exhausted all other cases - if it isn't zero, one, three or four teams undefeated, it has to be two, so 1-(.017+.161+.355+.051)=0.416, as each sub-case is mutually exclusive. -- 174.21.240.178 (talk) 16:47, 5 November 2010 (UTC)[reply]
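The same answers fall out of enumerating all 2^4 win/lose patterns (a sketch of my own, using the thread's numbers; the fourth "team" is the mutually exclusive TCU-or-Utah slot, whose chances add):

```python
from itertools import product

# Independent chances of each slot going undefeated: Oregon, Boise State,
# Auburn, and the TCU-or-Utah slot (.58*.99 + .42*.657, about .85).
probs = [0.592, 0.67, 0.15, 0.58 * 0.99 + 0.42 * 0.657]

dist = [0.0] * (len(probs) + 1)
for outcome in product([0, 1], repeat=len(probs)):  # all 16 patterns
    p = 1.0
    for won, q in zip(outcome, probs):
        p *= q if won else 1.0 - q
    dist[sum(outcome)] += p

for k, p in enumerate(dist):
    print(f"P(exactly {k} undefeated) = {p:.3f}")
```

The results match the hand calculations above up to rounding.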

Calculus

How would one set up a calc equation to solve this. If the avg life expectancy of a person is some constant A, and their current age C, and the rate at which the life expectancy is extended be x, how would one set up a calculus equation to solve for the minimum value of x that would allow the person to live forever. Assuming that x is linear, I came up with something like t = (A-C) + (A-C)xdt, and then I would integrate and try to solve, but I don't think this is right. I think it would take the integral and not the derivative, but I am not even sure, as it has been quite a while since I have taken calculus. Can someone help me figure this out? I feel dumb for asking about this, as it should be a simple problem 98.20.180.19 (talk) 09:15, 5 November 2010 (UTC)[reply]

x is linear... in what? Time? I have difficulty interpreting your equation meaningfully, but maybe the answer is in there. If x is just a constant, then the minimum is x = 1: each year a person lives, the average life expectancy must increase by at least 1 year to keep up with their growing age. In general, the rate of change of average life expectancy is a function x(t); you have an initial average life expectancy of L(C) = A, so at any time the average life expectancy is given by the function L(t) = A + ∫_C^t x(τ) dτ from the fundamental theorem of calculus. For x(t) = c, this simply becomes L(t) = c(t − C) + A. Your own age is just t, so you want L(t) − t > 0 always. If c < 0, L(t) is eventually negative, so L(t) − t > 0 can't always hold. If c = 0, L(t) is a constant, which also won't work. If c >= 1, for A and C "reasonable" the inequality does hold. For 0 < c < 1, L(t) grows more slowly than t, so L(t) − t is eventually negative, hence my statement above. 67.158.43.41 (talk) 10:51, 5 November 2010 (UTC)[reply]
If the population grows the average life expectancy could decrease and still leave the possibility of a person living forever. I'm assuming life expectancy is calculated by seeing how long all the people lived who died in the current year. You might want to do something more complex, but you can't stick in an infinity for a person who's going to live forever! Dmcq (talk) 12:57, 5 November 2010 (UTC)[reply]
Actuaries already do these types of calculations, and have a standard terminology and notation for these concepts - see our articles on life table and force of mortality. Gandalf61 (talk) 13:33, 5 November 2010 (UTC)[reply]
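The constant-rate analysis above is easy to play with numerically (the numbers A = 80, C = 30 and the 2000-year horizon are my own illustrative choices):

```python
# Illustrative numbers: current age C = 30, life expectancy A = 80, and the
# expectancy grows at a constant rate c, so L(t) = c*(t - C) + A.
A, C = 80.0, 30.0

def margin(c, t):
    # how far life expectancy stays ahead of age: L(t) - t
    return c * (t - C) + A - t

results = {}
for c in (0.5, 1.0, 1.5):
    # does the margin ever close over a long horizon?
    results[c] = any(margin(c, t) <= 0 for t in range(30, 2001))
print(results)  # only c >= 1 keeps L(t) - t positive throughout
```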

Suggestions re learning LaTeX, please?

Hi all. I think I need to finally spend the time to learn LaTeX. My reasons for wanting to do so are:

  • to allow me to document my self-study in symbolic logic, set theory, foundations, especially,
  • to communicate effectively in online forums devoted to those topics,
  • to send documents I've created about these topics (aka "homework" or "examples") via e-mail to professors or tutors, as yet unknown, and (ideally) to allow them to electronically comment on, correct, and markup the same, and
  • to be able to ask (and occasionally answer, when I can help) questions about those topics here, on-wiki

I've read maybe two hours' worth of material about this so far, via the web, and our own articles, too, of course, and have naturally looked in the archives here. But I was hoping folks here could help me start out on the right foot by answering a few probably naive questions, as well:

  1. I like the idea of being able to generate pdf files easily, or better still(?), having a setup that uses pdf as a native file format for whatever I create using TeX, and I presume that means using the pdfTeX extension. If this is correct, is there any LaTeX/pdfTeX editor that's free, relatively "standard"/ubiquitous (easy cross-platform availability), and that can edit pdf files directly?
  2. About how much time will I need to reach a point of minimal/reasonable proficiency? A point at which I can focus more on content, on writing math/logic documents, than on learning LaTeX, that is?
  3. I'd like to be able to use Russell & Whitehead's Principia Mathematica notation easily, if possible, and yes, I know it's been largely superseded in modern practice. ;-)
  4. I'm using Ubuntu GNU/Debian/Linux, in case that matters.

All observations and suggestions will be most welcome. Many thanks,  – OhioStandard (talk) 16:04, 5 November 2010 (UTC)[reply]

I think it's difficult to give a general answer to this. In my case I wanted to write a mathematical paper, decided TeX and LaTeX would be the best way to do so, so downloaded the free TeXShop, bookmarked some documentation and started writing. I had a clear idea of the mathematics I wanted to write, it was just a case of finding the correct way to do it. As I did this on a paragraph by paragraph and equation by equation basis I slowly taught myself TeX. More recently I went through a similar process learning how to edit WP formulas, at the same time as learning Mediawiki syntax. Here my reference was WP itself and its help pages. In each case for me all I needed was an idea of what I wanted to do, access to documentation and examples, and a way to try out my ideas.
I notice from TeXShop there's a Comparison of TeX editors where you should be able to find a free editor. I don't know if you'll find one able to edit PDF files: PDF is designed for viewing only. Fast preview perhaps using PDF might be the best you can do.--JohnBlackburnewordsdeeds 17:16, 5 November 2010 (UTC)[reply]
I'd suggest that a better place to look for answers to your questions might be http://tex.stackexchange.com/. --Qwfp (talk) 17:39, 5 November 2010 (UTC)[reply]
You seem to want general/open-ended answers. Any plain text editor will do for a LaTeX editor, since the point is to have everything in ASCII. I don't know about specific editors for Linux, sorry, other than the usual standard text editors. I have used one called TeXworks on Windows which I've liked, and it's apparently cross-platform. PDF files are generated from LaTeX source, for instance with the pdflatex command available in some/most Linux distros. To directly edit a PDF file, you would need (I believe) to edit postscript code, which isn't at all the same thing as editing LaTeX code (postscript isn't really meant to be hand-edited, anyway). A PDF file is to LaTeX as a compiled binary is to C++ source code--the compilation process isn't really reversible.
I've never read Principia myself, but a glance at this page suggests LaTeX would have no trouble at all typesetting it.
For me, it didn't take long to be minimally proficient. As a brief example, "\prod_{i=1}^{N^2} \int_{R_i}\Psi\,dA" renders as ∏_{i=1}^{N²} ∫_{R_i} Ψ dA.
If you can pattern match what most of that is doing, you can do a surprisingly large amount in LaTeX right now if given a few examples to work from. Depending on your computer proficiency, a good afternoon working with it should get you started. After figuring out how to make your system compile your source, I'd find a math paper online in TeX format and just try to write something up working from it as an example.
Personally, I hand write most of my math. I just find it much more freeing. Then again, I've heard that writing in LaTeX can become quite natural. 67.158.43.41 (talk) 17:53, 5 November 2010 (UTC)[reply]
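For a concrete starting point, a minimal compilable document (a sketch of my own, assuming a standard LaTeX installation with the amsmath package; compile with pdflatex) that typesets the example formula above:

```latex
\documentclass{article}
\usepackage{amsmath}

\begin{document}
The example formula from above, as a displayed equation:
\[
  \prod_{i=1}^{N^2} \int_{R_i} \Psi \, dA
\]
Inline math is written like $e^{i\pi} + 1 = 0$.
\end{document}
```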
Bullshit. It takes 4 weeks to learn LaTeX. It is the biggest waste of 4 weeks I have ever spent -- or would have been, except for the results. 84.153.205.142 (talk) —Preceding undated comment added 18:33, 5 November 2010 (UTC).[reply]
"It takes 4 weeks to learn LaTeX"--it definitely did not take me this long to learn LaTeX enough to be reasonably proficient. I have a background in computer science, but still, a day was realistic for me to "get started". I'm curious, how long did it take others? (I know the comparisons are very flawed because the goal is unclear, but still.) 67.158.43.41 (talk) 19:03, 5 November 2010 (UTC)[reply]
As for Linux LaTeX-friendly editors/IDE: a good choice is Emacs+AUCTeX. I assume that vim should have a LaTeX mode too, if you prefer that.
As for Principia Mathematica, the inverted iota may be nontrivial to typeset, but otherwise I'm not aware of anything problematic.—Emil J. 18:50, 5 November 2010 (UTC)[reply]
Just because it's a good example, I googled "latex inverted iota" and got this page as the third result, which gives the commands "\usepackage{graphicx}" and "\newcommand{\riota}{\mathrm{\rotatebox[origin=c]{180}{$\iotaup$}}}" to define a handy macro "\riota" to display an upside-down iota. I've found the vast majority of my LaTeX questions are quickly answered similarly. 67.158.43.41 (talk) 18:58, 5 November 2010 (UTC)[reply]
Yes, this is a very good example, namely it is an example of the principle that 90% of macros that you can google on the web are crap. First, there is no such thing as \iotaup among standard LaTeX symbols, and the instructions didn't tell you which package to find that in, so the code will not even compile. Second, the \mathrm in the macro is completely pointless as the innards are put in a box, hence it should be omitted to reduce useless bloat. Third, the instructions didn't warn you that the graphicx package relies on PostScript specials to implement the rotation, hence it will not work in plain dvi viewers or interpreters.—Emil J. 19:20, 5 November 2010 (UTC)[reply]
Good points, though the package isn't hard to find with a further search, \iota may suffice (or may not; again, I haven't read the book), and the DVI issue may or may not appear depending on your setup. But certainly some issues take quite a bit of fiddling / searching / asking around about, and some online help sucks. Ultimately, though, I do have an upside-down iota on a page, regardless of anything else. 67.158.43.41 (talk) 21:52, 5 November 2010 (UTC)[reply]
I would strongly recommend against LaTeX. It is my opinion that every mathematical symbol or function that could be typeset in LaTeX could be more easily typeset in Unicode, using a well-designed word processor. The fundamental premises of TeX are invalid in comparison to modern word processors:
  • TeX claims to separate formatting from content. But this is evidently not the case, as you will notice that you need to learn formatting codes.
  • TeX intends to perfectly specify paper layout. In today's era of reflowable digital documents, why waste your time specifying pedantic details about pixel-spacings? Use a modern tool that reflows intelligently and properly.
  • TeX claims to be portable. But as EmilJ has correctly pointed out, one set of TeX code will not compile on a different computer unless all the same packages are installed; there is no standard set of TeX packages; many are mutually incompatible with each other. In fact, even the tool is not standard: there is CWEB TeX, there is LaTeX, MiKTeX, MacTeX, and so forth. Realistically, there is minimal compatibility between these tools.
  • Formats like HTML now support mathematical symbols. ODF and .docx are free and open formats that provide all your pedantic typesetting needs, but are supported by reasonable free and commercial platforms like OpenOffice and Microsoft Word.
  • Every useful special mathematical symbol can be represented directly as a unicode character; TeX uses non-standard representations. Even Donald Knuth (inventor of TeX!) acknowledges this lack-of-standardization is a problem!
  • Open-source and free software word processors, such as OpenOffice.org, provide powerful equation editors as add-ons and plugins, if flat formatting and symbolic text are insufficient.
  • Most major academic journals, including many physics[1] [2] and mathematical journals, will no longer accept TeX source, and will require PDF or .doc files - so who do you intend to impress with your .tex source? And AMS requires you to install a nonstandard version of LaTeX! (Amazingly, AMS TeX is not even compatible with itself - there is a separate old-version archive for reproducing old documents!)
  • When you finally give up on TeX and decide to painstakingly export to a reasonable document format, converting to HTML will produce buggy, ugly, and non-standards-compliant documents.
TeX is really an antique - like the linotype, it was a huge improvement over previous typesetting technology, but is now a dinosaur whose clunky interface and primitive feature-set pale in comparison to the latest and greatest in mathematical typesetting and word processing. Invest your time learning a modern tool. Nimur (talk) 20:26, 5 November 2010 (UTC)[reply]
Oh, sorry, I just saw your link. It is a lot of fun even though I can never get it to find the symbol I want. Eric. 82.139.80.73 (talk) 13:49, 6 November 2010 (UTC)[reply]
I think TeX Live is the standard / most popular TeX distribution for Linux users, though I still have an old copy of teTeX on my system. In my setup, I use vim to edit .tex files and the latex / pdftex commands on the command line to compile, with xdvi and xpdf to view the output. (Generally, using dvi instead of pdf gives me a faster turnaround time while composing, and I find xdvi a more solid piece of software than xpdf. YMMV.) I've heard that Emacs+AUCTeX is good, as Emil suggests. I assume any standard programming environment will come with at least syntax highlighting for TeX commands.
Two handy resources I have found include: For math and equation typesetting, amsldoc.pdf is an introductory manual for the amsmath latex packages. symbols-a4.pdf, a very long reference list for latex symbols -- look up the symbol you want to typeset here. The detexifier is cool but I haven't found it particularly useful. Eric. 82.139.80.73 (talk) 13:45, 6 November 2010 (UTC)[reply]
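As a concrete sketch of the kind of thing those amsmath guides cover (the theorem statement here is just invented filler), a minimal document might look like:

```latex
\documentclass{article}
\usepackage{amsmath,amssymb,amsthm}

\newtheorem{theorem}{Theorem}

\begin{document}

\begin{theorem}
For every $n \in \mathbb{N}$,
\begin{equation*}
  \sum_{k=1}^{n} k = \frac{n(n+1)}{2}.
\end{equation*}
\end{theorem}

\begin{proof}
Induction on $n$.
\end{proof}

\end{document}
```

Compile it with latex or pdflatex as described above.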

I want to thank everyone who replied so far. I'm not sure whose advice will "win" re what I eventually decide, but I'm very grateful for the discussion. Thank you!  – OhioStandard (talk) 09:24, 6 November 2010 (UTC)[reply]

Despite some arguments by nay-sayers above (Nimur: geology is only an infinitesimal fraction of "all of math and physics" and not even representative of math/physics research), doing without LaTeX is not practical in physics and math. When preparing large documents full of equations, with large numbers of cross-references etc., you want to actually edit the source code of the document, not the document itself. That source code has to be easily readable and editable.

Journals prefer to have your LaTeX source code for typesetting, they don't want to have some compiled PDF file because they typically need to change your document as it appears in print. Such changes can affect equation numbers that are cited in the text, the ordering of the references, etc. etc. Count Iblis (talk) 16:05, 6 November 2010 (UTC)[reply]

I found The Not So Short Introduction to LaTeX2e useful for learning LaTeX, and WP:Formula a good reference for writing formulas. -- Meni Rosenfeld (talk) 10:50, 7 November 2010 (UTC)[reply]

Sorry, but there are many misunderstandings in Nimur's comment. First, he confuses TeX and LaTeX. TeX specifies low-level formatting. LaTeX does not (although it allows you to use the low-level stuff if you want to tweak things). Secondly, while there are different implementations of the TeX translator, they are indeed compatible, and there is a rigorous test specification that ensures that. No, you do not need "all the same packages" to compile documents on different computers - you need the packages you actually used in the document on both machines. Unless you do fancy stuff, the packages you need are indeed at least de facto standard. This is no different from macros, styles, and fonts in WYSIWYG tools. Yes, compatibility of old and new TeX installations is not always 100%. But it is excellent in practice for a tool chain that has been around for 20-30 years. Try opening a Word for Windows document in Word 2010. At least in TeX you get the human-readable document source and can fix it. Because TeX source is plain text, it's easy to use a plethora of existing tools on the documents, like grep, subversion, sed, and diff. It's also easy to programmatically generate TeX/LaTeX. For me, the ability to type everything out in plain text, with no menus and drop-downs and weird mousing, is also very valuable. Generating a document with even trivial stuff like sub- and superscript is much faster in LaTeX than in Pages, OpenOffice, or Keynote. There is a reason for the existence of LaTeXiT and equivalent tools on other platforms. And finally, the typesetting is still much better than with any other widely used tool. --Stephan Schulz (talk) 11:20, 7 November 2010 (UTC)[reply]

Thanks, everyone! I'm very often surprised at the extraordinarily high quality of the responses I've received when I've asked questions before on the various boards here, and this occasion has been another instance of the same. I'm sure your opinions, links, and advice will be very useful to me as I proceed, and will save me from many missteps, as well. My best appreciation to all of you who replied. Cheers,  – OhioStandard (talk) 05:23, 8 November 2010 (UTC)[reply]
The main mistake I made was not learning LaTeX properly. It took me 4 years to do theorems properly. /o\ Nevertheless, 4 years has made me wise to LaTeX and I can now do lots of intermediate stuff, including PGF/TikZ (intermediate, at least). I would browse the full amsmath and the Short Guide to Math - get to know how to do all the constructs properly, from day one! And the Not So Short Guide to LaTeX (?) is the best overall guide to LaTeX. x42bn6 Talk Mess 23:44, 8 November 2010 (UTC)[reply]

Can we draw every mathematical concept?

In the case of simple concepts, it is evident that we can. Two segments on the same line are a sum. Two segments on a right angle are the equivalent of a multiplication. But, how about more complex concepts? Quest09 (talk) 17:40, 5 November 2010 (UTC)[reply]

If you want to really blow your mind, imagine that you write, in your programming language of choice (say C++), a programming environment, call it Crayon, to compile x86 code from a TOTALLY VISUAL programming paradigm. Now you draw in Crayon, and get x86 code. But Crayon the application is really still just a C++ program.


So imagine that you proceed to use Crayon to draw the EXACT SAME THING you had written in C++ to produce the Crayon-the-C++-application you are now running. Compile, and VOILA: you are now running Crayon, written in Crayon. Now you use Crayon -- written in Crayon -- to sketch out a rudimentary version of an operating system. It will take you years, but when you're done sketching it and compiling it, you can burn it to a CD-ROM.


Then you take it to a fresh computer, you put that CD-ROM in, you boot an operating system not written in any programming language but sketched in Crayon, and you start up a copy of Crayon. YOUR WHOLE COMPUTING ENVIRONMENT WAS MADE WITHOUT A SINGLE LINE OF CODE! Without a single programming keyword. Without a single +, =, *, opening or closing brace, or any other symbol. It was all purely sketched out visually.


Well, what you have now, is what I imagine the Greeks would be running these days if it weren't for the Peloponnesian War. 84.153.205.142 (talk) 17:56, 5 November 2010 (UTC)[reply]
You might be interested in befunge. 67.158.43.41 (talk) 18:04, 5 November 2010 (UTC)[reply]
Pshaw. In my mind I'm imagining beautifully rendered spheres and pyramids of various shades and colours, a totally immersive 3D abstract environment! Whose primitives just happen to translate to x86 code... 84.153.205.142 (talk) 18:06, 5 November 2010 (UTC)[reply]
Maybe some of the things built in minecraft are a little closer to what you had in mind (though not the same), e.g. this 16-bit adder. 67.158.43.41 (talk) 18:20, 5 November 2010 (UTC)[reply]
I'd say no. Draw me the axiom of choice or 15 dimensional Euclidean space, which some would argue can't fit in our universe. Of course, your question isn't specific enough to have a real definitive answer. 67.158.43.41 (talk) 18:04, 5 November 2010 (UTC)[reply]
I'd also say no. Mathematics, unlike most sciences, is largely concerned with abstract concepts. The examples given above are only some of them, and not the most abstract or difficult. For example, it's possible to consider not just finite but also infinite dimensions, or vector spaces over fields other than the real numbers, or non-Euclidean spaces instead of Euclidean ones. All these, although they have real applications, move further away from how we describe our universe and so are more difficult to represent within it. There are many more such examples, many even more abstract, which would similarly defy any attempt to draw them. --JohnBlackburnewordsdeeds 18:12, 5 November 2010 (UTC)[reply]
This is ultimately a problem of encoding and representation. I will not attempt to 'draw the axiom of choice', but one can easily draw a picture of things that don't exist as concrete individual physical objects, such as the real projective plane, the hyperbolic plane, 4D Euclidean space, etc. Of course, understanding the drawing is a matter of convention, but then so is understanding a 'drawing of the concept of addition'. SemanticMantis (talk) 18:18, 5 November 2010 (UTC)[reply]
Isn't any written language just a more sophisticated, and more abstract "drawing" of your idea? Nimur (talk) 20:05, 5 November 2010 (UTC)[reply]
Yes Nimur, I was also thinking along similar terms. Still, there is something that separates a diagram of the hyperbolic plane from a formal definition, no? SemanticMantis (talk) 21:14, 5 November 2010 (UTC)[reply]
That's an interesting point. Both language and drawings just describe some idealized thing, so in that sense they're basically the same. 67.158.43.41 (talk) 22:05, 5 November 2010 (UTC)[reply]
You might also be interested in our article on Mathematical visualization. WikiDao(talk) 20:24, 5 November 2010 (UTC)[reply]

Different universe, different maths?

I understand that, if other universes exist in a Multiverse, they could have different physical laws. But must maths be the same in all of them? 92.29.112.206 (talk) 20:04, 5 November 2010 (UTC)[reply]

The answer depends on your philosophy. See Philosophy_of_math#Contemporary_schools_of_thought. In short, the Platonist view would say yes, all universes have the same math. In contrast, the Embodied mind theory (WP:OR not espoused by any professional mathematician I've known) would say mathematical constructs are constructs of the human mind, and cannot be examined in a universe without humans. The greater issues here are the ontology of mathematical objects, and the epistemology of mathematical claims. SemanticMantis (talk) 20:28, 5 November 2010 (UTC)[reply]
Could other systems of maths be imagined, or is there only one system of maths that is internally consistent? 92.29.112.206 (talk) 20:31, 5 November 2010 (UTC)[reply]
It depends in which sense you are asking. You may enjoy reading Hilbert's_program, which describes a famous (failed) attempt to make all of mathematics provably consistent. It is a common interpretation that Gödel's_incompleteness_theorems prove that such a task is impossible to accomplish. More to (what I think is) the spirit of your question, many alternative systems have proven useful. For example, 1+1=2 in conventional everyday (integer) terms, but 1+1=0 in the Finite_field of two elements. These statements are not contradictory, and both are useful in different contexts. Another good example of an 'alternative system' of math is Non-Euclidean_geometry. You may get better answers if you give us some indication of what types/levels of math you know/are comfortable using. SemanticMantis (talk) 20:55, 5 November 2010 (UTC)[reply]
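The 1+1=0 example is easy to check mechanically. A small Python sketch (the helper name is mine) contrasting integer addition with addition in the two-element field:

```python
def add_gf2(a, b):
    """Addition in GF(2), the finite field of two elements: add modulo 2."""
    return (a + b) % 2

print(1 + 1)          # ordinary integer arithmetic: 2
print(add_gf2(1, 1))  # arithmetic in GF(2): 0
```

Both results are true at once because they are statements about different systems.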
My only maths qualification is GCE "O" level, if that means anything to you. It was the exam taken by the brighter British sixteen-year-olds. 92.15.2.255 (talk) 20:36, 7 November 2010 (UTC)[reply]
I researched this in depth, including e-mailing the likes of Noam Chomsky and a leading philosopher (now at Princeton) called Peter Singer. The answer (even from just the linguist, Noam Chomsky) was resounding: other universes MUST have the same laws of logic and math. It is simply impossible for a mathematician-creature to explore primes and find that the billionth prime in his Universe has a different value from the billionth prime in ours; or to explore pi and find that it has a slightly different value. To me, this is a really stupid fact, because it means we know a lot about potential Universes about which we know nothing (to my mind, a contradiction). Namely, we know what their mathematicians would find to be true about a given system (such as our integers) should they choose to explore that system. But, even though I disagree with all the leading philosophers' ideas, and think it's really stupid, I think I owe it to you to tell you this. There is unanimous consent that math must work exactly the same (if they explore the same system). 84.153.222.232 (talk) 21:51, 5 November 2010 (UTC)[reply]
It depends on whether you believe that God made the integers; all else is the work of man. I find it quite hard to imagine a universe where counting is not possible, but quite easy to imagine one where the concept of real number was not developed.--Salix (talk): 22:13, 5 November 2010 (UTC)[reply]
84 above, now my IP has changed. The question is not about what is developed, but what would be found to be true once it is developed. Can you imagine that when pi is developed, they calculate it to some precision, but what they find is a slightly different value from ours (diverging after our millionth digit)? You cannot imagine it, any more than you can imagine a square having five corners yet remaining a square: you can convince yourself you are imagining it, but you are simply not thinking with rigor. When you imagine a universe whose pi diverges from ours after a million digits (regardless of when, how, and whether this is even calculated by anyone in that universe!) you are simply not imagining with rigor. Think more rigorously and you will find that the five-cornered object you imagined isn't really a square, or that it is a square but doesn't have five corners. Think more rigorously and you will find that the value of pi is exactly the same in any possible universe (again, regardless of who looks at that value). I think this is a really stupid fact, and don't agree with it, but it is a fact nonetheless. 93.186.31.236 (talk) 23:12, 5 November 2010 (UTC)[reply]
The idea that you can answer this question by trying to "rigorously imagine" other math seems to me to miss the point. The question is whether there is the possibility of different consistent math that we aren't capable of imagining. Rckrone (talk) 06:11, 6 November 2010 (UTC)[reply]
(same IP again). And it seems to me, with all respect, that you miss the point. Math is a part of TRUTH. It is a part of REALITY. When you say "whether there is the possibility of different consistent math that we aren't capable of imagining" it has NOTHING TO DO with the OP's question. His question is: physical laws can be different, can math laws be different the same way? The answer is "no". Let us say that an alternative universe DOES use math that is consistent and is something we aren't capable of imagining. No one can imagine it in our universe for 1000 years. But then a single person is born with an IQ of a million. That one person imagines the same consistent system and explores it. What he will find is EXACTLY THE SAME RESULTS as what the other universe has. That is because math works EXACTLY THE SAME in all possible universes. Whether it is ever discovered/invented is totally irrelevant, since it is a part of the truth or reality in a universe. Let me give you an example. We have no way of proving at the moment whether there are infinitely many twin primes. Nevertheless, either there are or there are not. It is impossible that in our Universe, there are infinite twin primes, whereas in a different Universe there is a largest one. What is possible, of course, is that we decide the answer is "undecidable" under our set of normal axioms, whereas another Universe starts, for intuitive reasons, with entirely different mathematics, and they have no problem proving that in their own axiomatic system there is a largest prime. But, if we were somehow, in a thousand years, with a single genius who has an IQ of a million, to explore that same axiomatic system, we would find the same results as them. 
(On the flip side, if a genius in their universe imagines the same set of axioms we are working, that genius would find the same truth: it is impossible that for us, there are infinite twin primes under ZFC, whereas when a genius in another Universe imagines the exact same ZFC system for an afternoon, he proves rigorously using it that there is a largest twin prime.) Math, and logic, simply work exactly the same in any possible Universe: it is not like the laws of physics at all in that respect. Please don't confuse yourself further by failing to differentiate between exploring a DIFFERENT system, one which we can't comprehend, and exploring the SAME system and finding different results. P.S.: Again, I don't agree with anything I've just said, I think it's really stupid, but it just so happens to be the universal consensus among scientists, mathematicians, and philosophers, that this is the case. I am just reporting the facts, even though I disagree with them. 84.153.222.232 (talk) 10:36, 6 November 2010 (UTC)[reply]
I'm not sure that anything you posted contradicts what I said. I never implied for instance that in another universe the same axioms would lead to different results. In fact the opposite, the idea that there is math we are incapable of imagining is to say that there may be axiomatic systems that are so unnatural to us that we would never or can't consider them (or would never have any reason to consider them). That said, the axioms that come naturally to us are closely bound to the physics of the universe we live in. For example in your original post when you're talking about the value of pi, how exactly are you defining the concept of pi? If you're defining it as some specified value, then obviously it has no other value. If you're defining it as the ratio of the circumference and diameter of a circle (which is how the concept arose for us humans) then that depends on the geometry of our particular universe. That ratio could certainly be different than the value we've labeled pi. I don't think it's possible to fully divorce physics from the resulting math. Rckrone (talk) 18:14, 6 November 2010 (UTC)[reply]
sorry, you just have to do more research. pi is simply a value that arises in different situations. In all situations where this value arises, regardless of the reasons such situations are explored, it will have the exact same value in this and any other possible Universe. It doesn't matter how, or even whether, the concept arises. Maybe there is a constant, like e and pi, that is super commonly used in another possible Universe, since it arises very naturally from exploration of the physical laws that Universe follows. If in the next 10,000,000 years, anyone in this Universe is crazy enough to chance upon that constant for no good reason whatsoever, but purely theoretical crazywork, and they are a good mathematician, they will calculate the same value for it as it has in that other Universe. It doesn't matter that it is totally inapplicable here and obviously descriptive there. If there is a formula for it, that formula yields the same result. Basically, you are using semantics to imply that what we call "pi" could be "e" in another Universe, where the ratio of a slkwjegircle's circumference to diameter is e. But I say slkwjegircle because it is not a circle, a mathematically rigorous construct. If anyone ever explores an actual circle, instead of a slkwjegircle as found in that Universe, they will find the same value of pi that we have. To any precision. 84.153.207.135 (talk) 21:56, 6 November 2010 (UTC)[reply]
Yes, obviously it's semantic. That's the whole point. I don't disagree with the things you said in that post, nor did I ever. What I am saying is that the physics of the universe has a strong influence on what we choose to study, or perhaps even on what our brains are physically able to comprehend. Suppose the ratio of a slkwjegircle's circumference to its diameter is not e, but some value x that we don't care about. Then that value x will probably become an object of study for mathematicians in that other universe, and maybe it has some nice properties, but we don't know this or even care, because what is a slkwjegircle anyway? Sure, if we studied slkwjegircles we would find the same results, but we don't. Maybe we can't. Rckrone (talk) 19:51, 7 November 2010 (UTC)[reply]
I think we'll just have to agree to agree then. 84.153.212.109 (talk) 11:53, 8 November 2010 (UTC)[reply]
There is the anthropic principle, which tries to answer the question "why is our universe like it is?" with the answer "because if it wasn't, we could not exist to describe it" - since if the laws were just a little different, carbon-based life would never have evolved.--JohnBlackburnewordsdeeds 22:26, 5 November 2010 (UTC)[reply]

The stability of pi is mentioned above, but wouldn't the value of pi be different for a circle on the surface of a sphere? Our universe may have some analogous 'curve' that we are not aware of. 92.15.2.255 (talk) 20:36, 7 November 2010 (UTC)[reply]

π is the ratio between the circumference and diameter of a circle in Euclidean geometry, not in whatever universe we happen to live in. It remains the same even if we happen to live in a non-Euclidean universe (indeed, we live in a non-Euclidean universe, and we are aware of it!). Also, π can be defined purely analytically, e.g. by the series π/4 = 1 − 1/3 + 1/5 − ⋯, which has nothing to do with geometry. -- Meni Rosenfeld (talk) 06:13, 8 November 2010 (UTC)[reply]
Yes, but I think the interesting part of this question is, could the actual logic we use to work such things out, turn out to be wrong in another universe? And if not, how do we know? A related question is, Is logic empirical?
Typically most math/science types will answer, no, that cannot happen. I am not sure how they can be quite so sure. My pragmatic, provisional answer is: We have no way of talking intelligently about such a possibility, if it is a possibility, because to do so we would have to use the logic that works here.
The protagonist of Heinlein's They faces a similar dilemma as he warns himself not to use the sorts of reasoning they have taught him. But what sort, then? --Trovatore (talk) 06:36, 8 November 2010 (UTC)[reply]

Things like Wave–particle duality and this quantum hocus-pocus suggest that there are counter-intuitive things in the universe, so different maths may be a possibility, particularly as the theorised ten dimensions are further explored. 92.15.3.137 (talk) 13:46, 8 November 2010 (UTC)[reply]

Am I misunderstanding your comment, or are you greatly underestimating the capability of mundane mathematics (which can easily deal with 10- or million- dimensional spaces)? -- Meni Rosenfeld (talk) 20:11, 8 November 2010 (UTC)[reply]
There's a difference between counter-intuitive physical results and "different maths". Fundamentally, even wacky physical theories like quantum and string theory are described using very conventional, if complicated and unintuitive, math. I don't see how their existence suggests the existence of another universe which has "different" math. In fact, that they are described using our regular math suggests the opposite, to me. 67.158.43.41 (talk) 04:28, 11 November 2010 (UTC)[reply]

Yes, you are misunderstanding my comment. 92.24.186.80 (talk) 20:30, 8 November 2010 (UTC)[reply]

Could there be a universe where 1+1=3? 92.15.3.20 (talk) 19:30, 11 November 2010 (UTC)[reply]

If '1', '+', '=', and '3' are defined the same way as in our universe, then no. If they are defined differently, then 1+1=3 by those definitions even in our universe. Math is the study of what follows from a set of definitions. Any self-consistent system imaginable is part of our math. Any inconsistent system is also part of math, but inconsistent systems are less interesting, so we already know everything there is to know about them. (If you want to follow up on this, note that paraconsistent logics look inconsistent, but there is a difference.) 74.14.109.89 (talk) 19:52, 11 November 2010 (UTC)[reply]

why don't I like databases?

Databases are the only aspect of computer science that I hate the very concept of. I refuse even to be a simple user. I will tell you my impressions and feelings, and hope you will be able to tell me the real, mathematical reasons behind them, which I can't grasp. Basically, databases feel about as slow as the difference between RAM and a round-trip ping to China: my feeling is that every time I do a query, I am waiting for a bubble sort on a huge data set to finish. Then I don't like SQL, to the point that I would rather build up a hugely complex datatype (hashes of arrays of arrays of hashes, etc.) and fill it with constants in my source code than use SQL. Even though it takes me longer, and each lookup takes more code to write (for example, manually iterating over my data structure to see what meets conditions). Nevertheless, I am very satisfied with the results. Why? Same with my personal information. I use Excel instead of a DB every day. Does anyone have any ideas about what, mathematically, could really be behind my aversion? Thank you. 93.186.31.239 (talk) 23:00, 5 November 2010 (UTC)[reply]

It's worth reading up on the Relational model, which puts the relational database on a pretty sound mathematical footing not far removed from first-order logic. It does require a data-oriented approach rather than the more procedural way of thinking which might be more natural for humans. Object-relational mapping shows that data structures are essentially the same as the set of relations in a database, but you might find that the Object-relational impedance mismatch explains some of your problems. Speed issues may be due to the fact that many databases strive to be ACID compliant - good for banks with billions of customer records, less good for the everyday hacker. Client-server ACID transactions are hard to do right, but quite an interesting intellectual challenge.--Salix (talk): 23:54, 5 November 2010 (UTC)[reply]
If a database goes slow, then the queries that are taking time need to be analysed to see what the problem is, sticking in indexes and using unique keys where necessary. The big question is, how much time will one waste on optimising the performance compared to just waiting for the result? One can hardly fault Google for speed, for instance; they have optimized for speed to the nth degree, but it is worth it for them. SQL databases put a lot of effort into optimising queries, and for most users, who don't want to get into the innards or who don't have a feel for how it works, this is best. It is exactly the same reason we now use high-level languages rather than assembler: sometimes it can give awful code, but overall the gain is huge. Dmcq (talk) 09:50, 6 November 2010 (UTC)[reply]
I share your aversion to traditional databases. Computer files suffer from backwards compatibility all the way back to punched cards, which have now become rows in databases. In order to save paper, the data were organized with several fields on each card. This is the original sin of database design, ending up with the ugly data hierarchy. In order to address a field in a database you must provide the database name, the name of the table within the database, the name of the column in the table, and the value of some key field to identify the row. This fourfold increase of complexity makes life a misery for database users. Bo Jacoby (talk) 12:41, 6 November 2010 (UTC).[reply]
I don't share your aversion to databases, but I do share your aversion to SQL, which reminds me of COBOL in its verbosity. Its inadequacies have led to many incompatible versions. I think the SQL#Criticisms of SQL are entirely justified. You may like to read The Third Manifesto. --Qwfp (talk) 13:41, 6 November 2010 (UTC)[reply]
Answering the query about what causes the poster's problem, where they waste time doing every minute thing themselves and probably putting in errors, compared to just using a well-debugged package: a lot of people are like this. They think they are safer if they drive a car themselves instead of being driven by a taxi or bus driver. They want to pilot the plane themselves; see [3] for how to combine your love of doing it in Excel with a lack of ability to trust somebody else's work. Dmcq (talk) 13:49, 6 November 2010 (UTC)[reply]
From your brief description, I'd say much of your dislike is based on misunderstanding. Running a database query is certainly slower than, say, a RAM lookup of a known array index; the latter requires a few processor cycles while the former may require file I/O. Complex operations are what databases are designed to be quick at, though. They've been optimized (for decades, in the relational case) to be much more clever than the naive solution you'd most likely come up with implementing things yourself, for instance with the use of indexes and B-trees. So, the "mathematical reason" you dislike databases is, perhaps, that you chose poor use cases to extrapolate from. Also, SQL/relational databases are by no means the only show in town database-wise; an interesting one conceptually which I had to work with recently is RDF, which is queried using SPARQL.
Thanks for disambiguating my RDF link. 67.158.43.41 (talk) 21:45, 7 November 2010 (UTC)[reply]
That said, I have no great love for databases myself. Relational databases often become evil nests of bad design choices over years of use. Sometimes the best tool for the job is simply serialization of your data structures. As your structures get more complex, though, you'll almost certainly re-implement database features, possibly poorly, incorrectly, or slowly. Databases are often used for "big" data sets, say in the hundred megabyte or higher range (very approximate). Perhaps you simply haven't hit a good point to use them. 67.158.43.41 (talk) 15:55, 7 November 2010 (UTC)[reply]
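As a concrete illustration of the indexing point above, here is a small Python/SQLite sketch (the table and column names are invented for illustration). The EXPLAIN QUERY PLAN line lets you see whether the planner uses the index for a range query rather than scanning the whole table:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE points (id INTEGER PRIMARY KEY, x REAL, y REAL)")
cur.executemany("INSERT INTO points (x, y) VALUES (?, ?)",
                [(i * 0.5, i * i) for i in range(1000)])
cur.execute("CREATE INDEX idx_points_x ON points (x)")
conn.commit()

# Ask the planner how it would execute a range query on the indexed column.
plan = cur.execute(
    "EXPLAIN QUERY PLAN SELECT * FROM points WHERE x > 400").fetchall()
print(plan)

count = cur.execute("SELECT COUNT(*) FROM points WHERE x > 400").fetchone()
print(count)  # rows with x > 400, i.e. i = 801..999
```

The same lookup in a hand-rolled data structure would be a linear scan unless you re-implement the index yourself.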

November 6

Showing that lim (1+1/n)^n = e as n –> infinity



Find a series for ln(1 + x).

Hence show that (1 + 1/n)^n → e as n approaches infinity.

Since ln((1 + 1/n)^n) = n·ln(1 + 1/n) is equal to the series I found, if I can show that this converges to 1 as n approaches infinity, I can conclude that ln((1 + 1/n)^n) → 1 and hence (1 + 1/n)^n → e, but how do I do this? --220.253.253.75 (talk) 00:58, 6 November 2010 (UTC)[reply]

Your series is wrong, firstly. We begin with
ln(1 + x) = x − x²/2 + x³/3 − ⋯
Now substitute x = 1/n to get:
ln(1 + 1/n) = 1/n − 1/(2n²) + 1/(3n³) − ⋯
Multiply through by n:
n·ln(1 + 1/n) = 1 − 1/(2n) + 1/(3n²) − ⋯  (*)
Clearly as n goes to infinity, (*) goes to 1, so that the argument of the logarithm goes to e. —Anonymous DissidentTalk 11:31, 6 November 2010 (UTC)[reply]
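A quick numerical check of the conclusion in Python: n·ln(1 + 1/n) approaches 1, and (1 + 1/n)^n approaches e.

```python
import math

# Watch both quantities converge as n grows.
for n in (10, 1000, 10**6):
    print(n, n * math.log(1 + 1/n), (1 + 1/n)**n)
print(math.e)  # the limiting value of (1 + 1/n)**n
```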

Correction to my previous Refdesk question

Hello,

I posted this [4] a while back in August, and I thought I'd solved the problem then. But I've just had another think about it and I've realised that there's no guarantee K_m is going to be a subgroup of K[alpha]! Even if you take K_m=K_m[alpha] as K[alpha]/(X^m-c), it's not necessarily the same using the tower law on [K[alpha]:K_m[alpha]][K_m[alpha]:K] and then saying both m, n divide [K[alpha]:K], because there's no guarantee as far as I can see that [K_m[alpha]:K]=[K_m:K]: surely the fact we're introducing a new dependence relation on our powers of alpha is going to change things.

So could anyone tell me where I went wrong? At the time it seemed right but I've unconvinced myself now! Thank you, 62.3.246.201 (talk) 01:14, 6 November 2010 (UTC)[reply]

I just looked at your previous question, and hopefully I'm following along correctly:
We assume that is irreducible, so is a field. We wish to show that , where is any root of . What do we know about ? Eric. 82.139.80.73 (talk) 13:28, 6 November 2010 (UTC)[reply]
That it's a root of ? In which case we know is a constant multiple of the min poly for , so which then is contained in ? Hope that's right! Also I think I meant to write K() rather than square brackets previously, sorry! 131.111.185.68 (talk) 14:27, 6 November 2010 (UTC)[reply]
I think I get it now anyway, thankyou! 62.3.246.201 (talk) 05:42, 7 November 2010 (UTC)[reply]

Series Summation

Hi. I have to sum the series using the substitution . I've given it a go but I only get as far as and now don't see how to proceed. Originally, I was hoping to use the sum of a geometric series, but clearly the n in the denominator stops this from being feasible. Can someone suggest what to do next? Thanks. asyndeton talk 11:36, 6 November 2010 (UTC)[reply]

First, you can factor the Im out of the summation to get (because taking the imaginary part is linear). Second, a nice trick for summing things of the form is to take the derivative or integral (depending on whether k is negative or positive) with respect to z and work from there. Eric. 82.139.80.73 (talk) 13:17, 6 November 2010 (UTC)[reply]
A devilishly cunning trick. Cheers Eric. asyndeton talk 13:56, 6 November 2010 (UTC)[reply]
Where, by linear, you only mean additive or R-linear, not C-linear. – b_jonas 21:13, 7 November 2010 (UTC)[reply]
Right, I was thinking R-linear. "Additive" probably would have been clearer. I just didn't want to leave it at "factor the Im out" so that someone reading along wouldn't think "Im" was a number. Eric. 82.139.80.73 (talk) 18:09, 8 November 2010 (UTC)[reply]
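Since the OP's exact series isn't reproduced above, here is the integrate-the-geometric-series trick on the textbook case, checked numerically (a sketch, not the OP's sum):

```python
import cmath

# For |z| < 1, sum_{n>=1} z^n / n = -log(1 - z), obtained by integrating
# the geometric series 1/(1 - z) = sum_{n>=0} z^n term by term.
z = 0.5 * cmath.exp(1j * 0.7)  # an arbitrary point inside the unit disc

partial = sum(z ** n / n for n in range(1, 200))
closed = -cmath.log(1 - z)
assert abs(partial - closed) < 1e-12

# Taking Im is additive, so Im(sum) = sum(Im) -- the "factor Im out" step.
assert abs(partial.imag - closed.imag) < 1e-12
```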

Divisor

Hello, everybody! Suppose that p is a prime number of the form 8k + 1. Can we always find a natural number n such that n^4 + 1 is divisible by p? How can we prove this? Thank you! --RaitisMath (talk) 18:33, 6 November 2010 (UTC)[reply]

The multiplicative group mod p is cyclic of order divisible by 8; it therefore has an element n of order 8. It follows that n^4 + 1 is 0 mod p.--RDBury (talk) 04:26, 7 November 2010 (UTC)[reply]
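The claim above is easy to check by brute force for small primes (an illustrative script, not part of the proof):

```python
# Brute-force check for small primes p = 8k + 1: if n has multiplicative
# order 8 mod p, then n^8 = 1 and n^4 = -1 (mod p), i.e. p divides n^4 + 1.
def is_prime(m):
    if m < 2:
        return False
    d = 2
    while d * d <= m:
        if m % d == 0:
            return False
        d += 1
    return True

for p in range(2, 2000):
    if is_prime(p) and p % 8 == 1:
        # some n < p must satisfy n^4 = -1 (mod p)
        assert any(pow(n, 4, p) == p - 1 for n in range(2, p)), p
```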

November 7

Check my math please

Could someone check my math here? A random sample of wastewater shows a BOD of 959 mg/L. Total wastewater is 5.8 million gal/day. 959 x 3.875 (liters to gal) = 3.72gm/gal x 5.8 mgd = 21,576 kg/day. Right?

The same sample shows lead at 2.49 mcg/L so 2.49 x 3.875 x 5.8 mil = 55.96 grams.

Have I got all the zeros and decimals in the right places here? Thanks! —Preceding unsigned comment added by 148.66.156.170 (talk) 23:31, 7 November 2010 (UTC)[reply]

Google Calculator comes close to agreeing with you with 959 mg/L * 5.8 million gal/day in kg/day = 21 055.2174 kg / day. The second calculation gives 54.668917 g / day. 67.158.43.41 (talk) 23:47, 7 November 2010 (UTC)[reply]
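The arithmetic can be replayed in a few lines; 3.785 L per US gallon reproduces the 21 055 figure above, so the 3.875 in the question looks like a digit transposition:

```python
# Re-doing the two conversions with 3.785412 L per US gallon.
L_PER_GAL = 3.785412
flow_L_per_day = 5.8e6 * L_PER_GAL  # 5.8 million gal/day in litres/day

bod_mg_per_L = 959
bod_kg_per_day = bod_mg_per_L * flow_L_per_day / 1e6  # mg -> kg
assert round(bod_kg_per_day) == 21055  # kg/day

lead_ug_per_L = 2.49
lead_g_per_day = lead_ug_per_L * flow_L_per_day / 1e6  # ug -> g
assert round(lead_g_per_day, 2) == 54.67  # g/day
```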

November 8

Fibonacci-like Series

Consider the series sum_k b^(-F_k), which is the number in base b that has ones in its expansion at indices corresponding with the Fibonacci numbers. Is this number algebraic? Similarly, is the number sum_k p^(F_k) algebraic in the p-adic numbers? Black Carrot (talk) 01:34, 8 November 2010 (UTC)[reply]

The answer to the first question is "almost certainly not, but I very much doubt anyone has a proof". It should be the case that all algebraic irrationals are normal numbers to every base. Basically that's because almost all real numbers are normal, so if a number is not normal then there should be a reason it isn't, and algebraic irrationals don't really know from positional number representations so they shouldn't have such a reason. Given that there are only countably many algebraic irrationals, and the probability of any of them not being normal should be zero, the probability that there's even a single algebraic irrational that's not normal should also be zero.
But of course that's not a proof. --Trovatore (talk) 07:37, 8 November 2010 (UTC)[reply]
I don't believe in this one. Some rational numbers have a very good reason not to be normal, so why couldn't some algebraic irrational numbers also have a good reason? On the other hand, I do think that this particular sum is not algebraic. – b_jonas 11:18, 8 November 2010 (UTC)[reply]
All rational numbers have a very good reason not to be normal. They have repeating expansions in every base. No such reason is apparent for algebraic irrationals. There probably isn't one, and most likely all algebraic irrationals are in fact normal.
This is a case with a huge disconnect between what we basically know (in the sense of things that we'd say we know in, for example, physics) and what we can prove by the rules of mathematics. We basically know that all algebraic irrationals are normal to all bases. But I am unaware that anyone has managed to prove that even one algebraic irrational is normal to even one base. --Trovatore (talk) 18:38, 8 November 2010 (UTC)[reply]
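As a small piece of empirical (not mathematical) evidence for the above, the early decimal digits of sqrt(2) do look uniform:

```python
from collections import Counter
from math import isqrt

# Empirical illustration only: digit frequencies in the first N decimal
# digits of sqrt(2) look uniform, consistent with (but proving nothing
# about) the conjecture that algebraic irrationals are normal.
N = 20_000
# floor(sqrt(2) * 10^N) gives the digits "1414213562..."; drop the lead "1".
digits = str(isqrt(2 * 10 ** (2 * N)))[1:]
assert len(digits) == N

counts = Counter(digits)
for d in "0123456789":
    # each digit should appear close to N/10 times (generous tolerance)
    assert abs(counts[d] - N // 10) < 400
```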
You might be interested in the article Liouville number, though I don't believe it applies in this case. I also suspect your first number is not algebraic in any base, but do not have a proof. A plausible generalization might replace F_k with nearestint(e^k) for suitable real e, in light of Binet's formula. 67.158.43.41 (talk) 11:55, 8 November 2010 (UTC)[reply]
This is not an answer, but still. It is easy to see that the two related sums and both have approximation exponent ≥ φ² > 2, hence they are both transcendental by Roth's theorem. This does not preclude their sum from being algebraic, but it seems quite unlikely.—Emil J. 15:15, 8 November 2010 (UTC)[reply]
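The exponent can be seen numerically for a sum over even-indexed Fibonacci numbers (a hedged illustration; the exact split into the two sums is an assumption here):

```python
from fractions import Fraction

# Hedged illustration (not a proof): S = sum_k b^(-F_{2k}) over even-indexed
# Fibonacci numbers is approximated by its partial sums with error roughly
# q^(-phi^2), where q is the partial sum's denominator and phi^2 ~ 2.618 > 2,
# beyond the exponent Roth's theorem permits for algebraic irrationals.
b = 2
fib = [1, 1]
while len(fib) < 20:
    fib.append(fib[-1] + fib[-2])
even_fibs = fib[1::2]  # 1, 3, 8, 21, 55, ...; consecutive ratios -> phi^2

S = sum(Fraction(1, b) ** f for f in even_fibs)

for m in range(3, 8):
    partial = sum(Fraction(1, b) ** f for f in even_fibs[:m])
    err = S - partial
    # exact log2 of err: its denominator is a power of two, numerator odd
    log2_err = err.numerator.bit_length() - err.denominator.bit_length()
    exponent = -log2_err / even_fibs[m - 1]  # log2(q) = even_fibs[m - 1]
    assert exponent > 2
```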


At a first glance, my impression is that for any integer the number is indeed transcendental, and that there should be an elementary proof, arguing on the "base b support" of the number and of its powers. Here by "base b support" of a number x I mean the subset of corresponding to the non-zero digits of x, e.g. for , it is the set of all Fibonacci numbers (negated). Note that where is the number of representations of k as a sum of m Fibonacci numbers, with possible repetitions, and distinguishing the order of summands (in general this is not a base b representation, since the coefficients may exceed b-1). However I think it should be true that:
i) for any m, c(k,m) is bounded by a constant C(m);
ii) for any m, the set of all natural numbers that have a representation as a sum of less than m Fibonacci numbers has density 0 relative to the set of all natural numbers that have a representation as a sum of exactly m Fibonacci numbers, meaning that the number of elements of less than x is little oh of the number of elements of less than x, as Let be a polynomial of degree m with integer coefficients and with of degree less than m.
As a consequence of (i), the base b support of should be only a slight perturbation of the set and by (ii) it could not be covered by the support of , which would make it impossible for to be a root of . Though I think that the best proof is the one shown by EmilJ via Roth's theorem, I'd be glad to hear any comments or ideas on the above elementary approach. --pma 23:37, 8 November 2010 (UTC)[reply]

Quick query - integral tending to 0 on [0,1]

Hi there,

Could anyone quickly suggest a method I might use to show that  ? I've spent a long time staring at it and the only possible method I could think of was to say , but this only gives us a bound of via arctan, which is not good enough unfortunately.

I am happy to do all the working myself if anyone could point me in the direction of an effective method! (Limit tests, basic measure theory etc. is fine with me, incidentally).

Thanks! 131.111.1.66 (talk) 15:01, 8 November 2010 (UTC)[reply]

For , this is just which is obviously 0 (since ). For , it's , which is rather not 0. --Tardis (talk) 15:09, 8 November 2010 (UTC)[reply]
I suspect that the OP didn't write properly what they actually want, since they talk about integrals, which there aren't any in this expression.—Emil J. 15:18, 8 November 2010 (UTC)[reply]
Sorry! Corrected now, I'm an idiot :-P 131.111.1.66 (talk) 16:15, 8 November 2010 (UTC)[reply]
OK, trying again (and assuming a dx in the integral): dropping the cosine factor for simplicity, we have where and . As , the first term goes to zero because so which vanishes, the second goes to zero because on that interval so which vanishes for suitably small , and the third goes to zero because on that interval which vanishes (and the interval length is bounded). --Tardis (talk) 19:34, 8 November 2010 (UTC)[reply]

when will math end?

What is the most likely avenue for the demise of the entire field of mathematics as we know it today, and when is it most likely to take place? My impression is that it must be empirical evidence (not proof, of course =) ) that any axiomatic system is inconsistent, and it seems to me this is most likely to take place in the next 150 years. I would place a 90% chance on this happening. But this is just my impression. What are the facts, referenced ideas on this subject? Thank you. 84.153.236.235 (talk) 18:42, 8 November 2010 (UTC)[reply]

Take a look at Gödel's incompleteness theorems, and the references linked at our consistency article to learn about formal mathematical consistency. At the reference desk, we will not speculate about future developments in mathematics; that means we won't delve into guessing about "the demise" of mathematics (whatever that even means). Nimur (talk) 18:51, 8 November 2010 (UTC)[reply]
My guess would be because life has ended and the timescale would be some thousands maybe millions of millions of years. Hopefully evolution and technical fixes would have made people better at it by then. Dmcq (talk) 19:33, 8 November 2010 (UTC)[reply]
Agreed with Dmcq, the demise of humanity is by far the most likely scenario for math ending. But I think it's very likely we will destroy ourselves this century.
A discovery like what you describe, if indeed it takes place, may change some views in philosophy of mathematics, but it will not change how mainstream math is practiced.
I'd say that the probability that humanity as we know it will continue, but math as we know it will end this millennium, is about 0.00000001% (though I am in no way qualified to make such estimates). -- Meni Rosenfeld (talk) 20:00, 8 November 2010 (UTC)[reply]
I'd nitpick on Dmcq's and Meni Rosenfeld's answer on account of the fact that the study of mathematics has only been able to pick up due to the luxury of humanity's ability to focus on the mental exercises of math, sciences, and the arts, instead of the constant need to survive a harsh environment. It would not take the complete demise of humanity to end math as we know it; only the end of modern civilizations. --COVIZAPIBETEFOKY (talk) 21:31, 8 November 2010 (UTC)[reply]
Depending on what you mean by "as we know it", the "field of mathematics as we know it today" could end abruptly if a strong AI were developed which ordinary humans couldn't keep up with, effectively ending humanity's direct participation in mathematical discovery. Math would certainly exist and continue after that, but not really "as we know it". Staecker (talk) 00:53, 9 November 2010 (UTC)[reply]
If humans can get along with each other and the planet well enough, I don't think math will ever end. I recall Max Born once saying "physics, as we know it, will end in 2 weeks", only to see the explosive development of new stuff after that. I think, as new theories are developed to answer current problems, new problems arise from the new theories at a rate that's faster than the rate that current problems are being solved. Money is tight (talk) 14:26, 9 November 2010 (UTC)[reply]

November 9

Differential geometry textbook

Hello. I'm looking for a good introduction to differential geometry; can anyone suggest a particular author or text? Thanks in advance. —Anonymous DissidentTalk 12:27, 9 November 2010 (UTC)[reply]

I hear Lee's Introduction to Smooth Manifolds is pretty good, but I haven't looked at it yet. Money is tight (talk) 14:22, 9 November 2010 (UTC)[reply]
Spivak's Comprehensive Introduction to Differential Geometry is well-used and praised. [5] Depending on your background, you may find it more comprehensive than introductory :) I also recommend his book on introductory real analysis, which is unfortunately named `Calculus'. SemanticMantis (talk) 16:43, 9 November 2010 (UTC)[reply]

are these the same thing

Toroidal coordinates Bispherical coordinates