Wikipedia:Reference desk/Mathematics

From Wikipedia, the free encyclopedia

The Wikipedia Reference Desk covering the topic of mathematics.

Welcome to the mathematics reference desk.
Want a faster answer?

Main page: Help searching Wikipedia

How can I get my question answered?

  • Provide a short header that gives the general topic of the question.
  • Type ~~~~ (i.e. four tilde characters) at the end – this signs and dates your contribution so we know who wrote what and when.
  • Post your question to only one desk.
  • Don't post personal contact information – it will be removed. We'll answer here within a few days.
  • Note:
    • We don't answer (and may remove) questions that require medical diagnosis or legal advice.
    • We don't answer requests for opinions, predictions or debate.
    • We don't do your homework for you, though we’ll help you past the stuck point.


How do I answer a question?

Main page: Wikipedia:Reference desk/Guidelines

  • The best answers address the question directly, and back up facts with wikilinks and links to sources. Do not edit others' comments and do not give any medical or legal advice.
 
See also:
Help desk
Village pump
Help manual


February 2

Computation of trig functions before computers

What exactly were the steps, or the algorithm, used to compute the values in the logarithm, sine, or cosine tables? Back then there were around 1000 values in each table, but if you did not have the table, was it really impossible to do these calculations? 186.146.10.154 (talk) 12:18, 2 February 2016 (UTC) (posted by SemanticMantis (talk) 16:07, 2 February 2016 (UTC))

Trigonometric_tables gives some info on how they were computed. Let us know if there's something in there you don't understand. SemanticMantis (talk) 16:49, 2 February 2016 (UTC)
The earliest method would just be to construct the triangles and measure the values directly, perhaps interpolating to find the in-between values. StuRat (talk) 17:03, 2 February 2016 (UTC)
See Taylor series. The method of computing the values in the log, sine, or cosine tables was essentially the same as the method that a computer uses behind the scenes. If you did not have the table and were not skilled in doing the laborious pencil-and-paper Taylor series, it was impossible to do the calculations accurately, although there were estimation techniques. If you did not have the table and were skilled in the laborious pencil-and-paper calculation, I suppose (but am guessing) that you could contract with a publisher to develop their version of the table. Robert McClenon (talk) 18:25, 2 February 2016 (UTC)
The following series can be evaluated by hand to as many terms as accuracy demands.

Natural logarithm

of z where 0 < z < 2,


\ln (z)  = \frac{(z-1)^1}{1} - \frac{(z-1)^2}{2} + \frac{(z-1)^3}{3} - \frac{(z-1)^4}{4} + \cdots


Common logarithm

 \log_{10}(x) = \frac{\ln(x)}{\ln(10)}


Sine


\begin{align}
\sin x & = x - \frac{x^3}{3!} + \frac{x^5}{5!} - \frac{x^7}{7!} + \cdots
\end{align}

where x is in radians (1 radian = 180/π degrees ≈ 57.3°)


Cosine


\begin{align}
\cos x & = 1 - \frac{x^2}{2!} + \frac{x^4}{4!} - \frac{x^6}{6!} + \cdots
\end{align}

AllBestFaith (talk) 20:38, 3 February 2016 (UTC)
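The term-by-term evaluation these series call for can be sketched in a few lines of Python (my own illustration, not part of the original thread). Each term is obtained from the previous one by a single multiplication, which avoids recomputing powers and factorials:

```python
import math

def sin_series(x, terms=20):
    """sin x = x - x^3/3! + x^5/5! - ..., summed term by term."""
    total, term = 0.0, x
    for n in range(terms):
        total += term
        # each term is the previous one times -x^2 / ((2n+2)(2n+3))
        term *= -x * x / ((2 * n + 2) * (2 * n + 3))
    return total

def cos_series(x, terms=20):
    """cos x = 1 - x^2/2! + x^4/4! - ..., summed term by term."""
    total, term = 0.0, 1.0
    for n in range(terms):
        total += term
        # each term is the previous one times -x^2 / ((2n+1)(2n+2))
        term *= -x * x / ((2 * n + 1) * (2 * n + 2))
    return total

# sin(pi/6) should be 0.5; compare against the library routine
x = math.pi / 6
print(sin_series(x), math.sin(x))
```

With 20 terms the partial sums agree with the library functions to machine precision for arguments reduced into, say, [0, π/2].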

The above series for ln(z) converges too slowly to be useful for numerical computation. Use
\ln((z+1)(z-1)^{-1})=2(z^{-1}+3^{-1}z^{-3}+5^{-1}z^{-5}+\cdots)
for |z|>1. (See Abramowitz and Stegun formula 4.1.28). Bo Jacoby (talk) 09:00, 4 February 2016 (UTC).
John Napier published tables of ln(x) in 1614. He could not know what equations would be published in 1964. Henry Briggs may have used a finite-difference method some years later. The judgement "converges too slowly" need not apply where the calculator devotes years to his task. AllBestFaith (talk) 13:32, 4 February 2016 (UTC)

We have this article: History_of_logarithms. Bo Jacoby (talk) 17:57, 4 February 2016 (UTC).
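Putting the two logarithm series side by side in a short Python sketch (mine, with an arbitrarily chosen test point x = 1.9 and 8 terms each) shows why the convergence question matters:

```python
import math

def ln_direct(x, terms):
    """ln x = (x-1) - (x-1)^2/2 + (x-1)^3/3 - ..., valid for 0 < x < 2."""
    return sum((-1) ** (n + 1) * (x - 1) ** n / n for n in range(1, terms + 1))

def ln_accelerated(x, terms):
    """The faster form: with z = (x+1)/(x-1), ln x = 2*(1/z + 1/(3 z^3) + ...)."""
    z = (x + 1) / (x - 1)
    return 2 * sum(z ** -(2 * n + 1) / (2 * n + 1) for n in range(terms))

x, terms = 1.9, 8
err_direct = abs(ln_direct(x, terms) - math.log(x))
err_accel = abs(ln_accelerated(x, terms) - math.log(x))
print(err_direct, err_accel)  # the accelerated series wins by many digits
```

Near the edge of the direct series' interval of convergence the error after 8 terms is still on the order of 10^-2, while the accelerated form is already accurate to better than 10^-8.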

February 3

Conformal mapping

As I understand it, in two dimensions conformal maps can be almost uniquely defined by specifying a simple closed curve in the domain and another such curve in the codomain, such that the region inside the curve in the domain is mapped bijectively to the region inside the curve in the codomain. I have two questions relating to this:

  1. Using complex-valued functions to represent the conformal map, is there a general procedure for finding the holomorphic function that maps the boundary of an arbitrary simple closed curve to the unit circle?
  2. Is the function thus found guaranteed to be representable by a single power series throughout the simple closed curve in the domain? I believe that the inverse map always has this property (the inverse function must be holomorphic throughout the unit disk so is representable by a single power series about the origin), but I'm not so sure about the original function.--Leon (talk) 08:47, 3 February 2016 (UTC)
1. There is the Schwarz-Christoffel mapping, which only applies to certain types of regions (but can be adapted as a numerical scheme for more general regions). There is a book by that title by Driscoll and Trefethen with lots of details and examples. 2. In general, the map will not have a single power series expansion throughout the domain. Most likely, this is never true except when the domain and codomain are both discs (or halfplanes). Sławomir Biały 12:27, 3 February 2016 (UTC)
Thanks, but I didn't mean a power series that converges throughout the entire domain, I meant a power series with a radius of convergence that covers the simple closed curve.--Leon (talk) 12:36, 3 February 2016 (UTC)
If a disc contains a Jordan curve, then the disc must contain a component of the complement of the curve. So if the power series converges on the whole curve, then it must converge on a connected component of the complement. Sławomir Biały 12:39, 3 February 2016 (UTC)
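As a concrete warm-up to question 1 (my own illustration; this is a standard textbook example, not the general procedure asked about): the Cayley transform w = (z − i)/(z + i) is a holomorphic bijection sending the upper half-plane onto the unit disk and its boundary, the real line, onto the unit circle. A quick numerical check:

```python
def cayley(z):
    """Cayley transform: maps the upper half-plane onto the unit disk."""
    return (z - 1j) / (z + 1j)

# boundary points (the real axis) land on the unit circle ...
for t in [-5.0, -1.0, 0.0, 2.0, 10.0]:
    assert abs(abs(cayley(complex(t, 0.0))) - 1.0) < 1e-12

# ... while interior points (Im z > 0) land strictly inside the disk
for z in [1j, 1 + 1j, -3 + 0.5j]:
    assert abs(cayley(z)) < 1.0

print("real line -> unit circle, half-plane -> unit disk")
```

The half-plane is the degenerate (unbounded) case of the Jordan-region picture; for genuinely polygonal boundaries one composes maps like this with Schwarz-Christoffel integrals.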

February 5

Hessian Matrix Meaning

Let f:\R^n\to\R be a smooth function. Let x\in\R^n be a point at which the gradient of f is zero. Let H be the Hessian matrix of f at the point x. Let V be the vector space spanned by the eigenvectors corresponding to negative eigenvalues of H, and let y\in V. Is it then true that f(x)>f(y), or rather that f(x)>f(x+y)?

In other words, does a negative eigenvalue imply a maximum point in the direction of the corresponding eigenvector, or might the maximum be in another direction, not that of the eigenvector? עברית (talk) 06:45, 5 February 2016 (UTC)

See Morse lemma. Sławomir Biały 12:23, 5 February 2016 (UTC)
(edit conflict)You're kind of circling around the second derivative test for functions of several variables. The Taylor expansion for f at x is
f(\mathbf{x}+\mathbf{y}) \approx f(\mathbf{x}) + \mathbf{y}^\mathrm{T}  \mathrm{D} f(\mathbf{x}) + \frac{1}{2!} \mathbf{y}^\mathrm{T} \mathrm{D}^2 f(\mathbf{x}) \mathbf{y} + \cdots
where Df is the gradient and D2f is the Hessian. In this case the gradient is 0 at x so this reduces to
f(\mathbf{x}+\mathbf{y}) \approx f(\mathbf{x}) + \frac{1}{2!} \mathbf{y}^\mathrm{T} \mathrm{D}^2 f(\mathbf{x}) \mathbf{y} + \cdots .
Let e be an eigenvector with eigenvalue λ, and wlog take e to be length 1. If y = te, then
f(\mathbf{x}+\mathbf{y}) \approx f(\mathbf{x}) + \frac{1}{2!} \lambda t^2 + \cdots
so f has a local minimum or maximum along the line parallel to e through x, depending on whether λ is positive or negative. If e, f ... are several linearly independent eigenvectors, with eigenvalues λ, μ, ... , and y = te + uf + ... , then
f(\mathbf{x}+\mathbf{y}) \approx f(\mathbf{x}) + \frac{1}{2!} (\lambda t^2 + \mu u^2 + \cdots) + \cdots
so f has a local minimum or maximum in the relevant subspace through x provided λ, μ, ... all have the same sign. (The eigenvectors may be taken to be orthogonal since D2f is symmetric.) Note, this is only valid for y sufficiently small; otherwise the higher-order terms in the Taylor series become significant and the approximation is no longer valid. --RDBury (talk) 12:46, 5 February 2016 (UTC)
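The sign argument above can be checked numerically on the simplest saddle (a sketch of mine, not from the thread): f(x, y) = x² − y² has zero gradient at the origin, and its Hessian diag(2, −2) has the coordinate axes as eigenvectors.

```python
def f(x, y):
    # saddle point: gradient vanishes at the origin, Hessian is diag(2, -2)
    return x * x - y * y

t = 1e-3  # small step, so higher-order terms stay negligible
# along the eigenvector (1, 0), eigenvalue +2: behaves like a minimum
assert f(t, 0.0) > f(0.0, 0.0) and f(-t, 0.0) > f(0.0, 0.0)
# along the eigenvector (0, 1), eigenvalue -2: behaves like a maximum
assert f(0.0, t) < f(0.0, 0.0) and f(0.0, -t) < f(0.0, 0.0)
print("negative-eigenvalue direction is a maximum direction")
```

So the answer to the original question is about the second form, f(x) > f(x+y), and only for y small and inside the negative-eigenvalue subspace.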
Oh, great! Thank you! :) עברית (talk) 08:38, 6 February 2016 (UTC)

February 6

question in graph theory

Hi,
Suppose we have a graph G with |V(G)|\ge k+1 (k\in\mathbb{N}), in which every pair of non-adjacent vertices u and v satisfies d(u)+d(v)\ge2k. How can we prove that the average degree of this graph is at least k?
Thanks — Preceding unsigned comment added by 217.132.96.145 (talk) 19:51, 6 February 2016 (UTC)

Just sum over all vertices, and show that the sum of the degrees is at least k\cdot|V|.
The idea is to sum over pairs of non-adjacent vertices.
If all the vertices in the graph are pairwise adjacent, we're done: since |V|\ge k+1, each vertex has degree \ge k.
Also, if all the vertices have degree \ge k, we're done.
Otherwise, there are at least two non-adjacent vertices, v and u, one of which has degree <k while the other has degree >k.
We know that d(u)+d(v)\ge 2k. WLOG d(v) > d(u), so d(v)\ge k+1.
Now, for every vertex u which is not a neighbor of v, it holds that d(u)+d(v)\ge2k, so d(u)\ge2k-d(v).
We are now left only with the neighbors of v.
If they are all mutually adjacent, then we know that their degrees are \ge n-1 \ge k.
Otherwise, there are two vertices among them that are not adjacent - fix one of them and continue this way recursively.
Since the statement (that the average of the degrees over the fixed vertex and its non-neighbors is \ge k) holds at every step of the recursion, the claim follows.
Notice that this recursive method is similar to induction, which you're probably more familiar with. עברית (talk) 10:39, 8 February 2016 (UTC)
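Independently of the proof, the claim can be brute-force checked for small graphs (my own verification sketch, separate from the argument above): enumerate every graph on n vertices and test that the hypothesis really forces average degree \ge k.

```python
from itertools import combinations

def claim_holds(n, k):
    """Check over all graphs on n vertices (assumes n >= k+1): if every
    pair of non-adjacent vertices u, v satisfies d(u)+d(v) >= 2k, then
    the average degree is at least k."""
    pairs = list(combinations(range(n), 2))
    for bits in range(2 ** len(pairs)):
        edges = {p for i, p in enumerate(pairs) if bits >> i & 1}
        deg = [sum(1 for p in edges if v in p) for v in range(n)]
        hypothesis = all(deg[u] + deg[v] >= 2 * k
                         for (u, v) in pairs if (u, v) not in edges)
        if hypothesis and sum(deg) < k * n:  # average degree below k
            return False  # counterexample found
    return True

# exhaustive over the 64 graphs on 4 vertices and the 1024 graphs on 5
print(all(claim_holds(4, k) for k in (1, 2, 3)),
      all(claim_holds(5, k) for k in (1, 2, 3, 4)))
```

This only confirms the small cases, of course, but it is a useful sanity check while hunting for a proof.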

February 7

Semi-hereditary rings

Is there a ring R that is left semi-hereditary but not right semi-hereditary? Of course, the opposite ring Rop will then be right semi-hereditary but not left semi-hereditary. Unlike for (semi-)perfect rings and (semi-)firs, where the semi version is left-right symmetric and the non-semi version is asymmetric, both hereditariness and semi-hereditariness are asymmetric. GeoffreyT2000 (talk) 00:24, 7 February 2016 (UTC)

Discrete Fourier Transform

By convention, when we take a DFT of a series, we get a series-sized list of numbers back. These numbers describe the Frequency domain of that series. My question is: what is the exact relation of each number to the original signal? Let's say we take a DFT of 1024 samples from an audio recording with a sample rate of 44100 Hz. We get back a list of 1024 numbers. The first number (or last depending on how you order it I guess, but by convention usually the first) will represent the "constant" signal of the time series, correct? The last number will represent a signal oscillating fast enough to go through a full sine wave 512 times over the course of our 1024 samples (alternating between full positive and full negative every sample), right? This corresponds to a frequency of 22050 Hz? So what does e.g. the 384th number represent?

tl;dr: I wanna tie the results of an FFT of an audio file to specific frequencies in Hertz. How do? 97.93.100.146 (talk) 21:58, 7 February 2016 (UTC)

Hi, reporting back on some of my own digging to help build the record. My first point of confusion was that I had been working with a full FFT instead of a real FFT on real data. The "extra" 512 coefficients mirror (as complex conjugates) the first 512 when the input is real! So, with an RFFT of the same data from the earlier example, the 512th (i.e. last) member is 22050 Hz. 97.93.100.146 (talk) 22:26, 7 February 2016 (UTC)
The resulting values are evenly spaced in frequency domain. So if you have N samples and a sample rate of S, then the k-th resulting value corresponds to frequency {k \over N} S. As you already noted, k > N / 2 just repeats for real signals, so the interesting information covers frequencies S / N to S / 2. Dragons flight (talk) 11:41, 8 February 2016 (UTC)
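The formula above, applied to the 1024-sample, 44100 Hz example from the question (a small sketch of mine):

```python
def bin_to_hz(k, n_samples, sample_rate):
    """Frequency in Hz of the k-th DFT bin: k/N times the sample rate."""
    return k * sample_rate / n_samples

N, S = 1024, 44100
print(bin_to_hz(0, N, S))    # DC ("constant") component: 0.0 Hz
print(bin_to_hz(512, N, S))  # Nyquist bin: 22050.0 Hz
print(bin_to_hz(384, N, S))  # the asker's 384th bin: 16537.5 Hz
```

So each bin is S/N = 44100/1024 ≈ 43.07 Hz wide, and bin 384 sits at 384 × 43.07 ≈ 16537.5 Hz.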

February 8

"Opposite" of Normal Distribution

What is the equation of the normal distribution turned up-down? Does this distribution have some name in the literature? עברית (talk) 08:38, 8 February 2016 (UTC)

If you mean a distribution with an inverted bell shape, it can't be that simple, because its integral would be infinite. —Tamfang (talk) 08:54, 8 February 2016 (UTC)
Oopss.. Thanx! עברית (talk) 10:41, 8 February 2016 (UTC)

Elementary Proof of an Integral Identity involving Bessel functions

It is well known that, for positive values of a, \int_0^\frac\pi2\cos(a\cos x)dx=\int_0^\infty\sin(a\cosh x)dx=\frac\pi2J_0(a). I was wondering whether it is possible to prove the first half of the identity in an elementary manner, without any explicit recourse to Bessel functions and their various properties.

I've tried writing them both as \int_0^1\frac{f_{1,2}(ax)}{\sqrt{1-x^2}}~dx, with f_1(t)=\cos(t) and f_2(t)=\frac{\sin(1/t)}t, and then expand \frac1{\sqrt{1-x^2}} into its binomial series, and reverse the order of summation and integration, but the general terms of the two series are not equal (not to mention the fact that each is expressed in terms of incomplete gamma functions and/or exponential integrals of imaginary argument). — 79.118.187.240 (talk) 13:11, 8 February 2016 (UTC)

It's not too hard to show that both satisfy the same second order differential equation (which turns out to be the Bessel equation). Sławomir Biały 13:34, 8 February 2016 (UTC)
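For reassurance, the first equality can be checked numerically in pure Python, comparing a midpoint-rule evaluation of the integral with (π/2)J₀(a) computed from the power series (my own sketch; a = 1.7 is an arbitrary choice):

```python
import math

def j0_series(a, terms=30):
    """J0(a) from its power series: sum of (-1)^m (a/2)^(2m) / (m!)^2."""
    total, term = 0.0, 1.0
    for m in range(terms):
        total += term
        # next term: multiply by -(a/2)^2 / (m+1)^2
        term *= -(a / 2) ** 2 / ((m + 1) ** 2)
    return total

def lhs_integral(a, steps=10000):
    """Midpoint rule for the integral of cos(a cos x) over [0, pi/2]."""
    h = (math.pi / 2) / steps
    return h * sum(math.cos(a * math.cos((i + 0.5) * h)) for i in range(steps))

a = 1.7
print(abs(lhs_integral(a) - math.pi / 2 * j0_series(a)))  # tiny discrepancy
```

This is not the elementary proof asked for, only a numerical confirmation that the two sides agree.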

Grams in milliliters

What is 100g plain flour in milliliters? 2A02:8084:9360:3780:141:A29C:2CFA:4D6F (talk) 14:58, 8 February 2016 (UTC)