# Wikipedia:Reference desk/Archives/Mathematics/2008 February 24

Mathematics desk
Welcome to the Wikipedia Mathematics Reference Desk Archives
The page you are currently viewing is an archive page. While you can leave answers for any questions shown below, please ask new questions on one of the current reference desk pages.

# February 24

## derivatives and second derivatives of rotation functions

hi, how do i find the derivative and second derivative of a function that rotates counterclockwise by an angle theta? do i just take the derivative and second derivative of (−sin + cos, cos + sin)?

also, how might one prove that a harmonic function composed with a function that preserves dot product is still harmonic? thanks —Preceding unsigned comment added by 199.74.71.147 (talk) 03:33, 24 February 2008 (UTC)

For the first question I assume you mean rotation around the origin in the Euclidean plane, using the mapping
${\displaystyle (x,y)\mapsto (x\cos \theta -y\sin \theta ,x\sin \theta +y\cos \theta )\,.}$

To get the n-th derivative with respect to the rotation angle θ, just work out

${\displaystyle (x,y)\mapsto {\frac {d^{n}}{{d\theta }^{n}}}(x\cos \theta -y\sin \theta ,x\sin \theta +y\cos \theta )\,,}$
where a pair is differentiated coordinate-wise:
${\displaystyle {\frac {d}{d\theta }}(X(\theta ),Y(\theta ))=({\frac {d}{d\theta }}X(\theta ),{\frac {d}{d\theta }}Y(\theta ))\,.}$
Using matrix–vector notation, the mapping can be written as
${\displaystyle {\begin{bmatrix}x\\y\end{bmatrix}}\mapsto M(\theta ){\begin{bmatrix}x\\y\end{bmatrix}}\,,}$
where the 2×2 rotation matrix M(θ) is given by:
${\displaystyle M(\theta )={\begin{bmatrix}\cos \theta &-\sin \theta \\\sin \theta &\,\,\,\,\cos \theta \end{bmatrix}}\,.}$
Here you have to find ${\displaystyle {\tfrac {d}{d\theta }}M(\theta )}$, which can be done entry-wise:
${\displaystyle {\frac {d}{d\theta }}M(\theta )={\begin{bmatrix}{\tfrac {d}{d\theta }}\cos \theta &-{\tfrac {d}{d\theta }}\sin \theta \\{\tfrac {d}{d\theta }}\sin \theta &\,\,\,\,{\tfrac {d}{d\theta }}\cos \theta \end{bmatrix}}\,.}$
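The entry-wise differentiation above can be sanity-checked numerically. A minimal Python sketch (the names `rot` and `rot_deriv` are illustrative, not from the thread) comparing the analytic derivative of M(θ) with a central finite difference:

```python
import math

def rot(theta):
    """2x2 counterclockwise rotation matrix M(theta), as nested lists."""
    c, s = math.cos(theta), math.sin(theta)
    return [[c, -s], [s, c]]

def rot_deriv(theta):
    """Entry-wise derivative dM/dtheta: cos -> -sin, sin -> cos."""
    c, s = math.cos(theta), math.sin(theta)
    return [[-s, -c], [c, -s]]

theta, h = 0.7, 1e-6
# central finite difference of each matrix entry
numeric = [[(rot(theta + h)[i][j] - rot(theta - h)[i][j]) / (2 * h)
            for j in range(2)] for i in range(2)]
analytic = rot_deriv(theta)
assert all(abs(numeric[i][j] - analytic[i][j]) < 1e-8
           for i in range(2) for j in range(2))
```

Note that M′(θ) is M(θ + π/2), so each derivative is again a rotation matrix times a sign pattern; the second derivative is simply −M(θ).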
The second question is not clear. What does it mean for a function to preserve dot products? Normally you expect that to mean f(u·v) = f(u)·f(v), but the dot product turns two vectors into a scalar, so if u and v are vectors, on the left-hand side f is operating on a scalar and on the right-hand side on vectors. How can this be? Also, is it given in which order the harmonic function is composed with the dot-product preserving function? An example might help to clarify this.  --Lambiam 06:26, 24 February 2008 (UTC)
Probably means orthogonal, u·v = f(u)·f(v). With some very minor condition (possibly none), a dot-product-preserving function is an orthogonal matrix. Then it is just asking to show that the Laplacian is invariant under orthogonal coordinate changes, which is definitely a very common exercise to give (so likely to be his question). JackSchmidt (talk) 06:44, 24 February 2008 (UTC)
that probably is my question, but i am unfamiliar with most of the terminology you used as my professor is rather scatterbrained in lecture. could you please give me some pointers on where to start if i were to prove this? thanks! —Preceding unsigned comment added by 199.74.71.147 (talk) 08:04, 24 February 2008 (UTC)
Basically you use the chain rule. If g:R^2->R is harmonic and f:R^2->R^2 preserves dot products, then f is actually given by an orthogonal matrix (which looks almost exactly like a rotation matrix). Define h:R^2->R by h(x) = g(f(x)). Write out what that means fairly explicitly in terms of the matrix entries of f (just label them f11, f12, f21, and f22; don't use cosines in my humble opinion). Then use the chain rule to take the first and second derivatives. The derivatives will involve dot products of rows of f. f is orthogonal, so the dot products will be 0 for the mixed derivatives of g, and 1 for the repeated derivatives of g. In other words, lap(h) evaluated at x is equal to lap(g) evaluated at f(x). JackSchmidt (talk) 08:17, 24 February 2008 (UTC)
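The identity lap(h)(x) = lap(g)(f(x)) described above can be checked numerically. In this Python sketch the orthogonal map f is taken to be a rotation, and g is a sample (deliberately non-harmonic) polynomial so the identity is nontrivial; all function names are illustrative:

```python
import math

def f(p, theta=0.6):
    """An orthogonal map (here a rotation, as an assumption)."""
    x, y = p
    c, s = math.cos(theta), math.sin(theta)
    return (c * x - s * y, s * x + c * y)

def g(p):
    """Sample test function; not harmonic, so lap(g) != 0."""
    x, y = p
    return x**3 + y**2

def lap_g(p):
    """Analytic Laplacian of g: d2/dx2 + d2/dy2 = 6x + 2."""
    x, y = p
    return 6 * x + 2

def h(p):
    return g(f(p))

def laplacian(func, p, eps=1e-4):
    """Five-point finite-difference approximation of the Laplacian."""
    x, y = p
    return (func((x + eps, y)) + func((x - eps, y))
            + func((x, y + eps)) + func((x, y - eps))
            - 4 * func((x, y))) / eps**2

p = (0.8, -0.3)
assert abs(laplacian(h, p) - lap_g(f(p))) < 1e-4
```

If g were harmonic (say g(x, y) = x² − y²), the same check would give lap(h) ≈ 0 everywhere, which is the exercise's claim.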
how would one avoid using cosines? and also, where do dot products come into the derivatives? I don't see them. 199.74.71.147 (talk) 20:22, 24 February 2008 (UTC)
(Sorry, I don't have an easy way to explain this; my comments have been meant for other more analytic types to help you out; same continues here). See directional derivative for the dot products. Applying f just changes the direction of a directional derivative. The laplacian is defined in terms of partials, but you get the same definition if you use directional derivatives in directions which are just a rotation of the previous ones. Simply writing it out and taking calc 1 derivatives suffices, it just takes a page. I recommend that method. JackSchmidt (talk) 20:47, 24 February 2008 (UTC)
can you explain a little more about how i get from taking the derivative to having dot products?
There is a dot product in the definition of a directional derivative, is that the one you're looking for? --Tango (talk) 11:44, 25 February 2008 (UTC)

## Integral Powers of Trinomials

For expanding binomials, we have the binomial expansion which gives us all the info we need about expanding a binomial to any power. For example, if I want to find the full expansion of ${\displaystyle (x+y)^{25}}$, I can do so. My question is: is there a similar expansion formula for ${\displaystyle (x+y+z)^{25}}$? Is there a shorthand way to write this expansion in terms of the coefficients of the expansion? A Real Kaiser (talk) 05:53, 24 February 2008 (UTC)

Memory of high school maths a bit hazy, but you can break ${\displaystyle (x+y+z)^{n}}$ (here ${\displaystyle n=25}$) into ${\displaystyle ((x+y)+z)^{n}}$, which gives ${\displaystyle \sum _{k=0}^{n}{C_{k}^{n}}(x+y)^{n-k}z^{k}}$, and then expand ${\displaystyle (x+y)^{n-k}}$ binomially.
This gives something like ${\displaystyle \sum _{k=0}^{n}\sum _{l=0}^{n-k}{C_{k}^{n}}{C_{l}^{n-k}}x^{l}y^{n-k-l}z^{k}}$ --PalaceGuard008 (Talk) 06:19, 24 February 2008 (UTC)
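The double sum above can be verified numerically for small n. A quick Python sketch (the helper name `trinomial_power` is illustrative):

```python
from math import comb

def trinomial_power(x, y, z, n):
    """Expand ((x + y) + z)^n via two nested binomial sums."""
    total = 0
    for k in range(n + 1):          # outer binomial in z
        for l in range(n - k + 1):  # inner binomial in x and y
            total += (comb(n, k) * comb(n - k, l)
                      * x**l * y**(n - k - l) * z**k)
    return total

x, y, z, n = 2, 3, 5, 6
assert trinomial_power(x, y, z, n) == (x + y + z)**n
```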
You are looking for the Multinomial theorem. You are basically counting how many ways to rearrange "mississippi", or other such combinatorial questions. JackSchmidt (talk) 06:26, 24 February 2008 (UTC)
The coefficient of ${\displaystyle x^{i}y^{j}z^{k}}$ in the expansion of ${\displaystyle (x+y+z)^{n}}$ is
${\displaystyle {n \choose i,j,k}={\frac {n!}{i!\,j!\,k!}}}$
Pascal's pyramid contains the coefficients for the trinomial expansion - it bears the same relation to the trinomial expansion as Pascal's triangle does to the binomial expansion, but it has an extra dimension. So the coefficients of ${\displaystyle (x+y+z)^{25}}$ are the numbers in the 25th layer of Pascal's pyramid. Gandalf61 (talk) 11:26, 24 February 2008 (UTC)
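The multinomial coefficient formula is easy to check in Python. This sketch (the helper name `multinomial` is illustrative) also confirms the "mississippi" count mentioned above — 11 letters with 1 m, 4 i, 4 s, 2 p:

```python
from math import factorial

def multinomial(n, *ks):
    """n! / (k1! k2! ... km!), requiring the ks to sum to n."""
    assert sum(ks) == n
    out = factorial(n)
    for k in ks:
        out //= factorial(k)
    return out

# e.g. the coefficient of x^10 y^10 z^5 in (x+y+z)^25
c = multinomial(25, 10, 10, 5)

# sanity check: summing the whole 25th layer of Pascal's pyramid
# reproduces (x+y+z)^25 at sample values
x, y, z, n = 2, 3, 5, 25
total = sum(multinomial(n, i, j, n - i - j) * x**i * y**j * z**(n - i - j)
            for i in range(n + 1) for j in range(n - i + 1))
assert total == (x + y + z)**n

# rearrangements of "mississippi"
assert multinomial(11, 1, 4, 4, 2) == 34650
```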

Thanks guys, I had no idea that a generalization of Pascal's triangle existed. A Real Kaiser (talk) 01:29, 25 February 2008 (UTC)

Curiously enough, I independently discovered this relationship over a year ago. I found it a lot less obvious than Pascal's Triangle. A math-wiki (talk) 07:06, 27 February 2008 (UTC)

## Vibrational modes of a drum

I'm interested in creating a few images for Wikipedia of the vibrations of a drum membrane.

It appears these vibrations are related to the Bessel functions - searching on Google, I didn't find any precise description of the phenomenon. Some pages appeared helpful, though (for example: Standing waves in a drum membrane...).

So, could anyone explain how it works, and what the equations describing the vibrations are?

Thanks. -- Xedi (19:25, 24 February 2008 (UTC))

Well, I don't know your mathematical background (have you studied PDEs?), but the simplest thing to do is assume that the membrane is isotropic and homogeneous, and the waves are small enough that it responds linearly. Then you have to solve the wave equation (whose spatial part after separation of variables is the Helmholtz equation) by finding eigenfunctions of the Laplacian for whatever shape your drum is. The solutions for a circular drum do involve Bessel functions. See also Hearing the shape of a drum. —Keenan Pepper 23:13, 24 February 2008 (UTC)
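For the circular drum mentioned above, the radial part of each mode is a Bessel function, and the mode frequencies are set by its zeros: the fundamental mode of a drum of radius a has radial profile J₀(j₀₁·r/a), where j₀₁ is the first zero of J₀. A self-contained Python sketch (the series truncation and bisection bracket are assumptions) locating that zero:

```python
def J0(x):
    """Bessel function of the first kind, order 0, via its power series:
    J0(x) = sum_{m>=0} (-1)^m (x/2)^(2m) / (m!)^2, truncated at 40 terms."""
    term, total = 1.0, 1.0
    for m in range(1, 40):
        term *= -(x / 2) ** 2 / m**2
        total += term
    return total

# J0 changes sign between 2 and 3, so bisect for its first zero j01
lo, hi = 2.0, 3.0
for _ in range(60):
    mid = (lo + hi) / 2
    if J0(lo) * J0(mid) <= 0:
        hi = mid
    else:
        lo = mid
j01 = (lo + hi) / 2
assert abs(j01 - 2.404825557695773) < 1e-9
```

The fundamental frequency is then (j₀₁/2π)·(c/a), with c the wave speed in the membrane; higher modes J_n(j_{nk}·r/a)·cos(nφ) use the k-th zero of J_n the same way.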
If you want a textbook that discusses this stuff, try Elementary Applied Partial Differential Equations with Fourier Series and Boundary Value Problems by Richard Haberman. —Keenan Pepper 23:17, 24 February 2008 (UTC)
Thanks a lot, the article on the Helmholtz equation is pretty much what I needed. (I'll also have a look at the book) -- Xedi (10:08, 25 February 2008 (UTC))