# User:Justin545/Valuables

## Gravitational Field vs. Electric Force Field. Why?

In the last paragraph of section Quantum Mechanics and General Relativity:

"...it is not clear how to determine the gravitational field of a particle, if under the Heisenberg uncertainty principle of quantum mechanics its location and velocity cannot be known with certainty...".

I am just curious why the electric force field for the central-force problem (finding the wave function for the electron circling the nucleus of a hydrogen atom) can be determined, while it is not clear how to determine the gravitational field of a particle.

To figure out the wave function ${\displaystyle \Psi }$ for the electron of the central-force problem, the potential energy field ${\displaystyle V}$ in the time-independent Schrödinger equation

${\displaystyle E\Psi =-{\frac {\hbar ^{2}}{2m}}\nabla ^{2}\Psi +V\Psi }$

must be determined. But the potential energy field ${\displaystyle V}$ is known only after the force field ${\displaystyle {\mathbf {F}}}$ exerted on the electron is determined.

In my textbook of quantum mechanics, the force field ${\displaystyle {\mathbf {F}}}$ is just the central force caused by the charges of the nucleus and the electron of the hydrogen atom.

The nucleus, a proton, is a particle and so should comply with the Heisenberg uncertainty principle, with uncertain location and velocity. How can we say the electric force field between the proton and the electron is in the form of a central force? Or, why can't we say the gravitational field between them IS in the form of a central force, just as in classical mechanics?

p.s. I'm just new to quantum mechanics so the questions here may be ridiculous and stupid. Forgive me please if any.

Justin545 (talk) 11:01, 13 January 2008 (UTC)

Hi Justin - I'd be more than glad to answer some of your questions. The quote above comes from an article on arXiv that attempts to circumvent what's called a singularity. In general, this is nothing out-of-the-ordinary; for example, in electromagnetism (namely: Quantum electrodynamics) such a singularity exists as well, which would result in infinite polarization of the vacuum around an electric point charge - but (for some reason) a procedure called Renormalization happens to be able to resolve this successfully. For gravity, unfortunately, the singularity is more complex, because the gravitational field itself becomes a source of gravity. Gravitational charges (e.g. point masses) and the resulting gravitational fields are in a dynamic balance, and cannot simply be separated anymore (contrary to electrodynamics, where one can separate the fields from test charges and the electromagnetic field can simply be added through superposition; not so for gravity). Naturally, the singularity of a point mass becomes more complex: in addition to a singularity at the origin, there is an additional singularity (though of a different quality) at the Schwarzschild radius, where space and time get quite weird and counterintuitive. So, the Schrödinger equation that you wrote above still holds as a good approximation for gravity if the field self-interaction could be neglected. Problem is: the gravitational force would, in this case, be so terribly weak that it is futile to even consider. Richard Feynman once calculated (in Acta Physica Polonica, if I remember correctly) that the gravitational force of the proton in a hydrogen atom would shift the quantum mechanical phase of the electron in that same atom by just a few dozen arcseconds ... during 100 lifetimes of our universe!
In order to get meaningfully close to anything that could possibly ever be measured in quantum gravity, to the best of today's knowledge, one would have to go to energies and length scales at which charges/masses and their resulting fields are tightly coupled. So, on first look, the uncertainty principle is just one out of a spectrum of problems (but nevertheless surely is one of them). Hope this helps! Jens Koeplinger (talk) 00:00, 14 January 2008 (UTC)
I know very little about special/general relativity, and nothing about quantum field theory. It seems they are required to truly understand your explanation. I started to study quantum mechanics because of my curiosity about how a quantum computer works, especially entanglement. I found the more I learn, the more questions bother me. Your answer is good guidance for me and it helps. Thank you for your patience and time in answering my questions!
Justin545 (talk) 12:17, 14 January 2008 (UTC)
Ok ... I'm glad my 'sweep' that touches several points of interest seems helpful to you :) - Now, just re-reading what you wrote: "I found the more I learn the more questions bother me." Welcome to the club, you're in good company. You seem interested in Quantum information, Quantum computer, and also Quantum entanglement - from an engineering point of view maybe. If you're interested in the foundations of quantum mechanics, there's one thing I might want to recommend to you studying early on, which is Bell's theorem. Good luck! Jens Koeplinger (talk) 04:22, 17 January 2008 (UTC)

## How Composite Quantum System Relates to Tensor Product?

Consider two noninteracting systems ${\displaystyle A}$ and ${\displaystyle B}$, with respective Hilbert spaces ${\displaystyle H_{A}}$ and ${\displaystyle H_{B}}$. The Hilbert space of the composite system is the tensor product

${\displaystyle H_{A}\otimes H_{B}}$

(8)

My question is why the composite Hilbert space of the two noninteracting systems is their tensor product as (8)?

The tensor product always seems to be invoked to account for composite quantum systems, especially quantum entanglement. It is also a big deal for quantum computation: the massive Hilbert space of the composite system dramatically boosts the power of quantum computers. According to a postulate of quantum mechanics, the Hilbert space of a composite system is the Hilbert space tensor product of the state spaces associated with the subsystems. But it's rare to see any article point out where such a postulate comes from. Is the postulate due to overwhelming experimental evidence? Or is it a derivational consequence of fundamental quantum theory?

It's difficult to convince me of the ability and the power of quantum computation if no one can tell how the composite quantum system relates to the tensor product. Hopefully, the postulate comes as a derivational consequence of quantum theory rather than just from experimental evidence. After reviewing the original EPR paper, I came up with an idea, so I tried to explain it myself. Although the explanation is very likely to be wrong, and may even seem naive and optimistic, I would like to put it here to see if anyone could give some advice or correct my faults. For simplicity, the following assumes all relevant state spaces are finite dimensional.
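As a quick numerical illustration of what the postulate claims, the composite space has dimension equal to the product of the subsystem dimensions; a minimal NumPy sketch (the state vectors below are arbitrary assumptions chosen for the example):

```python
import numpy as np

# Illustrative subsystem states (assumed for this sketch, 2-dim each)
psi_A = np.array([1.0, 0.0])               # |0> in H_A
psi_B = np.array([1.0, 1.0]) / np.sqrt(2)  # (|0>+|1>)/sqrt(2) in H_B

# Postulate: the composite state lives in H_A (x) H_B and is the
# Kronecker (tensor) product of the subsystem states.
psi_AB = np.kron(psi_A, psi_B)

# dim(H_A (x) H_B) = dim(H_A) * dim(H_B)
assert psi_AB.shape[0] == psi_A.shape[0] * psi_B.shape[0]
```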

For a composite system of two particles ${\displaystyle A}$ and ${\displaystyle B}$, the wave function is

${\displaystyle \Psi (x_{1},x_{2})}$

(1)

where ${\displaystyle x_{1}}$ and ${\displaystyle x_{2}}$ are the respective positions of ${\displaystyle A}$ and ${\displaystyle B}$. Similar to the idea of Separation of variables for solving PDEs, discovered by Leibniz, suppose the wave function can be separated into a product of two functions such that

${\displaystyle \Psi (x_{1},x_{2})=U(x_{1})V(x_{2})}$

(2)

As a result, the functions ${\displaystyle U(x_{1})}$ and ${\displaystyle V(x_{2})}$ can be viewed as wave functions for ${\displaystyle A}$ and ${\displaystyle B}$, respectively. Furthermore, ${\displaystyle U(x_{1})}$ and ${\displaystyle V(x_{2})}$ are in the Hilbert spaces ${\displaystyle H_{A}}$ and ${\displaystyle H_{B}}$, respectively. Therefore, the two functions can be expanded in their respective bases such that

${\displaystyle U(x_{1})=\sum _{i}a_{i}|i\rangle _{A}}$

(3)

${\displaystyle V(x_{2})=\sum _{j}b_{j}|j\rangle _{B}}$

(4)

where ${\displaystyle \{|i\rangle _{A}\}}$ and ${\displaystyle \{|j\rangle _{B}\}}$ are the respective bases of ${\displaystyle H_{A}}$ and ${\displaystyle H_{B}}$. Substituting (3) and (4) into (2), we have

${\displaystyle \Psi (x_{1},x_{2})=\left(\sum _{i}a_{i}|i\rangle _{A}\right)\left(\sum _{j}b_{j}|j\rangle _{B}\right)}$

${\displaystyle =\left(a_{1}|1\rangle _{A}+a_{2}|2\rangle _{A}+...+a_{m}|m\rangle _{A}\right)\left(b_{1}|1\rangle _{B}+b_{2}|2\rangle _{B}+...+b_{n}|n\rangle _{B}\right)}$

${\displaystyle =\sum _{i,j}a_{i}b_{j}|i\rangle _{A}|j\rangle _{B}}$

(5)

Since ${\displaystyle |i\rangle _{A}}$ and ${\displaystyle |j\rangle _{B}}$ are in different Hilbert spaces, their multiplication is equivalent to their tensor product. Thus

${\displaystyle |i\rangle _{A}|j\rangle _{B}=|i\rangle _{A}\otimes |j\rangle _{B}}$

(6)

Substituting (6) into (5), we have

${\displaystyle \Psi (x_{1},x_{2})=\sum _{i,j}a_{i}b_{j}\left(|i\rangle _{A}\otimes |j\rangle _{B}\right)=\sum _{i,j}a_{i}b_{j}|ij\rangle _{AB}}$

(7)

Thus (7) is a state, or vector, in the Hilbert space ${\displaystyle H_{A}\otimes H_{B}}$. And (7) can be generalized to systems that involve more than two particles or subsystems. However, this argument is problematic:

1. The method of separation of variables is not guaranteed to yield a solution for every class of PDE. Likewise, not every wave function of form (1) can be separated into a product of two functions of form (2).
2. Even if the wave function (1) could be separated into the form of (2) "mathematically", does it make physical sense to say that the functions ${\displaystyle U(x_{1})}$ and ${\displaystyle V(x_{2})}$ are two "wave functions" describing the component systems of ${\displaystyle \Psi (x_{1},x_{2})}$?
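The coefficient structure of the separable state (7) is easy to check numerically: the composite coefficients are exactly the products a_i b_j. A minimal sketch (the coefficient values are arbitrary assumptions, normalized for convenience):

```python
import numpy as np

# Illustrative expansion coefficients (assumptions) for U and V,
# as in equations (3) and (4).
a = np.array([0.6, 0.8])   # a_i, coefficients of U in the basis of H_A
b = np.array([1.0, 0.0])   # b_j, coefficients of V in the basis of H_B

# Equation (7): the separable composite state has coefficients a_i*b_j,
# i.e. the flattened outer product of the coefficient vectors.
psi = np.kron(a, b)
assert np.allclose(psi, np.outer(a, b).flatten())
```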

Well, I am neither a mathematician nor a physicist. I don't mean to offend or mislead someone with my words. I am just hoping to get more clue about answering the question "How Composite Quantum System Relates to Tensor Product?" with this discussion. Thanks! - Justin545 (talk) 00:50, 23 February 2008 (UTC)

Truth is a very difficult concept (with apologies to Alan Clark MP (deceased))

All of your math looks right. As you say, some states of the composite system can be written in this way and some can't; those that can are called separable. It's correct to refer to U and V as wave functions, and in fact all wave functions are like that. You can't describe the whole universe with a wave function, only separable parts of it.
There's nothing quantum mechanical about the idea of phase space or separability or combining systems by taking the tensor product. For a classical analogy, take a system of three classical bits. This system has ${\displaystyle 2^{3}=8}$ states, which can be written ${\displaystyle |000\rangle }$, ${\displaystyle |001\rangle }$, and so on. An example of a computational step on these bits might be "flip the third bit if at least one of the first two is set", which can be written with the transition matrix
${\displaystyle \scriptstyle \left({\begin{array}{cccccccc}1&&&&&&&\\&1&&&&&&\\&&&1&&&&\\&&1&&&&&\\&&&&&1&&\\&&&&1&&&\\&&&&&&&1\\&&&&&&1&\end{array}}\right)}$
(all other entries zero). You can think of this matrix as acting (by left-multiplication) on a state vector, which is an 8-component column vector that has a 1 at the index corresponding to the state of the system and zeroes everywhere else. (So ${\displaystyle |000\rangle }$ is ${\displaystyle (1,0,0,0,0,0,0,0)^{t}}$, ${\displaystyle |001\rangle }$ is ${\displaystyle (0,1,0,0,0,0,0,0)^{t}}$, and so on.) Or, more generally, you can think of it as acting on a probability distribution over possible states of the computer, for example ${\displaystyle {\frac {1}{3}}|001\rangle +{\frac {2}{3}}|011\rangle =(0,{\frac {1}{3}},0,{\frac {2}{3}},0,0,0,0)^{t}}$. The probabilities all have to be between 0 and 1 and they must sum to 1. If you only allow reversible computations, then the only matrices that preserve that property are the permutation matrices.
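The matrix above can also be generated programmatically from the stated rule; a NumPy sketch written for this discussion:

```python
import numpy as np

# Build the 8x8 transition matrix for "flip the third bit if at least
# one of the first two bits is set", acting on basis states |abc>.
T = np.zeros((8, 8), dtype=int)
for state in range(8):                       # index encodes bits abc
    a, b, c = (state >> 2) & 1, (state >> 1) & 1, state & 1
    new_c = c ^ 1 if (a or b) else c
    new_state = (a << 2) | (b << 1) | new_c
    T[new_state, state] = 1                  # column `state` -> `new_state`

# |011> is index 3; the rule flips its third bit, giving |010> (index 2).
v = np.zeros(8); v[3] = 1
assert np.argmax(T @ v) == 2
```

Applying the step twice flips the conditional bit back, so `T @ T` is the identity, confirming the step is reversible.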
If you have two of these computers, you can describe them as a single system using the 64-dimensional tensor product of the individual 8-dimensional phase spaces, for which the natural basis is ${\displaystyle |000\rangle |000\rangle ,|000\rangle |001\rangle ,\ldots ,|000\rangle |111\rangle ,|001\rangle |000\rangle ,\ldots }$. As long as the two subsystems don't interact, the composite state can be written as a tensor product of the states of the subsystems (as you did above), and the transitions can be written as the matrix tensor product of the transitions of the subsystems. If the subsystems do interact (e.g. a bit in one is flipped or not flipped depending on a bit in the other), then the subsystems may become correlated, in which case they can't be written this way any more.
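A sketch of that composition rule, with illustrative transition matrices (assumptions chosen only to make the check easy: one machine idles, the other cycles through its states):

```python
import numpy as np

# Illustrative single-machine transitions (assumptions): machine 1 idles,
# machine 2 performs a cyclic increment of its 8 states.
T1 = np.eye(8)
T2 = np.roll(np.eye(8), 1, axis=0)   # maps state i to state i+1 mod 8

T = np.kron(T1, T2)                  # 64x64 composite transition

s1 = np.zeros(8); s1[0] = 1          # machine 1 in |000>
s2 = np.zeros(8); s2[5] = 1          # machine 2 in |101>

# Noninteracting subsystems: evolving the product state is the same as
# taking the product of the individually evolved states.
assert np.allclose(T @ np.kron(s1, s2), np.kron(T1 @ s1, T2 @ s2))
```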
To get quantum computing from this, all you do is replace the classical probabilities which sum to 1 with complex numbers whose squared absolute values sum to 1. Because the square norm is much more symmetric (the space of valid vectors is a sphere instead of a simplex), there are a lot more reversible computations you can do; in fact, any unitary matrix is a valid computation. Permutation matrices are unitary matrices, so classical computations are a subset of quantum computations. The quantum states that would be called "correlated" classically are called "entangled" instead. I do think a new name is justified because there is something new in the quantum case, namely violation of Bell's inequality, but the mathematics is the same.
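A quick numerical check of these claims (the Hadamard matrix is a standard quantum gate, used here only as an example of a unitary that is not a permutation):

```python
import numpy as np

# A permutation (classical reversible step) and the Hadamard gate, a
# unitary with no classical permutation counterpart.
P = np.roll(np.eye(4), 1, axis=0)
H = np.array([[1.0, 1.0], [1.0, -1.0]]) / np.sqrt(2)

# Both are unitary: U^dagger U = I, so classical reversible computation
# is a subset of quantum computation.
for U in (P, H):
    assert np.allclose(U.conj().T @ U, np.eye(U.shape[0]))

# Amplitudes replace probabilities; squared magnitudes still sum to 1.
psi = H @ np.array([1.0, 0.0])
assert np.isclose(np.sum(np.abs(psi) ** 2), 1.0)
```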
It's unfortunately true that a lot of introductions to quantum computing don't explain the connection to classical computing and often attribute the extra power of quantum computers to the exponential size of the phase space or to entanglement. This explanation doesn't make much sense given that this property is inherited from the classical case. (Edit: I think it was a mistake to mention entanglement here since there are different notions of entanglement, and it's reasonable to relate quantum computing to entanglement in some senses.) The real nature of the extra power of quantum computers isn't well understood. There seems to be a class of problems in between P and NP which is efficiently solvable on quantum but not classical computers. It includes interesting number-theoretic problems like factoring and discrete logarithm, and it may be related to public-key cryptography somehow. To my knowledge the only interesting quantum algorithm outside that class is Grover's algorithm, which is often described as "database search" but is actually a SAT solver. It's faster (in the worst case) than the best known classical algorithm, but still very slow. No one has found an efficient quantum algorithm for any NP-complete problem, and it seems likely that there aren't any. In other words, a quantum computer's power seems to be very limited compared to the naive idea of a parallel-universe computer that does exponentially many calculations in parallel, since such a computer could solve NP-complete problems efficiently (basically by the definition of NP).
If you don't like the Hilbert space and the tensor products and the exponential size, you can look at the path integral formulation of quantum mechanics. It coexists with the Hilbert space approach because a lot of problems are much easier to solve in one than the other. You might also be interested in this paper. -- BenRG (talk) 19:34, 23 February 2008 (UTC)
Your answer is pretty clear and understandable, especially when you are explaining the transition matrix of the 3-bit computer. My understanding of your answer is that interacting = entangled = correlated and non-interacting = separable. But some new problems appeared after reading:
1. Consider the two particles ${\displaystyle A}$ and ${\displaystyle B}$ in my question above. When they are entangled, or non-separable, the wave function ${\displaystyle \Psi (x_{1},x_{2})}$ can NOT be written as the product of separated wave functions ${\displaystyle U(x_{1})V(x_{2})}$. Therefore, we may NOT write
${\displaystyle \Psi (x_{1},x_{2})=\sum _{i,j}a_{i}b_{j}\left(|i\rangle _{A}\otimes |j\rangle _{B}\right)}$
Does that mean the entangled state of the composite system is NOT in ${\displaystyle H_{A}\otimes H_{B}}$? However, we can still see several examples of entanglement where the state of the composite system is written in terms of the basis of ${\displaystyle H_{A}\otimes H_{B}}$, such as the following entangled state:
${\displaystyle |\Psi \rangle ={1 \over {\sqrt {2}}}{\bigg (}|0\rangle _{A}\otimes |1\rangle _{B}-|1\rangle _{A}\otimes |0\rangle _{B}{\bigg )}}$
2. How do we determine whether two particles are composite or non-composite? Can we say the two particles are two non-composite systems when they are far away from each other, and one composite system when they are very close to each other, like the electron and the proton in a hydrogen atom?
Well, I do not quite understand quantum computing. I think a quantum computer can only solve decision problems such as SAT, but not problems which are more like programming and need many steps of calculation. There seem to be many useful problems that belong to NP-complete and are not likely to be solved by a quantum computer. It sounds somewhat disappointing. We don't know if the quantum computer is a useful and universal machine even if we can really make a 500-qubit (or more than 500-qubit) quantum computer. - Justin545 (talk) 03:51, 24 February 2008 (UTC)
Apologies for neglecting this thread. On your first point, the state space IS ${\displaystyle H_{A}\otimes H_{B}}$, but you can't write arbitrary elements of that space as a sum of products of elements of the subspaces weighted by aibj. You can write arbitrary elements with arbitrary weights cij. There are no ${\displaystyle a_{0},a_{1},b_{0},b_{1}\in \mathbb {C} }$ such that ${\displaystyle a_{0}b_{1}=-a_{1}b_{0}=1/{\sqrt {2}}}$ and ${\displaystyle a_{0}b_{0}=a_{1}b_{1}=0\,\!}$, but there are ${\displaystyle c_{00},c_{01},c_{10},c_{11}\in \mathbb {C} }$ such that ${\displaystyle c_{01}=-c_{10}=1/{\sqrt {2}}}$ and ${\displaystyle c_{00}=c_{11}=0\,\!}$. On the second point, particles that are causally interacting like the electron and proton need to be treated together, and particles that aren't causally interacting can usually be treated separately even if they're nonclassically entangled. The only case where the entanglement of noninteracting particles matters is if you do measurements on both particles and later compare the results; then you can get nonclassical correlations. If you're only doing measurements on one particle then you can always describe it without reference to the other. If the two particles are unentangled then your particle can be represented by a state vector; otherwise it has to be represented by a density matrix. Measuring a property of your particle destroys its entanglement with the other particle (in that property), so once you've measured all the properties that the density matrix says you're classically uncertain about, you can again represent your particle by a state vector. Incidentally, I shouldn't have said that entanglement is just the quantum name for correlation, since it's often used to mean just the nonclassical part of the correlation (the part that violates Bell's inequality).
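The impossibility argument above can be phrased as a rank condition: arranging the weights c_ij into a matrix, the state is separable exactly when that matrix has rank 1, since c_ij = a_i b_j is a rank-1 outer product. A sketch:

```python
import numpy as np

# Reshape the weights c_ij of a two-qubit state into a 2x2 matrix; the
# state is separable iff that matrix has rank 1 (c_ij = a_i * b_j).
singlet = np.array([0.0, 1.0, -1.0, 0.0]) / np.sqrt(2)  # c_01 = -c_10
product = np.kron([1.0, 0.0], [0.0, 1.0])               # |0>_A |1>_B

assert np.linalg.matrix_rank(singlet.reshape(2, 2)) == 2  # entangled
assert np.linalg.matrix_rank(product.reshape(2, 2)) == 1  # separable
```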
Quantum computers are universal; they can solve the same problems as classical computers with the same efficiency as classical computers, in terms of big-O notation. But there's not much point using a quantum computer to run a classical algorithm, especially because the constant factor will probably be enormously higher. There are some specific problems for which specifically quantum algorithms are known, but, as you say, they mostly don't seem very useful. There's a big exception that I forgot to mention, which is simulation of quantum systems. I don't know anything about this, but I think that quantum computers could potentially revolutionize fields like lattice QCD. Also, a large quantum computer is a great test of the principles of quantum mechanics; successful factorization of the RSA challenge numbers would be a dramatic confirmation of quantum mechanics and would definitively falsify a large class of hidden variable theories, and for that reason alone I think it's an experiment worth doing. -- BenRG (talk) 16:20, 27 February 2008 (UTC)
Indeed, you have done a pretty good job of proving that the entangled state below

${\displaystyle |\Psi \rangle ={1 \over {\sqrt {2}}}{\bigg (}|0\rangle _{A}\otimes |1\rangle _{B}-|1\rangle _{A}\otimes |0\rangle _{B}{\bigg )}}$

(9)

is not separable, i.e. the state can not be written in the form of (7), since you have proven there are no numbers ${\displaystyle a_{0},a_{1},b_{0},b_{1}\in \mathbb {C} }$ which can satisfy the conditions you listed above. It is understandable and clear. I apologize for obscuring my question last time. I will attempt to clarify my question again.
We have proven that the state of a composite system is in ${\displaystyle H_{A}\otimes H_{B}}$ if the state is separable. That's because when the state of a composite system is separable, the corresponding wave function can be written in the form of (2), which implies (7) is true, and therefore the state of the composite system is in ${\displaystyle H_{A}\otimes H_{B}}$. Moreover, we have also proven that a separable state has ${\displaystyle m*n}$ basis vectors, which is just a basic property of the tensor product, when ${\displaystyle H_{A}}$ has ${\displaystyle m}$ basis vectors and ${\displaystyle H_{B}}$ has ${\displaystyle n}$ basis vectors.
However, it is not enough to say "all" states of composite systems are in ${\displaystyle H_{A}\otimes H_{B}}$. We have only proven that all "separable" states are in ${\displaystyle H_{A}\otimes H_{B}}$ (like what I did from step (1) to (7)); we "have not" proven that all "non-separable" (entangled) states are in ${\displaystyle H_{A}\otimes H_{B}}$. Since any state of a composite system is either separable or non-separable (entangled), we can not say "all" states are in ${\displaystyle H_{A}\otimes H_{B}}$ until we prove that "both" separable and non-separable (entangled) states are in ${\displaystyle H_{A}\otimes H_{B}}$.
Then my question last time was: "How do we prove that all non-separable (entangled) states of any composite system are also in ${\displaystyle H_{A}\otimes H_{B}}$?" For example, if we take a look at the non-separable (entangled) state (9), we find the state is in ${\displaystyle H_{A}\otimes H_{B}}$ since its basis vectors ${\displaystyle |0\rangle _{A}\otimes |1\rangle _{B}}$ and ${\displaystyle |1\rangle _{A}\otimes |0\rangle _{B}}$ are in ${\displaystyle H_{A}\otimes H_{B}}$. But I have no idea where the two basis vectors ${\displaystyle |0\rangle _{A}\otimes |1\rangle _{B}}$ and ${\displaystyle |1\rangle _{A}\otimes |0\rangle _{B}}$ come from. The state (9) is denoted in bra-ket notation, but I have no idea what its corresponding wave function looks like. Can we use a similar approach (like what I did from step (1) to (7)) to prove that all non-separable (entangled) states of any composite system are also in ${\displaystyle H_{A}\otimes H_{B}}$? If we can prove it, we will be able to say "all" states of composite systems are in ${\displaystyle H_{A}\otimes H_{B}}$, and we can also explain how a wave function for a separable or non-separable state relates to its bra-ket notation. - Justin545 (talk) 01:42, 3 March 2008 (UTC)
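One concrete way to see where the basis vectors come from is to pick coordinates: with |0⟩ = (1,0) and |1⟩ = (0,1) (a coordinate choice assumed for this sketch), the products become Kronecker products and state (9) is explicitly a vector in the 4-dimensional composite space, separable or not:

```python
import numpy as np

# Coordinate choice (an assumption): |0> = (1,0), |1> = (0,1).
ket0 = np.array([1.0, 0.0])
ket1 = np.array([0.0, 1.0])

# The basis vectors of H_A (x) H_B are Kronecker products by definition.
basis_01 = np.kron(ket0, ket1)   # |0>_A (x) |1>_B
basis_10 = np.kron(ket1, ket0)   # |1>_A (x) |0>_B

# State (9): a linear combination of those basis vectors, hence a vector
# in the 4-dimensional composite space whether or not it is separable.
psi = (basis_01 - basis_10) / np.sqrt(2)
assert psi.shape[0] == 4
```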
Turing proved that the Turing machine is universal. If we can simulate a Turing machine on a quantum computer, we can prove a quantum computer is at least as powerful as a Turing machine and therefore universal. However, as you said, there's not much point using a quantum computer to run a classical algorithm; it would be necessary to combine a quantum computer with a classical computer to achieve a function similar to a Turing machine.
I agree with you that it's an experiment worth doing. But I am afraid that quantum computers are not scalable (well, I'm not sure). Although D-Wave has announced a working prototype of a 16-qubit (or even more qubits later) quantum computer, there seem to be some coupling problems with the prototype. It sounds like D-Wave simply put four 4-qubit quantum computers together. I can not see any significant advance when we are talking about whether quantum computers are scalable. On the other hand, keeping the system entangled is also difficult, especially when more qubits are involved, which would limit the time available to do quantum operations and therefore limit the complexity of the problems it can solve. Some experts have predicted useful quantum computers will appear after one or two decades. Is it just a matter of time? Well, I am not so sure. By contrast, DNA computers are more stable than quantum computers, but DNA computing does not provide any new capabilities from the standpoint of computational complexity theory. It seems only quantum computers have such potential. Other than quantum computers and DNA computers, aren't there any other natural analogues to quantum computers that are also stable enough? That question drives me to study why quantum computers are so powerful from a mathematical point of view. - Justin545 (talk) 08:56, 3 March 2008 (UTC)

## Chain Rule and Higher Derivative

Let

${\displaystyle z=f(x,y)}$
${\displaystyle x=g(s,t)}$
${\displaystyle y=h(s,t)}$

Then

${\displaystyle {\frac {\partial z}{\partial t}}={\frac {\partial z}{\partial x}}{\frac {\partial x}{\partial t}}+{\frac {\partial z}{\partial y}}{\frac {\partial y}{\partial t}}=\left({\frac {\partial }{\partial x}}{\frac {\partial x}{\partial t}}+{\frac {\partial }{\partial y}}{\frac {\partial y}{\partial t}}\right)z}$

(1)

${\displaystyle {\frac {\partial ^{2}z}{\partial t^{2}}}={\frac {\partial }{\partial t}}{\frac {\partial z}{\partial t}}={\frac {\partial }{\partial t}}\left[\left({\frac {\partial }{\partial x}}{\frac {\partial x}{\partial t}}+{\frac {\partial }{\partial y}}{\frac {\partial y}{\partial t}}\right)z\right]=\left({\frac {\partial }{\partial x}}{\frac {\partial x}{\partial t}}+{\frac {\partial }{\partial y}}{\frac {\partial y}{\partial t}}\right){\frac {\partial z}{\partial t}}}$

(2)

Substituting (1) into (2),

${\displaystyle {\frac {\partial ^{2}z}{\partial t^{2}}}=\left({\frac {\partial }{\partial x}}{\frac {\partial x}{\partial t}}+{\frac {\partial }{\partial y}}{\frac {\partial y}{\partial t}}\right)\left({\frac {\partial }{\partial x}}{\frac {\partial x}{\partial t}}+{\frac {\partial }{\partial y}}{\frac {\partial y}{\partial t}}\right)z}$

(3)

${\displaystyle {\frac {\partial ^{2}z}{\partial t^{2}}}=\left[{\frac {\partial ^{2}}{\partial x^{2}}}\left({\frac {\partial x}{\partial t}}\right)^{2}+{\frac {\partial ^{2}}{\partial x\partial y}}{\frac {\partial x\partial y}{\partial t^{2}}}+{\frac {\partial ^{2}}{\partial y\partial x}}{\frac {\partial y\partial x}{\partial t^{2}}}+{\frac {\partial ^{2}}{\partial y^{2}}}\left({\frac {\partial y}{\partial t}}\right)^{2}\right]z}$

(4)

${\displaystyle {\frac {\partial ^{2}z}{\partial t^{2}}}={\frac {\partial ^{2}z}{\partial x^{2}}}\left({\frac {\partial x}{\partial t}}\right)^{2}+{\frac {\partial ^{2}z}{\partial x\partial y}}{\frac {\partial x\partial y}{\partial t^{2}}}+{\frac {\partial ^{2}z}{\partial y\partial x}}{\frac {\partial y\partial x}{\partial t^{2}}}+{\frac {\partial ^{2}z}{\partial y^{2}}}\left({\frac {\partial y}{\partial t}}\right)^{2}}$

(5)

Is every step above correct? - Justin545 (talk) 07:07, 27 February 2008 (UTC)

No.
${\displaystyle {\frac {\partial z}{\partial x}}{\frac {\partial x}{\partial t}}}$
is not the same as
${\displaystyle \left({\frac {\partial }{\partial x}}{\frac {\partial x}{\partial t}}\right)z.}$
For example, if z = x = t, the former evaluates to 1·1 = 1, and the latter to z = 0. Think of ${\displaystyle {\tfrac {\partial }{\partial x}}}$ as an operator, and abbreviate it as D. Also abbreviate ${\displaystyle {\tfrac {\partial x}{\partial t}}}$ as U. Then in your first line of equations you replaced Dz × U by DU × z.  --Lambiam 09:46, 27 February 2008 (UTC)
Reply to Lambiam: You are right indeed. According to your answer, does that mean (5) is also an incorrect result? What is the correct answer if (5) is incorrect? (i.e. what is ${\displaystyle {\frac {\partial ^{2}z}{\partial t^{2}}}}$ equal to?) Thanks! - Justin545 (talk) 12:15, 28 February 2008 (UTC)
If you take f(x,y) = x and g(s,t) = t2, then z = t2, and the result should be 2. However, the right-hand side of (5) evaluates to 0. I think two terms have gone AWOL. Applying the product rule,
${\displaystyle {\frac {\partial }{\partial t}}\left({\frac {\partial z}{\partial x}}\cdot {\frac {\partial x}{\partial t}}\right)={\frac {\partial }{\partial t}}{\frac {\partial z}{\partial x}}\cdot {\frac {\partial x}{\partial t}}+{\frac {\partial z}{\partial x}}\cdot {\frac {\partial }{\partial t}}{\frac {\partial x}{\partial t}}.}$
The second term seems to be missing, and likewise with x replaced by y. Taking them together, the following should be added to the right-hand side of (5):
${\displaystyle +{\frac {\partial z}{\partial x}}\cdot {\frac {\partial ^{2}x}{{\partial t}^{2}}}+{\frac {\partial z}{\partial y}}\cdot {\frac {\partial ^{2}y}{{\partial t}^{2}}}.}$
--Lambiam 21:59, 28 February 2008 (UTC)
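The corrected formula, i.e. the right-hand side of (5) plus the two terms above, can be verified symbolically; a SymPy sketch with concrete illustrative choices of f, g and h (the functions are assumptions, picked only to make the check nontrivial, and the two cross terms of (5) are written together as 2·f_xy·x_t·y_t):

```python
import sympy as sp

s, t, u, v = sp.symbols('s t u v')

# Illustrative choices (assumptions, not from the thread):
F = u**3 + u*v          # f(x, y) written in its own variables u, v
x = t**2 + s            # g(s, t)
y = s*t                 # h(s, t)
z = F.subs({u: x, v: y})

# (5) with cross terms combined as 2*f_xy*x_t*y_t, plus the two
# correction terms f_x*x_tt and f_y*y_tt identified above:
rhs = (sp.diff(F, u, 2).subs({u: x, v: y}) * sp.diff(x, t)**2
       + 2*sp.diff(F, u, v).subs({u: x, v: y}) * sp.diff(x, t)*sp.diff(y, t)
       + sp.diff(F, v, 2).subs({u: x, v: y}) * sp.diff(y, t)**2
       + sp.diff(F, u).subs({u: x, v: y}) * sp.diff(x, t, 2)
       + sp.diff(F, v).subs({u: x, v: y}) * sp.diff(y, t, 2))

# Direct differentiation agrees with the corrected chain-rule formula.
assert sp.simplify(sp.diff(z, t, 2) - rhs) == 0
```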

${\displaystyle {\frac {\partial }{\partial t}}\left({\frac {\partial z}{\partial x}}\cdot {\frac {\partial x}{\partial t}}\right)={\frac {\partial }{\partial t}}{\frac {\partial z}{\partial x}}\cdot {\frac {\partial x}{\partial t}}+{\frac {\partial z}{\partial x}}\cdot {\frac {\partial }{\partial t}}{\frac {\partial x}{\partial t}}.}$

(6)

Then

${\displaystyle {\frac {\partial }{\partial t}}\left({\frac {\partial z}{\partial x}}\cdot {\frac {\partial x}{\partial t}}\right)={\frac {\partial }{\partial t}}{\frac {\partial z}{\partial x}}\cdot {\frac {\partial x}{\partial t}}+{\frac {\partial z}{\partial x}}\cdot {\frac {\partial ^{2}x}{\partial t^{2}}}}$

(7)

But what does ${\displaystyle {\tfrac {\partial }{\partial t}}{\tfrac {\partial z}{\partial x}}\cdot {\tfrac {\partial x}{\partial t}}}$ evaluate to? Does it evaluate to 0 or to the rest of the terms on the right-hand side of (5)? Thanks! - Justin545 (talk) 03:56, 29 February 2008 (UTC)
The latter (or, more precisely, half of these terms – there is also the same with x replaced by y).  --Lambiam 00:50, 1 March 2008 (UTC)
According to our discussion above, we may conclude that

${\displaystyle {\frac {\partial ^{2}z}{\partial t^{2}}}={\frac {\partial ^{2}z}{\partial x^{2}}}\left({\frac {\partial x}{\partial t}}\right)^{2}+{\frac {\partial ^{2}z}{\partial x\partial y}}{\frac {\partial x\partial y}{\partial t^{2}}}+{\frac {\partial ^{2}z}{\partial y\partial x}}{\frac {\partial y\partial x}{\partial t^{2}}}+{\frac {\partial ^{2}z}{\partial y^{2}}}\left({\frac {\partial y}{\partial t}}\right)^{2}+{\frac {\partial z}{\partial x}}{\frac {\partial ^{2}x}{{\partial t}^{2}}}+{\frac {\partial z}{\partial y}}{\frac {\partial ^{2}y}{{\partial t}^{2}}}}$

(8)

where

${\displaystyle z=f(x,y)}$

(9)

${\displaystyle x=g(s,t)}$

(10)

${\displaystyle y=h(s,t)}$

(11)

The conclusion seems to be correct, since I can now verify it with a related problem in my quantum mechanics textbook:
Show that (12) is true

${\displaystyle {\frac {\partial ^{2}\psi }{\partial x^{2}}}+{\frac {\partial ^{2}\psi }{\partial y^{2}}}={\frac {\partial ^{2}\psi }{\partial r^{2}}}+{\frac {1}{r}}{\frac {\partial \psi }{\partial r}}+{\frac {1}{r^{2}}}{\frac {\partial ^{2}\psi }{\partial \phi ^{2}}}}$

(12)

where

${\displaystyle \psi =\Psi (x,y)}$

(13)

${\displaystyle x=r\cos \phi }$

(14)

${\displaystyle y=r\sin \phi }$

(15)

Eq. (14) and (15) also imply

${\displaystyle x^{2}+y^{2}=r^{2}}$

(16)

Fortunately and finally, I can solve the problem above with your great help! Although I don't understand at the moment how ${\displaystyle {\tfrac {\partial }{\partial t}}{\tfrac {\partial z}{\partial x}}\cdot {\tfrac {\partial x}{\partial t}}}$ becomes half of the rest of the terms on the right-hand side of (5), I think I will figure it out later or open a new question here. Our discussion may end here. The following is just how I verified our conclusion with the problem, included to make the thread more complete; you can simply skip the stuff below. Thanks for your help :-)
My proof of (12) is as below:
Replace ${\displaystyle z}$ by ${\displaystyle \psi }$, ${\displaystyle t}$ by ${\displaystyle r}$, ${\displaystyle s}$ by ${\displaystyle \phi }$, ${\displaystyle f}$ by ${\displaystyle \Psi }$ in (8), (9), (10) and (11). We have

${\displaystyle {\frac {\partial ^{2}\psi }{\partial r^{2}}}={\frac {\partial ^{2}\psi }{\partial x^{2}}}\left({\frac {\partial x}{\partial r}}\right)^{2}+{\frac {\partial ^{2}\psi }{\partial x\partial y}}{\frac {\partial x}{\partial r}}{\frac {\partial y}{\partial r}}+{\frac {\partial ^{2}\psi }{\partial y\partial x}}{\frac {\partial y}{\partial r}}{\frac {\partial x}{\partial r}}+{\frac {\partial ^{2}\psi }{\partial y^{2}}}\left({\frac {\partial y}{\partial r}}\right)^{2}+{\frac {\partial \psi }{\partial x}}{\frac {\partial ^{2}x}{{\partial r}^{2}}}+{\frac {\partial \psi }{\partial y}}{\frac {\partial ^{2}y}{{\partial r}^{2}}}}$

(17)

${\displaystyle \psi =\Psi (x,y)}$

(18)

${\displaystyle x=g(\phi ,r)}$

(19)

${\displaystyle y=h(\phi ,r)}$

(20)

Evaluate derivatives

${\displaystyle {\frac {\partial x}{\partial r}}=\cos \phi }$,    ${\displaystyle \left({\frac {\partial x}{\partial r}}\right)^{2}=\cos ^{2}\phi }$,    ${\displaystyle {\frac {\partial ^{2}x}{\partial r^{2}}}=0}$,

${\displaystyle {\frac {\partial y}{\partial r}}=\sin \phi }$,    ${\displaystyle \left({\frac {\partial y}{\partial r}}\right)^{2}=\sin ^{2}\phi }$,    ${\displaystyle {\frac {\partial ^{2}y}{\partial r^{2}}}=0}$,

${\displaystyle {\frac {\partial x}{\partial r}}{\frac {\partial y}{\partial r}}={\frac {\partial y}{\partial r}}{\frac {\partial x}{\partial r}}=\sin \phi \cos \phi }$

(21)

Substitute (21) into (17)

${\displaystyle {\frac {\partial ^{2}\psi }{\partial r^{2}}}={\frac {\partial ^{2}\psi }{\partial x^{2}}}\cos ^{2}\phi +{\frac {\partial ^{2}\psi }{\partial x\partial y}}\sin \phi \cos \phi +{\frac {\partial ^{2}\psi }{\partial y\partial x}}\sin \phi \cos \phi +{\frac {\partial ^{2}\psi }{\partial y^{2}}}\sin ^{2}\phi +{\frac {\partial \psi }{\partial x}}\cdot 0+{\frac {\partial \psi }{\partial y}}\cdot 0}$

(22)

${\displaystyle {\frac {\partial ^{2}\psi }{\partial r^{2}}}=\cos ^{2}\phi {\frac {\partial ^{2}\psi }{\partial x^{2}}}+2\sin \phi \cos \phi {\frac {\partial ^{2}\psi }{\partial x\partial y}}+\sin ^{2}\phi {\frac {\partial ^{2}\psi }{\partial y^{2}}}}$

(23)

By the same procedure used from (17) to (23), we have

${\displaystyle {\frac {\partial ^{2}\psi }{\partial \phi ^{2}}}=r^{2}\sin ^{2}\phi {\frac {\partial ^{2}\psi }{\partial x^{2}}}-2r^{2}\sin \phi \cos \phi {\frac {\partial ^{2}\psi }{\partial x\partial y}}+r^{2}\cos ^{2}\phi {\frac {\partial ^{2}\psi }{\partial y^{2}}}-r\cos \phi {\frac {\partial \psi }{\partial x}}-r\sin \phi {\frac {\partial \psi }{\partial y}}}$

(24)

Multiply (23) by ${\displaystyle r^{2}}$

${\displaystyle r^{2}{\frac {\partial ^{2}\psi }{\partial r^{2}}}=r^{2}\cos ^{2}\phi {\frac {\partial ^{2}\psi }{\partial x^{2}}}+2r^{2}\sin \phi \cos \phi {\frac {\partial ^{2}\psi }{\partial x\partial y}}+r^{2}\sin ^{2}\phi {\frac {\partial ^{2}\psi }{\partial y^{2}}}}$

(25)

Add (24) to (25)

${\displaystyle r^{2}{\frac {\partial ^{2}\psi }{\partial r^{2}}}+{\frac {\partial ^{2}\psi }{\partial \phi ^{2}}}=r^{2}\left(\sin ^{2}\phi +\cos ^{2}\phi \right){\frac {\partial ^{2}\psi }{\partial x^{2}}}+r^{2}\left(\sin ^{2}\phi +\cos ^{2}\phi \right){\frac {\partial ^{2}\psi }{\partial y^{2}}}-r\cos \phi {\frac {\partial \psi }{\partial x}}-r\sin \phi {\frac {\partial \psi }{\partial y}}}$

(26)

${\displaystyle r^{2}{\frac {\partial ^{2}\psi }{\partial r^{2}}}+{\frac {\partial ^{2}\psi }{\partial \phi ^{2}}}=r^{2}{\frac {\partial ^{2}\psi }{\partial x^{2}}}+r^{2}{\frac {\partial ^{2}\psi }{\partial y^{2}}}-r\left(\cos \phi {\frac {\partial \psi }{\partial x}}+\sin \phi {\frac {\partial \psi }{\partial y}}\right)}$

(27)

Divide (27) by ${\displaystyle r^{2}}$

${\displaystyle {\frac {\partial ^{2}\psi }{\partial r^{2}}}+{\frac {1}{r^{2}}}{\frac {\partial ^{2}\psi }{\partial \phi ^{2}}}={\frac {\partial ^{2}\psi }{\partial x^{2}}}+{\frac {\partial ^{2}\psi }{\partial y^{2}}}-{\frac {1}{r}}\left(\cos \phi {\frac {\partial \psi }{\partial x}}+\sin \phi {\frac {\partial \psi }{\partial y}}\right)}$

(28)

By the chain rule, we know

${\displaystyle {\frac {\partial \psi }{\partial r}}={\frac {\partial \psi }{\partial x}}{\frac {\partial x}{\partial r}}+{\frac {\partial \psi }{\partial y}}{\frac {\partial y}{\partial r}}=\cos \phi {\frac {\partial \psi }{\partial x}}+\sin \phi {\frac {\partial \psi }{\partial y}}}$

(29)

Substitute (29) into (28)

${\displaystyle {\frac {\partial ^{2}\psi }{\partial r^{2}}}+{\frac {1}{r^{2}}}{\frac {\partial ^{2}\psi }{\partial \phi ^{2}}}={\frac {\partial ^{2}\psi }{\partial x^{2}}}+{\frac {\partial ^{2}\psi }{\partial y^{2}}}-{\frac {1}{r}}{\frac {\partial \psi }{\partial r}}}$

(30)

${\displaystyle {\frac {\partial ^{2}\psi }{\partial x^{2}}}+{\frac {\partial ^{2}\psi }{\partial y^{2}}}={\frac {\partial ^{2}\psi }{\partial r^{2}}}+{\frac {1}{r}}{\frac {\partial \psi }{\partial r}}+{\frac {1}{r^{2}}}{\frac {\partial ^{2}\psi }{\partial \phi ^{2}}}}$

(31)

Q.E.D. - Justin545 (talk) 07:19, 4 March 2008 (UTC)
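The whole identity (12) can likewise be spot-checked with a computer algebra system. A sketch with SymPy, using an arbitrary concrete test function in place of ${\displaystyle \psi }$:

```python
import sympy as sp

x, y, r, phi = sp.symbols('x y r phi', positive=True)
polar = [(x, r * sp.cos(phi)), (y, r * sp.sin(phi))]

# An arbitrary concrete test function in place of psi(x, y).
psi = x**2 * y

# Left-hand side of (12): the Cartesian Laplacian, rewritten in polar form.
lhs = (sp.diff(psi, x, 2) + sp.diff(psi, y, 2)).subs(polar, simultaneous=True)

# Right-hand side of (12), computed from psi(r cos(phi), r sin(phi)).
P = psi.subs(polar, simultaneous=True)
rhs = sp.diff(P, r, 2) + sp.diff(P, r) / r + sp.diff(P, phi, 2) / r**2

assert sp.simplify(lhs - rhs) == 0
```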
The dy/dx notation is not exactly a fraction, although often you can treat it as a fraction (chain rule, integration by substitution) --wj32 t/c 05:47, 28 February 2008 (UTC)
Reply to wj32t/c: That is true. I think (well, I'm just a layman) Leibniz notation is somewhat confusing. Even after learning derivatives for many years, I never saw (yes, I am not experienced enough) a formal introduction stating when it can be treated as a fraction and when it cannot. It seems to be a mystery to me! :) - Justin545 (talk) 12:15, 28 February 2008 (UTC)

## Quantum Mechanics: Operator and Eigenvalue

For a given wave function ${\displaystyle \psi (x)}$ of a particle at position ${\displaystyle x}$, the momentum ${\displaystyle p}$ of the particle is the eigenvalue of (1)

${\displaystyle {\hat {P}}\psi (x)=p\psi (x)}$

(1)

where

${\displaystyle {\hat {P}}=-i\hbar {\frac {\partial }{\partial x}}}$

(2)

For example, if the wave function of a particle is

${\displaystyle \psi (x)=\exp \left(i{\frac {p_{0}}{\hbar }}x\right)=e^{i(p_{0}/\hbar )x}}$

(3)

then the corresponding momentum ${\displaystyle p}$ will be

${\displaystyle -i\hbar {\frac {\partial }{\partial x}}e^{i(p_{0}/\hbar )x}=pe^{i(p_{0}/\hbar )x}}$

(4)

${\displaystyle -i\hbar \cdot e^{i(p_{0}/\hbar )x}\cdot \left(i{\frac {p_{0}}{\hbar }}\right)=pe^{i(p_{0}/\hbar )x}}$

(5)

${\displaystyle p_{0}e^{i(p_{0}/\hbar )x}=pe^{i(p_{0}/\hbar )x}}$

(6)

${\displaystyle p=p_{0}}$

(7)

Therefore, the momentum of the particle (3) is ${\displaystyle p_{0}}$. But does it make sense to say the coordinate ${\displaystyle x}$ of the particle (3) is the eigenvalue of (8)? It seems that we will always get ${\displaystyle x={\hat {X}}}$ if we substitute (3), or any other wave function, into (8)!

${\displaystyle {\hat {X}}\psi (x)=x\psi (x)}$

(8)

Justin545 (talk) 11:05, 9 March 2008 (UTC)

I think the problem you are running into here is Heisenberg's uncertainty principle, which states that there is an unavoidable minimum uncertainty in the product of the momentum and position observables:

${\displaystyle \Delta x\Delta p\geq {\frac {\hbar }{2}}}$

If you claim to know with certainty that the momentum of the particle is ${\displaystyle p_{0}}$ then the position must, of necessity, be completely indeterminate. There is a similar relationship between other pairs of observables, such as Energy and Time. SpinningSpark 13:17, 9 March 2008 (UTC)
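For a concrete illustration of the bound: a Gaussian wave packet of width ${\displaystyle \sigma }$ has ${\displaystyle \Delta x=\sigma }$ and ${\displaystyle \Delta p=\hbar /(2\sigma )}$, saturating the inequality. A small numeric sketch (the width chosen is an arbitrary assumption):

```python
# Minimum-uncertainty illustration (analytic facts about a Gaussian packet,
# not a simulation): for psi(x) ~ exp(-x**2 / (4*sigma**2)),
# Delta x = sigma and Delta p = hbar / (2*sigma).
hbar = 1.0545718e-34   # J*s
sigma = 1e-10          # metres; arbitrary width for this sketch

dx = sigma
dp = hbar / (2 * sigma)

# The product saturates the Heisenberg bound hbar/2 for every sigma.
print(dx * dp, hbar / 2)
```

Narrowing the packet (smaller ${\displaystyle \sigma }$) shrinks ${\displaystyle \Delta x}$ but grows ${\displaystyle \Delta p}$ in exact compensation.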

Uncertainty principle aside, most wavefunctions are not eigenvectors of most Hermitian operators. In fact no proper (normalizable) wavefunction satisfies ${\displaystyle {\hat {P}}\psi (x)=p\psi (x)}$ for any p. An equation like ${\displaystyle {\hat {A}}\psi =a\psi }$ is not meant to be solved for a as a function of ψ, it's meant to be solved for ψ as a function of a. Most wave functions won't be in the solution set, but they'll be expressible as a sum of elements of the solution set. If you like you can think of Hermitian operators like ${\displaystyle {\hat {P}}}$ as an odd way of specifying an orthogonal basis with a real number attached to each basis vector. -- BenRG (talk) 15:50, 9 March 2008 (UTC)
It turns out what I did was just substitute the solution, or the basis, (3) into (1), according to the reply. I also forgot that the position of a particle is uncertain; it is good to recall Heisenberg's uncertainty principle. I made a ridiculous generalization, thinking the operator ${\displaystyle {\hat {X}}}$ could be used as in (8), by analogy with (1) :p But it seems the operator ${\displaystyle {\hat {X}}}$ is useless except to calculate the mean value ${\displaystyle \langle {\hat {X}}\rangle =\int _{-\infty }^{\infty }\psi ^{*}(x)\left[{\hat {X}}\psi (x)\right]dx}$ - Justin545 (talk) 03:37, 10 March 2008 (UTC)
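The eigenvalue computation in (4) to (7) is easy to check symbolically; a sketch with SymPy:

```python
import sympy as sp

x = sp.symbols('x', real=True)
p0, hbar = sp.symbols('p_0 hbar', positive=True)

# The plane-wave state (3).
psi = sp.exp(sp.I * p0 * x / hbar)

# Apply the momentum operator (2): -i*hbar * d/dx.
P_psi = -sp.I * hbar * sp.diff(psi, x)

# (4)-(7): the result is p0 * psi, so psi is an eigenfunction
# of the momentum operator with eigenvalue p0.
assert sp.simplify(P_psi - p0 * psi) == 0
print("eigenvalue:", sp.simplify(P_psi / psi))
```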

## Quantum Mechanics: Entangled Wave Function

Equation 9, or (EPR9) for short here, in the original paper on the EPR paradox gives a wave function of two entangled particles

${\displaystyle \Psi (x_{1},x_{2})=\int _{-\infty }^{\infty }e^{(2\pi i/h)(x_{1}-x_{2}+x_{0})p}dp}$

(EPR9)

where ${\displaystyle h}$ is Planck's constant, ${\displaystyle x_{1}}$ and ${\displaystyle x_{2}}$ are the variables describing the two particles and ${\displaystyle x_{0}}$ is just some constant. According to reduction of the wave packet, when an observable ${\displaystyle B}$ of the first particle is measured, (EPR9) can be expanded by the eigenfunctions ${\displaystyle v_{1}(x_{1}),v_{2}(x_{1}),v_{3}(x_{1}),...}$ of ${\displaystyle B}$ in the form

${\displaystyle \Psi (x_{1},x_{2})=\sum _{s=1}^{\infty }\varphi _{s}(x_{2})v_{s}(x_{1})}$

(EPR8)

where ${\displaystyle \varphi _{1}(x_{2}),\varphi _{2}(x_{2}),\varphi _{3}(x_{2}),...}$ are the coefficients corresponding to the eigenfunctions. If ${\displaystyle B}$ is a continuous observable, namely the coordinate of the first particle, (EPR8) can be written as

${\displaystyle \Psi (x_{1},x_{2})=\int _{-\infty }^{\infty }\varphi _{x}(x_{2})v_{x}(x_{1})dx}$

(EPR15)

According to the paper, the eigenfunctions of ${\displaystyle B}$ are

${\displaystyle v_{x}(x_{1})=\delta (x_{1}-x)}$

(EPR14)

which has corresponding eigenvalue ${\displaystyle x}$. The first question is: how come the eigenfunction and the eigenvalue of ${\displaystyle B}$ are (EPR14) and ${\displaystyle x}$, respectively? It seems that

${\displaystyle B=x_{1}}$

(1)

and if we let

${\displaystyle f(x_{1})=x_{1}-x}$

(2)

then

${\displaystyle v_{x}(x_{1})=\delta (x_{1}-x)=\delta (f(x_{1}))}$

(3)

Find the solution of

${\displaystyle f(x_{1})=0}$

(4)

we have

${\displaystyle x_{1}-x=0}$

(5)

${\displaystyle x_{1}=x}$

(6)

the right-hand side of (6) is the eigenvalue of ${\displaystyle B}$. Similarly, the eigenvalue of the observable

${\displaystyle Q=x_{2}}$

(EPR17)

can be found by knowing

${\displaystyle \varphi _{x}(x_{2})=\int _{-\infty }^{\infty }e^{(2\pi i/h)(x-x_{2}+x_{0})p}dp=h\delta (x-x_{2}+x_{0})}$

(EPR16)

and let

${\displaystyle g(x_{2})=x-x_{2}+x_{0}}$

(7)

The solution of

${\displaystyle g(x_{2})=0}$

(8)

is

${\displaystyle x_{2}=x+x_{0}}$

(9)

Again, the right-hand side of (9) is the eigenvalue of ${\displaystyle Q}$, which complies with the paper. But it still doesn't explain how to figure out the eigenfunction (EPR14).
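The claim that the delta function (EPR14) is an eigenfunction of the position observable can be illustrated numerically by modelling the delta as a narrow Gaussian: where the spike lives, multiplying by ${\displaystyle x_{1}}$ is almost the same as multiplying by the eigenvalue ${\displaystyle x}$. A sketch (the eigenvalue 1.5 and the spike width are arbitrary choices for illustration):

```python
import numpy as np

# Model the delta eigenfunction (EPR14) as a narrow normalized Gaussian
# centred at the (arbitrarily chosen) eigenvalue x = 1.5.
x_eig = 1.5
eps = 1e-3                        # width of the near-delta spike
x1 = np.linspace(0.0, 3.0, 200001)
dx1 = x1[1] - x1[0]
v = np.exp(-(x1 - x_eig)**2 / (2 * eps**2)) / (eps * np.sqrt(2 * np.pi))

# Apply the position observable B: (B v)(x1) = x1 * v(x1).
Bv = x1 * v

# Since v is concentrated at x_eig, B v is almost exactly x_eig * v,
# i.e. v is (approximately) an eigenfunction with eigenvalue x_eig.
err = np.sum(np.abs(Bv - x_eig * v)) * dx1   # integral of the pointwise error
norm = np.sum(v) * dx1                       # integral of v itself, about 1
print(err, norm)
```

As the width shrinks toward zero the error integral shrinks with it, which is the sense in which ${\displaystyle x_{1}\delta (x_{1}-x)=x\,\delta (x_{1}-x)}$.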

To continue the unsolved discussion last time, the second question is how to denote the entangled wave function (EPR9) in bra-ket notation. If it can be done, it should help with respect to the last discussion. The bra-ket notation of (EPR9) is supposed to live in the Hilbert space which is the tensor product of the state spaces associated with the two particles. - Justin545 (talk) 06:36, 12 March 2008 (UTC)

Hi, I'm sorry I haven't followed up to the old thread yet, but maybe a response here will serve the same purpose.
There are many ways to write (EPR9) in bra-ket notation; for example I could just write ${\displaystyle |\Psi \rangle }$ where Ψ is defined by (EPR9). In terms of tensor products of kets inhabiting the state spaces of the individual particles, I could write for example ${\displaystyle \Psi =\int _{-\infty }^{\infty }|x\rangle _{1}\,|x+x_{0}\rangle _{2}\,dx=\int _{-\infty }^{\infty }e^{(2\pi i/h)x_{0}p}\,|p\rangle _{1}\,|{-}p\rangle _{2}\,dp}$. I'm not sure those are properly normalized, to the extent that these mathematical monstrosities can be considered to be normalized to begin with. The product ${\displaystyle |a\rangle |b\rangle }$ might also equivalently be written ${\displaystyle |a\rangle \otimes |b\rangle }$ or ${\displaystyle |a,b\rangle }$ or ${\displaystyle |ab\rangle }$. The subscripts 1 and 2 just indicate which subspaces the kets inhabit; they could be left off since the two subspaces are isomorphic in this case.
I'm not sure I understand your first question. Finding eigenfunctions of the position operator in a single-particle space involves solving equations of the form ${\displaystyle B\Psi =x\Psi }$ where B(z) = z and BΨ is a pointwise function product. It should be clear enough that the only possibilities for Ψ here are functions that are zero everywhere except at a point, and the "normalized" versions of these functions are the delta functions, which form an orthonormal eigenbasis. In the two-particle space things are a bit more interesting. You're now solving ${\displaystyle B\Psi =x\Psi }$ where ${\displaystyle B(z_{1},z_{2})=z_{1}}$. The normalized solutions here are ${\displaystyle \Psi _{x}(x_{1},x_{2})=\delta (x-x_{1})g(x_{2})}$ where g is any normalized function of x2. These do not form a basis; there are far too many of them for that. You have to choose arbitrarily some orthonormal basis for the functions g. This happens because there are degenerate eigenvalues; the discrete analogy is that there's only one orthonormal eigenbasis for diag(1,2,3) but many for diag(1,1,2). -- BenRG (talk) 12:55, 12 March 2008 (UTC)
It's reasonable making the ket a function of the corresponding eigenvalue, since each eigenvalue identifies a unique basis vector or eigenfunction. But I am a bit confused by the bra-ket notation ${\displaystyle \Psi =\int _{-\infty }^{\infty }|x\rangle _{1}|x+x_{0}\rangle _{2}\,dx}$, since I expected the bra-ket notation to be in the form

${\displaystyle \Psi =c_{1}|a_{1}\rangle _{1}|b_{1}\rangle _{2}+c_{2}|a_{2}\rangle _{1}|b_{2}\rangle _{2}+c_{3}|a_{3}\rangle _{1}|b_{3}\rangle _{2}+...}$

(10)

rather than in the form

${\displaystyle \Psi =\int _{-\infty }^{\infty }\left(c_{1}|a_{1}\rangle _{1}|b_{1}\rangle _{2}+c_{2}|a_{2}\rangle _{1}|b_{2}\rangle _{2}+c_{3}|a_{3}\rangle _{1}|b_{3}\rangle _{2}+...\right)dx}$

(11)

It seems the integral ${\displaystyle \int _{-\infty }^{\infty }\cdot \,dx}$ surrounding the ket cannot be removed. But will the integral of the ket yield another "ket" in the same space? Another confusion is about the momentum part of the bra-ket example ${\displaystyle \Psi =\int _{-\infty }^{\infty }e^{(2\pi i/h)x_{0}p}|p\rangle _{1}|{-}p\rangle _{2}\,dp}$. I am not able to figure out where the factor ${\displaystyle e^{(2\pi i/h)x_{0}p}}$ in it comes from.
Apologies for obscuring my first question. My first question is just to understand why the eigenfunction of ${\displaystyle B}$ is a "delta function", that is, how the delta function (EPR14) is mathematically derived. As you said, "Finding eigenfunctions of the position operator in a single-particle space involves solving equations of the form ${\displaystyle B\Psi =x\Psi }$ where B(z) = z and BΨ is a pointwise function product." But I cannot understand why it's pointwise. Excuse my poor quantum mechanics, I left so many question marks here :-) Justin545 (talk) 08:43, 13 March 2008 (UTC)
The integral is the sum, it just happens to be a sum with uncountably many terms. You need an uncountable sum here because the wave function is a superposition of uncountably many tensor-product states—the particles could be at x and x+x0 for any real x. I picked somewhat arbitrarily the position basis vectors ${\displaystyle |x\rangle (x')=\delta (x-x')}$ and the momentum basis vectors ${\displaystyle |p\rangle (x)=e^{(2\pi i/h)xp}}$. They're somewhat arbitrary because they're only unique up to scalar multiplication, but they're eigenvectors of the appropriate operators with the appropriate eigenvalues (unless I got the sign convention backwards). Then ${\displaystyle |x_{1}\rangle |x_{2}\rangle (x_{1}',x_{2}')=\delta (x_{1}-x_{1}')\delta (x_{2}-x_{2}')=\delta ((x_{1},x_{2})-(x_{1}',x_{2}'))}$ and ${\displaystyle |p_{1}\rangle |p_{2}\rangle (x_{1},x_{2})=e^{(2\pi i/h)(x_{1}p_{1}+x_{2}p_{2})}}$. So in particular ${\displaystyle e^{(2\pi i/h)x_{0}p}|p\rangle \,|{-}p\rangle (x_{1},x_{2})=e^{(2\pi i/h)(x_{1}-x_{2}+x_{0})p}}$, which is where my momentum integral form came from. The position integral one is odder. When ${\displaystyle x_{1}-x_{2}+x_{0}\neq 0}$, EPR9 gives ${\displaystyle \int _{-\infty }^{\infty }e^{i({\text{some nonzero real}})p}\,dp}$, which to a mathematician is undefined but to a physicist is zero. When ${\displaystyle x_{1}-x_{2}+x_{0}=0}$, EPR9 gives ${\displaystyle \int _{-\infty }^{\infty }dp}$, which to a physicist is the peak of a delta function. So EPR9 describes a "function" that's zero everywhere except on the line ${\displaystyle x_{1}-x_{2}+x_{0}=0}$ where it's infinity, and my position integral expressed that more directly.
Let me explain the operators in a finite-dimensional case. Let's say we have a four-state system with position states ${\displaystyle \left\{\left({\begin{array}{r}1\\0\\0\\0\end{array}}\right),\left({\begin{array}{r}0\\1\\0\\0\end{array}}\right),\left({\begin{array}{r}0\\0\\1\\0\end{array}}\right),\left({\begin{array}{r}0\\0\\0\\1\end{array}}\right)\right\}}$ and momentum states ${\displaystyle {\frac {1}{2}}\left\{\left({\begin{array}{r}1\\1\\1\\1\end{array}}\right),\left({\begin{array}{r}1\\i\\-1\\-i\end{array}}\right),\left({\begin{array}{r}1\\-1\\1\\-1\end{array}}\right),\left({\begin{array}{r}1\\-i\\-1\\i\end{array}}\right)\right\}}$ (the Fourier basis). We can arbitrarily assign a distinct real number to each position and to each momentum. Say the positions are 1,2,3,4 and the momenta are 0,1,2,−1. Then there exists a matrix which scales each position/momentum axis by the corresponding real number. For the position basis it's just diag(1,2,3,4), while for the momentum basis it's U diag(0,1,2,−1) U−1, where U is the unitary matrix whose columns are the aforementioned Fourier basis vectors. This matrix will always be Hermitian (it's a theorem that a matrix is Hermitian if and only if it can be written in the form UAU−1 where U is unitary and A is real diagonal). In this case the scaling factors were all distinct, so by solving the eigenvalue equation we can recover the original basis from the Hermitian matrix. If some scaling factors are equal then all you can tell is that a particular (hyper)plane was scaled by that factor; you can't uniquely recover the basis vectors lying in that hyperplane. That's the case for a four-state system that's the product of two single-particle two-state systems, where the position of the two particles might be represented by the matrices diag(1,2,1,2) and diag(3,3,4,4) respectively. In the continuous case you can't write down matrices any more, but the differential operators serve the same purpose. 
In order to get the right eigenbasis and eigenvalues, the position operator has to multiply the wave function by a real number corresponding to the position, which is why I described it as a pointwise function product. It might have been better to say that B is an operator defined by (B f)(x) = x f(x). -- BenRG (talk) 14:32, 14 March 2008 (UTC)
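BenRG's finite-dimensional example can be reproduced directly with NumPy: build the momentum operator as U diag(0, 1, 2, -1) U<sup>-1</sup> from the Fourier basis, then confirm it is Hermitian and has the Fourier columns as eigenvectors with the assigned momenta. A sketch:

```python
import numpy as np

# Columns of U are the 4-point Fourier basis vectors from the text:
# U[j, k] = exp(2*pi*i*j*k/4) / 2, i.e. (1,1,1,1)/2, (1,i,-1,-i)/2, ...
n = np.arange(4)
U = np.exp(2j * np.pi * np.outer(n, n) / 4) / 2

# Attach momenta 0, 1, 2, -1 to the four momentum states: P = U diag U^{-1},
# where U^{-1} = U^H because U is unitary.
P = U @ np.diag([0, 1, 2, -1]) @ U.conj().T

assert np.allclose(U.conj().T @ U, np.eye(4))  # U is unitary
assert np.allclose(P, P.conj().T)              # so P is Hermitian

# Each Fourier column is an eigenvector of P with its assigned momentum.
for k, p in enumerate([0, 1, 2, -1]):
    assert np.allclose(P @ U[:, k], p * U[:, k])
```

Since the four momenta are distinct here, solving the eigenvalue problem for P recovers the Fourier basis uniquely, exactly as described above; repeating a momentum value would leave a degenerate plane instead.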

## Speed of ...

Say you have a rigid object, such as a meter-long titanium pole. When you pull one end, the other end presumably doesn't move instantly, because the force would have to move faster than the speed of light. It must be determined by the flexibility between the bonds of the titanium atoms/molecules. So, ignoring the impracticality of its weight and size, if you were to pull a light-year (or some similar length) long titanium pole, the other end wouldn't notice it's been pulled until all the bonds are at their maximum pulling length? At that point, it still can't be instant if it's pulled further? Could anyone elaborate on what goes on? -- MacAddct 1984 (talk • contribs) 15:16, 17 March 2008 (UTC)

I'm not sure what you are asking, but yes, it is impossible to pull all of it instantaneously. The rod will deform elastically and the pull will travel down it at the speed of sound in that material (much slower than the speed of light). Theresa Knott | The otter sank 15:28, 17 March 2008 (UTC)
This is a Reference Desk Frequently-Asked Question but I think you've already sussed-out the answer: What we consider to be the "structural strength" of solid materials is actually the electromagnetic interaction of the electron shells of the constituent atoms. And these electromagnetic interactions can never propagate faster than the speed of light. So if you pull or push on that light-year-long titanium rod, a compression or expansion wave propagates through the material. It certainly doesn't go faster than the speed of light and probably only travels at the speed of sound in that material.
Atlant (talk) 15:31, 17 March 2008 (UTC)
Thanks! Yeah, that's about what I was thinking. It's hard to think in terms of familiar objects moving in ways we're not familiar with.
As for it being a FAQ, is there an FAQ (official or unofficial) for the Reference Desk? If not, there really should be one started... -- MacAddct 1984 (talk • contribs) 15:53, 17 March 2008 (UTC)
There is a FAQ page, at Wikipedia:Reference_desk/FAQ. It is embryonic and underused, because it is hidden. We should link to it from the main RD page, and maybe mention it in the Before asking a question/Search first section at the top of each RD page. --169.230.94.28 (talk) 19:04, 17 March 2008 (UTC)
So what happens if you move one end faster than the speed of sound ? StuRat (talk) 17:36, 17 March 2008 (UTC)
You'd be breaking the rod: you would, by definition, be moving the atoms faster than they could convey that movement to the atoms next to them, so the rod couldn't remain structurally stable. Remember it's the speed of sound in that material, not the speed of sound in air, which is what we normally think of as "the" speed of sound. --Captain Ref Desk (talk) 18:08, 17 March 2008 (UTC)
Or, if you are moving in a compressive direction, and buckling doesn't occur, you may create a shock wave. --169.230.94.28 (talk) 19:04, 17 March 2008 (UTC)
It's not speed that would break or buckle the rod, but acceleration. 196.2.113.148 (talk) 22:07, 17 March 2008 (UTC)
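For a rough sense of scale: the thin-rod longitudinal sound speed is ${\displaystyle c={\sqrt {E/\rho }}}$. A back-of-the-envelope sketch, assuming handbook-ish values for titanium (Young's modulus about 116 GPa, density about 4506 kg/m³; these numbers are assumptions, not from the thread):

```python
import math

# Assumed handbook-ish values for titanium:
E = 116e9     # Young's modulus, Pa
rho = 4506.0  # density, kg/m^3

# Thin-rod longitudinal sound speed, c = sqrt(E / rho).
c = math.sqrt(E / rho)
print(f"sound speed in a titanium rod: about {c:.0f} m/s")

# How long the pull takes to traverse a light-year-long rod at that speed.
light_year = 9.461e15                  # metres
years = light_year / c / (365.25 * 24 * 3600)
print(f"about {years:.1e} years for the far end to notice")
```

So the disturbance crawls along at a few kilometres per second, and the far end of the light-year rod waits tens of thousands of years.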

## Proof of Chain Rule

Let

${\displaystyle y=f(u)}$

(1)

${\displaystyle u=g(x)}$

(2)

where ${\displaystyle f(u)}$ and ${\displaystyle g(x)}$ are both differentiable functions. Then

${\displaystyle {\frac {dy}{dx}}}$

(3)

${\displaystyle =\lim _{\Delta x\rightarrow 0}{\frac {f(g(x+\Delta x))-f(g(x))}{\Delta x}}}$

(4)

${\displaystyle =\lim _{\Delta x\rightarrow 0}\left[{\frac {f(g(x+\Delta x))-f(g(x))}{g(x+\Delta x)-g(x)}}\cdot {\frac {g(x+\Delta x)-g(x)}{\Delta x}}\right]}$

(5)

${\displaystyle =\lim _{\Delta x\rightarrow 0}{\frac {f(g(x+\Delta x))-f(g(x))}{g(x+\Delta x)-g(x)}}\cdot \lim _{\Delta x\rightarrow 0}{\frac {g(x+\Delta x)-g(x)}{\Delta x}}}$

(6)

Treat the arrow ${\displaystyle \rightarrow }$ as an equals sign ${\displaystyle =}$. We can perform the same operation on both sides of the arrow without changing the relationship

${\displaystyle \Delta x\rightarrow 0}$

(7)

${\displaystyle x+\Delta x\rightarrow x}$

(8)

Function ${\displaystyle g(x)}$ is continuous since it is differentiable. Apply ${\displaystyle g(\cdot )}$ to both sides of (8)

${\displaystyle g(x+\Delta x)\rightarrow g(x)}$

(9)

${\displaystyle g(x+\Delta x)-g(x)\rightarrow 0}$

(10)

Let

${\displaystyle \Delta u=g(x+\Delta x)-g(x)}$

(11)

Substitute (11) into (10)

${\displaystyle \Delta u\rightarrow 0}$

(12)

Therefore

${\displaystyle \lim _{\Delta x\rightarrow 0}\cdot }$  implies ${\displaystyle \lim _{\Delta u\rightarrow 0}\cdot }$

(13)

Substitute (2) into (11)

${\displaystyle g(x+\Delta x)=u+\Delta u}$

(14)

Substitute (2), (11), (13) and (14) into (6)

${\displaystyle {\frac {dy}{dx}}=\lim _{\Delta u\rightarrow 0}{\frac {f(u+\Delta u)-f(u)}{\Delta u}}\cdot \lim _{\Delta x\rightarrow 0}{\frac {g(x+\Delta x)-g(x)}{\Delta x}}={\frac {dy}{du}}{\frac {du}{dx}}}$

(15)

Q.E.D.

Is the proof of the chain rule above correct and rigorous? - Justin545 (talk) 06:25, 18 March 2008 (UTC)
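Whatever the rigor of the derivation, the conclusion (15) itself is easy to spot-check symbolically for sample functions; a SymPy sketch:

```python
import sympy as sp

x, u = sp.symbols('x u')

# Sample differentiable functions: u = g(x) = sin(x), y = f(u) = u**3.
g = sp.sin(x)
f = u**3

lhs = sp.diff(f.subs(u, g), x)                   # dy/dx computed directly
rhs = sp.diff(f, u).subs(u, g) * sp.diff(g, x)   # (dy/du)*(du/dx), as in (15)

assert sp.simplify(lhs - rhs) == 0
```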

There are some questionable details. First, if we want a proof we can consider "rigorous", we would want to avoid treating functions as quantities (e.g., u instead of ${\displaystyle g(x)}$) and using Leibniz notation (${\displaystyle {\tfrac {du}{dx}}}$). So as a first step you should try formulating the proof without using u or y, only f, g and their composition ${\displaystyle h=f\circ g}$ (equivalently, ${\displaystyle h(x)=f(g(x))}$). Second, the limit notation, ${\displaystyle \lim _{x\to x_{0}}f(x)=L}$, is one unit. You shouldn't take out the ${\displaystyle x\to x_{0}}$ and treat it as something that stands on its own. This would be acceptable for a handwaving proof, but not for a rigorous one. -- Meni Rosenfeld (talk) 07:37, 18 March 2008 (UTC)
>> "the limit notation, ${\displaystyle \lim _{x\to x_{0}}f(x)=L}$, is one unit. You shouldn't take out the ${\displaystyle x\to x_{0}}$ and treat it as something that stands on its own."
I think you mean the result of (13) is incorrect or not rigorous. Does it mean the whole proof should be re-derived in a completely different way or we can somehow fix the problem so that we don't have to re-derive the whole proof? If (13) is not rigorous, is there any example which opposes it? Thanks! - Justin545 (talk) 09:00, 18 March 2008 (UTC)
(13) and the derivations that lead to it are "not even wrong" in the sense that, in the standard framework of calculus, they are pretty much meaningless - if you look at the standard rigorous definitions of limits, you will see that they do not allow a function to be used as a variable. It is "correct" in the sense that, intuitively, the limit of a function "when" the variable approaches some value is equal to the limit when some other function approaches its appropriate limit value. However, this "when" business lacks a rigorous interpretation and is haunted by Bishop Berkeley's ghosts.
I have thought about how one might amend the proof, and realized that you also have a mistake much earlier. Step (5), dividing and multiplying by ${\displaystyle g(x+\Delta x)-g(x)}$, is only valid if ${\displaystyle g(x+\Delta x)-g(x)\neq 0}$, but there is no reason to assume that should be the case. Take, for example
${\displaystyle g(x)=\left\{{\begin{array}{ll}x^{2}\sin {\tfrac {1}{x}}&x\neq 0\\0&x=0\end{array}}\right.}$
- a perfectly differentiable function at 0, and yet ${\displaystyle g(x)=0=g(0)}$ infinitely many times in any neighborhood of 0. Thus your proof will not work for it. Those kinds of pathological counterexamples are one of the things that separates rigorous proofs from not-so-rigorous ones. -- Meni Rosenfeld (talk) 10:58, 18 March 2008 (UTC)
>> "Step (5), dividing and multiplying by ${\displaystyle g(x+\Delta x)-g(x)}$, is only valid if ${\displaystyle g(x+\Delta x)-g(x)\neq 0}$, but there is no reason to assume that should be the case. Take, for example..."
I think ${\displaystyle g(x+\Delta x)-g(x)}$ will never be zero, since ${\displaystyle \Delta x}$ "is not zero"; ${\displaystyle \Delta x}$ is just a value "very close to zero". Thus ${\displaystyle g(x+\Delta x)-g(x)}$ will only be close to zero, not equal to zero, and I believe step (5) is still correct. As for your example, we may first need to evaluate

${\displaystyle \lim _{\Delta x\rightarrow 0}g(0+\Delta x)}$

(16)

${\displaystyle =\lim _{\Delta x\rightarrow 0}(0+\Delta x)^{2}\sin {\frac {1}{0+\Delta x}}}$

(17)

${\displaystyle =\lim _{\Delta x\rightarrow 0}{\Delta x}^{2}\sin {\frac {1}{\Delta x}}}$

(18)

${\displaystyle =\lim _{\Delta x\rightarrow 0}{\Delta x}^{2}\cdot \lim _{\Delta x\rightarrow 0}\sin {\frac {1}{\Delta x}}}$

(19)

But what will ${\displaystyle \lim _{\Delta x\rightarrow 0}\sin {\frac {1}{\Delta x}}}$ evaluate to? I'm not sure... - Justin545 (talk) 01:41, 19 March 2008 (UTC)
You've made two mistakes here. First, ${\displaystyle g(x+\Delta x)-g(x)}$ can be zero for arbitrarily small values of ${\displaystyle \Delta x}$. That's what Meni's example shows. Your (18)=(19) is also mistaken: it would be valid if both limits in (19) existed, but as it happens the second one doesn't. Btw, your error at step (5) is a reasonably common one: IIRC, it occurs in the first few editions of G H Hardy's A Course of Pure Mathematics. Though there are other ways round it, perhaps the best is to avoid division at all in the proof. This has the advantage that your proof immediately generalises to the multi-dimensional case. Algebraist 02:36, 19 March 2008 (UTC)
>> "First, ${\displaystyle g(x+\Delta x)-g(x)}$ can be zero for arbitrarily small values of ${\displaystyle \Delta x}$. That's what Meni's example shows."
It is not obvious to me from Meni's example why ${\displaystyle g(x+\Delta x)-g(x)=0}$, where

${\displaystyle g(x)=\left\{{\begin{array}{ll}x^{2}\sin {\tfrac {1}{x}}&x\neq 0\\0&x=0\end{array}}\right.}$

(20)

Could you provide more explanation, or say which theorem supports the claim that ${\displaystyle g(x+\Delta x)-g(x)}$ can be exactly zero?
>> "perhaps the best is to avoid division at all in the proof."
Division could be avoided altogether, but it is "intuitive" since the definition of the derivative involves division. Besides, I think even this proof involves division; if it does, the proof would be considered non-rigorous. - Justin545 (talk) 03:08, 19 March 2008 (UTC)
>> "(13) and the derivations that lead to it are "not even wrong" in the sense that in the standard framework of calculus they are pretty much meaningless - if you look at the standard rigorous definitions of limits, you will see that they do not allow a function to be used as a variable."
I'm afraid I don't understand why "rigorous definitions of limits do not allow a function to be used as a variable", or why the derivations leading to (13) are meaningless. - Justin545 (talk) 03:37, 19 March 2008 (UTC)
A better question is how are they not meaningless. Where in your textbook did anyone mention taking the ${\displaystyle \Delta x\to 0}$ notation, treating it as a formula on its own, and doing manipulations on it? -- Meni Rosenfeld (talk) 16:48, 19 March 2008 (UTC)
This proof is really from my textbook, except that the steps from (7) to (14) are missing. The missing steps are my creation, since I had no idea how step (6) becomes step (15). I want to know, in detail, how step (6) becomes step (15), so I added those steps and opened the discussion here to see whether they are correct. - Justin545 (talk) 05:30, 20 March 2008 (UTC)
In this case, the proof in your book is wrong (that happens too). Step 5 cannot be justified without more assumptions on g. Your steps 7-12 describe intuitively correct ideas but are far from being rigorous. If g is "ordinary" enough for step 5 to hold, it is possible to justify the leap from (6) to (15), but if you want it to be rigorous you need to rely only on the definition of limits, not on your intuitive ideas of what they mean. -- Meni Rosenfeld (talk) 12:03, 20 March 2008 (UTC)
If you want a similar proof that really works, one way would be to apply the mean value theorem to f at (4). This allows you to replace ${\displaystyle f(g(x+h))-f(g(x))}$ 163.1.148.158 (talk) 12:54, 18 March 2008 (UTC)
The mean value theorem I found is
${\displaystyle f'(c)={\frac {f(b)-f(a)}{b-a}}}$
where ${\displaystyle c\in (a,b)}$. But I have no idea how to apply it to f at (4), or why ${\displaystyle f(g(x+h))-f(g(x))}$ needs to be replaced. Thanks! - Justin545 (talk) 02:38, 19 March 2008 (UTC)
To the OP: it is not necessary to avoid division to make the proof rigorous, but it is one way of doing it. I meant division specifically by values of the domain or codomain of f and g (since these are the things that become vectors when you generalise), but I see I failed to say it. Apologies. The definition of the derivative need not involve such division (the one lectured to me didn't, for example), and one could argue that it shouldn't. Not sure if one would be right, mind. To your specific question, Meni's function is zero whenever x is 1/(nπ) (n a non-zero integer). Thus we have g(x)=0 for arbitrarily small x. Algebraist 03:34, 19 March 2008 (UTC)
>> "The definition of the derivative need not involve such division (the one lectured to me didn't, for example), and one could argue that it shouldn't."
The familiar definition of derivative is

${\displaystyle {\frac {dy}{dx}}=f'(x)=\lim _{\Delta x\rightarrow 0}{\frac {f(x+\Delta x)-f(x)}{\Delta x}}}$

(21)

It seems you were saying that (21) is not a "rigorous" definition. That sounds pretty odd to me; I thought (21) was the only way of defining the derivative. Many lemmas and theorems about derivatives in my textbook originate from (21). It's not easy to imagine there are other definitions without division. - Justin545 (talk) 05:13, 19 March 2008 (UTC)
No, that's not what he was saying. He said that you can define the derivative without division, not that you should. Definition (21) (at least the ${\displaystyle f'(x)=\lim _{\Delta x\rightarrow 0}{\frac {f(x+\Delta x)-f(x)}{\Delta x}}}$ part) is rigorous and is indeed the standard definition. There is nothing wrong with division, except for division by zero. The main flaw in your proof is dividing by ${\displaystyle g(x+\Delta x)-g(x)}$ which may be zero. Just because ${\displaystyle \Delta x\neq 0}$ doesn't mean that ${\displaystyle g(x+\Delta x)\neq g(x)}$. This is just common sense, you don't need my complicated example for that. -- Meni Rosenfeld (talk) 16:48, 19 March 2008 (UTC)
>> "To your specific question, Meni's function is zero whenever x is 1/(nπ) (n a non-zero integer). Thus we have g(x)=0 for arbitrarily small x."
I'm afraid I'm not able to prove (20) is zero when ${\displaystyle x\in \left\{{\frac {1}{n\pi }}{\Bigg |}n\in \mathbb {Z} \land n\neq 0\right\}}$. But I think ${\displaystyle g(x+\Delta x)-g(x)}$ will be zero when ${\displaystyle g(x)=a}$, where ${\displaystyle a}$ is any fixed constant. (Edit: which means I was ridiculously wrong. Apologies.) - Justin545 (talk) 05:42, 19 March 2008 (UTC)
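Algebraist's claim about the zeros is easy to check numerically. A minimal Python sketch (the tolerance and sample values of n are illustrative assumptions):

```python
import math

def g(x):
    # Meni's example function g(x) = x^2 * sin(1/x), with g(0) = 0
    return x * x * math.sin(1.0 / x) if x != 0 else 0.0

# g vanishes at x = 1/(n*pi) for every non-zero integer n, so g has
# zeros arbitrarily close to the origin.
for n in (1, 2, 10, 1000):
    x = 1.0 / (n * math.pi)
    # exactly zero in real arithmetic; tiny but non-zero here only
    # because of floating-point rounding in 1/(n*pi)
    assert abs(g(x)) < 1e-12, (n, g(x))
```

In particular, taking x = 0 and Δx = 1/(nπ) gives g(x+Δx) − g(x) = 0 for arbitrarily small Δx, which is exactly what invalidates the division step.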
x²sin(1/x) is zero whenever sin(1/x) is zero, which happens whenever 1/x is a multiple of π, which happens whenever x = 1/nπ for some integer n. You know, you're not really all that wrong. You have the right idea, you just don't have the tools to implement it. Here's roughly how my analysis textbook solves the problem. First, you define a new function h(y). I'll skip the details about intervals and mappings, and just say that it's focused on f and ignoring g, and assumes some interesting value c has been chosen. Let h(y) = (f(y)-f(g(c)))/(y-g(c)) if y does not equal g(c), and let h(y) = f'(y) if y=g(c). All that should be possible by assumption. Since g is differentiable at c, g is continuous at c, so h of g is continuous at c, so lim x->c (hog)(x)=h(g(c))=f'(g(c)). By the definition of h, f(y)-f(g(c))=h(y)(y-g(c)) for all y, so ((fog)(x)-(fog)(c)) = (hog(x))(g(x)-g(c)), so for x not equal to c we have ((fog)(x)-(fog)(c))/(x-c) = (hog(x))(g(x)-g(c))/(x-c). Taking the limit of both sides as x->c, then (fog)'(c)=lim x->c ((fog)(x)-(fog)(c))/(x-c) = (lim x->c hog(x))(lim x->c (g(x)-g(c))/(x-c)) = f'(g(c))g'(c). Black Carrot (talk) 06:36, 19 March 2008 (UTC)
>> "x²sin(1/x) is zero whenever sin(1/x) is zero, which happens whenever 1/x is a multiple of π, which happens whenever x = 1/nπ for some integer n."
Thanks! Now I understand it.
>> "You know, you're not really all that wrong. You have the right idea, ..."
Here's roughly how my analysis textbook solves the problem. First, you define a new function ${\displaystyle h(y)}$. I'll skip the details about intervals and mappings, and just say that it's focused on ${\displaystyle f}$ and ignoring ${\displaystyle g}$, and assumes some interesting value ${\displaystyle c}$ has been chosen. Let
${\displaystyle h(y)={\begin{cases}{\frac {f(y)-f(g(c))}{y-g(c)}},&y\neq g(c)\\f'(y),&y=g(c)\end{cases}}}$
All that should be possible by assumption. Since ${\displaystyle g}$ is differentiable at ${\displaystyle c}$, ${\displaystyle g}$ is continuous at ${\displaystyle c}$, so ${\displaystyle h}$ of ${\displaystyle g}$ is continuous at ${\displaystyle c}$, so
${\displaystyle \lim _{x\rightarrow c}h(g(x))=h(g(c))=f'(g(c))}$.
By the definition of ${\displaystyle h}$,
${\displaystyle f(y)-f(g(c))=h(y)[y-g(c)]}$, ${\displaystyle \forall y\neq g(c)}$,
so
${\displaystyle f(g(x))-f(g(c))=h(g(x))[g(x)-g(c)]}$,
so for ${\displaystyle x}$ not equal to ${\displaystyle c}$ we have
${\displaystyle {\frac {f(g(x))-f(g(c))}{x-c}}={\frac {h(g(x))[g(x)-g(c)]}{x-c}}}$.
Taking the limit of both sides as ${\displaystyle x\rightarrow c}$, then
${\displaystyle (f\circ g)'(c)=\lim _{x\rightarrow c}{\frac {f(g(x))-f(g(c))}{x-c}}=\left[\lim _{x\rightarrow c}h(g(x))\right]\left[\lim _{x\rightarrow c}{\frac {g(x)-g(c)}{x-c}}\right]=f'(g(c))g'(c)}$.
Did I misunderstand your response? Thanks! - Justin545 (talk) 09:01, 19 March 2008 (UTC)
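Although a numerical check proves nothing, the conclusion (f∘g)'(c) = f'(g(c))g'(c) can at least be sanity-checked with a central difference quotient. A sketch, with f, g, and c chosen arbitrarily for illustration (they are not from the thread):

```python
import math

def num_deriv(fn, x, h=1e-6):
    # symmetric difference quotient; O(h^2) truncation error for smooth fn
    return (fn(x + h) - fn(x - h)) / (2.0 * h)

# illustrative choices: f = sin, g = x^2, so (f o g)(x) = sin(x^2)
f = math.sin
g = lambda x: x * x
fg = lambda x: f(g(x))

c = 0.7
lhs = num_deriv(fg, c)              # (f o g)'(c), numerically
rhs = math.cos(g(c)) * (2.0 * c)    # f'(g(c)) * g'(c), analytically

assert abs(lhs - rhs) < 1e-6, (lhs, rhs)
```

The point of the h(y) construction above is precisely that it reaches this identity without ever dividing by g(x) − g(c).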
After "By the definition of h", it should be for all y. If y=g(c), both sides are equal to zero, and the equality still holds. That one line is pretty much the goal of the whole thing, finding a way to get that conclusion without dividing by zero anywhere. Black Carrot (talk) 16:02, 19 March 2008 (UTC)
But I think it should be

${\displaystyle {\begin{cases}f(y)-f(g(c))=h(y)[y-g(c)]&\forall y\neq g(c)\\f'(y)=h(y)&\forall y=g(c)\end{cases}}}$

(22)

by definition of ${\displaystyle h}$.
By the way, I think derivatives of composite functions should be able to be rewritten in Leibniz notation as below

${\displaystyle (f\circ g)'(c)={\frac {df(g(c))}{dc}}={\frac {d}{dc}}f(g(c))}$

(23)

${\displaystyle f'(g(c))={\frac {df(g(c))}{dg(c)}}={\frac {d}{dg(c)}}f(g(c))}$

(24)

Justin545 (talk) 07:02, 20 March 2008 (UTC)

## Gödel's Incompleteness Theorems: Is The Math Reliable?

Many sciences depend on mathematics to prove things and use it for rigorous study. But Gödel's incompleteness theorems state:

For any consistent formal, computably enumerable theory that proves basic arithmetical truths, an arithmetical statement that is true, but not provable in the theory, can be constructed.[1] That is, any effectively generated theory capable of expressing elementary arithmetic cannot be both consistent and complete.

Therefore, I would like to know: are all the theories we use (for biology, chemistry, physics, medicine, computer science, etc.) considered to be consistent theories themselves? And is all the math we learn from elementary school to university considered reliable and free of mutual contradiction? - Justin545 (talk) 07:00, 19 March 2008 (UTC)

What do you mean by "reliable"? I would say the mathematics underlying biology, chemistry, etc is far less likely to be in error than the biology and chemistry themselves. But if you're looking for apodeictic certainty -- the sort of thing that, by its nature, cannot be wrong -- well, sorry, we don't have any of that. In my humble opinion, anyway. We'll settle for being right; we don't have to be completely certain.
Or as the Eagles put it -- "I could be wrong, but I'm not". --Trovatore (talk) 07:18, 19 March 2008 (UTC)
Math is used as a tool for studying many sciences. If the tool itself is "problematic" or "questionable", the consequences of employing it are very likely to be wrong! "Reliable" means "consistent" and "non-contradictory". The incompleteness theorems, in other words, state: if every true arithmetical statement is provable in the theory, then the theory is complete but inconsistent. So what I want to know is: the math we use is either
(1) consistent but not complete, or
(2) complete but not consistent - Justin545 (talk) 07:48, 19 March 2008 (UTC)
Well, we don't know for certain, but the general view is that we are in the consistent but not complete case, which is really not as bad as it sounds at first. If you know any group theory, consider that there are plenty of facts about groups that cannot be deduced from the axioms for a group alone -- the theory of groups, as given by the most basic group axioms, is not complete. In some sense this is because there are different models, different groups, that all meet those basic axioms, and thus have truths that are not derivable from just those axioms. You can think of arithmetic as being similar, with different models, just with the proviso that, unlike groups, we haven't found models that disagree on any arithmetic facts you or I would generally care about. -- Leland McInnes (talk) 12:11, 19 March 2008 (UTC)
Could you give an example of what you mean, there? I've done quite a bit of group theory and have never come across something that's true but can't be proven from the axioms of a group (together with ZF). --Tango (talk) 13:39, 19 March 2008 (UTC)
>> "which is really not as bad as it sounds at first"
It sounds bad to me... since we are not able to justify our math.
>> "there are plenty of facts about groups that cannot be deduced from the axioms for a group alone...the theory of groups is not complete."
I don't know any group theory, but: could those un-deducible facts themselves be considered axioms? Would group theory be complete if we made those facts axioms? - Justin545 (talk) 02:50, 20 March 2008 (UTC)
What I mean is that the group axioms don't uniquely define the group, but rather a whole slew of possible objects each of which satisfies the axioms of being a group. Thus there isn't a unique model of "group" specified by the axioms, but rather each and every different group is a different model that satisfies the basic group axioms. There are things that are true of particular groups that you can't deduce from just the group axioms -- you need more information (more axioms in essence) to pin down which group (or class of groups) you are talking about. Thus there are truths that occur in systems that fulfill the group axioms that are not provable from the group axioms alone. Does that make more sense? -- Leland McInnes (talk) 17:26, 19 March 2008 (UTC)
Right, but arithmetic (and set theory) are quite a different case from group theory. Arithmetic is not the study of models of arithmetic; it's the study of numbers. All models of arithmetic have (copies of) all the true natural numbers, but some of them also have fake natural numbers. The one true Platonic intended model of arithmetic has only the true ones, and none of the fake ones, and is unique up to a canonical isomorphism. There's a limit to what we can find out about the behavior of the true natural numbers from a fixed set of axioms and first-order logic alone. That doesn't mean we have to stop there. --Trovatore (talk) 17:49, 19 March 2008 (UTC)
Let me insert a response here: essentially, yes. I was going for a loose analogy suggesting that incompleteness isn't really a horrible thing. As to models of arithmetic, there is the question of what the intended model is, and, for the sufficiently messy cases where we can't practically distinguish it from some fake model, whether it even matters. I would liken it (again, an analogy, so don't take it too literally) to science trying to model (in a different sense of the word) some objective reality -- we can't know the objective reality, only our model of it, but as long as we can't tell the difference between our model and the reality (i.e. where our model hasn't been falsified) we may as well consider our model as true. -- Leland McInnes (talk) 20:55, 19 March 2008 (UTC)
Sure, there are plenty of things that can't be proven using just the axioms of a group, but those things aren't true. ${\displaystyle gh=hg\quad \forall g,h\in G}$ can't be proven just from the group axioms, because it isn't true in general. That's not incompleteness, it's just a false statement. If you want it to be true, you have to add an additional assumption (that the group G be cyclic, say). If the statement can be stated in terms of only the group axioms, and is true, then it can be proven using only the group axioms. If it can't be stated using only those axioms, then it being impossible to prove isn't a case of incompleteness. A framework is incomplete if there are unprovable true statements within that framework. --Tango (talk) 18:06, 19 March 2008 (UTC)
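The point about commutativity being independent of the group axioms can be made concrete with a brute-force check: Z₂ (addition mod 2) and S₃ (permutations of three elements under composition) both satisfy the three group axioms, yet they disagree on commutativity. A minimal sketch (the naive axiom checker below is purely illustrative, not a standard API):

```python
from itertools import permutations, product

def is_group(elems, op):
    """Brute-force check of the three group axioms on a finite set."""
    elems = list(elems)
    # associativity (closure is assumed: op must land back in elems)
    if any(op(op(a, b), c) != op(a, op(b, c))
           for a, b, c in product(elems, repeat=3)):
        return False
    # two-sided identity
    e = next((x for x in elems
              if all(op(x, a) == a == op(a, x) for a in elems)), None)
    if e is None:
        return False
    # two-sided inverses
    return all(any(op(a, b) == e == op(b, a) for b in elems) for a in elems)

def is_abelian(elems, op):
    return all(op(a, b) == op(b, a) for a, b in product(elems, repeat=2))

# Model 1: Z_2 under addition mod 2
z2 = [0, 1]
add2 = lambda a, b: (a + b) % 2

# Model 2: S_3, permutations of {0,1,2} under composition
s3 = list(permutations(range(3)))
comp = lambda p, q: tuple(p[q[i]] for i in range(3))

assert is_group(z2, add2) and is_group(s3, comp)  # both are models of GP
assert is_abelian(z2, add2)                       # commutativity holds here...
assert not is_abelian(s3, comp)                   # ...but fails here
```

So "gh = hg" is true in one model of the axioms and false in another, which is why it can be neither proved nor refuted from the axioms alone.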
Tango, I think you have not thought these things through terribly well. At least it isn't clear to me what you mean by a framework, or unprovable but true within a framework. Is a framework a first-order theory, or a model, or what exactly? --Trovatore (talk) 18:19, 19 March 2008 (UTC)
Let me be a little less Socratic and hopefully more constructive (took me more time to figure out how to say this than it did to ask a question). Let's take a specific example. Peano arithmetic neither proves nor (we suppose) refutes the claim "Peano arithmetic is consistent" (the claim is usually abbreviated Con(PA)). Therefore there are models of PA in which Con(PA) is true, and there are models of PA in which Con(PA) is false. So we can make an analogy with your example statement "multiplication is commutative": There are models of group theory (that is, groups) in which "multiplication is commutative" is true, and there are other models of group theory in which it's false.
Here's the big difference: There's no such thing as "the intended group", the group that defines the truth value of "multiplication is commutative in group theory". We're interested in Abelian groups, and we're also interested in non-Abelian groups, and you just have to specify which ones you're talking about.
But Peano arithmetic (we suppose) really is consistent. The models of PA that think otherwise are wrong about that. That's not to say they're not interesting (people devote whole careers to them), but merely by their opinion on this one issue, they prove that they are not the intended model. --Trovatore (talk) 18:35, 19 March 2008 (UTC)
Ok, I think I understand what you're saying now. I'm not sure I agree, though. Group theory is defined in terms of set theory. Once you've determined a model of set theory, your model of group theory is completely determined (a group is simply a set together with a function - both concepts defined outside of group theory). Is there a (reasonable) model of set theory in which all groups are abelian? --Tango (talk) 18:56, 19 March 2008 (UTC)
Whoah, we have to be careful here -- the phrase "group theory" is being used in two different ways (my fault, probably). When I say "model of group theory==group", I'm using "group theory" to mean the first-order theory defined by the three axioms (identity existence, existence of two-sided inverses, associativity). That's different of course from "group theory" as in "the study of groups", which is not a formal first-order theory at all. Please re-read my remarks keeping this clarification in mind -- they won't have made any sense at all if you were thinking of "model of group theory" as meaning "model of the study of groups(?)". --Trovatore (talk) 19:02, 19 March 2008 (UTC)
Ok, but I think my point still stands. Group theory, in that sense, is still built on set theory. Any model of group theory must be a model of set theory, since it has to satisfy ZF plus the 3 axioms of a group. Can you have such model of set theory in which all groups are abelian? For example, set theory provides all kinds of methods of combining sets to produce sets - those method can be used to combine groups and produce other groups. Is there a model in which all such possible combinations are abelian? --Tango (talk) 19:22, 19 March 2008 (UTC)
No, of course not. It's a theorem of ZF that there exist non-Abelian groups. But you're still mixing things in a confusing way -- whatever a "model of group theory" is, it's certainly not something that "satisfies ZF plus the three axioms of a group"; that doesn't even make sense; the ZF axioms are in a different language from the group axioms. If by "group theory" we mean the three axioms, then "model of group theory" means precisely "group", and does not imply that the model satisfies the ZF axioms. That's the sense in which I was using the phrase "model of group theory". --Trovatore (talk) 19:47, 19 March 2008 (UTC)
[edit conflict] I think you're still misunderstanding. Sure, it's possible to define groups as a special kind of set in ZF set theory. But that is not what we are talking about here. You are probably confused by the fact that ZF is an immensely more complex system than the meager 3 axioms of groups (to which I will refer as GP). But they are the same thing for this discussion. Each of them is a collection of rules governing a world of objects. A bag of objects can either satisfy these rules, in which case it is called a model of the theory, or not. In the case of ZF, the models are very complicated and hard to point out, but I think Gödel's constructible universe is an example of one. For GP, every simple little group is a model, and the elements of the group are the basic objects. In ZF, you can have models that satisfy choice, and models that don't; in GP, you can have models that satisfy commutativity, and models that don't. -- Meni Rosenfeld (talk) 19:55, 19 March 2008 (UTC)
Ok, I get you. So, if I'm understanding your definition of completeness correctly, a theory being complete is basically equivalent to there only being one model satisfying it? Since, if there are two models of the theory, they must differ in some way and that way gives rise to a statement which is true in one model and not true in the other. --Tango (talk) 20:05, 19 March 2008 (UTC)
Not quite. It's possible for two models to satisfy all the same first-order statements, but to be nonisomorphic. For example the theory of torsion-free abelian groups is complete, but there are nonisomorphic torsion-free abelian groups. --Trovatore (talk) 20:38, 19 March 2008 (UTC)
If memory serves, all torsion-free abelian groups are of the form ${\displaystyle {\mathbb {Z}}\times \cdots \times {\mathbb {Z}}}$. I sometimes get a little confused with the orders of logical statements, but is ${\displaystyle \exists g\in G,\forall h\in G,\exists n\in {\mathbb {N}},h=g^{n}}$ not a first order statement satisfied by only one of those groups? --Tango (talk) 21:32, 19 March 2008 (UTC)
It's not a first-order statement in the language of groups. The language of groups has no function symbol for nth power and no symbol for the set of all natural numbers. --Trovatore (talk) 21:40, 19 March 2008 (UTC)
Ah, good point. I think we've got there, I have no more questions! Thank you. (Well, I'm sure one will come to me at 3am, but that can wait until tomorrow. ;)) You know... I really do wish my Maths dept. had a proper course on logic, it seems a really major topic to miss out (we did a bit in 1st year, but it was really just half a module on set theory in rather vague terms - the phrase "first order logic" did not appear once). I've done some reading on the subject, I should do some more... --Tango (talk) 21:47, 19 March 2008 (UTC)
For the record, the rationals Q, the reals R, and the p-adic integers Zp are all torsion-free abelian groups. Your structure theorem holds for finitely generated abelian groups. Tesseran (talk) 16:03, 21 March 2008 (UTC)
Excellent point. That doesn't change my (nevertheless incorrect) point, though. --Tango (talk) 16:11, 21 March 2008 (UTC)
Theories in physics (thought to be the trunk of the science "tree") are not necessarily consistent. The helpfulness of established theories begins and ends with orders of magnitude. This is why we have semiclassical physics, and the mesoscopic scale, and why we differentiate "Physics in the Classical Limit," Relativity, and quantum theory. Mac Davis (talk) 08:01, 19 March 2008 (UTC)
Did I misunderstand Gödel's Incompleteness Theorems, or are the Incompleteness Theorems really about distinguishing between classical physics and modern physics? I thought the Incompleteness Theorems were all about math, not physics. And the Incompleteness Theorems should be applicable to all kinds of science, not just physics. No offense intended; I just hope someone can clarify the concept. - Justin545 (talk) 09:23, 19 March 2008 (UTC)
The incompleteness theorems don't apply particularly well to the kind of math you're probably familiar with. That is, they aren't relevant. They claim that a specific very sensible, very general way of justifying things doesn't work very well in certain contexts. That doesn't mean that what we were trying to justify is wrong, just that we'll have to look somewhere else for confidence in it. It also throws essentially no doubt on actual arithmetic, which deals only with fairly small numbers and can be justified by direct experience and some common sense. Black Carrot (talk) 08:10, 19 March 2008 (UTC)

Theorems are proved based on axioms. Experience in proving theorems made mathematicians conjecture that every true statement could eventually be proved. This conjecture turned out to be naive. The incompleteness theorem states that the conjecture is not true: the fact that some statement cannot be proved does not imply that the statement is false. The incompleteness theorem does not threaten the reliability of mathematics. Bo Jacoby (talk) 11:06, 19 March 2008 (UTC).

>> "The incompleteness theorem does not threaten the reliabilty of mathematics."
I think you mean the mathematics we use is consistent but not complete, since there are still some true statements that cannot be proven by mathematics, and you also said that mathematics is reliable. But your opinion sounds a bit different from the others'. For example, someone said "mathematicians believe that mathematics is consistent", which means mathematicians "cannot prove" that mathematics is consistent. - Justin545 (talk) 01:20, 20 March 2008 (UTC)
Before the incompleteness theorem mathematics was supposed to be consistent and complete. After the incompleteness theorem mathematics is known to be incomplete. The incompleteness theorem does not clarify whether mathematics is consistent or not. So I do not say that mathematics is reliable as a consequence of the incompleteness theorem, nor do I say that mathematics is unreliable as a consequence of the incompleteness theorem. Bo Jacoby (talk) 05:03, 22 March 2008 (UTC).

In general, mathematicians believe that mathematics (however we may choose to define that term) is consistent. This is mainly because we have not found an inconsistency (a statement P such that both P and not-P can be proved). We can even express this "conjecture" as a (humongously complex) arithmetical statement. Problem is that we also know, thanks to Gödel, that we cannot prove this statement - at least, not without stepping up to some more powerful axiom system, which then leads to a "turtles all the way down" type of regression. Bottom line is, most mathematicians say "that's interesting and slightly weird" but they don't lose sleep worrying that mathematics might be inconsistent. On a scale of rational evidence-based confidence, you can put the consistency of mathematics right up at the 99.99% mark. Gandalf61 (talk) 12:21, 19 March 2008 (UTC)

>> "(a statement P such that both P and not-P can be proved) ... thanks to Gödel, that we cannot prove this statement"
I believe Gödel used "logic" to build his Incompleteness Theorems. But isn't logic a kind of mathematics? If logic is a kind of mathematics, Gödel was using a tool whose consistency cannot be assured to prove his Incompleteness Theorems. In other words, the Incompleteness Theorems are questionable since logic itself is questionable. - Justin545 (talk) 01:57, 20 March 2008 (UTC)
It sounds like we cannot use logic to justify logic itself. It's meaningless! If we doubt logic, we should also doubt the natural languages we use, such as English, Chinese, etc., since logic is just a symbolization of our natural language. We can do inference with logic, and we can also do inference with our language. - Justin545 (talk) 02:15, 20 March 2008 (UTC)
Learn to live with uncertainty. (Like you have a choice....) --Trovatore (talk) 02:23, 20 March 2008 (UTC)
Maybe, learn to live with confidence in the logic and the language. - Justin545 (talk) 02:28, 20 March 2008 (UTC)
Confidence is one thing; fully justified certainty is quite a different thing. You seem to be looking for the latter. You're not going to find it. --Trovatore (talk) 02:33, 20 March 2008 (UTC)
Knowing the incompleteness theorems somewhat shakes my confidence in logic and math. I was just trying to find my confidence in them through this discussion. I'm not pursuing absolute certainty. Imperfection is allowed. - Justin545 (talk) 03:01, 20 March 2008 (UTC)
Ah, I see. Well, I suppose maybe they should shake your confidence. Just not very much. The take-away message is that mathematics is not really different in kind from the experimental sciences -- you can have confidence in it because it's observed to work, not because it's built up from an unassailable foundation via unassailable steps. The latter idea never really did make sense, even before Gödel -- there was always an infinite regress built into it, as you've noticed. But Gödel does seem to have made people come to terms with this more. --Trovatore (talk) 03:13, 20 March 2008 (UTC)
Literally, your prior response "I would say the mathematics underlying biology, chemistry, etc is far less likely to be in error than the biology and chemistry themselves." seems to contradict "mathematics is not really different in kind from the experimental sciences". Well, it's just my nitpicking habit; I'm not trying to "offend" you again. And excuse my English; I don't know why you used "kind" in italics.
>> "not because it's built up from an unassailable foundation via unassailable steps."
The foundation may not be unassailable, but the steps are unassailable, I think. That's why I like deduction more than induction.
>> "The latter idea never really did make sense"
The latter idea? - Justin545 (talk) 03:40, 20 March 2008 (UTC)
I said the mathematics was "far less likely" to be in error. That's a difference in degree, not a difference in kind. Your English seems to be pretty good, but I see on your user page that you're not a native speaker -- are you familiar with the phrases "different in degree" and "different in kind"? --Trovatore (talk) 04:15, 20 March 2008 (UTC)
Sure, I know what "different in degree" and "different in kind" mean. Maybe I understand your point now. Well, thanks for the compliment. But actually I cannot write articles without a dictionary. Besides, my English grammar is questionable. - Justin545 (talk) 05:03, 20 March 2008 (UTC)

## Quantum Mechanics: Orthogonality of Dirac Delta Function

The functions ${\displaystyle u(x)}$ and ${\displaystyle v(x)}$ are said to be orthogonal on interval ${\displaystyle (\alpha _{1},\beta _{1})}$ if their inner product is zero

${\displaystyle \int _{\alpha _{1}}^{\beta _{1}}u(x)v(x)\,dx=0}$

(1)

For complex-valued functions or kets ${\displaystyle f(x)=|f\rangle }$ and ${\displaystyle g(x)=|g\rangle }$, they are said to be orthogonal on interval ${\displaystyle (\alpha _{2},\beta _{2})}$ when

${\displaystyle \langle f|g\rangle =\int _{\alpha _{2}}^{\beta _{2}}f^{*}(x)g(x)\,dx=0}$

(2)

To continue my last discussion, Quantum Mechanics: Entangled Wave Function, my question is how to prove the orthogonality of the Dirac delta function ${\displaystyle \delta (x)}$ mathematically? Or is there some related resource? Thanks! - Justin545 (talk) 08:53, 20 March 2008 (UTC)

This is really a maths question, but the way to prove it is to look at the definition. The Dirac delta function is zero at all places except its argument. If you multiply this zero value by another Dirac delta, you will get zero, unless the arguments are the same, in which case it will be greater than zero: ${\displaystyle \delta (t-x)\delta (t-y)=0}$ if x is not equal to y. Another way to look at the Dirac delta is that it is a sampling function: when you integrate its product with another function, it samples the other function at the argument of the Dirac delta. Graeme Bartlett (talk) 11:14, 20 March 2008 (UTC)
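Since δ is a distribution rather than an ordinary function, these formal statements are usually made concrete with a nascent delta, e.g. a narrow Gaussian that tends to δ as its width shrinks. A rough numerical sketch of the sampling property and the (near-)zero overlap of deltas at distinct points; the width, grid, and test points below are arbitrary assumptions:

```python
import math

S = 0.01       # width of the nascent delta (illustrative choice)
STEP = 0.001   # Riemann-sum grid spacing
GRID = [i * STEP for i in range(-500, 1501)]   # covers [-0.5, 1.5]

def delta(x, s=S):
    # narrow Gaussian approximation to the Dirac delta, unit area
    return math.exp(-x * x / (2 * s * s)) / (s * math.sqrt(2 * math.pi))

def integrate(fn):
    # crude Riemann sum over the grid
    return STEP * sum(fn(x) for x in GRID)

a, b = 0.3, 0.8

# sampling property: integral of delta(x-a) f(x) dx ~ f(a)
assert abs(integrate(lambda x: delta(x - a) * math.cos(x)) - math.cos(a)) < 1e-3

# "orthogonality": deltas centered at different points have ~zero overlap
assert integrate(lambda x: delta(x - a) * delta(x - b)) < 1e-6

# same center: the overlap diverges as the width shrinks (here ~ 1/(2*S*sqrt(pi)))
assert integrate(lambda x: delta(x - a) ** 2) > 1.0
```

This mirrors the formal statement ⟨δ(x−a), δ(x−b)⟩ = δ(a−b): zero for a ≠ b, divergent for a = b.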

## Quantum: Measurement vs. Schrödinger Equation

1. Article Copenhagen interpretation: Each measurement causes a change in the state of the particle, known as wavefunction collapse.

2. Article Schrödinger equation: The Schrödinger equation is commonly written as an operator equation describing how the state vector evolves over time.

Although I don't fully understand quantum mechanics, the two items above seem to be related to each other.

When an observable of a quantum system is measured, the state ${\displaystyle |\psi (t)\rangle }$ of the system can be expressed as

${\displaystyle |\psi (t)\rangle =\sum _{i}\psi _{i}|i\rangle }$

(1)

where ${\displaystyle |i\rangle }$ is the ${\displaystyle i}$th eigenfunction of the observable, associated with eigenvalue ${\displaystyle i}$, and

${\displaystyle \psi _{i}=\langle i|\psi (t)\rangle }$

(2)

which will "suddenly" or "discretely" collapse from ${\displaystyle |\psi \rangle }$ to one of the terms, say ${\displaystyle \psi _{a}|a\rangle }$, on the right-hand side of (1). The rest of the terms, those not associated with eigenvalue ${\displaystyle a}$, simply vanish after the measurement.
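The collapse described by (1) and (2) can be sketched in a finite-dimensional toy model; the coefficients below are illustrative, and the outcome probabilities follow the Born rule |ψᵢ|²:

```python
import random

# state in the observable's eigenbasis: psi[i] = <i|psi(t)> (made-up values)
psi = [0.6 + 0.0j, 0.0 + 0.8j]           # normalized: 0.36 + 0.64 = 1

probs = [abs(c) ** 2 for c in psi]        # Born rule: P(outcome i) = |psi_i|^2
assert abs(sum(probs) - 1.0) < 1e-12

# a measurement yields eigenvalue a with probability |psi_a|^2 ...
a = random.choices(range(len(psi)), weights=probs)[0]

# ... and the state collapses onto that single term, renormalized
# (the other terms vanish; the phase of psi_a is preserved)
collapsed = [psi[i] / abs(psi[a]) if i == a else 0.0 for i in range(len(psi))]
assert abs(sum(abs(c) ** 2 for c in collapsed) - 1.0) < 1e-12
```

Note the discontinuity: the map psi → collapsed is a sudden projection, not a solution of any differential equation.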

On the other hand, Schrödinger equation

${\displaystyle {\hat {H}}(t)\left|\psi \left(t\right)\right\rangle =\mathrm {i} \hbar {\frac {d}{dt}}\left|\psi \left(t\right)\right\rangle }$

(3)

where

${\displaystyle {\hat {H}}(t)=-{\frac {\hbar ^{2}}{2m}}\nabla ^{2}+V\left(\mathbf {r} ,t\right)}$

(4)

describes how the state vector ${\displaystyle |\psi (t)\rangle }$ evolves over time. When the state ${\displaystyle |\psi (t)\rangle }$ of the system is measured, the apparatus measuring the system interacts with the system and changes the potential field ${\displaystyle V\left(\mathbf {r} ,t\right)}$. Therefore, the state ${\displaystyle |\psi (t)\rangle }$ should evolve "smoothly" or "continuously" according to the varying potential ${\displaystyle V\left(\mathbf {r} ,t\right)}$ during the measurement. According to the Schrödinger equation (3) and (4), together with ${\displaystyle V\left(\mathbf {r} ,t\right)}$, we should be able to figure out the final state of the system after the measurement.
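For a time-independent Hamiltonian expanded in its own eigenbasis, (3) is solved exactly by attaching the phase e^(−iEᵢt/ħ) to each coefficient, which makes the "smooth" unitary evolution explicit. A toy sketch in natural units (ħ = 1; the energies and initial state are made-up illustrative values):

```python
import cmath

HBAR = 1.0        # natural units (assumption)
E = [1.0, 2.5]    # sample eigenvalues of H (illustrative)
psi0 = [0.6, 0.8j]  # initial coefficients in H's eigenbasis, norm 1

def evolve(psi, t):
    # exact solution of i*hbar d|psi>/dt = H|psi> for time-independent H:
    # each eigencomponent just rotates by a phase
    return [c * cmath.exp(-1j * En * t / HBAR) for c, En in zip(psi, E)]

for t in (0.0, 0.5, 10.0):
    psi_t = evolve(psi0, t)
    norm = sum(abs(c) ** 2 for c in psi_t)
    assert abs(norm - 1.0) < 1e-12               # evolution is unitary
    assert abs(abs(psi_t[0]) ** 2 - 0.36) < 1e-12  # smooth; no sudden jumps
```

Every state reachable this way remains a superposition; nothing in the dynamics ever deletes terms, which is exactly the tension with the collapse picture that the question raises.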

It seems that the measuring process can be explained in the two ways above: wavefunction collapse and the Schrödinger equation. Do they contradict each other? Is "wavefunction collapse" compatible with the "Schrödinger equation"? - Justin545 (talk) 08:12, 26 March 2008 (UTC)

Yes, they do contradict. There is no place for a collapse in Schrödinger's Equation, which is one reason why David Bohm concluded that there can be no collapse of a wave function, that it's a figment of the model. — kwami (talk) 08:34, 28 March 2008 (UTC)
Figment of the model? I'm amazed that they contradict, since the two items are considered postulates of quantum mechanics in some quantum mechanics textbooks, IIRC. It should imply that at least one of the two items is wrong. So has David Bohm or someone else resolved the contradiction? And how about the experimental evidence? Which one does it support? - Justin545 (talk) 08:55, 28 March 2008 (UTC)
The Copenhagen interpretation is just that, an interpretation. It has no empirical support (or at least it didn't some years ago) and is in no way an axiom of QM. I've heard people who use it make the excuse that none of the other interpretations have any empirical support either, even though some of them are less counter-intuitive than Copenhagen. Bohm attempted to create a deterministic hidden-variable QM, but was unable to solve some fundamental problems before he died. One of his students continued with his work, but I don't know if he ever got anywhere. — kwami (talk) 09:04, 28 March 2008 (UTC)
I think neither the Schrödinger equation nor wavefunction collapse could be an axiom of QM. Therefore, they are considered "postulates" of QM. The Schrödinger equation seems to correctly predict the spectral lines of each atomic model. On the other hand, wavefunction collapse seems to correctly predict the phenomenon of quantum entanglement. And both of these predictions have been observed in many experiments. The experimental results seem to support both items. But some subtle differences may be missing (insufficient precision? relativity?). When reading the article Copenhagen interpretation, we should also notice the sentence "The Copenhagen interpretation consists of attempts to explain the experiments and their mathematical formulations in ways that do not go beyond the evidence to suggest more (or less) than is actually there." - Justin545 (talk) 09:41, 28 March 2008 (UTC)
>> "There is no place for a collapse in Schrödinger's Equation"
Theoretically, is it possible to build a thought experiment in which the measuring process is simulated, and to use the Schrödinger equation to find out the result of the experiment? Has someone done this before? - Justin545 (talk) 10:02, 28 March 2008 (UTC)
As above, it is an open problem. There are ongoing efforts to create "measurement" systems that can be fully modeled quantum mechanically via Schrödinger's equation for all parts of the system. Observationally it is certainly true that wavefunctions "collapse", by which one means that a single particle state interacting with a much larger collection of particles will usually be observed to reside in an eigenstate; however, the mechanics of how this occurs is not well understood. The dynamical timescale is apparently quite short, and the systems that need to be modelled are fairly large (e.g. 30 or 40 plus particles evolving simultaneously). Dragons flight (talk) 16:02, 28 March 2008 (UTC)
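To give a flavor of how such modeling goes, here is a deliberately tiny toy model (an editor's sketch under strong simplifying assumptions, not a real simulation of the 30-40 particle systems mentioned above): a qubit in superposition interacts unitarily with N environment qubits via controlled rotations, and the off-diagonal ("coherence") element of its reduced density matrix shrinks as N grows, which is why the outcome looks like a collapse to an eigenstate even though only Schrödinger evolution occurred.

```python
import math
import random

# Hypothetical toy decoherence model: a system qubit a|0> + b|1> interacts
# with N environment qubits, each initially |0>. Environment qubit k is
# rotated by a random angle theta_k only when the system is in |1>
# (a controlled rotation). The environment states conditioned on |0> and
# |1> then overlap by prod(cos(theta_k)), and that overlap multiplies the
# off-diagonal element rho_01 of the system's reduced density matrix.

def coherence_after(n_env, seed=0):
    random.seed(seed)                        # deterministic for the demo
    a = b = 1 / math.sqrt(2)                 # equal superposition
    overlap = 1.0
    for _ in range(n_env):
        theta = random.uniform(0.5, 1.5)     # random coupling angle
        overlap *= math.cos(theta)           # <E_k(0)|E_k(1)> for one qubit
    return abs(a * b * overlap)              # |rho_01| of the reduced matrix

print(coherence_after(1))    # one environment qubit: partially decohered
print(coherence_after(40))   # 40 qubits: coherence is already negligible
```

Nothing here is non-unitary; the coherence merely becomes unobservably small once it is shared with a few dozen environment degrees of freedom.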
It is likely that only a numerical solution to Schrödinger's equation can be obtained for so many particles. Finding an exact closed-form solution for so many particles seems impossible.
After reviewing the article wavefunction collapse this morning, I noticed this:
By the time John von Neumann wrote his famous treatise Mathematische Grundlagen der Quantenmechanik in 1932[1], the phenomenon of "wave function collapse" was accommodated into the mathematical formulation of quantum mechanics by postulating that there were two processes of wave function change:
1. The probabilistic, non-unitary, non-local, discontinuous change brought about by observation and measurement, as outlined above.
2. The deterministic, unitary, continuous time evolution of an isolated system that obeys Schrödinger's equation (or nowadays some relativistic, local equivalent).
In general, quantum systems exist in superpositions of those basis states that most closely correspond to classical descriptions, and -- when not being measured or observed, evolve according to the time dependent Schrödinger equation, relativistic quantum field theory or some form of quantum gravity or string theory, which is process (2) mentioned above. However, when the wave function collapses -- process (1) -- from an observer's perspective the state seems to "leap" or "jump" to just one of the basis states and uniquely acquire the value of the property being measured, ${\displaystyle e_{i}}$, that is associated with that particular basis state. After the collapse, the system begins to evolve again according to the Schrödinger equation or some equivalent wave equation.
It seems that we should treat wave function change as an if-then-else statement in programming: if the change is discrete, use the wavefunction-collapse method; else, if the change is continuous, use Schrödinger's method. Not quite an elegant approach in science. - Justin545 (talk) 02:30, 29 March 2008 (UTC)
Any mathematical model that involves "alakazaam!" is obviously fundamentally flawed. However, QM is also the most precisely confirmed theory in human history. As a result, you get the null "Shut up and calculate!" interpretation, which seems to be what most people actually abide by. — kwami (talk) 18:19, 28 March 2008 (UTC)
It sounds like you (kwami) are really sick of quantum theories and the people who are learning them. Unfortunately, I am not here trying to pick a fight with someone over my post. I mean, maybe you want to ignore this post and take a rest for a while. - Justin545 (talk) 00:41, 29 March 2008 (UTC)

## Quantum: Determine the Force between the Electrons

According to Coulomb's law, when two electrons are brought close to each other, electrostatic forces will act on them, and the force can be determined as

${\displaystyle F={1 \over 4\pi \varepsilon _{0}}{\frac {q_{1}q_{2}}{r^{2}}}}$

where ${\displaystyle r}$ is the distance between the two electrons. But according to the Uncertainty Principle, we cannot be sure of the positions of the electrons, so how can we determine ${\displaystyle r}$? - Justin545 (talk) 09:18, 19 April 2008 (UTC)

Coulomb's law is a classical approximation and only applies when r is large compared to inter-atomic distances and the charges are stationary or moving slowly compared to the speed of light. For the full quantum Monty, you need quantum electrodynamics, which explains non-classical phenomena such as the Casimir effect. Gandalf61 (talk) 11:45, 19 April 2008 (UTC)
Well, you don't actually need QED to make use of Coulomb's law in quantum mechanics. I just finished an elementary quantum mechanics course (where we never touched QED), and we used Coulomb's law all the time. It turns out that "force" is not a very useful concept in quantum mechanics. Much more often, one speaks of the potential, which is
${\displaystyle V={1 \over 4\pi \varepsilon _{0}}{\frac {q_{1}q_{2}}{r}}}$
(If you know any vector calculus, the potential is defined so that ${\displaystyle F=-\nabla V}$, where ${\displaystyle \nabla }$ is the gradient operator.) So the way you actually use Coulomb's law is that you have a space of possible positions of some particles (say, a proton and an electron in a hydrogen atom), and you use Coulomb's law to assign a value of the potential to every point of this space. Then you solve the Schrödinger equation on this space to obtain the possible energy states of the system. As you say, the distance between the two particles is uncertain, but this is not a problem because the electrostatic contribution to the energy is also uncertain. It is only the total energy that is certain; it is uncertain what fraction of that is electrostatic potential energy and what fraction is kinetic energy. (If you're confused about how the sum of two uncertain numbers can be certain, imagine I flip a penny and a nickel and don't tell you the results, but I tell you I got one head and one tail. The total number of heads is now certain, but the number of pennies or nickels that came up heads is uncertain.) —Keenan Pepper 14:42, 19 April 2008 (UTC)
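As a quick numerical aside (an editor's sketch, not part of the original reply): the relation ${\displaystyle F=-\nabla V}$ quoted above can be checked in one dimension by differentiating the Coulomb potential numerically, in units where ${\displaystyle q_{1}q_{2}/4\pi \varepsilon _{0}=1}$.

```python
# Check that F = -dV/dr recovers Coulomb's inverse-square force from the
# 1/r potential, in units where q1*q2/(4*pi*eps0) = 1.

def V(r):
    return 1.0 / r                    # Coulomb potential

def F(r, h=1e-6):
    # central-difference approximation of F = -dV/dr
    return -(V(r + h) - V(r - h)) / (2 * h)

for r in (0.5, 1.0, 2.0):
    print(round(F(r), 4), round(1.0 / r**2, 4))   # the two columns agree
```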
Suppose a wave function of two electrons is ${\displaystyle \psi (x,y)}$ where ${\displaystyle x}$ and ${\displaystyle y}$ are the positions of the respective electrons. Then ${\displaystyle |\psi (x,y)|^{2}}$ should be the probability density function for finding the first electron at ${\displaystyle x}$ and the second electron at ${\displaystyle y}$. And the normalization condition should be ${\displaystyle \int _{-\infty }^{\infty }\int _{-\infty }^{\infty }|\psi (x,y)|^{2}\,dx\,dy=1}$. The distance between them should be ${\displaystyle r=|x-y|}$. The probability of finding the first electron in ${\displaystyle (x',x'+dx)}$ and the second electron in ${\displaystyle (y',y'+dy)}$ should be ${\displaystyle P(x',y')=\int _{y'}^{y'+dy}\int _{x'}^{x'+dx}|\psi (x,y)|^{2}\,dx\,dy}$. So do you mean the potential is determined by the weighted sum in discrete form
${\displaystyle V={1 \over 4\pi \varepsilon _{0}}\left(P_{1}{\frac {q_{1}q_{2}}{r_{1}}}+P_{2}{\frac {q_{1}q_{2}}{r_{2}}}+P_{3}{\frac {q_{1}q_{2}}{r_{3}}}+...\right)}$
{\displaystyle {\begin{aligned}V=&{1 \over 4\pi \varepsilon _{0}}\left[P(x'_{1},y'_{1}){\frac {q_{1}q_{2}}{|x'_{1}-y'_{1}|}}+P(x'_{1},y'_{2}){\frac {q_{1}q_{2}}{|x'_{1}-y'_{2}|}}+P(x'_{1},y'_{3}){\frac {q_{1}q_{2}}{|x'_{1}-y'_{3}|}}+...\right]\\+&{1 \over 4\pi \varepsilon _{0}}\left[P(x'_{2},y'_{1}){\frac {q_{1}q_{2}}{|x'_{2}-y'_{1}|}}+P(x'_{2},y'_{2}){\frac {q_{1}q_{2}}{|x'_{2}-y'_{2}|}}+P(x'_{2},y'_{3}){\frac {q_{1}q_{2}}{|x'_{2}-y'_{3}|}}+...\right]\\+&{1 \over 4\pi \varepsilon _{0}}\left[P(x'_{3},y'_{1}){\frac {q_{1}q_{2}}{|x'_{3}-y'_{1}|}}+P(x'_{3},y'_{2}){\frac {q_{1}q_{2}}{|x'_{3}-y'_{2}|}}+P(x'_{3},y'_{3}){\frac {q_{1}q_{2}}{|x'_{3}-y'_{3}|}}+...\right]\\&\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\vdots \,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\vdots \\\end{aligned}}}
${\displaystyle V={1 \over 4\pi \varepsilon _{0}}\sum _{x'}\sum _{y'}P(x',y'){\frac {q_{1}q_{2}}{|x'-y'|}}}$
${\displaystyle V={1 \over 4\pi \varepsilon _{0}}\sum _{x',y'}P(x',y'){\frac {q_{1}q_{2}}{|x'-y'|}}}$
or in continuous form
${\displaystyle V={1 \over 4\pi \varepsilon _{0}}\int _{-\infty }^{\infty }\int _{-\infty }^{\infty }|\psi (x,y)|^{2}{\frac {q_{1}q_{2}}{|x-y|}}\,dx\,dy}$
? - Justin545 (talk) 06:00, 20 April 2008 (UTC)
Getting out of my depth here, so this might be a stupid comment. I don't see how you could use that in a real calculation. ψ(x,y), and hence P(x',y'), is not a given. The starting information is usually the potential function V(r,φ,z), which is then fed into Schrödinger to get the answer. Also, I cannot understand why you are working in two dimensions only; you need x,y,z in cartesian co-ordinates - or was that just for brevity? SpinningSpark 08:53, 20 April 2008 (UTC)
The use of the symbol ${\displaystyle y}$ may be confusing, but ${\displaystyle y}$ doesn't refer to a y-axis perpendicular to the x-axis. For brevity, I assume both electrons lie on the same line, so that they can only move in one-dimensional space (imagine two electrons in the same wire of infinite length).
The starting information, the potential field ${\displaystyle V}$, is where my question came from. For a system which consists of only one charged particle, an electron for example, the potential field ${\displaystyle V}$ of the particle should be determined by its environment. But consider a system of more than one charged particle: the potential field ${\displaystyle V}$ should be a function of those charged particles as well as their environment, since the charged particles will interact with each other (either attraction or repulsion due to electrostatic forces). - Justin545 (talk) 11:02, 20 April 2008 (UTC)

## Quantum: Potential between the Electrons

Suppose there are two electrons in a one-dimensional space (imagine two electrons in the same wire of infinite length). And the wave function of the two electrons is ${\displaystyle \Psi =\psi (x_{1},x_{2})}$ where ${\displaystyle x_{1}}$ and ${\displaystyle x_{2}}$ are the positions of the respective electrons. Then ${\displaystyle |\psi (x_{1},x_{2})|^{2}}$ should be the probability density function for finding the first electron at ${\displaystyle x_{1}}$ and the second electron at ${\displaystyle x_{2}}$. And the normalization condition of the wave function ${\displaystyle \Psi }$ should be

${\displaystyle \int _{-\infty }^{\infty }\int _{-\infty }^{\infty }|\psi (x_{1},x_{2})|^{2}\,dx_{1}\,dx_{2}=1}$

(1)

And the potential ${\displaystyle V}$ between the electrons should be

${\displaystyle V={1 \over 4\pi \varepsilon _{0}}{\frac {q_{1}q_{2}}{r}}}$

(2)

according to Coulomb's law and the fact that ${\displaystyle F=-\nabla V}$, where ${\displaystyle r}$ is the distance between the two electrons. For two electrons in one-dimensional space, it should be

${\displaystyle r=|x_{1}-x_{2}|}$

(3)

Substituting (3) into (2), the potential ${\displaystyle V}$ becomes

${\displaystyle V={1 \over 4\pi \varepsilon _{0}}{\frac {q_{1}q_{2}}{|x_{1}-x_{2}|}}={q_{1}q_{2} \over 4\pi \varepsilon _{0}}{\frac {1}{|x_{1}-x_{2}|}}\equiv v(x_{1},x_{2})}$

(4)

Because of the Uncertainty Principle, we cannot be certain of the position of each electron. But we know the probability of finding them via ${\displaystyle |\psi (x_{1},x_{2})|^{2}}$. (One way to find the potential ${\displaystyle V}$ is to introduce quantum electrodynamics, per Gandalf61's suggestion in the discussion Determine the Force between the Electrons, which, however, seems too complex to me.) Then my idea to find the effective potential ${\displaystyle V_{d}}$ in discrete form is simply to calculate the weighted sum

${\displaystyle V_{d}=P_{1}V_{1}+P_{2}V_{2}+P_{3}V_{3}+...}$

where ${\displaystyle P_{1}}$, ${\displaystyle P_{2}}$, ${\displaystyle P_{3}}$,... are probabilities of finding the two electrons at different positions and ${\displaystyle V_{1}}$, ${\displaystyle V_{2}}$, ${\displaystyle V_{3}}$,... are the potentials at the corresponding positions. So

${\displaystyle V_{d}=\sum _{x_{2}}\sum _{x_{1}}P(x_{1},x_{2})v(x_{1},x_{2})}$
${\displaystyle V_{d}=\sum _{x_{2}}\sum _{x_{1}}P(x_{1},x_{2})\left({q_{1}q_{2} \over 4\pi \varepsilon _{0}}{\frac {1}{|x_{1}-x_{2}|}}\right)}$
${\displaystyle V_{d}={q_{1}q_{2} \over 4\pi \varepsilon _{0}}\sum _{x_{2}}\sum _{x_{1}}{\frac {P(x_{1},x_{2})}{|x_{1}-x_{2}|}}}$

where ${\displaystyle P(x_{1},x_{2})}$ is the probability of finding the first electron at ${\displaystyle x_{1}}$ and the second electron at ${\displaystyle x_{2}}$. Or, the effective potential ${\displaystyle V_{c}}$ in continuous form

${\displaystyle V_{c}=\int _{-\infty }^{\infty }\int _{-\infty }^{\infty }|\psi (x_{1},x_{2})|^{2}\,v(x_{1},x_{2})\,dx_{1}\,dx_{2}}$
${\displaystyle V_{c}=\int _{-\infty }^{\infty }\int _{-\infty }^{\infty }|\psi (x_{1},x_{2})|^{2}\,\left({q_{1}q_{2} \over 4\pi \varepsilon _{0}}{\frac {1}{|x_{1}-x_{2}|}}\right)\,dx_{1}\,dx_{2}}$
${\displaystyle V_{c}={q_{1}q_{2} \over 4\pi \varepsilon _{0}}\int _{-\infty }^{\infty }\int _{-\infty }^{\infty }{\frac {|\psi (x_{1},x_{2})|^{2}}{|x_{1}-x_{2}|}}\,dx_{1}\,dx_{2}}$

and my question is: can I determine the potential of the two electrons by the equation ${\displaystyle V_{c}={q_{1}q_{2} \over 4\pi \varepsilon _{0}}\int _{-\infty }^{\infty }\int _{-\infty }^{\infty }{\frac {|\psi (x_{1},x_{2})|^{2}}{|x_{1}-x_{2}|}}\,dx_{1}\,dx_{2}}$? - Justin545 (talk) 03:16, 27 April 2008 (UTC)

I don't mean to be disrespectful or denigrating, but honestly, this looks suspiciously like a homework problem to me. =Axlq 04:12, 27 April 2008 (UTC)
It appears they've done prior work, indicating that they require assistance on something researched. Wisdom89 (T / C) 04:16, 27 April 2008 (UTC)
I am neither doing homework nor writing a paper. Studying quantum mechanics is just one of my hobbies in my free time. - Justin545 (talk) 05:15, 27 April 2008 (UTC)
You have the right form for the expectation value of the potential between the electrons; however, you have to use ${\displaystyle v(x_{1},x_{2})}$ in solving the Schrödinger equation (written below), by which you determine the wave function. Until you solve the Schrödinger equation you won't know the form of Ψ, and hence you couldn't compute the expectation value anyway.
${\displaystyle (-{\hbar ^{2} \over 2m_{1}}{\partial ^{2} \over \partial x_{1}^{2}}-{\hbar ^{2} \over 2m_{2}}{\partial ^{2} \over \partial x_{2}^{2}}+v(x_{1},x_{2}))\Psi (x_{1},x_{2},t)=i\hbar {\partial \over \partial t}\Psi (x_{1},x_{2},t)}$
Here I've included the explicit time dependence because your proposed boundary condition (two electrons on an infinite wire) would not admit any non-trivial time-independent solutions. Dragons flight (talk) 08:30, 27 April 2008 (UTC)
Does that mean I can know the form of ${\displaystyle \Psi }$ by solving the following Schrödinger equation
${\displaystyle \left[-{\hbar ^{2} \over 2m_{1}}{\partial ^{2} \over \partial x_{1}^{2}}-{\hbar ^{2} \over 2m_{2}}{\partial ^{2} \over \partial x_{2}^{2}}+\left({1 \over 4\pi \varepsilon _{0}}{\frac {q_{1}q_{2}}{|x_{1}-x_{2}|}}\right)\right]\Psi (x_{1},x_{2},t)=i\hbar {\partial \over \partial t}\Psi (x_{1},x_{2},t)}$
? Thanks. - Justin545 (talk) 09:43, 27 April 2008 (UTC)
Yes, that looks correct to me. If you extend this to three spatial dimensions, you have the hydrogen-like atom, which is perhaps the most useful exactly solvable model in quantum mechanics. Make sure to work in the center of momentum frame and use the reduced mass; then the problem separates nicely into a center-of-mass motion part (whose eigenstates are plane waves), and a relative motion part (whose eigenstates are atomic orbitals). —Keenan Pepper 18:04, 27 April 2008 (UTC)
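For what it's worth, the expectation-value formula ${\displaystyle V_{c}}$ discussed above can be checked numerically. This is only an illustrative sketch: the wavefunction below is an arbitrary normalized test function (chosen antisymmetric, so the integrand stays finite at ${\displaystyle x_{1}=x_{2}}$), not a solution of the two-electron Schrödinger equation, and units are chosen so that ${\displaystyle q_{1}q_{2}/4\pi \varepsilon _{0}=1}$.

```python
import math

# Evaluate V_c = integral of |psi|^2 / |x1 - x2| for a made-up test
# wavefunction psi(x1,x2) proportional to (x1 - x2)*exp(-(x1^2 + x2^2)/2).
# The antisymmetric factor (x1 - x2) makes the integrand vanish where the
# electrons coincide, so the Coulomb singularity causes no trouble.

def psi2(x1, x2):                        # unnormalized |psi|^2
    return (x1 - x2)**2 * math.exp(-(x1*x1 + x2*x2))

def double_sum(f, lim=6.0, n=300):       # midpoint-rule double integral
    h = 2 * lim / n
    total = 0.0
    for i in range(n):
        x1 = -lim + (i + 0.5) * h
        for j in range(n):
            x2 = -lim + (j + 0.5) * h
            total += f(x1, x2)
    return total * h * h

norm = double_sum(psi2)                  # normalization integral
vc = double_sum(lambda a, b: psi2(a, b) / abs(a - b) if a != b else 0.0) / norm
print(round(vc, 3))                      # close to sqrt(2/pi) ~ 0.798
```

For this particular ψ the integral can also be done by hand (it equals ${\displaystyle {\sqrt {2/\pi }}}$), which the grid sum reproduces; with a real solution of the two-particle equation, the same recipe gives the expectation value of the Coulomb energy.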

## Explain Jargon: 'decay'

From the article Optical pumping:

"...however due to the cyclic nature of optical pumping the bound electron will actually be undergoing repeated excitation and decay between upper and lower state sublevels..."

How do I give a proper (internal) link for the underlined 'decay'? - Justin545 (talk) 10:18, 3 May 2008 (UTC)

I'd say Atomic orbital is your best bet. It doesn't have the word in it, but explains the concept. I think you know the term describes the electron moving to a lower energy state, which used to be described as "falling/decaying to a lower orbit". The article already links to Energy level, which would be my other choice. Orbital decay deals with satellites. I think the problem you are encountering is that while the specialists have moved on to new concepts (Electron configuration, Quantum state), the other fields quoting them still use classical systems and vocabulary. That makes linking rather difficult. Hope this helps. Lisa4edit (talk) 11:50, 3 May 2008 (UTC)
Thanks. Maybe that's why I never see a linked 'decay' in related articles. But do you think we should create a brand-new article or a redirect page for it? There seem to be no relevant articles on the disambiguation page of decay. - Justin545 (talk) 12:20, 3 May 2008 (UTC)
I don't think you should create a new article. I would say that the article Excited state is the right place to explain this meaning of decay. A link could then be put in from the Decay disambiguation page. Both Energy level and Excited state are already linked from the Optical pumping article so I don't think any new links are required in that article. You could join the two terms together in one link though like this:- excitation and decay. SpinningSpark 12:31, 3 May 2008 (UTC)
What is really missing is an unambiguous term for this type of decay, analogous to "radioactive decay", "orbital decay", etc. I can't think of one, however. "Electron decay" doesn't sound right, and "atomic decay" seems to be a synonym for radioactive decay. If there were an unambiguous term, that could be redirected to Excited state, and links could be made to the unambiguous term in articles and on the dab page Decay.--Srleffler (talk) 18:18, 3 May 2008 (UTC)
I've changed the link according to SpinningSpark's suggestion. I think the meaning of 'decay' would be opposite to the meaning of 'excite', but you know I'm not a physicist and am not sure about it. Following Lisa4edit, I may name the new page title 'Energy Level Decay' or 'Decay (energy level)'. Making the link excitation and decay may not be sufficient. As we can see, we cannot even find any 'decay' in the article Excited state, which would still make the meaning of 'decay' ambiguous to a lay reader. Therefore, I tend to create a new page / redirect page for it to explain it explicitly. If not, it would be better to extend the article Excited state and explain 'decay' in the article explicitly. - Justin545 (talk) 00:22, 4 May 2008 (UTC)
Not sure that I'd like that title. The excitation and decay terms are already explained in the continuing sentence "excitation and decay between upper and lower state sublevels." What you are trying to achieve is to make that phrase understandable to a layperson. That is why I suggested linking to Atomic orbital, and Spinningspark suggested Excited state. The former explains the classical model of electrons in orbits. Depending on your educational background in Physics, this may be more accessible to you than the latter link, which homes in on the process based on modern concepts. I'm not sure we need a new stub for "decay". On the one hand, one could explain the concept both in the classical view and in modern terms there. On the other hand, we might end up with a stub that never goes very far; and particularly in Physics, you'd open a Pandora's box of terms that would have to be explained based on various concepts and at various levels of depth. I would not find either 'Energy Level Decay' or 'Decay (energy level)' a sufficiently clear and understandable definition. Your best bet might be to add a couple of phrases to your Optical pumping page. It is not so extensive yet that it could not accommodate another paragraph. Lisa4edit 71.236.23.111 (talk) 03:22, 4 May 2008 (UTC)
When I read Optical pumping and Trapped ion quantum computer, I was confused. That's why I wondered if I could do something to improve the articles and make them clearer. Unfortunately, I am one of the laypersons we are talking about, and I'm not a host of Optical pumping, which means I may not be qualified to make that phrase understandable. As you said, a newly created page could end up as a stub that never goes very far. Therefore, extending Excited state and linking to it may be the alternative choice. The first consideration is that the jargon in question also appears in other articles. For example:
• Trapped ion quantum computer: "...Hyperfine qubits are extremely long-lived (decay time of the order of thousands to millions of years) and phase/frequency stable (traditionally used for atomic frequency standards). Optical qubits are also relatively long-lived (with a decay time of the order of a second)...a laser couples the ion to some excited states which eventually decay to one state...If the ion decays to one of the other states, the laser will continue to excite the ion until it decays to the state that does not interact with the laser...resulting in a photon being released when the ion decays from the excited state. After decay, the ion is continually excited by the laser and repeatedly emits photons..."
(which probably introduces a second jargon term, 'decay time'; again, it's ambiguous to a layperson like me)
Using links may reduce the job of explaining it each time it appears in an article. The second consideration is that I think a jargon term cannot be explained by an article unless it appears in the article at least once. As we can see, the jargon 'decay' can be found neither in Excited state nor in Atomic orbital, which means they are likely not formal articles describing 'decay'. I believe a good article should be "accessible to the lay reader and yet are also useful to the professional working" as stated in Wikipedia:WikiProject Physics. - Justin545 (talk) 06:15, 4 May 2008 (UTC)

I think the standard dictionary definition is sufficient: "to decline from a sound or prosperous condition". Granted, "sound" and "prosperous" are not the terms a physicist would use, but in the context of the original sentence ("decay between upper and lower state sublevels") one can figure out that the decay is this transition between the upper level and the lower level (whatever they are) without having to be a physicist. --Itub (talk) 17:10, 7 May 2008 (UTC)

## Does an Electron Occupy Space? Why?

There is a paragraph in the article Electron I don't get well:

"The electron is currently described as a fundamental or elementary particle. It has no known substructure. Hence, for convenience, it is usually defined or assumed to be a point-like mathematical point charge, with no spatial extension. However, when a test particle is forced to approach an electron, we measure changes in its properties (charge and mass). This effect is common to all elementary particles. Current theory suggests that this effect is due to the influence of vacuum fluctuations in its local space, so that the properties measured from a significant distance are considered to be the sum of the bare properties and the vacuum effects (see renormalization)."

As I understand it, it tells us that an electron DOES occupy space (its radius is 2.8179 × 10−15 m according to the article), and it also somewhat explains why an electron occupies space. But I'm afraid it doesn't help in understanding why an electron occupies space, especially:

• "when a test particle is forced to approach an electron, we measure changes in its properties (charge and mass)"
• "the properties measured from a significant distance are considered to be the sum of the bare properties and the vacuum effects"

The sentences above are confusing to me... does anyone know what they mean?

Additional question: What if we force two electrons collide? Will they just overlap with each other and then pass through without collision?

(solving the questions above might involve some QFT and/or QED, but they are too tough for me) - Justin545 (talk) 06:06, 5 May 2008 (UTC)

While that paragraph does leave much to question, I think the answer is definitely yes. The electron does occupy space. It is just really small compared with the nucleus of the atom. Thus, when a test particle is forced to approach the electron, it causes changes that can be measured. The problem with small things is that trying to measure and define their properties actually changes their behavior. So, we can only guess about their real behavior and measure from a distance so as not to affect the results.

In answer to the second question, since they occupy space, they can't just go through each other. They will collide and bounce apart. Leeboyge (talk) 07:22, 5 May 2008 (UTC)

I'm not sure. But I think things collide because of the fundamental interactions (strong interaction, weak interaction, electromagnetic force and gravitation) between them. If we were able to overcome the repulsion and all other forces between the two electrons, I think they would overlap and go through each other as they approach each other. Even though they occupy space! (please correct me if I'm wrong) - Justin545 (talk) 08:14, 5 May 2008 (UTC)

Sticking my nose temerariously into a question well outside my expertise: I think the answer is that, as of 2008, no one really knows whether the electron (in the sense of the bare charge, not the cloud of virtual electrons and positrons that surround it) occupies space. My understanding is that QED takes it to be a point charge, occupying no space, and having therefore infinite density (and by the way also infinite charge -- the cloud of virtual particles surrounding it "shield" most of that charge). QED works extraordinarily well, but the extent to which it faithfully represents the underlying reality, as opposed to simply being a collection of hacks that get the right answer -- again, nobody knows. --Trovatore (talk) 08:28, 5 May 2008 (UTC)

Telling is the sentence after the statement of the electron's "radius": "This is the radius that is inferred from the electron's electric charge, by using the classical theory of electrodynamics alone, ignoring quantum mechanics." I take that to mean it helps in some physical problems (collision cross sections maybe?), but shouldn't necessarily be interpreted as a literal "size" as we think of it in intuitive macroscopic terms. --Prestidigitator (talk) 08:40, 5 May 2008 (UTC)

I'm afraid it depends on what you mean by "occupy space". If you want the everyday-scale notion of occupying space to extend down to the quantum level then I think you're pretty much forced to say that electrons do occupy space, since most of the space supposedly "occupied" by solid objects only has electron orbitals in it. The space occupied by an electron in this sense has nothing to do with any intrinsic size. A hydrogen atom and a He+ ion both have a single electron orbiting a nucleus, but the former is larger than the latter because of the different nuclear charge, even though it's an identical electron that occupies most of the space.

Protons and neutrons do have an intrinsic size: they're bound states of more fundamental particles (much like an atom is), and the bound state has a characteristic radius (much like an atom does). People have proposed preon theories where the electron is a composite particle, but none of them have had much success. If an electron were composed of preons then I suppose it would have a size in this sense, but I don't know much about this.

Strings (from string theory) have a characteristic size but don't really occupy space, being one-dimensional subsets of a three-or-more-dimensional space.

It seems fairly likely that future physical theories won't have a concept of "space" any more (except as a large-scale approximation), which will make this question even harder to answer.

2.8179 × 10−15 m is the classical electron radius. Ignore it, it's meaningless. -- BenRG (talk) 10:28, 5 May 2008 (UTC)

I could be wrong, but I think what happens if you collide electrons is Møller scattering. --98.217.8.46 (talk) 15:17, 5 May 2008 (UTC)

To conclude, I presume:

1. (classical and modern view) An electron occupies NO space. It is just a macroscopic illusion to say that an electron occupies space, as the article Electron does.
2. (classical view) An electron is just a tiny, movable electric field with definite position. Because the movable electric field has mass ${\displaystyle m_{e}=9.1\times 10^{-31}{\mbox{ kg}}}$ and inertia, when it feels a net force ${\displaystyle {\mathbf {F}}}$ it will accelerate with ${\displaystyle {\mathbf {a}}={\frac {\mathbf {F}}{m_{e}}}}$.
3. (classical view) Because the electron is nothing more than an electric field with mass ${\displaystyle m_{e}}$, when we force two electrons to move toward each other they will finally pass through each other WITHOUT collision. Moreover, they will overlap (perfectly) halfway as they move toward each other. (However, the Pauli exclusion principle tells us "no two identical fermions may occupy the same quantum state simultaneously", so I'm not sure if this holds in modern physics.)
4. Electrons are not solid/stiff objects/entities, so they don't collide. The collision is just an illusion due to the repulsive Coulomb forces between them. So saying an electron occupies space is meaningless.
5. The sentences "Hence, for convenience, it is usually defined or assumed to be a point-like mathematical point charge, with no spatial extension. However, when a test particle is forced to approach an electron, we measure changes in its properties (charge and mass)." in Electron should be removed or rewritten, as they mislead the reader into thinking an electron occupies space.

The conclusion above may sound ridiculous. Please point out my errors if any. - Justin545 (talk) 01:53, 6 May 2008 (UTC)

Two electrons can't be forced to move towards each other until they overlap, since that would require infinite energy (the force follows an inverse-square law, so it approaches infinity as the electrons approach each other). --Tango (talk) 16:56, 6 May 2008 (UTC)
This is false. Because electrons are ultimately probability clouds, the amount of charge at any point in space is infinitesimal. Infinitesimal charges can be separated by negligible distances without requiring infinite energy, and as a consequence electron clouds can overlap (and do routinely in all atoms with more than 1 electron). Dragons flight (talk) 18:51, 6 May 2008 (UTC)
True, but Justin listed that item as being the classical view, and it's incorrect from the classical viewpoint. --Tango (talk) 12:33, 7 May 2008 (UTC)
From the classical viewpoint, two electrons could not overlap because ${\displaystyle V={1 \over 4\pi \varepsilon _{0}}{\frac {q_{1}q_{2}}{r}}}$. If ${\displaystyle r=0}$ there will be infinite potential.
From the quantum viewpoint, two electrons could overlap. For example, suppose ${\displaystyle \Psi =\psi (x_{1},x_{2})}$ is a wavefunction of two electrons with position ${\displaystyle x_{1}}$, ${\displaystyle x_{2}}$ respectively. Then ${\displaystyle \int _{c}^{d}\int _{a}^{b}|\psi (x_{1},x_{2})|^{2}\,dx_{1}\,dx_{2}}$ will be the probability of finding the first electron between interval ${\displaystyle (a,b)}$ and the second electron between interval ${\displaystyle (c,d)}$. If ${\displaystyle \psi (x_{1},x_{2})}$ is well-designed such that:
1. ${\displaystyle \int _{c}^{d}\int _{a}^{b}|\psi (x_{1},x_{2})|^{2}\,dx_{1}\,dx_{2}\neq 0}$
2. ${\displaystyle (a,b)}$ and ${\displaystyle (c,d)}$ overlap. (which implies ${\displaystyle a=c}$ and ${\displaystyle b=d}$)
3. ${\displaystyle a}$ is very close to ${\displaystyle b}$ so they are almost at the same point
then we have a chance (with non-zero probability) of finding the two electrons arbitrarily close to each other! However, two overlapping electrons may sound like a violation of the Pauli exclusion principle. But as long as the two electrons have different momenta, they are not in the same quantum state. (again I could be wrong. please point out my errors) - Justin545 (talk) 02:11, 7 May 2008 (UTC)
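Justin's double-integral argument can be checked numerically. Below is a toy Python sketch (my own illustration — the product wavefunction built from two particle-in-a-box orbitals is an assumption, and spin/antisymmetrization are deliberately ignored) showing that the probability of finding both electrons in the same tiny interval is nonzero:

```python
import math

# Two orthogonal particle-in-a-box orbitals on [0, 1]:
# phi_n(x) = sqrt(2) * sin(n * pi * x)
def phi(n, x):
    return math.sqrt(2) * math.sin(n * math.pi * x)

# Toy two-electron wavefunction: psi(x1, x2) = phi_1(x1) * phi_2(x2)
# (a bare product state -- spin and antisymmetrization are ignored here)
def prob_both_in(a, b, grid=400):
    """P(both electrons in (a, b)) via a midpoint-rule double integral
    of |psi(x1, x2)|^2 over the square (a, b) x (a, b)."""
    h = (b - a) / grid
    total = 0.0
    for i in range(grid):
        for j in range(grid):
            x1 = a + (i + 0.5) * h
            x2 = a + (j + 0.5) * h
            total += (phi(1, x1) * phi(2, x2)) ** 2 * h * h
    return total

p = prob_both_in(0.40, 0.41)  # both electrons in the same tiny interval
```

p comes out small but strictly positive, which is the point: nothing in the formalism forbids the two position measurements from landing arbitrarily close together.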
I added scare quotes to "classical electron radius" in the electron article because it's not really an electron radius, but that doesn't mean electrons don't occupy space! There's no Platonic notion of occupying space that we can appeal to here. It's a matter of how you want to define the English phrase. If you define it in such a way that electrons don't occupy space, then neither does anything else, which makes the phrase useless. I think that's reason enough to use some other definition.
Also, it's pretty hard to make the case that electrons are at all point-like to begin with. In the original quantum mechanics (now taught to undergrads) you start with a quasiclassical theory where electrons are point particles and then "quantize" that, which introduces wave behavior. (The scare quotes here are because quantization (physics) isn't actually quantization. Horrible terminology.) But in quantum field theory you start with a quasiclassical field theory of electrons, so even at the classical level the electrons are spread out and occupy space to the same extent that electromagnetic fields do. When you "quantize" this theory you get field phonons which are referred to as particles, not because they are but because the term was already grandfathered in by the time QFT was developed. The only sense in which electron field quanta are particle-like is that they tend to spatially localize when they interact with a thermodynamically irreversible system (like a cloud chamber). I still don't grok why this happens, but it has something to do with quantum decoherence.
Probably the most misleading part of QFT is Feynman diagrams. They're not pictures of pointlike particles propagating and interacting in spacetime, they're graphs (in the computer-science sense) and their origin is combinatorial. Any integral of the form ${\displaystyle \int \cdots \int e^{P(x,y,\ldots )}\,dx\,dy\cdots }$, where P is a polynomial, can be evaluated with Feynman diagram techniques (for some value of "any"). The idea is to write P = Q + R where Q contains only terms of degree 2 or less, and then Taylor expand R to get ${\displaystyle e^{P}=e^{Q}(1+R+R^{2}/2!+\cdots )}$, which gives you a series of relatively easy Gaussian integrals. The ${\displaystyle R^{n}}$ term of this series can be represented by a collection of Feynman diagrams with ${\displaystyle n}$ interaction vertices. The diagrams always look like QFT diagrams with lines and interaction vertices, whether or not there's anything resembling space or time in the integral. There's a nice discussion of this in chapter I.7 of Quantum Field Theory in a Nutshell by A. Zee, which I just found out is available online. This expansion is extremely useful in QED, where the series converges rapidly, but not so useful in QCD, where it doesn't. There are other approaches to evaluating the integral which give you a totally different "picture of what's going on". I love the book QED, but Feynman went way overboard with his claims about the particle nature of light and electrons. In fact chapter 2, where he talks about photons only, is completely classical in almost every respect. The idea that pulses of light take every possible path from the source to the target, and every path contributes equally to the final amplitude, is just the path-integral form of Maxwell's equations. His discussions of reflection, refraction and diffraction are classical. It's crazy to claim that these arguments show the particle nature of light unless you're willing to argue that Maxwell's theory is also a particle theory. 
What does show the particle nature of light is the fact that the detector (photomultiplier) registers individual clicks of equal amplitude in low light. That's the only genuine quantum mechanics in the whole chapter. -- BenRG (talk) 16:33, 7 May 2008 (UTC)
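BenRG's point that Feynman-diagram expansions are just organized Taylor series can be tried on a toy one-dimensional "path integral". The Python sketch below is my own illustration (the coupling g and the truncation order are arbitrary choices): it expands exp(−g·x⁴) inside a Gaussian integral and sums Gaussian moments — each term is what a k-vertex diagram sum would compute — and reproduces the directly integrated value for small g:

```python
import math

g = 0.01  # small "coupling constant" (arbitrary pick for the demo)

def integrand(x):
    # Toy partition function integrand: exp(P) with P = -x^2/2 - g*x^4
    return math.exp(-x**2 / 2 - g * x**4)

# "Exact" value by brute-force midpoint integration on [-L, L]
# (the tails beyond |x| = 10 are utterly negligible here)
N, L = 100_000, 10.0
h = 2 * L / N
exact = sum(integrand(-L + (i + 0.5) * h) for i in range(N)) * h

def double_factorial(n):
    """n!! for odd n > 0; defined as 1 for n <= 0."""
    return math.prod(range(n, 0, -2)) if n > 0 else 1

# Perturbative series: expand exp(-g x^4) = sum_k (-g)^k x^(4k) / k! and
# use the Gaussian moment  ∫ x^(2m) exp(-x^2/2) dx = (2m-1)!! sqrt(2*pi).
series = math.sqrt(2 * math.pi) * sum(
    (-g)**k / math.factorial(k) * double_factorial(4 * k - 1)
    for k in range(4)
)
```

Four terms of the series already agree with the integral to a few parts in a thousand; for larger g the series degrades, mirroring BenRG's QED-versus-QCD remark about when the expansion is useful.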
BenRG: do you really think those scare quotes are necessary? I went back and saw your edit. I think "So called" is a little harsh as well. Granted, contemporary theory no longer describes the electron this way, but the paragraph qualifies the term as such. Scare quotes would imply that there is something intrinsically wrong with the phrase, which there is not. Qualifying the radius described as Classical gives it a timeframe during which the description was perceived as accurate theory. It would be like putting scare quotes around phlogiston or n-ray. Thoughts? --Shaggorama (talk) 19:57, 8 May 2008 (UTC)

## Formatting

Recently, I made a template Template:NumBlk which can number mathematical equations. For example, the equations look like this before applying the template:

${\displaystyle {\frac {\partial z}{\partial t}}={\frac {\partial z}{\partial x}}{\frac {\partial x}{\partial t}}+{\frac {\partial z}{\partial y}}{\frac {\partial y}{\partial t}}=\left({\frac {\partial }{\partial x}}{\frac {\partial x}{\partial t}}+{\frac {\partial }{\partial y}}{\frac {\partial y}{\partial t}}\right)z}$
${\displaystyle {\frac {\partial ^{2}z}{\partial t^{2}}}={\frac {\partial }{\partial t}}{\frac {\partial z}{\partial t}}={\frac {\partial }{\partial t}}\left[\left({\frac {\partial }{\partial x}}{\frac {\partial x}{\partial t}}+{\frac {\partial }{\partial y}}{\frac {\partial y}{\partial t}}\right)z\right]=\left({\frac {\partial }{\partial x}}{\frac {\partial x}{\partial t}}+{\frac {\partial }{\partial y}}{\frac {\partial y}{\partial t}}\right){\frac {\partial z}{\partial t}}}$

and the equations look like this after applying the template:

${\displaystyle {\frac {\partial z}{\partial t}}={\frac {\partial z}{\partial x}}{\frac {\partial x}{\partial t}}+{\frac {\partial z}{\partial y}}{\frac {\partial y}{\partial t}}=\left({\frac {\partial }{\partial x}}{\frac {\partial x}{\partial t}}+{\frac {\partial }{\partial y}}{\frac {\partial y}{\partial t}}\right)z}$

(Eq. 1)

${\displaystyle {\frac {\partial ^{2}z}{\partial t^{2}}}={\frac {\partial }{\partial t}}{\frac {\partial z}{\partial t}}={\frac {\partial }{\partial t}}\left[\left({\frac {\partial }{\partial x}}{\frac {\partial x}{\partial t}}+{\frac {\partial }{\partial y}}{\frac {\partial y}{\partial t}}\right)z\right]=\left({\frac {\partial }{\partial x}}{\frac {\partial x}{\partial t}}+{\frac {\partial }{\partial y}}{\frac {\partial y}{\partial t}}\right){\frac {\partial z}{\partial t}}}$

(Eq. 2)

As you can see, the two equations before applying the template are tightly aligned. But there is a bigger gap between the two equations to which the template is applied. So what can I do to remove the gap between them and make them tightly aligned vertically?

If my question above looks complicated, the alternative question is: what can I do to remove the (vertical) gap between the following two indented tables and make them tightly aligned vertically?

 TBL 1
 TBL 2

Justin545 (talk) 08:06, 10 May 2008 (UTC)

There are a number of ways, but are you sure this is desirable in the first place? To me, equations such as the ones in your example are more easily readable with some space between them, since MediaWiki's own TeX-to-image converter does not automatically insert any whitespace to keep them apart. —Ilmari Karonen (talk) 11:46, 10 May 2008 (UTC)
Yes. That's why I added a dashed line between the equation and the number, so that I don't need the space between them for readability and can make articles shorter. Could you tell me the solutions? I would like to hear them. Thanks! - Justin545 (talk) 12:29, 10 May 2008 (UTC)
The space you are seeing is a result of you indenting the tables using : or *. If you remove the : from your template up above, you'll see that your tables "smoosh" back together as expected:
 TBL 1
 TBL 2

${\displaystyle mx={\frac {-b}{y}}}$

(Eq. 1)

${\displaystyle mx={\frac {-b}{y}}}$

(Eq. 2)

The vertical spacing you are seeing is the result of monobook putting a nice paragraph separation between lines which makes reading text discussions so much easier. My recommendation is to adjust your template to use CSS as an indention scheme, rather than using :. On a side note, you also might want to make some adjustments to your template for math equations that don't invoke the TeX-Image generator, such as ${\displaystyle y=mx+b}$. This equation is utterly unreadable inside your template. -- ShinmaWa(talk) 15:09, 10 May 2008 (UTC)
I didn't consider equations that don't invoke the TeX-Image generator. And I've adjusted the template according to your suggestion. Thanks for the side note.

${\displaystyle y=mx+b}$

(Eq. 3)

>> My recommendation is to adjust your template to use CSS as an indention scheme, rather than using :.
I know I can remove the spacing by not indenting. But I don't really get the CSS approach you mentioned. Could you give more details? Thanks! - Justin545 (talk) 03:06, 11 May 2008 (UTC)

(outdent) What I mean is to use positioning. Set up your template to add 'position:relative; left: {{{x}}}' to your table's style, such that it appears as it does below:

 TBL 1
 TBL 2

Something like this:

{| style="{{#if: {{{1|}}} | position:relative; left:{{{1}}}; || }} border-collapse:collapse;" border="0"


...should solve your problem. I'm not a template expert, but I think that'll work. -- ShinmaWa(talk) 18:22, 12 May 2008 (UTC)

## What will happen if a virus leaves its host cell for a long time?

and what will happen in terms of its chemical structure? - Justin545 (talk) 06:54, 23 May 2008 (UTC)

Presumably it wouldn't be able to replicate as it has no access to RNA, but I'm not sure if it would stop functioning (it certainly won't 'die' since it isn't alive in the first place).
A virus can surely "die", that is, lose the ability to replicate in a host. I believe most animal viruses live for very short periods--hours at most--outside the host, unless they are being coddled in a lab. Woodlore (talk) 11:03, 23 May 2008 (UTC)
Well, there is dispute in the scientific community as to whether or not viruses are actually "alive" in the biological sense of the term. However, viruses can certainly be rendered inert or destroyed if they are left exposed outside of the body. Disinfectants, ethanol, bleach etc. can certainly disrupt the lipid envelope of the virus and denature its protein coat and inner nucleic acids. Such things will also occur over time without the use of exogenous chemicals. Wisdom89 (T / C) 20:03, 23 May 2008 (UTC)
It's actually the lay community that's concerned with the notion of whether or not viruses are "alive"; scientists pretty much know that it's not a useful term to be using about viruses. As Wisdom points out, what happens to a virus is a direct result of its environment: many remain unchanged, and remain infective, for quite a period of time. Others rapidly become non-infective. But heat and the chemical environment can change the speed at which this happens. (The virus, btw, can't replicate because it lacks ribosomes and has no way to produce proteins - that (and not lack of RNA) - is the reason that viruses are obligate intracellular parasites.) - Nunh-huh 20:10, 23 May 2008 (UTC)
The "Tobacco Mosaic Virus" is the one they always teach about in school. When you extract it from its host, it crystallises into rather gorgeous crystals of completely inert-seeming stuff. There you have something about as inert as you could imagine..."dead"...like a grain of salt. But let it dissolve in water and spray it onto a plant - and it's back to being an active, reproducing, disease-causing agent. It's right on the edge between being a "poison" and a "creature". Humans are very interested in putting up hard and fast barriers between one kind of object and another - where often, no real distinction exists. Consider "Planets"...is Pluto a "planet"? Well, the truth is, we shouldn't care - it's a big rock or a small world - but there is no hard line between rocks and worlds in nature - we see every possible value in between. The same is true for "life". Dogs and cats are obviously "alive" and crystals of NaCl (table salt) are "not-alive"...but the Tobacco Mosaic Virus sure behaves like it's alive when it's taking over a plant...but when you crystallise it, you pretty much have to say it's a bunch of inert chemicals...but it's not "dead" because you can easily revive it. So viruses are to the "alive/not-alive" debate just as Pluto is to the "Planet/Not-planet" debate - whatever we decide to label them is entirely a linguistic convenience and tells you nothing whatever about what's going on in the universe. 70.116.10.189 (talk) 02:50, 24 May 2008 (UTC)

## MediaWiki Page Protection

Suppose that there are three users 'Alex', 'Bob' and 'Chad' whose user pages are 'User:Alex', 'User:Bob' and 'User:Chad' respectively. Is there any way to configure MediaWiki such that only Alex can edit the page User:Alex, only Bob can edit the page User:Bob and only Chad can edit the page User:Chad? That is, no one can edit a user page except the user who owns it. As far as I know, users in some groups have the right to protect/unprotect arbitrary pages, but that doesn't work for the case I described. Does MediaWiki provide such advanced protection? Or is any extension available? Thanks! - Justin545 (talk) 02:08, 4 June 2008 (UTC)

The extension UserPageEditProtection ([1]) does that —Dvyjones (tc) 20:16, 4 June 2008 (UTC)

## Transverse waves

Why can't transverse waves pass through liquids? I can see why they can't pass through gases, but why not liquids? —Preceding unsigned comment added by 65.92.231.82 (talk) 02:11, 29 August 2008 (UTC)

Clearly they propagate nicely along the interface between water and the air above it. It seems that to propagate within a body of liquid, there would have to be an interface within the liquid, such as a layer of fresh water above a layer of salt water where a river flows into the ocean, or the interface between oil and water. Longitudinal waves propagate nicely within a homogeneous liquid. Edison2 (talk) 04:46, 29 August 2008 (UTC)
The liquid has no strength or stiffness. It does not resist movement like a solid, so there is no restoring force to make a wave. However the liquid does resist compression, so you can get a compression wave. Graeme Bartlett (talk) 05:55, 29 August 2008 (UTC)
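Graeme's point — no shear stiffness, but finite resistance to compression — maps directly onto the standard elastic wave-speed formulas. A small Python check (textbook formulas; the water constants below are round figures, not exact values):

```python
import math

rho = 1000.0   # density of water, kg/m^3 (round figure)
K = 2.2e9      # bulk modulus of water, Pa (resists compression)
G = 0.0        # shear modulus: a liquid has essentially none

# Longitudinal (compression) wave speed: v_p = sqrt((K + 4G/3) / rho)
v_p = math.sqrt((K + 4 * G / 3) / rho)

# Transverse (shear) wave speed: v_s = sqrt(G / rho)
v_s = math.sqrt(G / rho)
```

v_p comes out near 1480 m/s, the measured speed of sound in water, while v_s is exactly zero — the transverse wave has no restoring force to ride on.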

## Emit Visible Light by Radio Transmitter

According to the article Electromagnetic spectrum, radio waves and visible light are both electromagnetic waves. The only difference I can see between them is that they have different frequencies. So is it possible for a radio transmitter to emit visible light if the radio transmitter can emit radio waves of extremely high frequency? - Justin545 (talk) 09:35, 29 August 2008 (UTC)

In principle, yes. In practice, the required frequencies are unobtainable in any system that would resemble a normal radio transmitter. Dragons flight (talk) 10:10, 29 August 2008 (UTC)
A radio transmitter that emitted visible light would look very similar to an Incandescent light bulb --Shniken (talk) 14:45, 29 August 2008 (UTC)
Not really. A light bulb's output is a byproduct of heat, not a modulated frequency. — Lomn 17:28, 29 August 2008 (UTC)
Yes, if you use the right equipment. Electromagnetic radiation in radio transmitters is an oscillating field caused by an oscillating current of electrons. That is, we accelerate the electrons back and forth. It's the acceleration in itself that is the trick and another word for electromagnetic radiation caused by the acceleration of a charged particle is Bremsstrahlung. With a current in a wire we only manage radio waves, but when sending the particles round in circles in a vacuum it's a different matter. This is called Synchrotron radiation and you can indeed achieve frequencies from radio waves, into and past the visible spectrum and into ultraviolet and x-rays. EverGreg (talk) 18:31, 29 August 2008 (UTC)
It's a matter of frequency, certainly - but it's easier to think in terms of wavelengths. (Frequency and wavelengths are opposite sides of the same coin here). So let's look at how long the actual waves are:
• TV and FM radio waves are around a meter long...AM radio is out at 300 meters maybe.
• Radar systems and microwave ovens operate at wavelengths between 10cm and 1cm (there is an elegant experiment involving chocolate chips that lets you measure that in your microwave oven!)
• Waves smaller than a centimeter are called "millimeter band" and are used for short, precise, distance measurements - short range radar - such as the 'reversing sensors' that some cars have.
• Millimeter waves at 1mm are right next to the "far" infrared part of the spectrum - which extends down to "near" infrared at a millionth of a meter. For some reason, we call infrared "light" - not "radio" or "radar" - but it's all the same stuff - it's just a matter of wavelength.
• Infrared light is right next to visible red light in the spectrum. Visible light waves are around half of a millionth of a meter long.
So "radio" waves are about a million times bigger than "light" waves - but they are EXACTLY the same phenomenon - it's just a matter of frequency and wavelength.
To answer the question then: To operate efficiently, Radio antennae need to be about a wavelength long. The antenna on your car is about a meter long - so it can pick up radio waves. The little stubby antenna on your cellphone reflects the fact that it's operating at wavelengths of a few centimeters. So the "antenna" for visible light would be less than a millionth of a meter long! But the systems that make radio transmitters work efficiently simply aren't designed to put out waves that small. The other side of the coin - the "frequency" is to do with how fast the electronics have to oscillate to make waves of the appropriate length. To make radio waves, you only have to oscillate a few million times a second. This is easy to engineer - there are crystals that oscillate at those rates - also you can build all sorts of circuits that'll do that. But as the frequencies get higher, it gets harder and harder to make systems that'll vibrate fast enough. We have computers that run at frequencies as high as 3 to 4 GHz - that's barely as fast as microwaves. Making electronics oscillate faster than that starts to get tough because the size of the electronics has to be small enough to let the electrons get across the circuit in a small enough amount of time. By the time you get into the infrared region, you need to have atoms oscillating - not big things like crystals. So we can't make a 'crystal radio' oscillate anywhere near fast enough to make light. We make infrared by stimulating atoms to vibrate - similarly with visible light. So while light and radio are "the same thing", in practice, it's not just a matter of retuning the transmitter and reducing the size of the antenna to convert a radio into a lightbulb. SteveBaker (talk) 19:10, 29 August 2008 (UTC)
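SteveBaker's wavelength figures all follow from λ = c/f. A quick Python check (the example frequencies are my own representative picks for each band):

```python
c = 299_792_458.0  # speed of light in vacuum, m/s

def wavelength(freq_hz):
    """Wavelength in metres of an EM wave with the given frequency."""
    return c / freq_hz

fm_radio = wavelength(100e6)    # FM broadcast: about 3 m
oven = wavelength(2.45e9)       # microwave oven: about 12 cm
red_light = wavelength(430e12)  # red light: under a millionth of a metre
```

The six-orders-of-magnitude jump from fm_radio down to red_light is exactly the "antenna a millionth of the length" problem described above.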
Tiny question for clarity: I thought antennae needed to be half a wavelength long for optimal efficiency? Franamax (talk) 21:24, 29 August 2008 (UTC)
That would be the distinction between a Marconi antenna[2] (archaic nomenclature) and a half-wave dipole antenna; but in modern antenna theory, there are ten million variations on the theme, so "optimal efficiency" may be traded for directionality, bandwidth, narrow-band frequency-specific coupling effects, active control, etc. Nimur (talk) 21:32, 29 August 2008 (UTC)
Yes - and in any case, it's a pretty rough requirement. For the purposes of answering this question we don't need to get into horrible details of the 'black art' of antenna design! You can pick up a perfectly good signal over a wide range of radio or TV channels with a fixed size antenna of roughly a wavelength...but ten times the wavelength or a tenth the wavelength doesn't work nearly as well. The point is that a radio transmitter that's set up with an antenna suitable for AM radio won't make light (which would require an antenna a MILLIONTH of that length). SteveBaker (talk) 21:54, 29 August 2008 (UTC)
Two images of the sky over the HAARP Gakona Facility using the NRL-cooled CCD imager at 557.7 nm. The field of view is approximately 38°. The left-hand image shows the background star field with the HF transmitter off. The right-hand image was taken 63 seconds later with the HF transmitter on. Structure is evident in the emission region.
There have been some experiments in radio-induced airglow or (artificial) aurora, such as this IEEE publication on work performed at the HAARP facility. This is an indirect effect and requires certain ionospheric conditions. The beam is a "High Frequency" radiowave, meaning ~5 MHz, and it is through ionospheric interactions that this energy can be converted into optically observable light. Nimur (talk) 21:30, 29 August 2008 (UTC)
Here's a link to the Navy's description of optical emissions: [3]. "The exciting result was that by pointing the HF beam directly along a geomagnetic field line, artificial emissions of greater than 200 Rayleighs (R) at 630.0 nm and greater than 50 R at 557.7 nm could be produced. This intensity was nearly an order of magnitude larger than that produced by heating directly overhead." Nimur (talk) 21:40, 29 August 2008 (UTC)
Let's not confuse the OP though. HAARP and the Navy work are NOT a matter of retuning a radio transmitter to broadcast up in the petahertz (10^15 Hz) range! We have no idea how to make a "radio transmitter" that works at such spectacularly high frequencies. The things you are discussing are systems causing secondary effects in the atmosphere. That's not at all what the OP is asking - so let's not muddy the waters! SteveBaker (talk) 22:01, 29 August 2008 (UTC)
I only half-agree. The generation of optical frequencies is a special class of frequency mixing, where the frequency mixer just happens to be a particular natural phenomenon / atomic property. Though this effect occurs at high altitude, it's not so very different from using a diode mixer on a circuit board, where a different atomic effect is responsible for signal conditioning suitable for changing frequency. Nimur (talk) 22:10, 29 August 2008 (UTC)
As you increase in frequency, the techniques for producing the oscillations produce less and less power, and amplifiers produce less and less gain, until you reach the technology limit around one terahertz. Your antennas will have to be on the same scale as light waves, much bigger than atoms and molecules, and potentially on the same scale as silicon chip technology. But really the problem is generating the arbitrary waveform at the required frequency. As you get to light frequencies, the quantization into photons also takes effect, raising your noise floor. Graeme Bartlett (talk) 08:54, 30 August 2008 (UTC)

## Cause of Quantum Decoherence?

According to the section Problems of the article Quantum computer, there are a number of practical difficulties in building a quantum computer. One of the major difficulties is keeping the components of the computer in a coherent state. But I still get no sense of what interaction will cause the system to decohere... Is it environmental temperature? Some form of noise? The CMB? Movement of planets and stars? Cosmic inflation? Any ideas? Thanks - Justin545 (talk) 11:45, 30 August 2008 (UTC)

What I've usually heard is thermal noise. With, say, a silicon chip with a quantum dot on it, the chip emits a photon which hits the dot and ruins its superposition. Or the dot could emit a photon that hits the chip. Because of this, proper cooling is a prerequisite of many quantum computer blueprints. EverGreg (talk) 15:43, 30 August 2008 (UTC)
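The thermal-noise picture can be caricatured with a pure-dephasing toy model in numpy (entirely my own illustration, not a model of any real device): random phase kicks from the environment wipe out the off-diagonal terms of the averaged density matrix, and those off-diagonals are precisely the coherence a quantum computer needs to preserve.

```python
import numpy as np

rng = np.random.default_rng(0)
psi = np.array([1.0, 1.0]) / np.sqrt(2)  # qubit in a |+> superposition
rho_pure = np.outer(psi, psi.conj())     # off-diagonals are 0.5: coherent

# Each run, the environment (e.g. a stray thermal photon) kicks the qubit
# with a random relative phase; averaging over runs models our ignorance
# of exactly which kick happened.
samples = []
for _ in range(10_000):
    phase = rng.normal(scale=2.0)        # strong phase noise
    kicked = np.array([1.0, np.exp(1j * phase)]) / np.sqrt(2)
    samples.append(np.outer(kicked, kicked.conj()))
rho_mixed = np.mean(samples, axis=0)     # off-diagonals decay toward zero
```

The populations (diagonal entries) stay at 0.5 each, but the averaged off-diagonal shrinks from 0.5 toward exp(−σ²/2)/2 — the superposition has effectively become a classical mixture.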

## Why Is Electromagnetic Wave Sine Wave?

There are many kinds of waves, such as sawtooth waves, square waves, triangle waves etc. But why is an electromagnetic wave always depicted as a sine wave? How can that be proven? - Justin545 (talk) 11:34, 15 September 2008 (UTC)

Good question. The reason is that circular functions like the sine are the solutions of the differential equations that describe damped motion. I'm not able to say which equations and why, but I remember that was the reason. --Ayacop (talk) 14:05, 15 September 2008 (UTC)
Undamped surely? Damped motion decays exponentially IIRC, but undamped motion, such as Simple harmonic motion, results in nice sinusoidal motion.
I would guess it's also related to the fact that, through Fourier Analysis, all of those other waves can be represented by adding sinusoidal waves of different frequencies. AlmostReadytoFly (talk) 14:16, 15 September 2008 (UTC)
PS: See Electromagnetic wave_equation#Solutions to the homogeneous electromagnetic wave equation. AlmostReadytoFly (talk) 14:22, 15 September 2008 (UTC)
A non-sine wave (a squarewave say) is the sum of a number of different sine waves at different frequencies. A square wave is the sum of a wave at some frequency ('f'), plus another at 3f, another at 5f, 7f, 9f...and so on. So if (for example) you had a light that emitted square waves, it might be emitting mostly red light - with some at three times that frequency, some more at five times and so on. Well, three times and five times the frequency of red light is ultraviolet light - and seven and nine times red light is in the X-ray spectrum and so on. So you could in principle make square-wave "light" by carefully arranging an exact set of visible light, UV light and X-ray emitters. It wouldn't look much different from red light...except that it would irradiate your retinas pretty nastily!
It's certainly possible - but it doesn't tend to happen in nature because the processes that produce waves of such radically different frequencies are very different and their absorption in atmosphere is wildly different too.
SteveBaker (talk) 17:39, 15 September 2008 (UTC)
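SteveBaker's harmonic recipe is just the Fourier series of a square wave, which is easy to verify numerically. In this sketch (the series is standard; the function name is mine) the odd harmonics f, 3f, 5f, … are summed with amplitudes falling off as 1/n:

```python
import math

def square_wave_partial(t, f, n_terms):
    """Partial Fourier series of a unit square wave of frequency f:
    (4/pi) * sum over odd n of sin(2*pi*n*f*t) / n."""
    return (4 / math.pi) * sum(
        math.sin(2 * math.pi * (2 * k + 1) * f * t) / (2 * k + 1)
        for k in range(n_terms)
    )

# With many terms the sum sits at +1 over the first half-cycle and -1
# over the second, up to the Gibbs ripple near the jumps.
first_half = square_wave_partial(0.25, 1.0, 2000)
second_half = square_wave_partial(0.75, 1.0, 2000)
```

Note the 1/n amplitude fall-off: the UV and X-ray components of a hypothetical square-wave light source would be progressively weaker, not equal in strength to the red fundamental.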
Anything where the force is proportional to the negative of the displacement will give you a sine wave for motion, and since the derivatives and integrals of a sine wave are also sine waves, it ends up being sine waves all the way down. Since that arrangement of force, or an approximation to it, is very common in a lot of physical systems, everything from a weight on a spring to an L-C oscillator ends up giving you sine waves; a pendulum is close enough that you get an almost-sine wave, etc. Gzuckier (talk) 15:13, 19 September 2008 (UTC)
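The "force proportional to negative displacement" rule can be checked by brute-force integration. The toy sketch below (leapfrog stepping; ω, dt, and the step count are arbitrary picks of mine) integrates x'' = −ω²x and lands on the cosine solution:

```python
import math

omega, dt, steps = 2.0, 1e-4, 50_000   # arbitrary demo parameters
x, v = 1.0, 0.0                        # start displaced, at rest

# Velocity-Verlet (leapfrog) integration of x'' = -omega^2 * x
for _ in range(steps):
    v += -omega**2 * x * dt / 2        # half kick from F = -omega^2 * x
    x += v * dt                        # drift
    v += -omega**2 * x * dt / 2        # half kick

t = steps * dt                         # 5 seconds of simulated motion
exact = math.cos(omega * t)            # exact solution for x(0)=1, v(0)=0
```

The numerically integrated x agrees with cos(ωt) to high precision — the restoring-force law alone forces the sinusoid out, with no sine function anywhere in the integrator.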
