Talk:Maxwell's equations/Archive 4

Boundary conditions

Which boundary conditions are you referring to? The boundary conditions at material interfaces actually follow from Maxwell's equations themselves, as derived in any textbook (e.g. Jackson). Of course you need initial conditions, but that's true for essentially any physical law. (Of course, computational methods often require one to artificially truncate space with some boundary condition, but that's a property of the approximate solution method more than of the equations. And some other methods that involve solving homogeneous regions separately and then matching solutions at boundaries also require you to "manually" impose boundary conditions, but again that's a property of the solution method.) If you want something that must typically be given separately, and is not derived purely from Maxwell's equations and/or the Lorentz force, it would be macroscopic material properties (susceptibilities, etc.). —Steven G. Johnson (talk) 20:03, 29 March 2008 (UTC)
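For reference, a sketch of how those interface conditions follow: integrate each macroscopic equation over a vanishingly thin pillbox or loop straddling the interface (n the unit normal from medium 1 into medium 2); the time-derivative and volume-source terms vanish with the thickness, leaving

    div B = 0            →   (B₂ − B₁)·n = 0
    curl E = −∂B/∂t      →   n × (E₂ − E₁) = 0
    div D = ρ            →   (D₂ − D₁)·n = σ (free surface charge)
    curl H = J + ∂D/∂t   →   n × (H₂ − H₁) = K (free surface current)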

Hi Steve: Please excuse my movement of your comments to a new heading, but I believe it is a new subject.
The links to examples of boundary value problems, Sturm-Liouville theory, Dirichlet boundary condition, Neumann boundary condition, mixed boundary condition, Cauchy boundary condition, and Sommerfeld radiation condition describe some of the possibilities. You mention that the internal interface conditions are inherent in the Maxwell equations themselves, and the suggestion is that all boundary conditions have this property. To a degree that is true – however, Maxwell's equations cannot specify whether the waveguide you are interested in is circular or square, or whether an antenna is half or quarter wavelength, or whether a heterostructure has thin layers or thick, many or few. So, there is information related to boundaries that must be supplied outside of Maxwell's equations, and the solutions depend critically upon this information. So I'm just looking for a heads-up here. At a minimum the reader should be made aware that this is a nontrivial issue that is part of setting up any application of the equations.
In this connection, how do you view the subject of John D. Joannopoulos, S. G. Johnson, J. N. Winn & R. D. Meade (2008), Photonic Crystals: Molding the Flow of Light, 2nd ed., Princeton, NJ: Princeton University Press, ISBN 978-0-691-12456-8? In particular, pp. 58 ff with localized modes? Brews ohare (talk) 21:31, 29 March 2008 (UTC)
Whether the waveguide is circular or square etc. is part of the specification of ε(x,y,z), which is inside the (macroscopic) Maxwell equations, not "outside". Obviously, in order to specify the (macroscopic) Maxwell equations you need to specify the coefficient functions like ε, in the same way that you must also specify external currents J and charges ρ. Specifying a differential equation requires you to specify its coefficients. The coefficients of a PDE, however, are not "boundary conditions" per se (there are resulting internal boundary conditions at the material interfaces, of course, but they are determined by the macroscopic Maxwell equations given ε etcetera).
Dirichlet, Neumann, etcetera boundary conditions are names for different kinds of boundary conditions, but these don't need to be specified in addition to the macroscopic Maxwell equations, they are consequences of them. e.g. for 2d problems with the electric field polarized out of the plane, you get Dirichlet boundary conditions on the (scalar) electric field at the interface of perfect metals, but this is simply a consequence of Maxwell's equations plus the definition of perfect metals (infinite conductivity, which again goes inside Maxwell's equations).
The Sommerfeld radiation condition is only for the time-harmonic equations, and comes as a consequence of the fact that you have removed time from the problem and the remaining equations are mildly singular; they are not needed for the full Maxwell equations, including time; roughly speaking, they are the analogue of the initial conditions. (As an alternative, a common trick is to add an infinitesimal dissipation loss everywhere, which makes the equations nonsingular and, in the limit of zero loss, recovers the outward-radiation boundary condition.)
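For reference: with fields taken ∝ e^{−iωt} (so that ∂/∂t → −iω), the Sommerfeld condition in three dimensions reads

    lim_{r→∞} r (∂u/∂r − iku) = 0,   k = ω/c,

which selects the outgoing solution e^{ikr}/r over the incoming one; the limiting-absorption trick above amounts to replacing ω by ω + i0⁺ and taking the loss to zero at the end.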
Regarding localized modes, the localization is a consequence of Maxwell's equations: eigenmodes in the photonic bandgap are exponentially localized. Again, if you ask what the eigenmodes are it is a time-harmonic problem, not the full Maxwell's equations with time, so in an unbounded problem you need some kind of boundary conditions at infinity (to exclude solutions growing exponentially towards infinity). If you solve the full equations with time, you don't need boundary conditions at infinity: e.g. a localized current source with zero initial conditions in the right frequency bandwidth will excite the exponentially localized modes, with no need to specify any boundary conditions.
—Steven G. Johnson (talk) 22:49, 29 March 2008 (UTC)
To elaborate a bit: if I solve for modes in a box, e.g. a box in ε(x), don't I have to ask for solutions that decay on either side of the box to get a localized state? Are there not also solutions that blow up as x → ∞ ? Brews ohare (talk) 23:17, 29 March 2008 (UTC)
Again, if you are solving for the "modes" (i.e. time-harmonic solutions), you are solving the time-harmonic equations, not the full Maxwell equations (i.e. you have a linear system and you've replaced all time derivatives d/dt with -iω). In this case you need boundary conditions to exclude solutions diverging towards infinity. (This is closely related to the Sommerfeld outward-radiation condition which I discussed above.) These conditions at infinity are not needed if you solve the full Maxwell's equations with time, as an initial-value problem.
Moreover, in fact, they are essentially consequences of the full Maxwell's equations with time, in the sense that those conditions at infinity are chosen because those are the only solutions that you can excite with localized sources starting from zero fields. Basically, "boundary conditions" arise as explicit "external" conditions on the equation only in situations where you have taken the full Maxwell's equations and thrown out some degrees of freedom (in a manner of speaking, you replace the degrees of freedom you threw out by boundary conditions, where the boundary conditions are derived from the full original equations). —Steven G. Johnson (talk) 23:24, 29 March 2008 (UTC)
A question: If conditions are specified along some curve at time "t" in one inertial frame, these same conditions do not all apply simultaneously in another frame. Does this mean that "boundary" conditions and "initial" conditions are not all that distinct? Brews ohare (talk) 00:31, 30 March 2008 (UTC)

resetting indentation

First, this is beside the point; as I've been saying consistently, you don't need any other boundary conditions in addition to your initial condition (which is indeed a kind of boundary condition, in the time dimension), because all of the other (spatial) boundary conditions are consequences of Maxwell's equations. My complaint is about the false implication that additional boundary conditions from "outside" Maxwell's equations are needed beyond initial conditions. Nor do we need to make a special point about needing initial conditions—this is obvious for any time-dependent problem.

Second, although initial conditions in one inertial frame become a mixed spatio-temporal "initial condition" in other frames (i.e. an initial condition at each point in space, but at different times for different points), there is no inertial frame that transforms an initial condition into a purely spatial boundary condition, so the two are not mathematically equivalent. Moreover, in practice you always choose an inertial frame corresponding to putting your initial condition at a fixed time—or, even more commonly, your initial condition is just that all fields are zero for t → −∞ (and you only turn on current sources at finite times), in which case it doesn't matter what inertial frame you choose. Of course, there are other possibilities besides a purely initial condition, e.g. you can have a "final condition" and ask what happened previously in time, but again this is irrelevant to my point.

Your mistake above is actually pretty common; students see explicit boundary conditions being imposed all the time in solving Maxwell's equations (as a byproduct of particular solution methods that work by breaking space into homogeneous regions and then matching the solutions in each region), and they come to the conclusion that the boundary conditions are something that must always be put in "manually". Then they get confused when you show them, for example, a finite-difference numerical solver for a region with inhomogeneous materials, and they ask where you put in the boundary conditions at the material interfaces...the answer is that you didn't have to, because when you solve the inhomogeneous equations you get the boundary conditions at interfaces automatically.
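To make that last point concrete, here is a minimal 1d sketch of such a finite-difference (FDTD) solver, in Python/NumPy with normalized units c = ε₀ = μ₀ = 1. Everything in it (grid size, pulse shape, names) is illustrative rather than taken from any particular code; the point is that the material interface enters only through the coefficient array eps, with no hand-imposed matching conditions:

    import numpy as np

    nx, nt = 400, 600
    dx, dt = 1.0, 0.5    # dt/dx = 0.5 satisfies the 1d CFL stability bound
    eps = np.ones(nx)
    eps[nx // 2:] = 4.0  # dielectric half-space (index n = 2); the interface
                         # appears *only* through this coefficient array

    Ez = np.zeros(nx)    # E on the integer grid points
    Hy = np.zeros(nx)    # H on the staggered (Yee) half-grid

    for step in range(nt):
        Hy[:-1] += (dt / dx) * (Ez[1:] - Ez[:-1])             # curl E = -dB/dt update
        Ez[1:] += (dt / (dx * eps[1:])) * (Hy[1:] - Hy[:-1])  # curl H = dD/dt update
        Ez[50] += np.exp(-((step - 60) / 20.0) ** 2)          # localized pulse source

    # The pulse splits at the interface with reflected amplitude near the Fresnel
    # value (1 - 2)/(1 + 2) = -1/3, although no interface condition was imposed.
    # (The untouched edge values make the grid ends act as mirrors; a real code
    # would add an absorbing boundary or a perfectly matched layer there.)
    print(Ez.min(), Ez.max())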

—Steven G. Johnson (talk) 02:25, 30 March 2008 (UTC)

Hi Steven:
To be a bit blunt about it, you could solve a waveguide problem by solving Maxwell's equations with ε(r, t) and μ(r, t) valid for the entire lab and never use "manually added" boundary conditions. With a general functional form for ε(r, t) and μ(r, t), only a numerical approach would work if no splicing of simpler regions were to be used, and the functions ε(r, t) and μ(r, t) in the vicinity of the waveguide boundaries would be pretty hard to determine experimentally, and very demanding to treat numerically. Alternatively, you could solve Maxwell's equations only inside the waveguide with a simple ε and μ using boundary conditions. I'd guess the latter would be the first choice and would lead to a design that meets specs a lot faster, and with a lot more insight. It isn't an accident that there are three centuries of math related to boundary conditions and associated special functions.
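(For a concrete instance of the second route: for a hollow rectangular guide of cross-section a × b with perfectly conducting walls, constant ε and μ inside plus the wall conditions give the familiar TE modes

    Hz ∝ cos(mπx/a) cos(nπy/b) e^{i(βz − ωt)},   β² = ω²/c² − (mπ/a)² − (nπ/b)²,

with cutoff frequencies ω = cπ√[(m/a)² + (n/b)²]: a few lines of separation of variables rather than a volumetric simulation of the whole lab.)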
My view is that you are right in principle, but with a viewpoint that does not reflect a good deal of common (and successful) practice.
I removed the very provocative (apparently) one sentence ( ! ! ! ) providing the reader with links to other Wiki articles covering boundary effects and initial conditions. To avoid clarity about the very common practice of using boundary conditions is a disservice to Wikipedia readers, but… Brews ohare (talk) 07:22, 30 March 2008 (UTC)
As I said, when you remove degrees of freedom (e.g. you only worry about the fields in certain regions of space), you replace them with boundary conditions. I never said that such methods weren't useful, just that you were misunderstanding them: you do a disservice to readers by implying that such boundary conditions are needed in addition to Maxwell's equations, rather than coming from Maxwell's equations and being used in particular solution methods. Thanks for removing the misleading statement; saying things that are false never provides "clarity" or does a "service" to readers. —Steven G. Johnson (talk) 16:04, 30 March 2008 (UTC)

Saying things that are false never provides "clarity" or does a "service" to readers

Hi Steven: Thanks for that one. Suggesting a link to some practical methods would be helpful. I'll try putting in a subsection on boundary conditions to provide an alternative to numerically solving the equations with ε(r, t) and μ(r, t) for the expanding universe. Brews ohare (talk) 17:42, 30 March 2008 (UTC)

(a) I don't think this article should be on numerical methods, nor for that matter on analytical methods for solving PDEs; that's a huge can of worms. It could link to a few, but there is a huge variety here that you aren't appreciating—there are many books written on this topic, and each book typically covers only a slice of the available techniques. (b) Why are you so eager to write sections on topics that you've just discovered you don't really understand? Perhaps that should be a clue? —Steven G. Johnson (talk) 15:58, 31 March 2008 (UTC)
Hi Steven: No intention to provide a complete discussion, which would be more appropriate for a sequence of articles in themselves. Just a heads-up and some links to what is presently available for handling these problems.
No need to be abusive. Brews ohare (talk) 16:29, 31 March 2008 (UTC)

The Lorentz Force

Brews, The Lorentz force is indeed very important and I'm happy enough to have it mentioned at the end of the introduction.

If we actually put the two sets of Maxwell's equations side by side, we see that they differ in substance in only one important respect.

The original eight have the Lorentz force whereas the Heaviside four have what you term the Maxwell-Faraday law.

We can reduce the original eight to seven by virtue of the fact that the Maxwell-Ampère law is two equations in the original eight. We can further knock it down to six by ignoring Ohm's law. We can further knock it down to four by ignoring the equation of continuity and the electric displacement equation.

Three of the remaining four equations then correspond directly to three of the Heaviside four.

So what about the Maxwell-Faraday law? Well it is nice for symmetry purposes and it makes it easy to derive the EM wave equation. But clearly Maxwell considered the Lorentz force to be a more substantive equation for the purposes of describing the forces of EM induction.

This is borne out nowadays by the fact that the Lorentz force has to be used alongside Maxwell's equations as an extra equation which is quite ironic since it was one of the original eight Maxwell's equations in the first place.

I would suggest that it is the Maxwell-Faraday law which is the joker in the pack, and it didn't even have anything to do with Maxwell, and it's not even a complete Faraday's law.

So yes, the Lorentz force is indeed a necessary extra to the modern four Maxwell's equations.

But I'm not sure if it's quite for the reasons that you were saying about boundary conditions. A full equation always gives more information than a differential equation, but in this particular case I think that the arbitrary constant is irrelevant because we already know that we are dealing with E as electric field and not as electric field + Arbitrary Vector.

I'd be inclined to remove the bit about boundary conditions from the introduction. It clutters the introduction with a very specialized topic of debate. George Smyth XI (talk) 01:24, 30 March 2008 (UTC)

Done. Brews ohare (talk) 05:43, 30 March 2008 (UTC)

The Maxwell-Faraday Law

I'm still not happy about the term 'Maxwell-Faraday' law. It's the one equation which is not connected to Maxwell in any way. He neither derived it nor did it appear in any of his papers.

If I had my way, I would remove it from the list of Maxwell's equations in all textbooks and replace it with the Lorentz force which would be re-christened a 'Maxwell's equation'.

I would only wheel the so called Maxwell-Faraday law out for the purposes of deriving the EM wave equation. I would introduce it via the full Faraday's law. I would then say that we don't need to consider the convective (motion dependent) aspect for deriving the EM wave equation and I would then work from a partial time derivative version.
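(For readers following this thread, the two statements being contrasted are, in modern notation,

    curl E = −∂B/∂t                    (the restricted "Maxwell-Faraday" form)
    EMF = ∮ (E + v×B)·dl = −dΦ/dt      (the full flux rule, motional term included)

where the v×B piece is the "convective" part referred to above.)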

Anyway, I'm not going to interfere on the main page in relation to this matter but I wanted to bring the matter to the attention of the other editors. Obviously, since the textbooks list it as a Maxwell's equation, then that's what we have to preach. But I'm not sure that we are actually obliged to name it the Maxwell-Faraday equation. George Smyth XI (talk) 11:09, 30 March 2008 (UTC)

I support naming the section Faraday's Law. The Feynman Lectures use that name, Jackson uses that name, even Eric Weisstein's Encyclopedia references Jackson. And this nomenclature ought to be uniform across the article. --Ancheta Wis (talk) 17:38, 30 March 2008 (UTC)

Yes, I'd go along with plain 'Faraday's law'. Even though it is not the complete Faraday's law I think that 'Faraday's law' is still the best term to use to name it with. Maxwell had nothing to do with this particular version of Faraday's law. George Smyth XI (talk) 03:50, 31 March 2008 (UTC)

I'm fine either way, but have a marginal preference for the term "Faraday's law", for the reason that, as Ancheta notes, it's far more common. It's nice for terminology to be unambiguous, but I think that consideration gets outweighed, at least in this article. (In the article Faraday's law of induction, the tradeoffs are quite different, and using an unambiguous but obscure terminology is a necessary sacrifice.)
I'd also like to disagree with the idea, "I would introduce it via the full Faraday's law. I would then say that we don't need to consider the convective (motion dependent) aspect for deriving the EM wave equation and I would then work from a partial time derivative version." The partial time derivative version is, without anything else, a true law of nature, and there is no deception or lack of clarity in saying, here's one of Maxwell's equations, and it's usually called "Faraday's law", and leave it at that (maybe with a footnote warning that the term "Faraday's law" is also used to refer to something different/broader). After all, this isn't the article on Faraday's law, it's the article on Maxwell's equations, and there's no need to tell readers something outside of Maxwell's equations and then immediately tell them that they can forget about it. (Of course, introducing the "full" law is more in the historical spirit of things, but that point is already made quite well in the history section.) --Steve (talk) 05:16, 31 March 2008 (UTC)
Steve, I would agree with you. I actually said all that above in relation to how I would teach the partial Faraday's law to university students. I wasn't referring to how it should be treated in an article about Maxwell's equations. George Smyth XI (talk) 15:24, 31 March 2008 (UTC)
The use of "Faraday's law" (a term with many meanings, even outside of electromagnetism) obviously leads to ambiguity. Ambiguity is not good - it may require more words to differentiate what is meant, or it may be misconstrued by the reader if the distinction is not made. I don't think anyone can deny that ambiguity results and that it has a downside.
In addition, those opposed to the term "Maxwell-Faraday equation" seem to be the same as those that feel the "Maxwell-Faraday equation" is an emasculated abomination that never should be seen, never mind heard from. I am dismayed that they would like to honor this disgraceful object with the revered name "Faraday's law" knowing full well that it is not, and should never be construed as such.
What is the upside to using "Faraday's law" instead of "Maxwell-Faraday equation"? The upside is that lots of people use the term "Faraday's law". Of course they use it loosely, and probably don't really mean "Faraday's law of induction", but "Maxwell-Faraday equation". But they also aren't trying to write articles in Wikipedia where some clarity would be nice.
Finally, the name "Maxwell-Faraday equation" has these merits:
1. It is a name that very clearly says what it means - it is one of the Maxwell equations that has a connection to Faraday's law.
2. It is not ambiguous, and incurs no doubt or need for explanation.
3. It is a term already in use, not an arbitrary invention.
4. It has already spread throughout Wikipedia in links and cross-references, which I very much doubt anyone has the stomach to track down and change, and indeed change to what? Faraday's law? Do we really need a maze of links to disambiguation?
So, look deep into your souls and ask: From whence cometh this dark desire to unseat a perfectly useful and unambiguous term in favor of the murk and mire of a misnomer? Brews ohare (talk) 05:38, 31 March 2008 (UTC)
There are many ancestors of the law: Joseph Henry, Michael Faraday, Ampère (and before him Ørsted, and before Hans Christian Ørsted, other natural philosophers such as Johann Wilhelm Ritter). On the American side, Josiah Willard Gibbs is of equal stature to Maxwell. The history deserves an article, but Faraday had the physical insight which Maxwell formalized. I would argue that the physical and philosophical insight of this huge chain, proceeding to this day, places Faraday at the top of this law. --Ancheta Wis (talk) 12:16, 31 March 2008 (UTC)
On the disambiguation side, it is a simple-enough matter to use markup which refers to an exact article or section of an article while retaining common usage. If you argue that Maxwell ought to be given credit for Faraday's law, that is a misnomer and improper attribution. If you seek precision, then write the history of the law (but not in this article, please, in a separate one) and give credit to the entire stream of scientists. For the users of this article, the statement of the equations, possibly links to their solutions, and the impact of Maxwell's equations on the rest of physics belong in the article. But a misnomer does injustice to Faraday. It might be argued that he was in the right place at the right time. That's history. --Ancheta Wis (talk) 13:35, 31 March 2008 (UTC)

The point is that the so-called Maxwell-Faraday law has got absolutely nothing to do with Maxwell. That's why I don't like the term 'Maxwell-Faraday law'.

Maxwell essentially produced two equations that embody all of electromagnetism. These two equations are the Lorentz force and Ampère's circuital law with the displacement current.

The latter can be written as ∇²A = (1/c²) ∂E/∂t

Then all we have to do is look at the three choices of E that the Lorentz force provides. If we choose the ∂A/∂t term, we end up with the EM wave equation in the form,

∇²A = (1/c²) ∂²A/∂t²
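(Spelling out the substitution with the usual modern sign conventions, in a source-free region with B = curl A, E = −∂A/∂t and the Coulomb gauge div A = 0: Ampère's law curl B = (1/c²) ∂E/∂t has left side curl curl A = grad(div A) − ∇²A = −∇²A and right side −(1/c²) ∂²A/∂t², giving

    ∇²A = (1/c²) ∂²A/∂t²,

a wave travelling at speed c.)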

That is Maxwell's work. We don't need the so-called Maxwell-Faraday law, and Maxwell certainly never used it. It is a Heaviside truncation of Faraday's law.

Since it contains two thirds of the full Faraday's law, I think that we will have to call it simply 'Faraday's law'. George Smyth XI (talk) 14:41, 31 March 2008 (UTC)

History has its place, but this is not it. Brews ohare (talk) 14:45, 31 March 2008 (UTC)
To associate Maxwell's name with Faraday's Law is a misnomer. If you seek a disambiguation, then Faraday's and Henry's Law or Faraday-Henry Law would be more accurate. But we would need a citation. Or if we were to add everyone's name, then an initialism might substitute. You get where this is going. The weight of the citations would simply be for Faraday's Law. --Ancheta Wis (talk) 15:05, 31 March 2008 (UTC)
We can talk about the Maxwell-Ampère law because Maxwell bettered Ampère's circuital law. But Maxwell made no additions to Faraday's law that would warrant him getting any credit for it. Heaviside removed something from Faraday's law but we have no citations that would give us a precedent to call it the Faraday-Heaviside law. So we are really stuck with plain simple 'Faraday's law'. We then have to draw attention to the fact that it is not the full Faraday's law. George Smyth XI (talk) 15:20, 31 March 2008 (UTC)
The term "Maxwell-Faraday equation" could be taken to mean that both names are attached because both names are connected to the origination of the law. I believe that is the view of Ancheta Wis and also George Smyth XI. However, that view is a bit narrow, I think. Even in the historical context, the name given to theorems, physical phenomena, inventions etc. very often are the names given to those who successfully promulgated the item, not to the originator. In that sense Maxwell's name has a role.
In the context of the Wikipedia articles on Electromagnetism, very ample space is given to the full details of who was responsible for what. For those interested in such matters, there is little doubt that their curiosity will be satisfied.
However, from the expository standpoint, all that is meant by "the Maxwell-Faraday equation" is that (a) it is the equation among Maxwell's equations that has a connection to Faraday's law of induction, and (b) it is not to be confused with the more general Faraday's law of induction. Brews ohare (talk) 16:15, 31 March 2008 (UTC)

Brews, yes it would be nice to mark it out separately from the full Faraday's law. But unfortunately it happens to be one of the Maxwell's equations that Maxwell didn't do. Maxwell never promulgated that equation. The term Maxwell-Faraday equation refers to 'The limited form of Faraday's law that appears in the set of equations promulgated by Heaviside but referred to as Maxwell's equations because Maxwell made an important amendment to one of them, but not to the one in question, and with the only equation fully attributable to Maxwell excluded from this set and travelling under the name of the Lorentz force'.

The existing situation is already a mess. The term Maxwell-Faraday equation compounds that mess. George Smyth XI (talk) 16:29, 31 March 2008 (UTC)

Hi George: There is a mess, but it is an historical mess. It has been addressed in the historical sections. Brews ohare (talk) 16:32, 31 March 2008 (UTC)

As Ancheta suggested, if we have the text "Faraday's law" (wherever it appears) be a piped-link to Faraday's law of induction#The Maxwell-Faraday equation, then I think that should be sufficient. This would be analogous to how, for example, an article on electricity can refer to "potential", with a link to electrical potential, and not need to worry about the fact that potential energy is also often called "potential". In other words, the disambiguation is done through the wikilink, a practice that is ubiquitous in Wikipedia.

If that weren't enough, the term is already written right next to its associated, unambiguous equation :-) --Steve (talk) 01:43, 1 April 2008 (UTC)

Was div B= 0 a Maxwell original?

curl A = B is equivalent to div B = 0. Both of these equations appeared in Maxwell's 1861 paper. Does anybody know if he was the originator? George Smyth XI (talk) 11:15, 30 March 2008 (UTC)
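One direction of that equivalence is the vector identity div(curl A) = 0, which holds for any smooth field because mixed partial derivatives commute; here is a throwaway symbolic check (Python with sympy; the components are arbitrary placeholder functions). The converse, that div B = 0 makes B expressible as a curl, is the nontrivial direction and needs Helmholtz's theorem:

    import sympy as sp

    x, y, z = sp.symbols('x y z')
    # arbitrary placeholder components of a smooth vector field A
    Ax, Ay, Az = (sp.Function(name)(x, y, z) for name in ('Ax', 'Ay', 'Az'))

    # B = curl A, written out component by component
    Bx = sp.diff(Az, y) - sp.diff(Ay, z)
    By = sp.diff(Ax, z) - sp.diff(Az, x)
    Bz = sp.diff(Ay, x) - sp.diff(Ax, y)

    # div B collapses to 0 because mixed partials commute
    print(sp.simplify(sp.diff(Bx, x) + sp.diff(By, y) + sp.diff(Bz, z)))  # prints 0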

Josiah Willard Gibbs had a complete theory, Maxwell had it of course, Helmholtz' theorem is referenced in Eric Weisstein's encyclopedia. See magnetic vector potential, which needs this history. There is an online history of vector analysis. --Ancheta Wis (talk) 18:28, 30 March 2008 (UTC)
FYI: When I get a chance (probably within the next week), I've been planning to write a dedicated article on Gauss's law for magnetism, since that content is out-of-place at Gauss's law and buried among a million other things here and at magnetic monopole. Anyone who knows anything about the history of the law (I don't) should add a section to that article, when it exists :-) --Steve (talk) 03:03, 31 March 2008 (UTC)

More on boundary conditions

Steven G. Johnson has taken the stance that any mention of boundary conditions in this article will be summarily deleted. I find very little to object to in the following very brief paragraph intended to alert readers to the boundary value issue. In addition, of course, a very large portion of most Electromagnetism texts is devoted to exactly this topic, so its omission seems an incompleteness in this article. I'd like to solicit some support for including this paragraph in the article:

==Role of boundary conditions==
Although Maxwell's equations apply throughout space and time, practical problems are finite and require excising the region to be analyzed from the rest of the universe. To do that, the solutions to Maxwell's equations inside the solution region are joined to the remainder of the universe through boundary conditions and started in time using initial conditions. In addition, the solution region often is broken up into subregions with their own simplified properties, and the solutions in each subregion must be joined to each other across the subregion interfaces using boundary conditions. The links to examples of boundary value problems, Sturm-Liouville theory, Dirichlet boundary condition, Neumann boundary condition, mixed boundary condition, Cauchy boundary condition, and Sommerfeld radiation condition describe some of the possibilities.
Brews ohare (talk) 17:03, 31 March 2008 (UTC)
Seems to me that the relevant test is whether the material is supported by a reliable source that says something like that about boundary conditions in the context of Maxwell's equations. It's always fair to remove unsourced stuff, in my opinion, but once it is sourced we can have a better discussion of how relevant it is and how prominent it should be. Don't put it back without citing a source to support it. Dicklyon (talk) 18:40, 31 March 2008 (UTC)
Hi Steven:
In response to your editing comment, quote: Sorry, just because I don't have time to completely rewrite a hopelessly poor section doesn't mean that this new addition should stay in the article
I have added references for the statements made, which I'm sure you do not feel are conjectural in any way, but now have support. As an editor, I believe you could be more helpful by suggesting what you find is lacking here. Obviously the Wiki articles in this area are deficient, but that does not seem to require waiting until better ones are written. Also it is not appropriate to put an extensive discussion in this article. So, a heads-up seems to be about all that can be done just now. Brews ohare (talk) 20:29, 31 March 2008 (UTC)

Brews, the revised section is much improved. However, several problems remain. (I'm not suggesting we should include an extensive discussion, but we should avoid saying things that are positively misleading and we should give the right general idea.)

  • First, lots of practical problems do not occur within a finite volume of space, e.g. scattering problems or (if you want a problem involving infinite surfaces) waveguide bends. These are sometimes called "open" problems, and there are methods to deal with this (e.g. integral-equation methods) that do not involve truncating space per se. Later parts of your revised paragraph actually allude to this, but you shouldn't start with something misleading and then correct it. A more accurate statement would be that, to make the solution of problems tractable, one usually attempts to reformulate them so that all unknowns can be described in terms of unknowns defined within a finite volume; this is done in various ways for different problems and different solution methods.
  • Second, it is still missing an important point: boundary conditions cannot be simply imposed, they must come from the underlying Maxwell's equations and the physical class of problems one is interested in (e.g. problems with no sources at infinity).
  • Third, the distinction between absorbing boundaries and asymptotic conditions at infinities is not between "antenna" problems and other problems, it is between integral-equation/Green's-function type methods (e.g. boundary element methods), which focus on surface unknowns, and volumetric methods (e.g. finite element and finite-difference methods such as FDTD) which have unknowns throughout a volume. Boundary-element methods are used in lots of electromagnetic cases besides antenna problems (e.g. they are common for capacitance extraction, radar cross-sections, etc. etc.), and conversely there are plenty of people using e.g. finite-element methods for antennas. Also, the most common absorbing "boundary" these days is not a boundary condition at all, it is a perfectly matched layer (an artificial absorbing material). (Alternatively, for problems involving exponentially localized modes, or for elliptic PDEs that arise in electrostatics, you don't have to worry about radiating fields and you have much greater freedom in truncating the volume.)
  • Fourth, I see no purpose in appending a laundry list of boundary conditions that can appear in various PDE problems. The boundary conditions that appear in electromagnetic problems are not arbitrary---one cannot simply select Dirichlet conditions from the list and hope for the best---they are dictated by Maxwell's equations themselves. If the reader follows the link to one of the boundary conditions from your list, she will find no guidance regarding how that boundary condition arises in electromagnetism. Linking to a to-be-written article on boundary conditions in electromagnetism would be more useful (where that article would start with Maxwell's equations and derive the various common boundary conditions of interest, e.g. at material interfaces; to start with it could at least state the continuity conditions).
  • Fifth, not all "waveguides" in electromagnetism are closed metallic waveguides. There are open metallic waveguides, dielectric waveguides via index-guiding, and other possibilities. Even closed metallic waveguide problems sometimes involve open boundaries, e.g. for in/out-coupling.

Given the above information, you should have no problem finding references by searching the usual places, but let me know if not. —Steven G. Johnson (talk) 15:31, 1 April 2008 (UTC)

Also, I'm finding some of the sources you add very dubious, because they don't really seem to go along with the statements they are supposed to support. In general, you shouldn't add a reference just because it's the first thing you find at the end of a Google search: you should make an effort to check what the reference actually says, and that it is an authoritative reference for the subject it is supposed to support (as opposed to just mentioning it obliquely). e.g. you added a reference primarily on nonlinear optics for homogenization methods (when there are whole books on homogenization per se), and you are referencing a paper on photonic crystals for absorbing boundaries (rather than e.g. Taflove's book on FDTD which has a fine review of many absorbing-boundary and PML methods, or for that matter many other books on computational EM). Please go for quality over quantity. —Steven G. Johnson (talk) 16:19, 1 April 2008 (UTC)
Hi Steven: Thanks for the discussion. I believe that several of your comments take my "for example" cases and extrapolate them to mean "in every case and always". A careful reading would avoid that. There is no implication that all waveguides are closed, nor that the bc's are arbitrary, choose what you like.
I agree that definitive references are preferable to a pot pourri. However, (i) I do not know what the "definitive" references are, and (ii) I believe it is preferable to refer to a source that has some content at Google, at least as a supplement, in those cases where the "definitive" work is not available, and (iii) Some on-line discussion of the material is better than absolutely no example, especially where there is no Wiki info to refer to.
In this connection, I notice that your preference is to link to non-existent pages, resulting in red links. I have added two references on "effective medium" and "homogenization" to your article to supplement this nonexistent info.
I'll look through your remarks and change what is easy to do. Brews ohare (talk) 17:19, 1 April 2008 (UTC)

The new Introduction

Brews, you might have been better to have left the introduction the way it was. Your new introduction says that the main article will discuss how these equations came together as a distinct group. But it doesn't discuss that. There is nothing in the article about why Heaviside produced that group.

Also, you say that the article will discuss how these equations predict electromagnetic radiation. I would agree that that would be of paramount importance. But first of all, Maxwell didn't predict EM radiation through the Heaviside four. Maxwell never used that so called Maxwell-Faraday equation. He predicted EM radiation from the Lorentz force and the displacement current.

And as the article stands at the moment, any reference to EM radiation being predicted from the displacement current is very far down the page.

I would actually like to see that rectified. The article has been criticised for being badly presented but containing good information.

I think that immediately after the history section, the four equations should be dealt with one by one, with EM radiation then being covered in the Ampère's circuital law sub-section. At the moment we do have that, but it is very far down the page. About a week ago it was a lot further up the page but it was squeezed further down by the addition of lots of new specialized sections that should really be further down.


I think you'll find that the introduction as it was, was more suited to the facts and the existing state of the full article. George Smyth XI (talk) 16:02, 2 April 2008 (UTC)

Whoops, most of that was my edit, I think. :-) I feel very strongly that the four equations should not be dealt with one-by-one, except possibly in very brief terms (maybe a paragraph or two for each of the four). We already have the articles on each of the four individual equations, and this article is already so long that it's hard to read. The main idea I was trying to convey in that paragraph was that a reader interested in what Gauss's law is, what it means, what it predicts, how to apply it, etc., should read the article Gauss's law. Likewise with Gauss's law for magnetism, likewise with the other two. Anyway, that's the message I was trying to get across--serving sorta the same function as a top-of-article {{otheruses4}} template--but if I mischaracterized this article in the process, of course I'd be happy for the wording to be changed. --Steve (talk) 17:21, 2 April 2008 (UTC)
Speaking of which, here's an idea which I think is even better. Delete that paragraph, and instead replace the current note at the top with the following:

This article is about Maxwell's equations, a group of four equations in electromagnetism. For information about the individual equations, see Gauss's law, Gauss's law for magnetism, Faraday's law, and Ampère-Maxwell equation. For the thermodynamic relations, see Maxwell relations.

That would clear up and shorten the intro, yet still serve as a helpful redirecting notice to the many readers who come to this page trying to understand something about a specific one of Maxwell's equations, and instead are overloaded with information about all of them. What do y'all think? --Steve (talk) 17:41, 2 April 2008 (UTC)

Steve, The article as it stands doesn't tell us very much about why Heaviside brought the four equations together as a group, and so I don't think that a reference to that effect should be stated in the introduction.

Also, I'm not sure why you are strongly opposed to individual scrutiny of the four equations. I would have thought that the natural curiosity of a reader after having been presented with the set would then be to look at the individual members one at a time.

The main thrust of the entire article should be the fact that Maxwell extended Ampère's circuital law and then derived the EM wave equation.

Also, I'm not sure about your term 'Gauss's law for magnetism'. You say that it is widely used but I had never seen it before. In fact, I'm not even sure that it is Gauss's law at all. Gauss's law is about radial symmetry, sinks and sources. div B = 0 does not have the same meaning as div E = 0 in charge-free regions. div B = 0 follows from the curl equation curl A = B.

And the latest introduction that you have proposed is far too clinical. You are now starting to reduce it to the extent that it contains no interesting information. George Smyth XI (talk) 18:16, 2 April 2008 (UTC)

The term 'Gauss's law for magnetism' occurs in nearly every introductory physics (calc-based) textbook that I have used. So it is at least common at that level. PhySusie (talk) 18:22, 2 April 2008 (UTC)
I'm pretty happy with the intro to the article as it now stands with the redundant stuff removed by George. Brews ohare (talk) 18:44, 2 April 2008 (UTC)
Not happy with the recent change to put more history into the intro about the Lorentz force. I put it back into the history section. As for a "clinical" intro, I believe the intro should be dictionary-like and provide the reader with a very expeditious statement of the topic. That way, the reader who wants only to know what the term means is quickly satisfied, and the other readers know they have found the topic they wanted and can pursue the T of C to see if it contains specifics they want to look into. Brews ohare (talk) 18:47, 2 April 2008 (UTC)
Hi George: Thanks - everything looks good to me now. Brews ohare (talk) 19:25, 2 April 2008 (UTC)

Hi George. I'm fine with the introduction not mentioning anything about the four equations coming together as a group. Indeed, my suggestion of the italicized text at the top does not say anything like that. I'm not "strongly opposed to individual scrutiny of the four equations". I am strongly opposed to said scrutiny being in this encyclopedia article. I think we should be encouraging readers who want to know more about the specific equations to go to the respective articles, where they can get a whole lot of really good information on the equations. This article is already very long and hard-to-read, and we should keep it focused by not putting in excessive amounts of content that is already better explained in other articles. (As I said, I'm not so opposed to putting in maybe a paragraph or two for each, along with the "Main article:..." template.) I also think that a reader who comes to this article wanting to understand one of the individual equations would benefit from having, right at the top, the disambiguating note I proposed above; since the four individual articles are, after all, the best place for a reader to get information on the four individual equations. --Steve (talk) 23:45, 2 April 2008 (UTC)

The Solenoidal Field

PhySusie, I'm now satisfied, having done a google search, that the term Gauss's law has indeed been extended to div B = 0 in the literature. However, I do believe that this is a mistake, because the term Gauss's law used in this respect masks the true significance of the equation. Zero divergence merely tells us that we have an inverse square law. But when we are looking specifically at div B = 0 we are interested in the fact that it follows from curl A = B. Hence B is a solenoidal field. That is the point of interest. The situation regarding div E = 0 is different. In this situation, the emphasis is on the absence of charge density. As regards div E = 0, we truly are interested in the Gauss's law aspect, and we know that we are not dealing with a solenoidal field.

Once again, we are seeing a casualty of giving primacy to the equation div B = 0 from the Heaviside group over the more informative curl A = B equation of the original Maxwell eight. div B = 0 follows from curl A = B, but not vice versa.

If we were to have used the original curl A = B, there would have been no question of calling it Gauss's law. While Gauss's law may be technically correct for div B = 0, it misses the point. George Smyth XI (talk) 19:33, 2 April 2008 (UTC)

A key point that should be included in the article. Brews ohare (talk) 19:47, 2 April 2008 (UTC)
Brews, Yes. But then I would run the risk of being dismissed on the grounds of opinion.
While on this subject, I should point out that the term Gauss's law was never used for div B = 0 in any textbooks that I ever used. If we are aware of the inferior usage of the term Gauss's law in this respect, then we should refrain from using it in this article. The mere fact that some textbooks use it doesn't mean that wikipedia has to follow suit, especially if we know already that it is a bad terminology. And we do know that it is bad terminology because the true significance of div B = 0 is all about curl A = B and solenoidality. That issue is not catered for by Gauss's law even if div B = 0 is technically Gauss's law. The issue of 'no magnetic poles' follows from the solenoidal relation curl A = B, and Gauss's law merely describes that consequence. George Smyth XI (talk) 20:38, 2 April 2008 (UTC)
The term for the equation div B = 0, as you can easily find in a zillion textbooks, is "Gauss's law for magnetism". This terminology is not meant to imply that this is a special case of (or necessarily has any relation to) either Gauss's law or Gauss's theorem, as you seem to have interpreted it. I don't have any opinion about whether "Gauss's law for magnetism" is the best of all possible terms for the law, but it does seem to be by far the most common...which means that Wikipedia does have to follow suit.
The article Gauss's law for magnetism does indeed mention, as does any good textbook, that Gauss's law for magnetism is equivalent to the statement "There are fields A such that B = curl A", or "B is solenoidal". --Steve (talk) 21:28, 2 April 2008 (UTC)
Also, George, you say "div B = 0 follows from curl A = B, but not vice versa." On the contrary, in both of the textbooks I have on hand (Griffiths and Jackson), vice versa is exactly how it's done. That is, first they state that div B = 0, and then they say because div B = 0, we can define a vector field A such that B = curl A (using Helmholtz decomposition.) As far as I know, there may be other textbooks that do it your way: They define B to be curl A, in which case "Gauss's law for magnetism" is a trivial, tautological statement (PS: Can you find a textbook that does it this way? Just curious :-) ) But at any rate, you should know that this doesn't seem to be the most common textbook presentation. --Steve (talk) 22:25, 2 April 2008 (UTC)
It seems that Helmholtz decomposition does cover the situation mathematically speaking. However, div D = ρ puts the stress on the irrotational side of D while div B = 0 puts the stress on the curl part of B. So the physical emphasis shifts. That is how I understand George's point. Brews ohare (talk) 22:48, 2 April 2008 (UTC)
Again, just because we're calling it "Gauss's law for magnetism" doesn't mean we're implying anything about what specific relationship it has to Gauss's law. --Steve (talk) 23:32, 2 April 2008 (UTC)

Steve, my textbooks do exactly as you claim. They say that div B = 0 implies that there must exist a vector A such that curl A = B. But this is clearly wrong and it's not the direction that Maxwell worked from.

The mere fact of the divergence of a vector field being zero does not imply that it is necessarily the curl of another vector field, and we can see this just by considering the equation div E = 0 in charge free regions of space.

Zero divergence is indeed technically Gauss's law and it does tell us that there are no sources and sinks at that point in space. Hence div B = 0 is theoretically Gauss's law and it tells us that there are no magnetic monopoles. But it doesn't tell us that the B field is solenoidal. We need curl A = B to tell us this.

Since div B = 0 came from curl A = B originally, then by calling div B = 0, Gauss's law, we are shifting the emphasis.

We would never call curl A = B Gauss's Law and so we shouldn't call div B = 0 Gauss's law.

Gauss's law is more concerned with radial symmetry, irrotationality, and the inverse square law whereas curl A = B, and hence div B = 0 is more concerned with solenoidality and curl.

You say that there are many textbooks calling div B = 0 Gauss's law. Well I admit there are quite a few web links on the internet that do so. But no high quality textbooks that I ever used called div B = 0 Gauss's law. The fact that they diligently avoided using this over simplistic terminology means that there must have been a good reason for doing so.

If you know that something is cheap, you don't have to imitate it just because some textbooks do it. George Smyth XI (talk) 06:35, 3 April 2008 (UTC)

Hi George! Four points in response :-)
First of all, in this discussion, can we please use the full term "Gauss's law for magnetism" for div B=0 and reserve the term "Gauss's law" for div E = rho? I'm easily confused :-)
Second of all, you claim that "The mere fact of the divergence of a vector field being zero does not imply that it is necessarily the curl of another vector field". This is a well-known mathematical theorem. You can find the proof in any textbook that mentions the Helmholtz decomposition, or check out this site for an explicit construction for (one possible) A in terms of B (the construction is written out after the last point below). Contrary to what you say, div B = 0 does tell us that the B field is solenoidal. This is exactly the definition of a solenoidal vector field.
Third, I don't see anything suspicious about the fact that Jackson and Griffiths don't call div B=0 "Gauss's law for magnetism". After all, they still state it as an empirical law, use it in the same contexts that I'm proposing, etc. Just, instead of labelling the law "Gauss's law for magnetism", they call the law "Absence of monopoles" and "(no name)", respectively. For example, since writing an article on the Sokhatsky-Weierstrass theorem, I've noticed it being used a zillion times in papers and textbooks, and they almost always call it "a well-known theorem" or some other generic term, and almost never call it by a proper name (but when they do call it by a proper name, it's always "Sokhatsky-Weierstrass"). As that example shows, just because a law is commonly referred to in generic terms shouldn't count against us writing an article that calls it by its most common proper name. Anyway, if the name is the only thing you're objecting to, then I think we have to go with the clear majority (including practicing physicists on arxiv) and call it "Gauss's law for magnetism". An article on "(no name)" is pretty impractical anyway. :-)
Finally, if you think that textbooks don't do this thing right, you need to find a modern, reliable source that does it your way. If you find such a source, then I guess we can present both views, while emphasizing that the more common approach is the more common approach, in accordance with WP:NPOV. --Steve (talk) 08:25, 3 April 2008 (UTC)
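(The construction promised in the second point above: if div B = 0 everywhere and B decays suitably at infinity, one valid choice is

    A(r) = (1/4π) ∫ [curl′ B(r′)] / |r − r′| d³r′,

the vector half of the Helmholtz decomposition; the scalar half vanishes precisely because div B = 0, leaving B = curl A.)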

Steve, while "Gauss's law for magnetism" is a correct term as regards the letter of the law, it is not correct as regards the spirit of the law. If we were to write the law in the form curl A = B as it appeared in the original eight Maxwell's equations, there would be no question of referring to it as "Gauss's law for magnetism". It is the curl aspect that is the important aspect of this particular Maxwell's equation. It is the curl aspect that tells us that the B field is solenoidal. It is not the zero divergence aspect that tells us that the field is solenoidal.

Again, I need to point out that the divergence of a vector field being zero does not imply that it is necessarily the curl of another vector field. It can alternatively mean that the vector field in question obeys the inverse square law. When div E = 0, E is certainly not derivable as the curl of another vector field. In this case, the divergence of E is zero exclusively because the field obeys the inverse square law.

You say that it is a well known theorem. If that is so, it is therefore a well known theorem that is easily demonstrated to be false simply by the example that I have given you regarding E.

If major textbooks don't use the term 'Gauss's law for magnetism' then I would tend to go along with them in this respect. The use of the term 'Gauss's law for magnetism' is obviously a modern term that has crept into the popular literature as a result of a limited understanding of the topic on the part of the authors.

In actual fact, I'm beginning to realize now that Maxwell's original eight equations were a superior grouping to the Heaviside four. They contain both curl A = B and E = vXB. The Heaviside four need to be supplemented by one of the original eight.

I suggest we just use the standard university textbook terminology for this equation div B = 0 and write beside it 'no magnetic monopoles'. If you insist on referring to it as 'Gauss's law for magnetism' then you are knowingly aiding and abetting a slide into degeneracy. It is not good enough to simply say that some textbooks use it after you have had the inferior nature of this title exposed.

We are not obliged to use the name 'Gauss's law for magnetism' just because some textbooks use it. George Smyth XI (talk) 10:39, 3 April 2008 (UTC)

Again, I have to tell you that if the divergence of a vector field is zero everywhere then it does imply that it is the curl of another vector field. This is an extremely well-known mathematical theorem, called "Helmholtz's theorem", proved in any good vector calc textbook and many electromagnetism textbooks too, and I'm frankly shocked that you would continue to dispute it. As for your "counterexample", the divergence of E is not usually equal to zero everywhere, so it cannot be written as the curl of another vector field. In a (simply-connected) charge-free region of space, the divergence of E is zero, so it would appear that there is a vector field A_E in that region of space whose curl equals E there. What's wrong with that? Analogously, in a region with no current or displacement current, the curl of B is zero, and you can and do write B as the gradient of a "magnetic scalar potential" (see the Magnetic potential article, for example). But please, I don't want to dispute Helmholtz's theorem with you. If you think that it's not true, you should be immediately publishing your counterexample in a math journal, not posting it on a wikipedia talk page :-)
"Gauss's law for magnetism" is certainly "standard university textbook terminology", since it's used in many of the most standard, major university textbooks. Not all of them, but many if not most of them. It's also used by practicing physicists in published articles. Why isn't that good enough? After all, your proposal of "no magnetic monopoles" is not unanimously used by textbooks either. "Gauss's law for magnetism" appears to be the most common term, as well as the most unambiguous and easiest to refer to. So we should use it :-) --Steve (talk) 15:57, 3 April 2008 (UTC)

Equation (D)

Brews, reading through your edits, I don't think that you have as yet realized the true significance of the Lorentz force. Equation (D) in the original eight Maxwell's equations contains Gauss's law, vXB, curl A = B, and the equation that you call the Maxwell-Faraday law all in one single equation. It was derived from Faraday's law. The only aspect of electromagnetism that is not catered for by equation (D) (Lorentz force) is Ampère's circuital law. With Maxwell's correction to the Ampère's circuital law, we then have all of electromagnetism in two equations. To get the EM wave equation, all we need to do is take one of the terms, ∂A/∂t, from the Lorentz force and substitute it into Ampère's circuital law.
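(For reference, equation (D) of Maxwell's 1865 paper, his "equation of electromotive force", reads in modern vector notation

    E = μ (v × H) − ∂A/∂t − grad Ψ,

whose three terms carry, respectively, the motional v×B effect, the piece whose curl gives the Maxwell-Faraday law, and the electrostatic part.)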

We don't need the vXB term or the Gauss's law term for the EM wave equation. But we need the vXB term for motion-dependent EMF, and that is why we need to add the Lorentz force to the modern four Maxwell's equations. George Smyth XI (talk) 19:59, 2 April 2008 (UTC)

You're right that I didn't appreciate all the content of this equation. However, I'd say that the Heaviside exiling of the Lorentz law has benefits that he never thought about, namely, in a medium the Lorentz force shows up in a very implicit manner in the constitutive equations, or in some statistical mechanical kind of way, and not with an explicit v. So from that standpoint it is conceptually cleaner to keep the fields dependent on j and ρ and let the transport specialists deal with how Lorentz force translates into expressions relating j and ρ to the fields. If instead we use an explicit v in the equations for the fields, it seems to me that for every choice of medium we'd have to include all the transport derivations along with the Maxwell's equations in a particular form designed for each medium of interest. Brews ohare (talk) 22:58, 2 April 2008 (UTC)

Brews, The Lorentz force adds only one thing to the modern Heaviside Maxwell's equations. It restores the F/q = vXB term that Heaviside took away.

The E term of the Lorentz force is already the same E term that appears in the so-called Maxwell-Faraday law and Ampère's circuital law. George Smyth XI (talk) 10:44, 3 April 2008 (UTC)

"relativistic transformation" section to relativity section

I don't think this content really belongs in this article at all. Can you explain that viewpoint further? Brews ohare (talk) 01:21, 3 April 2008 (UTC)

Well, the section basically gives the formula for how E and B transform under a Lorentz transformation. It has everything to do with the electric field, and the magnetic field, and the electromagnetic field, and special relativity, and Lorentz transformation, but I don't see how it belongs in an article on Maxwell's equations.
Sure, if you want to be able to apply Maxwell's equations in a different frame of reference, you need to know how E and B transform, just as you need to know how forces, velocities, and positions transform. But the same could be said for any electromagnetic phenomenon, and it's silly to put the E and B transformation rules into every single article that has E or B in one of its equations. Likewise, you could just as well say that if you want to apply Maxwell's equations to a rigid body, you need to know the equation for torque, and if you want to apply Maxwell's equations in a rotating frame, you need to know about centrifugal force, and so forth. Just because knowing something outside of Maxwell's equations could help you apply Maxwell's equations in some context, doesn't mean that other thing belongs in this article.
Also, some authors (like Purcell) start with Maxwell's equations and the Lorentz-transformation rules for coordinates and charge densities, and then "derive" the Lorentz-transformation rules for E and B. This is fine pedagogy, but it doesn't mean that the Lorentz transformation rules for E and B are a consequence of the "more fundamental" Maxwell's equations. After all, nothing in classical electromagnetism is fundamental, it all emerges from QFT, and I haven't seen any QFT textbooks that attempted to "derive" the transformation rules for E and B from anything else besides the definition of F. By the way, if the section were rewritten to say, "I'm going to show you that Maxwell's equations are Lorentz-invariant: Here's the transformation rules for E and B and J and rho and position and time, and here's the algebra that shows that it works", then that would be a sensible and relevant inclusion. It would also be pointless, though, as it would require a mountain of algebra to show something which is trivially obvious if you're willing to use the covariant formulation instead of three-vectors.
The transformation rules for E and B are important, to be sure, and I'm not quite sure where they best belong. (Which is why I haven't deleted it.) It's already at Mathematical descriptions of the electromagnetic field, but I suspect that no one would think to look there. As you noticed at Talk:Covariant formulation of classical electromagnetism, I think an article on Classical electromagnetism and special relativity might be worth writing, in which case that would be the perfect place (providing a home for this content was my original motivation for that idea). Alternatively, electric field, magnetic field, electromagnetic field, special relativity, or Lorentz transformation might be potential homes...I haven't thought too much about it. --Steve (talk) 02:13, 3 April 2008 (UTC)
Agreed. We might as well do a section on Maxwell's equations in rotating reference frames. George Smyth XI (talk) 06:15, 3 April 2008 (UTC)
Hi Steve: I hope you can help me out on this. The article Moving magnet and conductor problem as I understand says that Maxwell's equations themselves result in a change in fields, B → γB for example, and when this is put into the Lorentz force law, we get a force modified by γ. All that without relativity. Then relativity (by which I mean γ-corrections to lengths and time, applied to Newton's law of motion, not to Maxwell's equations) determines the transformations of forces, and makes the Maxwell prediction expected. - So question 1 is: Do you agree with all that? If so, the field transformations stand apart from relativity and do have a place in a Maxwell's equations article. Then we come to question 2: in Faraday's law of induction#Example: viewpoint of a moving observer an analysis for velocities v << c0 is made. It uses the form B = B( x + v t ) to describe the field seen in the moving frame. However, this analysis satisfies only the Maxwell-Faraday equation, and does not satisfy the Maxwell-Ampere equation because a t-dependent E-field generates a displacement current related B-field that was neglected. In a v << c0 case that is OK because the missing term is ≈ v / c0. But it isn't OK at large v. What is the correct way to handle this B-field? Is that the reason Maxwell's equations themselves lead to a field transformation, apart from relativity? Brews ohare (talk) 15:31, 3 April 2008 (UTC)
If you're not physically invoking either Galilean invariance or Lorentz invariance, you can't say anything whatsoever about what E and B are in other frames, since there are no other frames. You can say, I want Maxwell's equations (and the Lorentz force) to be Galilean invariant; then what's the transformation rule for E and B? You can use the specific example of a moving magnet and conductor as one test of whether you got the rules right. And you would get the right answer, to first order in v/c. I think that's what's done in the section "Transformation of fields as predicted by Newtonian mechanics". You can instead say, I want Maxwell's equations (and the Lorentz force) to be Lorentz invariant; then what's the transformation rule for E and B? Well you can assume the rules for coordinates and forces and charges, and then you can get the right answer, as in Relativistic electromagnetism. Or you can say, I'm looking simultaneously for both coordinate and field transformations, so that Maxwell's equations are invariant, and I think you could probably show that the correct Lorentz transformations are the only possibility. (I don't know what's going on in that section of Moving magnet and conductor problem.) So in that sense, you can presumably "derive" the Lorentz transformation rules (both coordinates and fields) from Maxwell's equations. But as I said before, that's history and that's pedagogy, but that's not physically the correct fundamentals of things.
As for your question 2, if you use the exact transformation rules for B, as well as for EMF, time, position, and everything else, I'm sure it works out. I haven't thought through the details.
Also, by the way, even if the field transformation rules were physical consequences of Maxwell's equations, I still wouldn't want them in this article. After all, almost everything in electromagnetism is a physical consequence of Maxwell's equations. In a typical electromagnetism textbook, half or more of the book is about the physical consequences of Maxwell's equations, everything from why the sky is blue to magnetohydrodynamics. We should keep the article more focused than that. :-) --Steve (talk) 16:33, 3 April 2008 (UTC)
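For concreteness, the exact transformation rules at issue, for a boost with velocity v and fields split into components parallel and perpendicular to v (the standard SI form), are

\[
\mathbf{E}'_{\parallel} = \mathbf{E}_{\parallel},\qquad
\mathbf{B}'_{\parallel} = \mathbf{B}_{\parallel},\qquad
\mathbf{E}'_{\perp} = \gamma\left(\mathbf{E} + \mathbf{v}\times\mathbf{B}\right)_{\perp},\qquad
\mathbf{B}'_{\perp} = \gamma\left(\mathbf{B} - \frac{\mathbf{v}\times\mathbf{E}}{c^{2}}\right)_{\perp}.
\]

To first order in v/c the γ factors drop out, which is the Galilean-style limit described above.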

The name "Lorentz force"

When modern Maxwell's equations are supplemented by the Lorentz force, which we all know is equation (D) of the original eight Maxwell's equations, this is the same as firing a cabinet minister and then bringing him back in again under a new name and hoping that nobody will notice that it is the same person. George Smyth XI (talk) 10:51, 3 April 2008 (UTC)

You're entitled to have whatever opinion you want about whether or not the terms "Maxwell's equations" and "Lorentz force" as they're used today are bad, ahistoric, physically-nonsensical terms. For my part, I have no opinion whatsoever, I just call them what all the other physicists call them. I just want to remind you, though, not to incorporate your opinions on these matters into the article unless you have a reliable source backing them up. (Perhaps you weren't planning to anyway.) Thanks! :-) --Steve (talk) 16:03, 3 April 2008 (UTC)

Steve, I'm very surprised to hear that you don't have opinions on these matters. I would have thought that anybody that was keen to edit these articles had very definite opinions. The talk pages are there for the purposes of exchanging opinions and for learning from each others opinions so as to help in obtaining a consensus for the best way to word the article on the main page.

There is a big problem with this topic which is not faced by a lot of other topics. It is the fact that the term Maxwell's equations originally referred to a set of equations by Maxwell, but later came to refer to a similar, but not identical, set of equations due to Heaviside.

There will naturally be a lot of opinion involved as regards how to best present this state of affairs in the main article. There will also be lots to be learned from comparing the two sets and from comparing the relative merits of the two sets.

I have expressed the opinion that curl A = B is a superior equation to div B = 0. I did not however suggest that we replace it in the Heaviside four in the main article since we have already agreed that this article should be concentrating on the Heaviside four as being the most universally accepted versions of what is understood to be Maxwell's equations.

However, it is a fact that curl A = B is not Gauss's law. And since the equation div B = 0 is the corresponding equation in the Heaviside four, then it should not be referred to as Gauss's law for magnetism.

And yes, you are probably right that if the divergence of a vector field is zero "everywhere" then it follows that there will be another vector such that its curl yields the vector in question. But somebody looking at an equation of the form div B = 0 would be entitled to assume an irrotational inverse square law solution. That of course implies a singularity (source) at the point of origin and so that solution doesn't satisfy the condition that the divergence is zero everywhere. In fact we see this in the case of the Biot-Savart law which has an inverse square law solution pointing to a source at the origin, and one is left to wonder how this ties in with the solenoidal B field. I have my reservations about the accuracy of the Biot-Savart law.

In order to avoid confusion between inverse square law irrotational solutions to B and solenoidal solutions to B, I hold to the opinion that the term Gauss's law as used in Maxwell's equations should be reserved for matters to do with the E vector. I think we should imitate the textbooks that don't call div B = 0 Gauss's law for magnetism. George Smyth XI (talk) 06:00, 4 April 2008 (UTC)

I would imagine that someone who sees the equation "div B=0" would assume that that means "div B=0", and that it doesn't mean "div B=0 except possibly at certain points where it's infinite". I don't see how a reader would be "entitled" to interpret it the latter way. But maybe they do, in which case that can be easily clarified in the text or footnotes. We also have the law written in integral form, which specifically and transparently rules out the latter interpretation.
When you refer to "an inverse square law solution pointing to a source at the orgin", I assume you mean
This is not a solution to the Biot-Savart law. In fact, every solution to the Biot-Savart law is consistent with "div B=0 everywhere", as proven, for example, in Griffiths.
Just to say it again: There's one thing called "Gauss's law" that has nothing to do with B, and there's another thing called "Gauss's law for magnetism" that has nothing to do with E. They're two different terms, for two different laws. Just because "Gauss's law" is two of the words in "Gauss's law for magnetism" doesn't mean they necessarily have anything to do with one another. This seems to be the basis for some of your objections, if I understand correctly. If this is indeed the source of confusion, it would seem more economical to simply say this, as opposed to throwing out the most common and unambiguous terminology for this law in favor of a hard-to-refer-to, generic term.
So far in the past couple days, you've expressed skepticism about both Helmholtz's theorem and the Biot-Savart law, two basic laws that have been universally accepted for a hundred years. I think you should take that as a sign that you might benefit from further background reading and learning about these topics. I recommend that if you get a chance, you spend some more time reading an electromagnetism textbook. Of course, it's great to have you here contributing, but I'm encouraging you to take extra caution to double-check any of your potential edits against a textbook before incorporating them. No offense intended, and thanks a lot! :-) --Steve (talk) 16:18, 4 April 2008 (UTC)
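The Griffiths/Jackson proof referred to above is short enough to sketch. Writing the Biot-Savart law for a volume current J,

\[
\mathbf{B}(\mathbf{r}) = \frac{\mu_0}{4\pi}\int \mathbf{J}(\mathbf{r}')\times\frac{\mathbf{r}-\mathbf{r}'}{|\mathbf{r}-\mathbf{r}'|^{3}}\,d^{3}r',
\]

and taking the divergence with respect to r under the integral, the identity div(J × G) = G · curl J − J · curl G applies with J(r') independent of r (so its curl with respect to r vanishes) and with G = (r − r')/|r − r'|³ = −∇(1/|r − r'|) a pure gradient (so its curl vanishes too). The integrand is identically zero, so div B = 0 at every point, origin included.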

The Lorentz Transformations

The Lorentz transformation section is well presented. It shows the Lorentz transformation to be basically the Lorentz force but with the relativistic gamma factor included, along with a reciprocal equation for the B vector which we know is effectively the Biot-Savart law. In relativity it is acceptable to use the term E for vXB, as it was in Maxwell's original papers. Strangely we never see that usage of E = vXB when treating the Lorentz force classically in modern textbooks.

But where does the idea come from that a B field in one frame of reference can become an E field in another frame? I don't read that into the Lorentz transformations. George Smyth XI (talk) 06:18, 4 April 2008 (UTC)

If you can get a hold of Purcell's textbook, you can find exhaustive yet readable discussions of the various interrelationships between electromagnetic phenomena in different inertial frames. Some of that is also at the article Relativistic electromagnetism. --Steve (talk) 16:31, 4 April 2008 (UTC)

Steve, I'm totally familiar with the theories of Rosser and Purcell. They bear no relationship to the issue above. They were working along completely different lines. They were using the Lorentz-Fitzgerald contraction to create a charge density. The topic above is only about adding the relativistic gamma factor to the Lorentz force and the Biot-Savart law.

What you must bear in mind about Purcell's theory is that he is claiming that an E field (Coulomb force) with a source in one reference frame can become a B field without a source in another reference frame. How does the source appear and disappear? George Smyth XI (talk) 05:16, 5 April 2008 (UTC)

George: I don't have Purcell's book. However, maybe the ambiguity here is that E need not originate from an electric charge, but also can originate through the Maxwell-Faraday equation: E can have both a curl and a div. The curl part can transform into a B-field. That transformation does not require the magical disappearance of a source. Brews ohare (talk) 15:49, 5 April 2008 (UTC)

Brews, You are thinking along the correct lines. There are many who have never noticed what you have noticed about there being two kinds of E. And yes, the (partial)dA/dt one is almost certainly solenoidal.

But I can assure you that Purcell's theory unequivocally converts the Coulomb force version of E, with its sources, into the sourceless B of the Biot-Savart law. In other words, a stationary observer sees solenoidal B lines whereas a moving observer, no matter how slowly he is moving, will see these solenoidal B lines as radial electrostatic irrotational E lines, according to Purcell. Purcell's theory totally contradicts that other theory that people talk about whereby Maxwell's equations can be derived directly from a Lorentz transformation on the Coulomb force if we assume charge invariance. Purcell's theory assumes the complete opposite of charge invariance. Purcell's theory assumes charge variance due to Lorentz-Fitzgerald contraction.

But this is a bit off topic. The section in question deals with the Lorentz transformation of Maxwell's equations as they already exist. And it produces two formulae which are effectively the Lorentz force and the Biot-Savart law with the relativistic gamma factors added.

At the very most, all these equations tell us is that an E of the vXB kind in one frame of reference is an E of the -(partial)dA/dt kind in another frame. I would personally dispute even this, and I have already said so to you elsewhere, citing the Faraday paradox as a case in point.

But my own opinion on this is irrelevant for the purposes of the main article. However, one thing is absolutely sure and that is the fact that a Lorentz transformation on Maxwell's equations does not lead to the conclusion that an E field in one frame is a B field in another frame, and so any such Purcellian type statements should be removed from the main article. George Smyth XI (talk) 16:27, 5 April 2008 (UTC)
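For what it's worth, the standard textbook form of Purcell's argument does keep track of the sources. Sketching it for the usual example (an infinite straight wire, neutral in the lab frame, carrying current I): line charge density and current transform together under a boost v along the wire, λ' = γ(λ − vI/c²), so a wire that is neutral (λ = 0) in one frame carries

\[
\lambda' = -\gamma\,\frac{vI}{c^{2}} \neq 0
\]

in the frame of a moving test charge, and the radial E field of this net line charge supplies exactly the force that was vXB in the lab frame. The sources never appear or disappear; charge density and current density simply mix into each other, the way the components of a four-vector do.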

My view is the following: (1)It's true, of course, that the E field in one frame contributes to the B field in another frame, (2) I'm pretty sure, but not 100% sure, there are no other conceivable field transformations that would leave {Maxwell's equations and the Lorentz force} exactly Lorentz-invariant, (3) There's no need to put either (1) or (2) into this article. :-) --Steve (talk) 17:07, 5 April 2008 (UTC)

Steve, I don't see how you ascertain that an E field in one frame even remotely contributes towards a B field in another frame. They are different kinds of quantities.

A Galilean transformation introduces the vXB term to the Heaviside versions of Maxwell's equations.

Anyway, I agree with you that we don't need this section at all. It is about as relevant as having a section on Maxwell's equations in a precessing restaurant. George Smyth XI (talk) 09:37, 6 April 2008 (UTC)

Sounds like we DO need this section, so that people can understand that E and B are really the same things, in different frames of reference. Dicklyon (talk) 13:01, 6 April 2008 (UTC)
Dicklyon, there are a lot of things people don't understand in classical electromagnetism. This article, "Maxwell's equations", should address those misunderstandings that are directly relevant to Maxwell's equations. I don't think that "E and B are really the same things, in different frames of reference" qualifies. :-)
George, it's been known since Einstein exactly how an E field in one frame contributes towards a B field in another frame. If you don't understand the argument, read about it in a textbook or ask your local physicist. If you have a disproof of this claim, publish it and you'll be famous. --Steve (talk) 18:44, 6 April 2008 (UTC)

Steve, the thing you are talking about is indeed the very relationship laid out in the section in question. Those two expressions are the Lorentz force and the Biot-Savart law. But they certainly don't tell us that E in anyway contributes to B. I think you are getting confused with Rosser and Purcell. They told us something like that. George Smyth XI (talk) 10:25, 7 April 2008 (UTC)

Dicklyon, are you saying that the Coulomb force in one frame becomes the Biot-Savart law in another frame? If so, where did the sources go? Or are you saying that an E given by dA/dt in one frame is a B, where B = curl A, in another frame? In other words, are you saying that curl A in one frame is dA/dt in another frame? George Smyth XI (talk) 10:30, 7 April 2008 (UTC)

George, every book and every course that covers relativistic electromagnetism gives the rules for transforming E and B into different frames, and they're exactly the rules that Brews put into this article (they're also in this article). If you have a good reason for thinking those rules are incorrect, then you should certainly publish this revolutionary discovery, and you'll be famous. Good luck :-) --Steve (talk) 16:19, 7 April 2008 (UTC)

Steve, it's the same rules that I am referring to. They are in this article in the very section that we are now talking about. They do indeed give the rules for transforming E and B into different frames under Lorentz transformation.
But that's not the same as saying that a B field in one frame is an E field in another frame. This latter assertion stems from more recent theories by Rosser in 1959, and by Purcell in 1963.
The point I'm making is that the conclusion of the 1959 Rosser theory is being cited wrongly as the conclusion for how E and B are interrelated under Lorentz transformation. The conclusion of the Lorentz transformation is different. It is that vXB contributes to E and vXE contributes to B.
By all means retain that conclusion, but you first have to introduce Purcell. At that point we would be going off topic, but it might be worthwhile to transfer the whole section to the relativity page. Does the relativity page have a section on Purcell yet? George Smyth XI (talk) 07:42, 8 April 2008 (UTC)
George, just to get this straight: You agree that the correct way to compute the transverse component of the electric field in a different frame is

\[
\mathbf{E}'_{\perp} = \gamma\left(\mathbf{E} + \mathbf{v}\times\mathbf{B}\right)_{\perp}.
\]

But you would disagree with the claim: "Therefore, the B field in one frame contributes to the E field in another frame"? Isn't this an obviously-true statement given this transformation law above? Or maybe what you're disagreeing with is the stronger claim: "Therefore, the B field in one frame is an E field in another frame", a claim that I would also disagree with, insofar as that sentence implies that if you have E = 0, B ≠ 0 in one frame of reference, you can always find another frame where B = 0, E ≠ 0...which I don't think is true. --Steve (talk) 17:35, 8 April 2008 (UTC)
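One clean way to frame the stronger claim is through the two field invariants, which take the same value in every inertial frame:

\[
\mathbf{E}\cdot\mathbf{B}
\qquad\text{and}\qquad
E^{2} - c^{2}B^{2}.
\]

If E = 0 and B ≠ 0 in some frame, then E² − c²B² is negative, and since no Lorentz transformation can change that value, no frame exists with B = 0 and E ≠ 0 (which would make it positive). So B can contribute to E in other frames without ever being wholesale convertible into one.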

Steve, yes. It's only the stronger claim I was disagreeing with. B does contribute to E and vice versa through the above equations, which are the Lorentz force law and interestingly what I had termed the microscopic version of the Biot-Savart law. I believe that equation is the correct equation for B, whereas the Biot-Savart law with its inverse square law term is distinctly wrong. George Smyth XI (talk) 06:29, 9 April 2008 (UTC)

George, physicists for the last hundred years have been learning the Biot-Savart law and Gauss's law for magnetism, using them every day in their research, teaching about them, and writing textbooks about them. Maybe it's possible that none of these thousands of bright minds noticed what you see as a glaring contradiction between the two laws, even though they clearly thought about and went to the effort of mathematically proving that they couldn't possibly contradict each other. And maybe you, George, are the only one who sees this truth, thanks to your superior understanding of vector calculus. Maybe that's the case. If so, you should be doing what you can to get the truth out, since there's a mountain of physics which has been built on the consistency of these laws, and it will have to be thrown out and re-started from scratch. Your best bet is publication; anything you post on Wikipedia will be deleted as original research. You can try to track down your nearest physicist to be a coauthor -- no one would turn down such a career-making opportunity. If successful, you'll be first in line for a Nobel Prize. Like I said, good luck on this exciting journey, and let me know how it goes. --Steve (talk) 16:22, 9 April 2008 (UTC)

Maxwell's equations in Relativity

The historical introduction of the relativity section mentions how, due to the involvement of the speed of light, it was assumed that Maxwell's equations were only valid in the rest frame.

Well that is true as regards the EM wave equation. Maxwell removed the vXB term from his equation (D) (the Lorentz force) in order to derive the EM wave equation in the rest frame. So naturally if we start doing Galilean transformations on Maxwell's equations we will bring the vXB term back again. George Smyth XI (talk) 06:29, 4 April 2008 (UTC)

D and B against E and H

193.198.16.211, It is normal to consider D to be the parallel quantity to B, and E to be the parallel quantity to H. This is in large part because,

D = εE

and,

B = μH

I don't intend to revert this again as it is a relatively trivial issue. But you did invite your edit to be discussed on the talk pages and so I am giving you my opinion. George Smyth XI (talk) 11:45, 4 April 2008 (UTC)

George: The structure of this article is to treat E and B as basic. They are the variables chosen to appear in vacuum with all charges treated as free, and in the Lorentz law. In materials the variables D and H are introduced. Therefore what is needed is the connection between these new variables and the ones introduced first. That is simply a matter of the logic of this article. It is not the historically most ancient and venerable approach, but to change the logical precedence of B and E will require total reorganization of the article, and is not a trivial matter. Moreover, it would fly in the face of "modern" presentations, as found in the dominant textbooks of today.
It would be a service to the historical record if you would trace the events leading to this change in viewpoint. Obviously the symmetry of the constitutive equations would be better served if the reciprocal of μ were defined instead of μ, so the choice of B as more fundamental than H has to have some history.

Brews ohare (talk) 14:29, 4 April 2008 (UTC)
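For the record, the definitions in question, with E and B taken as fundamental, are

\[
\mathbf{D} = \varepsilon_0\mathbf{E} + \mathbf{P},\qquad
\mathbf{H} = \frac{1}{\mu_0}\,\mathbf{B} - \mathbf{M},
\]

which puts D and H together on the derived side and, as noted above, brings in μ through its reciprocal.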

Brews, That's OK with me. I said that I wouldn't revert it back again. But just as a matter of curiosity, I can accept that E is a more basic quantity than D. But regarding B and H, I have never given it too much thought until now, as to why modern textbooks prefer to use B over H. If they were to swap all B's for the term μH, would that not be more informative? As it stands now, all references to B have the implicit μ term concealed.

What exactly are the advantages of using B as opposed to H? B is a weighted term. It is officially called magnetic flux density. What was wrong with using the original magnetic field term H? George Smyth XI (talk) 14:46, 4 April 2008 (UTC)

Brews, I see that you are now asking the exact same question. Right now I don't know the answer. George Smyth XI (talk) 14:47, 4 April 2008 (UTC)
There's a nice explanation of this in Griffiths, as I recall. He explains why E and B are the more fundamental ones, and also why E and H are the most common ones, in practice. If I remember right, currents are very easy to measure experimentally, and they give you H, and voltages are very easy to measure, which gives you E. On the other hand, charge densities, which would give you D, are very difficult to directly measure, as are magnetic potentials, which would give you B. :-) --Steve (talk) 16:27, 4 April 2008 (UTC)
As pure speculation, the choice of B as fundamental may relate historically to the choice of standards for units, which makes Ampère's force law the basis for the ampere. Once you have a known current I, the force on a loop of wire is expressed in terms of B as:

\[
\mathbf{F} = I \oint d\boldsymbol{\ell} \times \mathbf{B}.
\]

Brews ohare (talk) 16:49, 4 April 2008 (UTC)

Regarding E and D, the situation is not exactly parallel to the situation regarding B and H. Maxwell combined D, E and H in his equations. He used D for displacement and E for electromotive force and he even had a special equation to relate D and E to each other.

But Maxwell never used B. He always used μH.

Today we use E and B in the vacuum equations, with E being applied to both displacement current and to electromotive force. As regards E and D it is easy to see that E is the more comprehensible quantity.

Regarding B, perhaps it was considered simpler in later years, once the significance of μ as the density of Maxwell's vortex sea was lost, to simply define a quantity B as μH in standardized units and consider B to be the magnetic field in the vacuum. George Smyth XI (talk) 05:13, 5 April 2008 (UTC)

Duality

It sometimes helps to look at the units of measure of the various quantities to see which ones are duals. It is interesting to note that the SI units for E are volts per meter and for H are amperes per meter, while the SI units for D are coulombs per square-meter and for B are webers per square-meter. Now, since a coulomb is equivalent to an ampere-second, and a weber is equivalent to a volt-second, it seems that E and H are duals, and D and B are also duals. First Harmonic (talk) 02:56, 20 August 2008 (UTC)
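Spelling out the dimensional bookkeeping of this argument (SI units):

\[
[\mathbf{E}] = \frac{\text{V}}{\text{m}},\qquad
[\mathbf{H}] = \frac{\text{A}}{\text{m}},\qquad
[\mathbf{D}] = \frac{\text{C}}{\text{m}^{2}} = \frac{\text{A}\cdot\text{s}}{\text{m}^{2}},\qquad
[\mathbf{B}] = \frac{\text{Wb}}{\text{m}^{2}} = \frac{\text{V}\cdot\text{s}}{\text{m}^{2}},
\]

so the volt-for-ampere (voltage-for-current) swap exchanges E with H and D with B.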

Also, notice that ε0 has units of farads per meter, while μ0 has units of henries per meter. This observation suggests that the dual of ε0 is actually μ0 and not 1/μ0, as suggested above. Thus the correct dual equations should be

\[
\mathbf{D} = \varepsilon_0 \mathbf{E},\qquad \mathbf{B} = \mu_0 \mathbf{H},
\]

and not

\[
\mathbf{D} = \varepsilon_0 \mathbf{E},\qquad \mathbf{H} = \frac{1}{\mu_0}\,\mathbf{B}.
\]

First Harmonic (talk) 03:02, 20 August 2008 (UTC)


It is only because of historical accident that some authors have presented B as the magnetic field instead of H. It should be clear from the equations and the units of measure that B is actually the magnetic flux density (magnetic flux per unit area measured in webers per square meter), just as D is actually the electric flux density (electric flux per unit area, measured in coulombs per square meter). In my opinion, the physical significance of the quantities and the equations should be the primary consideration more than the historical development. First Harmonic (talk) 03:15, 20 August 2008 (UTC)

Two textbooks that support this point of view:

  • David H Staelin, Ann W. Morgenthaler, and Jin Au Kong, Electromagnetic Waves (Prentice-Hall, 1994).
  • Hermann A. Haus and James R. Melcher, Electromagnetic Fields and Energy (Prentice-Hall, 1989).

The authors of both of these textbooks are well-known professors in electrical engineering at MIT.

First Harmonic (talk) 03:35, 20 August 2008 (UTC)

Well, for what it's worth, David J. Griffiths's (physics-based) textbook makes it quite clear and explicit (e.g. see p.271) that E and B are the fundamental quantities, and D and H are less fundamental. (And I quote: "B is indisputably the fundamental quantity".) He also explains (p271) why E and H are the two which are more practical and thus most often used in laboratories and by engineers. Your books (being by electrical engineers), naturally, would regard the more practical ones as more important. Fine with me.
I don't know what you mean by "dual", or what you're trying to prove in the argument above. As I'm sure you're aware, unit names are decided upon by the SI committee, not by Nature. Perhaps you're saying that the SI committee intended to have some parallel to , in which case I agree. But who cares what the SI committee was going for?
Getting back to the article, which is what we should be talking about, are you saying that the "Constitutive relations" section should replace, e.g.

\[
\mathbf{H} = \frac{1}{\mu_0}\,\mathbf{B} - \mathbf{M}
\]

by

\[
\mathbf{B} = \mu_0\left(\mathbf{H} + \mathbf{M}\right)
\]

?
If so, I disagree. These are expressed this way not because we're making some deep statement about what electrical quantity is "dual" to what magnetic quantity. It's just, simply, that in the context of the algebraic manipulations implied by the flow of text, the quantity on the left-hand-side would end up being replaced by the quantity on the right-hand-side. Because the final result is Maxwell's equations just in terms of E and B. If we were ultimately deriving Maxwell's equations just in terms of E and H (which I've never seen written down), then it would make sense to write it as you suggest. Nothing deep here, just writing an equation so as to make the algebraic manipulations maximally transparent. --Steve (talk) 04:32, 20 August 2008 (UTC)
If you make the argument based simply on algebraic manipulations, then of course, it really doesn't matter which of the two pairs you choose to use as constitutive relations. But I am trying to point out something that is deeper than just algebraic manipulations, something that actually has a basis in physical significance. First Harmonic (talk) 23:45, 20 August 2008 (UTC)
BTW, I never made any argument as to which of H or B is "fundamental" versus which one is "derived". That of course is also open to interpretation. What I said is that the dual of E is H, and the dual of D is B, and I base that assertion not on algebraic manipulations, but on physical principles and physical significance. Duality theory is well established and well documented in many places, including a few articles in WP. I don't have time to explain all of it right now. The short form is that transforming from one dual to the other involves substituting voltage for current and vice versa. Why? This choice is not arbitrary; it is necessary to ensure that energy is conserved in the transformation. Current and voltage are a conjugate pair in the sense that their multiplicative product is power. First Harmonic (talk) 23:45, 20 August 2008 (UTC)
As for the units of measure, of course the SI Committee selects the names of the units. That was not my point. I was using the names of the units as a shorthand for the physical dimensions that they represent, e.g. volts represent electrical potential, amperes represent electrical current, meters represent distance, etc. The SI Committee did not decree that valid physical equations must conserve physical dimensions; that is a requirement of physical reality. And they had better conserve units of measure as well, if you want the numbers to be correct! First Harmonic (talk) 23:45, 20 August 2008 (UTC)
I have seen many textbook discussions based entirely on E and H with no mention of D and B, especially in the case of linear isotropic media (such as free space). The fact is that you only need two of the four, as long as the first is chosen from either E or D, and the second from either H or B. The issue is that Maxwell's equations are almost perfectly symmetrical if you choose E and H as the basis of the equations, with D and B as the result of the constitutive relations. That alone doesn't mean much, although I prefer symmetry to asymmetry, but the duality argument is actually solid. First Harmonic (talk) 23:45, 20 August 2008 (UTC)
As for the quote from David Griffiths's textbook that "B is indisputably the fundamental quantity", I have one question: Why? Although I realize that Prof. Griffiths is a distinguished authority on the subject, it is not enough for him to simply assert that B is fundamental; I would hope that he has some reason and/or evidence to support such an assertion. Also, what is the year of publication? How do you know that the prevailing opinion of physicists as to the best way to present this material from a pedagogical point of view has not changed since Griffiths's book was published? Also, why do you assume that the opinion of a physics professor should carry more weight than that of an electrical engineering professor? First Harmonic (talk) 23:57, 20 August 2008 (UTC)
I'm afraid I'm not familiar with the "duality" that you're referring to. The only kind of electromagnetic duality I've heard of is a transformation that also turns electric charges into magnetic monopoles and vice-versa, which I gather is not what you're talking about. Anyway, I'll take your word for it.
I have seen many textbook discussions based entirely on E and B with no mention of D or H. Again, I'm reading physics textbooks, you're reading engineering textbooks.
I think the opinion of a physics professor should carry more weight in questions of physics, while the opinion of an EE professor should carry more weight in questions of EE. Most "duality transformations" that I've heard of are considered "physics", so that's what I assumed here. But maybe this one isn't, in which case we should indeed be listening to the engineering professors.
The reason that Griffiths regards B as fundamental is explained in the book; my quote was just the conclusion. I brought it up because Griffiths (like other physics E&M textbook writers) regards E and B as fundamental, and D and H as less-fundamental. In this respect, one might reasonably say that E and B "are a pair" and D and H "are a pair" in a loose sense. Maybe your duality thing is a different sense in which this isn't a productive "pairing".
Does your unit argument still work in cgs? Just curious.
Griffiths' book was written in the last 10 years I think. --Steve (talk) 00:26, 21 August 2008 (UTC)
I haven't worked it out 100 percent, but I did a quick calculation yesterday (in my head), and the duality that pairs voltage and current is equivalent to the duality that pairs electric charge with magnetic monopoles. And if you follow that duality transformation, it will pair E with H, and D with B. Again, I am not making any claim as to which quantities are "fundamental" and which are "less fundamental", because I don't really know what that means in a physical sense. But I do understand what duality is, and how it relates to the physics.
But do not take my word for it. Please try to work it out yourself. Start with the duality that pairs electric charge with magnetic monopoles. Then follow the logic wherever it leads. What does it tell you? Remember, the electric displacement D is dimensionally the same as electric charge per unit area. Is that the dual of H or B? First Harmonic (talk) 01:17, 21 August 2008 (UTC)
Confirmed! Page 274 of Jackson's Classical Electrodynamics has the duality transformation, and if you're willing to start calling "electric charges" "magnetic monopoles" and vice versa, you also have to switch E with (a multiple of) H and D with (a multiple of) B for electromagnetism to still be consistent. Neat. Anyway, it's not especially relevant to this article, since in most contexts it's most useful to keep sacrosanct the convention of what's an electric charge and what's a magnetic monopole. :-) --Steve (talk) 02:14, 21 August 2008 (UTC)
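For reference, the duality rotation has the schematic form below (writing Z0 = √(μ0/ε0) so that the mixed pairs carry matching units; this is a from-memory transcription of the transformation referred to above, with ξ the rotation angle):

\[
\begin{aligned}
\mathbf{E} &= \mathbf{E}'\cos\xi + Z_0\mathbf{H}'\sin\xi, &\qquad Z_0\mathbf{D} &= Z_0\mathbf{D}'\cos\xi + \mathbf{B}'\sin\xi,\\
Z_0\mathbf{H} &= -\mathbf{E}'\sin\xi + Z_0\mathbf{H}'\cos\xi, &\qquad \mathbf{B} &= -Z_0\mathbf{D}'\sin\xi + \mathbf{B}'\cos\xi,
\end{aligned}
\]

and ξ = π/2 is the pure swap that pairs E with (a multiple of) H and D with (a multiple of) B, just as described.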
I am glad you have arrived at an understanding of duality, and I appreciate your willingness to step out onto the limb. But I am afraid that I have not been successful in driving home the real point of my argument. Duality was not the objective; it was meant to be a means to the ultimate end. The point of duality is not to do the transformation, but rather to gain a deeper understanding and some physical intuition about Maxwell's equations and how they represent electrical and magnetic phenomena. I am not interested in transforming electric charges into magnetic monopoles -- as far as we know, there are no magnetic monopoles that anyone has ever seen -- but I am interested in using duality theory and symmetry in understanding electricity and magnetism in a simple and elegant way. Unfortunately, the confusion between H and B (not to mention the disaster of cgs units) is so deeply entrenched in basic textbooks and high school curricula that I guess I am banging my head against a brick wall.
Anyway, good luck to you. Sorry that I was unable to explain it better. First Harmonic (talk) 02:47, 21 August 2008 (UTC)

Error in Table 2

I'm not sure, but it seems that in Table 2, in the last row, some factors are missing; I think someone omitted them while switching from H to B. I'm just studying this topic, so I didn't want to change it on my own. —Preceding unsigned comment added by 79.186.69.103 (talk) 16:23, 16 June 2008 (UTC)

You're right, thanks for pointing that out. I've just corrected it. :-) --Steve (talk) 20:24, 16 June 2008 (UTC)

Zero Divergence

Steve, If you look at the Biot-Savart law page, you will see that the definition of B is in terms of an inverse square law.

We have agreed that zero divergence can apply to either a solenoidal field, or to an inverse square law field other than at the point of origin.

I don't know how you can have both these scenarios at once. So something is wrong somewhere.

I doubt very much if this matter will be resolved by reading textbooks. George Smyth XI (talk) 04:36, 5 April 2008 (UTC)

George, the "zero divergence" which Gauss's law for magnetism is talking about is "zero divergence". It's not "zero divergence other than at the point of origin". Gauss's law for magnetism does implies a solenoidal field. This is the definition of a solenoidal field.
Jackson and Griffiths both explicitly prove that any solution to the Biot-Savart law, for any current distribution in the world, is a solenoidal field, consistent with Gauss's law for magnetism, with zero divergence everywhere, including at the origin or anywhere else. If you've found a counterexample to this, you should be publishing it, not posting it here.
The Biot-Savart law is an "inverse square law", but with a funny vector dependence because of the cross-product. is not a solution to the Biot-Savart law, as you'll find if you try to actually write down the current distribution that gives rise to it.
Contrary to what you say, if you read a good E&M textbook you will get a better understanding of these issues, and you'll be able to prove for yourself that Gauss's law for magnetism and the Biot-Savart law are consistent, and why Gauss's law for magnetism proves the existence of the vector potential. If you can't make sense of any textbook presentation, ask your local physics professor to clarify the issues. But please, I have better things to do than to defend statements to you that are universally accepted and understood by every physicist in the world. You have to take some responsibility yourself for getting up to speed with basic electromagnetic theory. --Steve (talk) 16:58, 5 April 2008 (UTC)
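The limit/delta-function machinery alluded to above can be made explicit: the divergence of the radial inverse-square field is not "undefined at the origin", it is a delta function there,

\[
\nabla\cdot\left(\frac{\hat{\mathbf{r}}}{r^{2}}\right) = 4\pi\,\delta^{3}(\mathbf{r}),
\]

which is exactly why such a field describes a point source (as in Gauss's law for a point charge), and why a B field obeying div B = 0 everywhere cannot contain such a piece: there is no magnetic charge available to sit at the origin.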

Steve, I agree that B is zero divergent everywhere because it is solenoidal, and hence curl A = B. However, with Biot-Savart, B will not be defined at the origin and therefore div B will not be zero everywhere. Therefore there is something seriously wrong with the Biot-Savart law. George Smyth XI (talk) 09:24, 6 April 2008 (UTC)

If you think there's something seriously wrong with the Biot-Savart law, I'd encourage you to follow through on it, find a specific contradiction between the Biot-Savart law and Gauss's law for magnetism, or between the Biot-Savart law and experiment, and submit it for publication in Science. When it gets published and you become famous for revolutionizing classical electromagnetism, I'll send you a gift-basket to apologize for doubting you. :-)
Seriously, though, yes, solutions to the Biot-Savart law blow up at the origin, but their divergences don't. Their divergences are zero. If you spend some more time with a textbook, you'll learn that infinities in physics are not hopeless but can be dealt with by means of limits, delta-functions, etc. There are perfectly well-defined ways to compute what the divergence of B is at the origin, and it turns out to be zero. If you have further questions or doubts about the Biot-Savart law, I promise you that you will find answers to them in textbooks, if you read them carefully enough. Or if you have access to any professional physicists, you can arrange a face-to-face meeting with them in a room with a blackboard, and all your misunderstandings can be answered much more efficiently than they can here. --Steve (talk) 18:32, 6 April 2008 (UTC)

Steve, The divergence of B is zero everywhere because of curl A = B. If we introduce an inverse square law to B then we lose that because the divergence cannot be zero at the origin on that basis. George Smyth XI (talk) 10:09, 7 April 2008 (UTC)

Well, then it sounds like you have a demonstration that Gauss's law for magnetism and the Biot-Savart law are inconsistent, even though many textbooks explicitly prove otherwise. You should certainly not be wasting time here, you should be publishing this result, which would be the most important discovery in classical electromagnetism since Einstein. When you're famous, I'll send you a bushel of flowers. Good luck! :-) --Steve (talk) 16:09, 7 April 2008 (UTC)

Steve, We're getting away from the point here. Take a look at the wiki article on Coulomb Gauge. I've copied this from it:

"In the Coulomb gauge, it can be seen from Gauss' law that the scalar potential is determined simply by Poisson's equation based on the total charge density ρ (including bound charge):

\[
\nabla^{2}\varphi = -\frac{\rho}{\varepsilon_0}
\]"

We know that A is not solenoidal. The solution is the Coulomb force with a source at the origin. In other words, div A = 0 does not hold at the origin.

So there is indeed an ambiguity in using the equation for zero divergence. Are we referring to the solenoidal condition or the inverse square law condition?

That's why I prefer using curl A = B as in Maxwell's original eight equations, because it is less ambiguous. But we can't do that because this article is about the Heaviside four. Nevertheless, we can maintain the quality by using Maxwell's name for that equation as opposed to Gauss's law for magnetism. Maxwell referred to it as the equation of magnetic force.

Finally, the zero divergence condition cannot satisfy both the inverse square law solution and the solenoidal solution simultaneously, so something is very wrong with the Biot-Savart law. —Preceding unsigned comment added by George Smyth XI (talkcontribs) 07:56, 8 April 2008 (UTC)

Sounds great. Good luck publishing your proof that the Biot-Savart law yields a non-solenoidal B-field, and your proof that A is not solenoidal in the Coulomb gauge, and your proof that div B=0 does not necessarily imply that there is an A with curl A = B. These claims are explicitly denied, or in some cases even disproved, in textbooks on these subjects, so certainly the physics community will have a lot to learn from your insights, and you'll undoubtedly be showered with fame, prizes, and offers of tenured faculty positions. Opportunity is knocking, George; please let me know how it goes. :-P --Steve (talk) 17:18, 8 April 2008 (UTC)

Steve, you are misrepresenting me here.

div B = 0 means that there is a vector A such that curl A = B, provided that we emphasize the fact that we really mean that everywhere, with no exceptions about points of origin.

When we speak of the Coulomb gauge, div A = 0, we do not make this emphasis and our minds are focused on the inverse square law Coulomb force solution which is radial and certainly not solenoidal. The equation of course breaks down at the point of origin.

You implied above that you believe that A is solenoidal in the Coulomb gauge. It most certainly isn't. It is radial inverse square law.

And I think your error in this regard illustrates the ambiguity in the use of the div B = 0 equation as opposed to the curl A = B equation.

The more I look at the original eight Maxwell's equations, the more I realize that they are a superior set to the Heaviside four. George Smyth XI (talk) 14:15, 9 April 2008 (UTC)

My view, and the demonstrated view of modern physicists, is that an equation like div A = 0, with no other information, always means "everywhere with no exceptions", unless otherwise stated. For example, how many times have you seen Gauss's law (for electricity) stated as div E = ρ/ε0 (with no other qualifications)?? But maybe there are other people who have the same confusions as you, in which case, by all means, we can say "this equation holds everywhere".
I'll say explicitly: The definition of the Coulomb gauge is that the divergence of A is zero everywhere, including at the origin, including at every point in space. In other words, A is required to be solenoidal. So what you're saying, then, is that the Coulomb gauge is impossible. This will be very exciting news to the physics community, who have used the Coulomb gauge in hundreds of thousands of books and papers over the years. It shouldn't be a problem for you to tie up all the loose ends, make this argument totally airtight, and submit it for publication. The physics community will have its work cut out in revising every textbook and rewriting all those papers.
Do you have access to any physicists or physics professors? For example, do you have an association, past or present, with a university? If so, you should be having this conversation with that professor, who can verify to you what I said is exactly what is meant by the term "Coulomb gauge", and prove to you, mathematically and through examples, that it is always possible to find such an A. Or maybe you'll end up explaining to him (or her) why every physicist in the world is wrong but you are right, and I'm sure that the professor will jump at the opportunity to help you publish these revelations. --Steve (talk) 18:45, 9 April 2008 (UTC)
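The existence argument being referred to is essentially one line. Given any A0 with curl A0 = B, a gauge transformation fixes up the divergence:

\[
\mathbf{A} = \mathbf{A}_0 + \nabla\lambda,\qquad
\nabla^{2}\lambda = -\nabla\cdot\mathbf{A}_0
\;\;\Rightarrow\;\;
\nabla\times\mathbf{A} = \mathbf{B},\quad \nabla\cdot\mathbf{A} = 0 \text{ everywhere},
\]

since adding a gradient never changes the curl, and Poisson's equation for λ always has a solution for reasonable right-hand sides.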

Steve, I never for one moment took it that the A vector was solenoidal in the Coulomb gauge. You seem to think that it is. So what is the even more fundamental vector Z such that curl Z = A? And where does it all end? I always took it that it ended with A, and that curl A = B, E = -(partial)dA/dt, and that in the Coulomb gauge, div A = 0, except at the origin. George Smyth XI (talk) 03:28, 10 April 2008 (UTC)

I'm sorry to hear that you never understood what the Coulomb gauge is. In fact, with two minutes of google-searching, I did, in fact, find a paper that writes the Coulomb-gauge vector potential A as the curl of a different vector field. Here is the paper, but you may need an institutional subscription. --Steve (talk) 15:33, 10 April 2008 (UTC)

Steve, whoever the author of that paper is, he is going around in circles. He has lost the plot. He is trying to define A in terms of B when in fact B is already defined in terms of A. George Smyth XI (talk) 15:59, 10 April 2008 (UTC)

Then you should find the flaw in these derivations (remember, this paper and the one it's responding to both agree on this formula; you have to find a flaw in both, different, derivations), and submit the corrections to the European Journal of Physics.
It's a formula, not a definition. The presentation given by every classical E&M textbook is: If you have a given B, then Gauss's law for magnetism and gauge choice say that you can find many A's that will yield that B. After you add the requirement of the Coulomb gauge condition, it becomes a unique A that will yield that B. So it's hardly surprising, in this context, that there should exist a formula giving that A in terms of the B. And wouldn't you know it, A can be expressed as the curl of a vector field, just like you said was "most certainly" impossible. Is there anything on this earth that will make you seriously entertain the possibility that maybe the thousands of professional physicists understand classical electromagnetism and you need to learn more, as opposed to the other way around? --Steve (talk) 17:20, 10 April 2008 (UTC)

Steve, If B is the curl of A, then A is more fundamental than B. A cannot then be expressed as the curl of B because B might not even be curled. George Smyth XI (talk) 03:57, 11 April 2008 (UTC)

Sounds great, George. When the European Journal of Physics publishes your corrections to these articles, maybe then I'll take your point of view more seriously. --Steve (talk) 06:14, 11 April 2008 (UTC)

The Focus of the Article

It's time now to look at the coherence of the article as a whole.

We have a short introduction with a bit of colour. It includes a box stating what Maxwell's equations are.

We have a history section outlining the evolution and controversy surrounding the nomenclature.

We must not then lose sight of the main thing that both sets of Maxwell's equations are famous for. They are famous because of displacement current and how displacement current allows us to derive the electromagnetic wave equation in conjunction with either Faraday's law or the Lorentz force law.

Perhaps more could be written on the Maxwell-Ampère equation and a derivation of the EM wave equation supplied.

That is really what Maxwell's equations are all about and why people would be reading about Maxwell's equations.

Maybe some stuff could be moved to other pages in order to shorten the article. For example matters to do with B and H could be moved to the page on magnetic flux density. Matters to do with relativity could be moved to the relativity page. George Smyth XI (talk) 06:40, 9 April 2008 (UTC)
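Since the derivation keeps coming up, here is the two-line vacuum version for reference (SI units, no charges or currents): take the curl of the Maxwell-Faraday equation and substitute the Maxwell-Ampère equation,

\[
\nabla\times(\nabla\times\mathbf{E})
= -\frac{\partial}{\partial t}\,\nabla\times\mathbf{B}
= -\mu_0\varepsilon_0\,\frac{\partial^{2}\mathbf{E}}{\partial t^{2}},
\]

then use curl curl E = grad(div E) − ∇²E with div E = 0 to get

\[
\nabla^{2}\mathbf{E} = \mu_0\varepsilon_0\,\frac{\partial^{2}\mathbf{E}}{\partial t^{2}},
\qquad
c = \frac{1}{\sqrt{\mu_0\varepsilon_0}}.
\]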

Displacement current is one of the things that's interesting about Maxwell's equations, and perhaps the main historical one. But I think the main reason that modern physicists use and talk about "Maxwell's equations", as you can tell from textbooks, articles, etc., is that they offer a compact formulation of almost everything in classical electromagnetism. Certainly the history section can elaborate on the role of displacement current, but we already have a dedicated article on that subject, so I don't see much need to dwell too much on it outside of history.
My view is that the article could be quite compact and readable if only the things that are already better discussed in other articles are discussed only briefly here, using the "Main article:" link for its proper purpose. I'm thinking in particular of Section 4, which takes up a big proportion of the article, with hardly a speck of information that's not already discussed better in the articles on the four individual equations. I also agree with you that certain things in the special relativity section could be shortened or moved off-page; there'll be a great place to put them as soon as I finish drafting the appropriate article. --Steve (talk) 16:49, 9 April 2008 (UTC)

Steve, if you are going to take out section 4, at least leave the subsection on the Maxwell-Ampère equation. That subsection is crucial to the whole importance of Maxwell's equations. George Smyth XI (talk) 11:08, 11 April 2008 (UTC)

The Lorentz Force

Brews, I noticed that you consigned some historical information about Maxwell's role in the Lorentz force to the footnotes. This piece of information is a largely unknown curiosity.

I'm curious to know why you didn't want it to take a high profile. Many people would read that information and perhaps even go into denial. They might even argue against it. I once brought it to the attention of a professor. He denied it. I showed him the proof. He still denied it.

Is there something about that piece of information that makes people feel uncomfortable and that it should be swept away to places where people are less likely to look?

Does it upset certain physicists to learn that the Lorentz force is not exclusively a consequence of the Lorentz transformation?

I'm interested to know why you should have homed in on such a small detail, which is verifiably correct, and which makes interesting and novel reading, and consigned it to the stacks. That is the kind of information which makes people read more. I personally thought that it made good reading in the introduction.

Do you feel happier with the idea that readers will continue to associate the Lorentz force with Lorentz and not with Maxwell? George Smyth XI (talk) 08:36, 14 April 2008 (UTC)

Hi George: Some of this relegation to footnotes is an accident of the historical evolution of this article. My view is not to suppress any historical facts, but to keep the history confined to the "History" section. So, in my take on it, footnotes 1 and 2 about the Lorentz force can be brought back into the text in the historical section.
My point of view here is that readers with an historical bent are a subset of readers. This subset certainly can find the historical section. It is not advisable to proselytize the historical aspects for the entire readership, many or most of whom just want to get on with finding out "What are Maxwell's equations anyway?". Brews ohare (talk) 15:07, 16 April 2008 (UTC)

Brews, it's looking OK now. I take your point. But there are certain key facts which should be highlighted. I brought one of those facts back. It's ironical that the only equation which Maxwell was totally responsible for is the one which has to be added to Maxwell's equations to make them complete. George Smyth XI (talk) 02:16, 17 April 2008 (UTC)

"Limitations"?

User:Woodstone just added the parenthetical "(in non relativistic form and without dielectric or magnetic media)" before the version of Maxwell's equations stated in the intro. I think this should be removed. The equations are always true, even when there are relativistic velocities involved, and even when there are magnetic and dielectric media, provided the symbols are defined as in the table (so that Q includes both bound and free charge, for example). The only limitation is that it's classical not quantum, but this is already stated in the second word of the article. Accordingly, I'm going to undo that revision. Any objections? --Steve (talk) 19:35, 8 June 2008 (UTC)

It seems to me that if only mu0 and eps0 appear, the propagation of the waves would always have the velocity of light in vacuum. We all know that is not true in all media. So something is missing. (P.S. I did not add the remark on relativity.) −Woodstone (talk) 09:11, 9 June 2008 (UTC)
Ah, 85.145.113.151 added the remark on relativity. Sorry about that.
In a material, the electric and magnetic fields will alter the charge density and current density by creating bound charge and bound current. These are source terms in Maxwell's equations, which affect how E and B propagate. In a linear material, you can go through the math, and you will indeed find that the light propagates at sqrt(1/(mu epsilon)), not sqrt(1/(mu0 epsilon0)). See the section "Bound charge, and proof that formulations are equivalent".
Anyway, now that I think about it, having that table in the intro is technically correct, but I see how it could be misleading to readers, as it was to you. Would anyone object to my deleting the tables from the intro, putting Sections 2.1 and 2.2 above "History" right at the start of the article, and leaving the rest of Section 2 in place? --Steve (talk) 15:32, 9 June 2008 (UTC)
The table belongs in the intro because it shows the equations. Perhaps a notice should be added before or after the table to point the reader to an explanation, to avoid such misunderstandings.
And those equations are certainly not non-relativistic, because in the non-relativistic case there would be no magnetic field at all. --193.198.16.211 (talk) 20:49, 9 June 2008 (UTC)
I understand that it's good to have the equations be visible and prominent, but you'll notice that I proposed moving Sections 2.1 and 2.2 to above history, immediately below the intro. Do you think that having the equations immediately below the intro would be that much worse than having them in the intro? Are readers really going to be put off by having to scroll three more inches down before seeing the equations? Right now, we're presenting a mediocre, incomplete, unclear, unexplained set of equations in the intro. Instead, we could present the clear, complete, and thorough set of equations in the very first section after the table of contents. To me, that's a clearly better option. It's also the option which I think is more consistent with WP:LEAD, which emphasizes that the lead section should be the most "accessible" part of the article. (Few readers would find partial differential equations to be "accessible".) What's your take? :-) --Steve (talk) 22:25, 9 June 2008 (UTC)
I'm in favor of an easy-to-understand set of equations either in the intro or immediately below. I agree that the current intro equations need improvement in the dummies-are-us category. Daniel.Cardenas (talk) 23:18, 9 June 2008 (UTC)
I made the change. Now people have to scroll down a couple more paragraphs to get to the equations, but in exchange, the equations (1) Include the integral forms (2) Include the D and H forms (3) Make the unit system clear (4) Are right next to their table of definitions. Thoughts? --Steve (talk) 16:58, 11 June 2008 (UTC)

Much better this way. This is the way they are easier to understand. However, they are rather underdetermined this way. The simple relations like <math>\mathbf{D} = \varepsilon \mathbf{E}</math>, <math>\mathbf{B} = \mu \mathbf{H}</math> and <math>\mathbf{J} = \sigma \mathbf{E}</math> are missing. −Woodstone (talk) 18:01, 11 June 2008 (UTC)

This is (or should be) an article on Maxwell's equations, not on electromagnetism in general. Ohm's law isn't mentioned anywhere in the article, which I think is as it should be. (That said, there's already a section "Materials and dynamics" which seems to be a catch-all for tangentially-related electromagnetism stuff...Ohm's law could go there.) As for constitutive relations, they're covered in section 3. These also are not really part of Maxwell's equations, but they are part of "Maxwell's equations in linear materials" (Section 3.2.4), which certainly belongs in the article. Anyway, if you feel that the constitutive relations are buried too deep in the article, how would you (and other people) feel about me moving some or all of (the current) section 3 up above the history section? (I'm under the impression that some people here feel strongly about having history as near to the top as possible, but personally I'd prefer it to be later. I was trying to compromise by splitting what was section 2 into what is now sections 1 and 3.) Alternatively, the start of section 1 could be rephrased to make it clearer that constitutive relations are in Section 3. --Steve (talk) 18:55, 11 June 2008 (UTC)

Split in math box causes faulty layout

A single math formula aligns all its components correctly. By splitting it, effectively two separate parts are created that don't necessarily line up anymore. So, after the split, the nabla symbol ends up higher or lower on the line than the operand: simulated (and exaggerated), the ∇ floats above or below its ⋅E or ×B operand. −Woodstone (talk) 17:17, 10 June 2008 (UTC)

Is there another way of fixing this besides killing the link? I can investigate if no one knows. Thx, Daniel.Cardenas (talk) 18:11, 10 June 2008 (UTC)

Links do not work inside math expressions. There is no easy way. Perhaps it is not too confusing to link the whole equation: <math>\nabla \cdot \mathbf{B} = 0</math>. −Woodstone (talk) 18:34, 10 June 2008 (UTC)

There's another problem with such linking: most people won't notice the links!
Perhaps it would be better to add something like:
"where:
  • <math>\nabla \cdot</math> is the divergence operator,
  • <math>\nabla \times</math> is the curl operator,
  • ..."
after the table. --193.198.16.211 (talk) 08:52, 11 June 2008 (UTC)
...Or better yet, we could use the beautiful, comprehensive table of definitions which is already in the article. This is yet another reason to not have a half-hearted presentation of the equations in the introduction, but instead to move the good, thorough presentation to immediately below the introduction. I'm doing that now, see how y'all like it. :-) --Steve (talk) 16:52, 11 June 2008 (UTC)

Proposed rewrite of "The Heaviside versions in detail" section

The section "The Heaviside versions in detail" contains a lot of information, much of which is repeated elsewhere on the page, some of which has little to do with "Maxwell's equations" as a set of four equations (relating instead only to the individual laws), and 100% of which is repeated on the four pages Gauss' law, Gauss' law for magnetism, Faraday's law of induction, Ampere's circuital law. The only useful contribution that I see in the section is the qualitative descriptions of the laws. I propose renaming the section "Conceptual descriptions of the four equations", and rewriting it as follows:

This section will conceptually describe each of the four Maxwell's equations, and also how they link together to explain the origin of electromagnetic radiation such as light.

  • Gauss' law describes how electric charge can create and alter electric fields. In particular, electric fields tend to point away from positive charges, and towards negative charges. Gauss' law is the primary explanation of why opposite charges attract and like charges repel: the charges create certain electric fields, which other charges then respond to via an electric force.
  • Gauss' law for magnetism states that magnetism is unlike electricity in that there are no "north pole" and "south pole" particles that attract and repel the way positive and negative charges do. (Such a particle, if it existed, would be called a magnetic monopole.) Instead, north poles and south poles necessarily come as pairs. In particular, unlike electric field lines, which tend to point away from positive charges and towards negative charges, magnetic field lines always come in loops, for example pointing away from the north pole outside of a bar magnet but towards it inside the magnet.
  • Faraday's law of induction describes how a changing magnetic field can create an electric field. This is, for example, the operating principle behind many electric generators: mechanical force (such as the force of water falling through a hydroelectric dam) spins a huge magnet, and the changing magnetic field creates an electric field which drives electricity through the power grid.
  • Ampère's circuital law with Maxwell's correction states that magnetic fields can be created in two ways: by electric current (this was the original Ampère's law) and by changing electric fields (this was Maxwell's correction).
Maxwell's correction to Ampère's law was particularly important: with its inclusion, the laws state that a changing electric field can produce a magnetic field, and vice-versa. It follows that, even with no electric charges or currents present, it's possible to have stable, self-perpetuating waves of oscillating electric and magnetic fields, with each field driving the other. (These waves are called electromagnetic radiation.) The four Maxwell's equations describe these waves quantitatively, and moreover predict that the waves should have a particular, universal speed, which can be simply calculated in terms of two easily-measurable physical constants (called the electric constant and magnetic constant).
The speed calculated for electromagnetic radiation exactly matches the speed of light; indeed, light is one form of electromagnetic radiation (as are X-rays, radio waves, and others). In this way, Maxwell's equations unified the hitherto separate fields of electromagnetism and optics.
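(For editors who want the one-line justification of the "universal speed" claim, the standard vacuum sketch, using only the two curl equations and <math>\nabla \cdot \mathbf{E} = 0</math>, is

<math>\nabla \times (\nabla \times \mathbf{E}) = \nabla(\nabla \cdot \mathbf{E}) - \nabla^2 \mathbf{E} = -\frac{\partial}{\partial t}(\nabla \times \mathbf{B}) = -\mu_0 \varepsilon_0 \frac{\partial^2 \mathbf{E}}{\partial t^2},</math>

which reduces to the wave equation <math>\nabla^2 \mathbf{E} = \mu_0 \varepsilon_0 \, \partial^2 \mathbf{E}/\partial t^2</math>, with speed <math>c = 1/\sqrt{\mu_0 \varepsilon_0} \approx 2.998 \times 10^8</math> m/s.)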

Thoughts? :-) --Steve (talk) 06:45, 20 June 2008 (UTC)

Done. Again, any thoughts? --Steve (talk) 17:24, 23 June 2008 (UTC)

Maxwell and the Electromagnetic Wave Equation

Maxwell derived the electromagnetic wave equation in 1864 without using Faraday's law. He used equation (D) of his own original set of eight equations. Equation (D) is to all intents and purposes equivalent to what is nowadays referred to as the Lorentz force.

In deriving the electromagnetic wave equation, Maxwell dropped the <math>\mathbf{v} \times \mathbf{H}</math> term from equation (D) because he was referencing the wave equation to a fixed stationary point in his luminiferous medium. As such, the key term was the <math>\partial \mathbf{A}/\partial t</math> term.

If we take the curl of <math>\partial \mathbf{A}/\partial t</math> we obtain Faraday's law, and so the Lorentz force and Faraday's law are actually very closely related.
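(For reference, the curl computation being invoked, with the conventional sign <math>\mathbf{E} = -\partial \mathbf{A}/\partial t</math> for this contribution and <math>\mathbf{B} = \nabla \times \mathbf{A}</math>, is

<math>\nabla \times \mathbf{E} = \nabla \times \left( -\frac{\partial \mathbf{A}}{\partial t} \right) = -\frac{\partial}{\partial t}(\nabla \times \mathbf{A}) = -\frac{\partial \mathbf{B}}{\partial t},</math>

which is the Maxwell-Faraday form of Faraday's law.)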

However, the introduction was technically incorrect in stating that Maxwell used Maxwell's equations to derive the electromagnetic wave equation, because it has been agreed that this article is about the Heaviside versions of Maxwell's equations, which use Faraday's law as opposed to the "Lorentz force equivalent" equation (D) in Maxwell's original 1864 set.David Tombe (talk) 22:23, 17 July 2008 (UTC)

Maxwell did not predict the electric motor, which he called the single greatest achievement of his century; the motor was discovered accidentally. The electric motor depends on 1) Lorentz force, not 2) Faraday's law. It is overstating things to claim 1) and 2) are in the same class (i.e., very closely related). --Ancheta Wis (talk) 03:44, 18 July 2008 (UTC)
Having learned my lesson repeatedly, I have no intention of debating any physics with David. I don't agree with what he wrote here on the talk page, but as long as it stays on the talk page, it's fine with me. If you look at David's actual edit, he changed it to something which, as it stands, is completely uncontroversial. (Except that there ended up being no link to James Clerk Maxwell in the introduction, which I fixed.) Just my two cents. :-) --Steve (talk) 04:20, 18 July 2008 (UTC)

Steve, sorry about removing the link to Maxwell. That was an accidental consequence of the amendment. I see you have now fixed the matter in a satisfactory manner.

And yes, I deliberately kept my views on the link between the Lorentz force and Faraday's law to the talk pages, because Maxwell doesn't use electric charge in his original works. As such, this causes confusion as regards the modern definition of the electric field as force per unit charge on the one hand, and Maxwell's use of the term electromotive force on the other. I believe the two to be equivalent as regards Maxwell's original works, while also accepting that in modern textbooks the term EMF is now equated to voltage/potential energy.

At any rate, it is true that if we take the curl of E when E equals <math>-\partial \mathbf{A}/\partial t</math>, then we get what has been termed the Maxwell-Faraday equation on these pages (a term whose rationale I understand, and which I accept is helpful, though I am not totally happy about its accuracy).

When E equals <math>-\partial \mathbf{A}/\partial t</math>, we have one single aspect of Maxwell's equation (D) and we also have one single aspect of the modern Lorentz force. The curl relationship between this aspect of the Lorentz force and Faraday's law is totally uncontroversial. The controversial bit is the corresponding relationship between the convective aspect, <math>\mathbf{v} \times \mathbf{B}</math>, of the Lorentz force and the convective term in the full total-time-derivative version of Faraday's law. I would say, once again, that the latter is a straight curl relationship. David Tombe (talk) 10:53, 18 July 2008 (UTC)

Reply to Ancheta Wis

Ancheta, Faraday's law can be applied to electric motors. The total time derivative term can be split into a local and a convective term. The convective term can be shown to be the curl of <math>\mathbf{v} \times \mathbf{H}</math>. Maxwell showed how <math>\mathbf{v} \times \mathbf{H}</math> can be used to explain the force on a current-carrying wire in a magnetic field. Look up equation (5) in his 1861 paper and read the follow-up explanation. Maxwell considers <math>\mathbf{v} \times \mathbf{H}</math> to be centrifugal force in this scenario. He considers the force on a current-carrying wire in a magnetic field to be caused by centrifugal aether pressure coming from the equatorial plane of his solenoidally aligned molecular vortices.

I do agree with you, however, that this viewpoint is no longer mainstream. The convective aspect of Faraday's law is not taken to refer to the irrotational centrifugal force. The textbooks only consider the convective term to apply to electromagnetic induction, and they are silent regarding the physical cause. The curled version of the convective term, by analogy with Maxwell's method, will of course point to the Coriolis force. David Tombe (talk) 17:20, 18 July 2008 (UTC)

Ancheta, I've considered an alternative way to explain the above to you. The electric motor uses the <math>\mathbf{v} \times \mathbf{B}</math> principle. The curl of <math>\mathbf{v} \times \mathbf{B}</math> is <math>(\mathbf{v} \cdot \nabla)\mathbf{B}</math>.
<math>(\mathbf{v} \cdot \nabla)\mathbf{B}</math> is the convective component of the total time derivative <math>\mathrm{d}\mathbf{B}/\mathrm{d}t</math> term in Faraday's law.
Faraday's law and the Lorentz force have a curl/anti-curl relationship to each other.
However, I agree with you that in practice we don't use Faraday's law in connection with electric motors, since the Lorentz force is much more explicit and suitable for the purpose. David Tombe (talk) 13:20, 19 July 2008 (UTC)
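(For reference, the general vector identity behind these claims is

<math>\nabla \times (\mathbf{v} \times \mathbf{B}) = \mathbf{v}(\nabla \cdot \mathbf{B}) - \mathbf{B}(\nabla \cdot \mathbf{v}) + (\mathbf{B} \cdot \nabla)\mathbf{v} - (\mathbf{v} \cdot \nabla)\mathbf{B},</math>

which for a uniform velocity field and <math>\nabla \cdot \mathbf{B} = 0</math> reduces to <math>-(\mathbf{v} \cdot \nabla)\mathbf{B}</math>; note that the sign and the assumptions on v matter here.)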

Div and Curl

Daniel Cardenas, perhaps div and curl might be included in the word description column:

Name, differential form, integral form, description:
  • Gauss' law: <math>\nabla \cdot \mathbf{E} = \rho/\varepsilon_0</math>; <math>\oint_S \mathbf{E} \cdot \mathrm{d}\mathbf{A} = Q/\varepsilon_0</math>. Charges produce electric fields radially about themselves.
  • Gauss' law for magnetism: <math>\nabla \cdot \mathbf{B} = 0</math>; <math>\oint_S \mathbf{B} \cdot \mathrm{d}\mathbf{A} = 0</math>. Magnetic monopoles have not been observed.
  • Maxwell-Faraday equation (Faraday's law of induction): <math>\nabla \times \mathbf{E} = -\partial \mathbf{B}/\partial t</math>; <math>\oint_C \mathbf{E} \cdot \mathrm{d}\boldsymbol{\ell} = -\mathrm{d}\Phi_B/\mathrm{d}t</math>. A changing magnetic flux curls an electric field around it.
  • Ampère's circuital law (with Maxwell's correction): <math>\nabla \times \mathbf{B} = \mu_0 \mathbf{J} + \mu_0 \varepsilon_0 \, \partial \mathbf{E}/\partial t</math>; <math>\oint_C \mathbf{B} \cdot \mathrm{d}\boldsymbol{\ell} = \mu_0 I + \mu_0 \varepsilon_0 \, \mathrm{d}\Phi_E/\mathrm{d}t</math>. Electric current (or a changing electric field) curls a magnetic field around it.

--Ancheta Wis (talk) 18:57, 26 July 2008 (UTC)

I believe I incorporated your suggestions. Feel free to update if you'd like. Thanks! Daniel.Cardenas (talk) 19:33, 26 July 2008 (UTC)
I don't like the addition of these descriptions. For example, "Charges produce electric fields radially about themselves". What does that mean? Readers either won't understand the phrase "radially about themselves", or if they do, they'll assume it means "in a radial direction", which is not correct in general. I think saying nothing at all is better than saying an inadequate ten words that will confuse and frustrate readers.
INSTEAD, I propose moving the section "Conceptual descriptions of the four equations" to either before or immediately after the current first section. That way, readers could easily find an accessible and much better qualitative description of the equations and their interpretation and importance. Thoughts? --Steve (talk) 19:49, 26 July 2008 (UTC)

replace "contour"

At the bottom of table 3, there is a link to "contour", which is a disambiguation page. Perhaps there is a better term, or a more exact link? I come from the cartography end of things and am unqualified to make edits to this article.--Natcase (talk) 22:49, 1 September 2008 (UTC) Wow, quick fix. While you're at it, look at Flux, where I removed the link...--Natcase (talk) 22:59, 1 September 2008 (UTC)

Time derivative

A recent edit changed the symbol for a few time derivatives from partial (∂) to straight (d). That seems to make sense. Should we make that a global change? It matches well with Maxwell's "dot" notation. −Woodstone (talk) 11:36, 8 September 2008 (UTC)

As can be seen from prior conversations on this page, there are actually two very different things that people interpret as being time-dependent: the E and B fields on the one hand, and the S curve on the other. If the S curve may be changing in time, then the total derivative is not correct. For example, take the equation
<math>\oint_{\partial S} \mathbf{E} \cdot \mathrm{d}\boldsymbol{\ell} = -\frac{\mathrm{d}}{\mathrm{d}t} \iint_S \mathbf{B} \cdot \mathrm{d}\mathbf{A}.</math>
Take the situation where E is zero everywhere in space and time, and B is 1 tesla in the +z direction, everywhere in space and time. The left-hand side is always zero, since the integrand is identically zero. The right-hand side is not zero if the size and shape of the S curve is changing in time. Therefore, the equation is false. --Steve (talk) 15:33, 8 September 2008 (UTC)
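(A quick numeric check of this counterexample in Python, with hypothetical numbers: a circular loop whose radius grows linearly in time, in the uniform static field described above:)

    import math

    B = 1.0             # uniform static field, tesla, along +z
    r0, u = 0.5, 0.1    # hypothetical initial radius (m) and growth rate (m/s)

    def flux(t):
        # Flux of B through a flat circular loop of radius r0 + u*t.
        r = r0 + u * t
        return B * math.pi * r ** 2

    t, dt = 2.0, 1e-6
    lhs = 0.0  # line integral of E around the loop: E is identically zero
    rhs = -(flux(t + dt) - flux(t - dt)) / (2 * dt)  # -dPhi/dt, central difference

    print(lhs, rhs)  # 0.0 versus about -0.44, so the total-derivative form fails here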

Yes, but in the formulation <math>\nabla \times \mathbf{E} = -\partial \mathbf{B}/\partial t</math> this does not play a role.

Would it not be better to write <math>\nabla \times \mathbf{E} = -\mathrm{d}\mathbf{B}/\mathrm{d}t</math>? (Woodstone (talk) 17:17, 8 September 2008 (UTC))

Right, the differential forms are essentially unambiguous. There was one person, George Smyth XI aka David Tombe, who thought otherwise, arguing that a total time derivative entailed the electric field of a point that traveled along a path, E(x(t)). Most readers wouldn't come up with such a creative misinterpretation, so that's only a very slight argument in favor of the partial derivative. Another very slight argument in favor of the partial derivative is that if the differential form has a total derivative and the integral form has a partial derivative, readers may think that there's some important reason for this difference, when there isn't. Anyway, that leaves me more-or-less neutral between the partial and total derivatives, maybe with a slight lean towards the partial derivative. What's the advantage you see in the total derivative? --Steve (talk)
Doesn't this mean that the integral form should more properly be written as
<math>\oint_{\partial S} \mathbf{E} \cdot \mathrm{d}\boldsymbol{\ell} = -\iint_S \frac{\partial \mathbf{B}}{\partial t} \cdot \mathrm{d}\mathbf{A}</math>?
−Woodstone (talk) 20:08, 8 September 2008 (UTC)


B is a function of both time and space. Therefore, the derivative of B with respect to time should be a partial derivative, not a total derivative. First Harmonic (talk) 01:42, 9 September 2008 (UTC)
As for the magnetic flux, I still think it should be a total derivative, because I don't see how it depends on anything other than time. But that's just me. First Harmonic (talk) 01:42, 9 September 2008 (UTC)
It depends on time and on S, right? There are situations where it makes sense to have S vary in time (such as in one form of Faraday's law of induction), and in those cases, a total time derivative is very different from a partial time derivative. Do you agree? :-) --Steve (talk) 04:20, 9 September 2008 (UTC)
The magnetic flux, defined as an integral of the normal component of B over a time dependent surface, is still a function of a single variable time. For that reason, its derivative with respect to time is the ordinary derivative of a function of a single variable and I think that this should not be called a total derivative, although in this "single variable" case ordinary, partial and total derivative are the same thing. I think the concept of a total derivative should not be considered as a special kind of derivative but as an intermediate object in the ordinary or partial derivative of a composition of functions. In the case of the flux, the composition is a bit complicated. The time-dependent pull-back of the integrand to a given chart depends explicitly on time as well as through the composition of its spatial dependence with the time dependent surface parameterisation. But the composition can certainly be written out completely (perhaps most easily using exterior differential forms) associating a scalar value to the time coordinate. If you like, certain elements of the derivative of the resulting map can be recognised as a "total-derivative" but I think that it makes no sense to try to introduce a total derivative without defining the composition. Bas Michielsen (talk) 14:22, 9 September 2008 (UTC)
Ahh, OK, now I see where I went wrong. The flux depends implicitly on time through B, if B is time-dependent, and the flux depends implicitly on time through S, if S is time-dependent. Saying "partial derivative" doesn't specify one of these over the other. I guess the best we can do is to say in Table 3 that S is constant in time. And replacing the partial derivative symbol with a total derivative symbol is now making more sense to me. --Steve (talk) 18:18, 11 September 2008 (UTC)
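(For the record, the dependence being disentangled here is captured by the flux-transport form of the Leibniz rule: for <math>\nabla \cdot \mathbf{B} = 0</math> and a surface S(t) whose boundary moves with velocity v,

<math>\frac{\mathrm{d}}{\mathrm{d}t} \iint_{S(t)} \mathbf{B} \cdot \mathrm{d}\mathbf{A} = \iint_{S(t)} \frac{\partial \mathbf{B}}{\partial t} \cdot \mathrm{d}\mathbf{A} - \oint_{\partial S(t)} (\mathbf{v} \times \mathbf{B}) \cdot \mathrm{d}\boldsymbol{\ell},</math>

so the total derivative mixes the field's explicit time dependence with the motion of the surface.)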

Maxwell's equations with total charge and current and just E and B: General or vacuum-only?

The same recent edit replaced some E's with D's in Table 2, claiming that without that change it would be vacuum-only, not general. This isn't right; both of them were completely general as written. The key lies in the difference between free charge/current and total charge/current. (Their difference is the bound charge/current, which has all the dielectric/magnetic activity wrapped up inside it.) See this section for a proof that the two formulations, as given, were completely identical; therefore if one is always true, then both are always true, and if one is false outside of a vacuum, then both are false outside of a vacuum. --Steve (talk) 15:33, 8 September 2008 (UTC)
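(A sketch of the substitution in question, using the standard definitions <math>\mathbf{D} = \varepsilon_0 \mathbf{E} + \mathbf{P}</math>, <math>\mathbf{H} = \mathbf{B}/\mu_0 - \mathbf{M}</math>, <math>\rho_b = -\nabla \cdot \mathbf{P}</math> and <math>\mathbf{J}_b = \partial \mathbf{P}/\partial t + \nabla \times \mathbf{M}</math>:

<math>\nabla \cdot \mathbf{D} = \rho_f \;\Longrightarrow\; \nabla \cdot \mathbf{E} = \frac{\rho_f + \rho_b}{\varepsilon_0}, \qquad \nabla \times \mathbf{H} - \frac{\partial \mathbf{D}}{\partial t} = \mathbf{J}_f \;\Longrightarrow\; \nabla \times \mathbf{B} - \mu_0 \varepsilon_0 \frac{\partial \mathbf{E}}{\partial t} = \mu_0 (\mathbf{J}_f + \mathbf{J}_b),</math>

so the free-charge (D, H) form and the total-charge (E, B) form carry exactly the same content.)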

I love this page

This is a really good, easy to read and digest page on Maxwell's Equations - I really love it. —Preceding unsigned comment added by 202.7.183.131 (talk) 07:06, 30 September 2008 (UTC)