Maxwell's equations

From Wikipedia, the free encyclopedia
For thermodynamic relations, see Maxwell relations. For the history of the equations, see History of Maxwell's equations.
Maxwell's equations (mid-left) as featured on a monument in front of Warsaw University's Centre of New Technologies

Maxwell's equations are a set of partial differential equations that, together with the Lorentz force law, form the foundation of classical electrodynamics, classical optics, and electric circuits. These areas of physics are the basis for all electric, optical and radio technologies like power generation, electric motors, wireless communication, cameras, televisions, computers etc. Maxwell's equations describe how electric and magnetic fields are generated by charges, currents and changes of each other. One important consequence of the equations is that fluctuating electric and magnetic fields can propagate at the speed of light, and this electromagnetic radiation manifests itself in manifold ways from radio waves to light and X- or γ-rays. The equations are named after the physicist and mathematician James Clerk Maxwell, who published an early form of the equations between 1861 and 1862, and first proposed that light is an electromagnetic phenomenon.

The equations have two major variants. The "microscopic" set of Maxwell's equations uses total charge and total current, including the complicated charges and currents in materials at the atomic scale. The microscopic equations have universal applicability but may be infeasible to calculate with. The "macroscopic" set of Maxwell's equations defines two new auxiliary fields that describe the large-scale behaviour of matter without having to consider atomic scale details, but their use requires experimentally determining parameters that describe the electromagnetic response of materials phenomenologically.

The term "Maxwell's equations" is often used for equivalent alternative formulations. Versions of Maxwell's equations based on the electric and magnetic potentials are preferred for explicitly solving the equations as a boundary value problem, in analytical mechanics, or for use in quantum mechanics. The space-time formulations (i.e. on space-time rather than space and time separately) are commonly used in high energy and gravitational physics because they are manifestly compatible with special and general relativity.[note 1] In fact, historically, Einstein developed special and general relativity to accommodate the absolute speed of light that drops out of the Maxwell equations with the principle that only relative movement has physical consequences.

Since the mid-20th century, it has been understood that Maxwell's equations are not exact but are a classical field theory approximation to the more accurate and fundamental theory of quantum electrodynamics. In many situations, though, deviations from Maxwell's equations are immeasurably small. Exceptions include nonclassical light, photon-photon scattering, quantum optics, and many other phenomena related to photons or virtual photons.

Formulation in terms of electric and magnetic fields (microscopic or in vacuum version)

In the electric and magnetic field formulation there are four equations. Two of them describe how the fields vary in space due to sources, if any: electric fields emanating from electric charges in Gauss's law, and magnetic fields as closed field lines not due to magnetic monopoles in Gauss's law for magnetism. The other two describe how the fields "circulate" around their respective sources: the magnetic field "circulates" around electric currents and time-varying electric fields in Ampère's law with Maxwell's addition, while the electric field "circulates" around time-varying magnetic fields in Faraday's law. A separate law of nature, the Lorentz force law, describes how the electric and magnetic fields act on charged particles and currents. A version of this law was included in the original equations by Maxwell but, by convention, is no longer included.

The precise formulation of Maxwell's equations depends on the precise definition of the quantities involved. Conventions differ with the unit systems, because various definitions and dimensions are changed by absorbing dimensionful factors like the speed of light c. This makes constants come out differently. The most common form is based on conventions used when quantities are measured in SI units, but other commonly used conventions are used with other units including Gaussian units based on the cgs system,[1] Lorentz–Heaviside units (used mainly in particle physics), and Planck units (used in theoretical physics).

The vector calculus formulation below has become standard. It is mathematically much more convenient than Maxwell's original 20 equations and is due to Oliver Heaviside.[2][3] The differential and integral formulations are mathematically equivalent and are both useful. The integral formulation relates fields within a region of space to fields on the boundary and can often be used to simplify and directly calculate fields from symmetric distributions of charges and currents. On the other hand, the differential equations are purely local and are a more natural starting point for calculating the fields in more complicated (less symmetric) situations, for example using finite element analysis.[4]

Formulation in SI units convention

  • Gauss's law
    Integral equation: ∯∂Ω E · dS = (1/ε0) ∭Ω ρ dV
    Differential equation: ∇ · E = ρ/ε0
    Meaning: The electric flux leaving a volume is proportional to the charge inside.
  • Gauss's law for magnetism
    Integral equation: ∯∂Ω B · dS = 0
    Differential equation: ∇ · B = 0
    Meaning: There are no magnetic monopoles; the total magnetic flux through a closed surface is zero.
  • Maxwell–Faraday equation (Faraday's law of induction)
    Integral equation: ∮∂Σ E · dℓ = −(d/dt) ∬Σ B · dS
    Differential equation: ∇ × E = −∂B/∂t
    Meaning: The voltage induced in a closed circuit is proportional to the rate of change of the magnetic flux it encloses.
  • Ampère's circuital law (with Maxwell's addition)
    Integral equation: ∮∂Σ B · dℓ = μ0 (∬Σ J · dS + ε0 (d/dt) ∬Σ E · dS)
    Differential equation: ∇ × B = μ0 (J + ε0 ∂E/∂t)
    Meaning: Electric currents and changes in electric fields are proportional to the magnetic fields circulating about the areas where they accumulate.
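Gauss's law in integral form lends itself to a direct numerical check. The following sketch (assuming NumPy; the charge value, charge position, and grid resolution are illustrative choices) integrates the flux of a point charge's field through a sphere and recovers the enclosed charge divided by ε0:

```python
import numpy as np

EPS0 = 8.8541878128e-12  # vacuum permittivity, F/m

def point_charge_E(q, r_src, pts):
    """Electric field of a point charge q located at r_src, evaluated at pts (N, 3)."""
    d = pts - r_src
    r = np.linalg.norm(d, axis=1, keepdims=True)
    return q * d / (4 * np.pi * EPS0 * r**3)

def flux_through_sphere(E_func, R=1.0, n=400):
    """Midpoint-rule surface integral of E over a sphere of radius R centred at the origin."""
    theta = (np.arange(n) + 0.5) * np.pi / n        # polar angle, n cells over [0, pi]
    phi = (np.arange(2 * n) + 0.5) * np.pi / n      # azimuth, 2n cells over [0, 2*pi]
    T, P = np.meshgrid(theta, phi, indexing="ij")
    normal = np.stack([np.sin(T) * np.cos(P),
                       np.sin(T) * np.sin(P),
                       np.cos(T)], axis=-1).reshape(-1, 3)  # outward unit normals
    pts = R * normal
    dS = (R**2 * np.sin(T) * (np.pi / n) ** 2).reshape(-1)  # scalar area elements
    return np.sum(np.einsum("ij,ij->i", E_func(pts), normal) * dS)

q = 1e-9  # 1 nC, placed off-centre but inside the unit sphere
flux = flux_through_sphere(lambda p: point_charge_E(q, np.array([0.3, 0.1, -0.2]), p))
print(flux / (q / EPS0))  # ≈ 1: the flux equals the enclosed charge over eps0
```

The printed ratio is independent of where inside the sphere the charge sits, which is exactly the content of Gauss's law.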

Formulation in Gaussian units convention

Main article: Gaussian units

Gaussian units are a popular system of units that is part of the centimetre–gram–second system of units (cgs). When using cgs units it is conventional to use a slightly different definition of the electric field, Ecgs = c⁻¹ ESI. This implies that the modified electric and magnetic fields have the same units (in the SI convention this is not the case: e.g. [E]/[B] has the dimension of a velocity, making dimensional analysis of the equations different). The CGS system uses a unit of charge defined in such a way that the permittivity of the vacuum ε0 = 1/(4πc), hence μ0 = 4π/c. These units are sometimes preferred over SI units in the context of special relativity,[5]:vii since when using them, the components of the electromagnetic tensor, the Lorentz covariant object describing the electromagnetic field, have the same unit without constant factors. Using these different conventions, the Maxwell equations become:[6]

Equations in Gaussian units convention
  • Gauss's law
    Integral equation: ∯∂Ω E · dS = 4π ∭Ω ρ dV
    Differential equation: ∇ · E = 4πρ
    Meaning: The electric flux leaving a volume is proportional to the charge inside.
  • Gauss's law for magnetism
    Integral equation: ∯∂Ω B · dS = 0
    Differential equation: ∇ · B = 0
    Meaning: There are no magnetic monopoles; the total magnetic flux through a closed surface is zero.
  • Maxwell–Faraday equation (Faraday's law of induction)
    Integral equation: ∮∂Σ E · dℓ = −(1/c) (d/dt) ∬Σ B · dS
    Differential equation: ∇ × E = −(1/c) ∂B/∂t
    Meaning: The voltage induced in a closed circuit is proportional to the rate of change of the magnetic flux it encloses.
  • Ampère's law (with Maxwell's extension)
    Integral equation: ∮∂Σ B · dℓ = (1/c) (4π ∬Σ J · dS + (d/dt) ∬Σ E · dS)
    Differential equation: ∇ × B = (1/c) (4πJ + ∂E/∂t)
    Meaning: Electric currents and changes in electric fields are proportional to the magnetic fields circulating about the areas where they accumulate.

Key to the notation

Symbols in bold represent vector quantities, and symbols in italics represent scalar quantities, unless otherwise indicated.

The equations introduce the electric field, E, a vector field, and the magnetic field, B, a pseudovector field, each generally having a time and location dependence. The sources are the total electric charge density ρ (total charge per unit volume) and the total electric current density J (total current per unit area).

The universal constants appearing in the equations are the permittivity of free space ε0, the permeability of free space μ0, and the speed of light in vacuum c = 1/√(ε0μ0).

Differential equations

In the differential equations, the symbol ∇ denotes the three-dimensional gradient operator (del); ∇· is the divergence operator and ∇× the curl operator.

Integral equations

In the integral equations,

  • Ω is any fixed volume with closed boundary surface ∂Ω, and
  • Σ is any fixed surface with closed boundary curve ∂Σ,
  • ∯∂Ω is a surface integral over the boundary surface ∂Ω (the loop indicates that the boundary surface is closed),
  • ∭Ω is a volume integral over the volume Ω,
  • ∬Σ is a surface integral over the surface Σ,
  • ∮∂Σ is a line integral around the boundary curve ∂Σ (the loop indicates that the boundary curve is closed).
  • The volume integral over Ω of the total charge density ρ is the total electric charge Q contained in Ω:
    Q = ∭Ω ρ dV,
where dV is the volume element.
  • The net electric flux ΦE and the net magnetic flux ΦB are the surface integrals of the fields over Σ:
    ΦE = ∬Σ E · dS,  ΦB = ∬Σ B · dS,
where dS denotes the vector element of surface area S, normal to the surface Σ. (Vector area is also denoted by A rather than S, but this conflicts with the magnetic potential, a separate vector field.)

Here a fixed volume or surface means that it does not change over time. Since the surface is taken to be time-independent, we can bring the differentiation under the integral sign in Faraday's law:

    (d/dt) ∬Σ B · dS = ∬Σ ∂B/∂t · dS.

The "dynamics" or "time evolution of the fields" is due to the partial derivatives of the fields E and B with respect to time. The equations are correct, complete, and a little easier to interpret with time-independent surfaces, but Maxwell's equations can be formulated with possibly time-dependent surfaces and volumes by replacing the left-hand side with the right-hand side in the integral equations.

Relationship between differential and integral formulations

The equivalence of the differential and integral formulations is a consequence of the Gauss divergence theorem and the Kelvin–Stokes theorem.

Flux and divergence

Volume Ω and its closed boundary ∂Ω, containing (respectively enclosing) a source (+) and sink (−) of a vector field F. Here, F could be the E field with source electric charges, but not the B field which has no magnetic charges as shown. The outward unit normal is n.

The "sources of the fields" (i.e. their divergence) can be determined from the surface integrals of the fields through the closed surface ∂Ω. E.g. the electric flux is

    ∯∂Ω E · dS = ∭Ω (∇ · E) dV,

where the last equality uses the Gauss divergence theorem. Using the integral version of Gauss's law we can rewrite this to

    ∭Ω (∇ · E − ρ/ε0) dV = 0.

Since Ω can be chosen arbitrarily, e.g. as an arbitrarily small ball with arbitrary centre, it follows that the integrand must be zero, which is the differential-equations formulation of Gauss's law up to a trivial rearrangement. Gauss's law for magnetism in differential form follows likewise from the integral form by rewriting the magnetic flux

    ∯∂Ω B · dS = ∭Ω (∇ · B) dV = 0.

Circulation and curl

Surface Σ with closed boundary ∂Σ. F could be the E or B fields. Again, n is the unit normal. (The curl of a vector field does not literally look like the "circulation"; this is a heuristic depiction.)

The "circulation of the fields" (i.e. their curls) can be determined from the line integrals of the fields around the closed curve ∂Σ. E.g. for the magnetic field

    ∮∂Σ B · dℓ = ∬Σ (∇ × B) · dS,

where we used the Kelvin–Stokes theorem. Using the modified Ampère law in integral form and writing the time derivative of the flux as the surface integral of the partial time derivative of E, we conclude that

    ∬Σ (∇ × B − μ0 (J + ε0 ∂E/∂t)) · dS = 0.

Since Σ can be chosen arbitrarily, e.g. as an arbitrarily small, arbitrarily oriented, and arbitrarily centred disk, we conclude that the integrand must be zero. This is Ampère's modified law in differential form up to a trivial rearrangement. Likewise, Faraday's law in differential form follows from rewriting the integral form using the Kelvin–Stokes theorem.

The line integrals and curls are analogous to quantities in classical fluid dynamics: the circulation of a fluid is the line integral of the fluid's flow velocity field around a closed loop, and the vorticity of the fluid is the curl of the velocity field.
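The circulation integrals can likewise be checked numerically. This sketch (assuming NumPy; the wire current, loop shape, and number of segments are illustrative choices) evaluates the line integral of the magnetic field of an infinite straight wire around an off-centre closed loop and recovers Ampère's law, ∮ B · dℓ = μ0 I:

```python
import numpy as np

MU0 = 4e-7 * np.pi  # permeability of free space, H/m

def wire_B(I, pts):
    """B field of an infinite straight wire along the z-axis, at points (N, 3)."""
    x, y = pts[:, 0], pts[:, 1]
    rho2 = x**2 + y**2
    pref = MU0 * I / (2 * np.pi * rho2)   # B = mu0 I / (2 pi rho) in the phi direction
    return np.stack([-pref * y, pref * x, np.zeros_like(x)], axis=1)

# Closed loop: an off-centre ellipse in the z = 0 plane that encloses the wire.
t = np.linspace(0.0, 2 * np.pi, 20001)
loop = np.stack([0.3 + 2.0 * np.cos(t),
                 -0.1 + 1.2 * np.sin(t),
                 np.zeros_like(t)], axis=1)

I = 2.5  # amperes
mid = 0.5 * (loop[1:] + loop[:-1])   # midpoint rule along the curve
dl = loop[1:] - loop[:-1]
circulation = np.sum(np.einsum("ij,ij->i", wire_B(I, mid), dl))
print(circulation / (MU0 * I))  # ≈ 1: circulation equals mu0 times the enclosed current
```

Deforming the loop changes the result only through which currents it encloses, mirroring the Kelvin–Stokes argument above.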

Conceptual descriptions

Gauss's law

Gauss's law describes the relationship between a static electric field and the electric charges that cause it: the static electric field points away from positive charges and towards negative charges. In the field line description, electric field lines begin only at positive electric charges and end only at negative electric charges. 'Counting' the number of field lines passing through a closed surface therefore yields the total charge (including bound charge due to polarization of material) enclosed by that surface, divided by the permittivity of free space (the vacuum permittivity ε0). More technically, it relates the electric flux through any hypothetical closed "Gaussian surface" to the enclosed electric charge.

Gauss's law for magnetism: magnetic field lines never begin nor end but form loops or extend to infinity as shown here with the magnetic field due to a ring of current.

Gauss's law for magnetism

Gauss's law for magnetism states that there are no "magnetic charges" (also called magnetic monopoles), analogous to electric charges.[7] Instead, the magnetic field due to materials is generated by a configuration called a dipole. Magnetic dipoles are best represented as loops of current but resemble positive and negative 'magnetic charges', inseparably bound together, having no net 'magnetic charge'. In terms of field lines, this equation states that magnetic field lines neither begin nor end but make loops or extend to infinity and back. In other words, any magnetic field line that enters a given volume must somewhere exit that volume. Equivalent technical statements are that the sum total magnetic flux through any Gaussian surface is zero, or that the magnetic field is a solenoidal vector field.

Faraday's law

In a geomagnetic storm, a surge in the flux of charged particles temporarily alters Earth's magnetic field, which induces electric fields in Earth's atmosphere, thus causing surges in electrical power grids. Artist's rendition; sizes are not to scale.

The Maxwell–Faraday equation version of Faraday's law describes how a time-varying magnetic field creates ("induces") an electric field.[7] This dynamically induced electric field has closed field lines just as the magnetic field does, unless superposed by a static (charge-induced) electric field. This aspect of electromagnetic induction is the operating principle behind many electric generators: for example, a rotating bar magnet creates a changing magnetic field, which in turn generates an electric field in a nearby wire.

Ampère's law with Maxwell's addition

Magnetic core memory (1954) is an application of Ampère's law. Each core stores one bit of data.

Ampère's law with Maxwell's addition states that magnetic fields can be generated in two ways: by electric current (this was the original "Ampère's law") and by changing electric fields (this was "Maxwell's addition").

Maxwell's addition to Ampère's law is particularly important: it makes the set of equations mathematically consistent for non-static fields, without changing the laws of Ampère and Gauss for static fields.[8] As a consequence, it predicts that a changing magnetic field induces an electric field and vice versa.[7][9] Therefore, these equations allow self-sustaining "electromagnetic waves" to travel through empty space (see electromagnetic wave equation).

The speed calculated for electromagnetic waves, which could be predicted from experiments on charges and currents,[note 2] exactly matches the speed of light; indeed, light is one form of electromagnetic radiation (as are X-rays, radio waves, and others). Maxwell understood the connection between electromagnetic waves and light in 1861, thereby unifying the theories of electromagnetism and optics.

Charge conservation

The left-hand side of the modified Ampère's law has zero divergence by the div–curl identity. Therefore, expanding the divergence of the right-hand side, interchanging derivatives, and applying Gauss's law gives

    ∂ρ/∂t + ∇ · J = 0.

By the Gauss divergence theorem, this means that the electric charge in a volume is conserved: it can only change by flowing in or out through the boundary,

    (d/dt) ∭Ω ρ dV = −∯∂Ω J · dS.
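The div–curl identity underlying charge conservation, ∇ · (∇ × F) = 0, also survives discretization: finite-difference derivative operators along different axes commute, so the divergence of a numerically computed curl vanishes to round-off. A sketch (assuming NumPy; the sample field and grid size are arbitrary illustrative choices):

```python
import numpy as np

# Arbitrary smooth vector field sampled on a uniform grid.
n = 32
x = np.linspace(0.0, 1.0, n)
X, Y, Z = np.meshgrid(x, x, x, indexing="ij")
Fx = np.sin(2 * np.pi * Y) * Z**2
Fy = np.cos(2 * np.pi * Z) + X * Y
Fz = np.exp(-X) * np.sin(2 * np.pi * X * Y)
h = x[1] - x[0]

def grad(f):
    """Finite-difference partial derivatives [df/dx, df/dy, df/dz]."""
    return np.gradient(f, h, edge_order=2)

# curl F = (dFz/dy - dFy/dz, dFx/dz - dFz/dx, dFy/dx - dFx/dy)
dFx, dFy, dFz = grad(Fx), grad(Fy), grad(Fz)
Cx = dFz[1] - dFy[2]
Cy = dFx[2] - dFz[0]
Cz = dFy[0] - dFx[1]

# Divergence of the curl: zero up to floating-point round-off, because the
# discrete difference operators along different axes commute exactly.
div_curl = grad(Cx)[0] + grad(Cy)[1] + grad(Cz)[2]
print(np.max(np.abs(div_curl)))  # tiny compared to the field derivatives themselves
```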

Vacuum equations, electromagnetic waves and speed of light

This 3D diagram shows a plane linearly polarized wave propagating from left to right, with E = E0 sin(−ωt + k · r) and B = B0 sin(−ωt + k · r)

In a region with no charges (ρ = 0) and no currents (J = 0), such as in a vacuum, Maxwell's equations reduce to:

    ∇ · E = 0,  ∇ × E = −∂B/∂t,
    ∇ · B = 0,  ∇ × B = μ0 ε0 ∂E/∂t.

Taking the curl (∇×) of the curl equations, and using the curl of the curl identity ∇ × (∇ × X) = ∇(∇ · X) − ∇²X, we obtain the wave equations

    (1/c²) ∂²E/∂t² − ∇²E = 0,
    (1/c²) ∂²B/∂t² − ∇²B = 0,

which identify

    c = 1/√(μ0 ε0)

with the speed of light in free space. In materials with relative permittivity εr and relative permeability μr, the phase velocity of light becomes

    vp = 1/√(μ0 μr ε0 εr),

which is usually[note 3] less than c.
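These relations are easy to evaluate numerically. A sketch (assuming the CODATA values of ε0 and μ0; the relative permittivity of water at optical frequencies is an illustrative value):

```python
import math

EPS0 = 8.8541878128e-12  # vacuum permittivity, F/m (CODATA)
MU0 = 1.25663706212e-6   # vacuum permeability, H/m (CODATA)

# Speed of light in free space from the wave equation, c = 1/sqrt(mu0 * eps0).
c = 1.0 / math.sqrt(EPS0 * MU0)
print(round(c))  # 299792458 m/s

# Phase velocity in a linear medium, here water at optical frequencies
# (eps_r ≈ 1.77, mu_r ≈ 1 are illustrative values).
eps_r, mu_r = 1.77, 1.0
v = c / math.sqrt(eps_r * mu_r)
print(f"{v:.4e}")  # about 2.25e8 m/s, less than c
```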

In addition, E and B are mutually perpendicular to each other and to the direction of wave propagation, and are in phase with each other. A sinusoidal plane wave is one special solution of these equations. Maxwell's equations explain how these waves can physically propagate through space. The changing magnetic field creates a changing electric field through Faraday's law. In turn, that electric field creates a changing magnetic field through Maxwell's addition to Ampère's law. This perpetual cycle allows these waves, now known as electromagnetic radiation, to move through space at the speed c.

Macroscopic formulation

The microscopic variant of Maxwell's equations is the version given above. It expresses the electric field E and the magnetic field B in terms of the total charge and total current present, including the charges and currents at the atomic level. The "microscopic" form is sometimes called the "general" form of Maxwell's equations. The macroscopic variant is equally general, however, the difference being one of bookkeeping.

The "microscopic" variant is sometimes called "Maxwell's equations in a vacuum". This refers to the fact that the material medium is not built into the structure of the equation; it does not mean that space is empty of charge or current. They are also known as the "Maxwell-Lorentz equations". Lorentz tried to use these equations to predict the macroscopic properties of bulk matter from the physical behavior of its microscopic constituents.[10]:5

"Maxwell's macroscopic equations", also known as Maxwell's equations in matter, are more similar to those that Maxwell introduced himself.

  • Gauss's law
    Integral equation (SI): ∯∂Ω D · dS = Qf(Ω)
    Differential equation (SI): ∇ · D = ρf
    Differential equation (Gaussian): ∇ · D = 4πρf
  • Gauss's law for magnetism
    Integral equation (SI): ∯∂Ω B · dS = 0
    Differential equation (SI): ∇ · B = 0
    Differential equation (Gaussian): ∇ · B = 0
  • Maxwell–Faraday equation (Faraday's law of induction)
    Integral equation (SI): ∮∂Σ E · dℓ = −(d/dt) ∬Σ B · dS
    Differential equation (SI): ∇ × E = −∂B/∂t
    Differential equation (Gaussian): ∇ × E = −(1/c) ∂B/∂t
  • Ampère's circuital law (with Maxwell's addition)
    Integral equation (SI): ∮∂Σ H · dℓ = If(Σ) + (d/dt) ∬Σ D · dS
    Differential equation (SI): ∇ × H = Jf + ∂D/∂t
    Differential equation (Gaussian): ∇ × H = (1/c) (4πJf + ∂D/∂t)

Unlike the "microscopic" equations, the "macroscopic" equations separate out the bound charge Qb and bound current Ib to obtain equations that depend only on the free charges Qf and currents If. This factorization can be made by splitting the total electric charge and current as follows:

    Q = Qf + Qb,  I = If + Ib.

Correspondingly, the total current density J splits into free Jf and bound Jb components, and similarly the total charge density ρ splits into free ρf and bound ρb parts.

The cost of this factorization is that additional fields, the displacement field D and the magnetizing field H, are defined and need to be determined. Phenomenological constituent equations relate the additional fields to the electric field E and the magnetic B-field, often through a simple linear relation.

For a detailed description of the differences between the microscopic (total charge and current, including material contributions, in air/vacuum)[note 4] and macroscopic (free charge and current; practical to use on materials) variants of Maxwell's equations, see below.

Bound charge and current

Left: A schematic view of how an assembly of microscopic dipoles produces opposite surface charges as shown at top and bottom. Right: How an assembly of microscopic current loops add together to produce a macroscopically circulating current loop. Inside the boundaries, the individual contributions tend to cancel, but at the boundaries no cancelation occurs.

When an electric field is applied to a dielectric material its molecules respond by forming microscopic electric dipoles – their atomic nuclei move a tiny distance in the direction of the field, while their electrons move a tiny distance in the opposite direction. This produces a macroscopic bound charge in the material even though all of the charges involved are bound to individual molecules. For example, if every molecule responds the same, similar to that shown in the figure, these tiny movements of charge combine to produce a layer of positive bound charge on one side of the material and a layer of negative charge on the other side. The bound charge is most conveniently described in terms of the polarization P of the material, its dipole moment per unit volume. If P is uniform, a macroscopic separation of charge is produced only at the surfaces where P enters and leaves the material. For non-uniform P, a charge is also produced in the bulk.[11]

Somewhat similarly, in all materials the constituent atoms exhibit magnetic moments that are intrinsically linked to the angular momentum of the components of the atoms, most notably their electrons. The connection to angular momentum suggests the picture of an assembly of microscopic current loops. Outside the material, an assembly of such microscopic current loops is not different from a macroscopic current circulating around the material's surface, despite the fact that no individual charge is traveling a large distance. These bound currents can be described using the magnetization M.[12]

The very complicated and granular bound charges and bound currents, therefore, can be represented on the macroscopic scale in terms of P and M which average these charges and currents on a sufficiently large scale so as not to see the granularity of individual atoms, but also sufficiently small that they vary with location in the material. As such, Maxwell's macroscopic equations ignore many details on a fine scale that can be unimportant to understanding matters on a gross scale by calculating fields that are averaged over some suitable volume.

Auxiliary fields, polarization and magnetization

The definitions (not constitutive relations) of the auxiliary fields are:

    D = ε0 E + P,
    H = (1/μ0) B − M,

where P is the polarization field and M is the magnetization field, which are defined in terms of microscopic bound charges and bound currents respectively. The macroscopic bound charge density ρb and bound current density Jb in terms of polarization P and magnetization M are then defined as

    ρb = −∇ · P,
    Jb = ∇ × M + ∂P/∂t.

If we define the total, bound, and free charge and current density by

    ρ = ρb + ρf,  J = Jb + Jf,

and use the defining relations above to eliminate D, and H, the "macroscopic" Maxwell's equations reproduce the "microscopic" equations.

Constitutive relations

In order to apply 'Maxwell's macroscopic equations', it is necessary to specify the relations between displacement field D and the electric field E, as well as the magnetizing field H and the magnetic field B. Equivalently, we have to specify the dependence of the polarisation P (hence the bound charge) and the magnetisation M (hence the bound current) on the applied electric and magnetic field. The equations specifying this response are called constitutive relations. For real-world materials, the constitutive relations are rarely simple, except approximately, and usually determined by experiment. See the main article on constitutive relations for a fuller description.[13]:44–45

For materials without polarisation and magnetisation, the constitutive relations are (by definition)[5]:2

    D = ε0 E,  H = B/μ0,

where ε0 is the permittivity of free space and μ0 the permeability of free space. Since there is no bound charge, the total and the free charge and current are equal.

An alternative viewpoint on the microscopic equations is that they are the macroscopic equations together with the statement that vacuum behaves like a perfect linear "material" without additional polarisation and magnetisation. More generally, for linear materials the constitutive relations are[13]:44–45

    D = εE,  H = B/μ,

where ε is the permittivity and μ the permeability of the material. For the displacement field D the linear approximation is usually excellent, because for all but the most extreme electric fields or temperatures obtainable in the laboratory (high-power pulsed lasers) the interatomic electric fields of materials, of the order of 10¹¹ V/m, are much higher than the external field. For the magnetizing field H, however, the linear approximation can break down in common materials like iron, leading to phenomena like hysteresis. Even the linear case can have various complications, however.

  • For homogeneous materials, ε and μ are constant throughout the material, while for inhomogeneous materials they depend on location within the material (and perhaps time).[14]:463
  • For isotropic materials, ε and μ are scalars, while for anisotropic materials (e.g. due to crystal structure) they are tensors.[13]:421[14]:463
  • Materials are generally dispersive, so ε and μ depend on the frequency of any incident EM waves.[13]:625[14]:397

Even more generally, in the case of non-linear materials (see for example nonlinear optics), D and P are not necessarily proportional to E, similarly H or M is not necessarily proportional to B. In general D and H depend on both E and B, on location and time, and possibly other physical quantities.

In applications one also has to describe how the free currents and charge density behave in terms of E and B, possibly coupled to other physical quantities like pressure, and the mass, number density, and velocity of charge-carrying particles. E.g., the original equations given by Maxwell (see History of Maxwell's equations) included Ohm's law in the form

    J = σE.

Alternative formulations

Following is a summary of some of the numerous other ways to write the microscopic Maxwell's equations, showing they can be formulated using different points of view and mathematical formalisms that describe the same physics. Often, they are also called the Maxwell equations. The direct space–time formulations make manifest that the Maxwell equations are relativistically invariant (in fact studying the hidden symmetry of the vector calculus formulation was a major source of inspiration for relativity theory). In addition, the formulation using potentials was originally introduced as a convenient way to solve the equations but with all the observable physics contained in the fields. The potentials play a central role in quantum mechanics, however, and act quantum mechanically with observable consequences even when the fields vanish (Aharonov–Bohm effect). See the main articles for the details of each formulation. SI units are used throughout.

Vector calculus

  • Fields (3D Euclidean space + time)
    Homogeneous: ∇ · B = 0,  ∇ × E + ∂B/∂t = 0
    Inhomogeneous: ∇ · E = ρ/ε0,  ∇ × B − (1/c²) ∂E/∂t = μ0 J
  • Potentials, any gauge (3D Euclidean space + time)
    Homogeneous: B = ∇ × A,  E = −∇φ − ∂A/∂t (automatically satisfied)
    Inhomogeneous: −∇²φ − (∂/∂t)(∇ · A) = ρ/ε0,  ◻A + ∇(∇ · A + (1/c²) ∂φ/∂t) = μ0 J
  • Potentials, Lorenz gauge (3D Euclidean space + time)
    Gauge condition: ∇ · A = −(1/c²) ∂φ/∂t
    Inhomogeneous: ◻φ = ρ/ε0,  ◻A = μ0 J

Tensor calculus

  • Fields (Minkowski space)
    Homogeneous: ∂[αFβγ] = 0
    Inhomogeneous: ∂αFαβ = μ0 Jβ
  • Potentials, any gauge (Minkowski space)
    Homogeneous: Fαβ = ∂αAβ − ∂βAα (automatically satisfied)
    Inhomogeneous: ◻Aβ − ∂β(∂αAα) = μ0 Jβ
  • Potentials, Lorenz gauge (Minkowski space)
    Gauge condition: ∂αAα = 0
    Inhomogeneous: ◻Aα = μ0 Jα
  • Fields (any space–time)
    Homogeneous: ∇[αFβγ] = 0
    Inhomogeneous: ∇αFαβ = μ0 Jβ
  • Potentials, any gauge (any space–time, with topological restrictions)
    Homogeneous: Fαβ = ∇αAβ − ∇βAα = ∂αAβ − ∂βAα
    Inhomogeneous: ∇α(∇αAβ − ∇βAα) = μ0 Jβ
  • Potentials, Lorenz gauge (any space–time, with topological restrictions)
    Gauge condition: ∇αAα = 0
    Inhomogeneous: ◻Aβ − RβαAα = μ0 Jβ

Differential forms

  • Fields (any space–time)
    Homogeneous: dF = 0
    Inhomogeneous: d⋆F = μ0 J
  • Potentials, any gauge (any space–time, with topological restrictions)
    Homogeneous: F = dA (automatically satisfied)
    Inhomogeneous: d⋆dA = μ0 J
  • Potentials, Lorenz gauge (any space–time, with topological restrictions)
    Gauge condition: d⋆A = 0
    Inhomogeneous: ⋆◻A = μ0 J

where

  • In the vector formulation on Euclidean space + time, φ is the electric potential, A is the vector potential and ◻ = (1/c²) ∂²/∂t² − ∇² is the d'Alembert operator.
  • In the tensor calculus formulation, the electromagnetic tensor Fαβ is an antisymmetric covariant rank 2 tensor; the four-potential, Aα, is a covariant vector; the current, Jα, is a vector; the square brackets, [ ], denote antisymmetrization of indices; ∂α is the derivative with respect to the coordinate, xα. In Minkowski space coordinates are chosen with respect to an inertial frame; (xα) = (ct,x,y,z), so that the metric tensor used to raise and lower indices is ηαβ = diag(1,−1,−1,−1). The d'Alembert operator on Minkowski space is ◻ = ∂α∂α as in the vector formulation. In general spacetimes, the coordinate system xα is arbitrary, the covariant derivative is ∇α, the Ricci tensor is Rαβ, raising and lowering of indices are defined by the Lorentzian metric gαβ, and the d'Alembert operator is defined as ◻ = ∇α∇α. The topological restriction is that the second real cohomology group of the space vanishes (see the differential form formulation for an explanation). Note that this is violated for Minkowski space with a line removed, which can model a (flat) space-time with a point-like monopole on the complement of the line.
  • In the differential form formulation on arbitrary space times, F = (1/2) Fαβ dxα ∧ dxβ is the electromagnetic tensor considered as a 2-form, A = Aα dxα is the potential 1-form, J is the current 3-form, d is the exterior derivative, and ⋆ is the Hodge star on forms defined by the Lorentzian metric of space–time. Note that in the special case of 2-forms such as F, the Hodge star ⋆ only depends on the metric up to a local scale factor. This means that, as formulated, the differential form field equations are conformally invariant, but the Lorenz gauge condition breaks conformal invariance. The operator ◻ is the d'Alembert–Laplace–Beltrami operator on 1-forms on an arbitrary Lorentzian space–time. The topological condition is again that the second real cohomology group is trivial. By the isomorphism with the second de Rham cohomology this condition means that every closed 2-form is exact.

Other formalisms include the geometric algebra formulation and a matrix representation of Maxwell's equations. Historically, a quaternionic formulation[15][16] was used.

Solutions

Maxwell's equations are partial differential equations that relate the electric and magnetic fields to each other and to the electric charges and currents. Often, the charges and currents are themselves dependent on the electric and magnetic fields via the Lorentz force equation and the constitutive relations. These all form a set of coupled partial differential equations, which are often very difficult to solve. In fact, the solutions of these equations encompass all the diverse phenomena in the entire field of classical electromagnetism. A thorough discussion is far beyond the scope of the article, but some general notes follow.

Like any differential equation, boundary conditions[17][18][19] and initial conditions[20] are necessary for a unique solution. For example, even with no charges and no currents anywhere in spacetime, many solutions to Maxwell's equations are possible, not just the obvious solution E = B = 0. Another solution is E = constant, B = constant, while yet other solutions have electromagnetic waves filling spacetime. In some cases, Maxwell's equations are solved through infinite space, and boundary conditions are given as asymptotic limits at infinity.[21] In other cases, Maxwell's equations are solved in just a finite region of space, with appropriate boundary conditions on that region: For example, the boundary could be an artificial absorbing boundary representing the rest of the universe,[22][23] or periodic boundary conditions, or (as with a waveguide or cavity resonator) the boundary conditions may describe the walls that isolate a small region from the outside world.[24]

Jefimenko's equations (or the closely related Liénard–Wiechert potentials) are the explicit solution to Maxwell's equations for the electric and magnetic fields created by any given distribution of charges and currents. They assume specific initial conditions to obtain the so-called "retarded solution", where the only fields present are the ones created by the charges. Jefimenko's equations are not so helpful in situations when the charges and currents are themselves affected by the fields they create.

Numerical methods for differential equations can be used to approximately solve Maxwell's equations when an exact solution is impossible. These methods usually require a computer, and include the finite element method and finite-difference time-domain method.[17][19][25][26][27] For more details, see Computational electromagnetics.
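The finite-difference time-domain idea can be illustrated with a minimal sketch: E and H are sampled on staggered grids and leapfrogged in time, each field updated from the spatial differences of the other. This is only a one-dimensional vacuum toy in normalized units; the function name, grid sizes, and Gaussian source are illustrative choices, not taken from any particular library.

```python
import numpy as np

def fdtd_1d(n_cells=200, n_steps=100, source=100, courant=0.5):
    """Toy 1D FDTD (Yee-style) update in vacuum, normalized units."""
    Ez = np.zeros(n_cells)        # E sampled at integer grid points
    Hy = np.zeros(n_cells - 1)    # H sampled at the half-points between them
    for t in range(n_steps):
        # Discrete Faraday's law: update H from the spatial difference of E
        Hy += courant * (Ez[1:] - Ez[:-1])
        # Discrete Ampere's law: update E from the spatial difference of H
        Ez[1:-1] += courant * (Hy[1:] - Hy[:-1])
        # Soft Gaussian source: a smooth pulse injected at one grid point
        Ez[source] += np.exp(-((t - 30.0) / 10.0) ** 2)
    return Ez, Hy
```

The Courant factor below 1 keeps the explicit scheme stable; real FDTD codes add absorbing boundaries, material coefficients, and two or three spatial dimensions.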

Maxwell's equations seem overdetermined, in that they involve six unknowns (the three components of E and B) but eight equations (one for each of the two Gauss's laws, three vector components each for Faraday's and Ampere's laws). (The currents and charges are not unknowns, being freely specifiable subject to charge conservation.) This is related to a certain limited kind of redundancy in Maxwell's equations: it can be proven that any system satisfying Faraday's law and Ampere's law automatically also satisfies the two Gauss's laws, as long as the system's initial condition does.[28][29] This explanation was first introduced by Julius Adams Stratton in 1941.[30] Although it is possible to simply ignore the two Gauss's laws in a numerical algorithm (apart from the initial conditions), the imperfect precision of the calculations can lead to ever-increasing violations of those laws. By introducing dummy variables characterizing these violations, the four equations cease to be overdetermined, and the resulting formulation can lead to more accurate algorithms that take all four laws into account.[31]
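The redundancy follows from the vector identity ∇ ⋅ (∇ × F) = 0. Taking the divergence of Faraday's law, and of the Ampere–Maxwell law combined with charge conservation ∂ρ/∂t + ∇ ⋅ J = 0, gives

```latex
\frac{\partial}{\partial t}\left(\nabla\cdot\mathbf{B}\right) = 0, \qquad
\frac{\partial}{\partial t}\left(\nabla\cdot\mathbf{E} - \frac{\rho}{\varepsilon_0}\right) = 0,
```

so both Gauss-law constraints, once satisfied by the initial data, remain satisfied at all later times.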

Limitations of the Maxwell equations as a theory of electromagnetism

While Maxwell's equations (along with the rest of classical electromagnetism) are extraordinarily successful at explaining and predicting a variety of phenomena, they are not exact, but approximations. In some special situations, they can be noticeably inaccurate. Examples include extremely strong fields (see Euler–Heisenberg Lagrangian) and extremely short distances (see vacuum polarization). Moreover, various phenomena occur in the world even though Maxwell's equations predict them to be impossible, such as "nonclassical light" and quantum entanglement of electromagnetic fields (see quantum optics). Finally, any phenomenon involving individual photons, such as the photoelectric effect, Planck's law, the Duane–Hunt law, single-photon light detectors, etc., would be difficult or impossible to explain if Maxwell's equations were exactly true, as Maxwell's equations do not involve photons. For the most accurate predictions in all situations, Maxwell's equations have been superseded by quantum electrodynamics.

Variations

Popular variations on the Maxwell equations as a classical theory of electromagnetic fields are relatively scarce because the standard equations have stood the test of time remarkably well.

Magnetic monopoles

Main article: Magnetic monopole

Maxwell's equations posit that there is electric charge, but no magnetic charge (also called magnetic monopoles), in the universe. Indeed, magnetic charge has never been observed (despite extensive searches)[note 5] and may not exist. If magnetic monopoles did exist, both Gauss's law for magnetism and Faraday's law would need to be modified, and the resulting four equations would be fully symmetric under the interchange of electric and magnetic fields.[5]:273–275
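In one commonly used SI convention (the units assigned to magnetic charge vary between authors), with magnetic charge density ρ_m and magnetic current density J_m alongside their electric counterparts, the extended equations would read

```latex
\nabla\cdot\mathbf{E} = \frac{\rho_e}{\varepsilon_0}, \qquad
\nabla\cdot\mathbf{B} = \mu_0\,\rho_m, \qquad
\nabla\times\mathbf{E} = -\frac{\partial\mathbf{B}}{\partial t} - \mu_0\,\mathbf{J}_m, \qquad
\nabla\times\mathbf{B} = \mu_0\varepsilon_0\,\frac{\partial\mathbf{E}}{\partial t} + \mu_0\,\mathbf{J}_e,
```

which reduce to the standard set when ρ_m = J_m = 0.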

See also

Notes

  1. ^ Maxwell's equations in any form are compatible with relativity. These space-time formulations, though, make that compatibility more readily apparent by revealing that the electric and magnetic fields blend into a single tensor, and that their distinction depends on the movement of the observer and the corresponding observer dependent notion of time.
  2. ^ The quantity we would now call 1/√(ε0μ0), with units of velocity, was directly measured before Maxwell's equations, in an 1855 experiment by Wilhelm Eduard Weber and Rudolf Kohlrausch. They charged a Leyden jar (a kind of capacitor) and measured the electrostatic force associated with the potential; then they discharged it while measuring the magnetic force from the current in the discharge wire. Their result was 3.107×10^8 m/s, remarkably close to the speed of light. See The story of electrical and magnetic measurements: from 500 B.C. to the 1940s, by Joseph F. Keithley, p. 115
  3. ^ There are cases (anomalous dispersion) where the phase velocity can exceed c, but the "signal velocity" will still be < c
  4. ^ In some books—e.g., in U. Krey and A. Owen's Basic Theoretical Physics (Springer 2007)—the term effective charge is used instead of total charge, while free charge is simply called charge.
  5. ^ See magnetic monopole for a discussion of monopole searches. Recently, scientists have discovered that some types of condensed matter, including spin ice and topological insulators, display emergent behavior resembling magnetic monopoles. (See [1] and [2].) Although these were described in the popular press as the long-awaited discovery of magnetic monopoles, the two phenomena are only superficially related: a "true" magnetic monopole is one for which ∇ ⋅ B ≠ 0, whereas in these condensed-matter systems ∇ ⋅ B = 0 while only ∇ ⋅ H ≠ 0.

References

  1. ^ David J Griffiths (1999). Introduction to electrodynamics (Third ed.). Prentice Hall. pp. 559–562. ISBN 0-13-805326-X. 
  2. ^ Bruce J. Hunt (1991) The Maxwellians, chapter 5 and appendix, Cornell University Press
  3. ^ "IEEEGHN: Maxwell's Equations". Ieeeghn.org. Retrieved 2008-10-19. 
  4. ^ Šolín, Pavel (2006). Partial differential equations and the finite element method. John Wiley and Sons. p. 273. ISBN 0-471-72070-4. 
  5. ^ a b c J.D. Jackson. Classical Electrodynamics (3rd ed.). ISBN 0-471-43132-X. 
  6. ^ Littlejohn, Robert (Fall 2007). "Gaussian, SI and Other Systems of Units in Electromagnetic Theory" (PDF). Physics 221A, University of California, Berkeley lecture notes. Retrieved 2008-05-06. 
  7. ^ a b c Jackson, John. "Maxwell's equations". Science Video Glossary. Berkeley Lab. 
  8. ^ Classical Electrodynamics, by J.D. Jackson, section 6.3
  9. ^ Principles of physics: a calculus-based text, by R.A. Serway, J.W. Jewett, page 809.
  10. ^ Kimball Milton; J. Schwinger (18 June 2006). Electromagnetic Radiation: Variational Methods, Waveguides and Accelerators. Springer Science & Business Media. ISBN 978-3-540-29306-4. 
  11. ^ See David J. Griffiths (1999). "4.2.2". Introduction to Electrodynamics (third ed.). Prentice Hall.  for a good description of how P relates to the bound charge.
  12. ^ See David J. Griffiths (1999). "6.2.2". Introduction to Electrodynamics (third ed.). Prentice Hall.  for a good description of how M relates to the bound current.
  13. ^ a b c d Andrew Zangwill (2013). Modern Electrodynamics. Cambridge University Press. ISBN 978-0-521-89697-9. 
  14. ^ a b c Kittel, Charles (2005), Introduction to Solid State Physics (8th ed.), USA: John Wiley & Sons, Inc., ISBN 978-0-471-41526-8 
  15. ^ P.M. Jack (2003). "Physical Space as a Quaternion Structure I: Maxwell Equations. A Brief Note.". Toronto, Canada. arXiv:math-ph/0307038. 
  16. ^ A. Waser (2000). "On the Notation of Maxwell's Field Equations" (PDF). AW-Verlag. 
  17. ^ a b Peter Monk (2003). Finite Element Methods for Maxwell's Equations. Oxford UK: Oxford University Press. p. 1 ff. ISBN 0-19-850888-3. 
  18. ^ Thomas B. A. Senior & John Leonidas Volakis (1995-03-01). Approximate Boundary Conditions in Electromagnetics. London UK: Institution of Electrical Engineers. p. 261 ff. ISBN 0-85296-849-3. 
  19. ^ a b T Hagstrom (Björn Engquist & Gregory A. Kriegsmann, Eds.) (1997). Computational Wave Propagation. Berlin: Springer. p. 1 ff. ISBN 0-387-94874-0. 
  20. ^ Henning F. Harmuth & Malek G. M. Hussain (1994). Propagation of Electromagnetic Signals. Singapore: World Scientific. p. 17. ISBN 981-02-1689-0. 
  21. ^ David M Cook (2002). The Theory of the Electromagnetic Field. Mineola NY: Courier Dover Publications. p. 335 ff. ISBN 0-486-42567-3. 
  22. ^ Jean-Michel Lourtioz (2005-05-23). Photonic Crystals: Towards Nanoscale Photonic Devices. Berlin: Springer. p. 84. ISBN 3-540-24431-X. 
  23. ^ S. G. Johnson, Notes on Perfectly Matched Layers, online MIT course notes (Aug. 2007).
  24. ^ S. F. Mahmoud (1991). Electromagnetic Waveguides: Theory and Applications. London UK: Institution of Electrical Engineers. Chapter 2. ISBN 0-86341-232-7. 
  25. ^ John Leonidas Volakis, Arindam Chatterjee & Leo C. Kempel (1998). Finite element method for electromagnetics : antennas, microwave circuits, and scattering applications. New York: Wiley IEEE. p. 79 ff. ISBN 0-7803-3425-6. 
  26. ^ Bernard Friedman (1990). Principles and Techniques of Applied Mathematics. Mineola NY: Dover Publications. ISBN 0-486-66444-9. 
  27. ^ Taflove A & Hagness S C (2005). Computational Electrodynamics: The Finite-difference Time-domain Method. Boston MA: Artech House. Chapters 6 & 7. ISBN 1-58053-832-0. 
  28. ^ H Freistühler & G Warnecke (2001). Hyperbolic Problems: Theory, Numerics, Applications. p. 605. 
  29. ^ J Rosen. "Redundancy and superfluity for electromagnetic fields and potentials". American Journal of Physics. 48 (12): 1071. Bibcode:1980AmJPh..48.1071R. doi:10.1119/1.12289. 
  30. ^ J.A. Stratton (1941). Electromagnetic Theory. McGraw-Hill Book Company. pp. 1–6. 
  31. ^ B Jiang & J Wu & L.A. Povinelli (1996). "The Origin of Spurious Solutions in Computational Electromagnetics". Journal of Computational Physics. 125 (1): 104. Bibcode:1996JCoPh.125..104J. doi:10.1006/jcph.1996.0082. 
Further reading can be found in the list of textbooks in electromagnetism.

Historical publications

The developments before relativity:

External links

Modern treatments

Other