Evert Jan Post, 7933 Breen Ave, Westchester, CA 90045-3357
Summary: The manifold of space and time in which physical events evolve permits
a subdivision of laws dependent or independent of a universal metric in the form
of a metric tensor. Dimensional analysis and geometric transformation theory
shed light on this aspect if the mass unit is replaced by an action unit. One so
obtains a systematic separation between geometric and physical units of
reference. These criteria permit the delineation of a subdivision of metric-free
laws, especially a category of metric-free global relations culminating in a set
of 1-, 2- and 3-dimensional residue integrals, the residues of which can be
assessed as counting elementary flux, charge and action quanta. The charge
counter is simply the Ampère-Gauss law of Maxwell theory. The flux and action
counters have traditionally been viewed as asymptotic byproducts of
Schroedinger's equation. Since the Ampère-Gauss law is taken to have universal
macro-micro validity, it is now reasonable to extend similar basic exactness
also to the flux and action counters. The Schroedinger equation now emerges as a
derived entity, applicable to ensembles of phase and orientation randomized
identical systems. Schroedinger's own recipe for obtaining his wave equation
then graduates to the level of a derivation; thus establishing its position as a
tool for primeval ensembles while excluding single system applications.
This essay argues that the nonclassical conceptualization of quantum mechanics
has been precipitated by a rather silent yet faulty assumption underlying
Copenhagen views. For peace of mind, let it be known that this conceptual
deficiency can be corrected without unduly affecting standard tools invoked when
working with everyday physical or technical problems. So, no mathematical
equations need to be displayed, which gives adjacent disciplines a glimpse of
what is going on within physics. Yet the major hurdle to overcome is that
Copenhagen protagonists consider their nonclassical rationale to be a unique
reflection of reality, which cannot be affected by classical arguments.
The presentation is organized in a near toastmaster fashion. Those interested in
the main theme may just read the introduction and conclusion. The two central
sections sketch more esoteric arguments that have led to the strong suspicion
that the principal claim expressed in the title is solid. The bonus is an
opening of perspectives where none were before. All that changes is that
validity domains of disciplines need redefining: in some cases a shrinking of
applicability, in others an extension. The relation between the theories of
quanta and the theories of relativity benefits from this conceptual
reorganization. The so-called incompatibility of quantum theory and the theory
of general relativity now emerges as merely a consequence of comparing the wrong
parts of the subdivisions in theory.
The older quantum theory of Planck and Einstein, which later culminated in the
Bohr-Sommerfeld integral condition, has traditionally been regarded as an
approximation of what is now (mistakenly) believed to be the more exact
Schroedinger-Dirac process. The Brillouin-Kramers-Wentzel (BKW) methodology
capitalizes on this asymptotic closeness to obtain approximate solutions of the
Schroedinger equation.
During the early euphoria after the 1925 quantum revolution, the BKW rationale
was predicated on the silent assumption that the Bohr-Sommerfeld integral method
and the Schroedinger equation address one and the same physical
situation. A rather dramatic break with the past ensues from questioning this
unproven identity of purpose. The Copenhagen interpretation is a byproduct of
this identification. Its view of Schroedinger's equation as an instrument
describing single quantum systems is unsubstantiated. This silent assumption
unleashes the avalanche of nonclassical propositions, which are needed to
accommodate the consequences of an unproven assumption.
With the help of quite elementary arguments based on dimensional analysis and
transformation theory, this discussion aims at verifying the existence of a set
of nearly ideal single system quantum tools. Ironically it is, in many ways, an
already existing global superstructure of Maxwell theory. It consists of a set
of residue integrals counting flux, charge and action quanta. The
Bohr-Sommerfeld integral emerges as a special reduction of one of these residue
integrals. The wider metric independent invariance and applicability of these
residue integrals constitute a major aspect of this conceptual reorganization.
This global superstructure of Maxwell theory is perfectly compatible with
invariance requirements of the general theory of relativity and addresses itself
to single systems without invoking notions of statistics. So if we were to
believe Copenhagen claims we would now have two candidates for single system
description of which only one complies with the exigencies of the general theory
of relativity. In other words, we sort of backed up into a conclusion that
Schroedinger's equation can't be a single system tool. Its probability
connotation identifies it as a tool describing an ensemble of identical single
systems. Since this conclusion contradicts three quarters of a century of
Copenhagen gospel, a brief review of history is now in order.
The Copenhagen interpretation of probability, from the beginning, had been woven
around this silent single system thesis. In the contemporary textbook
literature the single system idea is injected as a near-foregone conclusion. At
the time nobody seemed to question this single system idea, except perhaps
Slater [1], who was one of the Copenhageners having second thoughts about this
matter. He felt that Schroedinger's methodology described situations resembling
statistical mechanics. Perhaps, at that time, unbeknownst to Slater, there was
work from 1912 by Planck [2] in which it was shown how a phase averaging of an
ensemble of harmonic oscillators requires a zero-point energy, needed to
maintain an ensemble state of random phase. Schroedinger [3], aware of this
identity of his own and Planck's zero-point energy, felt the coincidence
deserved further investigation. Had he followed up on his intention, he and
Copenhagen might have favored ensembles not single systems.
Jammer [4] extensively reviews how ensemble alternatives of wave mechanics were
vividly pursued in the Thirties and later. The names of Popper, Kemble,
Groenewold, Collins, Blokhintsev and Ballentine are mentioned in this context.
Yet, whether these ensemble protagonists were fully aware of Planck's
ensemble-based introduction of zero-point energy is not obvious, because the
Copenhagen-type nonclassical probability views also kept dominating the later
scene in ensemble views of the Schroedinger methodology. It shows that errors
are hard to erase once they are hidden and anchored in an accepted procedure.
The preliminary conclusion of this intermezzo reveals how the Schroedinger
equation and the Bohr-Sommerfeld integrals and companions are addressing
different physical situations that have an asymptotic physical relationship. The
more sensible choice is to have the Schroedinger equation as a tool applying to
phase- and orientation random ensembles of identical single systems, whereas the
Bohr-Sommerfeld integral is a very natural single system tool.
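To make the single system claim tangible, the Bohr-Sommerfeld condition can be checked numerically for the textbook harmonic oscillator; the sketch below assumes the standard form of the condition, the cyclic integral of p dq equal to nh, with illustrative values for mass and frequency.

```python
import math

h = 6.62607015e-34  # Planck constant, J s

def action_integral(E, m, omega, steps=100_000):
    """Numerically evaluate the cyclic integral of p dq for a harmonic
    oscillator of energy E, using q = A sin(theta) to avoid the
    square-root singularities at the turning points."""
    A = math.sqrt(2 * E / (m * omega**2))  # turning-point amplitude
    total = 0.0
    for i in range(steps):
        theta = -math.pi / 2 + (i + 0.5) * math.pi / steps
        p = math.sqrt(2 * m * E) * math.cos(theta)   # momentum at q(theta)
        dq = A * math.cos(theta) * math.pi / steps   # dq = A cos(theta) dtheta
        total += p * dq
    return 2 * total  # a full cycle traverses the range twice

m, nu = 9.1e-31, 1e14          # illustrative mass and frequency
omega = 2 * math.pi * nu
for n in (1, 2, 3):
    E = n * h * nu             # old-quantum-theory energy levels
    print(n, action_integral(E, m, omega) / h)  # -> close to n
```

The integral comes out as an exact multiple of h whenever E = nhv, which is the old-quantum-theory spectrum of the oscillator.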
This new role for the Bohr-Sommerfeld relation reestablishes a primary function
for it and its cyclic integral companions. By the same token, Schroedinger's equation
is taken off its Copenhagen single system pedestal. Against this background of
changes, the relation between quanta and relativity can be reexamined leading to
a successful reconciliation. For many years this adaptation had been viewed as a
hopeless undertaking. This lack of perspective was due to comparing the wrong
parts of theory. A faulty premise concerning the object of description of
Schroedinger's equation had been blocking the view.
Since this conclusion precipitates the apparent sacrifice of a traditional
feature of quantum theory, it can be expected to cause considerable opposition.
This opposition is even harder to overcome if we realize how three quarters of
a century of nonclassical conceptualization has conditioned Copenhagen
supporters to assume a
frame of mind that does not accept classical arguments as a valid means to
refute a Copenhagen edifice put together by nonclassical ingredients.
The predicament is a confrontation between two alien worlds, constructed as two
distinct and foreign logical systems. They don't have a common cross-section
permitting operations in their common realm. To safeguard its position, the
nonclassical world began undercutting the reality of the classical world.
Classical reality, it was said, is not what it claims it is. It merely is a
probability appearance, mistakenly accepted as reality.
This nonclassical game of usurping its sine qua non position for describing
contemporary physics can only be invalidated by disproving its claim of
uniqueness. Planck's counter example [2] serves that purpose. It proves that the
body of nonclassical conceptualization has been due to a single system's
physical inability to act as a universe of discourse for a classical statistics.
Physical Reference Systems and the Uniqueness of their Units
A primary requirement for quantitative descriptions of physics is a basic
agreement on measurable references. Amazingly, the foursome of length, time,
mass and electric charge [l, t, m, q] suffices. This sequence of symbols
illustrates the historic evolution of man's awareness of nature. Length is a
first concept in man's exploration of space; the duration of processes taking
place gives a concept of time; so the duo [l, t] gives us a means of getting
around in space and time. Newton specified the concept of mass [m] with its
aspects of inertia and gravity. Electric charge [q], which initially received an
irrational measure in terms of [l, t, m], was later given independent status.
Ever since Faraday established the laws of electrolytic deposit there had been
an overriding suspicion that nature provided a fundamental electric unit, known
as the elementary electric charge [e]. This knowledge had been further
substantiated by Millikan's famous oil drop experiment. Compared with the
irrational charge reference in terms of [ l, t, m ], an independent [q] has a
more fundamental connotation. The Coulomb as independent unit for [q] can be
defined as an exact multiple of the elementary charge [e ] .
There is no unique unit of mass. There are electron, proton and neutron mass
units, but their ratios don't seem to be rational fractions, thus defying the
existence of a unique universal measure of [m] similar to that of [q].
In the foursome [l, t, m, q] only the newcomer [q] has that special property of
universal uniqueness, which does not apply to the other three. This raises
questions whether nature provides other natural units with universality and
uniqueness comparable to [e]. A quantity that comes to mind is action,
for which Planck established the existence of a unique unit known as
Planck's unit of action [h]. Since action has dimension [h] = [m l^2 t^-1], it is
permissible to adopt a fundamental reference system in which the [m] in [l, t,
m, q] is replaced by [h]. This gives a new reference foursome distinguished by
two physical references sharing the unique property of being countable in terms of
unique natural units [h, e], combined with the two frame-related metric units
[l, t]. Let this new reference system, for which practical units can be adapted
in accordance with the MKS convention, be referred to as the
action-charge reference (AC) [l, t, h, e].
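The substitution of [h] for [m] is a purely mechanical change of dimensional basis. The bookkeeping sketch below (illustrative only; the helper names are hypothetical) rewrites a few familiar dimensions on the AC basis, using [m] = [h l^-2 t].

```python
# Dimensional bookkeeping sketch: a dimension is a tuple of exponents
# over the traditional base [l, t, m, q].
def dim(l=0, t=0, m=0, q=0):
    return (l, t, m, q)

# Since [h] = [m l^2 t^-1], mass can be eliminated: [m] = [h l^-2 t].
# to_AC rewrites any [l, t, m, q] dimension on the AC basis [l, t, h, e].
def to_AC(d):
    l, t, m, q = d
    return (l - 2 * m, t + m, m, q)  # exponents of (l, t, h, e)

print(to_AC(dim(m=1)))             # mass     -> (-2, 1, 1, 0): [h l^-2 t]
print(to_AC(dim(l=2, t=-2, m=1)))  # energy   -> (0, -1, 1, 0): [h t^-1]
print(to_AC(dim(l=1, t=-1, m=1)))  # momentum -> (-1, 0, 1, 0): [h l^-1]
```

Energy as [h t^-1] (E = hv) and momentum as [h l^-1] (p = h/λ) fall out of the basis change automatically, which is part of what makes the AC reference so natural for quantum counting.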
This action-charge reference has some conspicuous advantages over the
traditional [l, t, m, q] system. Unlike the old reference in which m retains
metric connotations, in the AC reference system the physical and geometric
references are now completely separated. This separation is underlined by what
is called the general relativistic invariance of the units [h] and [e] as
spacetime scalars. In the standard MKS system, q is such a scalar; m is not,
being the component of a four-vector.
The fundamental position of the AC reference is strongly emphasized by
experimentation on macro- and mesoscopic quantum systems. The Josephson effect
gives very accurate data for the quantum flux unit h/e. The quantum Hall effect
gives accurate data for the Hall impedance h/e^2. Together they give
measurements of fundamental constants approaching 9 to 10 decimal places.
Flux and Hall impedance are global system properties of a more primitive nature
than spectral data, which require more detailed knowledge of the global mechanisms
producing those spectra. Despite sophisticated corrections of a QED nature,
fundamental constants obtained from spectral data lack reproducibility
comparable to the Josephson-Hall effect data.
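Both quoted combinations can be evaluated directly from the SI values of h and e, which are exact since the 2019 redefinition; the identification of h/e as the flux unit follows the essay's usage.

```python
# Numeric check of the two quoted combinations, using the exact
# SI (2019 redefinition) values of h and e.
h = 6.62607015e-34       # J s (exact)
e = 1.602176634e-19      # C  (exact)

flux_unit = h / e        # the essay's flux unit, in webers
hall_impedance = h / e**2  # von Klitzing constant, in ohms

print(f"h/e   = {flux_unit:.6e} Wb")
print(f"h/e^2 = {hall_impedance:.3f} ohm")  # ~ 25812.807 ohm
```

The second value is the von Klitzing constant that quantum Hall measurements reproduce to the accuracy quoted above.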
The ratio h/e^2 though has a basic role in the study of spectral fine structure
phenomena. Spectral observations on distant galaxies confirm its apparent
constancy throughout the visible universe. This fact, all by itself, lends
strong global support for adopting an AC reference system that makes the
independent nature of the metric [l, t] basis and physical [h,e ] basis an
explicit feature of general theory.
The AC reference system has a critical role in establishing basic transformation
features of tensor fields in general theory, which in turn is helpful in
identifying physical relations that are independent of metric specifications.
If, at this point, such metric-independence sounds too esoteric, keep in mind
that a counting of identical natural physical [h, e] units should not depend on
choices of units of length or time [l, t].
Metric and Premetric Aspects of Spacetime
From the times of early Greek mathematics until the present there has always
been a vivid awareness that geometry is a discipline that can be independently
pursued from physical specifics. Mathematicians have always claimed geometry as
their domain and fortunately for geometry they have succeeded in doing so
without undue interference from physicists. However, the dimensional situation in
physics depicted in the previous section seems to claim one half of physics as
a matter of geometry. Hence when doing physics, geometry can be expected to
demand the attention it is due.
The Greeks gave us Euclidean geometry, and even today 99% of physics'
conceptualization is predicated on Euclidean premises. The theory of general
relativity postulates spacetime to be Riemannian, which gives it locally
Euclidean properties. So, not surprisingly, much Euclidean conceptualization has
carried over into the general theory.
Euclidean and Riemannian geometry are both metric geometries, which means there
is a metric tensor. Euclidean geometry permits frames of reference in which the
metric tensor is constant; Riemannian geometry does not! In fact, the general
theory relates gravity to exactly those intrinsic changes in the metric.
Geometry is here further encroaching on physical territory and so becomes a
joint responsibility of mathematics and physics.
Since in much of physics the metric can be taken as constant, mathematical
procedures common in physics traditionally choose frames of reference that make
the metric invisible. Even in the general theory the metric is still used to
recover some of that good old Euclidean simplicity. For instance, the metric
tensor is used to reduce the transformation manifestations of physical fields;
this is called the process of pulling tensor indices up and down. However, in view
of the gravitational implications of the metric, those operations now obscure
the physical nature of those tensors. All of which raises the question whether
tensorial physical fields have preferred intrinsic transformation features
unblemished by the obscuring metric operations of raising and lowering indices.
To answer questions whether or not physical statements are possible independent
of the metric structure in which physical events take place, it does not suffice
to make the metric invisible. Since the metric relates to gravity, the question
is whether there is a part of physics that remains unaffected by gravity. Some
people have indeed explored this territory.
As an aftermath of the general theory of relativity, in the early Twenties some
workers in the borderline field of physics and mathematics discovered a number
of physical relations within the realm of the general theory of relativity,
which could be rendered in a completely metric-independent manner. This means it
is possible to give these laws a mathematical formulation that merely calls on
the spacetime manifold properties, whereas any reference to the spacetime metric
tensor field for length and time references is allowed to completely drop out of
the picture. This feature persists under arbitrary (diffeo-4) changes of the
frame of reference.
The initial response by physicists to these rather mathematical inquiries was
one of puzzlement and mild disbelief. An attempt at a down-to-earth assessment
suggests that everything in physics invokes measurements of length and time.
What does "metric-free" mean; in fact, how could anything in physics be
metric-free?
The interest in these mathematical observations soon began to wane after the
first amazement had worn off. Now, at the end of this century, almost all
physicists are either unaware of these matters, or long ago they dismissed them
as peculiarities of mere mathematical concern.
Even if there was at the time a growing perceptiveness of topology-related
matters, the metric-free physical relations of the early Twenties were, with few
exceptions, all cast in the form of metric-independent differential statements
that have strictly local implications. The purely mathematical interest,
however, becomes a matter of relevant physical interest as soon as the local
interest is extended to a realm of global concern. That is where the topology
comes in. From a physical point of view that is most easily done by looking at
some of the familiar integral laws of physics.
Even quite introductory physics courses show that Gauss' law of
electrostatics really counts the number of net elementary charges residing
inside a closed surface. This statement is said to remain true even if the
enclosure is deformed, as long as no charges cross the enclosing surface. Since
Gauss' law statement really counts the net number of charges inside an
enclosure, this counting will also have to be independent of arbitrary changes
in the frame of reference and its metric specifications.
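That counting property lends itself to a direct numerical check. The sketch below (illustrative; the helper names are hypothetical) integrates the Coulomb flux of several point charges over a sphere and recovers the integer count of the enclosed charges, with charges outside the surface contributing nothing.

```python
import math

EPS0 = 8.8541878128e-12   # vacuum permittivity, F/m
E_CHG = 1.602176634e-19   # elementary charge, C

def charge_count(charges, R=1.0, n=200):
    """Integrate the electric flux of point charges through a sphere of
    radius R (midpoint rule in theta and phi) and divide by e/eps0.
    charges: list of (q, (x, y, z)) with q in units of e."""
    flux = 0.0
    for i in range(n):
        th = (i + 0.5) * math.pi / n
        for j in range(n):
            ph = (j + 0.5) * 2 * math.pi / n
            # outward unit normal on the sphere and the area element
            nx = math.sin(th) * math.cos(ph)
            ny = math.sin(th) * math.sin(ph)
            nz = math.cos(th)
            dA = R * R * math.sin(th) * (math.pi / n) * (2 * math.pi / n)
            Ex = Ey = Ez = 0.0
            for q, (x, y, z) in charges:
                dx, dy, dz = R * nx - x, R * ny - y, R * nz - z
                r3 = (dx * dx + dy * dy + dz * dz) ** 1.5
                c = q * E_CHG / (4 * math.pi * EPS0 * r3)
                Ex += c * dx; Ey += c * dy; Ez += c * dz
            flux += (Ex * nx + Ey * ny + Ez * nz) * dA
    return flux * EPS0 / E_CHG

# two charges inside the unit sphere, one outside:
charges = [(1, (0.3, 0.0, 0.0)), (1, (0.0, -0.2, 0.1)), (1, (2.0, 0.0, 0.0))]
print(round(charge_count(charges)))  # -> 2
```

Note that nothing in the result depends on the radius of the sphere, which is the numerical shadow of the metric-free character of the counting.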
The standard formulations of electromagnetic theory, even the most advanced
presentations amongst them, don't make the here cited frame- and
metric-independence a matter of obvious mathematical perspicacity. The standard
mathematical vehicle of vector analysis, commonly used in physics, is
exclusively metric-based. It makes the option of separating out metric-free
features either very cumbersome or practically impossible.
Even so, the reader may still ask: why all this unrelenting insistence on the
pursuit of metric-free options? To answer this question, be reminded that
absence of metric makes non-metric qualities stand out more clearly. Topological
structure is a prime example of a nonmetric quality. More important though:
metric-free law statements have validity in macro- and in micro-domains, because
the metric is the one and only reference of what is physically small or large.
Hence a pursuit of metric-free options opens the doors to topological
explorations in macro- and micro-domains. Last but not least, since the metric
is known as an exclusive agent of gravity action, metric-free relations have a
position in physics that holds independent of gravity.
When seen in this light, it seems worthwhile to repair shortcomings in
contemporary mathematical presentations of physics if they stand in the way of a
more discerning view of these matters. The explicit discovery of metric-free
statements of physical laws first emerged from the use of tensorial
descriptions, mostly because in some cases the metric-based process of covariant
differentiation would reduce to ordinary differentiation. Since tensor methods
are mostly local in nature, the topological implication of metric-free physics
points towards the global connotations of integral statements.
In answer to these topological needs, mathematics has singled out a technique
specifically meant for dealing with certain metric-free integration aspects of
tensor analysis. It has become known as the method of differential forms. It
lifts the metric-free global aspect out of the realm of appropriately general
invariant tensor methods.
Forms were first introduced by Cartan and later developed by de Rham for
purposes of topology. The integral theorems of Stokes and Gauss encountered in
vector analysis and their generalizations reemerge in differential forms as
metric-independent theorems. The so-called de Rham theorems relate to the
category of residue integrals of which Gauss' integral of electrostatics is an
early classical example. So, adopting differential forms we get reacquainted
with all sorts of items with which there was already an earlier partial
familiarity, yet they now appear in a new more enhanced context that permits a
more discerning assessment. The flux integral, perhaps first introduced as a
period integral by Fritz London, has more recently become known as the integral
of Aharonov-Bohm. There is a 3-dimensional action integral that is a product
integral of this 1-dimensional flux integral and a 2-dimensional Ampère-Gauss
integral.* If the latter yields a point charge residue, the 3-dimensional
integral reduces to a 1-dimensional period integral that has long been known as
the Bohr-Sommerfeld integral of the early quantum theory.
So, it stands to reason that a trend is emerging favoring methods of differential
forms as a future tool for purposes of physics. Yet, contemporary physics has
remained largely uninformed about the pre-metric discoveries of the early
Twenties. Hence no clear distinction emerges between metric-free and metric
dependent forms. As a result, forms in physics are introduced in a somewhat ad hoc
manner, not taking advantage of this choice opportunity to readdress the
physical issues associated with pre-metric physics: e.g., macro- as well as
micro-topological structure invoking the invariants of action h and charge e.
These options have been either ignored or denied for so long, because a
continued use of the traditional dimensional reference system [l, t, m, q]
detracts from a topologically more discerning view of physical structure. The AC
revised dimensional reference system [l, t, h, e], by contrast, serves as a
vivid reminder of how to identify and home in on countable physical quantities.
To create meaningful order in a realm where too many distinct quantities have
traditionally been lumped together in indistinguishable fashion, one needs all
the reminders one can get to sort things out.
In retrospect this little reprogramming exercise of contemporary physics leaves
all of its major tools from Maxwell to Schroedinger-Dirac fully intact. It is
the very reason why this plea for conceptual change could be made without
writing down a single mathematical equation. The major casualty in this
rearrangement of basics is the conceptual picture of the Copenhagen
interpretation and its contingency of nonclassical propositions.
Accepting Born's probability identification of Schroedinger's ψ function, it was
Copenhagen's single system premise that left Born's statistics without a
universe of discourse as a suitable physical home. That is how the idea of a
nonclassical statistics was born. A nonclassical statistics really is a
statistics that lacks a universe of discourse.
Nonclassical protagonists have even gone so far as to claim that classical
equivalents of nonclassical entities don't exist. Asking such questions, they say,
reveals an incapability of understanding modern physics. Such dictates, however,
resemble a familiar policy of people who don't like to be contradicted. Here,
the position is taken that classical counterparts of presumably nonclassical
entities do exist. In 1912 Max Planck [2] gave an example that in retrospect
contradicts the nonclassical statistical propositions that were to appear later
in the Thirties.
Seen in this light, a proposition can be made supporting a Gibbs-type ensemble
of conceivable single system manifestations of one and the same single system.
Yet, this proposition would endow every harmonic oscillator with a zero-point
energy, which leads to the notorious QED infinities of the vacuum.
Knowing that the Gibbs ensemble was meant as an abstract substitute for an
actual ensemble, such vacuum infinities can be avoided by restricting the
Schroedinger-Dirac equations to real ensembles of identical systems that are
taken to be random in phase and orientation. Most spectroscopic samples meet
that condition, so no spectroscopist should be unhappy about this restriction.
Since this approach to a conceptual reorganization of contemporary physics
starts restricting Schroedinger applicability to ensembles, the upshot of doing
so leaves single systems without an appropriate tool of analysis. The other path
to reorganization, therefore, would have to start out at the opposite end: i.e.,
the single system. It becomes a matter of exploring whether or not the old
Bohr-Sommerfeld condition (by its very nature a single system tool) has brothers
and sisters. This search led to the Aharonov-Bohm and Ampère-Gauss integrals,
which could be reunited as siblings that had been separated at birth. Their
typical single system connotations enhance insight into flux, charge and Hall
impedance quantization, as well as some QED phenomena.
The methodology of establishing the special features of universal single system
tools is based on dimensional analysis and the general spacetime theory of
transformation. Metric-independence holds a key role in macro- and
micro-applicability, thus finally establishing a sound physical objective for a
number of curious and puzzling investigations of the Twenties and the Thirties.
It is this existence of a set of useful single system tools that poses a major
problem for a continued support of Copenhagen's single system premise and its
aftermath of nonclassical conceptualization. In no way can the single system
premise for the Schroedinger equation compete with the ensemble proposition.
This point of view had already been reached by some people in the Thirties. The
reason why it did not get off the ground at that time was due to the thoroughly
obscuring influence of a presumed nonclassical statistics.
While the counter-example statistics deal with mutual phase and orientation of
the systems in an ensemble, the so-called nonclassical implications are seen as
related to the ψ function of the Schroedinger equation. Mutual phase and
orientation are Hamilton-Jacobi parameters, which through the action function S
relate to the ψ function through Schroedinger's exponential transformation
interrelating ψ and S. Explicit proof shows how phase and orientation averaging
of Bohr-Sommerfeld results yields typical Schroedinger results.
All of this hints at a reality that makes the Schroedinger equation a derived
secondary law of nature with the practical consequence of restricting its
applicability to ensembles. By the same token, the Aharonov-Bohm, the
Ampère-Gauss and the Bohr-Sommerfeld integrals are elevated to primary quantum
laws with a greatly enhanced realm of applicability in macro- and micro-domains.
From these primary laws the Schroedinger equation can be derived. In fact, the
Schroedinger recipe for obtaining his wave equation automatically becomes a
derivation, once the switch between primary and secondary law status is made.
Summarizing these considerations, we are here confronted with two different
approaches assessing the Copenhagen situation. One calls on earlier work by
Planck [2] on the subject of zero-point energy, indicating that the Schroedinger
equation is an ensemble tool. This relegates the wave equation to a position of
a secondary (derived) quantum law. The other approach reassesses the status of a
number of residue integrals, thus establishing their position as primary laws of
metric-independent quanta counters. This feature gives them the natural
character of single system tools, thus disqualifying the Schroedinger equation
from holding that same position. Both approaches home in on a common conclusion:
Copenhagen's single system premise is an inadmissible proposition.
The here given conclusion is the result of a closely knit set of tightly
interlocking arguments. Even if the standard Copenhagen rationale cannot match
the consistency of the alternative, it is no proof that the alternative is
necessarily correct. All that can be said is that its probability of being wrong
is smaller than the probability of Copenhagen being wrong. Since experience
cautions against unduly extrapolating newly obtained results, let us say: the
alternative has an appearance of being less wrong than the Copenhagen choice.
Those who don't want the esoterics of dimensional analysis and diffeo-4
transformation theory may just accept the principal conclusion: use
the integrals to cover the global nature of single systems, and make sure
Schroedinger is used in situations that can at least accommodate some
statistics. The chances are those guidelines will enhance the relevance of what
is done. If that experience turns out to be positive, it may arouse an
interest in some of the underlying esoterics, which are merely a way of
making ourselves more aware and more discerning of these aspects. For further
scrutiny of the subject, there is more published material [5, 6] on theory and
application. Yet until now, the establishment press has been reluctant to
comment, review or acknowledge the here presented alternatives.
In the long run it would be hard for physics to totally refrain from taking
position in the here presented observations. It goes without saying though that
a demise of Copenhagen's interpretation is a painful experience for everybody
brought up with its ideas. While many instructors present quantum mechanics as
gospel truth, some instructors may already present the subject with a warning
that it might be a preliminary structure. So, all these years, there has been a
small measure of expectations about a future bringing changes in the basic
formulations. However, the idea that Copenhagen teachings could be 180 degrees
out of phase with the here suggested classic order may well come as a total
surprise, even a shock. All that can be said at this moment is that a
rather compelling logic is pointing in that classic direction. The idea that
physics had its priorities the wrong way around for three quarters of a century seems
outrageous. Yet if true, it deals a devastating blow to those nonclassical
procedures that were called upon in the late Twenties and early Thirties.
Sooner or later physics will have to take position with respect to the here
cited alternative to a nonclassical tradition of so many years. This
interpretation alternative is either wrong and of no consequence, or it forces
physics to confront a reality it attempts to ignore by taking liberty with
[1] John Slater, A Scientific Biography (NY: Wiley, 1965), Ch. 2.
[2] Max Planck, Theory of Heat Radiation (Dover, 1959; German ed. 1912), p. 141.
[3] E. Schroedinger, Ann. der Physik 81, 112-113 (1926).
[4] Max Jammer, The Philosophy of Quantum Mechanics (NY: Wiley, 1974).
[5] E. J. Post, Formal Structure of Electromagnetics (Amsterdam, 1962; NY: Dover, 1997).
[6] E. J. Post, Quantum Reprogramming (Kluwer Academic Publishers, Dordrecht/Boston, 1995).
* This integral, of course, includes the Maxwell displacement term, so it could
be denoted as the AGM integral. Similarly, recalling London's first suggestion
in the Thirties of flux quantization, the Aharonov-Bohm integral might be
denoted as the LAB integral.