Stuart A. Kauffman
Investigations
Chapter 10: A Coconstructing Cosmos?
Oxford University Press, 2000, 243-266
Contents
The Complexity of the Universe
Quantum Mechanics and Classicity
The Problem of Time in General Relativity
The Emergence of a Large-Scale Classical Limit?
Self-Selection of the Laws and Constants of Nature?
A Brief Comment on String Theory
A Decoherence Spin Network Approach to Quantum Gravity
FROM BIOSPHERES to the cosmos? Yes, because they may share general themes. The major enquiry of Investigations has
concerned autonomous agents and their coconstruction of biospheres and
econospheres whose configuration spaces cannot be finitely prestated. These themes find echoes in thinking about the
cosmos as a whole. But abundant caution:
I am not a physicist, the problems are profound, and we risk nonsense.
Whatever the risk, two facts are true. First, since the big bang our universe has
become enormously complex. Second, we do
not have a theory for why the universe is complex. Equally unarguably, the biosphere has
increased in molecular diversity over the past four billion years, just as the
standing diversity of species has increased. And equally unarguably, the econosphere has
become more complex over the past few million years of hominid evolution. We know this with confidence. If we lack a theory, it is not because the
staggering facts of increasing diversity and complexity that stare us in the
face do not deserve a theory to account for them.
But we have seen hints of such a theory in the coconstruction of biospheres and econospheres by the self-consistent search of autonomous agents for ways to make a living, the resulting exaptation of novel ways of making a living, the fact that new adjacent niches for yet further new species grow in diversity faster than the species whose generation creates those new adjacent possible niches, and the search mechanisms to master those modes of being. We have seen a glimmer of something like
a fourth law, a tendency for self-constructing biospheres
to enlarge their workspace, the dimensionality of their adjacent possible,
perhaps as fast, on average, as is possible - glimmers only, not yet
well-founded theory nor well-established fact. But glimmers often precede later science.
Consider again how the chemical
diversity of the biosphere has increased over the past four billion
years, urged into its adjacent possible by the genuine chemical potential from
the chemical actual into the adjacent possible, where the actual substrates
exist and the adjacent possible products do not yet exist. Each time the molecular diversity of the
biosphere expands, the set of adjacent possible reactions expands even faster. Recall our simple calculation that for
modestly complex organic molecules, any pair of molecules could undergo at
least one two-substrate, two-product reaction. But then the diversity of possible reactions
scales roughly as the square of the diversity of chemicals in the system. As the diversity of molecular species
increases, there are always proportionally more novel reactions into the adjacent
possible. If we take the formation of a
chemical species that has never existed in the biosphere, or perhaps the
universe, as a breaking symmetry, then the more such symmetries are broken, the
more ways come into existence by which yet further symmetries may be broken.
And the chemical case makes clear the
linking of the flows of matter and energy in this sprawling chemical diversity
explosion. Many such reactions will link
exergonic and endergonic processes. As
this occurs, energy is pumped from the exergonic partner into the products
requiring endergonic synthesis. These
products - the chemical diversity in the bark of a redwood tree, for example - take
their place in the chemical actual, poising the biosphere, and thus the
universe, for its next plunge into the chemical adjacent possible.
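To make the scaling recalled above concrete, here is a minimal sketch of my own (not part of the original argument), assuming only that every unordered pair of molecular species can undergo at least one two-substrate, two-product reaction:

```python
# Toy count of the chemical adjacent possible: if every unordered pair of
# molecular species can undergo at least one two-substrate, two-product
# reaction, the number of candidate reactions grows roughly as the square of
# the number of species, so reactions outpace species as diversity increases.

def candidate_reactions(n_species: int) -> int:
    """Lower bound: one reaction per unordered pair of species."""
    return n_species * (n_species - 1) // 2

for n in (10, 100, 1_000, 10_000):
    print(f"{n:>6} species -> at least {candidate_reactions(n):>12,} candidate reactions")
```

With 100 species there are already nearly 5,000 candidate pair reactions; with 10,000 species, nearly 50 million.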
Consider again equilibrium statistical
mechanics. At its core, statistical mechanics
relies on the same kind of statistical argument as does the flipping of a fair
coin 10,000 times. We all understand
that distributions of roughly 5,000 heads and 5,000 tails are far more
probable macrostates than distributions with all heads or all tails. Now consider the general argument I have made
that as molecular diversity increases, the diversity of reactions increases
even faster, and that there is a genuine chemical potential from the actual
into the adjacent possible. And consider
again the general argument made just above that the greater the diversity of
molecular species and reactions, the more likely the coupling of exergonic and
endergonic reaction pairs driving the endergonic synthesis of new adjacent
possible molecules that poise the system to advance again into the next
adjacent possible. While the detailed
statistical form of these chemical reaction graphs is not yet known, they too
smell of “law.” As in the case of fair
coin flips and equilibrium statistical mechanics, it is as if here again the
mathematical structure compels the consequent behavior of matter and energy. In the case of the nonergodic and non-
equilibrium
chemical flux into the adjacent possible, the universe is busy diversifying
itself into myriad complexity.
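As a side calculation of my own, the coin-flip comparison made at the start of this paragraph can be put in numbers: among the 2^10,000 equally likely sequences of flips, the macrostate with exactly 5,000 heads contains astronomically more microstates than the all-heads macrostate, which contains exactly one.

```python
# The fair-coin macrostate argument in numbers: count the sequences
# (microstates) belonging to the "exactly half heads" macrostate versus the
# single sequence belonging to the "all heads" macrostate.
from math import comb

N = 10_000
half_heads = comb(N, N // 2)   # number of 10,000-flip sequences with 5,000 heads

print(f"sequences with {N // 2:,} heads: roughly 10^{len(str(half_heads)) - 1}")
print("sequences with all heads:    1")
```

The half-heads macrostate holds roughly 10^3008 microstates; that lopsidedness is the entire statistical argument.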
The universe is enormously complex, and we don’t really yet know why. May there be new ways of thinking of the
cosmos itself? If a mere glimmer can be
acceptable as potentially useful early science, then the burden of this chapter
is to suggest perhaps, yes.
It is not obvious, in fact, that the universe should be complex. One
can imagine universes governed by general relativity that burst briefly into
big bang being, then recollapse in a rapid big crunch within a fraction of a second
or a century. Alternatively, one can
imagine universes governed by general relativity that burst into big bang being
and expanded forever with no further complexity than hydrogen and helium and
smaller particles in an open and ever-expanding dark, cold vastness.
Here we are, poised, it seems (but see below) between a universe that
will expand forever and a universe that will eventually ease into gentle
contraction, then rush to a big crunch.
Our fundamental theories in physics, and just one level up, biology,
remain un-united. Einstein’s austere
general relativity, our theory of space, time, and geometry on the macroscale,
floats untethered to quantum mechanics, our theory of the microscale,
seventy-five years after quantum mechanics emerged in Schrodinger’s equation
form for wave mechanics. Theoretically
apart, general relativity and quantum mechanics are both verified to eleven
decimal places by appropriate tests. But
it remains true that general relativity and quantum mechanics remain fitfully
fit, fitfully un-united. And Darwin’s
view of persistent coevolution remains by and large unconnected with our
fundamental physics, even though the evolution of the biosphere is manifestly a
physical process in the universe. Physicists cannot escape this problem by
saying, “Oh, that’s biology.”
The Complexity of the Universe
Why the universe is complex rather than simple is, in fact, beginning
to emerge as a legitimate and deep question. In the past several years, I have had the
pleasure to come to know Lee Smolin, whose career is devoted to quantum gravity
and cosmology. Most of what I shall
write in this chapter reflects my conversations and work with Lee and his
colleagues, who have been wonderfully welcoming to this biologist. Sometimes outsiders can make serious
contributions. Sometimes outsiders just
make damned fools of themselves.
Caveat lector, but I will continue.
In Smolin’s book, The Life of the Cosmos, he raises directly the
question of why the universe is complex. Current particle physics has united three of
the four fundamental forces - the electromagnetic, weak, and strong forces
called the “standard
245
model.” With general relativity,
which deals with the remaining force, gravity; this provides a consistent
framework. Particle physics plus general
relativity have altogether some twenty “constants” of nature, which are
parameters of the standard model and general relativity, such as the value of
Planck’s constant, h; the fine structure constant, that is, the ratio of
the electron rest mass to proton rest mass; the gravitational constant, g; and
so forth. Smolin puts approximate
maximum and minimum bounds on these twenty constants and asks a straightforward
question: In a twenty-dimensional parameter space ranging over the plausible
values of these twenty constants, what volume of that parameter space is consistent
with values of the constants that would yield a complex universe with stars,
chemistry, and potentially, life?
Smolin’s rough answer is that the volume of parameter
space for the constants of nature that would yield a complex universe is
something like 10 raised to the minus 27th power. That is, a tiny fraction of the possible
combinations of the values of the constants are consistent with the existence
of chemistry and stars, as well as life. For the universe to be complex, the constants
must be sharply tuned.
Smolin’s argument could be off by many orders of
magnitude without destroying his central point: The fact that our universe is
complex, based on our current theories of the standard model and general
relativity, is surprising, even astonishingly surprising.
Many physicists have remarked upon this fine tuning of
the constants.
There have been several responses to this issue, some
raised prior to Smolin’s work. One is
based on a view of multiple universes and the “weak anthropic principle”. This principle states that there exist multiple
universes, but only those universes that were complex would sport life forms
with the wit to wonder why their universe was complex. So the very fact that we humans are here to
wonder about this issue merely means that we happen to be in one of the complex
universes among vastly many universes. The
argument is at least coherent. But it’s
hard to be thrilled by this answer.
The “strong anthropic principle” goes further - indeed,
too far - and posits that, for mysterious reasons, the universe is contrived such
that life must arise to observe it and wonder at it. Few think the strong anthropic principle
counts as science in any guise.
Smolin points out that there are two possible answers
to the puzzle of the complexity of the universe. Either we will find a parameter-free
description - a kind of supertheory - that yields something like our complex
universe with its constants, or some historical process must pick those
constants. Smolin proposes the possibility
of “cosmic natural selection?’ Here,
daughter universes are born of black holes. Universes with more black holes will have more
daughter universes. Given minor
heritable variation in the constants of the laws of the daughter universes,
cosmic natural selection will select for universes whose constants support the
formation of a near-maximum number of black holes. He then argues that on very crude calculations
most alterations in the known constants would be expected to lower the number
of black holes. Lee points out that his
theory is testable, for example, by deducing that our constants correspond to
near-maximum black hole production, and that his theory has not been ruled out
yet.
I confess I am fond of and admire Lee Smolin a great
deal, but I don’t like his hypothesis. Why?
Well, preferably, one would like a
theory that had the consequence that any universe would be complex like ours
and roughly poised between expansion and contraction. We have no such theory at present, of course. The remainder of this chapter discusses ideas
and a research program that just might point in this direction.
As a start, we can begin with the most current view of
the large-scale structure and dynamics of the universe. The most recent evidence suggests that on a
large enough scale the universe is flat, the matter distribution is isotropic,
and - a recent surprise - the universe may be expanding at an accelerating rate.
This latest result, if it holds true,
contravenes the accepted view of the past several decades that the rate of
expansion of the universe has been gradually slowing since the big bang. The hypothesis that the universe is exactly
poised between persistent expansion and eventual collapse has held that the
rate of expansion of the universe will gradually slow, but never stop.
One way to explain a persistent accelerating expansion
of a flat universe is to reintroduce Einstein’s “cosmological constant” into
the field equations of general relativity. A positive cosmological constant expresses
itself as a repulsive force between masses that increases with the distance
between those masses. Some physicists
think that a positive cosmological constant must be associated with some new
source of energy in free space. The
source of such an energy is currently unknown.
Quantum Mechanics and Classicity
Before turning to the huge difficulties of quantum
gravity, we should review the fundamental mystery of quantum mechanics. Most readers are familiar with the famous
two-slit experiment, which exhibits the fundamental oddness of quantum
interference. Feynman, in his famous
three-volume lectures on physics, gives the mystery as simply as possible: We
begin with a gun shooting bullets. The
bullets pass through one of two holes in a metal plate and fly further on,
landing on a flat layer of sand in a box. Bullets passing through either hole may be
deflected slightly by hitting the walls of the hole. Thus, in the sandbox
behind the metal plate, we would expect, and actually would find, two mounds of
bullets. Each mound would be centered on
the line of flight of the bullet from the gun through the corresponding hole to
the sandbox, with a “Gaussian” or normal bell-shaped distribution of bullet
densities falling away from the peak of each mound.
When we use monochromatic light rather than bullets,
we note the following: If the light hits the sandbox, changed into a
photon-counter surface, we find that the size of the energetic impact is the
same each time a photon hits the surface. Photons of a given wavelength have a fixed
energy. A photon either is recorded at a
point on the surface or not. Whenever
one is recorded, the full parcel of energy has been detected at the surface. Now if only one hole is open, one gets the
Gaussian mound result. Most photons pass
through the hole unscathed and arrive in a straight line at the photon-counter
surface. A Gaussian distribution peaked
at that center is present because some photons are deflected slightly by the
edges of the hole.
But if two holes are open, then one gets the famous
interference pattern of light and dark interfering circles spreading from the
centers on the photon-counter surface that were the peaks of the mounds seen when
hole 1 or hole 2 was open. Of course, as
Feynman points out, there is no way to account for this oddness in classical
physics.
Quantum mechanics was built to account for the
phenomenon. The Schrodinger equation is
a wave equation. The wave that propagates
from the photon gun is an undulating spherically spreading wave of probability
“amplitude?’ The amplitude at any point
in space and time is the square root of the probability that the photon will be
found located at that point. To obtain
the actual probability, the amplitude must be squared.
A central feature of Schrodinger’s equation is its
linearity. If two waves are propagating,
the sum and difference of those waves are also propagating. It is the essential linearity of quantum
mechanics that makes the next puzzle, the link from quantum to classical
worlds, so difficult. For a central
puzzle of quantum mechanics becomes the relation between this odd quantum world
of possible events, where the possibilities can propagate, but never become
actual, and the classical world of actual events.
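For reference (my notation, not the text's), the role of linearity in the two-slit oddness can be written in one line: if $\psi_1(x)$ and $\psi_2(x)$ are the amplitudes for reaching a point $x$ on the counter via hole 1 and hole 2, linearity lets both propagate together, and the detection probability is

$$P(x) \;=\; \bigl|\psi_1(x) + \psi_2(x)\bigr|^2 \;=\; |\psi_1(x)|^2 + |\psi_2(x)|^2 + 2\,\mathrm{Re}\!\left[\psi_1^*(x)\,\psi_2(x)\right].$$

The cross term oscillates with the relative phase of the two amplitudes and is the interference pattern; with only one hole open it is absent, and the single Gaussian mound returns.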
A variety of approaches to the liaison between the
quantum and classical realms exist. The
first is the “Copenhagen interpretation,” which speaks of the “measurement
event;’ when the quantum object interacts with a macroscopic classical object,
the measuring device, and a single one of the propagating possibilities becomes
actual in the measurement event, thereby “collapsing” the wave function. A second approach is the Everett multiworld
hypothesis, which asserts that every time a quantum choice happens, the
universe splits into two parallel universes. No one seems too happy with the Everett
interpretation. And few seem very sure
what the Copenhagen interpretation’s collapse of the wave function might really
mean.
Meanwhile, there are two other long-standing
approaches to the link between the quantum and classical worlds. The first is Feynman’s famous sum over all possible
trajectories, or histories, approach. In
quantum mechanics, we are to imagine a given possible pathway of the photon
from the photon gun through the screen with the two slits to the
photon-counting surface. For each
pathway, there is a well-defined procedure to assign an "action." This action can be thought of as having an
amplitude and a phase, and the phase rotates through a full circle, 2 pi, many
times along the pathway. According to
the Feynman scheme, classical trajectories correspond to quantum pathways
possessing minimal action.
Consider, says Feynman, all the pathways that start at
the photon gun and end up at the same point on the photon-counting surface. Nearly parallel, nearly straight-line
pathways have nearly the same action.
So when those pathways are summed, they have nearly the same phase, and
their superposition yields constructive interference, which tends to build up
amplitude. Thus, pathways that are near
the classical pathway interfere constructively to build up amplitude. By contrast, quirky crooked pathways between
the photon gun and the same point on the counter screen have very different
actions, hence very different phases, and interfere destructively, so their
amplitudes tend to cancel. The classical
pathway, therefore, is simultaneously the most probable pathway over the sum of
histories of all possible pathways, and the pathway that requires the least
action.
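In standard notation (added here for orientation, not drawn from the text), Feynman's prescription assigns each path from the gun to a point $x$ on the counter a pure phase built from its action $S$ and sums them:

$$K(x) \;\propto\; \sum_{\text{paths to } x} e^{\,i S[\text{path}]/\hbar}.$$

Paths near the stationary, least-action path share nearly the same $S$, hence nearly the same phase, and add; paths far from it have rapidly varying phases and largely cancel, which is the content of the paragraph above.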
The result is beautiful, but has two problems. First, Feynman assumes a continuous background
space and time in his theory. Quantum
gravity, as we will see, cannot make that assumption in the first place. Rather,
space, or geometry, is a discrete, self-constructing object on its own. Thus, achieving a smooth space and time is
supposed to be a consequence of an adequate theory of quantum gravity. If Feynman’s sum over histories must assume a
smooth background space and time, then it cannot as such be taken as primitive
in quantum gravity. Second, granting a
continuous background space and time, Feynman’s sum over all histories still
only gives a maximum of the amplitude for the photon to travel the classical
pathway; it never gives an actual photon arriving at the counting surface. No more than any other approach does Feynman's overcome
the fundamental linearity of quantum mechanics. We still have to collapse the wave function. Despite these problems, Feynman’s results are
brilliant, and at least we see a link between the classical and quantum worlds,
if not yet actual photons striking counters.
But there is an alternative approach to the link
between the quantum and the classical worlds. This possible approach is based on the
well-established phenomenon of “decoherence.” Decoherence arises when a quantum-coherent
Schrodinger wave is propagating and the quantum system interacts with another
quantum system having many coupled variables, or degrees of freedom. The consequence can be that the Schrodinger
wave function of the first quantum system becomes entangled in very complex
ways with the other complex quantum system, which may be thought of as the
environment. Rather like water waves
swirling into tiny granular nooks and crannies along a rugged fractal beach,
the initial coherent Schrodinger wave representing the initial quantum
system swirls into tiny and highly diverse patterns of interaction with the
quantum system representing the environment. The consequence of this intermixing is
decoherence.
To understand the core of decoherence, one must
understand that the exhibition of interference phenomena, the hallmark of
quantum mechanics noted in the double-slit photon experiment, requires that
literally all the propagating possible pathways in Feynman’s sum over
histories that are to arrive at each point on the photon-counter surface do in
fact arrive at that point. If some fail
to arrive, the sum over all histories fails. In effect, if some of the phase information,
the core of constructive and destructive interference, has been lost in the
maze of interactions of the quantum system with its environment, then that
phase information cannot come to be reassembled to give rise to quantum
interference.
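A standard way to write this loss of phase information (a textbook schematic, not part of Kauffman's argument) uses the reduced density matrix: tracing out the environment leaves the system's state with its off-diagonal, interference-carrying entries suppressed,

$$\rho_S \;=\; \mathrm{Tr}_E\,|\Psi\rangle\langle\Psi|, \qquad \rho_{12}(t) \;\approx\; \rho_{12}(0)\,\langle E_2(t)|E_1(t)\rangle \;\longrightarrow\; 0,$$

since the environmental states correlated with the two alternatives quickly become nearly orthogonal. Once those overlaps are effectively zero, the phase information needed to reassemble interference is gone.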
Decoherence is accepted by most physicists. For example, in attempts to build quantum
computers that can carry out more than one calculation simultaneously due to
the linear features of quantum mechanics, actual decoherence is currently a
technical hurdle in obtaining complex quantum calculations.
Decoherence, then, affords a way that phase
information can be lost, thereby collapsing the wave function in a
nonmysterious fashion. Thus, some physicists
hope that decoherence provides a natural link between the quantum and classical
realms. Notable among these physicists
are James Hartle and Murray Gell-Mann, whose views can be found in Gell-Mann’s The
Quark and the Jaguar. In essence,
Hartle and Gell-Mann ask us to consider “the quantum state of the universe” and
all possible quantum histories of the universe from its initial state. Some of these histories of the universe may
happen to decohere. Hartle and Gell-Mann
argue that the decoherent histories of the universe, where true probabilities
can be assigned, rather than mere amplitudes, correspond to the classical
realm. Others have argued that
decoherence itself can be insufficient for classical behavior.
It is striking that there appear to be two such
separate accounts of the relation between the quantum and classical worlds,
Feynman’s sum over histories in a smooth background space-time and decoherence.
For an outsider, it is hard to believe
that both can be correct unless there is some way to derive one from the other.
I will explore one such possibility
below. In particular, I will explore the
possibility that decoherence of quantum geometries is primary and might yield a
smooth space-time in which Feynman’s account is secondarily correct.
I turn next to an outsider’s grounds for doubts about
some of the core propositions of quantum mechanics. Roland Omnes, in The Interpretation of
Quantum Mechanics, is at pains to argue that decoherence is the plausible
route to classicity. In his discussion,
two major points leap to attention. The
first concerns the concept of an elementary predicate in quantum mechanics. Quantum mechanics is stated in the framework
of Hilbert spaces, which are finite or infinite-dimensional complex spaces,
that is, spaces comprised of finite or infinite vectors of complex numbers. In
effect, an elementary predicate is a measurement about an “observable”
that returns a value drawn from some set of possible values. And so the first striking point is Omnes’
claim that all possible observables can be stated in Hilbert space. The second striking point is Omnes’ claim
that some observables cannot be observed.
The first point is striking because it is not at all
clear that all possible observables can be finitely stated in Hilbert space. My issue here is precisely the same as my
issue with whether or not the configuration space of the biosphere is finitely
prestatable. As I argued above, there
does not seem to be a finite prestatement of all possible causal consequences
of parts of organisms that may turn out to be useful adaptations in our or any
biosphere, which arise by exaptation and are incorporated in the ongoing
unfolding exploration of the adjacent possible by a biosphere.
In quantum mechanics, an observable corresponds to a
mathematical operator that “projects out” the subspace of Hilbert space corresponding
to the desired observable in a classical measurement context that allows detection
of the presence or absence of the observable. But the biosphere is part of the physical
universe, and the exapted wings of Gertrude the flying squirrel are manifestly
observables, albeit classical observables. If we cannot finitely prestate the observable,
“Gertrude’s wings,” then we cannot finitely prestate an operator on Hilbert
space to detect the presence or absence of Gertrude’s wings. In short, there seems to be no way to pre-specify
either the quantum or classical variables that will become relevant to the
physical evolution of the universe.
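For orientation (standard quantum mechanics, not an addition to the argument), the operator language used above can be made explicit: an observable is a self-adjoint operator with a spectral decomposition into projectors onto its eigen-subspaces,

$$\hat{O} \;=\; \sum_i \lambda_i\,\hat{P}_i, \qquad \Pr(\lambda_i \mid \psi) \;=\; \langle \psi \,|\, \hat{P}_i \,|\, \psi \rangle,$$

so prestating an observable means prestating such a projector. The worry above is precisely that for an observable like "Gertrude's wings" we cannot write the projector down in advance.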
It turns out that the above issues may bear on the
problem of time in general relativity, as Lee Smolin realized from our
conversations and as I return to shortly.
Now, the second point. Omnes follows up on it. An observable requires a measuring device. There are some conceivable observables for
which the measuring device would be so massive that it would, of itself, cause
the formation of a black hole. Thus, no
information resulting from the measurement could be communicated to the outside
world beyond the black hole.
A strange situation: even if we could finitely
prestate all possible observables, only some observables can manage to get
themselves observed in the physical universe.
What shall we make of conceivable observables that
cannot, in principle, be observed? More
important, it seems to this outsider, is the following: If observation happens
by coupling a quantum system to some other system, quantum or classical,
whereby decoherence occurs and, in turn, classicity arises because loss of
phase information precludes later reassembly of all the phase information to
yield quantum interference, then there appears to be a relation between an
observable being observed and the very decoherence by which something actual
arises from the quantum amplitude haze.
If that is correct, then only those observables that
can get themselves observed can, in principle, become actual. More, it begins to seem imperative to consider
the
specific possible pairs of quantum systems that can couple and
decohere, for only thereby can such pairs become classical via decoherence. This begins to suggest preferred histories of
the universe concerning such comeasuring pairs of quantum systems. Preferentially, those comeasuring pairs of
quantum systems that decohere and become classical will tend to accumulate, due
to the irreversibility of classicity. Thereafter
quantum-classical pairs of systems that cause decoherence of the quantum system
will preferentially accumulate into classicity.
If comeasuring yields classicity, and classicity is irreversible,
the classical universe begins to appear to coconstruct itself. In particular, it is generally accepted that
bigger systems, that is, systems with more coupled degrees of freedom, decohere
more rapidly when they interact than smaller systems. If so, this begins to refine the suggestion of
preferred histories of the universe concerning comeasuring pairs of quantum
systems toward a preference for the emergence of classical diversity and
complexity: If quantum systems with more coupled degrees of freedom irreversibly
decohere more rapidly into classical behavior when they interact than smaller,
simpler systems, then the kinetics of decoherence should persistently favor the
irreversible accumulation of bigger, more complex quantum systems, rather than
of smaller, simpler, quantum systems.
Chemistry should be an example. Molecules are quantum objects, yet flow into
the chemical adjacent possible. The
adjacent possible explodes ever more rapidly as molecular diversity, and hence
molecular complexity, increases. Reactions
of complex molecules are precise examples of the couplings of quantum systems
whereby decoherence can happen. Decoherence presumably happens more rapidly
among complex reacting molecules than among very simple molecules or the same
total number of mere atoms, nucleons, and electrons in the same total volume. This hypothesis ought to be open to
experimental test. If confirmed, the
flow of possible quantum events into the chemical adjacent possible should, in
part, be made irreversible by the decoherence of complex molecular species as
they couple and react with one another.
If the general property obtains that complex quantum
entities can couple to and interact with other complex quantum entities in more
ways than can simple systems and that the number of ways of coupling explodes
faster than the diversity of entities, and thus faster than the complexity of
those quantum objects, then decoherence should tend to lead to favored pathways
toward the accumulation of complex classical entities and processes. I return to these themes below.
The Problem of Time in General Relativity
To my delight, I soon found myself coauthor on a paper
with Lee Smolin concerning the problem of time in general relativity. Lee had done the majority of the
work, but had taken very seriously my concern that one cannot finitely prestate
the configuration space of a biosphere.
In general relativity, space-time replaces space plus
time. A history becomes a “world-line”
in space-time. But that world-line is a
geometrical object in space-time. Time
itself seems to disappear in general relativity, to be replaced by the geometrical
world-line object in space-time.
But argued Lee, with my name appended, general
relativity assumes that one can prestate the configuration space of a universe.
In that prestated configuration space, a
world-line is, indeed, merely a geometrical object. What if one cannot prestate the configuration
space of the universe? If so, one cannot
get started on Einstein’s enterprise, even if general relativity is otherwise
correct. As a concrete example, Lee pointed
out that four-dimensional manifolds are not classifiable.
How might one do physics without prestating the
configuration space of the universe? Lee
postulated use of spin networks, as described below, with the universe
constructing itself from some initial spin network. In this picture, time and its passage are real.
If there can be a framework in which
time enters naturally, and possibly there is a natural flow of time, or an
arrow of time preferentially from past to future, then, among other possible
consequences, we may be able to break the matter-antimatter symmetry, for
antimatter can be stated as the corresponding matter flowing backward in time. Break the symmetry of time in fundamental
physics and you may buy for free the breaking of the symmetry between matter
and antimatter. If time flows
preferentially from past to future, matter dominates antimatter. That would be convenient since matter does
dominate antimatter, and no one knows just why.
We will head in this direction.
For the sixty years following 1926 and the
emergence of matrix mechanics and the Schrodinger formulation of quantum
mechanics, scant progress was made on quantum gravity. Now, in the past decade or so, there are two
alternative approaches, string theory and spin networks. Of the two, string theory has captured the
greatest attention. I discuss it briefly
below.
Spin networks were invented by Roger Penrose three
decades ago as a framework to think about a quantized geometry. Quite astonishingly, spin networks appear to
have emerged from a direct attempt to quantize general relativity by Carlo
Rovelli and Lee Smolin. In outline, part
of the tension between quantum mechanics and general relativity lies in the
very linearity of quantum mechanics and the deep nonlinearity of general
relativity.
Building on work of Ashtekar and his colleagues,
Rovelli and Smolin proceeded directly from general relativity along somewhat
familiar pathways of canonical
quantization. In outline, general
relativity is based on a metric tensor concerning space-time. The metric tensor is a 4 x 4 symmetric tensor. It turns out that this tensor yields seven
constraint equations. The solutions of
six of the seven have turned out to be spin networks. The solution of the seventh equation would
yield the Hamiltonian function, hence the temporal unfolding, of spin networks
in a space x time quantum gravity.
Spin network theories can be constructed in different
dimensions. The two most familiar are
for two spatial and one temporal or three spatial and one temporal dimension. We will concern ourselves with three plus one
spin networks for concreteness. The
minimal objects in a spin network are discrete combinatorial objects that
constitute first a tetrahedron, with four vertices and four triangular faces. A tetrahedron represents a primitive discrete
unit of geometry, or space. Integer-valued
labels are present on the edges and vertices of these tetrahedra. The labels on the edges represent spin states.
The labels on the vertices represent
“intertwinors” and concern how edges entering a vertex are connected to one
another into and out of the vertex.
Analytic work has associated an area with a face of a
tetrahedron and a volume with the tetrahedron itself. There is, at present, no way to represent the
length of an edge connecting vertices. On
the other hand, one can think of the integer values on the edges around a face
of a tetrahedron as associated with the area of that face, such that
larger integers correspond to larger areas.
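For reference, the standard loop quantum gravity result (due to Rovelli and Smolin; quoted here, not derived in this text) makes "larger labels, larger areas" precise: a surface punctured by spin network edges carrying spins $j_i$ has area

$$A \;=\; 8\pi\gamma\,\ell_{P}^{2}\,\sum_i \sqrt{j_i\,(j_i + 1)},$$

where $\ell_P$ is the Planck length and $\gamma$ is a dimensionless constant, the Immirzi parameter.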
A geometry is built up by minimal moves, called
“Pachner moves,” in which a given tetrahedron can give rise to daughter
tetrahedra off each face. In addition,
several tetrahedra can collapse to a single tetrahedron.
Thus we may picture an initial spin network, say, a
single tetrahedron. In analogy with
chemistry and combinatorial objects, the founder set of a chemical reaction
graph, and the adjacent possible in the chemical reaction graph, we may
consider the single initial tetrahedron as a founder set, gamma 0. Consider next all possible adjacent spin
networks constructible in any single Pachner move. Let these first adjacent possible spin
networks lie in an adjacent ring, gamma 1. In turn, consider all the spin networks
constructible for the first time from the founder set in two Pachner moves,
hence constructible for the first time in one Pachner move from the gamma-1 set
of spin networks. Let this new set be
the gamma-2 set of spin networks.
By iteration, we can construct a graph connecting the
founder spin network with its 1-Pachner move "descendants," 2-Pachner move
descendants, ... N-Pachner move descendants.
Each spin network in each gamma ring represents a
specific geometry, subject to the constraint that two spin network tetrahedra
that share one triangular face must assign the same spin labels to the common
edges, hence, the same area to the common face.
Changes in the values of spins on the edges that
change the areas and volumes of the tetrahedra can be thought of as deforming
the geometry so that it warps in different ways. However, it should be stressed that there is
no continuous background space or space-time in this discrete picture. Geometry is nothing but a spin network, and a
change in geometry is nothing but a change in the tetrahedral structure of the
spin network by adding or deleting tetrahedra or by changing the spin values on
the edges of tetrahedra.
Within quantum mechanics, there is an appropriate way
to consider the discrete analogue of Schrodinger’s equation, namely a means
over time of evolving amplitudes from an initial distribution. In particular, the appropriate means of
evolving amplitudes concern what are called “fundamental amplitudes,” which
specify initial and final values of the integer values on edges before and
after Pachner moves.
Consider a given graph linking spin networks from an
initial tetrahedron in gamma 0, outward as in a mandala, to all daughter
networks in gamma 1, gamma 2,... gamma N, where N can grow large
without limit.
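As an illustrative sketch of this construction (my own; pachner_neighbors is a hypothetical placeholder for a real generator of single-move neighbors), the rings gamma 0, gamma 1, ... can be built breadth-first, each ring holding the spin networks first reachable in exactly that many Pachner moves:

```python
# Build the "mandala" rings: gamma_k is the set of spin networks first
# reachable from the founder network in exactly k Pachner moves.
# `pachner_neighbors` is a hypothetical placeholder for a function returning
# every spin network reachable from a given one in a single Pachner move.
from typing import Callable, Hashable, Iterable

def build_rings(founder: Hashable,
                pachner_neighbors: Callable[[Hashable], Iterable[Hashable]],
                max_ring: int) -> list[set]:
    rings = [{founder}]                        # gamma_0: the founder alone
    seen = {founder}
    for _ in range(max_ring):
        frontier = set()
        for network in rings[-1]:
            for neighbor in pachner_neighbors(network):
                if neighbor not in seen:       # first reached at this ring
                    seen.add(neighbor)
                    frontier.add(neighbor)
        rings.append(frontier)                 # gamma_{k+1}
    return rings
```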
The Emergence of a Large-Scale Classical Limit?
I now describe one approach to thinking about quantum
gravity and the emergence of a smooth large-scale geometry based on this
mandala and on Feynman’s idea of a sum over all histories. Endow the spin networks throughout with the
same fundamental amplitudes; thus, the same law for propagating amplitudes applies
everywhere in the spin network mandala. Begin with all amplitude concentrated in the
initial spin network tetrahedron in gamma 0. In this vision, a unit of time elapsing is
associated with a Pachner move, such as a move from gamma 0 to a point in gamma
1. With analogy to Feynman’s sum
over all possible histories, consider the set of all pathways that begin at the
initial tetrahedron in gamma 0 and end on a given specific spin network N time
steps later, for N = 1000. That
final spin network might lie in the gamma-0 ring, the gamma-1 ring, the gamma-2
ring, or any ring out to the gamma-N ring.
Here is a hopeful intuition that may prove true. If we consider the family of all histories
beginning on gamma 0 and ending in a specific spin network in the gamma N =
1000 ring, those pathways must be very similar and few in number. By contrast, if we consider all pathways
length 1000 that begin on the gamma-0 tetrahedron and end, 1000 steps later, on
a specific spin network in the gamma-23 ring after wandering all over the spin
network mandala graph, there may be many such pathways, and they can be very
dissimilar. Now, during the amplitude
propagation along any pathway, an action can be associated with each Pachner
move, hence, we can, with Feynman, think about the constructive or destructive
interference among the family of pathways 1000 steps long that begin on the
gamma-0 tetrahedron and
end on any specific spin network. Then the hopeful intuition is that those
pathways that begin on gamma 0 and end on a spin network member of the gamma N
= 1000 ring in 1000 Pachner moves will have very nearly the same
action, hence, show strong constructive interference. By contrast, those pathways that begin on the
gamma-0 tetrahedron and end, 1000 Pachner moves later, on a specific spin network
in the gamma-23 ring will have very different actions, hence, show strongly
destructive interference.
If the constructive interference among the few
pathways to ring N overwhelms any residual constructive interference in
the inner rings - such as ring 23, due to the larger number of pathways
from gamma 0 to gamma 23 - then the hopeful concept is that amplitude will tend
to accumulate in the gamma-N ring. Then
(goes the hope shared with Smolin) the neighboring spin networks in the gamma-N
shell constitute nearly the same geometry and nearly the same action in the sum
of histories reaching them, which begins to suggest that a smooth large-scale
geometry might emerge.
For this line of theory to succeed, it is not actually
necessary that amplitude preferentially accumulate in the outermost, gamma-N,
ring. Rather it is necessary as N
increases that there be some ring, M, where M is less than N but
increases monotonically with N, such that a sufficiently large number of
alternative pathways with sufficiently similar phase end on members of the M
ring that constructive interference is maximum for members of the M ring.
Further, it is necessary that as N increases
and M increases, amplitude continue to accumulate on the Mth ring.
In short, the concept is that, via constructive and
destructive interference as amplitudes propagate in the mandala, some
large-scale smooth geometry will pile up amplitude, hence probability, and a
smooth classical geometry will emerge. Here is at least one image of how a
large-scale smooth geometry might emerge from spin networks and constructive
interference.
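The constructive-versus-destructive contrast this picture leans on can be seen in a toy numerical form (my illustration, with arbitrary numbers, not a calculation about spin networks): a bundle of unit amplitudes with nearly equal phases keeps almost its full magnitude, while a bundle with scattered phases largely cancels.

```python
# Toy contrast: sum 1,000 unit "amplitudes" whose phases are nearly equal,
# then 1,000 whose phases are spread uniformly around the circle.
import cmath
import random

random.seed(0)
n = 1_000

aligned = sum(cmath.exp(1j * random.gauss(0.0, 0.05)) for _ in range(n))
scattered = sum(cmath.exp(1j * random.uniform(0.0, 2 * cmath.pi)) for _ in range(n))

print(f"|sum| with nearly equal phases: {abs(aligned):7.1f}  of a possible {n}")
print(f"|sum| with scattered phases:    {abs(scattered):7.1f}  of a possible {n}")
```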
At least three major caveats are required. First, no calculation has yet been carried out
for such a model, so such a theory may not work. Second, Feynman’s sum over histories assumes a
classical continuous space and time. It
may be entirely invalid to attempt to use a sum over histories argument in this
quantum geometry setting. Third,
assuming we can use Feynman’s sum over histories, we still have possible
quantum geometries, not an actual geometry.
Self-Selection of the Laws and Constants of Nature?
Recall the puzzle, nay, the deep mystery, about what
processes, if any, might have “chosen” the twenty constants in the standard model
such that the universe happens, improbably, to be complex. To answer this deep mystery we have, at
present, the anthropic principle, Lee Smolin’s concept of cosmic natural
selection for black
hole density, and the hope to find the ultimate parameter-free theory
that would not require multiple universes or a historical process.
With caveats, I now briefly describe a way that may be
useful to begin to think about the emergence of the constants such that any
universe would have a given set of constants.
Tuning the constants corresponds to tuning the laws of
physics. Is there a way to imagine a
self-tuning of a universe to pick the appropriate values of its constants, to
tune its own laws? I think the answer
may be yes. And if the following is
wrong in detail, the pattern of thought may prove useful.
In the spin network mandala picture, a 15J symbol,
present throughout the spin networks in the mandala, generates an analogue of
Schrodinger’s equation, hence, the means to propagate amplitudes in the graph
of spin networks. Thus, a change in a 15J
symbol would correspond to changing the laws of physics about how amplitudes
propagate.
Importantly, the fundamental amplitudes are an ordered
listing of 15 integers, hence, there is a family of all possible
fundamental amplitudes. Since each fundamental
amplitude can be thought of as the “law” about propagating amplitudes among
spin networks, Louis Crane pointed out that there is an infinite family of all
possible laws to propagate amplitude among spin networks.
Thus, imagine an infinite stack of our spin network
mandalas, in which each mandala is a graph from gamma 0, the tetrahedron,
outward to gamma N, for N allowed to be arbitrarily large, of spin networks
reachable in N steps by Pachner moves. The mandala members of the infinite
stack of mandalas differ from one another only in the fundamental amplitudes,
hence laws, that apply to each mandala. (I
concede it may be necessary to have a means to encode in each mandala the
given fundamental amplitudes that apply to that mandala.)
Now consider how amplitudes propagate in each mandala
from an initial state with all the amplitudes concentrated in the gamma-0
tetrahedron. And consider any two
mandalas whose fundamental amplitudes are minimally different. For some such adjacent mandalas with adjacent
laws, the small change in the law may lead to a large change in how amplitudes
propagate in the mandalas. For other
pairs with minimal changes in the fundamental amplitudes or law, the change in the way the amplitude
propagates throughout the mandala may be very slight. Assuming this is true, one intuitively
imagines that the total system is spontaneously drawn to those tuned values of
the fundamental amplitude laws, where small changes in the laws make minimal
changes in how amplitudes propagate.
A simple possible mechanism might accomplish this. Imagine a sum of histories from an initial
gamma-0 tetrahedron in a mandala with some given fundamental amplitude laws
(thereby the initial and boundary conditions are specified), where the pathways
in that set of histories pass up and down the stack of mandalas such that the
fundamental amplitude laws change, as does the spin network, and then
consider the bundle of all such histories that end on a given spin
network in a given gamma ring with given, perhaps new, fundamental amplitude
laws. In effect, this conceptual move
allows there to be quantum uncertainty not only with respect to spin networks
along histories, but also quantum uncertainty with respect to the law by
which amplitude propagates.
Then one can imagine a sum over all histories that, by
constructive interference alone, picks those pathways, hence fundamental
amplitude laws, that minimize the change in the ways amplitudes propagate. Such pathways would have similar phase, hence,
accumulate amplitude by constructive interference. Then, by mere constructive interference, one
can hope that such a process would pick out not only the history, but also tune
the law to the well-chosen fundamental amplitude laws that maximized
constructive interference. Hopefully,
that constructive interference would pick out smooth large-scale geometries
like classical flat or near flat space. In such a large-scale classical-like space and
time, Feynman’s familiar sum over histories that minimizes a least action along
classical trajectories would emerge as a consequence.
Smolin and I discuss this possibility in a second
paper. I find the idea attractive as a
research program because it offers a way in which a known process, constructive
interference, modified to act over a space of geometries and laws simultaneously,
chooses the law. It is, of course,
rather radical to suppose that there is quantum uncertainty in the law, but it
does not seem obviously impossible.
On an even grander scale, particle physicists build
the standard model from an abstract algebra called SU(3) × SU(2) × U(1). One can imagine a similar research program
that by constructive interference alone picks out the particles, constants, and
laws of the standard model. Presumably,
particles governed by sufficiently “nearby” laws would be able to interact,
hence undergo constructive or destructive interference, thus picking the
particles and the laws simultaneously.
There is a further interesting feature, for we appear
to have in our mandala, or mandalas, a new arrow of time. Allow that at any step, any Pachner move can
happen. Some moves add tetrahedra. An equal number delete tetrahedra. Yet the number of spin networks in ring N +
1 is larger than the number of spin networks in ring N. Statistically,
there are more ways to advance into ring N + 1 than to retreat from ring
N into ring N - 1. Other things
equal, amplitude should tend to propagate outward from the gamma-0 tetrahedron.
There is an analogy in chemical reaction
graphs to the adjacent possible and the real chemical potential across the
frontier from the actual to the adjacent possible.
But if so, time enters asymmetrically due to the graph
structure of the spin network mandala. Then,
statistically, time tends to flow in one direction, from simpler toward more
complex spin networks into the ever-expanding adjacent possible.
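The statistical drift can be illustrated with a toy random walk of my own devising (the geometric ring-growth factor is an arbitrary stand-in for the faster-than-linear growth of the rings): if each move goes to a uniformly chosen adjacent spin network and outer rings are larger, outward moves outnumber inward ones and the walker drifts toward higher rings.

```python
# Toy arrow of time from graph structure: a walker on ring index k steps to a
# uniformly chosen adjacent spin network.  If ring k holds s(k) networks with
# s(k+1)/s(k) = GROWTH > 1, the chance of an outward step is
# s(k+1) / (s(k+1) + s(k-1)) = GROWTH**2 / (GROWTH**2 + 1), so k drifts upward.
import random

random.seed(1)
GROWTH = 3.0                                # assumed ring-size ratio; illustrative only
p_outward = GROWTH**2 / (GROWTH**2 + 1.0)   # probability of stepping outward

k = 0
for _ in range(1_000):
    k = k + 1 if (k == 0 or random.random() < p_outward) else k - 1
print("ring index after 1,000 moves:", k)   # drifts steadily outward
```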
A Brief Comment on String Theory
String theory has gained very substantial attention as
a potential “theory of everything;’ namely, a theory that might link all four
forces and all the particles of the standard model into a single coherent
framework. I do not write with even
modest expertise on the subject. Nevertheless,
it is possible that the concept of the law selecting itself via maximum
constructive interference in a sum over all possible histories in a space of
both spin networks and laws might possibly have relevance to string theory. The description of string theory that I give
draws heavily on Brian Greene’s The Elegant Universe.
As is known qualitatively by many outside the confines
of the physics community, string theory began by giving up the picture of
fundamental particles as zero-dimensional, point particles. In its initial version, in the place of point
particles, string theory posited one-dimensional strings that might be open,
with two ends, or closed loops, with no free ends. Among the fundamental ideas of string theory
is the idea that the different particles and the different forces can all be
thought of as different modes of vibration of such strings. Because strings have finite length, string
theory can hope to overcome the infinities that emerge when attempts are made
to marry point particle quantum theories with general relativity in a continuous
space-time. In effect, the finite length
of the strings prevents consideration of space becoming infinitely curved at a
point. Thus, string theory can dream of
uniting quantum mechanics and general relativity, and it has, in fact, produced
the entity carrying the gravitational force, the graviton, in a natural way.
Current string theory has gone beyond
single-dimensional strings, and now considers two-or-higher-dimensional
entities called M-branes. The rough
present state of the art has shown that there are at least five one-dimensional
string theories and M-brane theory. All
of these theories appear to be linked as cousins of one another via various
dualities among the theories.
String theories posit either
eleven-or-fewer-dimensional space and time, with three of the spatial dimensions
unfurled and large scale, corresponding to our familiar three-dimensional
space. The remaining dimensions are
imagined as curled up on the Planck length scale in what are called “Calabi-Yau”
spaces, or more generally, compactified moduli. Compactification of an eleven-dimensional
space and time can be thought of as a large-scale three-dimensional space and
time, but with the additional dimensions curled up at each point in the
large-scale three-dimensional space.
Calabi-Yau spaces can have different topologies. Consider as an analogy a long thin tube with
two ends and a one-hole torus, like a donut. These two are topologically different. As a consequence, closed one-dimensional
string loops can “live” on these surfaces in different ways. Thus, if you think of a string as a closed
loop,
that loop might live on the long tube in two ways, either wrapped
around the tube one or more times or not wrapped around the tube, but lying on
the tube’s surface like a rubber band lying on a surface. By contrast, consider the torus. The closed string might wrap around the torus
in either of two ways, through the hole or around the torus. In addition, the string loop might live on the
surface of the torus without wrapping either dimension. Each of these different ways of being on the
tube or torus and the corresponding modes of vibration constitute different
particles and forces. Calabi-Yau spaces
are more complex than the tube or torus, but the basic consequences are the
same. Different Calabi-Yau spaces, or
more generally, different compactified moduli, with different kinds of holes
around which strings can wrap zero, one, or more times correspond to different
laws of physics with different particles and forces.
Physicists have shown, furthermore, that one
Calabi-Yau space can smoothly deform into another with a “gentle” tearing of
space and time. Hence, the laws, forces,
and particles can deform into one another in a space of laws, forces, and particles.
Within current string theory, it appears
that it is still not certain that there exists a Calabi-Yau space whose string
or M-brane inhabitants would actually correspond to the known particles and
forces, but hopes are high.
However, even if there is a Calabi-Yau space whose
strings and M-branes do correspond to our known particles and forces, string
theorists have the difficulty that it is not clear how the current universe
happens to choose the correct Calabi-Yau space. The familiar ideas on this subject include the
existence of a multiverse and the weak anthropic principle. For example, one could imagine Lee Smolin’s arguments
for choices of Calabi-Yau spaces that lead to fecund universes with a near
maximum of black holes, which are the birthplaces of still further universes.
The parallel between spin networks with different
fundamental amplitude laws and the family of string and M-brane theories that
can deform into one another is that in both theories we confront a family of
theories having the property that different members of the family correspond to
different particles, forces, and laws. In both cases, physicists do not at present
have a theory to account for how, in this embarrassment of riches, our universe
happens to pick the correct laws. I
therefore make the suggestion that the same pattern of reasoning that I
described above, a sum over histories of trajectories that vary both in
configurations and in the laws, which maximizes constructive interference,
might prove a useful approach. In the
string theory context, one would consider a hyperspace of Calabi-Yau spaces, in
which neighboring Calabi-Yau spaces would propagate amplitudes from the same
initial condition in different ways. Presumably, somewhere in the hyperspace of
Calabi-Yau spaces, small changes in Calabi-Yau spaces would yield small changes
in how amplitudes propagate. For other
locations in the hyperspace of Calabi-Yau spaces, small changes in the
Calabi-Yau space would yield large
differences in how amplitudes propagate. In the hyperspace of Calabi-Yau spaces, where
one Calabi-Yau space can deform into its neighbors, it should be possible to
construct a sum over all histories of trajectories between an initial and final
state in the same or different Calabi-Yau space, then seek such sums over
histories that maximize constructive interference. The hope is that maximizing constructive interference
would pick out the Calabi-Yau space corresponding to our particles and forces. Presumably, this would occur in the region of
the hyperspace of Calabi-Yau spaces, where small changes in the Calabi-Yau
space yield the smallest changes in how amplitudes propagate. In short, maximization of constructive interference
may be a useful principle to consider to understand how the universe chooses
its laws.
String theorists recognize the need to confront
further major problems. Most notably,
string theory posits a background space-time in which strings and M-branes
vibrate. But if string theory is to be the
theory of everything, including space and time, then space and time cannot
be assumed as a backdrop. Thus, a virtue
of spin networks is that they afford the hope of a quantized geometry from the
outset. On the other hand, particles and
the three nongravitational forces have yet to be incorporated into a spin
network picture.
A Decoherence Spin Network Approach to Quantum Gravity
However the universe picks its presumably quantum
laws, somehow the classical realm emerges. I noted above that current theory sees two
approaches to linking the quantum and classical realms. The first is based on Feynman’s sum over histories,
but as a perturbative theory assumes a continuous background space-time and
does not get rid of the linear superposition of possibilities that is the core
of quantum mechanics and interference.
What of the second approach, decoherence? The reality of decoherence is established. If one is to take decoherence seriously, and
also to consider geometry constructing itself, then presumably decoherence can
apply to geometry as it constructs itself. What would it mean to apply decoherence to
quantum gravity itself, to the vacuum, to geometry itself?
Well, to an outsider, the following seems possible. If we are to conceive of an initial spin
network, say a tetrahedron, and all possible daughter spin networks, as well as
all their possible daughter spin networks, propagating amplitudes on the
mandala, then at any moment N steps away from moment 0, more than one geometry
is possible - namely all those reachable in N Pachner moves.
We seem to confront the same problem we confront with
quantum systems coupling to quantum systems, such as electrons coupling to
organic molecules, or to classical systems, such as rocks. These quantum systems can decohere. Can
quantum geometries become coupled with one another or different parts
of one quantum geometry become coupled, so to speak, and decohere?
Why not try the idea?
I now discuss one possible approach to this issue. The approach assigns a quantum of action, h,
to the generation of a tetrahedron, hence a Planck energy and thus a Planck
mass to a tetrahedron, and decoherence setting in at a sufficient mass and size
scale.
By use of an equation suggested by Zurek relating the
decoherence timescale, Td, to the relaxation timescale, Tr, of
the system, in which increasing mass and area increase the rate of decoherence
in proportion to their product, it can be qualitatively shown (via sufficiently
rough arguments) that geometry may well be thought of as decohering, and doing
so on a length scale of about 10^-15 cm, which is smaller than the
Compton radius of the electron and even smaller than the radius of a nucleus.
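For reference, the relation usually attributed to Zurek, quoted here in its standard textbook form (the rough 10^-15 cm figure above is Kauffman's, not something I have rechecked), compares the decoherence time $T_d$ with the relaxation time $T_r$ through the thermal de Broglie wavelength:

$$T_d \;\approx\; T_r \left(\frac{\lambda_{\mathrm{dB}}}{\Delta x}\right)^{2} \;=\; T_r\,\frac{\hbar^{2}}{2\,m\,k_B T\,(\Delta x)^{2}},$$

so the decoherence rate $1/T_d$ grows in proportion to the mass $m$ times the square of the separation scale $\Delta x$, which is the mass-times-area dependence mentioned above.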
Now, there are some interesting features of this rough
calculation. First, if we begin with an
initial tetrahedron of geometry, it can have four daughter tetrahedra. In turn, each daughter tetrahedron can have two or more daughter tetrahedra; hence the initial spin network can grow exponentially in the number of tetrahedra before decoherence sets in. This is a clue that a purely quantum account
might be given of an initial exponential expansion of a universe starting with
a single tetrahedron. Thus, it might be
possible to do without the “inflationary hypothesis” of exponential expansion
of a classical space in the early moments after the big bang.
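As a purely illustrative aid, with a branching factor that is my assumption rather than anything computed in the text, the exponential character of this growth is easy to see: if every tetrahedron spawns at least two daughters per generation, the count doubles at each step.

    # Illustrative sketch: exponential growth in the number of tetrahedra if
    # each tetrahedron spawns `branching` daughters per generation (branching
    # factor of 2 assumed here; the text says "two or more").
    def tetrahedra_after(generations: int, branching: int = 2, initial: int = 1) -> int:
        """Count of tetrahedra after `generations` steps of uniform branching."""
        return initial * branching ** generations

    for g in (10, 30, 60):
        print(g, tetrahedra_after(g))
    # 10 -> 1024; 30 -> ~1.1e9; 60 -> ~1.2e18. Growth is exponential from a
    # single tetrahedron until, by hypothesis, decoherence cuts it off.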
Second, an initial exponential expansion of geometry
might overcome, as the inflationary hypothesis does, the particle-horizon
problem in cosmology, in which we confront the puzzle of why parts of the
universe that have been out of apparent causal contact since the big bang can
be so similar. If the initial expansion is exponential and then slows to linear, as in the inflationary hypothesis or perhaps in this purely quantum approach, the particle-horizon problem may disappear.
Third, a purely quantum exponential expansion over
many orders of magnitude should presumably yield a flat power spectrum over
many size scales for quantum fluctuations in the earliest small fraction of a second of the life of the universe, prior to the end of this exponential expansion, when decoherence of geometry occurs.
Fourth, when geometries decohere, we must consider whether there may be geometries that are the slowest to decohere. If different parts of a single spin network
geometry can become coupled, it is natural to assume that flat parts might
decohere more slowly than distorted parts of that geometry. Intuitively, phase information can get lost
more readily when two lumpy parts of a geometry couple
than when two flat parts of a geometry couple. Of course, an explicit model exploring this is
badly needed and entirely missing, but in its absence, let’s make the assumption.
Given that assumption, an initial exponential explosion of flat and warped geometry occurs until decoherence sets in on a length scale of something like 10^-15 cm. At this point, flat geometry “wins” because it
decoheres most slowly. Hence, as soon as
decoherence of geometry sets in, space tends to be flat in the absence of
matter.
But even after decoherence sets in, geometry is busy
all the time trying to build geometry exponentially and everywhere, while
simultaneously decohering. Now an interesting feature of the Td/Tr equation alluded to above is that, whatever the exponential rate of expansion of geometry per Planck time unit may be, the rate of decoherence, the inverse of Td, grows as the mass times the square of the size scale of the geometry. That rate therefore increases as geometry accumulates, until eventually the rate of formation and the rate of decoherence of geometry must balance. The exponential expansion
of the universe is over. However, linear
expansion by construction of geometry can continue. The fastest linear construction of geometry
from any tetrahedron would be at the speed of light.
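Schematically, on my reading of this balance argument rather than as an equation from the text, with one Planck mass per tetrahedron the condition ending exponential expansion can be written

    \frac{1}{\tau_{\mathrm{form}}} \;\sim\; \frac{1}{\tau_D} \;\propto\; M L^{2},

where the formation rate per Planck time is roughly fixed while the decoherence rate grows with the mass M and size scale L of the accumulating geometry; once the two rates are equal, only linear construction of geometry, at most at light speed, remains.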
When the rates of geometry formation and decoherence balance, geometry keeps building tetrahedra as fast as possible everywhere, but flat geometry, by hypothesis, decoheres most slowly. In the limit, perhaps flat geometries do not
decohere at all. Then, the geometry of
the universe tends to be flat in the absence of matter, as Einstein requires in
general relativity. And, once again, the flatness after exponential expansion may obviate the need for the inflationary scenario and solve the particle-horizon problem.
It may be of interest that the assumption of an
action, h, in the generation of each tetrahedron implies an expanding
total energy in geometry itself, the vacuum itself, as geometry constructs
itself. Indeed, one expects that the
assumption of an action, h, per tetrahedron would lead to a uniform
energy density, a constant scalar quantity even as geometry grows. Such an energy could be related to the cosmological
constant.
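A back-of-the-envelope reading of this claim, my gloss assuming one Planck energy per tetrahedron and roughly one tetrahedron per Planck volume, runs

    E_{\mathrm{tot}} \;\approx\; N E_{P}, \qquad N \;\approx\; \frac{V}{\ell_{P}^{3}} \;\;\Longrightarrow\;\; \rho \;=\; \frac{E_{\mathrm{tot}}}{V} \;\approx\; \frac{E_{P}}{\ell_{P}^{3}} \;=\; \mathrm{constant},

a fixed vacuum energy density even as geometry grows, which is why such an energy could plausibly enter as a cosmological-constant-like term.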
It may also be of interest that string theory posits
“extra dimensions” that “curl up” on themselves to yield four-dimensional
space-time. Could the decoherence of
geometry afford a parallel way that extra dimensions can curl up? And could the ever-generating possible
geometries, as they generate exponentially even as they decohere, yield
sufficient extra degrees of freedom to correspond to the modes of oscillation
of strings or M-branes in six or seven extra dimensions?
It may be interesting that the energy content of
geometry could be enormous compared to that of familiar particles of the same
size scale. That might allow the
familiar particles with rest mass to borrow a tiny bit of the vacuum energy for
their mass. Such a possibility could
hint that matter, energy, and geometry might be able
to interconvert. Perhaps different particles would be different kinds of topological “knots” in the spin network structure of geometry, with interconvertible particles being nearby knot topologies.
We are obviously far from anything like a coherent
theory that implements any of the intuitions above. They remain at best mere suggestions for a
research program.
I began this chapter wondering why the universe is
complex. In place of the anthropic
principle or Lee Smolin’s cosmic selection, I have suggested one possible approach to the choice of the constants of nature: maximizing constructive interference over a sum of all histories through a space of both configurations
and laws. Even if that program were to
succeed, it does not necessarily yield a complex universe, let alone one poised
roughly between expansion and contraction.
Might we see ways to understand why the universe is
complex? Perhaps, but merely perhaps. I return to the thoughts earlier in this
chapter that decoherence requires coupling systems and the loss of phase
information. If, in general, complex and high-diversity quantum systems with many coupled degrees of freedom lose phase information more rapidly when they interact than do an equal number of simple, low-diversity quantum systems with the same total number of interacting parts, then the comeasuring of entangled quantum systems should tend toward higher
complexity, diversity, and classicity. In
short, complexity and diversity would beget classicity irreversibly. In turn, this would lead to a preferred
tendency toward a lock-in of complexity and diversity. There is a sense in which classical objects
are like the constraints on the release of energy that permits work to be done.
Classical objects, interacting with
quantum objects, lead to decoherence and more classicity. Complex pairs of quantum objects that decohere readily, or classical objects paired with those quantum objects that are caused to decohere readily when interacting with the classical object, form preferred pairs that tend to decohere and hence become frozen into classicity. We begin to have an image of preferred pairs
of quantum systems coupling and decohering, hence, an image of a complex and
diverse universe constructing itself as it nonergodically invades the adjacent
possible, rather as a biosphere constructs itself. And if more complexity and diversity means
more comeasurement and faster decoherence of a wider variety of complex quantum
systems, in analogy with the concept that extracting work from increasingly subtle nonequilibrium systems requires increasingly subtle measuring and coupling devices, the universe as a whole may persistently break symmetries as
new entities come into existence, and hence expand its diversity, complexity,
and classicity as fast as possible.
Loose arguments? Yes. Testable?
Here and there. Wrong? Probably. Deeply wrong? Maybe not. Does this get the universe to the edge of
expansion versus contraction or
to a flat universe expanding forever more rapidly? I would love it to be so. Indeed, I would love a view in which matter, energy, and geometry can all interconvert. After
all, if geometry, the vacuum, has energy, such interconversion does not seem
impossible. Do the considerations of
this chapter require detailed models and supporting calculations to be taken as
more than the merest suggestions? Absolutely. This chapter, like much of Investigations, is protoscience. But science grows from serious protoscience, and I take Investigations to be serious protoscience.
We enter a new millennium. There will be time for new science to grow.