Marjorie Grene and David Depew
The Philosophy of Biology: An Episodic History
Cambridge University Press, 2004
Chapter 11
Biology and Human Nature
The evolution of human beings, and the meaning of
“human nature” within a comparative, biologically grounded framework of
inquiry, is a huge topic, extending well beyond the contours of this book. Nonetheless, much of the interest that human
beings have in other living beings reflects our unquenchable interest in
ourselves. In this chapter, accordingly,
we will say a few things about several aspects of human evolution, if only by
way of commending further inquiry to the reader. We will touch, first, on human origins - “the
descent of man,” in Darwin’s phrase - with particular attention to the unity of
the human species; second, on the vexed topic of nature and nurture; third, on
the evolutionary mechanisms required to account for characteristics that human
beings alone possess: large brains, language, and mind; and finally, we will
touch on what implications, if any, can be drawn about the “future of man” from
the Human Genome Project, and, more generally, from the fact that within the
last several decades, we have begun to acquire the technical ability directly
to manipulate genetic material. Much
of what we will have to say involves revisiting some of the figures and theories
we have encountered earlier in this book, with special attention this time to
our own species.
Buffon is generally accorded
the title of “father of anthropology.” In his Histoire naturelle
de l’homme (1749), Buffon
resolved to study human beings in the same way he had been studying other
animals. He set out to review their
anatomical and physiological traits, when and why these
traits appear in the life-cycle of this
species, the biological functions of the human senses, and, finally,
geographical variation. In doing this, Buffon became, according to Cuvier,
“the first author to deal ex professo with the
natural history of man” (Cuvier 1843, p. 173). As Topinard, the
author of a late nineteenth-century work, Éléments d’anthropologie
générale, put it,
Buffon
founded what would soon be designated
anthropology, whose main branches he sketched out: man in general, considered
at all ages from the morphological and biological [= physiological] point of
view; the description of the races, their origins and intermixing; and finally
the comparison of man with the apes and other animals from the physiological
point of view, from the study of his characteristics, from his place among
other beings, and from his origin. These amount to the three branches of anthropology made
distinct by [Paul] Broca:
general, special, and zoological.
Topinard 1885, p. 48
To accept this account it is necessary to circumscribe
it a little. First, as the author
implies, Buffon did not call his natural history of
man anthropology. The term
“anthropology” goes back to the seventeenth century, when it referred to
anatomical studies of human beings of the sort conducted by Vesalius
(Blanckaert 1989; Sloan 1995). Although Buffon
incorporated human anatomy into his work, what he studied was
human beings from the perspective of natural history. This allowed his successors (including Kant,
whose Anthropology from a Pragmatic Point of View can hardly be said to
be about anatomy) to transfer the term anthropology from the anatomist’s
functional to Buffon’s historical biology. Second, Buffon was
confident that his study of human beings as natural entities - geographically
dispersed, and open in their differences to the influence of climate and other
aspects of their environments - would be protected from theological and
philosophical objections because he carefully sequestered man’s “moral”
characteristics - the “metaphysical” attributes of reason, free will, and so
forth - from his natural history of the species. It is true that the masters of the Sorbonne
lodged their usual complaints, delaying the publication of the Histoire naturelle until Buffon
judiciously affirmed his belief in “all that is told [in the Scriptures]”
(Roger 1989/1997, pp. 187-89). Nonetheless,
the enduring influence of the Cartesian separation of mind from matter now made
it possible, ironically enough, to study human beings in everything but their
rational life, to study them, that is to say, as animals among other animals,
and thereby to pose a question
that is still with us: whether man’s “moral”
characteristics can be reduced to, or shown to emerge from, his biological
nature. (Rousseau thought the latter,
having discounted the rational side of man.)
Substantively, Buffon’s
greatest contribution to anthropology in the sense just explicated was his
categorical rejection of polygenism - that is, of the
idea that different races of human beings have the status of different species
or sub-species. Polygenism,
which had been defended by Paracelsus in the sixteenth century, always had
about it a whiff of heterodoxy. Among Buffon’s reasons for attacking it was the defense of it by
Voltaire and Rousseau, less as science, to be sure, than as a deliciously
scandalous way of rejecting the Bible. Happily, the overwhelming consensus among
biologists, and philosophers of biology, has always been with Buffon; evidence in favor of monogenism,
most recently genetic, has steadily accumulated.
Buffon’s defense of monogenism was based on his conception of a species as the
“constant succession and uninterrupted renewal of the individuals that comprise
it” (Buffon 1753). Human beings of every sort can produce fertile
offspring, and so, by this criterion, fall into a single species. In making this point, Buffon
was animated by a desire to deflate Linnaeus’s “artificial” system of
classification, which, as we have already noted in Chapter 4, was giving
Linnaeus unwarranted conceptual cover for a classificatory view of various
human races, and, potentially at least, for polygenism
(see also Sloan 1995). We also
discussed in Chapter 4 how Blumenbach and Kant,
writing independently on the subject in 1775, sided with Buffon,
and developed his thought in ways that could accommodate the notion of various
human races without compromising monogenism. The main difference between Kant and Blumenbach, as we have also mentioned, was that Kant
retained Linnaean classifications as natural descriptions, but not natural
history, whereas Blumenbach, in rejecting that
distinction, also rejected underlying assumptions about the Great Chain of
Being. “I am very much opposed,” he
wrote, “to the opinions of those who, especially of late, have amused their
ingenuity so much with what they call the continuity or gradation of nature... There are large gaps where natural kingdoms
are separated from one another” (Blumenbach 1775, 3rd
ed, 1795, in Blumenbach 1865, pp. 150-51). In this chapter, we will see how these
differing, but related, approaches were applied, not so much to the problem of
human races, which we have already addressed in Chapter 4, but to the
relationship between the single human species and other closely related
species, an issue that Buffon’s naturalism had made
central to anthropology.
Linnaeus, for his part,
extended John Ray’s category of anthropomorpha
- changed, in 1758, to primates - to include the genus Homo, as well
as Simia (apes and monkeys), and Bradypus (sloths,
including lemurs). In doing so, Linnaeus was claiming that man is naturally a
quadruped. “He has a mouth like other
quadrupeds,” Linnaeus wrote, “as well as four feet, on two of which he locomotes and the other two of which he uses for prehensile
purposes” (Linnaeus 1754, from English trans. 1786, pp. 12-13). At first, Linnaeus
placed only one species in the genus Homo — our own. Later, however, he added a “Homo
troglodytes” or “Homo sylvestris,” which
he derived, none too accurately, from anatomical and ethological descriptions
of the East Asian orangutan. For Blumenbach, this tendency to lump our species together with
great apes (which in his opinion sprang from the continuationist
biases of the scala naturae)
underestimated the vast differences between our species and apes. Human beings, says Blumenbach,
are naturally upright and bipedal, pace Linnaeus. That is clear to anyone who looks at a large
number of traits and their mutual relations, as the Göttingen
biologists proposed to do, instead of a few arbitrary traits that are useful,
at best, for identifying species. Alone
among species, Blumenbach argues, human beings
exhibit a complex, interlocking set of traits that support their upright
posture and bipedality: a bowl-shaped pelvis; hands
that are not, like those of apes, merely pressed into service as grasping devices,
but are anatomically structured to do so; close-set, regular teeth, instead of
the menacing canines and incisors of apes; sexual availability and eagerness at
every season, and relatively shortened gestation times, which, together with
non-seasonal sexuality, produce a constant supply of infants who must be enculturated if they are to survive and flourish at all (Blumenbach 1795). In consequence of all these mutually reinforcing
traits - a fairly good list even to this day, as lists go, if we add an
enlarged brain - Blumenbach concluded that the human
species is naturally in possession of its upright posture, as well as of the
dominion that it exercises over other living things.
Darwin’s thesis of descent with modification from a
common ancestor affected the state of this question greatly. Once Darwin’s claim about descent was
generally accepted (even if his idea of natural selection were discounted - see
Chapter 8), it was no longer possible to regard apes as degenerate human
beings, or to see races as so many departures from a civilized, rational,
white, European norm, as Buffon and Linnaeus had
done. On the contrary, it now appeared
that human beings must have come from apes - or rather, from the common
ancestor of both apes and
human beings - and that civilized races must
have come from the most primitive. Darwin,
as we mentioned, was amazed by what the unity of the human species actually
entails when he cast his first glance at the naked savages of Tierra del Fuego (see Chapter 7). Oddly enough, however, the acceptance of
Darwinism, or at least of Darwinism as people construed it in the second half
of the nineteenth century, had the effect of belatedly reconciling Linnaean
classification with natural history. Although
Darwin rejected the Great Chain of Being, he thought of orders, families,
genera, species, sub-species, and varieties as points at which lineages split,
forming thereby a “natural” rather than an artificial system. In consequence, Darwinism tacitly restored the
anti-saltationist ideas that Blumenbach
and other early professional biologists had fought to discount. When Huxley protested against this implication
- Darwin had saddled himself, he famously wrote, with an “unnecessary burden”
in clinging to the maxim “natura non facit saltum” (Huxley to
Darwin, November 23, 1859, in Darwin 1985-, Vol. 7, 1991, p. 391)
- he may well have had in mind this apparent regression to an older conception
of biological order.
One result of the Darwinian revolution of the second
half of the nineteenth century was that African great apes tended to displace
the orangutan as the closest living human relative. Darwin wrote of the gorilla and the
chimpanzee: “These two species are now man’s nearest allies” (Darwin 1871
[1981], p. 199). Darwin reasoned that
only strong, continuous selection pressure (whether natural or sexual, on the
second of which he put such stress in The Descent of Man [see Chapter
7]) could drive a slowly diverging speciation process, and, accordingly, that
the evolution of man must have occurred in a very challenging environment. At one point, Darwin speculated that Africa
might be just the place (Darwin 1871, p. 199). The result was that the earlier enthusiasm for
the shy orangutan, who dwells sleepily in the deep,
rich forests of Southeast Asia, was displaced by enthusiasm for African species.
Huxley, for his part, favored the
gorilla. He turned out to be wrong. Darwin did not make a choice. So he turned out to be half-right; we now know
that chimps share more of their genes with human beings - at least 97 percent -
than do other extant apes.
There was another alleged reason behind the preference
of Darwinians for an “African genesis.” Nineteenth-century Darwinism did not do away
with the tendency to distinguish and rank-order human races. The prejudicial notion that the black race is
the oldest (because least civilized) variety of man was combined with the
Darwin-Wallace
presumption that new species are generally to be found in the same geographical
areas as their closest relatives to support an inference that Africans are
closer than other races to the common ancestor of Homo sapiens and to
the extant great apes. Since few extinct
hominids or other primates had been discovered during the nineteenth century,
it was also tacitly presumed that the number of intermediates between great apes
and human beings would turn out to be relatively small.
This foreshortening of the presumed distance between
apes and human beings began to give way in the twentieth century with the
discovery of australopithecines, on the one hand,
and, on the other, a wider number of species of Homo (moving more or
less backward in time - H. neanderthalensis, H. heidelbergensis, H. antecessor, H. erectus, H. ergaster, and H. habilis).
In Mankind Evolving, Dobzhansky wrote, “We recognize two genera [of hominids] - Homo
and Australopithecus” (Dobzhansky 1962, p.
186). Since then, most anthropologists
have recognized Paranthropus (P. robustus, boisei, and aethiopicus) as a distinct genus lying
between the australopithecines and Homo. Nonetheless, anthropology must continually
struggle with the fact that whereas the nearest relative of most species is
usually another living species, the nearest living relative of H.
sapiens, the chimpanzee, is phylogenetically very
distant from us. (The separation from a
common ancestor occurred between 5 and 7 million years ago.) In consequence, there has been a tendency,
which only a fuller fossil record could diminish, to underestimate the
complexity of what happened since the lineage of great apes diverged from the
lineages that resulted in ourselves (Schwartz 1999; Tattersall and Schwartz 2000).
The distance has begun to be made up by the discovery
of the fossil remains of A. afarensis (“Lucy”),
who lived some 3.5 to 4 million years ago, and of A. africanus. [1] Paranthropus
aside, it is possible that the genus Homo split off from A. africanus, or something like it, in the form of H. habilis, notable for its more concerted and skilled use
of tools; that H. habilis gave way in turn to H.
erectus; and, finally, that H. erectus gave rise to H. sapiens sapiens, as well as to H. [sapiens] neanderthalensis, which Dobzhansky
treated as a distinct race of H. sapiens (Dobzhansky
1962, p. 191). An impression of linear
progress of this sort can easily be given by the fact that, throughout this
history, there has been an increase in both the absolute and relative size of the brain. A. africanus had a brain of only 441 cc in a body weighing about 50 pounds. H. erectus, though heavier, had a brain of about 950 cc. The upper end of Neanderthal brains is about 1,450 cc. The brain of H. sapiens sapiens averages a massive 1,500 cc.

1. Recently, the world of anthropology was excited by the discovery in Chad of a 7 million-year-old skull (Sahelanthropus tchadensis) that seems to lie close to the point when chimpanzees and hominids diverged.
Although it discredited racial rank-ordering, the
Modern Evolutionary Synthesis did surprisingly little to dampen the presumed
progressiveness in this picture. The
concepts of grades and trends were dear to its heart. These notions tended to put a directional spin
on what might well have been a very bushy process. Thus, Mayr, having
discounted Paranthropus as not very
different from Australopithecus, wrote
in Animal Species and Evolution, “The tremendous evolution of the brain
since Australopithecus permitted [hominids] to enter so completely a
different ecological niche that generic separation [from australopithecines]
is definitely justified” (Mayr 1963, p. 631). It is admitted on all sides that many species
and sub-species of Homo may have come and gone since the period that
began some 250,000 years ago, when the climate began markedly to oscillate,
bringing with it concerted selection pressures of a kind that rewarded
cooperation, communication, and cleverness. Still, the impression is often given that,
once up and running, modern human beings were soon dominating the scene. By 30,000 years ago, from an original
population of some 10,000 individuals, populations of modern human beings were
to be found nearly everywhere in the Old World, but remained in sufficient
contact that they constituted, as they still do, a single species.
This linear picture may have to be refined by data
coming from the comparative analysis of mitochondrial DNA and other methods. These suggest that there may have been species
of bipedal apes prior to A. afarensis; that
there were outmigrations of H. erectus from
Africa long before the evolution of modern human beings; and that when H.
sapiens sapiens began its own adaptive radiation
out of Africa a mere 90,000 years ago, it may have killed off Neanderthals (and
perhaps others) in the process, with or without some gene flow (Templeton
2002). In their attack on trends and
grades, some contemporary anthropologists who are also cladists
have called on genetic data to discredit the tendency of the Modern Synthesis
to lump together separate species of Homo (including Neanderthals) and
to see the descent of man as more tree-like and less bushy (Schwartz 1999; Tattersall and Schwartz 2000). Nonetheless, it must be admitted that the
orthodox, mid-century view has the advantage that it can readily explain the
very fact that makes
establishing the descent of man so difficult. If we are separated by a great gulf from our
living relatives, unlike other lineages, the reason might well be that the
evolutionary innovations in our generalizing, polytypic, culturally
based species were powerful enough to eliminate competitors, leaving, as Blumenbach had long ago suggested, an enormous hole in the phylogenetic continuum.
Some prominent versions of Darwinism in the late
nineteenth century, notably the biometric research program that in England was
to mark out the path that eventually led toward the genetic Darwinism of the
twentieth century, were deeply implicated in eugenic thinking - that is, in
schemes for “improving the race” - which generally meant the white, European
race - by preventing the “unfit” from having children (“negative eugenics”) and
by encouraging, at the other end of the statistical curve, the presumably most
fit to marry one another and to produce large, fecund families (“positive
eugenics”). English eugenics, under the
influence of Charles Darwin’s cousin Francis Galton,
was dominantly positive. The science of
biometry at the University of London, which eventually brought forth the
world’s first department of statistics, was founded with the purpose of
identifying, tracking, and marrying off to one another (before it was too late)
members of families in which “hereditary genius” was presumed to run (see
Chapter 8 for more on Galton). The tendency in the United States, on the
other hand, was toward negative eugenics, backed by state laws preventing the
“shiftless,” “degenerate,” “feeble-minded,” and so forth from having offspring.
Here, the originating influence was
Charles Davenport, whose Eugenics Record Office was the seed from which sprang,
ironically enough, the great biological laboratory at Cold Spring Harbor on
Long Island (Kevles 1985).
R. A. Fisher (who, with Sewall
Wright, was a pioneer of population genetics - see Chapter 9) became Galton’s heir when he was appointed to the Chair of
Eugenics at the University of London. The
second half of Fisher’s The Genetical
Theory of Natural Selection (1930) is replete with assertions that human
beings were appointed by their own evolutionary success to take over the
evolutionary process (presumably from God) in order to beat back the melancholy
effects of the Second Law of Thermodynamics. Natural selection, followed by the discovery
of ever more powerful means by which human beings themselves can intervene in
and
direct biological processes, would sustain the
notion of progress that had been undermined by the realization in the last
third of the nineteenth century that the Second Law dictated the ultimate “heat
death” of the universe. On this
view, artificial selection - the breeding of human beings - would be carrying
on the work of God (Norton 1983; Hodge 1992).
Whereas the resistance of Wright and Dobzhansky to Fisher that we discussed in Chapter 9 was
grounded in purely scientific objections, neither was unaware of Fisher’s
ideological baggage, of which they were critical. To be sure, the heyday of eugenics in the
United States had for many reasons waned by the late 1930s, when the Modern
Synthesis began to take shape. Still,
the kind of Darwinism that the founders of the Synthesis envisioned,
institutionalized, and defended set out to secure its scientific credentials
partly by dissociating Darwin’s name from the eugenic enthusiasms and
distortions that had previously marred it. This effort, as well as a more diffuse shift
from “nature” to “nurture,” had been made all the more salient after the
revelation of eugenics’ ultimate fruit, the Holocaust.
Perhaps more importantly, Dobzhansky’s
theory of balancing selection seemed to provide good scientific reasons for
thinking that eugenics is both impossible and unwise (see Beatty 1994). For Dobzhansky, it
is impossible because genetic variation is plentiful in natural populations,
and is not bunched up at one end of the curve or the other. So the very project of
identifying “hereditary geniuses” and the “feeble-minded” makes no sense.
The opposite, so-called “classical”
view of population structure, according to which one can identify good and bad
outliers in populations whose genes are generally presumed to be fairly
homogeneous, was presupposed by eugenicists. Although not everyone who held this view was
a eugenicist, the last significant defender of the classical view was the
left-eugenicist Hermann Muller, who for a time abandoned the United States for
the Soviet Union, where he naively assumed that socialism would include an
effort to select for the traits of Beethoven, Goethe and, alas, Lenin, rather
than the likes of Jack Dempsey and Babe Ruth (Muller 1935; on Muller’s
persistent defense of the classical view of population structure, see Chapter 9;
Muller 1948).
But even if it were a genuine possibility, Dobzhansky suspected that any eugenic program would
be biologically unwise. Balancing selection
stresses the idea that certain genetic combinations that are fit in one
environment may not be fit in another. Having
one allele for sickle
cell anemia, for example, confers some
protection against malaria in Africa, where slash-and-burn agriculture had
produced swamps infested with malaria-carrying mosquitoes. The single-dose pattern thus spread through
the population under the control of natural selection, even at the cost of
killing off a predictable number of offspring with two sickling
alleles. In malarial environments,
the sickle cell trait is an adaptation. But
in environments in which malaria is not a problem, it is far from adaptive. Combine this fact with the equally relevant
fact that environments often change, especially through the powerful agency of
human beings themselves, and you might well infer that it would be counterproductive
to try to second-guess nature by declaring what traits are and are not fit, as
eugenicists did. Nature itself creates a
buffer against the effects of environmental change by preserving genetic
diversity in natural populations and by using that variation to produce, when
it can, adaptations that enable a species to deal with changing environments by
its behavioral, and in our case cognitive, flexibility. “Populations that live in different
territories, allopatrically,” wrote Dobzhansky, “face different environments. Different genotypes may,
accordingly, be favored by natural selection in different territories, and a
living species may respond by becoming polytypic” (Dobzhansky
1962, p. 222). Thus it would seem
that nature itself teaches us the best eugenic - or rather anti-eugenic -
lesson. A diverse, panmictic
population, and the democratic beliefs necessary to sustain it,
produce the most adapted, and adaptable, populations (Beatty 1994).
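The population-genetic logic behind this appeal to balancing selection can be stated compactly. What follows is the standard textbook model of heterozygote advantage, added here only as an illustration; the notation and the numerical values are ours, not Dobzhansky’s. Writing A for the normal allele and S for the sickling allele, suppose the genotype fitnesses are

w_{AA} = 1 - s, \qquad w_{AS} = 1, \qquad w_{SS} = 1 - t,

where s is the fitness cost that malaria imposes on unprotected homozygotes and t the cost of sickle cell anemia. Selection then holds the S allele at the stable equilibrium frequency

\hat{p}_{S} = \frac{s}{s + t}.

With illustrative values such as s = 0.15 in a malarial environment and t = 0.8, S is maintained at roughly 16 percent of the gene pool; where malaria is absent, s falls toward zero and the same allele is steadily eliminated. The fitness of the allele is thus relative to the environment, which is precisely the point Dobzhansky presses against eugenic attempts to declare traits fit or unfit once and for all.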
At the 1959 centennial celebration of the
publication of Darwin’s Origin of Species at the University of Chicago,
many speakers stressed that the Modern Synthesis, which was on this occasion
presenting itself in public as the scientific fruit of Darwinism, had shown
that nature prizes diversity, and that Darwin’s vision of man did not pose any
real threat to liberal, and even religious, values. Eugenics was off the table. A prominent theme at the Centennial celebration
was that culture itself is an adaptation - maybe the ultimate adaptation - for
dealing with changing, unpredictable environments and, more generally, for
avoiding adaptive dead ends. Far from
cosseting the unfit, then, as both Social Darwinians and eugenics enthusiasts
had argued earlier in Darwin’s name, culture, with all the different forms of
nurture that it signifies, is, in a real sense, the highest product of natural
selection. At the Chicago Centennial,
speaker after speaker (except for Julian Huxley - see Chapter 9, on
evolutionary progress) repeated the point that the event’s principal organizer,
the anthropologist Sol Tax, had hoped they would
state. Waddington
put that point as follows in introducing a plenary discussion on human
evolution:
Conceptual thought and language
constitute, in effect, a new way of transmitting information from one
generation to the next. This cultural
inheritance does the same thing for man that in the subhuman world is done by
the genetic system... This means that,
besides his biological system, man has a completely new ‘genetic’ system
dependent on cultural transmission.
Waddington
1960, pp. 148-149
Several years later, Dobzhansky
was arguing that “culture arose and developed in the human evolutionary
sequence hand in hand with the genetic basis which made it possible” (Dobzhansky 1962, p. 75). The way in which genes and culture are
implicated with each other can be seen in the selection pressure for shorter
gestation times that characterize human beings, which is evidenced in the
immature, paedomorphic features of human neonates
(hairlessness, for example) and which makes maturation radically dependent on
child care and a host of other cultural practices. Many mutually interacting causes are at work
here. Among the most salient are the fact that the birth canal was narrowed with the
evolution of an upright posture and, concomitantly, of a bowl-shaped pelvis,
thereby selecting for earlier, less painful, less fatal deliveries, as well as
the fact that early delivery requires massive care, and so involves the
development of bonds among parents. However
it may have happened, it would seem that in H. sapiens, and perhaps our
closest, extinct relatives, nature had brought forth a “promising primate,” to
borrow Peter Wilson’s title. As Wilson
puts it, in contrast to species that are adapted only to a single or narrow
range of environments, H. sapiens is
most generalized not only in its morphology, but also in
its total inventory of dispositions and capacities. It is both uncertainty and promise. Whereas in other species we may find, for
example, that the relation between the sexes for the sake of reproduction is
specified and particularly adaptive, in the case of humans we should find no
determined and species-specific mode of relationship, but rather generalized
features from which it is necessary to define specific modes... [This means that] an individual has little
advanced information that will help him coexist with others on a predictable
basis... If the human individual is to
coexist with other such individuals, he must arrive at some ground for expectation
and reciprocation. He must work out some
common form of agreement about actions and reactions.
P. Wilson 1980, p. 43
The “common forms” for sharing information of this
sort are what we call cultures. Hence,
on the conception of man as a polytypic, generalized, behaviorally plastic, enculturated sort of being, members of our species are
animals who realize their biological nature in and through culture (Grene 1995, p. 107; this is a more accurate way to
put the point than to speak, with Marshall Sahlins,
of man as a biologically “unfinished” animal [Sahlins
1976]). Given this general point of
view, the modern discipline of anthropology contained within itself two mutually
supporting branches: physical anthropology, which relies on natural selection
to deliver man into culture, and cultural anthropology, which appeals to
natural selection to free the notion of culture from the discredited
crypto-Lamarckian idea that culture itself is a site and form of biological,
evolutionary progress. It is true,
writes Dobzhansky, that “the genetic basis of culture
is uniform everywhere; cultural evolution has long since taken over” (Dobzhansky 1962, p. 320). But this does not mean that the genetically
controlled aspects of our behavior are fixed and deterministic, or that the
cultural aspect is free and variable. “This is the hoary misconception that ‘heritable’ traits are not
modifiable by the environment,” Dobzhansky says, “and
that traits so modifiable are independent of the genotype” (Dobzhansky
1962, p. 74). Nor does it imply
that natural selection ceased once human beings had developed culture. “It is a fallacy to think that specific or
ordinal traits do not vary or are not subject to genetic modification;
phenotypic plasticity does not preclude genetic variety” (Dobzhansky
1962, p. 320; see also p. 306). The
spread of sickle cell trait in Africa after the introduction of agriculture,
mentioned earlier, is a case in point.
Without an appreciation of the mid-century consensus
about how cultural and biological factors interact, it is hard to appreciate
the furor that greeted the publication in 1975 of E. O. Wilson’s Sociobiology
(Wilson 1975). Wilson’s application
of inclusive fitness, kin selection, and game-theoretical models for
calculating the “advantage” of genes to the study of ants - about which he knew
more than almost anyone - as well as to other social species would not have
raised such opposition if Wilson had not framed his argument, in both the early
and closing pages of his book, by making a number of provocative remarks about
human evolution. In these remarks,
Wilson proposed, in effect, to shift the boundary between the human, cultural,
or social sciences and biology. There is
more in human behavior that is under genetic control than had been appreciated,
he argued. And although some of the
impulses that undergird “division of labor between
the sexes, bonding between
parents and children, heightened altruism [only] toward closest kin,
incest avoidance, other forms of ethical behavior, suspicion of strangers,
tribalism, dominance orders within groups, male dominance above all, and
territorial aggression over limiting resources” can be resisted, the genes do
indeed “have us on a leash,” even if it is a long one (Wilson 1994, p. 332; for
“on a leash,” see Wilson 1978, pp. 167, 207).
What lies behind Wilson’s enthusiasm for these views
is Hamilton’s idea that genes will code for cooperative traits if this
“strategy” enhances their own replication rate (see Chapter 9). This theory did much to liberate the
Darwinian tradition from objections to the effect that, because it portrays
human beings (and perhaps other animals) as more competitive than we observe
them to be, Darwinism must be false. We
are, sociobiologists say, programmed to be
cooperative, at least with our genetic relatives. Predictably, philosophers’ debates soon broke out
about whether human morality, if it is an adaptation with a genetic
underpinning, remains normatively binding, or whether it is merely a trick our
genes play on us to get us to cooperate for their sake. Wilson exposed himself to some obloquy when,
like the philosopher Michael Ruse, he took the second view, as well as when he
and others used his theory to license adaptationist
stories - “just so stories,” Lewontin and Gould
called them - about all manner of highly variable human practices (Ruse and
Wilson 1985; on “just so stories” see Gould and Lewontin
1979). Still, by his own account,
Wilson was taken aback completely when a more incendiary objection was raised. Some highly respected evolutionary biologists,
including heirs of the version of the Synthesis celebrated in Chicago in 1959,
read Wilson’s proposal for human sociobiology as a shift back toward
genetic determinism, and to the spirit, if not the letter, of eugenics (Lewontin, Rose, and Kamin
1984; for Wilson’s reaction, see Wilson 1994, pp. 337-341).
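The idea of Hamilton’s mentioned above is usually compressed into a single inequality; we add it here only as a reminder of the formal claim, in the standard textbook notation rather than in Wilson’s or Hamilton’s own wording. An allele disposing its bearer to help another organism can spread when

r b > c,

where c is the reproductive cost to the helper, b the reproductive benefit to the recipient, and r the coefficient of relatedness between them. On this accounting, costly help directed at close kin, for whom r is high, can still raise the “inclusive fitness” of the underlying alleles; this is the sense in which cooperation is said to enhance the genes’ own replication rate.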
We cannot go into the ins and outs, or the rights and
wrongs, of the “sociobiology controversy.” [2] Wilson was more orthodox than he was
made to appear, sometimes even by himself. He did not believe that we are adapted only to
the environments in which our hominid ancestors arose. On the contrary, he argued (or at least was
soon to argue, in a book written with Charles Lumsden)
that gene-culture interaction means that cooperative genes keep competitive
culture on a leash! (Lumsden and Wilson 1981). The real issue
is not the importance of culture, but what kind of scene culture presents. For Wilson, it is a scene of potential
fanaticism. When he had asserted that
“the genes have us on a leash,” he claims to have meant that genes (a la
inclusive fitness) perform the admirable service of damping down the runaway
effects of culture, such as ritual cannibalism (Lumsden and Wilson 1981, p. 13).

2. We do not think that Ullica Segerstrale, who has studied the sociobiology controversy extensively, has been quite judicious enough in assigning praise and blame. Segerstrale 2000 is mostly a defense of Wilson against enemies on the right and the left.
Nonetheless, it is fair to say at the very least that
Wilson was somewhat unfortunate in his allies. Sociobiological
arguments, though not necessarily Wilsonian sociobiological arguments, were used, or abused, by Richard
Herrnstein and Charles Murray to suggest that legislation aimed at improving the
lot of minorities, such as the Head Start Program, would yield only marginal
improvement, so that the game might not be worth the candle (Herrnstein and
Murray 1994). And in a context in which
the mid-century stress on nurture was rapidly shifting back toward nature, the
anthropologist-turned-sociobiologist Napoleon Chagnon argued that among the Yanomami,
a very primitive and often violent tribe who live in the upper reaches of the
Orinoco and Amazon, the fitness of dominant males - their enhanced reproductive
output - was made possible by their ability brutally to commandeer women, kill
rivals, and as a result pass on their high-grade genes to a disproportionately
large number of offspring, thereby enhancing the fitness of the population as a
whole (Chagnon 1988). Chagnon imagined
that among the Yanomami, natural selection is free to
do its brutal work unimpeded by cultural constraints that had lowered the
fitness of civilized societies and undermined social practices that are
consistent with human nature. Old
notions die hard, especially if a novel, ultra-Darwinian conceptual framework
seems to breathe some new life into them.
In recent decades, the sociobiology controversy has
intensified the hitherto manageable tension between anthropology’s two sides. On the one hand, sociobiology has been trying
to pull cultural anthropology into its orbit, as in the case of Chagnon. On the
other, strong versions of cultural relativism have been used to discredit the
very possibility of value-neutral, scientific cultural anthropology and, in its
post-modern version, to dismiss the very idea of human nature as ideologically
contaminated. For our part, we have no
trouble acknowledging the existence of a human nature, characterized by a
species-specific array of highly plastic and variable traits, which, just
because they are plastic, forbid easy normative conclusions about what
behaviors, practices, institutions, laws, moral codes, and so forth are
“natural.” We do not
see how the fact that our species is
individuated by its position between two events of phylogenetic
branching - the “species as individuals” argument that we discussed in Chapter
10 - undermines our ability to identify our species by its traits (Grene 1990). While
we believe, too, that Darwinism suggests a naturalistic world view, and so
tends to pull the rug out from under religious dogmatism and “enthusiasm” in
the old sense, we fail to see how appealing to Enlightenment commonplaces,
ultra-Darwinian adaptationist stories, implicitly fixist views about the narrow range of long-gone environments
to which alone we are supposedly adapted, and suspicions about abilities,
including moral and cognitive abilities, that we clearly possess does much to
advance either anthropology or the explanatory scope of Darwinian thinking. Strenuous proclamations of materialist
metaphysics are not Darwinism. Rather,
they are attempts to use philosophical concepts to advance versions of
Darwinism that demand a skeptical, truncated, even eliminativist
view of human capacities for caring, reflecting, and thinking. Rather than telling us that we don’t really
have our faculties after all, an adequate Darwinian anthropology will provide
us with well-grounded accounts of the faculties we do have. These are, for the most part, the faculties we
think we have, informed by philosophical reflection on scientific discoveries,
including evolutionary discoveries (Grene 1995, pp.
109-112).
Human consciousness is unique. Clearly, our form of consciousness involves
what Daniel Dennett calls “the intentional stance” (Dennett 1987). Mind is minding, attending to, and acting in a
specific environment in accord with what steps might fulfill, or help fulfill,
our hopes, dreams, desires, plans. First, our consciousness is also social. Second, our symbol systems, especially
language, allow us to pursue the curious mix of cooperation and competition
that is our species’ “form of life.” Third, our consciousness is reflective. Members of our species are able not only to
plan what they want to do in specific circumstances, by cooperating with others
or scheming against them, but have the ability to think about their own
thoughts in reflective privacy.
The reflective cast of our minds may well be a
necessity if we are to think in a practical way about the future; good plans
fold back on themselves as a way of anticipating how things might go,
monitoring their progress, and taking corrective measures after missteps. Language is clearly both a presupposition of
this kind of reflection and a means
by which we hold our ideas “in mind.” But the reflective nature of our consciousness
is so different from what, to the best of our knowledge, we observe in other
species, and so ethereal in its qualities, that it has suggested to many philosophers the notion that mind is independent of
nature and that it has a contingent relationship to the brain. This independence has been conceived either
functionally, as Aristotle had it, or, more radically, as substantial, as
Descartes notoriously thought. Either
way, dualism is the result. Those who
take a non-natural view of mind generally assume, too, that practical reason -
the thinking in which we engage in order to affect our environment, including,
prominently, the other human beings with whom our fates are so closely
entangled - is a derivative use of what is essentially a contemplative,
reflective sort of consciousness.
There can be little doubt that the diffusion of
evolutionary thinking since the middle of the nineteenth century has initiated
a shift away from anti-naturalism about mind. Nobody (religious folk aside) wants to be a
straightforward dualist today. As we saw
in Chapter 7, the idea that mind is a “secretion” of the brain occurred to
Darwin when he was quite young. Fearing
scandal, he tucked the thought away in the privacy of his reflective mind. Since then, philosophers who favor a
naturalistic stance with respect to mind have come up with a large array of
ways in which mind might be non-contingently related to brain. We do not need to pursue these hypotheses
here. The point we wish to make is that
until recently, surprisingly few of these philosophical hypotheses have been
informed by detailed biological theory, especially evolutionary theory. It is this connection - the connection between
evolution and mind - that will be the focus of the few remarks (most of a
general, and sometimes a cautionary, nature) we will make here about a much
larger topic.
One obvious way to get evolutionary insight into the
human mind is to try to imagine what biological function or functions its
various aspects serve. In precisely this
spirit, William James, who was among the first to think of himself as a Darwinian
psychologist, argued that “unless consciousness served some useful purpose,
it would not have been superadded to life” (James 1875, p. 201; see Richards
1987, pp. 427-33). This question places
the focus back onto the primacy of practical (and technical) reason, and on the
embeddedness of thinking animals like ourselves in
environments, both natural and social, in which we must figure out what to do. That is all to the good (although pragmatists
who were influenced by James, notably Dewey, perhaps went too far in insisting
that the practical nature of our intelligence means that the
possibility of objective knowledge for its own
sake, the greatest triumph of reflective reasoning, is merely an ideology
common among those who live idly from the labor of others [Dewey 1925]).
From this useful starting point, it is easy to
conclude that “the intentional stance” is adaptive. It is tempting, in fact, to think of it as an
adaptation - as a way of representing the world to ourselves that has been
produced by natural selection as a fitness-enhancing trait that enables us to
make our way around in a complex, often unpredictable environment. Dennett, for one, asserts this (Dennett 1987).
In its first stirrings, this trait would
have enabled our hominid ancestors to deal with the fluctuating environments in
which the genus Homo evolved. Concerted selection (to the point of fixation
in all normal members of the species) for a planning-oriented form of
consciousness, combined with room for private reflection, would have resulted
from the fact that hominid species, especially our own, had to respond to the
planning and cunning of their fellows, who form the most prominent part of the
hominid’s environment and who can be intentionally deceptive. A plausible story. But we must be careful. It is also possible that the ability to
represent past, present, and future states of affairs to ourselves
in the context of satisfying desires is a by-product of other adaptations,
rather than an adaptation in and of itself.
Certainly, the purely reflective side of our
intentional consciousness - the side on which Descartes concentrated, and that
has traditionally been viewed as necessary for scientific knowledge - seems to
go beyond its practical utility. That
explains why naturalistic philosophers of mind who think that our form of
consciousness is an adaptation have tended to discount, or even
eliminate, the higher dimensions of reflection. Dennett makes this case by combining his arguments
about the evolution of the intentional stance as a fitness-enhancing trait with
genic selectionism of a Dawkinsean cast (Dennett 1995). [3] The genes “use” intentional
consciousness as an interactor in order to get themselves
multiplied more successfully. Indeed,
actions mediated by intentions are, by their very nature, a ruse played on us
by our genes, like the urge to be moral that we discussed above (Dennett 1987;
1991; 1995).
3. Among contemporary naturalistic
philosophers of mind, Paul Churchland is more eliminative
than Dennett; once mental phenomena are explained, most of the object to which
they seemingly refer disappears. Jerry
Fodor is less adaptationist
than Dennett. Neither, however, appeals
as extensively as Dennett to evolutionary arguments. Hence our concentration on
Dennett’s views.
Dennett supports this position by appealing to the contemporary penchant
for modeling consciousness on computers. (The penchant is an old one; Descartes and
Locke talked about the mind as a spring mechanism.) On this view, consciousness is like the
monitor of a computer. It displays
representations before our inspecting mind in a sort of private theater, a la
Locke. This being so, we might well be,
as Dennett suspects we are, little more than “dumb robots,” machines that are
very good at passing Turing tests (Dennett 1987; 1991).
For us, the cost of understanding the intentional
stance on these terms is high. Our minds
are certainly adapted to deal with our environments by way of ideas. But our environments are largely cultural, and
the orienting role of means-end reasoning, and of consciousness and
self-consciousness more generally, evolved because the tie that binds us to the
cultural world as agents, caregivers, competitors, speakers, and thinkers
affords us direct (rather than representational) access to the environments in
which we act responsively and, ultimately, responsibly (see Chapter 12 for a
discussion of the relevant epistemological points).
If we suppose that our minds are adaptations, we must
also ask what environments they are adapted to. There has been a marked tendency among reductionistic (and especially eliminativist)
naturalists to think that our minds are adapted to the Pleistocene environment
and social structure of our hominid ancestors. The reason alleged is that too little time has
passed since the dawn of civilized life for adaptations to our present form of
life to take hold. “A few thousand years
since the scattered appearance of agriculture,” write Leda Cosmides,
John Tooby, and Jerome Barkow,
“...is... less than 1% of the two million years [since “Lucy” that] our
ancestors spent as Pleistocene hunter-gatherers” (Cosmides
et al., in Barkow et al. 1992, p. 5). The first generation of sociobiologists thought that the adaptations in question
were specific behaviors to which we are prone, leaving the impression that we
are full of natural urges (to male promiscuity, female coyness, and the like)
that must be suppressed if we are to live in contemporary societies. Richard Alexander modified this behaviorist
penchant somewhat by arguing that our adaptations are situation-specific rules
of action whose underlying imperative is: “maximize inclusive fitness” by
whatever means possible (Alexander 1979). More recently, a successor program to
sociobiology known as Evolutionary Psychology (EP)
has amended psychological adaptationism still further.
Advocates of EP do indeed agree with their colleagues
in assuming that “the evolved structure of the human mind is adapted to the way
of life of Pleistocene hunter-gatherers, and
not necessarily to our modern circumstances” (Cosmides
et al., in Barkow et al. 1992, p. 5). (Implicit in this argument is a markedly
gradualist vision of the evolution of species-specific traits, which may entail
underestimating the rapid evolution that can be brought about by changes in
developmental timing and differential expression of the same genes.) But what EP’s supporters believe to have
emerged are a set of capacities that, because they are not reducible to
specific hard-wired behaviors, or even to rules of action triggered by specific
situations, involve the mediating role of cognition. This shift from behaviorism to cognitivism is a feature of contemporary psychology. But the cognition in question is viewed by
EP’s advocates as adaptationist because, together
with many contemporary neurobiologists, they regard the brain as a collection
of dedicated modular structures each of which is adapted to deal with a
particular set of problems (Tooby and Cosmides 1992, in Barkow et al.,
p. 97). There are, we are told, distinct
modules for color vision, locomotion, language-acquisition, motor control,
emotional recognition, and so forth. Each
such module is asserted to have been brought into existence by natural
selection (usually interpreted in a gene-selectionist
way). Although each mental function is
supposed to have been optimally adapted to a hunter-gatherer life style rather
than to the kinds of environments that have come onto the scene since the
emergence of agriculturally based civilization, these functions still operate well enough
in our world for us to get by (though at the cost of our making fallacious
inferences on a fairly regular basis).
The evolution of the capacity for language illustrates
the style of this sort of inquiry, as well as its difficulties. Judging by the air of conviction with which
various people put forward their views about the origins of the ability of
neonates to acquire language, one might infer that confidence reigns in this
area. However, the exact opposite is the
case. Language acquisition by H.
sapiens remains a difficult, largely speculative subject. Most contemporary controversies about it take
place against the background of the revolution in linguistics initiated by Noam Chomsky. Language, according to Chomsky, is a
rule-governed activity. It involves a
series of syntactic transformations of simple noun-verb phrases by something
like a computer program whose most basic form, its “machine language,” is
presumably inscribed into the neurological architecture of our brains (but not
into the brains of chimpanzees, which, even though they share some of our
emotional and communicative life [De Waal 1996], lack
the capacity for syntactical transformations). For his
part, Chomsky assumes that this capacity
evolved somehow, and lets the subject go at that. He thinks “it is perfectly safe to attribute
this development to ‘natural selection,’ so long as we realize that there is no
substance to this assertion, that it amounts to nothing more than a belief that
there is some naturalistic explanation for these phenomena” (Chomsky 1972, p.
97). Gould says something only slightly
more substantive when he asserts that language capability (sensu Chomsky) is a side product of the expansion and
connectivity of the brain, what he calls a “spandrel” (Gould 1987). It is precisely this claim that has provoked
advocates of EP to argue that linguistic ability must be, straightforwardly,
an adaptation.
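To make concrete what the earlier description of language as a “rule-governed activity,” generated by “something like a computer program,” amounts to, here is a minimal sketch in Python. It is our illustration only, not Chomsky’s or Pinker’s formalism, and the particular rules and vocabulary are invented for the example. It generates sentences from a tiny phrase-structure grammar; the recursive rule for prepositional phrases is what gives even this toy system an unbounded set of possible sentences.

import random

# Toy illustration only: a random generator over a miniature phrase-structure grammar.
# Each non-terminal maps to a list of possible expansions (sequences of symbols).
GRAMMAR = {
    "S":   [["NP", "VP"]],                      # sentence -> noun phrase + verb phrase
    "NP":  [["Det", "N"], ["Det", "N", "PP"]],  # recursion enters via PP -> P NP
    "VP":  [["V", "NP"], ["V"]],
    "PP":  [["P", "NP"]],
    "Det": [["the"], ["a"]],
    "N":   [["child"], ["ape"], ["language"]],
    "V":   [["sees"], ["imitates"]],
    "P":   [["near"], ["with"]],
}

def expand(symbol):
    """Recursively rewrite a symbol until only terminal words remain."""
    if symbol not in GRAMMAR:          # terminal word: nothing left to rewrite
        return [symbol]
    expansion = random.choice(GRAMMAR[symbol])
    words = []
    for part in expansion:
        words.extend(expand(part))
    return words

if __name__ == "__main__":
    for _ in range(3):
        print(" ".join(expand("S")))

Nothing in this sketch is meant as a model of the brain or of any actual grammar; it merely illustrates the sense in which a small set of rules can generate an open-ended set of noun-verb constructions, which is the property whose evolutionary origin is in dispute below.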
Steven Pinker is a well-known advocate of the adaptationist view of language ability.
He holds that the brain is modular (rather than a “general purpose”
computational device of the sort that was presupposed by most
mid-twentieth-century thinkers, including the architects of the Modern
Synthesis), that its various modules must each perform a distinct function, and
that each is a biological adaptation. From these premises, Pinker concludes that
there must be an evolved module for language competence - a “language
instinct” - that grounds what happens in the brain in what happens in the genes
(presumably a large number of them) (Pinker 1994). “There must have been a series of steps
leading from no language at all to language as we now find it,” write Pinker
and Paul Bloom, “each step small enough to have been produced by a random
mutation or recombination, and each intermediate grammar useful to its
possessor” (Pinker and Bloom 1990, reprinted 1992, p. 475, our italics).
This is a rather a priori argument. For Pinker, indirect, but potentially
empirical, support for it comes from the presumed regularity of syntactical
combination and permutation itself, as well as the universality with which it
is acquired by human beings (Pinker and Bloom 1990, reprint 1992, p. 463). According to Pinker, these features suggest design, and (assuming naturalism) suggest, too, that natural
selection is the designer (Pinker and Bloom 1990, reprint 1992, p. 463). Yet from a view of natural selection in which
natural selection is not a designer, but a process that takes the place of
design, this is the very problem. According
to the Modern Synthesis, traits that are selected typically vary a great deal
from environment to environment (Piattelli-Palmarini
1989). Language competence does not. It is an all-or-nothing affair, occasional
defects notwithstanding. It is possible,
of course, to argue that language acquisition is an evolved adaptation even if
it shows no variation. One might
hypothesize, for example, that the emergence of a cultural environment
creates enormous selection pressures in favor of neurological capacities for
participating in that culture, including symbolic and ultimately linguistic
ability; those lacking this capacity would be so disadvantaged that they would
not have survived at all (or, if one prefers a group-selectionist
view, sub-populations that did not do business in this way would have been
crushed by those that did). This
hypothesis has encouraged some to think of language acquisition as an example
of the so-called “Baldwin effect,” which treats culture as an environment that
creates selection pressures that push whatever genetic variation happens to
crop up in the direction first pointed by culture. Dennett argues for this view (Dennett 1995). So does Terrence Deacon (Deacon 1997). In Deacon’s version, however, there are no
genes directly for language acquisition, pace Pinker. Instead, changes in gene frequencies favor
traits that indirectly support language (Deacon 1997). So one might accept an evolutionary account of
language acquisition, but reject some aspects of EP (see Weber and Depew 2003).
A second difficulty is closely related. The necessity of telling an adaptationist story about the acquisition of language
competence described as “designed” will tend to single out particular causes -
“mutations,” for example, that change the position of the larynx, making
vocalization (as well as choking) easier. The difficulty is that these scenarios, even
if they are combined and given a sequential order, probably underestimate the
complex interaction and mutual feedback among a whole variety of factors in the
relatively sudden emergence of language. It is possible, for example, that if we project
at least minimal symbolic capacity back to some of our closest hominid
ancestors, the enlargement of the brain and the acquisition of intelligence
(means-end reasoning, mostly) may be as much effects of language-acquisition as
its causes (Deacon 1997). EP discounts
this possibility by tacitly supposing the existence of a prior urge to
communicate on the part of an already big-brained, intelligent hominid - an
urge that until the larynx descended or some other triggering event occurred,
had been “dammed” up or otherwise held back. A similar picture is implicit in the
presumption that at the exact point where rules, neurons, and genes are hooked
together - EP’s “pineal gland” - there exists no particular language, such as
English or Swahili, but the universal (and hypothetical) language that
Chomsky’s MIT colleague, the philosopher Jerry Fodor, calls “mentalese” (Fodor 1975). But the very notion of “mentalese”
reveals the persistence of an old, not-very-Darwinian view of the mind as a
sort of theater in which
pre-linguistic representations of the external
environment are played before an audience of one. It is this picture that gives rise to the presumption
that non-speaking hominids are just bursting to say something, but, lacking a
requisite morphological feature, cannot. This is not a very coherent idea; it is, in
fact, simply a recycling of the old “idea idea” of
Locke.
The Human Genome Project and Genetic Biotechnology
In 1987, Robert Sinsheimer, who had once worked on determining how the genetic
code specifies proteins; Renato Dulbecco, president of the Salk Institute; and
Charles De Lisi, an administrator at the Department of Energy (DOE) and a
former administrator at Los Alamos National Laboratory (where a good deal of
genetic data, originally from post-war surveys of Hiroshima and Nagasaki, had
been stored), proposed that every single base pair in the human genome be
sequenced. Their proposal initially struck some evolutionary
biologists and medical researchers as grandiose, utopian, and misguided in ways
that, from these opposing perspectives at least, were thought to be typical of
physicists and their fellow-travelling molecular
biologists (Lewontin 1991b; Hubbard and Wald 1993). At the
time, very little sequencing had actually been done, and what had been done was
painstakingly slow. Why consume massive amounts of scientific manpower in a
long effort to decode the entire genome when only a small percentage of it (the
part that codes for proteins) makes sense, and when, even in the part that
makes sense, it might be more useful to concentrate on areas of the chromosomes
where there is some evidence of genetic diseases that might, conceivably, be
treated by somatic gene therapy?
Nonetheless, legislation authorizing the Human Genome
Project (HGP) was approved by Congress in 1987. It assigned the information-processing aspects
of the program to the DOE, which would develop automated gene-sequencing methods
and computational programs for analyzing what eventually would turn up, and gave
the initial mapping, sequencing, medical, and policy-making roles to the
National Institutes of Health. James D.
Watson, of DNA fame, would head the project. Along the way, scientists who found shortcuts
defected from the "government" program and went into business on their own. Largely as a result of rapid technological
development, as well as of a race that set in between the private and public
versions of the project, it was announced
in the summer of 2000 (by no less a pair of
personages than the President of the United States and the Prime Minister of
Great Britain) that a "rough draft" of the entire human genome was about to be
published (simultaneously in Science and Nature; International
Human Genome Sequencing Consortium 2001; Venter et al. 2001).
The popular press disseminated the news in pretty much
the same terms that its early champions had used when they were seeking
funding. "The Book of Life" - the title of a 1967 book by Sinsheimer
(Sinsheimer 1967) - "is now opening," it was said. Human
identity was at last to be revealed. We
would learn not only what makes human beings different from other species, but,
in the opinion of the pioneering gene sequencer Walter Gilbert, how each of us
differs from others. One will be able,
Gilbert wrote, “to pull a CD out of one’s pocket and say, ‘Here’s a human
being; it’s me’.... Over the next ten years we will understand how we are
assembled [sic] in accord with dictation by our genetic information” (Gilbert
1992, p. 97). [4] In
the same spirit, a new era in medicine was proclaimed: an era of genetic
medicine that would prove even more significant than the nineteenth-century
germ theory of disease, which had dominated twentieth-century medicine. Genetic diseases caused by point mutations
that interfere with the cell’s ability to make a particular protein would be
cured either by therapy aimed at repairing the somatic cells of individuals,
or, eventually, by elimination of the defect from a whole population through
the manipulation of germline cells (egg and sperm). Beyond that, enthusiasts expected that
multiple-locus traits would in time be identified on the chromosomes and,
where defective, changed, since complex traits were assumed simply to be
compounds of basic gene-protein units. These traits would include behavioral
tendencies such as alcoholism and homelessness (Koshland 1989). (In the notion
that there might be genes for homelessness we cannot help hearing echoes of
the "shiftlessness" that early twentieth-century eugenicists such as Charles
Davenport thought was hereditary in some families.)
4. Gilbert might have been on slightly more solid ground about human diversity
- about how we are each ourselves because we each have a unique genome - if
the HGP had been complemented by the Human Genome Diversity Project (HGDP)
proposed by the population geneticist Luigi Cavalli-Sforza. The genomes that
were actually sequenced by the HGP came from very small populations of
Europeans. The HGDP would restore the proper perspective by demonstrating the
breadth of genetic polymorphism within and across human populations that had
been predicted by Dobzhansky and others. Of greatest interest would be small,
isolated populations, which are presumably the most genetically distinct.
However, the HGDP was never approved at the international level, in part
because officials and field anthropologists felt that identifying the genetic
traits of particular populations would in practice call forth the very racism
and prejudice that the project was designed to refute in theory.
Unfortunately, this might be true. Particular genetic markers often do
correlate with particular traits of perceived races. We are grateful to
Dr. Jeffrey Murray of the University of Iowa for helpful discussion of this
difficult issue.
Behind this sort of talk about genetic medicine lies
the fact that since the 1970s human beings have begun to acquire skills in
manipulating the genetic material directly, rather than by using selective
breeding, the relatively slow mimicry of natural selection that our species has
deployed ever since the beginning of agriculture. (The development of cloning goes hand in hand
with genetic engineering; one of its main jobs is to provide the reliably identical
platforms that are required for experimenting on "designed" organisms.) Since the development of restriction enzymes and ligases -
agents that can be used to snip portions of DNA out of genomes and to splice
them into other portions of the genome, or into the
genomes of other organisms - it has become possible to think that biology can,
for the first time, join physics and chemistry as a “technoscience.”
This has in turn made it possible to
imagine life as something other than a lottery (with all the problems about
privacy and informed consent that uncovering what was hitherto hidden and
chancy brings with it). Life, it would
appear, is something that can now be engineered in an industrial sense, even to
the point where the evolution of our own and other species might be directed. Perhaps this is what Fisher had in mind when
he thought of eugenics as a way of taking over God’s work.
We suspect that the presumed link between the
information that is being “revealed” by the HGP and the promises of genetic medicine
could not have been forged as quickly or confidently as it has been if the
lived body had not already come to be viewed by many as a “printout” from a
“genetic program.” The mere fact that
moving genes from one organism to another, sometimes of a different species (“transgenics”), is popularly imaged as a matter of uploading
and downloading information testifies to the truth of this claim. To a large extent, this picture is the result
of the triumph of molecular genetics. The conception of the gene presumed by the
original advocates of the HGP is the molecular gene. And the notion of the organism as a “printout”
from a genetic program represents a convergence of cybernetic, and ultimately
computational, talk with the Central Dogma of Molecular Biology, which was
originally little more than a working maxim to the effect that information
flows from genes to proteins and from proteins to organisms and their traits,
and not the other way around.
At the same time, it should also be noted that ultra-Darwinian
versions of natural selection, which ascribe a good deal of causal agency to
“selfish” genes, which construe adaptations as discrete modules of
antecedently-identified mental, behavioral, or physical functioning, and which
minimize or deny the difference between design and evolution, have done little
to disturb, and indeed much to encourage, what is at root a technological
vision of the living world. Natural
selection is seen as mixing and matching genes in the spirit of a genetic
engineer who uses a computer to model what Dennett calls “searches through
design space” (Dennett 1995). Dobzhansky’s
famous remark that “nothing in biology makes sense except in the light of
evolution” is well taken. But in recent
decades, Dobzhansky’s maxim has been given a twist. Rather than appealing to the contingencies of
the evolutionary process as a constraint on Promethean ambitions - eugenics in Dobzhansky’s day, “designer babies” in ours - evolution is
now being asked by those who have construed natural selection as a designing biotechnician to bless the transition to the coming age of
biotechnology. In the resulting
biotechnological vision, the inherently complex relationships among genes,
traits, fitness, and diverse environments are displaced by the homogenizing,
standardizing, “quality control” tendencies on which genetic engineering
depends, and which it is explicitly aimed at producing and reproducing. It is no doubt true that we are entering into
an era in which genetic medicine can be expected to do a great deal of good. But genetic medicine will not, we suspect, be
able to achieve its promise so long as it is seen as licensed by a simplistic,
utopian (or, depending on your point of view, dystopian) view of organisms as
technological objects - a view that contrasts with the main line of thinking
about living things that we have followed in this book.
Luckily, the preliminary draft of the sequenced human
genome contained surprises that might well disconcert those holding any such
technological view of life. Originally,
it had been estimated that there might be as many as 100,000 human genes
(Gilbert 1992, p. 83). It has turned out
(in part because of definitional shifts in the meaning of the term “gene”) that
there are only about 30,000. To make
matters even more interesting, it has also turned out, on the basis of
sequencing the genomes of other species - efforts that had been in part funded
along with the HGP for comparison purposes - that most of the 30,000 genes we
do have are also possessed by most other species: not simply by chimpanzees,
but by fruit flies and flatworms as well. This turn of events has provoked many of those
who had convinced themselves and others that
all specific and individual differences must
be encoded in the genes - that the HGP would “tell us who we are” - to lay
plans for an even bigger project in “proteomics.” In comparison to the relative simplicity of
the DNA molecule, perhaps protein sequences contain enough information to
capture what makes each of us be ourselves. A more reasonable possibility is that the HGP
has actually been bringing into view not the simple molecular gene of the 1950s
and ‘60s, but, perhaps in spite of itself, the developmental genes that
constitute the fundamental architecture of life (Keller 2000; Gilbert 2000;
Moss 2002). As we noted in Chapter 9,
changes in how these highly conserved gene sequences are expressed - in
their timing, in increased or decreased quantities of the protein products they
specify, in chemical marking of DNA itself, and in a myriad of other subtle
processes - cause evolutionary change, and mark this species off from that. Allelic variations in genes (as well as in the
much larger part of the genome that codes for no proteins at all) do
characterize each individual. Population
genetics tracks changes in these variations. Still, there is little or no reason to think
of these bits of DNA as containing a coded and programmed identity for each of
us, or to think that slow, point-for-point changes are the proximate causes of
large-scale evolutionary change, or to think of genes as selfish “more makers”
rather than as “developmental resources” that interact with cellular processes
and environmental signals (not least, social cues and individual reflections)
in the process by which we come to be ourselves.
Most professionals know better. Still, from now on, philosophers of biology and philosophically alert biologists must think about how evolutionary and developmental biology can inform, and if necessary constrain, an overly technoscientific approach to living things that is widely disseminated in contemporary societies.