The Competitiveness of Nations
in a Global Knowledge-Based Economy
May 2003
Nathan Rosenberg
Exploring the Black Box: Technology, Economics and History
Chapter 1: Path-dependent aspects of technological change
Cambridge University Press, Cambridge, U.K., 1994, pp. 1-6
Index
IV – Scientific Progress Dependent on Technological Capability
V – Technological Determination of the Scientific Research Agenda
VI – Path Dependency of Economics, Science & Technology
It is no longer necessary for an economist to
apologize when introducing the subject of technological change. That is, in itself, a (modest) cause for
celebration, since the situation was very different as recently as forty years
ago. At that time, economics had still
not been awakened from its dogmatic slumber on the subject, and was content to
treat - or perhaps a more appropriate operational verb would be “to dismiss” -
technological change purely as an exogenous variable, one that had economic
consequences but no visible economic antecedents. Although sympathetic readers of Marx and
Schumpeter had learned to attach great importance to technological change as a
major impulse - perhaps the major impulse - in generating long-term
economic growth, such an awareness had not yet rubbed
off on the dominant academic traditions of western economics.
Today, the economic importance of technological change
is widely acknowledged. There cannot be
many economists who would dissent from the view that the growth of
technological knowledge is fundamental to the improvement of economic
performance. In addition, it is widely
accepted that, in advanced industrial economies, the growth in technological
knowledge relies increasingly, although in ways that are never clearly
specified, on science. [1]
I have had valuable discussions of the issues treated in this chapter
with Stanley Engerman, William Parker, and Scott
Stern. I owe a particular debt to Paul
David for his gentle but persistent encouragement in formulating my thoughts
about path-dependent phenomena. The
chapter draws, occasionally, upon two earlier papers: “How Exogenous is
Science?”, chapter 7 of Nathan Rosenberg, Inside the Black Box, Cambridge
University Press, Cambridge, 1982, and Nathan Rosenberg, “The Commercialization
of Science by American Industry,” in Kim Clark, Robert H. Hayes, and
Christopher Lorenz (eds.), The Uneasy Alliance, Harvard Business School
Press, Boston (MA), 1985.
1. An interesting index
of this lack of clarity is that, for many years, the most valuable single source
of quantitative information on technological matters was (and still is) the
National Science Foundation’s biennial publication, Science Indicators. Only since the publication of the 1987
issue was it finally acknowledged in the title that the volume is at least
equally concerned with matters pertaining to technology. Since that year it has borne the title Science
and Engineering Indicators.
Thus, it seems reasonable to pose two questions: what
can be said about the manner in which the stock of technological knowledge
grows over time? And, to what factors is
it responsive, and in what ways?
In dealing with these questions I will argue that the
main features of the stock of technological knowledge available at any given
time can only be understood by a systematic examination of the earlier history
out of which it emerged. There is, as I
intend to show, a strong degree of path dependence, [2] in the sense that one cannot deduce the direction or path of the growth of technological knowledge merely by reference to certain initial conditions. Rather, the most probable directions for
future growth in knowledge can only be understood within the context of the
particular sequence of events which constitutes the history of the system.
Further, although I believe that economic factors have
powerfully shaped the growth of that knowledge, I also believe that there is no
prospect of adequately accounting for the content of that knowledge by
any economic model. In this respect
economic theory is not, and never can be, a substitute for history,
although it is obviously an invaluable complement. Economic forces powerfully influence the
decision to undertake a search process, but they do so in ways that do not
predetermine the nature and the shape of the things that are found. The findings of scientific research, and
their economic consequences, remain shrouded in uncertainty. They reflect certain properties of the
physical universe that are uncovered by the search, and not the economic goals
that were in the minds of the decision-makers who allocated resources to the
research process in the first place. [3]
2. The most rigorous formulation of path-dependent phenomena in
terms of their relevance for history is by Paul David. See, in particular, “Path Dependence: Putting
the Past Into the Future of Economics,” unpublished
manuscript, Stanford University, July 1988.
Elsewhere David has stated: “[I]t is sometimes not possible to uncover
the logic (or illogic) of the world around us except by understanding how it
got that way. A path-dependent sequence
of economic changes is one in which important influences upon the eventual
outcome can be exerted by temporally remote events, including happenings
dominated by chance elements rather than systematic forces. Stochastic processes like that do not converge
automatically to a fixed-point distribution of outcomes, and are called non-ergodic. In such
circumstances ‘historical accidents’ can neither be ignored, nor neatly
quarantined for the purposes of economic analysis; the dynamic process itself
takes on an essentially historical character.” Paul David, “Understanding the Economics of
QWERTY: The Necessity of History,” in William N. Parker (ed.), Economic
History and the Modern Economist, Basil Blackwell, Oxford, 1986, p. 30. See also Brian Arthur, “Competing
Technologies, Increasing Returns, and Lock-In by Historical Small Events,” Economic
Journal, 99 (1989), pp. 116-131.
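To make the notion of a non-ergodic process concrete, the following minimal sketch (an editorial illustration in the spirit of the urn models associated with Arthur, not a reproduction of his analysis) simulates the adoption of two competing technologies, where each new adopter chooses in proportion to current market shares. Identical initial conditions yield different long-run outcomes across runs; the outcome is decided by the particular early sequence of chance adoptions.

import random

def simulate_adoption(steps=10_000, seed=None):
    """Polya-urn-style sketch of path dependence: each adopter picks
    technology A or B with probability equal to its current share of
    the installed base, so early chance events are reinforced rather
    than averaged away over time."""
    rng = random.Random(seed)
    a, b = 1, 1                       # one initial adopter of each technology
    for _ in range(steps):
        if rng.random() < a / (a + b):
            a += 1                    # adoption probability rises with the installed base
        else:
            b += 1
    return a / (a + b)                # long-run market share of technology A

# Identical starting points, different histories, different outcomes:
# the process does not converge to a single fixed-point distribution.
print([round(simulate_adoption(seed=s), 2) for s in range(5)])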
3. As Arrow once
succinctly put it: “European desire for spices in the fifteenth century may
have had a good deal to do with motivating Columbus’ voyages, but the brute,
though unknown, facts of geography determined what in fact was their economic
results.” Kenneth Arrow, “Classificatory
Notes on the Production and Transmission of Technological Knowledge,” American
Economic Review Papers and Proceedings (May 1969), p. 35.
Of course, it would not be quite correct to say that
economic analysis has ever totally ignored the subject of technology. Rather, an explicit examination of technology
and knowledge about technology has simply been suppressed by introducing
certain assumptions, often only implicit, into the theory of the firm. Central to that theory, and therefore at the
foundation of modern microeconomics, has been the assumption of a given set of
tastes and some given stock of technological knowledge. This technological knowledge is (somehow)
embedded in a set of production possibilities, a collection of known
alternative combinations of factor inputs that may be employed in producing a
given volume of output. Given this
knowledge of tastes and technology, the firm then determines its optimal
behavior, including the choice of technique, through the explicit consideration
of factor prices. The implications for
resource allocation of changes in technology or in factor prices can
then be readily examined within this static equilibrium framework.
For many purposes this would seem to be quite enough
to get the analytical ball rolling. [4] If one is interested only in exploring the implications of maximizing
behavior, one is surely entitled to say that it is not a matter of primary
concern to that analysis to know how any particular state of the world came to
be that way. And exploring the
implications of maximizing behavior subject to certain constraints is,
obviously, a legitimate intellectual exercise.
I want to suggest that, even at this level, serious
problems arise - not, of course, as a matter of pure logic, but as a matter of
the potential explanatory usefulness of an analysis built on such premises. Moreover, the problems are not “merely”
epistemological, but are central to the question of how to understand the level
of technological competence that prevails in an economy at any particular time.
Why, to begin with, is it plausible to assume that a
firm would know a range of technical options that are located far from
the one that is presently employed? Once
it is recognized that the acquisition of new technological knowledge is a
costly process, why should resources be expended in acquiring knowledge that is
not - or is not expected to be - economically useful?
One answer would rely on drawing a sharp distinction
between the state of scientific knowledge and the state of technological
knowledge. Such an answer might argue
that a given level of scientific knowledge will always
4. Not for the purposes of Joseph Schumpeter, though. For a discussion of Schumpeter’s criticism of
neoclassical economics, see chapter 3 of this book pp. 47-61.
illuminate
a wide spectrum of technological options, and that these are precisely the
options represented on a production isoquant; that
is, the production isoquant simply identifies the
technological options that are made available by the existing stock of
scientific knowledge. This is
essentially the position that was argued by W.E.G. Salter in his valuable book,
Productivity and Technical Change.
At one level this position is totally plausible. However difficult it may be to speak of the
state of scientific knowledge as if it were some quantifiable magnitude, surely
it is meaningful to say that the body of presently available scientific
knowledge imposes certain constraints on what is technologically
possible and also, by the same token, permits a range of technological
alternatives to be taken up within the frontiers imposed by that knowledge. [5] As a statement about the scientific and technological realms, this is
obviously useful. As a statement that
has relevance for the economic realm, however, it is distinctly
problematical.
Perhaps it is helpful to invoke a distinction that
Boswell offered to his readers in his Life of Johnson: “Knowledge,” he
said, “is of two kinds. We know a
subject ourselves, or we know where we can find information upon it.” Precisely. Science will often provide the capability to
acquire information about technological alternatives that we do not presently
possess, but science does not make the acquisition of this information
costless. Indeed, it may for certain
purposes be useful to think of science as a guide for exploring the
technological realm, and it is also plausible to believe that, ceteris
paribus, the greater the stock of scientific knowledge, the lower will be
the cost of acquiring necessary, but presently unavailable, information
concerning technological alternatives. But
I suggest that the starting point for serious thinking about technological
knowledge is the recognition that one cannot move costlessly
to new points on the production isoquant, especially
points that are a great technological distance from the present location of
productive activities. There are, I
believe, distinct limits to the usefulness of the notion of technological
alternatives being “on-the-shelf.” Although
we may indeed, as Boswell suggested, know where we can find information on the
subject at hand, acquiring the information, in the detailed sense of
being able to base productive activities upon it, may be, and surely often is,
a very expensive activity. [6] And one need not belabor the
5. I put aside here the
important consideration that technological progress can - and does - often go beyond
the frontiers of what is understood in a scientific sense. The limited scientific understanding of the
combustion process has not prevented the operation of blast furnaces or
coal-fired electric power generating plants, and the
absence of a theory of turbulence has not posed an impossible barrier to the
design of reliable aircraft.
6. Even when certain
blueprints are literally on the shelf, the technology may not be as “freely”
available as might be assumed. Ken Arrow
pointed out a number of years ago that “when the British in World War II
supplied us with the plans for the jet engine, it took ten months to redraw
them to conform to American usage.” Arrow,
“Classificatory Notes,” p. 34.
point
that the cost of alternative courses of action is precisely what economic
analysis is all about.
One valuable perspective on the cost of acquiring
information is offered by the available data on R&D expenditures. These data are additionally valuable in
showing the extent to which the generation and diffusion of knowledge has
become an endogenous economic activity. In the year 1991, according to Science and
Engineering Indicators, total R&D spending in the United States was
estimated to amount to $152 billion, of which private industry
financed almost 56 percent. Of
particular importance for present purposes is the fact that the great bulk of
total R&D spending is for Development activities, not for Basic or Applied
Research. Development expenditures
accounted for approximately 67 percent of total R&D spending. These figures, at the very least, invite great skepticism toward the view that the state of scientific knowledge at any time illuminates a wide range of alternative techniques from which the firm may make costless, off-the-shelf selections. They thereby also encourage skepticism toward the notion, so deeply embedded in the neoclassical theory of the firm, that one can draw a sharp and well-delineated distinction between technological change and factor substitution. Although it is essential to the argument of this chapter that the D of R&D encompasses a wide range of diverse, information-acquiring activities, it also includes many expenditures that are essential to make possible what economists have in mind when speaking of factor substitution. [7]
The extent to which total R&D spending is
dominated by the Development component calls attention to some critical aspects
of the manner in which technological knowledge grows. At least in respect of “high-technology”
products, it is misleading to speak of some as-yet-untried but on-the-shelf technologies
as “known.” It is of the essence of
these technologies that their designs need to undergo protracted periods of
testing, redesign and modification, and retesting before their performance
characteristics are well enough understood for them to be produced and sold with reasonable confidence. [8] Although these expensive and time-consuming Development
activities are typically not of great interest for their specific scientific
content, the information so acquired is absolutely essential from an economic
point of view. Performance
characteristics of high-technology products simply cannot be accurately
predicted without extensive testing. A
new jet-engine design, or airplane wing, or weapons system, or electronic
switching system, or synthetic-fuel plant, or pharmaceutical product, may
7. This argument is
pursued further in chapter 6 of this book, which argues that the relative
abundance of natural resources within the United States (in addition to a host
of other variables) affected the direction of American technological change
throughout the first half of the nineteenth century.
8. Some of these issues
are examined in Rosenberg, “Learning by Using,” Inside the Black Box, chapter
6.
require
an enormous amount of testing before its performance characteristics can be
understood with a high enough degree of accuracy and reliability to warrant
commercial introduction. A large part of
the D of R&D is devoted precisely to acquiring such information. [9] It cannot be
overemphasized that such information typically cannot be deduced from
scientific principles. [10]
It is curious that whereas so much attention in the
last few decades has been properly devoted to incorporating the effects of
uncertainty into economic analysis, these effects should have been totally
neglected in this particular realm - the determination of optimal design of
specific products. Such uncertainties
are of very limited interest from the point of view of academic science. But the essential economic point is that these
uncertainties are extremely costly to reduce or resolve. When considering the possibility of
technological alternatives that are so far only on the shelf, the reduction of
design, cost, and performance uncertainties is of absolutely central economic
importance. In fact, workable
technological knowledge in highly industrialized societies today is, in
considerable measure, the (eventual) product of Development activities. Much of the Development effort is, in effect,
directed toward the progressive reduction of cost and performance uncertainties
in product (and process) design.
This observation concerning the importance of
Development activities highlights an additional feature of the growth of
technological knowledge. That is, most
Development activities at any time are not devoted to the introduction of
entirely new products, but rather to the improvement and modification of
existing products. Although it is difficult to draw precise boundaries among the separate components of Development activities, efforts to improve existing products undoubtedly account for the bulk of such work at any time. In this respect, present activities are
powerfully shaped by technological knowledge inherited from the past. Existing technologies commonly throw off
signals and focusing devices indicating specific directions in which
technological efforts can be usefully exercised. These internally generated pressures and
compulsions play a large role in shaping day-to-day Development activities. [11] Such activities involve endless minor
9. The means of
acquiring this information are themselves being transformed by new
technologies. New aircraft designs are
increasingly “tested” on supercomputers rather than in more traditional wind
tunnels. Nevertheless, simulated
testing, or other forms of laboratory testing, is often still very remote from
actual operating conditions, and therefore of limited reliability.
10. For a full documentation
of this point in the context of aeronautical engineering, see Walter Vincenti, What Engineers Know and How They Know It, The
Johns Hopkins University Press, Baltimore (MD), 1991.
11. For further
discussion of these themes, see Nathan Rosenberg, “The Direction of
Technological Change: Inducement Mechanisms and Focusing Devices,” in Nathan
Rosenberg, Perspectives on Technology, Cambridge University Press,
Cambridge, 1976, chapter 6. See also Paul A. David, Technical Choice, Innovation and Economic Growth, Cambridge University Press, Cambridge, 1975, introduction and chapter 1, for an illuminating analysis of the learning issues underlying the process of technological change.
modifications and improvements in existing products, each of which is of small
significance but which, cumulatively, are of major significance. Once the basic technology of generating
electric power through the burning of fossil fuels had been introduced at the
beginning of the twentieth century, it set the stage for several decades of
minor plant improvements. These included a steady rise in operating temperatures and pressures, new alloys, modifications of boiler design, and so on. Although only
specialists would be able to identify even a few of the associated
improvements, the amount of coal required to generate a kilowatt-hour of
electricity fell by almost an order of magnitude in the course of the following
decades. More recently, by focusing upon
a succession of individually small improvements, the semiconductor industry was
able to move from products incorporating a single transistor on a chip to
products incorporating more than a million such components. Similarly, in the computer industry computational speed has been increased, again by individually small increments, by many orders of magnitude.
The instances of the electric power plant, the
transistor, and the computer may be useful as a way of defining a major
innovation. A major innovation is one
that provides a framework for a large number of subsequent innovations, each of
which is dependent upon, or complementary to, the original one. We can readily think of the framework
established by the invention of the steam engine, machine tools, the internal
combustion engine, electric power, or the vacuum tube in this context. But another way of expressing these
connections is that each constitutes the initiation of a long sequence of
path-dependent activities, typically extending over several decades, in which
later developments cannot be understood except as part of a historical
sequence.
There is commonly a certain logic in the sequence of some technological developments, at least a kind of "soft determinism," in which one historical event did not rigidly prescribe certain subsequent technological developments, but made sequences of technological improvements in one direction easier - and hence both cheaper and more probable - than improvements in other directions. Technological knowledge is by nature
cumulative: major innovations constitute new building blocks which provide a
basis for subsequent technologies, but do so selectively and not randomly. The ability to generate and transmit electric
power certainly did not make the invention of the vacuum tube inevitable, but
it is difficult to
think of
the vacuum tube, and the transistor, without the prior development of some sort
of electric-power generating capability. Again, sequences matter. Technological knowledge grows in distinctly
path-dependent ways.
In all these ways, then, ongoing technological
research is shaped by what has gone before. There is always a huge overhang of
technological inheritance which exercises a powerful influence upon present and
future technological possibilities. Much
technological progress at any given time, therefore, has to be understood as
the attempt to extend and further exploit certain trajectories of improvement
that are made possible by the existing stock of technological knowledge. There are continuities of potential
improvements which are generally well understood by engineers and product
designers. Expert knowledge of the
workings of the vacuum tube did not provide an adequate basis for a
“discontinuous leap” to the transistor. However,
once the transistor was invented, it created a set of opportunities for further
improvement by pursuing a trajectory of miniaturization of components
(including integrated circuitry) which has occupied the attention of technical
personnel for nearly half a century.
So far the discussion of path dependence has been
confined to its functioning within certain restricted technological spheres. However, it has also been important,
historically, between fields that stood in some sort of complementary
relationship to one another, and even between the realms of technology and
science.
Scientific knowledge has been closely dependent upon
progress within the technological realm. It would not be difficult to show, by drawing
upon the long history of the microscope (starting from the simple screw-barrel
type in the eighteenth century and proceeding through the compound microscope
of the nineteenth century to the electron microscope of the twentieth century),
the telescope (including the more recent radio telescope), and the recent
histories of x-ray crystallography, the ultracentrifuge, the cyclotron, the
various spectroscopies, chromatography, and the
computer, how instrumentation possibilities have selectively distributed
opportunities in ways that have pervasively affected both the rate and the direction
of scientific progress. [12] At the same time, to leave the discussion at that level would constitute
a rather crude sort of technological determinism. In fact, the relationship between technology
and science is far more interactive (and dialectical) than such a determinism
would imply. For the decision to push hard in the improvement of one specific class of instruments will often reflect a determination to advance a particular field of science as well as an expectation that the relevant instrumentation is ripe for improvement. Furthermore, instrumentation technologies differ
12. An extended
discussion of this phenomenon is taken up in the final chapter in this book.
enormously in the range of their scientific impact. The linear accelerator and the ultracentrifuge
are each relevant to a much narrower portion of the scientific spectrum than,
say, the computer. The computer, in
fact, has turned out to be a general-purpose research instrument, although it
was certainly not visualized in that way by the scientists who invented it. Thus, different instruments may differ
enormously in the specificity or generality of their impact upon fields of
science. And, consequently, the rate and
direction of progress in science is likely to be powerfully shaped by the
peculiar characteristics of prior progress in scientific instruments.
At the same time, improvements in observational
capabilities were, by themselves, of limited significance until concepts were
developed and hypotheses formulated that imparted potential meaning to the
observations offered by new instruments. The microscope had existed for over 200 years
and many generations of curious observers had called attention to strange
homunculi under the lens before Pasteur finally formulated a bacterial theory
of disease, and the modern science of bacteriology, in the last third of the
nineteenth century. The expansion of
observational capability had to be complemented, in turn, by the formulation of
meaningful interpretive concepts and testable hypotheses before the microscope
could make large contributions to scientific progress. Thus, path dependence was critical in the
sense that a particular body of science, bacteriology, had to await progress in
an observational technology. But such
progress was, of course, not sufficient for the scientific advance, only
necessary.
IV – Scientific Progress Dependent on Technological Capability
It is possible to accept everything that has been said
so far but to argue that it is nevertheless restricted in significance. After all, much of what has been said can be
captured within the summary observation that the technological trajectories
that have been traversed in the past leave a profound imprint upon the present,
and that they do so in a variety of ways. They serve to define technological
possibilities by facilitating further progress in some directions but not in
others. On the other hand, one might
respond that the occurrence of major new scientific breakthroughs in effect
opens up entirely new technological territories for exploration, thus liberating
the economy from the constraints of the past.
There is undoubtedly some truth in this observation. It can be argued that precisely because new
scientific knowledge opens up new paths, such
knowledge creates discontinuities that loosen the influence of the otherwise
heavy hand of the past. In this sense,
scientific research is a disrupter of technologically generated, path-dependent
phenomena.
I believe that this is, at best, only partially true. The possibility of important new scientific
findings does not eliminate the impact of path-dependent forces of the kind
that have been emphasized so far. In
particular, it by no means eliminates the influence of inherited technological
capabilities in shaping the future performance of the economy.
This is because the ability to exploit new
scientific knowledge in a commercial context will depend directly and heavily
upon the technological capabilities that are available within an economy. Consider the great excitement all over the
world concerning the recent remarkable breakthroughs in superconductivity. As a purely scientific breakthrough, the
excitement is well justified. Nevertheless,
it may be decades before this is actually translated into better computers,
magnetically levitated trains, the transmission of electricity without loss, or
the storage of electricity. Achieving
these outcomes is not primarily a matter of scientific research, although
progress toward their achievement may draw very heavily upon scientific
knowledge. Designing
new products that exploit the knowledge of high-temperature superconductors,
and then designing and making the technology that can produce these new
products, are activities that draw primarily upon existing technological
capabilities.
This brings us back again to the D of R&D:
developing new product concepts, casting them in specific design forms, testing
new prototypes, redesigning them, devising new manufacturing technologies that
make it possible to achieve drastic reductions in cost, etc. In fact, one of the most forceful economic
lessons of the post-Second World War period - although there were ample prewar
antecedents for those who were interested - is that the ability to achieve the
commercial exploitation of new scientific knowledge is heavily dependent upon
social capabilities that are remote from the realm of science. These capabilities involve skills in
organization, management, and marketing in addition to those of a technological
sort. But, in the context of the issues
addressed in this chapter, it is inherited, path-dependent technological
capabilities that have dominated the eventual commercial exploitation of new
technologies whose underlying technological feasibility has been made possible
by the advancement of science.
Thus, economic and technological considerations remain
powerfully and inextricably involved in converting new scientific research
findings into tangible human benefits. In some cases the new scientific understanding
has been so limited, or so remote from a capability for exploiting it in an
economically meaningful way, that an entirely new discipline had to be created
to bring this about. Such was the case
toward the end of the nineteenth century in chemistry, and the result was the
development of the new discipline of chemical engineering early in the
twentieth century. [13] At
13. For a discussion of
the contemporary situation, see chapter 10 below.
about the
same time, the achievement of heavier-than-air flight at the beginning of the
twentieth century gave rise to the entirely new discipline of aeronautical
engineering. Aeronautical engineering,
as a discipline, had far less of a scientific base to draw upon than did
chemical engineering. Indeed, to this
day, aircraft design remains an activity that is less guided by a systematic
scientific base and is therefore compelled to rely much more heavily upon
experimentation and testing of prototypes.
I conclude that there are sharply defined limits to
the extent to which new scientific knowledge can liberate an economy’s
performance from the technological capabilities inherited from the particular
path that it has traversed in arriving at its present state.
V – Technological Determination of the Scientific Research Agenda
There are other ways in which prior developments in
technology have shaped the progress of science and the economic consequences of
science. A major development of the
twentieth century is that the changing needs of the technological sphere have
come to play a major role in shaping the agenda of science. In this sense, as well, scientific research
itself has become increasingly dependent upon the path of technological change.
Thus, I suggest that the formulation of
the research agenda itself cannot be understood without paying attention to
prior developments in the realm of technology.
This kind of dependence is not, of course, a uniquely
twentieth century phenomenon. It can be
seen in the spectacular developments in the iron-and-steel industry that began
in the 1850s. Of the three great innovations of the second half of the nineteenth century - the Bessemer converter, Siemens' open-hearth furnace, and the Gilchrist-Thomas basic lining that made possible the exploitation of high-phosphorus ores - none drew upon chemical knowledge that was less than half a century old. However, the adoption of these innovations
dramatically raised the payoffs resulting from acquisition of new scientific
knowledge concerning the properties of steel.
The very success of the Bessemer process in lowering
the price of steel and in introducing steel to a rapidly expanding array of new
uses made it necessary to subject the inputs of the process to quantitative
chemical analysis. This was because, as
was quickly discovered, the quality of the output, and its structural
integrity, was highly sensitive to even minute variations in the composition of
the inputs. Sulfur and phosphorus
content had an immediate and very deleterious effect upon the quality of the
final product. The addition of even
minute quantities of nitrogen from the air during the course of the Bessemer
blast led eventually to serious and unexpected deterioration in the performance
of the metal, although this
causal
relationship was not established until many years later. Indeed, it is fair to say that the modern
science of metallurgy had its origins in the need to solve practical problems
that were associated with the emergence of the modern steel industry.
I suggest that, even well into the twentieth century,
metallurgy can be characterized as a sector in which the technologist typically
“got there first,” that is, developed powerful technologies, or alloys, in
advance of systematized guidance by science. The scientist was commonly confronted by the
technologist with certain properties or performance characteristics that
demanded a scientific explanation. A
particularly fruitful area of research lay in trying to account for specific properties produced by certain technologies or by the use of particular inputs. Such phenomena as deterioration with age or
the brittleness of metals made with a particular fuel were intriguing to
scientifically trained people. At the
same time, the economic payoff to the solution of such problems had become very
high.
The increasing extent to which science became
influenced by technology was, of course, greatly reinforced by one of the most
important institutional innovations of the twentieth century: the emergence of
a large number of industrial research laboratories - almost 12,000 in 1992. Research at these laboratories was obviously
strongly shaped by the desire to improve the effectiveness of the technology
upon which the firm depended. As these
laboratories have matured, the best of them have not only applied scientific
knowledge to industrial purposes; they have also been generating much of that
knowledge. The recent award of Nobel
Prizes to scientists working at IBM in Europe, and to scientists at Bell Labs
in the United States, is an index of the quality of at least the best
scientific research work that is conducted in industrial contexts where the
research agenda is clearly shaped by a concern with specific advanced
technological systems. The problems
encountered by sophisticated industrial technologies, and the anomalous
observations and unexpected difficulties that they have produced, have served
as powerful stimuli to much fruitful scientific research, in the academic
community as well as the industrial research laboratory. In these ways the responsiveness of scientific
research to economic needs and technological opportunities has been powerfully
reinforced. [14]
How else can one account for the fact that solid-state
physics, presently the largest subdiscipline of
physics, attracted the attention of only a few physicists before the advent of
the transistor in 1948? [15] In fact, at that time there were many universities that did not even
teach the subject. It was the
14 For further
discussion, see Rosenberg, “How Exogenous is Science?”,
in Inside the Black Box, chapter 7.
15. An extended
discussion of the development of the transistor can be found in chapter 11
below.
development
of the transistor that transformed that situation by dramatically upgrading the
payoff to research in solid-state physics. Moreover, it is important to emphasize that
the rapid mobilization of intellectual resources in research on the solid state
occurred in the university as well as in private industry immediately after the
momentous findings that were announced in 1948. The sequence of events is essential to my
argument: transistor technology was not building upon a vast earlier commitment
of resources to solid-state physics. Rather,
it was the initial breakthrough of the transistor that gave rise to a subsequent
large-scale commitment of scientific resources. Similarly, surface chemistry has become much
more important for the same reason. More
recently, and to oversimplify somewhat, the development of laser technology
suggested the feasibility of using optical fibers for transmission purposes. This possibility naturally pointed to the
field of optics, where advances in scientific knowledge could now be expected
to have potentially high payoffs. As a
result, optics as a field of scientific research has experienced a great
resurgence in recent years. It has been
converted by changed expectations, based upon past and prospective
technological innovations, from a relatively quiet intellectual backwater of
science to a burgeoning field of research. Under modern industrial conditions, therefore,
technology shapes science in the most powerful of ways: it plays a major role
in determining the research agenda of science.
One could examine these relationships in much finer
detail by showing how, throughout the high technology sectors of the economy,
shifts in the needs of industry have brought with them associated shifts in
emphasis in scientific research. When
the semiconductor industry moved from a reliance upon
discrete circuits (transistors) to integrated circuits, there was also a shift
from mechanical to chemical methods of fabrication. That shift brought with it an identifiable increase in the role of chemical science and in the volume of resources devoted to that subject.
Although the technological realm plays a role of
growing importance in identifying research problems, the places where the
eventual findings of science will have useful applications remain full of
uncertainty. Consider information
theory, a powerful intellectual tool developed since the Second World War. That this methodology should have been
developed in the telephone industry, where channel capacity has been perhaps
the most fundamental single constraint on the provision of the industry’s
service, is hardly surprising. [16] Shannon’s analysis of how to determine the transmission
capacity of a communication channel offered insights of critical importance to
engineering design within the telephone system, where channel capacity is, of
course, a dominating constraint. But, as
is often the
16. Claude Shannon, "A Mathematical Theory of Communication," Bell System Technical Journal (July 1948).
case, a
methodology that had been developed within a very specific context turned out
to be capable of providing illuminating insights far from its place of origin. It has shaped the design of hardware and
software in other communications media, including radio and television, as well
as in data-processing technologies generally. But its uses have not been confined to the
realm of engineering or the physical sciences; information theory has also been
extensively employed in cryptography, linguistics, psychology, and economics.
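For reference (the formula is not in Rosenberg's text, but it is the standard statement of the result being described): for a channel of bandwidth $B$ hertz perturbed by additive white Gaussian noise, Shannon showed that information can be transmitted reliably at any rate up to, but not beyond, the capacity

\[
C = B \log_2\!\left(1 + \frac{S}{N}\right) \quad \text{bits per second},
\]

where $S/N$ is the ratio of signal power to noise power. The formula makes precise why channel capacity functioned as the telephone system's dominating engineering constraint, and why a tool for reasoning about that constraint proved portable to so many other settings.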
Here again there have been highly important historical
sequences that cannot, at least in any obvious way, be explained by recourse to
economic (or other) logic. The specific
needs of a particular technology - the telephone system - gave rise to a body
of abstract theory that, in turn, had beneficial applications in numerous and
remote contexts. Thus, although it can be explained why a telephone company was willing to support research in a particular direction (possible enlargement of channel capacity), economic factors are of little help in grasping the distinctive characteristics of what
was learned as a result of the research.
VI – Path Dependency of Economics, Science & Technology
The purpose of this chapter has been to describe the
manner in which technological knowledge grows over time, and some of the
determinants and consequences of this growth. A main aim has been to emphasize the extent to
which technological change and scientific knowledge are responsive to
underlying economic variables. This
should not be too surprising, in view of the fact that the financing of R&D
is generally undertaken with some explicit economic goal in mind. However, the peculiar nature of the
information-acquisition process, especially the uncertainty of what will be
found once a search has been undertaken, argues against adherence to a belief
in a strict economic determinism. Even
if one believes that technical change is propelled by economic forces, it does
not follow that some simple functional form exists to describe the relationship
between economic incentives and the qualitative nature of technical change. It is true that the transistor was the result
of a search process that was set in motion for good economic reasons, that is,
to reduce AT&T’s costly reliance on vacuum tubes for Long Lines switching. However, the disparate nature of the
technological spillovers and social benefits that emerged from Bell Labs’
research effort is quite difficult to analyze without an appreciation of the
sequence of events that transpired after the invention of the transistor. Ex ante analysis could not have predicted the transistor's definitive role in reducing the cost of numerical calculation by many orders of magnitude through its centrality to computer architecture. It is not simply that an
appropriate
probability distribution of the transistor’s social benefits would be
analytically daunting. The deeper point
is that, at the point of invention, a well-defined and even marginally
informative probability distribution simply could not be constructed.
Although modern economic analysis has, in recent
years, paid some explicit attention to technological change, it has not dealt,
in any depth, with its particular characteristics. The misreading of technological change, when
viewed from a neoclassical perspective, should be apparent from the historical
analysis offered in this chapter. Additional knowledge of new production possibilities is not costless, nor are the rate and direction of technological change exogenous.
Consequently, understanding the particular sequence of
events that has shaped the knowledge of the technological frontier is crucial,
not only to the historian, but to the economist as well. Technology and science, which are now
generally acknowledged to be central to the achievement of economic growth,
need to be understood as path-dependent phenomena. Indeed, it follows that economic growth itself
needs to be understood in terms of path dependence. An economy’s history has left a large deposit
of technological capabilities and possibilities on the shelf. The cost of taking items off that shelf is
never known with any precision. Historical analysis, however, can allow us at
least to narrow our estimates and thus to concentrate resources in directions
that are more likely to have useful payoffs.