
Nathan Rosenberg

Exploring the Black Box:

Technology, economics and history

8 Critical issues in science policy research

Cambridge University Press

Cambridge, U.K. 1994

pp. 139-158

Index

Introduction

Impact of science on technology

Difficulties of transfer

Interdisciplinary research

Institutional implications

Financial support

Private industry

Transfers between specialties

Instrumentation as a production tool

Conclusion


Introduction

Everyone knows that the linear model of innovation is dead.  That model represented the innovation process as one in which technological change was closely dependent upon, and generated by, prior scientific research.  It was a model that, however flattering it may have been to the scientist and the academic, was economically naive and simplistic in the extreme.  It has been accorded numerous decent burials, and I do not intend to resurrect it only to arrange for it to be interred once again.

However, in a world in which the economic role of science may reasonably be expected to grow over time, and in which policy-making will need to be based on a more sophisticated understanding of the ways in which science and technology interact and influence one another, a better road-map of the science/technology landscape is vitally necessary.  I will therefore not be primarily examining the determinants of innovation.  Rather, my main focus will be on some of the ways in which the two communities, of scientists and technologists, exercise influences on one another.

Obviously, while my central focus will not be on the determinants of innovation, what I say will, I hope, be highly relevant to that question.  Indeed, I regard it as central to a more useful framework for analyzing the innovation process that it should be based on a more sharply delineated road-map of science/technology relationships.  That road-map ought, at a minimum, to identify the most influential traffic flows between science and technology.  Obviously, such a map cannot at present be drawn.

Consequently, this chapter offers no more than the preliminary findings of a reconnaissance expedition, identifying some of the most significant features of the landscape - including some rather intriguing features that have been surprisingly neglected - rather than providing a detailed map.  But perhaps that will be sufficient to identify some of the major locations where research is most urgently called for.  If it is successful in this, it will have achieved its major purpose of providing, in the time-honored academic locution, “a guide to further research.”

This chapter first appeared in Science and Public Policy, 1991, volume 18, no. 6, pp. 335-346.  It is reprinted with omissions.  The paper was first presented at the celebration of the twenty-fifth anniversary of the Science Policy Research Unit, University of Sussex, in July 1991.  The author has incurred substantial intellectual debts, in the preparation of this chapter, to Harvey Brooks, Ralph Landau, David Mowery, Richard Nelson, and Ed Steinmueller.

It is, of course, a matter of definition that science/technology interactions are most significant in the so-called high-technology industries.  But it must be recognized, to begin with, that there still remain crucial portions of high-technology industries where attempts to advance the technological frontier are painstakingly slow and expensive, because of the limited guidance that science is capable of providing.

The development of new alloys with specific combinations of properties proceeds very slowly because there is still no good theoretical basis for predicting the behavior of new combinations of materials, although materials science may now be getting closer to the point of developing models with predictive powers.  Many problems connected with improved fuel efficiency are severely constrained by the limited scientific understanding of something as basic as the nature of the combustion process.  The development of synthetic fuels has been seriously hampered in recent years by scientific ignorance of the relationship of the molecular structure of coal (which is known) to its physical and chemical properties.

The requirements of computer architecture remain badly in need of an improved scientific underpinning.  The design of both aircraft and steam turbines is hampered by the lack of a good theory of turbulence.  In the case of aircraft, wind-tunnel tests are still subject to substantial margins of error in predicting the actual flight performance of a new prototype.  Some of the functions of wind-tunnel testing in generating data for aircraft design have been taken over in recent years by computer simulation techniques, but by no means all of them.

It is noteworthy that the rapid growth in development costs in industrial economies shows no sign of subsiding.  The extremely high development costs that prevail in most of the high-technology industries, and their rapid growth, are due to the inability to draw more heavily on a predictive model in determining the performance of specific new designs or materials.

More precisely, the true desideratum is a good predictive model that will lead to a reduction in the cost of determining optimal design.  It needs to be a computationally simplifying model, which is not always the case.  One can learn a great deal about reaction mechanisms in computational chemistry if one has unlimited access to a Cray computer, but Cray computers are extremely expensive.

If science provided a cheaper predictive basis for moving to optimal design configurations, development costs, which constitute about two-thirds of total R&D expenditures in the United States, would not be nearly so high.  They are as high as they are because engineers and product designers continue to need to engage in very extensive testing activities before they can be sufficiently confident in the performance characteristics of a new product.

On the other hand, although there continue to be sharp limits on the extent to which technology can draw on science, it is far less appreciated that scientific progress has become increasingly dependent on technology.  Indeed, it is tempting to say that an alternative definition of a high-technology industry is one in which problems that arise at the technological frontier exercise a major role in shaping the research agenda of science.  In these industries, it is not enough to say that scientific knowledge is applied to the productive process; rather, to a considerable extent, such knowledge is also being generated there.

An important source of scientific progress, in advanced industrial societies, has derived from the attempt to deal with difficulties, unexpected problems, or anomalous observations that first arose in connection with new product designs or novel productive processes.  Additionally, the industrial context has identified highly specific areas of research in which some expansion of knowledge would make possible a significant improvement in quality or in the performance characteristics of a material.

In the course of the twentieth century, that additional knowledge has been produced to an increasing degree by scientists employed inside industrial research labs of high-tech firms.  This has not been just some stroke of good fortune or act of a benign Providence.  Scientists in industry are inevitably confronted with specific observations or difficulties that are extremely unlikely to present themselves in a university laboratory: premature corrosion of an underwater cable, unidentifiable sources of interference in electromagnetic communications systems, or extreme heat generated on the surface of an aircraft as it attains supersonic speeds.

The fact is that industrial activity, especially, but not only, in high-tech sectors, provides unique observational platforms from which to observe unusual classes of natural phenomena.  In this respect, the industrial research laboratory may be said to have powerfully strengthened the feedback loop running from the world of economic activity back to the scientific community. [1]

1. See “How Exogenous is Science?”, chapter 7 in Nathan Rosenberg, Inside the Black Box, Cambridge University Press, Cambridge, 1982, and Stephen J. Kline and Nathan Rosenberg, “An Overview of Innovation,” in Ralph Landau and Nathan Rosenberg (eds.), The Positive Sum Strategy, National Academy Press, Washington (DC), 1986.

It must be added that observations are sometimes made in an industrial context by people who are not capable of appreciating their potential significance, or who are simply uninterested in observations that have no immediate practical relevance.  In 1883 Edison observed the flow of electricity across a gap, inside a vacuum, from a hot filament to a metal wire.  Since he saw no practical application, he merely described the phenomenon in his notebook and went on to other matters of greater potential utility in his effort to enhance the performance of the electric light bulb.

Edison was of course observing a flow of electrons, and the observation has since come to be referred to as the Edison Effect.  Had he been a patient scientist and less preoccupied with matters of short-run utility, he might later have shared a Nobel Prize with Owen Richardson, who analyzed the behavior of electrons when heated in a vacuum, or conceivably even with J.J. Thomson for the initial discovery of the electron.
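The physics that Richardson later supplied can be summarized in one formula - standard textbook material, added here for clarity rather than drawn from Rosenberg’s text.  His law of thermionic emission relates the current density J of electrons escaping a heated metal to its absolute temperature T:

\[ J = A\,T^{2}\,e^{-W/kT}, \]

where W is the work function of the metal, k is Boltzmann’s constant, and A is the Richardson constant.  It is this steep temperature dependence that governs the flow of electrons Edison had observed streaming from his hot filament.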

Edison’s inventive contributions were so great that it would be rank ingratitude for later generations to chastise him for his “practical” orientation!  But, ironically enough, the Edison Effect, together with other scientific discoveries, eventually had immensely important practical consequences through the development of the vacuum tube and the vast technology that was later associated with the emergence of modern electronics.  However, perhaps it was not ironical after all.  When one speaks of someone as being “practical,” what is usually meant is that he or she is interested in matters of short-run utility only.  Science is, surely, a very practical activity but, typically, only in the long run.


Impact of science on technology

These considerations suggest an important avenue through which the technological realm has shaped the research agenda, and therefore the eventual findings, of portions of the scientific community.  In considering the flow of traffic in the opposite, and more “traditional” direction - the impact of science on technology - it is useful to make two separate observations.

First, where scientific findings have indeed profoundly influenced technological activities, these findings need not be derived from recent research at the scientific frontier.  Indeed, many points of contention and dispute over the economic importance of science really derive from the fact that the science that was essential to some technological breakthrough was simply “old” science.  Often this science was so old that it was no longer considered by some to be science.

The problem is compounded by the fact that many spokesmen for the economic importance of science are anxious to make a case for larger research budgets.  In order to strengthen the case it helps considerably to emphasize the benefits that may be derived from what goes on at the research frontier, rather than the continuing contribution of, say, nineteenth-century analytical chemistry to the mining industry, or the economic contribution made by “old” science through the current education of engineers. [2]

The fact is that technology draws on scientific knowledge and methodology in highly unpredictable ways - and we are likely to cover up our ignorance by invoking such shameless tautologies as “When the time is ripe.”  The body of knowledge that is called “science” consists of an immense pool to which small annual increments are made at the “frontier.”  The true significance of science is diminished, rather than enhanced, by extreme emphasis on the importance of the most recent “increment” to that pool.

The lags may be very long indeed, often because much essential complementary technology needs to be developed before it can be said that “the time is ripe” for some major invention.  Consequently, the perspective of the economist or the policy-maker needs to be distinctly different from that of the historian of science or, for that matter, of contemporary advocates of larger science budgets in the public sector.

Consider the laser.  The first lasers were developed around 1960, since when they have expanded into a remarkably diverse range of uses, including the printer that produced the manuscript of this chapter.  But, from the point of view of the historian of science, it could be argued that the basic science underlying the laser was formulated by Einstein as long ago as 1916. [3]  A historian of science might say that everything of real interest had been completed by 1916, and the rest was “just” engineering and product development.  At the same time, what is relatively uninteresting to her may be the most essential part of the story from the point of view of the technological innovation.

Amid this specialization of interests, it is essential to retain the point that there may be lags of many decades between a given increment to science and the useful application that may one day flow from it.  This is one important reason (but only one) why the commercial benefits of basic research need not be captured by firms in the country where the basic research was performed.  Perhaps equally important and equally neglected, the development of sophisticated, high-performance technologies, such as lasers and other complex electronics instrumentation, has generated much new basic research that was recognized as essential to the further improvement of the new technologies. [4]

2. On the significance of old science, see Nathan Rosenberg, “The Commercialization of Science in American Industry,” in Kim B. Clark, Robert H. Hayes, and Christopher Lorenz (eds.), The Uneasy Alliance: Managing the Productivity-Technology Dilemma, Harvard Business School Press, Boston (MA), 1985.

3. “The underlying science involves an understanding of the energy levels of molecules and solids, and the specific principle was that described by Einstein in his 1916 work on stimulated emission.  Much of the technology necessary for the laser emerged only during the Second World War from work on microwave radar - including magnetrons and klystron sources, semiconductor detectors, and wave-guiding networks.”  John R. Whinnery, “Interactions between the Science and Technology of Lasers,” in Jesse H. Ausubel and J. Dale Langford (eds.), Lasers: Invention to Application, National Academy Press, Washington (DC), 1987, p. 124.

The second major source of disjunction between an advance in science and its eventual influence on technology and the economy has received little attention.  The problem is that, even when scientific research opens up an entirely new field of technological possibilities, it is usually a multi-stage process.  The reason is that it is not ordinarily possible to proceed directly from new scientific knowledge into production, even when that new knowledge is actually of a specific final product, as opposed to the discovery of some new piece of information about the natural universe that may serve as an “input” into the eventual development of a new product.

In fact, the emergence of the two disciplines of electrical and chemical engineering, beginning in the late nineteenth century, occurred for precisely this reason.  It was not possible to move directly from the enlarged experimental and theoretical understanding of the electromagnetic and synthetic organic chemical realms into the production of goods that incorporated such new knowledge.

The reason was simple.  The appropriate technologies could in no way be derived from or deduced from the scientific knowledge.  On the contrary, distinctly different bodies of knowledge had to be drawn upon or generated before production could begin.  Sometimes, this required the development of entirely new disciplines.

Consider the synthetic dye industry that was launched by Perkin’s (accidental) synthesis of mauve, the first of the synthetic aniline dyes, in 1856.  The subsequent growth of the synthetic organic chemicals industry did not occur immediately after this dramatic laboratory breakthrough.  So long as dyestuffs could be produced only by enlarging the physical dimensions of the original laboratory apparatus, the industry was destined to remain a small-scale batch operation of little industrial consequence. [5]


Difficulties of transfer

The essential point is that the design and construction of plants devoted to large-scale chemical processing activities involves an entirely different set of activities and capabilities than those that generated the new chemical entities.  To begin with, the problems of mixing, heating, and contaminant control, which can be carried out with great precision in the laboratory, are immensely more difficult to handle in large-scale operations, especially if high degrees of precision are required.  Moreover, economic considerations play a much larger role in the design process, since costs become decisive in an industrial context.

4. See Harvey Brooks, “Physics and the Polity,” Science (26 April 1968).

5. See W.K. Lewis, “Chemical Engineering - A New Science,” in Lenox R. Lohr (ed.), Centennial of Engineering, 1852-1952, Museum of Science and Industry, Chicago, 1953, p. 697.

Thus, the discovery of a new chemical entity has commonly posed an entirely new question, one that is remote from the scientific context of the laboratory: how does one go about producing it?  A chemical process plant is far from a scaled-up version of the original laboratory equipment.  Experimental equipment may have been made of glass or porcelain.  A manufacturing plant will almost certainly have to be constructed of different materials.

Moreover, efficient manufacturing is, inherently, something very different from a simple, multiple enlargement of small-scale experimental equipment.  This is what accounts for the unique importance of the pilot plant, which may be thought of as a device for translating the findings of laboratory research into a technically feasible and economically efficient production process. [6]  The translation, however, requires a kind of expertise that need not exist at the experimental research level: a knowledge of mechanical engineering and a careful attention to the underlying economics of the engineering alternatives.
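Footnote 6 below describes how engineers compress many design variables into dimensionless groups for scale-up purposes.  As a concrete textbook instance (mine, not the author’s): for fluid flowing through a pipe of diameter D at velocity v, with fluid density \rho and viscosity \mu, the Reynolds number is

\[ \mathrm{Re} = \frac{\rho v D}{\mu}. \]

Geometrically similar systems with equal Re display similar flow behavior, so laboratory measurements at small D can in principle be carried over to plant scale at large D by adjusting v.  In principle only - as the footnote observes, if one variable within the group hardly varied in the laboratory, the extrapolation is insecure, and the pilot plant remains the necessary check.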

Pilot plants have in the past been essential, and not only for the purpose of the reduction of uncertainties with respect to scale.  Until a pilot plant was built, the precise characteristics of the output could not be determined.  Test marketing could not proceed without the availability of reliable samples.  Other essential features of the production process could not possibly be derived from scientific knowledge alone.

6. “Often, in dealing with a complicated practical situation, the engineer arbitrarily reduces the number of variables in his theory by combining them into dimensionless groups, of which a well-known example is the Reynolds number characterizing the flow of fluid through a pipe.  Such dimensionless groups are evaluated in the laboratory, and are used then for predicting the behavior in a large-scale chemical plant.  But this procedure reduces somewhat our confidence in our predictions; though the group as a whole may have varied widely in the laboratory experiments, one or more of the variables within the group may have been virtually unchanged.  Because of this reduced confidence in using dimensionless groups in scaling-up predictions, the chemical engineer usually builds a pilot plant, intermediate in size between the laboratory system and the proposed full-scale production plant, so that he can check whether the scaling-up predictions of his simplified theory are working sufficiently accurately.”  John T. Davies, “Chemical Engineering: How Did it Begin and Develop?” in William F. Furter (ed.), History of Chemical Engineering, American Chemical Society, 1980, pp. 40-41.

Consider the recycle problem.  Very few chemical reactions are complete in the reaction stage.  Therefore products of the reaction stage will not only include desired end products but also intermediates, unreacted feed, and trace impurities - some measurable and some unmeasurable.  Impurities, in particular, are identified by the operation of the pilot plant, and methods of removing them are devised to achieve a steady-state condition on a continuing basis. [7]
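A minimal steady-state balance - my sketch, under simplifying assumptions (a single reactant, complete separation of product, all unreacted feed recycled) - shows why the recycle stream dominates the design.  With fresh feed F, recycle stream R, and single-pass conversion x, the reactor receives F + R, of which a fraction (1 - x) emerges unreacted and is returned:

\[ R = (1 - x)(F + R) \;\Longrightarrow\; R = \frac{1 - x}{x}\,F, \qquad F + R = \frac{F}{x}. \]

A single-pass conversion of 25 percent thus obliges the reactor and separator to handle four times the fresh feed rate, and any trace impurity that is not removed circulates and accumulates around the loop - precisely what pilot-plant operation is designed to reveal.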

In the twentieth century, a gap of several years has separated the discovery under laboratory conditions of many of the most important new materials from the industrial capability to manufacture them on a commercial basis.  Examples include the first polymers that W.H. Carothers had produced with his glass equipment at the Du Pont Laboratories, as well as polyethylene and terephthalic acid, an essential material in the production of terylene, a major synthetic fibre. [8]

Eventually, to manage the transition from test tubes to manufacture, where output has to be measured in tons rather than ounces, an entirely new methodology, totally distinct from the science of chemistry, had to be devised.  This new methodology involved exploiting the central concept of unit operations.  This term, coined by Arthur D. Little at MIT in 1915, provided the essential basis for a rigorous, quantitative approach to large-scale chemical manufacturing, and thus may be taken to mark the emergence of chemical engineering as a unique discipline, not reducible to “applied chemistry.” [9]
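The compositional idea behind unit operations can be conveyed in a short schematic sketch - written here in Python purely as illustration, with invented operation names, and claiming nothing about Little’s own formalism:

from dataclasses import dataclass, replace
from typing import Callable, List

# A process stream, reduced to the two attributes this sketch tracks.
@dataclass(frozen=True)
class Stream:
    temperature_c: float
    purity: float  # mass fraction of desired product, between 0 and 1

# A unit operation is any transformation of a stream.
UnitOperation = Callable[[Stream], Stream]

def heat(delta_c: float) -> UnitOperation:
    # Raise the stream temperature; composition is unchanged.
    return lambda s: replace(s, temperature_c=s.temperature_c + delta_c)

def filter_solids(efficiency: float) -> UnitOperation:
    # Remove a fraction of the impurities, raising purity.
    return lambda s: replace(s, purity=s.purity + (1 - s.purity) * efficiency)

def run_process(feed: Stream, steps: List[UnitOperation]) -> Stream:
    # Little's claim, schematically: any process, on whatever scale,
    # is a coordinated series of such steps, each studiable in isolation.
    for step in steps:
        feed = step(feed)
    return feed

product = run_process(Stream(25.0, 0.40), [heat(60.0), filter_solids(0.9)])
print(product)  # Stream(temperature_c=85.0, purity=0.94)

The force of the abstraction is that the vocabulary of steps is small and independent of any particular process, so each step can be studied, improved, and costed on its own - which is what made large-scale chemical manufacturing tractable as an engineering discipline.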

Moving from scientific breakthroughs to technologies ready for commercialization is a highly complex, inherently interdisciplinary subject that is far from well understood and far from adequately studied.  Again in the realm of chemicals, the work of Staudinger and Mark in the 1920s on polymerization provided an excellent scientific basis for the development of polyester fibres.  But the commercial introduction of such fibres required the use of a new raw material - paraxylene - which was not, at the time, an article of commerce.  It also required a new way of cheaply converting paraxylene to terephthalic acid, since the use of nitric acid was both too expensive and too messy - it produced an unacceptably impure product.

7. In recent years, computers have begun to displace the reliance on expensive and time-consuming pilot plants.  In the hands of experienced designers, micropilot plant data, combined with good analytical equipment, may yield workable commercial designs.

8. It is important to note that progress in the subdiscipline of polymer chemistry has been primarily an achievement of research in industrial laboratories.

9. In Little’s words: “Any chemical process, on whatever scale conducted, may be resolved into a coordinated series of what may be termed ‘unit actions,’ as pulverizing, mixing, heating, roasting, absorbing, condensing, lixiviating, precipitating, crystallizing, filtering, dissolving, electrolyzing and so on.  The number of these basic unit operations is not very large and relatively few of them are involved in any particular process... Chemical engineering research... is directed toward the improvement, control and better coordination of these unit operations and the selection or development of the equipment in which they are carried out.  It is obviously concerned with the testing and the provision of materials of construction which shall function safely, resist corrosion, and withstand the indicated conditions of temperature and pressure.  Its ultimate objective is so to provide and organize the means for conducting a chemical process that the plant shall operate safely, efficiently, and profitably.”  Arthur D. Little, Twenty-five Years of Chemical Engineering Progress, Silver Anniversary Volume, American Institute of Chemical Engineers, D. Van Nostrand Company, New York, 1933, pp. 7-8.

Eventual success in this major breakthrough not only took many years but owed little, if anything, to further scientific research.  Here, as elsewhere, scientific breakthroughs are, at best, only the first step in a very long sequence of knowledge accumulation, if we think in terms of an economic perspective rather than that of the historian of science.

Consider the present-day world-wide search for products in which to embody the recently acquired knowledge of high-temperature superconductivity.  The world may still be decades away from the large-scale commercial exploitation of this knowledge, just as the great breakthroughs in molecular biology of the 1950s are only now beginning to find an embodiment in the products of an emerging biotechnology industry.


Interdisciplinary research

The complexity of the science/technology interface, and especially the two-way movement of traffic across that interface, clearly calls for some new institutional responses.  Decision-makers in both the public and private sectors will need to address the question of how to improve the organizational conditions and incentive structures at the science/technology interface.  The ability to improve the functioning of various specialists at that interface will undoubtedly be an important determinant of future leadership in high-technology industries.  This is so not only for the reasons that have already been suggested, but also because important changes appear to be occurring on the science side of the interface as well as on the technology side.

In particular, there is much evidence that scientific knowledge of a kind that is most likely to be useful in high-technology industries has to be pursued in an increasingly interdisciplinary fashion.  Consider the realm of medicine, a truly high-technology industry, as can be readily verified by a quick walk through the intensive care unit of any major teaching hospital.  In recent years, medical science has benefitted immensely, not only from such “nearby” disciplines as biology, genetics, and chemistry, but from nuclear physics (especially in diagnostic technologies such as magnetic resonance imaging, radioactive tracers, and radioimmunoassays), electronics, and materials science and engineering.  Lasers are now a frequent instrument of choice in extremely delicate surgery, and the availability of fibre-optic technology has made possible the direct visualization of internal organs - as in the esophagoscope, the flexible sigmoidoscope, and the bronchoscope.

An interesting index of the growing importance of electronics in medicine is exhibited by Sony Medical Electronics, a recently established division of the giant consumer-electronics company.  This division is now promoting such new products as remote surgical consultation systems and cardiac recording systems.  Other Japanese consumer-electronics companies are in earlier stages of a similar transition into medical applications. [10]

In pharmaceuticals, there have been remarkable advances drawing upon findings in such fields as biochemistry, molecular and cell biology, immunology, neurobiology, and scientific instrumentation.  These advances are creating a situation in which new pharmaceutical compounds, with specific properties, can be targeted and perhaps eventually designed, in contrast with the randomized, exhaustive, and expensive screening methods that have characterized pharmaceutical research in the past. [11]  The essential point is that the newly emerging pattern of innovation is, by its very nature, increasingly interdisciplinary.  That is to say, success requires close cooperation among a growing number of specialists.

In other fields, some of the most fundamental innovations of the postwar world have been the product of interdisciplinary research.  The transistor was the result of the combined efforts of physicists, chemists, and metallurgists.  Optical-fibre light guides, now transforming the telecommunications industry, were essentially the product of these same three disciplines.  Moreover, materials science has now emerged as an independent discipline, representing “a fusion of metallurgy, chemistry, and ceramics engineering with aspects of condensed-matter physics.” [12]

The scientific breakthrough leading to the discovery of the structure of DNA was the work of chemists, physicists, biologists, biochemists and, far from the least, crystallographers.  In agriculture, more productive seed varieties, such as the high-yielding rice varieties developed at the International Rice Research Institute in the Philippines, were the work of geneticists, botanists, biochemists, entomologists, and soil agronomists.  These new varieties have totally transformed the food supply situation of Asia in the past twenty-five years.

In some cases, the continuing interdisciplinary nature of technological progress has been underlined by quite unexpected shifts in the bodies of scientific knowledge upon which progress has sometimes depended.  The transition from the transistor to the integrated circuit brought with it a shift from essentially mechanical and metallurgical techniques of fabrication to chemical techniques.  The increasing dependence of semiconductors on metallurgical inputs has played a major role in elevating metallurgy to what is now called “materials science.”

10. Wall Street Journal, 20 May 1991.

11. See Alfonso Gambardella, “Science and Innovation in the U.S. Pharmaceutical Industry during the 1980s,” Stanford University doctoral dissertation, 1991.

12. Scientific Interfaces and Technological Applications, Physics Through the 1990s, National Academy Press, Washington (DC), 1986, p. 6.  See also chapter 4.

More recently, the continued shrinkage in the size of electronic devices has led to a situation where further technological progress may eventually involve thinking in entirely different terms.  Specifically, the unit of analysis for further progress in miniaturization may no longer be solid materials, but, perhaps, chains of molecules.  If such a transition were to take place, the required knowledge base would no longer be the kind in which electronic engineers have been trained.  It would become, rather, theoretical chemistry.


Institutional implications

Such a drastic shift in the underlying body of scientific knowledge, on which a technology is based, is not uncommon, and continued commercial viability may turn on the ease, or difficulty, that firms experience in making such a transition.  Consider the shift in electronics from vacuum tubes to solid-state transistors to integrated circuits, or from propeller-driven aircraft engines to jet engines.  The possibilities opened up by such shifts, and the potential difficulties for a firm that fails to make such a transition because it is suddenly “blindsided” by an unexpected shift, are an important reason for maintaining a substantial in-house scientific capability.  Indeed, they may be a reason for maintaining a portfolio of research capabilities in a range of scientific disciplines.  Unfortunately, only a relatively small number of firms have the resources for maintaining such a portfolio.

The increasing importance of interdisciplinary research has created serious organizational problems.  Such research often runs counter to the traditional arrangements, training, priorities, and incentive structures of the scientific professions, particularly in the academic world where great emphasis is placed on working within well-recognized disciplinary, and therefore departmental, boundary lines.

The American university system in the past forty years or so has been very successful in combining the performance of basic research at the scientific frontiers with the training of future professionals.  Nevertheless, the present organizational structure of American universities along disciplinary lines, as reflected in its departmental structures, poses some serious limitations as the solutions to research problems become increasingly interdisciplinary in nature.

Another major strength of the American university system has been the highly successful interface that it has developed with the industrial world.  This relationship has not been without its problems and dangers.  Industrial financing of university research runs the danger that universities will increasingly have their research agendas set by their external sources of finance.  There is the threat that they will thereby compromise their autonomy, focus on short-term problems of immediate interest to industry, and so suffer a loss of effectiveness as leaders in fundamental research.

Perhaps even more serious is the possibility that the potentially great commercial value of scientific findings will lead to a loss of free and frank communication among university faculty members, and a reluctance to disclose research findings from which other faculty members or students might derive great benefit.  Such developments could prove to be harmful to future progress in the realms of both science and technology, as well as to education itself.

Nevertheless, as is usually the case, there has been another side of the coin.  Scientific autonomy is always subject to the potential pressures of its funding sources.  Federal funding of research since the Second World War, which has been far greater than industrial funding, has given an immense prominence to the needs and the priorities of the military and to its notorious penchant for secrecy.  But, at the same time, Department of Defense funding has played a highly creative role in the emergence of new specializations of great importance to high-technology industries, such as computer science and materials science.

Moreover, although I say this with some sense of trepidation to an academic audience, it is possible for the notion of autonomy to be carried to extreme lengths.  The dominant role of the academic department in American universities has been, in some respects, excessive.  It has, in particular, been very slow to provide professional career opportunities to those who have identified research problems at the edges, or interstices, of traditional academic disciplines.  Indeed, it is highly significant that there is no research institution in the United States comparable to the Science Policy Research Unit in its commitment to interdisciplinary research at the intersection between science, technology, and economics.

Perhaps it should be added that there is nothing uniquely rigid about the department structure of American universities.  Departmental rigidity is probably the inevitable price to be paid for the fact that, historically, scientific progress has required a high degree of disciplinary specialization.  It is, therefore, a widespread phenomenon.  It is doubtful, for example, whether the department structures in German or Japanese universities are any less rigid.  In fact, it may be suggested that the slow pace of German entry into the realm of biotechnology owed much to the inflexible role of German university departments.

The important role attached to the department as a unit of intellectual organization has not prevented the American higher educational system from being remarkably adaptive and responsive to changing social needs in general, and the changing requirements of business and industry in particular.  Indeed, it has long been a recurrent criticism of European visitors, especially from Britain, that American colleges and universities have been excessively responsive to the changing dictates of the marketplace and to vocational needs of all sorts.


Financial support

The determination to improve the links between the academic and industrial worlds has already led to a great deal of institutional innovation at many research universities.  And, as is often the case, it has been the availability of new monies from external sources, and the associated possibilities for new hiring, that has generated a willingness to enlarge the traditional, single-discipline focus, and to contemplate new institutional arrangements.  Earlier in the twentieth century the availability of funds from private philanthropies, such as the Carnegie and Rockefeller Foundations and the Guggenheim Fund, served as powerful and highly creative catalysts for intellectual and institutional innovation.  It is fair to say that the financial stringency of recent years has rendered the American academic community more responsive to the introduction of new interdisciplinary arrangements.

The Center for Integrated Systems at Stanford is an interesting example of university-based research with industrial financing.  The Center receives financial support from twenty corporations, and it is devoted primarily to developing methods for designing and manufacturing large-scale integrated microelectronic circuits.  Its research activities draw heavily on computer science, integrated-circuit engineering, and solid-state physics, as well as on other disciplines.

MIT has a Whitehead Institute, with a huge private endowment, which is devoted to biomedical research, as well as a ten-year contract with Exxon Research and Engineering Company to support research in the field of combustion.  West Germany’s huge chemical and pharmaceutical company, Hoechst AG, has given the Massachusetts General Hospital, a teaching arm of the Harvard Medical School, $50 million for the support of basic research in the area of molecular biology.

At the federal level, the National Science Foundation has been instrumental in establishing Engineering Research Centers at a number of major universities.  These represent important institutional departures of a multidisciplinary nature, typically involving a strong underlying emphasis on computers and specialized engineering skills working in close liaison with the rather more traditional scientific disciplines of physics, chemistry, and biology.


Columbia University has a center for telecommunications, Harvard University for the application of advanced computers to the design of communications systems, MIT for the improvement of manufacturing processes in the biotechnology industry, and Purdue University for research on highly automated manufacturing systems.  The centers represent significant multi-disciplinary innovations.  It is too early to appraise their overall effectiveness, although it may be noted that there have already been failures as well as successes.


Private industry

In some important respects, private industry in the United States has, in the past at least, had a substantial advantage over universities in the organization of multi-disciplinary research.  It has not attached nearly the same significance to the rigid, disciplinary boundary lines that have loomed so large in the academic world.

It has proven easier to bring people from different disciplines together in an industrial environment where research is not organized by discipline but by problems, and where there has been a very different set of incentives and criteria by which the contributions of scientists are evaluated.  In the best American industrial laboratories, unlike the universities, a high value and considerable recognition are likely to go to individuals who are useful in solving the problems encountered by colleagues in fields other than their own.  The most successful American research laboratories in private industry have demonstrated that it is possible to perform research of both a fundamental and an inter-disciplinary nature in a commercial, “mission-oriented” context.  The most successful appear to have been those that have managed to create close interactions, and exchanges of information, between those responsible for performing the research, on the one hand, and those responsible for the management of production and marketing, on the other.  But it is far from clear exactly how this has been accomplished, and precisely what organizational, managerial, and incentive factors have differentiated successful from unsuccessful firms.  The subject is one that deserves a high research priority.

One point worth emphasizing is that the firms with the most outstanding industrial laboratories - Bell Labs., IBM, General Electric, duPont, Eastman Kodak - have developed excellent interfaces with university-based research precisely because they are known to be involved in basic research of high quality.  University scientists believe that they have much to learn from industrial scientists from such laboratories.  Since the flow of knowledge, in these cases, is widely understood to move in both directions, industrial scientists from distinguished laboratories are likely to be enthusiastically received on university campuses, while university scientists welcome the opportunity to observe or even to become directly involved, as consultants, in industrial research.

A related comment about the Japanese scene may be appropriate at this juncture.  It has been a common practice to point to the low level of commitment of Japanese universities to scientific research as a serious weakness, and as a potential threat to the prospects for continued expansion of Japanese technological capabilities.  Certainly, Japanese universities represent a weak link within the Japanese science and technology systems.

Nevertheless, the concentration of scientific research in Japanese private industry has certain positive, or at least redeeming aspects.  These include the problem orientation that comes so naturally in the industrial context, and the consequent weakness of the barriers to interdisciplinary research that can loom so large in the academic world.  Equally important, perhaps, is the ease with which knowledge can be transmitted between the potential “producers” of new knowledge and those responsible for its eventual industrial application, and it cannot be emphasized too strongly that knowledge is readily transmitted in both directions.

This industrial context is not ideal for the pursuit of long-term basic research.  But basic research is not always “the name of the game.”  The Japanese firm may be well-suited for producing and utilizing precisely the kind of new knowledge that is most directly relevant to providing improved industrial performance. [13]


Transfers between specialties

A further, significant category of science/technology interactions is closely related to the multi-disciplinary issues that have already been discussed, but nevertheless is sufficiently distinctive to warrant separate recognition.  To put it in the most general terms, an important determinant of both the rate and direction of scientific progress in recent decades has involved the actual transfer of concepts, methodologies, or instrumentation from one scientific discipline, or specialty, to another.

In some cases the scale of these transfers has assumed almost the appearance of an invasion of one discipline by another (perhaps “migration” is a better term, since the transfer has often involved the permanent movement of scientists from one discipline to another).  The field of chemistry has for a long time benefitted immensely from the work of physicists, whose interests in the fundamental nature of matter have provided a natural intersection of common concerns between physics and chemistry.  In the last couple of decades, the benefits to chemistry from such transfers from outside have increased considerably, in part as a result of the availability of new techniques of instrumentation.  The primary instrument, of course, has been the computer.

13. See Masahiko Aoki and Nathan Rosenberg, “The Japanese Firm as an Innovating Institution,” in T. Shiraishi and S. Tsuru (eds.), Economic Institutions in a Dynamic Society: Search for a New Frontier, Macmillan, London, 1989.

In addition, both analytical and synthetic chemistry have experienced transformations in the very nature of their research as a result of new approaches based on the contributions of physicists, mathematicians, statisticians, laser experts, materials specialists, and a formidable arsenal of new computer-controlled instruments.  One result, to which I have already referred, is the increasing capability for “designing” new pharmaceuticals instead of achieving them through crude empirical testing or experimentation.

The creative significance of these transfers to chemistry received broad recognition when the 1985 Nobel Prize in Chemistry was awarded to Herbert Hauptman (mathematician) and Jerome Karle (chemist).  They were the developers of sophisticated mathematical techniques that made it possible to deduce the three-dimensional structure of natural substances from observations based on X-ray crystallography.  The feasibility of their mathematical technique was, in turn, vastly improved by the availability of the computer:

They developed ways of actually calculating structure by analyzing the intensity of the points visible as dots in the X-ray pictures and calculating the “phase” of the atoms in the structures.  In this context, phase is an angular measurement that can vary from zero to 360 degrees.  The advent of powerful modern computers has made it possible to use the two scientists’ mathematical formulations on intensity and phase to determine quickly the three-dimensional structure of a molecule under study. [14]
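In standard crystallographic notation - supplied here as background, not drawn from the article - the obstacle the two laureates overcame is that diffraction measures intensities but not phases.  The scattering from atoms at fractional positions (x_j, y_j, z_j) combines into a structure factor

\[ F(hkl) = \sum_j f_j\, e^{2\pi i\,(h x_j + k y_j + l z_j)} = |F(hkl)|\, e^{i\varphi(hkl)}, \qquad I(hkl) \propto |F(hkl)|^{2}. \]

The measured intensity I yields only the magnitude |F|; the phase \varphi, which is required to invert the sum and recover the atomic positions, is lost.  Hauptman and Karle’s “direct methods” reconstruct the phases statistically from relationships among the measured intensities - a computation that became routine only with modern computers, as the quotation notes.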

The 1986 Nobel Prize in Chemistry, awarded to Dudley Herschbach, Yuan Lee, and John Polanyi, for their work on “reaction dynamics,” reflected some of these underlying trends:

They invented a set of tools in the 1950’s and 1960’s that helped bring both the theory and the technology of modern physics into chemistry.  Among them is the technique of using beams of molecules, fired at supersonic speeds, to study chemical reactions molecule by molecule for the first time... Like much of chemistry in the decades that followed, this work had a style that owed much to physics and depended on a broad understanding of theory. [15]

Although modern physics is probably the main “exporter” of concepts and methods to other scientific disciplines, it is by no means the only one.

14. New York Times, 17 October 1985.

15. New York Times, 16 October 1986, page 12.


The so-called “life sciences” of biology, genetics, and medicine have been heavily dependent on chemistry.  The intellectual revolution that gave birth to molecular biology had diverse roots that certainly included the contributions of scientists trained in physics, such as Max Delbruck, Leo Szilard, Francis Crick, Maurice Wilkins, and George Gamow.  But essential contributions also came from Mendelian genetics, X-ray crystallography, physical chemistry, and biochemistry. [16]

Within the realm of engineering disciplines, techniques developed in one area frequently turn out to be useful in others.  Sometimes, they turn out to be of much broader significance and applicability.  In aircraft design, a standard problem involves calculations of air flow over wings.  In solving these problems in the very early years of the industry, Ludwig Prandtl devised what has come to be essentially a new branch of mathematics - known as asymptotic perturbation theory.  Applications of that theory can be found in radar design and the study of the combustion process, but also in astronomy, meteorology, and even in biology.  Recently, asymptotic perturbation theory has been used in designing pills so as to provide for optimal timing in the controlled release of medication.
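A textbook example - not Prandtl’s own problem, but the standard classroom illustration of the method he pioneered - conveys its flavor.  For the boundary-value problem

\[ \varepsilon y'' + y' + y = 0, \qquad y(0) = 0, \quad y(1) = 1, \qquad 0 < \varepsilon \ll 1, \]

setting \varepsilon = 0 gives an “outer” solution y \approx e^{1-x} that satisfies the boundary condition at x = 1 but not at x = 0; a thin “boundary layer” of width of order \varepsilon near x = 0 repairs the discrepancy, and matching the two expansions yields the uniform approximation

\[ y(x) \approx e^{1-x} - e^{\,1 - x/\varepsilon}. \]

The same outer/inner decomposition is what Prandtl applied to the thin layer of air adjacent to a wing - and what now times the release of medication from a pill.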

One of the most powerful intellectual tools that has had extensive transfer experience in the past several decades has been information theory.  Claude Shannon, who developed information theory at Bell Labs., actually provided a generalization for calculating the maximum capacity of a communication system for transmitting error-free information. [17]  This generalization has been of great, and obvious, utility to the telephone industry, where a precise understanding of the determinants of channel capacity is central to engineering design.
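Shannon’s best-known special case - quoted here as standard background rather than from the chapter - makes the generalization concrete.  For a channel of bandwidth B corrupted by additive white Gaussian noise at signal-to-noise ratio S/N, the maximum rate of error-free transmission is

\[ C = B \log_2\!\left(1 + \frac{S}{N}\right) \ \text{bits per second}. \]

Below C, coding schemes exist that drive the error rate arbitrarily close to zero; above C, no scheme can succeed.  A telephone engineer could therefore calculate, rather than guess, how much information a given line could ever be made to carry.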

However, the theory, once it had received a rigorous formulation, turned out to be highly relevant in places very remote from the telephone system.  For Shannon’s central notion, that it is possible to give a quantitative expression to information content, had numerous ramifications.  Information theory represented a distinctively new way of thinking about a range of problems that occur in many places, and it has powerfully influenced the design of both hardware and software.  Eventually, information theory grew into a family of models of wide generality, with applications in the behavioral sciences as well as in the physical sciences and engineering.

16. See Horace F. Judson, The Eighth Day of Creation, Simon and Schuster, New York, 1979, especially pages 605-613.  For an account that emphasizes the contribution of physics, see Donald Fleming, “Emigré Physicists and the Biological Revolution,” in Donald Fleming and Bernard Bailyn, The Intellectual Migration: Europe and America, 1930-1960, The Belknap Press of Harvard University Press, Cambridge (MA), 1969, pp. 152-189.

17. Claude Shannon, “A Mathematical Theory of Communications,” Bell System Technical Journal (July 1948).


It appears also that instrumentation requirements sometimes serve as a powerful device for bringing together research scientists from separate disciplines.  X-ray crystallography played such a role in the development of molecular biology, precisely because it is, in effect, an instrument-embodied technique.  In a very different way the increasing reliance on supercomputers is serving to bring members of different disciplines together.  The impetus in this case is, to a considerable degree, the high cost of the technology and, consequently, the small number of locations where users need to convene.


Instrumentation as a production tool

Finally, there is another extremely important science/technology interaction that receives virtually no attention.  It involves movement of new instrumentation technologies, not from one scientific discipline to another, but from the status of a tool of basic research, often in universities, to the status of a production tool, or capital good, in private industry.  This is an “output” of basic research that has been of great significance in specific sectors of the economy.  In fact, instrumentation originating in the world of academic research in the years since the Second World War has been responsible for critical contributions to certain industrial technologies.  In the electronics industry, this would include instruments that are essential to the fabrication of semiconductors, such as ion-implantation technology and the scanning electron microscope. [18]

It is far from clear why this particular economic contribution of scientific research, including research of the most fundamental nature, has been so badly neglected.  In the academic world, of course, high status is usually accorded on the basis of the “purity,” or the abstractness, or the generality, of research findings.  Conversely, matters involving “hardware,” including techniques of instrumentation, are often dismissed as constituting an inferior form of knowledge, some of which may even (mirabile dictu!) turn out to be directly useful.

This sort of academic snobbery should surely have been discarded long ago, even by standards internal to this hierarchical manner of thinking, since a number of Nobel Prizes have been awarded to scientists for contributions that could be classed as hardware - the computer-aided tomography scanner, the electron microscope, and the particle accelerator.  Moreover, a casual glance at the award of Nobel Prizes in science in recent years should make it apparent how crucial it has become in the realm of scientific research to have access to the most sophisticated instrumentation available.  Much more to the present point, when the context of discussion is the economic impact of science, there is no obvious reason for failing to examine the hardware consequences of even the most fundamental scientific research.

18. For further discussion, see chapter 13, pp. 255-257.


Conclusion

The purpose of this chapter has been to raise questions about the science/technology interface by examining specific patterns of interaction at various points on that interface.  No assertion is being made that these are the most important of the existing patterns.  I merely call attention to them as being important as well as neglected.  Their significance compared to other activities that have not been discussed here will obviously have to await the results of much further research.

Several policy issues already emerge clearly:

How can organizations and incentives be created that will be conducive to high-quality interdisciplinary research?

To what extent is it reasonable to expect such research to be conducted inside individual firms, as contrasted with the resort to collaborative linkages with other firms, or with universities, in order to gain access to complementary skills and capabilities? [19]

How can fruitful interactions between scientists and technologists, as well as among scientists from different disciplines, be most effectively encouraged?

What measures can be taken to ensure that valuable findings or methodologies from any point on the science/technology interface will be transferred rapidly to other points?

It is essential that these issues not be approached in a piecemeal fashion.  The realms of science and technology must be conceived of, not as disconnected bits and pieces of human activity, but as parts of large and complex, interrelated systems.

Equally important, it is apparent that these systems differ very significantly from one country to another.  As a result, policies or institutional arrangements that work well in one country may not be readily transferable to other countries.  Systemic differences need to be taken into account, which brings us back to the importance of mapping, with which this paper began.

Finally, far more attention needs to be devoted to what determines the profitability of private spending on science and technology.  Although the linear model is no longer credible, the causal sequences that it emphasized still remain dominant in a subtle yet highly significant way, so that there is still a strong preoccupation with how research leads to economic consequences, and little attention is given to how economic factors influence the willingness to do research.  In both the United States and the United Kingdom, for example, much attention is given to the argument that a weakening commitment to R&D may be responsible for the deterioration in the competitive position of these countries in international markets.

19. For an analytical treatment of some of the underlying issues, see Ashish Arora and Alfonso Gambardella, “Complementarity and External Linkages: The Strategies of Large Firms in Biotechnology,” Journal of Industrial Economics (June 1990).

Much less noticed is the possibility that causality may also be the other way around.  In the American case, at least, her overwhelming dominance in international markets in the twenty-five years or so after the Second World War surely provided a strong incentive to commit private money to R&D, since the prospects were excellent that American firms would receive the primary rents in international markets from the development of new technologies.  Surely it is reasonable to believe that the private incentive to spend money on R&D in the United States weakened along with the growth of international competitors and the declining prospect of generating profits overseas, as well as at home, from larger R&D budgets.

In practice, far too much of the debate over R&D spending in the United States and the United Kingdom has been over the size of the public component of such spending and far too little on the determinants of private spending and on how private incentives can be strengthened.  This has been particularly unfortunate, in my view, because it is the “downstream” development spending that plays a crucial role in determining who gets to capture the potential rents generated by scientific research.

This is a point that appears to be well understood in Japan, where approximately 80 percent of total R&D spending is financed by private industry - a far higher percentage than in either the United States or the United Kingdom.  This also suggests a conclusion which, coming from an economist, will occasion no great surprise: prospects for more rapid economic growth - surely a major though not exclusive goal of science and technology policy - will depend on success in providing a structure of economic incentives and rewards that is supportive of the rapid diffusion of new technologies, once they have been developed.

Decisions to adopt new technologies are, typically, investment decisions, involving the acquisition of new capital goods.  Such decisions are therefore subject to the same sort of economic calculus that attends all investment decisions.  Indeed, precisely the same is true of the decision to commit private resources to R&D in the first place.  Science and technology policy, in this sense, is simply an aspect of economic policy-making and not an entirely separate subject.  The wrong set of economic policies can guarantee the failure of any specific set of policies directed toward the realms of science and technology, no matter how ingeniously conceived.

