Ashish Arora [a] and Alfonso Gambardella [b]
The changing technology of technological change: general and abstract knowledge and the division of innovative labour *
Research Policy
Vol. 23, 1994
523-532
Contents
2. General and abstract knowledge and innovation
3. The changing technology of technical change
4. Economic implications: towards a division of innovative labour
4.1. Factors limiting the division of innovative labour
4.2. The division of innovative labour between large and small firms
4.3. Division of innovative labour between users and producers
4.4. Patent protection in a division of innovative labour
In the past, most innovations have resulted from
empiricist procedures; the outcome of each trial yielding knowledge that could
not be readily extended to other contexts. While trial-and-error may remain the primary
engine of innovation, developments in many scientific disciplines, along with
progress in computational capabilities and instrumentation, are encouraging a
new approach to industrial research. Instead
of relying purely on trial-and-error, the attempt is also to understand the
principles governing the behaviour of objects and
structures. The result is that relevant
information, whatever its source, can now be cast in frameworks and categories
that are more universal. The greater
universality makes it possible for the innovation process to be organised in new ways: firms can specialise
and focus upon producing new knowledge, and the locus of innovation may be
spread across both users and producers. More generally, the use of general and abstract
knowledge in innovation opens up the possibility for a division of labour in inventive activity - the division of innovative labour. The
implications for public policy, especially that on
intellectual property rights, are discussed.
In The Wealth of Nations, Smith states that “improvements in
machinery... (are sometimes made by) philosophers and
men of speculation, who... are often capable of combining together the powers
of the most distant and dissimilar objects” (Smith, 1982, p. 114). In fact, little of the technical progress in
industry has stemmed from the ability to relate “distant and dissimilar objects”.
Most innovations and productivity improvements
have resulted from empiricist procedures based on trial-and-error; the outcome
of each trial yielding knowledge that could not be properly extended to other
situations and contexts. [1]
This paper argues that this state of affairs
is changing. Developments in many
scientific disciplines, along with progress in computational capabilities and
instrumentation, are encouraging a new approach to industrial research. Instead of relying purely on trial-and-error
to find what may work, the tendency is to attempt to understand the principles
governing the behaviour of objects and structures, to
‘observe’ phenomena and test hypotheses with sophisticated instruments, and to
simulate processes on computers. This is
not to
a. Heinz School, Carnegie Mellon University, 5000 Forbes Avenue, Pittsburgh, PA 15213, USA
b. Istituto di Studi Aziendali, Università di Urbino, Urbino, and IEFE, Università Bocconi, Milan, Italy
This is a
revised version of a paper prepared for a conference in honour
of Nathan Rosenberg, The Role of Technology in Economics, 8th-9th of November, 1992, Stanford University. We have benefitted
from helpful comments and suggestions from several people. In particular, we would like to thank Sergio Barabaschi, Sub Eswaran, Suresh Konda, Alma Rizzoni, Nate Rosenberg, Ed Steinmueller,
Salvo Torrisi, and Antonello Zanfei. The
paper has benefited greatly from the helpful comments of two anonymous
referees. The customary disclaimers on
errors and inadequacies apply.
1. For instance, Bessemer, who developed the steel making process named
after him, did not quite understand why or how the process worked. Therefore, when the process was first adopted
in Britain, it failed to work satisfactorily, for the ores used contained
phosphorous whilst the Bessemer process required an acidic medium (Mowery and
Rosenberg, 1989, p. 29).
suggest that industrial research can now do without physical
experiments, or that innovations arise from basic research alone. Amongst the most important contributions of
Nathan Rosenberg to our understanding of technological change is that
innovations are often initiated by signals received in the course of production
or from customers and markets, and are based on fairly tedious and (from a scientific
view point) mundane activities. Such
activities remain the primary engine of innovation.
However, as we shall argue below, relevant information for innovation,
whatever its source, can now be cast in frameworks and categories that are more
universal. The greater universality
makes it possible for the innovation process to be organised
in new ways. The opportunities for firms
to specialise and focus upon producing new knowledge
are enhanced and the locus of innovation may be spread across both users and
producers. More generally, the use of
general and abstract knowledge in innovation opens up the possibility for a
division of labour in inventive activity - the
division of innovative labour.
2. General and
abstract knowledge and innovation
Before proceeding, it is useful to clarify our terminology. We distinguish between knowledge on the one
hand, and concrete information, on the other. By the latter we mean ‘facts’ about products,
processes, and markets. Knowledge
provides the context within which information is interpreted. We shall also use the expression general and
abstract knowledge. By ‘abstract’ we
mean the ability to represent phenomena in terms of a limited number of
‘essential’ elements, rather than in terms of their ‘concrete’ features. By ‘general’ we mean knowledge that relates
the outcome of a particular experiment to the outcomes of other, more ‘distant’
experiments. [2]
In the distinction between general and abstract knowledge, and the use
of more practical, empiricist procedures, which generate concrete information,
the reader may have sensed a parallelism with the distinction between ‘science’
and ‘technology’. We have deliberately
avoided the use of the latter terminology. Apart from the contentious nature of the
distinction between the two, both science and technology (however one chooses
to define them) utilise and produce both general and
abstract knowledge and concrete information. Moreover, in both, tacit know-how and skills
are important.
For our purpose, the distinction that is important is that the use of
science or technology for economic objectives entails that one solve problems that are too complex to be adequately represented
only in abstract terms, viz. in terms of a few essential
elements. In order to come up with new
products or processes that work satisfactorily (though not necessarily
optimally) in practice, one has to delve into the complexity of problems. Put differently, general and abstract knowledge
has to be combined with concrete information, because one also has to attend to
the ‘details’ that are typically ignored by abstract representations.
Thus, industrial research has had to resort to long and
systematic experiments with objects and systems. In aeronautics, for instance, engineers have
long used wind tunnels to simulate the flying conditions of aircraft. Wind tunnels have enabled
them not only to test whether a particular design ‘works’, but also aided in
the search for a better design. Given
the lack of a general theory of aerodynamics, such long and costly experiments
have been the only reliable way to design aircraft (Mowery and Rosenberg, 1982;
Vincenti, 1990). Similarly, drug discovery has required the
laboratory syntheses of a great many molecules and systematic trials before
finding one that showed potential therapeutic effect (e.g. Gambardella, 1994). Process innovation has relied heavily on
trial-and-error. For instance, in the
design of large scale, continuous chemical processes, it has been no simple
matter to go from the lab or bench scale process to producing tonnage levels. One has lacked a sufficiently general and comprehensive
understanding to be reasonably sure that a process that worked well at the
scale of a few pounds a day would not prove to be inefficient, or even
dangerous and unworkable when
2.
Similar distinctions between knowledge and information have
been drawn by others (Nelson and Winter, 1982). The terms general and abstract knowledge and
concrete information are borrowed from Rullani and Vaccà (1987) and Di Bernardo and Rullani (1990). We are
aware of the fact that an epistemologist may find our definitions naive. However, our purpose here is to define a
working terminology that captures the essence of the phenomenon we are analysing.
producing a few tonnes a day. Consequently, the process of ‘scale up’ has
required extensive experimentation by skilled and experienced chemical
engineers (Freeman, 1968; Landau and Rosenberg, 1992).
The common feature of these examples is that each trial or experiment
yielded knowledge that was ‘local’ in the sense that what was learnt from each
test could not be readily extended to different situations or contexts. The outcome of the experiments depended upon
many variables in ways that were not properly understood. In order to be able to generalise
information from one set of experiments and to relate it to information
produced by other experiments, one needed to be able to comprehend the
phenomenon being studied in an abstract manner. Only then, could one begin to sort out the
unessential differences between situations from the important differences. For instance, random screening did not provide
many clues as to why a particular compound was effective against a certain
disease, because researchers could not associate the structure and the
properties of the molecule with organic disorders. Neither could one predict with any confidence
how the effectiveness would vary with the sex or age or other factors. Hence, one needed to conduct a great many
experiments in order to be sufficiently confident about the behaviour
of the system for a large range of parameter values.
3. The changing
technology of technical change
The use of general and abstract knowledge in industrial research has
received a great impetus from advances in three areas: theoretical understanding
of problems, instrumentation, and computational capability. The complementarity
between these three areas is apparent, and progress in all three areas is
together changing the ‘technology of technical change’. [3]
Not only can researchers test theories more rapidly and effectively
using sophisticated instruments and greater computational power, they can also
test theories that could not be tested using ‘old’ experimentation technologies
(e.g. theories about the behaviour of ‘nano’-structures). (Apart from testing of theories,
improved instrumentation can also point towards improvements in the theories
themselves.) In turn, advances in instrumentation
have benefitted from greater and cheaper
computational power. Computers are used
to control instruments, record observations, and analyse
the observations quickly and accurately. The value of computational capabilities
depends on the advances in theoretical understanding as well. The use of computer simulation requires that
engineers conceptualise problems in abstract forms. They have to formalise
them in a mathematical language, and translate the mathematical model into
software language. The ability to formalise problems in abstract terms depends critically
upon a good theoretical understanding of the problems themselves.
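As a minimal sketch of the workflow just described (our own illustration, not taken from the paper): a simple physical process is first formalised as a rate law, then translated into code and simulated step by step. The first-order decay model and all numerical values here are invented for illustration.

```python
import math

def simulate_decay(c0, k, t_end, dt):
    """Forward-Euler simulation of a first-order process dC/dt = -k*C.

    The problem is first conceptualised abstractly (a mathematical rate
    law), then discretised into update steps a computer can execute.
    """
    c, t = c0, 0.0
    while t < t_end:
        c += dt * (-k * c)  # Euler step: follow the model's rate of change
        t += dt
    return c

# Check the simulated value against the exact solution c0 * exp(-k * t).
approx = simulate_decay(c0=1.0, k=0.5, t_end=2.0, dt=0.001)
exact = math.exp(-0.5 * 2.0)
```

The sketch makes the complementarity concrete: without the abstract rate law there would be nothing to discretise, and without cheap computation the discretised model would be of little use.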
The point is that the availability of extremely cheap computational
power extends the application (and hence, the development) of theoretical
knowledge. [4] The
analysis of protein structures illustrates the complementarity
nicely. A protein chain of 150 amino
acids can give rise to 5^150 possible molecular structures, a number that is
impossible to investigate even with supercomputers. A recently developed theorem, using the
principle of energy minimisation, cut the number of
valid alternatives to 150^2 (New Scientist, 1992). This is still too large a number of
possibilities to handle with ‘pen and paper’, but not so with supercomputers. As the example shows, the value of
computational power is higher when combined with a sophisticated theoretical
understanding of the phenomenon under study, and vice versa.
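The arithmetic behind this example can be checked directly; the figure of 5 conformations per amino acid is the illustrative assumption implicit in the 5^150 count.

```python
# Search space for a hypothetical 150-residue protein chain, assuming
# 5 candidate conformations per residue (the illustrative figure above).
brute_force = 5 ** 150      # candidate structures for exhaustive search
after_theorem = 150 ** 2    # alternatives left after energy minimisation

# The theorem shrinks the search by a factor far beyond what faster
# hardware alone could deliver.
ratio = brute_force // after_theorem
```

Here `after_theorem` is 22,500, trivial for a computer, while `brute_force` runs to more than a hundred decimal digits; the reduction comes from theory, not from hardware.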
The analysis of molecular structures and their interactions with other molecules best exemplifies the benefits arising from the combination of advances in theoretical knowledge, computational and simulation capability, and new instruments (e.g. Baker, 1986). Rational design of molecules is gradually replacing random, trial-and-error experiments with a great many materials to find one or a few with desired properties.
3. We are indebted to Nathan Rosenberg for this phrase.
4. In some ways, computer simulation may be thought of as a substitute for theoretical understanding. Because simulation makes systematic exploration far more time- and cost-effective than physical tests, it may be more advantageous to perform extensive trial-and-error rather than to try to understand the problem in its generality. Nonetheless, we suggest that over the long run, the synergy between more efficient computerised trials and deeper understanding of phenomena will more than offset the substitution effect.
The drug industry is one where the rise of the new approach has been
most apparent. This industry has been
using computers to design compounds for nearly 10 years (Science, 1992). Growth of scientific understanding in
molecular biology and genetic engineering has clarified important aspects of
human metabolism and the chemical and biological action of drugs. At the same time, powerful new instruments make
it possible to examine the behaviour of proteins and
molecules. For instance, cell receptors
in the human body have particular geometrical structures, and the drug molecule
has to bind to them just as a key fits into a ‘lock’. By studying the structure of receptors,
scientists can design (typically on computer) a theoretical compound that
matches a given receptor site, and is expected to counter a
certain pathology. This narrows
laboratory research and clinical tests to families of molecules whose characteristics
are consistent with the ‘ideal’ molecule (Gambardella, 1994).
The development of new catalysts is another case in point. Much of the technological progress in the
chemical and oil industries depends upon the development of new catalysts. Zeolites, for
instance, are molecular sieves that separate different mixtures through
selective adsorption. The oil industry
has used zeolites as catalysts since the late 1950s. Until the early 1980s, zeolite
catalysts were developed through laborious empirical correlations based on
conventional solid-state chemistry and chemical engineering. Even though the discovery and initial use of zeolites did not owe a great deal to the use of general and
abstract knowledge, in recent years their development has been heavily
influenced by advances in chemistry, especially molecular sieve science, and by
the development of sophisticated instruments and analytical procedures (such as
NMR and X-ray diffraction). [5]
Zeolites have channels and cavities that
filter out substances whose molecular size is smaller than the zeolite pores, which vary in size. As a result, knowledge of the zeolite structure, and of the reactant molecules, makes it
possible to apply a rational approach to zeolite-based
catalysis. Zeolites
also exemplify the wide applicability of basic principles. ZSM-5 is a medium-pore zeolite.
It was developed in the 1960s by Mobil
to convert liquid methanol to gasoline. At
that time, chemical engineers did not know exactly how it worked, and they had
not made significant use of it for some years (Financial World, 1989). Deeper understanding of the structure and the
catalytic action of ZSM-5 has boosted its use in quite a few processes. For example, selectoforming
is a process to convert low octane components into high octane components. Selectoforming is
limited by the contamination of the desired aromatic compound by larger
paraffin molecules which are not filtered by the small-pore selectoforming
catalyst. However, complete removal of
both small and large paraffin molecules in the reformate entails some loss of
the product which is to be cracked into gas. Research showed that ZSM-5 not only separates
small and large paraffin molecules from the final product, but also prevents
the loss of product into gas, producing higher octane compounds without affecting
gasoline yields. [6]
New materials is another field where general
and abstract knowledge is being applied with good effect. Models which are based on the relationship
between molecular structure and the properties of materials are used to guide
the search for new materials. Such
models can be improved by observing the behaviour of
microscopic particles using new powerful instruments, and by modelling microscopic structures on computer. A recent study conducted at the Pacific
Northwestern Laboratory (PNL) reports the result of interviews with several
R&D managers in this field (Eberhardt et al.,
1991). The managers
5. “While the development of new catalysts was empirical fifteen years ago, research innovations in chemical sciences over the latter years are converting catalysis from an art to science” (Research Briefings, 1983, p. 79). “Before 1980 catalysts were synthesized and manually tested in bench-scale reactions to achieve a reasonable level of activity, selectivity, and life; subsequently, the synthesis was modified by trial and error to ultimately obtain economically attractive catalytic performance... Indeed chemical and physical technologies have undergone revolutionary change during the last decade. Technology has unquestionably moved to the molecular level - chemical molecular design is becoming a pervasive methodology... The catalyst and preliminary process concept are then designed on paper and certain aspects are simulated by computer” (Cusumano, 1992, pp. 5-6).
6. Cusumano (1992) reports
other such examples. See also Research
Briefings (1983), Chemical Week (1988 and 1989), Chemical
Engineering (1989), Financial World (1989).
emphasized
that ‘materials by design’ allows a more complete optimisation
of a material because its performance can be simulated under a wide variety of
operating conditions; it also enables researchers to eliminate inefficient
alternatives before conducting expensive experiments and tests. The study also indicated that materials by
design could reduce the time of exploratory research vis-a-vis
more traditional trial and error experiments from 5 to 2.5 years,
while leading to higher quality products.
The new approach is also being applied to the analysis, design and optimisation of complex systems, like production processes
or airplanes. The behaviour
of complex systems can increasingly be simulated on computers. This allows the exploration of many different
designs, far more cheaply than physical experimentation, and to optimise a number of design features before performing the
more costly physical tests. [7]
One recent important advance in this field has been the development of
the so-called ‘genetic algorithms’, which are already being used to design
turbo-jet engines such as those normally used in regular passenger airplanes
(Holland, 1992). Each alternative design
is defined by a string that identifies different aspects of the engine (e.g.
the shape of internal and external walls, or the pressure, speed and turbulence
of the air flow at different points within the cylinder). Each set of characteristics produces a certain
performance. The computer uses
‘selective adaptation’ procedures to generate progenies of ‘better’ strings. [8] It discards strings with
low performance, and mixes the characteristics of high performance strings. With sufficient computational power, engineers
can scan a great many alternatives to select a smaller set of designs
that blend in a satisfactory way a number of desirable features (even though
not yet in a ‘globally’ optimal way). [9]
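A genetic algorithm of the kind described above can be sketched in a few lines. The ‘design string’ below is a short list of numeric parameters, and the fitness function is an invented stand-in for the real engine-performance simulation (which, as footnote 8 notes, involves over a hundred variables); only the selection, crossover and mutation steps mirror the text.

```python
import random

random.seed(0)  # make the toy run repeatable

# Toy 'design string': a list of numeric parameters (standing in for wall
# shapes, pressures, flow speeds, ...). TARGET and fitness() are invented
# placeholders for an engine-performance simulation.
TARGET = [0.2, 0.8, 0.5, 0.9, 0.1]

def fitness(design):
    # Higher is better: negative squared distance from a notional optimum.
    return -sum((d - t) ** 2 for d, t in zip(design, TARGET))

def evolve(pop_size=40, generations=60):
    pop = [[random.random() for _ in TARGET] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        survivors = pop[: pop_size // 2]       # discard low-performance strings
        children = []
        while len(children) < pop_size - len(survivors):
            a, b = random.sample(survivors, 2)
            cut = random.randrange(1, len(TARGET))
            child = a[:cut] + b[cut:]          # mix characteristics (crossover)
            if random.random() < 0.2:          # occasional random mutation
                child[random.randrange(len(TARGET))] = random.random()
            children.append(child)
        pop = survivors + children
    return max(pop, key=fitness)

best = evolve()
```

After a few dozen generations the surviving strings cluster near the notional optimum: selective adaptation finds a satisfactory, though not globally optimal, blend of features without enumerating the whole design space.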
One could list other technological sectors as well
where general and abstract knowledge is being applied, but it is not truly
necessary. Although we may not have
provided conclusive evidence, we think that our examples are highly suggestive
of the way in which the nature of the innovation process is changing. Not only is this change interesting in itself,
we believe that it also has important implications for the organisation
of innovative activity, and it is to the latter that we now turn.
4. Economic
implications: towards a division of innovative
labour
4.1. Factors limiting the division of
innovative labour
The thrust of our argument is that the use of generalised
knowledge and abstraction increases the proportion of relevant information that
is articulable in universal categories. Hence, it makes a greater fraction of
information intelligible and applicable in diverse contexts. When innovations depended primarily on
trial-and-error procedures based on physical experiments, much of the knowledge
base of the firm was experience-based and tacit. The research process that was carried out
based on such firm specific knowledge produced information that was ‘local’ and
context dependent. Almost by definition,
context dependent information could not be used by an agent unfamiliar with the
context within which the information was generated (or only at a great cost). This implied that the innovation process
worked best when the innovator possessed the downstream complementary assets
needed to develop and commercialise the innovation.
As concrete information comes to be related to more general classes of
phenomena, it becomes less context dependent, and can be codified in ways that
are more meaningful and useful for other firms as well. Furthermore, as more firms utilise
general and abstract knowledge, their frameworks for organising
and representing in-
7. We would
like to thank Sergio Barabaschi for an illuminating
discussion of the use of simulation experiments and ‘virtual prototypes’ in
industry.
8. Each
individual project for a turbojet engine involves more than one hundred
variables and about fifty constraints. The
need for great computational power is self-evident.
9. A study showed that, using traditional engineering techniques, an engineer needed about 8 weeks to come up with a satisfactory design for a turbo-jet engine. A second engineer was assisted by an expert system. Using the expert system as a ‘seed’ for the genetic algorithm, a third engineer took about 2 days to develop a design with three times as many improvements as in the ‘traditional’ engineer case and one and a half times as many improvements as the engineer assisted by the expert system (Holland, 1992).
formation tend to overlap to a greater degree than in the
past. The developments in communications
technology, itself closely related with advances in computer technologies, also
reduce the costs of inter-firm communication.
To be sure, we are not saying that experience-based
learning, and tacit skills and capabilities embedded in organisational
routines, are no longer important. Not
only does the generation of general and abstract knowledge itself depend on
tacit skills and capabilities, but, as discussed in Section 2, firms cannot be
content just with understanding problems in abstract terms. In order to come up with specific new products
or processes, they have to deal with the complexity and idiosyncratic aspects
of applying knowledge to concrete problems, a process which relies heavily upon
tacit abilities and trial-and-error. Moreover,
the decision to invest in general and abstract knowledge is an economic one. It depends on the complexity of the problem,
the radicalness of the planned change, and the
diversity of sources of information. If
one is contemplating only minor modifications in a process, if one has a
relatively short term planning horizon, or if the problem is very complex, it
would be more sensible to adopt an empiricist approach rather than try to understand
the process in fundamental ways.
Our point is that the changing technology of
technical change is making the production process of new technologies more
divisible. Boundaries between various
sub-tasks can be more usefully drawn because the output from the different
tasks can be represented in terms of abstract and universal categories, and
hence be combined with each other. Within
the boundaries, idiosyncratic information, and tacit knowledge and skills would
continue to play an important role. In
sum, the body of knowledge and information for innovation has become more
‘divisible’ - pieces of knowledge, and bodies of
expertise and (tacit) information can be separated into different organizations
and re-assembled at a later stage. The
use of generalised knowledge and abstraction in
industry may thus have important implications for the boundaries of the
innovating firm, and more generally for the theory of the firm itself. We suggest that general and abstract knowledge
encourages a division of labour in innovation - the
‘division of innovative labour’. [10]
While many sectors and economic activities have
shown a fairly extensive division of labour,
innovation has typically been an exception. But what has constrained the division of
innovative labour and limited the market for
technology? Why is it that the
predominant mode of industrial research has been as a part of an enterprise which
also carries out activities such as manufacturing and distribution? The transaction cost literature provides one
perspective. Teece
(1988) argues that contracts for intangible outputs are very difficult to
specify ex-ante, and problems of
lock-in can arise due to sunk costs. In addition,
such contracts may encounter formidable problems of appropriation because of
the natural difficulties in appropriating (and hence exchanging) knowledge and
information (Nelson, 1959; Arrow, 1962). These factors raise the potential for
opportunistic behaviour.
Rather than enter into a debate about the merits of the transaction
cost perspective, a more useful approach is to ask the following. What factors determine the costs of writing feasible
and efficient contracts which would allow a division of labour
in inventive activity? We submit that
there is a ‘technical’ constraint upon the division of innovative labour which is logically distinct from the constraint
posed by opportunism. This constraint
arises from the fact that relevant information for innovation can be strongly
context-dependent; more generally, the knowledge base of a firm, which provides
the context for interpreting and utilising
information, can be highly firm-specific. Consequently, the cost of transferring
information across firm boundaries can be far higher than those of intra-firm
transfers. Equally important, context
dependent information can be contracted for only with great difficulty. However, as firms increasingly use knowledge
bases that are cast in more universal categories, and produce information that is
usable in a number of different contexts, the costs of contracting decrease. [11]
4.2. The division of
innovative labour between large and small firms
As argued by many authors, large and small firms have a ‘natural
comparative advantage’ in
10. Wes Cohen and Paul David have joint claims on the
paternity of the phrase.
11.
Clearly, there can still be problems of appropriation, and hence a division of
innovative labour will have to rely on strong
intellectual property rights. See below
for further discussion. Also, our
division of innovative labour will not be characterised by arms-length market transactions. The need for complementary tacit knowledge,
skills and assets, and the need to restrain opportunistic behaviour,
imply that it will have to rely on more complicated forms of governance
structure, like joint-ventures and collaborative alliances. This point is discussed extensively in the
burgeoning literature on innovation networks (e.g. Freeman, 1991).
different stages of the innovation process. This debate goes at least as far back as Jewkes, Sawers and Stillerman
(1958) who argued that a number of well known innovations did not originate
with the firms that are now associated with those innovations. Mueller (1962) showed that most of Du Pont’s major innovations between 1920 and 1950 came from
outside sources. He concluded that Du Pont’s comparative advantage was in large scale development
and improvement of ‘inventions’ rather than in the inventive process itself.
Arrow (1983) has argued that the organisational
flexibility of a small firm and the lower organisational
distance amongst its internal units reduce asymmetric information between
innovators and people making decisions about internal allocation of resources. Smaller firms then have greater incentives to
carry out more novel and riskier innovation projects, provided that they can
finance them. Big firms are better at
large scale development, production and marketing. Similarly, Holmstrom
(1989) has argued that different organisational structures
have differential advantages in performing hard to measure activities like
innovation vis-a-vis more routine activities (like
development and commercialisation). Holmstrom argues that
the bureaucratisation that characterises large firms is
an optimal organisational response to the need to
coordinate many tasks in large firms, but is hostile to innovation.
The empirical evidence on the question of firm size
and ‘innovativeness’ is mixed (Cohen, 1992). But if our premise is correct, the mixed
evidence should come as no surprise. A
division of innovative labour has been severely impeded by the factors discussed above. Hence, smaller firms, even though in principle
more efficient, would be less likely to invest in innovation. [12]
Not
only would an innovative small firm be faced with the difficult task of
acquiring the necessary downstream assets required for commercialisation, it
might also be handicapped by the large fixed costs of creating a firm-specific
knowledge base called for by the trial-and-error approach to innovation. If our premise is correct therefore, we should
expect to see an increase in the ‘innovativeness’ of small firms in the future.
[13]
Not only does the use of general and abstract knowledge promote a
division of innovative labour, the converse is true
as well. A more extensive division of labour would imply ‘thicker’ markets for information and
knowledge based services. With many
sellers and buyers, the uncertainty of having to rely on outside sources would
be reduced, and hence also the incentives to vertically integrate. As the markets develop, the incentives to
comprehend and articulate the knowledge base of a firm in terms of more universal
categories are likely to increase as well since this would enhance the ability
to participate in the division of innovative labour. If intellectual property rights adapt, smaller
firms can profitably invest in knowledge-based products and services. Correspondingly, larger firms could specialise in large scale development and marketing, and in
research based on lumpy assets. A
division of innovative labour would then be socially
desirable as it would allow firms to specialise in
the activities to which they are comparatively better suited. [14]
12. Cohen and Klepper (1992) provide a related
explanation. Larger firms can spread the cost of the innovation over a larger
output, and hence would invest more than smaller firms.
13. Not surprisingly, an extensive division of innovative labour between small and large firms is observed today in
the pharmaceutical industry (see, inter alia, Arora and Gambardella (1993)).
Molecular biology and genetic
engineering have supplied a generalised method for
discovering new drugs. This, along with
a strong system of intellectual property rights, has encouraged specialisation of biotech companies in upstream research,
and changed the structure of the industry from one where innovation was highly
integrated (from research to distribution) in large firms.
14. Arrow (1983) also noted the possibility of a division of labour in innovation according to firm size, although he
did not explicitly recognise the constraint posed by
the context-dependent nature of technological information: “Availability of
research outcomes on the market will reduce the incentives... [of large firms] to use only internally generated research
outcomes... There are limits to relying on the market for research inputs... But
clearly some substitution will take place.”
(Arrow, 1983, pp.
26-27.)
4.3. Division of innovative labour between users and producers
Another way of looking at the division of innovative labour is to examine it in the context of user-producer
interactions. Von Hippel
(1990) argues that a great deal of information is ‘sticky’, in the sense that
it is very costly to transfer across organisations. He suggests that economic benefits can arise
if one shifts the locus of problem-solving rather than moving the sticky
information. For example, producers of
application-specific integrated circuits (ASIC) used to acquire detailed
information about the needs of their clients before designing the customised circuits. This was
inefficient because a lot of this information was costly to transfer. ASIC manufacturers now supply their clients
with ‘generic’ components, along with user-friendly CAD-software packages so
that users can adapt the basic component to their needs. [15]
Our conceptualisation advances Von Hippel’s in that we also analyse
the factors that enable one to move the locus of problem-solving. In terms of the framework developed here, in
order for the locus of problem-solving to move across organisational
boundaries, problems must be conceptualised in
general and abstract forms. For
instance, ASIC manufacturers had to create circuits that were general enough
and flexible enough that they could be tailored to a number of different
applications. Moreover, the customisation was being done by the users themselves, many
of whom might not have any significant expertise in the design of ICs. To make this possible, the diverse applications
for ICs had to be conceptualised in a sufficiently
general (and hence also, abstract) manner. Equally important, the parameters of IC design
had to be related to the parameters of IC use (as understood by users). Only then could users adapt a generic tool to
their specific needs. (See also Steinmueller,
1992.)
In our characterisation, furthermore, the
boundaries of vertical integration are not given. Sticky, or context-dependent information, as
we prefer to call it, encourages vertical integration. For instance, despite current attempts to
develop generalised layers of software on which to
build applications, software production is still largely idiosyncratic to
specific problems. As a consequence, a
great deal of software is still produced by users themselves (Steinmueller, 1993).
4.4. Patent protection in a division of innovative labour
As noted above, the idea that the problem of appropriating rents from
innovation leads to vertical integration in the innovation process is well
established. Levin et al. (1987) found
that in most industries, other means of appropriating rents, such as secrecy and
first-mover advantage, were more important than patents.
Two points should be noted. First,
a division of innovative labour requires better defined
and vigorously enforced intellectual property rights. Specialisation in
producing information or knowledge-based services requires that the producer be
able to protect its ‘output’ against unpermitted use. A loose system of intellectual property rights
would then severely undermine the ex-ante
incentives of specialised suppliers to innovate (and
therefore to exist), as they would not really have many other means to protect
their outcomes.
Second, the very forces which encourage a division of innovative labour also enable patents (and intellectual property
rights more generally) to play a more important role in sustaining such a
division of labour. While it is tempting to see the effectiveness
of patents solely as a creature of patent policy, one must also keep in mind
the role of the underlying knowledge base. The effectiveness of patents also depends on
the extent to which the new ideas and knowledge can be articulated in terms of
universal categories, cheaply and effectively. As knowledge can be
expressed in more general and universal categories, the object of a patent, and
its scope and applicability, can be defined more precisely. Hence, general and abstract knowledge would
also enhance the ability to use patents to protect innovations. [16]
Our conceptualisation sheds a
different light on patent policy issues. Some authors maintain that broad patents, by
reducing the number of innovators, would reduce the diversity of approaches,
and hence the rate of technological progress (e.g. Merges and Nelson, 1990).
15. Similarly, instead of developing user-specific applications, software manufacturers are
trying to produce generic tools (using object-oriented programming), on which
users can build their own specialised applications.
16. Thus the much greater importance of patents in industries such as chemicals and
pharmaceuticals is to be understood as resulting partly from the better
articulated knowledge base. As our examples have suggested, these are industries where general and abstract
knowledge has been used extensively. The R&D statistics are consistent with our claim. The ratio of R&D expenditure on basic and
applied research to development is about 0.5 in Chemicals and Allied Products. The corresponding ratios for other ‘high tech’
sectors such as Electrical Machinery, Aeronautics, and Automobiles are of the
order of 0.15 (Landau and Rosenberg, 1992).
If our premise is correct, then the opposite conclusion would hold:
broader patents, by encouraging innovators that lack
the size and downstream capabilities, would increase the rate of technological
progress.
Put differently, all else held constant, narrow patents encourage vertical
integration in the generation and commercialisation
of innovations. Under weak patent regimes,
large firms will generate in-house the technologies that they commercialise, if only because of the limited supply of
external research outcomes. Broader
patents would instead encourage investments in ideas and products embodying
more generalised knowledge, by firms that are
relatively more efficient at these activities and relatively less efficient at
large scale production and commercialisation. Correspondingly, it would stimulate larger
firms to specialise in downstream activities, where they
have comparative advantage.
Clearly, broad patents will also protect the innovations of larger firms
more effectively. But because large firms
can already protect their innovations through other assets, the marginal benefit
of broader patents will be higher for small firms. Apart from its direct effect on the incentives
to innovate, a strong intellectual property rights regime would also enhance
the incentives to trade in technology. The
latter applies with particular force to the case where tacit know-how must also
be transferred. Stronger intellectual
property rights increase the efficiency of contracts for the sale of
technology, and hence, increase the incentives for firms to specialise
in the production of technology. (See Arora (1991) for details.) In sum, to the extent that different firms or
different organisational structures have differential
efficiency in different stages of the innovation process, there could be social
advantages to broader patents.
We have argued that industrial research and innovation increasingly
rely on more generalised and abstract knowledge. The ability to represent concrete information
in abstract and universal categories allows it to be used in a number of locations
and organisations, even those that are ‘distant’ from
the source. With suitable intellectual
property rights, this encourages a division of labour
in innovation, with different firms and organisations
specialising in the stages of the innovation process
where they have a comparative advantage.
The empirical evidence that we can offer may not be conclusive. Nonetheless, we believe it is highly
suggestive of the underlying trends. We
have attempted to show that if, as we suggest, the technology of technical
change is indeed changing, it will have far reaching
effects on the organisation of technical change. Thus far, the so-called ‘high tech’ industries,
biotechnology, new materials, semi-conductors and software, have shown the
greatest extent of specialisation, and an upsurge of
network-like arrangements for innovation. These are the sectors where the use of general
and abstract knowledge has been the greatest and where intellectual property
rights are well defined and better protected. If our arguments are correct, the changing organisation of inventive activity in these sectors is a
harbinger of the future.
Arora, A. and A.
Gambardella, 1993, Division of innovative
labour in biotechnology, Working Paper 93-30,
Heinz School, CMU, Pittsburgh, PA.
Arora, A. and A. Gambardella,
1992, New trends in technological change, Rivista Internazionale di Scienze Sociali 3 (July-September),
259-277.
Arora, A., 1991, Licensing tacit knowledge: intellectual property
rights and the market for know-how, Working Paper 91-35, Heinz School, CMU,
Pittsburgh, PA.
Arrow, K.,
1962, Economic welfare and the allocation of resources for invention, in: The Rate and Direction of Inventive Activity,
NBER (Princeton University Press, Princeton, NJ).
Arrow, K.,
1983, Innovation in large and small firms, in: J. Ronen
(editor), Entrepreneurship (Lexington
Books, Lexington).
Baker, W.,
1986, The physical sciences as the basis for modern
technology, in R. Landau and N. Rosenberg (editors),
The Positive Sum Strategy (National Academy Press, Washington, DC).
Chemical Engineering, 1989, Designer catalysts are all the rage, September, pp. 31-46.
Chemical Week,
1988, Catalysts ‘88: restructuring for technical clout, 29 June,
pp. 20-62.
Chemical Week,
1989, Catalysts ‘89, 28 June, pp. 24-40.
Cohen, W., 1992, Empirical studies
of innovative activity and performance, Working Paper, CMU, Pittsburgh.
Cohen, W. and
S. Klepper, 1992, The
anatomy of industry R&D intensity distributions, American Economic Review 82 (4) (September), 773-799.
Cusumano, J., 1992, Creating the future of the chemical industry: catalysis by
molecular design, in: J. Thomas and K. Zamaraev
(editors), Catalysis in the 21st Century
(Blackwell Scientific, New York).
Di Bernardo, B. and E. Rullani, 1990, Il Management e le Macchine (Il Mulino, Bologna).
Eberhardt, J., J. Young, P. Molton and J. Dirks, 1991, Technological Advancements in Instrumentation: Impacts on R&D
Productivity, Draft Report (Pacific Northwest Laboratory, Richland, WA).
Financial World,
1989, Agents of change, 5 September, pp. 44-46.
Freeman, C.,
1968, Chemical process plant: innovation and the World Market, N.I.E.R. 45, 29-51.
Freeman, C.,
1991, Networks of innovators: A synthesis of research issues, Research Policy 20, 499-514.
Holland, J., 1992, Genetic algorithms, Scientific American 267 (1), 66-72.
Holmstrom, B., 1989, Agency costs and innovation, Journal of Economic Behavior and
Organization 12, 305-327.
Jewkes, J., D. Sawers and R. Stillerman, 1959, The Sources of Invention (W.W. Norton, New York).
Landau, R.
and N. Rosenberg, 1992, Successful commercialization in the chemical process
industries, in R. Landau and N. Rosenberg (editors), Technology and the Wealth of Nations (Cambridge University Press,
Cambridge, MA).
Levin, R. et al., 1987, Appropriating the returns from industrial
research and development, Brookings
Papers on Economic Activity 3, 783-820.
Merges, R.P.,
and R. Nelson, 1990, On the complex economics of
patent scope, Columbia Law Review 90
(4), 840-916.
Mowery, D. and N. Rosenberg, 1989, Technology
and the Pursuit of Economic Growth (Cambridge University Press, Cambridge,
UK).
Mueller,
W.F., 1962, The origin of the basic inventions underlying Du Pont’s major product and process innovations, 1920 to 1950, in: R.R. Nelson (editor), The Rate and Direction of Inventive Activity
(Princeton University Press, Princeton).
Nelson, R.,
1959, The simple economics of basic scientific
research, Journal of Political Economy
67, 297-306.
Nelson, R.
and S. Winter, 1982, An Evolutionary Theory of Economic Change
(Harvard University Press, Cambridge,
MA).
New Scientist,
1992, The shape of proteins to come, 9 May, p. 16.
Rullani, E., and S. Vacca, 1987, Scienza e tecnologia nello sviluppo industriale, Economia e Politica Industriale 53, 3-41.
Steinmueller, E., 1992, The economics of flexible integrated circuit technology, Review of Industrial Organization 7, 327-349.
Teece, D., 1988,
Technological change and the nature of the firm, in: G. Dosi
et al. (editors), Technological Change
and Economic Theory (Pinter, London).
Vincenti, W., 1990, What Engineers Know and How They Know It
(The Johns Hopkins University Press, Baltimore).
Von Hippel, E., 1990, The impact of ‘sticky’
information on innovation and problem-solving, Working Paper No.
3147-90-BPS, April (MIT Sloan School of Management, Cambridge, MA).
Research Briefings, 1983, Opportunities
in Chemistry, (National Academy Press, Washington, DC).
Science, 1992,
The ascent of odorless chemistry, 17 April, Vol. 256, pp. 306-308.
Smith, W., 1991, Molecular mechanisms of aspirin action, Drug News & Perspectives 4 (6), 362-366.