In Industrial
Organization, firms in a given industry tend to follow
identifiable patterns of Conduct or behaviour in adapting and
adjusting to a changing and evolving marketplace. Key
mainstream variables include pricing, advertising and product
differentiation, capacity, legal tactics, quality of output as
well as process/product innovation. In policy terms, Conduct
constitutes the strategy of the firm.
Pricing strategy
includes the choice between short- or long-run profit
maximization as well as between single and tied goods, e.g.,
selling printers cheap but ink at a high price. Advertising is
intended to persuade consumers – final or intermediary – to buy
a particular brand. Sometimes brands are technically similar,
but advertising can differentiate them in the minds of
consumers. Capacity refers to the ratio of actual to potential
output expressed as a percentage, e.g., 80-85% capacity
is generally considered ‘full’ capacity allowing downtime for
maintenance and ‘recreation’ of workers. Legal tactics includes
the tendency to litigate or use other legal means rather than
the market to settle or foreclose disputes with consumers,
suppliers and competitors, e.g., the EULA software
agreement that limits liability for damages such as downtime suffered
by users. Quality, among other things, refers to whether the
target market is upscale or downscale, e.g., tailor
products to each customer or sell mass produced goods &
services. Process/product innovation refers to whether firms
tend to rely on the continual production and application of new
knowledge or on ‘traditional’ ‘tried-and-true’ methods and
products.
3.1 Collusion
Conduct also includes
collusive behaviour among sellers as well as buyers, especially
in oligopolistic and oligopsonistic industries. The small
number of majors makes collusion relatively easy and, while
usually illegal, it is well rooted in history.
“People of the same trade seldom meet
together, even for merriment and diversion, but the conversation
ends in a conspiracy against the public, or in some contrivance
to raise prices.”
Adam Smith, The Wealth of Nations,
1776
Such Conduct includes
price fixing, which involves an agreement to buy or sell
a good only at a fixed price and to manipulate supply and/or
demand to maintain that price. The result of such cartels
approximates the outcome of monopoly. It also includes
agreements to geographically divide up markets. Maintaining
discipline among members of the cartel is, however, often
difficult because of the incentive to cheat. It should be noted
that imperfect knowledge is involved. Cartel members know
prices are fixed but the public does not.
A recent example of such collusive behaviour
occurred in 2012 with the rate rigging scandal concerning the
Libor, the London Interbank Offered Rate. Some fifteen blue-chip
banks submit estimates of their borrowing costs, the highest and lowest
submissions are thrown out, and the average of the rest becomes the
benchmark against which riskier loans are marked up. These firms have
already been fined billions of
dollars for manipulation of the Libor rate. Such behaviour has
recently included price fixing of integrated circuits, dynamic
random access memory (DRAM) chips, liquid crystal display
panels, lysine, citric acid, graphite electrodes, bulk vitamins,
perfume and airline services in various countries around the
world. The result of collusive industrial behaviour has been
the institutionalization of government anti-trust and
anti-combines policies around the world beginning in the United
States with the Sherman Anti-Trust Act of 1890.
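To make the Libor benchmark mechanism described above concrete, the following minimal Python sketch, using made-up submission figures, shows how a trimmed average of panel-bank quotes produces the rate and why a few colluding submitters can move it.

# Illustrative sketch of a Libor-style benchmark: panel banks submit estimated
# borrowing costs, the highest and lowest quotes are discarded, and the
# remainder is averaged. All figures are hypothetical.

submissions = [2.10, 2.15, 2.12, 2.40, 2.05, 2.18, 2.11, 2.14,
               2.16, 2.13, 2.09, 2.17, 2.20, 2.08, 2.19]   # 15 hypothetical banks

def trimmed_benchmark(rates, trim=4):
    """Drop the `trim` highest and lowest quotes and average the rest."""
    middle = sorted(rates)[trim:-trim]
    return sum(middle) / len(middle)

print(f"benchmark rate: {trimmed_benchmark(submissions):.3f}%")
# A colluding bank that nudges its quote up or down can change which
# submissions survive the trim and so shift the benchmark against which
# riskier loans are marked up.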
3.2 Game Playing
As will be seen, a profit-maximizing price/quantity solution for
oligopoly cannot be found within the Standard Model of Market
Economics. To treat the indeterminacy of oligopoly,
economists, beginning with Cournot in the 1830s, have struggled
for a solution. The outcome, however, depends not only on the
decisions of a given firm but also on the reactions of its
competitors. To get around the problem, Cournot suggested a firm
should guess what competitors would do. If it guessed correctly
it would maximize profits; if not, then it would guess again and
again until it guessed right. Hardly an elegant solution!
What I call the dance of the oligopolists, with one step being
matched by a counter-move, also led in the 1950s to the Nash-Cournot
solution, which involves page upon page of mathematical equations
(the Nash Program) that generate a solution if the underlying
assumptions are correct. If not, the search for a solution
begins again.
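The following is a minimal sketch, in Python, of this guess-and-counter-guess process for a two-firm (duopoly) case, assuming a linear inverse demand curve and a constant marginal cost; all parameter values are hypothetical. Repeated best responses converge to the Nash-Cournot outcome.

# Cournot duopoly: each firm repeatedly 'guesses' its rival's output and
# best-responds, illustrating the action-reaction dance of the oligopolists.
# Demand and cost parameters are hypothetical, purely for illustration.

a, b, c = 100.0, 1.0, 10.0   # inverse demand P = a - b*(q1 + q2); marginal cost c

def best_response(q_rival):
    """Profit-maximizing output given a guess about the rival's output."""
    return max((a - c - b * q_rival) / (2 * b), 0.0)

q1, q2 = 0.0, 0.0            # arbitrary opening guesses
for _ in range(50):          # guess, observe, and guess again
    q1, q2 = best_response(q2), best_response(q1)

price = a - b * (q1 + q2)
print(f"q1 = {q1:.2f}, q2 = {q2:.2f}, price = {price:.2f}")
# Converges to the Nash-Cournot outcome q_i = (a - c) / (3b) = 30 with price 40.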
The ‘action-reaction’ nature and the complexity of oligopoly
with a variety of possible ‘profit maximizing’ outcomes led
economics to ‘spin off’ a whole new sub-field called Game
Theory. For a brief history of Game Theory please see:
An Outline of the History of Game Theory by Paul Walker
http://www.econ.canterbury.ac.nz/personal_pages/paul_walker/gt/hist.htm.
Today it is sometimes claimed that the video game industry
is larger than the motion picture and music industries
combined. Apps for smart phones are being designed around game
theory to encourage everything from weight loss and exercise to
saving. Modern corporations and the military actively engage in
game playing including role playing to anticipate outcomes of
competition, bargaining and other actions. Even the Arts are
involved in that actors are often hired by businesses,
governments, medicine, the military and other institutions to
‘role play’ in games to hone the skills of personnel. For
example, actors are used to prepare physicians for the range of
possible reactions of a patient being told they have terminal
cancer. In many ways the contemporary ethos or zeitgeist is
game playing, a sentiment summed up in the neologism ‘gamification’.
All of this grew out of economic game theory developed in response
to the indeterminacy of oligopoly.
3.3 Legal Tactics
As noted in
the Introduction, legal tactics include the tendency to litigate or
use other legal means rather than the market to settle or
foreclose disputes with consumers, suppliers and competitors,
e.g., the EULA software agreement that limits liability for damages
such as downtime suffered by users. Legal tactics embrace
contract law, non-contractual liabilities or the law of torts
as well as intellectual property rights and property rights in
general.
It is important to note
that Law is a cultural artifact that varies between countries.
Thus in the Anglosphere world of Common Law, legal persons
(bodies corporate) are assumed to enjoy the same rights as
natural persons, i.e., flesh and blood human beings.
Under the European Civil Code tradition, they do not. Similarly, torts under
Common Law are treated by precedent while under Civil Code they
are treated by principle.
Law, in all Nation-States, is made at four
levels: international, statutory, regulatory and case.
International law is made by Nation-States and International
Organizations through the treaty-making process. For our
purposes what is important is that ratifying a multilateral
instrument often requires changing domestic law.
Statutory law is made by domestic legislators in
parliaments, legislatures, congresses, etc. Regulatory law
is made by bureaucrats – domestic and international -
interpreting and implementing a statute or treaty. Case law is
made by judges – domestic and international - interpreting and
enforcing international, statutory and/or regulatory law.
Complicating matters, however, is that when
judges make Law it is by setting precedent. In the Anglosphere
this body of precedent is called the Common Law. If a similar
case was resolved in the past, a current court is bound to
follow the reasoning of that prior decision under the principle
of stare decisis. The process is called casuistry
or case-based reasoning.
If, however, a current case is different then a
judge may set a precedent binding future courts in similar
cases. Often such precedents compel legislators and bureaucrats
to change statutory and regulatory law. This is especially true
with respect to intellectual property rights like copyright.
Rapidly changing technology, among other things,
increasingly brings novel cases before the lower courts, forcing
legislators and bureaucrats to keep up or allow casuistry to run
its course. The problem is that a court decision in a specific
case can, for better or worse, establish ‘path-dependency’ for
emerging techno-economic regimes (David
1990), e.g., in biotechnology,
software, etc. This reflects the more general
psychological Law of Primacy: That which comes first
affects all that comes after. In Law it is called precedent; in
Economics it is called path dependency.
Furthermore, precedent established in one
jurisdiction may spill over into others. This is especially
true of IPR precedents set by courts in the United States.
These have great influence in other Common Law countries
including Canada. The sheer scale of the American economy
ensures that its case law is greater in volume, if not better thought
out, than in smaller jurisdictions.
Over the last few
decades State-sponsored intellectual
property rights or IPRs have increasingly become a tool of
predatory competition as opposed to an incentive for
innovation. It has been claimed that major American
corporations now spend more on the legal defense of IPRs than
on research & development. The spectacular number of cases
filed by Apple against Samsung in courts around the world is
only the tip of the iceberg. So what are IPRs?
While economics is poor at prediction it is extremely good at
ex post rationalization, e.g., it could not accurately
predict the Great Depression but can explain it very well after the
fact. Thus intellectual property rights (IPRs) have evolved
over the course of centuries (Chartrand
2011) but, as economist Paul David observed,
they have not been created “by any rational, consistent, social
welfare-maximizing public agency” (David
1992). The resulting regime is “a Panda’s
thumb”, i.e., “a striking example of evolutionary
improvisation yielding an appendage that is inelegant yet
serviceable” (David
1992).
In economic theory, IPRs today are justified by market failure,
e.g., when market price does not reflect all benefits to
consumers and all costs to producers, such as pollution
costs. These are known as external costs and benefits, i.e.,
external to market price.
IPRs, in this view, are created by the State as a protection of,
and incentive to, the production of new knowledge which
otherwise could be used freely by others (the so-called
free-rider problem). In the knowledge industries the average
total cost curve is not ‘U’ shaped but rather ‘L’ shaped. The
first unit of Windows 8 may have cost $250 million or more but
the second and all subsequent copies cost the price of a blank
disc – 99 cents. After all, knowledge is a public good. Without
enforcement of State-sponsored IPRs no rational firm would make
such an investment.
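A minimal numerical sketch of this ‘L’-shaped cost structure follows; the $250 million first-copy cost and 99-cent copy cost simply echo the figures above and are used as illustrative round numbers.

# Average total cost of a knowledge good: a large one-time development cost
# plus a near-zero cost per additional copy. Figures are illustrative only.

development_cost = 250_000_000   # first-copy (fixed) cost
copy_cost = 0.99                 # marginal cost of each additional copy

def average_total_cost(units):
    return development_cost / units + copy_cost

for units in (1, 1_000, 1_000_000, 100_000_000):
    print(f"{units:>11,} copies -> ATC = ${average_total_cost(units):,.2f}")
# Average cost falls toward the 99-cent marginal cost as output grows,
# rather than turning back up as a 'U'-shaped curve would.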
In return, the State expects creators (usually a corporate
'proprietor' of an author's copyright - the 1709/10 Statute
of Queen Anne, the first 'modern' copyright act, allowed the
full and total assignment of all of an author's rights, unlike the
French tradition [Chartrand
2011]) to make new knowledge
available and that a market will be created in which it can be
bought and sold. But while the State wishes to encourage
creativity, it does not want to foster harmful market power.
Accordingly, it builds in limitations to the rights granted to
creators. Such limitations embrace both Time and Space. Rights
are granted only with full disclosure of the new knowledge,
and only for:
- a fixed period of time, i.e., either a specified number of years
and/or the life of the creator plus a fixed number of years; and,
- fixation in material form, i.e., it is not ideas but rather their
fixation or expression in material form (a matrix) that receives
protection.
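The time limitation can be made concrete with a small sketch. The 50-year post-mortem term used below is only an assumption (the Berne Convention minimum; many jurisdictions now use 70), as is the convention that the work enters the public domain at the start of the following calendar year.

# Illustrative term-of-protection arithmetic for copyright: protection runs
# for the life of the creator plus a fixed number of years. The 50-year
# post-mortem term and the author's death year are assumptions.

death_year = 1965
term_after_death = 50

# Terms are usually counted to the end of the calendar year, so the work
# enters the public domain on 1 January of the following year.
public_domain_year = death_year + term_after_death + 1
print(f"work enters the public domain in {public_domain_year}")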
Eventually, however, all intellectual property (all knowledge)
enters the public domain where it may be used by anyone without
charge or limitation. In other words, a public good first
transformed by law into private property is transformed back
into a public good. Growth of the public domain is, in fact,
the historical justification of the short-run monopoly granted
to creators of intellectual property. In the case of
copyright, for example, the full title of the Statute of
Queen Anne is:
AN ACT for the Encouragement of Learning by
vesting the Copies of printed Books in the Authors or Purchasors
of such Copies during the Times therein mentioned.
Inside the Act, the purchaser is named the proprietor, i.e., the
printer or publisher.
Even while IPRs are in force, however, there are exceptions such
as ‘free use’, ‘fair use’ or ‘fair dealing’ under copyright.
Similarly, national statutes and international conventions
permit certain types of research using patented products and
processes. And, the Nation-State retains the sovereign right to
waive all IPRs in “situations of national emergency or other
circumstances of extreme urgency” (WTO/TRIPS 1994, Article 31b),
e.g., following the anthrax terrorist attacks in 2001 the
U.S. government threatened to revoke Bayer’s pharmaceutical
patent on the drug Cipro (BBC
News October 24, 2001).
Statutory IPRs include:
- Copyrights – protecting the expression of an idea but not the idea itself;
- Patents – protecting the function of a device or process but only after disclosure of all knowledge necessary for a person normally skilled in the art to replicate the device or process;
- Registered Industrial Designs – protecting the aesthetic or non-functional aspects of a device; and,
- Trademarks – protecting the name, reputation and good will of a Maker, Legal or Natural, as well as Marks of Origin such as Okanagan Made.
Contractual rights to knowledge include Know-How and Trade
Secrets. These take the form of non-disclosure and/or
confidentiality clauses in commercial contracts as well as
contracts of employment, e.g., Chrysler & Volkswagen Case.
Related sub-topics include Copyright & Patent Abuse, Legislative Collusion, Patent Thickets & Wars, Trolls, and a Summary Survey of Intellectual Property in the Global Village.
3.4 Price Competition
Pricing strategy
includes the choice between short- or long-run profit
maximization as well as between single and tied goods, e.g.,
selling printers cheap but ink at a high price. Strict price
competition, however, is restricted to perfect competition.
Under imperfect competition firms are price makers rather than
price takers, e.g., Microsoft's pricing of Windows from 95 to XP.
3.5 Product Differentiation
Advertising is intended to persuade consumers – final or
intermediary – to buy a particular brand. Sometimes brands are
technically similar, but advertising can differentiate them in
the minds of consumers, e.g.,
Tide vs. Cheer, effectively splitting off part of the industry
demand curve as its ‘owned’ share. In the Standard Model of
Market Economics only factual product information qualifies as a
legitimate expense. Attempting to ‘persuade’ or influence
consumer taste is ‘allocatively inefficient’, betraying the
principle of ‘consumer sovereignty’, i.e., human wants,
needs and desires are the roots of the economic process.
This mainstream view connects with consumer behaviour research
which calls this approach the ‘information processing’ model. A
consumer has a problem, a producer has the solution and the
advertiser brings them together. It is a calculatory process.
An alternative consumer behaviour school of thought, ‘hedonics’,
argues that people buy products to fulfill fantasy, e.g., people
do not buy a Rolls-Royce for transportation but rather to
fulfill a lifestyle self-image
(Holbrook & Hirschman 1982; Holbrook 1987). Thus product
placement, i.e., placing a product in a socially
desirable context, enhances sales (McCracken
1988). In this regard the proximity of
Broadway and especially off- and off-off-Broadway (the centre of
live theatre) and Madison Ave. (the centre of the advertising
world) in New York City is no coincidence. Marketeers search
the artistic imagination for the latest ‘cool thing’, ‘style’,
‘wave’, etc. Such pattern recognition is embodied in the
new professional ‘cool hunter’ (Gibson 2003). In fact,
peer-to-peer brand approval now defines consumer business success in the
age of the blog.
Take the case of advertising biotechnology. The ‘advertising &
marketing’ of GM products, specifically food vs.
medicine, highlights these divergent approaches. In reaching
out to the final consumer, GM food advertising and marketing
generally takes the form of well-researched and well-meaning
‘risk assessments’. Such cost-benefit analyses are presented to
a public that generally finds calculatory rationalism
distasteful and the concept of probability unintelligible,
e.g., everyone knows the odds of winning the lottery yet
people keep on buying tickets. It would appear that the chances
of winning are over-rated. By contrast, the even lower
probability of losing the GM ‘cancer’ sweepstakes is similarly
over-rated. Attempts have been made to place this question
within the context of known/unknown contingencies such as GM
food safety within Kuhn’s ‘normal science’ (Khatchatourians
2002). The labeling debate also illustrates the
‘information processing’ view. At a minimum it would require all GM food products to be labeled as such. At a maximum it would
require that all GM food products be traceable back to the
actual field from which they grew.
While attempts have been made to highlight the health and safety
of GM foods, little has been done to demonstrate that they
‘taste’ better. This may be the final hurdle, maybe not.
Observers have noted, however, that the GM agrifood industry has
been rather inept in its ‘communication’ with the general public
(Katz
2001). For whatever reasons, to this point in
the industry’s development, GM foods appear to feed nightmares,
a.k.a., Frankenfood, not fantasies in the mind of the final
consumer.
By contrast the ‘advertising & marketing’ of medical GM products
and services has fed the fantasies of millions with the hope for
cures to previously untreatable diseases and the extension of
life itself. Failed experiments do not diminish these hopes.
Even religious reservations appear more about tactics, e.g., the
use of embryonic or adult stem cells, rather than the strategy
of using stem cells to cure disease and extend life.
Given that intermediate rather than final demand currently feeds
the biotechnology sector, one must also consider what might be
called ‘intermediate advertising & marketing’. Such activities
are conducted by trade associations and lobbyists. The audience
is not the consumer but rather decision makers in other
industries and in government. Such associations exist at both
the national level, e.g., BIOTECanada, and the regional level, e.g.,
Ag.West Bio Inc.
Beyond advertising, another technique to achieve product
differentiation in the minds of consumers is ‘design’. Apple is
the outstanding example today. In effect design technology
involves making the best-looking thing that works. Picture
going into a computer store and seeing two technically identical
systems: one is ugly, the other attractive. Which do you buy?
Economist Robert H. Frank's economic guidebook unlocks everyday
design enigmas. An explanation of his findings is available in
a YouTube lecture at Google HQ.
What is important to realize is that product differentiation
through advertising or design requires an investment that a lean,
mean, perfectly competitive firm cannot afford. It is excess or
economic profit that allows a firm to make such investments.
3.6 Process/Product Innovation
With respect to process/product
innovation I begin with a distinction between invention
and innovation. Invention involves creating something new;
innovation involves successfully bringing it to market. To
paraphrase Edison: it is 1% inspiration (invention) and 99%
perspiration (innovation).
Process/product innovation forms part of what economist Joseph
Alois Schumpeter called
creative
destruction or the:
… process of
industrial mutation - if I may use that biological term - … that
incessantly revolutionizes the economic structure from within,
incessantly destroying the old one, incessantly creating a new …
Creative destruction is the essential fact about capitalism. It
is what capitalism consists in and what every capitalist concern
has got to live in. (p.83)
… Every piece
of business strategy acquires its true significance only against
the background of … the perennial gale of creative destruction;
it cannot be understood irrespective of it or, in fact, on the
hypothesis that there is a perennial lull. (pp. 83-84)
From this observation, and other evidence, Schumpeter concluded
that the Standard Model of Market Economics missed the point.
Competition was not about long run lowest average cost per unit
output but rather about innovation and surviving the perennial
gale of creative destruction.
In 1962, economist Robert Solow published “Technical Progress,
Capital Formation and Economic Growth” in the American
Economic Review. In it he presented what is known as the
Solow Residual. It begins with a symbolic equation for the
production function: Y = f (K, L, T) which reads:
national income (Y) is some function (f) of capital (K),
labour (L) and technological change (T).
Technological change in the standard model of Market Economics
refers to the impact of new knowledge on the production function
of a firm or nation. The content and source of that knowledge
is not a theoretical concern; what matters is its mathematical
impact on the production function.
Over the last hundred years, depending on the study, something
like 25% of growth in national income is measurably attributable
to changes in the quantity and quality of capital and labour
while 75% is the residual Solow attributed to technological
change. Yet we have no idea why some things are invented and
others not, nor why some things are successfully innovated and
brought to market while others are not. The Solow Residual is
known in the profession as ‘the measure of our economic
ignorance’. It is why I became an economist.
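A minimal growth-accounting sketch of the residual follows, assuming a Cobb-Douglas form Y = A·K^α·L^(1-α) so that technological change enters as the multiplicative term A; the share parameter and growth rates are hypothetical, chosen only to reproduce the rough 25%/75% split noted above.

# Solow residual under an assumed Cobb-Douglas production function.
# alpha and the growth rates below are hypothetical, for illustration only.

alpha = 0.3                                            # capital's share of income
g_output, g_capital, g_labour = 0.030, 0.015, 0.004    # annual growth rates

residual = g_output - alpha * g_capital - (1 - alpha) * g_labour
print(f"Solow residual: {residual:.4f} "
      f"({residual / g_output:.0%} of output growth)")
# With these numbers capital and labour account for roughly a quarter of
# growth; the remaining ~three quarters is the residual attributed to
# technological change.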
The effects of technological change in the orthodox model can be
broken out into two dichotomous but complementary categories:
disembodied & embodied and endogenous & exogenous technological
change.
Implicitly, disembodied technological change has dominated economic
thought since the beginning of the discipline. It refers to
generalized improvements in methods and processes as well as
enhancement of systemic or facilitating factors such as
communications, energy, information and transportation
networks. Such change is disembodied in that it is assumed to
spread out evenly across all existing plant and equipment in all
industries and all sectors of the economy. It is what
Victorians would have called ‘Progress’.
Also implicitly, the concept of embodied technological change
traces back to Adam Smith’s treatment of invention as the result
of the division and specialization of labour (1776). It refers
to new knowledge as a primary ingredient in new or improved
capital goods. The concept was refined and extended by Marx and
Engels (1848) in the 19th century and by Joseph Schumpeter in the 20th
with his concept of creative destruction (1942). No
attempt was made, however, to measure it until the 1950s (Kaldor
1957; Johansen 1959). And it was not until 1962 that Solow
introduced the term ‘embodied technological change’ into the
economic lexicon and, by default, disembodied change was
recognized (Solow May 1962).
Formalization of embodied technological change arguably emerged
out of ‘scientific’ research and development (R&D) during the
Second World War, followed by the post-war spread of organized
industrial R&D. This demonstrated that new scientific knowledge
could be embodied in specific products and processes, e.g.,
the transistor in the transistor radio. Conceptual development
of embodied technological change has, however, “lost its
momentum” (Romer 1996, 204). Many theorists, according to Romer,
have returned to disembodied technological change as the
force locomotive of the economy, meaning: “Technological
change causes economic growth” (Romer 1996, 204).
While embodied/disembodied refers to the form, endogenous and
exogenous refer to the source of technological change. The
source of exogenous technological change is outside the economic
process. New knowledge emerges, for example, in response to the
curiosity of inventors and pursuit of
‘knowledge-for-knowledge-sake’. Exogenous change, with respect
to a firm or nation, falls from heaven like manna (Scherer 1971,
347).
By contrast, endogenous technological change emerges from the
economic process itself - in response to profit and loss. For
Marx and Engels, all technological change, including that
emanating from the natural sciences, is endogenous. Purity of
purpose such as ‘knowledge-for-knowledge-sake’, like religion,
was so much opium for the masses cloaking the inexorable
teleological forces of capitalist economic development. The
term itself, however, was not introduced until 1966 (Lucas 1966)
as was the related term ‘endogenous technical change’ (Shell
1966).
Endogenous change is evidenced by formal industrial research and
development or R&D programs. It therefore includes what are
usually minor modifications and improvements – tinkering – to
existing capital plant and products called ‘development’
(Rosenberg & Steinmueller 1988, 230). In this way industry
continues the late medieval craft tradition of experimentation.
R&D varies significantly between firms and industries. At one
extreme, a change may be significant for an individual firm but
trivial to the economy as a whole. On the other hand, ‘enabling
technologies’ such as computers or biotechnology may radically
transform both the growth path and the potential of an entire
economy. How to sum up the impact on the economy of the
endogenous activities of individual firms remains, however,
problematic.
With respect to the Nation-State, endogenous and exogenous
technological change have a different meaning. They refer to
whether the source is internal, i.e., produced by
domestic private or public enterprise, or external to the
nation, i.e., originating with foreign sources.
Furthermore, in the 1980s a ‘New
Economic Geography’ arose, inspired by the work of Nobel Prize-winning
economist Paul Krugman (Martin & Sunley 1996). A
central feature is the ‘industrial cluster’ such as ‘Silicon
Valley’. While economies of scale and scope are available
within a single firm, external economies are available only
outside it. High-tech firms operating in the same sector benefit
from physical proximity. Such clusters, in turn, crystallize
around the University as a nucleating agent or prime attractor.
The success of Government-sponsored ‘clusters’, however, remains
problematic (Economist Oct. 11, 2007).
A key industrial example of the role
of the University as an exogenous source of technological change
is biotechnology. With the decoding of DNA, a new enabling or
transformative technology was unleashed. Its leaders are
generally University-based (Zucker et al. 1998, 293). It
is they who take new knowledge and commercialize it. It is they
who attract the best students. Often they establish new firms
within an existing cluster or start a new cluster with the
assistance of the University, which shares in patent royalties.
Many new biotech firms are in fact founded with the intent of
selling them to large established firms (Arora & Gambardella
1990, 362).
3.7 Satisficing Behaviour & the Problem of Agency
The dangers of monopoly were
a concern to Marx, whose solution was public ownership of the
means of production. The extremity of this solution fuelled
Alfred Marshall's efforts in the 1920s to set out a model of
perfect competition and demonstrate the comparative costs of
monopoly. According to Marshall, the monopolist was like a tree
in the forest; it would grow but eventually fall. Reasons
included the idea that inheritors of the monopolist's power
would be less able than the founder until eventually the firm
died.
Following a series of
Harvard Law Review articles by Adolf A. Berle, Jr. and
E. Merrick Dodd, Jr., in 1932 Berle and Gardiner C. Means published their
influential book, The Modern Corporation and Private Property.
This text established the concept of the separation of ownership and
control of the ‘modern’ corporation and laid the foundation for
John Kenneth Galbraith’s concept of the ‘technostructure’,
i.e., large firms can become self-perpetuating or ‘immortal’
through the self-genesis of management.
Findings by Berle and
Means exposed the problem of agency in the widely held public
corporations that have come to dominate the economy. The
Standard Model of Market Economics assumes a one-product,
profit-maximizing firm with the owner in the store. When ownership is
spread out through share equity the owners are not in the store but
rather hire entrepreneurial and managerial employees to run the
firm. This raises the question of whether or not the objective
function of the owners, i.e., profit maximization, is the
same as that of their agents.
In 1956 Herbert Simon
introduced the concept of satisficing vs. maximizing
behaviour. Thus managers of a widely held public corporation
have to satisfy not only the owners but also workers, customers
and the government. To do so they do not pursue profit
maximization but rather seek to satisfice, keeping all these various
stakeholders content. If successful, management is then able to satisfy
its own needs for things like corporate jets, oak-lined board
rooms and other perks of office.
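A minimal sketch, with hypothetical options and an assumed aspiration level, contrasting a maximizing search with Simon's satisficing rule of stopping at the first 'good enough' alternative:

# Maximizing vs. satisficing: the maximizer searches every option for the
# single best payoff; the satisficer accepts the first option that clears an
# aspiration level. Options and threshold are hypothetical.

options = [("plan A", 62), ("plan B", 71), ("plan C", 68),
           ("plan D", 90), ("plan E", 75)]
aspiration = 70   # 'good enough' threshold

maximizer_choice = max(options, key=lambda option: option[1])
satisficer_choice = next(option for option in options if option[1] >= aspiration)

print("maximizer picks: ", maximizer_choice)   # examines all options
print("satisficer picks:", satisficer_choice)  # stops at the first acceptable one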
3.8 International Trade: A Positive Sum Game?