Mark Blaug
Economic Theory in Retrospect
Chapter 17: A Methodological Postscript
Cambridge University Press, 5th Edition, 1996, xx-yy.
Contents
1 Falsifiability in classical economics
2 Falsifiability in neo-classical economics
3 The limitations of the falsifiability criterion in economics
4 The role of value judgements
6 Why bother with the history of economic theory?
How much does
economics explain? What are the grounds
on which economic theories have been accepted or rejected? What are the characteristics of endurable
economic ideas? What practical use is
economic knowledge? These were some of
the questions posed in the introduction to this book. Have any or all of them been answered in the
course of the text?
Since the days of Adam
Smith, economics has consisted of the manipulation of highly abstract
assumptions, derived either from introspection or from casual empirical
observations, in the production of theories yielding predictions about events
in the real world. Even if some of the
assumptions involved nonobservable variables, the
deductions from these assumptions were ultimately related to the observable
world: economists wanted to ‘explain’ economic phenomena as they actually
occur. In short, economists have always
regarded the core of their subject as ‘science’ in the modern sense of the
word: the goal was to produce accurate and interesting predictions that were,
in principle at least, capable of being empirically falsified. In practice, they frequently lost sight of
this scientific objective and the history of economics is certainly replete
with tautological definitions and theories so formulated as to defy all efforts
at falsification. But no economist
writing on methodology, whether in the nineteenth or in the twentieth century, has ever denied the relevance of the now widely
accepted demarcation rule of Popper: theories are ‘scientific’ if they are
falsifiable, at least in principle, and not otherwise. Such methodologists as Senior, J. S. Mill, Cairnes, Sidgwick, Jevons, Marshall, John Neville Keynes, Böhm-Bawerk
and Pareto frequently emphasised other matters and
undoubtedly underemphasised the problem of devising
appropriate empirical tests of theories but nothing they wrote denied
the idea that to ‘explain’ is ultimately to predict that
such and such will or will not happen.
Robbins’s Essay on
the Nature and Significance of Economic Science (1932) is frequently cited
as a prime example of the opposite tendency, emphasising
the irrelevance of empirical testing to the truth of economic theories. But the purpose of Robbins’s book was to purge
economics of value judgements. It is not clear whether Robbins really wanted
economists to abandon welfare economics altogether or merely to separate
‘positive’ from ‘normative’ economics, so as to deny scientific status to the
latter. Nor is it
clear, even after repeated reading, whether he really meant to commit himself
to ‘radical apriorism’, despite the fact that many
passages in the book do invite that interpretation. ‘Radical apriorism’
holds that economic theories are simply a system of logical deductions from a
series of postulates derived from introspection, which are not themselves subject to empirical verification. In stark contrast to radical apriorism is ‘ultra-empiricism’, which refuses to admit any
postulates or assumptions that cannot be independently verified;
ultra-empiricism, in other words, asks us to begin with facts, not assumptions.
But an ‘apriorist’
may agree that the predicted results deduced from subjective assumptions, if
not the subjective assumptions themselves, should be subject to empirical
testing. And few ‘ultra-empiricists’, no
matter how much they insist that all scientifically meaningful statements must
be falsifiable by observation, go so far as to deny any role whatever to tautologies
and identities in scientific work. The
controversy is over matters of emphasis and most economists ever since Senior
and J. S. Mill, the first methodologists of the subject, have occupied the
middle ground between ‘radical apriorism’ and
‘ultra-empiricism’.
1 Falsifiability in classical economics
Nevertheless, the striking
fact about the history of economics is how often economists have
violated both their own and later methodological prescriptions. The classical economists emphasised
the fact that the conclusions of economics rest ultimately on postulates
derived as much from the observable ‘laws of production’ as from subjective
introspection. Methodological disputes
in the classical period took the form of disagreement over the realism and relevance
of the underlying assumptions on which the whole deductive structure was built,
while everyone paid lip service to the need to check the predictions of logical
deductions against experience. The
empirical verification of economics was regarded as too simple to require
argument: it was simply a matter of ‘look and see’. But despite J. S. Mill’s authoritative
pronouncement that ‘we cannot too carefully endeavour
to verify our theory, by comparing... the results which it would have led us to
predict, with the most trustworthy accounts we can obtain of those which have
actually been realised’, no real effort was made to
test classical doctrines against the body of statistical material that had been
accumulated by the middle of the nineteenth century. The debatable issues in Ricardian
economics all hinged on the relative weight of forces making for
historically diminishing and increasing returns in the production of wage
goods. This question was capable of
being resolved along
empirical lines, given the fact that some information
on money wages and the composition of working class budgets had been made
available by the 1840s and that the concept of a price index had passed by this
time into general currency. Yet despite
the knowledge that population was no longer ‘pressing’ upon the food supply,
that ‘agricultural improvements’ were winning the race against numbers, that
the rise of productivity in agriculture was steadily reducing the real cost of
producing wage goods, the classical writers clung to a belief in the imminent
danger of natural resource scarcities.
The standard defence was to attribute every contradiction to the
strength of ‘counteracting tendencies’. In effect, the classical economists treated
certain variables that entered into their analysis as exogenously determined,
such as the rate of technical improvement in agriculture, the disposition of
the working class to practise family limitation, and
the supply of entrepreneurship. Instead
of confessing their ignorance about the exogenous variables, however, they
advanced bold generalisations about their probable
variations through time. For the most
part, they did not raise the question whether the exogenous variables were
really independently determined constants. In addition, they failed to inquire whether
the phenomena labelled ‘counteracting tendencies’
entered, as it were, as additional parameters to the original equations of
their model, or whether they in fact altered the structure of the
equations themselves. It was because the
motives for family limitation were not in fact independent of the outcome of
the race between population and the food supply that the Malthusian
theory of population predicted so poorly. It was because Ricardian
economics failed to deal with the problems of technical change in agriculture -
falling back upon the belief, denied by historical experience, that English
landlords were not ‘improvers’ - that the Corn Laws did not entail the harmful
effects that Ricardo had predicted. Had the
classical economists acted on Mill’s urging to ‘carefully endeavour
to verify our theory’ such weaknesses in the structure would have come to light
and led to analytical improvements. As
it was, the absence of any alternative theory to that of Ricardo, having equal
scope and practical significance, discouraged revisions and promoted a
defensive methodological attitude.
Marx is another case
in point. His tendency
to attribute all discrepancies between his theory and the facts to the
dialectical ‘inner contradictions’ of capitalism provided him with a perfect
safety valve against refutations. In addition, he was a past master of the
‘apocalyptic fallacy’ (see chapter 3, section 4): there were ‘laws of motion’
which were confirmed by evidence, unless of course ‘counteracting tendencies’
were at work, in which case the evidence would soon bear out the law in question.
Nevertheless, the ambiguity with which
Marx formulated his secular predictions suggests that he was well aware that
there is some weight of contrary evidence sufficient to refute any so-called
‘law’ - ‘laws of motion’ that are never verified do not deserve the
label. Thus, even Marx subscribed in the
final analysis to the methodological canon that economic theories should be
capable of being falsified; it was simply that he could not bring himself to
face up to the requirements of this canon.
2 Falsifiability in neo-classical economics
The model of perfect
competition that evolved in the heyday of the Marginal Revolution owed much to
the older welfare propositions of the loosely stated Invisible Hand type. By limiting the scope of the analysis,
however, greater rigour in model construction became
possible. The argument was typically
related to a few continuous variables and it was confined to explaining the
direction of small changes in these variables. All the growth-producing factors, such as the
expansion of wants, population growth, technical change and even the passage of
time itself, were placed in the box of ceteris paribus. The remaining system of endogenous variables was then shown to have a unique
steady-state solution. The problem of
achieving equilibrium in the first place was passed over by the method of
comparative statics: analysis usually began with an
equilibrium situation and then traced out the adjustment process to a new
stable equilibrium given a change in the value of one or more of the
parameters. Walras
saw the problem and deceived himself in thinking that he had solved it: his
concept of tâtonnement, or Edgeworth’s
analogous notion of recontracting, demonstrated in
effect that markets would attain equilibrium by one bold leap from any initial
starting point, thus effectively ruling out the disturbances created by
disequilibrium trading. Indeterminacy of
equilibrium was eliminated by excluding all interdependence among utility and
production functions, and stability of equilibrium was ensured by placing
various restrictions on the underlying functions and by abstracting from
ignorance and uncertainty. The entire
procedure was justified by the short-run purpose of the analysis, although this
did not prevent excursions into welfare economics involving long-run
considerations.
The endogenous
variables manipulated in neo-classical models were frequently incapable of being
observed, even in principle, and most of the theorems that emerged from the
analysis likewise failed to be empirically meaningful. Furthermore, the microeconomic character of
the analysis made testing difficult in view of the fact that most available
statistical data referred to aggregates: the problem of deducing macroeconomic
theorems from microeconomic propositions was not faced squarely until Keynes’s
work revealed that there was a problem. In addition, the rules for legitimately
treating certain variables as exogenous - they must be independent of the
endogenous variables in the model, or related to them in a unidirectional
manner, and they must be independent of each other - were constantly violated. It is obvious that tastes, population and
technology not only affect and are affected by the typical endogenous
variables of neo-classical models but that they affect each other in turn.
The standard excuse
for treating as exogenous variables that clearly are not exogenous is
analytical tractability and expository convenience. For a whole range of practical problems, it
is in fact a very good excuse. But
the temptation to read more significance into the analysis than is inherent in
the procedure is irresistible and most neo-classical writers succumbed to it. Ambitious propositions about the desirability
of perfect competition were laid down with insufficient scruples. Of course, it was recognised
that competition was a regulatory device of limited applicability.
Important differences between private and social
costs, the phenomenon of ‘natural monopoly’ via increasing returns to scale and
ethically undesirable distributions of income, not to mention the existence of
‘public goods’ and second-best problems, gave scope to government action. But these qualifications were grafted on,
rather than incorporated in, the competitive model. Furthermore, the growth-producing factors that
were now regarded as noneconomic in character ceased
to receive systematic analysis. Having
marked the boundaries of economics, neo-classical writers openly confessed incompetence outside that boundary and were satisfied to
throw out a few commonsense conclusions and occasionally a suggestive insight. It takes no effort of historical perspective
to realise that the second half of the nineteenth
century invited a complacent attitude to economic growth: it is only natural
that an author like Marshall should think that growth would take care of
itself, provided that ‘free’ competition supported by minimum state controls
would furnish an appropriate sociopolitical environment. Nevertheless, the result was to leave
economics without a theory of growth or development other than the discouraging
one that the long-period evolution of an economy depends largely on the
neglected noneconomic factors.
The besetting
methodological vice of neo-classical economics was the illegitimate use of microstatic theorems, derived from ‘timeless’ models that
excluded technical change and the growth of resources, to predict the
historical sequence of events in the real world. A leading example of this vice was the
explanation of the alleged constancy of the relative shares of labour and
capital by the claim that the aggregate production function of the economy is
of the Cobb-Douglas type, although the theory in question referred to
microeconomic production functions and no reasons were given for believing that
Cobb-Douglas microfunctions could be neatly aggregated
to form a Cobb-Douglas macrofunction. But we have witnessed numerous other instances
of the vice: the argument that welfare can be improved by taxing
increasing-cost industries and subsidising decreasing-cost
industries (see chapter 9, section 16; chapter 10, section 6); the theory that
conditions of monopolistic competition lead to excess capacity (see chapter 10,
section 9); the idea that existence of an equilibrium solution ensures stability
of equilibrium (see chapter 10, section 21); the view that factor payments in
accordance with marginal productivity provide a clear rule for increasing
aggregate employment in the economy and a theory of the determination of
relative shares (see chapter 11, section 9); the notion that the failure of
concentration ratios to rise in all industries shows that there is an optimum
size of firms (see chapter 11, section 17); the proposition that the capital
intensity or ‘average period of production’ of an economy is a monotonic
function of the rate of interest (see chapter 12, section 15), that capital
intensity falls at the upper turning point and rises at the lower turning point
of the business cycle because of the Ricardo Effect (see chapter 12, section
27) and that revaluation of the capital stock as a change in investment alters
the rate of interest is the key to the theory of capital accumulation (see
chapter 12, section 41); the theory that the economy tends continually to
return to a given natural rate of unemployment because deviations from it are
due to the failure of expectations to catch up with events, which failure
can only be momentary (see chapter 16, section 24); and,
lastly - the vice writ large - the view that perfect competition is a
sufficient condition for allocative efficiency (see
chapter 13, section 13).
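The first instance above, the aggregation of Cobb-Douglas microfunctions, can be checked with a short numerical sketch. The exponents and input levels below are hypothetical, chosen only for illustration; the point is simply that the sum of two Cobb-Douglas microfunctions with different exponents is not itself Cobb-Douglas.

```python
import math

# Two firms with Cobb-Douglas micro production functions whose exponents
# differ (hypothetical parameters). If aggregate output were itself
# Cobb-Douglas in firm 2's capital, the implied exponent computed below
# would be the same constant at every input level.

def firm1(k, l):
    return k**0.3 * l**0.7   # Y1 = K^0.3 * L^0.7

def firm2(k, l):
    return k**0.6 * l**0.4   # Y2 = K^0.6 * L^0.4

def aggregate(k1, l1, k2, l2):
    return firm1(k1, l1) + firm2(k2, l2)

def implied_exponent(k1, l1, k2, l2):
    """Exponent on firm 2's capital implied by doubling it:
    log base 2 of the resulting aggregate output ratio."""
    y0 = aggregate(k1, l1, k2, l2)
    y1 = aggregate(k1, l1, 2 * k2, l2)
    return math.log(y1 / y0) / math.log(2)

e_small = implied_exponent(1, 1, 1, 1)     # roughly 0.33
e_large = implied_exponent(1, 1, 10, 10)   # roughly 0.55
# The implied exponent shifts with the allocation of inputs, so the
# aggregate is not a Cobb-Douglas macrofunction.
```

The discrepancy between the two implied exponents is exactly the aggregation gap that the neo-classical argument passed over in silence.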
Since economic activity
takes place in time, can any ‘timeless’ economic theory ever hope to predict
anything? We must begin by
disenchanting ourselves of the idea that economic predictions must be
quantitative in character to qualify as scientific predictions. Clearly, the predictions of most economic
models are qualitative rather than quantitative in nature: they specify the
directions of change of the endogenous variables in consequence of a change in
the value of one or more exogenous variables, without pretending to predict the
numerical magnitude of the change. In
other words, all neo-classical economics is about the signs of first-
and second-order partial derivatives and that is virtually all it is
about.
As Samuelson put it in the Foundations
of Economic Analysis: ‘The method of comparative statics
consists of the study of the response of our equilibrium unknowns to designated
changes in the parameters... In the absence of complete quantitative
information concerning our equilibrium equations, it is hoped to be able to
formulate qualitative restrictions on slopes, curvatures, etc., of our equilibrium
equations so as to be able to derive definite qualitative restrictions upon the
responses of our system to changes in certain parameters.’ This is what he called the ‘qualitative
calculus’, that is, the attempt to predict directions of change without
specifying the magnitude of the change. Now
it is an obvious fact that the mere presence of an equilibrium solution for a
comparative static model does not guarantee that we can apply the ‘qualitative
calculus’: all the marginal equalities in the world may not add up to a
testable prediction. This is perfectly
familiar from the theory of household behaviour: whenever substitution and
income effects work in opposite directions, the outcome depends on relative
magnitudes and hence on more than the first- and second-order conditions for a
maximum. A moment’s reflection,
therefore, will show that a great many neo-classical theories are empty from
the viewpoint of the ‘qualitative calculus’; unless they are fed with more
facts to further restrict the relevant functions, they tell us only that
equilibrium is what equilibrium must be. If that is so, why have economists not
abandoned all such empty models?
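The household-behaviour example can be made concrete with a minimal numerical sketch; all magnitudes below are hypothetical and chosen only for illustration. When substitution and income effects carry opposite signs, the first- and second-order conditions fix the signs of the two components but not of their sum, so no qualitative prediction follows.

```python
# Slutsky decomposition of the response of quantity demanded to a price
# rise: total effect = substitution effect + income effect.
# All numbers are hypothetical, for illustration only.

def total_price_effect(substitution, income):
    """Sign of the total effect when the two components are opposed
    depends on relative magnitudes, which theory alone does not supply."""
    return substitution + income

# Price rise for an inferior good: substitution effect negative,
# income effect positive. Same signs of components, different magnitudes:
case_a = total_price_effect(substitution=-0.5, income=+0.3)  # negative: demand falls
case_b = total_price_effect(substitution=-0.5, income=+0.8)  # positive: demand rises (the Giffen case)
```

Both cases satisfy identical qualitative restrictions on the underlying functions, yet they yield opposite predictions, which is precisely why the ‘qualitative calculus’ runs empty here.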
3 The limitations of the falsifiability criterion in economics
In 1953, Friedman published
an essay on ‘The Methodology of Positive Economics’ which quickly generated a
methodological controversy almost as heated as that produced by Robbins’s Essay
in 1932. Friedman argued that most
traditional criticism of economic theory had scrutinised
assumptions instead of testing implications; the validity of economic theory,
he contended, is to be established, not by the descriptive ‘realism’ of its
premises, but by the accuracy of the predictions with which it is concerned. Friedman’s methodological position would seem
to be unassailable - most assumptions in economic theory involve unobservable
variables and it is meaningless to demand that such variables should conform to
‘reality’ - until it is realised that he is insisting
on predictive accuracy as the sole criterion of validity.
If a theory is rigorously
formulated to the extent of being axiomatised,
realism of
assumptions is logically equivalent to realism of implications. The trouble is that few economic theories have
been successfully axiomatised and, in general,
economic hypotheses are not tightly linked to their assumptions in an
absolutely explicit deductive chain. In
that sense, evidence from direct observation of such behavioural
assumptions as transitive preference orderings among consumers, or such
technical assumptions as the constant-returns-to-scale characteristics of the
production function, is capable of shedding additional light on a theory. But precisely because the theory is loosely
formulated, such evidence can never do more than suggest that the theory is
worth testing in terms of its falsifiable consequences. In short, Friedman is quite right to attack
the view that realism of assumption is a test of the validity of a theory
different from, or additional to, the test of predictive accuracy of implications.
At the same time, it must
be admitted that the edict: ‘test implications, instead of assumptions’, is not
very helpful by itself. The criterion of
falsifiable implications can be interpreted with different degrees of
stringency. If the predictions of a
theory are not contradicted by events, the theory is accepted with a degree of
confidence that varies uniquely with the magnitude of the supporting evidence. But, what if it is contradicted? If no alternative ‘simple’, ‘elegant’ and
‘fruitful’ theory explaining the same events is available - for these are the
grounds on which we choose between theories predicting the same consequences -
frequent contradiction will be demanded before the theory is abandoned. But what degree of frequency of contradictions
will prove persuasive? Economists abhor
a theoretical vacuum as much as nature abhors a physical one and in economics,
as in the other sciences, theories are overthrown by better theories, not
simply by contradictory facts. Since
there are few opportunities to conduct controlled experiments in the social
sciences, so that contradictions are never absolute, economists are bound to be
more demanding of falsifying evidence than, say, physicists. By the standards of accuracy applied to predictions
in the natural sciences, economics makes a poor showing and hence economists are
frequently forced to resort to indirect methods of testing hypotheses, such as
examining the ‘realism’ of assumptions or testing the implications of theories
for phenomena other than those regarded as directly relevant to a particular
hypothesis. This opens the door to the
easy criticism that economics is a failure because most of its typical assumptions
- such as transitive preferences, profit maximisation
at equal risk levels, independence of utility and production functions, and the
like - do not conform to behaviour observed in the real world. If economics could conclusively test the
implications of its theorems, no more would be heard about the lack of realism
of its assumptions. But conclusive
once-and-for-all testing or strict refutability of theorems is out of the
question in economics because all its predictions are probabilistic
ones.
Once we have accepted the
basic idea that the presence of ‘disturbing’ influences surrounding economic
events precludes absolute falsifiability of economic
theorems, it is easy to see why economics contains so many nonfalsifiable
concepts. Many economic phenomena have
not yet lent themselves to systematic theorising and
yet economists do not wish to remain silent because of some methodological
fiat that real science should consist only of falsifiable
theorems. A ‘theory’ is not to be
condemned merely because it is as yet untestable, not
even if it is so framed as to preclude testing, provided it draws
attention to a significant problem and provides a framework for its discussion
from which a testable implication may someday emerge. It cannot be denied that many so-called
‘theories’ in economics have no empirical content and serve merely as filing
systems for classifying information. To
demand the removal of all such heuristic devices and theories in the desire to
press the principle of falsifiability to the limit is
to proscribe further research in many branches of economics. It is perfectly true that economists have
often deceived themselves - and their readers - by engaging in what Leontief once called ‘implicit theorising’:
presenting tautologies in the guise of substantive contributions to economic
knowledge. But the remedy for this
practice is clarification of purpose, not radical and possibly premature
surgery.
Furthermore, it is not
always easy to draw the line between tautologies and falsifiable propositions. A theory that is ostensibly
a mere collection of deductions from ‘convenient’ assumptions, so framed as to
be nonfalsifiable under any conceivable circumstance,
may be reinterpretable as a verifiable proposition. After a hundred years of discussion,
economists are still not quite agreed as to whether the Malthusian theory of
population is nothing but a very complicated tautology that can ‘explain’ any
and all demographic events, or a falsifiable
prediction about per capita income in the event of population growth. Whatever Malthus’s
own intention, the theory can be so restated as to meet the criterion of falsifiability, in which case it has in fact been
falsified. The concept of a negatively
inclined demand curve in conjunction with an inclusive ceteris paribus clause
is not a falsifiable concept, because if quantity and price are both observed
to decline together in the absence of changes in other prices, incomes and
expectations, it is always possible to save the original proposition by the
contention that tastes have changed. But
the concept can be rendered falsifiable if we hypothesise
that tastes are stable over the relevant period of time, or that tastes change
in a predictable fashion over time. The
assumption of stable tastes is a genuine empirical hypothesis and all work on
statistical demand curves has been concerned in one way or another with testing
this hypothesis.
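As a sketch of what such statistical work involves, the following generates synthetic observations from a stable log-linear demand curve and recovers the price elasticity by ordinary least squares. Every number here is invented for illustration, and real studies must of course also control for incomes, other prices and expectations.

```python
import random

random.seed(0)

# Synthetic data from a stable log-linear demand: ln Q = 5 - 1.2 ln P + noise.
# Stability of tastes is what licenses pooling the observations; were tastes
# shifting, the fitted elasticity would drift across subsamples.
obs = []
for _ in range(200):
    ln_p = random.uniform(0.0, 2.0)
    ln_q = 5.0 - 1.2 * ln_p + random.gauss(0.0, 0.1)
    obs.append((ln_p, ln_q))

# Ordinary least squares for ln Q = alpha + beta * ln P.
n = len(obs)
mean_p = sum(p for p, q in obs) / n
mean_q = sum(q for p, q in obs) / n
beta = (sum((p - mean_p) * (q - mean_q) for p, q in obs)
        / sum((p - mean_p) ** 2 for p, q in obs))
alpha = mean_q - beta * mean_p
# beta estimates the price elasticity and should lie close to the
# true value of -1.2 under the maintained hypothesis of stable tastes.
```

A negative fitted elasticity here tests nothing by itself; the falsifiable content lies in the maintained hypothesis that the same relation holds across the whole sample period.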
The same comments apply to
the supply side. The notion of a
production function - the spectrum of all known techniques of production - is
by itself a concept so general as to be empty. Businessmen have not experienced all known
techniques and the cost of obtaining more experience with techniques is not
negligible; the vital difference for an individual firm is not between known
and unknown but between tried and untried methods of production. The convention of putting all available
technical knowledge in one box called ‘production functions’ and all advances
in knowledge in another box called ‘innovations’ has no simple counterpart in
the real world where most innovations are ‘embodied’ in new capital goods, so
that firms move down production functions and shift them at one and the same
time. Nevertheless, the concept of a
production function can be given an empirical interpretation if we hypothesise that production functions are stable. This may well be
very difficult to verify in practice but in principle
it is verifiable and work in recent years on ‘embodied’ and ‘disembodied’
capital-growth models, however inconclusive it has proved to be, has been
precisely concerned with testing the hypothesis of stable production functions.
And so the two fundamental propositions
of neo-classical price theory, to wit, positive excess demand leads to a rise
in price and an excess of price over cost leads to a rise in output, are both
capable of being falsified, despite the fact that they have frequently been
laid down as immutable laws of nature.
To drive the point home,
let the reader question whether the following familiar propositions - the list
is merely suggestive - constitute falsifiable or heuristic statements; if the
former, whether they are falsifiable in principle or in practice and, if the
latter, whether and in what sense they are defensible as fruitful points of
departure for further analysis.
1 A specific tax on an article will raise its price by
less than the tax if the elasticity of demand is greater than zero and
the elasticity of supply is less than infinity.
2 The elasticity of demand for a commodity is governed
by the degree of substitutability of that commodity in consumption.
3 The impact effect of a rise in money wages in
a competitive industry is to reduce employment.
4 In the absence of technical change, a rise in the
average capital-labour ratio of an economy causes wage rates to rise and
capital rentals to fall.
5 A laboursaving innovation is one that reduces capital’s
relative share of output at given factor prices.
6 An ‘industry’ is a group of firms whose products are
perfect or near-perfect substitutes for each other.
7 Perfect competition is
incompatible with increasing returns to scale.
8 Profit maximisation is a plausible assumption about business behaviour
because the competitive race ensures that only profit maximisers
survive.
9 An equal rise in government expenditures and
receipts will raise national income by the amount of that rise if the
community’s marginal propensity to consume is positive and less than one.
10 A tax imposed on an industry whose production
function is linearly homogeneous results in a loss of consumers’ surplus
greater than the amount of the tax receipts.
11 Increasing or diminishing returns to scale are
always due to the indivisibility of some input.
12 Price expectations are always ‘rational’ in the
sense that the expected mean value of the probability distribution of forecasted
prices is identical to the mean value of the probability distribution of actual
prices.
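For proposition 1, a minimal sketch with linear demand and supply schedules (all slopes and intercepts hypothetical) exhibits the claimed incidence result: a specific tax t raises the consumer price by t * b/(b + d), which is less than t whenever the demand slope d is positive and the supply slope b is finite.

```python
# Demand: Q = a - d*P.  Supply with a specific tax t per unit levied on
# sellers: Q = -c + b*(P - t).  All parameter values are illustrative.

def equilibrium_price(a, d, b, c, t):
    """Market-clearing consumer price, from a - d*P = -c + b*(P - t)."""
    return (a + c + b * t) / (b + d)

p_before = equilibrium_price(a=100, d=2.0, b=3.0, c=10, t=0.0)
p_after = equilibrium_price(a=100, d=2.0, b=3.0, c=10, t=5.0)
rise = p_after - p_before   # t * b/(b + d) = 5 * 3/5 = 3, less than the tax of 5

# Limiting case: perfectly inelastic demand (d = 0) shifts the whole tax
# onto buyers, so the consumer price rises by exactly t.
full_shift = (equilibrium_price(a=100, d=0.0, b=3.0, c=10, t=5.0)
              - equilibrium_price(a=100, d=0.0, b=3.0, c=10, t=0.0))
```

The algebra shows the proposition is falsifiable in principle; whether the elasticities it presupposes can be measured well enough to test it in practice is the harder question the list is meant to raise.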
An hour spent thinking
about these propositions will convince anyone that it is not easy to make up
one’s mind whether particular economic theories are falsifiable or not; it is
even more difficult to know what to make of these theories that are not
falsifiable; and as for the ones that are indeed falsifiable, it is still more
difficult to
think of appropriate methods of putting them to the test. In short, empirical testing may be the heart of
economics but it is only the heart. [1]
4 The role of value judgements
Even if all
economics could be neatly divided into testable and untestable
theories and even if unanimous agreement had been obtained on the validity of
the testable theories, we would still have to assess their significance or
relevance. This introduces the
problem of normative as distinct from positive economics. After a series of attacks on utilitarian
welfare economics, a new Paretian welfare economics
was erected in the 1930s that purported to avoid interpersonal comparisons of
utility. ‘Scientific’ welfare economics
has lately come in for its share of destructive criticism and some economists
have echoed once again the old Seniorian cry that
economics should be wholly ‘positive’ in character. But whatever we may think of modern welfare
economics, there can be no doubt that the desire to evaluate the performance of
economic systems has been the great driving force behind the development of economic
thought and the source of inspiration of almost every great economist in the
history of economics.
Indeed, it would be
difficult to imagine what economics would be like if we succeeded in
eliminating all vestiges of welfare economics. For one thing, we would never be able to
discuss efficient allocation of resources, for the question of efficient
allocation of scarce means among competing ends cannot even be raised without a
standard of evaluation. The fact that
the price system is a particular standard of evaluation, namely, one that
counts every dollar the same no matter whose dollar it is, should not blind us
to the fact that acceptance of the results of competitive price systems is a
value judgement. The price system is an
election in which some voters are allowed to vote many times and the only way
people can vote is by spending money. Economists
are constantly engaged in making the fundamental value judgement that only
certain types of individual preferences are to count and, furthermore, to count
equally. We all know, of course, why
economics has confined its attention to those motives for action that can be
evaluated with 'the measuring rod of money' [1], but the fact remains that value judgements are involved at the very foundation of the science.

[1] A few words about a subject like psychoanalysis will show that the difficulties of applying the falsifiability criterion are not confined to economics. Is psychoanalysis a science or merely a psychic poultice for the rejects of industrial civilisation? If it is a science, are its leading concepts - the Oedipus Complex; the division of the mind into id, ego and superego; sublimation; repression; transference; and the like - falsifiable? Despite the fact that psychoanalysis is now almost a century old, there is still very little agreement on these questions either among analysts or among critics of psychoanalysis. In one sense, the situation in psychoanalysis is much worse than in economics. At least economists do agree that economics is a science and that its principles must ultimately stand up to scientific testing. Psychoanalysts, however, sometimes argue that what Freud tried to do was not to explain neurotic symptoms in terms of cause and effect but simply to make sense of them as disguised but meaningful communication; psychoanalysis is, therefore, an art of healing and must be judged in terms of its success in curing patients. Even so, there has been surprisingly little research on psychoanalytic 'cures', and, of course, it is difficult to see how psychoanalysis could cure patients if its interpretations of neurotic behaviour did not somehow correspond with reality. At any rate, it would be fair to say that the status of the falsifiability criterion in economics is about halfway between its status in psychoanalysis and its status in nuclear physics.
If economists are
necessarily committed to certain value judgements at
the outset of analysis, how can it be claimed that economics is a science? This innocent question has been productive of
more methodological mischief than any other posed in this chapter. Ever since Max Weber attempted to settle this
question by defining the prerequisites of ethical neutrality in social science,
there has been an endless debate on the role of value judgements
in subjects like sociology, political science and economics. Critics of economics have always been
convinced that the very notion of objective economics divorced from value judgements is a vain pretence. Working economists, on the other hand, more or
less aware of their own value judgements, and very
much aware of the concealed value judgements of other
economists with whom they disagree, never doubted that the distinction between
positive and normative economics was as clear-cut as the distinction between
the indicative and imperative mood in grammar. But how can there be such total disagreement
on what appears to be a perfectly straightforward question?
The orthodox Weberian position on wertfrei
social science is essentially a matter of logic: as David Hume taught us,
‘you can’t deduce ought from is’. Thus,
the descriptive statements or behavioural hypotheses
of economics cannot logically entail ethical implications. It is for this reason that J.N. Keynes, the
leading neo-classical methodologist, could write as long ago as 1891: ‘the
proposition that it is possible to study economic uniformities without passing
ethical judgements or formulating economic precepts
seems in fact so little to need proof, when the point at issue is
clearly grasped, that it is difficult to say anything in support of it that
shall go beyond mere truism’. Nevertheless,
time and time again it has been claimed that economics is necessarily value-loaded
and that, in Myrdal’s words, ‘a “disinterested social
science” has never existed and, for logical reasons, cannot exist’. When we sort out the various meanings that
such assertions carry, they reduce to one or more of the following
propositions: (1) the selection of questions to be investigated by economics
may be ideologically biased; (2) the answers that are accepted as true answers
to these questions may be likewise biased, particularly since economics abounds
in contradictory theories that have not yet been tested; (3) even purely
factual statements may have emotive connotations and hence may be used to persuade
as well as to describe; (4) economic advice to political authorities may be
value-loaded because means and ends cannot be neatly separated and hence policy
ends cannot be taken as given at the outset of the exercise; and (5) since all practical
economic advice involves interpersonal comparisons of utility and these are not
testable, practical welfare economics almost certainly involves value judgements. Oddly
enough, all of these assertions are perfectly true but they do not affect the
orthodox doctrine of value-free social science in any way whatsoever.
Proposition (1) simply
confuses the origins of theories with the question of how they may be
validated. Schumpeter’s History of
Economic Analysis continually reminds the reader that all scientific theorising begins with a ‘Vision’ - ‘the pre-analytic
cognitive act that supplies the raw material for the analytic effort’ - and in
this sense science is ideological at the outset. But that is quite a different argument from
the one that contends that for this reason the acceptance or rejection of
scientific theory is also ideological. Similarly,
both propositions (1) and (2) confuse methodological judgements
with normative judgements. Methodological judgements
involve criteria for judging the validity of a theory, such as levels of
statistical significance, selection of data, assessment of their reliability
and adherence to the canons of formal logic, all of which are indispensable in
scientific work. Normative judgements, on the other hand, refer to ethical views about
the desirability of certain kinds of behaviour and certain social outcomes. It is the latter which alone are said to be
capable of being eliminated in positive science. As for propositions (3) and (4), it may be
granted that economists have not always avoided the use of honorific
definitions and persuasive classifications. Nor have they consistently refused to
recommend policy measures without first eliciting the policy maker’s preference
function. But these are abuses of the
doctrine of value-free economics and do not suffice to demonstrate that
economics is necessarily value-loaded. We
conclude that when economists make policy recommendations, they should
distinguish as strongly as possible between the positive and the normative
bases for their recommendations. They
should also make it clear whether their proposals represent second-best
compromises or concessions to considerations of political feasibility. But they should not refuse to advise simply
because they do not share the policy maker’s preference function and they
should stoutly resist the argument that economic advice depends entirely on the
particular economist that is hired.
Proposition (5) deserves separate comment. Welfare economics, whether pure or applied,
obviously involves value judgements, and, as we noted
earlier, the idea of value-free welfare economics is simply a contradiction in
terms. This question would never have
arisen in the first place if the new Paretian welfare
economics had not adopted the extraordinary argument that a consensus on
certain value judgements renders these judgements ‘objective’; apparently, the only value judgements that fail to meet this test involve
interpersonal comparisons of utility and these were therefore banned from the
discussion.
Despite obeisance to the
concept of ‘positive’ economics and the principle of verifying predictions by
submitting them to evidence, most economists who have had qualms about the
value of received doctrine have stilled these qualms, not by searching for
tangible evidence of the predictive power of economic theory, but by reading
the substantive contributions of leading critics of orthodox analysis. Bad theory is still better than no theory at
all and, for the most part, critics of orthodoxy had no alternative
construction to offer. One obvious exception to this statement is the Marxist critics. Another possible exception is the American Institutionalists. Indeed, no discussion of methodology in
economics is complete without a mention of this greatest of all efforts to
persuade economists to base their theories, not on analogies from mechanics,
but on analogies from biology and jurisprudence.
‘Institutional economics’,
as the term is narrowly understood, refers to a movement in American economic thought associated with such
names as Veblen, Mitchell and Commons. It is no easy matter to characterise
this movement and, at first glance, the three central figures of the school
seem to have little in common: Veblen applied an
inimitable brand of interpretative sociology to the working creed of businessmen;
Mitchell devoted his life to the amassing of statistical data, almost as an end
in itself and Commons analysed the workings of the
economic system from the standpoint of its legal foundations. More than one commentator has denied that
there ever was such a thing as ‘institutional economics’, differentiated from
other kinds of economics. But this is
tantamount to asserting that a whole generation of writers in the interwar
years deceived themselves in thinking that they were rallying around a single
banner. Surely, they must have united
over certain principles?
If we attempt to delineate
the core of ‘institutionalism’, we come upon three main features, all of which
are methodological: (1) dissatisfaction with the high level of abstraction of
neo-classical economics and particularly with the static flavour
of orthodox price theory; (2) a demand for the integration of economics with
other social sciences, or what might be described as faith in the advantages of
the interdisciplinary approach; and (3) discontent with the casual empiricism
of classical and neo-classical economics, expressed in the proposal to pursue
detailed quantitative investigations. In
addition, there is the plea for more ‘social control of business’,
to quote the title of J. M. Clark’s book, published in 1926; in other words, a
favourable attitude to state intervention. None of the four features is found in equal
measure in the works of the leading institutionalists.
Veblen cared
little for digging out the facts of
economic life and was not fundamentally opposed to the abstract deductive
method of neo-classical
economics. Moreover, he refused to admit
that the work of the German Historical School constituted scientific economics.
What he disliked about orthodox
economics was not its method of reaching
conclusions, but its underlying
hedonistic and atomistic conception of human nature - in short, the Jevons-Marshall theory of consumer behaviour. Moreover, he dissented vigorously from the
central implication of neo-classical welfare economics that a perfectly competitive
economy tends, under certain restricted conditions, to optimum results. This amounted to teleology, he argued, and
came close to an apology for the status quo. Economics ought to be an evolutionary science,
Veblen contended, meaning an inquiry into the genesis
and growth of economic institutions; the economic system should be viewed, not
as a ‘self-balancing mechanism’, but as a ‘cumulatively unfolding process’. He defined economic institutions as a complex
of habits of thought and conventional behaviour; it would seem to follow, therefore, that ‘institutional
economics’ comprised a study of the social mores and customs that become crystallised into institutions. But what Veblen
actually gives the reader is Kulturkritik, dressed
up with instinct psychology, racist anthropology and a flight of telling
adjectives: ‘conspicuous consumption’, ‘pecuniary emulation’, ‘ostentatious
display’, ‘absentee ownership’, ‘discretionary control’ - these are just a few
of Veblen’s terms that have passed into the English
language. It was a mixture so unique and
individual to Veblen that even his most avid
disciples were unable to
extend or develop it. Books like The Theory of the
Leisure Class (1899) and The Theory of Business Enterprise (1904)
appear to be about economic theory but they are actually interpretations of the
values and mores of the
‘captains of industry’.
To fully appreciate the
difficulty of evaluating Veblen’s ideas, let us take
one striking example. No matter what
book of Veblen we open, we find the idea that life in
a modern industrial community is the result of a polar conflict between
‘pecuniary employments’ and ‘industrial employments’, or ‘business enterprise’
and ‘the machine process’, or ‘vendibility’ and ‘serviceability’, that is,
making money and making goods. There is
a class struggle under capitalism, not between capitalists and proletarians,
but between businessmen and engineers. Pecuniary
habits of thought unite bankers, brokers, lawyers and managers in a defence of private acquisition as the central principle of
business enterprise. In contrast, the
discipline of the machine falls on the workmen in industry and more especially
on the technicians and engineers that supervise them. It is in these terms that Veblen
describes modern industrial civilisation. As we read him, we have the feeling that
something is being ‘explained’. Yet what
are we really to make of it all? Is it a
contrast between subjective and objective criteria of economic welfare? Is it a plea to abandon the emphasis on
material wealth, implying in the manner of Galbraith that we would be better
off with more public goods and less trivia? Is it
a demonstration of a fundamental flaw in the price system? Is it a call for a technocratic revolution? There is evidence in Veblen’s
writings for each of these interpretations but there is also much evidence
against all of them. Furthermore, Veblen never tells us how to find out whether his
polarities explain anything at all. It
is not simply that he never raised the question of how his explanations might
be validated but that he is continually hinting that a description is a theory,
or, worse, that the more penetrating is the description, the better is the
theory.
Mitchell, on the other
hand, was a thinker of a different breed. He showed little inclination for
methodological attacks on the preconceptions of orthodox economics and eschewed
the interdisciplinary approach. His
‘institutionalism’ took the form of collecting statistical data on the notion
that these would eventually furnish explanatory hypotheses. He was the founder of the National Bureau of
Economic Research and the chief spokesman of the concept that has been
uncharitably described as ‘measurement without theory’.
Commons alone wrote a book
specifically entitled Institutional Economics (1934) which, together
with his Legal Foundations of Capitalism (1924), analysed
the ‘working rules’ of ‘going
concerns’ that governed ‘individual transactions’; ‘transactions’, ‘working
rules’, ‘the going concern’, these were the building blocks of his system. In his own day, Commons was much better known
as a student of labour
legislation. His theoretical writings
are as suggestive as they are obscure and few commentators have succeeded in
adequately summarising them.
Thus,
despite certain common tendencies, the school of ‘institutional
economics’ was never more than a tenuous inclination to dissent from orthodox
economics. This may explain why the phrase itself has
degenerated into a synonym for ‘descriptive economics’, a sense in which it may
be truly said: ‘we are all institutional economists
now’. Of course, if we are willing to recast
our terms and to include in our net all those that have contributed to
‘economic sociology’ - which Schumpeter regarded as one of the four fundamental
fields of economics, the other three being economic theory, economic history
and statistics - we would have to treat Marx, Schmoller,
Sombart, Max Weber, Pareto and the Webbs, to cite only a few, as ‘institutional economists’. It has been said that if economic analysis
deals with the question of how people behave at any time, ‘economic sociology’
deals with the question of how they come to behave as they do. Economic sociology, therefore, deals with the
social institutions that are relevant to economic behaviour, such as
governments, banks, land tenure, inheritance law, contracts
and so on. Interpreted in this way,
there is nothing to quarrel with. But
this is hardly what Veblen, Mitchell and Commons
thought they were doing. Institutional
economics was not meant to complement economic analysis as it had always been
understood but to replace it.
There are few economists
today who would consider themselves disciples of Veblen,
Mitchell and Commons, although there is an Association for Evolutionary Economics, publishing its own journal,
the Journal of Economic Issues, which is determined to revitalise the spirit of the founding fathers of American
institutionalism. Nevertheless, the institutionalist movement ended for all practical purposes
in the 1930s. This is not to deny that
there were lasting influences. Mitchell’s
contribution to our understanding of the business cycle and in particular to
the revolution in economic information that separates twentieth- from
nineteenth-century economics is too obvious to call for comment. There is a renewed interest in evolutionary
economics and “the new institutional economics” (property rights, transaction
costs, etc.), which holds out great promise. But the old, institutionalist
economics of Veblens and Commons never did supply a
viable alternative to neo-classical economics and for that reason, despite the
cogency of much of its criticism of orthodoxy, it gradually faded away. The moral of the story is simply this: it
takes a new theory, and not just the destructive exposure of assumptions or the
collection of new facts, to beat an old theory.
6 Why bother with the history of economic
theory?
There are no simple rules
for distinguishing between valid and invalid, relevant and irrelevant theories
in economics. The criterion of falsifiability can separate propositions into positive and
normative categories and thus tell us where to concentrate our empirical work. Even normative propositions can often be shown
to have positive underpinnings, holding out the prospect of eventual agreement
on the basis of empirical evidence. Nevertheless,
a core of normative theorems always remains for which empirical testing is
irrelevant and immaterial. Moreover,
there is an undetermined body of economic propositions and theorems which
appear to be about economic behaviour but which do not result in any
predictable implications about that behaviour. In short, a good deal of received doctrine is
metaphysics. There is nothing wrong with
this, provided it is not mistaken for science. Alas, the history of economics reveals that
economists are as prone as anyone else to mistake chaff for wheat and to claim
possession of the truth
when all they possess are intricate series
of definitions or value judgements
disguised as scientific rules. There is
no way of becoming fully aware of this tendency except by studying the history
of economics. To be sure, modern
economics provides an abundance of empty theories parading as scientific predictions
or policy recommendations carrying concealed value premises. Nevertheless, the methodological traps are so
subtle and insidious that the proving ground cannot be too large. One justification for the study of the history
of economics, but of course only one, is that it
provides a more extensive ‘laboratory’ in which to acquire methodological
humility about the actual accomplishments of economics. Furthermore, it is a laboratory that every
economist carries with him, whether he is aware of it or not. When someone claims to explain the
determination of wages without bringing in marginal productivity, or to measure
capital in its own physical units, or to demonstrate
the benefits of the Invisible Hand by purely objective criteria, the average
economist reacts almost instinctively but it is an instinct acquired by the
lingering echoes of the history of the subject. Why bother then with the history of economic
theory? Because it is better to know
one’s intellectual heritage than merely to suspect it is deposited somewhere in
an unknown place and in a foreign tongue. As T.S. Eliot put it: ‘Someone
said: “The dead writers are more remote from us because we know so much
more than they did.” Precisely, and they
are that which we know.’