Introduction to the Series
This series consists of a number of hitherto unpublished studies, which are in-
troduced by the editors in the belief that they represent fresh contributions to
economic science.
The term ‘economic analysis’ as used in the title of the series has been adopted
because it covers both the activities of the theoretical economist and the research
worker.
Although the analytical methods used by the various contributors are not the
same, they are nevertheless conditioned by the common origin of their studies,
namely theoretical problems encountered in practical research. Since, for this
reason, business cycle research and national accounting, research work on be-
half of economic policy, and problems of planning are the main sources of the
subjects dealt with, they necessarily determine the manner of approach adopted
by the authors. Their methods tend to be ‘practical’ in the sense of not being too
far removed from application to actual economic conditions. In addition, they are
quantitative.
It is the hope of the editors that the publication of these studies will help to
stimulate the exchange of scientific information and to reinforce international
cooperation in the field of economics.

The Editors
Acknowledgements
A book of this scope and magnitude can only be made possible by the generous
contributions of time and effort by many. The editors, Vivek Ghosal and Johan
Stennek, express their deep gratitude to Joe Harrington for providing invaluable
help and guidance during many stages of the development of this volume as
well as very generously agreeing to contribute to the first chapter. Apart from
Chapter 1, all the papers in this volume were subject to a single-blind refereeing
process meeting international standards. We thank each of the authors in this
volume for their contributions and for showing patience and understanding as
the chapters went through the extensive refereeing and editorial process. We
express our gratitude to Russell Pittman (Antitrust Division, U.S. Department of
Justice), Michele Polo (University of Bocconi) and Lucía Quesada (University
of Wisconsin, Madison) for providing us with valuable comments on some of
the specific contributions included in this volume. Finally, we thank Joy Ideler,
Jeroen Loos, Tomas Martišius, Lisa Muscolino, Mark Newson and Valerie Teng
at North-Holland for providing expert help and guidance through the different
stages of the development of this volume.
List of Contributors
Numbers in parentheses indicate the pages where the authors’ contributions can
be found.

Cécile Aubert (123) Université Paris Dauphine (EURIsCO), Dpt. Economie Appliquée, 75 016 Paris, France. E-mail: [email protected]
Timothy J. Brennan (417) Professor of Public Policy and Economics, Univer-
sity of Maryland Baltimore County; Senior Fellow, Resources for the Future.
E-mail: [email protected]
Paolo Buccirossi (81) Lear—Laboratorio di economia, antitrust, regolamen-
tazione. E-mail: [email protected]
Joe Chen (59) Faculty of Economics, University of Tokyo, Tokyo, 113-0033
Japan. E-mail: [email protected]
Jay Pil Choi (241) Department of Economics, Michigan State University, East
Lansing, MI 48824. E-mail: [email protected]
John M. Connor (177) Purdue University, West Lafayette, IN. E-mail:
[email protected]
Tomaso Duso (303) Humboldt University, Berlin, and WZB, Germany. E-mail:
[email protected]
Antoine Faure-Grimaud (383) London School of Economics, FMG and
CEPR, UK.
Jérôme Foncel (349) GREMARS, University of Lille 3, France.
Joseph Francois (463) Tinbergen Institute, Rotterdam, and CEPR.
Sven-Olof Fridolfsson (287) Research Institute of Industrial Economics, Stock-
holm, Sweden. E-mail: [email protected]
Luke Froeb (369) Owen Graduate School of Management, Vanderbilt Univer-
sity, Nashville, TN 37203. E-mail: [email protected]
Vivek Ghosal (1) Georgia Institute of Technology, Atlanta, GA, USA.
Klaus Gugler (303) University of Vienna, Austria. E-mail: klaus.gugler@
univie.ac.at
Joseph E. Harrington Jr. (1, 59) Johns Hopkins University, Baltimore, MD,
USA. E-mail: [email protected]
Henrik Horn (259) Research Institute of Industrial Economics, Stockholm, and
CEPR, London.


Henrik Horn (463) IIES Stockholm University, The Research Institute of In-
dustrial Economics (IUI), and CEPR, Sweden.
Marc Ivaldi (217, 349) University of Toulouse, EHESS and IDEI, Toulouse,
France.
Bruno Jullien (217) IDEI, Toulouse, France.
William E. Kovacic (149) U.S. Federal Trade Commission.
Robert C. Marshall (149) Pennsylvania State University, USA.
David Martimort (383) Université de Toulouse, and IUF, France.
Stephen Martin (25) Department of Economics, Purdue University, West
Lafayette, IN 47907-2056, USA. E-mail: [email protected]
Leslie M. Marx (149) Duke University, USA.
R. Preston McAfee (453) Humanities and Social Sciences, California Institute
of Technology, Pasadena, CA 91125. E-mail: [email protected]
Hugo M. Mialon (453) Department of Economics, Emory University, Atlanta,
GA 30322-2240. E-mail: [email protected]
Sue H. Mialon (453) Department of Economics, University of North Dakota,
Grand Forks, ND 58202. E-mail: [email protected]
Valérie Rabassa (349) European Commission, Chief Economist Office, Direc-
torate General for Competition, Belgium.
Matthew E. Raiff (149) Bates White, LLC.
Patrick Rey (217) IDEI, Toulouse, France.
Paul Seabright (217) IDEI, Toulouse, France.
Giancarlo Spagnolo (81) Stockholm School of Economics, Consip Research
Unit, and CEPR. E-mail: [email protected]
Johan Stennek (1, 259) Research Institute of Industrial Economics, Stockholm,
and CEPR, London.
Jean Tirole (217) IDEI, Toulouse, France.
Steven Tschantz (369) Department of Mathematics, Vanderbilt University,
Nashville, TN 37203. E-mail: [email protected]
Gregory J. Werden (369) U.S. Department of Justice, Washington, DC 20530.
E-mail: [email protected]
Burcin Yurtoglu (303) University of Vienna, Austria. E-mail: burcin.yurtoglu@
univie.ac.at
CHAPTER 1

Issues in Antitrust Enforcement


Vivek Ghosal (a), Joseph E. Harrington Jr. (b) and Johan Stennek (c)
(a) Georgia Institute of Technology, Atlanta, GA, USA
(b) Johns Hopkins University, Baltimore, MD, USA
(c) Research Institute of Industrial Economics, Stockholm, Sweden

Motivated by recent events and experiences in antitrust enforcement and policy in the United States and the European Union, and new insights and findings
from academic research, this book presents a collection of theoretical, em-
pirical and public policy-oriented articles representing recent research on the
political-economy of antitrust. Political-economy is defined broadly to include
the demand-side drivers of antitrust activity such as market failures and interest-
groups, along with supply-side drivers including ideology and partisan politics
as well as the importance of informational limitations in antitrust enforcement
and the institutional structure of the antitrust agencies. Examining issues re-
lated to the political-economy of antitrust is important as antitrust policy and
enforcement provide a key mechanism for preserving the competitiveness of
markets, with implications for innovation, efficiency, growth and welfare. This
book brings together contributions by leading academic researchers in the areas
of political-economy, cartels, merger and non-merger enforcement, as well as
economists working with antitrust authorities in the U.S. and E.U., to make a
timely contribution for researchers and practitioners.
The chapters in this volume cover the full range of topics: enforcement of
cartels; merger control; monopolization and abuse of dominance; and systemic
issues in antitrust enforcement and policy. Since the last few years have seen
significant changes in both the U.S. and E.U. in the attitudes towards cartels, the
book places emphasis on antitrust enforcement of cartels, including topics such
as the corporate leniency programs that have recently been introduced in the U.S.
and E.U., optimal deterrence mechanisms against cartels and detection of car-
tels. While the individual chapters of the book make independent contributions
and may be read separately, the book brings together articles from various sub-
areas to present a more encompassing picture. This chapter provides an overview
of some of the trends and recent research in antitrust enforcement and policy and
highlights the contributions made by the chapters in this volume.


1.1. Shifting winds in antitrust

Changes in intellectual thinking in economics, law and politics have produced significant shifts in antitrust enforcement and policy in the U.S. over the last
several decades. The intellectual underpinnings of some of the key changes can be traced back to the rising tide of criticism of U.S. antitrust enforcement in the 1950s and 1960s by Chicago-School scholars, and the genesis of their law and economics movement is often attributed to Aaron Director and Edward Levi.1 Director and Levi (1956) criticized the state of antitrust, disputed the view that a variety of business practices, like tying and vertical restrictions, were anti-competitive or an abuse of monopoly power, downplayed the likelihood of predatory pricing, emphasized efficiencies and noted flaws in key antitrust decisions like Standard Oil (1911) and Alcoa (1947).2 In a similar tone, Director (1957) criticized the United Shoe Machinery (1918) decision. Many of these
arguments went on to become guideposts for the Chicago-School’s law and eco-
nomics thrust. The influential contributions by Stigler (1964) and Williamson
(1969), along with Demsetz (1973, 1974), Bork (1966, 1978)3 and Posner (1969,
1974, 1976) among others, solidified the modern law and economics framework
in the U.S. Overall, the thinking shifted in two important ways. First, vertical
and conglomerate mergers, resale price maintenance, vertical restrictions and
other conduct that were often viewed as anti-competitive under the older an-
titrust regime were given pro-competitive and efficiency interpretations. Second,
the focus shifted to areas of clearer harm to welfare such as horizontal mergers
in concentrated markets and price-fixing.4 Ghosal (2006a) presents an empirical
analysis of the long-term patterns of enforcement and finds noticeable structural-
breaks in the U.S. enforcement data in the mid-to-late-1970s, symptomatic of a
regime-shift in enforcement towards greater emphasis on prosecuting cartels and
lesser emphasis on merger and non-merger (civil) enforcement. While there are
competing explanations of this shift, the Chicago-School engineered rethinking

1 Director joined the Chicago Law faculty in 1946, founded the Journal of Law and Economics in
1958, and his students included Robert Bork, Frank Easterbrook and Richard Posner—legal scholars
and judges who greatly influenced antitrust. Bork notes: “[Director’s] teachings . . . made him the
seminal figure in launching the law and economics movement, which transformed wide areas of
legal scholarship.” (From: Aaron Director, Founder of the Field of Law and Economics, University
of Chicago Press, 2004.)
2 Influenced by Director’s hypothesis that firms would prefer mergers and other practices to attain
monopoly status as opposed to predatory pricing, McGee (1958) tested whether Standard Oil en-
gaged in predatory pricing, a key issue in the 1911 antitrust decision. His results suggested this was
not the case.
3 It is noteworthy that while Bork’s book was published in 1978, it was completed much earlier in
1968–69.
4 As Baker (1997) notes: “Our own discipline, antitrust, underwent its Copernican revolution
within the professional experience of all but the most recent antitrust practitioners. I am speaking,
of course, of the rise of the Chicago school approach.” Baker (2002, 2003), Crandall and Winston
(2003), Kovacic and Shapiro (2000) and Motta (2004, pp. 2–9) provide discussion of these shifts.

about the role of antitrust and emphasis on the efficiency aspects of business
conduct is an important explanation. While the changes in the U.S. intellectual and enforcement mindset pre-date those in other countries, recent years have seen broadly comparable changes in enforcement patterns in several other
countries and the E.U. Symptomatically, criminal enforcement and merger con-
trol are also the main focus of this volume.
The second chapter by Stephen Martin provides an overview of the devel-
opment of antitrust and industrial economics, the interdependencies between
the two and some of the political-economy aspects. In particular, Martin ad-
dresses the question of what antitrust has contributed to the study of industrial
economics. He notes that industry deconcentration proposals were widely sup-
ported by mainstream economists in the 1950s and 1960s and that opposition to
such proposals was critical to the evolution of the First Chicago School approach
from the “Positive Program for Laissez Faire” of Henry Simons (which, equally
suspicious of public and private power, regarded antitrust as an essential ele-
ment of public policy) to that of the Second Chicago School (which emphasized
distrust of public power at the expense of distrust of private power). Reaction
to the Second Chicago School emphasis on the neoclassical models of perfect
competition and monopoly was one motivating factor in the displacement of the
structure–conduct–performance framework by game-theoretic models in the late
1970s and 1980s.

1.2. Enforcement of cartels

The last 15 years have witnessed a new era in fighting cartels. In the case of the
United States, two complementary events were responsible for this sea change.
The first event was the 1991 revision of the Federal Sentencing Guidelines which
allowed for a ratcheting up of penalties to be levied. Government fines, which
were historically paltry, have risen to as high as $500 million for a single firm
and fines in the tens of millions of dollars are now commonplace. At the same
time, the incarceration of price-fixers has become routine, even of foreign citi-
zens, and the average length of a sentence has noticeably increased to about 18
months.
The second event was the 1992 revision of the U.S. Department of Justice’s
Corporate Leniency Program. This program waives all government penalties to
the first cartel member to come forward and cooperate fully. As noted by then
Deputy Assistant Attorney General James Griffin (Griffin, 2003), the revision
encompassed three significant changes: (1) amnesty is automatic if there is no
pre-existing investigation; (2) amnesty may still be available even if cooperation
begins after the investigation is underway; and (3) all officers, directors, and em-
ployees who cooperate are protected from criminal prosecution. In response to
this revision, the application rate went from about one per year to about two per
month. As a leniency program is more effective when it permits the avoidance of
more severe penalties, the increase in penalties and the revision of the leniency
program reinforced each other in creating a more effective anti-cartel policy.

To take a big picture look at some of the changes in the U.S. enforcement of
cartels, we present Figures 1.1–1.3 (from Ghosal, 2006b). Figure 1.1 displays
the data on the total number of price-fixing cases prosecuted in the post-war
era, 1948–2003. These data reveal a sharp increase in the number of crimi-
nal antitrust cases prosecuted starting in the late-1970s and the early-1980s.
Figures 1.2 and 1.3 present data on the average fine per corporation and per
individual convicted over the 1968–2003 period. While the fines were typically
very low for most of the sample period, there were dramatic increases starting
around the mid-1990s.
Though the key policy and enforcement initiatives may have originated in the
U.S., the movement to a tougher policy against cartels has occurred in many
industrialized countries. The E.U. initiated a leniency program in 1996 and ex-
perienced a near-doubling of the annual rate of convictions between 1990–95

Fig. 1.1: U.S.: Total criminal antitrust cases filed.

Fig. 1.2: U.S.: Fine per corporation.



Fig. 1.3: U.S.: Fine per individual.

and 1996–2003 (Brenner, 2005).5 Leniency programs have been implemented in Australia, Brazil, Canada, France, and Korea as well as many other countries.
Even countries like the Netherlands and Japan, long known as cozy environ-
ments for cartels, have become quite inhospitable. As of January 2006, the Fair
Trade Commission of Japan is empowered with a leniency program and the
capacity to levy a penalty equal to 10% of (total) firm revenue, up from the
previous mark of 6%.
In evaluating these developments, one must recognize that there are three
essential stages in battling hard-core cartels. Cartels must be discovered, dis-
covered cartels must be successfully prosecuted, and successfully prosecuted
cartels must be penalized. Operating effectively at all three stages is crucial
to disrupting existing cartels and deterring new cartels from forming. The pri-
mary impact of the recent changes mentioned above has been in prosecution
and penalization. It is perhaps important to note that while there have been
cases in which a leniency program was responsible for the actual discovery of
the cartel—such as the spontaneous reporting of the monochloroacetic acid
cartel—well-documented cases are rare. The power of a leniency program lies
more in aiding investigation and prosecution when there is already some knowl-
edge or suspicion about collusion. The leniency program can also help ferret
out confessions by firms in instances where there is some evidence/information
that the government is investigating a cartel—potentially triggering a race to the
competition authority or courthouse to avail themselves of leniency.
The natural next policy step is then to improve methods of detection. One ap-
proach is screening industries, which refers to the analysis of market data—such
as prices and market shares—to find evidence suggestive of collusion. A flagged
industry would be one that warrants further investigation. Though antitrust au-
thorities have not generally engaged in screening, there have been some recent
attempts. At the Bureau of Economics of the U.S. Federal Trade Commission,

5 Also see Harding and Joshua (2004) for discussion of changes in the enforcement of cartels in
Europe.

former Director Jonathan Baker used price increases after an industry-specific trough in demand to identify the exercise of market power (FTC History, 2003,
pp. 108–110), while former Director Luke Froeb made progress in develop-
ing a screen in terms of the price variance (Abrantes-Metz et al., 2005). In
the Netherlands, the competition authority recently uncovered collusion in the
shrimp industry using screening. The time is right to invest in developing screen-
ing methods as leniency programs and screening are complements. If an antitrust
authority identifies an industry for further scrutiny through some form of screen-
ing and conveys these suspicions to the suspected firms, it could well induce
some cartel members to come forward and apply for leniency. Harrington (2006)
presents analysis and discussion of various facets of detection.
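To make the idea of a variance screen concrete, the following sketch (our own illustration, not the procedure of Abrantes-Metz et al., 2005; the window length and threshold are hypothetical choices) flags stretches of a price series whose coefficient of variation is abnormally low, on the hypothesis that collusive episodes exhibit unusually stable prices relative to their level:

```python
# Illustrative price-variance screen: flag windows where the coefficient of
# variation (std/mean) of prices is unusually low. A stylized sketch only;
# the window and threshold are arbitrary illustrative choices.
import numpy as np

def rolling_cv(prices, window=12):
    """Rolling coefficient of variation of a price series."""
    prices = np.asarray(prices, dtype=float)
    return np.array([prices[t - window:t].std() / prices[t - window:t].mean()
                     for t in range(window, len(prices) + 1)])

def flag_quiet_windows(prices, window=12, threshold=0.5):
    """Return end-indices of windows whose CV falls below `threshold` times
    the median CV of the whole series."""
    cvs = rolling_cv(prices, window)
    return np.where(cvs < threshold * np.median(cvs))[0] + window

# Toy example: a low-variance 'cartel-like' stretch sandwiched between two
# noisier competitive stretches should be the part that gets flagged.
rng = np.random.default_rng(0)
series = np.concatenate([100 + rng.normal(0, 5, 24),   # competitive
                         120 + rng.normal(0, 1, 24),   # quiet, cartel-like
                         100 + rng.normal(0, 5, 24)])  # competitive
print(flag_quiet_windows(series))
```

In practice such a flag would only mark an industry for closer scrutiny, since low price variability has many benign explanations.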
In the broad area of discovery of cartels, Ghosal (2006b) focuses on the gen-
esis and taxonomy of criminal investigations and discusses the various avenues
via which information may flow to the Antitrust Division about possible car-
tel activities leading to investigations and prosecutions. Using time-series data,
he examines the interrelationships between the criminal enforcement variables
as well as the potential linkages between civil and criminal enforcement. The
findings include: (1) current-period increases in grand jury investigations, criminal cases initiated, or the number of individuals or firms convicted generate increases in most of these variables in future periods, suggesting that information unearthed during a given criminal investigation often reveals other conspiracies and leads to future investigations; and (2) an increase in civil enforcement leads to future increases in criminal prosecutions and in the number of firms and individuals convicted, suggesting that information gleaned during civil (e.g., merger or monopolization) investigations may sometimes reveal collusive behavior in markets and lead to criminal investigations. The results point to potentially important complementarities in the
antitrust investigative processes. The findings appear to offer some practical ad-
vice for firms: (1) if you are neck-deep in a price-fixing agreement, be very
careful about submitting a merger application to the DOJ or FTC; and (2) if you
are caught price-fixing in one market and you are engaged in similar activities in
other markets, you may want to quickly head for the corporate leniency door!6

6 See Ghosal (2006b) for some examples from actual cases. For example, the Antitrust Division’s
investigation of the lysine cartel involving Archer Daniels Midland and several Asian firms uncovered evidence on the vitamins and related cartels, leading to the prosecution of large multinationals like Hoffmann-La Roche and Rhone-Poulenc. Block and Feinstein (1986), for exam-
ple, present evidence on spillover investigations in the highway construction industry where the
Antitrust Division prosecuted about 200 contractors on charges of bid-rigging. Regarding the in-
terface between civil and criminal investigations, some examples include the Division’s successful
challenge of the UPM Kymmene-Bemis MACtac merger a few years back due to price-fixing al-
legations. Further, it spawned a grand jury investigation into the alleged price fixing. Another was
the FTC’s “3 Tenors” case which came out of an HSR investigation of a proposed merger between
Time Warner & EMI. The contracts that were ultimately challenged were discovered during the HSR
investigation.

The recent progress in fighting cartels has led not to complacency but rather
to an ambition in policy circles and academia to make further improvements. The
E.C. revised its leniency program in 2002 and currently there are discussions
about adopting the U.S. model of private customer damages. Criminalization of
price-fixing is on the rise; as of 2002, Ireland and the U.K. joined Canada, Israel,
and the U.S. in having prison sentences as an instrument to punish managers for
colluding. With the Antitrust Criminal Penalty Enforcement and Reform Act of
2004, the U.S. increased the maximum prison sentence from three to ten years
and expanded leniency by reducing liability from treble customer damages to
single damages.
That the development of stronger anti-cartel policies is high on the policy
agenda makes the papers in this volume all the more timely and valuable. By
generating a better understanding about collusion and how antitrust policy influ-
ences firm behavior, they provide the foundation for making further innovations
in the battle against cartels.
The chapter by John Connor takes stock of the magnitude of penalties levied since 1990 along with other dimensions of enforcement. He finds vast differences between the E.U. and the U.S. The time between “first notice” and the first cartel member being sanctioned is around two years in the E.U., which is noticeably longer than in the U.S. In addition, government fines and private damage recovery in the U.S. are more than four times as large as in the E.U. Connor estimates for the U.S. that total penalties are about 150% of damages, which is insufficient to make collusion unprofitable. Though progress has been
made, we are still far short of penalties being big enough and detection being
likely enough to make collusion exclusively a topic for economic historians.
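A back-of-the-envelope calculation (our illustration, not Connor's own; it assumes that damages D roughly equal the cartel's incremental gain from colluding and that penalties are paid only upon detection and conviction, which occurs with probability p) shows why penalties of roughly 150% of damages fall short of deterrence:

\[
\underbrace{p \times 1.5\,D}_{\text{expected penalty}} \;>\; \underbrace{D}_{\text{gain from colluding}}
\quad\Longleftrightarrow\quad p > \tfrac{2}{3},
\]

so collusion would remain profitable in expectation unless cartels faced a probability of detection and conviction above two-thirds, well above most empirical estimates.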
In asking how severe the penalty must be to deter cartel formation, the usual
approach is to find that value whereby the cartel participation constraint (CPC)
is violated; that is, the minimum expected penalty that exceeds the incremental
expected gain in profit from colluding. The chapter by Paolo Buccirossi and Gi-
ancarlo Spagnolo questions the validity of this approach. With most cartels, the
biggest challenge is maintaining internal stability, which is modeled using the
incentive compatibility constraint (ICC); the satisfaction of which ensures that a
firm prefers to collude than to cheat. A leniency program can significantly affect
the ICC because a firm that cheats can, at the same time, apply for leniency. In
this way, a leniency program can disrupt cartel stability and deter cartels from
forming. What Buccirossi and Spagnolo show is that it is generally the ICC that
is binding and not the CPC. Thus, expected penalties can be such that collu-
sion is profitable but a cartel still may not form because the ICC is violated; the
cartel would be unstable. By calibrating a simple model, they are able to show
that, with a leniency program, the necessary penalty to violate the ICC is a mere
fraction of that required to violate the CPC. In spite of the ability of a leniency
program to amplify the impact of penalties, the authors conclude that E.U. fines
are still too low to deter cartel formation, which reinforces the conclusion of
Connor.
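A stylized formulation (a simplification for exposition, not the chapter's exact model) makes the distinction concrete. Let \(\pi^C\), \(\pi^D\) and \(\pi^N\) denote a firm's per-period collusive, deviation and competitive profits, \(\delta\) the discount factor, \(p\) the per-period probability of detection, \(F\) the penalty, and \(F_L \le F\) the penalty paid by a deviator who applies for leniency (zero under full leniency). Then

\[
\text{CPC:}\;\; \frac{\pi^C - pF}{1-\delta} \;\ge\; \frac{\pi^N}{1-\delta},
\qquad\quad
\text{ICC:}\;\; \frac{\pi^C - pF}{1-\delta} \;\ge\; \pi^D - F_L + \frac{\delta\,\pi^N}{1-\delta}.
\]

Without a leniency program a deviator would still expect to pay roughly \(pF\) for its past collusion, so raising \(F\) discourages deviation as well as collusion; with full leniency (\(F_L = 0\)) the deviator escapes the penalty entirely, and a fine that is only a fraction of the level needed to violate the CPC can already make the ICC fail.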

An exploration into the effect of leniency programs is also conducted in the chapters by Cécile Aubert and by Joe Chen and Joseph Harrington. While they both
find a leniency program can enhance welfare, they also find some perverse ef-
fects that may generate inefficiencies. Previous theoretical research examining
leniency programs has, for reasons of tractability, restricted the model so that
there is only one collusive price (thus, firms cannot control the degree of col-
lusion) and both the penalty and the probability of detection and conviction are
fixed and independent of firm behavior. The general conclusion of that work is
the optimality of maximal leniency—waiving all fines to the first firm to ap-
ply. Using numerical analysis, the chapter by Chen and Harrington explores a
richer model that allows for a range of collusive prices and for the penalty and
the probability of detection and conviction to be sensitive to the collusive price
path; higher prices result in a larger penalty and larger price changes imply a
higher probability of detection. With this model they not only consider the ef-
fect of a leniency program on cartel formation but also its effect on the price
path when a cartel does form. Supportive of previous work, maximal leniency is
shown to be optimal. However, they also find that partial leniency can, relative
to a policy of no leniency, actually make collusion easier, which is reflected in a
higher cartel price path. It is important to note that the U.S. does not provide full
leniency since a firm is still liable for single customer damages which are often
substantial.
A very different approach to modeling firm behavior is taken in the chapter
by Aubert. Building on the earlier work of Aubert et al. (2006), she takes ac-
count of the important fact that those agents who are colluding are almost never
significant shareholders. Rather, they are managers whose interest is dictated by
their compensation scheme and this inevitably means they care about more than
just expected profit. This approach permits one to explore how antitrust pol-
icy influences compensation schemes and thereby the incentives of managers to
collude. The agency problem is that a manager can deliver reasonably high
profit by either colluding with low effort or competing with high effort. Society
prefers the latter though the manager prefers the former since effort is costly.
Without antitrust penalties, there is no way to induce competition and high ef-
fort. Introduction of penalties allows the manager’s payoff to differ between
collusion/low effort and competition/high effort which then makes it possible to
induce the socially preferred outcome. The introduction of a leniency program
can deter collusion when it would have occurred otherwise but it can also re-
sult in inefficiencies that would not have occurred if there were penalties but
no leniency. Taking account of the fact that managers, not shareholders, are the
ones colluding—as done in this chapter—is an important direction for future
research.
Though our attention thus far has focused on explicit collusion, tacit collusion
can be just as welfare-reducing in spite of being legal. Tacit collusion is of par-
ticular relevance to merger analysis since a primary consideration is whether a
proposed merger would have coordinated effects by making tacit collusion more
likely. This issue is addressed in the chapters by William Kovacic, Robert Mar-
Issues in Antitrust Enforcement 9

shall, Leslie Marx, and Matthew Raiff and Marc Ivaldi, Bruno Jullien, Patrick
Rey, Paul Seabright, and Jean Tirole.
As the distinction between tacit and explicit collusion is not one that the exist-
ing theoretical framework can easily accommodate, empirical work pertaining
to tacit collusion is especially valuable and the chapter by Kovacic et al. offers
a novel analysis. They build on the idea that firms whose cartel has been discovered might substitute tacit collusion for explicit collusion. Identifying the circumstances under which they are able to make that transition could help identify the circumstances under which tacit collusion is sustainable. They focus on 30 vitamins markets that were involved in the vitamins cartel of the 1990s. Vitamins markets with only two firms are found to maintain prices after the plea period, which is consistent with having replaced explicit with tacit collusion, while markets with three or more firms experience a large drop in price. This suggests that concerns about coordinated
effects from a proposed merger are particularly relevant when it means reducing
the number of firms to two.
Finally, the chapter by Marc Ivaldi et al. is an excellent primer on the theory of
collusion; it is comprehensive yet concise, rigorous yet readable. For the reader
who is not knowledgeable about how industrial organization economists think
about collusion, this chapter will let you in on our little secrets. The chapter is
of particular value in identifying the structural variables relevant to evaluating
the possible coordinated effects of a merger.

1.3. Merger control

Merger control in the U.S. has seen significant milestones. The first Merger
Guidelines were introduced in 1968. Among the important objectives were to inform the markets and the public about the application of the federal antitrust laws to the evaluation of mergers and to streamline the procedures to provide more transparency
about the process. Implementation under the 1968 guidelines largely reflected
the structure–conduct–performance paradigm with heavy emphasis on market
shares and concentration and a near-paranoia about entry barriers (Williamson,
2002). Shifts in economic and legal thinking—for example, away from narrow-
minded market concentration based evaluations to a broader understanding of
business practices—eventually led to changes in the guidelines.
The 1982 Merger Guidelines introduced several innovations. An important
one was the hypothetical monopolist test. As noted by Werden (2002):
The hypothetical monopolist paradigm became a major organizing principle of the 1982
Merger Guidelines, and the hypothetical monopolist paradigm came to provide the sole test
for market delineation . . . The hypothetical monopolist paradigm was the lens through which
all evidence was to be viewed . . . the contribution of the 1982 Merger Guidelines was not
the hypothetical monopolist paradigm itself, but rather a carefully constructed algorithm for
merger analysis built around that paradigm.

Werden goes on to note that, due to the 1982 Merger Guidelines, the hypothet-
ical monopolist paradigm was embraced, in varying degrees, by competition
authorities in many countries. The innovation in the 1984 revision of the Merger Guidelines was to place significant focus on efficiencies and make them an integral part of the competitive effects analysis. As noted by Kolasky and Dick (2002),
this focus remained intact until 1997 when the DOJ and FTC revised the Merger
Guidelines to elaborate on the tools they had developed to evaluate efficiency
claims.
The 1992 incarnation of the Merger Guidelines produced enhanced empha-
sis on qualitative competitive effects analysis and an even greater openness to
considering efficiency arguments (Kolasky and Dick, 2002). The 1992 Horizon-
tal Merger Guidelines also distinguished between anti-competitive mergers that
may make it more likely for firms to coordinate their actions versus mergers
that make it profitable for the merging firms to reduce output and raise price
unilaterally. The unilateral effects theories and the methods for their evalua-
tion gained currency starting with the 1992 guidelines. Baker (1997) provides an
insightful discussion of unilateral effects and notes two key factors that made
this development possible: (1) the theoretical literature started by Salant et al.
(1983); and (2) the econometric methodology and point-of-sale scanner data that
made it possible to identify the extent to which consumers consider individual
products close substitutes. Baker goes on to note:
The 1992 Horizontal Merger Guidelines recognize these economic developments by setting
forth several ways in which mergers may “less[en] competition through unilateral effects.”
The settings in which this may occur include two in which competition is localized—a spatial
location model of competition among sellers of differentiated products, and an auction model
variant—and a third in which firms sell homogeneous products and are distinguished primarily
by their capacities.

Finally, an important refinement in the 1997 revision of the Merger Guidelines related to whether the efficiencies had to be passed on to consumers in order for them to matter. Some interpreted the 1997 revisions as adopting a “consumer
welfare” approach to efficiencies in which efficiencies would count only to the
extent they are likely to be passed on to consumers in the form of lower prices
and expanded output. However, Kolasky and Dick (2002) note that “a close
reading of the 1997 revisions shows that the agencies preserved the possibil-
ity of weighing positively efficiencies that would not immediately be passed on
to consumers. Significantly, the revisions did not include a pass-on requirement
in defining cognizable efficiencies.” They label the 1997 revisions “a hybrid
consumer welfare/total welfare model.”
To look at the big picture of merger enforcement, we present Figures 1.4–1.6
(from Ghosal, 2006c). Figure 1.4 presents data on the total number of mergers
challenged in court by the U.S. Department of Justice over the period 1958–
2003.7 The data show a significant cooling-off of merger challenges from about

7 U.S. merger enforcement is jointly carried out by the U.S. Department of Justice and the Federal
Trade Commission. On average the task is probably split evenly. Here, to take a quick look, we only
present the DOJ data.

Fig. 1.4: U.S.: Total number of mergers challenged.

Fig. 1.5: U.S.: Total number of mergers.

the mid-1970s to the mid-1990s, after which the data show a small increase be-
fore falling off again.8 The absolute number of mergers challenged of course
is not the best indicator of the intensity of merger enforcement because the to-
tal number of mergers in the U.S. varies a lot over time. To take a look at this,
Figure 1.5 presents the total number of mergers in the U.S. over the same time

8 In an early case—U.S. v. General Dynamics Co. (1974)—the Supreme Court went against the
antitrust mindset of the 1950s and 1960s and did not find a violation even though the existing mar-
ket shares were high. The Antitrust Division had defined the product market as “coal.” The Court
disagreed with this definition and considered the market to be the more overarching “energy” which
included oil, gas, nuclear and geothermal power. The Antitrust Division had defined the geographic
market narrowly. The Court disagreed with the geographic market definition and broadened it con-
siderably arguing that the market area should be defined in terms of the transportation networks and
freight charges that determine the cost of delivering coal and other energy. In addition, the Court
examined in detail the actual and potential competition and entry conditions in the markets un-
der consideration. This wide ranging evaluation of market conditions, and considering significantly
wider product and geographic markets, was a radical departure from the narrow concentration based
mindset of the earlier decades and set the stage for significant changes in future merger evaluations.

Fig. 1.6: U.S.: Ratio of merger challenges to merger wave.

period. These data show a merger wave in the 1980s and another in the late-
1990s. To take a more accurate look at the intensity of merger enforcement,
Figure 1.6 presents the ratio of mergers challenged by the DOJ to the total
number of mergers in the U.S. The data in Figure 1.6 show a dramatic drop
in the intensity of merger challenges in the mid-1970s, with the ratio remaining low thereafter. Ghosal (2006c) discusses the evolution of merger control and conducts an
econometric analysis to shed light on the political-economy of merger enforce-
ment. Kovacic (2003) presents a lucid discussion about the underlying forces
that affected the path of the U.S. merger enforcement as well as myriad enforce-
ment issues that are not easily captured in a simple count of mergers challenged
by the government.
Traversing the Atlantic, the European Union’s merger policy is enshrined in
the so-called Merger Regulation.9 As merger control is not specifically provided
for in the Treaty, the Commission attempted to fill this lacuna by developing
the law under Articles 81 (anticompetitive agreements) and 82 (abuse of domi-
nance) to scrutinize mergers. However, these tools were deemed inadequate and
the first Merger Regulation was adopted in 1989. The E.U. has seen significant
changes in the intensity of screening of mergers as well as numerous administra-
tive changes. Motta (2004, Ch. 1) and Wish (2001) present some of the details.
To provide a quick look at recent patterns in E.U. merger enforcement, Fig-
ures 1.7 and 1.8 present the total number of merger investigations that reached
the Phase I and the more critical Phase II stage of evaluation.10 These data point
to a steady rise in scrutiny from 1990 to 2001 before tapering off in recent years.

In recent years, merger policy in Europe has taken on a somewhat controversial turn. The year 2002 was exceptional as the Court of First Instance annulled

9 Council Regulation No 139/2004 of 20 January 2004 on the control of concentrations between undertakings, which entered into force on May 1, 2004.
10 These data are from Duso et al. (this volume). Phase I refers to the initial (roughly one month)
investigation and Phase II refers to the more substantive (roughly four month) investigation.

Fig. 1.7: E.U.: Total number of Phase I merger investigations.

Fig. 1.8: E.U.: Total number of Phase II merger investigations.

three of the Commission’s merger decisions: Airtours and First Choice; Schnei-
der and Legrand; and Tetra Laval and Sidel. In 2004, the Court annulled the
prohibition of the WorldCom (now MCI) and Sprint merger. Even the Commis-
sion’s decision to block the merger between Volvo and Scania was criticized
by some for the Commission’s narrow focus on market shares in specific countries and its neglect of broader issues related to demand and supply substitu-
tion. Finally, the E.C. blocked the merger between two U.S.-based companies,
General Electric and Honeywell. While the Court of First Instance upheld the Commission’s decision, it noted that the Commission had committed manifest errors in assessing the conglomerate effects of the GE/Honeywell merger. The cumulative impact of these events led to calls for reform of the European
merger control system and the E.U. decided on a major reform package which
entered into force in 2004. Several elements of this package also appear to re-
duce the differences between the U.S. and the E.U. The new merger regulation
now includes a new substantive test including unilateral effects. The Commis-
sion also issued horizontal merger guidelines which elaborate on the analysis of
unilateral effects and efficiency gains as part of the competition test, and make
clear that it will use a consumer welfare standard. And, for the first time, a
Chief Competition Economist was appointed. It is, however, somewhat unclear
whether the new regulation signifies a shift in policy. The new Significantly
Impeding Effective Competition test (the SIEC test) appears to superficially
rearrange the terms of the old dominance test. One innovation, however, is intended to increase legal certainty. The new regulation clarifies that mergers in oligopolis-
tic markets may harm competition, even in the absence of collusion. The courts
had not expressly interpreted the old regulation to include such unilateral effects.
Therefore, the new regulation explicitly states that the substantive test extends
beyond dominance (recital 25). It is also declared that the guidance that may be
drawn from past judgments of the courts and Commission decisions pursuant
to the old regulation should be preserved, and therefore the substantive test still refers to dominance (recital 26). Thus, although it might be unclear whether the old regulation included unilateral effects, it should be clear that the new one does.11
Several of the chapters in this volume contribute to the analysis of some of
the important issues in the development of merger control.
The blocking of the proposed merger between General Electric and Honeywell—two U.S.-based companies—by the E.U. generated a transatlantic war of
words with some accusing the E.U. of placing greater weight on protecting Eu-
ropean competitors. The chapter by Jay Pil Choi analyzes the political economy
aspect of international antitrust in light of the GE/Honeywell decision. Choi
argues that this case demonstrates the need for consistent simple rules and bet-
ter coordination by harmonizing antitrust controls across antitrust enforcement
agencies in different jurisdictions, especially between Europe and the United
States. One reason is that with the current system, the enforcement decision on
a merger does not reflect the majority view and any international merger will
be essentially determined by the least permissive agency. Another reason is that
efficient mergers can be blocked since each agency ignores the external effects
of the merger in other jurisdictions.
The European Commission has intervened against a number of domestic
mergers in small Member States. Against this backdrop, Henrik Horn and Johan
Stennek discuss regional aspects of merger control. For instance, the Commis-
sion prohibited Volvo’s acquisition of Scania, arguing that competition would be
reduced in nationally defined markets. These interventions triggered a political
debate about merger control and market definitions. Smaller countries accused

11 Since the 1992 Merger Guidelines, the U.S. “substantial lessening of competition” test (SLC)
has been interpreted to include unilateral effects. But the recent Oracle decision has caused some
controversy. According to this decision a plaintiff must demonstrate that the merging parties would
enjoy a post-merger monopoly or dominant position at least in a “localized competition space” in
order to prove unilateral effects. Still, the decision does acknowledge that economic analyses such as
merger simulations or econometric estimates of diversion ratios (that do not rely on the identification
of a market or submarket) may be useful to address unilateral effects. (Useful discussions of this can
be found in the issue of Antitrust (Spring 2005; Vol. 19, No. 2) published by the American Bar
Association.)

the Commission of making it impossible for their companies to merge and ob-
tain leading global positions. The E.U. officials responded that companies in
smaller countries can obtain leading positions by merging with companies from other countries. The Volvo/Renault and Scania/Volkswagen partnerships that followed the prohibition of the Volvo/Scania merger clearly showed that there
were alternative ways for these companies to grow. The critics acknowledge
that international mergers may indeed constitute an alternative. But international
mergers may be less advantageous for smaller countries. They may have adverse
effects on employment and on the location of both headquarters and production.
E.U. officials concede that E.U. merger control does not take into account a
possible move of firms abroad and that mergers are controlled in the interest of consumers. Horn and Stennek note that international firms have an incentive
to locate their production in the larger countries with the larger markets. They may also serve the smaller markets from the same production facilities to avoid
duplication of plant-specific fixed costs. The consumers in smaller markets will
then have to pay higher prices, to cover the trade costs incurred when exporting
goods from the larger to the smaller countries.
Since the beginning of antitrust enforcement with the U.S. Sherman
Act of 1890, there has been a debate about the objectives of antitrust in general
and merger control in particular. The debates have centered around issues related
to consumer welfare, total welfare, redistribution of wealth, protection of small
businesses, among other considerations. The chapter by Sven-Olof Fridolfsson
discusses the goal of merger control. The goal in the U.S. is typically perceived
to be to protect consumers. The new E.U. horizontal merger guidelines also clearly indicate that the E.C. will use a consumer welfare standard. The question is why firms’ profits are not considered. The answer perhaps lies in the concern for
the distribution of wealth in society, combined with the belief that firm owners
typically are wealthier than consumers. It is far from clear, however, that merger
control can influence distribution much. And, in any case, taxes and transfers
are probably more effective. Many economists have advocated a shift of focus
to economic efficiency—that is, merger control should attempt to maximize the
sum of the firms’ profits and the consumers’ surplus. But maybe the authorities
are right after all and there should be a consumer bias even though the ultimate
goal may be overall efficiency. Fridolfsson notes that firms can be expected to
propose the most profitable mergers, among those that would be accepted by
the authorities. By demanding that mergers also benefit consumers, the firms are forced to propose mergers that are profitable because of important synergy gains, as opposed to being profitable due to a lessening of competition. This is
efficiency enhancing.
Some mergers that should have been blocked may have been cleared by the
authorities. Other mergers that were cleared probably should have been chal-
lenged. Finally, if mergers were cleared subject to remedies, were they the right
ones? How do we know that the competition authority made the right decision?
Tomaso Duso, Klaus Gugler and Burcin Yurtoglu note that problematic merg-
ers today are often cleared, but subject to conditions that remove competitive
concerns such as the divestiture of some assets or other behavioral obligations like licensing agreements or access to essential facilities. For instance, during
its fiscal years 1998 and 1999, the Federal Trade Commission challenged 63
mergers. Of these, 41 (65%) involved negotiated restructuring, 18 (29%) were abandoned, and only four (6%) were litigated. In Europe, only 19 mergers have been blocked since 1990. During the same period, more than half of Phase II decisions (72 out of 121, or 59%) were cleared subject to commitments. Duso et al.
provide an international comparison of institutional arrangements and regulatory
approaches to deal with remedies in merger control. They conclude that there is
a clear convergence on some shared principles that guide competition authorities
in the application of remedies. They also provide a first empirical assessment of
the use of remedies in European merger control, using an event study method-
ology to identify the competitive effects of mergers and remedies. They suggest that the Commission’s views on competitive effects quite often appear to differ from the view of the market, as expressed by movements in stock prices. Moreover, stock markets seem to judge remedies to be effective, on average, only when applied in the first investigation phase and not when adopted after an in-depth inquiry.12
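For readers unfamiliar with the method, the basic event-study logic (a generic sketch, not the chapter's exact specification) is to measure abnormal stock returns around the announcement of the merger or of the remedy decision:

\[
AR_{it} = R_{it} - \big(\hat{\alpha}_i + \hat{\beta}_i R_{mt}\big),
\qquad
CAR_i = \sum_{t \in \text{event window}} AR_{it},
\]

where \(R_{it}\) is firm \(i\)'s return, \(R_{mt}\) is the market return, and \(\hat{\alpha}_i\), \(\hat{\beta}_i\) are estimated over a pre-event period. In this literature, positive abnormal returns for the merging firms' competitors are typically read as the market expecting anti-competitive (price-raising) effects, which is the sense in which the "view of the market" can be compared with the Commission's assessment and with the effectiveness of remedies.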
Evaluating the unilateral effects of mergers became important in the U.S.
starting with the 1992 Merger Guidelines, and the E.U. has, in recent years, taken
similar steps in this direction. The chapter by Jérôme Foncel, Marc Ivaldi and
Valérie Rabassa provides an extensive discussion of the new substantive test of
the new European Merger Regulation, with an empirical illustration from the La-
gardère/Editis case. They argue that this case demonstrates an enhanced interest
of the European Commission in the measurement of unilateral effects. They
also argue that the use of an econometric model based on unilateral effects clari-
fies and complements the traditional dominance approach, by providing explicit
measures of the predicted price increases.
Finally, the chapter by Luke Froeb, Steven Tschantz, and Gregory Werden
discusses the effects of mergers between competing manufacturers of differen-
tiated consumer products that are sold through retailers. They demonstrate that
the effects of the mergers on consumers are determined to a large extent by the
manufacturers’ relationships with their retailers. The paper illustrates how the
sort of formal modeling that has become common with differentiated products
mergers must account for vertical issues relating to horizontal mergers.

1.4. Non-merger enforcement

Non-merger enforcement in the U.S.—under the Sherman Act Section 1 and Section 2—has seen some landmark antitrust cases such as Standard Oil (1911), Aluminum Co. of America (1947), AT&T (1980), Microsoft (1999), to name
Aluminum Corp. of America (1947), AT&T (1980), Microsoft (1999), to name
a few. In the more recent era, however, non-merger enforcement in the U.S. has

12 Also see Duso et al. (2006) on using stock price data to evaluate merger control decisions.

Fig. 1.9: U.S.: Sherman Act Section 1 cases.

Fig. 1.10: U.S.: Sherman Act Section 2 cases.

experienced a marked decline. In significant part, the Chicago-School’s focus on examining the efficiency aspects of business practices provided the initial im-
petus that de-emphasized non-merger civil antitrust enforcement. Probably the
most significant early case that dwelled on efficiency issues is Continental TV v.
GTE Sylvania (1977). In this case the Supreme Court emphasized concepts re-
lated to competition in the market and argued that vertical restrictions are likely
to promote interbrand competition by allowing producers to achieve efficiencies
in distribution. This was the first time that the Court explicitly invoked efficiencies to argue in favor of pro-competitive effects. Figures 1.9 and 1.10 (from Ghosal, 2006a) provide a bird’s-eye view that reveals clear declines in non-merger en-
forcement. Ghosal (2006a) conducts an econometric analysis of these patterns
and finds that both political and economic factors appear to play an important
role in the shifts in enforcement.
Turning to non-merger enforcement in the E.U., we begin by taking a quick
look at the E.C. antitrust decisions by the type of alleged infringement. These
data are presented in Table 1.1 and Figure 1.11, both of which are from Schinkel
et al. (2006). Schinkel et al. present one of the most comprehensive analyses of
Table 1.1: E.C. antitrust decisions specified by the type of alleged infringement (adapted from Schinkel et al., 2006).

                          Negative     Exemption   Infringement   Interim
                          clearance                               measure
Horizontal constraints       31            58           136           0
Abuse of dominance            0             0            44           3
Vertical restraints          29            31            61           1
Licensing                     6            12            11           0
Joint ventures               12            46             3           0
Procedural issue              0             0            30           0

European enforcement to date and survey the development of European competition law enforcement of Articles 81, 82 and 86—excluding merger control and
state aid—since its foundation in the Treaty of Rome of 1957. Their time-series
data reveal stepped-up enforcement by the E.U. in recent decades, a shift from notifications and third-party complaints to decisions in investigations started on
the Commission’s own initiative, and an exponential increase in fines. Unlike
some of the findings for the U.S., they do not find evidence of political cycles in
the E.U. enforcement. They also relate the types of infringements to the OECD
sector classification and the probability of an appeal being lodged with the Eu-
ropean Court of Justice and the Court of First Instance.
The E.C. is currently reviewing its policies against dominant companies’ ex-
clusionary conduct, such as predatory pricing, rebates, tying and bundling, and
refusal to supply (part of Article 82). A recent discussion paper (E.C., 2005) out-
lines a more effects-based (or economic) approach and clarifies that the objective
of the rules against abuses of dominant positions is to protect consumers, not the competitors of the dominant companies. The suggested framework for analysis is quite detailed, comprising three steps. The first step is to test whether the conduct
has the capability to foreclose competitors from the market. The second step is
to establish actual or likely foreclosure effects. At this point the market coverage
(incidence) of the conduct, network effects and other circumstances are inves-
tigated. In the third step, the dominant companies will have the opportunity to
rebut a presumption of abuse, for example, by demonstrating efficiencies.
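To make the sequential logic of this suggested framework concrete, the following minimal Python sketch walks through the three steps in order. It is purely our own illustration, not part of the Commission's discussion paper; the function and parameter names are hypothetical placeholders for the substantive economic and legal assessments described above.

def assess_exclusionary_conduct(capable_of_foreclosure: bool,
                                foreclosure_actual_or_likely: bool,
                                efficiencies_demonstrated: bool) -> str:
    """Illustrative schematic of the three-step framework outlined in E.C. (2005)."""
    # Step 1: does the conduct have the capability to foreclose
    # competitors from the market?
    if not capable_of_foreclosure:
        return "no abuse established"
    # Step 2: are foreclosure effects actual or likely, judged on market
    # coverage (incidence), network effects and other circumstances?
    if not foreclosure_actual_or_likely:
        return "no abuse established"
    # Step 3: the dominant company may rebut the presumption of abuse,
    # for example by demonstrating efficiencies.
    if efficiencies_demonstrated:
        return "presumption rebutted"
    return "abuse of dominance"

# Example: conduct that can and does foreclose, with no efficiency defense.
print(assess_exclusionary_conduct(True, True, False))  # prints "abuse of dominance"

The point of the sketch is only that the steps are cumulative: the question of rebuttal arises only after capability and actual or likely foreclosure have been established.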
During the consultation following the publication of the discussion paper and the public hearing, several issues were discussed.13 For instance, consistency with economic principles may come at the cost of legal uncertainty. An overly complicated system may also mean that enforcement decisions come too late to protect the rivals of much stronger dominant firms. Some new entrants are afraid that moving away from per-se rules and presumptions will make it more difficult for them. But many commentators also call for safe harbors (e.g., in terms of market shares) for potentially dominant firms. One of the more specific issues with the economic approach is whether and how efficiencies should be included in the analysis of abuse of dominance. Unlike the rules on anticompetitive agreements (Article 81) and on mergers, Article 82 does not explicitly refer to efficiencies. Still, the discussion paper does outline an efficiency defense, similar to that of the other competition policy areas. A difficulty in designing an efficiency defense is that, on the one hand, efficiencies should be an integral part of the Commission's investigation of anticompetitive effects; on the other hand, the dominant companies have an informational advantage and should bear the burden of proof.

13 The comments are published at: http://ec.europa.eu/comm/competition/antitrust/others/article_82_review.html.

Fig. 1.11: E.C. antitrust decisions by type (from Schinkel et al., 2006).
In the arena of non-merger enforcement, the chapter by Timothy Brennan
dwells on the important issue of monopolization law. Brennan notes that U.S.
monopolization law is controversial as evidenced by the AT&T and Microsoft
cases. He argues that monopolization is typically portrayed as a dominant
firm protecting itself by harming nascent rivals. Since the competitive process
itself—offering more and better products at lower prices—also harms rivals,
antitrust observers fall into two warring camps. Brennan argues that we could potentially limit the controversy and avoid spurious requirements if monopolization law were recast to make it akin to the relatively less controversial branches of collusion and merger law. In his view, this entails rejecting the focus on rivals and instead treating Section 2 cases as being about monopolization of otherwise competitive complementary markets, for example, in production inputs, distribution channels, or retail outlets.

1.5. Systemic issues

The final three chapters of this volume cover general topics that are relevant to all areas of competition policy: non-merger enforcement as well as cartels and merger control.
The institutional structure of competition and regulatory agencies can vary
across countries and even within countries. For example, the Assistant Attorney
General who heads the Antitrust Division of the U.S. Department of Justice
is appointed by the U.S. President, whereas the E.U. competition commis-
sioner has no such direct political affiliation. If different political principals have different views about intervention in markets, such differences in institutional structure may affect enforcement. The chapter by Antoine Faure-Grimaud and David Martimort discusses the pros and cons of politically independent competition agencies. Independence may, for example, mean the right to open and close a merger review on clear and pre-specified criteria. Issues of political influence often surface in connection with international mergers.
In the United States concerns about foreign ownership of ports and oil facili-
ties blocked two foreign acquisitions. In Germany the Ministry of Economics
overruled the antitrust authority’s recommendation to prohibit Eon’s takeover of
Ruhrgas on the ground that the merger would create a substantial export power-
house. Faure-Grimaud and Martimort argue that independence of the regulator stabilizes regulatory policies and avoids much of the fluctuation induced by exogenous political uncertainty about electoral outcomes. However, independence also increases the cost of preventing regulatory capture. Adding up both effects, independence nevertheless increases ex ante social welfare.
The U.S. has seen a dramatic increase in private antitrust litigation over the
last few decades. One view of this trend is that these lawsuits seek relief from anti-competitive activities by market participants. A competing view is that many of these lawsuits are brought for strategic reasons. Preston
McAfee, Hugo Mialon and Sue Mialon discuss firms’ incentives to use antitrust
lawsuits for strategic purposes—to prevent procompetitive efficiency improve-
ments by rival firms. They argue that smaller firms in more fragmented industries
are more likely to use the antitrust laws strategically than larger firms in concen-
trated industries.
Finally, the chapter by Joseph Francois and Henrik Horn explores important
linkages between competition policy and international trade and discusses the
scope for an international agreement to curb beggar-thy-neighbor competition policies. They argue that countries that are net exporters in sectors that are more easily cartelized have incentives to pursue such policies. In addition, there is a certain political logic to the attempts to bring such an agreement into a structure like the WTO. This is, in part, because such an agreement would enhance trade. It is also because a competition policy agreement may require side payments, and a trade agreement, like the WTO or regional schemes, offers plenty of scope for members to trade off gains under one agreement against losses under another. There is reason to believe
that support for such an agreement could come from a wide spectrum of factor
owners in both exporting and importing countries.

Acknowledgements

We are grateful to Hugo Mialon and Maarten Pieter Schinkel for helpful com-
ments.

References

Abrantes-Metz, R.M., Froeb, L., Geweke, J., Taylor, C. (2005), A variance screen for collusion. U.S. Federal Trade Commission, Bureau of Economics Working Paper No. 275, March.
Aubert, C., Rey, P., Kovacic, W. (2006), The impact of leniency and whistle-blowing programs on cartels. International Journal of Industrial Organization, in press.
Baker, J. (1997), Unilateral competitive effects theories in merger analysis. Antitrust 11, 21–26.
Baker, J. (2002), A preface to post-Chicago antitrust. In: van den Bergh, R., Pardolesi, R., Cucinotta, A. (Eds.), Post-Chicago Developments in Antitrust Analysis. Edward Elgar.
Baker, J. (2003), The case for antitrust enforcement. Journal of Economic Perspectives 17, 27–50.
Block, M., Feinstein, J. (1986), The spillover effect of antitrust enforcement. The Review of Economics and Statistics 68, 122–131.
Bork, R. (1966), Legislative intent and the policy of the Sherman Act. Journal of Law and Economics 9, 7–48.
human thought, resembling those deeper geological layers, which
only show themselves in a partial and fragmentary manner.
But none of these mythologists attached the least importance to the
names of the divinities, and if they were told that they were nothing
but names, it sounded almost like heresy to them, and they ignored
the fact that one of the latest scientific discoveries was being
submitted to them. Yet it is indubitable that the sun and the moon
were in the places occupied by them at present before they were
named; but not till they were named was there a Savitar, a Helios, a
Selene or a Mene. If then it is the name which makes the gods in
mythology, in enabling us to distinguish one from another, it follows
that we must call the Science of Language to our aid in order to
solve the problem of mythology, since that alone discloses the
causes which have despoiled the names of their primitive meaning,
and that alone shows how the germs of decrepitude, inherent in
language, affect both the phonetic portion and also the signification
of words, since words naturally react on thought and mould it.
CHAPTER VIII
BETWEEN SLEEPING AND WAKING

The habit which I have contracted of living in the society of our


ancestors of prehistoric times, would, it might be thought, naturally
cause me to notice the dissimilarities between us and them rather
than the likenesses; this often happens, but not always. Our fathers,
for instance, did not know the thousandth part of our vocabulary,
which is very copious; this would seem to indicate that our
knowledge has considerably increased in the course of thirty or forty
centuries. Words of deep import are familiar to us; who amongst us
does not know and use such as these—Law, Necessity, Liberty,
Spirit, Matter, Conscience, Belief, Nature, Providence, Revelation,
Inspiration, the Soul, Religion, Infinite, Immortality, and many
others, which are either of recent origin, or have become new
because their meaning has changed? Here the difference between
our fathers and ourselves springs into sight.
But the points of resemblance are still more striking.
Long before our present era, certain philosophers asserted that their
world was full of gods, we may say with equal truth that God fills our
world; His name is in every mouth, and our little children know it
well. Moreover the complete identity between certain mental acts of
our fathers and our own is easily recognised. Our fathers were
satisfied not to enquire concerning the nature of their gods, they
knew their names, and that sufficed. We too have become
accustomed to hear God’s name repeated frequently, without always
questioning ourselves as to its meaning, and in what way He has
made the earth His habitation.
To talk of what we do not grasp must be essentially human, since
we find the practice in two social conditions, separated from each
other by thousands of years.
It is incredible to what a point we of the nineteenth century carry
our lack of enquiry. If one day we were to count on our fingers the
number of interesting subjects we had allowed to pass by us without
any interrogations concerning them, fifty hands would not suffice us
for the tale; our ignorance would then become apparent. Should we
feel humiliated? In all probability no, for before arriving at this much
to be desired consummation, we should have been carried away by
many thoughts in no way bearing on the subject, and the one
thought which would come prominently to the front and hinder us
from passing our conduct in review would be, “I see no necessity to
apply myself to them.” In fact, nothing is easier and nothing so
reposeful to our mind as acquiescence in the popular opinion, which
we allow to guide us in our estimation of words and phrases; as so
frequently happens with ourselves (by “ourselves” I mean that very
considerable portion of society which separates the working classes
from the savants and philosophers).
“All things are full of the gods,” was said by the heathen in former
days; and in fact divinities abounded; this was not surprising. “God
has chosen to Himself a people and spread His name over the whole
earth, and to make His will to be known,” as we say now. Thus we
know that God is, and that His commandments must be kept.
To consider words as ideas is not wise. Why do we not imitate the
savages who when they hear an organ for the first time have a great
desire to open it in order to see what is inside; and we who are
civilised play with much light-hearted readiness on the gigantic
instrument of language without seeking to know the value of the
sounds we draw from it; and the names of beings and objects which
should exercise the most powerful influence to which moral things
can be subjected, are treated as mere sounds.
Have we asked ourselves the meaning of the word God? Many must
answer no to this question. This is not well, in spite of the fact that
those who have asked it in this form have not always succeeded in
obtaining an answer; no one has formed a complete conception of
God, since neither sense nor reason is equal to the task. Plato,
although named “Divine” by the ancient philosophers and by
Christian theologians, did not like to speak of The Gods, but
replacing the plural by the singular used the word “Divine,” but he
did not explain what he understood by this word. Plato certainly
mentions the Creator of the Universe, the Father of humanity, but
—“he does not tell His name, for he knew it not; he does not tell His
colour, for he said it not; he does not tell His size, for he touched it
not.”[51] Xenophanes, who lived 300 years before Plato, said, “There
is one God, the greatest amongst gods and men; neither in form nor
in thought like unto mortals.”[52]
The Greek philosophers protested against all attempts to apply a
name which should be adequate to the Supreme Being; since all the
words chosen failed to grasp His essence, and only designated
certain sides and points of view, predicting of Him whatever was
most beautiful in nature. For this reason early Christian writers who
were Greeks rather than Jews, who had studied in the schools of
Plato and Aristotle, spoke of God in the same abstract language, the
same negative terms; they said, “We cannot call Him Light, since
Light is His creation; we cannot call Him Spirit, since the Spirit is His
breath; nor Wisdom, since Wisdom emanates from Him; nor Force,
since Force is the manifestation of His Power.”
Thus instead of saying what God is, the philosophers, heathen as
well as Christian, prefer to say what He is not. But in that case what
idea could man form of a Being whom the wisest amongst them
could not represent or describe? Do we understand the nature of
this Supreme Being better by using the name so well known of
Providence? Again no; since we have introduced several meanings
into this word which are inconsistent the one with the other.
Amongst them there might well be some that are erroneous, which
would thus lead us to rest our hopes on false foundations.
This mist, hiding from us the meaning of words and obscuring our
ideas, is partly owing to a fault committed by the ancients
themselves.
When our ancestors communed with their divinities, they did not ask
themselves what the names they pronounced really meant; in
invoking Varuna, Helios, Athene, Prithvi, and the others, they were
satisfied, at least for the time being, since names possess a strange
calming property; this unquestioning acquiescence has been
bequeathed to us. We are neither more enquiring, more exact, nor
more pedantic than the greater part of our ancestors; we speak of
angels, for instance, without seeking to fathom their nature, much in
the same way as we might mention lords and dukes without
troubling ourselves to reflect that the one means “bread-giver” and
the other “dux,” or one capable of being a leader of men.
In speaking of the soul, the immortality of the soul, and of religion,
we use words which have become common property, and it is not
necessary to analyse them in order to feel sure that they represent
things which are very real; still we do not strive to understand what
these things really are. Thus it happens that words whose meaning
is unknown to us or escapes us, are generally those of which we
make daily use; we keep to the impression received of them in our
childhood, or accepted by current opinion, or with which sentiment
invests them, but this is unsatisfactory; we should feel ashamed of
not possessing more accurate knowledge than this of geography or
arithmetic. On the other hand, there are scientific terms which seem
to us so technical that we willingly abandon their use to experts, and
yet their meaning can be readily and definitely grasped.
What meaning, for instance, has the word infinite for us, even if
taken in its most simple acceptation; this infinite towards which our
thoughts travel when we raise our eyes to the skies? Astronomers
say to us, “Look at something greater than the greatest possible
greatness, that is the infinitely great.” They then quote figures, but
these figures of infinite greatness elude our imagination, we repeat
them mechanically and only out of respect to the high scientific
authority who guarantees the accuracy of the calculations or the
value of the appreciation.
A small object, apparently of the size of a homeopathic globule,
moves in space, it contains our continents and our oceans, this
globule moves in company with other globules of the same nature.
Astronomers speak to us of the millions of miles separating us from
the sun, yet this distance dwindles down to nothing as compared
with the nearest star, which, we are told, lies twenty millions of
millions of miles from our earth. Another stupendous thought is that
a ray of light traverses space at the rate of 187,000 miles in a
second, and yet it requires three years to reach us.
But this is only a small matter.
More than one thousand millions of such stars have been discovered
by our telescopes, and there may be millions of millions of suns
within our siderial system which are as yet beyond the reach of our
best telescopes; even that siderial system need not be regarded as
single within the universe, thousands of millions of similar systems
may be recognised in the galaxy or milky way.[53]
Now let us turn our eyes to the infinitely little. One drop of water
taken from the ocean contains atoms so small that a grain of the
finest dust would seem colossal by the side of them; chemists are
now able to ascertain the relative positions of atoms so minute that
millions of them can stand upon a needle’s point.
All this we gather from science when—working together with the
telescope—it investigates space; and this may still be little compared
to what we might see through glasses, which should magnify objects
some millions of times more than our best instruments.
The infinite in space has engaged the attention of many thinkers; I
will quote from two only, as this infinite, which they studied from
different points of view, yet suggests thoughts somewhat alike.
Kepler, the discoverer of the laws on which our planetary system is
based, said, “My highest wish is to find within the God whom I have
found everywhere without.” Kant, the philosopher, to whom the
Divine in nature and the Divine in man appeared as transcendent
and beyond our cognisance, and who refused to listen to any
theological argument tending to prove the existence of God, yet
says, “Two things fill me with new and ever growing admiration and
awe: the starry firmament above me, and the moral law within me;
neither of them is hidden in darkness, I see them both before me,
and I connect them directly with the consciousness of my own
existence.”[54]
These are very abstract thoughts; and it is pertinent to notice that
the most solemn religious terms, and the most striking expressions
of admiration, and poetical phrases of love, have their source in
verbal roots, indicative of acts and conditions palpable to the senses.
But I am approaching too closely to matters of high import. I am
drawn by the word Infinite. Aristotle said truly, “the Infinite attracts.”
He was thinking of that other infinite, which is not the one intended
by astronomers; but for myself the infinite in nature captivates me
so powerfully that I find it difficult to touch earth again. Let us walk
in beaten paths; let us endeavour to grasp the meaning of the more
simple words learnt mechanically at school, such as those denoting
abstraction as well as nouns, and terms both general and particular;
and let us see to what phase of thought and speech these
grammatical exercises will carry us.

Each palpable object is known to us according as it affects our


senses, that is to say, by its properties; all impalpable objects cannot
be known otherwise than by their qualities; but nothing exists in
nature, whether palpable or impalpable, that has only one property
or one quality, each object has several; an object as it exists in
reality is concrete, and has a concrete name. If we wished to
consider only one of its attributes, we should have to take that apart
and isolate it, in order to fix our thoughts exclusively on that; “we
must drop that of which the attributes are attributes.”[55] We see
white snow, white chalk, white milk, we have the sensation of the
white colour; but to take whiteness apart from the snow, the chalk,
and the milk, is an operation which requires an instrument, a means,
this we possess in a word, viz., the word white. Without that word
we should have the sensation of whiteness, but not the idea; it is
the word white, whilst separating the white colour from the snow,
the chalk, and the milk, that gives us the abstract idea as well as the
abstract term whiteness. This mental act is called abstraction: and it
is by this process of abstraction that we really arrive at the true
knowledge of anything, apart from the sensation of it only.
Here is another example of abstraction. Let us suppose that two
persons are in one room, and that there are in the room two
windows, two doors, two tables and two chairs. Let us try to
obliterate in our mind the persons, the windows, the doors, the
tables and the chairs; nothing now remains but the abstraction two.
Now two, as such, apart from objects, does not exist in nature; still
it is a conception we can retain in our mind, and this abstract idea
can be incorporated in the abstract word two.
These two examples of abstraction tell us but little of what is meant
by it; and although they teach us little of the part abstraction plays
in our mental life, they are correct from a logical point of view, and
clearly demonstrate the impossibility of retaining a thought apart
from the word expressing it, since evidently the representation of
two and of whiteness could not have been made if the words had
been lacking.
The faculty of abstraction has no doubt taken time to develop in
man, and the absence of abstract words and consequently of
abstract ideas was complete in primitive man as it now is in our very
young children. The faculties of brutes can by no means attain to
abstraction. One reason, amongst others, why we have no ground to
think brutes have abstract general ideas is that they do not speak,
that they have no use of the words without which it is impossible to
carry out the operation which I have just described, and to cause a
conception to arise from a sensation.
When, in our early days, our parents gave us instruction on the
three divisions of natural history, and explained to us of what they
consisted, we did not suspect that a period of immense length had
elapsed before man succeeded in thus skilfully classifying the vast
mass of names in the manner which struck us as so natural and
inevitable. Many thousands of objects were before us, each one
entitled to bear an expressive name; and in proportion as our
knowledge of things increased was science called upon to furnish
new terms; their name became legion and memory failed to retain
them. It therefore became a necessity to classify the objects of a
common nature under one name; hence the evolution of the terms
animal, vegetable and mineral, which relieved us from the burden of
enumerating all the objects composing genus and species; then in
speaking of them to others we use the generic term, which at the
same time presents the image to our own minds. Thus when we
wish to denote men having the same nationality as ourselves we
employ the collective term compatriot; in the same way the word
furniture includes all that serves to furnish our rooms. By the help of
this ingenious combination we relieve our memories of a mass of
encumbering words, we economise our time and our powers, and
simplify the machinery of our thoughts.
This is evidently an advantage. But now a difficulty presents itself.
When employing these general terms, such as vegetable, animal, the
human race, we are speaking of things of which we are ignorant,
and are therefore for us as if they had no existence. We cannot have
a complete knowledge of vegetables since that word comprehends
all plants and trees on the earth; neither of animals, since “animal”
includes not only all beasts lacking reason but also man who is
endowed with it. We are equally ignorant of the human race, since it
is composed of all human creatures, past, present and to come. It is
evident that we only know individual persons and things, such as
this fir tree or that oak, this horse, this cow, Paul or James, and we
know them because we are in a position to distinguish them by
naming them, or indicating them.
How is it that philosophers of the mental calibre possessed by Locke,
Hume and Berkeley—whose minds follow so closely the progress of
the perception of general ideas—did not question how it was that
terms which were applicable to these ideas could equally well be
applied to particular things? What was the origin of the word man
that it could be as suitable for Paul or James as for many men, in
fact the whole human race? This is a fact about which philosophers
do not appear to have troubled themselves, and which the science of
language alone can explain.
In the time of our primitive ancestors human knowledge was
evolved gradually from what was confused and vague, before
arriving at what was deemed settled and distinct. Man’s vocabulary
was small, substantives were rare; that which we now understand by
garden, courtyard, field, habitation, was merged into one and the
same conception, and would be expressed by one vocable, of which
the modern equivalent is enclosure; the word serpent designated all
creatures that crawled, the word fruit implied all that could be eaten,
the word man all who could think; each name was a general term
expressive of a general idea.
We may remember that the Sanscrit word sar, to run, which was at
first used for rivers in general, became a particular name; a
demonstrative element joined to the verb, changing it into sarit, run
here, sufficed at once to turn it into an intelligible phrase, and the
name of a particular river. In order to form the word man-u-s, man,
the constructors of language combined the root man, measurer,
thinker, in its secondary form man-u, with the suffix s, which gives
the meaning think-here. This was at first not of general application,
but as it could be repeated any number of times and referred each
time to different persons, who could each be named thinker-here, it
became a general term. We thus see that the name manus was from
the beginning something more than a mere conventional sign
applied to a particular person as are all proper names. It was a
predicative name, that is applicable to all possessing the same
attributes, viz., of being able to think, and capable of the same act,
that of thinking.
This discovery was followed by another not less unexpected. When
examining the oldest word for name, which in Sanscrit is nâman, in
Greek onoma, in Latin nomen, we find that it dates from a time
when the Sanscrit, Greek and Latin languages were all one;
consequently the English name and the German Name are not as we
supposed, words invented by the ancient Saxons, but they already
existed before the separation of Teutonic idioms from their elder
brothers.
After some further steps our contemporary philologists discovered
the sources whence proceeded this Sanscrit nâman; it is formed of
the root nâ, originally gnâ, to know, joined to a suffix which
generally expresses an instrument, a means; nâman is the
representative of gnâman, which we recognise in the Latin
cognomen, the consonant g being dropped as in natus, son, which
was formerly gnatus. This word name had at first a much more
extended meaning than that of a simple arbitrary sign applied “to
what we call a thing.” The constructors of the word were aware of a
fact of which consciousness was afterwards lost, and which the
learned ignored during all the supervening centuries—viz., that all
names, far from being mere conventional signs used to distinguish
one thing from another, were meant to express what it was possible
to know of a thing; and that a name thus places us in a position
really to be cognisant of a thing. A natural insight taught the early
framers of our language a truth only acquired by us after
interminable researches, such as Hegel expresses when saying, “We
think in words,” and which we find again in this somewhat
tautological expression “nominibus noscimus” = “tel nom, telle
notion.”
The fact that names, which are signs not of things, but of particular
concepts, are all derived from general ideas, is one of the most
fruitful discoveries of the science of language; since it not only
expresses the truth which has been stated below, that language and
the capability of forming general ideas separate man from the
animals, but also a second truth that these two phenomena are two
sides of the same truth. This explains the reason why the science of
language rejects equally the interjectional theory and the mimetic,
but accepts the final elements of language, those roots which all
contain concepts.
The name man, which we all apply to ourselves, is a title of nobility
to which none other can compare. It is the direct issue of man,
which in its turn came from mâ, to measure, this gave mâs, moon,
to the Sanscrit language. The word man contains in itself the kernel
of subtle thought; if we connect the word with the celestial body
that helps us to measure our time, we do not therefore necessarily
invest the moon with a living and thinking personality; it is sufficient
to consider that if our ancestors conceived of it as measuring the
nights and days, they had in themselves the capabilities with which
they invested the words they created.
We must also notice that the creators of this name having connected
it with the loftiest thing of which they could conceive—thought—did
not stop there; the sight of what was lowest—the dust—inspired
them with another name, homo = earth-born; this Latin word having
the same source as humus = the soil. Our fathers also gave
themselves a third name, which was brotos in Greek, mortalis in
Latin, and marta—the dying—in Sanscrit; they could hardly have
applied the word mortal to themselves if they had not at the same
time believed in other beings who did not die.
And this strange fact has come to pass, that on our planet there
existed in former days men—simple mortals as they were—who
manipulated thought, incorporating it with language, the only
domain in which it can exist; then these marvellous men so entirely
eclipsed themselves, and passed out of our ken, that their posterity
do not recognise them under their modest garb of anonymity; for
their work though still living through thousands of centuries, is so
unrecognised that men ask themselves, “Why is it not possible to
think apart from words?”
Thus we acknowledge the profound wisdom of the conceptions of
our ancestors; but their understanding worked unequally, on certain
points it was very advanced, but on others behindhand.
In following the march of human intellect in the past, we are struck
by the slowness with which thought and speech co-operated. As
long as our ancestors had no occasion to speak of the action of
covering a surface with a liquid or soft substance, they did not
possess the word var = to cover; “the name of colour in Sanscrit is
varna, clearly derived from this word; and not till the art of painting,
in its most primitive form, was discovered and named, could there
have been a name for colour.” For some time they continued to view
various objects differently coloured without distinguishing the tints;
it is well known that the distinction of colours is of late date; our
ancestors gazed on the blue sky, or the green trees, as in a dream,
without recognising blue or green, as long as they lacked words to
define the two colours, and some time elapsed before they
particularised the colours by giving each its proper title.
We speak of the seven colours of the rainbow, because the
intermediate tints elude us; the ancients acted much in the same
way, Xenophanes speaks of the rainbow as a cloud of purple, red,
and yellow; Aristotle also speaks of the tri-coloured rainbow, red,
yellow, and green; and Democritus seems only to have mentioned
black, white and yellow.
Does this indicate that our senses have gradually become more
acute and accurate? No, no one has asserted that the sensitiveness
of the organs of sense was less thousands of years ago than it is
now; the sensation has not changed, but “we see in this evolution of
consciousness of colour how perception goes hand in hand with the
evolution of language, and how, by a very slow process, every
definite concept is developed out of an infinitude of indistinct
perceptions.”[56]
The names of colours have not been applied arbitrarily, any more
than the names given to divinities. Blue, for instance, owes its origin
to the visible results of violence, or of an accident; the science of
etymology shows us that the Old Norse words, blár, blá, blatt, which
now mean blue, meant originally the livid colour of a bruise. Grimm
traces these words back to the Gothic bliggvan, to strike; and he
quotes as an analogous case the Latin cæsius—a bluish grey, from
cædere, to cut. If the assertion that blue and green are rarely
mentioned until a late date be correct, it would follow that they had
been worked out of an infinity of colours before they took their place
definitely as the colour of the sky and the colour of the trees and
grass.
As we trace etymology to its source, we see how man’s perception
was confused at first. From the Sanscrit root ghar, which has many
different meanings, such as to heat, to melt, to drip, to burn, to
shine, come not only many words—heat, oven, warmth, and
brightness, but also the names of many bright colours, all varying
between yellow, green, red, and white. But the most striking
example is afforded by the Sanscrit word ak-tu. Here we have the
first instance of the uncertainty in the meaning of the names of
colours which pervades all languages, and which can be terminated
at last by scientific definition only. This word has two opposite
meanings—a light tinge or ray of light, and also a dark tinge, and
night; this same word in Greek, ak-tis, means a ray of light. Thus,
whilst ideas are not definitely named, even the most simple, such as
those of white and black, are not realised; philosophers have long
known this, but the learned in physical science seem only recently to
have drawn attention to the fact. Virchow was the first to make the
following assertion: “Only after their perceptions have become fixed
by language, are the senses brought to a conscious possession and
a real understanding of them.”[57]
Surgeons have explained that the faculty of sight proceeds from the
movement of an unknown medium, which in the case of light has
been called ether, this strikes the retina, and is conveyed to the brain
by the optic nerve; “but what relation there is between the effect,
namely, our sensation of red, and the cause, namely, the 500
millions of millions of vibrations of ether in one second, neither
philosophy nor physical science has yet been able to explain.”[58]
We are able to picture to ourselves the difficulties which assailed
man in his efforts to express his impressions in primitive times, since
we find ourselves at times struggling with the same difficulties, and
there are occasions when we struggle in vain, we do not conquer the
difficulty.
Sensations which are subjective and personal are of all others the
most difficult to define, since we lack words to express what is from
its nature purely personal; and yet we have frequently occasion to
mention them, how can we best express ourselves? As the required
word does not seem forthcoming we have recourse to metaphor, and
almost unconsciously we use terms borrowed from external
phenomena connected with the sense of hearing, of smelling, and of
tasting, and which for the most part are acts or conditions in the
domain of the sense of sight. Our old acquaintances the roots,
whose meanings are to cut, to pinch, to bite, to burn, to hit, to
sting, to soften, having formed the base of the adjectives sharp,
sweet, keen, burning, we use these to describe certain sensations.
We do not know how better to particularise a physical pain than by
comparing it to something that tears, cuts or stings. But if certain
physical ills, certain colour perceptions, certain impressions of
sharpness, sweetness and heat experienced when tasting various
foods find metaphorical expression in external acts, there still
remains a whole category of simple ideas for which no words can be
found. There are certain sensations of taste which cannot be
expressed in words. Yesterday I ate a pear, to-day I have eaten a
peach; I am quite capable of distinguishing the special flavour of
each, but finding nothing in the world of facts with which to
compare them, I am without words to apply to them, and it would
be as impossible for me to convey an idea of the flavour to any one
who had never eaten a pear or a peach as to make any person
understand if I spoke in a language which was unknown to him.
Since all words that succeed in expressing our sensations are drawn
from external phenomena, we are in a position to know the origin
and historic past of these words. But I cannot thus easily foresee
even the near future of some of these words. The sound of the
clarionet and that of the hautbois, the whistling of the wind, the
whisper of the waves, the yellow of the straw and that of the lemon,
the green of the emerald and the blue of the sky, all characterise
objects belonging to the material world; but if these words: clarionet
and hautbois, wind and waves, straw and lemon, emerald and sky,
which alone enable us to define clearly to our minds certain sounds
and certain colours were lacking in our vocabulary, I do not know
how a musician could have composed a symphony, or an artist
painted his picture, although the creation of both works of art
proceeds equally from personal inspiration invisible to the eye.
The tie that binds thought to speech has been alternately
acknowledged and forgotten; if Plato believed that the origin of
language was the imitation of the voices of nature (an error which
weighed heavily on humanity during the space of two thousand
years), he also knew that words are indispensable to man for the
very formation of thought. Abelard was more explicit on this point,
he said: “Language is generated by the intellect, and generates
intellect.” Hobbes understood so well that language was meant first
of all for ourselves, and afterwards only for others, that he calls
words, as meant for ourselves, notæ, and distinguishes them from
signa, the same words as used for the sake of communication, and
he added: “If there were only one man in the world he would
require notæ.”[59] The close connection between thought and speech
cannot be more clearly or concisely expressed.
This discovery makes its way slowly in the world, because certain
philosophers who have been rendered immobile by tradition, darken
counsel by their speculations. Some of the Polynesians would seem
to have a far truer insight into the nature of thought and language
than these philosophers to whom I have made allusion; they call
thinking “speaking in the stomach,” which means of course to speak
inaudibly, and it is this absolutely inarticulate speech which is so
often mistaken for thought without words; because the fact is
ignored that notion and name are two words for one thing. “It is
certain,” they say, “that a thought may be conceived in the mind, but
is formulated at a later period; for instance, if you have to write a
letter of no great importance, and which affects you little, take your
pen, and before the idea appears to you completely clothed, your
hand has passed over the paper, and you proceed to read your ideas
in the words you see before you.” This is an illusion. We can no
doubt distinguish the written word from the word-concept, but the
former could not exist without the latter. I defy our opponents to
think of the most ordinary and familiar object, such as a dog for
instance, without saying to themselves the word dog. They would
explain that the remembrance only of a special dog, or of its bark
would suffice to call up the image of the dog in their minds; they do
not see that the likeness of a dog, or the remembrance of its bark is
equivalent to the word dog, and that they cannot possibly become
conscious to themselves of what they appear to be thinking, without
having the word in reserve in some part of themselves, either “in the
stomach,” as some savages say, or, as is more gracefully expressed
by the Italians, in petto.
Descartes was a learned Christian, who pondered for some time over
the questions whether the human mind could be certain of anything
without being supernaturally enlightened; he resolved to prove it;
and to this end he imagined that he, Descartes, was certain of
nothing—doubted of all—even mathematical conclusions; he then
reflected on this position, and after a time the idea occurred to him
that as he was capable of reflection it proved without a doubt that
he, Descartes, existed, and that consequently it was no longer
possible to have doubts of his own identity.
The portrait of this philosopher as depicted on the cover of his
works, represents him reclining in a chair thinking—thinking—
thinking—and exclaiming, “Cogito ergo sum.”
Those persons amongst us who are not specially interested in any
system of philosophy are certainly in the majority; all know that such
systems exist, and that they are noted, but from the want of
reflection, however little, some persons look upon them as having
sprung fully equipped, and in their present form, from the brains of
their founders. But it would be incorrect, simply on the evidence of a
frontispiece, to consider these philosophical processes as thus
instantaneous. The systems of philosophy, even those of small value,
require much time for their elaboration, and ripen slowly, and are
never free from opposition. They establish close links between the
living thinkers of to-day, and those who are no longer on earth. The
philosophers of the Middle Ages consulted those of antiquity, the
thinkers of to-day strove to be in agreement with those alike of the
Middle Ages and of antiquity, and there arise from this
intercommunion of knowledge, groups of ideas of which some are
borrowed and some original, some true and some false; these are
dependent on the intellectual lucidity and vigour of the latest arrivals
in the arena. Many problems are thus threshed out before our eyes.
Not long ago three philosophers were in dispute and Noiré records
the arguments; the discussion turned on the question of priority of
thought or speech.
They agreed on the fundamental point, all three said there could be
no reason without language, nor language without reason. But as
they penetrated more deeply into the question, they perceived
divergencies; although the conception and the word be inseparable,
yet there may be a moment of time—infinitely little, doubtless—
between the arrival of the one and of the other, as with twins.
According to Schopenhauer conceptions were the first in the field,
and their immediate duty consisted in creating words; since the
mind could not deal with ideas at will, could neither evoke them,
grasp them, nor reject them, whilst no signs were attached to them.
To this Geiger objected. How could ideas be produced whilst no
signs existed with which to represent them? Words came first, and
thought, rendered possible by the development of language,
followed; “language has created reason; before language, man was
without reason.”[60]
Max Müller replied to both. How could there be a sign when there
was nothing to represent? Conceptions and words, inseparable from
the beginning, were produced on the same day; the day when man’s
history begins; before that what was a fugitive impression and a
vocal sound void of sense, became a conception. Max Müller adds:
“If Geiger had said that with every new word there is more reason,
or that every progress of reason is marked by a new word, he would
have been right, for the growth of reason and language may be said
to be coral-like, each shell is the product of life, and becomes in turn
the support of new life.”[61]
The most important results obtained during the Middle Ages on
these subjects find their representations in this discussion carried on
by the three learned contemporaries. Max Müller’s point of view is
one which reconciles the two diverse opinions.
Men still find themselves under the magic influence of the past after
some thousands of years; the first words which our ancestors used
in the midst of their ordinary occupations have not ceased to appear
in our daily conversations, in our philosophical writings, and in the
reports of scientific proceedings; it is impossible to speak of our
family or social relations, of our affections, our ordinary obligations,
our most sacred duties, our observance of laws, without having
recourse to words and expressions, which represent the acts of
linking or tying, those early activities of our ancestors. The chemist
speaks of the affinity of the substances with which he is working;
the poet and the devout believer when giving free scope to their
highest aspirations do not find truer or loftier terms than links,
chains, ties, for that which connects them with the Giver of all pure,
sublime thoughts.
As it is possible in the present day to speak of delving into a
question (creuser) and of racking our brains (creuser) when we
puzzle over a conundrum; of linking one idea to another; of
polishing our manners by the help of art and letters; of seeking to
soften the heart of God by offerings (as if He were a mercenary
Judge), of linking ourselves with others the better to accomplish a
good work, of uniting in freeing ourselves from an undesirable
opponent; it follows that our ancestors as they emerged from their
condition of muteness found it necessary to dig (creuser) cabins for
themselves, to polish stones, to weave and plait branches together,
and to soften tough roots for their nourishment. The same words
repeat themselves from time immemorial.
But how comes it that these words, which have remained the same
outwardly, have so completely changed their meaning as exactly to
adapt themselves to modern usage? We have been deceived by
appearances. These words have not changed their meaning, but at
first they were applied to tangible objects and visible acts, those
which were the most necessary and the most usual in daily life at
that time; and now these words are applied to intangible things, and
invisible acts, the most necessary and usual in our present mental
life.
Nor is this which follows less curious. This adaptation of the old
words to modern usages could only have been accomplished on one
condition, that we should forget many things, and be utterly
oblivious to the original destination of these words; that we should
put from before our eyes all images of caves, branches, stones and
tough roots; and this condition we have fulfilled absolutely; the
forgetfulness has been complete; no one suspects the source of
these expressions; only a small number of men knows it, but these
men are thoroughly aware that they are making use of the true
primitive forms of the human language.
A difficulty to be avoided still remains. It might be said that, as it is
the result of concerted action undertaken from a community of
interest, that these images have become fixed in the memory, and
that if the ideas and representations exercised so potent a spell on
us, that we were compelled to use the words which can be traced
back to the first period of language, does it not follow that we
absolutely resemble each other, and that consequently we must
renounce the idea of attributing the least individuality to ourselves?
This is a great mistake. Each one of us gives to these
representations of ideas that form towards which he is impelled by
his own nature, his education, his environment. A man who has
some knowledge of astronomy will look at the star-lit sky with quite
another eye to that of the poet, who knows nothing of the subject
but is struck with its inexpressible splendour. A landscape painter
would see in a tree details of beauty which would quite escape one
who admired it, but had never sought to draw it; a clever architect
with one glance at a newly-built house could assign it a place either
with the failures or with those houses which were a success, and this
glance would sufficiently account for the murmured exclamation,
“How gladly would I live in it!”
CHAPTER IX
A DECISIVE STEP

How is it that primitive man, provided with five senses which bring
him into contact with the material world only, has found it possible
to conceive the existence of an invisible world peopled with beings
whom his eyes cannot see, nor his hands touch, nor his ears hear?
Between the birth of human reason and the invention of writing a
long period of time elapsed; when the art of writing was followed by
that of printing, man then printed all that he had thought and
written, and at present we possess thousands of volumes which will
inform us on all the truths and errors which have alternately
illuminated and obscured the human mind.
Whoever would take the trouble to examine this mass of documents,
and read those which furnish an approximate estimate of the mental
activity of our primitive ancestors, will see that the human ego
pursued science unconsciously long before scholars appeared, and
applied the name of philosophers to themselves, because they had
sought patiently and with many discussions, through thousands of
centuries, to find the best way of arriving at the truth.
These ancestors of ours were of an enquiring turn of mind.
The appearance of religion amongst men is at the same time the
most natural and the most supernatural fact in the history of
humanity.
The greater number of philosophers have recognised that the
tendency of the human mind to turn towards that which is outside
the domain of the senses is as powerful in man as the desire of
eating and drinking is in all living beings. The ancients acknowledged
this to be a true sense, as irresistible as the rest of the operations of
our external senses, and they have well named it sensus numinis—
the consciousness of the divine. The desire of understanding the
secrets with which the Unknown was invested naturally led to the
investigation of the influence which these secrets might exercise on
the destinies of mankind. Amongst certain peoples this gave birth to
the art of divination. To this they abandoned themselves in all
sincerity, not doubting that omnipotent beings would always be
ready to make their will known to mortals.
The men of modern times have shown that they have the critical
faculty more highly developed, and their investigations have dealt
more with practical matters. In the eighteenth century, writers,
historians and philosophers—Voltaire amongst the number—wishing
to know how the phenomenon of mental religion appeared in the
world, collected all the data to be obtained from travellers
concerning savages; they found that without exception all believed
in occult powers, as distinct from material or human forces, and
doubted not the efficacy of certain magic arts in use amongst them
to attract these powers to themselves, and to constrain them to act
on their behalf. Judging by analogy these writers contend that
primitive man, doubtless impressed by the alarming phenomena of
nature, would make search for the unknown beings around him,
whom the storms, the thunders and the lightnings obey, but these
beings were invisible, consequently there must be an invisible world
in communication with the visible or human world.
In this way were the beliefs of the present-day savages supposed to
be those current at the dawn of religious conceptions of humanity.
The ignorance of a subject, of whatever nature, has never prevented
the laying down of axioms concerning that subject. Towards the end
of the eighteenth century some Portuguese navigators, who never
embarked without providing themselves with talisman and amulet,—
to protect them during their voyages,—which they called feitiços,
seeing some negroes of the Gold Coast prostrating themselves with
every appearance of reverence, before bones, stones, or the tails of
some animals, concluded at once without further investigation that
these were considered as divinities by the negroes; and on their
return to their native land, they spread the report that savage races
worshipped feitiços. This word feitiços corresponds to the Latin
factitius, meaning that which is made by hand, as the amulets were
which belonged to the Portuguese sailors. The well-known President
de Brosses used the name and promulgated the idea, and without
having set foot on countries inhabited by negroes, composed and
published a book on their fetishes. In this manner the French
language was enriched in 1760 by the new word fetish. All this
seemed so natural and plausible that the word, and the idea of the
adoration of fetishes became quite general; the theory of the
worship of fetishes penetrated rapidly, and took deep root in the
public mind, it found its way very readily into school books and
manuals, and we were taught that the religion of savages consists
solely in the worship of fetishes, and learned writers draw the
conclusion that fetishism must necessarily have been the primitive
religion of humanity.
With what readiness do well-instructed persons, no less than the
ignorant, allow themselves to speak without sufficiently reflecting on
what they say. In order to elevate material objects, of whatever
kind, to the rank of divinities, it would be necessary previously to
possess the concept of a divinity. Writers on religion speak of that as
existing in primitive times which they seek to describe; they might as
well say that primitive men mummified their dead before they had
mûm or wax to embalm them with. Fetishism cannot be considered
as absolutely primitive, seeing that from its nature it must
presuppose the previous growth of the predicate God. This idea of
De Brosses and his successors will remain for ever a striking
anachronism in the history of religion.
The history of all primitive races opens with this note. “Man is
conscious of a divine descent, though made from the dust of the
earth; the Hindoo doubted it not, though he called Dyn his father,
and Prithvi his mother; Plato knew it when he said the earth
produced men, but that God formed them.”
On the banks of the Rhine, Tacitus listened to the war-songs of the
Germans; they were to him in an unknown tongue. “It resembles the
whisperings of birds,” he said, but added, “They are cries of valour,”
and his ear caught the sound of two words which recurred
frequently, “Tuisto Mannus!”
We now know what formed the basis of these songs; the Germans
were celebrating their lineal ancestors under the names of Tuisto,
and Mannus, his son. Tuisto appears to have been one form of Tiu,
the Aryan god of light. Tacitus tells us that the Germans “called by
the names of gods that hidden thing which they did not perceive
except by reverence.”[62] Mannus, so the Germans considered,
sprang from the earth, which they venerated as their mother-earth
who before nourishing her children on its fruits first gave them life.
This Mannus, grandson of the god of light, meant originally man.
Certain races living beyond the pale of organised religious systems
having been interrogated have furnished the following information
concerning their belief.
A very low race in India is supposed to worship the sun under the
name of Chando or Cando; they declared to the missionaries who
had settled amongst them that Chando had created the world. “How
is that possible! Who then has created the sun itself?” They replied
with “We do not mean the visible Chando, but an invisible one.”[63]
“Our god,” said the original natives of California to those who asked
in what god they believed, “our god has neither father nor mother,
and his origin is quite unknown. But he is present everywhere, he
sees everything even at midnight, though himself invisible to human
eyes. He is the friend of all good people, and he punishes the evil-
doers.”
A Blackfoot Indian, when arguing with a Christian missionary, said:
“There were two religions given by the Great Spirit, one in a book
for the guidance of the white men, who, by following its teaching,
will reach the white man’s heaven; the other is in the heads of the
Indians, in the sky, rocks, rivers and mountains. And the red men
who listen to God in nature, will hear his voice, and find at last the
heaven beyond.”
These Indians consider that external nature, which is to us at the
same time the veil and the revelation of the Divine, is sufficient
to teach them so much concerning the Supreme Being that
missionaries are superfluous.
Amongst those whose thoughts are occupied by the origin of
religious perception in man, there exist several theories; the first,
that the idea of infinity is a necessity to the mind of man, and that
by enlarging the boundaries of space and of time, it arrives at that
which is without space and without time. Thus may a true
philosopher reason; but primitive man was no philosopher, and the
infinite of philosophy had no existence for him. Another theory is
that man is naturally endowed with religious instincts, which render
him—alone of all living creatures—capable of perceiving the infinite
in the invisible; but the nature of this innate instinct not being clearly
defined, it is in vain that we try to explain one mystery by another.
Others again affirm that religious impressions were the result of a
supernatural revelation, but they remain vague as to the time in the
life of humanity at which, the people to whom, and the manner in which
this came to pass. At the same time they draw attention to the fact
that men have always arrived at conclusions rapidly, and, as they
consider, without due reflection; one of these conclusions is that God
is. Let us, for the sake of argument, replace the word man by the
word intuitive sense or apprehension, and we shall understand why
this intuitive sense renders it a superfluous task to make great
researches as to the reasons of man’s decision that God is. This
intuitive sense is wise, and utters at times great truths; but the
philosophers who consider it their métier to seek for the reason of
things are not content with what satisfies the intuitive sense, and they
are within their rights.
In our days the religious problem is viewed from two sides. What is
understood by these words—the conception of God? This is the
question of questions; and the names of the writers on the subject,
both philosophical and theological, are too numerous to give. It is a
psychological and thought-impelling study.
How did the idea of God first arise in the mind of primitive man?
This is another question, which few try to answer. It is a historical
study.
This presentation of the problem is perhaps not calculated to inspire
excitement or let loose agitating passions; and apparently the end of
the nineteenth century will not witness the renewal of the
philosophical debates on the subject which characterised the last
half of the eighteenth.
Never, either before or since, has there been so much agitation, nor
have men’s minds been so tossed by diverse currents. Many various
theories were promulgated at the time, but opinions grouped
themselves chiefly round two diametrically opposite schools of
thought, towards one or the other of which they leaned.
According to Hume, Condillac and their adherents, matter alone
exists; our understanding, our feelings, our will are only transformed
sensations. This was pure materialism. Pure idealism was
represented by Berkeley, who went so far as to deny the reality of
matter; according to him the bodies making up the universe have no
real existence; the true realities were God and the ideas He
produced in us.
Those who preserved their ancient beliefs were the most troubled;
they began to ask themselves whether the foundations of their faith
were solid, and they much desired to see certain problems solved.
These thoughts had exercised the minds of the sages of India, the
thinkers of Greece, the dreamers of Alexandria, and the divines and
scholars of the Middle Ages. They were the old problems of the
world: what we know of the Infinite; the questions of the beginning
and end of our existence; the question of the possibility of absolute
certainty in the evidence of the senses, of reason, or of faith.
How much was comprehended in these enquiries!
One hundred years previously, the cautious reasoner Descartes,
instead of asking “What do we know?”, had posed the
question, “How do we know?”
This was in fact a fundamental question which appealed to
philosophers who followed Descartes, as of the utmost importance,
and they also asked themselves, “After what manner does the
human mind acquire what it knows?”
To what is called Locke’s tenet, “Nihil est in intellectu quod non ante
fuerit in sensu” (there is nothing in the intellect that was not first in the
senses), Leibnitz answered with “Nihil—nisi intellectus” (nothing, except the intellect itself). Noiré
gives this sentiment a fresh turn by saying: “There is nothing in this
plant that was not already in the soil, the water and the atmosphere,
but that which causes this plant to be a plant.”
Condillac, who agreed with Locke, thus formulated his opinion:
“Penser c’est sentir” (to think is to feel); or, “In order to feel it is necessary to possess
senses,” which is self-evident.
Nevertheless, this sentence scandalised some of the philosophers;
they considered that it degraded thought. It degraded thought only in
Condillac’s mouth, since he and his school had previously taken out
of sentir or sensation all that possessed the right to be called
thought; but for those who admit that sensation is really
impregnated with thought it is no degradation; it is then true to say
that thought is sensation, in the same way as an oak-tree may be
said to be the acorn; and a little reflection will show us that “the
acorn is far more wonderful than the oak, and perceiving far more
wonderful than thinking.” This was not acknowledged by some who
disagreed as to the nature of reason and sensation; they considered
the former a mysterious power that could only be a direct gift of the
Creator, while the senses, to which we owe our perceptions, appeared
to them so natural and simple as not to require a scientific explanation.
If philosophers, such as Descartes and Leibnitz, succeeded in
influencing certain enlightened spirits, their language was not
understood by the general public; and Berkeley’s idealism, when
pushed to the extreme point, proved too abstract to counterbalance
the sensualist doctrines; its language hardly penetrated beyond the
inner circle of the experts dealing with the subject, whereas the
writings of Locke, Condillac and Hume permeated all classes of
society; everywhere the same questions were asked, and often
remained unanswered amidst the maze of metaphysics, in which it would have
been difficult to obtain a precise explanation of a science not yet
clearly defined.
It is natural that reason, after its high flight in pursuit of truth,
frightened by the obstacles met in its ascent, and by the
contradictions found in itself, should fall heavily to earth, exclaiming
with Voltaire, “O metaphysics, we are as advanced as in the times of
the Druids.” This same feeling of distrust towards proceedings which
resulted only in hypotheses was also expressed by Newton, who,
recognising that philosophy moved nowhere so freely nor with such
certainty as in the domain of facts, cried, “O physics,
preserve me from metaphysics.”
“Towards the end of the eighteenth century the current of public
opinion had been decidedly in favour of materialism, but a reaction
was slowly setting in in the minds of independent thinkers when
Kant appeared”; he came so exactly in the nick of time that one
almost doubts whether the tide was turning, or whether he turned
the tide.
To sketch briefly the chief points in Kant’s system, as he has
given it to us in his book called the Critique of Pure Reason, is a rash
proceeding; my object, which is to satisfy the imperious and more
immediate wants of our moral being, could only be attained by
ignoring the ineradicable difficulties; this is excusable if we, unlearned
members of society, are to form any idea of this philosophy.
The technical terms which abound in philosophical works are useful
in the exposition of a system, but rather the reverse for those who
are striving to grasp its salient features; for, understanding these
terms only partially, or not understanding them at all, they are
tempted to imagine that they take in the meaning; this leads to
vague notions being entertained on a subject which is nevertheless
earnestly studied. Generally I abstain from the use of esoteric terms,
but, Kant having coined fresh ones to express his ideas, it behoves us
to use his own formulas. To paraphrase them so as to render them
intelligible without multiplying them might only further obscure the
sense, and yet, on the other hand, to enter freely into further
developments would require a volume, and the end would be better
served by going direct to Kant’s work. Hence the embarrassment I
feel on approaching the subject.

Kant’s Teaching.
Kant undertook a work which no one before him had attempted.
Instead of criticising, as was then the fashion, the results of our
knowledge, whether in religion, in history, or in science, he shut his
eyes resolutely to all that philosophy, whether sensualistic or
spiritualistic, asserted as true, and, making Descartes his starting
point, he boldly went to the root of the matter; he questioned
whether human reason had the power of perceiving the truth, and in
cases where this power existed—but with limits—he sought to
discover why these limits existed. He therefore resolved to subject
reason itself to his searching analysis, and thus to assist, as it were,
at the birth of thought. He accomplished this extraordinary task with
an ease of which no one previously would have been capable.
The world is governed by immutable laws, and the human race is
subject to them. Kant gives an account of those laws which it must
necessarily obey in order to pass from being a passive “mirror” into a
conscious mind.

Sensation.
In any material object I may seek to obtain, such as a table, my
interest is concentrated on the table itself, not on the tools which
the workman has used in its manufacture; but if it is a question
of a thought, then the means by which it is produced by the human
mind engage us; and these means, of course, consist in the proper
use of the instruments at man’s disposal.
That which was at the origin of mankind is repeated at the birth of
every human being; he comes into the world in a lethargic condition,
