Contemporary World - Criminology


What is the Gold Standard?

By Nick K. Lioudis

Updated Feb 3, 2019

The gold standard is a monetary system where a country's currency or paper money has a value
directly linked to gold. With the gold standard, countries agreed to convert paper money into a
fixed amount of gold. A country that uses the gold standard sets a fixed price for gold and buys
and sells gold at that price. That fixed price is used to determine the value of the currency. For
example, if the U.S. sets the price of gold at $500 an ounce, the value of the dollar would be
1/500th of an ounce of gold.
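
As a rough sketch of that arithmetic (using the article's illustrative $500-per-ounce figure, not a historical price), the peg fixes a simple conversion between dollars and gold:

    # Illustrative sketch of the parity arithmetic described above.
    # The $500/oz figure is the article's example, not a historical price.
    GOLD_PRICE_USD_PER_OUNCE = 500.0  # price fixed by the government under the peg

    def dollars_to_gold_ounces(dollars: float) -> float:
        """Ounces of gold that a given number of dollars represents under the peg."""
        return dollars / GOLD_PRICE_USD_PER_OUNCE

    print(dollars_to_gold_ounces(1))      # 0.002 oz, i.e. 1/500th of an ounce
    print(dollars_to_gold_ounces(1000))   # 2.0 oz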

The gold standard is not currently used by any government. Britain stopped using the gold
standard in 1931, the U.S. followed suit in 1933, and the remnants of the system were abandoned in
1971. The gold standard was completely replaced by fiat money, a term to describe currency that
is used because of a government's order, or fiat, that the currency must be accepted as a means of
payment. In the U.S., for instance, the dollar is fiat money, and for Nigeria, it is the naira.

The appeal of a gold standard is that it wrests control of the issuance of money from the hands
of imperfect human beings. With the physical quantity of gold acting as a limit to that issuance, a
society can follow a simple rule to avoid the evils of inflation. The goal of monetary policy is not
just to prevent inflation, but also deflation, and to help promote a stable monetary environment in
which full employment can be achieved. A brief history of the U.S. gold standard is enough to
show that when such a simple rule is adopted, inflation can be avoided, but strict adherence to
that rule can create economic instability, if not political unrest.


Gold Standard System Versus Fiat System

As its name suggests, the term gold standard refers to a monetary system in which the value
of currency is based on gold. A fiat system, by contrast, is a monetary system in which the value
of currency is not based on any physical commodity but is instead allowed to fluctuate
dynamically against other currencies on the foreign-exchange markets. The term "fiat" is Latin for
"let it be done," that is, an authoritative order or decree. In keeping with this etymology, the
value of fiat currencies is ultimately based on the fact that they are defined as legal tender by
way of government decree.

In the decades prior to the First World War, international trade was conducted on the basis of
what has come to be known as the classical gold standard. In this system, trade between nations
was settled using physical gold. Nations with trade surpluses accumulated gold as payment for
their exports. Conversely, nations with trade deficits saw their gold reserves decline, as gold
flowed out of those nations as payment for their imports.

The Gold Standard: A History

"We have gold because we cannot trust governments," President Herbert Hoover famously said
in 1933 in his statement to Franklin D. Roosevelt. This statement foresaw one of the most
draconian events in U.S. financial history: the Emergency Banking Act, which forced all
Americans to convert their gold coins, bullion and certificates into U.S. dollars. While the
legislation successfully stopped the outflow of gold during the Great Depression, it did not
change the conviction of gold bugs, people who are forever confident in gold's stability as a
source of wealth.

Gold has a history like that of no other asset class in that it has a unique influence on its own
supply and demand. Gold bugs still cling to a past when gold was king, but gold's past also
includes a fall that must be understood to properly assess its future.

A Gold Standard Love Affair Lasting 5,000 Years

For 5,000 years, gold's combination of luster, malleability, density and scarcity has captivated
humankind like no other metal. According to Peter Bernstein's book The Power of Gold: The
History of an Obsession, gold is so dense that one ton of it can be packed into a cubic foot.

At the start of this obsession, gold was solely used for worship, demonstrated by a trip to any of
the world's ancient sacred sites. Today, gold's most popular use is in the manufacturing of
jewelry.

Around 700 B.C., gold was made into coins for the first time, enhancing its usability as a
monetary unit. Before this, gold had to be weighed and checked for purity when settling trades.

Gold coins were not a perfect solution, since a common practice for centuries to come was to clip
these slightly irregular coins to accumulate enough gold that could be melted down into bullion.
In 1696, the Great Recoinage in England introduced a technology that automated the production
of coins and put an end to clipping.

Since additional supplies from the earth could not always be relied upon, the supply of gold
expanded only through deflation, trade, pillage or debasement.

The discovery of America in the 15th century brought the first great gold rush. Spain's plunder of
treasures from the New World raised Europe's supply of gold by five times in the 16th century.
Subsequent gold rushes in the Americas, Australia and South Africa took place in the 19th
century.

Europe's introduction of paper money occurred in the 16th century, with the use of debt
instruments issued by private parties. While gold coins and bullion continued to dominate the
monetary system of Europe, it was not until the 18th century that paper money began to
dominate. The struggle between paper money and gold would eventually result in the
introduction of a gold standard.

The Rise of the Gold Standard

The gold standard is a monetary system in which paper money is freely convertible into a fixed
amount of gold. In other words, in such a monetary system, gold backs the value of money.
Between 1696 and 1812, the development and formalization of the gold standard began as the
introduction of paper money posed some problems.

The U.S. Constitution in 1789 gave Congress the sole right to coin money and the power to
regulate its value. Creating a united national currency enabled the standardization of a monetary
system that had up until then consisted of circulating foreign coin, mostly silver.

With silver in greater abundance relative to gold, a bimetallic standard was adopted in 1792.
While the officially adopted silver-to-gold parity ratio of 15:1 accurately reflected the market
ratio at the time, after 1793 the value of silver steadily declined, pushing gold out of circulation,
according to Gresham’s law.

The issue would not be remedied until the Coinage Act of 1834, and not without strong political
animosity. Hard money enthusiasts advocated for a ratio that would return gold coins to
circulation, not necessarily to push out silver, but to push out small-denomination paper notes
issued by the then-hated Bank of the United States. A ratio of 16:1 that blatantly overvalued gold
was established and reversed the situation, putting the U.S. on a de facto gold standard.

By 1821, England became the first country to officially adopt a gold standard. The century's
dramatic increase in global trade and production brought large discoveries of gold, which helped
the gold standard remain intact well into the next century. As all trade imbalances between
nations were settled with gold, governments had strong incentive to stockpile gold for more
difficult times. Those stockpiles still exist today.

The international gold standard emerged in 1871 following its adoption by Germany. By 1900,
the majority of the developed nations were linked to the gold standard. Ironically, the U.S. was
one of the last countries to join. In fact, a strong silver lobby prevented gold from being the sole
monetary standard within the U.S. throughout the 19th century.

From 1871 to 1914, the gold standard was at its pinnacle. During this period, near-ideal political
conditions existed in the world. Governments worked very well together to make the system
work, but this all changed forever with the outbreak of the Great War in 1914.

The Fall of the Gold Standard

With World War I, political alliances changed, international indebtedness increased and
government finances deteriorated. While the gold standard was not suspended, it was in limbo
during the war, demonstrating its inability to hold through both good and bad times. This created
a lack of confidence in the gold standard that only exacerbated economic difficulties. It became
increasingly apparent that the world needed something more flexible on which to base its global
economy.

At the same time, a desire to return to the idyllic years of the gold standard remained strong
among nations. As the gold supply continued to fall behind the growth of the global economy,
the British pound sterling and U.S. dollar became the global reserve currencies. Smaller
countries began holding more of these currencies instead of gold. The result was an accentuated
consolidation of gold into the hands of a few large nations.

The stock market crash of 1929 was only one of the world's post-war difficulties. The pound and
the French franc were horribly misaligned with other currencies; war debts
and reparations were still stifling Germany; commodity prices were collapsing; and banks were
overextended. Many countries tried to protect their gold stock by raising interest rates to entice
investors to keep their deposits intact rather than convert them into gold. These higher interest
rates only made things worse for the global economy. In 1931, the gold standard in England was
suspended, leaving only the U.S. and France with large gold reserves.

Then, in 1934, the U.S. government revalued gold from $20.67/oz to $35.00/oz, raising the
amount of paper money it took to buy one ounce to help improve its economy. As other nations
could convert their existing gold holdings into more U.S. dollars, a dramatic devaluation of the
dollar instantly took place. This higher price for gold increased the conversion of gold into U.S.
dollars, effectively allowing the U.S. to corner the gold market. Gold production soared so that
by 1939 there was enough in the world to replace all global currency in circulation.
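
A back-of-the-envelope calculation (a sketch, not from the original article) shows how large that devaluation was in gold terms:

    # Sketch: how much gold one dollar bought before and after the 1934 revaluation.
    old_price = 20.67  # dollars per ounce before 1934
    new_price = 35.00  # dollars per ounce after 1934

    gold_per_dollar_before = 1 / old_price   # about 0.0484 oz
    gold_per_dollar_after = 1 / new_price    # about 0.0286 oz

    decline = 1 - gold_per_dollar_after / gold_per_dollar_before
    print(f"The dollar's gold content fell by about {decline:.0%}")  # roughly 41%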

As World War II was coming to an end, the leading Western powers met to develop the Bretton
Woods Agreement, which would be the framework for the global currency markets until 1971.
Within the Bretton Woods system, all national currencies were valued in relation to the U.S.
dollar, which became the dominant reserve currency. The dollar, in turn, was convertible to gold
at the fixed rate of $35 per ounce. The global financial system continued to operate upon a gold
standard, albeit in a more indirect manner.

The agreement has resulted in an interesting relationship between gold and the U.S. dollar over
time. Over the long term, a declining dollar generally means rising gold prices. In the short term,
this is not always true, and the relationship can be tenuous at best, as the following one-year
daily chart demonstrates. In the figure below, notice the correlation indicator which moves from
a strong negative correlation to a positive correlation and back again. The correlation is still
biased toward the inverse (negative on the correlation study) though, so as the dollar rises, gold
typically declines.

Figure 1: USD Index (right axis) vs. Gold Futures (left axis)

Source: TD Ameritrade - ThinkorSwim

At the end of WWII, the U.S. had 75% of the world's monetary gold and the dollar was the only
currency still backed directly by gold. However, as the world rebuilt itself after WWII, the U.S.
saw its gold reserves steadily drop as money flowed to war-torn nations and its own high demand
for imports. The high inflationary environment of the late 1960s sucked out the last bit of air
from the gold standard.

The Gold Pool, which included the U.S. and a number of European nations, had kept the market
price of gold in line with the official parity rate by pooling gold reserves and selling on the
London market; this alleviated the pressure on member nations to appreciate their currencies to
maintain their export-led growth strategies. In 1968, the pool stopped selling gold on the London
market, allowing the market to freely determine the price of gold, and from 1968 to 1971 only
central banks could trade with the U.S. at $35/oz.

However, the increasing competitiveness of foreign nations combined with the monetization of
debt to pay for social programs and the Vietnam War soon began to weigh on America’s balance
of payments. With a surplus turning to a deficit in 1959 and growing fears that foreign nations
would start redeeming their dollar-denominated assets for gold, Senator John F. Kennedy issued
a statement in the late stages of his presidential campaign that, if elected, he would not attempt to
devalue the dollar.

The Gold Pool collapsed in 1968 as member nations were reluctant to cooperate fully in
maintaining the market price at the U.S. price of gold. In the following years, both Belgium and
the Netherlands cashed in dollars for gold, with Germany and France expressing similar
intentions. In August of 1971, Britain requested to be paid in gold, forcing Nixon's hand and
officially closing the gold window. By 1976, it was official; the dollar would no longer be
defined by gold, thus marking the end of any semblance of a gold standard.

In August 1971, Nixon severed the direct convertibility of U.S. dollars into gold. With this
decision, the international currency market, which had become increasingly reliant on the dollar
since the enactment of the Bretton Woods Agreement, lost its formal connection to gold. The
U.S. dollar, and by extension, the global financial system it effectively sustained, entered the era
of fiat money.

The Bottom Line

While gold has fascinated humankind for 5,000 years, it hasn't always been the basis of the
monetary system. A true international gold standard existed for less than 50 years - from 1871 to
1914 - in a time of world peace and prosperity that coincided with a dramatic increase in the
supply of gold. The gold standard was the symptom and not the cause of this peace and
prosperity.

Though a lesser form of the gold standard continued until 1971, its death had started centuries
before with the introduction of paper money – a more flexible instrument for our complex
financial world. Today, the price of gold is determined by the demand for the metal, and although
it is no longer used as a standard, it still serves an important function. Gold is a major financial
asset for countries and central banks. It is also used by the banks as a way to hedge against loans
made to their government and as an indicator of economic health.

Under a free-market system, gold should be viewed as a currency like the euro, yen or U.S.
dollar. Gold has a long-standing relationship with the U.S. dollar and, over the long term, the two
generally move inversely. With instability in the market, it is common to hear
talk of creating another gold standard, but it is not a flawless system. Viewing gold as a currency
and trading it as such can mitigate risks compared with paper currency and the economy, but
there must be an awareness that gold is forward-looking. If one waits until disaster strikes, it may
not provide an advantage if it has already moved to a price that reflects a slumping economy.


https://www.investopedia.com/ask/answers/09/gold-standard.asp

Bretton Woods system


From Wikipedia, the free encyclopedia

The Bretton Woods system of monetary management established the rules for commercial and
financial relations among the United States, Canada, Western European countries, Australia, and
Japan after the 1944 Bretton Woods Agreement. The Bretton Woods system was the first
example of a fully negotiated monetary order intended to govern monetary relations among
independent states. The chief features of the Bretton Woods system were an obligation for each
country to adopt a monetary policy that maintained its external exchange rates within 1 percent
by tying its currency to gold and the ability of the IMF to bridge temporary imbalances of
payments. There was also a need to address the lack of cooperation among countries and to
prevent competitive devaluation of currencies.
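
A small sketch of what the 1 percent rule implied in practice for a member's central bank; the par value of 4.00 units per dollar below is hypothetical and used only for illustration:

    # Sketch of the Bretton Woods +/-1% band around a declared par value.
    # The par value here is a hypothetical illustration, not a historical rate.
    PAR_VALUE = 4.00      # e.g. 4.00 units of a member currency per U.S. dollar
    BAND = 0.01           # the exchange rate had to stay within 1% of par

    lower_bound = PAR_VALUE * (1 - BAND)   # 3.96
    upper_bound = PAR_VALUE * (1 + BAND)   # 4.04

    def must_intervene(market_rate: float) -> bool:
        """The central bank had to buy or sell dollars once the rate left the band."""
        return market_rate < lower_bound or market_rate > upper_bound

    print(must_intervene(4.02))  # False: inside the band
    print(must_intervene(4.10))  # True: outside the band, intervention required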

Preparing to rebuild the international economic system while World War II was still raging, 730 delegates
from all 44 Allied nations gathered at the Mount Washington Hotel in Bretton Woods, New Hampshire,
United States, for the United Nations Monetary and Financial Conference, also known as the Bretton
Woods Conference. The delegates deliberated during 1–22 July 1944, and signed the Bretton Woods
agreement on its final day. Setting up a system of rules, institutions, and procedures to regulate the
international monetary system, these accords established the International Monetary Fund (IMF) and
the International Bank for Reconstruction and Development (IBRD), which today is part of the World
Bank Group. The United States, which controlled two thirds of the world's gold, insisted that the Bretton
Woods system rest on both gold and the US dollar. Soviet representatives attended the conference but
later declined to ratify the final agreements, charging that the institutions they had created were
"branches of Wall Street".[1] These organizations became operational in 1945 after a sufficient number of
countries had ratified the agreement.
On 15 August 1971, the United States unilaterally terminated convertibility of the US dollar to
gold, effectively bringing the Bretton Woods system to an end and rendering the dollar a fiat
currency.[2] This action, referred to as the Nixon shock, created the situation in which the U.S.
dollar became a reserve currency used by many states. At the same time, many fixed currencies
(such as the pound sterling) also became free-floating.

Origins

The political basis for the Bretton Woods system was in the confluence of two key conditions:
the shared experiences of two World Wars, with the sense that failure to deal with economic
problems after the first war had led to the second; and the concentration of power in a small
number of states.

Interwar period

There was a high level of agreement among the powerful nations that failure to coordinate
exchange rates during the interwar period had exacerbated political tensions. This facilitated the
decisions reached by the Bretton Woods Conference. Furthermore, all the participating
governments at Bretton Woods agreed that the monetary chaos of the interwar period had yielded
several valuable lessons.

The experience of World War II was fresh in the minds of public officials. The planners at
Bretton Woods hoped to avoid a repeat of the Treaty of Versailles after World War I, which had
created enough economic and political tension to lead to WWII. After World War I, Britain owed
the U.S. substantial sums, which Britain could not repay because it had used the funds to support
allies such as France during the War; the Allies could not pay back Britain, so Britain could not
pay back the U.S. The solution at Versailles for the French, British, and Americans seemed to
entail ultimately charging Germany for the debts. If the demands on Germany were unrealistic,
then it was unrealistic for France to pay back Britain, and for Britain to pay back the US.[3] Thus,
many "assets" on bank balance sheets internationally were actually unrecoverable loans, which
culminated in the 1931 banking crisis. Intransigent insistence by creditor nations for the
repayment of Allied war debts and reparations, combined with an inclination to isolationism, led
to a breakdown of the international financial system and a worldwide economic depression.[4]
The so-called "beggar thy neighbor" policies that emerged as the crisis continued saw some
trading nations using currency devaluations in an attempt to increase their competitiveness (i.e.
raise exports and lower imports), though recent research suggests this de facto inflationary policy
probably offset some of the contractionary forces in world price levels (see Eichengreen "How to
Prevent a Currency War").

In the 1920s, international flows of speculative financial capital increased, leading to extremes in
balance of payments situations in various European countries and the US.[5] In the 1930s, world
markets never broke through the barriers and restrictions on international trade and investment
volume – barriers haphazardly constructed, nationally motivated and imposed. The various
anarchic and often autarkic protectionist and neo-mercantilist national policies – often mutually
inconsistent – that emerged over the first half of the decade worked inconsistently and self-
defeatingly to promote national import substitution, increase national exports, divert foreign
investment and trade flows, and even prevent certain categories of cross-border trade and
investment outright. Global central bankers attempted to manage the situation by meeting with
each other, but their understanding of the situation as well as difficulties in communicating
internationally, hindered their abilities.[6] The lesson was that simply having responsible, hard-
working central bankers was not enough.

Britain in the 1930s had an exclusionary trading bloc with nations of the British Empire known
as the "Sterling Area". If Britain imported more than it exported to nations such as South Africa,
South African recipients of pounds sterling tended to put them into London banks. This meant
that though Britain was running a trade deficit, it had a financial account surplus, and payments
balanced. Increasingly, Britain's positive balance of payments required keeping the wealth of
Empire nations in British banks. One incentive for, say, South African holders of rand to park
their wealth in London and to keep the money in Sterling, was a strongly valued pound sterling.
Unfortunately, as Britain deindustrialized in the 1920s, the way out of the trade deficit was to
devalue the currency. But Britain couldn't devalue, or the Empire surplus would leave its banking
system.[7]

Nazi Germany also worked with a bloc of controlled nations by 1940. Germany forced trading
partners with a surplus to spend that surplus importing products from Germany.[8] Thus, Britain
survived by keeping Sterling nation surpluses in its banking system, and Germany survived by
forcing trading partners to purchase its own products. The U.S. was concerned that a sudden
drop-off in war spending might return the nation to unemployment levels of the 1930s, and so
wanted Sterling nations and everyone in Europe to be able to import from the US, hence the U.S.
supported free trade and international convertibility of currencies into gold or dollars.[9]

Post war negotiations

When many of the same experts who observed the 1930s became the architects of a new, unified,
post-war system at Bretton Woods, their guiding principles became "no more beggar thy
neighbor" and "control flows of speculative financial capital". Preventing a repetition of this
process of competitive devaluations was desired, but in a way that would not force debtor nations
to contract their industrial bases by keeping interest rates at a level high enough to attract foreign
bank deposits. John Maynard Keynes, wary of repeating the Great Depression, was behind
Britain's proposal that surplus nations be forced by a "use-it-or-lose-it" mechanism, to either
import from debtor nations, build factories in debtor nations or donate to debtor nations.[10][11] The
U.S. opposed Keynes' plan, and a senior official at the U.S. Treasury, Harry Dexter White,
rejected Keynes' proposals, in favor of an International Monetary Fund with enough resources to
counteract destabilizing flows of speculative finance.[12] However, unlike the modern IMF,
White's proposed fund would have counteracted dangerous speculative flows automatically, with
no political strings attached—i.e., no IMF conditionality.[13] According to economic historian
Brad DeLong, on almost every point where he was overruled by the Americans, Keynes was later
proved correct by events.[14]

Today these key 1930s events look different to scholars of the era (see the work of Barry
Eichengreen Golden Fetters: The Gold Standard and the Great Depression, 1919–1939 and How
to Prevent a Currency War); in particular, devaluations today are viewed with more nuance. Ben
Bernanke's opinion on the subject follows:

... [T]he proximate cause of the world depression was a structurally flawed and poorly managed
international gold standard. ... For a variety of reasons, including a desire of the Federal Reserve
to curb the U.S. stock market boom, monetary policy in several major countries turned
contractionary in the late 1920s—a contraction that was transmitted worldwide by the gold
standard. What was initially a mild deflationary process began to snowball when the banking and
currency crises of 1931 instigated an international "scramble for gold". Sterilization of gold
inflows by surplus countries [the U.S. and France], substitution of gold for foreign exchange
reserves, and runs on commercial banks all led to increases in the gold backing of money, and
consequently to sharp unintended declines in national money supplies. Monetary contractions in
turn were strongly associated with falling prices, output and employment. Effective international
cooperation could in principle have permitted a worldwide monetary expansion despite gold
standard constraints, but disputes over World War I reparations and war debts, and the insularity
and inexperience of the Federal Reserve, among other factors, prevented this outcome. As a
result, individual countries were able to escape the deflationary vortex only by unilaterally
abandoning the gold standard and re-establishing domestic monetary stability, a process that
dragged on in a halting and uncoordinated manner until France and the other Gold Bloc countries
finally left gold in 1936. —Great Depression, B. Bernanke

In 1944 at Bretton Woods, as a result of the collective conventional wisdom of the time,[15]
representatives from all the leading allied nations collectively favored a regulated system of
fixed exchange rates, indirectly disciplined by a US dollar tied to gold[16]—a system that relied
on a regulated market economy with tight controls on the values of currencies. Flows of
speculative international finance were curtailed by shunting them through and limiting them via
central banks. This meant that international flows of investment went into foreign direct
investment (FDI)—i.e., construction of factories overseas, rather than international currency
manipulation or bond markets. Although the national experts disagreed to some degree on the
specific implementation of this system, all agreed on the need for tight controls.

Economic security

Also based on experience of the inter-war years, U.S. planners developed a concept of economic
security—that a liberal international economic system would enhance the possibilities of postwar
peace. One of those who saw such a security link was Cordell Hull, the United States Secretary
of State from 1933 to 1944.[Notes 1] Hull believed that the fundamental causes of the two world
wars lay in economic discrimination and trade warfare. Specifically, he had in mind the trade and
exchange controls (bilateral arrangements)[17] of Nazi Germany and the imperial preference
system practiced by Britain, by which members or former members of the British Empire were
accorded special trade status, itself provoked by German, French, and American protectionist
policies. Hull argued
[U]nhampered trade dovetailed with peace; high tariffs, trade barriers, and unfair economic
competition, with war … if we could get a freer flow of trade…freer in the sense of fewer
discriminations and obstructions…so that one country would not be deadly jealous of another
and the living standards of all countries might rise, thereby eliminating the economic
dissatisfaction that breeds war, we might have a reasonable chance of lasting peace.[18]

https://en.wikipedia.org/wiki/Bretton_Woods_system

General Agreement on Tariffs and Trade


The General Agreement on Tariffs and Trade (GATT) is a legal agreement between many
countries, whose overall purpose was to promote international trade by reducing or eliminating
trade barriers such as tariffs or quotas. According to its preamble, its purpose was the "substantial
reduction of tariffs and other trade barriers and the elimination of preferences, on a reciprocal
and mutually advantageous basis."

It was first discussed during the United Nations Conference on Trade and Employment and was
the outcome of the failure of negotiating governments to create the International Trade
Organization (ITO). GATT was signed by 23 nations in Geneva on 30 October 1947, and took
effect on 1 January 1948. It remained in effect until the signature by 123 nations in Marrakesh on
14 April 1994, of the Uruguay Round Agreements, which established the World Trade
Organization (WTO) on 1 January 1995. The WTO is a successor to GATT, and the original
GATT text (GATT 1947) is still in effect under the WTO framework, subject to the modifications
of GATT 1994.[1]

GATT, and its successor WTO, have successfully reduced tariffs. The average tariff levels for the
major GATT participants were about 22% in 1947, but were 5% after the Uruguay Round in
1999.[2] Experts attribute part of these tariff changes to GATT and the WTO.

GATT and the World Trade Organization



In 1993, the GATT was updated (GATT 1994) to include new obligations upon its signatories.
One of the most significant changes was the creation of the World Trade Organization (WTO).
The 76 existing GATT members and the European Communities became the founding members
of the WTO on 1 January 1995. The other 51 GATT members rejoined the WTO in the following
two years (the last being Congo in 1997). Since the founding of the WTO, 33 new non-GATT
members have joined and 22 are currently negotiating membership. There are a total of 164
member countries in the WTO, with Liberia and Afghanistan being the newest members as of
2018.

Of the original GATT members, Syria,[12][13] Lebanon[14] and the SFR Yugoslavia have not
rejoined the WTO. Because FR Yugoslavia (renamed Serbia and Montenegro, with membership
negotiations later split in two) is not recognised as a direct SFRY successor state, its application
is considered a new (non-GATT) one. The General Council of WTO, on
4 May 2010, agreed to establish a working party to examine the request of Syria for WTO
membership.[15][16] The contracting parties who founded the WTO ended official agreement of the
"GATT 1947" terms on 31 December 1995. Montenegro became a member in 2012, while Serbia
is in the decision stage of the negotiations and is expected to become a member of the WTO in
the future.

Whilst GATT was a set of rules agreed upon by nations, the WTO is an institutional body. As
such, GATT was merely a forum for nations to discuss, while the WTO is a proper international
organization (with a physical headquarters, staff, and delegations). The WTO expanded its
scope from traded goods to include trade within the service sector and intellectual property
rights. Although it was designed to serve multilateral agreements, during several rounds of GATT
negotiations (particularly the Tokyo Round) plurilateral agreements created selective trading and
caused fragmentation among members. WTO arrangements are generally a multilateral
agreement settlement mechanism of GATT.[17]

Effects on trade liberalization

The average tariff levels for the major GATT participants were about 22 percent in 1947.[2] As a
result of the first negotiating rounds, tariffs were reduced in the GATT core of the United States,
United Kingdom, Canada, and Australia, relative to other contracting parties and non-GATT
participants.[2] By the Kennedy round (1962–67), the average tariff levels of GATT participants
were about 15%.[2] After the Uruguay Round, tariffs were under 5%.[2]

In addition to facilitating applied tariff reductions, the early GATT's contribution to trade
liberalization "include binding the negotiated tariff reductions for an extended period (made
more permanent in 1955), establishing the generality of nondiscrimination through most-favored
nation (MFN) treatment and national treatment, ensuring increased transparency of trade policy
measures, and providing a forum for future negotiations and for the peaceful resolution of
bilateral disputes. All of these elements contributed to the rationalization of trade policy and the
reduction of trade barriers and policy uncertainty."[2]

According to Dartmouth economic historian Douglas Irwin,[5]

The prosperity of the world economy over the past half century owes a great deal to the growth
of world trade which, in turn, is partly the result of farsighted officials who created the GATT.
They established a set of procedures giving stability to the trade-policy environment and thereby
facilitating the rapid growth of world trade. With the long run in view, the original GATT
conferees helped put the world economy on a sound foundation and thereby improved the
livelihood of hundreds of millions of people around the world.

https://en.wikipedia.org/wiki/General_Agreement_on_Tariffs_and_Trade
Keynesian Economics
Reviewed by Jim Chappelow

Updated Apr 11, 2019

What Is Keynesian Economics?

Keynesian economics is an economic theory of total spending in the economy and its effects on
output and inflation. Keynesian economics was developed by the British economist John
Maynard Keynes during the 1930s in an attempt to understand the Great Depression. Keynes
advocated for increased government expenditures and lower taxes to stimulate demand and pull
the global economy out of the depression.

Subsequently, Keynesian economics was used to refer to the concept that optimal economic
performance could be achieved—and economic slumps prevented—by influencing aggregate
demand through activist stabilization and economic intervention policies by the government.
Keynesian economics is considered a "demand-side" theory that focuses on changes in the
economy over the short run.

Key Takeaways

 Keynesian Economics focuses on using active government policy to manage aggregate demand
in order to address or prevent economic recessions.

 Keynes developed his theories in response to the Great Depression, and was highly critical of
classical economic arguments that natural economic forces and incentives would be sufficient to
help the economy recover.

 Activist fiscal and monetary policy are the primary tools recommended by Keynesian economists
to manage the economy and fight unemployment.

Understanding Keynesian Economics

Keynesian economics represented a new way of looking at spending, output, and inflation.
Previously, classical economic thinking held that cyclical swings in employment and economic
output would be modest and self-adjusting. According to this classical theory, if aggregate
demand in the economy fell, the resulting weakness in production and jobs would precipitate a
decline in prices and wages. A lower level of inflation and wages would induce employers to
make capital investments and employ more people, stimulating employment and restoring
economic growth. The depth and severity of the Great Depression, however, severely tested this
hypothesis.

Keynes maintained in his seminal book, The General Theory of Employment, Interest, and
Money and other works that during recessions structural rigidities and certain characteristics of
market economies would exacerbate economic weakness and cause aggregate demand to plunge
further.

For example, Keynesian economics disputes the notion held by some economists that lower
wages can restore full employment, by arguing that employers will not add employees to produce
goods that cannot be sold because demand is weak. Similarly, poor business conditions may
cause companies to reduce capital investment, rather than take advantage of lower prices to
invest in new plants and equipment. This would also have the effect of reducing overall
expenditures and employment.

Keynesian Economics and the Great Depression

Keynesian economics is sometimes referred to as "depression economics," as Keynes's General
Theory was written during a time of deep depression not only in his native land of the United
Kingdom but worldwide. The famous 1936 book was informed by directly observable economic
phenomena arising during the Great Depression, which could not be explained by classical
economic theory.

In classical economic theory, it is argued that output and prices will eventually return to a state of
equilibrium, but the Great Depression seemed to counter this theory. Output was low and
unemployment remained high during this time. The Great Depression inspired Keynes to think
differently about the nature of the economy. From these theories, he established real-world
applications that could have implications for a society in economic crisis.

Keynes rejected the idea that the economy would return to a natural state of equilibrium. Instead,
he argued that once an economic downturn sets in, for whatever reason, the fear and gloom that it
engenders among businesses and investors will tend to become self-fulfilling and can lead to a
sustained period of depressed economic activity and unemployment. In response to this, Keynes
advocated a countercyclical fiscal policy in which, during periods of economic woe, the
government should undertake deficit spending to make up for the decline in investment and
boost consumer spending in order to stabilize aggregate demand. (For more, read Can Keynesian
Economics Reduce Boom-Bust Cycles?)

Keynes was highly critical of the British government at the time. The government cut welfare
spending and raised taxes to balance the national books. Keynes said this would not encourage
people to spend their money, thereby leaving the economy unstimulated and unable to recover
and return to a successful state. Instead, he proposed that the government spend more money,
which would increase consumer demand in the economy. This would, in turn, lead to an increase
in overall economic activity, the natural result of which would be recovery and a reduction in
unemployment.

Keynes also criticized the idea of excessive saving, unless it was for a specific purpose such as
retirement or education. He saw it as dangerous for the economy because the more money sitting
stagnant, the less money in the economy stimulating growth. This was another of Keynes's
theories geared toward preventing deep economic depressions.
Both classical economists and free-market advocates have criticized Keynes' approach. These
two schools of thought argue that the market is self-regulating and businesses responding to
economic incentives will inevitably return it to a state of equilibrium. On the other hand, Keynes,
who was writing while the world was mired in a period of deep economic depression, was not as
optimistic about the natural equilibrium of the market. He believed the government was in a
better position than market forces when it came to creating a robust economy.

Keynesian Economics and Fiscal Policy

The multiplier effect is one of the chief components of Keynesian countercyclical fiscal policy.
According to Keynes's theory of fiscal stimulus, an injection of government spending eventually
leads to added business activity and even more spending. This theory proposes that spending
boosts aggregate output and generates more income. If workers are willing to spend their extra
income, the resulting growth in the gross domestic product (GDP) could be even greater than the
initial stimulus amount.

The magnitude of the Keynesian multiplier is directly related to the marginal propensity to
consume. Its concept is simple. Spending from one consumer becomes income for another
worker. That worker's income can then be spent and the cycle continues. Keynes and his
followers believed individuals should save less and spend more, raising their marginal propensity
to consume to effect full employment and economic growth.

In this way, one dollar spent in fiscal stimulus eventually creates more than one dollar in growth.
This appeared to be a coup for government economists, who could provide justification for
politically popular spending projects on a national scale.
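
As a rough illustration of that arithmetic (a textbook simplification; the MPC and stimulus figures below are illustrative assumptions), the simple Keynesian spending multiplier is 1 / (1 - MPC), where MPC is the marginal propensity to consume:

    # Sketch of the simple Keynesian spending multiplier: 1 / (1 - MPC).
    # The MPC value and stimulus amount are illustrative assumptions.
    def spending_multiplier(mpc: float) -> float:
        """Each dollar of spending is re-spent at rate mpc, round after round."""
        return 1 / (1 - mpc)

    mpc = 0.8          # households spend 80 cents of each extra dollar of income
    stimulus = 100.0   # billions of dollars of initial government spending

    total_boost = stimulus * spending_multiplier(mpc)
    print(total_boost)  # 500.0: the initial $100B supports up to $500B of output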

This theory was the dominant paradigm in academic economics for decades. Eventually, other
economists, such as Milton Friedman and Murray Rothbard, showed that the Keynesian model
misrepresented the relationship between savings, investment, and economic growth. Many
economists still rely on multiplier-generated models, although most acknowledge that fiscal
stimulus is far less effective than the original multiplier model suggests.

The fiscal multiplier commonly associated with the Keynesian theory is one of two broad
multipliers in macroeconomics. The other multiplier is known as the money multiplier. This
multiplier refers to the money-creation process that results from a system of fractional reserve
banking. The money multiplier is less controversial than its Keynesian fiscal counterpart.
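
A comparable sketch for the money multiplier under fractional reserve banking (the 10% reserve requirement below is an illustrative assumption, not a statement about any actual banking system):

    # Sketch of the textbook money multiplier under fractional reserve banking.
    # The 10% reserve requirement is an illustrative assumption.
    reserve_ratio = 0.10   # fraction of deposits banks must hold as reserves

    money_multiplier = 1 / reserve_ratio
    new_reserves = 1000.0  # an initial deposit of base money

    max_new_money = new_reserves * money_multiplier
    print(max_new_money)   # 10000.0: deposits can expand to 10x the new reserves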

Keynesian Economics and Monetary Policy

Keynesian economics focuses on demand-side solutions to recessionary periods. The
intervention of government in economic processes is an important part of the Keynesian arsenal
for battling unemployment, underemployment, and low economic demand. The emphasis on
direct government intervention in the economy places Keynesian theorists at odds with those
who argue for limited government involvement in the markets. Lowering interest rates is one
way governments can meaningfully intervene in economic systems, thereby generating active
economic demand. Keynesian theorists argue that economies do not stabilize themselves very
quickly and require active intervention that boosts short-term demand in the economy. Wages
and employment, they argue, are slower to respond to the needs of the market and require
governmental intervention to stay on track.

Prices also do not react quickly, and only gradually change when monetary policy interventions
are made. This slow change in prices, then, makes it possible to use money supply as a tool and
change interest rates to encourage borrowing and lending. Short-term demand increases initiated
by interest rate cuts reinvigorate the economic system and restore employment and demand for
services. The new economic activity then feeds continued growth and employment. Without
intervention, Keynesian theorists believe, this cycle is disrupted and market growth becomes
more unstable and prone to excessive fluctuation. Keeping interest rates low is an attempt to
stimulate the economic cycle by encouraging businesses and individuals to borrow more money.
When borrowing is encouraged, businesses and individuals often increase their spending. This
new spending stimulates the economy. Lowering interest rates, however, does not always lead
directly to economic improvement.

Keynesian economists focus on lower interest rates as a solution to economic woes, but they
generally try to avoid the zero-bound problem. As interest rates approach zero, stimulating the
economy by lowering interest rates becomes less effective because it reduces the incentive to
invest rather than simply hold money in cash or close substitutes like short term Treasuries.
Interest rate manipulation may no longer be enough to generate new economic activity if it
cannot spur investment, and the attempt at generating economic recovery may stall completely.
This is known as a liquidity trap.

Japan's Lost Decade during the 1990s is believed by many to be an example of this liquidity trap.
During this period, Japan's interest rates remained close to zero but failed to stimulate the
economy.

When lowering interest rates fails to deliver results, Keynesian economists argue that other
strategies must be employed, primarily fiscal policy. Other interventionist policies include direct
control of the labor supply, changing tax rates to increase or decrease the money supply
indirectly, changing monetary policy, or placing controls on the supply of goods and services
until employment and demand are restored.

https://www.investopedia.com/terms/k/keynesianeconomics.asp

Neoliberalism
Reviewed by Will Kenton
Updated Apr 9, 2019
What Is Neoliberalism?

Neoliberalism is a policy model—bridging politics, social studies, and economics—that seeks to
transfer control of economic factors to the private sector from the public sector. It tends towards
free-market capitalism and away from government spending, regulation, and public ownership.

Often identified in the 1980s with the conservative governments of Margaret Thatcher and
Ronald Reagan, neoliberalism has more recently been associated with so-called Third Way
politics, which seeks a middle ground between the ideologies of the left and right.

Understanding Neoliberalism

One way to better grasp neoliberalism is through its associations, and sometimes-subtle
contrasts, with other political and economic movements and concepts.

It's often associated with laissez-faire economics, the policy that prescribes a minimal amount of
government interference in the economic issues of individuals and society. This theory is
characterized by the belief that continued economic growth will lead to human progress, a
confidence in free markets, and an emphasis on limited state interference.

Key Takeaways

 Neoliberalism supports fiscal austerity, deregulation, free trade, privatization, and greatly
reduced government spending.

 Most recently, neoliberalism has been famously—or perhaps infamously—associated with the
economic policies of Margaret Thatcher in the United Kingdom and Ronald Reagan in the United
States.

 There are many criticisms of neoliberalism, including its potential to endanger democracy,
workers’ rights, and sovereign nations’ right to self-determination.

Neoliberalism is typically seen as advocating more intervention in the economy and society than
libertarianism, the hands-off ideology with which it's sometimes confused. Neoliberals usually
favor progressive taxation, for example, where libertarians often eschew it in favor of such
schemes as a flat tax rate for all. And neoliberals aren't necessarily averse to picking winners and
losers in the economy, and often do not oppose measures such as bailouts of major industries,
which are anathema to libertarians.

Although both neoliberalism and liberalism are rooted in 19th-century classical liberalism,
neoliberalism focuses on markets, while liberalism defines all aspects of a society.

Liberalism vs. Neoliberalism

Discussion abounds over how neoliberalism relates to the term that inspired it. To many,
liberalism at its essence is a broad political philosophy, one that holds liberty to a high standard
and defines all social, economic, and political aspects of society, such as the role of government,
toleration, and freedom to act. Neoliberalism, on the other hand, is seen as more limited and
focused, concerned with markets and the policies and measures that help them function fully and
efficiently.

A Model That Pleases Few

It may be telling that the term neoliberal is often used accusatorily, and seldom if ever as a self-
description. In a politically polarized world, neoliberalism receives criticism from both left and
right, often for similar reasons.

The focus on economic efficiency can, critics say, hinder other factors. For example, assessing
the performance of a public transit system purely by how economically efficient it is may lead to
workers’ rights being considered just a hindrance to performance. Another criticism is that the
rise of neoliberalism has led to the rise of an anti-corporatist movement stating that the influence
of corporations goes against the betterment of society and democracy.

On a similar note is the critique that neoliberalism's emphasis on economic efficiency has
encouraged globalization, which opponents see as depriving sovereign nations of the right to
self-determination. Neoliberalism's naysayers also say that its call to replace government-owned
corporations with private ones can reduce efficiency: While privatization may increase
productivity, they assert, the improvement may not be sustainable because of the world’s limited
geographical space. In addition, those opposed to neoliberalism add that it is anti-democratic, can
lead to exploitation and social injustice, and may criminalize poverty.

https://www.investopedia.com/terms/n/neoliberalism.asp


What is IR?
International Relations is concerned with relations across boundaries of nation-states. It
addresses international political economy, global governance, intercultural relations, national and
ethnic identities, foreign policy analysis, development studies, environment, international
security, diplomacy, terrorism, media, social movements and more. It is a multidisciplinary field
that does not restrict students to one approach and employs a variety of methods including
discourse analysis, statistics and comparative and historical analysis.
International Relations is an Increasingly Relevant Field of Study

International Relations is becoming increasingly relevant as the world grows more and more
interconnected through trade and commerce, migration, the internet and through social media,
and concerns about pressing global environmental problems.

A globalized world calls for academics and professionals trained to comprehend these complex
interactions - promoting understanding and crafting policy and business solutions to meet the
challenges of today and the future. International Relations offers a comprehensive and adaptable
toolkit particularly well suited to employment in a rapidly changing world.

International Relations at SF State

The Department of International Relations in the College of Liberal & Creative Arts explores the
interrelations of the world’s primary political institutions, nation-states. As the world is changing,
so is the field of International Relations. Increasingly, International Relations at SF State also
focuses on multinational corporations, international governmental and non-governmental
organizations and social movements. Our curriculum is under constant review to reflect these
global and regional trends.

Our students study specific countries and geographic regions and their interconnections through
political treaties, trade, migration, cultural and ethnic affinities, shared social, economic, and
ideological goals, hierarchies of power and wealth and other factors. We train students in
different theoretical approaches and empower them to make their own methodological choices.

Studying IR at SF State is a step out into the world

Whether you are looking to start or develop your professional career, enter academia, or simply
gain understanding of a globalized world – International Relations at San Francisco State will
provide you with a solid platform of critical knowledge and skills. We offer a wide selection of
courses, taught by a diverse faculty with expertise in the most pressing issues and dynamic world
regions today.

Our alumni can be found across the world in non-profit, private and government positions. These
include the U.S. Department of State and other countries’ foreign ministries, the City of San
Francisco and other local governments, U.S. intelligence agencies, non-profit organizations in
the areas of international development, human rights, international labor, environment, and
international migration, as well as in business. A significant number of our graduate students also
pursue further professional or doctoral studies at high-ranking academic institutions around the
world.

https://internationalrelations.sfsu.edu/what-ir
Internationalization
Reviewed by Will Kenton

Updated Jun 18, 2019

What Is Internationalization?

Internationalization describes the process of designing products to meet the needs of users in
many countries, or designing them so they can be easily modified to achieve this goal.
Internationalization might mean designing a website so that when it's translated from English to
Spanish, the aesthetic layout still works properly. This may be difficult to achieve because many
words in Spanish have more characters than their English counterparts. They may thus take up
more space on the page in Spanish than in English.
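
A minimal sketch of that layout concern (the 30% expansion allowance is a rough rule of thumb assumed for illustration, not a standard):

    # Sketch: flag UI strings whose translations may overflow the space reserved
    # for the English text. The 30% allowance is an illustrative assumption.
    EXPANSION_ALLOWANCE = 1.30

    def fits_layout(english_text: str, translated_text: str) -> bool:
        """True if the translation stays within the budget reserved for English."""
        return len(translated_text) <= len(english_text) * EXPANSION_ALLOWANCE

    print(fits_layout("Settings", "Configuración"))             # False: 13 > 8 * 1.3
    print(fits_layout("Download report", "Descargar informe"))  # True: 17 <= 19.5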

In the context of economics, internationalization can refer to a company that takes steps to
increase its footprint or capture greater market share outside of its country of domicile by
branching out into international markets. The global corporate trend toward internationalization
has helped push the world economy into a state of globalization, in which economies throughout
the world are highly interconnected due to cross-border commerce. As such, they are greatly
impacted by each others' activities and economic well-being.

Key Takeaways

 Internationalization is a term used to describe the act of designing a product so that it may be
readily consumed across multiple countries.

 This process is used by companies looking to expand their footprints beyond their countries of
domicile, by branching out into international markets.

 Internationalization often requires modifying products to conform to the technical needs of a
given country, such as creating plugs suitable for different types of electrical outlets.

How Internationalization Works

There are many incentives that might inspire companies to strive for internationalization. For
example, in the United States, companies that pay exorbitant overhead costs can shave expenses
by selling products in nations with relatively deflated currencies, or countries that have lower
costs of living. Such companies may also benefit from internationalization by reducing the cost
of business via reduced labor costs.

Economic internationalization can often lead to product internationalization since products sold
by multi-national companies are often used in multiple countries. As of 2017, over 50% of the
revenue earned by companies in the U.S. S&P 500 Index came from sources outside of the United
States. This is a clear sign that large U.S. companies are conducting a large amount of their
business internationally.
[Important: Companies looking to step up internationalization efforts should be cognizant of
potential trade barriers that may restrict their prospects for overseas commerce.]

Examples of Internationalization

When a company produces products for a wide range of clients in different countries, the
products that are internationalized often must be localized to fit the needs of a given country's
consumers.

For example, an internationalized software program must be localized so that it displays the date
as "November 14" in the United States, and as "14 November" in England. A company that
makes hair dryers or other appliances will need to ensure that their products are compatible with
the different wattages used in various countries.

They must also make sure that the plugs they make properly fit into different types of electrical
outlets.
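
A minimal sketch of the date-format case above (the locale tags and patterns are illustrative assumptions; a real application would typically pull them from a localization library or resource files):

    # Sketch: locale-dependent date formatting for an internationalized program.
    # The locale tags and patterns here are illustrative assumptions.
    from datetime import date

    DATE_PATTERNS = {
        "en-US": "%B %d",   # United States: "November 14"
        "en-GB": "%d %B",   # England:       "14 November"
    }

    def localize_date(d: date, locale_tag: str) -> str:
        """Render a date using the pattern configured for the given locale."""
        return d.strftime(DATE_PATTERNS[locale_tag])

    print(localize_date(date(2019, 11, 14), "en-US"))  # November 14
    print(localize_date(date(2019, 11, 14), "en-GB"))  # 14 November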

https://www.investopedia.com/terms/i/internationalization.asp
