MMS - Overview

The document summarizes the evolution of market microstructure research over time from the 1970s to present day. It discusses how the focus has shifted from spreads and quotes in the early period to more recent areas like high frequency trading, algorithmic trading, and transaction cost analysis. The document also provides numerous tables that outline important studies within each era and their focus on topics like spread components, cost measurement, pre-trade cost estimation, and trade cost optimization.


Market Microstructure


Market Microstructure
Robert Kissell, Ph.D., in The Science of Algorithmic Trading and Portfolio Management, 2014

Market Microstructure Literature


Academia has been performing market microstructure research since the 1970s. While that research has expanded immensely, much of the work currently being incorporated into trading algorithms and black box models is still based on the earlier groundbreaking studies.

How exactly has market microstructure research evolved over time? Research in the 1970s and 1980s focused mainly on spreads, quotes, and price evolution. In the following decade, 1990–2000, interest turned to transaction cost analysis, with a focus on slippage and cost measurement. The next period, 2000–2010, was marked by decimalization and Reg NMS, and research turned to algorithmic trading; studies focused on pre-trade analysis to improve trading decisions and computer execution rules, and on the development of optimal trading strategies for single stocks and portfolios. The period since 2010 has so far been marked by market fragmentation, high frequency trading, algorithmic trading across asset classes, and exchange traded fund research, and many portfolio managers are studying how best to incorporate transaction costs into stock selection and portfolio construction. These eras of market microstructure research are shown in Table 2.1.

Table 2.1. A Brief History of Market Microstructure Research and TCA

Era: General Interest

1970–1990: Spreads, Quotes, Price Evolution, Risk Premium
1990–2000: Transaction Costs, Slippage, Cost Measurement, Friction
2000–2010: Algorithms, Pre-Trade, Black Box Models, Optimal Trading Strategies
2010–: Market Fragmentation, High Frequency, Multi-Assets, and Portfolio Construction

To help analysts navigate the vast amount of market microstructure research and
find an appropriate starting point, we have highlighted some sources that we
have found particularly useful. First, the gold standard is the research by Ananth
Madhavan in two papers: “Market Microstructure: A Survey” (2000) and “Market
Microstructure: A Practitioner’s Guide” (2002). Algorithmic Trading Strategies (Kissell,
2006) provides a literature review in Chapter 1, and Algorithmic Trading & DMA by
Barry Johnson (2010) provides in-depth summaries of research techniques being
incorporated into today’s trading algorithms for both order execution and black
box high frequency trading. Finally, Institutional Investor’s Journal of Trading has
become the standard for cutting edge academic and practitioner research.

Some of the more important research papers for various transaction cost topics and algorithmic trading needs are shown in the following tables. Table 2.2 provides important insight into spread cost components; these papers provide the foundation for order placement and smart order routing logic. Table 2.3 provides important findings pertaining to trade cost measurement and trading cost analysis; these papers provide a framework for measuring, evaluating, and comparing trading costs across brokers, traders, and algorithms. Table 2.4 provides insight into pre-trade cost estimation; these papers have provided the groundwork for trading cost and market impact models across asset classes, as well as insight into developing dynamic algorithmic trading rules. Finally, Table 2.5 provides an overview of different trade cost optimization techniques for algorithmic trading and portfolio construction.

Table 2.2. Spread Cost Analysis

Category / Payment: Studies

Order Processing / Order processing fee: Tinic (1972)
Order Processing / Service provided: Demsetz (1968)
Inventory Cost / Risk-reward for holding inventory: Garman (1976), Stoll (1978), Ho and Stoll (1981), Madhavan and Sofianos (1998), Amihud and Mendelson (1980)
Adverse Selection / Payment for transacting with informed investors (investors with private information): Treynor (1971), Copeland and Galai (1983), Glosten and Harris (1988), Huang and Stoll (1997), Easley and O'Hara (1982, 1987), Kyle (1985)

Table 2.3. Cost Measurement Analysis

Study: Type, Observation

Perold (1988): Value Line Fund, 17.5% Annual
Wagner and Edwards (1993): Total Order Cost; Liquidity Demanders 5.78%, Neutral Investors 2.19%, Liquidity Suppliers −1.31%
Beebower and Priest (1980): Trades; Buys −0.12%, Sells 0.15%
Loeb (1983): Blocks; Small Sizes 1.1–17.3%, Large Sizes 2.1–25.4%
Holthausen, Leftwich and Mayers (1999): Trades; Buys 0.33%, Sells 0.40%
Chan and Lakonishok (1993): Trades; Buys 0.34%, Sells −0.04%
Chan and Lakonishok (1995): Orders; Buys 0.98%, Sells 0.35%
Chan and Lakonishok (1997): Orders; NYSE 1.01–2.30%, NASDAQ 0.77–2.45%
Keim and Madhavan (1995): Block Orders, 3–5%
Keim and Madhavan (1997): Institutional Orders, 0.20–2.57%
Plexus Group (2000): Market Impact 0.33%, Delay 0.53%, Opportunity Cost 0.16%
Kraus and Stoll (1972): Blocks, 1.41%
Lakonishok, Shleifer, and Vishny (1992): 1.30–2.60%
Malkiel (1995): 0.43–1.83%
Huang and Stoll (1996): NYSE 25.8 cents, NASDAQ 49.2 cents
Wagner (2003): 0.28–1.07%
Conrad, Johnson and Wahal (2003): 0.28–0.66%
Berkowitz, Logue and Noser (1988): VWAP Cost, 5 bp
Wagner (1975): Trading Costs, 0.25–2.00%+

Note: A positive value indicates a cost and a negative value indicates a saving.

Table 2.4. Cost Estimation

Study: Structure; Factors

Chan and Lakonishok (1997): Linear; Size, Volatility, Trade Time, Log(Price)
Keim and Madhavan (1997): Log-Linear; Size, Mkt Cap, Style, Price
Barra (1997): Non-Linear; Size, Volume, Trading Intensity, Elasticity, etc.
Bertsimas and Lo (1998): Linear; Size, Market Conditions, Private Information
Almgren and Chriss (2000): Non-Linear; Size, Volume, Sequence of Trades
Breen, Hodrick, and Korajczyk (2002): Linear; 14 factors, including Size, Volatility, Volume, etc.
Kissell and Glantz (2003): Non-Linear; Size, Volatility, Mkt Conditions, Seq. of Trades
Lillo, Farmer, and Mantegna (2004): Non-Linear; Order Size, Mkt Cap

Table 2.5. Trade Cost Optimization

Study: Optimization Technique

Bertsimas and Lo (1998): Price Impact and Private Information
Almgren and Chriss (2000): Risk Aversion, Value at Risk
Kissell and Glantz (2003): Risk Aversion, Value at Risk, Price Improvement
Kissell and Malamut (2006): Trading and Investing Consistency
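
Several of the approaches in Table 2.5 trade off expected impact cost against timing risk through a risk-aversion parameter. As a rough illustration of that idea, in the spirit of Almgren and Chriss (2000) rather than a reproduction of any paper's exact model, the sketch below computes an optimal liquidation schedule under a linear temporary-impact assumption; the function name and parameter values are purely illustrative.

```python
import numpy as np

def almgren_chriss_schedule(total_shares, horizon, n_slices,
                            sigma, eta, risk_aversion):
    """Liquidation trajectory for a mean-variance cost objective with a
    linear temporary-impact model (continuous-time approximation).

    total_shares  : order size to liquidate
    horizon       : trading horizon (e.g., in days)
    n_slices      : number of equally spaced trading intervals
    sigma         : price volatility per unit time (currency per share)
    eta           : temporary impact coefficient
    risk_aversion : penalty on the variance of execution cost
    """
    kappa = np.sqrt(risk_aversion * sigma ** 2 / eta)   # urgency parameter
    t = np.linspace(0.0, horizon, n_slices + 1)
    # Remaining holdings at each time; a larger kappa front-loads the selling.
    holdings = total_shares * np.sinh(kappa * (horizon - t)) / np.sinh(kappa * horizon)
    trades = -np.diff(holdings)                          # shares sold per slice
    return holdings, trades

if __name__ == "__main__":
    _, trades = almgren_chriss_schedule(total_shares=100_000, horizon=1.0,
                                        n_slices=10, sigma=0.95, eta=2.5e-6,
                                        risk_aversion=1e-6)
    print(np.round(trades).astype(int))
```

A larger risk-aversion value shortens the effective trading horizon, accepting more impact cost in exchange for less exposure to price risk; setting it near zero recovers a nearly uniform, VWAP-like schedule.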


Analysis of High-Frequency Data


Jeffrey R. Russell, Robert F. Engle, in Handbook of Financial Econometrics: Tools and Techniques, 2010

1.3. Economic Questions


Market microstructure economics focuses on how prices adjust to new information and how the trading mechanism affects asset prices. In a perfect world, new information would be immediately disseminated and interpreted by all market participants. In this full-information setting, prices would immediately adjust to a new equilibrium value determined by the agents' preferences and the content of the information. This immediate adjustment, however, is not likely to hold in practice. Not all relevant information is known by all market participants at the same time. Furthermore, information that becomes available is not processed at the same speed by all market participants, implying a variable lag between a news announcement and the agents' realization of its price implications. Much of modern microstructure theory is therefore driven by models of asymmetric information and the relationship between traded prices and the fair market value of the asset.

In the simplest form of these models, a subset of agents is endowed with superior knowledge regarding the value of an asset. These agents are referred to as privately informed, or simply informed, agents. Agents without superior information are referred to as noise or liquidity traders and are assumed to be indistinguishable from the informed agents. Questions regarding the means by which the asset price transitions to reflect the information of the privately informed agents can be couched in this context. Early theoretical papers utilizing this framework include Glosten and Milgrom (1985), Easley and O'Hara (1992), Copeland and Galai (1983), and Kyle (1985). A very comprehensive review of this literature can be found in O'Hara (1995).

The premise of these models is that market makers optimally update bid and ask prices to reflect all public information and the remaining uncertainty. On the NYSE it was historically the specialist who played the role of market maker. Even in markets without a designated specialist, bid and ask quotes are generally inferred either explicitly or implicitly from the buy and sell limit orders closest to the current price. Informed and uninformed traders are assumed to be indistinguishable when arriving to trade, so the difference between the bid and the ask prices can be viewed as compensation for the risk associated with trading against potentially better informed agents. Informed traders will make profitable transactions at the expense of the uninformed.

In this asymmetric information setting, two types of prices can be defined: the prices at which trades occur, and a notional fair market value that reflects both public and private information. We call the price at which a trade occurs the "transaction price," although it could be a posted bid, ask, or the midpoint of the bid and ask prices. Following the microstructure literature, we call the notional fair market value the "efficient price," in the sense that it reflects all information, both public and private. If the two prices do not coincide, then trades occur away from their efficient values, so there is a gain to one party and a loss to the other in each transaction. Natural questions arise. The larger these deviations, the larger the loss to one party. Measures of market quality can be constructed to reflect the size of these deviations. The simplest example is half the bid-ask spread: if trades occur at the bid or the ask price, and the midpoint of the bid and ask quotes is on average equal to the efficient price, then this distance reflects the average loss to the trader executing a market order and the gain to the trader executing the limit order. How do the rules of trade affect market quality? How is market quality affected by the amount of asymmetric information? How does market quality vary across stocks with different characteristics such as volatility or daily volume?
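
The half-spread measure of market quality described above is easy to compute from matched trade and quote data. A minimal sketch, assuming simple arrays of trade prices and prevailing quotes; the data and function name are illustrative.

```python
import numpy as np

def effective_half_spread(trade_prices, bids, asks):
    """Average distance between the traded price and the prevailing quote
    midpoint: a simple proxy for the average loss to a market-order trader
    (and gain to the limit-order trader) described in the text."""
    midpoints = (np.asarray(bids) + np.asarray(asks)) / 2.0
    return np.mean(np.abs(np.asarray(trade_prices) - midpoints))

# Illustrative data: four trades executed at the bid or the ask.
prices = [20.01, 19.98, 20.02, 19.97]
bids   = [19.99, 19.98, 20.00, 19.97]
asks   = [20.01, 20.00, 20.02, 19.99]
print(effective_half_spread(prices, bids, asks))   # 0.01, i.e., one cent
```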

In a rational expectations setting, market makers learn about private information by observing the actions of traders. Informed traders only transact when they have private information and would like to trade larger quantities to capitalize on their information before it becomes public. The practical implication is that the characteristics of transactions carry information. An overview of the predictions of these models is that prices adjust more quickly to reflect private information when the proportion of informed traders is higher. Volume and transaction rates are also higher when the proportion of informed traders is higher, so the bid-ask spread is predicted to be increasing in volume and transaction rates. This learning process is central to the study of market microstructure data and is often referred to as price discovery. Specific examples include price impact studies, where the impact of an event such as a trade on future prices is studied. Related questions involve multivariate studies of the prices of a single asset traded in multiple markets. Here questions regarding the origin of price discovery are relevant: for example, does price discovery tend to occur in the options derivative market or in the underlying? Similar questions can be asked across regional markets when the same asset is traded, say, in both a U.S. market and a European market.

A related issue is market quality. In an ideal market, a trader should be able to transact large quantities at a price very close to the fair value of the asset, over a short period of time. In reality, the rules of the market, including the algorithms used to match buyers with sellers, who trades first (priority issues), and the cost or incentive structure for posting limit orders, all play a role in the overall quality of the market. Accurate measurement of market quality is another central area in market microstructure. These measures can be as simple as bid-ask spreads but can also be constructed with more sophisticated modeling techniques.

Empirical market microstructure plays a role in two larger areas. First, an ideal market would have transaction prices that accurately reflect all information; that is, the efficient price and the transaction prices would coincide. In the absence of this perfect world, the deviation should be as small as possible. Because the rules of trade, that is, the market structure, play a role in this relationship, market designers have a large influence on market quality through the rules they impose. This can be a natural objective or one imposed by government regulations. A more complete understanding of market features across different trading rules and platforms aids in market design.

Recently, a second role of microstructure has emerged. Market participants may prefer assets traded in more desirable, higher quality markets. In this case, market quality may have an effect on how traders value an asset and may therefore be reflected in its equilibrium value. Following the analogy in the introduction, a sole access road that is occasionally impassable due to flooding might reduce the value of the home it serves. The price path toward the equilibrium actually affects the equilibrium value.

Clearly, assessing market microstructure effects requires studying data at an intraday, high frequency. Data aggregated over the course of a day will not contain the detailed information about price adjustment discussed in this section. Implementing the econometric analysis, however, is complicated by the data features discussed in the previous section. This chapter provides a review of the techniques and issues encountered in the analysis of high-frequency data.


How Markets Slowly Digest Changes in Supply and Demand

Jean-Philippe Bouchaud, ... Fabrizio Lillo, in Handbook of Financial Markets: Dynamics and Evolution, 2009

2.1.3. Motivation and Scope


Markets are places where buyers meet sellers and the prices of exchanged goods are
fixed. As originally observed by Adam Smith, during the course of this apparently
simple process, remarkable things happen. The information of diverse buyers and
sellers, which may be too complex and textured for any of them to fully articulate,
is somehow incorporated into a single number—the price. One of the powerful
achievements of economics has been the formulation of simple and elegant equilibrium models that attempt to explain the end results of this process without going
into the details of the mechanisms through which prices are actually set.

There has always been a nagging worry, however, that there are many situations in
which broad-brush equilibrium models that do not delve sufficiently deeply into
the process of trading and the strategic nature of its dynamics may not be good
enough to tell us what we need to know; to do better we will ultimately have to roll
up our sleeves and properly understand how prices change from a more microscopic
point of view. Walras himself worried about the process of tâtonnement, the way in
which prices settle into equilibrium. While there are many proofs for the existence
of equilibria, it is quite another matter to determine whether or not a particular
equilibrium is stable under perturbations—that is, whether prices initially out of
equilibrium will be attracted to an equilibrium. This necessarily requires a more
detailed model of the way prices are actually formed. There is a long history of work
in economics seeking to create models of this type (see e.g., Fisher, 1983), but many
would argue that this line of work was ultimately not very productive, and in any
case it has had little influence on modern mainstream economics.

A renewed interest in dynamical models that incorporate market microstructure is driven by many factors. In finance, one important factor is growing evidence
suggesting that there are many situations where equilibrium models, at least in
their current state, do not explain the data very well. Under the standard model
prices should change only when there is news, but there is growing evidence that
news is only one of several determinants of prices and that prices can stray far from
fundamental values (Campbell and Shiller, 1989; Roll, 1984; Cutler et al., 1989; Joulin
et al., 2008).2 Doubts are further fueled by a host of studies in behavioral economics
demonstrating the strong boundaries of rationality. Taken together this body of work
calls into question the view that prices always remain in equilibrium and respond
instantly and correctly to new information.

The work reviewed here argues that trading is inherently an incremental process and
that for this reason, prices often respond slowly to new information. The reviewed
body of theory springs from the recent empirical discovery that changes in supply
and demand constitute a long-memory process; that is, that its autocorrelation
function is a slowly decaying power law (Bouchaud et al., 2004; Lillo and Farmer,
2004). This means that supply and demand flow in and out of the market only very
gradually, with a persistence that is observed on time scales of weeks or even months.
We argue that this is primarily caused by the practice of order splitting, in which
large institutional funds split their trading orders into many small pieces. Because
of the heavy tails in trading size, there are long periods where buying pressure
dominates and long periods where selling pressure dominates. The market only
slowly and with some difficulty “digests” these swings in supply and demand. To
keep prices efficient in the sense that they are unpredictable and there are not easy
profit-making opportunities, the market has to make significant adjustments in
liquidity. Understanding how this happens leads to a deeper understanding of many
properties of market microstructure, such as volatility, the Bid–Ask spread, and the
market impact of individual incremental trades. It also leads to an understanding
of important economic issues that go beyond market microstructure, such as how
large institutional orders impact the price and in particular how this depends on
both the quantity traded and on time. It implies that the liquidity of markets is a
dynamic process with a strong history dependence.
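
In practice, the long-memory property described above is usually diagnosed by computing the autocorrelation function of trade signs (+1 for buyer-initiated, -1 for seller-initiated trades) and checking that it decays as a slow power law rather than exponentially. The sketch below is a minimal version of that diagnostic; the lag range, the log-log least-squares fit, and the variable names are illustrative choices, not the exact estimators used in the cited papers.

```python
import numpy as np

def sign_autocorrelation(signs, max_lag):
    """Sample autocorrelation of a trade-sign series (+1 buy, -1 sell)
    at lags 1..max_lag."""
    s = np.asarray(signs, dtype=float)
    s = s - s.mean()
    denom = np.dot(s, s)
    return np.array([np.dot(s[:-k], s[k:]) / denom for k in range(1, max_lag + 1)])

def power_law_exponent(acf, lags):
    """Fit acf(k) ~ C * k**(-gamma) by least squares in log-log space.
    Long memory corresponds to 0 < gamma < 1.  Assumes the acf values
    over the fitted lag range are positive."""
    slope, _ = np.polyfit(np.log(lags), np.log(acf), 1)
    return -slope

# Illustrative usage, assuming `signs` holds the recorded trade signs:
# acf = sign_autocorrelation(signs, max_lag=500)
# print(power_law_exponent(acf, np.arange(1, 501)))
```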

The work reviewed here by no means denies that information plays a role in forming
prices, but it suggests that for many purposes this role is secondary. In the last
half of the twentieth century, finance has increasingly emphasized information and
deemphasized supply and demand. The work we review here brings forward the role
of fluctuations and correlations in supply and demand, which may or may not be
exogenous. As we view it, it is useful to begin the story with a quantitative description
of the properties of fluctuations in supply and demand. Where such fluctuations
come from doesn't really matter; they could be driven by rational responses to
information or they could simply be driven by a demand for liquidity. In either case,
they imply that there are situations in which order arrival can be very predictable.
Orders contain a variable amount of information about the hidden background of
supply and demand. This affects how much prices move and therefore modulates
the way in which information is incorporated into prices. This notion of information
is internal to the market. In contrast to the prevailing view in market microstructure
theory, there is no need to distinguish between “informed” and “uninformed”
trading to explain important properties of markets, such as the shape of market
impact functions or the Bid–Ask spread.3

We believe that the work here should have repercussions on a wide gamut of
questions:

• At a very fundamental level, how do we understand why prices move, how information is reflected in prices, and what fixes the value of the volatility?
• At the level of price statistics, what are the mechanisms leading to price jumps
and volatility clustering?
• At the level of market organization, what are the optimal trading rules to
ensure immediate liquidity and orderly flow to investors?
• At the level of agent-based models, what are the microstructural ingredients
necessary to build a realistic agent-based model of price changes?
• At the level of trading strategies and execution costs, what are the consequences of empirical microstructure regularities on transaction costs and implementation shortfall?

We do not wish to imply that these questions will be answered here—only that
the work described here bears on all of them. We will return to discussing the
implications in our conclusions.


Handbook of Economic Forecasting


Peter Christoffersen, ... Bo Young Chang, in Handbook of Economic Forecasting,
2013

6 Summary and Discussion


The literature contains a large body of evidence supporting the use of option-implied information to predict physical objects of interest. In this chapter we have highlighted some of the key tools for extracting forecasting information using option-implied moments and distributions.

These option-implied forecasts are likely to be most useful when:

• The options market is highly liquid so that market microstructure biases do not adversely affect the forecasts. In illiquid markets bid-ask spreads are wide and, furthermore, option quotes are likely to be based largely on historical volatility forecasts, so that the option quotes add little or no new information.
• Many strike prices are available. In this case, model-free methods can be used to extract volatility forecasts without assuming particular volatility dynamics or return distributions.
• Many different maturities are available. In this case the option-implied forecast for any desired horizon can be constructed with a minimum of modeling assumptions.
• The underlying asset is uniquely determined in the option contract. Most often this is the case, but in the case of Treasury bond options, for example, many different bonds are deliverable, which in turn complicates option-based forecasting of the underlying return dynamics.
• Options are European, or in the case of American options, when the early exercise premium can be easily assessed. The early exercise premium must be estimated and subtracted from the American option price before using the methods surveyed in this chapter. Estimating the early exercise premium thus adds noise to the option-implied forecast.
• The underlying asset has highly persistent volatility dynamics. In this case time-series forecasting models may be difficult to estimate reliably.
• The underlying asset undergoes structural breaks. Such breaks complicate the estimation of time-series models and thus heighten the role for option-implied forecasts.
• The focus is on higher moments of the underlying asset return. The non-linear payoff structure in option contracts makes them particularly attractive for higher-moment forecasting.
• Risk premia are small. Risk premia may bias the option-implied forecasts, but option-based forecasts may nevertheless contain important information not captured in historical forecasts.

Related to the last bullet point, we have also summarized the key theoretical relationships between option-implied and physical densities, enabling the forecaster to
take into account risk premia and convert the option-implied forecasts to physical
forecasts. We hasten to add that it is certainly not mandatory that the option-implied
information is mapped into the physical measure to generate forecasts. Some empirical studies have found that transforming option-implied to physical information
improves forecasting performance in certain situations (see Shackleton et al., 2010
and Chernov, 2007) but these results do not necessarily generalize to other types of
forecasting exercises.

We would expect the option-implied distribution or moments to be biased predictors of their physical counterparts, but this bias may be small, and attempting to
remove it can create problems of its own, because the resulting predictor is no longer
exclusively based on forward-looking information from option prices, but also on
backward-looking information from historical prices as well as on assumptions on
investor preferences.

More generally, the existence of a bias does not prevent the option-implied information from being a useful predictor of the future object of interest. Much recent evidence on variance forecasting, for example (see our Table 1, which is Table 1 in Busch et al. (2011)), strongly suggests that this is indeed the case empirically.

Going forward, developing methods for combining option-implied and historical return-based forecasts would certainly be of use, in particular when risk premia may
be present. In certain applications, for example volatility forecasting, forecasts can
be combined in a simple regression framework or even by using equal weighting. In
other applications, such as density forecasting, more sophisticated approaches are
needed. Timmermann (2006) provides a thorough overview of available techniques.

Most of the empirical literature has been devoted to volatility forecasting using
simple Black–Scholes implied volatility. Clearly, much more empirical research is
needed to assess the more sophisticated methods for computing option-implied
volatility as well as the other forecasting objects of interest including skewness,
kurtosis, and density forecasting.
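
Since most of the empirical work cited relies on simple Black-Scholes implied volatility, the following sketch shows one standard way of backing it out from a quoted European call price by bisection; the numerical inputs are purely illustrative.

```python
from math import erf, exp, log, sqrt

def norm_cdf(x):
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def bs_call(spot, strike, maturity, rate, vol):
    """Black-Scholes price of a European call option."""
    d1 = (log(spot / strike) + (rate + 0.5 * vol ** 2) * maturity) / (vol * sqrt(maturity))
    d2 = d1 - vol * sqrt(maturity)
    return spot * norm_cdf(d1) - strike * exp(-rate * maturity) * norm_cdf(d2)

def implied_vol(price, spot, strike, maturity, rate, lo=1e-4, hi=5.0, tol=1e-8):
    """Invert the Black-Scholes formula for volatility by bisection
    (the call price is monotonically increasing in volatility)."""
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if bs_call(spot, strike, maturity, rate, mid) > price:
            hi = mid
        else:
            lo = mid
        if hi - lo < tol:
            break
    return 0.5 * (lo + hi)

# Illustrative: a one-year at-the-money call quoted at 10.45 with spot 100
# and a 5% rate implies a volatility of roughly 20%.
print(implied_vol(price=10.45, spot=100.0, strike=100.0, maturity=1.0, rate=0.05))
```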


Regulating Short Sales in the 21st Century
Houman B. Shadab, in Handbook of Short Selling, 2012

4.4 Conclusion
Although U.S. regulation of short sales remained fundamentally unchanged for
nearly its first 70 years of existence, it is unlikely that another several decades will pass
before fundamental changes are again introduced into the way the SEC regulates
short sales. Developments in market microstructure now happen at a faster pace
than in the 20th century, which makes it more likely that any particular regulatory
framework will be rendered ineffective, counterproductive, or even obsolete in the
near future. In addition, ongoing efforts at financial reform under the Dodd–Frank
Act and other initiatives will likely keep the technicalities of short sale regulation
permanently unsettled.

Differences in opinion among the SEC's body of five commissioners responsible for approving any new rule making may also keep short sale regulation an area
that is revisited relatively frequently. For example, the most recent alternative uptick
rule price test was approved by a vote of 3-2, with Commissioners Kathleen Casey
and Troy Paredes issuing strong dissents. Among other concerns, the dissenting
commissioners questioned the empirical basis for the price test based on academic
studies, whether the test would actually achieve its intended goal, and whether the
overall benefits outweighed its costs. These types of divisions among commissioners
indicate that any particular short sale regulation may be amended if the rules are
revisited once the makeup of the commissioners changes. Indeed, in their release
adopting the alternative uptick rule, even the three approving commissioners noted
with respect to particular exceptions and other rules that alternative approaches
could also likely achieve their stated goals in regulating short sales and that they
were exercising their collective judgment in choosing a particular approach and not
another.

Accordingly, although the SEC has recently returned to its original approach to short
sale regulation in implementing a price test, market participants should be aware
that there is no guarantee that a price test or any of its particular exceptions or
approaches will remain in place for an extended period of time. Rules with respect
to locate and close-out requirements in particular may be amenable to change to
the extent that the SEC is influenced by economic studies specifically analyzing
naked short sales. A study by Fotak and colleagues (2009), for instance, found that
naked short sales generally contribute to price quality, which may undermine the
rationale for relatively strict locate and close-out requirements. The studies that the
SEC must conduct pursuant to Dodd–Frank may also call aspects of the current short
sale regulatory regime into question. In short, market developments, differences
in opinion among commissioners, an extended period of share price volatility or
stability, and continued governmental and academic research into the impact of
short selling and its regulation are all factors likely to cause change to any particular
short sale regime.


A Common Measure of Liquidity Costs for Futures and Stock Exchanges

Christopher Ting, in Handbook of Asian Finance: REITs, Trading, and Fund Performance, 2014

12.2 A Brief Review of Implicit Trading Costs


In this section, we trace the intellectual roots of implicit trading costs. We discuss
the assumptions underlying the quoted and effective spreads. Examining the assumptions of these traditional measures motivates this chapter's proposal for a new
method that is independent of the tick size design and the trading mechanism.

12.2.1 Bid-Ask Spread


The importance of trading costs has been discussed in a number of papers. Stigler
(1964) defines it as the cost of consummating a transaction, more specifically, as the
sum of commissions and the bid-ask spread. He suggests using transaction costs
as a criterion to measure the efficiency of a specialist who facilitates transactions
by carrying inventories and quoting bid and ask prices. Demsetz (1968) defines
transaction costs as the costs of exchanging ownership titles. To facilitate empirical
implementation, he narrows the definition of transaction cost as the cost of using
the NYSE to accomplish a quick exchange of stock and money. More concretely, the
trading cost comprises the brokerage fee plus the bid-ask spread. His definition is
identical to Stigler’s (1964). Although the brokerage commission comprised about
60% of the transaction cost at that time, treating the bid-ask spread as a component
of the transaction cost is an important contribution.

Much of the later work in market microstructure is influenced by Demsetz's predication that the bid-ask spread is proportional to the markup paid for the provision of
immediate trade execution. Tinic (1972) performs a more detailed empirical study
of the determinants of bid-ask spread using the conceptual framework pioneered
by Demsetz (1968). Assuming market efficiency, Roll (1984) defines and derives an effective bid-ask spread, which is equal to twice the square root of the negative first-order serial covariance of price changes.
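
Roll's estimator is straightforward to compute from a series of transaction prices: the spread is twice the square root of the negative first-order autocovariance of price changes. A minimal sketch with a simulated series in place of real data; the simulation parameters are illustrative.

```python
import numpy as np

def roll_spread(prices):
    """Roll (1984) effective spread estimator,
    spread = 2 * sqrt(-Cov(dp_t, dp_{t-1})),
    where dp_t are successive transaction price changes.  Returns NaN when
    the sample autocovariance is positive and the estimator is undefined."""
    dp = np.diff(np.asarray(prices, dtype=float))
    autocov = np.cov(dp[1:], dp[:-1])[0, 1]
    return 2.0 * np.sqrt(-autocov) if autocov < 0 else float("nan")

# Illustrative check under Roll's assumptions: an efficient price following a
# random walk, observed at the bid or the ask with equal probability.
rng = np.random.default_rng(0)
efficient = 20.0 + np.cumsum(rng.normal(0.0, 0.005, 10_000))
trades = efficient + 0.01 * rng.choice([-1.0, 1.0], 10_000)   # true spread = 0.02
print(roll_spread(trades))   # close to 0.02
```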

A number of papers have discussed the possible origins of bid-ask spread. These
may be broadly categorized as theories focusing on inventory, order placement,
and information asymmetry (see Whitcomb, 1988). Stoll (1978) develops a theory
of inventory holding costs with market makers having the prerogative to use the
bid-ask spread to offset the risk and cost of holding an inventory. Garbade and Silber
(1979) define liquidity risk as the variance of the difference between the equilibrium
value of an asset at the time an investor decides to trade and the transaction price
ultimately realized. A result of their theoretical analysis is that market makers who
trade on their own accounts reduce liquidity risk. Again, dealers’ inventory plays
an important role in buffering transient imbalances in purchase and sale orders.
Combined with Stoll’s (1978) theory, it follows that investors need to compensate
market makers by way of a bid-ask spread since they benefit from liquidity risk
reduction and immediacy of trade.

An alternative exploration by Cohen et al. (1981) demonstrates that the costs of information-related activities induce investors to use other order placement strategies
such as limit orders. Their conclusion is that the bid-ask spread stems from the
dynamic interaction between market orders and limit orders. The third path that
attempts to understand the origin of bid-ask spread is pioneered by Bagehot (1971),
Copeland and Galai (1983) as well as Glosten and Milgrom (1985). They approach
it from the perspective of adverse selection. In their studies, the spread is the
transaction cost that the specialists impose to insure themselves against informed
traders.

12.2.2 Assumptions Underlying Quotes-Based Measures


Being commonly regarded as a measure of liquidity-induced transaction costs, and as comprehensively surveyed by Keim and Madhavan (1998) and Stoll (2000), the bid-ask spread has become a prevalent feature of market microstructure. However, quantifying the round-trip trading costs as the quoted spread requires the following assumptions:

Assumption I. Designated market makers and investors who submit limit orders are capable of setting bid and ask quotes such that the efficient price falls within the quotes.
Assumption II. The bid and ask quotes are the only available prices in the market.
Assumption III. The midpoint of the quotes equals the efficient price.

It is difficult to give a definitive description of efficient price, especially under changing market conditions. Recognizing this difficulty, Stigler (1964) argues that
market makers not only consider their inventory levels and the proper amount of
resources to ascertain the efficient price, they also must predict and speculate
on changes in the efficient price. Hence, Assumption I is reasonable because
market makers and traders are adaptive agents who actively learn, update their
beliefs, and make price-setting decisions frequently. However, the second and third
assumptions present some caveats on using the quoted spread to measure liquidity
costs. It is well-known that block traders employ a search-brokerage mechanism to
locate counterparties who are willing to accommodate a large trade. In this case, the
transaction price may deviate from the bid or the offer price.

In view of these problems that invalidate the assumptions, many consider the effective spread instead. It is the difference between the market price and the midpoint
of the prevailing bid and ask quotes. The effective spread relaxes Assumption II, as it
does not require the transaction price to equal the bid or the ask price. Nevertheless,
it still requires the validity of Assumption III. For highly liquid stocks with competitive
quoting by market makers, the midpoint is likely to coincide with the efficient price.
When quoted prices and the associated sizes are not updated frequently enough to
account for changing market conditions, there is no guarantee that the midpoint is
still a good proxy for the efficient price.

Since the assumptions underlying traditional quotes-based measures are not without flaws, alternative ways of estimating transaction costs appear necessary. In this
chapter, we extend Roll’s model by postulating that the unobservable efficient
price follows the geometric Brownian motion. This is a common assumption in the
market microstructure literature, which is also assumed by Roll (1984).


Short Selling in Emerging Markets


Mario Maggi, Dean Fantazzini, in Handbook of Short Selling, 2012

23.4 Conclusion
This chapter reviewed the main characteristics of short selling in emerging markets,
discussing how short selling restrictions can affect liquidity in emerging markets
and considerably slow the market recovery after a financial shock. Moreover, long
positions cannot be hedged easily and, even when a derivatives market is in place, derivatives cannot be priced efficiently. In general, short selling allows for
efficient pricing information and a symmetric market microstructure, which makes
the market more robust with respect to financial bubbles.

We then compared the market performance of different emerging markets to see whether short selling had an effect, with particular attention to the ongoing global
financial crisis. Our work showed that the mean volatility of SS countries is, on
average, smaller than that of NSS countries, except for the 2008 crisis: however, after
2008, volatility returned quickly to previous levels in SS countries, while this has not
been the case for NSS countries. Interestingly, we also found that the average Sharpe
ratios for NSS countries were generally better than those of SS countries before 2008,
but after that year, the Sharpe ratios for SS countries have recovered much faster than
those for NSS countries. Returns skewness tends to be much more variable in NSS
countries than in SS countries, while the average kurtosis for SS countries returns is
lower than that of NSS countries. Finally, we noted that the frequencies of extreme
returns and average annual maximum drawdowns are lower for SS countries than
for NSS countries.

This evidence makes us think of the famous anecdote of the “boiling frog,” according
to which “if a frog is placed in boiling water, it will jump out, but if it is placed in
cold water that is heated slowly, it will not perceive the danger and will be cooked to
death”: short selling allows the market to react quickly to any information, even at the
cost of some “temporary scalds” (in our case, high temporary volatility). Restricting
or banning short selling practices condemns the market to a much slower recovery
(lower Sharpe ratios, higher market drawdowns) and a lower liquidity, which can
fatally limit its operation and slowly make it irrelevant for the local economy.


IPO Underpricing*
Alexander Ljungqvist, in Handbook of Empirical Corporate Finance, 2007
4.2.1 How widespread is price support?
Direct evidence of price support is limited because stabilizing activities are generally
notifiable, if at all, only to market regulators, and not to investors at large. Thus it is
hard to identify which IPOs were initially supported, how the intensity of intervention
varied over time, and at what time support was withdrawn. Most work therefore
relies on indirect evidence. For instance, one might investigate after-market microstructure data for behavior indicative of price support, and relate it to the underwriter's premarket activities such as bookbuilding. This is particularly promising on
NASDAQ, where underwriters can, and usually do, become market-makers for the
companies they take public.

The microstructure variables of interest are the bid–ask spreads that underwriters
charge (especially compared to competing market-makers who are not part of
the original IPO syndicate); who provides ‘price leadership’ (by offering the best
bid and ask prices); who trades with whom and in what trade sizes; what risks
underwriters take in the after-market; and how much inventory dealers accumulate
(indicating that they are net buyers). Schultz and Zaman (1994) and Hanley, Kumar,
and Seguin (1993) find microstructure evidence consistent with widespread price
support, especially among weak IPOs. Using proprietary Nasdaq data that identifies
the transacting parties, Ellis, Michaely, and O’Hara (2000) show that the lead IPO
underwriter always becomes the dominant market-maker and accumulates sizeable
inventories over the first 20 trading days. Underwriters buy back substantially more
stock in ‘cold’ offerings (those that opened below their offer prices and never
recovered in the first 20 days) than in ‘hot’ offerings (those that never fell below their
offer prices in the first 20 days). These inventory accumulation patterns are strong
evidence of price support activities, and indicate that such activities persist for a
perhaps surprising length of time.

Asquith, Jones, and Kieschnick (1998) use a mixture-of-distributions approach to gauge how widespread price support is. Mixture-of-distributions models assume
that the observed distribution is a mixture of two (or more) normal distributions
with different means and standard deviations. They tend to be useful when modeling
heavily skewed empirical distributions (such as underpricing returns). The technique
estimates the fraction of the observations coming from each underlying distribution
along with their means and standard deviations. Imposing the assumption that the
data are generated by two (and no more) underlying distributions, one for supported
offerings and one for unsupported ones, they argue that about half of all U.S. IPOs
appear to have been supported in 1982–1983.
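
The mixture-of-distributions idea can be illustrated with a two-component Gaussian mixture fit to initial returns. The sketch below is not Asquith, Jones, and Kieschnick's estimation procedure; it simply shows the mechanics with scikit-learn on simulated stand-in data, and the component labels are illustrative.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

# Simulated stand-ins for first-day IPO returns: one component clustered near
# zero (consistent with price support) and one more dispersed component.
rng = np.random.default_rng(1)
returns = np.concatenate([
    rng.normal(0.005, 0.01, 500),    # "supported" offerings (illustrative)
    rng.normal(0.15, 0.20, 500),     # "unsupported" offerings (illustrative)
]).reshape(-1, 1)

# Fit a two-component mixture and report each component's estimated share,
# mean, and standard deviation.
gm = GaussianMixture(n_components=2, random_state=0).fit(returns)
for weight, mean, var in zip(gm.weights_, gm.means_.ravel(), gm.covariances_.ravel()):
    print(f"weight={weight:.2f}  mean={mean:.3f}  std={np.sqrt(var):.3f}")
```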



High Frequency Trading and Black Box Models
Ayub Hanif Ph.D., in The Science of Algorithmic Trading and Portfolio Management,
2014

Liquidity Trading
Liquidity trading is a particularly adept high frequency trading strategy which mimics the role of the traditional market maker. Liquidity traders, or scalpers for short, attempt to make the spread (buy the bid, sell the ask) in order to capture the spread gain. Such efforts allow for profit even if the bid and ask do not move at all. The key difference from traditional day trading is that positions are established and liquidated extremely quickly.

Contrary to traditional traders, who would double down on winners, scalpers gain their advantage by increasing the number of winners. The aim is to make as many short-duration trades as possible, whilst keeping a close eye on market microstructure. Decimalization has eroded scalpers' profits, so the imperative to keep things simple and to get the market data and execution architecture in place has only increased. Scalping is the least quantitatively involved strategy that we are covering. To ensure it works well, one needs a good quantitative model of market direction, on which there is a plethora of literature; see for instance the application of neural networks (Resta, 2006), traditional time-series analysis techniques (Dacorogna et al., 2001), and sequential Monte-Carlo methods (Hanif and Smith, 2012b; Javaheri, 2005).

Complementary to the above, there are two further types of scalping (Graifer, 2005). Volume scalping is usually achieved by purchasing a large number of shares for a small gain, betting on an extremely small price improvement; this type of scalping works only on highly liquid stocks. The third type of scalping is closest to traditional trading: the trader enters into a trade and closes out as soon as the first exit condition is met, usually when the 1:1 risk/reward target is reached.

Scalping is affected by liquidity, volatility, time horizons, and risk management. As mentioned above, some traders favor liquid markets, whilst others favor illiquid markets. Volatility is a specific concern of novice scalpers; however, professional
markets. Volatility is a specific concern of novice scalpers; however, professional
scalpers are indifferent to periods of volatility as a thoroughly calibrated directional
model should enable the scalper to benefit from both upswings and downturns.
Scalpers typically operate in time-frames beyond common technical analysis, and
hence require bespoke tools to enable them to capitalize on microstructural changes.
Finally, it is imperative that robust risk management protocols are in place to avoid
any accumulations of losses and losing streaks (Graifer, 2005).
It would be prudent to note that financial advisers engaged in scalping are frequently found guilty of market manipulation. Scalping by traders is akin to front-running by advisers (buying a security and subsequently advising its purchase) and has been ruled illegal by the US Supreme Court, with the SEC taking a blunter view and banning any scalping where a trust relationship exists between the trader and the recommendee (Court, 1963; Commission, 2012).


Are Frontier Markets Worth the Risk?


B.K. Uludag, H. Ezzat, in Handbook of Frontier Markets, 2016

1 Introduction
In their quest for more rewarding investment opportunities, international investors are continually seeking markets that promise higher returns and more efficient diversification. However, increasing market linkages and interdependence among global equity markets have made diversification opportunities more difficult to uncover, and higher returns are invariably associated with higher risk. Emerging markets once provided adequate diversification and higher returns; however, such benefits have gradually diminished as emerging markets have evolved into developed and advanced emerging markets. This transition was achieved as countries upgraded their market microstructure, which in turn increased market efficiency, and developed their economies. Currently, frontier markets have caught the cautious attention of international investors, replacing emerging markets as a possible venue for achieving the elusive goal of greater diversification with greater returns.

It is widely accepted that frontier markets are less liquid and thinner than most
equity markets, and so tend to be less efficient. Indeed, the efficiency of
financial markets is related to the long-term dependence of price returns. The
existence of long-term dependency among price movements indicates the property
of long memory. The presence of long memory in stock returns suggests that
current returns are dependent on past returns. This violates the efficient market
hypothesis (EMH) and martingale processes which are assumed in most financial
asset pricing models (Fama, 1970). Detecting the long memory property in stock
markets is important for investors, regulators, and policymakers in their attempts to
mitigate market inefficiency.

There is a vast literature on the long memory properties of developed and developing stock markets (Lo, 1991; Cheung and Lai, 1995; Caporale and Gil-Alana, 2002). However, little is known about long memory processes in frontier markets, and the existing literature on frontier markets generally focuses on Africa and the Gulf region. This paper is motivated by the fact that there are relatively few papers about the frontier markets in Europe. In addition, frontier stock markets in Europe are of particular research interest due to their growth potential and EU membership. Since EU members are subject to EU regulations and economic reforms under euro zone criteria, investors may find attractive investment opportunities in these countries.

The objective of this paper is to investigate the long memory property in stock returns and volatility, using data from the major frontier markets of Slovenia, Slovakia, Romania, Croatia, Estonia, and Lithuania. The sample period is 2012–14, and we use the daily closing prices of stock market indices. Since frontier markets grapple with market thinness, rapid changes in the regulatory framework, and unpredictable market responses to information flow, stock returns in frontier markets have distinct properties compared with other markets. Therefore, modeling long memory in returns and volatility becomes important in measuring risk in these markets. To test for the long memory property, we use the GPH estimator in conjunction with the GSP method. We also estimate ARFIMA–FIGARCH models to examine the presence of long memory in stock returns and volatility.
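
The GPH estimator referred to above is a log-periodogram regression: the log periodogram at the lowest Fourier frequencies is regressed on log(4 sin^2(omega/2)), and the negative of the slope estimates the fractional-differencing parameter d. A minimal sketch; the bandwidth choice m = n^0.5 and the variable names are illustrative assumptions rather than the paper's exact settings.

```python
import numpy as np

def gph_estimate(x, bandwidth_power=0.5):
    """Geweke and Porter-Hudak (GPH) log-periodogram estimate of the
    fractional-differencing parameter d; long memory corresponds to
    0 < d < 0.5."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    m = int(n ** bandwidth_power)                 # number of low frequencies used
    freqs = 2.0 * np.pi * np.arange(1, m + 1) / n
    # Periodogram at the first m Fourier frequencies.
    fft = np.fft.fft(x - x.mean())
    periodogram = np.abs(fft[1:m + 1]) ** 2 / (2.0 * np.pi * n)
    regressor = np.log(4.0 * np.sin(freqs / 2.0) ** 2)
    slope, _ = np.polyfit(regressor, np.log(periodogram), 1)
    return -slope                                  # d = -slope in the GPH regression

# Illustrative usage on a volatility proxy such as absolute returns:
# d_hat = gph_estimate(np.abs(returns))
```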

The rest of the paper is organized as follows: Section 2 presents the relevant literature
review, Section 3 presents the data, Section 4 outlines the methodology used for
the study, Section 5 shows the empirical findings and analysis, and Section 6
summarizes and concludes the study.

